Fed to Fed

Trustworthiness and AI: Finding Balance in Innovation

GOVTECH CONNECTS Season 2

How can an organization protect against compromised AI models and build a secure, transparent infrastructure that earns trust while accelerating innovation?

In today's episode of the Fed to Fed podcast, Ben Cushing from Red Hat joins us to discuss how agencies can "shift left" on security and apply supply chain best practices to AI and software models. 

Thanks for listening! Would you like to know more about us? Follow our LinkedIn page!



 0:00
 Welcome to the Fed to Fed podcast, where we dive into the dynamic world of government technology.


 0:07
 In this podcast series, we'll be joined by current and former federal leaders and industry trailblazers who are at the forefront of innovation.


 0:16
 Here, we speak openly and honestly about the challenges and opportunities facing the federal government, the Department of Defense, and their partners in the modern age, driving innovation and the incredible capabilities of technology along the way.


 0:33
 Whether you're a federal leader, a tech industry professional, or simply fascinated by IT modernization just like us, this podcast is for you.


 0:43
 And we're so happy to have you tuning in.


 0:47
 Ben, thank you so much for joining us today.


 0:50
 Thank you.


 0:50
 Happy to be here.


 0:52
 Well, we're really excited about this discussion, and I'm going to jump right into the first question.


 0:57
 Could you start by explaining what we mean by shift left when it comes to security threats and why it's so important for organizations to address these threats in the earliest stages of the development life cycle?


 1:10
 So shift left refers to moving security considerations, which have traditionally been handled late in the software release process, into the earliest stages of development.


 1:23
 The hope being that you catch vulnerabilities and any tampering with the code or the build pipeline early on.


 1:31
 And this would, of course, reduce the risk to organizations from compromised dependencies propagating downstream.


 1:41
 This proactive approach is one of the best ways to limit opportunities for attackers to insert malicious code or artifacts that might go unnoticed.


 1:53
 It also helps maintain trust throughout the entire software supply chain.


 1:58
 Excellent.


 1:58
 So Ben, Sigstore has been mentioned as a key player in securing supply chains.


 2:04
 Could you give us a high-level view of what Sigstore is and why it's so critical?


 2:09
 Sigstore is an open source project that aims to provide a standard, easy-to-use framework for cryptographically signing and verifying any software artifact.


 2:20
 Its goal is to democratize code signing practices, and it's there to ensure that developers of all sizes can add verifiable authenticity and integrity checks to their releases.


 2:35
 This transparency helps establish who created or modified a piece of software, and in theory it will prevent unknown or malicious actors from injecting vulnerabilities without detection, which has occurred a few times.


 2:49
 Could you explain how each component, Rekor, OpenID Connect, and Cosign, fits into the bigger picture of establishing trust and provenance in software artifacts?


 2:56
 Rekor is a tamper-evident ledger.


 3:00
 OK, so it's where signed artifacts actually get recorded.


 3:03
 The details about the signature and artifact are recorded in Rekor, and this means that the signing history is publicly verifiable and auditable.


 3:14
 This ensures no retroactive changes can be hidden from the global open source community.
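
To make that auditability concrete, here's a minimal Python sketch of checking whether an artifact's digest appears in the public Rekor log. It assumes the public instance at rekor.sigstore.dev and its index retrieval endpoint; the artifact path is a placeholder.

```python
# Minimal sketch: look up a signed artifact in the public Rekor
# transparency log by its SHA-256 digest.
import hashlib
import requests

def rekor_lookup(artifact_path: str) -> list:
    # Hash the artifact the same way it was hashed when it was signed.
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Ask the public Rekor log which entries reference this digest
    # (assumed public-instance endpoint).
    resp = requests.post(
        "https://rekor.sigstore.dev/api/v1/index/retrieve",
        json={"hash": f"sha256:{digest}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # list of entry UUIDs; empty means no record exists

if __name__ == "__main__":
    uuids = rekor_lookup("myapp-1.0.tar.gz")  # placeholder artifact
    print("Rekor entries:", uuids or "none found")
```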


 3:23
 OpenID Connect was the next one.


 3:26
 It provides an identity layer, ensuring that signatures come only from authenticated sources.


 3:32
 So this addresses the "who" in the chain of trust, so you know exactly which user or service account is performing the actual signing.


 3:41
 And then lastly, Cosign is the tool that interfaces with both OpenID Connect and Rekor to create and verify digital signatures.


 3:49
 Essentially, Cosign automates the signing process and checks Rekor for the signature's authenticity and tamper resistance.


 3:55
 And when you combine all three of these, you create a pipeline where you know who signed, what they signed, when they signed it, and can verify it all in a public transparent log.
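
As a rough illustration of that combined pipeline, here's a short Python sketch that shells out to the cosign CLI for keyless signing and verification. It assumes cosign 2.x is installed; the image reference, signer identity, and OIDC issuer are placeholders.

```python
# Sketch: keyless signing and verification of a container image via cosign.
import subprocess

IMAGE = "registry.example.com/team/myapp:1.0"  # placeholder image reference

def sign(image: str) -> None:
    # "Keyless" signing: cosign obtains a short-lived certificate tied to
    # your OIDC identity and records the signature in Rekor automatically.
    subprocess.run(["cosign", "sign", "--yes", image], check=True)

def verify(image: str) -> None:
    # Verification checks who signed (the OIDC identity), what they signed
    # (the image digest), and that Rekor holds the matching log entry.
    subprocess.run(
        [
            "cosign", "verify",
            "--certificate-identity", "release-bot@example.com",   # placeholder
            "--certificate-oidc-issuer", "https://accounts.google.com",  # placeholder
            image,
        ],
        check=True,
    )

if __name__ == "__main__":
    sign(IMAGE)
    verify(IMAGE)
```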


 4:06
 That's excellent.


 4:07
 So if an organization is new to these concepts and wants to improve its supply chain security, what are the first steps for adopting Sigstore's tools and best practices?


 4:16
 A strong starting point would be to integrate Cosign into your continuous integration/continuous delivery, or CI/CD, workflow, signing all the container images or other artifacts in the process.


 4:29
 In parallel, teams should adopt an OpenID Connect provider for identity management and set up or leverage a public Rekor instance for actually logging signatures.


 4:41
 I'd say it's critical to educate the developers and DevOps teams that you work with on the importance of verifying artifacts at every stage.


 4:52
 So pulling from unverified sources is a primary vector for these types of attacks.


 4:57
 So over time, you're going to need to build policies around these tools to enforce signature verification before promoting any of your artifacts to production.
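
One hedged example of such a policy: a small promotion gate that refuses to ship any image whose signature can't be verified. The identity and issuer values below are placeholders for whatever your CI signing identity actually is.

```python
# Sketch of a promotion gate: no verified signature, no production deploy.
import subprocess
import sys

def is_verified(image: str, identity: str, issuer: str) -> bool:
    # cosign exits non-zero if the signature, identity, or log entry
    # checks fail, so the return code is the whole policy decision.
    result = subprocess.run(
        ["cosign", "verify",
         "--certificate-identity", identity,
         "--certificate-oidc-issuer", issuer,
         image],
        capture_output=True,
    )
    return result.returncode == 0

def promote(image: str) -> None:
    # Placeholder identity/issuer; substitute your CI signing identity.
    if not is_verified(image, "ci@example.com",
                       "https://token.actions.githubusercontent.com"):
        sys.exit(f"refusing to promote {image}: signature verification failed")
    print(f"promoting {image} to production")  # real deploy step goes here

if __name__ == "__main__":
    promote(sys.argv[1])
```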


 5:07
 We're seeing AI models move from experimental to operational, with some systems becoming truly agentic, able to make and act on decisions. In these scenarios, how crucial is it to apply the same supply chain security rigor to AI model provenance?


 5:24
 It is absolutely vital. That level of automation is still a ways out, since we usually rely on a human in the middle, but once AI models are entrusted with decision making or automation, any manipulation of the model could lead to disastrous outcomes.


 5:45
 So anything ranging from incorrect business decisions to large scale system failures.


 5:50
 And in an agentic architecture, the models often act in real time on behalf of users or the organization.


 5:57
 So a compromised model could be exploited to leak sensitive data, manipulate outputs, or sabotage operations.


 6:06
 So verifiable provenance ensures the model you're deploying is exactly what was intended and that no adversary has manipulated the training data or the final artifacts.


 6:18
 Excellent.


 6:18
 So what specific steps can organizations take to protect against malicious infiltration in AI model supply chains?


 6:27
 And how do they maintain compliance and provide indemnification to customers?


 6:32
 The principles are largely the same as for traditional software artifacts.


 6:38
 We'll just review those real quick.


 6:39
 So first, we need cryptographic signing.


 6:43
 So we need to sign and verify the AI models at each step in the build stage.


 6:47
 So during post-training, quantization, fine-tuning, every step you might have, we need to be able to sign and verify the models.


 6:54
 Secondly, we need a tamper-evident ledger.


 6:57
 We need to record the model hashes and signatures in systems like Rekor, like I mentioned, or a similar immutable ledger.


 7:04
 We need access control and monitoring.


 7:06
 So we need to ensure that only authorized personnel and processes can alter those models, and maintain strong audit logs along the way.


 7:14
 So we have to know exactly who did what and when.


 7:16
 Lastly, vulnerability scanning.


 7:18
 So regularly scan model code and its dependencies for any known vulnerabilities, especially in the frameworks and libraries. And for compliance and indemnification, having rigorous records of every step in the chain, plus the policies enforcing those security measures, is how we demonstrate due diligence.
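
A minimal sketch of the signing step, assuming cosign's sign-blob command is available; the file and stage names are illustrative. The idea is to hash the model artifact after each build stage, sign it, and keep a record of the digest.

```python
# Sketch: sign a model artifact after each build stage and record its digest.
import hashlib
import json
import subprocess

def sign_model(model_path: str, stage: str) -> dict:
    # Record the exact digest of the artifact this stage produced.
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Keyless blob signing; cosign also logs the signature in Rekor.
    subprocess.run(
        ["cosign", "sign-blob", "--yes",
         "--output-signature", f"{model_path}.{stage}.sig",
         "--output-certificate", f"{model_path}.{stage}.pem",
         model_path],
        check=True,
    )
    record = {"stage": stage, "sha256": digest, "path": model_path}
    print(json.dumps(record))  # feed this into your provenance log
    return record

if __name__ == "__main__":
    # In practice each stage produces a new artifact; the path here is a
    # placeholder reused for illustration.
    for stage in ("post-training", "quantization", "fine-tuning"):
        sign_model("model.safetensors", stage)
```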


 7:38
 It also provides a clear paper trail for legal and compliance inquiries, and audits of any kind.


 7:44
 And it would also support claims that the model was delivered with integrity and is free from any sort of modification, malicious or otherwise.


 7:52
 Excellent, thank you so much, Ben.


 7:53
 So finally, given the incredible positive impact AI can have when fully trusted, and the equally incredible negative impact a compromised system can create, how do you see organizations balancing innovation with


 8:08
 the security imperatives?


 8:10
 It's a balancing act.


 8:10
 Like all good innovation, it's a balancing act.


 8:13
 And the growing consensus is that security must be baked into the entire innovation process rather than treated as a bolt-on. The potential rewards of AI are improved efficiency, data-driven insights, and automated decision making.


 8:29
 As I mentioned, they're enormous, but so is the risk of large scale misuse or system failure if the AI pipeline is not secure.


 8:37
 I'd say organizations are learning that investing in secure infrastructure and transparent provenance not only protects against the catastrophic breaches we've seen from a number of organizations over the last five years, but also fosters greater innovation in the long run, because stakeholders, customers, and regulators trust the system in total.


 8:58
 In essence, robust supply chain security becomes a catalyst for confidently pursuing cutting-edge AI development.


 9:08
 Wow.


 9:08
 So Ben, in closing, is there anything that you would like to share that could help other organizations out there really focus on this and be successful?


 9:20
 I would say to organizations that what is old is new.


 9:24
 A lot of what I just described are practices that are already in use by the best developers.


 9:30
 Simply taking modern tools and modern pipelines and applying them to AI practice is one way to increase the rigor that we need.


 9:41
 In addition to that, for a lot of the agentic pieces that I mentioned, there are already a lot of ways to monitor automation and process instances, and we need to apply that same level of scrutiny to these processes as they execute.


 9:57
 One of the ways to do that sufficiently within an enterprise is to create an anti-corruption layer between your non-deterministic models and deterministic systems.


 10:08
 So break that down a little bit.


 10:11
 Most generative AI is non-deterministic, meaning when it outputs content,


 10:17
 it's not always the same; you can get a different output each time.


 10:20
 And that's obviously trouble, because inside of an enterprise you are generally testing against a repeatable pattern.


 10:26
 Because it's not the same every time,


 10:28
 we need to put a layer between that non-deterministic output and the expectations of the tests that are looking for deterministic content.


 10:38
 This goes back, again, to old is new.


 10:41
 This is a borrowed term from integration systems called an anti-corruption layer.


 10:46
 And I highly recommend investing in that kind of mentality in order to bring large language models and generative AI up to the security requirements that are expected within a modern enterprise, whether it's a federal agency, a corporation, or anything in between.
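
As a rough sketch of what an anti-corruption layer can look like in practice, here's a small Python example that forces free-form model output through a strict, deterministic contract before anything downstream consumes it. The schema and allowed actions are purely illustrative.

```python
# Sketch: validate non-deterministic model output against a deterministic
# contract before any downstream system acts on it.
import json

ALLOWED_ACTIONS = {"approve", "deny", "escalate"}  # illustrative contract

class RejectedOutput(Exception):
    """Raised when model output fails validation; nothing downstream runs."""

def anti_corruption_layer(raw_output: str) -> dict:
    # 1. Force the free-form output into a strict structure.
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError as e:
        raise RejectedOutput(f"not valid JSON: {e}")
    # 2. Enforce the deterministic contract the rest of the system expects.
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        raise RejectedOutput(f"unknown action: {action!r}")
    confidence = data.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0 <= confidence <= 1:
        raise RejectedOutput("confidence must be a number in [0, 1]")
    # 3. Only validated, normalized output crosses the boundary.
    return {"action": action, "confidence": float(confidence)}

if __name__ == "__main__":
    print(anti_corruption_layer('{"action": "approve", "confidence": 0.93}'))
```

Downstream tests and policy engines then see only this normalized shape, which restores the repeatable pattern the enterprise expects.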


 11:03
 Wonderful.


 11:04
 And I do have one more question for you.


 11:05
 So if people wanted to be able to do that, what would be the question they would ask their partner, a contractor, or somebody internal who is helping support this effort?


 11:16
 I would ask them what they currently use for creating trust in a system.


 11:21
 So for instance, if they have some form of policy enforcer, whether it's a rules engine, an authorization system, whatever, is that system modern enough to work with the output of a large language model?


 11:34
 And so these things go hand in hand.


 11:36
 Modernization will go hand in hand with AI.


 11:39
 In some ways, AI is actually going to help drive the modernization that we have all been looking for across the IT landscape.


 11:47
 Again, whether it's federal agencies, corporations, or whatever, everyone wants to take advantage of an AI future. To get there, modernization will have to happen.


 11:57
 And I think you're going to start to see modernization dollars get freed up specifically for AI initiatives.


 12:04
 And then they feed on each other: the AI will require the modernization, and in turn the modernization will help the AI models behave in the way we hope and lead to a secure, modern development and deployment pipeline.


 12:17
 That's great, Ben. I always look forward to my discussions with you.


 12:21
 I learned so much about the things we can look forward to in the near future and the things that help us make sure we're protected and safe.


 12:33
 So thank you so much for your time.


 12:35
 You're welcome.


 12:36
 It's great.


 12:37
 Stay safe out there.


 12:38
 This concludes today's episode of the Fed to Fed podcast.


 12:42
 If you enjoyed this episode, please don't forget to subscribe, rate and leave a review.


 12:47
 Your feedback helps us continue bringing you thought-provoking conversations with the brightest minds in government technology.


 12:55
 Stay tuned for our next episode where we will continue to explore opportunities to harness the power of technology and explore what's next in developing a more innovative and efficient government.


 13:07
 Until then, this is the Fed to Fed podcast by Govtech Connects.


 13:12
 Thank you for joining us.