Fed to Fed

Real Strategies for Building Trust in AI Security

GOVTECH CONNECTS Season 3 Episode 1


Government agencies and their industry partners are accelerating AI adoption, but that also increases risk. How can leaders integrate emerging technologies, like AI, securely and with transparency and resilience?  

In today's episode, we feature Rob Wood, former Chief Information Security Officer at CMS and now co-founder of Sidekick Security, to explore the intersection of artificial intelligence and cybersecurity. From proactive practices for securing AI systems to strengthening collaboration between government and private partners, Rob shares practical insights on risk management, data integrity, and preparing for real-world incidents, drawing on lessons learned on the front lines of modernization.

Thanks for listening! Would you like to know more about us? Follow our LinkedIn page!

Welcome to the Fed to Fed podcast, where we dive into the dynamic world of government technology. In this podcast series, we'll be joined by current and former federal leaders and industry trailblazers at the forefront of innovation. Here, we speak openly and honestly about the challenges and opportunities facing the federal government, the Department of Defense, and their partners in the modern age, driving innovation and the incredible capabilities of technology along the way. Whether you're a federal leader, a tech industry professional, or simply fascinated by IT modernization, just like us, this podcast is for you. And we're so happy to have you tuning in.

Welcome to today's episode. We're thrilled to bring you a conversation with two incredible leaders in the federal government contracting arena. First, we're joined by Robert Hicks, Transform IT Strategic Growth Executive, with over 25 years of real-world government contracting experience driving organizations to achieve transformational growth. Alongside him is Robert Wood, co-founder of Sidekick Security LLC, a team of cybersecurity experts dedicated to helping your organization achieve its mission securely and effectively. Rob is also the former Chief Information Security Officer at the Centers for Medicare and Medicaid Services, where he led security strategy and innovation across one of the largest federal agencies.

Federal agencies allocate significant resources and invest millions each year in security practices and programs that often become unsustainable and less effective as the adoption of AI increases. In this episode, we'll explore and discuss the big questions within AI security that federal and industry leaders everywhere are grappling with today. Rob and Robert, thank you so much for joining us today. We're so excited about this discussion.

Thank you, Susan. I really appreciate this opportunity.

Rob Wood, you've had a busy year. I see you came out of government
as the former CMS CISO in 2024, and I think now you get to focus on your company. How are you, my friend? And how are things going with Sidekick?

Things are fantastic, and I am generally and personally pretty darn good.

Well, great to hear that, Rob. I remember those days at CMS. You had a reputation that preceded you. You were what I like to call a positive disruptor, right? You embodied pushing the envelope over static thinking. You challenged industry as well as the government by not accepting the status quo. And so, Rob, I want to leverage that perspective, that mindset, because so many government leaders and government agencies are now really focusing on an accelerated approach to implementation. I'd like to ask you a few questions around that topic. My first question to you is this: you've had experience in both government and industry, so what would you say are some proactive security practices that chief information officers and chief information security officers in government agencies should consider implementing to identify vulnerabilities before the bad actors exploit them?

So part of this, in my mind, depends on what you're doing with AI. If you're building something, where AI is part of a bigger platform or a set of features, you really need to be engaging in focused testing of that model or those features you're building: threat modeling, and if you're using any third-party services, understanding the data flows, things of that nature. It's not dissimilar from application security or software security, where you're building something and you need to take the appropriate measures to make sure that it is done right.
Now, on the flip side, if you are consuming AI and enabling your organization with AI tools, say you want to roll out Claude or Gemini or Copilot or any number of meeting note takers and recording tools, then you really need to be thinking about how you set policy and how that cascades throughout your organization, because it's going to trickle down into everything from your supplier risk management to the way your IT teams manage AI-related features. In a lot of places, these features are just getting turned on. It's as simple as a checkbox getting flipped at one point in time, and then it's just on, and you might not have any idea. So you really need to be taking AI security measures and weaving them into existing capabilities: your supplier risk management, your third-party risk management, the way you do SaaS security, the way you do IT tools management, the way you test, build, and manage your software. It's an integrated approach, not some new bolt-on, standalone thing where you just stand up an AI team. That's not necessarily, in my opinion, the best way to go about it.

Yeah. You mentioned the word engagement earlier in your response, and that always resonates with me. There are a lot of opportunities now where government leaders and industry folks are coming together. There's a lot of conversation, a lot of engagement between the two arenas, government and industry. With that in mind, how should government and industry security firms collaborate to ensure data integrity and transparency in real time?
Because you've got different AI-enabled health care data exchanges. So what are your thoughts on how they can collaborate collectively to ensure data integrity?

So I think it's important to differentiate. You've got collaboration at a macro level: many different industry partners, private corporations supporting the government through contracting or product sales, and the agencies, with many-to-many relationships at industry-wide or government-wide scale. And then you've got agency to government contractor partner, operating on a more one-to-one scale. There's collaboration in both of those contexts.

At a macro scale, I think it's important for organizations to continue to be open and transparent in the way that they share their lessons learned about what's working and what's not: governance models that really work, policy statements that really work, or drawbacks, retrospectives, and post-mortems of things that don't work. Tabletop exercise scenarios that organizations can take and run with. This is something that we really lean into. When we do a tabletop or something like that, we share that out amongst our broader customer base, so that when we do something once, everyone benefits from it. At a macro level, I could see that really driving a lot of value.

Now, at a more one-to-one level, where you've got an agency and an industry partner working together on a system, in some ways it's the same dynamic. You need to make sure that there's transparency and openness amongst the team. But there's a different set of skills that industry needs to bring to the table when it comes to supporting these AI contracts. Your bread-and-butter, old-hat compliance people are not really going to cut it. Your bread-and-butter security operators are not really going to cut it.
You've got to have more AI-savvy folks who are familiar with the tools, almost like solutions architects or more engineering-centric talent, supporting these kinds of programs. Because yes, these are tools; it's more software, it's more infrastructure. In some ways it's the same as when serverless started becoming a thing, or containers, or whatever. It's just another unit of technology that you're fitting into a bigger picture. But it does have unique elements, because it sits at the intersection of third-party risk, software, and data. It's a bunch of different things combined. Even thinking about what a model is: if an agency is building its own models to combat fraud, for example, what goes into a model? It's not just a database, it's not just code, it's not just data. It's a vector database of curated, trained data sets, code and logic, and all of that. It's the specific models that might be sourced from Hugging Face or custom built, it's the infrastructure that goes into hosting it, it's the pipeline that goes into building it. It's all of this stuff, but it's classified very simply as "a model." And so there's a new wave of talent that really needs to be brought to the table to support this kind of work, because it has its own unique complexities. I think it would be a shame and a disservice to the challenge at hand for us to just show up and treat it like it's just another thing, because it's not just another thing.

Yeah, fair. I love the way you can take something and make it so visual. Cybersecurity and AI can be so complicated, but the way you visualize and manage it, you simplify it. And with that being said, you mentioned something about new talent coming to the table. I like that. So think back,
maybe to your days as the CMS CISO. From that viewpoint, what mechanisms, things like dashboards and KPIs, can or should be implemented to provide leadership with visibility and give them confidence in monitoring risk and performance? And here's the kicker: in real time, because that's where we're starting to move, right? Everything is being accelerated, so things are going to be evaluated in real time. So to summarize the question: what mechanisms, dashboards, and KPIs can or should be implemented to provide leadership with that confidence, in a real-time manner?

It is really important, I think, not to rush to that real-time state. And the reason I say that is that in order to drive real-time reporting, you need to have streaming data. How do you get that real-time data as it pertains to your AI risk? That's a complicated question, because AI risk spans a spectrum: what's happening with your suppliers, what's happening with your usage, and, if you are building and maintaining models, how those are instrumented and protected, and what's happening with features in terms of scanning and more traditional security methods applied to those AI features. It starts with understanding what part of the AI risk spectrum you're trying to drive here. Real-time data may be too ambitious.

Now I'm going to slightly reframe the question and say that getting real-time risk data using AI is something that is really exciting and upon us, because of stuff like Model Context Protocol, or MCP, which is basically where you can connect an AI tool like Claude to other tools in a client-server kind of connection. So you go into Claude and say: what's the state of all my projects in Jira? You know, what
things are going to fall off the bandwagon from a risk standpoint? And it goes out and queries Jira using this connection, summarizes all of that, and gives you an interpretation of what it's learned. Now think about connecting these AI tools to all of the other security tools in your ecosystem, and you start to lower the barrier to entry for an executive, or anybody who needs real-time insight, to get that data. They no longer have to go to the deputy, who goes to the division director, who goes to the federal GTL, who goes to the contractor, who goes to the actual person doing the work, playing telephone all the way down to try to get an answer and then circling it all the way back up. The executive can pop it open, ask their question, have a dialog, and get real-time insights much faster and much more directed. Decentralizing this access to data in real time, that's a super exciting thing.

Now, the other thing as far as getting insight into risk that is really relevant: going back to that integrated approach, where you weave AI security into other core security activities, system security, cloud security, SaaS security, and so on, and have explicitly called-out metrics, or risks on your risk register, that are going to get talked about, reported, and briefed, and that have remediation plans associated with them. I think that is key. If you want intentional tracking or briefings on AI risk, just weave it into your risk register. Let's say you're a federal CISO or a federal CIO and you want to understand the AI risk pertinent to your core enterprise services or all your FISMA systems. Put in a risk register item for all the systems, something that is going to have to get talked about, where everyone is going to have to respond, and that's going to give you a feedback loop. You don't have to keep that open indefinitely.
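The client-server tool-call flow Rob describes with MCP can be sketched in miniature. This is a hypothetical, simplified illustration of the pattern, an AI client invoking a registered tool and summarizing the structured result, not the real MCP SDK or Jira API; all tool names and data here are invented:

```python
# Simplified sketch of the MCP-style pattern: a server registers tools,
# a client discovers and calls them, and the model turns the structured
# result into a plain-language answer. Names and data are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[dict], dict]

class ToolServer:
    """Stands in for an MCP server exposing tools to an AI client."""
    def __init__(self):
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[str]:
        return sorted(self._tools)

    def call(self, name: str, args: dict) -> dict:
        return self._tools[name].handler(args)

# A canned "Jira search" tool; a real server would hit Jira's REST API.
def jira_search(args: dict) -> dict:
    issues = [
        {"key": "RISK-101", "status": "Overdue", "summary": "POA&M past due"},
        {"key": "RISK-102", "status": "On track", "summary": "Vendor review"},
    ]
    wanted = args.get("status")
    hits = [i for i in issues if wanted is None or i["status"] == wanted]
    return {"issues": hits}

server = ToolServer()
server.register(Tool("jira_search", "Search Jira issues", jira_search))

# The executive asks "what's falling behind?"; the model translates that
# into a tool call, then summarizes the structured result for them.
result = server.call("jira_search", {"status": "Overdue"})
summary = (f"{len(result['issues'])} item(s) at risk: "
           + ", ".join(i["key"] for i in result["issues"]))
print(summary)  # 1 item(s) at risk: RISK-101
```

The point of the pattern is the lowered barrier to entry: the question goes straight from the asker to the system of record, instead of down and back up a chain of people.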
But that's your point of leverage as an authorizing official. Use it to gather up a bunch of information, and then you can take that and do other things with it: let it feed your strategy, your budget planning, your team's future resource allocation, all of that.

Oh, man. In preparing for this interview and this conversation with you, I did my research on you, and I was taking a look at some of your past comments and presentations. One thing you talked about was incident response. So I'll give you another opportunity to rephrase my question, but here it is: what is the significance of establishing an incident response plan that's tailored to AI-driven data sharing? And as a follow-up, how can the federal government effectively design such a plan to ensure trust in operations during a security incident? I really want to focus on the incident response piece, because I'm really interested in hearing about establishing an incident response plan tailored to AI-driven data sharing.

So I think you really need to start by enumerating a couple of scenarios that are relevant for your agency. I'll enumerate a few that I think are relevant. An AI-related vendor messes something up: you've input a bunch of data into their systems, or they're connected somehow, and they have a data breach or some sort of unauthorized access. That's one thing, and it's an AI vendor. Then there might be an employee of the agency, or a contractor, who uses an unauthorized AI tool and sends something into it that they shouldn't. That's the shadow AI scenario, right?
Somebody logs on to ChatGPT and uploads a bunch of sensitive information to try to get their job done quickly, and that causes a problem. Come up with a couple of scenarios that are relevant for you as it pertains to AI, and just proactively pull your team together and think about what needs to happen in order to respond. You don't need to call out specifics like the vendors in question or the data in question, but you need to have the rough mechanics figured out of how you are going to respond. What teams need to be involved? What does containment look like, if anything? What does the escalation process look like? What sort of prompts are you going to give your team, not AI prompts, but instructions as far as what to investigate, what logs to pull, all of that kind of stuff? If you figure that out in advance, that's 90% of the battle.

Then you want to take that scenario and run an internal tabletop. It doesn't have to be a big, complicated war game. Take two or three hours and run a tabletop exercise. In that tabletop exercise, your scenario is that thing happening, with a series of injects to let the thing unfold, and at each stage what you're doing is talking about what the team is going to do relevant to that inject. You can even use AI to build the thing for you. You don't have to overcomplicate it: feed the scenario in, say "build me a two-hour tabletop exercise for this," and let it run. And basically, yes, you're going to run through the whole process, and people are going to get some amount of practice and familiarity with it as a result.
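The tabletop structure Rob outlines, a scenario plus timed injects with discussion prompts at each stage, can be represented as simple structured data. This is an illustrative sketch; the field names and the sample scenario are invented for the example:

```python
# Illustrative sketch of a tabletop exercise plan: one scenario, a set
# of timed "injects" the facilitator reads out, and the discussion
# prompts the team works through at each stage.

from dataclasses import dataclass, field

@dataclass
class Inject:
    minute: int              # when this inject lands in the exercise
    event: str               # what the facilitator reads out
    discussion: list         # prompts for the team at this stage

@dataclass
class Tabletop:
    scenario: str
    duration_minutes: int
    injects: list = field(default_factory=list)

    def run_order(self) -> list:
        """Facilitator's script, in time order."""
        return [f"T+{i.minute}m: {i.event}"
                for i in sorted(self.injects, key=lambda i: i.minute)]

exercise = Tabletop(
    scenario="Employee uploads sensitive data to an unauthorized AI tool",
    duration_minutes=120,
    injects=[
        Inject(0, "Help desk reports a DLP alert on an AI chat domain",
               ["Who gets paged?", "What logs do we pull?"]),
        Inject(30, "The upload is confirmed to contain PII",
               ["What does containment look like?", "Who must be notified?"]),
        Inject(75, "A reporter emails asking about a data leak",
               ["What is the escalation and comms process?"]),
    ],
)

for step in exercise.run_order():
    print(step)
```

Writing the plan down in a form like this makes it easy to reuse across teams, and the gaps surfaced at each inject (missing logs, missing controls, missing vendor contacts) become the remediation backlog.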
But what I would encourage folks to really be looking for are the opportunities for more structural improvement that come out as a result. Do you have the logs? If somebody brings up some kind of log, do you actually have access to it? Does the discussion reveal preventative controls that should ideally be in place but aren't? Do you have the vendor relationships to reach out to, let's say, OpenAI or Anthropic or Microsoft, should something like this happen? Do you have the things you need in place to be able to respond to the incident? Just note those things as gaps, and then go start fixing them. You prioritize it amongst other things, but that's the feedback loop: you build a plan, you test the plan. We're in football season right now, so people are going to practice their runs and their playbooks in practice and in preseason, and then you get to the game and you're better at it. You're not just showing up on game day and figuring it out on the fly. That's not how that works, at least not if you want to be good. It's really as simple as that. In my opinion, it's not easy, but it is simple.

Yeah, you got me with the football analogy. I'm a football guy, I appreciate that. We need to have an ask-me-anything type of podcast going on so we can sit here and have more conversations, because I could talk to you all day about this. I'm sure a lot of the folks who hear and listen can leverage your insight and your thoughts. So I really appreciate your time, Robert. And Susan, I appreciate you for allowing us this opportunity to chat a little bit.

Well, Rob Wood and Robert Hicks, thank you both so much. We really appreciate your time and the thought leadership you've shared.

Of course. Thank you.

This concludes today's episode of the Fed to Fed podcast.
If you enjoyed this episode, please don't forget to subscribe, rate, and leave a review. Your feedback helps us continue bringing you thought-provoking sessions with the brightest minds in government technology. Stay tuned for our next episode, where we will continue to explore opportunities to harness the power of technology and explore what's next in developing a more innovative and efficient government. Until then, this is the Fed to Fed podcast by GovTech Connects. Thank you for joining us.