AI Is Moving Fast in Federal Cybersecurity. Are We Securing It Fast Enough?
During a recent FedInsider webinar, I shared a story I heard that still gives me pause.
A federal CIO was getting ready to roll out an AI system that had been carefully scoped, reviewed, and approved.
On paper, everything looked right. Then they started testing it.
The AI began returning answers it shouldn’t have known, using data sources that were never intended to be in scope.
Nothing malicious had happened, but the security risk was real and immediate. If that system had gone live, the exposure could have been serious.
That story captures where many federal agencies are today with AI. They’re moving fast because they have to. But speed without visibility and control creates new risks just as quickly as it creates new capability.
AI is already embedded in government environments. The question is whether we are securing it deliberately or discovering its security impact after the fact.
AI in government environments is a strategic advantage for everyone
I think it's important to level-set: AI isn't optional.
Adversaries are already using AI to automate reconnaissance, refine phishing campaigns, mimic voices, generate malware, and scale attacks faster than any human team ever could.
That means defenders don’t get to sit this one out.
At the same time, federal agencies are dealing with environments that are more complex than ever, including hybrid architectures and legacy systems.
The volume of telemetry alone is overwhelming. Humans simply cannot process all of that information in a meaningful way.
AI helps because it can do something we can’t: analyze massive amounts of data in real time and surface what actually matters.
But the issue is that the same AI that helps defenders find the needle in the haystack also expands the haystack itself.
Every AI model is another application. Every AI pipeline is another set of connections. Every data source is another potential path for abuse.
If you don’t control those paths, AI doesn’t reduce risk; it accelerates it.
Why traditional security guardrails fail in an AI-driven federal environment
One of the hardest truths for federal cybersecurity leaders to accept is that their current guardrails aren’t sufficient.
For years, success in cybersecurity was measured by prevention. The goal was to keep attackers out, harden the perimeter, and stop the breach.
But intrusions still happen, from Fortune 500 companies to federal agencies. And they happen despite our best efforts.
AI doesn’t change that dynamic but rather reinforces it.
AI systems run on servers and code, and they rely on data access. That makes them just as vulnerable to lateral movement, misconfiguration, and over-privileged access as any other application.
Worse, many organizations still operate with what I call the “Tootsie Pop” model: hard on the outside, soft on the inside. Once an attacker gets in, they can move freely.
That’s disastrous for AI systems.
If your AI engine has unfettered access to internal systems, sensitive data, or external resources, you’re trusting it far more than you should. AI must be protected from external threats, internal threats, and in many cases, from itself.
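To make that concrete, here is a minimal sketch of deny-by-default data access for an AI service. Every identifier in it is hypothetical; in a real deployment this enforcement belongs in the data access layer or a policy engine, not in the model code itself.

```python
# Hypothetical allowlist of the data sources this AI system is approved to use.
APPROVED_SOURCES = {"hr_policies_index", "public_regulations_index"}

def search_index(source: str, query: str) -> list[str]:
    # Stand-in for whatever retrieval backend the agency actually uses.
    return [f"result from {source} for '{query}'"]

def retrieve(source: str, query: str) -> list[str]:
    """Return context for the model only if the source is explicitly in scope."""
    if source not in APPROVED_SOURCES:
        # Deny by default; an out-of-scope request is a finding, not a feature.
        raise PermissionError(f"data source '{source}' is not approved for this AI system")
    return search_index(source, query)

print(retrieve("hr_policies_index", "telework policy"))  # allowed
# retrieve("finance_ledger", "salary data") would raise PermissionError
```

Had a control like this been in place in the story above, the out-of-scope answers would have shown up as denied requests during testing instead of as a surprise.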
Visibility is the foundation of AI security in federal systems
Before you can secure AI, you have to see your environment clearly.
It sounds obvious, but in many cases, it isn’t.
I spent years as a CIO looking at clean Visio diagrams that showed how systems were supposed to connect. But reality never matched the diagram.
Modern federal environments are dynamic; they change faster than any static diagram can keep up with. AI can help here, but only if visibility comes first.

AI gives agencies a three-dimensional view of their environment, also called observability. It can reveal not just assets and alerts, but the relationships between systems, how they communicate, and what actually matters.
When AI highlights traffic flowing to countries it shouldn’t, or services talking to the internet without a valid reason, that’s actionable insight. And it needs to happen now, not weeks later during a post-incident review.
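As a rough illustration, the check itself doesn’t need to be exotic; it needs to run continuously against live flow data. The records and field names below are hypothetical placeholders for whatever telemetry an agency already collects.

```python
# Hypothetical flow records; in practice these would stream from the
# agency's observability pipeline rather than sit in a list.
ALLOWED_COUNTRIES = {"US"}
INTERNET_EXEMPT = {"patch-proxy"}  # services with a valid reason to reach the internet

flows = [
    {"src": "ai-inference", "dst_country": "US", "internet": False},
    {"src": "hr-db", "dst_country": "XX", "internet": True},
]

for flow in flows:
    if flow["dst_country"] not in ALLOWED_COUNTRIES:
        print(f"ALERT: {flow['src']} is sending traffic to {flow['dst_country']}")
    elif flow["internet"] and flow["src"] not in INTERNET_EXEMPT:
        print(f"ALERT: {flow['src']} is talking to the internet without a valid reason")
```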
Real-time visibility turns AI from a novelty into a defensive capability.
Zero Trust isn’t optional for securing federal AI
In my opinion, AI security without Zero Trust is wishful thinking.
Zero Trust starts with a simple assumption: breaches will happen. The goal is to minimize their impact.
In practice, that means:
- Only allowing connections that are explicitly required
- Turning off everything else by default
- Drawing clear protection boundaries around critical systems
- Enforcing controls internally, not just at the perimeter
For AI systems, this matters even more.
You must control what can talk to your AI models and what your AI models can talk to. This includes inbound and outbound communications, training and inference data, internal systems, and external sources.
Without segmentation as part of your Zero Trust strategy, an AI compromise becomes an enterprise compromise. But with segmentation, a compromise is contained.
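A minimal way to picture that segmentation, using hypothetical service names, is a default-deny flow policy: nothing reaches the AI service, and the AI service reaches nothing, unless the flow was explicitly required.

```python
# Explicitly required flows around a hypothetical AI inference service.
ALLOWED_FLOWS = {
    ("api-gateway", "ai-inference"),   # inbound: approved callers only
    ("ai-inference", "vector-store"),  # outbound: approved data sources only
}

def is_allowed(source: str, destination: str) -> bool:
    """Default deny: a flow is permitted only if it is on the allowlist."""
    return (source, destination) in ALLOWED_FLOWS

print(is_allowed("api-gateway", "ai-inference"))  # True: explicitly required
print(is_allowed("ai-inference", "finance-db"))   # False: contained by default
```

The design choice that matters is the default: anything not listed is blocked, so a compromised model can only reach what it was always supposed to reach.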

This is how federal agencies move from breach anxiety to breach resilience as they build AI models.
AI should never be “set it and forget it”
One of the biggest misconceptions I see is the idea that AI solves operational problems on its own.
It doesn’t.
AI requires governance, oversight, and constant validation. Poor data quality, excessive access, and unclear ownership all undermine AI outcomes.
If you already have technical debt, AI won’t fix it. It’ll just expose it.
Federal agencies need to approach AI deployment the same way they should approach any mission-critical system grounded in their Zero Trust strategy:
- Start with visibility
- Define boundaries clearly
- Enforce least-privilege access
- Monitor continuously
- Assume failure and plan for containment
That discipline is what separates useful AI from dangerous AI.
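One way to keep that discipline honest is to treat the checklist as something a pipeline can verify before an AI system goes live. The deployment record below is a hypothetical sketch, not a prescribed schema; the point is that each item becomes a check rather than a checkbox.

```python
# Hypothetical pre-deployment record for an AI system.
deployment = {
    "asset_inventory_complete": True,                        # start with visibility
    "allowed_flows_defined": True,                           # define boundaries clearly
    "service_account_scopes": ["read:approved_indexes"],     # least-privilege access
    "telemetry_enabled": True,                               # monitor continuously
    "containment_plan": "isolate the ai-inference segment",  # assume failure
}

checks = {
    "visibility": deployment["asset_inventory_complete"],
    "boundaries": deployment["allowed_flows_defined"],
    "least_privilege": all(s.startswith("read:") for s in deployment["service_account_scopes"]),
    "monitoring": deployment["telemetry_enabled"],
    "containment": bool(deployment["containment_plan"]),
}

failed = [name for name, ok in checks.items() if not ok]
print("cleared for deployment" if not failed else f"blocked: {failed}")
```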
AI can strengthen federal cybersecurity with the right guardrails
Despite the risks, I remain optimistic about AI.
AI can dramatically enhance federal cybersecurity teams, reduce alert fatigue, and surface meaningful context faster. It can help agencies compete despite staffing challenges and resource constraints.
Used correctly, AI becomes an extension of the workforce. It augments human judgment instead of replacing it. It allows skilled professionals to focus on decisions instead of drowning in data.
But that only works when AI is deployed intentionally, secured properly, and aligned with Zero Trust principles.
This is the standard federal leaders should hold themselves to now, before AI defines their security posture for them.
Try Illumio Insights for free to get the visibility and context you need to understand risk, contain threats, and secure AI-driven environments.