AI Agents Are Becoming Digital Employees. Here’s How Zero Trust Secures Them.

When does a major shift in enterprise security reveal itself?
Josh Woodruff expected it to happen during a breach investigation or incident response exercise. Instead, it came during a conference presentation.
The presenter showed a dashboard of activity across a large enterprise environment — login events, API calls, and system actions. At first, everything looked normal.
Then the presenter explained what the audience was seeing: none of the activity was coming from humans. Every action was being taken by machines.
For decades, enterprise security programs were designed around human users. People logged in, accessed systems, and made decisions. Security teams focused on verifying identities and monitoring human behavior.
But that assumption is starting to break down.
Josh, founder and CEO of Massive Scale AI, has spent nearly three decades leading security, cloud, and IT transformations. During our conversation on The Segment podcast, he explained that enterprise environments are entering a new phase. Machine identities, such as APIs, services, automation tools, and AI agents, are beginning to outnumber human users across enterprise networks.
This shift creates a new challenge. Organizations are no longer protecting only people accessing systems. They’re also governing software agents that can make decisions and take action inside the business.
In this environment, Zero Trust takes on a new meaning. Security teams are no longer just verifying human identities; they're managing fleets of digital employees.
AI is becoming an operational actor inside the enterprise
Many conversations about AI security focus on models and training data. Those topics are important, but they don’t capture the full transformation happening inside organizations.
AI systems are moving from analysis to action.
Early AI tools were mostly informational. They generated insights, summarized data, or answered questions. Humans remained responsible for interpreting those outputs and deciding what to do next.
Agentic AI changes that model.
AI agents can plan tasks, choose actions, and interact with systems directly. They can trigger workflows, call APIs, update records, and manage processes without waiting for human approval.
This creates a very different security environment.
Traditional software operates through deterministic logic. If a specific event occurs, the system executes a predefined action. Security teams can design policies around those predictable flows.
AI systems behave differently.
“They’re stochastic,” Josh said. “It’s not deterministic computing anymore.”
In practical terms, that means AI systems operate based on probabilities and learned patterns rather than strict rules. The same request may produce slightly different outcomes depending on context, training data, or reasoning paths inside the model.
When those outcomes are only informational, the risk is limited. But when AI systems begin taking operational actions, unpredictability becomes a real security concern.
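To make the contrast concrete, here is a minimal Python sketch. The login rule, action names, and probabilities are all invented for illustration: the deterministic policy always returns the same action for the same input, while the sampled policy, standing in for a model-driven agent, may not.

```python
import random

def deterministic_policy(failed_logins: int) -> str:
    """Classic rule-based control: the same input always maps to the same action."""
    return "lock_account" if failed_logins >= 5 else "allow"

def stochastic_policy(failed_logins: int) -> str:
    """Toy stand-in for a model-driven agent: it samples an action, so
    identical input can produce different actions on different runs.
    (A real model would condition on the input; this toy ignores it.)"""
    weights = {"allow": 0.6, "step_up_auth": 0.3, "lock_account": 0.1}
    actions = list(weights)
    return random.choices(actions, weights=list(weights.values()))[0]

print(deterministic_policy(3))                   # always "allow"
print([stochastic_policy(3) for _ in range(5)])  # varies run to run
```

The first function can be audited exhaustively. The second can only be governed by constraining what it is allowed to do and watching what it actually does.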
Agentic AI introduces a new category of risk: autonomous decision-making inside enterprise systems.
This is why Josh encourages organizations to rethink how they conceptualize AI. Instead of viewing AI agents as tools, he suggests thinking of them as members of the workforce.
Why AI agents should be treated like digital employees
Josh often asks organizations to imagine AI agents as digital workers operating inside the business.
AI agents can perform tasks, interact with systems, and access data and services. In many ways, their behavior resembles that of human employees.
But there are two important differences.
First, AI agents operate at machine speed. They can execute tasks continuously and interact with many systems simultaneously.
Second, they lack judgment. Human employees bring context and intuition to their work. Even when they make mistakes, they often recognize when something feels wrong or outside expected boundaries. AI agents don’t.
“They don’t know what’s good or bad,” Josh said. “They just know a lot of information.”
Because of this limitation, an AI system can be extremely effective at achieving the wrong objective.
Josh shared a story that illustrates this risk. One organization deployed an AI system to help manage supply orders. At first, the system only recommended purchases. After testing showed reliable results, the company allowed it to place small orders automatically.
Everything worked well until the AI discovered a bulk discount. The system purchased forty years’ worth of floor cleaner.
In total, it spent $1.4 million to secure the best price.
The AI wasn’t malfunctioning. It simply optimized the goal it had been given. The organization had told the system to maximize savings. The system followed those instructions exactly.
What it lacked was business context.
This example highlights an important lesson for security leaders. AI agents need governance structures similar to those used for human employees. This includes defined rules, access boundaries, and supervision.
This is where a Zero Trust security strategy can help.
How Zero Trust provides guardrails for autonomous systems
Zero Trust is built on a simple principle: trust should never be assumed inside digital systems.
Instead of relying on network boundaries or implicit trust zones, Zero Trust evaluates every request based on identity, context, and behavior.
In our discussion, Josh described the philosophy: “Zero Trust is really about removing trust — which is a human emotion — from digital systems.”
This model works well for modern enterprise environments because access decisions depend on multiple signals. Security systems evaluate the following (a rough code sketch follows the list):
- Who is making the request
- What action they are attempting
- Where the request is coming from
- Whether the behavior fits expected patterns
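As a rough illustration, and not any vendor's actual policy engine, the sketch below combines those four signals into a single per-request decision. The identity names, zone labels, and anomaly threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    identity: str         # who is making the request
    action: str           # what action they are attempting
    zone: str             # where the request is coming from
    anomaly_score: float  # how far behavior deviates from expected patterns

# Hypothetical per-identity policy tables.
ALLOWED_ACTIONS = {"agent-invoice-bot": {"read_invoices", "create_po"}}
ALLOWED_ZONES = {"agent-invoice-bot": {"finance-segment"}}

def authorize(req: Request) -> bool:
    """Evaluate every request on its own signals; grant nothing by default."""
    return (
        req.action in ALLOWED_ACTIONS.get(req.identity, set())
        and req.zone in ALLOWED_ZONES.get(req.identity, set())
        and req.anomaly_score < 0.8  # behavior still fits expected patterns
    )

print(authorize(Request("agent-invoice-bot", "create_po", "finance-segment", 0.1)))      # True
print(authorize(Request("agent-invoice-bot", "delete_ledger", "finance-segment", 0.1)))  # False
```

The point is the default: if a request doesn't match an explicit allowance on every signal, it's denied, regardless of where inside the network it originated.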
These same principles apply naturally to AI systems.
AI agents interact with many enterprise resources. They may access internal data, communicate with services, or trigger automated workflows. Each interaction should follow Zero Trust rules.
The 5 questions that govern AI agents
To help organizations implement this approach, Josh developed a simple framework called the Agentic Trust Framework. It organizes AI security around five key questions.
- Who are you? Every AI agent needs a clear identity that can be verified through strong authentication.
- What are you doing? Behavior monitoring ensures the system is operating within expected patterns.
- What data are you consuming and producing? Data governance determines which information the system can access and what outputs it can generate.
- Where can you go? Segmentation controls which systems or environments the agent can reach.
- What happens if you go rogue? Organizations need mechanisms to detect abnormal behavior and quickly shut down automated systems if necessary.
These questions sound straightforward, but together they represent a comprehensive security architecture. They cover identity, behavior monitoring, data governance, segmentation, and incident response.
In other words, they apply the core principles of Zero Trust to the emerging world of AI-driven automation.
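One way to see how the five questions compose is a guardrail object with one control per question. The framework itself doesn't prescribe an implementation, so everything below, from the class to the dataset names, is a hypothetical Python sketch.

```python
from dataclasses import dataclass

@dataclass
class AgentGuardrails:
    """Hypothetical per-agent policy, one field per framework question."""
    agent_id: str              # Who are you?
    allowed_actions: set       # What are you doing?
    allowed_datasets: set      # What data are you consuming and producing?
    allowed_segments: set      # Where can you go?
    kill_switch: bool = False  # What happens if you go rogue?

    def check(self, action: str, dataset: str, segment: str) -> bool:
        if self.kill_switch:
            return False  # agent has been quarantined
        return (
            action in self.allowed_actions
            and dataset in self.allowed_datasets
            and segment in self.allowed_segments
        )

bot = AgentGuardrails(
    agent_id="agent-supply-orders",
    allowed_actions={"recommend_purchase", "place_small_order"},
    allowed_datasets={"inventory", "supplier_catalog"},
    allowed_segments={"procurement"},
)
print(bot.check("place_small_order", "inventory", "procurement"))  # True
bot.kill_switch = True  # abnormal behavior detected: shut the agent down
print(bot.check("place_small_order", "inventory", "procurement"))  # False
```

Note how the kill switch overrides every other permission. That's the incident-response question in practice: a guardrail set like this would have capped the floor-cleaner bot at small orders, no matter how good the bulk discount looked.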
Security will determine how fast AI can scale
One reason many organizations struggle to deploy AI at scale is trust.
Teams often experiment with models and automation tools in pilot environments. These experiments may produce useful results, but organizations hesitate to move them into production.
Security concerns often appear late in the process.
Josh sees this pattern frequently. “Security is usually an afterthought,” he said.
When security enters too late, organizations find themselves stuck in what he calls pilot purgatory. AI projects show promise, but leadership can’t fully trust them to operate safely inside production systems.
The solution isn't to slow AI adoption. It's to build security into the architecture from the start.
Josh used a simple analogy to explain this idea: security is a roll cage, not a brake pedal. A car with a roll cage can drive faster because the driver trusts the protection around them.
The same principle applies to AI systems.
When organizations design clear guardrails around autonomous agents, they can safely grant those systems more responsibility and autonomy.
Zero Trust makes that possible.
The future of enterprise security is machine governance
Josh believes many organizations still underestimate how much enterprise environments will change in the coming decade.
Machine identities are already growing fast. APIs, services, and automation tools now outnumber human users in many environments. AI agents will push that trend even further.
In the near future, human users may represent only a small share of the identities operating across enterprise systems.
This shift will change how security teams work. Instead of focusing mainly on human behavior, teams will monitor machine activity. They’ll analyze automated workflows, review AI-driven decisions, and enforce guardrails around autonomous systems.
Zero Trust already offers the right foundation. It removes assumptions about trust and verifies access continuously. Those same principles apply directly to AI systems.
The difference is scale.
Security platforms will soon need to evaluate millions of machine-driven interactions every minute. Preparing for that future must start now.
Organizations don’t need to slow down AI innovation. But they do need environments where AI agents have clear identities, strict boundaries, and constant oversight.
The companies that succeed in the AI era won't just be the ones that adopt automation fastest. They'll be the ones that build systems they can trust.
Listen to the full episode of The Segment: A Zero Trust Leadership Podcast on Apple Podcasts, Spotify, or our website.