
The Future of Cybersecurity Is Anti-Fragile, Not Just Resilient

For years, resilience has been the standard for cybersecurity. Systems fail, teams respond, services come back online, and the business keeps moving.

But that model makes assumptions that no longer hold up. It assumes the environment you return to after an incident is still fit for today’s threats. It also assumes defenders can keep pace with attackers and that yesterday’s architecture can handle what comes next.

The RSAC 2026 session, Beyond Resilience: Building Anti-Fragile Cyber Systems, challenged those assumptions.

John Kindervag, Illumio chief evangelist and creator of Zero Trust, and Anthony Rodriguez, CVS Health assistant VP of application security engineering and threat management, argued that recovery alone is no longer enough in environments that are always changing and under constant pressure.

The shift they outlined is toward anti-fragility. Instead of just recovering from incidents, organizations need to use them to improve how their systems operate. This means designing security programs that limit impact, adapt in real time, and get stronger with every disruption.

That shift changes how we build and measure cybersecurity. It moves the focus from restoring service to reducing risk, from reacting to incidents to learning from them, and from static controls to systems that evolve with the threats they face.

Resilience isn’t the destination anymore

Cyber resilience still matters, but it’s no longer the end goal.

Kindervag described resilience as a loop most organizations know well: something breaks, you fix it, and you return to where you were before. The system survives, but it doesn’t change.

Anti-fragility changes that loop. Every incident becomes a source of learning. Systems are expected to evolve, not just recover.

“Anti-fragility [means] we learn, we adapt, and we grow stronger,” Kindervag said.

This is where a Zero Trust strategy steps in. It’s not just a set of controls or a framework for access decisions but a mechanism for continuous adaptation.

Kindervag called it “an adaptation engine,” which is a useful way to think about it. When systems are built to observe, enforce, and adjust in real time, they don’t just withstand pressure. They respond to it in ways that make them stronger over time.

Designing for the real world, not the happy path

Too often, today’s systems are designed around ideal conditions.

As Rodriguez explained, “We often look down the happy path, but the happy path is toxic.”

In practice, environments are anything but predictable. Configurations drift, dependencies change, and users behave in ways that don’t match design assumptions.  

Attackers exploit those gaps because they exist outside the “happy path” most systems are built around.

Designing for anti-fragility means accepting that stress is constant. It’s not something to avoid or minimize. Teams must plan for stress and build it into how systems operate.

When stress becomes an expected input, rather than an exception, systems can be built to handle it more effectively.

That shift shows up in everything from how applications are tested to how infrastructure is managed. It also changes how teams think about failure. Instead of treating it as an anomaly, it becomes part of the process.

Why static environments don’t hold up

A consistent theme throughout the session was the risk of relying on static security models in dynamic environments.

“Static is toxic,” Rodriguez said.

That statement reflects a broader problem. Many traditional controls are designed for environments that don’t change frequently. Security teams define policies upfront, those policies grant access based on a single decision point, and policy enforcement assumes stability.

Modern environments don’t behave that way. Applications are distributed across clouds and data centers. Workloads scale up and down. Users connect from a wide range of locations and devices.

Rodriguez pointed to one of the clearest examples of this gap in how authentication is handled. “We used to make decisions based on a single signal like MFA,” he said. “You hit the button, you’re in, and no one looked at what your packet was doing post-authentication.”

That approach treats trust as a moment in time. But risk doesn’t stop once a user is authenticated. It continues throughout the entire session.

Moving toward anti-fragility requires shifting from static decisions to continuous evaluation. Systems need to observe behavior over time and adjust accordingly.  

That’s where Zero Trust principles become critical, especially when combined with the ability to enforce controls dynamically and contain risk as it emerges.
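The contrast between a one-time authentication decision and continuous evaluation can be made concrete with a small sketch. This is purely illustrative, not any product's API: the session object, risk weights, and threshold are all hypothetical, standing in for whatever signals and enforcement points an organization actually has.

```python
# Minimal sketch of continuous trust evaluation after authentication.
# All names and weights here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Session:
    user: str
    trust: float = 1.0          # starts fully trusted right after MFA
    events: list = field(default_factory=list)

# Hypothetical risk weights for post-authentication signals.
RISK_WEIGHTS = {
    "new_device": 0.2,
    "unusual_port_scan": 0.5,
    "impossible_travel": 0.6,
}

def observe(session: Session, signal: str) -> str:
    """Adjust trust as signals arrive; enforce when it drops too low."""
    session.events.append(signal)
    session.trust -= RISK_WEIGHTS.get(signal, 0.0)
    if session.trust < 0.5:
        return "revoke"         # contain: end the session, re-authenticate
    return "allow"

s = Session(user="alice")
observe(s, "new_device")                # trust dips but stays acceptable
action = observe(s, "unusual_port_scan")  # now trust falls below threshold
print(action)
```

The point isn't the scoring math, which is a placeholder; it's that trust is re-evaluated on every signal throughout the session instead of being granted once at login.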

Making better use of signals

Another important shift discussed in the session is how organizations handle signals.

Security teams have no shortage of data. The challenge has always been turning that data into something actionable.  

Rodriguez emphasized the importance of focusing on meaningful signals. “As we focus on more signals and less noise, your system will be more resilient,” he said. “It’ll recover, adapt, and evolve.”

Kindervag connected this to the limitations of traditional approaches. “We haven’t been able to consume signals well because of manual processes.”

That’s starting to change. With better analytics and AI-driven approaches, organizations can process more signals and respond to them faster. But the real value comes from what happens next.

Signals shouldn’t just trigger alerts; they should drive action. And that action should feed back into the system, improving how it behaves in the future.

This creates a feedback loop where visibility, enforcement, and learning are all connected. Over time, that loop becomes the foundation for anti-fragility. It allows systems to continuously refine how they operate based on real-world conditions.
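A feedback loop like the one described above can be sketched in a few lines. This is a toy model under stated assumptions: the policy store, signal shape, and severity threshold are invented for illustration, not drawn from any real tool.

```python
# Illustrative feedback loop: a signal drives an action, and the action
# feeds back into policy so the system behaves better next time.
# The policy structure and severity threshold are assumptions.

policies = {"blocked_sources": set()}

def handle_signal(signal: dict) -> str:
    """Turn a signal into an action, then fold that action back into policy."""
    src = signal["source"]
    if src in policies["blocked_sources"]:
        return "already_blocked"               # earlier learning pays off
    if signal["severity"] >= 7:
        policies["blocked_sources"].add(src)   # enforcement updates policy
        return "blocked"
    return "logged"

first = handle_signal({"source": "10.0.0.9", "severity": 8})
second = handle_signal({"source": "10.0.0.9", "severity": 3})
print(first, second)
```

The second call is handled by policy the system learned from the first, which is the loop in miniature: visibility produces a signal, the signal produces enforcement, and the enforcement changes future behavior.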

Breaking the cycle of repeat incidents

One of the more candid parts of the discussion focused on what happens after an incident.

Kindervag highlighted a pattern that most teams will recognize. “Too often we wait for bang. Companies don’t care until the bad thing happens.”

Even when organizations do respond effectively, the follow-through is often limited. Issues are resolved in the moment, reports are written, and then the same problems resurface weeks or months later.

“The traditional disaster recovery process is linear: failure, failover, and recover,” Rodriguez said.

What’s missing is systemic improvement. Anti-fragility requires organizations to take what they learn from each incident and apply it broadly. Fixing a single issue isn’t enough. The goal is to eliminate entire classes of problems.

That means security teams should focus on:  

  • Updating policies across multiple environments
  • Improving visibility in areas that were previously overlooked
  • Automating responses that were handled manually before

Without that step, organizations remain stuck in a cycle of repeat incidents.

Rethinking how success is measured

As organizations move toward anti-fragility, Kindervag and Rodriguez also discussed why the way teams measure success needs to change.

Traditional metrics tend to focus on response times and incident volumes. Those metrics still have value, but they don’t capture whether systems are actually improving.

Rodriguez offered a different perspective:

“A key metric isn’t how many incidents you’re on but how many you’re not on,” he said.

That shift moves the focus from activity to outcomes. It emphasizes containing attacks, reducing risk, and avoiding disruption as much as possible.  

Other metrics he mentioned, like risk reduction per incident and service disruption avoidance, align more closely with what the business cares about. They reflect whether security efforts are making a meaningful difference, not just whether teams are staying busy.

Where to start with anti-fragility

For many organizations, the challenge is figuring out how to apply these principles.

“The biggest problem is that people don’t know how to start,” Rodriguez said. “And when they do start, they try to build a utopia.”

Trying to solve everything at once usually leads to stalled progress. A more practical approach is to focus on incremental improvements.

That might begin with gaining better visibility into how systems communicate, especially across environments where risk is harder to see. From there, organizations can start to enforce more granular controls, reduce reliance on static policies, and build feedback loops that connect signals to action.

Testing also plays a role early on. Introducing controlled stress into systems, even in limited ways, can reveal where the biggest gaps exist and where to focus next.

Over time, those steps add up. They create a foundation that supports more advanced capabilities and a more adaptive approach to security.

Why the shift from resilience to anti-fragility matters now

Kindervag and Rodriguez’s conversation at RSAC reflects a broader shift happening across the industry.

Environments are becoming more complex, and attackers are moving faster. AI is accelerating both sides of the equation.

In that context, a cyber strategy built solely on resilience will struggle to keep up. Recovering from incidents is important. But it doesn’t address the underlying issue. If systems return to the same state after every disruption, the same vulnerabilities remain.

Anti-fragility offers a different path. It treats stress as an input for improvement and builds systems that can adapt over time.

That shift is especially important when it comes to containing breaches. The longer an attacker can move through an environment, the greater the impact. Reducing that movement, limiting blast radius, and learning from each event are all part of building a more durable defense.

And at a time when the gap between threats and defenses continues to widen, that difference is what will determine which companies keep up and which fall behind.

See how the Illumio platform contains breaches, limits lateral movement, and turns a Zero Trust strategy into an anti-fragile system.
