Cybersecurity Is Broken: Why Outcomes Aren’t Improving and What Needs to Change
“Cybersecurity is foundationally, fundamentally, and systemically broken.”
That was the opening salvo from Illumio founder and CEO Andrew Rubin at a standing-room-only panel discussion at RSAC 2026.
The real surprise? Some of today's leading cyber experts — a former White House CIO, Microsoft's head of threat intelligence, the CISO of SolarWinds, and the chief security officer of one of the UK's largest financial institutions — all agreed with him.
Over the last decade, cybersecurity has become one of the most heavily funded functions in the enterprise. Teams have grown, tooling has matured, and new categories have emerged to solve increasingly specific problems.
On paper, it should be working.
But the outcomes tell a different story. Breaches aren’t slowing down. They’re getting larger, more complex, and more expensive to recover from, often spreading quickly across environments that were assumed to be secure.
The panel, “Hard Truths in Cybersecurity: Fear, Liability, and the Industry's Biggest Lies,” packed a ballroom at the Hyatt Regency in San Francisco’s SOMA district. Joining Rubin was David Boda of Nationwide Building Society, Tim Brown of SolarWinds, Sherrod DeGrippo of Microsoft, and Theresa Payton, former White House CIO.
What came through clearly is that while the industry hasn’t been standing still, it hasn’t been changing outcomes either. And fixing it requires rethinking how we measure success, how we assess risk, and how we limit the impact of inevitable breaches.
The following four takeaways from the panel show why the current model isn’t working and why containment, not just prevention, needs to become central to modern security strategy.
1. We’re measuring the wrong things and calling it progress
The panelists agreed that cybersecurity hasn’t struggled because of a lack of effort but because the industry has been optimizing for the wrong outcomes.
Tim Brown described how organizations define maturity today.
“Maturity oftentimes is measured in terms of activity instead of reduction of threat surface,” he said.
The distinction carries real consequences.
Security teams stay busy. They implement controls and follow established frameworks. Dashboards fill up with alerts that lead to investigations. To the board, it might look like steady progress.
But breaches don’t happen in dashboards. They happen in the gaps between controls and in the assumptions teams make about coverage. It’s the difference between checking a box and actually reducing exposure.
Brown also pointed to compliance as something that can quietly reinforce this problem. Frameworks bring structure, but they can also create false confidence.
Organizations meet the requirements, pass the audit, and still find themselves dealing with incidents those same controls were meant to address.
This is where the conversation starts to shift toward breach containment. If success only means stopping every attack, then every breach looks like failure. But if success includes limiting the fallout of breaches, the focus moves toward reducing how far an attacker can go and how much damage they can cause.
2. Cybersecurity still treats risk as binary. Reality doesn’t.
Rubin pressed on a deeper problem: cybersecurity still treats outcomes as binary. You’re either secure or you’re breached.
In most disciplines, risk lives on a spectrum. Problems get graded, and response scales with severity.
Rubin drew an analogy: if a doctor tells you that you have a cold, you don’t assume the worst. You recover and move on. But if the diagnosis is more serious, your response becomes more serious, too.
Cybersecurity hasn’t fully adopted that mindset. Strategies still tend to revolve around trying to eliminate all risk, even though that isn’t achievable in today’s threat landscape.
David Boda approached this from the perspective of cyber resilience.
“We’re not always [able] to secure,” he said. “We’re building our ability to be resilient in the face of the threats that come.”
A shift toward resilience changes how organizations prepare. It leads to designing systems that can absorb disruption without letting it spread unchecked. It also aligns closely with containment strategies, where the goal is to keep incidents small and controlled rather than allowing them to escalate.
3. You can’t protect everything, and that should shape your cyber strategy
Former White House CIO Theresa Payton brought a perspective shaped by operating in environments where the stakes are immediate and unavoidable.
At the White House, cybersecurity protection doesn’t get applied evenly across every asset because it can’t.
“It’s impossible to protect everything,” she said.
That constraint forces clarity. Teams have to decide what matters most and focus their efforts accordingly.
Payton described how organizations need to identify and rank their most critical assets in practical terms. That might include regulatory exposure, customer trust, or proprietary data.
Each organization defines its own version of what matters most, but the exercise of prioritizing is essential.
That’s easier said than done. Many organizations still don’t have full visibility into their environments. Payton compared it to “Monty Python’s search for the Holy Grail.” The line drew a laugh during the panel, but it underscored a real challenge: without visibility, prioritization becomes guesswork.
When teams do have clarity, it changes how they approach security architecture. They can isolate critical systems, restrict communication paths, and tightly control access. If something goes wrong, the fallout stays contained.
That’s a very different approach from trying to apply the same level of protection everywhere.
4. AI is accelerating attackers’ advantage
AI surfaced as an immediate force shaping the threat landscape.
Sherrod DeGrippo described what that shift could look like:
“I believe we will see the advent very soon of the unicorn threat actor — an apex-level threat actor that has incredible capability with one human.”
Her idea reflects how quickly capabilities are scaling. Tasks that once required coordinated teams can now be handled by individuals using automation and AI-driven tools. The barrier to entry continues to drop, while the potential impact keeps rising.
Tim Brown tied that shift back to something more fundamental: incentives.
“You still can make plenty of money and not go to jail,” he said.
That dynamic hasn’t changed. Attackers still have strong financial motivation and limited consequences. What has changed is the speed, scale, and persistence they can bring to an attack.
Brown described how AI enables long-running, patient campaigns that don’t rely on quick wins. These attacks can observe systems over time, adapt their approach, and strike when the opportunity is right.
It creates a different kind of pressure for defenders. Detection remains important, but it isn’t enough anymore. Leaders need to think carefully about what happens after an attacker gains access and how far that access can extend.
Cybersecurity is broken — now what?
The Hard Truths panel offered a clearer view of where the cyber industry stands and where it needs to go.
The current model isn’t changing outcomes. More tools and more activity haven’t reduced the overall impact of breaches. AI is adding speed and scale to an already difficult problem.
What came through most clearly is that cybersecurity needs to evolve in how it defines success. Success includes the ability to withstand attacks and continue operating. It also means limiting the spread of an incident and protecting what matters most.
That’s where breach containment plays a central role. It works alongside prevention and detection, shaping what happens after an attacker gets in.
Cybersecurity doesn't improve by continuing down the same path. It changes when organizations stop measuring activity and start measuring impact — and when containment becomes as central as prevention.
Learn how Illumio contains breaches to stop lateral movement and shut down attacks fast.