Words That Work: Virgin Money CISO Neil Robinson on Speaking Security to Power
A few weeks ago, a bombshell dropped in the security world.
Anthropic’s frontier AI model Claude Mythos showed it can chain together a 32-step attack sequence autonomously. It’s the kind of sophisticated, multi-pivot lateral movement that previously required elite human attackers working over days.
Within hours, the CISO WhatsApp groups were on fire. And within days, board members and C-suite executives who had never once asked a probing question about cybersecurity were suddenly calling their CISOs.
For a lot of security leaders I know, that moment was equal parts validation and pressure.
I had a chance to dig into this dynamic with Neil Robinson, CISO at Virgin Money, on a recent episode of The Segment podcast. He’s three years into a £52 million security uplift at Virgin, and he’s thought deeply about what it actually takes to turn security conversations into organizational action.
Security fluency: speaking your audience’s language
Early in our conversation, Neil described the CISO role as being, in many ways, that of a “chief storyteller.” And then — because Neil is nothing if not honest — he immediately pushed back on his own framing.
“It's true, and it’s equally false,” he said.
What he meant was that the storytelling label can make it sound like spin or packaging. But the real skill is translating fast-moving, highly technical risk into language that actually resonates with whoever you're talking to, whether that's a customer, a COO, or an engineer who’s being asked to prioritize patching over shipping.
For example:
- With a customer, it might be a conversation about keeping their phone’s software updated.
- With a COO, it’s about the business services at risk.
- With an engineering team being pushed to ship faster, it’s about the moral imperative to protect the people whose data they're responsible for.
It’s the same security reality but through the lens of three completely different conversations.
That audience-first discipline is something many security programs get wrong. Security teams tend to default to technical framing, such as CVE scores, MTTR, or attack surface metrics, when the people who need to act on security information are thinking in terms of customer impact, operational continuity, or competitive pressure.
The gap between those two languages is where security programs lose momentum.
When a news cycle becomes the perfect security conversation opener
The AI headline moment that kicked off our conversation is actually a useful case study in how external events can shift the internal communication dynamic for CISOs.
Neil made an astute observation that the announcement garnered so much attention partly because of who made it.
Anthropic, a high-profile AI company with significant media presence, created a news event that cut through to executives and board members who don't normally track cybersecurity closely.
Suddenly, the conversation that security leaders have been trying to have for years about operating at machine speed and the pace of the threat landscape had an audience that was leaning in rather than tuning out.
“There’s a lot of board members and executives who are talking about the Anthropic announcement,” Neil said. “Being able to react at machine speed has been a theme we’ve monitored in cybersecurity for at least the last year. I think the announcement really just helps with that conversation.”
For security leaders, the lesson here isn’t about riding news cycles but about recognizing that external events periodically create windows where the appetite for serious security conversation at the executive level spikes. The CISOs who are prepared with a clear, plain-language narrative about their security posture, their risk profile, and their investment priorities are the ones who make real progress when those windows open.
Neil was careful to frame this without triumphalism. Yes, it’s a moment of heightened attention. But it also brings in stakeholders who are coming to the topic from very different starting points and at different speeds.
“We have to engage earnestly with people who have maybe been outside the bubble and haven’t been monitoring and observing the changes that are going on,” he said. “And we have to be respectful that people are coming to this from different places at different times.”
That kind of patience, meeting people where they are rather than where you wish they were, is a practical skill CISOs need in their toolset.
Connecting infrastructure work to customer outcomes
One of the more persistent challenges in large security programs is the sheer distance between the technical work happening at the infrastructure level and the customer outcomes the organization actually cares about.
Patching a network device, segmenting a VLAN, updating an endpoint agent — none of these things feel connected to whether a customer can safely make a payment or trust that their data is protected.
Neil has thought carefully about how to bridge that gap inside Virgin Money, and his approach is straightforward. Everything connects back to the customer.
“Our job is to keep the customers of the bank safe, to keep their data safe, to keep their payments safe,” he told me. “Everything that we do is in service of that.”
That sounds obvious when you say it out loud. But maintaining that orientation consistently across a security team of 150 people doing everything from vulnerability management to application security to network controls requires deliberate effort.
Most importantly, it requires leaders who can articulate why the boring stuff matters. Why patching a legacy operating system protects a customer's savings. Why a network segmentation project limits the blast radius of a ransomware attack. Why the hours spent on compliance controls help prevent a regulatory event that would shake customer trust.
This is particularly relevant for Zero Trust programs, where so much of the work is invisible to end users and distant from any customer-facing outcome.
The teams building microsegmentation policies or enforcing least-privilege access aren’t in the headlines. Making the connection explicit, repeatedly, between that infrastructure work and the organization’s core mission is part of how security leaders keep programs funded and prioritized.
The patching problem: why human judgment still matters
We spent a good chunk of our conversation on something that might seem mundane but is actually one of the most revealing stress tests of any security program: patch management.
The reason patching is so hard in large enterprises is that the decision about whether to deploy a patch is almost never straightforward. The process often goes something like this:
- You need to know what's running on the asset.
- You need to know who owns it.
- You need to know what applications depend on it and whether they've been tested against the update.
- You need a change window.
- You need an approver.
- You need evidence that the patch actually works in your environment before you push it to production.
Each one of those steps involves judgment calls that span multiple teams, and the consequences of getting any one of them wrong can be severe.
Neil described an agentic-AI future where much of that procedural work gets automated. AI agents will be able to walk across configuration management databases, correlate asset owners, surface the relevant context to a human approver, and dramatically shrink the time from CVE disclosure to remediation.
He's seen startups prototyping exactly these kinds of workflows, particularly for cloud environments.
But the reason we're not there yet, he was clear, is governance. Deploying autonomous AI agents into critical operational workflows requires a level of identity control, observability, and accountability infrastructure that most organizations haven't built yet.
“We are understandably very conservative around allowing AI models to operate autonomously,” he said. “You need to have the guardrails, and a lot of those bits of infrastructure is also well restricted and observed.”
The communication challenge here for security leaders is actually the same as the broader one. How do you explain to a board or a budget committee why investing in governance infrastructure for AI agents is a security priority, when the payoff is abstract and the cost is real?
You tell the story of what happens if you don't: an autonomous agent with overprivileged access operating without audit trails, inside a security architecture that was never designed to account for non-human identities.
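The guardrails being described, least-privilege permissions for non-human identities plus an audit trail for every action, can be sketched in a few lines. This is a toy illustration under assumed names; the agent identity, permission strings, and log shape are all made up for the example.

```python
import time

# Hypothetical allow-list: each agent identity holds an explicitly scoped,
# deliberately narrow set of permissions (least privilege).
AGENT_PERMISSIONS = {
    "patch-agent": {"read_cmdb", "open_change_request"},
}

# Every attempted action is recorded, whether or not it was allowed
# (the observability/accountability half of the guardrail).
AUDIT_LOG: list[dict] = []

def gated_action(agent: str, action: str) -> bool:
    """Permit the action only if the agent holds that permission; log either way."""
    allowed = action in AGENT_PERMISSIONS.get(agent, set())
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "allowed": allowed,
    })
    return allowed

gated_action("patch-agent", "open_change_request")   # permitted and audited
gated_action("patch-agent", "deploy_to_production")  # denied, but still audited
```

The design choice worth noticing is that denial still produces a log entry: without that, an overprivileged or misbehaving agent leaves no trace, which is precisely the failure mode the governance investment is meant to prevent.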
Good, honest storytelling: what the best security leaders have always known
There’s a thread running through everything Neil said that I think gets to the heart of what makes security programs succeed or fail at scale.
Technology is almost never the limiting factor. It's whether the people responsible for security can communicate clearly enough, consistently enough, and in the right language for the right audience to actually change behavior across a complex organization.
It means walking a COO through what “important business services” actually means in terms of operational risk. But it also means being honest about uncertainty.
Neil was candid about what we don't yet know regarding the latest AI capabilities, such as the economics of deploying these models at criminal scale, how much noise they generate, or how quickly open-source equivalents will reach parity.
Good security storytelling communicates risk ranges, acknowledges what's unknown, and focuses action on the things you can control.
“I think ultimately where we get to is a better world where the AI models will be deploying secure code themselves,” Neil said. “Let's hope I'm right.”
That combination, honest uncertainty about the future, grounded optimism about the trajectory, and a clear focus on what to do now, is exactly the kind of narrative that moves organizations.
It’s what the best security leaders have always done, and it’s more valuable today than it’s ever been.
Listen to the full episode of The Segment: A Zero Trust Leadership Podcast on Apple Podcasts, Spotify, or our website.

