A Zero Trust Leadership Podcast

AI Just Raised the Stakes in Cybersecurity—Now What? | Neil Robinson
Season Four · Episode 6


Neil Robinson joins for a candid look at how one of the industry’s top security leaders is thinking about the next phase of AI-driven cybersecurity risk.

Transcript

Raghu Nandakumara  00:03

Well, there are AI headlines, and then there are AI headlines, and I think on the seventh of April, we had the AI cybersecurity headline, at least thus far anyway. All about that in a bit. So today, on The Segment, I am so excited to be joined by Neil Robinson, Chief Information Security Officer at Virgin Money, where he leads cybersecurity across the organization and focuses on measurable outcomes, resilience, and risk reduction at scale. Neil, welcome to The Segment!

Neil Robinson  00:45

Raghu, big thanks and very, very kind introduction.

Raghu Nandakumara  00:48

So thanks for that. No, no problem. Always a pleasure to have a chat with you. So here we are, just over two weeks since the bombshell dropped and got everyone, at least everyone in cybersecurity, into a combination of mass hysteria, mass panic, and mass excitement. The spin-meisters have been spinning, all these things, but we'll get there. So Neil, tell us a bit about what you do and your background. Off you go.

Neil Robinson  01:31

Yeah. So, Neil Robinson, Head of Information and Cybersecurity, so CISO, at Virgin Money. I've been here for three years, spent 15 years out in Asia before that, and have been in financial services for the better part of 25 years. We're just coming into the final year of a three-year, 52-million-pound strategy where we've been uplifting the security of Virgin Money to tier-one banking. And I'm very blessed to have a bunch of incredibly talented people who've been helping me on that journey, about 150 of them. And yeah, I'm very excited to have the conversation today.

Raghu Nandakumara  02:09

Awesome. So, let's say until two and a half weeks ago, what was the biggest concern, the biggest challenge you were having to address, maybe on a strategic basis?

Neil Robinson  02:25

Yeah, I think it's a very Virgin Money-specific response, but I think that's where the best security starts. We shouldn't forget that our job is to keep the customers of the bank safe, to keep their data safe, to keep their payments safe, and so everything that we do is in service of that. In that regard, we've been doing a bunch of really interesting things. Both the basics, the hygiene, so really making sure that all of our operating systems are up to date, that all of our vulnerabilities are patched, all of that really boring but good stuff. And then we'd been doing some more innovative stuff around design-based security, but also some things around network security. And again, it's very basic stuff, but it's quite hard in a big enterprise. So it's really about execution and just staying focused on the things that keep our customers safe.

Raghu Nandakumara  03:21

So you mentioned the mission: in your case, how do we keep our customers safe, their data safe, their experience safe, etc. So what are the things that you put in place, let's say from a storytelling perspective, to constantly connect those? Because sometimes, when we're working in network security or infrastructure security, it can be very far removed from the customer. So how do you constantly connect those two things?

Neil Robinson  03:53

Yeah, look, you're right. I sometimes refer to security leaders as chief storytellers, and it's true, and it's equally false. A big part of what we do is trying to make topics that are really fast-moving and really complex relatable and understandable. And it really depends on who you're talking to. If we're talking to a customer, you might talk to them about how they keep their Android phone or their iPhone up to date, and why that's important; you make it relevant to them. If I'm talking to a COO, invariably, these days, it comes down to important business services and what we're doing around those, whether it's making a payment. If it's maybe an internal IT colleague, when we're trying to coerce them into patching quicker and quicker, it will be about that moral imperative to do the right thing when they're under a lot of pressure to prioritize faster and faster shipping of code, but at the same time, we've got to do it in a safe and secure way. So it's constantly just trying to understand who your customer is, internally or externally, and trying to talk to them in a language that resonates with them.

Raghu Nandakumara  05:09

Well, I don't know if you answered that in a way to deliberately connect to my next question, but it was very well done. Maybe you did; an amazing interview, that's always the case, right? A gold-award-winning one, of course. So, you said patching faster and faster, while the application teams want to ship code faster and faster. And there was this little bit of news two and a half weeks ago, at the time of recording. So let's go to that date. You see this headline crop up in your feed. First reaction, instant reaction, without looking at the content itself: when you saw the headline, what was your reaction?

Neil Robinson  05:55

Yeah, look, it's just super personal, and it probably tells you more about me than anything, which is: I was quite excited. I've been following this trend for about 12 months. When ChatGPT first came out, there was that sort of "oh wow" moment, and I think we've got used to having this technology in our lives, and we think it's normal. But if you go back and look at those really original models, they feel kind of disappointing; we've forgotten that, and they just keep getting better and better every day. Honestly, you log on to a Claude or a Cursor and they're releasing multiple times a day. So the first time I saw Mythos in the headlines, I was excited that this exponential curve is still continuing. And that was shortly followed by the realization of what it meant for me. So the techno-optimist side was like, "Oh wow, it's still happening. It's still doubling." And then the other side was like, "Oh, shit. What's that going to mean? What does that mean for my supply chain? What does that mean for the patching? What does it mean for my vulnerability team?" And yeah, the reality quickly grounded me back in the day-to-day operational rhythm that we've got.

Raghu Nandakumara  07:22

But it's good to hear that the first reaction was one of excitement. A geek at heart, right? It's all about the technology and the evolution of technology. Okay, so you've had time then to essentially digest the news. Again, big picture, not necessarily what it means to Virgin Money, but big picture: what do you think it means to cybersecurity and how we approach the problems we're solving?

Neil Robinson  08:02

Yeah, I think first things first. It's well documented that we humans are always a little bit over-optimistic in the short term with innovation and a little bit too pessimistic in the long term, and I don't think this is very much different. It's just really hard to get a sense of how quickly this is evolving. So the optimist in me believes, and I've discussed this with many other CISOs and many other leaders, that defenders do have an advantage, because we have access to our own data, somewhat of a walled garden, and that's what drives these models. I think the counter-argument, which is also valid, is that we are not entirely in control of our defenses. Some of our data is processed by our partners, and we also don't know at what pace some of this technology is going to be released into the ecosystem. So I think it is going to be quite bumpy. But I think ultimately where we get to is a better world where the models themselves will be deploying secure code, and as that refresh washes through our technology, we'll get to a better place. But between now and there, there's going to be a very, very bumpy ride.

Raghu Nandakumara  09:28

Yeah, yeah. So what's the chat on the CISO WhatsApp group?

Neil Robinson  09:33

I think everybody is going through this emotional cycle at different rates, depending on what industry you're in. If you're on the West Coast of the U.S., it's very different to if you're anywhere else. If you're one of the small group of organizations that actually has access to Mythos, you're in a really different place to if you're not in that group. So, yeah, just really different. But broadly speaking, the CISOs I speak to, once they've got over that emotional reaction, feel like this is actually quite a positive thing. It's positive that Anthropic have done such an amazing job at marketing this, and you really want to be that marketing person, don't you, because you've just absolutely nailed your annual objectives. So they've done a really good job of that. But the real positive is that there are a lot of board members, a lot of executives, talking about it. And this has been a theme that we've been monitoring for at least the last year: that we really need to be able to react at machine speed. So I think it really just helps with that conversation.

Raghu Nandakumara  10:40

Yeah, yeah, I completely agree. I think the marketing folks at Anthropic nailed it, and I think it also matters that this announcement came from Anthropic, versus, let's say, a less prominent AI company. Anthropic definitely own the media attention at the moment, and being in vogue, they definitely had the soapbox from which to announce this.

Raghu Nandakumara  11:21

Yeah, yeah, absolutely. And I agree about the elevation of the conversation at board level: really, "Okay, I see that we have to potentially do something about this." And I guess the next question is, what is the right way to approach the reaction? Have you got to that stage yet, even from an ideation perspective, of thinking about the right approach going forward?

Neil Robinson  11:51

So I think we have to engage earnestly with people who have maybe been outside the bubble and have not been monitoring and observing the changes that are going on, and we have to be respectful that people are coming to this from different places at different times. So that's one part: really just being respectful of all the people we need to bring on the journey. And then I think it's the basics. It's governance, it's discipline around processes. It's about ensuring that we've got the right technology, but also that we've got the right people, and that we are mindful of burning people out at a time when we are already looking down the barrel of some of the biggest patching numbers that we've seen in our lifetimes. Okay, maybe it's a step change, but I don't think that's really any different to what most security leaders are already doing and thinking about. And hopefully that realization is aligned with the executives that are responsible for the budgets, and also with the partnerships that we have to have with our CIOs and heads of infrastructure, who ultimately do most of the patching. Thankfully, we've got some incredible partners who really take security very, very seriously, and it's more about just making sure that we're coordinated in our response.

Raghu Nandakumara  13:29

Yeah, and you've mentioned the basics a few times. There's always a conversation around "we must focus on the basics," and then the reaction to that is, "stop telling us that, we know that," etc. But going by what you're saying, the basics have never been forgotten; this just re-emphasizes that they're non-negotiable, right? If you don't do those things well, then all the bells and whistles and exciting stuff that you want to do just don't matter.

Neil Robinson  14:08

Yeah. So I think in times when you feel that the threat is lower, you might get comfortable with a certain level of risk. Clearly, this elevates people's perception of the risk. So, really simple things like operating systems that are out of support: where we thought a mitigating control might be good enough, actually, now maybe we're thinking, no, we need to patch. I think the basics are also about creating enough capacity to absorb this extra input. One thing we've been kicking around is, what would a bad day look like? Would it be 10x more vulnerabilities coming in over the next six months? And what would that mean for our vulnerability management team that needs to analyze it? Have they got the right technology? Can they report? How do they communicate that at scale to teams that are starting to ship code a lot more quickly?

Raghu Nandakumara  15:18

So, just on that: you mentioned rethinking the risk that we're carrying. Is this a case of saying, yeah, we already understood those risks? Take the example where you know you're carrying a vulnerability you've not been able to patch, and you've got a compensating control. Well, that's clearly a risk you're aware of, a risk you're managing. So is the conversation just re-emphasizing all the things we already know about? Or is there a sense of, "Oh my God, I didn't realize that the hole was so much deeper"?

Neil Robinson  16:07

Yeah, I think it's a bit more profound than just understanding that that risk you're carrying or that issue is still there and is relevant to this particular event. It's more that you reassess how likely it is that that specific issue is going to be discovered and exploited, and what the impact is going to be, and I think this really brings that assessment to life. And one of the really interesting and slightly scary things about these new models is that, while we've not seen it yet, they are saying the models are able to chain together vulnerabilities within the model itself. So not just within the automation around the way they use the models, but within the model, which, if you think about that from an enterprise perspective, could be quite scary.

Raghu Nandakumara  17:05

Yeah, there was, I think, the UK AI Institute, or someone, who showed that Mythos was able to chain together a 32-step attack, something they'd never seen any level of automation come close to, which is huge in itself. And actually, on the comment about the likelihood going up: I read a really good LinkedIn article a couple of days ago where the author was talking about the CVSS score, saying you can take every vulnerability out there and add one or two to the likelihood straight away, then look at your environment and reframe your vulnerability management plan based on that. And I agree. It's the likelihood of exploit that has shot through the roof; I'm not sure we truly know the impact just yet.
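The re-prioritization described there, bumping every vulnerability's likelihood and re-ranking the backlog, could be sketched roughly like this. The flat +2 bump, the 0–10 scales, and the likelihood-times-impact ranking are illustrative assumptions for the sketch, not the actual CVSS formula:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    impact: float      # 0-10; AI tooling doesn't change what a breach costs
    likelihood: float  # 0-10; proxy for probability of exploitation

def reprioritize(vulns, ai_bump=2.0):
    """Bump every vulnerability's likelihood by a flat amount (capped at 10)
    and re-rank the backlog by likelihood * impact, highest first."""
    for v in vulns:
        v.likelihood = min(10.0, v.likelihood + ai_bump)
    return sorted(vulns, key=lambda v: v.likelihood * v.impact, reverse=True)

backlog = [
    Vuln("CVE-A", impact=9.0, likelihood=3.0),  # severe but hard to exploit
    Vuln("CVE-B", impact=5.0, likelihood=8.0),  # moderate but easy to exploit
]
ranked = reprioritize(backlog)
```

Note how the easy-to-exploit CVE-B overtakes the higher-impact CVE-A once the bump saturates its likelihood, which is exactly the "reframe your plan" effect the article describes.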

Neil Robinson  18:20

Yeah, yeah. And maybe just to balance it out, I think at the moment there are a couple of really important bits of information that we don't yet know. It's been well documented that it's a very expensive model, so it'll be interesting to see, once this and other models come out, what the affordability is, because any criminal organization that might use one of these models is trying to make money, so they need to make a profit out of it. The other bit that is not well documented is how noisy they are: how much manual processing do you need to do after the model has looked at something? I actually just had lunch with a very old friend who works at one of the big hyperscalers. I'd like to be optimistic that the commercials don't work, but having spoken to him about some of the things they are doing with the models internally, it's really mind-blowing what these models can do. So I think it's not just about speed and impact; genuinely, I think they're going to be able to do things that were previously very expensive to achieve. So it's not just vulnerabilities. It will be logic flaws. It will be multimodal, so it will be dialing up your help desk as well as trying to hack you at the same time. These capabilities are available at the moment, and we have not yet seen any credible intelligence that they're being used like this, but you can see that they are fast approaching.

Raghu Nandakumara  20:05

I think the cost argument is really interesting, because in all the reporting that's gone on, it's not an area that has been truly touched on. It's very much been about the capabilities, versus the operational effort required to really benefit from this at scale. Though, on the flip side, there is the very real fact that some of the open-weight models and the smaller models have, maybe not the same sophistication, but the capability to find vulnerabilities in this manner. So there is that to keep in mind. It's not that the attackers won't be able to; it's just that they may not have access to, or want to use, the top-of-the-range, most expensive model.

Neil Robinson  21:06

Yeah. I mean, I was on a call recently with an international group of CISOs, and one of the questions to somebody in that group who had access to Mythos was, when do you think the open-source models will have an equivalent capability? Obviously that's highly speculative; nobody knows. But it was an interesting thought exercise. They said, well, conceivably within 12 months. Even if it's 24 months, okay, that's a very interesting scenario that we need to plan for, worry about, and try to think about mitigating.

Raghu Nandakumara  21:49

Yeah, yeah. And that's what we're planning for. Sorry, go on, you were going to say something? No? So what we're planning for, going back to one of the earlier points, is that the curve people visualize here is that when the findings become truly public, there'll be this spike in vulnerabilities, and there'll be a huge scramble to catch up and patch or put compensating controls in place. But then, as you said, that will essentially reach a peak and then calm, because you're also incorporating these capabilities as part of your SDLC, so you're naturally building, hopefully, better-constructed software with fewer bugs. So over time, the long tail is short-term pessimism, to your point, but long-term optimism.

Neil Robinson  22:53

Yeah, I think it's short-term pessimism, and it's going to be very bumpy and uneven. But definitely, I'm an optimist, and I think it'll be a better future. We'll have more secure code, and I think over time it will reduce the net cost of security, because we'll be securing the code before it goes live. So, let's hope I'm right.

Raghu Nandakumara  23:19

Yeah. So if you now go over to our side, the defender side, and you're looking at capabilities that are going to help you get ahead of this game: what are the capabilities, or maybe the uses of AI within cybersecurity, that you think are really going to become essential or game-changing to combat this clear and present threat?

Neil Robinson  23:55

I think a really practical step we need to lean into is just really encouraging our teams to experiment with and use these models. There's been a very understandable reticence around AI safety, but I think that horse has bolted. We are in a future where this is coming, and we should be encouraging people to use these models. Even at the moment, we don't have access to Mythos, but we have, you know, 5.4, we have Opus 4.6. You can run these right now against your repos, and they will bring back vulnerabilities that you don't know about. So immediately, that's the thing we want our engineering teams to look at. Then, going back to earlier, it's the same basics, ultimately, which is identification. Where are our code bases? Where are we using AI? Really simple stuff. With each technology wave, we always get it wrong; we always forget to put in identification, CMDB, and all those good things that we're still trying to perfect. And then it's really about being risk-centric. Not all of these risks are going to ripple through in the same order, and they're not all equally important. So it's about being risk-aware and engaging with each of your development teams so that you are focusing on the right things first. And then, I do quite a lot of work with the startup ecosystem, and there is also an avalanche of interesting entrepreneurial opportunities around how we might secure AI, but also how we're going to use it to defend against some of the AI attacks.
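That kind of repo sweep, pointing a model at your code bases and collecting findings, could be wired up along these lines. The file extensions, the prompt wording, and the injected `model_call` function are all assumptions for the sketch; in practice `model_call` would wrap whatever LLM client and model your organization actually uses:

```python
import pathlib

def scan_repo(repo_root, model_call, exts=(".py", ".js"), max_chars=8000):
    """Walk a repo, send each source file to an LLM callable, and collect
    any reported findings. model_call(prompt) -> str is injected so the
    scan logic stays independent of any particular model or vendor."""
    findings = {}
    for path in sorted(pathlib.Path(repo_root).rglob("*")):
        if not path.is_file() or path.suffix not in exts:
            continue
        code = path.read_text(errors="ignore")[:max_chars]  # truncate huge files
        prompt = (
            "You are a security reviewer. List any vulnerabilities in this "
            "file, or reply NONE.\n\n" + code
        )
        reply = model_call(prompt)
        if reply.strip() != "NONE":
            findings[str(path)] = reply
    return findings
```

Injecting `model_call` also makes the sweep testable with a stubbed model before any tokens are spent, which matters given the cost point raised earlier in the conversation.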

Raghu Nandakumara  25:57

I'd love to hear about anything you can discuss in that space. What are the standout ideas or directions that you see being explored?

Neil Robinson  26:12

So I think one of the things that is becoming clearer is that AI-native technology is shipped faster. It's got a level of capability per line of code that is significantly better than the stacks most of the enterprises are running, and I think that really changes what we need to defend. So there are a number of startups that I've seen, and that have pitched to me, that are going after that space. They are AI-native, and they're trying to forecast when enterprises are going to start buying this stuff. I think one of the key success drivers is going to be how we evolve our management of identity: how do we go from a human-in-the-loop AI interaction to something that we're comfortable letting update the repo because it's found a vulnerability? I've seen a couple of interesting pitches around that. Nothing that we are buying at the moment, but I think that has got to be solved in order for us to do it, because it's something we are, understandably, very conservative about: allowing AI models to operate autonomously. You need to have the guardrails, and a lot of those bits of infrastructure: ensuring you've got the right guardrails, ensuring you've got the right observability around what it can and can't do, and making sure that the identity is also well restricted and observed.

Raghu Nandakumara  28:00

So actually, let's talk a bit about that. Right at the top of the conversation, you pointed to attackers operating at machine speed now, or the possibility of it. From their perspective, automating the hell out of everything is in their best interests, and they've got no process they have to follow, no thought of, "Oh my God, if we did this, what's the impact?" It's the least important thing to them. But having been on the end-user side, take something like patching. We could automate deployment of patches; we've been able to for decades. But the human in the loop has always been necessary, because there are so many hoops to jump through to get to a point where we're confident we could blast this out and the environment will be fine. Say it's an operating system patch: your standard operating environments need to function okay, then you need to test on your standard hardware, then your application teams need to deploy it into their lower environments to validate that their application doesn't break, then you promote to production, and then you blast it out. So given all of that, how are you thinking about automating, or putting in place highly trustworthy AI agents that can make a decision as good as a human's when it comes to something like that?

Neil Robinson  29:49

So I think, in really simple terms, what this tech, like agentic AI, is really good at is replicating non-deterministic work, which is largely what a lot of these procedural bank processes are. It's not here yet, but I can foresee it, and I've seen some startups trying to push into this space. For example, one really simple problem all organizations struggle with, and why we are so reticent to allow automated patching, is that you want to validate the impact on the asset you're deploying to. Almost every AI-native startup comes with some solution where they say, that's fine, we've solved that problem. We've got an agent. It will pick up, let's say, a vulnerability. It will walk, not literally, but walk across the CMDB and pick up the information we've got there. It will walk over to your endpoint manager, look at the access onto the machine, correlate them, and come back to you and say, we think that Raghu is the owner of this code base. Now, that's something that today we do manually through spreadsheets, or maybe, if you're lucky, ServiceNow, but it comes out of the box with a lot of the new startups. And then you say, okay, so we know who the user is, but then we need to put in a service ticket that articulates who the risk approver is and what the next change window is. Again, all these things are possible with the agentic AI that people are building at the moment. I think very shortly we'll be able to take a sev one, triage it through a platform, actually spin up a test environment, test it, then find the owner, show them the evidence, and ask them to approve it. That would drastically reduce the overall operational effort.
We'll probably end up paying a lot in tokens, but there are already platforms in the cloud that are demoing these sorts of capabilities. Not yet on-prem, but for
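The agent walk described there, CMDB to endpoint manager to probable owner to change ticket, reduces to a small pipeline once the systems involved are stubbed out. Every name below (the lookup tables, the ticket format, the field names) is a hypothetical stand-in for illustration, not any real product's API; a real agent would replace the dictionary lookups with CMDB, endpoint-manager, and ticketing calls:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the systems named in the conversation.
CMDB = {"payments-api": {"host": "vm-prod-01"}}          # asset -> deployment info
ENDPOINT_ACCESS = {"vm-prod-01": ["raghu", "neil"]}      # host -> users with access

@dataclass
class TriageResult:
    asset: str
    host: str
    probable_owner: str
    ticket: str

def triage_vulnerability(cve_id: str, asset: str) -> TriageResult:
    """Mirror the agent's walk: CMDB -> endpoint manager -> owner -> ticket."""
    host = CMDB[asset]["host"]            # step 1: look the asset up in the CMDB
    owner = ENDPOINT_ACCESS[host][0]      # step 2: correlate access to a probable owner
    ticket = f"CHG-{cve_id}-{asset}"      # step 3: raise a change ticket for approval
    return TriageResult(asset, host, owner, ticket)

result = triage_vulnerability("CVE-2025-0001", "payments-api")
```

The point of the sketch is that each hop is mechanical once the data sources are reachable; the hard part the episode flags, deciding whether the correlated owner is actually still the owner, is exactly where the human (or a much more capable agent) stays in the loop.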

Raghu Nandakumara  32:00

cloud-based, yeah, sure. Okay, I think that's, as you said, where so much of the human in the loop comes in: these fuzzy areas. Because there's never a binary answer to all of these situations. It's always, "Yeah, well, kind of. It says it's Raghu..."

Neil Robinson  32:24

But Raghu has got a different job, so it's not Raghu's job anymore, exactly. And you can only tell that by looking at the context. So it's all this sort of real-world investigation that people do today.

Raghu Nandakumara  32:37

Yeah, yeah, for sure. All right. Okay, so as you said, you're short-term pessimistic, or I should say, short term, this is going to cause a lot of work and a lot of thinking, and long term, it's going to make our lives much better. So look into your crystal ball, Neil Robinson's crystal ball: in five years' time, what do you think the state of things will be?

Neil Robinson  33:14

I really struggle to predict six months from now, let alone five years. But just playing along: I think every security person, every tech person, will have their own agent, and I think our agents will be interacting autonomously without us. I think that software and the technology stacks as we know them today will look very different. The idea that we would have static interfaces that people put up with, the thought that your banking app is the same for two million people, I think will be a distant memory. You can already see that with things like Lovable; it's so easy to dynamically create an interface for that one user. And I think there'll be a whole bunch of other really interesting ways that we interact with our banks and our other service providers. You're already starting to see that through the much-improved experience of calling into a help center and having an automated person, a bot, answer that actually understands what you want and actually helps you. I mean, completion rates on our chatbots have gone through the roof; they are so much better. So in five years, yeah, I think it'll hopefully be a much better experience and a much more

Raghu Nandakumara  34:49

secure experience as well. Oh, for

Neil Robinson  34:51

sure, for sure. That's the optimist in me. The bit that is much less certain around the secure piece is this whole new ecosystem of LLM-based technology that we really don't fully understand yet. If you look outside banking, things like OpenClaw and all of these viral code bases being built are instructive as to what the future might look like. So I think that is much, much harder to predict. But banking has been around for hundreds of years; I don't think that's changing in the next five years. Mining, I don't think, is changing in the next five years. But the way that we interact, the way that we service customers, and the way we secure it will change. I think the security will certainly be better on the legacy stack; I'm not so sure about the AI stack. We'll have to defer that until the next podcast.

Raghu Nandakumara  35:51

Awesome. Well, I think that's a great place to sort of wrap up the conversation. Neil, thank you so much for your time today. It's always a pleasure to spend time chatting to you. So yeah, really appreciate that.

Neil Robinson  36:05

Raghu, thank you so much. And likewise, always amazing speaking with you. I hope you have a fantastic weekend.

Raghu Nandakumara  36:09

Likewise, thank you. All right, we're done.