A Zero Trust Leadership Podcast

From Hype to Guardrails: Building AI You Can Actually Trust | Josh Woodruff
Season Four · Episode 3

In this episode, Joshua Woodruff, Founder & CEO of MassiveScale.AI, explores what it really takes to adopt AI, especially agentic AI, without putting your business at risk.

Transcript

Raghu N  00:12

Hello everyone! Welcome back to The Segment. Today, I'm excited to welcome Josh Woodruff to the podcast. Josh is the founder and CEO of Massive Scale AI, a security-first AI consultancy helping organizations adopt agentic AI using Zero Trust principles. More on that in a second. With nearly 30 years of experience leading security, cloud, and IT transformations from startups to Fortune 500 companies, Josh brings a rare mix of technical depth and executive perspective. He's a recognized Zero Trust thought leader, co-leads the Cloud Security Alliance Zero Trust Working Group, serves as IANS faculty, advises Fortune 100 organizations across critical industries, and is also the author of Agentic AI + Zero Trust: A Guide for Business Leaders. So let me take a breath and welcome Josh to The Segment.

Josh Woodruff  01:06

Thank you so much, Raghu! So great to be here. Really appreciate you having me on the show.

Raghu N  01:10

The pleasure and privilege is all ours, Josh. So that's quite a background. Just step us through how you got to where you are now.  

Josh Woodruff  01:22

Sure, yeah, happy to. I'll try to keep this concise. That was a great overview. Massive Scale AI was born out of a realization that AI is moving more rapidly than almost any other tech revolution in history. And a lot of folks, I mean, we've all seen this, we've been in the industry a while, you've seen tech evolution. Gosh, I remember when the internet came out, and then mobile, and then cloud. So you've seen these waves, you've seen the hype, you've seen the shiny objects. Throughout my career I've been through these, running operations, infrastructure, IT security, always responsible for compliance, reliability, resiliency, all the ilities, and always enabling organizations to accelerate innovation, get business ideas into the hands of the customer as quickly as possible, safely, securely, and with availability. When AI burst onto the scene, I had started a consulting business a few years prior focused on transformation around cloud, Zero Trust security, and AI, but we pivoted to Massive Scale AI because we were seeing the same thing happen again across these evolutions: the shiny object syndrome, everybody grabbing onto it, thinking it's a tools-led thing. I started to see businesses and leaders looking at their tech teams to implement AI and drive transformation like we've done in the past, as a technology-driven transformation. And a few problems emerged from the patterns I was seeing. Number one, security was an afterthought, like it is a lot of the time. So that's nothing new. But the second thing was, this is not transformation driven by technology. This is a business re-engineering phase. The true value of AI is really reimagining how work gets done with the injection of commoditized intelligence. Those two factors led us to start this business. The book came from a podcast my wife was interviewing me for.
We had planned 30 minutes for the interview, and three hours later she was like, "We've got to write a book. Everybody needs to know this stuff." So anyway, that's probably the long story of how I got here and what we're doing at Massive Scale AI.

Raghu N  03:35

I love that. And the term you used, AI as commoditized intelligence, that's a brilliant way of thinking about it, right? And in other appearances, you've spoken about how it's not about AI replacing humans, right? It's about allowing us to apply ourselves to more value-driven work.

Josh Woodruff  03:57

Exactly, exactly. And I think any new thing brings fear, right? There's a lot of fear. That's another reason why we wrote the book. We wanted to eliminate the fear and show the possibilities and potential here, but we also wanted to warn that if you do it haphazardly, without thinking about guardrails and governance, it could destroy your business. But the other aspect of the fear is, it's going to take my job, right? And I think that's the wrong question. The right question is, what parts of my job do I not like that AI could do for me, so I can do the stuff that really excites me and lights me up? What a great opportunity, and what a missed opportunity if you look at it as, "Oh, I'm not using that, it's going to take over totally."

Raghu N  04:37

I remember when I was at my previous employer, one of the last things I built out for them was a roadmap for firewall automation. Firewall change automation is probably among the most mundane, high-volume tasks for any enterprise. And I presented this and said, "Hey, by doing this, you know, 80% of our changes are just low-risk, routine changes, and you'll be able to automate all of this." So I was presenting this on a call to about 100 people, and then someone from the firewall request team piped up and said, "So what you're saying is you are trying to put me out of a job."

Josh Woodruff  05:21

I knew that was coming.  

Raghu N  05:22

I was like, dude, is this really what you think best suits your talents? I'm trying to give you time! Anyway, I absolutely agree with that perspective on AI. By taking on those, let's say, laborious tasks, it opens up time for us to go and focus on tasks that are truly inspiring and exciting. So, in your intro, you dropped the term Zero Trust a few times. I'd like you to start us there and explain, from your perspective, what Zero Trust means.

Josh Woodruff  06:02

Sure, yeah, great question. And I have to be honest, when I was a CISO, when I found out about Zero Trust and really understood what it meant by learning from the masters such as John Kindervag and Chase Cunningham, and joining the Cloud Security Alliance Zero Trust Working Group, I felt like an idiot. Like, why have we been doing security this way all this time? In fact, I'll put it this way, and then I'll explain Zero Trust. Chase Cunningham, Dr. Zero Trust, asked me a question one time. You know, Zero Trust is an identity-based security model, as opposed to the previous incarnation of security we've done for decades, the perimeter-based security model, as it's sometimes known. He asked me, "When was the first time the perimeter security model failed?" And I said, I don't know, maybe SolarWinds, or Colonial Pipeline. And he said, in Troy, 3,000 years ago. The Trojan horse comes up to the city walls, the perimeter of the city. It looked like the good guys, so they let it in through the perimeter, into the implicit trust zone. Once they're inside the wall, the bad guys jump out, they move laterally across the city, and destroy it. That was 3,000 years ago, and this is how we've done cybersecurity for decades. Why is there a trusted zone and an untrusted zone? Trust is a human emotion. It has no place in digital systems. Why have we done it this way? So I think Zero Trust is about removing trust, a human emotion, from digital systems and being explicit: never trust, always verify. And it doesn't mean never trust your colleagues, which I think has been one of the negative framings, which we might talk about. It just means that for every access request, it doesn't matter where it's coming from; it matters who it's coming from. Who, what, when, where, why, how?
Should they even be asked? Are you allowed to access this? Can you even see this? It's all about who you are and the contextual factors around you. Who are you, where are you, what system are you using? What have you been doing lately? Have you exhibited some weird behaviors? So it's never trust, always verify, for access decisions. And it should be dynamic. I think the dynamic nature of Zero Trust is what makes it so perfectly suited to agentic AI. But I'm getting ahead of myself; I would define Zero Trust as that.
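To make the contextual, never-trust-always-verify decision Josh describes concrete, here is a minimal sketch of an identity-plus-context policy check. All identities, resources, and rules below are illustrative inventions, not anything from the episode:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """The who/what/when/where/how context gathered for every request."""
    identity: str
    resource: str
    device_managed: bool
    geo: str
    recent_anomalies: int

# Explicit allow rules; everything not listed is denied by default.
ALLOW = {("analyst-jane", "sales-reports"), ("payroll-svc", "payroll-db")}

def decide(req: AccessRequest) -> bool:
    """Never trust, always verify: identity AND context, on every call."""
    if (req.identity, req.resource) not in ALLOW:
        return False                      # default deny
    if not req.device_managed:
        return False                      # contextual factor: device posture
    if req.geo not in {"US", "EU"}:
        return False                      # contextual factor: location
    if req.recent_anomalies > 0:
        return False                      # recent behavior says deny, re-verify
    return True

print(decide(AccessRequest("analyst-jane", "sales-reports", True, "US", 0)))  # True
print(decide(AccessRequest("analyst-jane", "payroll-db", True, "US", 0)))     # False
```

The point of the sketch is that the allow-list is short and explicit, and any contextual factor can flip an individual request to deny, which is the dynamic quality Josh calls out.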

Raghu N  08:31

I love that, love the description and definition of Zero Trust, and also Chase's story about the breach of Troy as the first breach of the perimeter. It doesn't matter how many times I've heard it, it never gets old and never gets boring, because it's so true. So now that we've established what we mean by Zero Trust, I'd love you to explain to our listeners and to me, let's put it in the context of securing AI. Before we get to agentic, let's talk about AI in general. What is the perceived challenge, or new challenge, when it comes to securing AI?

Josh Woodruff  09:14

I think the biggest challenge is that it's stochastic. It's not deterministic computing. It's not data in, run a very well-known program, and expect the same data out every time. You don't unit test with AI, you do evals with AI. Is the output good enough? Is it within good-enough ranges? Is it predictable enough? It's data science, it's not software engineering. And because it's unpredictable, because it's stochastic in nature, it's hard to know what to expect. And if it's hard to know what to expect, it's hard to secure. For decades, our security paradigms, even with Zero Trust, have been built around deterministic outcomes. They've been built around: I know if A, then B, then C, then D, and if it's not D, then I know something's wrong, and I should tell somebody about it, raise an alarm, take some action, remediate some incident. It's a little bit different with AI, because it's unpredictable. I don't want to get too far ahead of your questions, Raghu, but with agentic AI more specifically, that unpredictability results in unpredictable actions being taken. Because now it's not just unpredictable in the information it's giving you, it's unpredictable in the decisions it's making, and it is making decisions. So that autonomy introduced through agentic AI is another really big risk for security, right? Autonomy creates risk. So how do you get that risk into an acceptable threshold? How do you get it to match your risk tolerance? That's really what the book is about, and what the framework we've come up with is about.
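Josh's distinction between unit tests and evals can be sketched like this: instead of asserting one exact answer, an eval scores stochastic output against a good-enough threshold. The model call and scoring function here are stand-ins invented for illustration:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call; real outputs vary from run to run.
    return "Paris is the capital of France."

def eval_contains_facts(output: str, required: list[str]) -> float:
    """Score = fraction of required facts present in the output."""
    hits = sum(1 for fact in required if fact.lower() in output.lower())
    return hits / len(required)

# A unit test would demand exact equality; an eval accepts a range.
score = eval_contains_facts(fake_model("What is the capital of France?"),
                            ["paris", "france"])
assert score >= 0.9, f"eval below threshold: {score}"
print(f"eval score: {score:.2f}")
```

The pass/fail criterion is a threshold on a score, which is what makes this data-science-style validation rather than deterministic software testing.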

Raghu N  11:03

So let's talk a bit about that expression, how AI and its output is stochastic in nature, versus sort of if A, then B, then C, right, and predictable in nature. From a security perspective, what are the novel challenges that a stochastic system introduces when you're trying to secure it?

Josh Woodruff  11:31

Yeah, I think the unpredictability is the biggest, but carrying it from there, what does that mean? So AI is nothing without data, right? It's all about the data. The whole reason LLMs are so good at what they do is that they've seen a lot of data, and everybody in security circles, at least, knows it's all about securing the data. It all comes down to protecting the data. The data is the heart and soul of what we're trying to protect. So AI will consume that data and use it in ways that are new, and it will produce results that are different, using the logic and reasoning and all the things it's been trained on. You ask it a question, in the case of a regular LLM chatbot such as ChatGPT or Claude, and you get an answer back. Well, where did that answer come from? Was it just the general data it was trained on, or did you add more data to give it some ground truth? You hear of retrieval-augmented generation, or RAG. In the early days, like 12 months ago, back in the old times of AI, you would want the answers grounded in some kind of reality, so you would extend your LLMs with a corpus of data, maybe your company data. Now, what was the sensitivity of that data? What's its confidentiality? And the person asking the question, should they get the answers that come from that corpus of data the LLM is drawing on? And how do you control that? How do I know person X, using some chatbot tied to data set Y, should get access to that data? Did we train it on salary information? Do you want some random person in the company getting salary information out of the data? Probably not.
So I think with the data you use to augment the AI, which is really where organizations get the value out of AI, the data they couple with it, you need to be careful about how that data shows up. It's harder to track access to data, and it's harder to track data sensitivity. Who's using it? Should they see that outcome or not? I think that's one of the trickier aspects of AI in general, even before you start talking about agentic AI.
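One sketch of the data-governance problem Josh raises: filter a RAG corpus by document sensitivity before retrieval, so an answer can never be assembled from data the asker isn't entitled to see. The labels, clearance levels, and documents are all invented for illustration:

```python
# Clearance levels, lowest to highest; a user sees their level and below.
CLEARANCE = {"public": 0, "internal": 1, "restricted": 2}

DOCS = [
    {"text": "Q3 product roadmap", "sensitivity": "internal"},
    {"text": "Employee salary bands", "sensitivity": "restricted"},
    {"text": "Press release draft", "sensitivity": "public"},
]

def retrieve(query: str, user_clearance: str) -> list[str]:
    """Return only documents at or below the user's clearance level,
    so restricted data never reaches the prompt for this user."""
    max_level = CLEARANCE[user_clearance]
    return [d["text"] for d in DOCS
            if CLEARANCE[d["sensitivity"]] <= max_level]

print(retrieve("salaries", user_clearance="internal"))
# The salary document is filtered out before the LLM ever sees it.
```

Filtering at retrieval time, rather than hoping the model withholds the answer, keeps the access decision deterministic even though the model itself is not.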

Raghu N  13:59

So then, when it comes to security, the challenge here is ultimately securing the data being leveraged by the AI model. How does the application of Zero Trust become so important when securing an LLM, for example?

Josh Woodruff  14:21

Yeah, and I think this is where, you know, should this person have access to this data? Well, that's an identity question. Even before AI, it was based on this identity, which is inclusive of a role and the other contextual factors we mentioned earlier: where are you in the world, what's your behavior like, are you using a managed device or an unmanaged device, what's the sensitivity of the data you're accessing, and are you allowed to see it? Zero Trust helps with all of that, because it's taking all of those elements into an access decision. You carry that into AI, and it's a very similar approach. It's an identity-driven access decision, and Zero Trust says default deny, explicit allow. So Zero Trust is a small handful of allow rules, and your exposure is in those allow rules. At least it's inverted: default deny, but here are the few things this identity can do. I think that's a big win when you're using AI, because if I've got a few different chatbots throughout the organization that people are using, I know what data I've used to fine-tune some of the models, and I know a certain class of people who are allowed to see the outcome of that fine-tuning. So you can restrict access to certain LLMs that you've attached certain data to. Again, all of this is an identity-based security approach, so it always comes back to identity. Identity being the heart and soul of Zero Trust, never trust, always verify, default deny, explicit allow: all these things come together to work exceptionally well when you introduce AI. It's kind of a natural extension, but you've got some unpredictability added to the whole mix.

Raghu N  16:22

Yeah, and I agree that's a fantastic grounding in what is essentially the fundamental building block of AI. What you're ultimately securing is data, and associating almost like an identity with each of those tokens, then putting your security governance around that, allows you to build an effective security model, assuming you do this. But I think, tying back to something you said right at the beginning, no one builds it in from the outset, and that's the challenge. It's always the afterthought.

Josh Woodruff  16:58

And that's, I think, the biggest risk, and that's really what drove us to start this organization. It was like, okay, yes, AI is amazing, there's a ton of value to be had, and you should get on board with it. Don't be afraid of it, but make sure you do it right. This was another realization: at a conference I was at last year, they brought up a dashboard, a typical dashboard a security operations center might look at, and you could see all these logged-in users doing all these things. And the presenter said, none of these are humans. There were all these logged-in actions being taken, and it was almost like the air left the room. Some of the CISOs were like, what are you talking about? So AI is no longer just giving information. We're handing AI decision-making authority, and it's in there taking actions. If you're able to assign an identity to these agents, then you can allow them to do certain things or disallow them from doing certain things. It gives you the guardrails you need to really control what they can and cannot do. And there's something a lot of folks don't understand. I like to say AI is an opportunity for CISOs to become champions of their organization: get in the driver's seat. Don't say no; say how, and do it with a security-first approach. Because the more guardrails and constraints you put around the AI, the better it performs. The smaller, more myopic the job you give it, and the tighter the guardrails, the better. It's almost like a job description. In fact, we talk a lot about treating AI agents as digital employees. Give it a job description and define what success means. What are the OKRs for this digital employee? The more strict, constrained, and guardrailed you can make an agentic AI system, the better it performs. There's less ambiguity.
It knows exactly what it needs to do. It knows exactly what it can't do. If it has to guess, and it's not sure, and it can pick from 1,000 different tools, it's going to run off the rails and do all kinds of crazy things. So this is how you constrain the unpredictability: guardrails and identity-driven constraints, not just around data, but around tools, objectives, and clarity on what it can do.
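The "job description" idea can be sketched as an explicit tool allow-list with default deny. The agent name, objective, and tools below are hypothetical:

```python
# A digital employee's job description: a narrow objective plus an
# explicit tool allow-list. Anything not listed is denied by default.
JOB_DESCRIPTION = {
    "agent": "invoice-triage-agent",
    "objective": "Route incoming invoices to the right approver",
    "allowed_tools": {"read_invoice", "lookup_approver", "send_for_approval"},
}

def invoke_tool(agent_job: dict, tool: str, **kwargs) -> None:
    """Gate every tool call through the job description."""
    if tool not in agent_job["allowed_tools"]:
        raise PermissionError(f"{agent_job['agent']} may not call {tool}")
    print(f"{agent_job['agent']} -> {tool}({kwargs})")

invoke_tool(JOB_DESCRIPTION, "read_invoice", invoice_id="INV-42")  # allowed
try:
    invoke_tool(JOB_DESCRIPTION, "place_order", amount=1_000_000)  # not in the job description
except PermissionError as e:
    print("blocked:", e)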

Raghu N  19:19

Yeah, yes, it's almost like, in the same way you've got employee handbooks with codes of conduct for your employees, now you've got to have your agentic code of conduct, which is essentially what the guardrails are, as you say, right? All AI agents must operate within these guardrails. And if they don't, we can detect that, and we can kill it, effectively.

Josh Woodruff  19:48

Absolutely. In fact, we've introduced a concept we call the agent constitution at a few of our clients. If an agent runs into some unpredictability, or it's stuck in a loop, or it can't quite achieve the objective it was given and it's tried a few times, it's like, okay, fall back to the agent constitution, go back to your code of conduct, if you will. There are some fundamentals, some primitives, about what you can and can't do, and that's a further constraint. It's a fallback outside of the mission to make sure it doesn't totally run off the rails, and ultimately it flags a human. I think human-in-the-loop is absolutely required. Even with fully autonomous AI, there are all new jobs being created, the AI orchestrators; you're going to be managing fleets of these things. So there's always a human in the loop. There will always be exceptions, and the AI's job is to know when the agent constitution says you need to get a human involved: stop what you're doing, alert a human. But if it doesn't reach that point, to your point, Raghu, you have the monitoring and the alerting. That's identity-based, and it understands that the identity of an AI system is not the same as the identity of a human. So the tolerances accommodate machine speed. They accommodate a higher volume of API calls, data lookups, and system access, because it works a lot faster, and in some cases it's going to work 24/7; a human wouldn't. So with behavioral monitoring, deciding what's normal, the baseline of normal behavior for a human is completely different from that for a non-human system. You have to define what's normal, what's baseline, at machine speed, but then you still have to have those tolerances.
If it goes beyond those norms, notify a human. And if it's really sensitive and really critical, have instant kill switches: immediately terminate the thing before it completely runs off the rails.
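A minimal sketch of the machine-speed baselines and kill switch Josh describes. The thresholds are invented; the point is that an agent's normal API rate sits far above a human's, so the same count means different things for different identity types:

```python
# Per-identity-type behavioral baselines. An agent legitimately runs at
# machine speed 24/7, so its "normal" is orders of magnitude higher.
BASELINES = {
    "human": {"max_api_calls_per_min": 30},
    "agent": {"max_api_calls_per_min": 3000},
}

def monitor(identity_type: str, api_calls_per_min: int, critical: bool) -> str:
    """Compare observed behavior to the baseline for this identity type."""
    limit = BASELINES[identity_type]["max_api_calls_per_min"]
    if api_calls_per_min <= limit:
        return "ok"
    # Beyond baseline: critical systems get an instant kill, others an alert.
    return "KILL" if critical else "alert_human"

print(monitor("agent", 2500, critical=True))    # ok: normal at machine speed
print(monitor("human", 2500, critical=False))   # alert_human: wildly abnormal for a person
print(monitor("agent", 50000, critical=True))   # KILL: terminate before it runs off the rails
```

The same observed rate (2,500 calls per minute) is fine for an agent and an incident for a human, which is why the baseline has to be keyed to the identity, not the action.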

Raghu N  21:48

Employees that have codes of conducts, it's like almost now you've got to have a your agentic code of conduct, which is essentially what the guardrails as you say, right? That all agent to AI agents must operate within these guardrails. And if they don't, we can detect that and we can kill it effectively, right?  

Josh Woodruff  22:14

Absolutely. In fact, we introduce a concept of what we call the agent constitution at a few of our clients, where, if an agent runs into some unpredictability or not, you know, it's stuck in a loop, or it can't quite achieve the objective it was given. And it's tried a few times, it's like, okay, fall back to the agent constitution. Go back to your go back to the code of conduct, if you will, like, you know, there's some fundamentals, there's some primitives that you can and can't do, and that's further constraint. And it's almost like, outside of the mission, it's given a fallback to make sure it doesn't totally run off the rails, and then ultimately it flags. It human. I think human-in-the-loop is absolutely required. Even in the full autonomous AI, there's all new jobs, I think, being created of the AI orchestrators of you know you're, you're going to be managing fleets of these things. So there's always human in the loop. There will always be exceptions. And the AI's job is to know when that agent constitution should say you need to get human involved, like, stop what you're doing, alert a human. But if it doesn't reach that point to your point, value having the monitoring and the alerting, that's identity-based, and it understands the identity of an AI system is not the same as the identity of a human system. So the tolerances accommodate for machine speed. They accommodate a higher volume of API calls, data lookups, and system access, and it works a lot faster. And it's going to do it some. In some cases, it's going to do it 24/7, a human wouldn't. So the behavioral monitoring, deciding what's normal and what the baseline of normal behavior is for a human is completely different than for a non human system. And so you have to define what's normal, what's baseline at machine speed, but then you still have to have those tolerances. 
If it goes behind these norms, notify a human, and if it's really sensitive, and it's really critical, have instant kill switches like immediately terminate this thing before it completely runs off the rials.

Raghu N  24:11

So now we've drifted into talking about securing AI agents. And I think one of the things you have in your book, and also in your practice, is a set of five rules, actually a model, for securing AI agents. Talk us through that.

Josh Woodruff  24:31

Yeah, I don't know if it's the best naming convention in the world, but we call it the agentic trust framework. And this, again, my wife would beat me up if I used buzzwords or any language that isn't well understood outside of tech and security. It's a deceptively simple set of five questions that constitute what we call the core elements of the agentic trust framework. The first is, who are you? The second is, what are you doing? The third is, what are you eating and what are you serving? The fourth is, where can you go? And the fifth is, what if you go rogue? Very plain language, right? Like, wow, those are five questions I can understand. They're deceptively simple, because what they describe is a very comprehensive Zero Trust strategy. So, who are you? That's the identity piece, and who you are should be unforgeable, cryptographically verifiable credentials. But "who are you" is much simpler to say; let the technicians handle the technical aspects around that identity. What are you doing? That goes back to the baseline: what's normal, what's not? Are you behaving normally, or outside of what I think is normal, at machine speed? This is where AI watching AI is a really good use case. What are you doing? is behavioral monitoring. What are you eating and what are you serving? That's what data are you ingesting, what data are you outputting, what's your inference? That's the data governance piece. Where can you go? That's right in line with Illumio's whole value-add to the market: segmentation, microsegmentation. It's also the rules of the road, the rooms they operate in. If you're a marketing chatbot, you shouldn't be able to stumble into the room where the payroll system is, right?
And then what if you go rogue? That goes back to the kill switches, and recovery in seconds, ideally. So it's five simple questions, but by answering them, it's a way for leadership to understand a fairly complicated, or at least comprehensive, security approach to implementing AI. That's what we call the agentic trust framework.
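The five questions could plausibly be captured as one policy record per agent. Every field name and value below is an assumption made for illustration, not a structure from the book:

```python
# One illustrative policy record answering the five questions of the
# agentic trust framework for a hypothetical marketing chatbot.
AGENT_POLICY = {
    "who_are_you":        {"id": "marketing-chatbot", "credential": "spiffe://corp/mktg-bot"},
    "what_are_you_doing": {"baseline": "faq_answers", "monitoring": "behavioral"},
    "eating_and_serving": {"data_in": ["marketing-kb"], "data_out_max": "public"},
    "where_can_you_go":   {"segments": ["marketing-net"], "denied": ["payroll-net"]},
    "if_you_go_rogue":    {"kill_switch": True, "recover_within_seconds": 30},
}

def can_reach(policy: dict, segment: str) -> bool:
    """Microsegmentation check: the marketing bot can't wander into payroll."""
    zone = policy["where_can_you_go"]
    return segment in zone["segments"] and segment not in zone["denied"]

print(can_reach(AGENT_POLICY, "marketing-net"))  # True
print(can_reach(AGENT_POLICY, "payroll-net"))    # False
```

The SPIFFE-style credential string is just one example of the cryptographically verifiable workload identity Josh mentions; any unforgeable credential scheme would fill that slot.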

Raghu N  26:51

And thank you for sharing that. I love the simplicity of it, I love the clarity of it. But more than anything, what I really like is that with this framework, what you're actually showing is that securing AI systems, and I use that term very generally, is, in terms of the things that are important, the questions that you ask, pretty much no different from securing, let's say, a traditional compute system. If you're doing the right things, if you're doing things from first principles, that is exactly how you build security. So I think by doing this, and I'd love your thoughts on it, one of the key myths you're busting is that securing AI workloads is somehow a completely new cybersecurity discipline.

Josh Woodruff  27:47

I love that. Yeah, I completely agree. I think that is the argument I'm making. Maybe it's a subtle argument, but it's a position I would defend readily. I think there's one additional aspect, which accommodates for the lack of human judgment, the common-sense stuff we humans have. Humans know when they're being bad and when they're not, right? Malicious intent is different from, I didn't mean to click the link in the email and open up a phishing attack. There's judgment and human reasoning there. With AI, thinking of them as digital employees is a good frame, because all the same aspects apply. The one difference, I think, is the architectural constraints to make up for the lack of human judgment. Knowing it doesn't know what's good or bad, it just knows a lot of stuff. It's like a really smart child who's been given the credit card and the keys to all the cars. Do you want to do that, or should your architecture and design limit what that thing can actually do? So in place of human judgment, architectural constraints are probably one of the bigger aspects of securing AI. Outside of that, I think the rest of the security practices and principles we've used to secure regular compute systems apply equally.

Raghu N  29:21

Yeah, and I think, just as you identified that uniqueness, the unique challenge securing AI presents that maybe traditional compute doesn't, in many ways we can draw a parallel with, let's say, cloud security, right? A lot of the fundamentals that are relevant in the data center are equally, if not more, important in cloud. It's just that cloud presents a couple of unique challenges: the ephemeral nature, the scale, the dynamic nature, but also the fact that every single workload is one misconfiguration away from being accessible on the internet. So it's not an exact parallel, but it's the same idea, right? It's just the basics, but with a new challenge.

Josh Woodruff  30:13

Right, right, with some unpredictability and a complete lack of reasoning. It's not going to know what's bad or not. An AI agent doesn't need to be malicious to destroy your business; it just needs to be hyper-competent at the wrong goal. But I love your comparison to cloud. What we talk about in the book, and what we work on with a lot of our clients, is that the security model for AI is not reinventing something brand new. To your point, it's the same principles and practices. This is an evolution, a continuum, the next step; it's just that now you've got compute systems with a lot of intelligence that can reason and make decisions on their own in fairly unpredictable ways. But remember, cloud really fortified the concept of IAM, identity and access management, and infrastructure as code came with it, because you had no choice; you had to define your infrastructure as code. These things came with some really good benefits. With configuration management and infrastructure as code, you can harden your configuration, you can have a known-good config, and you can prevent configuration drift through things like immutable infrastructure. These things should never look different; if they look different, they are probably compromised, and you should immediately shut them down. You've got workload identity, which the cloud introduced: all the different workloads in Kubernetes or whatever each have a unique identity, and those identities each have different access rules. This is no different. We're piggybacking on all of these security principles and best practices, but now incorporating a higher level of reasoning intelligence with unpredictable outputs.
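The known-good config and drift-detection idea Josh borrows from cloud can be sketched with a simple configuration fingerprint. The config fields are illustrative, applied here to a hypothetical agent config:

```python
import hashlib
import json

def fingerprint(config: dict) -> str:
    # Canonical JSON (sorted keys) so the hash is stable across key ordering.
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

# Hash of the hardened, known-good configuration at deploy time.
KNOWN_GOOD = fingerprint({"model": "gpt-x", "tools": ["search"], "temp": 0.2})

def check_drift(running_config: dict) -> str:
    """Immutable-infrastructure check: any difference from the known-good
    config is treated as drift, and possibly compromise."""
    if fingerprint(running_config) != KNOWN_GOOD:
        return "DRIFT: shut it down and redeploy from the known-good image"
    return "ok"

print(check_drift({"model": "gpt-x", "tools": ["search"], "temp": 0.2}))
print(check_drift({"model": "gpt-x", "tools": ["search", "shell"], "temp": 0.2}))
```

Notice the second check flags drift because a `shell` tool appeared that the known-good config never had, which is exactly the "these things should never look different" rule.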

Raghu N  32:03

You've said this a few times, that sort of high-level reasoning with unpredictable outputs. And there's, of course, the popular-culture perspective on AI, and AI agents in particular, that these are the ones that are going to go rogue. In your experience working with your clients and customers, what is the craziest thing you've seen AI agents get up to that is clearly outside the expected and necessary behavior?

Josh Woodruff  32:36

That's a great question. One of the best stories I like to tell: we talk about a progression framework for AI autonomy, and I'll say on the record that we came out with this first, with these four levels. You know, it starts as an intern, then it's a junior, then it's a full-fledged employee, then it's a senior and a principal. And then AWS came out with the security framework they just released, and it's like, hey, this looks very similar. But, you know, no secret there. The autonomy progression allows you to graduate. Agentic AI is solid data science. It's all about empirical trial and error, and you keep experimenting and adjusting knobs and dials until you get enough consistent output that it's good enough. So one of our customers, a supply logistics manufacturing company, had done this for ordering supplies, and they were very careful. They started at the intern level, which is recommendation only, then it went to a junior employee, which is a recommendation plus a suggested action. And after a period of time it was looking good, they had enough success metrics to say, okay, we're going to graduate it to taking one action. We're going to let this thing place orders under a certain amount, a very low threshold of order amounts. And they thought they had described the goal quite well. But after graduating to this level and placing orders, it ordered 40 years' worth of floor cleaner because it identified a 15% discount. It spent $1.4 million on 40 years of floor cleaner. And they were like, wait, we never told it to do that. But you never told it not to do that. You told it to optimize purchasing and take advantage of discounts. It didn't know that 40 years of floor cleaner was not a good thing. But that was quite unexpected.

Raghu N  34:25

That's funny. I mean, I thought where you were going to go with that was that they had tuned it, or given it the guardrail that it couldn't place more than, let's say, a unit of 10, but they hadn't constrained the number of these agents they deployed in their fleet, and hence all of its siblings.

Josh Woodruff  34:46

It did, it did. It had a very low order threshold, and it placed a number of those orders. It was like, this is a great discount! I'm going to say the AI was very happy with itself, I can imagine, not that it has human emotion. But yeah, there are all these different things you don't necessarily think about. I think we're all still learning. This is still so new, and even agentic AI is still so new, it's still a big experiment. So you do have to be very careful with the keys and the credit cards that you hand to these, you know, emotionally unintelligent systems, and make sure you watch them closely. You've got to learn, and then as soon as you learn and you've got it fixed, a new model comes out, right? A new innovation happens. So your architecture and your design, your security, your approach, they need to be adjustable and adaptable to accommodate continued innovation. And I think that's another one of the hard parts about this.
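The lesson of the floor-cleaner story is that a per-order cap alone doesn't bound aggregate behavior. Here is a minimal sketch of what a fuller guardrail might look like; the class name, thresholds, and the supply-horizon rule are illustrative assumptions, not the customer's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class PurchaseGuardrail:
    """Hypothetical guardrails for a purchasing agent. The per-order cap is
    the rule the team in the story did set; the aggregate spend cap and the
    supply-horizon cap are the kinds of rules that would have stopped
    40 years of floor cleaner."""
    max_per_order: float = 500.0       # largest single order allowed
    max_total_spend: float = 10_000.0  # running cap across all orders
    max_supply_days: int = 90          # never buy more than ~a quarter of stock
    spent: float = 0.0

    def allow(self, order_amount: float, days_of_supply: int) -> bool:
        """Approve an order only if every constraint holds; track total spend."""
        if order_amount > self.max_per_order:
            return False
        if self.spent + order_amount > self.max_total_spend:
            return False
        if days_of_supply > self.max_supply_days:
            return False
        self.spent += order_amount
        return True

g = PurchaseGuardrail()
print(g.allow(450.0, 30))      # True: small order, sane horizon
print(g.allow(450.0, 14_600))  # False: decades of supply, even under the per-order cap
```

The point of the sketch is the layering: each individual order in the story was "legal," so the missing rule had to live outside the single-order check.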

Raghu N  35:40

Yeah, this whole thing, it does very much feel like a child, right? The child has a set of rules, but it doesn't know that there are other rules it's not aware of, and you don't know that you haven't set those rules. So it breaks a rule that isn't there, and then you set the rule, and then it's the next one, and so on, constantly. So in your work with your customers and clients, when you go and speak to security leaders or even business leaders, do you ask them: what are you afraid of? What are you most concerned about? Is that something that you pose to them as part of the work? And what is the typical response when it comes to agentic AI security, or AI security in general?

Josh Woodruff  36:28

Well, the fact that you mentioned AI and security in the same sentence is usually something folks aren't even thinking about to begin with. Even the tagline of my business, security-first AI, sounds defensive, and it sounds slow. People don't understand that security actually enables speed. Security is a roll cage, not a brake pedal. If you've got a roll cage, you can take corners at 200 miles an hour and you're safe. If you've done security-first AI, you can hand it real power, because it's well constrained and you can trust it. You can sleep at night. But a lot of folks, their fear is falling behind. Their fear is: I don't understand this stuff, we're not doing enough of this stuff, what's our AI strategy? From the board to the C-suite to, typically, the tech team: what's our AI strategy? How are we using AI? The most common fear is getting left behind, not keeping up with the Joneses, which results in chasing the shiny object, as we've all seen. So what we try to coach them on is to ask different questions. Where does your business have pain? Where are we slow? What things do your people not like doing? What toil and overhead do your employees have that, if you could take it away or help them not have to worry about that part of their job, would leave them more fulfilled and more productive, because they're doing things that are more exciting for them? So we reframe: don't worry about trying to keep up. Worry about what business problems you have, and evaluate which of those business problems are a good fit for the use of AI. AI isn't a good fit for every business problem, but it certainly can solve and accelerate a lot of things, more than you know.
Time savings is usually the first ROI metric folks go for, along with efficiency improvements. Those are good, but we find the bigger value is improved decision making, opening up new revenue paths, reimagining how your business works, what you bring to market, how you interact with your customers. In the age of AI, a lot of this changes, and in some cases, you've got to be honest with yourself, it may make the product or service you're offering today unnecessary. So how are you going to adapt to that? And I would say some leaders get that. They're like, look, I'm not worried about keeping up, I'm worried about becoming irrelevant. My product or my service might become irrelevant in the face of AI. And then it's like, great, so this is back to a reimagining exercise. Brian Evergreen, a thought leader out in this space, talks about future solving: redefining a vision for your business, knowing that you have the capability of AI, then asking what must be true in order to accomplish this new vision, and iterating on a path to it, sometimes a complete change to a new way of bringing your goods and services to market, servicing your customers, or even redefining the products that you may be building. I think that's where we try to steer the conversation, and that opens up a whole new class of fear. Sometimes it's like, oh my God, I didn't think of that, and then they get worried that they're going to be irrelevant. But that's when we really try to say, okay, look, AI can accelerate the mundane parts. If you're familiar with value stream mapping, where you have value-added time and waste time, AI can take on that waste time. Let's focus on the biggest areas in your business processes. Where is the waste happening? Where are the wait states? Let's plug in intelligence to do that. That's the easy, low-hanging fruit. But the bigger value is what comes next.
Let's re-engineer how you're getting work done, because that whole sequence of business process may not make sense anymore when you can inject commoditized intelligence, or have agentic AI take on some of this work. You may rethink how you're going about business processes, and oftentimes you start with that ground-up line of thinking, and then the leaders start thinking about how they might reimagine their products and services and how they serve their customers. So it's a journey that you walk them through. But I would say the biggest fear is not keeping up with the Joneses, and then it's becoming irrelevant.

Raghu N  40:46

So the fear of securing all this is almost an afterthought. It's not even in the conversation at that stage.

Josh Woodruff  40:56

Right, right. And hence the change of message to security-first AI. It was like, I don't care about security, I care about AI, and about competing in the market. So it's like, okay, you know what we do at MassiveScale.AI? We accelerate the adoption of AI without destroying your business. Does that sound better? That's the sad truth: security has a bad rap, right? Security, the Department of No, the team that slows you down. But I think it is shifting. Security is becoming an enabler, shifting from "no, we can't do that" to "how can we do that?" And I think that's partly security's own fault, for securing for security's sake. This is where Zero Trust, being a business-aligned security strategy, helps as well. What is the business trying to do? Let's secure all the things required for you to get that done. Let's align security to the business. Aligning security to accelerate the adoption of AI is no different. And once you do start playing with AI, once leaders, especially those new to AI, see the capabilities and what it can do, maybe security starts coming into play, especially when you start talking about incorporating company confidential data. Do you know how that's going to show up in others' hands, because it's unpredictable? Then the security concerns start coming up, but that's usually not the first thing that they're worried about.

Raghu N  42:16

I love the way you framed that: Zero Trust, done right, is very much business-aligned security, and the adoption of Zero Trust can very much be an enabler for the adoption of AI. I think that's such a brilliant way to phrase it. So before we wrap, I'd love to get your perspective on this. Securing AI is a massive focus, and we're seeing so many startups focused on the problem of securing some aspect of AI. We're also obviously seeing the larger cybersecurity players hoovering up some of these because they want to play in that space. What's your perspective on the AI security startup space? Do you think there are technologies out there solving real problems, or is this just eye candy?

Josh Woodruff  43:10

No, I think there are quite a few. AI-assisted coding allows pretty rapid prototyping and even rapid product building or enhancement building, so there's this massive acceleration of software development: folks are able to realize ideas quicker, prototype, and trial-and-error. That's one aspect of it. To your question, though, I think there's a ton of new value being brought to market, some of it because of that acceleration factor, and some of it from thinking through how to use AI to offer better security products. There are really two sides of this: there's securing the use of AI, and there's AI for security. So let's talk about using AI for security. I think there are so many great use cases. Think about the life of a SOC analyst sifting through data, or even a developer. In the old days, and sadly I think still today, a security team may drop a 30-page set of vulnerabilities with high CVE scores on the development team and go, you've got all these vulnerabilities. And the developer goes, okay, so what? I've got a sprint, I have commitments on feature development, and I've only got maybe 20 percent, if I'm lucky, for non-feature requirements. Which of these hundreds of vulnerabilities matter to me and which don't? So there's the ability for AI to find patterns in massive amounts of data, and AI-assisted engineering works so well partly because code is very well structured, which makes it a perfect use case for AI. So AI is helping identify which vulnerabilities are actually exploitable in the given product, right? You've got vulnerabilities, and AI can rapidly evaluate which of those are actually in use. Some companies have been doing this since even before AI.
I mean, there are a number of them, but I think AI accelerates the ability to do that at lower cost with higher efficiency: identifying the vulnerabilities that truly matter, and making it very acute for the development team, which is typically already overloaded with features and backlogs, to say, here's the one you should be working on. And then there's sifting through large amounts of data for the SOC analyst, finding those patterns and surfacing true anomalies versus false positives. Another thing, talking about Zero Trust specifically: for companies who've been on a Zero Trust journey, one of the side effects is enriched logging, log file enrichment they call it, because it's not just who did what, when, it's all the context. It's identity, but it's context-based access control. It's not just who you are or what role you're in. What laptop or what system are you on? Where are you? What have you been doing lately? Zero Trust takes all these attributes and contextual factors into account for access, and all of that lands in logs. Well, guess what, that makes your SIEM, your consolidated logging system, a lot fatter. You thought it was hard to deal with all this data before? Gosh, some companies charge dollars per byte, and they're loving this: shove in more data, we're going to get paid more. I won't mention any names. If you could use AI to preprocess that, or consolidate or summarize, or even just work against that larger corpus of data, significant advantages arise. Those are just a couple of the use cases where we see evolution, and there are so many more new ways of applying security through the use of AI coming out. I love seeing the startups, and I'm really enjoying seeing all this innovation happening at rapid velocity.
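The context-based access control Josh describes, and the enriched log it produces, can be sketched in a few lines. This is a hypothetical illustration: the attribute names, roles, and checks are assumptions chosen to mirror the factors he lists (role, device, location, recent behavior), not any vendor's policy engine.

```python
def allow_access(identity: dict, context: dict) -> tuple[bool, dict]:
    """Hypothetical context-based access decision: identity alone is not enough,
    so device posture, location, and recent behavior all feed the verdict.
    Returns (decision, enriched_log_entry) - the 'fatter' log Josh mentions."""
    checks = {
        "role_ok": identity.get("role") in {"analyst", "admin"},
        "device_managed": context.get("device") == "managed-laptop",
        "geo_ok": context.get("country") in {"US", "UK"},
        "no_recent_anomaly": not context.get("recent_anomaly", False),
    }
    decision = all(checks.values())
    # Every contextual factor lands in the log entry, not just who/when.
    log_entry = {"who": identity.get("user"), "decision": decision, **checks, **context}
    return decision, log_entry

ok, log = allow_access(
    {"user": "jdoe", "role": "analyst"},
    {"device": "managed-laptop", "country": "US", "recent_anomaly": False},
)
print(ok)  # True, and the log now carries all four contextual checks
```

Every one of those extra log fields is exactly the kind of enrichment that makes the SIEM fatter, and exactly the kind of corpus AI can preprocess and summarize.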

Raghu N  46:55

Totally, totally. So Josh, we're almost at time. Before we wrap, what is the one most important bit of advice you'd leave our listeners with when it comes to adopting AI, and agentic AI in particular, and securing it effectively while also helping them accelerate adoption and transformation?

Josh Woodruff  47:20

I would say, first and foremost, don't be afraid, and don't do it just because you're trying to keep up. Embrace AI. It's here to stay, and it's adding true business value, so acknowledge that. I would also say your best ideas for the use of AI are going to come from the boots on the ground, so give your people access to tools and let them tell you what the best use cases are, because I can guarantee you, and I see this time and time again, individuals come up with better use cases for AI than leadership can ever dream up, because leadership isn't there, they don't feel the pain. But you want to do this in a way that's methodical and comes with a security strategy. You want to be clear with your policies. So embrace AI, be clear about what people can and can't do, and be open for that to change and evolve. And don't wait, start experimenting. A lot of leaders say, we banned AI, we don't have it. In every shop that tells me that, we find 40 different uses of it. Usually there are AI agents they don't even know about. So don't be blind to the fact that people are going to use this no matter what. Embrace it, be clear about how to use it, and allow your teams to experiment with it. But also understand that this is changing so rapidly. There are some metrics suggesting that by 2030, human users will be the minority on enterprise networks. And we're converging: there shouldn't be a security strategy and an AI strategy, folks need to have a secure AI strategy. We are building the infrastructure for the future. At MassiveScale.AI we like to start with a 30-day secure agent challenge. We're going to start with one thing: identify the biggest pain point you have that's suited to be solved by AI.
We're going to build a secure AI agent within 30 days. Give your team free time to do this. Once you build all the scaffolding and lay down the railroad tracks for the behavioral monitoring, the identity and tracking, and the kill switches, it's almost like when you did cloud migration: you did one app at a time, at least if you were smart about it, because that laid down the railroad tracks for you to do the next app even faster. Same with AI agents. So don't wait, start now, but keep your eyes open and work with your security team as a partner. And security teams need to understand they need to be a partner, they need to be the enabler, and it's an opportunity to be the hero. So I'd say my message is for two groups. For business leaders: jump in, give it a try, be clear about policy, listen to your team members, and don't be afraid to reimagine how work gets done, because that's where you're going to find the real value. For security teams: drive it. This is your opportunity to get in the driver's seat, because security-first AI is the only way to do AI. It's why 87% are in pilot purgatory: they can't ship to production because nobody trusts it, it's not secure enough. This is security's opportunity to be a champion of the business.
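One piece of the "railroad tracks" Josh lists, the kill switch, can be sketched as a wrapper that every agent action must pass through. This is a hypothetical pattern, not MassiveScale.AI's implementation: the class, the violation threshold, and the halt behavior are illustrative assumptions.

```python
import threading

class KillSwitch:
    """Hypothetical kill switch for an AI agent: every action checks a
    revocable flag first, and repeated guardrail violations trip the
    switch automatically - a crude form of behavioral monitoring."""

    def __init__(self, max_violations: int = 3):
        self._enabled = threading.Event()
        self._enabled.set()  # agent starts in the allowed state
        self._violations = 0
        self._max = max_violations

    def trip(self) -> None:
        """Halt the agent immediately (operator-initiated or automatic)."""
        self._enabled.clear()

    def record_violation(self) -> None:
        """Count a guardrail violation; too many trips the switch."""
        self._violations += 1
        if self._violations >= self._max:
            self.trip()

    def run(self, action, *args):
        """Execute an agent action only while the switch is armed."""
        if not self._enabled.is_set():
            raise RuntimeError("agent halted by kill switch")
        return action(*args)

ks = KillSwitch(max_violations=2)
print(ks.run(lambda x: x * 2, 21))  # 42: switch armed, action allowed
ks.record_violation()
ks.record_violation()               # second violation trips the switch
try:
    ks.run(lambda x: x, 1)
except RuntimeError as e:
    print(e)                        # agent halted by kill switch
```

Once this scaffolding exists for the first agent, the second agent inherits it, which is the cloud-migration parallel: the first app lays the tracks for every app after it.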

Raghu N  50:31

I love that. And thank you so much, Josh, for everything you've shared, because you've covered this absolutely relevant but fast-moving topic in such accessible terms. I really appreciate that, and the way you connected security, Zero Trust, and agentic AI, and stitched it all together. So Josh Woodruff from MassiveScale.AI, thank you so much, and I highly recommend all our listeners go and check out Josh's book, Agentic AI + Zero Trust: A Guide for Business Leaders, available on Amazon. Thank you, Josh!

Josh Woodruff  51:06

Thank you, Raghu. It's been an absolute pleasure. Thank you for having me on your show, anytime.

Raghu N  51:13

Thanks for tuning in to this week's episode of The Segment. For even more information and Zero Trust resources, check out our website at illumio.com. You can also connect with us on LinkedIn and Twitter at Illumio, and if you liked today's conversation, you can find our other episodes wherever you get your podcasts. I'm your host, Raghu Nandakumara, and we'll be back soon.