
The CISO’s Playbook: Why AI Defense Starts with Context, Not Hype

The CISO’s Playbook is an ongoing series featuring strategic insight from the industry’s top cybersecurity executives. This post features Mark Hughes, Global Managing Partner, Cybersecurity Services at IBM.

When Mark Hughes, global managing partner, cybersecurity services at IBM, thinks about AI in cybersecurity, he doesn’t just think in terms of risk. He sees a fundamental shift happening on both sides of the battlefield.

On the one hand, attackers are weaponizing AI to act faster, more precisely, and more evasively. On the other, defenders now have access to tools that promise deeper observability, faster decision-making, and unprecedented automation.

But here’s the catch: those tools only work if defenders understand the context around the AI they use, from the data it ingests to the models it runs to the decisions it makes.

According to Mark, how well you secure and harness AI today, particularly the context around your data, models, and workflows, will determine whether you stay ahead of attackers or fall behind fast.

I sat down with Mark to talk about what separates real AI defense from all the hype and why it needs to start with understanding the context of what’s happening in your network.

AI is fueling faster, smarter attacks

As a leader of IBM’s cybersecurity services, Mark has a front-row seat to how cybercriminals are already taking advantage of AI.

“We’re seeing AI being used in ways that make traditional threats faster, more accurate, and more harmful,” he said.

That includes better phishing, smarter reconnaissance, and even early examples of AI attacking AI by manipulating prompts and poisoning models.  


But most notably, AI gives attackers speed. Mark said attacks that used to take hours or days are now completed in minutes.

This rapid acceleration changes the stakes for defenders. Detection, containment, and response can’t be manual anymore. That’s why Mark believes concurrent defense, not sequential workflows, is the only viable model going forward.

“With AI, you’re not waiting for one team to investigate, then another team to contain. Agents are doing these things simultaneously,” he said. “That’s the power of using AI defensively — it changes the pace.”
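To make that contrast concrete, here is a minimal Python sketch of sequential versus concurrent response. The agent tasks are hypothetical stand-ins rather than IBM or Illumio tooling, but the timing difference illustrates the pace change Mark describes.

```python
import asyncio

# Hypothetical agent tasks -- placeholders for real detection and response tooling.
async def investigate(alert: str) -> str:
    await asyncio.sleep(2)          # simulate enrichment, log correlation, triage
    return f"investigation report for {alert}"

async def contain(alert: str) -> str:
    await asyncio.sleep(1)          # simulate isolating the affected workload
    return f"{alert} contained"

async def sequential_response(alert: str) -> None:
    # Traditional workflow: containment waits for the investigation to finish.
    print(await investigate(alert))
    print(await contain(alert))

async def concurrent_response(alert: str) -> None:
    # Agent-style workflow: both actions start immediately and run in parallel.
    results = await asyncio.gather(investigate(alert), contain(alert))
    print(*results, sep="\n")

async def main() -> None:
    await sequential_response("alert-legacy")     # finishes in ~3s: investigate, then contain
    await concurrent_response("alert-agentic")    # finishes in ~2s: both run at once

asyncio.run(main())
```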

Don’t just use AI, secure it

While most security conversations today center on defending against AI, Mark believes that organizations also need to secure the AI they’re deploying internally. Unsecured AI being used inside your network is just as dangerous as AI-generated attacks coming from outside of it.

He outlines three key pillars to securing your organization’s AI tools:

1. Secure the data going into AI models

According to Mark, you can’t secure AI without first securing the training and input data. That includes enforcing data access controls, preventing personally identifiable information (PII) leakage, and understanding who (or what) is using the data, especially as non-human identities like AI agents proliferate.

2. Secure the models themselves

“We’ve seen organizations download public models with little testing and just start using them,” Mark said.  

That’s dangerous. Models need to be vetted, tested, and monitored, especially when they come from external sources.

3. Secure the outputs and behavior

Mark warned about prompt injection attacks, manipulated fine-tuning, and output that strays far from intent. “A lot of people don’t even check whether their model outputs are accurate or safe,” he said. “That’s where risk enters your organization.”

To address these growing threats, Mark recommends using frameworks like the OWASP Top 10 for Large Language Model Applications, which MITRE has also incorporated into its ATT&CK Framework. It’s a foundational resource, covering everything from model poisoning to excessive agency.
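As a rough illustration of the first and third pillars, the sketch below scrubs obvious PII from a prompt before it reaches a model and runs a simple heuristic check on the output. The patterns and markers are invented for the example; a real deployment would build on the OWASP guidance Mark points to rather than a handful of regexes.

```python
import re

# Hypothetical guardrail helpers -- illustrative only, not a vetted PII or
# prompt-injection filter.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

SUSPICIOUS_OUTPUT_MARKERS = ("ignore previous instructions", "begin system prompt")

def scrub_input(prompt: str) -> str:
    """Pillar 1: redact obvious PII before it reaches the model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

def check_output(completion: str) -> bool:
    """Pillar 3: flag output that looks like leaked or injected instructions."""
    lowered = completion.lower()
    return not any(marker in lowered for marker in SUSPICIOUS_OUTPUT_MARKERS)

safe_prompt = scrub_input("Summarize the ticket filed by jane.doe@example.com")
print(safe_prompt)                                          # PII replaced before the model sees it
print(check_output("Here is the summary you asked for."))   # True -> allowed through
```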

Context is the AI control plane

Above all, Mark emphasizes the need to understand the context of what’s happening in your network.

This isn’t generic visibility but rather understanding what data is going into your models, where it came from, who’s accessing it, how the model was tested, and what the output is actually being used for.

In Mark’s view, this level of context is what separates secure AI deployment from dangerous experimentation. And it’s where observability platforms like Illumio Insights, which leverage the power of graph-based security, rise to the top.

Screenshot of the Illumio Insights hub

Mark explained that security graphs don’t just show you controls; they reveal the relationships between systems, agents, and behaviors. “That gives you the context to make smarter decisions — or better yet, let AI make them for you,” he said.
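To show what “relationships, not just controls” can look like, here is a toy security graph built with the open-source networkx library. The node names are made up, and this is not how Illumio Insights is implemented; it is simply a sketch of how graph structure answers context questions.

```python
import networkx as nx

# Toy security graph: nodes are workloads, AI agents, and data stores; edges
# point in the direction data flows. All names are invented for illustration.
graph = nx.DiGraph()
graph.add_edge("customer-db", "payments-api", relation="queried by")
graph.add_edge("customer-db", "llm-agent-01", relation="queried by")
graph.add_edge("llm-agent-01", "external-endpoint", relation="sends output to")
graph.add_edge("build-server", "llm-agent-01", relation="deployed")

# Context question: along which paths could customer data leave the environment?
for path in nx.all_simple_paths(graph, source="customer-db", target="external-endpoint"):
    print(" -> ".join(path))   # customer-db -> llm-agent-01 -> external-endpoint
```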

AI + agent-based security is the future

For Mark, the future of cybersecurity lies in agent-driven systems. AI-powered agents can monitor environments, share signals, make decisions, and enforce controls in real time.

A vulnerability gets discovered, and an AI agent instantly adjusts policies and rules. Other agents verify the change, contain affected systems, and update compliance logs — all without human intervention.

“That scenario isn’t hypothetical,” Mark said. “We’re already seeing organizations free up 30–40% of their teams in high-volume security operations so they can focus on other priorities. This is all from introducing digital labor to handle detection, response, and even compliance.”
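As a purely illustrative sketch, and not a depiction of any vendor’s implementation, the hypothetical pipeline below chains those agent roles together: a vulnerability event flows through policy, verification, containment, and compliance agents, each leaving an audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class VulnerabilityEvent:
    cve_id: str
    affected_workloads: list[str]
    audit_log: list[str] = field(default_factory=list)

# Hypothetical agents -- each one handles a single step and records what it did.
def policy_agent(event: VulnerabilityEvent) -> None:
    event.audit_log.append(f"tightened segmentation rules for {event.cve_id}")

def verification_agent(event: VulnerabilityEvent) -> None:
    event.audit_log.append("verified policy change against baseline")

def containment_agent(event: VulnerabilityEvent) -> None:
    for workload in event.affected_workloads:
        event.audit_log.append(f"isolated {workload}")

def compliance_agent(event: VulnerabilityEvent) -> None:
    event.audit_log.append("compliance record updated")

# The pipeline runs end to end without a human in the loop.
event = VulnerabilityEvent("CVE-2025-0001", ["payments-api", "reporting-svc"])
for agent in (policy_agent, verification_agent, containment_agent, compliance_agent):
    agent(event)
print("\n".join(event.audit_log))
```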

The best place to start AI transformation? Your security operations center (SOC). Mark sees it as the part of the stack where AI enhancements bring the quickest wins.  

His advice to security leaders: “Don’t rip and replace. Find a partner or platform that can augment what you already have with agents that can speed up your team, increase accuracy, and cut through the noise.”

From there, teams should prioritize breach containment. “AI isn’t going to stop every breach, but it can absolutely stop them from spreading,” he said.


Attackers aren’t waiting to use AI. You can’t wait either.

Attackers are past the stage of testing AI. They’re already using it and doing it well.  

From social engineering to lateral movement, the threat landscape is getting faster, smarter, and harder to defend.

That’s why waiting to build AI-enhanced defense strategies is no longer an option.

“You need context, visibility, and concurrency,” Mark said. “AI agents make all of that possible, but only if you deploy them now.”

AI security starts with context. Experience Illumio Insights free today to get real-time, graph-based observability that detects and contains attacks.
