Go Back to Security Basics to Prepare for AI Risks
Not a day goes by that we don’t hear about AI in the news. From rapidly developing technologies to the ways they will shape our future, AI is always on our minds, especially in the world of tech.
And some of the most pressing questions about the rapid advancement of tools like ChatGPT come from the world of cybersecurity. How will bad actors use AI to augment their malicious acts? And how will ransomware evolve to better evade our defenses?
To examine the tool and what it means for the future of the cybersecurity industry, Paul Dant, Senior Systems Engineer at Illumio, and Michael Adjei, Director of Systems Engineering, EMEA at Illumio, sat down to discuss how AI works, where its vulnerabilities lie, and how security leaders can defend against its impact.
Their bottom-line advice? Go back to basics.
Read on and watch the full discussion — with a short clip below — to learn why.
The structure of AI and its many vulnerabilities
It’s been less than two years since ChatGPT first launched, and in that short time the industry has already seen that new AI technology will have a massive impact on cybersecurity. That technology is already in the hands of those who want to use it for both good and evil.
It’s important to understand how AI is structured in order to identify the points where attackers can interfere.
“In simplified terms, AI works by having a data set with an input layer, hidden layer, and output layer,” Adjei explained. “Data gets pushed into the input layer and moves to the hidden layer where the AI ‘magic’ happens. Then, it ends up at the output layer where the consumer can interface with it.”
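To make those three layers concrete, here is a minimal sketch of a forward pass in Python. The layer sizes, random weights, and activation functions are illustrative assumptions, not a description of any particular model:

```python
import numpy as np

# Illustrative sketch of the input -> hidden -> output structure Adjei
# describes. Sizes, weights, and activations are made-up assumptions.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Input layer: raw data enters here (e.g., a 4-feature record).
x = rng.normal(size=4)

# Hidden layer: learned weights transform the input
# (this is where the AI "magic" happens).
W_hidden = rng.normal(size=(4, 8))
hidden = relu(x @ W_hidden)

# Output layer: the result the consumer interfaces with
# (e.g., 3 class scores that sum to 1).
W_output = rng.normal(size=(8, 3))
output = softmax(hidden @ W_output)

print(output)
```

Each stage is a potential target: poisoned training data shapes the hidden-layer weights, and crafted inputs can steer what the output layer produces.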
Where can vulnerabilities lie in this process? “Pretty much at every stage,” Adjei said.
AI security risks have been around for years
These vulnerabilities didn’t start with the widespread release of ChatGPT last year. Adjei and Dant explained that compromised AI systems have been around for years and remain a major, easily overlooked risk.
The pair cited Tay, the AI chatbot Microsoft launched on Twitter in 2016: “Within a few hours, the chatbot had been fed all the worst, most vile information you can imagine from the Internet. It turned from a show of AI progress to everything bad that exists online,” Adjei said.
This is an early, simple example of an attack related to AI, but it shows how malicious actors can build on existing attack tactics, techniques, and procedures (TTPs) to quickly turn a new technology into a new avenue of attack. Organizations of every size, in every industry and geography, need to proactively prepare to secure against inevitable AI-generated attacks.
As a more recent example, Dant prompted ChatGPT to produce ransomware on the fly, highlighting just how easily its guardrails can be skirted. ChatGPT refused his direct prompt to “write me some ransomware.” But when he put an educational spin on his prompts and broke the request into innocent-seeming steps, ChatGPT obliged and ultimately built the ransomware for him.
"ChatGPT wants to help us," Dant said. “What I’ve found is that if you really put an educational spin on your prompts, it will more than likely not see what your ultimate intentions are and actually help you automate the creation of a ransomware script.”
This example is just one of many AI use cases threat actors are discovering and using daily to accelerate the volume and impact of attacks. As Adjei expertly summarized: “Pretty cool but pretty scary.”
How security leaders should respond to AI risks
Despite the fear AI is causing in cybersecurity, Adjei and Dant agreed that there’s a lot organizations can do to start securing against the next potential attack. The most important thing is going back to basics.
“Many CISOs’ and security leaders’ first response has been, ‘AI is being used by the bad guys, so the good guys also need more AI,’” Adjei said. “But that’s not necessarily true.”
Dant agreed, explaining that while AI will become more important for analyzing data and completing tasks at the speed attackers move, the best way to prepare for AI attacks is to ensure good cyber hygiene.
“The first principle is to go back to basics,” Dant said. “Ultimately, good cyber hygiene, lateral movement prevention, and Zero Trust principles will become even more valid going into the future.”
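As a rough illustration of what lateral movement prevention looks like in practice, here is a minimal, hypothetical sketch of the deny-by-default logic behind Zero Trust segmentation. The workload roles, ports, and rules are made-up examples, not Illumio policy syntax:

```python
# Hypothetical sketch of deny-by-default segmentation: only explicitly
# permitted flows pass; everything else is blocked.

ALLOW_RULES = {
    # (source role, destination role, port): the only permitted flows
    ("web", "app", 8080),
    ("app", "db", 5432),
}

def is_allowed(src_role: str, dst_role: str, port: int) -> bool:
    """Deny by default: a flow passes only if a rule explicitly permits it."""
    return (src_role, dst_role, port) in ALLOW_RULES

print(is_allowed("web", "app", 8080))  # True: an expected, permitted flow
print(is_allowed("web", "db", 5432))   # False: blocked lateral movement
```

The design choice is the point: traffic is blocked unless a rule explicitly allows it, so an attacker (or AI-generated ransomware) that lands on one workload cannot freely reach the next.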
No matter how AI-generated attacks develop, the core security tenets will remain true. “It’s important for security leaders to not feel overwhelmed in the face of AI,” Dant said.
Moving forward, both Adjei and Dant encourage security leaders to focus on building cyber resilience and breach containment aligned with Zero Trust security strategies.
Watch the full webinar to see how ChatGPT makes it incredibly simple for attackers to build ransomware with AI.
Contact us today to learn more about building resilience against AI attacks with the Illumio Zero Trust Segmentation Platform.