
A Security Practitioner’s Framework for AI Safety and Security


In early November, the UK hosted the AI Safety Summit, the first ever global summit on artificial intelligence. It brought together international governments and AI experts to consider the risks of AI and how internationally coordinated efforts can help mitigate them. This summit follows global discussions around AI safety, including the recent U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI.  

I was expecting to see a specific framework or guidelines for AI safety come out of the summit — but I was disappointed to find no tangible guidance. In this blog post, I’ve outlined the kind of actionable framework on AI safety that I had hoped to see come out of the summit from my perspective as a senior security practitioner.

Take advantage of AI’s newness to address safety and security

Before I lay out the framework, I think it’s important to take a philosophical step back to understand why AI safety is such an important topic and why I was so disappointed by the outcomes of this year’s AI Safety Summit.  

It’s quite evident that most of the security shortcomings in the network technologies we use today can ultimately be traced back to weaknesses in how their underlying protocols were architected decades ago.

Security was not an initial consideration. In reality, it rarely ever is.  

Take, for example, the original protocols for web (HTTP), mail (SMTP), and file transfer (FTP). All of them were designed as plain text, with no data security through encryption. Their creators did not envisage a world where banking details, clinical patient records, and other sensitive user information would be freely and conveniently transmitted across the globe via these simple network protocols.
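To make the plain-text problem concrete, here is a minimal sketch in Python (standard library only) of an HTTP/1.1 request sent in the clear. The target host, example.com, is just an illustrative placeholder:

```python
import socket

# A minimal HTTP/1.1 request sent as plain text over TCP port 80.
# Every byte of it, and of the response, crosses the network
# unencrypted and is readable by anyone on the path.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = sock.recv(4096)

print(response.decode("ascii", errors="replace").splitlines()[0])
```

Any credentials or cookies a real request carried would travel the same way, which is exactly why TLS (HTTPS) later had to be bolted on.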

The same is true of the IPv4 addressing scheme created in the early 1980s and described in the IETF (Internet Engineering Task Force) publication RFC 791. Who would have thought at the time that the world could realistically exhaust its roughly four billion publicly addressable IP addresses? As both examples show, trying to retrospectively bolt on security (or capacity) typically proves to be both a bottleneck and a disruptive undertaking.
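The arithmetic behind that exhaustion is easy to check. Here is a back-of-the-envelope sketch; the reserved blocks listed are only the well-known ones, not an exhaustive accounting:

```python
# Back-of-the-envelope arithmetic on the IPv4 address space (RFC 791).
total = 2 ** 32  # 32-bit addresses: 4,294,967,296 in all

# A few well-known blocks that are not publicly routable:
reserved = (
    2 ** 24    # 10.0.0.0/8      (RFC 1918 private)
    + 2 ** 20  # 172.16.0.0/12   (RFC 1918 private)
    + 2 ** 16  # 192.168.0.0/16  (RFC 1918 private)
    + 2 ** 24  # 127.0.0.0/8     (loopback)
    + 2 ** 28  # 224.0.0.0/4     (multicast)
    + 2 ** 28  # 240.0.0.0/4     (reserved for future use)
)

print(f"Total addresses:   {total:,}")
print(f"Roughly routable:  {total - reserved:,}")
```

With well over ten billion connected devices in the world today, a single 32-bit integer was never going to be enough, hence NAT and, ultimately, IPv6.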


Generative AI such as OpenAI’s ChatGPT, which marks its first anniversary of public availability this month, has showcased the advances in machine learning and deep learning capabilities. Large language models (LLMs) like this are built on artificial neural networks (ANNs), loosely inspired by the neural structure of the human brain. For the first time, the world has witnessed the culmination of years of work towards AI becoming readily usable in offerings such as ChatGPT. All of a sudden, even the possibility of Artificial General Intelligence (AGI), and the theoretically far superior Artificial Super Intelligence (ASI), no longer feels confined to the titles of science fiction.

We must take advantage of the fact that AI is still early enough in its evolution for safety and security to be designed in, especially given the history and baggage of networking and security. A summit like the one in November 2023 should not only have addressed fears but also taken a step back to assess AI safety and security holistically. Unfortunately, I’m not sure this was accomplished, at least not this time round.

3 principal domains for AI safety and security

For a first-of-its-kind, proactive international summit on AI safety and security, I expected a tangible move from geo-politics to geo-policies addressing the following principal domains:

  • Development, administration, and end-usage
  • Security
  • Ethics and legal

These domains are intertwined. For example, the development of AI solutions should encompass the ethics and legal domains to curb issues at the source, like bias in training data and in the hidden-layer configuration of deep learning systems. Not everything that works is the right thing to do. At the same time, securing the integrity of the training data and the development process should happen as early in the lifecycle as possible.

An AI safety framework: What I hoped to see from the AI Safety Summit

Under each of the three domains, I’ve suggested a selection of key standards and frameworks for further consideration. The end result should be focused working groups tasked with formulating them:

Development, administration, and end-usage  
  • Interfacing standards and framework: Guidelines and standards for interfacing AI between vendors, and between vendors and end users, similar to what JSON did for web APIs and USB-C for physical devices (a minimal sketch of what such a message might look like follows this list). This could enable even faster innovation across multiple domains, much as standardized Wi-Fi and mobile communication interfaces have driven technology innovation in smart watches, TVs, drones, cars, ambulances, and more.
  • Differentiation standards and framework: Policies to enable the clear determination and differentiation of original artistic works from AI-generated works, just as Digital Rights Management (DRM) does for copyright and intellectual property protection. One important beneficiary will be the fight against misinformation and deceptive content such as deepfakes and election interference, at least at the commodity level.
  • AI skills gap: Addressing the skills gap in the key areas of AI development, AI administration, and AI usage, similar to the computer programming and computer literacy efforts (including adult education) that ran from the early boom of personal computing to today. This is intended to level the playing field in a world of both good and bad AI development and use.
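On the interfacing point above, here is a minimal sketch in Python of what a vendor-neutral AI request might look like. Every field name is hypothetical; no such standard exists yet, which is exactly the gap a working group would fill:

```python
import json

# A purely hypothetical, vendor-neutral "AI interface" message, in the
# spirit of what JSON did for web APIs. None of these field names come
# from an existing specification.
request = {
    "schema": "ai-interface/0.1",  # hypothetical schema version tag
    "task": "text-generation",
    "input": "Summarize the outcomes of the AI Safety Summit.",
    "constraints": {"max_output_tokens": 256},
    "provenance": {"requesting_party": "example-org"},
}

# Any conforming vendor could accept the same payload, just as any
# USB-C device accepts the same connector.
print(json.dumps(request, indent=2))
```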
Security
  • Singularity-of-form protections: Guidelines and protections for merging AI (like ChatGPT, voice AI, image-recognition AI, etc.) with actual hardware into a single form, as in industrial robots today or humanoid robots in the future. Such a merger ultimately gives general-intelligence AI the ability to act in the physical world. In short: making sure a ChatGPT-type physical robot never turns on humans or malfunctions in a catastrophic way. There have already been occurrences of industrial robots and self-driving cars malfunctioning and causing injury or death.
  • Risk from AI: Protections against risks posed by AI. ChatGPT has already been put to malicious use. This area would cover threats such as evolving malicious payloads, record-speed vulnerability discovery and exploitation, voice and phishing fraud, and sabotage such as the compromise of manufacturing processes or supply chains.
  • Risk to AI: Protections against risks posed to AI itself, including the exploitation of AI development, learning processes, and bias parameters. While guidelines for AI development ethics, security, and usage do exist, they need to be strengthened and to build on current security practices such as secure coding, DevSecOps, and software bills of materials (SBOMs) in the supply chain (a sketch of one such practice, verifying training-data integrity, follows this list). On the consumption side, we already have analogues: computer misuse legislation, company security policies on computer use, and fair usage policies.
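As referenced above, here is a minimal sketch in Python of one supply-chain practice applied to AI: pinning and verifying the hash of a training dataset before it enters a training run. The file name and the pinned digest are illustrative placeholders:

```python
import hashlib
from pathlib import Path

# Placeholder digest; a real pipeline would pin the dataset's true
# SHA-256, ideally recorded in a signed manifest (the SBOM analogue).
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(path: Path) -> None:
    """Refuse to proceed if the dataset has been tampered with."""
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(f"Integrity check failed for {path}: {actual}")

# verify_training_data(Path("training_data.jsonl"))  # hypothetical file
```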
Ethics and legal
  • AI civic ethics: Ethics and guidelines on using AI in spheres like facial recognition and user-behavior analysis for employee monitoring, traffic control and enforcement, and the criminal justice system.
  • AI scientific ethics: Ethics and guidelines on what AI can be used to do in science and medicine. Examples include genetic and disease research, cloning, etc.
  • AI military ethics: A Geneva Convention-style set of rules of engagement for using AI in kinetic operations, especially autonomous AI. This is particularly important where AI takes life-and-death decisions, to prevent unintended consequences such as mass casualties, chemical weapons use, or an unintended nuclear detonation.
  • AI legal framework: Guidelines and standards on the legal implications of involving AI in legally relevant situations, such as what constitutes admissible evidence in court, or how AI factors into insurance and finance claims and risk-acceptance calculations.

An AI safety and security framework is only the beginning

It’s encouraging to see a strong focus on the risk of AI cyber threats on a global scale. An AI safety framework is an essential first step towards building an innovative yet safe AI future. But it is only a starting point. Ideally, focused groups of elite AI and industry subject matter experts would partner to develop and iterate on AI safety standards, much as the Internet Engineering Task Force (IETF) does through its Requests for Comments (RFCs).

Importantly, we must also consider how we can empower organizations to build resilience against AI-powered attacks. AI makes it easier and quicker for criminals to launch attacks. The best defence is reducing the “learning surface” so that there is less opportunity for AI cyber threats to learn, adapt, and progress the attack.  

Contact us today to learn how developments in AI can impact your organization’s cybersecurity.  

