An abridged version of this article was originally published by Network Computing.
The Networking 101 class I took 18 years ago started with the concept of static routes that were programmed by humans to connect machines. The first version of the Internet was built on static routes, but as it grew more dynamic, people quickly realized that having humans reprogram the static routes whenever things failed or changed was not sustainable.
Routing protocols such as Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and Intermediate System to Intermediate System (IS-IS) were invented so the routes could be automatically calculated and programmed into the forwarding table by routers. These routing protocols were designed to find the best (shortest) route to a destination on the Internet. However, within a few years, people realized that an Internet connecting a large number of universities, service providers, companies, and even countries would require policy as an input and couldn't simply calculate the best route based on shortest path.
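The "best (shortest) route" computation at the heart of link-state protocols such as OSPF and IS-IS is, in essence, a shortest-path calculation over link costs. A minimal sketch, using a hypothetical topology, might look like this:

```python
# A minimal sketch of the shortest-path computation performed by link-state
# routing protocols such as OSPF and IS-IS (Dijkstra's algorithm over link costs).
import heapq

def shortest_paths(graph, source):
    """Return the lowest total link cost from source to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry; a cheaper path was already found
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical topology: router -> [(neighbor, link cost)]
topology = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
print(shortest_paths(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Real protocols add flooding, adjacency maintenance, and convergence logic on top, but the routers recompute something like this automatically whenever the topology changes.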
Enter Border Gateway Protocol (BGP), which was invented to incorporate policies, written by human administrators in an abstract language, into the routing decisions. BGP took that policy as the input, combined it with the state of the paths available, and calculated the routing tables. As the state of the paths available changed, BGP would recalculate the routing tables to accommodate those changes automatically. A human administrator only needed to get involved when the administrative policy needed to change. BGP made it possible for us to effectively run the Internet because it allowed humans to focus on higher-level policies and let the machines do the dirty work of computing the routes based on various inputs 24x7x365, without making any mistakes.
Why isn’t security keeping up?
In my Security 101 class, I learned about programming ACLs into routers and firewalls that looked very similar to static routes: you tell the firewall what you want to allow or deny in a language it understands—IP addresses. So, why did security not evolve with something similar to routing protocols? Because firewalls were deployed at the perimeters of data centers, the number of rules configured was small and those rules didn't change very often.
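To make the analogy concrete, here is a minimal sketch (with hypothetical addresses and rules) of what such an ACL amounts to: a hand-written, ordered list of IP-level entries, evaluated first match wins, that a human must edit whenever the environment changes:

```python
# A minimal sketch of IP-based firewall ACL matching. Like static routes,
# the rules live in the language of IP addresses, and every change in the
# environment means a human edits this list by hand.
import ipaddress

# Hypothetical rules: (action, source network, destination network, dest port)
ACL = [
    ("allow", "10.0.1.0/24", "10.0.2.0/24", 5432),  # web tier -> db tier
    ("deny",  "0.0.0.0/0",   "10.0.2.0/24", None),  # block everything else to db
]

def evaluate(src_ip, dst_ip, dst_port):
    """Return the action of the first rule matching the flow (default deny)."""
    for action, src_net, dst_net, port in ACL:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_net)
                and (port is None or port == dst_port)):
            return action
    return "deny"

print(evaluate("10.0.1.5", "10.0.2.9", 5432))   # allow
print(evaluate("192.0.2.1", "10.0.2.9", 5432))  # deny
```

Nothing in these rules knows what a "web tier" is; that knowledge lives only in the administrator's head, which is exactly the problem the rest of this article is about.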
Also, firewalls were owned by one enterprise, which meant they didn’t require the same coordination between enterprises that was required for Internet routing. So, when a policy about what is allowed or denied changed, firewall administrators took out their secret network decoder rings, took the policy change that was expressed to them in a human-understandable language, converted it into IP addresses, and updated the firewall configuration. As the frequency of these changes grew, more humans with secret network decoder rings were thrown in to handle the changes.
Highly repetitive manual tasks invite human error. And we are not nearly as good at cleaning things up after they are no longer needed as we are at building new things. Ask any enterprise the state of affairs of their firewall ACLs and you'll likely hear back how afraid they are of touching them.
Over the last 10 years, data centers have dramatically evolved with the advent of virtualization, public and private cloud, automation, and distributed computing architectures. The threat landscape has also changed. Enterprises are now starting to deploy firewalls within their data centers in an attempt to secure their east-west traffic.
Both the number of firewall rules and the rate of change have suddenly spiked so high that the people with the secret network decoder rings are having a hard time keeping up. For example, as application developers move faster and faster, security teams are not able to keep up with the rate of change. The security teams, therefore, either slow the application teams down and get a lot of heat in return, or end up compromising on security by making an error in a hurry or by making a sub-optimal security choice in order to move fast—and get hacked because of it.
Manual vs. Software Intelligence: There’s no contest
It’s time to hand this manual security translation over to software intelligence that has been designed to scale with computing demand. It’s time we let people focus on the high-level policies expressed to the machines in natural language (e.g., “allow the web workloads of my ERP application in production to talk to the database workloads of the same application”) and let the software intelligence turn these policies into the language of the network and IPs. When IP addresses change, new workloads show up, or existing workloads are decommissioned, the software could recalculate the security policies for those workloads and reprogram them automatically without getting the people involved. This will allow people to move forward with the speed and efficiency of a superhero while still maintaining a tight security posture.
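A minimal sketch of this idea, with hypothetical labels and addresses: the human states the policy once, in the language of the application; the software compiles it into IP-level rules, and simply recompiles whenever the workload inventory changes.

```python
# A minimal sketch of software translating a high-level, label-based policy
# into IP rules. The policy below never mentions an IP address; the names,
# labels, and addresses are hypothetical.

# The human-authored policy, in the language of the application.
POLICY = {"src_role": "web", "dst_role": "db", "app": "erp", "env": "prod", "port": 5432}

def compile_rules(policy, inventory):
    """Expand a role-based policy into concrete allow rules (action, src_ip, dst_ip, port)."""
    def matches(workload, role):
        return (workload["role"] == role
                and workload["app"] == policy["app"]
                and workload["env"] == policy["env"])
    return [
        ("allow", src["ip"], dst["ip"], policy["port"])
        for src in inventory if matches(src, policy["src_role"])
        for dst in inventory if matches(dst, policy["dst_role"])
    ]

inventory = [
    {"ip": "10.0.1.5", "role": "web", "app": "erp", "env": "prod"},
    {"ip": "10.0.2.9", "role": "db",  "app": "erp", "env": "prod"},
]
print(compile_rules(POLICY, inventory))

# A new web workload appears: no human edits, just recompile.
inventory.append({"ip": "10.0.1.6", "role": "web", "app": "erp", "env": "prod"})
print(compile_rules(POLICY, inventory))
```

The point of the sketch is the division of labor: the policy is stable and human-readable, while the IP-level output is disposable and regenerated by the machine on every change, just as BGP regenerates routing tables.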
Note that this is quite different from orchestration systems that manage firewall rules. Those are the equivalent of scripts and tools allowing people to re-program the static routes on multiple routers. The key difference is in what people have to deal with. With the right solution, people would never again have to speak the language of IP addresses, VLANs, etc. Instead, they would focus only on the language of the application, letting the software intelligence translate it into the language of the network, 24x7x365, without making any mistakes.
There are several examples of how letting software intelligence do the repetitive jobs is helping, or will help, us improve the world we live in. Letting software intelligence drive a car will save human lives: once we get the algorithms right, self-driving cars will be safer than human drivers because they won’t drink, do drugs, text, or sleep while driving. Uber is uplifting the experience of public transportation by letting software do what humans do in taxi dispatch call centers. Wealthfront and Betterment are moving investing to software using smart algorithms.
It would have been impossible for us to build the current Internet without delegating the responsibility of dealing with changes to software and machines. We are fast approaching a point where it will be impossible for us to keep our infrastructure secure without delegating the responsibility of translating security from the language that humans speak to the language that the network understands.
Tony Stark: Jarvis, can you secure the new app we are launching? Allow web to talk to the database and open the web to Stark Tower.
Jarvis: What VLANs and zones should I use for the web and database?
Tony: Do I really need to know all that?
Jarvis: Never mind, sir. I will take care of it.