From a computing perspective, we live in a renaissance age. Information technology is not just a tool to help run businesses, but an economic factor of production in its own right; like land, labor, and capital, it is a necessary component of practically every finished good and service. As computing integrates into every facet of our lives, it deepens the interconnectedness of everything we do and touch. It also raises the risk of our personal, commercial, and national security information falling into the hands of bad actors.
The mainstreaming of IT has been paired with a hailstorm of innovation over the past decade, including the emergence of a fundamentally new computing architecture: distributed computing. Distributed computing and all of its branches — mobile, virtual, and cloud — is rapidly replacing the 30+ year dominance of client-server architectures. If you think IaaS (AWS, Azure), you are thinking distributed computing. Linux containers? Check. Microservices? Right on. If you are thinking of security as a fixed place in time, you are thinking the wrong way.
The dynamic, innovative nature of new computing architectures presents a catch-22 for security professionals: what makes us more agile, fast, and distributed also exposes more mission-critical data to risk and hackers. It is difficult, potentially impossible, for the traditional network security model — built on the foundations of a fixed and centralized computing architecture — to address the new requirements.
What are some of the challenges?
- Security dependent on the network does not “bend” to hybrid or diverse environments (you cannot stretch your firewall into AWS).
- The brief, ephemeral life cycle of newer computing elements such as Linux containers moves too quickly for traditional, manual network management models.
- Most significantly, network segmentation models such as VLANs or zones leave too much attack surface available to bad actors.
To illustrate the last point: if a VLAN or a virtual network segment grants access to 500 workloads (i.e., physical or virtual servers), it is the cyber equivalent of Typhoid Mary. If one workload becomes infected by malware, every workload is subject to infection. The traditional network segmentation model is a poor defense in an era of heightened concern about APTs and data exfiltration. Imagine an infected container moving across an entire data center with nothing partitioning it off from sending and receiving communications.
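The arithmetic behind that exposure is easy to sketch: in a flat segment, every workload can reach every other, so the number of lateral-movement paths grows roughly with the square of the segment size. A minimal illustration (the workload counts and grouping are hypothetical, not drawn from any real environment):

```python
# Sketch: lateral-movement paths in a flat segment vs. smaller groups.
# Numbers are illustrative only.

def lateral_paths(workloads: int) -> int:
    """Ordered (source, destination) pairs when every workload in a
    segment can reach every other workload in that segment."""
    return workloads * (workloads - 1)

flat = lateral_paths(500)            # one flat 500-workload VLAN
segmented = 50 * lateral_paths(10)   # same 500 workloads in 50 groups of 10

print(flat)       # 249500 possible paths in the flat segment
print(segmented)  # 4500 paths once traffic is confined to small groups
```

The point of the sketch is not the exact numbers but the shape of the curve: shrinking segment size cuts the reachable attack surface quadratically, which is why one infected workload in a flat 500-node VLAN is so dangerous.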
So what can enterprises do?
- Ringfence critical assets. Determine ways to segment high-value assets away from lower-value compute infrastructure. This “hygiene” move will not stop a determined hacker, but will make communication with critical servers much more difficult.
- Build security and segmentation into the application life cycle. This includes building more granular security policies directly into application architectures to restrict inter-application communications.
- Dynamically adapting is the best defense. Institute an adaptive security architecture whereby security moves and adapts with dynamic compute assets — such as Linux containers or vMotioned VMs that spin up, spin down, and move — without human intervention. One of the best thought pieces on this strategy was published last year by Gartner’s Neil MacDonald and Peter Firstbrook.
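One way to picture the adaptive model in the last point is policy keyed to workload labels rather than network location, so the policy follows a container or VM wherever it is scheduled. A minimal sketch under that assumption — the `Workload` class, the role labels, and `is_allowed` are all hypothetical, not any specific product's API:

```python
# Sketch of label-based (rather than address-based) segmentation.
# All names here (Workload, roles, ALLOWED, is_allowed) are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    role: str      # label assigned at deploy time, e.g. "web", "app", "db"
    address: str   # changes as the workload moves; the policy ignores it

# Policy: which role may initiate connections to which role.
ALLOWED = {("web", "app"), ("app", "db")}

def is_allowed(src: Workload, dst: Workload) -> bool:
    """The decision follows labels, so it stays correct without human
    intervention when a workload spins up, spins down, or moves."""
    return (src.role, dst.role) in ALLOWED

web = Workload("web-1", "web", "10.0.1.5")
db = Workload("db-1", "db", "10.0.9.2")

print(is_allowed(web, db))   # False: web may not reach the database tier
db.address = "192.168.4.7"   # workload moves; no policy change is needed
print(is_allowed(web, db))   # still False
```

Because nothing in the decision depends on an IP address, VLAN, or zone, the same policy holds in the data center and in a public cloud — exactly the "bend to hybrid environments" property the firewall-bound model lacks.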
Many of the CISOs I meet have stated that one of the first actions they undertake in their first six months on the job is to identify the most valuable and most at-risk assets and take steps to mitigate the risk. How can they take those steps while also addressing the catch-22?
The only way to make this change is to involve the security, infrastructure (e.g., networking), and applications teams in rethinking the application development cycle from a security perspective. These groups must jointly understand and invest in the kinds of security systems that support the rapid and dynamic workflow of distributed computing capabilities. This will reduce the attack surface while increasing the difficulty of penetrating critical information assets.