The idea of segmenting the network to increase security is not a new one. But granular segmentation has been difficult to achieve because compute and security have been joined at the hip: any change made to attain the desired security posture either requires changes to the underlying network transport or sacrifices granularity. In addition, IT and security teams are often juggling competing priorities, and segmentation hasn’t always been the most popular strategy.
The increase in the scale and scope of cyberattacks is changing that.
Micro-segmentation (sometimes referred to as security segmentation) is a strong deterrent to attackers, and more and more organizations are implementing it as an essential part of a defense-in-depth strategy. According to a recent survey of over 300 IT professionals, 45 percent currently have a segmentation project underway or are planning one. Forrester Research’s Q4 2019 Zero Trust Wave report reinforced the point, stating "...there's now no excuse not to enable micro-segmentation for any company or infrastructure."
But as with any security control, it’s important to balance the strategy of the business with the need to secure it. Segmenting your network is a major project and an entirely different way of managing your network. You may be going from a flat network infrastructure – where communications are wide open – to a network that requires firewall rulesets just like your perimeter infrastructure. It takes careful planning to achieve the desired result of a network that is difficult for attackers but still manageable for you.
So how do you get there?
For effective micro-segmentation deployments, here’s a list of six key functions and capabilities that solutions must provide:
1. Visibility with application context
The adage that you ‘can’t protect what you can’t see’ couldn’t be truer. Organizations run a variety of applications to operate their businesses, each of which communicates with the others and shares data. Therein lies the challenge. Without visibility, an unauthorized user has ample opportunity to infiltrate a business, land on an unprotected or vulnerable asset, and move laterally toward critical assets, all without being detected.
Proper micro-segmentation of application assets can minimize or prevent the spread of breaches, but it requires visibility at the application layer. This is different from collecting NetFlow data or tapping a SPAN port on a switch, which only describe network flows at the lower layers. You need to see how application components talk to each other across tiers (Web, Processing, Database) as well as how applications interact with one another.
Applications don’t operate in isolation. They talk to each other, and that’s how business processes work. For example, a point-of-sale (POS) system will likely talk to an inventory management application before fulfilling a customer order, so traffic from the POS system should be allowed to reach inventory applications. On the other hand, a public-facing Web server should not talk directly to a database; it should go through the application processing server to complete a transaction or query.
But these flows are not necessarily well known – certainly not to network teams, and maybe not even to security teams. Application developers may have this level of clarity, but not always. To get micro-segmentation right, organizations need built-in visibility that shows application dependencies. Ideally, this takes the form of a map showing how various applications talk across tiers as well as with each other. The map should also depict how different environments – Development, Staging, Production, and in-scope regulatory environments – are laid out and what kind of communications flow between them. This helps organizations understand both “what is happening” and “what should happen” based on their desired security posture.
How we help: Our real-time application dependency map clearly shows how application components interact and how different applications talk to one another. This map, known as Illumination, not only provides unmatched visibility but also recommends policies at the desired granularity. Policies are defined in natural language using labels, with no network-layer information required; the platform does the heavy lifting and calculates the precise L2/L3/L4 rules from those natural-language policies. This makes it easy to develop a security posture aligned with organizational goals.
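The kind of label-aware aggregation a dependency map performs can be sketched in a few lines of Python. This is a simplified illustration of the idea, not Illumio's implementation; all workload names, labels, and ports below are hypothetical:

```python
from collections import defaultdict

# Hypothetical labels mapping each workload to an (application, tier) pair.
LABELS = {
    "pos-01": ("POS", "Web"),
    "inv-01": ("Inventory", "Processing"),
    "inv-db": ("Inventory", "Database"),
}

# Hypothetical observed flows: (source workload, destination workload, dest port).
flows = [
    ("pos-01", "inv-01", 8443),
    ("inv-01", "inv-db", 5432),
]

def dependency_map(flows, labels):
    """Aggregate raw workload-to-workload flows into app/tier-level dependencies."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[(labels[src], labels[dst])].add(port)
    return dict(deps)

deps = dependency_map(flows, LABELS)
# POS/Web talks to Inventory/Processing on 8443;
# Inventory/Processing talks to Inventory/Database on 5432.
```

Aggregating by label rather than by IP address is what lets the map stay readable as workload counts grow, and it is also what makes the flows reviewable against the intended security posture.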
2. Scalable architecture
There are multiple ways to achieve micro-segmentation. While each approach has its merits, you need to consider scalability, solution efficacy and granularity, ease of use, and last but not least, cost-effectiveness.
Typical approaches include:
- Segmenting using the network: Programming access control lists (ACLs) on network devices (switches, routers, firewalls, etc.). While this approach provides some coarse-level separation, it is cumbersome, error-prone, and expensive. Networks are meant to transport packets from point A to point B as fast as they can; using ACLs for segmentation is like stopping every packet to ask whether it should be allowed. These are diametrically opposed goals, and mixing them causes things to break.
- Segmenting with SDN: Automating the above with software-defined networking (SDN) allows users to define rules on a centralized controller that are pushed to the appropriate network devices. This provides some configuration relief, but in many ways it is no better than using network devices directly. Moreover, most SDN systems were designed for network automation, not security. Only recently have vendors recognized the security use cases and tried to repurpose SDN systems for micro-segmentation, with marginal success.
- Host-based micro-segmentation: Architecturally, this approach is different. Instead of enforcing policy somewhere in the network, organizations can use the stateful firewall built into each host to enforce rules at line rate without any performance penalty. It is also the only scale-out architecture: enforcement capacity grows automatically as you add workloads, so performance holds up as the estate grows. Remember, for enforcement to happen, the enforcing entity must see the traffic. With firewalls (or network devices doing the job), the volume of traffic will at some point outgrow the capacity of the enforcer, leaving the organization exposed. Steering traffic through those chokepoints is itself a major challenge that can break things at the network layer or introduce a great deal of complexity.
The edge of the network is no longer the perimeter firewall but the internal segments, and the more granular the segments, the better the protection. Taken to its logical conclusion, the best segment is a segment of one: the workload becomes the new edge. Anything outside the workload is untrusted, and all enforcement for allowing or stopping traffic happens at the workload without sacrificing performance. Host-based systems do exactly that.
How we help: Early on, we recognized that the only scalable way to achieve both coarse-grained segmentation and granular micro-segmentation was through a host-based architecture. This approach brings security very close to where the action is, and the controls are independent of the network. Leveraging the host's capabilities and capacity to enforce using the OS kernel’s line-rate stateful firewall, we decouple segmentation from the network and give you a scalable way to segment by eliminating the chokepoints present with firewalls and adding capacity as new workloads come online.
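The scale-out property of the host-based model can be illustrated with a minimal sketch: every workload carries its own default-deny allowlist and enforces it locally, so adding a workload adds an enforcer. This is a conceptual toy, not how any real host firewall is implemented; the workload names and port are hypothetical:

```python
class Workload:
    """Toy model of a workload enforcing its own inbound allowlist."""

    def __init__(self, name):
        self.name = name
        self.allow = set()  # (peer, port) pairs permitted inbound

    def allow_inbound(self, peer, port):
        self.allow.add((peer, port))

    def accepts(self, peer, port):
        # Default-deny: anything not explicitly allowed is dropped.
        return (peer, port) in self.allow

db = Workload("inventory-db")
db.allow_inbound("inventory-app", 5432)  # only the app tier may reach the database

assert db.accepts("inventory-app", 5432)     # permitted tier-to-tier flow
assert not db.accepts("web-frontend", 5432)  # web tier blocked from the database
```

Because each workload evaluates only its own traffic, there is no central device whose capacity the aggregate traffic can outgrow, which is the essence of the scale-out argument above.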
3. Abstracted security policies
Traditionally, security has been tied to the network, but the two have different objectives. Networks are about speed and throughput; security is about isolation and prevention. Mixing them gives you the worst of both worlds: a three-legged race in which the partners are pulling in different directions. Needless to say, it does not end well. Security policies should be abstracted from the network so the desired security posture can be attained independent of the underlying infrastructure.
How we help: By decoupling segmentation from the network, we provide a workflow for organizations to build policies based on business-centric labels that are easily understood. Workloads are organized based on four dimensions of labels, and policies are written using these labels. All the heavy lifting of mapping the abstracted policies to network-level enforcement is done through a model that allows you to test policies before you enforce them. This ensures you can achieve the desired result – micro-segmentation – without breaking any applications.
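The compile-then-test idea can be sketched as follows: an abstract, label-based rule is evaluated against observed traffic in a dry run, surfacing violations before anything is enforced. The label dimensions, workloads, and rule below are hypothetical and simplified, not Illumio's actual policy model:

```python
# Hypothetical workloads, each described by labels rather than network addresses.
workloads = {
    "web-01": {"app": "Store", "env": "Prod", "tier": "Web"},
    "app-01": {"app": "Store", "env": "Prod", "tier": "Processing"},
    "db-01":  {"app": "Store", "env": "Prod", "tier": "Database"},
}

# One abstract rule: the Processing tier may reach the Database tier on 5432.
policy = [{"src": {"tier": "Processing"}, "dst": {"tier": "Database"}, "port": 5432}]

def matches(labels, selector):
    return all(labels.get(k) == v for k, v in selector.items())

def allowed(src, dst, port):
    s, d = workloads[src], workloads[dst]
    return any(matches(s, r["src"]) and matches(d, r["dst"]) and port == r["port"]
               for r in policy)

# Dry run: check observed flows against the policy before enforcing it.
observed = [("app-01", "db-01", 5432), ("web-01", "db-01", 5432)]
violations = [f for f in observed if not allowed(*f)]
# The web tier reaching the database directly shows up as the lone violation.
```

Notice that the rule never mentions an IP address or subnet: changing where a workload runs changes nothing in the policy, which is the practical payoff of abstraction.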
4. Granular controls
Organizations have many applications with different business criticality. As such, security requirements differ based on importance and, sometimes, the regulatory requirements associated with that application. You need a mechanism to define varying security postures for unique compute environments. At times, it may be okay to simply separate different environments (e.g., separate development from production or separate in-scope regulatory assets from everything else). For stricter controls, you might need to lock down application tiers (Web, Processing, Database) and control which tier can talk to which. These options should be part of the policy workflow and should be easy to implement without making any network changes.
How we help: Our policy model is simple, yet powerful and easy to use. The application dependency map not only provides application-layer visibility but also drives policy creation by recommending different options, from simple ringfencing to tier-based separation to segmenting based on ports, processes, and services.
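The difference between the granularity options can be made concrete by expressing two of them as predicates over the same labels: a coarse ringfence that permits any flow within an application, and a stricter tier-based policy layered on top. All labels and the tier graph are illustrative assumptions:

```python
def ringfence(src, dst):
    """Coarse policy: any flow within the same application is permitted."""
    return src["app"] == dst["app"]

# Stricter option: which tier may initiate connections to which.
TIER_GRAPH = {"Web": {"Processing"}, "Processing": {"Database"}}

def tier_policy(src, dst):
    """Tier-based policy: ringfenced AND the tier-to-tier edge is allowed."""
    return ringfence(src, dst) and dst["tier"] in TIER_GRAPH.get(src["tier"], set())

web = {"app": "Store", "tier": "Web"}
db  = {"app": "Store", "tier": "Database"}

assert ringfence(web, db)        # permitted under coarse ringfencing
assert not tier_policy(web, db)  # blocked under the stricter tier-based policy
```

An application could start under the ringfence and graduate to the tier-based policy as its flows become better understood, with no change to the underlying network in either case.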
5. Consistent policy framework across your compute estate
Enterprises are increasingly hybrid and multi-cloud, with an application’s footprint often dispersed across various on-premises locations, hosting facilities, and public clouds based on resiliency, functionality, performance, and data residency requirements. You need a consistent security mechanism that transcends the individual, mutually incompatible tools tied to each environment.
Security models differ between public clouds and on-premises deployments. Public clouds operate on a shared responsibility model: the cloud vendor provides basic infrastructure security, and customers are responsible for securing their own assets and applications. Further, the tools used for securing on-premises deployments differ from what is available in public cloud. Most public clouds offer security groups (the name varies by vendor) that provide basic firewalling within a virtual private cloud. These security groups are limited in scale, carry the same configuration complexity as traditional firewalls, and are not compatible across clouds. That is a challenge for customers who are truly hybrid multi-cloud and need a consistent security mechanism.
How we help: Our solution is location and workload form-factor agnostic. Your workloads can reside anywhere. We provide full visibility as well as a consistent security model that is applicable across the entire compute estate.
6. Integration with the security ecosystem
Defining a security posture and enforcing its rules helps keep an organization’s business running, but segmentation must also integrate firmly with the processes and tools the organization uses to create new assets, deploy applications, operationalize systems, and more. Most enterprises have systems that support daily operations; SecOps, for example, may use Splunk as their main control center and expect other systems to feed notifications and alerts into it. It is unlikely that SecOps will monitor multiple tools every day to keep things running. Unless security integrates with these processes, it will remain a challenge and create silos and complexity.
How we help: Our solution is completely API-driven, meaning it easily integrates with an organization's larger ecosystem. Everything you can do with the Illumio GUI can be done via API calls from your system of choice. In addition, we support integrations that allow for enhanced functionality. Notable integrations include:
- ServiceNow – Illumio can ingest host attributes from ServiceNow and use them to create the labels that are assigned to each workload and used to define policies. In addition, Illumio can report discrepancies it detects (based on its real-time map) back to ServiceNow, making the CMDB more accurate.
- Splunk – Illumio can send all alerts and notifications to SIEMs like Splunk, reporting on things like blocked traffic, tampering events, rule violations, etc.
- Vulnerability Scanners (Qualys, Tenable, Rapid7) – Unique to Illumio, we can ingest vulnerability information and overlay that information on an application dependency map to visualize and quantify risk, giving you a vulnerability map. That information can then be used to derive micro-segmentation policies to account for vulnerabilities, essentially using micro-segmentation as a compensating control when immediate patching is not an option.
- AWS Security Hub – Similar to Splunk, Illumio integrates with AWS Security Hub, which provides SIEM functionality for cloud deployments.
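To make the API-driven pattern concrete, here is a sketch of shaping a blocked-traffic event for forwarding to a SIEM. The event schema and field names are hypothetical, invented for illustration; they are not Illumio's actual API or Splunk's required format:

```python
import json

def to_siem_event(flow):
    """Shape a hypothetical blocked-traffic record into a SIEM-ready event."""
    return {
        "sourcetype": "segmentation:blocked_traffic",  # illustrative sourcetype
        "event": {
            "src": flow["src"],
            "dst": flow["dst"],
            "port": flow["port"],
            "action": "blocked",
        },
    }

event = to_siem_event({"src": "web-01", "dst": "db-01", "port": 5432})
payload = json.dumps(event)
# An integration would POST `payload` to the SIEM's HTTP event ingestion endpoint.
```

The point of the pattern is that the segmentation system emits structured events and the SIEM remains the single pane of glass, rather than SecOps monitoring another console.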
In summary, here are the top advantages of micro-segmentation:
- Improved security: Network traffic can be isolated and/or filtered to limit or prevent access between network segments.
- Better containment: When a network issue occurs, its effect is limited to the local segment.
- Better access control: Allow users to only access specific network resources.
- Compliance: Organizations under regulatory or client-mandated compliance requirements can demonstrate that appropriate steps have been taken and pass audits in a timely manner.
- Improved monitoring: Provides an opportunity to log events, monitor allowed and denied internal connections, and detect suspicious behavior.
Micro-segmentation is a very effective approach to preventing unauthorized lateral movement within your organization, and it is no accident that it has become a key tenet of the Zero Trust framework. Breaches can be damaging, but the absence of an internally segmented network is what allows them to become far worse.
Most high-profile breaches cripple organizations because an intruder traversed the network undetected for weeks or months, moving laterally to reach high-value assets. Micro-segmentation blocks that movement so your organization does not become the next headline.
Ready to take the first step on your segmentation journey? Sign up for a free 30-day trial.