Illumio Blog
February 2, 2016

5 New Rules to Make Escalations More Effective and Efficient

Alan S. Cohen

This article was originally published by SecurityWeek.

“That escalated quickly. I mean, it really got out of hand.” 
—Ron Burgundy, Anchorman


There is a new adage in the security world: don’t assume you will be hacked, but assume you have already been hacked. This forces security professionals to re-examine the validity of the Cyber Kill Chain model—which reinforces traditional, perimeter-focused, malware-prevention thinking—and develop new strategies to deal with persistent and smart attackers, including insider threats. 

Traditional incident management approaches that rely on network monitoring and detection of attacks are also falling short in today’s agile and distributed computing world. Three factors contribute to this security shortfall:

  • The heterogeneity, size, and scale of computing environments are too large and diffuse for human beings to keep up with. A large customer, for whom we are protecting over 100,000 servers, has over 400,000 objects connecting to the server layer (this includes objects such as storage filers and other infrastructure devices, as well as multiple IP addresses that make a single device look like multiple objects from a security perspective). 
  • As cloud computing emerges, ownership of the infrastructure (from a network monitoring perspective) not only cannot be assumed, it must be discounted. Increasingly, even enterprise data center networks are untrusted. 
  • Dynamic, temporal workloads pushed forward by technologies such as Linux Containers make it more difficult to apply traditional chokepoint technologies. How do you secure a process that only fires off for seconds or minutes? How much preparation time is an organization willing to put in for that? 

These factors make escalation of cyber incidents a huge problem for security staff. To address it, here are 5 new rules organizations can adopt to make (inevitable) escalations more effective and efficient.

  • Always full cycle, full stack. Security today is, for the most part, bolted on rather than built into application development cycles. This leaves applications unknowingly vulnerable. If application developers or DevOps teams build security practices and software into applications, it reduces vulnerabilities later and provides critical information to response teams trying to track down the source or movement of a breach. This requires a new “Mayflower Compact” between security and application teams. 
  • Shrink your “attack surface.” The traditional perimeter technology model means that security technologies must cover a lot of digital real estate, the cyber equivalent of guarding a 1,000-mile border between countries. Think about the IDS model: applying 20,000+ signatures against all network traffic entering the data center. Not all 20,000 signatures apply to every application. New approaches such as “ringfencing” or microsegmentation of applications and workloads mean that security teams not only shrink the connections between compromised and uncompromised workloads, they shrink the number of places security investigators must look for incidents. 
  • Gain visibility. You cannot stop what you cannot see. If you are trying to protect the attack surface of your data center or cloud, you must be able to recognize and chart attack patterns in real time (watch malware in action). Being able to visualize and understand attacks accelerates the ability to make informed judgments and take action. Having critical visibility tools in place to understand the spatial component of computing will increase the effectiveness of incident response teams. 
  • Increase the speed to quarantine. Being able to see an attack is a great first step. Being able to quarantine the offending computing resources is just as critical. Time to discovery and remediation of compromised computing is one of the most critical factors in limiting the scope of damage from an attack. Removing the ability of a compromised application to infect other applications or exfiltrate data is a huge factor in limiting damage. 
  • Reduce the human middleware. I love people, but they are hell on computer processes. Miskeying IP addresses, closing ports and processes, or just misplacing information is unfortunate in most computing actions but potentially lethal in security. Increasingly, software intelligence that is based on algorithms and machine intelligence will play a huge role in dealing with the speed and scope of escalations.
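The last two rules, fast quarantine and less human middleware, can be illustrated with a minimal sketch. This is a hypothetical example, not an Illumio API: the function name, rule format, and addresses are invented. The point is that validating operator input in software catches the miskeyed-IP failure mode before it ever reaches an enforcement point, and a deny-by-default rule expresses the quarantine idea.

```python
import ipaddress

def build_quarantine_rule(workload_ip: str, responder_ip: str) -> dict:
    """Validate operator input and emit a deny-by-default quarantine rule.

    Hypothetical sketch: software validation removes the miskeyed-IP
    failure mode that hand-edited firewall rules are prone to.
    """
    # ipaddress.ip_address raises ValueError on a malformed address,
    # so a typo fails loudly here instead of silently mis-targeting.
    target = ipaddress.ip_address(workload_ip)
    responder = ipaddress.ip_address(responder_ip)
    return {
        "match": str(target),
        "default": "deny",               # cut lateral movement and exfiltration
        "exceptions": [str(responder)],  # incident-response host only
    }

rule = build_quarantine_rule("10.0.4.17", "10.0.9.5")
print(rule["default"])
```

A miskeyed address such as "10.0.4.999" is rejected by the validator rather than being pushed to an enforcement point, which is exactly the class of human error the fifth rule targets.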

The cat-and-mouse game between security staff and hackers will not change anytime soon. How we escalate and deal with cyber incursions must.

Topics: Adaptive Security, Data Center Operations
