
5 Reasons DevOps Loves Micro-Segmentation

When infrastructure and security teams want to introduce micro-segmentation, the application community isn’t so much opposed to tighter security as they are sensitive to the speed and safety of the proposed changes. The safety concerns are satisfied through adequate testing. But for many organizations, there’s a time expectation attached to adjusting firewall or segmentation policy that’s measured in days or weeks. For a DevOps team, this kind of timeline is almost incomprehensible. Server builds happen in seconds. Whole pods deploy in minutes. Bulk API operations beat typing complex data by hand every day of the week.

The good news is that micro-segmentation has five significant benefits for DevOps teams.

1. Micro-segmentation runs off shared metadata

Traditional firewall rules use IP addresses, but DevOps automation runs on metadata and abstractions. Micro-segmentation abstracts segmentation policy into labels or tags. These labels are not created in the micro-segmentation policy engine; instead, they derive from standard enterprise sources of truth: the CMDB, hostname conventions, IP address management systems, and other programmatic sources.

When segmentation runs off the same metadata sources as the application automation, it is easy to build segmentation into automated workflows.
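
As a rough illustration, the sketch below derives labels from a hostname naming convention and from CMDB fields. The convention, the field names, and the label keys are all hypothetical; the real sources of truth and mappings are whatever your organization already maintains.

```python
# A minimal sketch of deriving segmentation labels from existing metadata.
# The hostname convention (env-app-role-NNN) and the CMDB field names are
# hypothetical; substitute your organization's actual sources of truth.
import re

HOSTNAME_PATTERN = re.compile(
    r"^(?P<env>prod|stg|dev)-(?P<app>[a-z0-9]+)-(?P<role>web|app|db)-\d+$"
)

def labels_from_hostname(hostname: str) -> dict:
    """Derive env/app/role labels from a hostname naming convention."""
    match = HOSTNAME_PATTERN.match(hostname)
    if not match:
        raise ValueError(f"{hostname!r} does not follow the naming convention")
    return {"env": match["env"], "app": match["app"], "role": match["role"]}

def labels_from_cmdb(record: dict) -> dict:
    """Derive labels from a CMDB record (field names are illustrative)."""
    return {
        "env": record["environment"],
        "app": record["application"],
        "owner": record["owning_team"],
    }

if __name__ == "__main__":
    print(labels_from_hostname("prod-payments-db-003"))
    # {'env': 'prod', 'app': 'payments', 'role': 'db'}
```

Because the labels come from systems the automation already reads, no separate "security inventory" has to be built or kept in sync by hand.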

2. Micro-segmentation delivers dynamic policy automation

Once the segmentation policy is extracted into shared metadata, a micro-segmentation policy engine does all the heavy lifting of calculating, distributing, and converging the resulting rules. This effectively turns micro-segmentation into an automatable application feature that can be called just like any other application service.

Better yet, a quality micro-segmentation policy engine will track any changes to the underlying IP addresses or labels and automatically keep the desired policy in place. In this way, micro-segmentation becomes declarative and is no longer tied to the imperative need to specify individual rules. The automation states the desired policy, and the policy engine creates the needed rules and keeps them continuously up to date.
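
To make the declarative idea concrete, here is a minimal sketch assuming a toy data model of label-carrying workloads: the policy only ever references labels, and the concrete rules are recomputed from the current inventory, so a redeploy that changes an IP address changes the compiled rules but never the policy itself. None of this is any vendor's actual API; it only illustrates the shape of the computation a policy engine performs.

```python
# Toy model of declarative, label-based policy. The policy references labels
# only; a (greatly simplified) engine re-derives concrete IP rules whenever
# the workload inventory changes. All names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    ip: str
    labels: frozenset  # e.g. frozenset({("app", "payments"), ("role", "web")})

@dataclass(frozen=True)
class Policy:
    src_labels: frozenset
    dst_labels: frozenset
    port: int

def compile_rules(policy: Policy, workloads: list) -> set:
    """Expand one label-based policy into concrete (src_ip, dst_ip, port) rules."""
    srcs = [w for w in workloads if policy.src_labels <= w.labels]
    dsts = [w for w in workloads if policy.dst_labels <= w.labels]
    return {(s.ip, d.ip, policy.port) for s in srcs for d in dsts}

# The declarative intent never changes ...
policy = Policy(
    src_labels=frozenset({("app", "payments"), ("role", "web")}),
    dst_labels=frozenset({("app", "payments"), ("role", "db")}),
    port=5432,
)

# ... but the rules are recomputed whenever the inventory does.
inventory = [
    Workload("10.0.1.10", frozenset({("app", "payments"), ("role", "web")})),
    Workload("10.0.2.20", frozenset({("app", "payments"), ("role", "db")})),
]
print(compile_rules(policy, inventory))

# A redeploy that lands the database on a new IP only changes the inventory:
inventory[1] = Workload("10.0.2.99", frozenset({("app", "payments"), ("role", "db")}))
print(compile_rules(policy, inventory))
```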

3. Micro-segmentation inserts easily into existing run-books

Leading micro-segmentation vendors can point to fully automated deployments in the 40,000 to 120,000 workload range. Inside these fully automated data centers, it is common for the entire infrastructure to re-instantiate every few weeks, often in a matter of minutes. Micro-segmentation can be sequenced into application and pod automation so that all required network connectivity is available when needed.

During even large scale data center reconfigurations, the micro-segmentation policy engine keeps every workload and every container aligned with the specified policy. When segmentation instantiates quickly and policy distribution occurs in real time, DevOps run-books flow smoothly and seamlessly, even while tight micro-segmentation policies protect each application service.
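
The sketch below shows where such a step might sit in a run-book. The SegmentationClient class and its methods are hypothetical placeholders for whatever API a vendor actually exposes; the point is the ordering, with policy provisioned and confirmed around the workload deployment rather than days later.

```python
# A minimal sketch of sequencing segmentation into a deployment run-book.
# SegmentationClient and its methods are hypothetical stand-ins for a real
# policy-engine API; only the sequencing is the point.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("runbook")

class SegmentationClient:
    """Placeholder for a real policy-engine API client."""

    def provision_policy(self, labels: dict) -> None:
        log.info("policy provisioned for %s", labels)

    def confirm_converged(self, labels: dict) -> None:
        log.info("policy converged on all enforcement points for %s", labels)

    def remove_policy(self, labels: dict) -> None:
        log.info("policy removed for %s", labels)

def deploy_app(labels: dict) -> None:
    log.info("workloads deployed for %s", labels)

def run_book(labels: dict) -> None:
    seg = SegmentationClient()
    seg.provision_policy(labels)   # 1. segmentation is requested up front
    deploy_app(labels)             # 2. workloads come up with policy in place
    seg.confirm_converged(labels)  # 3. verify enforcement before go-live
    # On decommission, the same run-book calls seg.remove_policy(labels).

if __name__ == "__main__":
    run_book({"app": "payments", "env": "prod"})
```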
 

4. Micro-segmentation is location independent

Good automation code offers sufficient abstraction so complex tasks can be accelerated. Whether the application runs in the cloud or in the data center is largely unimportant if the automation is sufficiently abstracted.

Because micro-segmentation abstracts IP addressing away, location no longer matters for segmentation either. Half of the application can be in the cloud. It can move from one VPC to another. Physical location and addressing cease to matter. In this way, micro-segmentation delivers the same location and infrastructure independence that DevOps teams desire.
 

5. Micro-segmentation is application architecture independent

Some applications run on bare-metal servers, some run on virtual machines, and some run in containers. Some will soon migrate from one to another. Micro-segmentation works the same regardless of application architecture or deployment methodology.

A quality micro-segmentation solution supports containers and Kubernetes just as effectively as it supports a physical database server. The same policy will work even if half of the app is containerized and the other half remains on bare-metal. As with location, once the policy is sufficiently abstracted and the enforcement points remain available, micro-segmentation works across legacy, current, and next-generation application architectures.
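
As a small illustration of that abstraction, the sketch below matches a single label-based policy scope against bare-metal, VM, and Kubernetes pod enforcement points alike. The data structures are invented for the example, but they show why the deployment form factor never appears in the policy itself.

```python
# Minimal sketch: the same label-based policy scope matches workloads whether
# the enforcement point is a bare-metal host agent, a VM, or a Kubernetes pod.
# The data structures are illustrative only.
from dataclasses import dataclass

@dataclass
class EnforcementPoint:
    kind: str      # "bare-metal", "vm", or "k8s-pod"
    address: str
    labels: dict

POLICY_SCOPE = {"app": "payments", "role": "db"}  # no IPs, no platform details

def in_scope(point: EnforcementPoint, scope: dict) -> bool:
    """A workload is in scope if its labels contain the policy's labels."""
    return scope.items() <= point.labels.items()

fleet = [
    EnforcementPoint("bare-metal", "10.0.2.20", {"app": "payments", "role": "db"}),
    EnforcementPoint("k8s-pod", "10.42.7.3", {"app": "payments", "role": "db"}),
    EnforcementPoint("vm", "10.0.3.30", {"app": "payments", "role": "web"}),
]

for point in fleet:
    print(point.kind, point.address, "in scope:", in_scope(point, POLICY_SCOPE))
```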

For members of your DevOps team, micro-segmentation is the security strategy they have been waiting for. It works at speed and at scale, and it uses the same metadata and abstractions that are already in use for application automation.

Combined with a powerful policy engine, this creates a dynamic policy automation layer that makes segmentation a standard “service” to be automated into the application. Micro-segmentation can be baked into run-books to ensure that segmentation is available from instantiation through removal.

Because micro-segmentation decouples segmentation from infrastructure concepts like IP addresses, it offers the location independence and application architecture independence required for broad applicability. When security needs to move as fast as the DevOps team, micro-segmentation provides the necessary capabilities.

To learn more, download Bishop Fox's research report: Efficacy of Micro-Segmentation: Assessment Report.


