The need for micro-segmentation as a security solution is not new. Security historians like myself trace its roots to the days when organizations began deploying a separate DMZ network per application, but recent advances in computing, together with adversaries' growing ability to break into those systems, have made micro-segmentation a must-have technology. While the demand for micro-segmentation is solid and clear, the enterprise compute environment keeps changing and will continue to change: from traditional virtualization (circa 2005) to SDN (the 2010s) to public clouds, multi-cloud, microservices, and back to a managed version of everything running inside each company's own data center, operated by cloud providers. So how do you implement micro-segmentation in such a world?
Micro-Segmentation Deployment Models
Gartner recently updated its micro-segmentation evaluation factors document, "How to Use Evaluation Factors to Select the Best Micro-Segmentation Model," in which it lists four different models for micro-segmentation without making a clear recommendation on which is best. Answering that question means looking at the limitations of each model and recognizing what the future looks like for your IT needs and for dynamic hybrid-cloud data centers. While there are four options, compute and IT trends make it clear that the only viable architecture is the one that lets customers deploy micro-segmentation at scale in any environment they have today or will have in the future. That is the overlay model. But first, let me explain why the other models are not adequate for most enterprise customers.
Built-In Native-Cloud Controls are Inherently Inflexible
The native model uses the built-in tools provided with a virtualization platform, hypervisor, or infrastructure. Solutions such as VMware NSX-V or a cloud provider's security groups offer Layer 3/Layer 4 ACLs. Relying on a less-than-capable built-in function is inherently limited and inflexible, especially if you use more than one cloud or virtualization technology. Even for businesses using just one hypervisor provider, this model ties them to that one service, because the micro-segmentation policy does not move with them when they switch to a new provider. In fact, vendors that used to promote native controls for micro-segmentation have realized that customers are transforming and have had to develop new overlay-based products.
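To make the lock-in concrete, here is a minimal sketch of what a native Layer 3/Layer 4 rule looks like when expressed as an AWS security group via boto3. The group IDs, port, and description are invented placeholders. The point is that the policy is written entirely in provider-specific terms, so none of it carries over to a different hypervisor or cloud.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow the app tier (one security group) to reach the database tier
# (another security group) on TCP 5432. Both group IDs are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # database tier
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-0fedcba9876543210",  # app tier
                    "Description": "app tier to database",
                }
            ],
        }
    ],
)
```

A rule like this says nothing about the application or process it protects; it only references AWS-specific constructs, which is exactly why it cannot follow the workload to another platform.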
Third-Party Firewalls Provide Limited Visibility & Consistency
This model relies on virtual firewalls offered by third-party vendors that are not integrated into the infrastructure itself, and it can force enterprises to change their entire network topology to work around network-layer design limitations. Known issues include traffic that shares the same VLAN, which can remain hidden or uncontrolled, as well as encrypted traffic and proprietary applications. All of these problems damage visibility.
On top of this, any reliance on third-party infrastructure can create bottlenecks. If you are looking for a consistent solution across varied architectures, and you want to be able to secure the container layer, this model will always be insufficient.
Choosing a Hybrid Model Adds Unnecessary Complexity
Some enterprises try to sidestep the downsides of both models above with a hybrid strategy: they use third-party firewalls to manage north-south traffic, which gives them the flexibility they need for hybrid clouds, and rely on native controls for east-west traffic inside the data center.
However, when businesses choose hybrid micro-segmentation, they are merging two models that are each inherently limited. Both approaches still require multiple management consoles and do not always share the same data model. On top of this, companies face complicated and lengthy setup and maintenance. The future is faster and more dynamic than ever, with workloads and applications automated, auto-scaled, and migrated across multiple environments all the time; under these circumstances it is impossible to ensure visibility and control with a hybrid choice. Enterprises need one solution that works on its own, not a hybrid of two models that are limited individually and insufficient together.
The Future of Hybrid Data Center Security: Micro-Segmentation Using the Overlay Model
The overlay model is built from the start with future-proofing in mind. Gartner describes it as a solution in which a host agent or software runs on the workload itself and enforces policy there, using agent-to-agent communication rather than network zoning. It is especially useful when you are trying to implement micro-segmentation but do not own the infrastructure.
While the third-party firewall model is inherently unscalable, agents do not rely on choke points, so the overlay model can scale as far as your business needs. It covers all environments and infrastructures, providing visibility and control down to the process level, even for microservices and container technology. Because it is entirely agnostic to operational environments and differences in infrastructure, your business can create a micro-segmentation policy that suits your context today and tomorrow, whether that means bare metal, cloud, virtual machines, microservices, or whatever technology comes next. Without an overlay model, your business could be out of date in a matter of months or years, with no assurance of supporting future use cases or remaining competitive in the industry.
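As a purely illustrative sketch (not Guardicore's or any vendor's actual policy format), the toy example below shows the idea behind an overlay policy: rules are written in terms of workload labels and processes rather than IP addresses, VLANs, or provider constructs, so the same decision logic applies on the host wherever the workload happens to run. The labels, rule schema, and matching logic are all assumptions made up for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    labels: frozenset   # e.g. {"env:prod", "app:billing", "role:db"}
    process: str        # process handling the connection on the host

@dataclass(frozen=True)
class Rule:
    source_labels: frozenset   # labels the source workload must carry
    dest_labels: frozenset     # labels the destination workload must carry
    dest_port: int
    allowed_process: str       # process-level control on the destination

# One rule: billing web servers may talk to billing databases on 5432,
# and only to the postgres process.
POLICY = [
    Rule(frozenset({"app:billing", "role:web"}),
         frozenset({"app:billing", "role:db"}),
         dest_port=5432,
         allowed_process="postgres"),
]

def allowed(src: Workload, dst: Workload, port: int) -> bool:
    """The decision a local agent would make, independent of whether the
    workloads run on bare metal, VMs, containers, or cloud instances."""
    return any(
        rule.source_labels <= src.labels
        and rule.dest_labels <= dst.labels
        and rule.dest_port == port
        and rule.allowed_process == dst.process
        for rule in POLICY
    )

web = Workload(frozenset({"env:prod", "app:billing", "role:web"}), "gunicorn")
db = Workload(frozenset({"env:prod", "app:billing", "role:db"}), "postgres")
print(allowed(web, db, 5432))   # True: matches the billing web -> db rule
print(allowed(web, db, 22))     # False: nothing permits SSH to the database
```

Because the rule references only labels and processes, migrating the database from a VM to a container or to another cloud does not change the policy; only the labels attached to the new workload need to be correct.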
Micro-segmentation is now widely recognized as a must-have strategy for risk reduction in complex and hybrid IT environments. Choosing a limited model can mean a lot of hard work that still leaves dangerous gaps in your security strategy when all is said and done. Guardicore Centra champions the overlay model, supporting a granular and flexible approach to workload security as well as an all-in-one solution for risk reduction.