Data Center Operators Take the Lead with Software-Defined Segmentation

By Todd Bice | Managed Services News

Dec 31

Microsegmentation solutions can be a market differentiator, helping data centers limit damage in a breach.

GuardiCore's Todd Bice


For operators of multitenant data centers, the segmentation, or isolation, of computing environments isn’t just important; it’s fundamental to their operating model. Done right, it delivers lower costs, operational efficiencies and reduced risk. Moreover, with cutting-edge, software-defined segmentation technology (microsegmentation), there’s an opportunity to drive more core data center services while becoming stickier with customers and establishing new service capabilities and revenue streams. It may sound too good to be true, but it isn’t. Here’s how.

Let’s start with the essential segmentation requirements, which are often operationally difficult and expensive to achieve. Looking across a data center provider’s own networks, here are a few scenarios where segmentation is needed and, if achieved efficiently, can significantly reduce costs while improving security for the provider and its customers:

  1. Segmentation within a service provider’s own operational network is foundational. Providers run their own internal applications and various operational technology (OT) systems, such as DCIM and building management systems (BMS), that require strong separation to limit the impact of a breach. Data center operational networks also typically contain many hard-to-patch systems (especially OT), which, if not properly segmented, introduces the risk of lateral movement that could disrupt operations and put every customer’s business at risk.
  2. A service provider’s own infrastructure must be separated from customer environments, while retaining the flexibility to share certain resources and block access to others. For example, the DMZ hosting the customer portal needs secure, narrowly scoped connectivity to data from operational networks (e.g., reading the power status) and from enterprise networks (e.g., reading billing information).
  3. Lastly, a service provider needs to prevent “cross-contamination” between clients’ respective environments, whether accidental or malicious. That includes preventing a successful breach or malware infection in one client’s environment from spreading to others. (The policy sketch following this list illustrates all three requirements.)
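
To make these requirements concrete, here is a minimal sketch of how a label-based, default-deny segmentation policy might express them. The environment labels, ports and rules are illustrative assumptions for this article, not any particular vendor’s policy model.

```python
# Minimal sketch of a label-based, default-deny segmentation policy.
# The environment labels, services and rules below are illustrative
# assumptions, not any vendor's actual policy model.

from dataclasses import dataclass


@dataclass(frozen=True)
class Workload:
    name: str
    environment: str  # e.g. "provider-ops", "provider-dmz", "tenant-a"


@dataclass(frozen=True)
class Rule:
    src_env: str
    dst_env: str
    dst_port: int  # service port the destination exposes


# Default-deny: only flows explicitly listed here are allowed.
ALLOW_RULES = [
    # Requirement 2: the customer portal in the DMZ may read power status
    # from the operational network and billing data from the enterprise network.
    Rule("provider-dmz", "provider-ops", 8443),
    Rule("provider-dmz", "provider-enterprise", 5432),
    # Requirement 1: workloads inside the operational network may talk to
    # each other only on the (assumed) DCIM service port.
    Rule("provider-ops", "provider-ops", 443),
    # Requirement 3: note there is no tenant-to-tenant rule, so
    # cross-contamination between tenants is blocked by default.
]


def is_allowed(src: Workload, dst: Workload, dst_port: int) -> bool:
    """Return True only if an explicit allow rule covers this flow."""
    return any(
        r.src_env == src.environment
        and r.dst_env == dst.environment
        and r.dst_port == dst_port
        for r in ALLOW_RULES
    )


if __name__ == "__main__":
    portal = Workload("customer-portal", "provider-dmz")
    dcim = Workload("dcim-server", "provider-ops")
    app_a = Workload("app-a", "tenant-a")
    app_b = Workload("app-b", "tenant-b")

    print(is_allowed(portal, dcim, 8443))  # True: explicitly allowed
    print(is_allowed(app_a, app_b, 443))   # False: tenants are isolated
    print(is_allowed(app_a, dcim, 443))    # False: tenants can't reach OT
```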

The Pitfalls of Conventional Approaches

The question is how to achieve segmentation most effectively, efficiently and economically. Historically, operators have relied on traditional firewalling or VLANs to separate environments within a multitenant architecture. Implementing and maintaining such measures, however, is arduous, highly manual, time-consuming and costly. Moreover, these techniques are by no means airtight and can leave a substantial attack surface exposed. Solutions designed for perimeter defense are particularly ill-suited inside the data center, where environments include a mix of virtual machines, hypervisors, containers and even cloud components, and new workloads spin up and down automatically.

Internal firewalls are expensive to acquire and complex to set up. They also interfere with the normal flow of traffic, altering patterns and creating circuitous “hairpins” that ultimately degrade system performance. As the industry is learning, firewalls simply weren’t designed for segmentation within the data center.

One of the most painful challenges of introducing segmentation to an existing, running production environment is that traditional methods require application downtime. Downtime for a business-critical application is costly, can only happen in specific time windows, and oftentimes isn’t possible at all.

An additional challenge worth noting is that creating any internal segmentation requires good knowledge of east-west application dependencies, and that insight usually doesn’t exist. Without a simple way to map application dependencies, separating a brownfield environment is extremely hard, and it is also very risky. The sketch below shows, at a high level, how such a map can be derived from observed traffic.
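
As a rough illustration, the following sketch aggregates hypothetical flow records into a dependency map. In practice the records would come from agents, flow logs or network taps; the workload names and ports here are assumptions for the example.

```python
# Hypothetical sketch: deriving an east-west dependency map from observed
# flows. The workload names and ports are assumptions for the example.

from collections import defaultdict

# Each observed flow: (source workload, destination workload, destination port)
observed_flows = [
    ("web-01", "app-01", 8080),
    ("web-02", "app-01", 8080),
    ("app-01", "db-01", 5432),
    ("app-01", "cache-01", 6379),
]

# Aggregate flows into a map of who talks to whom, and on which ports.
dependencies = defaultdict(set)
for src, dst, port in observed_flows:
    dependencies[src].add((dst, port))

# The resulting map is the evidence base for writing segmentation rules
# without breaking the application.
for src, targets in sorted(dependencies.items()):
    for dst, port in sorted(targets):
        print(f"{src} -> {dst}:{port}")
```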

The Modern Approach

For all these reasons, operators of shared environments are taking a closer look at …

About the Author
