Customers use Pivotal Cloud Foundry Elastic Runtime to deploy their production applications. Anyone who has experienced the power of
cf push is familiar with what a paradigm shift this is.
While software delivery has changed, some things remain the same. “The business” still has to navigate compliance rules, handle regulatory requirements, and mitigate security threats.
A top concern for the compliance-minded operator? How to create isolation between application workloads.
Until recently, the only way to achieve any sort of isolation in PCF was to deploy many Elastic Runtime foundations. Operators would create network partitions with firewalls. These prevented foundations from communicating with one another. This solution is quite simple, and results in a secure, compliant deployment.
The success of this method encouraged some customers to keep going, adding one foundation after another. A few deployed dozens of foundations, each with its own (redundant) control-plane. Every foundation added more cost and complexity. But there wasn't any other solution for isolation.
Happily, this changes in PCF 1.10 with Isolation Segments!
What is an Isolation Segment you ask? An Isolation Segment is a set of resources deployed in isolation, without its own control-plane.
First Things First: The Two Types of Isolation
Isolation is a confusing topic because it has more than one meaning. In some cases, operators care about where an application runs. And in others, they care about how network traffic reaches an application. Both of these types of isolation are important and have their own requirements. Let's talk about each.
Compute Isolation occurs when an app runs on compute resources isolated from other resources via a network partition or firewall.
For example, imagine a bank that requires compute isolation for apps authored by their savings and investment groups. Based on compliance requirements, they might choose to deploy two sets of compute resources isolated from one another.
By isolating resources in this way, the bank’s operations teams can make guarantees about where the savings group’s applications run. They can also ensure that the investment group’s applications never run on the same resources as the savings group’s.
Compute Isolation is about where applications run.
Routing Isolation in PCF
Routing Isolation is when traffic to a running application instance traverses a dedicated network path. This dedicated path is isolated from the network paths used by other application instances via a network partition or firewall rule.
Imagine a healthcare provider that needs to provide isolation between the networks that carry patient data and those that carry financial data. This provider might choose to deploy two sets of network routing resources isolated from one another.
Isolating these resources from one another allows the operators to make guarantees about how network traffic travels. These guarantees also ensure that patient data on the network would never be seen by applications that manage the financial data.
Routing Isolation is about how application traffic traverses the network.
Say Hello to the New Isolation Segment Tile!
The Isolation Segment tile offers both Compute Isolation and Routing Isolation. And these features are not mutually exclusive. Operators can mix and match them to fit their needs.
The mechanism for Compute Isolation is the deployment of Diego cells earmarked for this purpose. The Isolation Segment tile deploys a user-specified number of Diego cells that will connect to, and receive workloads from, the shared Elastic Runtime control-plane.
For Routing Isolation, the tile offers a set of dedicated routers. These routers can be configured with their own TLS certificates. And they can be attached to their own load balancer, which allows inbound traffic to be completely isolated at the routing tier.
Ready to see Isolation Segments in action? Let's discuss a common architectural pattern: “hub and spoke isolation.”
A hub-and-spoke topology (sometimes called a “star network”) is a common and well-understood network architecture. In this model, a central "hub" node maintains connections to many "spoke" nodes. The "spoke" nodes have no connection to one another. For the purposes of this example, I will deploy the Elastic Runtime as the “hub” node, and a couple of Isolation Segments as “spoke” nodes.
I’ve already set up and deployed the Elastic Runtime on Google Cloud Platform using the documentation. I’ll start with three subnets defined in my network.
Let's start by describing the network topology. First, we create a set of network subnets that will act as the "nodes" in our hub-and-spoke model. For the sake of this example, we will need three subnets: one hub subnet (banana-ert-subnet) and two spoke subnets (banana-spoke-a and banana-spoke-b). Let's specify a CIDR for each “spoke” subnetwork: 10.0.12.0/22 for “spoke-a”, and 10.0.16.0/22 for “spoke-b”.
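On GCP, creating those two spoke subnets might look like the following sketch. The network name and region here are assumptions for illustration; substitute the values from your own foundation.

```shell
# Sketch only: "banana-network" and the region are assumed names, not from the post.
gcloud compute networks subnets create banana-spoke-a \
  --network banana-network \
  --region us-central1 \
  --range 10.0.12.0/22

gcloud compute networks subnets create banana-spoke-b \
  --network banana-network \
  --region us-central1 \
  --range 10.0.16.0/22
```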
Now, we connect our three "node" subnets. These connections come in the form of firewall rules. We create rules that allow each "spoke" to route traffic to and from the "hub". We can do this by specifying that VMs deployed into “spoke-a” only accept communication from the “hub” and “spoke-a” subnetworks, and likewise for “spoke-b”.
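As a sketch, the firewall rules could look like this. The network name, target tags, and the hub CIDR are assumptions (the ERT subnet range is environment-specific).

```shell
# Sketch only: network name, tags, and HUB_CIDR are placeholders for this example.
HUB_CIDR=10.0.0.0/22   # the banana-ert-subnet range in your environment

# Allow spoke-a VMs to receive traffic only from the hub and from spoke-a itself.
gcloud compute firewall-rules create banana-spoke-a-allow \
  --network banana-network \
  --allow all \
  --source-ranges "${HUB_CIDR},10.0.12.0/22" \
  --target-tags spoke-a

# Repeat for spoke-b with its own CIDR.
gcloud compute firewall-rules create banana-spoke-b-allow \
  --network banana-network \
  --allow all \
  --source-ranges "${HUB_CIDR},10.0.16.0/22" \
  --target-tags spoke-b
```

Because neither rule lists the other spoke's CIDR as a source, traffic between "spoke-a" and "spoke-b" is denied by default.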
Now that we have our subnets, we’ll create a set of load balancers for each one. These load balancers route external requests to the components deployed in their respective subnet. We can use DNS records to create namespaces for each set of load balancers. Let’s start by creating a set of instance groups for each load balancer. This will allow traffic to be routed to the right VMs.
We will need 3 groups per load balancer, giving us 6 groups in total.
Finally, we can connect these instance groups to our load balancers.
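A rough sketch of the instance-group setup for one spoke follows. The zone list and the backend-service name are assumptions; the real names depend on how you created the load balancers.

```shell
# Sketch only: zones and the backend-service name are assumed for this example.
for ZONE in us-central1-a us-central1-b us-central1-c; do
  # One unmanaged instance group per zone for the spoke-a router VMs.
  gcloud compute instance-groups unmanaged create "banana-spoke-a-routers-${ZONE}" \
    --zone "${ZONE}"

  # Attach each group as a backend of the spoke-a load balancer.
  gcloud compute backend-services add-backend banana-spoke-a-http-lb \
    --instance-group "banana-spoke-a-routers-${ZONE}" \
    --instance-group-zone "${ZONE}" \
    --global
done

# Repeat with "spoke-b" names for the second load balancer (6 groups in total).
```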
Next, we use the Isolation Segment Tile replicator to create two Isolation Segments called "spoke-a" and "spoke-b". The replicator is a handy utility that allows you to create multiple instances of the tile. (Download the binary from PivNet.) Then, we can manage each Isolation Segment independently with Ops Manager. Here is the command to “replicate” the “spoke-a” Isolation Segment. You can repeat this command, replacing “spoke-a” with “spoke-b”, to get two distinct Isolation Segments.
$ replicator \
-name "spoke-a" \
-path /Users/pivotal/Downloads/p-isolation-segment.pivotal \
-output /Users/pivotal/Downloads/p-isolation-segment-spoke-a.pivotal
After the tiles have been replicated, we can upload and stage them on our Ops Manager VM.
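If you prefer to script the upload and staging rather than use the Ops Manager UI, the om CLI can do it. This is a sketch only: the Ops Manager URL, credentials, product name, and version below are all placeholders.

```shell
# Sketch only: target, credentials, product name, and version are placeholders.
OM="om --target https://opsman.example.com --username admin --password secret"

${OM} upload-product \
  --product /Users/pivotal/Downloads/p-isolation-segment-spoke-a.pivotal

${OM} stage-product \
  --product-name p-isolation-segment-spoke-a \
  --product-version 1.10.0
```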
Now that we have two Isolation Segments, we can deploy each into its matching subnet: banana-spoke-a and banana-spoke-b. Start by creating a network for “spoke-a” in Ops Manager. You can repeat that process with the subnet and CIDR details for “spoke-b”. Before we deploy, we want to assign the correct network to each Isolation Segment and attach the banana-spoke-a and banana-spoke-b load balancers to the router VMs in their respective Isolation Segments. Remember that machines deployed into each of these subnets will not be able to communicate with the VMs in the other "spoke" subnet.
With that configuration complete, we can deploy both Isolation Segments.
Cloud Controller Configuration
After everything is deployed, we can create organizations and spaces in PCF that map to our "hub" and "spoke" resources. Let's create two organizations: "spoke-a", and "spoke-b". In each organization, we will also create a space with a name matching the organization.
$ cf create-org spoke-a
Creating org spoke-a as admin...
Assigning role OrgManager to user admin in org spoke-a ...
TIP: Use 'cf target -o spoke-a' to target new org
$ cf create-space spoke-a -o spoke-a
Creating space spoke-a in org spoke-a as admin...
Assigning role RoleSpaceManager to user admin in org spoke-a / space spoke-a as admin...
Assigning role RoleSpaceDeveloper to user admin in org spoke-a / space spoke-a as admin...
TIP: Use 'cf target -o "spoke-a" -s "spoke-a"' to target new space
$ cf target -o spoke-a -s spoke-a
Next, for each organization, we create a domain that maps to the respective load balancer DNS record: spoke-a.example.com and spoke-b.example.com.
$ cf domains
Getting domains in org spoke-a as admin...
name status type
$ cf create-domain spoke-a spoke-a.example.com
Creating domain spoke-a.example.com for org spoke-a as admin...
Finally, the last couple of steps! We’ll now “introduce” the two Isolation Segments, "spoke-a" and "spoke-b", to the "hub" control-plane. This is done via a series of cf CLI commands. This process tells the Cloud Foundry API that applications pushed to a given org and space should be deployed to either the "hub", "spoke-a", or "spoke-b" Diego cells.
$ cf create-isolation-segment spoke-a
Creating isolation segment spoke-a as admin...
$ cf enable-org-isolation spoke-a spoke-a
Adding entitlement to isolation segment spoke-a from org spoke-a as admin...
$ cf set-space-isolation-segment spoke-a spoke-a
Adding entitlement to isolation segment spoke-a from space spoke-a as admin...
You can repeat these three commands with “spoke-b” to wire up the second segment.
And with that, we now have two Isolation Segments using a shared Elastic Runtime control-plane, with compute and routing isolation.
If we deploy an application into the "hub" organization, it will be placed on the ERT "hub" Diego cells. This action also creates a route on the hub.example.com domain, making the app reachable through the "hub" load balancer. The same is true for an application deployed to the "spoke-a" or "spoke-b" organizations: its instances will run on the Diego cells for "spoke-a" or "spoke-b", and its traffic will route through the respective load balancer and routers to those application instances.
The end result? We have created isolation between the "spoke-a" and "spoke-b" segments. This configuration implements compute isolation because the application instances in each segment run on dedicated Diego cells that cannot communicate with one another. It also implements routing isolation as the network paths from outside the platform travel through dedicated load balancers and routers to reach the application instances.
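To make that concrete, a hypothetical smoke test might look like this. The app name is an example, and the domain comes from the create-domain step above.

```shell
# Hypothetical check: "my-app" is an example name, not from the post.
cf target -o spoke-a -s spoke-a
cf push my-app -d spoke-a.example.com   # lands on the spoke-a Diego cells

# The route resolves only through the spoke-a load balancer and routers:
curl https://my-app.spoke-a.example.com
```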
Achieving Isolation with the New PCF Runtime for Windows Tile
The new Windows Runtime tile is very similar to the Isolation Segment tile. It too provides a set of isolated resources for your apps. These resources run on a Windows Server stemcell instead of a Linux stemcell.
There is a big difference though: the Windows Runtime tile by itself only provides Compute Isolation. There is no dedicated set of routers included in the tile. But don’t despair!
You can achieve both compute isolation and routing isolation for your Windows applications. Simply deploy the Windows Runtime and the Isolation Segment tiles in concert.
The Windows Runtime handles compute isolation while the Isolation Segment provides routing isolation.
To get started, set the Resource Configuration instance count for the Diego cells in the Isolation Segment to zero. This enables you to deploy an Isolation Segment that is purely designed to provide you with Routing Isolation. You can then use those Routers to send traffic to your Windows Runtime Diego cells!
Isolation Segments help you address your most important compliance obligations. And once you’ve created the right level of isolation for your apps, developers and operators are free to innovate with PCF as they always have.
Regulations and compliance are facts of life for enterprise IT. Your chosen platform for modern apps should embrace this reality. Best practices should be built into the platform. That’s our goal with Pivotal Cloud Foundry. Isolation Segments are the latest example of this philosophy.
Ready to learn more? Check out the documentation below and get the latest bits for Isolation Segments.
About the Author: Ryan Moran