The Anatomy of Secure, Modern Infrastructure at a Big Bank: Moving to Zero Trust Networking with Pivotal Application Service & VMware NSX-T

November 2, 2018 Fadzi Ushewokunze

[Editor’s Note: This is the third in a series of posts describing how banks are modernizing with Pivotal Cloud Foundry. The first post detailed identity management. The second discussed how to secure backing services. In the third installment, we review infrastructure security considerations.]

A bank’s most important asset is customer data. If you work on an InfoSec team, your task is to protect these “crown jewels” of the business while enabling frictionless access to that data in your modern software architectures. Encrypting data at rest and in transit is a given. The harder questions cut to the core of the task at hand:

  • How can you reduce the blast radius of a network attack?

  • How do you enable developer self-service, without compromising network security policies?

  • How do you isolate workloads?

The pattern we’re seeing with many top banks is the policy of “Zero Trust.” We’ve talked about Zero Trust before, but it warrants a closer look.

When Thinking About the Zero Trust Model, Forget Notions of “Trusted” and “Untrusted”

Zero Trust is a security concept centered on the belief that an organization should not automatically trust anything inside or outside its walls. This is a break from the traditional “trusted” and “untrusted” networking model. And it’s long overdue, since attackers have sophisticated ways to penetrate networks previously thought to be trusted. These days, you need to assume all traffic is suspect!

With Zero Trust, authentication (AuthN) and authorization (AuthZ) come before access. In practical terms, that means you should require AuthN and AuthZ for each entity looking to establish communication, and grant access only after the entity has successfully passed both gates. To move to a Zero Trust model, you have to think about identity and policy as well as networking.
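
Here’s a minimal sketch of that flow using the cf CLI and curl. The endpoint URL is hypothetical; the point is that the caller first authenticates to obtain a token, and the target service then authorizes the request (by validating the token and its scopes) before returning any data.

# AuthN: get an OAuth bearer token for the currently logged-in user from UAA
# (recent cf CLI versions print just the token; older ones add extra output you may need to trim)
$ TOKEN="$(cf oauth-token)"

# AuthZ: call a hypothetical internal API that validates the token and checks its
# scopes before responding; without a valid, authorized token the request is rejected
$ curl -H "Authorization: ${TOKEN}" https://reports.example-bank.internal/api/balances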

You can begin to operationalize the zero-trust mindset with micro-segmentation. So let’s start there.

Micro-segmentation: Control Access at the Microservices Level

If you want to control access for microservices, you need to govern network access at that granular level. Micro-segmentation to the rescue! It’s the most popular way to carve up your network into distinct security segments. In fact, the demand for micro-segmentation helps explain the rise of network virtualization.

Micro-segmentation with Pivotal Application Service and NSX-T

Micro-segmentation allows you to quickly and easily separate physical networks into hundreds or thousands of logical micronetworks, or microsegments. In the world of Pivotal Application Service, banks use network virtualization to logically divide the networks that underpin the platform. Platform operators create distinct security segments, down to the workload or application level. Then, the operator defines fine-grained access control between the workloads in each segment, even when those segments host protected data sources like RabbitMQ and MySQL!

Many big banks start by deploying VMware NSX-T. NSX-T is a network virtualization solution that helps you securely manage microservices at both the application container level and at the VM level. You don’t need NSX-T to run PAS, but it sure makes PAS better! Advanced networking, load balancing, and of course micro-segmentation get a lot easier with NSX-T.

So how do you implement micro-segmentation? Let’s dive into the technical details.

Administrators can define a security policy (see the sketch after this list) based on important factors like:

  • the network that will run the workload

  • what kind of data it will need to access

  • how important or sensitive the application is
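
As an illustration, a policy like that might ultimately be pushed to the NSX-T distributed firewall through its REST API. This is only a hedged sketch: it assumes the NSX-T 2.x Management Plane API, the NSX Manager hostname, group names, and IDs are placeholders, and you should verify the exact endpoint and payload fields against the API guide for your NSX-T version.

# Allow a hypothetical "payments-frontend" NSGroup to reach a "payments-db" NSGroup,
# inside an existing distributed firewall section
$ curl -k -u admin -X POST \
    "https://nsx-manager.example.com/api/v1/firewall/sections/<section-id>/rules" \
    -H "Content-Type: application/json" \
    -d '{
          "display_name": "payments-frontend-to-payments-db",
          "action": "ALLOW",
          "direction": "IN_OUT",
          "sources":      [ { "target_type": "NSGroup", "target_id": "<frontend-nsgroup-id>" } ],
          "destinations": [ { "target_type": "NSGroup", "target_id": "<db-nsgroup-id>" } ]
        }'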

NSX-T runs a Tier-0 (T0) router and multiple Tier-1 (T1) routers. Each connects to a network within Pivotal Application Service.

NSX-T creates a network topology per Cloud Foundry Organization (Org) such that each Org gets one Tier-1 router. It also creates logical switches per Space, and attaches them to the Org T1 router. Every Application Instance (i.e. container) has distributed firewall rules applied to it. These policies are defined in the cf-networking policy server.

Let’s take a step back and think about how this is different from PCF without NSX-T. By default, PCF installs with a default Application Security Group (ASG) that allows apps running on your deployment to send traffic to almost any IP address. (ASGs are collections of egress rules that specify the protocols, ports, and IP address ranges where app instances send traffic. ASGs also map to the distributed firewall.) Out-of-the-box, apps are not blocked from initiating connections to most network destinations. The platform administrator must take action to update the ASGs with a more restrictive policy.
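
For instance, an operator might replace that permissive default with an ASG that only allows egress to a specific database subnet. Here’s a minimal sketch with the cf CLI; the ASG name, CIDR, port, org, and space below are made-up values, so substitute your own.

# rules.json: only allow egress to a (hypothetical) MySQL services subnet on port 3306
$ cat rules.json
[
  {
    "protocol": "tcp",
    "destination": "10.0.16.0/24",
    "ports": "3306",
    "description": "Allow apps to reach the MySQL services subnet only"
  }
]

# Create the ASG and bind it to running apps in a specific org and space
# (cf CLI v6 syntax shown; newer CLIs use a --space flag). Apps must be
# restarted to pick up the new rules.
$ cf create-security-group restrictive-db-asg rules.json
$ cf bind-security-group restrictive-db-asg my-org my-space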

OK, back to our NSX-T scenario. When a developer pushes an app to a new Org for the first time, NSX-T creates a new T1 router and allocates an address range for the Org, on demand. Plenty of other things are also done automatically for the developer (illustrated after this list):

  • App-to-app communication uses the logical switch, featuring container networking (described below)

  • App-to-app policy is enforced by the distributed firewall

  • Network policies for apps reaching on-platform services are enforced by the distributed firewall

  • Policies for apps reaching off-platform services (legacy databases like Oracle) are enforced by the distributed firewall
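
Here’s roughly what this looks like from the developer’s side. The org, space, and app names are placeholders, and none of the commands are NSX-specific; the topology and firewall wiring described above happens behind the scenes when the platform is integrated with NSX-T.

# First push into a brand-new Org: the platform (via its NSX-T integration)
# creates the T1 router, logical switch, and firewall scaffolding on demand
$ cf create-org payments-org
$ cf create-space dev -o payments-org
$ cf target -o payments-org -s dev
$ cf push payments-frontend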

Administrators can then use the NSX-T console to apply further network rules, and developers work within the constraints of those rules. In other words, your developers can innovate quickly without compromising security. The security you require “just works” in the background, because the security team manages firewall rules at the platform level. This makes it nearly impossible for a developer to violate enterprise policies!

Make Your Apps Secure By Default

Operators use container networking in PAS to lock down how app instances talk to each other.

Container networking enables you to create policies for communication between app instances. This feature assigns a unique IP address to each app container and provides a direct IP path between app instances.

These capabilities are enabled through a “batteries included” pluggable network stack based on the Container Networking Interface (CNI). The CNI specification is an industry-standard API that container runtimes use to call third-party networking plugins. NSX-T is fully CNI-compatible: it provides a CNI plugin that slots into this stack to deliver container networking.
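
To make the CNI idea concrete, here’s what a CNI network configuration file generally looks like. This is a generic example of the spec’s JSON format (using the standard bridge and host-local plugins), not the actual configuration the NSX-T plugin installs, which differs in plugin name, location, and fields.

# A generic CNI network config; the runtime reads files like this from
# /etc/cni/net.d/ and invokes the named plugin binary for each container
$ cat /etc/cni/net.d/10-example.conf
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}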

That’s a fairly detailed look at how NSX-T supports Zero Trust. Now let’s step back and talk about NSX-T more broadly, and how it helps PCF in general.

Why Banks Turn to NSX for Network Security

The great thing about NSX-T? It gives you a single security policy framework to manage, monitor and troubleshoot networking and security on cloud native platforms like Cloud Foundry. (As it happens, NSX-T is also really useful with Kubernetes. Kubernetes is powerful tech, but it does not have a comprehensive SDN solution. And it doesn’t offer easy multi-tenancy out of the box. PKS solves both challenges with NSX-T.)

Banks need to isolate different workloads. At the highest level, you may want to divide your infrastructure into different environments (DEV, TEST, PROD). From there, you could deploy PAS instances on each.

You’ll also have a number of networks in each PAS instance. These three networks will be part of your platform:

  • The management network is where you deploy the VMs that manage your platform

  • The runtime network is where your application containers reside (or, more specifically, the VMs that host those containers)

  • The services network is where tiles like RabbitMQ and MySQL are deployed


Each of these networks is firewalled off from the others. Sound complicated? Not with NSX-T! The creation and configuration of these networks can be done programmatically using NSX-T’s APIs.
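
For example, an operator could script the creation of one of these networks against the NSX Manager. Again, this is a hedged sketch that assumes the NSX-T 2.x Management Plane API; the manager hostname, switch name, and transport zone ID are placeholders, and the payload should be checked against your version’s API guide.

# Create a logical switch for the PAS services network
$ curl -k -u admin -X POST \
    "https://nsx-manager.example.com/api/v1/logical-switches" \
    -H "Content-Type: application/json" \
    -d '{
          "display_name": "pas-services-ls",
          "transport_zone_id": "<overlay-transport-zone-id>",
          "admin_state": "UP",
          "replication_mode": "MTEP"
        }'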

What about the developer experience? Remember, the operators have already set policy at the platform level. The developer can use PAS’s native network policy language to define their application security policy as part of their normal workflow. They don’t have to file a ticket with the network team, or use the NSX-T console to set their app security and networking specs. Self-service for the win!

fadzi@virgo ~ $ cf add-network-policy frontend --destination-app backend --protocol tcp --port 8081

Adding network policy to app frontend in org msci / space test as admin…

OK

 

fadzi@virgo ~ $ cf remove-network-policy frontend --destination-app backend --protocol tcp --port 8080-8090

Removing network policy for app frontend in org msci / space test as admin…

OK

You might be thinking “This sounds good. But containers and their IP addresses are ephemeral. They can change at any time in their life cycle. How can you possibly have any notion of a ‘firewall rule’?” It’s a great question that gets to the core of software-defined networking.

With NSX-T, each app instance (in PAS) or pod (in Kubernetes) gets its own IP address, independent of its VM host. But that doesn’t mean firewall rules need to be managed using IPs. That would be nearly impossible with containers at scale, given that container IPs are ephemeral. Instead, NSX does something clever: it has context at the app level via the CNI plugin. Management is done natively through the API, from the cf CLI or kubectl. This experience sure beats managing thousands of rules in a console!
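
You can see this app-level (rather than IP-level) view directly from the CLI. Listing the policies for the frontend app from the transcript above shows source and destination by app name, not by container IP; the output below is illustrative, and its exact format varies by cf CLI version.

# List the network policies defined in the targeted space; entries are keyed
# by app name, never by ephemeral container IP
$ cf network-policies
source     destination   protocol   ports
frontend   backend       tcp        8081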

Just a few more points before we move on:

  • NSX-T treats containers like any other endpoint. You can enable container-to-container L3 networking via the container networking interface.

  • NSX-T enables micro-segmentation down to the level of individual containers with a distributed firewall. This helps you create secure microservices for cloud-native applications.

  • Need to keep tabs on network traffic? NSX-T includes management and monitoring tools (like Traceflow) to show you the network traffic between any endpoints, including container-to-container communication paths.

PAS, Container Networking, and NSX-T Help You Move to a Zero Trust Model

By now, you know how the power trio of PAS, container networking, and NSX-T can help you gain more security and control over your app-to-app traffic. You can see how these capabilities combine to block all traffic by default. Each connection is discretely and purposefully enabled.

With PAS’s container-to-container networking, you get a network fabric that supports firewall rules at an application level.

So as we near the halfway point of our secure, modern bank series, our stack is coming into focus. In our next post, we’ll review how to achieve compliance at the platform and application levels.

Want to know more about why PAS and NSX are “better together”? Check out these resources!

About the Author

Fadzi Ushewokunze

Fadzi Ushewokunze is a Senior Platform Architect at Pivotal, working with financial services companies on Wall Street. Fadzi has worked in the FinTech industry for more than 10 years, gaining experience in security, software development, and digital transformation. His passion for FinTech can be traced back to the Asia Pacific region, where he spent significant time working for RSA on security transformation.
