Pivotal Cloud Foundry 2.6, Now GA, Offers More Ways to Build, Run, and Wire Your Apps

June 24, 2019 Jared Ruckle

For software to deliver value, it first needs to make it to production. That requires three things: build, run, wire.

You have to build your source code into a deployment artifact, often using build pipelines. You need to run your code somewhere; ideally, a continuous delivery tool connects to a platform that eases scaling, high availability, telemetry, and so on. Finally, you need to wire your applications up to other components: a database or other dependent services.

Since its inception, Pivotal Cloud Foundry (PCF) has helped you perform the “build, run, wire” trifecta on any cloud.

PCF 2.6, now GA, continues this mission. These capabilities stand out:

  • Custom sidecar processes (beta). In Pivotal Application Service (PAS) 2.6, developers can run custom sidecar processes in the same container as their application. This simplifies development for all kinds of “wire” use cases, including proxy forwarding, client-side load balancing, timeouts, and retries.

  • Multi-cloud continuous delivery with Spinnaker. PCF now integrates nicely with the most popular CD tool, Spinnaker. Spinnaker 1.14 now supports several advanced CD scenarios with PCF. As a result, large development teams can more easily deploy to production to improve outcomes. Use Spinnaker with PAS as well as Enterprise PKS. (This integration is backed by community support.)

  • New permissions model in Concourse for PCF (coming soon). Concourse for PCF 5.2 will include a powerful new permissions model to better segment access to build pipelines. The new release will add compatibility with CredHub for secrets management as well.

  • Multi-datacenter replication capabilities for MySQL (coming soon). MySQL for PCF 2.7 will add multi-DC replication capabilities as a beta feature. This will offer greater resilience for apps that depend on MySQL.

There’s much more in PCF 2.6 to help you build, run, and wire your apps. Read on for details, and watch the PCF 2.6 webinar!

1. Run custom sidecar processes in containers (beta)

2. Bring Spinnaker to your PCF deployment

3. Concourse for PCF to add role-based access controls, improved container clean up (coming soon)

4. MySQL for PCF 2.7 to add multi-datacenter replication (coming soon)

5. Enterprise PKS 1.4 eases install atop vSphere, adds new operational capabilities. Plus, new open-source Kubernetes tools!

6. Spring teams, take note: Config Server turns 3.0

7. Platform Automation for PCF, your perpetual update machine, is now GA

8. Application rollback helps you recover from the unexpected

9. Spring Cloud Data Flow for PCF 1.5 makes it easier to get up and running

10. Bring more .NET apps to PAS! We’ve added support for multiple custom network ports and ODBC connections

11. Across-the-board updates to RabbitMQ for PCF 1.16.4

12. Pivotal Cloud Cache 1.8, now GA, eases backup and restore ops for your cache

13. The AWS Service Broker for PCF, now GA, simplifies AWS for developers

14. Other Enhancements

Run custom sidecar processes in containers (beta)

Digging into the sidecar pattern? You’re in good company. Sidecars are most often associated with service meshes. (Envoy™ probably leaps to mind here.) That’s a popular use case for sidecars, but it’s far from the only one.

In PAS 2.6, this beta feature lets you run a sidecar as a process independent of your main app process, while keeping the benefits of running it in the same container. Why does this matter? Pivotal’s Tim Downey explains:

You might be wondering how using sidecar processes differs from just pushing separate apps as microservices. While it’s true that these use cases overlap, the ability of the sidecar process to run in the same container as its main app grants us several affordances. For example, you may want to use sidecar processes if you have two processes that:

  • need to communicate over a unix socket or via localhost.

  • need to share the same filesystem.

  • need to be scaled and placed together.

  • need to have fast interprocess communication.

Here’s a diagram of how the sidecar process works for Ruby and Golang apps. (You can run all the usual supported frameworks with this feature.)

We’re already seeing lots of interest in sidecars for proxy forwarding, client-side load balancing, timeouts, and retries. It’s a nifty way to deploy APM tools as well.

Ready to experiment with sidecars? Check out this sample app. Then, read the docs and get started!
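To make this concrete, here’s a minimal sketch of a push manifest that declares a sidecar. The app name, sidecar name, command, and port are placeholders of our own; the docs linked above have the authoritative format.

```yaml
# manifest.yml (sketch): run a hypothetical config-server binary as a sidecar
# of the web process, inside the same container as the main app
applications:
- name: my-app
  sidecars:
  - name: config-server
    process_types: [ 'web' ]
    command: './config-server --port 8082'
```

Push with cf push -f manifest.yml as usual. Because both processes share a container, the web process can reach the sidecar over localhost:8082, and the two scale and get placed together.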

Bring Spinnaker to your PCF deployment

Your path to production broadly includes two segments: “build” and “deploy.” You probably know and love Concourse for build (i.e. continuous integration). “Deploy” (i.e. continuous delivery) is the other half of the pipeline. It’s the practice of reliably getting your code into the hands of your customers quickly and securely. The runaway winner for CD is Spinnaker.

Why is Spinnaker so great? Our own Lyle Murphy sums it up:

[Spinnaker is] multi-cloud and multi-runtime. It runs at scale. You can manage all your deployments through a single instance of Spinnaker.

 

As you get better at software, your continuous delivery complexity goes up. That’s where Spinnaker really shines. It gives you visibility into what you have deployed through application inventory and monitoring systems integrations. What’s more, Spinnaker makes it easier for you to adopt modern development techniques.

 

Want to move to blue/green deployments and rollbacks? Spinnaker has you covered. The same goes for automated canary analysis. Thanks to the hard work of many people over many years, you don’t have to figure all of this out on your own.

We’ve been advocating for Spinnaker for quite some time. You may know that Spinnaker is a terrific match for Kubernetes and Enterprise PKS.

Now it’s also a fit for Pivotal Application Service and its open-source cousin. Use Spinnaker 1.14 to deploy apps to PCF, and you’ll enjoy:

  • Zero-downtime blue/green deployments

  • Multi-foundation view of applications

  • Manifest-based deployment

  • Application management actions and pipeline stages

  • Clone stage for promotion of applications across environments

  • Pipeline stages to deploy/destroy services

  • Binding applications to services as part of deploy stages

  • Artifact framework for triggering and assembling deployments

  • Artifactory and Nexus integrations

  • Artifact traceability from build to deployed assets

  • Concourse trigger type

Here’s the best way to get started with Spinnaker and PCF:

  1. Download and install Spinnaker. (We recommend the Helm chart and Enterprise PKS.)

  2. Configure Spinnaker to work with your PAS deployment, following these instructions; a rough Halyard sketch follows this list. (You’ll also want to review this tutorial on how to connect Artifactory to Cloud Foundry.)

  3. To wire Spinnaker to Enterprise PKS, follow these instructions.
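Step 2 is mostly Halyard work. As a rough sketch of what that configuration can look like (the account name, API endpoint, credentials, and even the exact flags below are assumptions on our part; the linked instructions are the source of truth):

```bash
# Enable Spinnaker's Cloud Foundry provider and register a PAS foundation.
# Flag names may differ by Halyard version; check
# `hal config provider cloudfoundry account add --help` before running.
hal config provider cloudfoundry enable
hal config provider cloudfoundry account add my-pas \
  --api api.sys.example.com \
  --user spinnaker-svc \
  --password "$PAS_PASSWORD"
hal deploy apply
```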

Of course, your account team is standing by to assist here as well.

You’ll also need to configure Spinnaker itself, apart from any integrations with the platform, for authorization and authentication.

To learn more about how Spinnaker changes the game, watch this talk from SpringOne Platform:

Then watch this on-demand webinar from Pivotal:

Finally, see this new capability in action “Metrics Driven Blue-green Deployments using Spinnaker’s Cloud Foundry Integration”:

One last note: Spinnaker is supported by the community; it’s not a generally available offering from Pivotal. We're considering ways we can make Spinnaker work even better with our products, and your account team would love to hear your ideas. In the meantime, you’ll find that the Spinnaker community is second to none!

Concourse for PCF to add role-based access controls, improved container clean up (coming soon)

Concourse for PCF helps you to deliver changes to your modern application stack. And it’s so dang flexible. Enterprise development teams love it for application CI, while platform teams use it to automate platform updates.

Concourse for PCF 5.2.0, coming soon, will package up loads of new capabilities. Here are a couple of expected enhancements.

Role-based access controls

Administrators always want folks to have the right level of access so they can do their job (and to make compliance auditors happy). That gets easier with the new permissions model in Concourse for PCF.

Concourse for PCF 5.2 plans to feature five roles: Concourse Admin, Team Owner, Team Member, Pipeline Operator, Team Viewer. Here’s what each role would see and do:

  • Concourse Admin is the most powerful role, a special user attribute granted only to owners of the main team.

  • Team Owners have read, write, and auth-management capabilities within the scope of their team, but they cannot rename or destroy the team.

  • Team Members can operate within their team in a read/write fashion, but they cannot change the configuration of their team.

  • Pipeline Operators can perform pipeline operations such as triggering builds and pinning resources. However, they cannot update pipeline configurations.

  • Team Viewer is a view-only role for secondary users. It grants “read-only” access to a team and its pipelines.

Simple, right? You can probably already tell which engineers on your team need which level of access. But if you do have questions, consult the docs for how each role compares.
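To give you a feel for how this maps to configuration, here’s a hedged sketch of a Concourse 5-style team config that assigns the planned roles to local users. The user names are placeholders, and the exact schema lives in the open-source Concourse docs.

```yaml
# team.yml (sketch): map local users to the new roles
roles:
- name: owner
  local:
    users: ["alice"]
- name: member
  local:
    users: ["bob"]
- name: pipeline-operator
  local:
    users: ["carol"]
- name: viewer
  local:
    users: ["dave"]
```

You would apply a config like this with something along the lines of fly -t ci set-team -n delivery -c team.yml (the target and team names are placeholders, too).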

Faster container & volume cleanup

All those pipelines run on infrastructure. Concourse already has some mechanisms to clean up after itself to minimize your costs. This new version would get a lot smarter about these cleanup jobs.

Previously, workers in Concourse would “garbage collect” containers and volumes sequentially. That was better than nothing, but inefficient. In Concourse for PCF 5.2, containers and volumes would be removed in parallel, with a default max-in-flight of five containers and five volumes at a time. This would speed up garbage collection overall and prevent an imbalance in volume/container counts from slowing either process down. That matters because workers in Concourse are typically capped at 250 containers but may have thousands of volumes.

Can’t wait for the official Pivotal docs to come online? Browse the open-source release notes for what’s new.

MySQL for PCF 2.7 to add multi-datacenter replication (coming soon)

Relational databases have long been the most popular type of backing service. MySQL adoption remains sky-high. And MySQL is at its best when it has these features:

  • High availability

  • Instant, self-service provisioning

  • Built-in TLS encryption

  • Easy administration, with an automated lifecycle

  • Strong data protection provisions

That’s MySQL for PCF. Soon, the “high availability” attribute will get even better. In MySQL for PCF 2.7, we plan to add multi-DC replication capabilities as a beta feature. It would work like this:

  • Developers will be able to create a leader-follower MySQL instance across two foundations, using familiar cf service commands.

  • Developers will be able to bind apps in either of those foundations to the multi-DC MySQL instance.

Once multi-DC replication is enabled, developers can trigger a failover to their “follower” MySQL site in the event of a disaster. What’s more, operators can perform datacenter maintenance on one site while keeping the other MySQL service up and running.
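Here’s a hedged sketch of what that developer workflow might look like from the cf CLI. The plan and instance names below are placeholders; the actual plan for leader-follower instances will be whatever your operator configures in the tile.

```bash
# In the primary foundation: create the MySQL instance
# ("db-leader-follower" is a hypothetical plan name)
cf create-service p.mysql db-leader-follower orders-db

# Bind an app to the instance, then restage to pick up the credentials
cf bind-service orders-app orders-db
cf restage orders-app
```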

Giving your development team a rock-solid MySQL service has never been easier!

Enterprise PKS 1.4 eases install atop vSphere, adds new operational capabilities. Plus, new open-source Kubernetes tools!

ICYMI! We released Enterprise PKS 1.4 a few months back. As usual, it features the latest stable version of Kubernetes.

This version includes a special treat for VMware administrators: a new configuration tool that captures and cross-checks all the details needed to set up Enterprise PKS. The tool ships as an OVA, and it simplifies the installation of Ops Manager, the PKS tile, the Harbor tile, and vROps.

We’ve added several new features to help you with the day-to-day administration of Kubernetes as well. In particular, you’ll appreciate:

  • Pod Security Policies, for more control over workload execution. Pod Security Policies are a cluster-level resource that defines a set of run conditions a pod must adhere to in order to be accepted into the system. (A short example follows this list.)

  • Cluster Admin resource quotas, to limit memory and vCPU usage. PKS operators can now put an upper limit on the total memory and compute resources a user can allocate across one or more clusters.

  • Self-service KubeConfig access...no more fiddling with complex scripts. Developers can access their KubeConfig without custom security scripts. It’s all thanks to UAA/LDAP integration.

  • Backup and restore, for all types of clusters. Operators can recover single and multi-master clusters from unplanned outages.
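As a refresher on the first item in that list, here’s what a minimal Pod Security Policy looks like. This is a standard Kubernetes resource rather than anything PKS-specific, and the rules shown are just one restrictive example; tune them to your own requirements.

```yaml
# A minimal, restrictive PodSecurityPolicy: no privileged pods,
# no root users, and only a safe set of volume types
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```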

There’s a lot more packed into Enterprise PKS 1.4; read the release notes for the full rundown!

Get to know Kubernetes Tools: simple and composable tools for application deployment

Have you tried out Kubernetes tools (aka k14s) yet? You should! Here’s the TL;DR:

Kubernetes tools from k14s (ytt, kbld, kapp, kwt) when used together offer a powerful way to create, customize, iterate on, and deploy cloud native applications. These tools are designed to be used in various workflows such as local development, and production deployment. Each tool is designed to be single-purpose and composable, resulting in easier ways of integrating them into existing or new projects, and with other tools.
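Here’s a minimal sketch of how the tools compose on the command line (the config directory and app name are placeholders):

```bash
# Render templates with ytt, resolve image references with kbld,
# then deploy the result as a tracked "app" with kapp
ytt -f config/ | kbld -f - | kapp deploy -a my-app -f - --yes
```

Because each tool reads from and writes to standard streams, you can drop any one of them out of the pipeline, or slot them into an existing CI job, without changing the others.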

Our own Dmitriy Kalinin spoke with VMware’s Joe Beda on these tools recently. Watch here:

Spring teams, take note: Config Server turns 3.0

Spring development teams have come to rely on Spring Cloud Services. Config Server, the tile that tackles all the configuration toil for your microservices, flips to a new major version. The bump to 3.0 comes with good reason. Config Server 3.0 boasts across-the-board enhancements, including:

  • Better performance. Config Server 3.0 bundles a local Git server. (This server is deployed in each PCF foundation.) This reduces latency compared to the prior bring-your-own Git server setup. It also improves the auditing and governance of Config Server changes.

  • No more dependencies on Rabbit and MySQL tiles. Fewer dependencies mean easier day-to-day management for you.

  • Integration with CredHub to improve secrets management. Config Server 3.0 has an integrated CredHub backend to manage secrets for each service instance. (Use Vault? No problem; we will continue to support that option.)

Note that Spring Cloud Services 3.0 only includes Config Server. For this reason, you’ll need to keep the SCS 2.0.x tile deployed “side by side” with the new 3.0 tile in your PCF environment if you rely on the other Spring Cloud Services. Read more about this release in the launch blog by Chris Sterling.
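From a developer’s perspective, provisioning still looks like any other marketplace service. A hedged sketch follows; the service name, plan, and parameter schema depend on how the tile shows up in your marketplace, so treat these values as assumptions and check cf marketplace and the SCS docs.

```bash
# Create a Config Server instance backed by a Git repo, then bind an app
# (service name, plan, and parameters are illustrative)
cf create-service p.config-server standard my-config-server \
  -c '{"git": {"uri": "https://github.com/example/app-config"}}'
cf bind-service my-app my-config-server
cf restage my-app
```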

Platform Automation for PCF, your perpetual update machine, is now GA

The companies that excel at build, run, wire are those that keep their platforms in an updated, secure, and healthy state. We recently announced a new tool, in beta, to help you with this task: Platform Automation for PCF.

This tool is now GA and recommended for production use by all PCF customers! To learn more, check out the new product page. Get into a regular upgrade cadence and feel the relief.

Application rollback helps you recover from the unexpected

As Josh Long says, production is the happiest place on the Internet. But what happens when a deployment doesn’t go as planned? You want to regain control of your systems’ stability as fast as possible. That means “rolling back” your app to its last known good state. Tools like Spinnaker can help; now PAS 2.6 has a native way to help you roll back, too.

Rollbacks are possible thanks to “revisions,” a new concept in PAS 2.6. A revision is simply a snapshot of code and configuration for an application at a specific point in time. Revisions are automatically created for an app when new app code or configuration is deployed.

To roll back an app to a previous revision, you create a deployment for the app that points to that previous revision. Here’s a demo with rollbacks and revisions in action:

Now when a production deployment does not go as expected, you can quickly regain control over application stability. Check out the docs to learn more. You should also add this open-source CLI plug-in; it’ll help you keep tabs on all your revisions.
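Under the hood, a rollback is just a new v3 deployment that references an earlier revision. Here’s a hedged sketch using cf curl; the GUIDs are placeholders you would look up first (for example, with cf app APP --guid).

```bash
# List the revisions recorded for an app (GUIDs below are placeholders)
cf curl "/v3/apps/APP-GUID/revisions"

# Roll back by creating a deployment that points at a previous revision
cf curl /v3/deployments -X POST -d '{
  "revision": { "guid": "PREVIOUS-REVISION-GUID" },
  "relationships": { "app": { "data": { "guid": "APP-GUID" } } }
}'
```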

Spring Cloud Data Flow for PCF 1.5 makes it easier to get up and running

Spring Cloud Data Flow brings a powerful toolkit for modernizing streaming and batch data pipelines, but the learning curve can be a bit steep. So in this release, we’ve added loads of examples, documentation, and guides to make it easier to get started. Your feedback is reflected in a new microsite.

We learned a ton from all your feedback on Spring Cloud Data Flow (SCDF). The always-insightful Sabby Anandan explains:

In answering community questions and customer support tickets, we noticed a high degree of context-switching was required between different projects, their reference guides, and the samples, in order to build streaming and batch processing solutions. This was consistent with the feedback you gave us in the survey results as well.

 

The main reason for all the context-switching is that there are various projects in the SCDF ecosystem, each of which evolves with different features and release cadences. SCDF brings them all together into a coherent set of developer tools to build, deploy, and manage streaming and batch data pipelines. Therefore, answering community questions typically involves pointing to various project-specific resources.

 

 

To minimize this context-switching and to promote the easy discovery of product capabilities, we realized that we needed to deliver a step-by-step developer guide to delve into new features, enhancements, and use-case possibilities.

We’re also seeing a ton of interest in how SCDF works with event streaming pipelines based on Apache Kafka. This post is a great place to start:

Bring more .NET apps to PAS! We’ve added support for multiple custom network ports and ODBC connections

Your .NET apps deserve a good home, so run them on PAS! Two more classes of .NET apps are now suitable for the app platform.

PAS for Windows supports multiple custom ports for .NET apps on Windows

Does this sound familiar? It should. The multiple custom port feature was included in PAS 2.5 for Linux apps. Now, in PAS for Windows 2.6, it comes to the wonderful world of Windows Server.

When would you use this feature? As we mentioned recently, you can use this capability to serve web client requests on one port and stats/debugging on another. It’s also handy for apps that use a TCP protocol requiring multiple ports. And when you do push these apps to PAS, they will enjoy all the same Day 2 benefits as your other apps.

One quick caveat: the HWC buildpack does NOT support multiple ports. You’ll need to use the Binary buildpack with a .NET Core app or a self-hosted (OWIN) .NET Framework app that incorporates this feature.
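If you haven’t used multiple ports before, the setup mirrors what shipped for Linux apps in PAS 2.5: declare the extra ports on the app, then map a route to each port. A hedged sketch with cf curl (the GUIDs are placeholders; the custom-ports docs walk through the full procedure):

```bash
# Tell Cloud Foundry which ports the app listens on
cf curl /v2/apps/APP-GUID -X PUT -d '{"ports": [8080, 5000]}'

# Map an additional route to the secondary port
cf curl /v2/route_mappings -X POST -d '{
  "app_guid": "APP-GUID",
  "route_guid": "ROUTE-GUID",
  "app_port": 5000
}'
```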

PAS for Windows supports .NET apps that require ODBC connections

Got .NET apps that use ODBC database connections? Now, you can run them on PAS for Windows! To learn more, talk to your account team about access to our .NET Cookbook!

Across-the-board updates to RabbitMQ for PCF 1.16.4

Lots of nifty enhancements for users of the Rabbit for PCF tile are now GA:

  • More capabilities for on-demand instances. You can enable the rabbitmq_event_exchange and the rabbitmq_mqtt plugins for on-demand service instances. These plugins are disabled by default. For how to enable them, see Enable Optional Plugins.

  • Share metrics with PCF Healthwatch. You can configure your RabbitMQ tile to share metrics with PCF Healthwatch.

  • Reduce your operational costs. You can disable service metrics for RabbitMQ for PCF service instances to reduce operational costs. To learn how, see Set up Syslog Forwarding and Metrics Polling Interval.

  • Simpler TLS setup. You can now configure TLS when you create a service instance, as well as when you update an existing one. (A sketch follows this list.)

  • Option to enforce TLS. Want to require all on-demand RabbitMQ instances to use TLS? Now you can; see Configure Security.

  • New metrics. RabbitMQ for PCF now exposes the return_unroutable and return_unroutable_rate metrics to the Loggregator subsystem.
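For the TLS item above, the developer-facing side is just an arbitrary-parameters flag on the usual commands. A hedged sketch (the plan name and the parameter key are assumptions; the tile’s Configure Security docs are authoritative):

```bash
# Create an on-demand instance with TLS enabled, or add TLS to an existing one
# (the '{"tls": true}' parameter is illustrative; confirm the exact key in the docs)
cf create-service p.rabbitmq single-node my-rabbit -c '{"tls": true}'
cf update-service my-rabbit -c '{"tls": true}'
```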

Want to know what else is new with the most widely deployed open source message broker? Then head on over to the RabbitMQ blog for a recap of “This Month in RabbitMQ”!

Pivotal Cloud Cache 1.8, now GA, eases backup and restore ops for your cache

Is your new user sign-up flow a little sluggish? Seeing too much latency in your queries? Are your users complaining about slow page loads on the most popular parts of your app? More and more companies implement a cache to address performance bottlenecks. What do we recommend for caching on PCF? Pivotal Cloud Cache, or PCC.

PCC is a proven, effective way to improve the performance of your most important services. The product has come a long way in the last year. The latest release, PCC 1.8, adds several key features to boost stability and simplify day-to-day operations. In particular:

  • PCC service instances can now be backed up and restored via BOSH Backup & Restore. As we’ve written about, it can be hard to perform backup and restore operations in distributed systems. But now it’s very simple for you to do this for your microservices cache. Just follow the steps outlined in the documentation.

  • PCC now captures more details about the health of your deployment. You can use this enhanced telemetry to establish SLOs for the throughput and latency of each cluster. In this release, dozens of new data points are emitted and exposed via Log Cache. Write PromQL queries and build dashboards for PCC!

We’ve also tuned PCC’s performance, based on real-world customer data. In PCC 1.8, PUTs are 7% faster, GETs are 9% faster, and server GETs are a whopping 250% faster. Your actual performance gains will vary; these numbers are our best estimate of the improvements you’ll see in a typical environment.

We’ve also published a few excellent pieces recently that help you get up and running with Pivotal Cloud Cache.

Most organizations can do lots of great things with microservices. But after a while, you start to hit the limits of what you can do when your microservices are tethered to a data monolith. What then? Your peers add a data layer to their microservices; you should do the same. Pivotal’s Gregory Green has a timely post that explains how to get started: Moving from Monoliths to Microservices? Don’t Forget to Transform the Data Layer. Here’s How.

The AWS Service Broker for PCF, now GA, simplifies AWS for developers

Want to extend your custom code with services from AWS? It’s easy with the new AWS Service Broker for PCF, now GA! The broker gives you a simple, structured way to bring 18 different AWS services to your apps running on PCF.

Want to use Amazon DynamoDB or Amazon Redshift? Or maybe you prefer one of the popular Amazon RDS database engines? You’ll find them in the broker. You can even use Amazon Lex and Amazon Polly for voice-based apps. Just need some object storage? Of course Amazon S3 is in there!

Lots of enterprises run PCF atop the big three public cloud providers: AWS, Azure, and Google Cloud. It’s easy to see why: PCF gives you an accelerated path to success in the public cloud. A big part of that success is helping you easily capitalize on the innovative services offered by each provider. The AWS Service Broker for PCF is a great example of this approach in action.
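Once your operator registers the broker, the developer experience is the standard marketplace flow. A hedged sketch (the service and plan names vary with how the broker is cataloged in your environment):

```bash
# Discover what the broker exposes, then provision and bind an S3 bucket
# (service and plan names below are illustrative)
cf marketplace
cf create-service s3 production my-bucket
cf bind-service my-app my-bucket
cf restage my-app
```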

Read the docs, then download the broker on GitHub to get started!

Other Enhancements

Let’s summarize the “best of the rest”:

Observability Enhancements

  • Platform engineers get a more dynamic dashboard with PCF Healthwatch. Operators can highlight and zoom in on interesting data points in the provided charts, and the reference needle stays synchronized across charts for easier comparison.


Try Pivotal Cloud Foundry for Free

You should spend more time with customers, building great apps based on their feedback. Then leave build, run, wire to Pivotal Cloud Foundry! Want an easy way to see how PCF can help you deliver high-quality code faster? Take PCF for a spin on Pivotal Web Services for free.

After that, review the links below and read up on the newest capabilities. Then make the move to Pivotal Cloud Foundry!

SAFE HARBOR STATEMENT

This blog contains statements relating to Pivotal’s expectations, projections, beliefs and prospects which are "forward-looking statements” within the meaning of the federal securities laws and by their nature are uncertain. Words such as "believe," "may," "will," "estimate," "continue," "anticipate," "intend," "expect," "plans," and similar expressions are intended to identify forward-looking statements. Such forward-looking statements are not guarantees of future performance, and you are cautioned not to place undue reliance on these forward-looking statements. Actual results could differ materially from those projected in the forward-looking statements as a result of many factors, including but not limited to: (i) our limited operating history as an independent company, which makes it difficult to evaluate our prospects; (ii) the substantial losses we have incurred and the risks of not being able to generate sufficient revenue to achieve and sustain profitability; (iii) our future success depending in large part on the growth of our target markets; (iv) our future growth depending largely on Pivotal Cloud Foundry and our platform-related services; (v) our subscription revenue growth rate not being indicative of our future performance or ability to grow; (vi) our business and prospects being harmed if our customers do not renew their subscriptions or expand their use of our platform; (vii) any failure by us to compete effectively; (viii) our long and unpredictable sales cycles that vary seasonally and which can cause significant variation in the number and size of transactions that can close in a particular quarter; (ix) our lack of control of and inability to predict the future course of open-source technologies, including those used in Pivotal Cloud Foundry; and (x) any security or privacy breaches. All information set forth in this release is current as of the date of this release. These forward-looking statements are based on current expectations and are subject to uncertainties, risks, assumptions, and changes in condition, significance, value and effect as well as other risks disclosed previously and from time to time in documents filed by us with the U.S. Securities and Exchange Commission (SEC), including our prospectus dated April 19, 2018, and filed pursuant to Rule 424(b) under the U.S. Securities Act of 1933, as amended. Additional information will be made available in our quarterly report on Form 10-Q and other future reports that we may file with the SEC, which could cause actual results to vary from expectations. We disclaim any obligation to, and do not currently intend to, update any such forward-looking statements, whether written or oral, that may be made from time to time except as required by law.

This blog also contains statements which are intended to outline the general direction of certain of Pivotal's offerings. It is intended for information purposes only and may not be incorporated into any contract. Any information regarding the pre-release of Pivotal offerings, future updates or other planned modifications is subject to ongoing evaluation by Pivotal and is subject to change. All software releases are on an if and when available basis and are subject to change. This information is provided without warranty of any kind, express or implied, and is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions regarding Pivotal's offerings. Any purchasing decisions should only be based on features currently available. The development, release, and timing of any features or functionality described for Pivotal's offerings in this blog remain at the sole discretion of Pivotal and are subject to change. Pivotal has no obligation to update forward-looking information in this blog.

Kubernetes and Envoy are either registered trademarks or trademarks of The Linux Foundation in the United States and/or other countries.

About the Author

Jared Ruckle

Jared works in product marketing at VMware.
