Pivotal Perspectives—Pivotal Cloud Foundry on Amazon Web Services

May 13, 2015 Simon Elisha

While you have always been able to deploy Cloud Foundry onto Amazon Web Services using BOSH, you needed good BOSH skills to make it happen. With version 1.4 of Pivotal Cloud Foundry, this is now all handled by the Ops Manager.

What does this mean?

It means you can get Pivotal Cloud Foundry up and running on AWS in a morning with about 20 minutes of actual keyboard time—the computers do the rest! This makes it easier to take advantage of an open-source based, scalable, powerful and efficient platform on the IaaS provider of your choice, or the one that suits your needs at the time.


TRANSCRIPT

Announcer:
Welcome to the Pivotal Perspectives Podcast, the podcast at the intersection of Agile, Cloud and Big Data. Stay tuned for regular updates, technical deep dives, architecture discussions and interviews. Now let’s join Pivotal’s Australian & New Zealand CTO Simon Elisha for the Pivotal Perspectives Podcast.

Simon Elisha:
Hello everyone. Welcome back. Great to have you back here. Simon talking to you again from beautiful Melbourne, Australia, although not so beautiful at the moment. We are moving into autumn and then winter and then it gets a little cold and a little rainy from time to time but you get that. It certainly is not as bad as other places in the world that I have visited.

What are we going to talk about today? It is a little shorter episode today but it will be an interesting one hopefully because we will be talking about running Pivotal Cloud Foundry on Amazon Web Services, so combining two topics I am pretty passionate about, having previously worked at Amazon and now working at Pivotal, these are a marriage of two things close to my heart.

What has happened around the world of Cloud Foundry and AWS? Well, you have been able to run Cloud Foundry on AWS for a long time using the magic of BOSH, which is the tool that helps us deploy things, and customers have also used homegrown approaches using Chef, Puppet, etc. to manage and maintain it. We have a long history in Pivotal of running the Pivotal Web Services offering, which is actually Pivotal Cloud Foundry running on Amazon Web Services, but to deploy this in the past has needed some skills and some work to make it all hook together. You had to be a little careful with the upgrades and maintenance. It was sort of a non-trivial task, and we want to make things easy, simple and straightforward, so rather than having to do a lot of hands-on work, it should be pretty automated and easy, because that is the point of having a platform, is it not?

What is new? With version 1.4 of Pivotal Cloud Foundry, it is now fully installable on Amazon Web Services using Ops Manager. This means you have a tile in your Ops Manager and you can essentially go from zero to platform in about five hours of time. Not five hours of effort, and I’ll get into that shortly, but five hours of time. This is fully featured Pivotal Cloud Foundry running in your own VPC in your own Amazon Web Services account. I think that is pretty cool because it gives you a lot of flexibility in terms of deploying in all the different locations that Amazon has available to you, and it means you can combine deploying on premises, off premises, and on different cloud providers. It gives you yet another one that is easily installable and easily managed.

What are some of the components that are used in the deployment of Pivotal Cloud Foundry when running on Amazon Web Services? Well, there are a few things that come into play. You do not actually have to use any of the native AWS services that I will talk about. You can do a “standard install,” which just uses all the internal components, and the only Amazon components that get used would be the VPC—the virtual private cloud for your network—and obviously the EC2 instances for the servers, or the infrastructure-as-a-service component.

You can also use some of the native services on Amazon as well if you so choose. I think it makes sense and is a good idea to leverage capabilities of platforms and underlying infrastructure-as-a-service components where necessary and where possible. There are three key components on top of the VPC that get used in an install of PCF on AWS. The first one is the Elastic Load Balancer. The Elastic Load Balancer is a load balancer in the cloud, as it kind of sounds like, and it replaces the work that HAProxy is doing in an installation. We can refer directly to the ELB, we can do SSL termination on the ELB, etc., and it all gets hooked directly into the platform.

In terms of storage for the blobstore for both the Ops Manager and the elastic runtime, we use Amazon S3, which is one of the best known blobstores out there, so it kind of makes sense to leverage that. You get that large-scale storage at reasonably low cost and you do not have to worry about capacity, etc., so that replaces the blobstore that we have internally; we can use the native one on AWS.

Finally, obviously within the platform, we have a number of databases for metadata, etc.—databases for BOSH, for Cloud Controller, for UAA, for the console, etc.—that we typically run on MySQL. Now, obviously on Amazon, you have the Amazon Relational Database Service, RDS, so you can use that instead to host all those databases. Those MySQL instances then run there and are managed and consumed there, which means that you are taking advantage of the platform that you are running on at the time.
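To summarize the substitutions just described, the mapping looks roughly like this. This is a descriptive sketch only, not an actual PCF configuration file, and the key names are invented for illustration:

```yaml
# Sketch of the AWS-native substitutions discussed above (illustrative only).
routing:
  pcf_component: HAProxy
  aws_service: Elastic Load Balancer    # also handles SSL termination
blobstore:
  pcf_component: internal blobstore (Ops Manager and elastic runtime)
  aws_service: Amazon S3                # large-scale storage, no capacity planning
databases:
  pcf_components: [BOSH, Cloud Controller, UAA, console]
  aws_service: Amazon RDS (MySQL)       # managed MySQL instances
```

None of these substitutions is mandatory; a “standard install” keeps the internal components and uses only the VPC and EC2 instances.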

None of this means that you cannot move. You can, of course, migrate all your applications across to another Pivotal Cloud Foundry instance running on vCloud Air or vSphere or OpenStack or wherever else you choose to put it.

What are the installation steps? What do we have to do to get this up and running? This is pretty cool because I think it shows the power of having a platform but also that getting a platform does not need to be a particularly onerous task now that we have this level of automation.

There are three key steps. The first step is to prepare your environment. This involves creating a VPC, creating security groups, creating the load balancer, the relational database service, etc. It is all mapped out for you in the documentation in great detail, step by step. For someone who is familiar with the AWS console, etc., you could probably do it in about 45 minutes by hand. I do not like to do things by hand; I like automation. That is why I work in IT.

The team has put together a really cool CloudFormation template. Now, if you are not familiar with AWS, CloudFormation is a templating language that allows you to automate the creation of components on the platform. What this means is that you just enter a few key parameters about how you want your VPC to be created, some of the naming, some of the passwords, etc., and it will go ahead and run. It takes about 20 minutes to run, but you do not have to be there. You just kick it off, walk away, make a cup of coffee, have a chat with friends, and 20 minutes later all of the required infrastructure-as-a-service components are there, the load balancer is configured, the MicroBOSH is ready to go, and essentially you are at the next step, which is to install Ops Manager.
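For a feel of what such a template contains, here is a heavily simplified, hypothetical fragment. It is not the template Pivotal ships, and the parameter and resource names are invented; note also that at the time CloudFormation templates were written in JSON, though YAML is shown here for readability:

```yaml
# Illustrative CloudFormation sketch: parameterized VPC creation.
# The real PCF template also creates security groups, an ELB,
# an RDS instance, S3 buckets, and IAM resources.
AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative fragment of a PCF-style CloudFormation template
Parameters:
  EnvironmentName:
    Type: String
    Default: pcf-demo
  OpsManagerAdminPassword:
    Type: String
    NoEcho: true            # keeps the password out of console output
Resources:
  PcfVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      Tags:
        - Key: Name
          Value: !Ref EnvironmentName
Outputs:
  VpcId:
    Description: Value to paste into the Ops Manager setup wizard
    Value: !Ref PcfVpc
```

The Outputs section is the part that feeds the cut-and-paste step into the Ops Manager wizard mentioned in the next step.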

You bring up the console in Ops Manager, log in, and you go through a little wizard where you enter a bit of information. A lot of this information is actually output from the CloudFormation template, so that is a little bit of a cut-and-paste experience. Basically you kick that off, and again it takes about 20 minutes to run. Off you go, have a coffee, do a different task, etc., and get on with your day. Twenty minutes later, you come back and you will see that Ops Manager is all up and running and configured. Optionally, you can also load in the elastic runtime tile. This is the component of the system that runs the droplet execution agents and allows you to run code on top of your PCF platform.

This tile is pretty big from a download perspective, so one of the cool things in this CloudFormation template is that it actually copies the tile from an S3 bucket automatically onto the Amazon EC2 instance that is running Ops Manager. You do not have to do that load process yourself, which saves you a bit of time.

Then you go through the elastic runtime and do some configuration there. It is probably about 10 minutes of work, maybe, I would say, in terms of hands-on stuff. You configure your load balancer, you set up your DNS, you enter some parameters for the elastic runtime, you size the DEAs that you want, etc. Then you click the go button and you let it run. Now, this takes a lot longer to run. It takes about 200 minutes or so because it is compiling a whole lot of components and doing a hell of a lot of work, to be honest, but this is one of those ones where you say, great, I am going to go and do another task and I will come back to it when it is finished.

I have been doing this myself, and essentially I will maybe start the process when I come into the office in the morning, jump in 20 minutes later, do the next step, jump in 20 minutes later, do the next step, and then go do something else, and by lunchtime I have fully featured Pivotal Cloud Foundry running on Amazon Web Services, good to go with my APIs, etc. It is really easy to get going, and it means you can get up and running very, very quickly and be very, very effective in terms of what you are offering your developers. Plus, you get the full operational experience, because you obviously control the operations side of Pivotal Cloud Foundry as well. You can control how many DEAs are running, how it is laid out, how the network is configured, etc. It gives you a lot of choice.

It is pretty cool, pretty exciting, and pretty easy to run. Now, some things you need to know for this particular release. This is kind of an MVP release, as we try to do in the “agile world,” so for version 1.4 we currently support the us-east-1 (US East) region only, and we only support running PCF in one availability zone within that region. Now, it can be any of the availability zones available there to your account, but you can only run in one AZ at a time.

Version 1.5, which will be out pretty soon, will look to remove these restrictions, which means you will be able to deploy globally. It will also look to relax the availability zone restriction and support multiple availability zones, which, of course, PCF already supports on vCloud Air. So, just a couple of little things to know.

What I will also do in the show notes is give you the link to the CloudFormation template and also the download for the ERS component as well. If you get a chance, have a go, spin it up. I think, from memory, it would cost about, I want to say, $40 a day to run. That depends on the configuration and the instance sizes that you use, so it is very inexpensive to have a super powerful platform on a global web provider without much work at all.

I hope that was useful. I hope you have a play with that and give it a go and until then, good to talk to you and keep on building.

Announcer:
Thanks for listening to the Pivotal Perspectives Podcast with Simon Elisha. We trust you have enjoyed it and ask that you share it with other people who may also be interested. Now we would love to hear your feedback so please send any comments or suggestions to podcast@pivotal.io. We look forward to having you join us next time on the Pivotal Perspectives Podcast.

About the Author

Simon Elisha is CTO & Senior Manager of Field Engineering for Australia & New Zealand at Pivotal. With over 24 years’ industry experience in everything from mainframes to the latest cloud architectures, Simon brings a refreshing and insightful view of the business value of IT. Passionate about technology, he is a pragmatist who looks for the best solution to the task at hand. He has held roles at EDS, PricewaterhouseCoopers, VERITAS Software, Hitachi Data Systems, Cisco Systems and Amazon Web Services.
