New Cloud Foundry Service Broker Updates

June 22, 2015 Paul M. Davis

The Cloud Foundry Services team recently discussed the platform’s Service Broker architecture and its new features during a talk at Cloud Foundry Summit 2015. Pivotal’s David Sabeti and IBM’s Michael Maximilien reviewed the ways in which Cloud Foundry’s Service Broker enables a variety of integrated services, as well as how new features such as asynchronous provisioning, service keys, and arbitrary parameters address current limitations.

The Service Broker provides developers an easy way to build on the Cloud Foundry platform in an infrastructure- and service-agnostic way, acting as a “translation layer” between Cloud Foundry and the service provider and enabling simple, dynamic access to the services running on the platform. The Service Broker also keeps the deployment and maintenance of a service opaque to the developer.

Sabeti and Maximilien discussed three updates to the services architecture. First, services can now be provisioned and deprovisioned asynchronously, allowing a Service Broker to take as much time as it needs to create and delete its managed service instances. This allows brokers to support services that take longer than 60 seconds to set up.

Second, Service Brokers can now issue keys, or credentials, to any client without the need for a Cloud Foundry application. This means a Cloud Foundry service can be accessed by a client outside of the Cloud Foundry installation, such as an application running in a Docker container. The final update is that service brokers can now accept arbitrary parameters for their service instances. Sabeti and Maximilien demonstrated this feature by passing a JSON payload to a database without requiring the user to interact with a dashboard UI; the information can be passed while the user is creating the service.

For more details on the feature updates to Cloud Foundry’s Service Broker, check out the transcript below, or watch the full Summit talk.

Learn More

Transcript

David Sabeti:
Good afternoon everybody. My name is David Sabeti. I am an engineer on the Services API team at Pivotal. This over here is Dr. Max, as he is more colloquially known, from IBM. We are going to talk to you a little bit today about what’s new in Cloud Foundry services, and give you a little tour through the services architecture and the features that are coming through the pipeline. What are we going to cover today?

We’ll start with an overview of the broker architecture. What I really want to focus on is the goals that we managed to accomplish with the given architecture. I want to make sure everybody understands why those goals are valuable for people who are writing services, for people who are operating Cloud Foundry, and also for developers who are using Cloud Foundry. After that, Max is going to go through the three big features that we’ve added in the last couple of months, explain a little bit about how we implemented them, and show how they extend the same basic principles that we establish in the first part.

Then, if the demo gods are happy with us, we are going to go for a demo at the very end, and we’ll finish up with some Q&A and maybe some feedback, or just hearing from you. Let’s get started. Probably the best place to start is to describe the problem that CF services tries to solve. Basically, we want to make sure that Cloud Foundry services enable developers to discover and integrate third-party software with their Cloud Foundry applications through the platform. Let’s start with that. This definition is actually a little bit malleable. I can tell you, for example, that services don’t necessarily have to be connected to applications. That’s the basic principle: once we have a definition of services, we can expand it as much as we want to include all sorts of services that don’t necessarily fit into the original plan, which is a good thing.

Let’s stick with this. When I talk about third-party software and services, when we talk about Cloud Foundry services, what do we really mean? We are talking about third-party software of any kind that a user might want to use in the context of Cloud Foundry. The easiest example to imagine is something like a data persistence layer: SQL databases, or maybe Redis or RabbitMQ, [Apache] Hadoop, anything like that, just data services. You can take it a little further and include things like analytics tools, or monitoring and metrics like New Relic. You could think of utility services for your application, things like sending emails. You could even take it one step further, and this is where we break away from this definition.

You can include services that are just team enablement services, things like Pivotal Tracker or JIRA. They don’t necessarily have anything to do with an individual application, but if your developers are already working in the context of Cloud Foundry, it really makes sense for them to have those services in the same context. Services have a pretty generic definition, and that’s largely by design. The first thing we want to do is make sure that software developers, the users, can actually find these services pretty easily, with a really easy UX through the platform, something like this. Running cf marketplace will tell you, “Yeah, these are the services that your Cloud Foundry is currently offering.”

p-mysql and p-riakcs are two services that Pivotal built, and if you look at this output it’s pretty straightforward: a list of services. The second column might not be as obvious, but those are the different plan configurations. Then finally there is a little description of what the service does. The next thing we want to do is integrate these services with our apps, if that’s possible. You can do this in three commands. For the developer it’s something as simple as starting by creating the service, then moving on to what we call binding the service. Binding is simply a way to tell the system that we want this application to be able to use this service instance.

When we talk a little bit about the architecture, we’ll see that there are a lot of different ways that you can tell the system that; this is the way that we do it through the UX. Then finally there is a technical detail: you have to restage your application to make these changes propagate. There you go. Three commands and you can now use your service. Pretty straightforward. I should point out that we did all of this through the platform. At no point does the developer have to go out of band to provision these service instances, log into a web interface, or anything like that. The same way that I said these instances are valuable in the context of Cloud Foundry, we want to make sure we don’t make users leave that context as they are provisioning and using their services.
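For reference, that three-command flow looks roughly like the following. The plan and app names are hypothetical, with p-mysql standing in for the service shown in the marketplace output above:

    cf create-service p-mysql 100mb-dev my-db   # provision a service instance
    cf bind-service my-app my-db                # bind it to an application
    cf restage my-app                           # restage so the credentials propagate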

We have a couple of other goals that we want to make sure we accomplish as we build out the architecture. One is that deployment and maintenance are opaque to the developer. This actually means two things. The first is that an application developer really shouldn’t be concerned with the management or maintenance of the service instance. They don’t care; it’s not their job to maintain uptime, or to know where these servers are located and whether they are working. That’s typically not a developer’s concern, especially not at first. The other thing is that we want to make sure that we are agnostic to specific technology choices. We want to be agnostic to infrastructure: we don’t care if the service is deployed to AWS, OpenStack, VMware, whatever. We also don’t care about other technical choices, like what language it was written in or what other dependencies it has.

We have a pretty basic requirement that services are accessible via URL from the instance of Cloud Foundry, and that’s really it. We also don’t want to be prescriptive about what a service is. I talked a little bit about this earlier, but we want, on purpose, a very loose definition of what a service is. We can come up with a lot of really easy examples of what a service is, but what’s really interesting is to see people take this definition and imagine really interesting use cases for it. Earlier today I heard someone talk about networking as a service, just in passing conversation. You can think of a lot of different things, and we want to make sure that we enable people to continue to have different ideas about what a service can be.

As a result, external vendors can bring their service to CF. Right? We don’t care where it is deployed, and we don’t have a strong definition of what a service is. That means that if you are the operator of a Cloud Foundry instance, you can invite partners to come and bring their services to your Cloud Foundry, and the reason this is really great is that as the operator of Cloud Foundry it’s not your job to also be the operator of the service. You can leave that to the experts. Finally, we want to make sure that all of this happens in a dynamic way. I’m just using a buzzword here; what I mean is that we want to make sure that you never have to redeploy your Cloud Foundry or restart any part of the system.

We want to bring broker integration into the API, so that after a few API calls the service broker and the service provider are completely integrated and ready to have services provisioned and used. Those are the four things we want to make sure we accomplish. The solution that we converged on is what we call the service broker architecture. When I talk about a service broker, what we are really talking about is a translation layer between Cloud Foundry and the service provider. A service broker is just a very small HTTP interface, and it’s basically a contract with Cloud Foundry: Cloud Foundry is going to make requests and expects the broker to conform to this relatively small interface.

The broker can take these Cloud Foundry domain requests and translate them into service domain requests. For example, say your service is a MySQL database. Cloud Foundry is going to ask, “Give me a service instance.” MySQL probably doesn’t know anything about what a Cloud Foundry service instance is, but the broker does. The broker knows what Cloud Foundry means: it wants a database. The broker can then delegate to the appropriate components, or make the request itself, to create the database. Then it can come back to Cloud Foundry and say, “Yeah, we provisioned your service just like you asked.” So when we talk about a service broker, and we’ll see this in a second with some pictures, what we are talking about is a translation layer between Cloud Foundry and the service provider.
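To make that contract concrete, here is a minimal sketch of the handful of HTTP routes a broker serves, written in Go (the same language as the demo broker shown later), with the handlers stubbed out:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // GET    /v2/catalog                                         -- advertise services and plans
        // PUT    /v2/service_instances/:guid                         -- provision an instance
        // DELETE /v2/service_instances/:guid                         -- deprovision an instance
        // PUT    /v2/service_instances/:guid/service_bindings/:guid  -- bind (return credentials)
        // DELETE /v2/service_instances/:guid/service_bindings/:guid  -- unbind
        mux.HandleFunc("/v2/catalog", handleCatalog)
        mux.HandleFunc("/v2/service_instances/", handleServiceInstances)

        log.Fatal(http.ListenAndServe(":8080", mux))
    }

    // Stubs; the individual operations are sketched further below.
    func handleCatalog(w http.ResponseWriter, r *http.Request)          {}
    func handleServiceInstances(w http.ResponseWriter, r *http.Request) {}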

We are going to do some boxes and lines; nobody get too scared. I just want to show you the basic back and forth that users will do, and what happens to a user’s request as it makes its way through the system. The first thing we need to do is register the service broker. This is actually admin functionality, at least currently. An admin says, “All right, I have a new service broker that’s ready to integrate with my Cloud Foundry.” The Cloud Foundry admin can now bust out his CF CLI and make a request to create a service broker. All he has to do is provide a URL and credentials to access the URL; basically, the URL points to where the broker lives. The cloud controller is going to make an HTTP request. This is the first HTTP endpoint that we are expecting brokers to implement.
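That registration step is a single CLI call; the broker name, credentials, and URL below are hypothetical:

    cf create-service-broker my-broker some-user some-password https://my-broker.example.com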

What we do is we ask for a catalog. The catalog is pretty straightforward: it’s just a list of the services and plans that the broker offers, basically the services that we want users to be able to see. I started talking about the cloud controller here; the cloud controller is just the component of Cloud Foundry that accepts API requests and responds to them. If you don’t know what the cloud controller is, if you are not already friends with it, just pretend that it is Cloud Foundry. That’s really all that matters. For those familiar with a little bit of the internals of our architecture, we want to make it known which component within Cloud Foundry we are talking about.
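Filling in the catalog stub from the sketch above, a catalog response might look roughly like this; the service and plan names and IDs are made up, and encoding/json is assumed to be imported:

    // handleCatalog answers GET /v2/catalog; this list is what `cf marketplace`
    // ultimately shows to users.
    func handleCatalog(w http.ResponseWriter, r *http.Request) {
        catalog := map[string]interface{}{
            "services": []interface{}{
                map[string]interface{}{
                    "id":          "service-guid-1", // placeholder identifiers
                    "name":        "my-db-service",
                    "description": "A hypothetical database service",
                    "bindable":    true,
                    "plans": []interface{}{
                        map[string]interface{}{
                            "id":          "plan-guid-1",
                            "name":        "small",
                            "description": "A small database",
                        },
                    },
                },
            },
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(catalog)
    }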

Once that request succeeds, a user can now make a request that says, “Hey, show me your services,” just like I showed you before, and our new service is displayed to the user, and the user knows that he can now provision and bind these services. What does the user want to do next? Now we can be a developer. A developer says, “I want to create a service.” Here he is; he makes his CLI command, cf create-service. The cloud controller is now going to make the next important request to the broker, to a new endpoint. It’s going to make a PUT request to the broker and include a GUID, which is just a globally unique identifier. At that point the broker is going to go and do whatever it needs to do to make the service. As I said before, in the example of MySQL, that work is to go create a database.
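A sketch of the provisioning side of that contract, dispatched from the service_instances route in the earlier sketch; it reuses the log import from above and assumes encoding/json and path are also imported:

    // handleProvision answers PUT /v2/service_instances/:instance_guid. The broker
    // translates "create a service instance" into whatever the service needs,
    // e.g. creating a MySQL database.
    func handleProvision(w http.ResponseWriter, r *http.Request) {
        instanceGUID := path.Base(r.URL.Path) // the GUID generated by the Cloud Controller

        var req struct {
            ServiceID  string                 `json:"service_id"`
            PlanID     string                 `json:"plan_id"`
            Parameters map[string]interface{} `json:"parameters"` // arbitrary parameters, discussed later
        }
        json.NewDecoder(r.Body).Decode(&req)

        log.Printf("provisioning instance %s", instanceGUID)
        // ...create the backing resource (a database, a VM, ...) keyed by instanceGUID...

        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusCreated) // 201: the instance exists and is ready to use
        w.Write([]byte(`{}`))
    }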

We can imagine lots and lots of different cases of what it means to create a service. Like I said before, the broker translates the Cloud Foundry specific request into a service specific request. Then, once everything goes well, the broker returns a 201 Created and the cloud controller keeps a record of that service instance in its database. Now, when the user asks for all of his services, he can see his new database, his new service instance, in the output, and he knows that he has successfully created a service instance. The next thing he is going to do is probably bind the service; that means he is going to connect it to one of his applications. Yet again, the cloud controller is just going to relay this request over to the broker. This time it’s a slightly longer URL: it’s a request to create a service binding, treated as its own resource but nested under service instances.

Again, the broker is going to return a 201 Created in the case of a success. The important thing to point out here is the response body that the broker returns to the cloud controller. What we are getting back from the broker is a set of credentials. We call them credentials; what we mean is that this is the broker telling Cloud Foundry, “Hey, give this set of data to your application so that your application can use the service instance properly.” Most of the time what you will see is something like auth credentials. You also often find things like IP addresses or URLs where the service instance is actually located. Depending on the nature of the service it can be something totally different.

It might be an API key, it might be other things that basically tell the application, “This is how you can use your service instance.” These credentials get injected into the application’s runtime environment. They show up in an environment variable called VCAP_SERVICES, so that your application can actually make use of the service instance. Likewise, a user over here on the front end can see that the bound apps column has been updated to include the name of his application. Wow, that was fast. Any questions? Right, maybe we should hold off questions until the end. I will let Max go over some of what the new features are.
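The binding endpoint can be sketched the same way; the credential values below are made up, and whatever JSON the broker returns under credentials is what lands in the application’s VCAP_SERVICES environment variable:

    // handleBind answers PUT /v2/service_instances/:guid/service_bindings/:guid.
    // The credentials it returns are injected into the bound application's environment.
    func handleBind(w http.ResponseWriter, r *http.Request) {
        resp := map[string]interface{}{
            "credentials": map[string]interface{}{
                "uri":      "mysql://user:secret@10.0.0.5:3306/mydb", // hypothetical values
                "username": "user",
                "password": "secret",
            },
        }
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusCreated)
        json.NewEncoder(w).Encode(resp)
    }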

Michael Maximilien:
Thank you, David. Okay, before I start let me mention something. First, as you probably know, when we talk about Cloud Foundry, everybody wants in, including IBM. Everybody repackages Cloud Foundry and resells it. If you think, “Well, where do we make money?”, in general, it’s right there, in services. Services is one of the areas where different platforms can differentiate; of course, we want the base to be open source. When we were looking at this, Brian Martin over there, from IBM, especially wanted different aspects of the services API to be changed and improved, and it created some sort of tension.

The great thing is we worked with Shannon and David Stevenson and created a brand new team starting at the beginning of the year. Of course, David was the leader; I worked with him. I had never worked on services before, never worked with the team before, and it was just a joy. In two months we pretty much implemented a lot of what you are going to see here. Then I went to China and got two colleagues from China, Edward and Tom, and guess what, they worked on it with us as well. What you are going to see is basically the outcome of this new team, and they will continue adding more stuff.

The first big thing we added: in the current, or the old, service architecture, you had sixty seconds to provision your service. Crazy, right? What gets done in sixty seconds? You can spin up a VM, or maybe a Docker container in thirty seconds, but you still have to put data in, you still have to do stuff. Operations need to be able to take a little longer than just sixty seconds. The obvious solution is: can you do things asynchronously? Well, it’s easier said than done, but we did it. It took a little bit of time, partly because we also paid down some technical debt. The basic motivation is that for something like IBM Watson, where a lot of it is analytics, lots of data needs to be pulled in; you’ve got to set things up before the service is actually available for you to use. It takes more than sixty seconds.

You want to be able to do it in an asynchronous fashion. Same thing for Hadoop, for instance: you’ve got to set up a Hadoop cluster, and it’s going to take you more than 60 seconds, because for any job that’s more than just a toy job you’re going to spin up maybe some VMs, maybe some Docker containers. Whatever it is, it’s probably going to take more than sixty seconds. The key is to be able to enable these use cases. How did we do this? We basically used a classic design pattern, a future object. Instead of returning immediately, you get an object that tells you what the future is, and you can interrogate it. In other words, you are going to poll. You do that and it gives you pretty much the state of the service.

We are going to have a nice demo that shows you exactly what I’m talking about here. The other consequence is that there is now tighter integration between Cloud Foundry, or the cloud controller, and the broker. If you are a broker provider, make sure you pay attention to the specification if you did not in the past; there is now a little bit more communication between your broker and the CC. As an example, if we look at the previous slide David was talking about, where he showed how a service gets created, what happens now is that instead of immediately having a service ready so that you can start using it, you instead get a series of polls, where you can see there is a state now.

The first state, of course, is going to be “in progress.” This state can persist for any amount of time, depending on your broker, until the instance is ready, or created. Same thing if you delete it: maybe as part of your delete you want to take a backup, so your delete could actually take time. Okay, that’s going to be one of the bigger changes that you are going to experience; this is to support asynchronous operations. This is the difference between what you saw before and now. Another issue that came up, again, a lot of it came from the team, but Shannon also has a list of customers, a lot of you probably in this room, who pretty much tell him what you are looking for, and we are trying to address that.
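In broker terms, the asynchronous flow amounts to acknowledging the provision request with a 202 and then answering a polling endpoint until the work is done. A sketch, continuing the hypothetical Go broker; createBackingResource and backingResourceReady are made-up helpers, and the imports are the same as before:

    // An asynchronous provision: acknowledge right away, finish in the background.
    // The Cloud Controller signals that it can handle this with an
    // accepts_incomplete query parameter on the provision request.
    func handleProvisionAsync(w http.ResponseWriter, r *http.Request) {
        instanceGUID := path.Base(r.URL.Path)
        go createBackingResource(instanceGUID) // e.g. spin up a VM or a Hadoop cluster

        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusAccepted) // 202: provisioning is in progress
        w.Write([]byte(`{}`))
    }

    // handleLastOperation answers GET /v2/service_instances/:guid/last_operation.
    // The Cloud Controller polls it until the state is no longer "in progress",
    // which is what a user watching `cf service` sees.
    func handleLastOperation(w http.ResponseWriter, r *http.Request) {
        state := "in progress"
        if backingResourceReady() {
            state = "succeeded" // or "failed" if the work could not complete
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]string{"state": state})
    }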

We used to call them credentials, but credentials is confusing because there are other things using credentials, so we call them keys. The idea is that when you have a service running in your Cloud Foundry installation, you might want to use it without necessarily having an app involved. There is the Docker example: if you have a Docker container and you want to connect it to a service, how do you do it? That may not be an app, right, and there are a lot more examples of this. What you can now do is call the services API in Cloud Foundry and say, “Create a key for me.” You can ask for as many keys as you want, and the keys are going to be your credentials to connect to the service. The use cases for this are things like, for instance, having keys for read-only access.

Let’s say, for instance, and this is one that Shannon and David and I discussed, you could have a database where you may want to expose read-only access. The key there would be a read key versus a read-write key, for instance. There are many more. It enables easier accounting. Sorry for mentioning Bluemix, but now you no longer need to have a fake app or something like that, which is certainly a very good thing. You can have multiple credentials. This is all goodness in some ways. The best part is we didn’t have to change much: we added a new endpoint, but in terms of the internals, we reused the binding.

If you remember how it worked before, or how David showed it to you: after the service is created, you would have your app and you would bind them together. We are reusing that same mechanism, except there is no app. As a broker, you don’t have a lot to change; if you didn’t expect to use the app ID, your broker should just work. That’s one benefit of this.
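From the user’s side, a key is requested and read back with two commands (the instance and key names here are hypothetical); the broker just sees an ordinary binding request without an application GUID in it:

    cf create-service-key my-db my-read-key
    cf service-key my-db my-read-key      # prints the credentials for use outside Cloud Foundry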

The last feature is almost a no-brainer in some ways, but it’s always harder to implement than it appears. Say you have a service that you want to create, but instead of forcing the user to go to a dashboard or do some additional configuration, you may want to just pass all of that information as you are creating the service.

In other words, you want to be able to pass a JSON payload when you are creating the service, or when you are deleting it, for instance. This is what arbitrary parameters do: they allow you to pass any JSON payload along with your service creation. This is how it looks: you just pass JSON, for instance to your database, and then you can specify the size. This avoids having to go to a dashboard. For the next step, and this is where I’m very excited, I am going to bring up my colleague Tom, and we are going to show you a demo. Let me set this up while Tom gets it ready. It’s not live, it’s recorded, just so the gods of demos don’t come here; I guess we are guaranteed that it works.
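On the CLI, the JSON rides along on the -c flag, and the broker receives it in the parameters field of the provision request shown earlier. The service, plan, and parameter names here are hypothetical:

    cf create-service my-db-service small my-db -c '{"storage_gb": 25}'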

Here is the important thing: everything you see here is actually code that you will be able to go and start using today, because it’s a simple broker that they have implemented. To set the stage, the reason we are showing you AWS is because we were trying to think of a demo that would cover all three of the new features that we have in services. Shannon suggested this one, where you use your broker to spin up a VM. Even Amazon takes more than sixty seconds for a lot of their VMs, so in some ways we are going to show you the asynchronous operation. That’s the first thing. What you are seeing here is the console. The next step is we are going to start the broker and create a service instance.

If you can move it to show it. The broker is starting here, and on the left-hand side you can see we are logging in. We can show the different services that are available. The broker gets started; this is all set up, and it’s running inside our intranet right now. The first thing we show, for instance, is that this service broker exposes different plans on AWS, such as a micro VM, a large VM, or a small VM. The next step is we are going to create one instance of this. What you saw very quickly on the left-hand side is that the request went through, and what you can see on the Amazon dashboard is the VM being started.

Of course, as we discussed, the call to create the service returns immediately. The next thing to do, since it is asynchronous, is to poll that particular service to see its state. What you saw in the demo is that we did a watch on the cf service command, so that it’s basically calling the service multiple times until the service is ready. What you are seeing here is exactly the output of the new cf command. At this point it is basically created, because on Amazon the VM has spun up and is ready to be used. Now, that was the first part of the demo. You can see here that it is running; this shows an asynchronous operation. The second part of the demo is to show service keys. Obviously, if you have a VM, what do you want to do with it?

You want to log in to it. What’s the secure way to log in? SSH. We can ask Amazon to create an SSH key for us; the broker supports create-keys, which is going to do this. You log in again; all of these videos are online on YouTube so you can take a look at them. You can see here we are creating a key. It takes a little bit of time. What you see on the right-hand side is essentially the output of that create-service-key command. Once the key is created, you can get the details of that key, and you can see this is the private key, the SSH key that Amazon created for us. We can take that, cut and paste it into a file, and then use it to log in to the VM. This is going to be the next step.

Thank you, Tom. Here is the SSH going on, using the key file that we just created; it should log in to the VM. The third and final part of the demo is to show you service parameters. Again, we are here on the dashboard, and we can see the AMI that’s being used. One obvious thing that you could do with service parameters for such a broker is, instead of using a default AMI or hard coding the AMI as part of your broker, allow users, as they use your broker to create VMs, to pass the AMI that they want. That’s pretty much what we are showing you here. Thank you. This is the file that specifies the parameter JSON, essentially a JSON payload. Then we pass it to the create-service command, and that way it will use that particular AMI to spin up.

Here we pass the JSON, and at the end you basically get a VM with the new AMI. As I mentioned, this code, the entire demo, was written by Tom and Edward. It’s a Go broker, and it’s going to be available for you under the Apache license, so you can go and start using it. With that, let’s go back to David, who is going to conclude. Then we will take some questions.

David Sabeti:
All right. You guys, don’t you think that was a really great demo? I’m actually really impressed, because a lot of that stuff is features we’ve literally added in the last week or two; I don’t know how you guys managed to pull that off. That’s really great. All right, we are basically done here. Of course, we are always looking for feedback as a team. Our product manager is Shannon Coen; this is the email address that you can reach him at. This is Shannon. He is also doing one of the open houses tomorrow at two fifty, so if you have any questions for him, feel free to stop by; we’ll both be there answering questions about service enablement. We are always looking for what we can improve and how the APIs should look, and we’d love to hear more from you. I thought I’d also put up some pictures of the team, the people who have worked on these different features. These are the people who’ve worked on the API team for the last couple of months.

Editor’s Note: Apache, Apache Hadoop, and Hadoop are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
