Pivotal Podcast #14: The Need for Speed With In-Memory Data Grids—All About GemFire

January 18, 2017 Simon Elisha


Many of the application designs you work on face conflicting and complex challenges. I need to deliver tens of thousands of concurrent connections. I need to provide sub-second response time. I need to cater for seasonal workload spikes. I need to keep costs low. Typically, part of the solution is to move volatile or frequently accessed data to memory, which is much faster than disk. But do you move it to one "honking big" server full of RAM? What if that server goes down? So how about a selection of servers that provide an aggregate amount of RAM? Now you have the problem of maintaining state between all those servers. How can I find them? How can I replicate them? What if one goes down? I seem to be creating more problems than I am solving! Fortunately, this is a problem domain that has had a great deal of thought put into it. The issues raised above have been solved, and in-memory data grids can provide your application architecture with a much-needed boost of speed without sacrificing reliability. In this week's episode we dive more deeply into the world of in-memory data grids and the GemFire product in particular.


RESOURCES:

Transcript

Announcer: Welcome to the All Things Pivotal Podcast, the podcast at the intersection of agile, cloud and big data. Stay tuned for regular updates, technical deep dives, architecture discussions and interviews. Please share your feedback with us by emailing podcast@pivotal.io.

Simon Elisha: Hello, everybody, and welcome back to the All Things Pivotal podcast. Fantastic to have you back, great to speak to you again. My name is Simon Elisha, Chief Technology Officer and Director of Field Engineering here in beautiful Australia and New Zealand. Obviously not in both at the same time, there's a bit of a ditch between them, but I'm here nevertheless. As ever, a fun-filled and packed episode for you today, hopefully.

Today's topic is all about in-memory data grids, and in particular a product we have called GemFire. The in-memory data grid world is an interesting one in that many people haven't played with this technology and may not be familiar with it, so I want to contextualize it first and then dive into some of the details about why you might choose to use this type of solution in your own architecture. Really it's about scale: big, highly scaled systems that have very high transaction rates and very low sub-second or millisecond performance requirements, so something that has to work really, really fast. Now, as you know, traditional database technologies typically break down under those kinds of loads. They'll also break down under very high, spiky workloads.

Let me give you a real example. There's a really interesting case study we had recently from Indian Railways. According to McKinsey, India is moving from 120 million online users in 2013 to 330 million users in 2015. As you can imagine, this is a huge increase in electronic access to ticketing systems, booking systems and so on. Indian Railways' Centre for Railway Information Systems, known as CRIS, realized they had to re-architect their system. They had to cope with the increased volumes that were coming, and these are gradually increasing volumes in many cases. There are also two national holidays, one of them Diwali, that create massive spikes in usage: a population-wide migration takes place and suddenly the system gets very heavily hit. It's the very traditional seasonal spike that a lot of IT systems struggle with. Obviously it needed to work all the time and cope with new mobile experiences and new access channels as well.

Really what they found is that they had a booking system that needed to cope with 120,000-plus concurrent sessions during peak time; had to cater for over 30 million registered users; had to handle some 700,000 bookings per day; and had to handle over 300 million queries per day. It was processing the not inconsequential amount of revenue of over 600 million rupees per day, so as you can see, a pretty demanding situation. As part of their application architecture they introduced an in-memory data grid, and in particular something called Pivotal GemFire.

Now, an in-memory data grid takes the concept of accessing data from memory, which is typically the fastest way you can access data. Accessing data from disk involves seek time if you're using mechanical disks. There is still some lookup time if you're using SSDs, although that's significantly reduced, and you're still going down through multiple layers of the stack into the disk layer.
If you choose to keep data in memory you're not taking those hops, and typically you'll get much, much faster access: around ten times faster is typical, but it can be significantly more depending on the application topology. You might say, great, that's fantastic, I'll just throw some data into a big amount of RAM on one server and we're good to go. Unfortunately, as with everything in IT, there are tradeoffs to be made and things aren't always as easy as they seem. While putting things into memory is great from a speed perspective, it's not so great from a volatility perspective or from an availability perspective. If you put data into memory and that machine fails, you've lost that data; there's no persistence. If you need to scale beyond the capacity of one machine, you now have the very curly computer science problem of how to maintain cache coherency between all of those nodes. How do you make sure they can communicate information to each other? How do you divide workload between those nodes? How do you cater for failure of those nodes? How do you replicate those nodes somewhere else? Aha, you think, this is where we come back to in-memory data grids, isn't it? Exactly: this is the problem that we're trying to solve.

To round back on what happened with our friends at CRIS, they were able to implement this type of technology in their architecture, and I want to share some of the results with you, because they're kind of interesting, before we talk about the technology itself. They were able to prove that they could have 120,000 concurrent users booking e-tickets simultaneously. For them that was a 3x improvement over what they were doing before. They also found that bookings increased by 500% in the first ten minutes of running this in-memory data grid, GemFire, underneath their short-notice booking application. What they found is that the infrastructure had been holding back the customer experience, and when customers were able to engage with the application faster, they used it more. Kind of stands to reason, doesn't it? They found that their e-ticket sales grew significantly overall as a result. Their response times improved dramatically, so they're now in the sub-second response time zone, which, for a highly scaled system, is no small achievement as you can imagine. They can also handle wide variations in user demand, seasonal workloads and peaks, without having to worry about their SLAs or downtime. Their per-minute booking capacity increased five times, to 10,000 transactions per minute. It's a really interesting story. In our case study there's a great description of what they did, and you can see the results of implementing this type of technology in an environment.

To implement any technology you need to first understand it, so I want to spend a little bit of time talking about GemFire in particular as an example of a really powerful, enterprise-grade in-memory data grid, and what it does. Basically, the in-memory data grid pools memory, CPU, network resources and, optionally, some local disk storage together across multiple processes to manage application objects and their behavior. It does a lot of very sophisticated things in the back end to make this work. If you're aggregating a distributed system together you now need to manage memory. You now need to manage distribution of data.
You now need to manage availability, scalability, performance, load balancing and so on. What this technology does is handle that for you, and it uses a few techniques to do it. I'll just throw in some nomenclature so that we're comfortable with what we're talking about.

The first thing you do within your cache architecture, your in-memory data grid architecture, is define what are called regions. Data regions are analogous to tables in a relational database. This is how you manage data in a distributed fashion, stored as name-value pairs. It gives you the ability to put some sort of naming structure on top of your organizational structure. Within these regions, if you specify what's called a partitioned region, it will split the data and spread it across all the cache members in the grid. This makes it easy for applications and developers to access the data without having to worry about where it is physically placed. It simply manages itself for you.

You can also create what are called listeners to notify the application proactively about what has changed within a particular region. We'll talk a bit about the power of that shortly, because it's really important. This whole concept of being proactive about notifying the application, creating more of a push rather than a pull model, really ties into the real-time aspect. If you're building a system that is meant to be real time, you don't have a huge amount of time to be constantly polling a particular system; that changes the dynamics of the time frame in which you get access. Saying "I'm going to ask you whether something has changed, and now tell me what changed" is a round trip that is quite inefficient.

GemFire uses continuous querying to create this event-driven architecture. What it does is tie together the event taking place and the data required to analyze the event in the one activity. Again, this is important. Consider a traditional architecture, where we might say: notify me when something has changed; when it has changed I'll go and look up that data and then analyze the data relevant to the change. What this technology does is say: no, I'm going to notify you that something has changed, and at the time that I notify you I'll provide you with all the data relevant to that change, so that you can make any business decisions you need based upon that data. If you think of trading data, for example, if a price has gone from one dollar to two dollars, I'll notify you not only that the price has changed; I'll say the price has changed, old price one dollar, new price two dollars. Then, in your application code, you decide what you want to do. Again, you're taking those round trips out of the loop, which is very, very effective.

In large environments, so we're talking tens to hundreds of nodes, we also have the concept of locators. Locators do two jobs: they provide a discovery service and they provide a load balancing service. Your clients know how to find the locators, and the locators maintain a dynamic list of member servers. They know where the data is distributed and how to get to it, which makes everything very efficient and very performant.
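To make that nomenclature a little more concrete, here is a minimal sketch of a client in Java using the open source Apache Geode API, which GemFire is based on. The locator address, region name and query are illustrative assumptions rather than details from the episode, and package and class names vary slightly between GemFire and Geode releases.

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.util.CqListenerAdapter;

public class PriceWatcher {
  public static void main(String[] args) throws Exception {
    // Discover the grid through a locator; subscriptions are needed for pushed events.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("locator.example.com", 10334)   // hypothetical locator
        .setPoolSubscriptionEnabled(true)
        .create();

    // A PROXY region keeps no local copy: every get/put goes to the grid,
    // which stores the data as name-value pairs in the "Prices" region.
    Region<String, Double> prices = cache
        .<String, Double>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("Prices");                               // illustrative region name

    // Continuous query: the grid pushes the event and the changed data together.
    CqAttributesFactory cqf = new CqAttributesFactory();
    cqf.addCqListener(new CqListenerAdapter() {
      @Override
      public void onEvent(CqEvent event) {
        System.out.println("Price changed: " + event.getKey()
            + " new value=" + event.getNewValue());
      }
    });
    CqQuery watch = cache.getQueryService()
        .newCq("priceWatch", "SELECT * FROM /Prices p WHERE p > 1.0", cqf.create());
    watch.execute();

    prices.put("ACME", 2.0);   // would fire the listener on all subscribed clients
  }
}
```

The push model the episode describes is all in the listener: the client never polls, it simply receives the event together with the new value and acts on it.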
What are some of the things that we want to take advantage of in GemFire in particular? Let's talk about some of the reasons why we use it, and then we'll talk about how to achieve them, because they're tied together.

The first reason is purely high read and write throughput, the need for speed. People use an in-memory data grid in general, and GemFire in particular, because they have to have their application go faster. They can't squeeze enough out of the existing infrastructure, and if they go to disk it's too slow. If they go to memory on their own, they've got all those issues I mentioned about how to protect the data. The GemFire in-memory data grid takes care of that for you, makes sure that you have data spread across all your available nodes and lets you access it appropriately. It also allows you to grow the grid over time, so you can add additional nodes. The near-linear increase in throughput is really limited only by the backbone network capacity that you have, and in this world of fast network backbones we're looking pretty good there, I would say.

The other element, beyond raw throughput, is low and predictable latency: the ability to say I can get my data when I need it, very quickly, and, irrespective of the level of workload I've got going, it will come back in the same amount of time. It's interesting: when you look at highly scaled systems we often fall into the trap of just chasing really low latency numbers, which is a worthy goal, but what affects customers even more than that is consistency. No one likes a situation where nine times out of ten they hit a particular system and get a one-second response time, but that tenth time it's five seconds. It's really frustrating, so you want predictable latency. GemFire does a whole bunch of work under the covers to make this happen. One of the interesting things it does is manage subscriptions. For applications that have registered interest in particular notifications, it manages those subscriptions and those queries so that they are all processed in the same location and sent out at the same time, only once, for all interested clients. Again, it's making sure that it's very efficient in the way it does processing for you.

Obviously, in any grid-type technology you want high scalability; that's the point. If you're going to invest in additional hardware and in software to make it happen, you want to be able to grow very dynamically. The way GemFire handles this is by dynamically partitioning data across many members, creating a uniform data load across all the different servers. If you have hot spots in your data, you can also be more intelligent about where you spread your workload, and you can have multiple copies, with multiple systems handling those copies of the data. Again, if you have hot spots in your data, often a challenge in many architectures, you can essentially design around them by saying: I know this is going to be a bottleneck, so I'm going to make sure it is handled very, very effectively in memory all of the time.

The other nice thing is the ability to deal with unpredictability. We spoke about that Indian Railways example where there are seasonal requirements; if you've ever worked in ticket sales of any type, be it music or anything else, that's often a problem. One of the things that happens is that the clients of GemFire are continuously load balanced across the server farm based on continuous feedback from the servers themselves. The servers report how they're doing: I'm under really heavy workload here, you might want to give this to someone else to handle. What this means is that we can manage where the workload goes at any given time, based upon the partitioning scheme that we're using. This can really help us be very effective in shedding load to other nodes and making sure we're getting the most out of the infrastructure.
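On the server side, a partitioned region is just configuration on each member that hosts the data. The following is a rough sketch of an embedded Apache Geode server member; the locator address and region name are hypothetical, and in practice this would more commonly be done with configuration files or tooling rather than code.

```java
import java.io.IOException;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.server.CacheServer;

public class BookingServer {
  public static void main(String[] args) throws IOException {
    // Join the grid through the locator; each server member hosts a share of the data.
    Cache cache = new CacheFactory()
        .set("locators", "locator.example.com[10334]")   // hypothetical locator
        .create();

    // PARTITION_REDUNDANT spreads the region's buckets evenly across the hosting
    // members and keeps a redundant copy of each bucket on a different member,
    // so a failed node does not take the only copy of its data with it.
    Region<String, Object> bookings = cache
        .<String, Object>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT)
        .create("Bookings");                              // illustrative region name

    // Accept client connections; the locator advertises this server to clients.
    CacheServer server = cache.addCacheServer();
    server.setPort(40404);
    server.start();

    System.out.println("Hosting " + bookings.getFullPath());
  }
}
```

Adding more members like this one is how the grid grows: the same region definition on a new member gives the grid more buckets' worth of capacity to spread the data across.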
Now, one of the things that I mentioned earlier is that the challenge with storing things in memory is that memory has a habit of going away really, really easily. If you've ever had your laptop switch off on you unexpectedly, you know the pain of that data loss, so obviously in an in-memory data grid we're really, really careful about where the data is. We want to make sure we have multiple copies of data in memory across multiple instances. GemFire uses what's called a shared-nothing disk architecture. This means that none of the nodes use a common disk infrastructure to share any information. This is really, really good: whenever you're working with a distributed system you want to hear the words "shared nothing", because that makes life a lot easier. It also allows you to store information on disk, locally to the instances, as an additional store of data. What this means is you can be persisting information, either synchronously or asynchronously, onto the local disk store for use in a recovery situation. The other thing that we make sure of is that at least two members of the system have the same information on them. Even in the event of a node being lost, we still have another copy of the data in memory, and on disk as well. This failover process happens automatically; the grid takes care of it for you.
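A minimal sketch of that idea, under the same assumptions as the server example above: each member writes its share of the data to its own local disk store (nothing shared), and the region also keeps a redundant in-memory copy on another member. Paths and names are illustrative.

```java
import java.io.File;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class PersistentBookingServer {
  public static void main(String[] args) {
    Cache cache = new CacheFactory()
        .set("locators", "locator.example.com[10334]")   // hypothetical locator
        .create();

    // A disk store local to this member only: no shared SAN or common file system.
    // The directory must already exist on this member's local disk.
    cache.createDiskStoreFactory()
        .setDiskDirs(new File[] {new File("/var/data/gemfire/bookings")})
        .create("bookingsStore");

    // Partitioned, redundant and persistent: each bucket lives in memory on two
    // members and is also written to the owning members' local disk stores.
    Region<String, Object> bookings = cache
        .<String, Object>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT_PERSISTENT)
        .setDiskStoreName("bookingsStore")
        .setDiskSynchronous(false)   // write behind asynchronously; true writes through
        .create("Bookings");
  }
}
```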
This system is used in many cases in financial institutions for trading data and the like, where reliable event notification is really, really important. Pub/sub type architectures work on the concept of saying: I'm going to subscribe to you as a service, you're going to tell me when something has changed, and I need to know that I got it. As I mentioned before, often that is the real focus: okay, I'm going to make sure you get this notification as soon as you need it. As soon as you get the notification, then it's all hands to the pump: we're going to start doing some work, I'm going to start accessing data, querying something and so on. We want to close that loop and make it much tighter. What happens here is that we deliver the notification to the subscriber and the data through the same system. What this means is that the subscriber that receives a particular event has direct access to the related data in local memory straight away. This makes a huge difference in your application architecture: I've got the notification and the data I need, and I can now process it effectively.

You can see there's a theme here. It's this proactive, constant analysis of the data that's in the cache: as the cache is being updated with new information, we want to take some sort of action. We can do normal types of queries, on-demand queries where the application goes and queries the system, and we'll talk a bit about how that works shortly. But what people use more often is a mechanism called continuous querying. Continuous querying allows you to create a complex query over the information in the cache, and notify the subscriber and deliver all that information whenever a relevant change takes place. The continuous query is happening continuously: it's just running on the system, on the cache side, and it notifies the client side when something it would be interested in has changed.

I mentioned that those queries are running. One of the things that we can do is have them running in a parallelized fashion on the actual members of the cluster. Again, this is moving the processing closer to the data, and when you move processing closer to the data that's typically a good thing, because there's less communication across the wire, less transformation and so on. In a distributed system that's hard, because your data is spread across all these nodes: how do you bring the processing closer to the data if it's all spread out? We can execute application business logic in parallel on the GemFire members themselves. What this means is that the application can send some sort of logic, we can deploy it there, and it will run close to the data. It's very similar to the MapReduce-style model of breaking a query up, distributing it to small subsets of the data and then aggregating the results back together. A very complex topic, but all you need to know is that it happens on this particular system.
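As a sketch of that idea, again assuming the Apache Geode function execution API that GemFire exposes (the function and region names are hypothetical, and signatures vary a little between releases), a piece of logic deployed to the servers can run against only the data each member holds locally:

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

// Deployed to the servers; each member counts only the entries it holds locally.
public class CountBookingsFunction implements Function {
  @Override
  public void execute(FunctionContext context) {
    RegionFunctionContext rfc = (RegionFunctionContext) context;
    Region<?, ?> localData = PartitionRegionHelper.getLocalDataForContext(rfc);
    context.getResultSender().lastResult(localData.size());   // this member's partial count
  }

  @Override
  public String getId() {
    return "CountBookings";
  }
}
```

The caller then fans the function out across the region and aggregates the partial results, something like the following, where bookings refers to the region from the earlier sketches:

```java
import java.util.List;
import org.apache.geode.cache.execute.FunctionService;
import org.apache.geode.cache.execute.ResultCollector;

ResultCollector<?, ?> collector = FunctionService.onRegion(bookings)
    .execute(new CountBookingsFunction());
int total = ((List<Integer>) collector.getResult())
    .stream().mapToInt(Integer::intValue).sum();
```

That is the "move the processing to the data" pattern from the episode: the heavy work happens in parallel next to each member's local buckets, and only small partial results travel back over the wire.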
More and more of these systems are critical to the way businesses run, so having them running on just the one site is not an acceptable risk; we have to have multi-site distribution. Now you say: aha, even more problems, Simon. Now you're telling me I've got to have a distributed grid of nodes holding data in memory, all coherent with each other, and now we have to send it across a WAN to some other location and maintain some sort of cache coherency. This is hard, is it not? To which I would answer: yes, it is very complicated and very difficult. One of the things that we do with GemFire is let you create what's called a gateway sender configuration. This allows you to configure particular nodes whose job is to send data from one site to another. Because the gateway sender has an intelligent view of what's going on in the cluster, it only sends the deltas across: whatever has changed, the change updates that have to go to the other sites. This is often used when you want to send data from London to New York, for example, and have it consistent within a very short amount of time. Obviously, you're still constrained by the time it takes for the data to transit the WAN, but if you're sending as little data across the WAN as possible, and you're also parallelizing those sends (you can create multiple gateway sender instances in parallel), you're looking good. As we know, connections can be unreliable or unpredictable at times, so gateway senders will queue information persistently, wait for that node or that link to come back, and then catch up the information as well. So there's a lot going on under the covers to make sure that happens.

How do you access this information? There are two primary ways to do it. One way is to use the native libraries, so you can use C#, C++ and Java; they're the common types of applications that will be accessing this sort of data. There is now also a REST-based interface, which allows you to access the in-memory grid's data cache using a REST API. What this means is you can use Ruby, Python and so on and access the data normally. The data is represented as what are called PDX (Portable Data eXchange) instances. This allows you to serialize the information, bring it back into your application and use it in applications that may not have native libraries. Using the native libraries is the easiest way, but having a REST-based interface obviously opens things up to a lot more options as well.

If you're wondering about security, as we often do, you can have multiple distinct users in your applications. This means you can have smaller subsets of data that can only be accessed by particular application instances or particular customers, each with their own set of credentials. You can control who can access what, which, in a large data grid situation, becomes really, really important.

We talked a bit about querying and what we can do. The querying is similar to SQL; I call it SQL-like. We use a query syntax called Object Query Language, or OQL. This allows you to query the information that's there in each region. It has the typical constructs that you may be used to seeing: you can do a SELECT statement, you can use WHERE clauses, you can do ORDER BY and so on. It's a very familiar development environment, but you're working against these name-value pairs in these regions, and because it's all in memory you're getting your answers very, very quickly. The querying functionality is interesting, but the main feature that people like to use is the continuous query functionality. This is where they're creating these events that are then published back to interested listeners, who then take business action. Really, that's the cycle that we like to use to make sure that something is working very, very effectively. You can imagine that you've written your application to say: go and connect to this data grid somewhere, I'm going to point you at some locators, the locators will route me to the particular nodes that I need to be interested in, and as soon as something changes I'm going to do something with it. As soon as the data changes, I get told the data has changed and I take action on it.

That's it. That's what an in-memory data grid does. Obviously, there's a lot more to look into there in terms of how you might use it or apply it in your environment. Again, data grids can be small, a few gigabytes, all the way up to many, many terabytes of data. It depends on your workload. It depends on your application topology. What it is, though, is a really powerful capability in your architecture toolkit that many people aren't aware of. What it brings to the table is the ability to have very low latency access to your data at very high scalability, and it takes away a lot of the complexity that comes with trying to build these distributed systems yourself using memory-based caches. I hope that was useful. There are some links in the show notes if you want to dive more deeply into it. Until then, I'll let you absorb all that and keep on building.

Announcer: Thanks for listening to the All Things Pivotal Podcast. If you enjoyed it, please share it with others. We love hearing your feedback, so please send any comments or suggestions to podcast@pivotal.io.

About the Author

Simon Elisha is CTO & Senior Manager of Field Engineering for Australia & New Zealand at Pivotal. With over 24 years of industry experience in everything from mainframes to the latest cloud architectures, Simon brings a refreshing and insightful view of the business value of IT. Passionate about technology, he is a pragmatist who looks for the best solution to the task at hand. He has held roles at EDS, PricewaterhouseCoopers, VERITAS Software, Hitachi Data Systems, Cisco Systems and Amazon Web Services.
