While some may use the term “big, fast data” to talk about real-time Hadoop, Storm, or Spark, a similar term with early traction in Wall Street technology circles was “complex event processing” (CEP). This set of capabilities is growing fast—according to Markets and Markets, technologies for CEP are estimated to grow at 34.2% CAGR from 2013 to 2018.
In this Q&A session, C24’s founder and CTO, John Davies, explains how their toolset, along with Spring, RabbitMQ, and Pivotal GemFire, can blow prior complex event processing benchmarks out of the water.
John is a technology leader who has played chief architect roles at Visa, JP Morgan Chase, and BNP Paribas. He also successfully founded, grew, sold, and re-acquired a software company in the same space: Java integration tools for financial services markets. As a thought leader on the technology underpinnings of how banks move money, John answers questions about how C24’s Java toolset works with Spring, Cloud Foundry, RabbitMQ, and GemFire. Ultimately, he reveals how these can be used to scale complex event processing to a massive, new level in the cloud.
Could you tell us a bit about your company?
Yes, C24 is a product company. We started in 2000. It was sold to IONA in 2007, and IONA was then bought by Progress Software in 2008. In 2011, two colleagues and I acquired C24 back; it was spun out. Today, we have over 40 people with offices in the U.S., Europe, Japan, and London, which is our headquarters. Over the years, we have done a lot of work with EMC, VMware, and Pivotal as they have evolved.
Our product basically speeds development and reduces maintenance of data and message integration by binding Java to the data and message models. It is similar to JAXB but not restricted to XML; we can handle CSV, binary, and all sorts of strange formats. We have an IDE that allows data models and messages to be imported, provides hundreds of standard formats such as FpML, ISO 20022, and SWIFT, allows syntactic and semantic rules to be created, enables enrichment and transforms, and provides a complete design toolset that also creates runtime classes, documentation, automated JUnit tests, builds, and more. There is integration with Spring, GemFire, and RabbitMQ, as well as others like Mule, Camel, GigaSpaces, Coherence, MarkLogic, and MongoDB.
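The JAXB-style binding described here, parsing a non-XML record into a typed and validated Java object, can be sketched in plain Java. The `Payment` class, field layout, and validation rules below are invented for illustration; this is not C24's actual API, just the general pattern of binding plus syntactic and semantic checks.

```java
import java.math.BigDecimal;

// Illustrative sketch only: mimics binding a delimited (CSV-style) record
// to a typed Java object and validating it, as JAXB does for XML.
public class CsvBindingSketch {

    // A hypothetical typed model class, analogous to a generated binding class.
    static final class Payment {
        final String reference;
        final String currency;
        final BigDecimal amount;

        Payment(String reference, String currency, BigDecimal amount) {
            this.reference = reference;
            this.currency = currency;
            this.amount = amount;
        }

        // Semantic rules: amount must be positive, currency an uppercase ISO code.
        boolean isValid() {
            return amount.signum() > 0 && currency.matches("[A-Z]{3}");
        }
    }

    // Syntactic parse: split the delimited record into typed fields.
    static Payment parse(String record) {
        String[] f = record.split(",");
        if (f.length != 3) {
            throw new IllegalArgumentException("expected 3 fields, got " + f.length);
        }
        return new Payment(f[0], f[1], new BigDecimal(f[2]));
    }

    public static void main(String[] args) {
        Payment p = parse("PAY-001,USD,1500.00");
        System.out.println(p.reference + " valid=" + p.isValid());
    }
}
```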
As far as customers go, our toolset is mostly used in banks, although thanks to Pivotal we’re expanding into telcos too. Our bigger customers are the Federal Reserve, Citi, JP Morgan Chase, Fidelity, RBS, RBC, and UBS. They use our technology for much of their internal- and external-facing integration, in some cases front office (FIX), some middle office (FpML, DTCC, and ISO 20022), and some back office (SWIFT). Among other things, our technology helps them validate outgoing messages, transform and store them, and determine internal routing.
How do you work with Spring?
In wholesale banking, the predominant programming language is Java, and Spring is pretty much the de facto framework; almost anyone in banking who is using Java is using Spring. When you look at the world of messaging, it doesn’t matter whether your integration, SOA, or ESB uses RabbitMQ, Mule, Oracle, Camel, or anything else; Spring is there.
We have a deep integration with Spring. Our marshaling and transformation functions can be used as first-class Spring Integration components, and we extend core Spring Integration with implementations of transformers, selectors, and routers. Spring Batch, of course, is another critical technology, especially with our customers working with such high volumes.
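The transformer, selector, and router roles mentioned here are essentially functions over messages. A minimal stdlib-only sketch of the pattern follows; the method names and routing logic are illustrative, and the real contracts live in Spring Integration's own interfaces, not here.

```java
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative only: minimal shapes of the three Spring Integration roles.
public class PipelineSketch {

    // A transformer maps one payload to another (e.g. raw text to a bound object).
    static <A, B> Function<A, B> transformer(Function<A, B> f) { return f; }

    // A selector (filter) accepts or rejects a message.
    static <A> Predicate<A> selector(Predicate<A> p) { return p; }

    // A router picks an output channel name from the message content.
    static String route(Map<String, String> message) {
        return "SWIFT".equals(message.get("format")) ? "backOffice" : "frontOffice";
    }

    public static void main(String[] args) {
        Function<String, String> upper = transformer(String::toUpperCase);
        Predicate<String> nonEmpty = selector(s -> !s.isEmpty());
        System.out.println(upper.apply("mt103"));
        System.out.println(nonEmpty.test("mt103"));
        System.out.println(route(Map.of("format", "SWIFT")));
    }
}
```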
How do you work with Cloud Foundry?
When I first saw Cloud Foundry two years ago, I thought it was absolutely brilliant. I thought that all software applications would be packaged and deployed this way in the future. There have been massive development efforts with it recently, and there is a lot going on with this platform in the software industry today. We have publicly hosted services running on Cloud Foundry for our clients to test and validate their SWIFT and ISO 20022 messages. It’s very useful for us to provide a common platform, and our clients find it easy to deploy. They can use our version in the cloud or bring it in house whenever they want.
How do you work with RabbitMQ?
A while back, while working at JP Morgan, I made the first public announcement of AMQP at a show in New York, but that’s all I can claim credit for; the inventor, John O’Hara, was the genius behind it. We’ve known and worked with RabbitMQ founder Alexis Richardson and his team since they started. However, these are not the reasons we use RabbitMQ.
We use RabbitMQ because many of our clients use it, and some of them use it for ALL their internal systems. It is more cost-effective and faster, and has a more modern architecture, than prior messaging technologies like MQ Series and Tibco RV. Importantly, it binds to different languages and frameworks: Java, C++, C#, Spring, Scala, Clojure, .NET, and more. Critically, though, it’s an open wire-level protocol, meaning that AMQP messages can be understood by different AMQP vendors’ products, avoiding lock-in. Banks hate to be locked in to one technology.
So, we work with RabbitMQ because our clients chose it.
How do you work with Pivotal GemFire?
Whether you use GemFire as a scalable cache, in-memory data grid, an ESB data store, or whatever you choose to call it, it is a really important product in the portfolio. Banks and telcos are managing huge amounts of messages. We are talking 400,000 messages a second. These need to be stored and searched. This volume gets into big data quite quickly, and GemFire scales well in this area of big, fast data or complex event processing.
With our product, we can go from a binary, CSV, or XML message to a C24 integration object, and then into GemFire in a native or binary format. Basically, this means a message can be taken directly off of RabbitMQ, go through Spring Integration, and land directly in GemFire with five lines of code.
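A pipeline like this could be wired in a few lines of Spring Integration XML. The adapter elements below come from the spring-integration-amqp and spring-integration-gemfire modules; the queue, channel, and region names are purely illustrative, and the `c24Transformer` bean is a hypothetical stand-in for the C24 transformer component, not its real bean name.

```xml
<!-- Illustrative wiring: RabbitMQ in, transform, GemFire out. -->
<int-amqp:inbound-channel-adapter queue-names="trades"
                                  connection-factory="rabbitConnectionFactory"
                                  channel="rawMessages"/>

<!-- Hypothetical C24-style transformer bean binding the raw payload
     to a typed integration object. -->
<int:transformer input-channel="rawMessages"
                 output-channel="boundMessages"
                 ref="c24Transformer"/>

<int-gfe:outbound-channel-adapter channel="boundMessages"
                                  region="trades"/>
```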
With our latest version, we can take XML messages and convert them to binary arrays, giving not only over an order of magnitude smaller memory footprint but also a comparable performance boost. It’s been called the “back to binary” movement.
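The footprint effect of moving from XML text to a packed binary layout can be illustrated with plain Java. The field layout below is made up for the example and is not C24's actual codec; it just shows the same three fields carried both ways.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative only: packs the fields of a verbose XML snippet into a
// fixed binary layout to show the difference in footprint.
public class BinaryPacking {

    // Fixed layout: reference padded to 8 bytes, 3-byte currency, 8-byte amount.
    static byte[] pack(String ref, String ccy, long amountMinorUnits) {
        ByteBuffer buf = ByteBuffer.allocate(8 + 3 + 8);
        buf.put(String.format("%-8s", ref).getBytes(StandardCharsets.US_ASCII));
        buf.put(ccy.getBytes(StandardCharsets.US_ASCII));
        buf.putLong(amountMinorUnits);
        return buf.array();
    }

    public static void main(String[] args) {
        String xml = "<payment><ref>PAY-001</ref><ccy>USD</ccy>"
                   + "<amount>1500.00</amount></payment>";
        byte[] binary = pack("PAY-001", "USD", 150000L);
        System.out.println("xml bytes:    " + xml.getBytes(StandardCharsets.US_ASCII).length);
        System.out.println("binary bytes: " + binary.length);
    }
}
```

The ratio grows further in practice because real messages carry far more tags per field than this toy example.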
In today’s financial markets, it is easy to see how big data needs real-time analysis. Real-time analysis means we need complex event processing to detect, filter, aggregate, or transform event patterns, hierarchies, and relationships on massive data sets. With GemFire, Spring, and C24, we can process over one million complex events per second. Previously, we saw benchmarks with message throughput of about 20,000 messages per second, so this is more than an order of magnitude improvement; our presentation at the last SpringOne covered this architecture.
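At its simplest, complex event processing combines filtering and aggregation over an event stream. A stdlib-only sketch follows, with the event fields and threshold invented for illustration: it detects an aggregate pattern (total notional per currency exceeding a limit) over a small in-memory stream.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: aggregate-pattern detection over a stream of trade events.
public class CepSketch {

    static final class Trade {
        final String currency;
        final long notional;

        Trade(String currency, long notional) {
            this.currency = currency;
            this.notional = notional;
        }

        String currency() { return currency; }
        long notional() { return notional; }
    }

    // Aggregate notional per currency, keeping only totals above the threshold.
    static Map<String, Long> largeExposures(List<Trade> trades, long threshold) {
        return trades.stream()
                .collect(Collectors.groupingBy(Trade::currency,
                        Collectors.summingLong(Trade::notional)))
                .entrySet().stream()
                .filter(e -> e.getValue() > threshold)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        List<Trade> trades = List.of(
                new Trade("USD", 600_000),
                new Trade("USD", 500_000),
                new Trade("EUR", 200_000));
        System.out.println(largeExposures(trades, 1_000_000));
    }
}
```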
About the Author: Adam Bloom