Application Integration Needs A Microservices Reboot

February 16, 2017 Jeff Kelly

There’s more to cloud-native application development than using cloud infrastructure and adopting Agile and DevOps, important as those are. You also need to adopt new tools and technologies that make it practical to apply the tenets of these approaches at every phase of the cloud-native application development lifecycle. Otherwise, you end up with a watered-down version that only marginally improves developer productivity and doesn’t achieve increased release velocity.

One phase of the application lifecycle that sometimes gets overlooked when organizations attempt to transform how they build software is application integration. No application lives in a vacuum. Virtually all applications need to communicate with other applications and systems, including underlying databases, at some point. There are a number of approaches to application integration. One common approach is the use of an enterprise service bus, or ESB, to enable messaging between applications. ESBs are great when you’re dealing with mostly fixed or slow-changing endpoints, you aren’t connecting to many SaaS or dynamic endpoints, and your large, centralized integration team is responsive.

However, ESBs - and the centralized integration teams that operate them - fall down in a cloud-native, microservices environment with hundreds or thousands of discrete, constantly evolving applications developed in-house and an increasing number of SaaS applications. Yet some enterprises continue to use ESBs in conjunction with Agile practices, continuous integration/continuous delivery and microservices applications running in the cloud, undermining their efforts to ship code early and often.

To understand why ESBs are not compatible with effective cloud-native development practices, you first need to understand their initial value proposition.

The ESB Bottleneck

Enterprise application integration (EAI) engines and, later, ESBs made life a whole lot easier for developers when they first hit the scene 15 or so years ago. For the first time, integration developers had a centralized view of all the applications deployed in their environments and a single place to add, manage and monitor integrations between them. Many companies had all their mission-critical operations flowing through ESBs, which connected all their packaged applications that didn’t naturally integrate or communicate with one another. Integration developers no longer needed to build and manage custom, point-to-point integrations between applications, nor bake their own common mediation capabilities like format translation, content transformation, security protocol bridging, and reliable messaging - an approach that had resulted in spaghetti code and very complex, brittle environments.

Sounds great, and ESBs really did make integration developers’ lives easier. But there are a number of characteristics of ESBs that, while valuable in more traditional application environments, don’t translate well to Agile development and cloud-native microservices.

The most obvious is the centralized nature of ESBs. Centralization causes a number of issues.

From a process perspective, any time a particular step in a multi-step process is centralized, there is the risk of bottlenecks developing. And bottlenecks are antithetical to Agile development and DevOps. In the case of ESBs, the technology itself is expensive and, more importantly, difficult to manage. As a result of their complexity and unique frameworks, ESBs require specialized developers and operators. Most companies have only a handful of these highly-paid, specialized integration developers on staff, maybe three or four people in the whole company, who know how to run ESBs, including understanding the ESB mental model and how to scale them. These specialists are usually organized into centralized, shared-service organizations supporting application integration needs throughout the enterprise. Application integration teams get inundated with requests for new integrations and must simultaneously maintain and grow their ESBs, but they can only do so much so fast. Inevitably, requests start backing up.

The whole point of Agile and DevOps is to continuously ship small batches of code to production that add immediate, if iterative, value to users. To achieve this, “the goal is to democratize all parts of the delivery pipeline from building the apps, to building the integrations, to working with the data models,” as my colleague Fred Melo puts it. Said another way, Agile developers work across—and are responsible for—the entire application lifecycle. Stopping mid-cycle to throw code over the wall to a small team of specialized integration developers to build application integrations is incompatible with this approach, the same way throwing code over the wall to an operations team is. It slows the entire cloud-native application development process down, preventing continuous delivery of code into production.

High Availability Is But a Dream

But bottlenecks aren’t just antithetical to Agile and DevOps, they’re completely at odds with the principles of cloud architecture. Cloud environments run on huge numbers of distributed, commodity machines. If a machine dies - as they inevitably do - another one picks up the slack. If an entire data center goes down, a completely different data center can take over. That’s how cloud services maintain high availability (HA).

An ESB isn’t capable of taking advantage of these HA capabilities because it is a monolithic piece of software that runs on a single (albeit expensive) scale-up machine. So if an ESB goes down due to a hardware failure or even an entire data center going offline, all the applications it supports - be they traditional applications or cloud-native applications - go down with it. As bad as this sounds, it was probably survivable when most enterprise applications were used from 9-to-5 by internal employees, as was the case ten years ago when ESBs were in their heyday. But in today’s environment, with 24/7 customer-facing applications, any downtime results in lost revenue and potentially severely damaged relationships with customers.

A related issue is maintenance. ESBs, like most traditional software, require maintenance windows in which they are taken offline to apply patches and upgrades. There’s no blue-green concept with ESBs. Again, any downtime impacts the top line and can lead to unhappy users and customers.

Scale and Performance Take a Hit

The nature of ESB hardware also impacts scalability and performance. Back in 2005, most companies used only a handful of prepackaged applications and weren’t processing millions of transactions a day. Today, as enterprises develop more and more applications in-house, many using microservices, the number of applications is increasing dramatically. We’ve also witnessed an explosion of SaaS apps and web endpoints that don't naturally integrate with traditional ESBs. And customer-facing applications, including mobile applications, that generate huge volumes of low-latency transactions are commonplace today.

With software that runs on a distributed cloud-native architecture, scaling is as simple as spinning up a new node or container in the cloud. Most monolithic software, including most ESBs, doesn’t scale out this way, or does so only at considerable effort and expense. Rather, scaling an ESB usually requires purchasing another expensive, scale-up machine. Even then, performance can take a hit due to the sheer volume of applications and transactions being supported by a single, high-end box. When enterprise application users are internal workers who have no choice but to use certain apps, it’s acceptable if the UI takes a minute or more to load. But consumers using mobile ecommerce apps will only wait a second or two for an app to load before moving on. An overtaxed ESB simply can’t support that type of performance demand.

A Microservices Approach to Application Integration

A new approach to application integration is needed that enables, not hinders, cloud-native application development and continuously shipping code.

In Pivotal’s opinion, this requires a microservices-based approach in which integrations are developed just like any other microservices application and don’t require specialized, expensive software or skills. With a microservices approach, you don’t need highly-paid, expert integration developers operating expensive, proprietary ESBs. Rather, you need microservices developers who understand asynchronous processing, know how to write good code, and know how to leverage modern message queues and streaming platforms like RabbitMQ and Apache Kafka, respectively. This approach is less costly, which makes CIOs happy, and supports Agile development and DevOps, which makes everybody - CIOs, developers, the business and customers - happy.
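To make the idea concrete, here’s a minimal sketch of an integration written as an ordinary microservice. It uses Python’s in-process queue.Queue purely as a stand-in for a broker like RabbitMQ, and the queue names and message fields (OrdNo, Amt, and so on) are illustrative assumptions, not any real system’s schema. The service consumes events, performs the kind of format translation an ESB used to centralize, and republishes the result:

```python
import json
import queue
import threading

# In-process queues stand in for broker queues (e.g., RabbitMQ).
# The queue names are hypothetical, chosen only for this sketch.
orders_raw = queue.Queue()
orders_normalized = queue.Queue()

def integration_service():
    """A small integration microservice: consume, transform, republish.

    This is the mediation an ESB would have centralized - here,
    translating a legacy order format into a normalized one.
    """
    while True:
        message = orders_raw.get()
        if message is None:  # sentinel used to shut the worker down
            break
        legacy = json.loads(message)
        normalized = {
            "order_id": legacy["OrdNo"],
            "amount_cents": round(float(legacy["Amt"]) * 100),
            "currency": legacy.get("Ccy", "USD"),
        }
        orders_normalized.put(json.dumps(normalized))

# Run the service on its own thread, as a separate process would run
# independently in production.
worker = threading.Thread(target=integration_service)
worker.start()

orders_raw.put(json.dumps({"OrdNo": "A-1001", "Amt": "19.99"}))
orders_raw.put(None)  # stop after the one message
worker.join()

result = json.loads(orders_normalized.get())
print(result)  # {'order_id': 'A-1001', 'amount_cents': 1999, 'currency': 'USD'}
```

Because the service is just code reading from and writing to queues, any developer on the team can write, test, deploy and scale it like any other microservice; swapping the in-process queues for RabbitMQ channels or Kafka topics changes the transport, not the pattern.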

Let’s take a moment to doff our cap to ESBs. They served us well for more than a decade. But like all good technology, their day has passed. It’s time to adopt a microservices approach to application integration. To learn more about how to develop microservices applications to support application integration, check out this blog post from Fred Melo.

About the Author

Jeff Kelly

Jeff Kelly is a Director of Partner Marketing at Pivotal Software. Prior to joining Pivotal, Jeff was the lead industry analyst covering Big Data analytics at Wikibon. Before that, Jeff covered enterprise software as a reporter and editor at TechTarget. He received his B.A. in American studies from Providence College and his M.A. in journalism from Northeastern University.
