What’s Your Decomposition Strategy?

June 2, 2017 Matt Stine

It’s a simple question really, but very few people have an answer.

The most frequent and most important question I get from the developers and architects with whom I work is this:

What microservices should we have?

If you stop and ponder it for a minute, you’ll quickly realize that if you get the answer wrong, it doesn’t really matter what else you get right.

I’ve discussed this question in some form or fashion with almost every guest that I’ve had on Software Architecture Radio. During one recent, as-yet-unpublished conversation with Simon Brown, he turned the question around a bit, and it became the title of this post:

What is your decomposition strategy?

Simon went on to stress that decomposition is in fact a fairly well-characterized idea, one we used to talk about in computer science quite a bit. Academics wrote fascinating papers on the topic, including one that was recommended to me by both Simon and Mike Gehard in completely independent conversations, months apart.

If you have time, dig into the wealth of resources out there on decomposition. If you don’t, let me boil it down a bit for you:

When you create software modules or components, what thinking do you apply as part of that decision making process?

As it turns out, there are several strategies that we might consider:

Bounded Contexts

Bounded contexts come, of course, from Eric Evans’ book Domain-Driven Design. This is where everyone thinks they are starting right now.

bounded contexts may be the most dangerous concept in enterprise software today

 — @mstine

everyone is using them to organize software, few have a proper understanding of them, and no one knows they misunderstand them

 — @mstine

I hear a lot of what I believe to be incorrect definitions of bounded contexts. So here’s my take: a bounded context is a set of domain concepts, sharing an internally consistent ubiquitous language, that is accessed through a well-defined interface. Basically, we’re drawing a circle around a group of concepts. If we’re both standing inside that circle, we use the same words to describe the same things, and we always agree (which is what makes the language ubiquitous). If one of us is standing inside the circle and the other is standing outside it, the bounded context says nothing about our agreement on terms, but it does define the protocol (in the form of an API) that keeps our communication rational.

Bounded contexts found within your business domain are excellent candidates for decomposition boundaries: within a module or component’s boundary, you’ll be absolutely clear about what concepts exist and how they relate, which helps you create a very cohesive module or component.
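To make that concrete, here’s a minimal sketch in Java (all names are invented) of two contexts that both use the word “Product” but mean different things by it, and that integrate only through a well-defined interface rather than a shared model:

```java
// A minimal, hypothetical sketch: two bounded contexts that each use the word
// "Product", but mean different things by it. They integrate only through a
// small, well-defined interface, never by sharing a model class.
public class BoundedContextSketch {

    // Inside the Catalog context, "Product" means something we describe and price.
    static class Catalog {
        record Product(String sku, String title, long priceInCents) {}

        // The well-defined interface other contexts are allowed to call.
        interface Api {
            ProductSummary findBySku(String sku);
            record ProductSummary(String sku, String title) {}
        }
    }

    // Inside the Fulfillment context, "Product" means something we pick, pack, and ship.
    static class Fulfillment {
        record Product(String sku, double weightInKg, String warehouseBin) {}

        // Fulfillment depends on Catalog only through Catalog.Api, so the two
        // teams can evolve their internal models independently.
        static Product enrich(Catalog.Api catalog, String sku, double weightInKg, String bin) {
            Catalog.Api.ProductSummary summary = catalog.findBySku(sku); // protocol, not shared model
            return new Product(summary.sku(), weightInKg, bin);
        }
    }
}
```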

Value Streams

Value streams are activity flows that allow an organization to carry a customer request from inception to production. We can also think of these as “concept to cash” flows (to borrow Mary Poppendieck’s language), or simply “the path from someone having an idea that we think will make us some money to that idea actually making us some money.”

The DevOps Handbook contains a rather nice definition:

In DevOps, we typically define our technology value stream as the process required to convert a business hypothesis into a technology-enabled service that delivers value to the customer.

The important thing to realize about value streams is that an organization will usually have many of them, and they will often want to have independent change velocities. By velocity, I mean the physics definition: the speed of something in a given direction. Value streams definitely have a speed associated with them, but they also have a direction, and two value streams will often want to vary both speed and direction independently of one another.

For example: an organization may have a mature set of technical capabilities around customer management. Customer management gets a few feature requests a quarter, but they’re usually not technically complex, and they’re rarely urgent. That same organization may have a brand new business offering around product reviews and recommendations. The domain is evolving rapidly, and the business owner wants to test many different hypotheses with their users.

If we couple the delivery of these technical capabilities, we’re also coupling their value streams. Coupling independent change directions introduces technical risk, as one value stream’s changes could have an adverse effect on the other value stream’s changes. Coupling independent change velocities can force streams that want to release rarely to release more often, and streams that want to release continuously to slow down.

Decomposing your system into independently deployable microservices for each independent value stream gives you a tool to support their independent change velocities.

Mark Richards and I discussed this idea at length in Episode 3 of Software Architecture Radio.

Failure Domains

Components that must be able to fail independently should be isolated from one another, such that a failure in one cannot trigger a failure in another. This is an application of the Bulkhead pattern, introduced by Michael Nygard in his book Release It!

Ships are divided into multiple watertight compartments. Why? If they were not, and a ship’s hull was damaged, the entire hull could become compromised and cause the ship to sink. By using bulkheads to divide the ship into multiple watertight compartments, we can limit the scope of hull compromise caused by one incident, and hopefully, save the ship!

Let’s say we have some features within our system that depend on an external legacy service, and that legacy service can be quickly overwhelmed at scale. If we keep those features coupled to others that don’t share the dependency, a failure in the legacy service could cripple the unrelated features as well (e.g. by exhausting a shared thread pool).
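As an illustration, here’s a minimal in-process sketch of the Bulkhead pattern in Java (names and pool sizes are invented): each downstream dependency gets its own bounded thread pool, so a hung legacy call can only exhaust its own pool:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A minimal, hypothetical sketch of the Bulkhead pattern inside a single process.
public class BulkheadSketch {

    // Small, dedicated pool for calls to the flaky legacy service.
    private final ExecutorService legacyPool = Executors.newFixedThreadPool(4);

    // Separate pool for features that don't depend on the legacy service.
    private final ExecutorService searchPool = Executors.newFixedThreadPool(16);

    public Future<String> callLegacyService(String request) {
        // If the legacy service hangs, only these 4 threads block; search is unaffected.
        Callable<String> task = () -> slowLegacyCall(request);
        return legacyPool.submit(task);
    }

    public Future<String> search(String query) {
        Callable<String> task = () -> runSearch(query);
        return searchPool.submit(task);
    }

    // Stand-ins for real integrations.
    private String slowLegacyCall(String request) {
        // imagine a blocking call to a legacy service that is overwhelmed at scale
        return "legacy:" + request;
    }

    private String runSearch(String query) {
        return "results for " + query;
    }
}
```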

Decomposing your system into independently deployable microservices for each failure domain gives you a tool to isolate these failure domains. But be careful, as creating a distributed system will introduce new technical complexity and failure domains.

Anti-Corruption Layers

When a system needs to integrate with a legacy system or one beyond the boundary of organizational control (e.g. integration with a third-party logistics platform), the architecture can be decoupled from the legacy/external system’s conceptual model or API by proxying with a microservice. In Domain-Driven Design, Eric Evans described components that play this role as anti-corruption layers. Their purpose is to allow the integration of two systems without allowing the domain model of one system to corrupt the domain model of the other. These components are responsible for solving three problems:

  1. System integration
  2. Protocol translation
  3. Model translation
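To illustrate, here’s a hypothetical sketch of an anti-corruption layer in Java (all names are invented) that handles each of those three responsibilities while keeping a third-party logistics model out of our domain:

```java
// A hypothetical sketch of an anti-corruption layer (all names invented):
// a thin component that integrates with a third-party logistics API, speaks
// its protocol, and translates its model into our own domain model so the
// external concepts never leak into the rest of the system.
public class ShippingAntiCorruptionLayer {

    // 1. System integration: the only place that knows the external system exists.
    private final LegacyLogisticsClient legacyClient;

    public ShippingAntiCorruptionLayer(LegacyLogisticsClient legacyClient) {
        this.legacyClient = legacyClient;
    }

    public Shipment trackShipment(String orderId) {
        // 2. Protocol translation: our callers pass an order id; the legacy API
        //    wants its own consignment reference format.
        String consignmentRef = "CONS-" + orderId;
        LegacyConsignment consignment = legacyClient.fetchConsignment(consignmentRef);

        // 3. Model translation: map the external model onto our domain language.
        Shipment.Status status = switch (consignment.statusCode()) {
            case "DSP" -> Shipment.Status.IN_TRANSIT;
            case "DLV" -> Shipment.Status.DELIVERED;
            default -> Shipment.Status.UNKNOWN;
        };
        return new Shipment(orderId, status);
    }

    // Our domain model, expressed in our ubiquitous language.
    public record Shipment(String orderId, Status status) {
        public enum Status { IN_TRANSIT, DELIVERED, UNKNOWN }
    }

    // The external system's model and API, as we see them.
    public interface LegacyLogisticsClient {
        LegacyConsignment fetchConsignment(String consignmentRef);
    }

    public record LegacyConsignment(String consignmentRef, String statusCode) {}
}
```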

I discuss anti-corruption layers in greater detail in my book Migrating to Cloud-Native Application Architectures.

Single Responsibility Principle

From the SOLID principles of object-oriented design, the SRP states that a component should have “only one reason to change.” This is similar in spirit to value streams, which are coupled to visible, line-of-business-driven change; the SRP is typically more granular, focused on technical dimensions of change. For example, in an API with search and storage capabilities that each depend on third-party services, we probably don’t want a change in our storage provider to affect the component managing search. These are distinct axes of change, and therefore distinct responsibilities.
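As a rough sketch of that example in Java (all names are invented), putting search and storage behind their own interfaces means a storage-provider change touches only the storage implementation:

```java
import java.util.List;

// A hypothetical sketch of the SRP applied to the search/storage example:
// each responsibility (axis of change) gets its own component behind its own interface.
public class SrpSketch {

    interface DocumentStorage {
        void save(String id, String content);
        String load(String id);
    }

    interface DocumentSearch {
        List<String> findIdsMatching(String query);
    }

    // Only this class changes if we move from one storage provider to another.
    static class S3DocumentStorage implements DocumentStorage {
        public void save(String id, String content) { /* call the storage provider's SDK */ }
        public String load(String id) { return ""; /* fetch from the storage provider */ }
    }

    // Only this class changes if we switch search providers or tune relevance.
    static class ElasticDocumentSearch implements DocumentSearch {
        public List<String> findIdsMatching(String query) {
            return List.of(); // query the search provider
        }
    }

    // The API composes the two responsibilities without coupling their reasons to change.
    static class DocumentApi {
        private final DocumentStorage storage;
        private final DocumentSearch searcher;

        DocumentApi(DocumentStorage storage, DocumentSearch searcher) {
            this.storage = storage;
            this.searcher = searcher;
        }

        List<String> search(String query) { return searcher.findIdsMatching(query); }
        void store(String id, String content) { storage.save(id, content); }
    }
}
```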

I discuss the relationship of SOLID to microservices in great detail in my blog post: Microservices are SOLID.

You Don’t Have to Pick Just One!

As it turns out, these strategies are not mutually exclusive. You can apply one or more of them in concert with one another. In fact, in a system of any non-trivial complexity, you will probably need multiple strategies to manage all of the architectural concerns in the system.

What follows is a quick hand-drawn sketch I made with my iPad and Apple Pencil on a flight to one of our customers to discuss this exact topic. Hopefully it will help you understand the potential relationships between these decomposition strategies:

The Kitchen Sink of Architectural Decomposition Strategies



