Should That Be a Microservice? Part 2: Multiple Rates of Change

July 28, 2022 Nathaniel Schutta

In the first part of this series, we laid out a set of principles to help you understand when microservices can be a useful architectural choice. Here, we explore one of those principles, multiple rates of change, in more detail.

Recall our Widget.io example, a prototypical shopping app.

We discovered that our Cart and Inventory functions haven’t been updated in some time. Meanwhile, the Recommendation Engine and Search capabilities are modified frequently.

Splitting those two modules—Recommendation Engine and Search—into microservices would allow the respective teams to iterate at a faster pace. This approach will help us quickly deliver business value.

In this fictitious example, we can simply declare “these modules change a lot.” But that won’t fly in the real world. How do we find parts of our application that evolve at different rates? More specifically, how do we find the components that change far faster than the rest?

Normally, software developers skew logical. But let’s use our emotional and rational brains in concert. Odds are, you have an inkling about the part of your app most likely to benefit from faster iteration. Trust your gut instincts!

But you shouldn’t rely entirely on feelings. We can use our source code management system to give us a “heat map” of our code. With a git repository, for instance, we can run git log with a few command-line options piped through common Linux tools. We can generate a “top ten” list of the most-committed files with a command like this:

git log --pretty=format: --name-only | sort | uniq -c | sort -rg | head -10
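One tweak worth making: “rate of change” is about the recent past, not all-time history, so consider scoping the log to a window. Here is a small Python sketch of the same pipeline with a time filter; the six-month window is an assumption you should tune for your project:

import subprocess
from collections import Counter

# Same data as the shell pipeline: one file path per line, per commit.
# The six-month window is an assumption; widen or narrow it to taste.
log = subprocess.run(
    ["git", "log", "--since=6 months ago", "--pretty=format:", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(path for path in log.splitlines() if path)
for path, commits in churn.most_common(10):
    print(f"{commits:6d}  {path}")

Run from the root of a checkout, this prints the ten most-committed files over the last six months, which is usually a better proxy for “changes a lot right now” than the repository’s full history.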

For fun, let’s run the one-liner against a popular open source project like Spring. (Of course, it’s not a monolithic app, but you get the idea.) Rummaging through the logs, we see this:

Notice the fourth entry: AnnotationUtils.java. That file gets a lot of love and might be worth investigating. (For more detail on this class, check out this Stack Overflow thread.)

What are some other ways to identify candidates for a microservices refactor? For your monoliths, your task now is a bit of software archeology. You need to root around your code base looking for (to paraphrase Isaac Newton) smoother pebbles and prettier shells. This job harks back to the concept of churn, introduced by Michael Feathers: a measure of how often each file changes, which can inform refactoring decisions. When you look at file churn for a given project, you will almost always see a “long tail” distribution: some files change constantly, while others haven’t been touched since the initial commit. Building on Feathers’ work, Chad Fowler created Turbulence, a visualization of churn vs. complexity across a codebase.
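You don’t need a product to get a rough Turbulence-style view. Here is a sketch that pairs each hot file’s churn with its current line count, a crude stand-in for complexity (Feathers and Fowler used richer metrics, so treat this as a first approximation):

import subprocess
from collections import Counter
from pathlib import Path

# Count how many commits touched each file (churn).
log = subprocess.run(
    ["git", "log", "--pretty=format:", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout
churn = Counter(path for path in log.splitlines() if path)

# Pair churn with line count; skip files deleted or renamed since.
for path, commits in churn.most_common(20):
    file = Path(path)
    if not file.exists():
        continue
    loc = sum(1 for _ in file.open(errors="ignore"))
    print(f"{path}: churn={commits}, loc={loc}")

Files that score high on both axes, lots of commits and lots of code, sit in the quadrant Turbulence highlights: the prime candidates for closer inspection.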

There are also new “code forensics” tools like CodeScene that yield deeper insights into our projects. CodeScene identifies hot spots in your code, shining a bright light on areas that will be hard to maintain. The results also underscore the parts of your app that could be at risk if a given developer leaves.

For example, take a look at the Clojure project.

Drilling deeper into the main clojure module, we see core.clj jump off the page.

There is a fairly high amount of churn, and (unsurprisingly) Rich Hickey is the primary author.

We can bring tools like this to bear on our projects. Use them to help you identify the prime candidates for a microservice transformation.

Another handy method? Just look at the last-commit dates in GitHub. You’ll inevitably find that some files were last modified a few moments ago, while others haven’t been updated in years. Here’s what that looks like for Spring:

If a file hasn’t been touched since the last Super Blue Blood Moon Eclipse, our “rate of change” factor won’t push it toward a standalone microservice. But if we see a grouping of files that seems to “always be changing”, we should dig deeper in those areas.
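If you’d rather collect those dates from the command line than scroll through GitHub, git can report the last commit date for any path. A quick sketch (it spawns one git process per file, so it’s exploratory rather than fast):

import subprocess

# Every file git tracks, one per line.
files = subprocess.run(
    ["git", "ls-files"], capture_output=True, text=True, check=True,
).stdout.splitlines()

# Print the date of the last commit that touched each file.
for path in files:
    last = subprocess.run(
        ["git", "log", "-1", "--format=%cd", "--date=short", "--", path],
        capture_output=True, text=True,
    ).stdout.strip()
    print(last, path)

Pipe the output through sort to see the extremes at a glance: the files frozen in time at one end, and the cluster that “always changes” at the other.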

You should also look at your bug tracker and your project management tools. Defect density might point you in interesting directions. It also makes sense to review the stories in your backlog. Which modules attract a disproportionate amount of attention? Those are worth exploring.

Using these tools and our instincts, we have an idea what components change more often than others. Now, we need to decouple them from the rest of the application. How do we do that?

Fortunately, there’s a proven technique for this exact task!

Applying the strangler pattern

The strangler pattern was introduced by Martin Fowler as a way of handling rewrites of legacy systems. Fowler was inspired by the strangler figs he encountered on a trip to Australia. These vines seed in the upper branches of a host tree, then gradually work their way down to the soil...all the while slowly killing their host.

Applied to software, the approach suggests that an abrupt “rip and replace” upgrade is fraught with peril. Instead, the strangler pattern argues that we should build the new system around the edges of the old system, gradually retiring the legacy app over time.

The strangler pattern greatly reduces project risk. Instead of rolling the dice on a big bang cutover, you incrementally improve the application. Use a series of small, easily digestible steps to boost your chances of success. Your teams can also deliver value on a regular cadence, while carefully monitoring progress towards the ultimate goal of monolith retirement.

We can take the strangler pattern a step further with a data-driven approach. That chunk of the app we’ve identified for refactoring? There’s a good chance we don’t understand every aspect of what that module does. Rather than risk an incomplete understanding, and thereby inject errors into a critical business system, we can rely on real-world data to guide us.

The data-driven strangler introduces a proxy layer between the client (phone, web browser, another app) and the legacy system. This proxy layer intercepts all requests and responses, logging them as they pass through. The proxy pays off in two ways. First, it provides vital information about how the current system actually behaves. Second, that data allows you to test the new functionality to ensure it matches the legacy system.

Here’s a quick block diagram of this configuration.

A block diagram showing the proxy layer.

In some situations, the new microservice can even run in parallel with the legacy system. The proxy layer routes each request to both the legacy monolith and the new microservice, and compares the results. If the results don’t match, the proxy serves the legacy response by default, recording the “miss” for further inspection. Based on that data, we can continue to enhance the new microservice while adding test cases to the new codebase’s test suite. This approach further improves our confidence in the new code.
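Here is a minimal sketch of such a comparison proxy. Everything in it is illustrative: the backend URLs are hypothetical, it handles only GETs, and a production gateway would add timeouts, header forwarding, and concurrent fan-out. The control flow is the point: serve the legacy answer, compare it against the candidate, and log the misses.

import logging
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical backends: the monolith and the candidate microservice.
LEGACY_URL = "http://localhost:8081"
CANDIDATE_URL = "http://localhost:8082"

logging.basicConfig(filename="strangler-misses.log", level=logging.INFO)

class ComparisonProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # The legacy system remains the source of truth.
        legacy = urlopen(LEGACY_URL + self.path).read()
        try:
            candidate = urlopen(CANDIDATE_URL + self.path).read()
            if candidate != legacy:
                # Record the "miss" for later inspection.
                logging.info("mismatch on %s", self.path)
        except Exception as exc:
            logging.info("candidate error on %s: %s", self.path, exc)
        # Always answer with the legacy result until confidence is high.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(legacy)

if __name__ == "__main__":
    HTTPServer(("", 8080), ComparisonProxy).serve_forever()

Once the miss log stays quiet across a representative stretch of traffic, you can flip the default to the new microservice and keep the monolith around as the fallback.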

Wrapping up

To channel Obi-Wan Kenobi, we’ve taken our first step into a larger world. Our instincts, along with some code archeology, help us identify multiple rates of change in our system. Pull out your most volatile features as their own microservices. The end result is simpler code, faster development, and lower risk. So much winning!

Further, when you apply the strangler pattern, you can confidently maintain existing functionality without introducing new bugs.

Read the rest of this series:

Part 1: Should that be a Microservice? Keep These 6 Factors in Mind
Part 3: Independent Lifecycles
Part 4: Independent Scalability
Part 5: Failure Isolation
Part 6: Simplify External Dependencies
Part 7: The Freedom to Choose the Right Tech for the Job

Want to learn more about microservices? Join us at the next SpringOne!

Want more architectural guidance for your modern apps? Be sure to download Nathaniel's eBook Thinking Architecturally.

Are you ready to break up your own monolith? For ideas and strategies, check out Breaking the Monolith and Deconstructing Monoliths with Domain Driven Design.

About the Author

Nathaniel Schutta

Nathaniel T. Schutta is a software architect focused on cloud computing and building usable applications. A proponent of polyglot programming, Nate has written multiple books and appeared in various videos. Nate is a seasoned speaker regularly presenting at conferences worldwide, No Fluff Just Stuff symposia, meetups, universities, and user groups. Driven to rid the world of bad presentations, Nate coauthored the book Presentation Patterns with Neal Ford and Matthew McCullough.
