Dealing With The Stuff That Makes All The Money: The Legacy Journey (Part 3)

September 21, 2015 Coté

This is the third part in a series profiling the “Cloud Native Journey” to becoming a Cloud Native enterprise. The first part gave an overview of the three types of journeys—greenfield, legacy, and transformation. The second part discussed several approaches, issues, and tips for brand new applications, the greenfield. This one, on legacy, discusses how to deal with legacy applications—mostly how to do portfolio management to free up time for innovation, and how to work with, rather than rewrite, legacy code and services.

Editor’s Note: Be sure to check out the rest of this series and other posts on the Cloud Native Journey including the Welcome To Your Cloud Native Journey (Part 1), Greenfield Journey (Part 2), and the Meatware (People & Culture) Journey (Part 4).

That Stuff That Makes All The Money

While the term “legacy software” usually has a negative meaning, the fact is that much of what’s labeled legacy software is the core money maker for companies. It may be creaky, risky to change, expensive to run, and otherwise full of risk and fear, but, almost by definition, legacy IT is the software that’s currently enabling the business and, thus, bringing in the money.

Here’s the problem companies get into: they’re trapped in the daily management of their legacy software and have no time to work on new, greenfield software. Worse, they have little time to transform how their organization creates software to support company-wide improvements and plans to become Cloud Native enterprises. Depending on how well or poorly your software is designed, “maintenance” (fixing bugs and adding new features to existing software) may also be considered “legacy.” If making updates to your software is easy and low-risk, you likely don’t think of it as legacy. If making changes is difficult and error-prone, you will. Whatever your situation, larger organizations have to deal with legacy code and services on the Cloud Native journey.

Let’s look at some guidelines for how to work with, around, and possibly start to improve legacy IT as you go along your Cloud Native journey.

Manage The Portfolio

The first step in dealing with legacy software is to ensure that you have proper portfolio and resource management in place. What I mean is that you’re aware of all the software you have, its approximate “business value,” and the expected life span of the software. With this knowledge in place, you can determine how many resources (time, money, IT assets, and corporate attention) to spend on each item. That is, you can determine the priority any given application has. Once you can prioritize assets, you can start making decisions about where and how you run that application (high-end infrastructure that requires a lot of manual, human attention vs. tape backup cold archiving vs. decommissioning).

On the Cloud Native journey, the goal of portfolio management is to free up resources (time and money) to focus on innovative, new software and capabilities. Remember, the goal is to put a continuous delivery process in place that will allow you to start delivering software weekly, if not daily, which will allow your organization to start using software as the core enabler of the company’s business processes and strategy. To meet that goal, you’ll need plenty of time to focus on “innovation.” Sadly, most companies I talk with have very little time for innovation as they’re tied up with the shackles of success—spending too much time managing their legacy software.

To illustrate what I’m talking about here, let’s look at one approach to managing your portfolio.

Application Portfolio Management Value Levers


This approach from the EMC Global Services Application Transformation Discipline group focuses on identifying applications whose costs and management attributes can be optimized—basically, it shows the paths that different apps can take. It sorts IT services into six buckets according to their technical profile (e.g., can it run on virtualized infrastructure?). Business needs can also drive the evolution of the service. For example, some applications are of little value and can be all but decommissioned (e.g., archiving the data for regulatory retrieval if needed), others can be moved from higher-cost virtualization to public cloud IaaS, and others require the agility of a cloud platform approach. Part of the analysis assesses whether merely process and tooling changes are needed to improve the application in question—maybe the cost of refactoring the application to run on Pivotal Cloud Foundry is too high for the business value it provides, and simply automating the builds would add enough value and free up enough time. On the other hand, maybe the business value of ensuring very high agility is so high that the costs of “forklifting” the application to Pivotal Cloud Foundry are more than worth it.
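To make the triage idea concrete, here is a minimal sketch of sorting a portfolio into dispositions. The scoring attributes, rules, and bucket names are my own made-up stand-ins for illustration, not the actual EMC categories:

```python
# A hypothetical portfolio triage: map each application's attributes to a
# disposition. The criteria and bucket names are invented examples.

def triage(app):
    """Pick a disposition for an application record (a dict of attributes)."""
    if app["business_value"] == "low":
        return "decommission/archive"
    if not app["virtualizable"]:
        return "retain on dedicated infrastructure"
    if app["agility_need"] == "high":
        return "refactor for cloud platform"
    return "rehost on IaaS"

portfolio = [
    {"name": "payroll",    "business_value": "low",  "virtualizable": True,  "agility_need": "low"},
    {"name": "storefront", "business_value": "high", "virtualizable": True,  "agility_need": "high"},
    {"name": "ledger",     "business_value": "high", "virtualizable": False, "agility_need": "low"},
]

for app in portfolio:
    print(app["name"], "->", triage(app))
```

The point isn’t the particular rules—it’s that once each application has an explicit disposition, arguments about where to spend time and money become much shorter.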

What I like about this approach is that it forces you to re-evaluate the IT management and infrastructure needs of your applications. You can prioritize the applications and, thus, how you start spending your time. Not all applications are created equal, and not all of them have equal needs for white-glove treatment as they age. Most IT departments don’t seem to operate this way and, thus, find themselves over-spending on applications that no longer have commensurate IT value.

Honing Your Priority Management Methods

There are many other portfolio management methods, and I’ve just picked the above as an example. To quickly cite just two other methods—Cutter has an interesting approach that focuses on matching the appropriate methodology to each application. Also, the various interpretations of Gartner’s bimodal IT thinking and pace layering can start to feel like a useful approach to sorting your portfolio and determining how to proceed with each asset. Finally, Pivotal’s solutions group routinely works with customers to methodically identify and then move legacy applications to Pivotal Cloud Foundry.

Whichever way you choose, make sure you have some way of evaluating your software portfolio and then prioritizing how you deploy resources. To use a predictable quip: if you don’t manage your portfolio, it will manage you!

Selecting Your Legacy Dance Partners…Carefully

Engaging with the beast of legacy can be dicey and risky. So, it’s good to verify that you really need to dance. Clearly, if you’re working with a core service that’s needed for your application, you should work with it, likely using some of the architectural patterns referenced below. However, it can sometimes be wise to leave it alone. Let’s look at some red flags for when you should leave legacy alone.

Negative Business Case and Financial Implications

Many enterprise projects are expected to live 2 to 5 years, if not longer. Accordingly, the up-front and ongoing budgeting of such a project may depend on little to no additional spend after the first, “big bang” release. Unfortunately, this model is antithetical to agile and Cloud Native thinking. Agile thinkers want to take advantage of the inherently plastic nature of software to make frequent changes at will for the benefit of the business. Still, toying with a big bang project in this manner may cause its business plan to go into the red. For example, if you incur extra, unanticipated costs to modify legacy software and bring it up to contemporary fashions, or you outright cancel a project before it’s been able to “pay back” the investment, you may encounter corporate flak or, to echo the schoolyard days, just plain “get in trouble.” Most IT-minded people have an engineering mindset that causes us to pursue the “right” solution to a problem, which is usually good. However, if you sense that you’ll have to, as they say, revisit the business case for a legacy system you’re looking to work with, make sure you can avoid negative consequences. This usually amounts to talking with key stakeholders and the finance department as soon as possible to explain your case. Otherwise, tangling with big bangs may be a lost cause.

Agile Resistant Projects

While dated, the 2003 book Balancing Agility and Discipline has several good discussions of when to choose waterfall instead of agile approaches. As the application development and infrastructure layers have become more automated and cheaper—that is, cloud!—many of the concerns in the book have lessened in relevance. However, they offer some cautionary advice about projects that may be best not to monkey with:

  1. Changes to the software require close integration across organizational boundaries—if you must work with several different groups to make “simple” changes, you could find yourself in a quagmire. Many of the tenets of microservices address this problem by carefully embracing it, as we’ve discussed numerous other places.
  2. Company-wide and/or industry standards cannot easily be changed or ignored—if there are numerous self- or externally-imposed regulations and standards that must be followed, you may not have the option to change them at all. Battling against these exogenous requirements can become too time consuming. That said, it’s common for companies to assume that the implementation requirements for audit and compliance will hamper Cloud Native approaches. However, it’s often actually the case that they just need to come up with a new implementation that satisfies the original requirements, perhaps even better than the old one. It’s good to investigate what’s actually needed to be compliant rather than just assuming the situation is impossible.
  3. The legacy system is not modernized…at all. If the legacy software is essentially a black box and is in no way modernized, trying to change it too much may result in slowdowns and friction. Approaches like the strangler pattern below might help here, or you might be best suited to just quarantine the legacy service and wrap a good, modern API around it.

(Note: The above is my paring down and slight updating of a list of five attributes for projects that would fit poorly with an Agile approach, found on page 30. The two items I left off—monolithic requirements and continuity requirements—have more to do with the creation of the software than with the integration of other services.)

The Leftover Legacy That Won’t Leave


National Lampoon’s Christmas Vacation © 1989 Warner Bros. Entertainment Inc.

Once you have good portfolio management in place, you will have identified applications that can be taken off the table with respect to software development needs. However, you’ll be left with another bucket of applications that are not so easily gotten rid of, like those relatives who don’t seem to promptly leave once the holidays are over. These are applications that you still need to evolve and develop, and you may even need to move them to a cloud platform like Pivotal Cloud Foundry. As we’ll go over here, there are actually a lot of options.

Some of these applications might require net-new rewrites, in which case I’d think of them more as greenfield, truly “Cloud Native” natives where you have the benefit of a working “prototype” (the existing, legacy application!) to base your new application on. In many cases, though, you won’t have the benefit of being able to start from scratch. Even more commonly than being faced with the choice to rewrite a legacy application, you’ll have many legacy services that your new applications need to integrate with and rely on, as CoreLogic, one of the early Pivotal Cloud Foundry customers, explained in this year’s CF Summit talk. The rest of this piece will go over some tips for dealing with those applications and services.

Testing, Why’d It Have To Be Testing?

One of the more popular definitions of legacy code comes from Michael Feathers’ classic in the field, Working Effectively With Legacy Code: “legacy code is simply code without tests.” Most code will need to be changed regularly, and when you change code, you need to run tests—to verify not only that the new code works, but that it didn’t negatively affect existing behavior. If you have good test coverage and good continuous integration and delivery processes in place, changing code is not that big of a deal, and you probably won’t think of your code as legacy. Without adequate, automated testing, however, things are going to go poorly.

Thus, one of the first steps with legacy code is to come up with a testing strategy. The challenge, as Feathers points out, is going to be testing your code without having to change it (too much). After all, to quote Feathers again:

The Legacy Code Dilemma

When we change code, we should have tests in place. To put tests in place, we often have to change code.

Feathers’ book is 456 pages of strategies for dealing with this paradox that I won’t summarize here. What I want to emphasize is that, until you have sufficient test coverage, you’re going to be hampered. In other words, this is one of those pesky prerequisites for being a successful Cloud Native enterprise.
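To illustrate the flavor of strategy Feathers suggests, here’s a small sketch of a “characterization test”: a test that records what the code does today, right or wrong, so later refactoring can be checked against current behavior. The `legacy_discount` function and its rules are hypothetical:

```python
# A hypothetical legacy function whose exact rules nobody fully remembers.
def legacy_discount(total, customer_type):
    """Legacy pricing logic, reproduced here as a stand-in example."""
    if customer_type == "gold":
        return total * 0.9
    if total > 1000:
        return total * 0.95
    return total

# Characterization tests: assert observed behavior, not intended behavior.
# If a rule turns out to be a bug, you at least change it knowingly.
assert legacy_discount(100, "gold") == 90.0
assert legacy_discount(2000, "regular") == 1900.0
assert legacy_discount(100, "regular") == 100  # note: no discount at all
```

The tests make no claim that the behavior is correct—they just pin it down, which is exactly the safety net you need before touching code you don’t fully understand.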

Automating Legacy, High Value Only

One thing that may be possible is automating as much of your build process as possible. As numerous industry studies show, the use of continuous integration is not widespread. Even if you can’t wrap tests around everything, try to automate your build, adding in as many “smoke tests” as possible throughout. This will not only give you the scaffolding to put test harnesses in place, but also let you start to follow some 12-factor principles, like separating out configuration and putting everything (beyond just code!) into version control.
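As a sketch of what a bolt-on smoke test can look like, here’s a minimal health-check probe you might wire into an automated build. The `/health` endpoint, URL, and function names are assumptions for illustration, not a prescribed convention:

```python
# A minimal smoke-test sketch for a build pipeline, assuming a hypothetical
# /health endpoint on the freshly deployed app.
import urllib.request

def fetch_status(url, timeout=5):
    """Fetch a URL and return the HTTP status code, or None on any error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except OSError:
        return None

def smoke_test(base_url, fetch=fetch_status):
    """Pass if the deployed app answers its health check with HTTP 200."""
    return fetch(base_url + "/health") == 200
```

Even a check this shallow catches the most embarrassing failure mode—the app didn’t come up at all—and gives you a hook to hang deeper tests on later.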

When it comes to picking what to automate and test, Continuous Delivery’s Jez Humble and David Farley suggest picking the highest-value parts of your system. Ideally, you can automate building all of the system and spend extra time testing those higher-value parts. Prioritizing and focusing on deeper testing like this is a short-cut compared to doing everything. However, it is more pragmatic and at least gets you a “skeleton to protect legacy functions” within your old code.

When You Have To Change Code

If you are lucky enough to have the option to change code (or cursed, depending on your situation), there are many options and tools available to move your legacy applications into a Cloud Native platform like Pivotal Cloud Foundry. Thankfully, we have been documenting these concerns a lot recently with more to come so I’ll just summarize here.

For Architecture, Favor the Strangler Pattern

The “strangler pattern” (named for vines that slowly take over a tree, not the brutish EOL’ing technique) provides guidance on how to slowly modernize a legacy code base. You select the well-separated sub-components of the system and create new services around them, instituting the policy that new code must only use these new service interfaces. Over time, the effect is that fewer and fewer of the legacy services are used directly until one day you can swap over to just new code, having replaced the legacy implementation behind the new service. In other words, converting a legacy component to a microservice may be slow and deliberate, but that steady pace generally makes it a safe approach. This pattern is covered in numerous places, including Matt Stine’s book on migrating to Cloud Native applications, which contains numerous other approaches for modernizing legacy architectures.
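A toy sketch of the pattern: a routing facade sends already-migrated operations to the new service and everything else to the legacy code path. The operation names and both backends are hypothetical:

```python
# A minimal strangler-pattern sketch: new code calls handle(), never the
# legacy backend directly, so migration happens one route at a time.

def legacy_backend(operation, payload):
    # Stand-in for the old monolith's code path.
    return f"legacy handled {operation}"

def new_invoice_service(operation, payload):
    # Stand-in for a newly extracted service.
    return f"new service handled {operation}"

# Routes migrated so far; this table grows until legacy_backend is unused.
MIGRATED = {"create_invoice": new_invoice_service}

def handle(operation, payload):
    """Facade: route to the new service if migrated, else fall back to legacy."""
    backend = MIGRATED.get(operation, legacy_backend)
    return backend(operation, payload)

print(handle("create_invoice", {}))  # routed to the new service
print(handle("archive_order", {}))   # still handled by legacy code
```

The design choice that matters is the single entry point: because callers only know `handle()`, moving an operation to a new service is one table entry, not a hunt through every call site.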

One variant of this looks to convert your ESB- and SOAP-driven SOA services over to microservices, slowly but surely. In this instance, instead of looking at Pivotal Cloud Foundry as an application enabler, you can also look at it as a services “hub” (to use an SOA-friendly term that will make Cloud Natives cringe). Indeed, if you stare at the architecture of Pivotal Cloud Foundry, you’ll realize that the whole thing is actually a service-oriented architecture itself. Recursion is fun!

Some Things Are Happy To Move: Java, Static Content, Simple Apps

Thus far, moving legacy applications to Pivotal Cloud Foundry may seem like a chore. In fact, there are many applications that can be easily fork-lifted to Pivotal Cloud Foundry without much suffering. Java applications that use standard libraries and don’t have dependencies on local file systems or vendor-specific application server libraries can be moved easily, as Josh Long has covered recently. Unspooling Java applications from proprietary application servers can also be done with a good, systematic approach.

When it comes to static content, Pivotal Cloud Foundry can provide a surprisingly easy and well-governed approach to basic content management as well. I’ve spoken with several large organizations who are hobbling along, modernizing the ancient CMS systems their content groups use to quickly update web and mobile applications. In these instances, treating that static content as an application that is cf push‘ed into Pivotal Cloud Foundry—benefiting from all the release management and cloud management capabilities—can be one path for tackling your legacy applications.

For more ideas on how to modernize legacy code, check out Josh Kruck’s post on the topic and his paper with Abby Kearns (which I’m told will have a major update soon). Jared Gordon has also just kicked off a new series on working with legacy code, which collects lessons we’ve learned as we’ve been helping our customers on their Cloud Native journey.

Legacy Process and Governance Changes

While I’ll save most of the organizational change items for the transformation part of this series, let’s look at some of these challenges here. Because Conway’s Law has such large sway over software and the structure of the organization that supports that software, it’s hard to tinker with the organization much unless you tinker with the software as well (and vice-versa)! In the theme of “dealing with legacy” that most of this piece covers, let’s focus on how to deal with organizational issues through “legacy” processes.

Legacy processes often involve much review as the software passes through phase-gates. Metaphorically, and often literally, this is thought of as “going to the change review board,” a governing body that approves advancement to the next stage. It can be metaphoric, as there may not actually be a sitting board that approves things, but rather a latent “council of elders” who approves all changes in IT. For older, slow-moving IT that’s laden with risk and the chance of failure, this may have seemed like a good idea at the time. It does, however, slow down small batches and effectively quash the “release more frequently” mentality that a Cloud Native approach favors. Indeed, it can cause the dreaded “waterscrumfall.”

Heavy oversight like this may be inescapable, mostly due to regulatory concerns. In such cases, we’ve been finding that working under the rules of the change review board can lead to some positive effects, namely:

  • The one about babies and bathwater—working with the review boards may allow you to identify the valuable parts of the legacy process. These are parts you’ll want to retain! Hopefully, as you look towards transforming your organization, you can work towards automating these beneficial processes. For example, a result of the change review board may be updating the PPM tools that allow your organization to properly do portfolio management, as discussed above. Perhaps the process can be automated as part of your continuous deployment process instead of done manually.
  • With trust comes much permission to change—for a rational organization (hopefully you’re in one!). If you can show the change review board that a series of quick, small changes actually reduces risk and increases quality, they may start to trust the new way, accepting quick change as the new routine. As you transform more broadly, the small wins you’ve had with greenfield and legacy systems will have built up organizational trust in your new approach. This may allow you to lighten the heavy hand of the change review boards over time.
  • The process fifth columnist—sometimes, the best way to change an onerous process is to fully engage with it, making sure to demonstrate why it’s onerous. If the change review board becomes an obvious bottleneck, it may be helpful to demonstrate this as a way to start transforming it. It also demonstrates how much the legacy process gets in the way of rapid improvement and becomes a barrier to scaling value delivery. Remember, the business probably wants smaller, more agile, more regular batches of change than a great big long one.

Navigating these changes can seem like extra work and be really annoying to deal with. However, if you’re taking the bigger view—that your organization’s process is just as important as the actual product—you’ll realize that you won’t get all the benefits of a Cloud Native approach if you don’t change most, if not all, of the organization creating the product.

Using Your Suffering For A Better Tomorrow

Dealing with legacy software is a hassle and often painful. As anyone who studies legacy code will tell you, the paradoxical thing about legacy software is that, because it’s been so valuable to the business over the years, developers have added more and more features to it. This adds complexity, and—often because tests are not updated, the architecture is unknown, and the code base is altogether poorly understood—the code becomes a risky mess to touch. If you’re experiencing that, use the pain as motivation to start doing things in a more healthy, future-proofing fashion: insist on writing all those tests, really automate all stages of building and deploying the application, and otherwise be disciplined.

The benefits of legacy code (it’s what runs the business!) and the suffering that code causes (we spend all our time just keeping this thing up because it’s so fragile!) are good reminders of why you should always be improving your process. Constant submission to short-term optimization yields long-term pain and business stagnation—bang head here. The next time something feels weird or painful on the Cloud Native journey, use that line of thought to remind your team that it could be worse, and probably is.

Josh Kruck contributed to the organizational changes section, while Jared Gordon did the same in the selecting your legacy dance partners section.
