Quick: name the last piece of software you installed from a floppy disk. Can’t remember? That’s probably a good thing. If you can still see the install wizard when you close your eyes, chances are it wasn’t a great experience.
Software architecture has gone through numerous cycles of change. Each iteration has equipped developers with new ways of solving problems as well as new mechanisms for distributing and delivering software. But archaic practices persist everywhere you look, and no open sore in enterprise IT today is bigger than the quarterly release cadence. Unfortunately, it’s alive and well, even at most Fortune 500 enterprises.
Today, legacy web applications, most often at large enterprises, are the apps most prone to the quarterly release cadence. They are built as single large, complex, and brittle codebases, commonly known as “monoliths,” that consume an enormous amount of time and overhead to update and release. That overhead generally limits companies to updating those systems once a quarter.
Thankfully, organizations will soon bury the quarterly release cycle when they realize there is a better way to build software: cloud-native architectures supported by continuous integration/continuous delivery. But how did we get here?
A brief history of painful software delivery methods
Long ago, I worked on the developer help desk at InstallShield. Anyone who remembers when desktop software came packaged, bundled, and burned onto a CD will recall the InstallShield wizard starting the moment the CD-ROM drive closed. The primary pain points of that distribution method were the expense and the time required to get software to users. Every new release or patch required a new installer and, in most cases, a new set of disks.
Beyond the pain of physically installing software on thousands of enterprise desktops, there was compatibility pain, because desktop applications were difficult to deploy alongside other vendors’ software. Windows desktop apps in particular depended on shared system libraries, creating a problem commonly referred to as “DLL hell”: one application’s installer could replace a library with an incompatible version, causing other applications to crash when users tried to run them.
I remember many help desk phone calls with developers from all over the globe who were trying to figure out why their applications were crashing on a select few machines or customer environments; it became an unfortunate but almost unavoidable situation.
Eventually, the web emerged as the predominant channel for enterprises to meet their customers. Developers found a much simpler deployment and distribution model via the web browser. Gone were the days of needing a packager and a bootstrapper to give users access to software. But while centralizing software on the server enabled faster deployment of releases, updates, and patches, new problems emerged.
Enter the monolith
With the rise of the web application, development teams were creating large, tightly coupled, multilayer architectures. Due to their size and complexity, updating those large systems almost always required taking them offline for long periods, and that had business implications. The longer systems were down for maintenance, the longer end users went without access to the system. The result? A poor experience for the end user.
Release cycles for complex, monolithic web apps within the enterprise also required significant coordination among development, operations, and business teams. Regardless of the amount of prep time devoted to a release, those events were fraught with anxiety and prone to manual error. And while archaic and extremely inflexible, that model is, unfortunately, still in place at a majority of large organizations today.
The mobile revolution to the rescue
As smartphones came online in 2007 and app stores in 2008, a new client/server software distribution and delivery pattern emerged. The mobile device quickly became the primary channel for digital brand and product interaction, and in turn, mobile software development skyrocketed. Developers were able to push applications and corresponding updates to a centralized store in a consistent and repeatable manner, and users could install or update software on a per-device basis whenever they wanted.
As consumption grew, large digital brands and technical pioneers began to see value in delivering incremental changes to the market. Iterating on mobile apps is an opportunity to remain relevant with users as well as a way to help inform product backlogs. The pace of software updates from a few companies set the expectations for all companies. User expectations rose: users expected software to work offline, in low-bandwidth situations, and, most importantly, without any significant interruptions in service.
Continuous delivery as the new normal
Seemingly overnight, users began expecting that seamless experience from their enterprise software. Users became aware that an application could be updated without “going offline next weekend.” Enterprises needed to be flexible, nimble, and fast. As enterprises surveyed their systems of record, they quickly conceded that the application architectures of the past were, indeed, holding them back. To meet users’ demands, organizations needed to realign their strategies from the hardware all the way up the software stack.
Several organizations whose legacy thinking had evolved realized that the only way to achieve the flexibility they sought was a new approach. Building on select principles from service-oriented architecture (SOA) as well as lessons learned from major web companies such as Google and Netflix, enterprises moved toward distributed, microservices-based architectures that were battle tested and could run at global scale. The path forward was to migrate their most critical business systems from monolithic architectures to microservices-based ones.
As is frequently the case, architectural change enabled a new software delivery and distribution paradigm. Continuous integration and delivery pipelines that extend all the way to production align well with loosely coupled architectures, but they require automation at every turn, from infrastructure to code repositories. Once that automation is in place, developers can get their code into production within a day, gaining more immediate feedback from user behavior. Multiplied across hundreds of development teams, enterprises such as Discover Financial have seen 12-week release cycles shrink, enabling constant innovation.
What does all that mean, and who stands to benefit? Here are a few of the payoffs:
Developers and operations teams get their weekends back. No more Saturday/Sunday releases 🎉🎉🎉🎉.
The business can ship code to production whenever it wants. There’s no need to wait six months to get a feature out anymore.
Enterprises can move at the speed customers expect, and they can stop falling behind faster-moving competitors.
Development teams embrace a test-and-learn culture, fueling innovation, because they no longer fear experimentation or failure.
It’s shortsighted (and irresponsible) to think monolithic applications, and the slow, rigid processes that have accompanied them for the last twenty years, don’t need to move into the future. Enterprises that continue to expose customers to that pain will be disrupted and will lose those customers.
As enterprises adopt this new architecture and software delivery paradigm, the quarterly enterprise release cadence will die quietly. And that’s one death I’ll be excited about.
To learn more about how Solstice and Pivotal help organizations kill their enterprise quarterly releases, please visit content.pivotal.io/solstice.
About the Author
Mike Koleno is Vice President of Technology at Solstice.