What Happens After the MVP?

October 25, 2017 Adam Piel

Maybe your project could benefit from a “release map.”

Written together with Shanfan Huang, Pivotal Labs Product Designer.

On our last project at Pivotal Labs in Boston, we released the MVP/SVP of our iPhone app in 38 work days. We were happy, but in the days leading up to the launch, our product team felt subtly stalled.

We had spent so much time trying to get to that first release, so much time defining the bare-bones SVP that would begin the influx of real user feedback. But now that this SVP was almost live, what was the next right thing to build?

As Product Managers and Product Designers, it’s our job to make sure we’re building the right thing, and that we’re building the right part of it now. After six weeks of laser focus on the SVP, we struggled to know the next right thing to do.

Roadmaps are a somewhat controversial topic amongst practitioners of agile; they have their advantages and disadvantages. While they can help communicate a longer-term vision of a product, they don't always answer the question: what do we do today, and why?

We returned to the basics, to Eric Ries’s “Build, Measure, Learn” loop. To answer the questions of what to build next, we should first ask “what do we want to learn?” and then “what can we measure?” Only then should we ask, “so then what should we build?”

We wanted something physical we could keep in our team space, something we could tweak daily and use to organize our thinking. Starting from the whiteboard, we eventually iterated to this physical artifact, the “release map:”

Each of the columns on the board represents a release. In true agile style, we labeled the releases from left to right: "Definitely," "Probably," "Maybe," and "Hypothetical."

The rows include:

  • What to Learn
  • What to Build
  • Validation Threshold: What would prove our hypothesis “true” in this case?
  • Actionable Data Points: What data would cause us to change our minds about something?
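The board's structure can be sketched as a simple data model. This is a hypothetical illustration only (the original artifact was stickies on a physical board); the class and field names are ours, chosen to mirror the four rows above:

```python
from dataclasses import dataclass, field

# Hypothetical model of the board: four release columns, each holding
# the four rows of stickies described above.
@dataclass
class Release:
    name: str  # "Definitely", "Probably", "Maybe", or "Hypothetical"
    what_to_learn: list = field(default_factory=list)
    what_to_build: list = field(default_factory=list)
    validation_threshold: list = field(default_factory=list)
    actionable_data_points: list = field(default_factory=list)

release_map = [Release(n) for n in ("Definitely", "Probably", "Maybe", "Hypothetical")]

# The discipline the board enforces: a "build" sticky should trace back
# to a "learn" sticky in the same column.
release_map[0].what_to_learn.append("Do users want to donate?")
release_map[0].what_to_build.append("Pledge button")
```

The point of the model is the pairing: every entry in `what_to_build` exists to answer an entry in `what_to_learn`, never the other way around.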

To paraphrase Ries, a product team's purpose is not to make money but to accumulate knowledge about what people will pay for. Innovation accounting, in essence, measures success in terms of validated learning about your users.

First and foremost, we started with what we wanted to learn next. Then we described what we would build in order to learn it.

With the framework in mind, we wrote down ideas and questions on stickies, and then placed them into different releases.

It was tempting to start filling up the next release (currently the "Definitely" column) with all of our great ideas. But as we challenged ourselves to connect every feature to a user question, we realized that some features didn't need to be built until previous releases had validated them.

Let’s use the example of building an application which enables users to donate to disaster relief.

Perhaps we decide to start by integrating with all kinds of payment methods. The app's primary value proposition is accepting donations, so obviously we need to be able to accept money from different sources. But with the help of the map, we get back to basics: what are we really trying to learn here? We prioritize "Do users want to donate?" and "What is the preferred payment method?"

We want to be lean; we decide to first build a "pledge" button to see whether people would even be interested in donating. We then get ready to integrate with Venmo, PayPal, direct bank transfers, etc. But here is where the map really helps us: we don't need to prioritize all of these right away.

If we build Venmo integration, and 75% of customers use Venmo, then maybe we don't have to build any more integrations to meet our success metrics. Suddenly the obviously-we'll-build-this features seem less obvious. Unless PayPal integration is actually going to answer some key question we have about our users, we can deprioritize it in favor of other features that will teach us more.
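A Validation Threshold is just a yes/no test over observed data. As a sketch (the function name and the 75% figure are our illustrative assumptions, not a prescribed metric), the check might look like:

```python
def threshold_met(users_served: int, total_users: int, threshold: float = 0.75) -> bool:
    """Return True when the observed share of users meets the success metric.

    Guard against division by zero: with no data, nothing is validated.
    """
    return total_users > 0 and users_served / total_users >= threshold

# If 75 of 100 donors used Venmo, the 75% threshold is met, so the
# "build more payment integrations" stickies can stay in "Maybe."
assert threshold_met(75, 100)
```

The asymmetry is the lesson: passing the threshold is a reason *not* to build the next integration, which is exactly the kind of deprioritization the map makes visible.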

This brings us to the third and fourth rows: we need to define the observable evidence that would validate or invalidate our assumptions.

Eventually, we mapped out all of the features, ideas, and questions we had in mind, connecting different Actionable Data Points to features in future releases. This is what it might look like:

After a release goes to production, take the stickies off of the "Definitely" column and move the remaining stickies one column to the left. "Probably" becomes "Definitely," and so on.
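The column shift described above is a queue operation. A minimal sketch, with hypothetical sticky contents standing in for the real board:

```python
from collections import deque

# Column contents, left to right under the labels
# "Definitely", "Probably", "Maybe", "Hypothetical".
columns = deque([
    ["pledge button"],
    ["Venmo integration"],
    ["PayPal integration?"],
    ["direct bank transfer?"],
])

def ship(columns: deque) -> list:
    """After a release ships, its stickies come down and every column
    shifts one place left; a fresh, empty "Hypothetical" opens up."""
    shipped = columns.popleft()
    columns.append([])
    return shipped
```

After `ship(columns)`, the Venmo work sits under "Definitely," and the team has an empty rightmost column to fill with whatever the newly measured data suggests.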

Like in Chutes and Ladders, we could easily trace causes to effects. Arrows point to the "Pivot or Persevere Meeting" at the bottom of the board when something we learn has the potential to trigger a large-scale pivot in our product strategy.

The tool helps a team actually adhere to and implement the “Build, Measure, Learn” loop. Essentially we unwound it into a zig-zag line moving us into the future.

As we learned more on our last engagement and released new versions of the application, we updated the release map.

This is what we really ended up with:

The release map helped us to organize our thinking, but more than that it helped us to be ruthless about cutting out features that weren't designed to teach us anything new. By forcing ourselves to root every single feature in "what to learn," we actually deprioritized a lot that we were previously committed to building. Things that we thought were obvious to include suddenly fell away.

If it isn't going to teach us something new about the people downloading our app, it can wait. On our project, there were at least three large, epic-level feature sets that we deprioritized while working our way through the release map.

The tool really helped us to hold ourselves accountable: if we were going to build something, we had to be darn sure we were clear on what we hoped to learn from it.

While this exact structure may not work for other teams, we would encourage other PMs and Designers to experiment with similar tools. We will continue to use and tweak this format. We found it to be a very successful tool for ensuring we were always thinking lean, as well as for telling our product story effectively to stakeholders and to the engineers.



What Happens After the MVP? was originally published in Built to Adapt on Medium.
