An incredible amount has been written about Agile and MVPs over the last 20 years, yet the majority focuses on the first step of a process, the output of which is stated to be a minimum viable product. This mismatch is likely a side effect of most literature being aimed at organisations transitioning from outdated practices and adopting something “new”[1]. There is much less writing dedicated to what to do when you’re up and running, with the general guidance being to reduce cycle times.
The conflation of process and product is confusing, often resulting in the view that if you simply “do agile” then the product will take care of itself. Often an MVP is the conclusion, with minimal discussion of what an MVP is beyond an expansion of the initialism. Let me give you an example.
About 5 years ago I was involved in a pitch to an academic publisher who wanted to completely rebuild their website. They made it clear in the tender that they did agile, and that they wanted to release the MVP alongside the celebration of the company’s birthday, which kicked off on New Year’s Day.
My section in this pitch was to discuss this “MVP”, and I took the opportunity to lay some groundwork.
- What does MVP mean to your business?
- What can you do without, or copy from the existing site (e.g. content)?
- Given your 1st January deadline, when do you actually expect to release the new site?
- What are you looking to achieve?
The answers were what you’d expect. MVP = Minimum Viable Product. Everything was to be new (code and content). They would release a) when the fireworks went off at 00:00 on New Year’s Day, or b) over Christmas, as long as everything was finished. The aim of the project was to do the project, but any further “why” was elusive, with no cohesive view shared and not even any metrics.
Rather than doing agile, they had changed their terminology and kept everything else the same. MVP was the new word for a release, the solution was predefined (a complete lift-and-shift), and the scope was locked in. It’s waterfall, plain and simple.
In this instance there was no MVP, there was a project delivery. Even if delivery was managed through sprints, and some tweaks were made based on stakeholder feedback, it still wouldn’t have been an “MVP”.
Why? Because an MVP requires an hypothesis to test.
The purpose of an MVP is to find out if you’re right, as quickly and cheaply as possible. If there is no hypothesis, there is no product to assess the minimum viability of.
I joined Safecall because they had a particular strategy they wanted to validate, and needed a Product Manager to shepherd it in[2]. Would companies buy software from an expert provider with 20 years’ experience in whistleblowing services, but very little in building software?
Given the company’s lack of experience in the area, the last thing anyone was going to do was write a cheque for several million quid to build a whistleblowing platform from scratch. Instead, we built our MVP on pre-existing products designed to handle entities related to individuals within businesses. I also took the step early on of purposefully keeping every page and interaction as simple as possible. Simple in this context means few screens, little conditional logic, and little fluff. For the first release, for the vast majority of Safecall clients, the platform operated as a secure means to receive and transfer confidential information, and little else.
We gathered feedback from existing clients who had been granted access to this new system, as well as through the sales process. The most common feedback was that “it’s a bit simple” – considering it isn’t “what are you doing, this is rubbish”, I take this as a pretty big win. Over the last 2 years, the platform has become a core part of the service we provide, and we have fleshed out some features, added new ones based on new hypotheses, and reworked areas where our previous assumptions were proven wrong or we discovered unforeseen consequences. Ultimately, what we’ve proven is that, yes, companies will buy software from industry experts, and that it isn’t only “software companies” that can play this game.
An MVP is two things: a product, and a method of testing an hypothesis. This makes MVPs unlike any other part of the empirical, iterative nature of doing agile; an MVP isn’t just a piece of research, it is also the thing you sell. It also makes them the most expensive way you’ll ever prove whether you’re right or wrong.
The answer to “what comes after MVP?” is another hypothesis, one based on the new data you have now been able to collect. This new hypothesis, and the thing you build to test it, can take many forms, from scamps to research aids to A/B tests, but your aim should always be to do just enough to clarify your assumptions and test your hypothesis. Or, to put it another way:
Simplicity–the art of maximizing the amount of work not done–is essential.
https://agilemanifesto.org/principles.html
And then you do it again.
Notes
[1] or desperately clinging on to the past in the case of SAFe – I’ll get to that article, eventually.
[2] In the interest of full disclosure, this was started before I joined the business. I turned up towards the end of the initial delivery from an external supplier, but had been in post nearly 6 months by the time we’d made our first release in March 2019.