#6: Introduction to the Minimum Viable Product

In my last podcast I talked about how you should be considering investments in software development as volatile experiments - and that they should be managed as such.

In this podcast, I'll introduce the concept of the Minimum Viable Product (MVP), a tool that can help you manage that volatility in an experimental manner.


Published: Wed, 07 Aug 2019 15:44:27 GMT

Transcript

In the last podcast I introduced the idea of thinking about your software development initiatives as experiments.

In traditional software development you will have requirements - a list of changes required to meet some business objective.

An individual requirement could be anything from a cosmetic tweak to a fairly major undertaking - a change of colour or a change to how credit limits are calculated.

An individual requirement could be of varying importance - it could be deemed critical to the success of the initiative right the way through to a "nice to have".

One tool for prioritising requirements is "MoSCoW" - an acronym that denotes each requirement's importance:

  • M for Must Have
  • S for Should Have
  • C for Could Have
  • W for Won't Have


Invariably though, almost every requirements document I ever saw treated every requirement as a Must Have ... Maybe with a small smattering of Should Haves.

As an aside: our traditional software development practices have pretty much encouraged bad behaviour when it comes to requirements gathering.

Our traditional software development cycles have run for months - maybe years. We believed we were reducing risk and costs by batching a lot of changes together.

Anytime we'd want to make a change, we'd want to thoroughly test everything - as part of risk reduction.

Now if that testing would take weeks to complete, we wouldn't want to do it very often - so we'd want to batch a good many changes in to get the most out of that testing.

Which invariably led to more testing as the number of changes grew.

We kept making the scale of our software development bigger and bigger to the point that every release was a major event. It wasn't uncommon for systems (including customer-facing websites) to be down while these releases occurred, due to their size and complexity.

What started as an intention to reduce risk and save money ultimately had the opposite effect - making everything "big bang", carrying considerable risk and cost overheads.

While this phenomenon is probably an episode in its own right, I raise it now as this batch thinking incentivises business managers to push as much as they can into a release.

When it came to requirements, they wanted to get everything they could think of into the scope of the project. They really had no idea when the next opportunity to do so would present itself - or indeed if it ever would.

I know if I had been in their position, I would have done exactly the same - I would have raised as many requirements as I felt I might need, then marked as many as possible as Must Haves ... It was the way to get things done.

Ok, so back from that aside:

The requirements would be grouped together as a collection and, once they'd made their way through approvals, prioritisation, governance and scheduling, would land with a development team for delivery.

So rather than those requirements being grouped as a collection, what if we treated each requirement separately?

And what if we renamed each requirement as an experiment?

We can then frame our software development processes around much smaller chunks of work - easier to theorise about, easier to test and evaluate.

And this is where starting to think in terms of a Minimum Viable Product comes in.

A quote from Wikipedia:

"A minimum viable product (MVP) is a product with just enough features to satisfy early customers, and to provide feedback for future product development.

Gathering insights from an MVP is often less expensive than developing a product with more features, which increases costs and risk if the product fails, for example, due to incorrect assumptions."

So while traditionally we would have gathered a collection of requirements for the intended end state, MVP actively encourages us to ask "ok, what's next?"

Referring back to the previous podcast, the full collection of requirements is very much working on the principle that we can invest in software development as if it has a known duration with a known return. As I discussed in that podcast, that simply is not a realistic or useful way to look at software development.

Rather, you should be considering software development as an ongoing collection of short experiments, each designed to test a theory in the real world.

And the MVP mindset is a great way to think about this.

As an illustration:

Your company has recently acquired the exclusive rights to sell Donald Trump bobble heads. You believe that there will be a really wide market for these bobble heads; you'll have those people that love Donald Trump and will want to purchase the "statesman" bobble head. And you'll have those that loathe him and will want the "comedic buffoon" bobble head. And then you'll have those in the middle who will want the personalised "speech bubble" bobble head, where the customer can decide exactly what Donald should be saying.

So the traditional route would have you building out a great quantity of requirements, including a website, a product selector, a personalisation section, fulfilment and order processing, etc ... The list could go on and on.

You'd then hand off your requirements to the software development team for them to take 6 months to deliver.

6 months later, you can finally test your theory that anyone even wants to buy a Donald Trump bobble head.

That's a lot of work, expense and risk on a theory.

So what, if instead, you stood up a social media campaign where the public could vote on their favourite Donald bobble head?

"If you where going to buy one Donald bobble head this year, which would it be?"

How long would that take to set up and run? An hour ... Maybe half a day?

How much would that cost compared to the traditional project?

Imagine if you find out that there is no market for Donald bobble heads. When would you like to know that - after a day's worth of effort on a social media campaign? Or after 6 months of costly software development effort?

I'm aware of various examples of this in the real world where an organisation will stand up a simple one-page website for a potential product and allow customers to register their interest (most likely via email).

The organisation wins out twice in this way:

Not only do they get quick, cheap feedback on the potential - they also have an engaged customer base if they choose to proceed with the product.

Let's compare this with a project that I was asked to help with a few years back.

There was a general feeling that this specific project was failing to gain traction. A lot of work seemed to have gone in, but little had come out of it.

The first thing that hit me was the sheer scale of the requirements. Two guys had spent six months drafting those requirements - they were a combination of market research, competitor analysis, personal opinion and the kitchen sink.

To say the number of requirements they had gathered was staggering would be an understatement.

And the development team were trying their best to understand all of those requirements - and struggling to produce anything tangible.

So what did I do?

I asked "what next?"

I never read the requirements - I never did during my time with that team.

I focused the team on what we could produce next. What could we provide so that others (ideally customers) could comment and provide feedback?

And generally from that feedback we got the "what next" after that.

We started to demonstrate traction in a few weeks.

Ultimately; a lot of the investment that went into gathering such a broad set of requirements was wasted.

The guys that had worked on it had put a lot of effort in, and it was an impressive body of work. It just wasn't helpful for producing quick, small experiments that we could put in front of the customer.

And ultimately the customer (or user of the software) will be the arbiter of whether your theory is correct.

So again, what would you prefer - to know whether you are heading in the right direction in a few weeks, or after half a year?