#190: Estimation - Do you get value?

I started the mini-series in episode 189 by providing the following guidelines:

  1. Don't invest in estimates unless there is clear, demonstrable value in having them
  2. Agree what a "valuable" estimate looks like - this will likely be a desirable level of accuracy and precision for an estimate
  3. Provide the team with training & time to develop their estimation skills
  4. Collect data on points 1-3 and regularly review if you have the balance correct

In this episode, we dive into "Don't invest in estimates unless there is clear, demonstrable value in having them".

Estimates are not free – and I would argue that truly valuable estimates are often prohibitively expensive to produce.

Thus, in this episode, I ask the question: are you getting the ROI on that investment? And, more importantly, can you prove that?


Published: Wed, 06 Nov 2024 01:00:00 GMT


Transcript

Hello and welcome back to the Better ROI from Software Development podcast.

This episode is part of a wider mini-series looking at estimation in software development.

I started the mini-series in episode 189 by providing the following guidelines:

  1. Don't invest in estimates unless there is clear, demonstrable value in having them.
  2. Agree what a valuable estimate looks like. This will likely be a desirable level of accuracy and precision for an estimate.
  3. Provide the team with the training and time to develop their estimation skills.
  4. Collect data on points 1 to 3 and regularly review to see if you have the balance correct.

Subsequent episodes take a deeper dive into specific aspects of estimation in software development. And while long-term listeners may find some repetition across the series, I want each episode to be understandable in its own right and, as far as practical, to be self-contained advice.

In this episode, we dive into "Don't invest in estimates unless there is clear, demonstrable value in having them."

Estimates are not free.

And I would argue that truly valuable estimates are often prohibitively expensive to produce.

Thus, in this episode, I ask the question: are you getting the ROI on that investment? And, more importantly, can you prove that?

Let's start with a simple thought experiment.

What has been your recent experience with estimates? And how has that changed the direction of your organisation or team?

Did you use it to favour one work package over another, using expected ROI as a prioritisation technique?

Did you use it to decide a venture would even be practical, like a mini business case?

Did you use it to inform resourcing levels, such as staff numbers, time given, etc.?

Did you use it to appropriately size the amount of work going into a given team for a given period of time?

Or was it simply done because it was expected?

And maybe more importantly, did it really make any difference to the organisation?

Now this can be a much trickier one to answer. Let's take the previous examples and dig a bit further.

If you are using estimation as part of prioritisation, as a way of favouring one work package over a number of possible options, then this seems sensible.

If you have multiple things your team could work on, then trying to have some prioritisation makes sense. After all, when do we ever reach the bottom of our to-do lists?

So ranking these tasks by expected ROI is useful: the mini business case of expected benefit versus expected cost, with our estimate being part of that.
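To make the arithmetic concrete, here is a minimal sketch of what ranking by expected ROI might look like. The work packages, benefit figures and cost estimates are entirely hypothetical, and the calculation is deliberately naive: it is nothing more than the mini business case described above.

```python
# A minimal sketch of ranking candidate work packages by expected ROI.
# The names, benefits and costs are entirely hypothetical, and both
# numbers are guesses subject to the uncertainty discussed in this episode.

work_packages = [
    {"name": "A", "expected_benefit": 80_000, "expected_cost": 40_000},
    {"name": "B", "expected_benefit": 80_000, "expected_cost": 20_000},
    {"name": "C", "expected_benefit": 30_000, "expected_cost": 25_000},
]

for wp in work_packages:
    # Expected ROI: benefit returned per unit of cost invested.
    wp["expected_roi"] = wp["expected_benefit"] / wp["expected_cost"]

# Highest expected ROI first - remember this ordering is only as good
# as the estimates feeding it.
for wp in sorted(work_packages, key=lambda p: p["expected_roi"], reverse=True):
    print(f"{wp['name']}: expected ROI {wp['expected_roi']:.1f}x")
```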

What we need to remember, however, is that both our expected benefit and our expected cost, and thus our expected ROI, are hypotheses. They're well-meaning, sure, but they're still guesses, subject to our own biases and preconceptions.

We will never know the true ROI until the job has been done. And if I'm being picky, often you don't truly know until much later down the line with the operational maintenance and real-life usage.

Steve McConnell, in his book Software Estimation: Demystifying the Black Art, introduces us to the Cone of Uncertainty. The Cone of Uncertainty illustrates how we know less about a given thing the further out we are. The closer we are to the tip, the slimmer the cone. The further away, the wider the cone.

In short, the earlier we are in a piece of work, the less we know about it.

The more we work on it, the more we know.
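As a rough illustration of the cone, the sketch below shows how the plausible range around a single nominal estimate narrows as a piece of work progresses. The phase names and multipliers are indicative only, loosely based on the ranges McConnell discusses; they are not figures to plan against.

```python
# A rough illustration of the Cone of Uncertainty: the same nominal
# estimate carries a much wider plausible range early on than late.
# Multipliers are illustrative only, not a prescription.

nominal_estimate_days = 100

phases = [
    ("Initial concept",       0.25, 4.0),
    ("Approved definition",   0.5,  2.0),
    ("Requirements complete", 0.67, 1.5),
    ("Detailed design done",  0.9,  1.1),
]

for phase, low, high in phases:
    print(f"{phase:>22}: {nominal_estimate_days * low:.0f}"
          f" to {nominal_estimate_days * high:.0f} days")
```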

This obviously has an impact on our expected costs, and indeed benefits. As we work on something that might have seemed like a great idea at the outset, it can prove to be an unworkable mess. Or, conversely, a mediocre idea may turn out to be a game-changer, pivoting the entire organisation onto an unexpected trajectory.

In short, we don't really know until we try it. Until it's proven, it's just a hypothesis, a guess. Yes, maybe it's well-intentioned and comes with considerable experience and thought, but it is still a guess.

So, while there is certainly an argument to be made for choosing Work Package B over Work Package A if they have the same expected benefit but B is expected to be half the work, it is still a guess, and should be treated as such.

Certainly, if you're using your expected ROI to prioritise, your processes need to be mature enough that, if additional evidence comes to light while working on it that affects the ROI, you can change course as appropriate. And that includes throwing away the work package at the earliest opportunity. While it can certainly be a wrench to walk away from sunk money, it's much better to do it early than to keep ploughing good money in after bad, perpetuating the sunk cost fallacy.

Okay, let's move on to the next, very similar case: if you use it to decide whether a venture is even practical, like a mini business case.

I've certainly done this in the past when looking at a major undertaking, something that, on paper, could take multiple years. In one case, I spent considerable time with my COO drawing up a shopping list of work packages, then estimating them out into what most people would recognise as a project plan spanning multiple fiscal years.

Again, this was all guesses.

Yes, well meaning, and with no small amount of experience. But it was a guess. It was very much at the outer edge of that Cone of Uncertainty.

And even worse in this example were two additional factors that would affect the likelihood of being correct: the size of the work and our proximity to it.

The size of the work has a considerable impact on estimation. Think about it in terms of our cone. By being that much bigger, we make the magnitude of the uncertainty so much greater. For a simple work package, we may be out by a few days or maybe weeks. For a large venture, we could be out by years.

And the proximity to the work relates to how far removed the estimators are from the team actually doing the work. In this example, I was leading the team that would take on the technical changes, but I wasn't one of the team who would be directly working on the change. As such, I didn't have the technical experience in the domain that the actual team had. Thus, something that seemed easy to me might have been nigh on impossible for the team, and vice versa.

And this is a common approach when assessing large ventures. Many a project has been designed and estimated by well-meaning, well-intentioned people too far removed from the coalface - carefully crafted plans, with considerable time and effort spent on approval and justification, only to fall to bits under the scrutiny of the delivery team.

So, again, we need to have a maturity in our process to highlight any fundamental problems as quickly as possible.

Generally, it's best to approach such a venture with a Minimum Viable Product approach. What is the minimum we can do to learn something valuable about this venture? In some cases, it may be to validate the market conditions. In others, it may be proving a technical unknown.

For example, sell on Amazon or eBay before building a world-class bespoke shopping experience for your niche product. Or manually handle technically complicated expensive activities to prove the service is something your market wants.

Laser focus on the fundamental risks that underpin the venture.

Too often, when we have a shopping list of tasks, it's too easy for the team to focus on the easy, showy ways to demonstrate forward momentum, in some cases leaving the complex, risky and ultimately most crucial parts to the end of the project, only to find that it simply is not viable.

Again, wouldn't we rather know that earlier rather than later, reducing the lost cost? Establish that there is no market before investing heavily.

I've talked about this previously in terms of small bets. Invest a small amount in something. If it pays off, invest more. If it doesn't, walk away from it. Use every opportunity to learn and adapt.

This approach not only helps to reduce the disasters, it also provides fertile soil for new growth, new ideas. Whereas an out-there idea may never see the light of day if it has to go through full-blown justification and approval scrutiny, investing a little, placing a bet on it, is so much safer, and thus a practical way of exploring new ideas.

Yes, many will fail, but some that would previously not even have been attempted will bear fruit.

Okay, let's move on to using it to inform resourcing levels, such as staff numbers, time given, etc.

Again, we are in much the same position as with the prior use cases. The further out we are on the Cone of Uncertainty, the less helpful this really is. It's also not necessarily a good way of understanding what effect staffing levels would have on any outcomes.

As the adage says, you can't produce a baby in one month by getting nine women pregnant. It's a complete fallacy to think that more developers will produce results quicker. As I cover in more detail in episode 188, simply throwing more developers at a problem doesn't help. More often than not, it actually hinders.

You generally get better uplift from better processes, better ways of working, which is often difficult to gauge from the outset.

Rather, like the small bets approach, start small. Focus on building a cohesive team. Develop solid processes and ways of working. Then, incrementally replicate that team, each time adjusting practices and ways of working to allow those teams to work in harmony, rather than getting in each other's way.

In time, it can be a delicate art to slice the work and responsibilities in an appropriate manner to avoid harmful dependencies. It's way too large a subject for this episode, but suffice it to say, evolving over time with careful nurturing will produce a much better result than attempting to engineer it from day one.

Let's move on to if you are using it to appropriately size the amount of work going into a given team for a given period of time. Again, this seems sensible on the face of it. We should have our teams working at a sustainable pace, thus right-sizing the work so that it's practical within a given time frame.

It's a little unfortunate that the Scrum framework has made the two-week sprint so synonymous with software delivery, simply by using the word "sprint".

We're in this for the long haul, so the word "sprint" can accidentally give the wrong idea.

The work needs to be attainable. If not, we are setting ourselves up for failure. We need to avoid burning out our teams with soul-destroying death marches, having to squeeze too much work into too short a period of time. Not only do we alienate our people, we simply will not get the best from them as they jump haphazardly from one burning fire to another. A terrible waste of investment, paying high salaries for terribly poor quality work.

Which brings me to the danger of using those estimates as commitments, as a means of holding the team's feet to the fire. Just because a team feels they should be able to achieve a given thing within a given time, it's not a guarantee.

Again, we have the effects of the Cone of Uncertainty.

Yes, we may be at a narrower point because of the proximity to the work, but it is still uncertain. It has happened to me countless times - we have confidence that we can get something done within a few days, yet, months later, hindsight educates us as to why we were so wrong. Conversely, I've had countless times where something was significantly quicker than expected.

Either way, we need to avoid punishing teams for making innocent mistakes. Yes, we can work with the team to improve their estimation skills, something I cover more in upcoming episodes, but forcing teams to work long hours or weekends simply because they made an incorrect assumption is, again, setting yourself up for failure.

At this point, I want to take a small aside to deal with the possible objections to this. Objections that run something like "we need commitments from the team to keep them honest, otherwise they could just be sat around doing nothing".

I'm sorry, but I'm going to treat this sentiment with the contempt that it deserves. This statement is saying that you do not trust your development team. If so, then fix that trust issue. Because punitive action is never going to resolve that problem.

It's similar to the misguided belief that a manager needs to sit and watch their staff - otherwise, they will just slack off. Covid showed us the fallacy of this. Teams can and do work just as well, if not better, without having the proverbial drum-beater behind them.

And trust me, if you have a lazy employee, then they are getting away with it in the office under the taskmaster's steely gaze just as much as they would at home. No, trust is something that must be built, not enforced.

So we fall to the last use case: we simply do it because it is expected.

This is the obvious one to push back on. If you really can't tie back to some demonstrable value, then you have to seriously ask yourself why you do it at all.

I'd assume that, even if it's not obvious now, it will likely have originally been one or more of those previous use cases. And even for those seemingly valid use cases, I hope this episode has challenged you to think a little deeper, to consider whether you really are getting the benefits you expect.

Regardless of the use case or cases for which you want to estimate, it is also useful to consider what would have been different if you hadn't expended that effort.

As I've said, estimates don't come for free. And in many cases, really good, valuable estimates are more often than not prohibitively expensive to produce. And in the worst case, the dogged adherence to "we've always done it this way" isn't just wasting effort, it's also getting in the way of innovation.

Thus, I ask you to take the time to think about your estimates: can you prove that you're actually attaining value from them and making a real difference to your business and organisation?

In next week's episode, I introduce the term "valuable estimate" as a shorthand for some value that is desired by the organisation asking for it.

What is valuable will be in the eye of the beholder and will vary. But two characteristics that are likely to be common are accuracy and precision.

When I talk about the accuracy of an estimate, it's how correct the estimate was to the actual value.

When I talk about the precision of an estimate, it's how close to the actual value we are attempting to be.

And in that episode, I'll start to hint at the efforts needed to achieve that valuable estimate.

Thank you for taking the time to listen to this podcast. I look forward to speaking to you again next week.