This episode is part of a wider mini-series looking at Estimation in Software Development. So for this episode I want to introduce the idea of a "valuable estimate" and what that may mean for you. As I go through this series I will use the term "valuable estimate" as a shorthand for some value that is desirable by the organisation asking for it. This may seem a little vague … and to be honest, by design it is. I've spent a lot of time trying to decide what the best estimates should look like - but ultimately I feel it depends on a variety of factors - and is likely to be specific to an organisation - or maybe an initiative - or maybe even to an individual piece of work. We quickly find ourselves in the consultant's stock answer of "it depends". So while, in this episode, I'll discuss two key characteristics that I believe affect the value of an estimate, the actual (or imagined) value of an estimate will be in the "eye of the beholder". Thus the shorthand of "valuable estimate".
Published: Wed, 13 Nov 2024 01:00:00 GMT
Hello and welcome back to the Better ROI from Software Development podcast.
This episode is part of a wider mini-series looking at estimation in software development. I started the mini-series in episode 189 by providing the following guidelines:

1. Don't invest in estimates unless there is clear, demonstrable value in having them.
2. Agree what a valuable estimate looks like. This will likely be a desirable level of accuracy and precision for an estimate.
3. Provide the team with training and time to develop their estimation skills.
4. Collect data on points 1 to 3 and regularly review whether you have the correct balance.
Subsequent episodes take a deeper dive into specific aspects of estimation in software development. And while long-term listeners may find an amount of repetition across the series, I want each episode to be understandable in its own right and, as much as practical, to be self-contained advice.
So for this episode I want to introduce the idea of a "valuable estimate" and what that might mean to you.
As I go through this series I will use the term valuable estimate as a shorthand for some value that is desirable by the organization asking for it.
This may seem a little vague, and to be honest, by design it is. I spent a lot of time trying to decide what the best estimate should look like, but ultimately I feel it depends on a variety of factors. And it's likely to be specific to an organisation, or maybe even an initiative, or maybe even to an individual piece of work.
We quickly find ourselves in the consultant's stock answer of "it depends".
So, while in this episode I'll discuss key characteristics I believe affect the value of an estimate, the actual or imagined value of an estimate will be in the eye of the beholder.
Thus, the shorthand of Valuable Estimate.
Okay, let's look at two characteristics that commonly affect the value of an estimate - accuracy and precision.
When I talk about accuracy of an estimate, I mean how close that estimate was to the actual value.
When I talk about the precision of an estimate, I mean how close our estimate is intended to be to the actual value.
We will only know how accurate we have been after the event.
We will, however, know how precise we are trying to be when we are making the estimate.
For example, if we estimate a given piece of work will take between 1 and 4 weeks, then we have a precision of 3 weeks, over which we expect the work to be completed. We know the precision ahead of time. We don't know how accurate we are until the work is completed. So in this example, if the work was completed in 2 weeks, then our estimate was accurate, it was within that 1 to 4 weeks estimate.
If, however, it took 5 weeks, our estimate was still fairly accurate: it was wrong, outside of our 1 to 4 weeks, but not by much. Most people would be fairly comfortable with that.
If, however, it took 2 years, then our estimate of 1 to 4 weeks would be completely out. I doubt anyone would be comfortable with that level of accuracy.
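To make that concrete, here's a minimal sketch in Python. It's my own illustration rather than anything from the episode, and the names (such as RangeEstimate) and numbers are assumptions chosen to match the 1-to-4-week example: the precision (the width of the range) is known when the estimate is made, while accuracy can only be checked once the actual duration is known.

```python
# Illustrative sketch only - names and numbers are assumptions, not from the episode.
from dataclasses import dataclass

@dataclass
class RangeEstimate:
    low_weeks: float    # optimistic end of the range
    high_weeks: float   # pessimistic end of the range

    @property
    def precision_weeks(self) -> float:
        # Precision, as used here: the width of the range we are committing to.
        # We know this at the moment we make the estimate.
        return self.high_weeks - self.low_weeks

    def was_accurate(self, actual_weeks: float) -> bool:
        # Accuracy can only be judged after the work is done.
        return self.low_weeks <= actual_weeks <= self.high_weeks

estimate = RangeEstimate(low_weeks=1, high_weeks=4)
print(estimate.precision_weeks)    # 3 weeks of precision, known ahead of time
print(estimate.was_accurate(2))    # True  - completed within the range
print(estimate.was_accurate(5))    # False - wrong, but not by much
print(estimate.was_accurate(104))  # False - roughly 2 years; completely out
```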
Let's try a thought experiment;
Let's say our original estimate had been between 1 week and 2 years. Then, if the work had been done in 5 weeks or 2 years, our estimate would have been accurate. But we would have had a very poor level of precision. So much so that the estimate probably gives us no value in the first place.
Thus, in most cases, we should be considering an estimate as valuable if it falls within the organisation's acceptable level of accuracy and precision.
Which of course brings us to the question, what is the acceptable level?
Again, we are back in the "it depends" territory.
What level of accuracy and precision would your organisation deem to be valuable?
And even within an organisation, this may change dependent on the work being done.
It would be impractical to expect the same level of accuracy and precision for a simple report versus an organisation-wide initiative. Yet few teams will define those levels ahead of producing estimates. And fewer still will continuously monitor and improve themselves in the accuracy and precision of those estimates.
But for now, let's focus back on the point that, to have any value, an estimate needs to have acceptable accuracy and precision.
So, let's start with accuracy.
What questions should we be asking ourselves?
We obviously want some level of accuracy. That would be self-evident. But the accuracy level is unlikely to need to be 100%. As with all things within software development and indeed modern business, achieving 100% can be prohibitively expensive, and often produces undesirable side effects.
If starting out with software development estimation, we can expect the accuracy to be relatively low and to build over time.
Controversially, I would suggest that a new team, or a team new to estimation, should expect to have an accuracy of less than 50%: a less than 50-50 chance of getting the work done within the estimate.
Now you might consider this as being unambitious, but humans are terrible at estimation.
I'll discuss this further in future episodes, but a lot of this can be summarised by the Planning Fallacy, which highlights a tendency for plans to be overly optimistic, to ignore historical data, and to fail to take external factors into account.
The New York Times bestseller, Nudge, by Thaler & Sunstein, references the Planning Fallacy and goes on to say "thousands of studies confirm that human forecasts are flawed and biased".
Thus, any initial accuracy expectations should be humble, until such time as prowess can be established and proven with historical data. And this will take work. Not just the initial work to produce one-off estimates, but work to improve the skills in producing those estimates.
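One way to ground those expectations is simply to record past estimates against actuals and measure the hit rate. The sketch below is purely illustrative - the sample data is invented - but it shows the kind of historical measure that can replace assumed prowess:

```python
# Illustrative sketch only - the historical data below is invented.
historical = [
    # (low_weeks, high_weeks, actual_weeks)
    (1, 4, 2),
    (2, 3, 5),
    (1, 2, 2),
    (3, 6, 9),
]

hits = sum(1 for low, high, actual in historical if low <= actual <= high)
hit_rate = hits / len(historical)

print(f"Estimate hit rate: {hit_rate:.0%}")  # 50% here - a humble but realistic starting point
```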
This is like any other skill, it needs to be developed.
Many high-profile projects, and even organizations, have failed because of the expectation that estimates could be produced with a high level of accuracy "off-the-cuff".
Off-the-cuff is a phrase I'll use many times in this series. It refers to doing or saying something spontaneously, without prior preparation or thought. In the context of estimations, an off-the-cuff estimate would be made quickly and without detailed analysis or consideration. Such estimates are typically based on a person's immediate intuition or rough guess, rather than on a thoughtful examination of the relevant data or structured estimation process.
Ok, now let's move on to precision.
Again, what constitutes value in precision is situation dependent and will, of course, be subject to the "it depends".
However, there are some important things to consider. A range will be less precise but more likely to be accurate. A single value will be more precise but less likely to be accurate.
So, for example, if we provide a range of one to four weeks, we are not as precise as saying the 2nd of April, but we are much more likely to be accurate.
And this is where a change of language can be helpful.
William Davies expresses this well in his Easily Estimate Projects and Projects course on Pluralsight.
In the course, William explicitly splits the concepts of estimates into two, the prediction and the forecast.
He defines an estimate as "somebody's idea about the true value of something unknown."
He defines a prediction as "somebody's single idea about the true value of something unknown."
He defines a forecast as "somebody's varied idea about the true value of something unknown."
In our previous example, the 1-4 weeks is a forecast, a varied idea, while the 2nd April is a prediction, a single idea.
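If it helps, the distinction can be sketched in code. This is my own illustration of the terminology, not material from William Davies' course, and the type names and dates are assumptions:

```python
# Illustrative sketch only - type names and dates are my own, not from the course.
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    delivery: date        # a single idea about the true value

@dataclass
class Forecast:
    earliest: date        # a varied idea: a range within which
    latest: date          # we expect the true value to fall

prediction = Prediction(delivery=date(2025, 4, 2))
forecast = Forecast(earliest=date(2025, 3, 5), latest=date(2025, 4, 2))

# The forecast carries its own confidence signal: the wider the range,
# the lower our confidence.
print((forecast.latest - forecast.earliest).days, "days of range")
```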
I personally feel that predictions, a single idea, are the wrong level of precision. It's trying to be too precise. It leads to the illusion of high confidence.
By giving a prediction, we express a level of confidence that I find impossible in the estimation of modern software development. It implies a level of confidence that can create an artificial reality, which in itself can be dangerous if an organisation bases decisions upon it.
There is also a danger of the prediction and a target date being conflated. If there is a target date, then the prediction date may lazily be set from that date, rather than going through any thought process to achieve it. If not, a target date may be based on the prediction, which can lead to the organization basing expensive decisions on overconfidence, such as a marketing launch for example.
Interestingly, I suspect most predictions are meant to be expressed as a "by" or "within", so with our 2nd of April, it's likely to have been "by the 2nd of April", which inherently changes our prediction into a forecast.
Now personally, I feel a forecast, the varied idea, carries much more value. It inherently carries an idea of the confidence of the estimation.
If the range is relatively low, that implies a good level of confidence, whereas a wide range implies a poor level of confidence.
For example, it would be good practice to expect a wide forecast range, thus low precision, early in a piece of work, when it is poorly understood and expected to be subject to complexities and unknown unknowns. Then, as the work progresses and more is known, that forecast range can be adjusted as appropriate. This goes back to the Cone of Uncertainty that I talked about in last week's episode.
Steve McConnell, in his book Software Estimation: Demystifying the Black Art, introduced us to the Cone of Uncertainty. The Cone of Uncertainty illustrates how we know less about a given thing the further out we are. The closer we are to the tip, the slimmer the cone. The further away, the wider the cone. The width of the cone, and the width of our forecast, is based on how far we are from having completed the work, from having removed the unknown unknowns and progressed to a greater level of certainty.
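As an illustration only (the percentages and week values below are made up, not taken from McConnell), re-forecasting as the work progresses might look something like this, with the range narrowing as unknowns are removed:

```python
# Illustrative sketch only - the figures are invented to show the shape of the cone.
reforecasts = [
    # (fraction_complete, earliest_weeks, latest_weeks)
    (0.00, 4, 16),     # before starting: wide range, low precision
    (0.25, 6, 12),     # some unknowns resolved: the range narrows
    (0.50, 8, 11),
    (0.90, 10, 10.5),  # near the tip of the cone: high precision
]

for done, low, high in reforecasts:
    print(f"{done:4.0%} complete: forecast {low}-{high} weeks (range {high - low} weeks)")
```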
In this episode, I've introduced the shorthand of Valuable Estimate: an estimate that is desirable for the organisation asking for it.
What a valuable estimate is will differ between organisations, teams and maybe even pieces of work.
However, I've suggested that in most cases, a valuable estimate would have an acceptable level of precision and accuracy. And I've discussed some of the considerations in establishing what that acceptable level might be.
In next week's episode, I want to expand on this further and discuss predictability.
I find that predictability carries more value for an organisation than the overly optimistic stretch estimates that teams are often asked to provide.
Organisations would rather have a higher level of accuracy over time than well-meaning but unrealistic estimates that produce low accuracy.
Predictability helps us to build trust. And trust and honesty are key to providing the agility and flexibility that modern businesses need to survive, let alone thrive.
Having that trust allows for many benefits. Individuals work better as a team. Ideas and concepts are openly shared, and ultimately, the organization benefits.
And as part of the suggestions for improving that predictability, I'll continue to discuss the effort needed.
Thank you for taking the time to listen to this podcast. I look forward to speaking to you again next week.