This episode is part of a wider mini-series looking at estimation in software development. In this episode, I wanted to look at practical methods to help develop the team's estimation skills. We cannot expect valuable estimation without investment. Like any skill, estimation has to be practised and refined to obtain a good level of proficiency. So in this episode, I want to take a look at two approaches to estimation - Qualitative and Quantitative.
Published: Wed, 22 Jan 2025 01:00:00 GMT
Hello, and welcome back to the Better ROI from Software Development podcast.
This episode is part of a wider mini-series looking at estimation in software development. I started the mini-series in episode 189 by providing a set of guidelines.
Subsequent episodes take a deeper dive into specific aspects of estimation in software development. And while long-term listeners may find a degree of repetition across the series, I wanted each episode to be understandable in its own right and, as much as practical, to be self-contained advice.
In this episode, I wanted to look at practical methods to help develop the team's estimation skills.
As per point 3 in the guidelines: provide the team with training and time to develop their estimation skills.
We cannot expect valuable estimation without investment.
Like any skill, it has to be practised and refined to obtain a good level of proficiency.
As I've compared previously, we can't get an oil painting set for Christmas and expect to be exhibiting in the Louvre by New Year.
So in this episode, I want to take a look at two approaches to estimation - Qualitative and Quantitative.
Let's start with a quick description of Qualitative and Quantitative estimation.
Qualitative estimation is predominantly based on expert judgment - a subjective thought process.
Whereas Quantitative estimation is based on something we can count or calculate - statistical analysis of historical data.
During my recent studies of software development estimation, I was somewhat surprised at how broad and mature Quantitative estimation practices are. I was certainly unaware of how much statistical analysis can be applied to the practice, and that Quantitative estimation was universally deemed to produce better estimates than Qualitative.
Steve McConnell's book, "Software Estimation - Demystifying the Black Art", kindly summarises much of his research and findings into a single tip, and it fits nicely here:
"Count if at all possible. Compute when you can't count. Use judgement alone only as a last resort."
And McConnell's tip is backed up by countless books and studies showing that human judgment - the Qualitative - is incredibly inaccurate for estimation. And yet, within my 30 years of professional experience, I have only ever seen examples of Qualitative estimation in use.
When I started this mini-series back in episode 189, I talked about a number of irreconcilable contradictions that software estimation throws up - and this "Quantitative is recommended, but Qualitative is used" is one such contradiction.
Let's dig deeper into Qualitative estimation.
At its heart, Qualitative estimation relies heavily on the experience and intuition of experts. It involves subjective assessment rather than the strict numerical analysis found in Quantitative methods.
The approach can use descriptive methods like analogies, expert panels and consensus building to estimate project parameters. This can provide more flexibility than Quantitative methods, as it can be more adaptive to the unique aspects of each project, allowing for adjustments based on non-measurable factors like team experience, client relationships and project complexity.
It can also involve scenario analysis, risk identification, and assessment based on human judgment and experience. And it can be particularly effective in understanding and incorporating project-specific nuances and team dynamics.
But the main drawback is the potential for bias and inconsistencies. As these methods rely heavily on human judgment, they can also lack the precision and reproducibility of Quantitative methods.
Several books have delved into the challenges and limitations humans face when it comes to estimation, particularly in complex domains like project management, software development and decision-making. These books often draw from psychology, behavioural economics and empirical studies to illustrate why humans struggle with accurate estimation.
"Thinking Fast and Slow" by Daniel Kahneman is probably the best-known. The book explores the cognitive biases and systematic errors that affect human judgment and decision-making. Kahneman, a Nobel laureate in economics, explains how intuitive thinking, system one, often leads to errors and biases, including those related to estimates and forecasts. He emphasizes the overconfidence bias, where individuals tend to be overly optimistic about their estimations and predictions.
Steve McConnell, in his book, gives us this tip:
"Don't reduce developer estimates, they're probably too optimistic already"
In related work, "Decisive - How to Make Better Choices in Life and Work" by Chip and Dan Heath, a book predominantly about improving decision-making processes, discusses how cognitive biases and emotional influences can lead to poor decision-making. While not directly about estimation, these same biases and influences often affect estimation accuracy as well. The Heaths' "WRAP" process (Widen your options, Reality-test your assumptions, Attain distance before deciding, and Prepare to be wrong) is particularly relevant, as it encourages considering a range of outcomes and acknowledging the possibility of being wrong - both crucial in estimation.
I'd also highlight the related work in "Radical Focus - Achieving Your Most Important Goals with Objectives and Key Results" by Christina Wodtke. Wodtke's book focuses on goal setting and achievement using the Objectives and Key Results (OKRs) framework. While it is more about strategic planning and execution than estimation, the principle of setting clear objectives and measurable key results can indirectly highlight the difficulties of estimation. The emphasis on adaptability and learning in OKRs also aligns with the need for flexibility in estimations, especially when dealing with uncertainties and changing environments.
And you may also have come across the well-known and widely accepted concept of the planning fallacy. The planning fallacy is a cognitive bias that affects people's ability to accurately predict the time or resources needed to complete a future task, despite knowing that similar tasks have typically taken longer in the past.
This term was first introduced by psychologists Daniel Kahneman and Amos Tversky in 1979. Here's a summary of its key aspects:
Optimism bias: The planning fallacy is primarily driven by optimism bias, where individuals underestimate the time, costs and risks of future actions, while overestimating the benefits. This bias occurs even when they are aware of past experiences where similar tasks took longer than planned.
Ignoring historical data: Despite having historical data indicating that similar projects have overrun in terms of time and cost, individuals still tend to plan for best-case scenarios. This is partly because they believe their project is inherently different and will not encounter the same issues.
Focus on future success: People engaged in the planning phase often focus on the most optimistic scenario for their project, where everything goes according to plan without any major issues.
External vs. internal factors: The planning fallacy is influenced by a failure to account adequately for unexpected external factors, and an over-reliance on the belief in one's ability to overcome obstacles.
In the New York Times bestseller "Nudge", Richard H. Thaler and Cass R. Sunstein reference the planning fallacy often, and go on to say:
"Thousands of studies confirm that human forecasts are flawed and biased."
Thus, while there is value in Qualitative approaches - especially if used to allow the team doing the work to explore and discuss the work, its risks and its assumptions - they can be poor at producing valuable estimations, as so much is based on subjective bias and individual experience.
Let's now take a deeper look into Quantitative estimation.
Quantitative estimation is primarily based on numerical data. It uses metrics and historical data to make predictions about future projects. This approach often includes statistical techniques and mathematical models - methods like PERT (Program Evaluation and Review Technique), CPM (Critical Path Method) or SPERT (Statistical PERT), for example.
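As an illustration, here's a minimal sketch of the classic PERT three-point calculation for a single task, using the standard beta-distribution approximation; the optimistic, most-likely and pessimistic figures below are hypothetical:

```python
# PERT three-point estimate for a single task (hypothetical figures, in days)
optimistic = 3.0    # best case
most_likely = 5.0   # most likely
pessimistic = 12.0  # worst case

# Classic PERT weighted mean and standard deviation
expected = (optimistic + 4 * most_likely + pessimistic) / 6
std_dev = (pessimistic - optimistic) / 6

print(f"Expected: {expected:.2f} days (+/- {std_dev:.2f} days)")
```

Note how the pessimistic case is allowed to pull the estimate upwards - the weighted mean here comes out at roughly 5.8 days, above the most-likely figure of 5.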
Quantitative methods strive for objectivity, minimising the influence of personal bias or opinion. The focus is on measurable, empirical evidence, aiming to provide precise, often numerical, estimates - such as the number of hours or days, or the cost, required for a project.
Now again, this approach does not come without its own drawbacks and limitations. Its reliance on historic data can be a downfall.
Firstly, you must have that historic data. And secondly, that historic data needs to be relevant to the work that you are trying to estimate.
The greater the variability between the work being estimated and the historic work, the less value any estimate will carry.
To be honest, there will always be variability. You're not going to have the same team with the same capabilities producing the same output. If nothing else, you'll be asking the team to produce something slightly different each time. You're asking for a unique piece of art each time, not the same mass-produced widget churned out by a production line.
And this, in my mind, is where Quantitative estimation is an incomplete approach.
While there is definitely some interest in Quantitative approaches using statistical modelling (I'll talk about some example approaches in future episodes), there will always be a level of Qualitative work going into them. If nothing else, there will be the Qualitative judgement on the comparable size and complexity of the work to be undertaken to allow a Quantitative comparison.
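To give a flavour of that statistical modelling ahead of those future episodes, here's a minimal Monte Carlo sketch: it resamples historical task durations (hypothetical figures) to build a distribution of possible totals for a project assumed to contain 20 similar tasks. Note the Qualitative judgement hiding in plain sight - someone has to decide that these 20 tasks are comparable to the historical ones:

```python
import random

# Historical task durations in days (hypothetical figures)
historical_durations = [2, 3, 3, 4, 5, 5, 6, 8, 10, 13]

tasks_in_project = 20  # a Qualitative judgement: "about 20 comparable tasks"
trials = 10_000

# Simulate the project many times by resampling historical durations
totals = sorted(
    sum(random.choice(historical_durations) for _ in range(tasks_in_project))
    for _ in range(trials)
)

# Report a range of percentiles rather than a single number
for pct in (50, 85, 95):
    print(f"{pct}th percentile: {totals[int(trials * pct / 100) - 1]} days")
```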
And regardless of how regimented we are, there will be Qualitative judgments being applied to the Quantitative results, even if it's only subconscious.
The System 1 intuitive thinking part of our brains, as discussed by Daniel Kahneman in his book "Thinking, Fast and Slow", will automatically kick in when giving an estimate and give us a "does that feel right?" response.
Sometimes that can be helpful.
The team is now half the size it was when the historical data was collected - does it seem likely an estimate based on the original size would be correct?
And sometimes it can be destructive - "our team is much better than it was; I'm sure we could do it much faster than we have historically".
It's the ego trap of every manager and leader thinking their team is so much better than the average.
Yet again, Steve McConnell provides advice on this:
"Avoid using expert judgment to tweak an estimate that has been derived through computation. Such expert judgment usually degrades the estimate's accuracy."
In this episode, I wanted to introduce the differences between Qualitative and Quantitative approaches to software development estimation.
Within software development, estimation can be approached through either method, each offering a different perspective and a set of tools for predicting project timelines, costs and resources. Understanding the differences between the two approaches is key for effective project planning and management.
However, in practice, the most effective software development estimation will involve a combination of both Quantitative and Qualitative methods, leveraging the strengths of each to provide a well-rounded, realistic view of project requirements and expectations.
Over the next few episodes, I will explore this further and look at examples of Qualitative and Quantitative approaches.
And I'll use next week's episode to dive further into Qualitative approaches and discuss specific practices.
I will specifically look at five types of Qualitative practice.
Thank you for taking the time to listen to this podcast, and I look forward to speaking to you again next week.