This episode is part of a wider mini-series looking at Estimation in Software Development. In last week's episode, I introduced two approaches to software estimation: Qualitative and Quantitative estimation. Qualitative estimation is predominantly based on expert judgment, a subjective thought process. Quantitative estimation, by contrast, is based on something we count or calculate, using statistical analysis of historical data. In this episode, I want to dive deeper into the Qualitative approach and look at some specific practices.
Published: Wed, 29 Jan 2025 01:00:00 GMT
Hello, and welcome back to the Better ROI from Software Development podcast.
This episode is part of a wider mini-series looking at estimation in software development. I started the mini-series in episode 189 by providing a set of overall guidelines.
Subsequent episodes take a deeper dive into specific aspects of estimation in software development. While long-term listeners may find an amount of repetition across the series, I wanted each episode to be understandable in its own right and, as much as practical, to be self-contained advice.
In last week's episode, I introduced two approaches to software estimation, Qualitative and Quantitative estimation.
Qualitative estimation is predominantly based on expert judgment, a subjective thought process.
Quantitative estimation, by contrast, is based on something we count or calculate, using statistical analysis of historical data.
I also discussed how neither approach is perfect, and that getting the most value is likely to take a blend of both.
In this episode, I want to dive deeper into the Qualitative approach and look at some specific practices.
Qualitative practices are common in various contexts. They are valuable for exploring complex, nuanced issues where numerical data alone might not provide a complete picture, and they are especially useful for understanding the why and the how behind certain phenomena, behaviours or trends. We find them heavily used in market research and product development.
Some of these practices may already be familiar to you: they have been the stock-in-trade of product discovery and development for years.
As an aside, I find that many of these practices are losing popularity, losing ground to experimental thinking. Many of these practices are time-consuming and expensive to set up, and even the best-run sessions still capture responses from a narrow audience in an artificial environment. While I personally feel there is still value in these practices, I do feel we should be careful how much weight we give to the insights they provide.
Ideally, these insights are used to inform our experiment-driven approach: use those insights to generate a hypothesis, test it, and review the feedback it generates. Often, this experimental approach provides quicker, cheaper and more representative data than its Qualitative counterparts.
Okay, back to software development estimation. Let's look at five Qualitative practices: expert judgment, brainstorming, the Delphi technique, user stories and retrospectives.
Expert judgment attempts to leverage the experience and intuition of experts to estimate or make decisions about a project.
At its simplest, "off-the-cuff" is a form of expert judgment.
"Off-the-cuff" is a phrase I'll use many times in this series. It refers to doing or saying something spontaneously without prior preparation or thought. In the context of estimates, an "off-the-cuff" estimate would be one made quickly and without detailed analysis or consideration. Such estimates are typically based on a person's immediate intuition or a rough guess, rather than a thorough examination of the relevant data or structured estimation process.
But, expert judgment could be extended to approaches that use someone outside of the delivery team to estimate. Historically, this may have been a manager or a technical architect, removed from the actual work, but believed to be close enough to make an educated estimate.
And this may be an approach that you want to use if you want to size the work before engaging the delivery team, or as a way to outsource the estimation if the team in question doesn't have the skills to produce an estimate.
But, as Steve McConnell says in his book, Software Estimation: Demystifying the Black Art:
"When discussing expert judgment, we first need to ask, expert in what? Being expert in the technology or development practices that will be deployed does not make someone an expert in estimation."
As a general rule, however, we should probably consider these estimates to be inaccurate, potentially by a considerable margin.
Expert judgment can be improved by using the wisdom of crowds via either brainstorming sessions or the Delphi technique.
Brainstorming sessions build on that expert judgment by using the wisdom of the group, generating ideas and solutions through open, unstructured group discussion.
The key here is group discussion. Ideally, this is the group expected to deliver the software.
As I discussed back in episode 196, this group activity should be happening as part of planning anyway, so it does provide the potential to feed into estimation work.
Certainly, having a wide collection of experience and perspectives can help with both planning and estimation.
The expectation is that by having many voices in the conversation, we gain from the group mind: we are less likely to miss important steps and more likely to self-correct by questioning outlier estimates.
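To make that self-correction concrete, here is a minimal sketch of how a team might flag outlier estimates for discussion. It is an illustration only: the names, the numbers and the 50% tolerance are my own assumptions, not a standard rule.

```python
from statistics import median

def flag_outliers(estimates: dict[str, float], tolerance: float = 0.5) -> list[str]:
    """Flag any estimate that differs from the group median by more than
    `tolerance` times the median, not to reject it but to discuss it."""
    mid = median(estimates.values())
    return [name for name, days in estimates.items()
            if abs(days - mid) > tolerance * mid]

# Hypothetical estimates, in days, for the same piece of work.
estimates = {"Alice": 5, "Bob": 6, "Carol": 4, "Dave": 15}
print(flag_outliers(estimates))  # ['Dave']
```

The point isn't the arithmetic; it's that an outlier like Dave's prompts the question "what do you know that the rest of us don't?", which is where the value of the group mind comes from.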
There are downsides, however. It inherently costs more as more people will need to be involved, and is likely to happen later in the delivery process, i.e. once a delivery team is formed. But these activities will need to happen, even if only for planning, so I'm not sure how much of this is a true downside.
Potentially, start off with expert judgment and then, once you have the team in place, use this approach to refine the estimate further, producing a better, clearer estimate.
The danger for me in this is the loudest voice in the room taking over. It's quite easy for this to revert to being an individual's expert judgment, purely because they are the loudest, or maybe the most convincing, in the room.
There is a tendency for people to want to agree and be seen to agree. Care needs to be taken to ensure all voices are able to contribute to the collective and still come out with a meaningful outcome.
An alternative is the much more structured Delphi technique.
It's not a method I've used previously, but I can see it being a method to deal with the loudest voice in the room problem.
The Delphi technique works by assembling a group of experts, in our case the delivery team, and then allowing them to provide their opinions anonymously. A facilitator then provides a summary of the responses, which the experts use as feedback to revise their earlier answers, all with the aim of converging on a consensus.
This loop continues until consensus is reached.
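As an illustration, here is a toy sketch of that loop in code. The consensus rule, the round limit and the "revise halfway towards the summary" step are all my own simplifying assumptions; in reality the revision is a human judgment, not a formula.

```python
from statistics import mean, stdev

def run_delphi(initial_estimates, max_rounds=5, spread=0.10):
    """A toy Delphi loop. Each round the facilitator shares an anonymous
    summary (here, just the mean) and every expert revises their estimate.
    Consensus is declared when the spread of estimates is small."""
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        summary = mean(estimates)  # the facilitator's anonymous summary
        if stdev(estimates) <= spread * summary:
            return summary, round_no  # close enough: call it consensus
        # Stand-in for human judgment: each expert revises halfway
        # towards the summary after reading the anonymous feedback.
        estimates = [(estimate + summary) / 2 for estimate in estimates]
    return mean(estimates), max_rounds

consensus, rounds = run_delphi([4, 5, 6, 15])  # estimates in days
print(f"Consensus of ~{consensus:.1f} days after {rounds} round(s)")
```

Even in this simplified form, you can see the key property: the outlier's view is pulled into the conversation without anyone ever knowing whose it was.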
This approach has the potential benefits of anonymity removing the loudest-voice-in-the-room problem, of reducing the pressure to simply agree with colleagues, and of providing a structured, iterative path towards a genuine consensus.
Let's move on to user stories.
In Agile Software Development, user stories are short, Qualitative descriptions of software features from the perspective of the end user.
A user story is generally made up of: a title; a brief description of the story; a user role, specifying who the user is (e.g. "as a registered user"); a goal, describing what the user wants to achieve (e.g. "I want to log in to the application"); a benefit, explaining why the user wants to achieve this goal (e.g. "so that I can access my personal dashboard"); and acceptance criteria, defining the conditions that must be met for the story to be considered complete.
So, for example: "As a registered user, I want to log in to the application, so that I can access my personal dashboard."
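For teams that like to capture this structure somewhere tooling can reach it, here is a minimal sketch of a user story as a data structure. The field names and the acceptance criteria shown are illustrative assumptions on my part, not a standard format.

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str
    description: str   # a brief description of the story
    role: str          # who the user is
    goal: str          # what the user wants to achieve
    benefit: str       # why the user wants to achieve it
    acceptance_criteria: list[str] = field(default_factory=list)

    def as_sentence(self) -> str:
        """Render the classic 'As a..., I want..., so that...' form."""
        return f"As a {self.role}, I want {self.goal}, so that {self.benefit}."

login = UserStory(
    title="User login",
    description="Allow returning users to sign in to the application",
    role="registered user",
    goal="to log in to the application",
    benefit="I can access my personal dashboard",
    acceptance_criteria=[
        "A valid email and password logs the user in",        # assumed example
        "Invalid credentials produce a clear error message",  # assumed example
    ],
)
print(login.as_sentence())
```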
Generally, any piece of work can be broken down into multiple user stories.
The advantage of user stories is that they force the team to step out of the technical tasks and consider what the work actually means to the user, and, possibly just as importantly, to define the acceptance criteria so we know when we are done.
I can't stress enough how important it is to have acceptance criteria up front. It's invaluable for bringing everybody onto the same page about what the work is, how it will behave and how to test it. I'm sure we've all got to the end of a piece of work only to hear the infamous words "I thought it was going to do X".
User stories provide benefit as part of planning, and are an Agile development staple, so I would expect them, or something similar, to appear in any planning activity. But they can also be a useful way of fleshing out the work to support Qualitative estimation methods.
And finally, Retrospectives.
Retrospectives provide us an opportunity to reflect on the processes and outcomes of our ways of working, to identify lessons learned and areas for improvement. Again, this is an Agile staple, a means of continual improvement in our ways of working.
The delivery team should be engaging in retrospectives on a regular basis anyway, but you may want to focus specifically on estimation if there is value to the organisation, either by making it the focus of existing retrospectives for a period, or holding estimation-specific retrospectives.
Note that retrospectives are intended to improve the team's way of working over time and are expected to be continual. The team can always be improving.
Care should be taken to avoid unintentionally putting this in a negative light. As I talked about in episode 198, missed estimates must not be seen as a failure. If the team feels they are deemed to have failed, it can quickly result in a lack of trust and dysfunctional behaviour.
In this, you're all on the same side of the table, looking at methods to improve the value of estimates produced.
In next week's episode, I want to take a specific look at another method that I'd personally consider a Qualitative approach. But it's so different to those discussed in this episode that I felt it needed its own episode: the #NoEstimates approach.
The underlying precept is to break the work down into small increments, with the aim of rapidly producing shippable software. This removes the need for estimation.
However, in my opinion, saying there is no estimation is possibly a tad misleading, but the approach defines valuable key principles that are well worth exploring.
Thank you for taking the time to listen to this episode. I look forward to speaking to you again next week.