Back in January 2023, in episode #160, "Revisiting Software Development Estimation," I made the commitment to revisit and potentially revise my views on software development estimation. While I acknowledged the desirability of estimations for reasons like cost, risk, and planning, the episode recognised that these desires don't always make estimations attainable or accurate, particularly in complex or complicated domains like software development. In 2023, I planned to research estimation techniques, focusing on their feasibility and implications in the software development process. This research would involve exploring resources such as online courses, books, and audience suggestions, with the intention of sharing findings later in the year. This is the first, if somewhat delayed, in a series of episodes following that work. In this episode I will provide a summary of the upcoming episodes and how they fit together.
Published: Wed, 30 Oct 2024 01:00:00 GMT
Hello, and welcome back to the Better ROI from Software Development podcast.
Firstly, I apologise for the delay in the episodes. The last one was back in November. I had originally intended to take a couple of months out to prepare a series of episodes revisiting my personal views on estimation in software development. Ironically, that series on estimation took longer than I would have expected.
Back in January 2023, in episode 160, "Revisiting Software Development Estimation", I made the commitment to revisit and potentially revise my views on software development estimation.
While I acknowledge the desirability of estimations for reasons like cost, risk, and planning, the episode recognised that these desires don't always make estimations attainable or accurate, particularly in complex or complicated domains like software development.
Thus, in 2023, I planned to research estimation techniques, focusing on their feasibility and implications in the software development process. That research would involve exploring resources such as online courses, books and audience suggestions, with the intention of sharing findings later in the year. This is the first in a series of episodes following that work.
In this episode, I will provide a summary of the upcoming episodes and how they fit together. As much as possible, I will try to make each episode self-contained and usable in its own right, but expect common themes to run through them.
But, before I get into that episode summary, I want to share my overall thoughts, and a good chunk of my frustrations, in relation to estimation in software development.
No matter how I looked at it - whether in the courses I worked through or the books I read - estimation presents many seemingly irreconcilable contradictions.
The desire for accurate and precise estimates is higher the more complex the work. Yet, the more complex the work, the less likely you'll be able to produce accurate and precise estimates.
Low-effort, "off-the-cuff" estimates are the least likely way to achieve accuracy and precision. Yet, these low-effort, "off-the-cuff" estimates may yield the best return on investment.
Having an estimate is generally highly valued. Yet we don't invest the training and time in our teams to produce those valuable estimates.
Everywhere I looked, I could find further examples of these contradictions. And, to be honest, I found no way to reconcile them.
So, as much as I would like to provide a definitive answer on how to get the best out of software development estimation, I'm afraid it all comes back to the consultant's stock answer of "it depends".
I will, however, provide these basic guidelines and use the upcoming episodes to expand further:
Point 4 is the key to this. We need to continually review so that we have the best blend for the organisation or team in question.
If the modern world has taught us anything, it's that we need to continually be sensing and adapting. The Plan, Do, Study, Act cycle if you're a fan of Deming. The Define, Measure, Analyse, Improve, Control from Lean Six Sigma. The Plan, Design, Build, Test, Review from Agile. The Plan, Do, Check, Act from ISO 9001. The Plan, Code, Build, Test, Release, Deploy, Operate, Monitor from DevOps. And there are countless others.
They all have a basis in the scientific approach: have a hypothesis, build something to prove or disprove that hypothesis, review the results, repeat. It's about having, and acting on, that feedback loop. By having that feedback loop, you can build confidence that the outcomes are right for your organisation or team. You will establish empirical evidence on the return on investment.
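To make that review concrete for estimation itself, here is a minimal sketch in Python of what establishing that empirical evidence might look like - the figures and field names are purely illustrative assumptions of mine, not something from the episode:

# Illustrative sketch: reviewing estimation accuracy each iteration,
# in the spirit of Plan, Do, Study, Act. All figures are invented.
history = [
    {"iteration": 1, "estimated_days": 10, "actual_days": 18},
    {"iteration": 2, "estimated_days": 12, "actual_days": 16},
    {"iteration": 3, "estimated_days": 15, "actual_days": 17},
]

for record in history:
    # Relative error: how far the estimate was from what actually happened.
    error = (record["actual_days"] - record["estimated_days"]) / record["actual_days"]
    print(f"Iteration {record['iteration']}: estimate was {error:+.0%} out")

# The "Act" step: if the errors stay consistently large, either invest in
# improving the estimation practice or question whether the estimates are
# returning value at all.

Even something this simple gives you data to take into the next review, rather than a gut feel.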
Thus, while I may discuss the dysfunctions and cost of estimates, if you can prove that they are working in your situation, then great, carry on ... as long as you can prove it.
Ok, let's move on to what we can expect from the upcoming episodes.
In the next episode, 190, I ask: are you getting value from estimation?
In the episode, I expand on number 1 from the guidelines earlier: "Don't invest in estimates unless there is clear, demonstrable value in having them". So, are the estimates really going to change how your business operates? Estimates don't come for free, and I'd argue that really good, valuable estimates are, more often than not, prohibitively expensive to produce.
Thus, in the episode, I ask the question, are you getting ROI on that investment? And more importantly, can you prove it?
Then, in episode 191, What Constitutes a Valuable Estimate, I introduce the term Valuable Estimate as shorthand for an estimate that provides something the organisation asking for it actually finds desirable.
What is valuable will be in the eye of the beholder and will vary. But two characteristics are likely to be common - accuracy and precision.
When I talk about the accuracy of an estimate, I mean how close the estimate turned out to be to the actual value.
When I talk about the precision of an estimate, I mean how exact a value we are attempting to give.
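As a rough illustration of the difference - my own example, not one from the episode - an estimate can be precise without being accurate, and the other way around:

# Illustrative only: two ways an estimate of a 20-day task can behave.
actual_days = 20

# Precise but inaccurate: a very exact figure that is simply wrong.
precise_estimate = 11.5                     # quoted to the half-day
accuracy_error = abs(precise_estimate - actual_days) / actual_days

# Accurate but imprecise: a wide range that does contain the truth.
imprecise_range = (10, 30)                  # "two to six weeks"
range_width = imprecise_range[1] - imprecise_range[0]

print(f"Precise estimate missed by {accuracy_error:.0%}")
print(f"Imprecise range spans {range_width} days yet contains the actual value")

A valuable estimate will typically need a workable amount of both.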
I'll also start to hint at the effort needed to achieve that valuable estimate.
In episode 192, I'll move on to predictability.
In my experience, predictability carries more value for an organisation than the overly optimistic stretch estimates that teams are often asked to produce by middle management.
Leadership would rather have higher predictability than well-meaning but unrealistic estimates that produce low predictability.
Predictability helps to build trust, and trust and honesty are key to providing the agility and flexibility that modern businesses need to survive, let alone thrive.
Having that trust allows for many benefits. Individuals work better as a team. Ideas and concepts are openly shared. And ultimately, the organisation benefits.
And as part of the suggestions for improving that predictability, I continue to discuss the effort needed.
In episode 193, I ask the question, how much to invest?
In the episode, I start to weigh up the organisational value of that valuable estimate against the cost to achieve it.
Work comes at a cost, so we need to be confident that we are striking the correct balance. I ask how much we should really be investing and discuss things to consider when attempting to get a good return on that investment.
In short, if you want valuable estimates, be prepared to invest to get them.
I discuss why I feel the "off-the-cuff" estimate is so prevalent within software development: a super easy, quick way to keep a process moving, but one providing no real value other than ticking a box within some project management process.
If you are attaching real value to having a valuable estimate, then you should be expecting a comparable investment in the team's time to produce it.
In episode 194, I ask: with an estimate, what are you actually asking for?
In the episode, I want to discuss what the term "estimate" may mean to you and the colleagues that you work with.
I often find it's used interchangeably for the effort to achieve something, the date by which it will be done, and even a commitment that it will be done by then.
Often, the meaning of estimate changes between people, even within a single team, let alone an organisation. The definitions are easily conflated. Is it an amount of effort? Is it a target? Is it a commitment?
Thus, the term is often a cause of friction and dysfunction, because its meaning is very much in the eye of the beholder.
I follow that in episode 195 by looking at estimates by any other name.
The episode takes a look at proxies for estimates, such as t-shirt sizes or story points.
While they can be useful planning tools, they again can be conflated with an estimate. How often do we hear someone describe a "small" as "a week's work", or see some form of conversion rate from story points to days?
This just adds to the confusion and dysfunction discussed in episode 194.
In episode 196, I talk about Estimation vs Planning.
In the episode, I want to encourage you to mentally separate estimation and planning. They are often conflated, which leads to confusion, distrust and, again, more dysfunctional behaviour. An estimate is generally part of a plan, and that plan is an outcome of the act of planning.
Importantly, we have to remember, software development can't be delivered without planning, even if it's only the most cursory of activities.
Software development can be delivered without an estimate.
Yet, I often find more weight given to the estimate than to the planning.
In episode 197, I move on to Estimation vs Dependencies.
In the episode, I will look at the impact that dependencies have on our ability to produce valuable software estimation.
This specific episode came about because I happened to read the "Eliminate Dependencies, Don't Manage Them" blog post on scrum.org while I was preparing this series.
In brief, the article argues that you're better off eliminating dependencies than trying to manage them through traditional management processes.
In his book, Software Estimation: Demystifying the Black Art, Steve McConnell says that the size of the software is the single most significant contributor to project effort and schedule. Personally, I'd suggest that dependencies, if not of similar importance, are a close second.
In episode 198, I look at estimation versus the punitive target.
In the episode, I want to discuss the psychological scarring left behind from decades of using estimates as punitive targets.
I want to highlight the emotional baggage that your developers may associate with estimation, and, through no fault of your own, will carry into all future estimation work.
I, like most developers with any experience in the industry, have worked in organisations that take an estimate and then turn that into a punitive target. Where a well-meaning estimate has been turned into a commitment, and ultimately a stick to beat the team with. A stick that ranges from the aggressive to the passive, be it threats of dismissal or simply discussing the failure to hit that estimate.
In episode 199, I look at Quantitative vs. Qualitative.
In the episode, I start to introduce some practical methods to develop estimation skills. And I introduce the two main approaches, qualitative and quantitative.
Qualitative estimation is predominantly based on expert judgment, a subjective thought process, whereas quantitative estimation is based on something that we can count or calculate: statistical analysis of historical data.
I dive deeper into the Qualitative approaches in episode 200. In the episode, I discuss some specific Qualitative practices.
I specifically look at five types of Qualitative practices: expert judgment, brainstorming sessions, the Delphi technique, user stories, and retrospectives.
In episode 201, I look at the #NoEstimate approach.
I use the episode to take a specific look at another method that I'd personally consider a qualitative approach, but which is so different to those discussed in episode 200 that I felt it needed its own episode.
The underlying precept of the #NoEstimate approach is to break the work down into small increments with the aim of rapidly producing shippable software. In doing so, it aims to remove the need for software estimation.
However, in my opinion, saying there is no estimation is possibly a little misleading. But the approach defines valuable core principles that are well worth exploring.
In episode 202, I move on to the Quantitative.
In the episode, I want to move on from the Qualitative practices to discuss some Quantitative estimation approaches.
While Qualitative estimation is predominantly based on expert judgment, a subjective thought process, Quantitative estimation is based on something we can count or calculate: statistical analysis of historical data.
In the episode, I take a look at two practices, Monte Carlo simulations and Statistical PERT.
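To give a flavour of the Monte Carlo idea, here is a minimal sketch in Python - the task figures are invented, and this is my own illustration rather than a tool from the episode. Each task gets a three-point, PERT-style estimate, and we repeatedly sample a total so we end up with a distribution of outcomes rather than a single number:

import random

# Illustrative Monte Carlo schedule simulation. Each task is a
# (best case, most likely, worst case) estimate in days - invented figures.
tasks = [(2, 4, 10), (1, 3, 8), (5, 8, 20)]

def simulate_total():
    # random.triangular(low, high, mode) samples between low and high,
    # peaking at the most likely value.
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

totals = sorted(simulate_total() for _ in range(10_000))

# Report confidence levels rather than a single-point answer.
for pct in (50, 85, 95):
    days = totals[int(len(totals) * pct / 100)]
    print(f"{pct}% of simulations finish within {days:.1f} days")

The output is a probabilistic statement - "85% of simulations finish within so many days" - which is the same kind of answer Statistical PERT aims to provide from three-point estimates.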
In episode 203, I ask the question, is AI the answer?
Following on from the episodes on Qualitative and Quantitative approaches, it's easy to see there are pros and cons to individual practices. And realistically, it feels like you will need a blend of various approaches to create truly valuable estimates.
But this all seems like a lot of work and investment - a lot of training, tools and time - to achieve. And even more so to iteratively improve and nudge towards those valuable estimates.
As I'll say many times in this series, producing estimates costs. Producing valuable estimates costs a lot.
Thus, it doesn't take much of a leap of logic to ask the question, can AI help us here?
I use the episode to explore exactly that idea.
In an ideal world, there would be an AI-powered tool that would just do the work for us. Thus, I explore how such a tool could come into being, and probably more importantly, why I doubt it will happen any time soon.
In episode 204, I move on to Professionalism.
In the episode, I discuss how software estimation relates to Professionalism - and how, in many cases, surprisingly, the Professional response is not to provide an estimate.
Now you might be asking, why would an episode on Professionalism be included in a series focused on software estimation?
This, for me, is closely related to episode 198, where I talked about the psychological scarring left behind from decades of using estimates as punitive targets.
In a similar way, how many developers have been described as unprofessional when they legitimately cannot provide an estimate?
In those situations, what is the more professional response? Is it to give the "off-the-cuff" guess to keep the software development process moving? Or to take the harder path of explaining why you can't provide an estimate?
And in episode 205, I provide a wrap-up to the series.
It's the final episode of the mini-series, in which I provide a wrap-up, recapping those guidelines and providing some additional tips that don't have a home elsewhere in the series.
OK, so that's what's coming up over the next 16 episodes.
This is by far the largest mini-series I've created, and it's taken considerable research and work to prepare, so I can only hope that the end result is helpful.
Before I close this episode, while there have been a variety of sources I've used in the creation of the miniseries, there are two primary sources that I'd like to call out, both of which I'd recommend if you want to take a similar deep dive.
In reverse order.
The Estimating and Forecasting in an Agile Environment path from Pluralsight. I've talked about Pluralsight before - an online training provider covering a wide range of technical topics.
While the platform is starting to feel a little long in the tooth - a seeming lack of investment has led to limited new content - it still has an extensive back catalogue.
And the Estimating and Forecasting in an Agile Environment path is an interesting one, with over 16 hours of content across six courses, covering a number of estimation subjects.
But the clear winner is the book, Software Estimation: Demystifying the Black Art by Steve McConnell. While the book was published back in 2006, it is still seen as THE book on software estimation. And while I feel some parts of it are a little dated, the majority of the content, and the research backing it up, is just as relevant today as when it was written.
Thus, for anyone seriously trying to achieve valuable estimates, I highly recommend taking the time to read the book cover to cover. You will find that I reference it heavily during this series.
I'll include links in the show notes for both.
I hope you will join me in the upcoming episodes as I dive further into software estimation. And, of course, I hope that you will get some value from doing so.
I wish you all the best and I look forward to speaking to you next week.