#160: Revisiting Software Development Estimation

I've long held the belief that Estimation is the source of much dysfunction within Software Development.

However, with the New Year, I'd like to take the opportunity to revisit my strong opinions on the subject - are they still valid? Are there better ways?

In this episode I recap the understandable desire for Software Development Estimations, why I feel it's a source of so much dysfunction, and my plan for the coming year to test and challenge my opinions.

Or listen at:

Published: Wed, 11 Jan 2023 16:17:19 GMT



Happy New Year and welcome to the first episode of 2023.

I've said before that I consider myself someone that has strong opinions weakly held. I will only hold a belief until evidence proves that the belief should be adjusted.

As such, I want to challenge myself to revisit my views on Estimation.

So think of this as my New Year's resolution to invest time into Software Development Estimation to see if my beliefs need to be updated. As such, I'll be using a lot of my learning time to research Estimation and then report back here later in this year on my findings.

So what is my current stance on Estimations?

I recognise that Estimations are desirable.

There are a number of reasons for wanting to have an idea of how long something is going to take. Predominantly those reasons fall into Cost, Risk and Planning.

Cost - ahead of time we naturally want to know how much something is going to cost. We can then use that with the expected benefits and produce an ROI justification - does the benefit justify the cost? This is a staple of running any organisation.

Risk - we want to have confidence we understand all the variables within our business. We want to be operating with certainty. Thus, we need to make sure that we have controlled all of those risks. Again, another staple of running any organisation.

Planning - we need to align different teams, disciplines and even organisations. If we are selling the software as a product, Marketing and Sales need to plan their activities. If we are using the software internally, we need to plan and coordinate training. If it's part of a bigger program of work, we need to make sure that disparate teams and organisations are ready to join everything together in an efficient manner. Again, another staple of running an organisation.

These are all reasonable desires.

If it was my money on the line, then I'd want to have these certainties too.

However, while I recognise the desirability, I also recognise something being desirable does not make it attainable.

A good way to look at this is through the lens of the Cynefin Framework. I introduced the Cynefin Framework in episode 157 as a means of understanding the complexity of different activities. The framework defines four domains - Clear, Complicated, Complex and Chaotic.

Within the Clear domain, we are dealing with the "known knowns". Thus, any Estimation effort will have a high level of certainty. We can be highly confident in our costs, risks and planning as it's an activity we've done time and time again. We have Best Practice for it, which has been tried and tested and we do not expect any variance from the norm.

The Complicated and Complex domains, however, are much less clear cut. Within the Complicated domain, we have the "known unknowns", and within the Complex domain we have the "unknown unknowns". We are moving further away from any level of certainty.

Software Development falls into the Complicated or Complex domain with a huge variety of factors affecting the quantity and impact of the "known unknowns" and "unknown unknowns".

For example, as a consultant, most engagements will start in that Complex domain with many "unknown unknowns". Be it related to the team's capabilities, the technologies, the desired outcomes, any organisational working practices, stakeholder desires, internal politics, external governance requirements, even things as simple as holiday patterns for the team - all of these, and many more, will have an impact on any level of confidence I can have in any Estimation.

In short, as much as I'd love to give my clients that level of certainty, I can't. Otherwise, I'd be lying to them.

So we mitigate that. We are asked to provide our best guess.

And this is unfortunately where dysfunction creeps in.

I may have a well-meaning hiring manager saying "We know it's only a guess. Don't worry, you won't be held to it". But experience teaches me otherwise.

Our desire for that certainty means that once something is on paper, it attracts a level of reality - something I've previously described as a dangerous artificial level of certainty.

No matter how caveated estimates are, they are too often taken to be a much greater level of certainty than intended. Someone somewhere in the stakeholder chain will take a glance at a Gantt chart and build their own reality around it. And the further removed that stakeholder is from meaningful conversations, the more likely they are to assume that their artificial reality is correct.

And this obviously causes a huge level of friction when actual reality doesn't match their artificial reality.

Commonly, this has resulted in teams being incorrectly blamed for missing deadlines, rushed work and poor quality as milestones loom, poor practice, and unhappy teams, middle management and stakeholders.

Many a product has been scrapped purely because of this disconnect.

Thus, for me, Estimates, while desirable, are a source of considerable friction and dysfunction within Software Development.

I generally find that in many cases the whole Estimation conversation can, and should, be replaced with a more sensible "why" conversation. It's not uncommon to find that the request for an Estimation is being used as a proxy for a different question entirely.

Plus, of course, many of our modern working practices in Software Development - Agile, Scrum, Lean, DevOps - promote that both the work and the desired outcomes are an iterative activity. They are a realisation that the work, and indeed the organisation, operate in that Complex or Complicated domain and thus should be working on shorter time horizons - effectively making the Estimation question moot in many cases.

Now, that may seem like a very opinionated stance.

And like I said in the intro, I have strong opinions, but I'm old enough and hopefully wise enough to hold those opinions up to the light and check that they remain true.

And for Estimations, that is something I intend to spend at least a couple of months this year exploring.

So what's the plan?

I plan to start by using the online training site Pluralsight. I'm a fan of Pluralsight content and I find that I consume it conveniently and easily, and they have a whole learning path devoted to "Estimating and Forecasting in an Agile environment". It's seven courses over 17 hours and claims that:

Quote: The learner will develop decision-making skills in assessing uncertain product development efforts, then modelling those uncertainties using different statistical models. After completing this path, the learner will know how to create probabilistic forecasts for agile development efforts so that they can align stakeholder expectations and foster sound business decision making. End quote.
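To give a flavour of what a "probabilistic forecast" of the kind that path describes might look like, here's a minimal sketch of a Monte Carlo throughput forecast. To be clear, this is my own illustration, not content from the course: the weekly throughput numbers are invented, and the approach shown (resampling historical throughput) is just one common statistical model among those such material might cover.

```python
import random

# Hypothetical historical throughput: stories completed in each of
# the last 8 weeks (invented numbers, purely for illustration).
weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5]

def forecast_weeks(remaining_stories, history, trials=10_000):
    """Monte Carlo forecast: for each trial, repeatedly sample a past
    week's throughput until the backlog is exhausted, and record how
    many weeks that took."""
    results = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_stories:
            done += random.choice(history)  # resample a historical week
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report percentiles rather than a single number: "85% of
    # simulations finished within X weeks" is a probabilistic
    # forecast, not a point estimate.
    return {p: results[int(trials * p / 100)] for p in (50, 85, 95)}

print(forecast_weeks(30, weekly_throughput))
```

The appeal of this style of answer is that it replaces a single, falsely certain date with a range and a confidence level - which maps much more honestly onto the Complicated and Complex domains discussed above.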

Pluralsight also has a number of other related paths, so I may branch off if I feel appropriate.

I suspect after that I may branch off into books, depending on what I can find on the subject.

And I'd also ask you, the audience: do you have anything I should add to my study guide? Maybe you're able to recommend some great content on the subject, or simply have your own experiences of Estimation. I'd really welcome that, so feel free to share.

I'd expect it to be a good few months before I report back, unless I find something really interesting in the meantime to share.

I hope you have a really good and productive 2023. Thank you for listening to this podcast. I look forward to speaking to you again next week.