This episode is part of a wider mini-series looking at Estimation in Software Development. In the last episode, I talked about the shorthand of a "valuable estimate" - an estimate that is desirable for the organisation asking for it. While what constitutes "valuable" will differ from organisation to organisation, team to team, and maybe even piece of work to piece of work, I feel that it will mostly be related to an acceptable level of accuracy and precision. Accuracy is how close to the actual the estimate was - which realistically can only be assessed after the work - and precision is how broad a range is acceptable - which can be defined as part of providing the estimate - and, if anything, also indicates confidence in the estimate - a narrower range indicating a higher confidence, a wider range a lower one. In this episode I want to talk about how predictability is part of this conversation.
Published: Wed, 20 Nov 2024 01:00:00 GMT
Hello, and welcome back to the Better ROI from Software Development podcast.
This episode is part of a wider mini-series looking at estimation in software development.
I started the mini-series back in episode 189 by providing the following guidelines:
1. Don't invest in estimates unless there is clear, demonstrable value in having them.
2. Agree what a valuable estimate looks like. This will likely be a desirable level of accuracy and precision for an estimate.
3. Provide the team with training and time to develop their estimation skills.
4. Collect data on points 1-3 and regularly review whether you have the correct balance.
Subsequent episodes take a deeper dive into specific aspects of estimation in software development. And while long-term listeners may find an amount of repetition across the series, I wanted each episode to be understandable in its own right - as much as is practical, to be self-contained advice.
In the last episode, I introduced the shorthand of "valuable estimate", an estimate that is desirable for the organization asking for it.
While what constitutes valuable will differ from organisation to organisation, team to team, and maybe even piece of work to piece of work, I feel that it will mostly be related to an acceptable level of accuracy and precision.
Accuracy is how close to the actual the estimate was, which realistically can only be assessed after the work.
And precision is how broad a range is acceptable, which can be defined as part of providing the estimate. And, if anything, it also indicates confidence in the estimate: a narrower range indicating a higher confidence, a wider range indicating a lower confidence.
In this episode, I want to talk about how predictability is part of that conversation.
In my experience, I find that organisations attribute more value to predictability than to overly optimistic estimation. They prefer higher accuracy over time to well-meaning but unrealistic estimates that result in low accuracy.
That's not to say we aren't often under pressure at the micro level, the individual piece of work, to be aggressive in our estimations. We'll always be told "ASAP" or "we need this yesterday".
But at the macro level, the team is delivering multiple pieces of work over time. Having a level of predictability gives an organisation the ability to make more meaningful decisions. As business leaders, we want the unvarnished truth, or at least the most transparent, honest truth we can get. Too many initiatives and businesses have failed because business leaders were given sugar-coated versions of the truth, leading to false security and ultimately failure.
We need to have honesty and transparency in all things. Otherwise, how can we possibly make meaningful decisions? Not least, organisations can make more informed strategic decisions when they have predictable estimates. For example, product launch dates, marketing strategies, and sales forecasts often depend on the predictability of software development timelines. While optimism in estimation isn't inherently negative, it's crucial that it's grounded in reality and supported by data and experience, to avoid the pitfalls of wishful thinking and to ensure that the benefits of predictability are realised.
So, let's take a moment to look at why predictability is favoured.
First and foremost to me is stakeholder trust and satisfaction. Trust is one of the most important parts of our professional relationships. When that trust is undermined, we find various dysfunctional approaches being put into place in an attempt to fix it, more often than not causing further erosion of trust and resulting in a spiral towards failure.
Consider how many Agile implementations organisations attempt to fix with traditional waterfall techniques. While well-meaning, these waterfall techniques erode any benefit that Agile can bring to an organisation, until there are no Agile benefits at all. Along the way, a complete lack of trust builds up among all parties, to the point that the situation is realistically unsalvageable.
Rather, it is the trust that needs to be worked on and maintained.
Having that trust allows for many benefits. Individuals work better as a team, ideas and concepts are openly shared, and ultimately the organisation benefits.
This is why I said earlier that we need honesty and transparency in all things. And sometimes that will mean we need to provide less than palatable estimates - either estimates that are considerably bigger than expected or desired, or an honest admission that we cannot provide an estimate, and why.
If, on the other hand, we always provide overly optimistic estimates, we may be applauded for a can-do attitude, but we break trust with the stakeholders, as well as being tempted to cut corners to achieve the unachievable.
For example, to reach those unachievable estimates, we may be tempted to accept and advocate for higher risk than the organisation or team would normally be comfortable with.
Or produce code to a lower quality standard, introducing higher future maintenance costs or customer-affecting security problems.
Or we sweat the team, asking them to work long, unsociable hours, ultimately burning them out or forcing them out of the organisation in search of a better work-life balance.
So, we need to arrive at that acceptable level of predictability. So, how do we do that?
As we talked about with accuracy last week, 100% predictability would be great, but is ultimately impossible.
In short, we have to come to it over time, over a series of estimates, to really understand how good our estimation has been, how accurate, and thus, how predictable we have been.
This raises the question: how often do we review our estimates once they are set?
Are we just setting the estimates once, before the work is done, then only looking at that estimate if the actual work took longer than expected? If you said yes, then you are probably within the vast majority of organisations using estimates.
You are using a fire-and-forget metric, one which often exists just to provide data to a project management process, rather than to provide value to the delivery effort.
Producing a high level of predictability is a skill. A skill that the team has to learn and gain experience in. As such, we need to invest the time to implement the feedback loops to allow the team to measure, assess and take learnings from each estimate.
So what does this look like in practice?
I suggest there are two key learning points, during and after.
After is probably the most important for establishing predictability. How accurate was our estimate? And what can we learn from the estimate and what actually happened to help us improve?
This is going to sound very similar to the Agile Retrospective, a ceremony in which the team assesses what has gone well, what hasn't and how to improve.
Much like any improvement in an Agile Retrospective, having relevant data to be able to see a trend is vital for knowing if you are improving or not. And in this case, it's that historical trend of how accurate our estimates have been.
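As a minimal sketch of what tracking that trend could look like, here is some illustrative Python. The function names, the use of days as the unit, and the accuracy ratio itself are my assumptions for the example, not anything prescribed in this episode.

```python
# Hypothetical sketch: recording estimate accuracy per piece of work so
# a retrospective can see whether the trend is moving towards 1.0
# (i.e. actuals matching estimates). Names and units are illustrative.

def accuracy_ratio(estimated_days: float, actual_days: float) -> float:
    """Ratio of actual to estimated effort; 1.0 means a perfect estimate."""
    return actual_days / estimated_days

def accuracy_trend(history: list[tuple[float, float]]) -> list[float]:
    """Accuracy ratio for each completed piece of work, in delivery order."""
    return [accuracy_ratio(est, act) for est, act in history]

# Example history: (estimated days, actual days) per piece of work.
history = [(5, 9), (8, 11), (10, 11)]
print(accuracy_trend(history))  # ratios drifting towards 1.0 suggest improvement
```

The point is less the arithmetic and more that the team keeps the historical record at all, so the retrospective has data rather than impressions.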
And I'd argue that this activity is also a great way to build trust with stakeholders. Being honest and transparent with the trend data, along with the why it is what it is, and the plans to improve it over time, really helps to demonstrate that the team wants to provide that value back to the stakeholder.
Now, obviously, there will be times when an individual estimate is completely out. It isn't even remotely accurate. And these should be celebrated as a learning opportunity on how to get better. How to improve the overall process.
This is similar to the blameless post-mortem approach, advocated after some outage or system failure. Yes, things didn't go right. But rather than using it as an opportunity to berate or even discipline, let's use this as an opportunity to learn and improve our systems and processes to become better, stronger and more resilient.
Through an iterative approach, you can work towards predictability as a trend. Yes, occasionally you will have outliers, but that is to be expected.
As an aside, if anything, 100% predictability is a possible warning flag that teams are inflating their estimates to always hit them, something referred to as sandbagging.
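That warning sign can be made concrete with a simple hit-rate check. This is an illustrative sketch of my own, not a formal method from the episode: if every actual lands inside its estimated range across many pieces of work, that "perfect" record may itself suggest padded ranges.

```python
# Illustrative check, not a formal method: a sustained 100% hit rate
# across many estimates may signal sandbagging - ranges padded so wide
# that they are always "hit".

def hit_rate(ranged_estimates: list[tuple[float, float, float]]) -> float:
    """Fraction of (low, high, actual) records where the actual fell in range."""
    hits = sum(1 for low, high, actual in ranged_estimates if low <= actual <= high)
    return hits / len(ranged_estimates)

# Hypothetical history: (low estimate, high estimate, actual) in days.
history = [(4, 6, 5), (3, 8, 7), (2, 10, 4), (5, 9, 8)]
rate = hit_rate(history)
print(f"Hit rate: {rate:.0%}")  # a sustained 100% may indicate inflated ranges
```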
The other opportunity to measure, assess and possibly even refine our estimate is during the delivery.
Often, when we make estimates, it will be upfront before any work is done, at the point where we know the least about the work. This is the edge of the cone of uncertainty.
In software estimation, at the edge of the cone, before we start the work, or maybe even during any investigation, there will likely be a high level of known unknowns and unknown unknowns, leading to a high level of uncertainty.
Whereas, the more we work on the piece, the more we discover the answer to those known unknowns, and surface those unknown unknowns, leading us to a higher level of certainty. Arguably with true certainty only really being possible after we have done the work.
So, as the Cone of Uncertainty suggests, the closer we get to the completion of work, the more certainty we gain, which can lead to improvements in our estimation.
Thus, as we learn more, gain more certainty, does it make sense to re-estimate on a periodic basis? If we know more, could we assume our estimates will have greater value, better accuracy and precision, as we progress with the work?
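To sketch what periodic re-estimation might look like in practice, here is an assumed example in which the team records a (low, high) range at each checkpoint. The checkpoint names and figures are invented for illustration; the narrowing widths are what the cone of uncertainty would predict as the work becomes better understood.

```python
# Hypothetical re-estimates for one piece of work, in days. A shrinking
# range width reflects the cone of uncertainty collapsing as known
# unknowns are answered and unknown unknowns surface.

def range_width(low: float, high: float) -> float:
    """Width of a ranged estimate; narrower implies higher confidence."""
    return high - low

checkpoints = [
    ("before work starts", (10, 40)),
    ("after investigation", (15, 30)),
    ("halfway through delivery", (18, 24)),
]

for label, (low, high) in checkpoints:
    print(f"{label}: {low}-{high} days (width {range_width(low, high)})")
```

Comparing each re-estimate against the eventual actual is also extra data for the retrospective trend discussed earlier.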
And in doing so, does that again afford us an opportunity to learn and improve our estimation process, and to build greater trust and satisfaction with our stakeholders?
Again, this is an honesty and transparency thing. If we re-estimate part way through the work and establish that our original published estimate was off - maybe our new estimate is orders of magnitude bigger - is it not the professional thing to share that with stakeholders as soon as it's discovered, rather than wait for some progress update scheduled for three months' time?
In this episode, I wanted to talk about predictability. I've talked about why it is important, and, if we are being honest with ourselves, much more important than churning out aggressively optimistic estimates. Predictability helps to produce trust and satisfaction with stakeholders, and trust is one of the most valuable commodities within the modern working environment. But getting to that predictability takes effort. It's a skill that needs to be developed and honed over time, and I've suggested using a blameless post-mortem approach and periodic re-estimation to help with that.
But if that sounds like work, and quite a bit of it, then you're correct. There is a cost associated with generating an estimate, and an even greater one to get good at it.
Thus, in next week's episode, I take a look at how much we should invest. I start to weigh up the organisational value of that valuable estimate with the cost of achieving it.
Work comes at a cost, thus we need to be confident we are getting the correct balance. I ask how much we really should be investing, and discuss things to consider when attempting to get a good return on that investment.
In short, if you want valuable estimates, be prepared to invest time and resources to obtain them. And I discuss why I feel "off-the-cuff" estimation is so prevalent within software development: the super easy, quick way to keep a process moving, but providing no real value other than to tick a box or grease a cog within some project management process.
If you are attaching real value to having a valuable estimate, then you should be expecting a comparable investment in the team to produce it.
Thank you for taking the time to listen to this episode, and look forward to speaking to you again next week.