#178: Transaction-based costing - a wrap-up

In this episode I wrap up this series of episodes on transaction-based costing by looking at the common themes and revisiting some of my initial reasons for starting the series.

For me the key takeaway is the common theme of constantly rethinking our best practice and adapting to the changing landscape of technology, our organisation, and our markets.

Or listen at:

Published: Wed, 23 Aug 2023 15:00:58 GMT

Transcript

Hello and welcome back to the Better ROI from Software Development podcast.

Over the last few episodes I have been looking at transaction-based costing as an alternative to the conventional CapEx/OpEx model for software development.

In episode 174 I introduced the idea and how this approach could potentially provide more insightful financial metrics and drive cost effectiveness.

In episode 175 I looked at the relationship between transaction-based costing and "pay-per-use" technologies like serverless and cloud.

In episode 176, I explored the relationship between transaction-based costing and value stream teams.

And in last week's episode, 177, I looked at the relationship between transaction-based costing and small batch sizes.

In this week's episode, I want to wrap up this series of episodes on transaction-based costing with some closing thoughts.

I've scripted and recorded this episode about 4 weeks after the rest of the series, and I've done that on purpose. I wanted to make sure I had some time to think through how I would wrap this series up. And it fell just right for annual leave, so I've had a good amount of time to revisit what I've previously recorded and try and establish a common theme.

And simply put, the common theme I found is that much of the advice I provided over the last 4 episodes runs counter to what I was taught when I started in the industry.

As with my entire podcast series, if there is one thing you take away, it's that what brought you to your current position isn't enough to keep you there. You have to continually adapt and improve. A continual state of learning.

When I was trained as a manager, many of the things I now advise against would have been considered good practice:

  • Produce a financial forecast for the next five years, covering both CapEx and OpEx expenditure, so that it can be approved before the project starts.
  • Procure oversized equipment to handle forecasted spikes.
  • Break the work into tasks for the appropriate disciplines, keeping everyone 100% occupied.
  • Batch the work to reduce the overheads associated with getting it done.

But, we live in different times. Software development has become much more "soft".

Some of the reasons we would previously have batched large amounts of work, or had specialist silo teams, or spent 50% of the actual project time on up-front requirement gathering, have simply gone away over the years. But the approaches we put in place persist regardless.

Again, what has worked in the past, what has brought you success in the past, is no guarantee of the same benefits today or tomorrow. And it's critical that you as a leader, and even as an employee, understand the need for change. It is exceptionally career-limiting to have that fixed mindset, that "it's always worked that way here" mindset. You must have a growth mindset.

Let's take a look at each of those best practices I mentioned earlier.

The general rule for any project starting was to have a solid business case: something that detailed what was needed to achieve a particular aim and the expected return. Effectively, an ROI calculation. If we spend this, we will achieve that.

The generation of such business cases was an art form in its own right. Not only was there considerable upfront planning to establish all the costs, but most business cases also needed extensive research into market conditions, customer fit, etc. Often the business case could take months, if not years, to prepare. Just think how many promising ideas never went anywhere because of such a high barrier to entry.

Now, we understand that we are in a world of uncertainty. We know we need agility and fluidity in the way we work to adapt to the market and to customer needs. That forces us to re-think how we have traditionally worked, and to re-examine our previously held beliefs about what works.

We know that we need to be faster and more experimental. As organizations, we know we need to be continually making a series of small bets to test and adapt.

Thus, the overhead of producing a five-year forecast up front for an experiment would in itself be cost-prohibitive. Rather, we can run our experiment, look at the real costs and extrapolate them. If need be, we can even set a budget for our experiment, and when that budget is met, the experiment is reviewed and, if necessary, cancelled before costs spiral.

In a world where we cannot be 100% confident of the costs or benefits at the outset, having that budget, or table stakes to use the betting analogy, is a great way to rethink our investment decisions. It controls risk and forces us to keep close tactical control of that spend, especially when the costs can be tracked at the transaction level and responsibility for its use is delegated to the team spending it.
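To make that concrete, here is a minimal sketch, in Python and with entirely hypothetical numbers, of the kind of back-of-the-envelope check a team could run: take the real, pay-per-use cost of a short experiment, extrapolate it forward, and compare it against the budget agreed as table stakes.

```python
# Hypothetical back-of-the-envelope check: extrapolate the measured cost of a
# short experiment and compare it against the budget we set for it.

EXPERIMENT_DAYS = 14          # how long the experiment has run so far
MEASURED_COST = 420.00        # actual pay-per-use spend over those days (hypothetical)
EXPERIMENT_BUDGET = 1_000.00  # the "table stakes" agreed up front
FORECAST_DAYS = 90            # horizon we care about for the next decision

daily_run_rate = MEASURED_COST / EXPERIMENT_DAYS
projected_cost = daily_run_rate * FORECAST_DAYS

print(f"Daily run rate: £{daily_run_rate:,.2f}")
print(f"Projected {FORECAST_DAYS}-day cost: £{projected_cost:,.2f}")

if MEASURED_COST >= EXPERIMENT_BUDGET:
    print("Budget spent - review the experiment before any further spend.")
elif projected_cost >= EXPERIMENT_BUDGET:
    print("On course to exceed budget - review now rather than later.")
else:
    print("Within budget - let the experiment continue.")
```

The figures themselves don't matter; the point is that the decision is driven by measured spend rather than a forecast produced before any work started.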

As part of the business case, we would invariably need to obtain quotes for physical hardware to run our system on. This would have been challenging for a number of reasons:

  • We hadn't built the system yet, thus we had no empirical data on how much horsepower that equipment would need.
  • We would have to factor operational peaks and resilience into our hardware. For e-commerce, we would have needed to build for that Christmas peak, or hold a spare in case of failure, while in normal operation most of that horsepower went unused.
  • Variability in the cost and availability of that hardware through the supply chain. It wasn't uncommon for costs to fluctuate wildly depending on market pressures, and availability issues could mean buying months earlier than needed to avoid the embarrassment of having a product but nothing to run it on.

When we compare this to the "pay-per-use" model of cloud and serverless technologies, it's easy to see that they support a much more adaptive and experimental approach.

We can defer expensive decisions until we need to make them. We can run at the capacity we need. We can ramp up or down within minutes, greatly reducing paying for unproductive horsepower. We can try something rapidly, see the cost and tear it all down if the experiment was a flop, greatly reducing exposure and risk.
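As a simple illustration of that trade-off, here is a short sketch with made-up numbers (not real cloud pricing) comparing paying all year for hardware sized to the Christmas peak against a pay-per-use model charged per request.

```python
# Illustrative comparison only - all figures are hypothetical, not real pricing.

# Fixed provisioning: sized for the Christmas peak, paid for every month of the year.
servers_for_peak = 4
cost_per_server_month = 350.00
yearly_fixed = servers_for_peak * cost_per_server_month * 12

# Pay-per-use: a notional cost per thousand requests.
cost_per_1k_requests = 0.25
normal_month_requests = 3_000_000
peak_month_requests = 12_000_000   # the Christmas spike

normal_month_cost = normal_month_requests / 1_000 * cost_per_1k_requests
peak_month_cost = peak_month_requests / 1_000 * cost_per_1k_requests
yearly_pay_per_use = normal_month_cost * 11 + peak_month_cost

print(f"Fixed capacity, per year: £{yearly_fixed:,.2f}")
print(f"Pay-per-use, per year:    £{yearly_pay_per_use:,.2f}")
print(f"Unproductive spend avoided: £{yearly_fixed - yearly_pay_per_use:,.2f}")
```

The exact numbers will vary enormously from system to system; the point is that the pay-per-use figure tracks actual demand, while the fixed figure pays for the peak all year round.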

Assuming we've made it past the business case, we then need to think about resourcing. This became a complex game of guessing how much time was needed by each discipline, scheduling that time in, and then managing the interdependencies between them. Legions of project managers, resource managers and line managers were needed to manage this complex and poorly understood series of critical interdependencies.

This was always unnecessary overhead. This was always waste that provided no value. It was just the cost of "doing work".

For a long time we believed that what was best for us was to operate every person at 100% utilisation. If someone wasn't 100% utilised, that was lost ROI. It was bad for business.

We now know, of course, that measuring an individual's utilisation is not as effective as measuring a team's outcomes. But this is still a message that I find some departments, or even some organisations, struggle with.

Part of the reason this mistaken belief persists is that many people have built their livelihoods around this complex activity; they don't just rely on it to put food on the table, they also self-identify through it.

Thus, it really is no surprise that it can be difficult to challenge those long-held ideals and ways of working. This may be such a fundamental change that it will only ever be realised with a change of leadership.

Let's move on to why batching was seen as best practice.

Mass production has been a massive force in driving down the cost of goods, both to produce them and to sell them on. Advancements in being able to produce goods repeatedly, reliably and cheaply led us to think the same would be true for any type of work. As such, we applied batch processing to knowledge work in the same way we were using it in mass production. We tried to drive down the overheads by doing the expensive actions infrequently, by batching them.

And this made sense until fairly recently.

Take, for example, the common maintenance window: downtime taken to update a website or service to make changes. If you have an e-commerce website as your main storefront, the last thing you want as a business owner is to close the doors on your 24x7 customers so that you can update it.

Thus, it was common to batch those changes to keep the customer impact to a minimum.

But times have changed. It should be rare that you'll need to close the virtual doors while making changes. Any changes you make should be releasable while the doors are open. It's like wheeling in a new display stand. There should be no reason to close the doors.

This is often described as being able to change the tyres on a moving car. But I think that analogy suggests this is exceptional. While it demonstrates the idea of making change while still in motion, it's hardly common practice to see anybody changing a tyre like that.

With our software, however, the opposite is true. While I see the occasional service or website warning me of an impending maintenance period, these are becoming less and less common.

The other reason we previously liked batching comes back to that resource efficiency.

Why get your QA team to test 10 individual changes, when they can be batched and they only have to test once?

Why get your operations team to release 10 individual changes, when they can be batched and they can be released once?

The answer is simple: so that you can get a return on your investment quickly. It's not just the effort to produce the change you're investing in; it's being able to validate the original idea, to iterate and adapt more quickly, and to be responsive to the market.

And to address the perceived negatives of having to do something 10 times rather than once? Automate.

Automate your testing. Automate your release procedures.

The DevOps mantra of "if it hurts, do it more often" is key here. If you are doing it 10 times rather than once, then you are incentivized as a team to improve it over time. Add little bits of automation over time, and it's surprising just how much can be achieved in the course of a year.

I'd rarely start a new project now without baking that automation in from day one. The benefits simply outweigh the initial cost.

So for me, much of the common theme is that while these may have been best practice in the past, they no longer are. And we and our organizations need to respond to those changes. Otherwise, we are simply left behind.

Going back to the original genesis for this mini-series on transaction-based costing, I had a desire to introduce the idea of thinking about costs differently.

If, like me, you've professionally grown up with a traditional CapEx/OpEx mindset, the microtransaction nature of transaction-based costing can seem incredibly alien. Yes, ultimately those microtransactions are just OpEx, but at a scale most organisations will struggle with, if nothing else simply in terms of process and controls.

Can your organisation handle the frequent micro-invoices that come with cloud, for example?

In the early days of cloud, it wasn't uncommon for it to be organisationally impossible to support monthly invoices for only a few dollars at a time. It would be an internal battle to gain approval from processes built around five-year spend forecasts and detailed business cases. It would be difficult to handle the processing of an individual invoice and to assign its costs. It was almost as if organisations were set up to stop us doing it.

In the early days of cloud, it really wasn't uncommon for an individual to become so frustrated that they simply paid for it off their own credit card with their own money. It was just easier to get the job done.

But time marches on, and the idea of microtransactions has become easier to understand and better supported by our tools and processes. In some organisations this may already feel like the norm.

In thinking about costs differently, I also wanted to reset the way we approach costs, especially in the way we tie them back to outcomes.

For example, if we are to consider the profitability of an individual transaction, let's say an online sale, we could only ever guess at the cost, generally by dividing the total CapEx/OpEx cost by the total number of transactions.

Think about what we would have needed for that guess to be correct. We'd need to know the entire CapEx/OpEx spend and the total number of transactions for a given period. And given that CapEx was generally spread over a five-year period, you could only really perform a meaningful calculation after five years, at which point, let's be honest, nobody really cares anymore.

The alternative was attributing cost based on assumptions. While useful, it still represented a guess and could be incredibly variable depending on the volume of transactions.

Now compare this guessing to actually having all the microtransactions, all the micro-costs that actually make up that online sale. Suddenly the maths is incredibly different. We just sum the micro-costs and we have a true and accurate figure. Where margins are tight and costs need to be controlled, having a true and accurate figure can make the difference between being profitable and not.
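As a simple illustration, here is a short Python sketch, with hypothetical figures and cost categories, contrasting the old allocation guess with summing the micro-costs of a single online sale.

```python
# Hypothetical contrast between the old "divide everything by everything"
# allocation and summing the actual micro-costs of a single online sale.

# Old approach: allocate total CapEx/OpEx evenly across all transactions.
total_capex_opex = 1_200_000.00   # five-year spend (hypothetical)
total_transactions = 4_000_000    # transactions over the same period (hypothetical)
allocated_cost = total_capex_opex / total_transactions
print(f"Allocated cost per sale: £{allocated_cost:.4f}")

# Transaction-based costing: sum the micro-costs actually incurred by one sale.
micro_costs = {
    "api_gateway_request":   0.0004,
    "serverless_compute":    0.0021,
    "database_reads_writes": 0.0013,
    "payment_gateway_fee":   0.2000,
    "email_confirmation":    0.0008,
}
actual_cost = sum(micro_costs.values())
print(f"Measured cost per sale:  £{actual_cost:.4f}")
```

In practice those micro-costs would come from your pay-per-use billing data, tagged back to the transaction; the point is that the per-sale figure is measured rather than allocated.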

And of course it prompts us to think differently about those costs. Having the costs closer to the transaction prompts us to understand it better, rather than simply treating it as the "price of doing business".

In this episode, I wanted to wrap up this series of episodes on transaction-based costing by looking at the common themes and revisiting some of my initial reasons for starting the series. Everything ultimately comes back to that common theme of constantly rethinking our best practice, adapting to the changing landscape of technology, our organisation, and our markets.

While transaction-based costing is unlikely to be something you can just pick up, I'd suggest that even being aware of the concept can help us reset some of our long-held, and likely out-of-date, best practices.

But should you be able to implement it, it offers some exciting potential benefits:

  • It allows us to defer spend until it's needed, freeing up capital for other priorities.
  • It encourages improvement through the transparency it provides.
  • It encourages team engagement, as teams take responsibility for the costs they incur.
  • It enables that start-up mentality within a wider organisation.

I'd really be interested to hear your thoughts and experiences on this subject.

Thank you for taking the time to listen to this podcast, and I look forward to speaking to you again next week.