#102: The Theory of Constraints - Part 2

In this episode, I discuss the Theory of Constraints as introduced in the book The Goal by Eliyahu M. Goldratt.

Modern software development methodologies (Agile, Lean, DevOps) place a heavy focus on the flow of quality work through the development process - and the continual improvement of that flow.

The Theory of Constraints helps us to identify areas of improvement within that flow.

Note: I originally recorded this as one episode, but subsequently split it into two parts during the edit. Part 1, last week, introduced the ideas of flow and the Theory of Constraints using a simplified manufacturing line example. Part 2, this episode, applies those ideas to our traditional software development practices and looks at how we may traditionally have tried, unsuccessfully, to resolve those constraints.

Or listen at:

Published: Wed, 29 Sep 2021 15:29:41 GMT


So now let's look at this in terms of software delivery.

In our more traditional software delivery, using a more waterfall method, we would have siloed teams.

The work would come into our business analysts. Our business analysts would probably produce a document and pass that on, perhaps into our architecture team.

Our architecture team would take that document and produce their own documentation from it. They would then produce high-level designs. Those high-level designs would go to the developers.

The developers would then build the software.

The developers, once they completed the software, would pass that software over to the quality assurance team, who would test it and verify the quality, and that it met the original requirements as detailed by the business analysts.

Once it passed quality assurance, it was probably then passed to some form of operations team for deployment into production and ongoing maintenance and support.

It's quite easy to see a correlation then between our business analysts, our architects, our developers, our QA and our operations team and machine A, machine B and machine C in the example I gave you earlier.

They are various steps along a process. As such, they are very much subject to the same problems as that process. They're very much subject to being constrained by the slowest silo within that process - and, of course, to the statistical variance in what each one of those silos can produce.

In a traditional environment, for example, we may find that the quality assurance team is our bottleneck. It may be the slowest place for us to get things done - because when we provide our change to them, they potentially need to retest the entire system. What could be seen as a small change makes a lot of work for the quality assurance silo.

Thus, you may find that in a normal process, while the business analyst team can handle maybe 10 pieces of work per week, the architecture team maybe 12 pieces of work per week, the development team maybe eight pieces of work per week, the quality assurance team only three pieces of work per week, and operations maybe seven pieces of work per week - effectively, the flow through your system is constrained by that quality assurance team.
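To make that arithmetic concrete, here is a minimal sketch (in Python, purely illustrative - the episode uses no code) that finds the constraint from those weekly figures:

```python
# The silo capacities from the example above (pieces of work per week).
capacities = {
    "business analysis": 10,
    "architecture": 12,
    "development": 8,
    "quality assurance": 3,
    "operations": 7,
}

# The flow through the whole system is limited by its slowest stage.
bottleneck = min(capacities, key=capacities.get)
throughput = capacities[bottleneck]

print(f"Bottleneck: {bottleneck} ({throughput} items/week)")
```

However fast the other silos are, the system as a whole can only deliver three items per week - the constraint sets the pace.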

Traditionally, we've tried to get around this problem by batching things when they go into quality assurance. So rather than it just being one piece of work they're looking at, they're potentially looking at 4, 5, 10, 20, 100 changes in a single batch. The theory being that if we batch it, they can test everything at once rather than having to test the system multiple times for each individual change. We expect to gain local efficiencies for the QA team by them batching their work.

But that has its own downsides.

By batching the process in that way, the upstream silos have to do a lot more work across different changes before anything gets into QA. Which means that when we find a problem with one piece of work, the development team or the architecture team or the business analysis team may not have actually touched or looked at that work for weeks, potentially for months. That makes it very difficult and expensive to then respond to any problems identified by the quality assurance team.

And I've seen in many environments that the amount of work to test and then subsequently release has grown so large that the risk is just too great. The company enters a level of paralysis because there is too much risk and too many things changing to be confident that when they go through the testing, they're not going to find regressions in previously working software.

And because we batched all of that work at the quality assurance stage, we're expecting operations then to release that as one big batch.

And this is where all that risk and potential downtime comes from. Again, I've seen it where, because so much was being released, companies have had to take their primary systems offline for an entire weekend or maybe even longer, affecting both their staff and their customers - because of how much work had been backed up.

Batching also has a side effect of tying up earlier investment, delaying our ability as an organisation to realise any benefits from any work done by the business analysts, the architects or the developers.
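That delay can be sketched with some simple arithmetic. The model below is my own illustration, not from the episode: it assumes changes are finished upstream at a steady rate and then sit waiting for the batch to fill before QA even looks at them.

```python
def average_wait_before_qa(batch_size, upstream_rate=3.0):
    """Average time (in weeks) a finished change waits for its batch
    to fill before QA even starts.

    Simplifying assumption (mine, not the episode's): changes arrive
    steadily at `upstream_rate` items per week, so a batch of size B
    takes B / upstream_rate weeks to accumulate, and the average item
    waits for half of that accumulation time.
    """
    return (batch_size - 1) / (2 * upstream_rate)

for batch in (1, 10, 50):
    wait = average_wait_before_qa(batch)
    print(f"batch of {batch:>2}: {wait:.1f} weeks waiting before QA")
```

With single piece flow (a batch of one) nothing waits; with a batch of fifty, the average change has already gone stale for weeks before testing begins - which is exactly when the feedback becomes expensive.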

What we should be looking at more is the flow through the system. Because we're batching, we're effectively stopping the work flowing through the system.

If you think of each piece of work as being a single item, we want to actually try and get to a point where we have single piece flow. We want that piece of work entering the system at one end and coming out the other end and being useful and valuable to us and our customer.

We want to minimise the time it takes to go through that flow - all the time, making sure that we're still meeting the correct level of quality, security and compliance in everything that we do.

From the outside, what we're trying to achieve may seem counterproductive. What we're trying to do is go faster by doing less, by doing one thing at a time.

And by doing that one thing at a time, we can identify the constraints and work on those constraints to improve them.

Goldratt describes this as like trying to find rocks in a river. By reducing the water level - the work going through the system - you start to see the larger rocks, the constraints in your system.

So going back to our quality assurance team: we know that they are the constraint in this example. We've tried to fix it by batching, but we've found that this produces negative effects in the process as a whole. We're affecting the overall flow, as well as introducing a lot more risk when it comes to the delivery process.

So how do we look at improving that quality assurance piece? We know where our constraint is - so how can we improve that constraint?

There are a couple of techniques here, specifically around quality assurance, that can help us improve that constraint.

One of them is automation. I talked previously about automation being an excellent way of improving our ability to test - not just giving us confidence that the same test is being run over and over again, but also freeing up time for the quality assurance team to think beyond the standard tests, to look at other, more valuable work, finding problems and potential issues in our system - rather than repeating the same rote tests time and time again.

So automation is certainly key to helping us with that bottleneck.
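As a flavour of what that automation looks like, here is a minimal sketch of an automated regression test. Both `calculate_discount` and its 10%-off rule are invented for this illustration - the point is that once a check like this is written, a machine can re-run it on every change, rather than QA repeating it by hand.

```python
def calculate_discount(order_total):
    """Hypothetical business rule: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Once written, these checks run automatically on every change,
# so a human never has to repeat this rote test again.
def test_discount_applied_at_threshold():
    assert calculate_discount(100) == 90.0

def test_no_discount_below_threshold():
    assert calculate_discount(99) == 99
```

Run under a test runner on every commit, a suite of checks like these gives the same confidence as a manual regression pass, in minutes rather than weeks.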

We can also look at the job roles of people and how they work.

It's not uncommon to find development processes that are moving to a more agile way of working to still have silos, albeit at a much smaller level, with single piece flow. And that still might give us a bottleneck - admittedly a much smaller bottleneck, but still a bottleneck within our quality assurance team.

A way of addressing this is to look at the T-shaped developer.

Back in episode 73, I introduced you to Scrum, a very popular agile framework used for software development. Scrum defines roles, but it defines anyone that is delivering value into the product, anyone working on the software, as being a developer, regardless of what their traditional role may have been.

So regardless of whether they were traditionally a business analyst, an architect, a software programmer, in quality assurance or in operations, they're seen as being a developer.

And this leads us to a description of what is called the T-shaped developer. Wikipedia describes the concept of T-shaped skills as:

"The concept of T-shaped skills, or T-shaped persons is a metaphor used in job recruitment to describe the abilities of persons in the workforce. The vertical bar on the letter T represents the depth of related skills and expertise in a single field, whereas the horizontal bar is the ability to collaborate across disciplines with experts in other areas and to apply knowledge in areas of expertise other than one's own."

So when this comes to software development, you may have somebody that is exceptionally skilled in the development part, but they may also have skills in business analysis, in architecture, in quality assurance, in operations.

And they'll have differing levels of capability in each of those skills. But because they have some level of skill in each, the team as a whole can help each other out around the bottlenecks.

So where we have a bottleneck and a constraint in our quality assurance process, the team as a whole can help. The team as a whole can jump in and assist with what would originally have been an individual's work, sharing the load.
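One way to picture this is as a team skills matrix. Everything here - the names and the 0-3 skill levels - is invented for the sketch; the point is that each person has one deep skill plus shallower skills in the other disciplines, so everyone can pitch in at the constraint.

```python
# Illustrative team skills matrix (names and 0-3 levels are made up).
# Each person's deep skill is the vertical bar of the T; the shallower
# skills are the horizontal bar that lets them collaborate elsewhere.
team = {
    "Alice":  {"business analysis": 3, "development": 1, "QA": 1},
    "Bikram": {"development": 3, "QA": 2, "operations": 1},
    "Carol":  {"QA": 3, "business analysis": 1, "development": 1},
    "Dmitri": {"operations": 3, "development": 2, "QA": 1},
}

def who_can_help(skill, minimum=1):
    """Everyone with at least `minimum` capability in the given skill."""
    return [name for name, skills in team.items()
            if skills.get(skill, 0) >= minimum]

print(who_can_help("QA"))         # the whole team can swarm on the constraint
print(who_can_help("operations"))
```

In this made-up team, every member has at least some QA capability, so when QA is the constraint the whole team can assist - exactly the point of the T shape.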

And the same is true across the entire piece of work. Everybody has a level of involvement in the business analysis. Everybody has a level of involvement in the architecture. Everybody has a level of involvement in the development. Everybody has a level of involvement in the quality assurance. Everybody has a level of involvement in the release and the ongoing maintenance.

They are no longer there just to do their one piece.

We're breaking down those silos of functional capabilities.

Yes, the thing that probably brings them to the team is their strong ability to do one specific skill set. But they are rounded enough to be able to either have or to gain skills in those other disciplines. And thus, as a team, they can work through and help to alleviate constraints within the process.

While automation and the recruitment of T-shaped developers can assist with a constraint, where Agile, Lean and DevOps shine is the ability to expose those constraints in the first place.

Regardless of whether they are called wastes in Lean or impediments in Agile, all three methodologies place a high value on first finding them and then elevating them for resolution.

For example, if we go back to Scrum, it has two defined meetings for finding and sharing those constraints.

We have the daily scrum, a quick daily meeting intended to identify impediments to the team getting the job done.

We also have the retrospective, a meeting held every sprint cycle dedicated to discussing how we as a team can improve the flow.

Within these methodologies, constraints, be they called wastes or impediments, are seen as opportunities to improve the flow.

And similar to the five focusing steps for addressing constraints, the work is cyclical. We know that the improvement work will never be complete. We know as we clear one constraint, we will surface the next, and by surfacing the next, again, we have another opportunity to improve the flow.
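That cycle can be sketched as a loop, reusing the weekly figures from the earlier example. The +1 "boost" per pass is purely illustrative - real improvements don't arrive in neat unit increments - but it shows how clearing one constraint surfaces the next.

```python
# Weekly capacities from the earlier example (pieces of work per week).
stages = {"business analysis": 10, "architecture": 12, "development": 8,
          "quality assurance": 3, "operations": 7}

def elevate_constraint(capacities, boost=1):
    """One pass of the cycle: identify the current constraint,
    elevate it (here by an illustrative +1/week), and report it."""
    constraint = min(capacities, key=capacities.get)
    capacities[constraint] += boost
    return constraint

for _ in range(6):
    constraint = elevate_constraint(stages)
    print(f"elevated {constraint}; throughput now {min(stages.values())}/week")
```

Notice that after quality assurance is improved far enough, operations becomes the new constraint - the cycle never reaches a final state, which is exactly the point.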

We know we will never reach the bottom of our possible sources for improvement.

In this episode, I've introduced the idea of the Theory of Constraints.

The Theory of Constraints helps us to identify constraints in our flow within our systems. Our flow is what we want to look at in terms of being able to produce the best level of throughput for our software development processes.

And this isn't just software development. The Theory of Constraints originated in manufacturing, but the idea of flow can now be found in almost every discipline. Whereas traditionally we've focused on tasks, we are more and more moving towards looking at flow and throughput in every activity that produces an outcome - and improving those over time. With a constant view of improvement, to give us the capability to produce quality work in a timely manner, to satisfy not just the business but the market, and to allow us to thrive in this ever-changing environment.

Thank you for spending your time listening to my podcast. I really do appreciate each and every one of you that takes the time to do so. I look forward to speaking to you again next week.