The State of DevOps report provides excellent insight through rigorous analysis of its wide-reaching survey. The research provides evidence-based guidance to help focus on the capabilities that drive performance. One of those is the DevOps Technical Practices of:
Why you might be interested in this episode:
Or listen at:
Published: Wed, 02 Mar 2022 20:08:53 GMT
Hello, and welcome back to the Better ROI from Software Development podcast.
In the last few episodes, I've been looking at the State of DevOps report and what it tells us about practises that can help you achieve better results.
In episode 120, I summarised the State of DevOps report.
In 121, I told you what it said about Cloud Computing.
And in the last episode, 122, I told you what it said about Documentation.
Now I want to move on to what the report says about the DevOps Technical Practises of:
So why might this episode be of interest to you?
In this episode, I'll give you:
Right, let's start with a recap.
The term DevOps - as I've said before, I like the Microsoft definition:
"A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers."
It's a marriage of traditionally opposing forces: innovation and change from the dev side, stability and limiting change from the ops side. DevOps helps us focus on the business outcomes that need a mix of the two.
The State of DevOps report is now in its seventh year, reporting on 32,000 professionals worldwide. Produced by DORA - DevOps Research and Assessment - it's the longest running, academically rigorous research investigation of its kind.
For me, it provides clear evidence on the benefits of DevOps and its practises. But many of those practises are universal. So even if you're not officially doing DevOps, I feel they can provide benefit.
The report specifically measured the technical practises of:
And of those, the report found that Loosely Coupled Architecture and Continuous Testing had the greatest impact. So let's start with those two.
Loosely Coupled Architecture; Wikipedia describes this as:
"In computing and systems design a loosely coupled system is one in which components are weakly associated (have breakable relationship) with each other, and so, changes in one component least affect existence or performance of another component."
With Loosely Coupled Architecture, our systems will be made up of many components that can be changed or amended independently. I discussed many of the benefits when I talked about Monoliths & Microservices back in episode 17. It provides us the flexibility to be able to change components quickly and with lower risk. We effectively have a level of safety between each of those components - if done right, the impact of a failing component should be contained. If we need to investigate a problem or make a change, it's much easier. We're simply changing that one component.
Imagine trying to edit the chapters of a book. It's much easier to proofread and edit a single chapter than the entire book.
For me, the move to Loosely Coupled Architecture and Microservices has been a substantial benefit in being able to deliver business benefit. It's not a silver bullet and can introduce additional complexities, but I would expect any modern system to follow this practise.
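To make this a little more concrete, here's a minimal sketch in Python of one way components can be loosely coupled: the checkout only depends on a small payment interface, so the payment component can be changed or replaced without touching it. The names (PaymentProvider, CheckoutService and so on) are purely illustrative examples, not anything from the report.

```python
from abc import ABC, abstractmethod


class PaymentProvider(ABC):
    """Small interface: the rest of the system only depends on this contract."""

    @abstractmethod
    def charge(self, amount_pence: int) -> bool:
        ...


class StripeProvider(PaymentProvider):
    """One concrete component; it can be replaced without touching its callers."""

    def charge(self, amount_pence: int) -> bool:
        # Real integration code would live here; this is just a placeholder.
        print(f"Charging {amount_pence}p via the payment provider")
        return True


class CheckoutService:
    """Depends only on the PaymentProvider interface, not on any implementation."""

    def __init__(self, provider: PaymentProvider) -> None:
        self.provider = provider

    def complete_order(self, amount_pence: int) -> bool:
        return self.provider.charge(amount_pence)


if __name__ == "__main__":
    # Swapping the provider is a one-line change, isolated from CheckoutService.
    checkout = CheckoutService(StripeProvider())
    checkout.complete_order(1999)
```

Fixing a bug inside one provider, or swapping in a different one, then becomes a change to a single, contained component.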
Continuous Testing; historically, we have left our testing (or QA) to the end of the development process. It's been very common to find that a project is into its 11th hour before testing is really performed - at which point a raft of problems are found.
Often this would result in a "them" and "us" mentality between the development team and the testers, each blaming each other.
And then the business, unfortunately, is given a choice: delay the project go-live or accept poor quality.
Too often, the allure of hitting the project go-live promoted long hours and weekends working tired - which in itself introduced more bugs and actually produced worse quality.
Continuous Testing brings that testing in parallel with the development. The team are constantly testing as the product is being developed. I've said previously that the quicker a bug is found, the cheaper it is to resolve - and the less impact it has on the product and the organisation.
If you're building a house, you'd rather fix any problems with the foundations before adding the walls. If we keep adding more on top - writing more code on top of bugs and building around them - then we're going to struggle to fix the underlying problems easily or cheaply.
And we certainly don't want to wait until we've got the roof on it.
The report stated that Elite performers who met the reliability targets were 3.7 times more likely to be using Continuous Testing.
And while the report doesn't specify, I would expect many of those tests to be automated - providing not just benefit as the product is being built, but also highlighting any regression during future development - a safety net that allows our teams to go fast because it greatly reduces the risk of introducing regression bugs.
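As an illustration of what those automated tests might look like, here's a small, hypothetical example in Python using the standard unittest module. The discount rule and its tests are made up for the sake of the example, not taken from the report.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    """Run on every change, so a regression is caught minutes after it is written."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```

A suite of tests like this, run automatically on every change, is what provides that regression safety net.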
Next, let's look at the practises of Continuous Integration and Deployment Automation; similar to testing, the integration of the various developers' work, and the subsequent deployment, often happens late in the project - again, only highlighting problems late in the day, when everyone is tired and under the threat of impending deadlines.
And with any level of complexity, there will always be things that are missed or misunderstood. The more complexity, the greater the number and impact of those things missed or misunderstood.
Again, as with Continuous Testing, bringing these activities in parallel with the development allows us to find those problems earlier - and finding them earlier makes them easier and more cost-effective to resolve.
Again, these are tasks that should be automated. As I talked about in the last episode on Documentation, having these steps documented by virtue of them being automated gives a double benefit - you have a record of how the tasks should be performed, and you remove the delays and mistakes of performing them manually. You have a repeatable process that can be run over and over again, giving early notification of problems so they can be resolved, and accomplishing the tasks considerably quicker.
And when it comes to deployment, it can really make the difference to allow us to release multiple times per day with no customer impact.
The report claims that Elite performers who meet their reliability targets are 5.8 times more likely to leverage Continuous Integration.
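As a rough sketch of what that automation can look like, here's a hypothetical Python script that runs a pipeline of build, test and deploy steps in order and stops at the first failure. The specific commands are placeholders - in practice you'd use your team's own build and deployment tooling, or a CI service, rather than this exact script.

```python
import subprocess
import sys

# Hypothetical pipeline: each step is a command the team would otherwise run by hand.
# Scripting them makes the process documented, repeatable and fast.
PIPELINE = [
    ("build", [sys.executable, "-c", "print('building the application...')"]),
    ("test", [sys.executable, "-m", "unittest", "discover", "-s", "tests"]),
    ("deploy", [sys.executable, "-c", "print('deploying the build artefact...')"]),
]


def run_pipeline() -> int:
    for name, command in PIPELINE:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: a broken build or failing test stops the deployment.
            print(f"Step '{name}' failed; stopping the pipeline.")
            return result.returncode
    print("All steps passed.")
    return 0


if __name__ == "__main__":
    sys.exit(run_pipeline())
```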
Let's move on to Database Change Management. This, again, has some similarities. Historically, we would treat our database differently to our development. We would have a separate team, the database administrators (DBAs).
Again, they would often be brought in late in the project, struggling to provide guidance and governance over the developers' database work.
Again, often this would result in a "them" and "us" mentality.
And over time, we would find that our databases would be tweaked, maybe a fix to a bug or an improvement to performance. But often this happened directly on the database server without the visibility of the development team.
Over time, we could experience significant drift from what was expected to what was actually in place. And this led to confusion.
When making changes, nobody was quite sure what was correct. This delayed our changes or just made any further changes uneconomical.
Database Change Management introduces the same disciplines found in version control used in the rest of software development. It allows us to track who did what and when and generally why.
The report found that Elite performers who met their reliability targets are 3.4 times more likely to exercise Database Change Management compared to their lower-performing counterparts.
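To illustrate the idea, here's a minimal sketch of a migration runner in Python, using SQLite purely for convenience. Each database change lives as a numbered SQL file in version control, and the script records which ones have already been applied. The folder and table names are illustrative assumptions, and in practice most teams would use an established migration tool rather than rolling their own.

```python
import sqlite3
from pathlib import Path

# Hypothetical migrations folder: each file is a numbered, version-controlled change,
# e.g. 001_create_customers.sql, 002_add_email_column.sql.
MIGRATIONS_DIR = Path("migrations")


def apply_migrations(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    # Track which migrations have already run, so the script is safe to re-run.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (filename TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT filename FROM schema_migrations")}

    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name in applied:
            continue
        print(f"Applying {script.name}")
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_migrations (filename) VALUES (?)", (script.name,))
        conn.commit()

    conn.close()


if __name__ == "__main__":
    apply_migrations()
```

Because every change is a file in version control, you get exactly that record of who did what, when and why.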
Next, let's look at Trunk Based Development. Trunk Based Development is a way of teams working with their source code.
I talked about source control back in episode 18. Source control allows us to track who did what and when - and, of course, generally why - across our software development.
Think about this as the history in a Word document. You can see the revisions over time by whom and when, and hopefully a why if they use the comments.
Now, Trunk Based Development is a specific way of using that source control, one which I personally favour and recommend.
To illustrate, let's consider multiple people in your organisation working on a critical contract. Maybe many departments need to be involved in the creation of this contract, each providing specific sections. Let's imagine we set those departments working on their individual sections for a full month.
At the end of that month, we bring the departments together round the boardroom table and attempt to stitch that final contract together. How smooth do you think that process will be? How well do you think those sections will produce a cohesive whole? How much will they contradict each other or confuse the reader? How likely is the final contract to have the same tone and voice throughout? How likely is it that the final contract will be incomplete?
How likely is it that everyone will leave the room with a list of remedial actions to address, only to return in a month's time to repeat the activity?
Now, imagine the departments working together on the same document, making relatively small changes on a frequent basis, maybe even hourly.
This may seem like more work, but it highlights problems earlier.
As I've said a number of times in this episode, the quicker we find something, the more effective it is to resolve. We are not deferring conflict until later. We are not kicking the can down the road. We deal with it when we find it.
The report found that Elite performers who met their reliability targets were 2.3 times more likely to use Trunk Based Development, whereas Low performers were much more likely to delay that merging activity.
Let's now talk about the use of Open Source Technology. I spent a bit of time talking about Open Source Technology in episodes 96 to 98 - in terms of what it was and the motivation behind it.
But to recap, Wikipedia describes Open Source as:
"Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use the source code, design documents, or content of the product.
The open-source movement in software began as a response to the limitations of proprietary code."
Open Source is a considerable source of productivity in modern software development. It's almost impossible to imagine modern software development without Open Source.
But there are dangers with the use of Open Source; it can incorrectly be seen as free, and it can sometimes be difficult to find reliable, secure and supportable Open Source software - I talk a lot about this in episodes 96 to 98, where I discussed the myth of it being free, the motivation behind those that create it, and the common licences in use.
The report specifically highlights the benefits of Open Source for recruitment.
You are highly unlikely to be able to recruit based on prior experience of your proprietary software, unless you're re-employing past employees.
But you can recruit for prior experience of Open Source software. The more popular the software, the more likely that you will find a community of developers with experience. And if using the "hot" software, it can actually act as a recruitment draw for good talent.
For me, this is as much an argument for keeping up to date with your software development, ensuring that you're using the latest technologies and practises. By investing in this, you gain benefits across many dimensions: recruitment, security, productivity and innovation.
The report found that Elite performers who met their reliability targets are 2.4 times more likely to leverage Open Source technologies.
Let's move on to Monitoring and Observability practises. I talked about monitoring back in episode 15, where I described it as a safety-net post-release. We use automated testing and manual testing pre-production, pre-release. Monitoring is the other side of that coin, checking the system after we've released it into production.
By investing in monitoring, we're quickly able to highlight problems. It also gives us insight into how to fix them.
Bugs in production will cause financial and reputational damage, so the shorter the time a bug is in production, the smaller its impact.
The report found that Elite performers who successfully met their reliability targets are 4.1 times more likely to have solutions that incorporate observability into overall system health.
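As one small, hypothetical example of building observability into code, here's a Python sketch that wraps an operation and emits a structured log line recording its outcome and duration. The operation and field names are made up for illustration; real systems would typically feed this kind of data into a dedicated monitoring or observability tool.

```python
import json
import logging
import time
from functools import wraps

# Minimal structured logging: each event is one JSON line that a monitoring
# tool (or a human) can search and alert on after release.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("orders")


def observed(operation_name):
    """Hypothetical decorator that records the duration and outcome of an operation."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            outcome = "error"
            try:
                result = func(*args, **kwargs)
                outcome = "success"
                return result
            finally:
                logger.info(json.dumps({
                    "operation": operation_name,
                    "outcome": outcome,
                    "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                }))

        return wrapper

    return decorator


@observed("place_order")
def place_order(order_id: str) -> str:
    # Hypothetical business operation being monitored.
    return f"order {order_id} placed"


if __name__ == "__main__":
    place_order("A-1001")
```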
You may have noticed a common theme in most of the suggestions: earlier is better.
Finding bugs and problems in our work earlier is better for our software development. Be it finding bugs in code through Automated Testing, finding integration and emerging problems through Continuous Integration and Trunk Based Development, or finding post-release problems with Monitoring and Observability - getting to these earlier makes them cheaper and easier to resolve, ultimately producing better outcomes.
Personally, for me, whenever I start a new product, the first jobs I set up are source control, automated build, automated test, automated deployment. This allows me to develop the code faster due to the greater confidence and the safety-net it provides.
In this episode, I've given a brief recap of DevOps and the State of DevOps report, and I have talked about what the report says about the DevOps technical practises of:
I've highlighted a common theme running through many of these: earlier is better. Being able to pick up those problems earlier is cheaper and more effective to resolve.
And I've highlighted where the report has found correlation between Elite performers meeting their reliability targets and these practises.
In next week's episode, I'm going to take a look at what the report says about Security.
Thank you for taking the time to listen to this podcast. I look forward to speaking to you again next week.