#122: State of DevOps 2021 - What it says about Documentation

The State of DevOps report provides excellent insight through rigorous analysis of its wide-reaching survey.

The research provides evidence-based guidance to help focus on the capabilities that drive performance.

One of those is quality internal Documentation.

Why you might be interested in this episode:

  • The correlation the survey found between great software development and great documentation
  • Advice to improve that Documentation


Published: Wed, 23 Feb 2022 16:45:58 GMT


Transcript

Hello, and welcome back to the Better ROI from Software Development podcast.

In this episode, I'm going to continue talking about the State of DevOps report and specifically what guidance it gives around Documentation.

In the last couple of episodes, I've been talking to you about the State of DevOps report - both the report itself and some of the guidance it provides for producing better outcomes from your software development.

So in the last episode, I talked about cloud. In this episode, we're going to talk about Documentation.

So why might you want to listen to this episode?

Firstly, there's the correlation of what the survey found between great software development and great Documentation.

And secondly, the advice that it provides in terms of improving that Documentation.

In this episode, I will provide a brief recap of DevOps and the State of DevOps report itself, which I talked about fully in episode 120. I'll talk about what the survey looked at for documentation and the key findings. I'll then go through the key practices that the report believes significantly impact documentation quality. And of course, I'll give you my own thoughts along the way.

OK, so let's start with a recap of DevOps. What is DevOps? Like so many technical terms, it's very overloaded, and many people have different definitions. I personally like the Microsoft definition:

"A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers."

It's a marriage of traditionally opposing forces: the innovation and change that comes out of development, and the stability and limiting of change that comes out of operations. It focuses on business outcomes, which need a mix of the two.

The State of DevOps report, produced by the DORA team (DevOps Research and Assessment), is in its seventh year of reporting, drawing on over 32,000 professionals worldwide about their use of DevOps. It's the longest-running, academically rigorous research investigation of its kind.

For me, it provides clear evidence of the benefits of DevOps and its practices. But many of those practices are universal - so even if you're not officially doing DevOps, they can still provide real benefit.

The report took a specific look at a number of practices it believes help produce better outcomes from software development.

One of these was the quality of internal Documentation.

They measured documentation quality by the degree to which it:

  • helped the reader accomplish their goals
  • was accurate, up to date and comprehensive
  • was findable, well-organised and clear.

The report summarises documentation as:

"Documentation is foundational for successfully implementing DevOps capabilities. Higher quality documentation amplifies the results of investments in individual DevOps capabilities like security, reliability, and fully leveraging the cloud. Implementing practices to support quality documentation pays off through stronger technical capabilities and higher SDO performance."

They found that only 25% of respondents had good-quality documentation - but the impact of that documentation was clear: teams with quality documentation were 2.4 times more likely to see better software delivery and operational performance.

Now, the low percentage doesn't surprise me. As a consultant going into struggling clients, it's common to find a lack of useful documentation. And I think it stems from three things.

Firstly, documentation has traditionally been seen as drudge work by developers, a task that most developers don't want to do. They want to build cool things. They want to solve hard problems.

Secondly, employers reward new code and the solving of those hard problems. Employers place value on new features and new code, not documentation. In fact, one of the key recommendations the report makes is to recognise documentation work during performance reviews and promotions. More on that later.

And thirdly, a lot of documentation historically has failed to provide value. I've commented on this many times in my episodes - I've actually been quite negative about the wrong kind of documentation. The kind of documentation that is a means to an end. The kind that exists to defend a position. The kind that is purely there as a step to an end goal.

And The Agile Manifesto even tells us to favour "working software over comprehensive documentation".

But there is still value in good, useful internal documentation. So this begs the question, what sort of documentation is valuable and how do we get it?

The report provides some clear, specific advice on this:

  • Document critical Use-Cases
  • Create clear guidelines for updating and editing existing documentation
  • Define owners
  • Include it as part of the development process
  • And recognise documentation work during performance reviews.

As we've already touched on it, let's start with recognising documentation work during performance reviews.

I find it interesting that almost every job spec I see will include the term "good written communication skills", but the job rarely rewards or encourages its use. We recruit and reward based on developing new features, solving problems. We don't reward on the quality of documentation.

Thus, why would a developer consider it important?

We as employers need to encourage documentation to be part of the job. It should have equal billing with the quality of the code, the testability, the security of the code - which the report repeats when it says to include documentation as part of the development process.

In episode 76, I talked about the "Definition of Done" from the Scrum framework. This acts as a checklist of all the tasks that need to be completed for work to be considered "done" - and documentation should be part of it if you have one. I certainly recommend building a Definition of Done, because not only does it define to the team what is expected of them by the organisation, it also allows the development team to go back to the organisation and say: "No, we're not ready. We haven't finished because we haven't met the Definition of Done - the contract we agreed when we said we would do this work."
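To make that concrete, a Definition of Done that includes documentation might look something like this hypothetical checklist - every item here is an example of my own, not something prescribed by Scrum or the report:

```markdown
## Definition of Done (example)
- [ ] Code reviewed and merged
- [ ] Automated tests written and passing
- [ ] Security checks passed
- [ ] Relevant documentation created or updated
- [ ] Deployed to the staging environment
```

The exact items will vary team to team; the point is that "documentation updated" sits alongside code and tests as a first-class condition of being done.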

And having something like that Definition of Done helps us go some way to providing those clear guidelines for updating and editing existing documentation.

But I would also consider other opportunities in terms of when we should be reviewing documentation.

After problems. If we had an outage at 3 a.m., was there enough documentation to allow us to fix it?

As part of any post-mortem review - have we asked if there's anything missing from the documentation? Have we asked what worked?

If you run GameDays, again, as you review, ask the question: is there anything missing?

And I find that new starters are an excellent opportunity to review documentation - a fresh set of eyes really helps to identify gaps or stale information in documentation that we may have been living with for some period of time.

The report recommends having defined owners for the documentation. Again, this seems sensible.

While I think everybody should have the ability, and potentially the responsibility, to update the documentation where needed, having an owner means that it should be periodically reviewed for suitability.

And for me, as a consultant trying to understand the system at speed, the documentation of critical Use-Cases is golden.

You know, start with the purpose of the system. As a system grows, this can sometimes be difficult to understand, but there should be an elevator pitch - what is its core purpose? If it cannot be described, then maybe the system is too big to be maintained in its current state.

And moving on from documenting its core purpose, we want to document those key Use-Cases. Say, for example, if it's a sales website, we would expect:

  • A customer should be able to browse products
  • A customer should be able to add products to the basket
  • A customer should be able to check out and pay for the basket.

For documenting these Use-Cases, I really like using Behaviour Driven Development (BDD). Not only is it useful for documenting those use cases, it also allows us to produce automated tests.

Our Use-Cases are then producing a double benefit - they're providing documentation to align the team as a whole on what the system should be doing, and they're providing automated testing against any regression that may be accidentally introduced at some future point.

So what is Behaviour Driven Development? Wikipedia describes it as:

"In software engineering, behavior-driven development (BDD) is an agile software development process that encourages collaboration among developers, quality assurance testers, and customer representatives in a software project. It encourages teams to use conversation and concrete examples to formalize a shared understanding of how the application should behave."

We would generally find our Use-Cases then being written out in a specific format, for example:

  • GIVEN Eric has a valid Credit or Debit card
  • and his account balance is $100
  • WHEN he inserts his card
  • and withdraws $45
  • THEN the ATM should return $45
  • and his account balance is $55

The Use-Case is broken down into a simple structure: we are GIVEN a starting point; WHEN something happens; THEN we expect this result.

It is a simple way of describing a Use-Case which can be understood by anybody in the team. There is no technical skill necessary to understand it.

And then that Use-Case can be used to automate tests against the system as we build it.

Thus, we're getting that double benefit. The documentation, in that GIVEN, WHEN, THEN form, makes it easy for the team to coalesce around what a specific function should be doing. And then we have the automated test to prove that it's doing it, and continues to do so going forward.
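As an illustration, the ATM scenario above could be wired straight into an automated test. This is a minimal sketch in plain Python - real teams would typically use a BDD framework such as Cucumber, SpecFlow or behave, and the `Account` and `ATM` classes here are hypothetical stand-ins for the system under test:

```python
# Hypothetical, simplified model of the system under test.
class Account:
    def __init__(self, balance):
        self.balance = balance

class ATM:
    def withdraw(self, account, amount):
        if amount > account.balance:
            raise ValueError("insufficient funds")
        account.balance -= amount
        return amount  # cash handed to the customer

def test_withdraw_45_from_100():
    # GIVEN Eric has a valid card and his account balance is $100
    account = Account(balance=100)
    atm = ATM()
    # WHEN he inserts his card and withdraws $45
    cash = atm.withdraw(account, 45)
    # THEN the ATM should return $45 and his balance is $55
    assert cash == 45
    assert account.balance == 55
```

Note how the comments mirror the GIVEN/WHEN/THEN structure - anyone on the team can read the test and recognise the use case it documents.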

I think Behaviour Driven Development is an exciting and interesting subject and something I will return to in the future.

Above and beyond what the State of DevOps report recommends, there are a few other things I would recommend when it comes to documentation:

  • How does it interact with other systems?
  • How is it developed, built, tested and released?
  • What key decisions have been made?
  • And make it super easy to maintain and access.

So why am I looking to document interactions with other systems?

Well, it's quite common now for most of our systems to deal with upstream and downstream systems. And sometimes it can be difficult to tell what those interactions are.

Sometimes I've found in organisations that legacy systems are not turned off for fear that something else might be using them. There's a fear that unexpected errors, either upstream or downstream, may occur if that system is ever switched off or changed.

Why do I ask how the system is developed, built, tested and released?

Well, these are the basics if we need to make change - which we would expect to do unless a system has been decommissioned. We would expect it to go through some level of continual change.

Now, again, a bit like Behaviour Driven Development, we can actually look to do double duty here. If we document our build, test and release processes in our automated Continuous Integration, Continuous Delivery and Continuous Deployment systems, as I discussed in episodes 19-21, we get a double benefit.

Firstly, we're documenting the steps that we need to take to develop, build, test, release that code.

And we're also automating the process. And by automating it, we can do it faster and more reliably than relying on an individual.
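As a sketch of what "the pipeline as documentation" looks like, here is a hypothetical minimal pipeline using GitHub Actions syntax - the job names and the `make` commands are assumptions for illustration, not a prescription:

```yaml
# Hypothetical pipeline: the file itself documents how the
# system is built, tested and packaged for release.
name: build-test-release
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build        # assumed build command
      - name: Run tests
        run: make test         # assumed test command
      - name: Package release
        run: make package      # assumed packaging step
```

Anyone joining the team can read this one file and see every step the code goes through - no stale wiki page required.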

So why would I ask for us to document key decisions made?

I always like to make the assumption that whoever built the system before me made the best decision they could based on the information they had at the time.

But even so, it can still feel like archaeology - trying to understand why things have been done in a certain way.

I've certainly equated working with legacy systems as being something like Indiana Jones - one wrong move and you've got a five ton boulder rolling towards you.

Having a record of how key decisions were made helps us understand the context of why the system is the way it is. Within software development, we can use "architectural decision records" - these capture an important architectural decision along with its context and consequences.

Now, this doesn't have to be in-depth documentation - it will probably differ from team to team, and it can generally be quite light. Maybe a few paragraphs on the problem being solved, what approach was taken and why.
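A lightweight architectural decision record might look something like this - the structure below is one common convention, and the systems and content are entirely hypothetical examples:

```markdown
# ADR 007: Use a message queue between Orders and Fulfilment

## Status
Accepted

## Context
The Orders service called Fulfilment synchronously; spikes in
order volume caused timeouts and lost orders.

## Decision
Publish order events to a message queue; Fulfilment consumes
them asynchronously.

## Consequences
Orders survive Fulfilment outages, but fulfilment is now
eventually consistent, and we must monitor queue depth.
```

A few paragraphs like that, written at the time of the decision, saves the archaeology later.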

Having this information can help us avoid the costly mistake of reversing prior decisions because we didn't understand the context in which they were made.

To a certain extent, it helps us avoid repeating the same mistakes as the past.

And any Documentation must be easy to maintain.

For every barrier you create, you create an excuse for it not to be done.

I've seen examples where documentation has needed to go through a rigorous process to be updated - well-meaning policies to give consistency or avoid mistakes. But by putting barriers in the way of any change being made, it's no surprise that changes are then not made to the documentation.

It must be easy to access and maintain. Think wikis (like Wikipedia), or documentation kept as part of the source code - places where it's easy for the development team to firstly access the documentation, for it to be useful, and secondly to change it in line with the code they're working with.

Any form of formal process should be avoided. Formal processes often produce extra steps that put people off making the change.

And as I say, the more barriers you put in place, the less likely you are to get quality documentation.

So we've talked about what sort of documentation we want and how to get it, but this does beg the question: what do we not document?

As I said earlier, I've been quite critical of bad Documentation - Documentation that doesn't really help us.

For example, requirements documents written chapter and verse - I've gone into projects where people have spent six months writing documents which have never been used.

Sign off documents - do we really need them?

Again, go back to that Agile manifesto idea of working software over comprehensive documentation.

If you're not sure whether the documentation provides value, think of it this way: will that documentation have value after the system has been running for a month? If it doesn't, then I'd really question whether it's providing value.

Include this question as well in any review. What documentation is no longer required? What documentation can then be removed so it doesn't need to be maintained?

In this episode, I've given you a brief recap of DevOps and the State of DevOps report. I've talked about how the report recognises the benefit of quality documentation - teams with it are 2.4 times more likely to see better software delivery and operational performance, yet it was only achieved by 25% of respondents.

I talked about why that figure may be so low - a combination of developers seeing documentation as drudge work and employers not rewarding it.

I talked through the report's recommendations:

  • Document critical Use-Cases
  • Create clear guidelines for updating and editing existing documentation
  • Define owners
  • Include it as part of the development process
  • And recognise documentation work during performance reviews.

And I added my own additional recommendations of:

  • How does it interact with other systems?
  • How is it developed, built, tested and released?
  • Why have certain decisions been made?
  • And to make it super easy to access and maintain.

And along the way, I've highlighted a couple of places where we can use practices like Continuous Integration, Delivery and Deployment, and Behaviour Driven Development, to not just document but also automate - gaining that double benefit.

In the next episode, I want to look at what the report recommends in terms of DevOps technical practices.

Thank you for taking the time to listen to this podcast. I look forward to speaking to you again next week.