#14: Software Testing

In this episode I want to talk about testing, why it is important for ROI and what types of testing can be done.


Published: Wed, 23 Oct 2019 15:21:17 GMT

Transcript

Wikipedia defines software testing as:

"Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use"

From an ROI perspective, testing should be focused on minimising the financial impact of software not operating as expected.

“Not operating as expected” is a very subjective term, and I use it on purpose. How software is expected to behave is entirely based on our own context.

Software crashing would be viewed by all as a bug.

But a “left swipe” on a mobile app not clearing a message may be unexpected to a user – even though that behaviour was never intended by the product team.

Testing is about highlighting anything that would be unexpected to the organisation as soon as possible. It is then up to the organisation to decide how to handle that.

Why test?

Software development is complex & complicated.

Sometimes even the simplest change can feel like a game of Kerplunk.

You pull on the wrong thread and you may have unexpected effects elsewhere in the system. And while in Kerplunk you can see this in the deluge of marbles – in software it can be much more subtle, often going unseen by the developer.

Testing is there to spot those effects as cost effectively as possible.

An unexpected effect becomes more expensive to an organisation the longer it exists. I've talked about this before, but in short: it is obviously cheapest to handle at the time the developer introduces it – and most expensive when it makes it into the wild and affects your commercial relationship with customers.

So ideally we want to resolve issues as early as possible.

You then need to think about the investment to find it.

You could employ a massive QA (Quality Assurance) team to test and validate everything you do.

But is that cost effective if you are making very small changes to an internal website with no critical impact if it goes wrong?

Unfortunately there is no one-size-fits-all rule for testing. It will be different per organisation based on risk and cost.

Let's talk about different types of testing.

There is a considerable body of research into testing. There are countless books, courses, articles, etc devoted to the subject.

So … no plans for me to go through them all in this article.

For this article, I wanted to look at 4 types – manual, unit, integration and acceptance.

Let's start with manual testing - the act of someone physically testing the software.

Our manual tester will be putting the software through its paces – pushing it to its extremes and likely taking it places it was never originally intended to go.

Or at least they should do.

Too often our manual tester is spending their time doing repetitive, well-trodden-path style testing. This is where most testing practice starts: create a list of tests, and pass them over to someone to repeat over and over again.

Those tests will generally be your core operations – so they will be great for making sure that the well-trodden path works – but they do very little for anything that detours off that path.

What we should be using our tester for is designing tests that can be automated. Testers define the test – the developers automate it – the tester moves on to the next set of tests.

Once automated, those tests can be run by the system. You can run those over the weekend, at night, during lunch, etc – you aren't constrained to the individual.

This should be freeing the tester up to find those less trodden paths through the system. Ultimately improving the quality of the software and thus the ROI.

Now let's move on to Automated Unit Testing.

One of the most common types of testing produced by developers is the unit test.

When I say produced – these are software in their own right. They are like mini-programs that validate the lowest-level components in the software.

Imagine you have a car. Unit testing would be looking at one component of that car and testing it to prove it operates as expected.

Take the seat belt material for example. When buying your car, it really isn't a consideration. But the amount of testing that goes into that single component of the car is phenomenal.

And do you think that the material is tested by hand?

Nope. They automated the process with a machine.

While there is an investment to set up the unit tests, once they exist they can be run repeatedly very cost-effectively.

In most cases (unless you are deeply technical) you are unlikely to get involved or care about the Unit Tests – other than that they exist and are being used. As with any test, if they are not maintained in line with the system under test then they quickly lose value.

It is key that a development team are empowered to produce and maintain those Unit Tests. And when one fails, the team need to understand why. Was it because of a desired change in the system (and thus the test needs to be updated), or was it unexpected due to something changing elsewhere?

Set up right, those Unit Tests can provide exceptionally fast feedback to a developer (within seconds). This greatly reduces the impact of any problem, because the recent work is still fresh in the developer's mind, (in theory) making it easy to find the source of the problem – and then fix it as appropriate.
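
To make that concrete, here is a minimal sketch of a unit test, written in Python's pytest style. The seat belt component and its 0.5g locking threshold are entirely hypothetical – purely for illustration:

    # test_seat_belt.py – unit tests for a single, hypothetical component.
    # (The production function would normally live in its own module.)

    def locks_under_load(deceleration_g: float) -> bool:
        # The belt mechanism should lock above 0.5g of deceleration
        return deceleration_g > 0.5

    def test_locks_during_hard_braking():
        assert locks_under_load(1.2)

    def test_stays_free_during_normal_driving():
        assert not locks_under_load(0.1)

A runner such as pytest executes tests like these in milliseconds, which is exactly what makes that within-seconds feedback possible.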

Automated Integration Tests are similar to Unit Tests, but test how well a number of components work together.

In our car example, we'd want to perform integration tests against the engine.

We can test that all the components of the engine work correctly in isolation – but we need to test them in aggregate to ensure that our engine is doing what is expected.

Again these are generally tests for the immediate benefit of the development team.
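
To sketch the difference in the same pytest style (again with entirely hypothetical components): each class below could pass its own unit tests, while the integration test checks that they cooperate correctly when assembled:

    # Hypothetical engine components, each covered by their own unit tests
    class FuelPump:
        def prime(self) -> bool:
            return True

    class Ignition:
        def fire(self) -> bool:
            return True

    class Engine:
        def __init__(self, pump: FuelPump, ignition: Ignition):
            self.pump = pump
            self.ignition = ignition
            self.running = False

        def start(self) -> None:
            # The integration point: the engine only runs if both
            # components cooperate
            if self.pump.prime() and self.ignition.fire():
                self.running = True

    # The integration test exercises the components in aggregate
    def test_engine_starts_when_components_cooperate():
        engine = Engine(FuelPump(), Ignition())
        engine.start()
        assert engine.running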

Acceptance tests are the part where you get involved.

Acceptance tests should be validating the key characteristics of the software that deliver value to the customer.

They should be at a high level – so, for example, we want to make sure that the car starts, drives forwards & stops.

These tests will inherently be slower than unit & integration tests to execute - they will have more moving parts. Ideally you would run these as soon as software is changed to keep the feedback cycle as short as possible.

Again automation is appropriate here.

Even if run automatically nightly, this will be considerably more cost effective than waiting for the availability of a manual tester.
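
As a sketch, an automated acceptance test stays at that high level: it drives the whole system through a user-visible scenario rather than poking at individual components. The Car class here is a hypothetical stand-in for however your real system is driven end to end:

    # A hypothetical facade over the whole assembled system
    class Car:
        def __init__(self):
            self.started = False
            self.speed = 0

        def start(self):
            self.started = True

        def accelerate(self):
            if self.started:
                self.speed += 10

        def brake(self):
            self.speed = 0

    # The acceptance scenario: the car starts, drives forwards & stops
    def test_car_starts_drives_forwards_and_stops():
        car = Car()
        car.start()
        car.accelerate()
        assert car.speed > 0   # it drives forwards
        car.brake()
        assert car.speed == 0  # it stops

In a real system that facade would be replaced by whatever drives your deployed software end to end – a browser automation tool, an API client, and so on.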

An important consideration is how much to invest in these activities.

The rule of thumb is ... It depends.

You ideally want to invest enough time and energy to get the desired benefits - do just enough to give a reasonable level of confidence.

Unfortunately that will depend on an organisation's own judgement call of cost vs risk.

As a rule though, I would suggest you aim to add testing until, as an organisation, you are comfortable with not needing that manual testing on a release. Build up your capabilities (especially the automation) alongside any manual work you are doing - bit by bit until you reach a level of confidence that you don't need that manual test.

Don't strive for perfection.

You will never reach a point where everything is 100% tested - not only would it be exceptionally expensive, it's largely impossible in any system of any size. The number of variations within a given system can soon become immense.

Again, as a general rule of thumb, I would expect:

  • A very low number of Manual tests (if any)
  • More Automated Acceptance Tests than Manual
  • More Automated Integration Tests than Acceptance
  • More Automated Unit Tests than Integration

This is described as the Testing Triangle, where the top of the triangle is the small number of Manual Tests, and the bottom the large number of Unit Tests.

From a cost perspective, that triangle is inverted. It is considerably cheaper to produce and operate:

  • A Unit Test than an Integration Test
  • An Integration Test than an Acceptance Test
  • And an Acceptance Test than a Manual Test

If we tie testing back to some of the previous episodes:

Lean, introduced in episode 7, talks about Defects being a source of waste. It also talks about Building Integrity In.

Agile, introduced in episode 8, talks about valuing Working Software.

And DevOps, introduced in episode 10, talks about automation of any process to increase flow of work through the system.

Good testing makes for good productivity and a quality product – all great stuff for ROI.

There will always be a tipping point between effort and reward (as in all things) – but when doing that assessment, take into account the potential lifetime of the product (3 – 5 years generally).

The key is to ensure that your team have the time and empowerment to implement AND maintain the proper testing. It will always be tempting to cut corners when the pressure is on – but if you don't do it right now, when will you?