Testing when there are no users

By Antony Edwards | 11/7/16

Gartner Symposium. I always enjoy it. Of course it's a deluge of buzzwords and bombast, but there is substance beneath, and I enjoy thinking about the implications for teams creating software in the future.

This morning I was listening to a talk from someone in the oil and gas industry who has actually deployed machine-to-machine systems, i.e. the thing everyone talks about as the future of the Internet of Things but that very few have any practical experience of. It was an interesting talk.

As I listened to their talk, I realised how central 'the user' is to our current model of testing. Fundamentally, most companies quantify quality in terms of the user, e.g. by measuring customer-raised unique defects (CRUD), basket abandonment, or some similar measure. Users are also treated as the last line of testing: if a defect is not caught by the development/test team, then a user will find it and report it, and if the team can fix it quickly most users won't be impacted. The latter is somewhat formalised as "testing in production".

But what if there are no users (or at least no human users)? What if the 'user' is another software application which has no interest in reporting defects? How would this change how we approach software development and testing?

Such situations actually exist today, and these existing systems should provide clues. A friend of mine leads the development of an automated trading platform, where most of the inputs come from, and most of the outputs go to, other live systems rather than humans. A mistake there can easily cost millions of pounds before it's caught, and a catastrophic error can cost much more.

So what do these existing machine-to-machine systems do differently from other systems?

  • The software tends to be very clearly encapsulated, with low coupling. Semantics are well defined. Control flow and data flow are controlled. Most importantly, this ensures that internal checks can be done reliably (i.e. without discovering that there's another path that bypasses the checks).
  • Test cases are executed with thousands of different (carefully defined) data sets; obviously these runs are automated.
  • Expected test outputs are not based on a 'golden oracle' (because there are lots of different reasonable results) but on models of 'good' and 'bad'; a sketch of this follows the list.
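To make the last two points concrete, here is a minimal sketch in Python. The order-sizing function, its parameters, and the invariants are invented for illustration and are not taken from any of the systems described above; the point is simply that thousands of generated data sets are run automatically, and each result is judged against a model of 'good' rather than against a single expected value.

```python
import random

# Hypothetical system under test: a toy order-sizing function standing in
# for a machine-to-machine component. Amounts are in pence so the
# arithmetic stays exact.
def size_order(balance, price, risk_percent):
    """Return how many whole units to buy given a balance, a unit price,
    and the percentage of the balance we are willing to commit."""
    budget = balance * risk_percent // 100
    return budget // price

# A model of a 'good' result: invariants any reasonable output must satisfy.
# There is no single golden answer, only properties that must hold.
def is_good(balance, price, risk_percent, units):
    spend = units * price
    return (
        units >= 0                                      # never a negative order
        and spend <= balance * risk_percent // 100      # never exceed the risk budget
        and spend <= balance                            # never exceed the balance
    )

# Data-driven execution: thousands of generated data sets rather than a
# handful of hand-picked cases, all checked against the model automatically.
random.seed(42)
for _ in range(10_000):
    balance = random.randint(0, 100_000_000)   # up to £1m, in pence
    price = random.randint(1, 1_000_000)       # 1p to £10k per unit
    risk_percent = random.randint(0, 100)
    units = size_order(balance, price, risk_percent)
    assert is_good(balance, price, risk_percent, units), \
        (balance, price, risk_percent, units)

print("10,000 generated cases satisfied the 'good' model")
```

Because is_good expresses properties rather than one expected answer, the same check works however many data sets the test generates, and whichever of the many reasonable results the system produces.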

But most importantly: if they are not confident that the software is robust, they don't release it. This drives an entirely different culture throughout the whole development cycle, and a thousand small differences in how these teams produce software. Interestingly, this is something we could all do today.

Written by Antony Edwards

He’s our COO. He guides where the product goes. Antony likes haiku.