How should we structure our test team? This is probably the most common question I hear when talking to test leaders about what's on their minds.
What makes this question so difficult is that it's frequently trying to understand the impact of several major changes on testing at once:
- the move to an agile development process,
- the move to some form of continuous delivery,
- the move to a more modularized or microservices architecture,
- the move to being a digital enterprise where the major touch point with customers is digital,
- and so on.
This question also typically tries to address some clear gaps that are emerging in testing, most notably:
- The user experience gap where 82 percent* of teams say they don't know how to ensure a high-quality user experience.
- The productivity gap where 89 percent* of teams say they can't keep up with the pace of DevOps.
- The automation gap where 56 percent* of teams say that traditional automation approaches can't test significant parts of their application.
Figuring out how to structure a test team is a huge, simultaneous equation. No wonder test leaders struggle to answer it.
I'd also add that (as Theresa Lanowitz said) software quality is currently suffering from “soundbite marketing.” Test strategies that sound great in 140 characters seduce too many people and end up leading teams down bad paths.
But in the last six months, I've seen several companies that I consider real thought leaders in testing all successfully implementing a similar structure, and it makes a lot of sense. So, I wanted to share it. The diagram below summarizes the structure; the key principles follow.
End-to-end test is critical. Given how complex and dynamic modern applications are, it's just impossible to predict the end-to-end user experience based on the sum of the parts. So, you must have a dedicated, end-to-end test team that is all about testing the user experience from the user perspective.
The scrum team should be a true, cross-functional team. That means you have people who are great at every role, but not everyone in the team does every role. Too many teams today have made the mistake of thinking that a cross-functional team means that developers do everything. That's as crazy in software as it would be in professional sports.
Developers in the scrum team create unit tests (white-box testing). This is code checking. Developers are great at this because they understand the internals. You shouldn't try to reuse these assets further down the process—it's a different type of testing and the effort to reuse is more than it's worth.
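To make the "code checking" idea concrete, here is a minimal sketch of a developer-written unit test. The function and test names are hypothetical; the point is that the developer tests internal logic and edge cases they know about from reading the code.

```python
import unittest

# Hypothetical internal function a developer would white-box test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; internals known to the developer."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Unit tests exercise internal behavior, including error paths
    # the developer knows exist because they wrote the code.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```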
The tester in the scrum team creates black-box tests for that component (e.g., the app, the web GUI, or the microservice API). They're testing from the user perspective (for APIs, the "user" is another piece of software).
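By contrast, a black-box component test only touches the component's public contract. The `CartService` component here is a hypothetical stand-in; the key difference from the unit test above is that the assertions are phrased purely in terms of observable behavior, never internal state.

```python
import unittest

# Hypothetical component under test. The embedded tester knows only
# its public contract (methods, inputs, outputs), not its internals.
class CartService:
    def __init__(self):
        self._items = {}

    def add(self, sku: str, qty: int) -> None:
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        return sum(self._items.values())

class CartServiceBlackBoxTest(unittest.TestCase):
    # Black-box: drive the component only through its public API and
    # assert on what a user (or a calling service) could observe.
    def test_adding_items_is_cumulative(self):
        cart = CartService()
        cart.add("ABC-1", 2)
        cart.add("ABC-1", 3)
        self.assertEqual(cart.total_items(), 5)

if __name__ == "__main__":
    unittest.main()
```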
These tests from the embedded tester then go to the end-to-end team. The end-to-end team arranges these component tests into end-to-end tests.
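One way to picture that hand-off: each scrum team's component test becomes a reusable step, and the end-to-end team chains those steps into a user journey. All names here are hypothetical; real steps would drive real UIs and APIs rather than an in-memory dict.

```python
# Hypothetical reusable steps contributed by embedded testers.
# Each step verifies its own component, then passes state along.
def search_step(state: dict) -> dict:
    # Search-service component test: a product is found.
    state["product"] = "ABC-1"
    assert state["product"], "search returned no product"
    return state

def cart_step(state: dict) -> dict:
    # Cart-service component test: the found product is added.
    state["cart"] = [state["product"]]
    assert state["cart"] == ["ABC-1"]
    return state

def checkout_step(state: dict) -> dict:
    # Checkout component test: an order is created from the cart.
    state["order_id"] = f"ORD-{len(state['cart'])}"
    assert state["order_id"].startswith("ORD-")
    return state

def run_end_to_end(steps):
    """The end-to-end team arranges component steps into one journey."""
    state = {}
    for step in steps:
        state = step(state)
    return state

if __name__ == "__main__":
    result = run_end_to_end([search_step, cart_step, checkout_step])
    print(result["order_id"])
```

Because the steps are owned by the scrum teams but composed by the end-to-end team, a change to one component only requires updating one step, not every journey that uses it.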
The tester in the scrum team should have either a dotted or a hard reporting line to the person running the end-to-end team. Otherwise, test fragmentation typically occurs and the end-to-end team can't reuse the assets.
So, that's it. It's simple, but there are some key decisions in there that I think address many of the issues today’s teams face, and that remove a lot of complexity.
Please get in touch if you have thoughts or alternative approaches that have worked well for you.
* Kickstand survey of 750 testers in the U.S. and the U.K. conducted on behalf of Testplant, August 2017.