
Six Reasons Why We Need Dedicated, Professional Testers

By Antony Edwards - 7 November 2017

In my last post, I described a test team structure that I've seen several companies (which I think are real thought leaders in testing) successfully implement over the last few months. Included in that structure is the sometimes controversial statement that scrum teams should have dedicated, professional testers; that is, we shouldn’t make developers responsible for all testing (though they should be responsible for white-box unit testing).

In this post I want to make the case for why this is the right thing to do.

  1. The majority of defects are caused by incorrect, implicit assumptions made by the developer, not by careless coding errors. For example, a developer may subconsciously assume that everyone will always enter their username before their password at a login screen. The developer won't find these defects, because they hold the very assumptions that caused them, and another developer on the same team is very likely to share those assumptions.
  1. The vast majority of developers struggle to think or behave like normal users. They aren't normal users: they understand what's going on under the hood and modify their behavior accordingly. So developers make bad black-box testers (though they are the best white-box testers). How often have you heard a developer say, "No, that can't be it, that code doesn't touch the database," and then later, "That's weird, it is touching the database"?
  1. Developers test what they build, not what the user uses. Consider a modern microservices architecture where you probably developed less than 30 percent of the components being used (for example, where payment services, video services, location services, and content distribution services are being provided by external microservices). Here, it's critical that someone is testing from the user perspective.
  1. Cost. Unfortunately, testers are typically paid 35-plus percent less than developers (in many organizations, half as much). Shifting testing to developers is therefore bizarre from an economic perspective. If developers could test twice as effectively or efficiently it would of course make sense, but I've never heard anyone credibly suggest that's true.
  1. Empirically, developers just don't test. Organizations that move test responsibility to developers consistently see a measurable reduction in test assets and an increase in post-release defects. Most developers don't enjoy testing, and there's always some urgent development work to do, so testing is always the thing that doesn't get done. This isn't a theoretical point. Some developers do prioritize testing, but they are a very small minority; practically everyone knows that, in practice, developers don't get around to testing.
  1. Testing is a skill and an aptitude. Testers need strengths that developers generally don't, great EQ, for example. No one would think it sensible to say, "Everyone on my sales team must also be a lawyer so they can quickly handle terms and conditions," or "My footballers must be able to play every position." It makes just as little sense to assume that everyone who's a good developer has the skills, behaviors, or interest to be a good tester. Testers need to have more pride in their abilities and challenge anyone who says otherwise.
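The implicit-assumption point can be made concrete with a toy example. The `LoginForm` class below is entirely hypothetical (not from any real codebase): its developer implicitly assumes the username is always typed before the password, so the developer's own test passes, while a black-box test that reverses the field order exposes the latent defect.

```python
class LoginForm:
    """Toy login form with a hidden ordering assumption baked in."""

    def __init__(self):
        self.username = None
        self.password = None
        self._ready = False

    def enter_username(self, value):
        self.username = value

    def enter_password(self, value):
        self.password = value
        # Implicit assumption: the username was entered first, so checking
        # readiness only here "always" works -- for this developer.
        self._ready = self.username is not None

    def can_submit(self):
        return self._ready


def order_username_first():
    # The order the developer always uses -- their own test passes.
    form = LoginForm()
    form.enter_username("alice")
    form.enter_password("s3cret")
    return form.can_submit()  # True


def order_password_first():
    # A black-box tester tries the fields in the other order: readiness
    # was computed before the username existed, so submission is blocked.
    form = LoginForm()
    form.enter_password("s3cret")
    form.enter_username("alice")
    return form.can_submit()  # False -- the hidden assumption surfaces
```

A dedicated tester, who doesn't share the developer's mental model of the form, is far more likely to try the fields in the "wrong" order and surface this kind of bug.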

So, developers as testers is ineffective and expensive, which is also what almost all the empirical results I've seen conclude. We've seen large organizations where, with the right automation, a developer-to-tester ratio of 15:1 massively increased quality over a developers-as-testers strategy, and development productivity went up too. Dedicated, professional testers are a no-brainer.