The annual Gartner Symposium/ITxpo in Barcelona, Spain, is a great pulse check on what's on the minds of CIOs at large organizations (banks, utilities, telcos, governments). It's not necessarily the place to see the absolute latest technology, but it is the place to see what organizational problems CIOs are trying to solve with technology, and what they plan to roll out next year.
So, here are my top three takeaways from the show for test architects.
- AI technology is changing rapidly, so focus on applications, not platforms. If you take the traditional IT approach of spending 12–18 months building up a platform before applying it to specific problems, the platform will be out of date by the time you start applying it. Instead, focus on applying AI to specific problems with applications that can be deployed, and deliver value, in weeks. I think this is particularly pertinent in testing, where we all feel that AI will bring huge benefits, but most of us aren't sure exactly where they will come from. Concentrate on tangible solutions, not vague platforms.
- Security isn't going away, however much everyone hopes it will. Security doesn't make customers' lives better, it doesn't help the organization be more agile, and if you get it perfect, nothing happens. But as we all know, security breaches are becoming more common and more serious, data breaches are incurring bigger fines, and consumers see the threats as real. There was a strong sense at the show that companies are now accepting that they need to get serious about security. What does this mean for test architects? Security testing is going to become mainstream. You can't just say, "We do penetration testing once a year." Security needs to be part of every test plan, and every release needs to be security-tested.
- Conversational systems mean that products need to be tested from the user's perspective. With traditional software, the onus is on the user to learn the product: we learn the terminology, learn the forms, learn the workflow logic, and we may even get trained. With new conversational software like Amazon's Alexa, this has all reversed. Now the product has to learn the user. This is a far more dramatic shift than simply putting a voice front end on information services, and I think it further drives the need to change how we test. It's not good enough to test for compliance, that is, to verify that the product works if users use it the way we think they should. We need to test that products work for real users, using them how they want, in their own environments.
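To make the security point concrete: one way to fold security testing into every release is to add automated security smoke checks to the test suite, so a build fails if a baseline isn't met. The sketch below is illustrative, not a recommended policy: the header names are real HTTP security headers, but the function and the required set are my own assumptions.

```python
# A minimal sketch of a per-release security smoke check, instead of a
# once-a-year pen test. The required-header policy here is illustrative.

REQUIRED_HEADERS = {
    "Strict-Transport-Security",  # enforce HTTPS on future requests
    "Content-Security-Policy",    # restrict script/resource origins
    "X-Content-Type-Options",     # block MIME-type sniffing
}

def missing_security_headers(response_headers):
    """Return the required security headers absent from a response."""
    present = set(response_headers)
    return sorted(REQUIRED_HEADERS - present)

# In a release pipeline, a non-empty result would fail the build:
headers = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
}
print(missing_security_headers(headers))  # ['Strict-Transport-Security']
```

A check like this runs in milliseconds, so there's no excuse to leave it out of any release.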
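The shift from compliance testing to user-perspective testing can also be sketched in code. Instead of asserting that one "canonical" command works, we assert that the many ways real users actually phrase a request all resolve to the same intent. The `parse_intent` function and the phrase list below are hypothetical stand-ins, not any real NLU component.

```python
# Sketch of user-perspective testing for a conversational interface.
# parse_intent is a toy stand-in for a real natural-language component.

def parse_intent(utterance):
    """Map an utterance to an intent name (hypothetical logic)."""
    text = utterance.lower()
    if "balance" in text or "how much" in text:
        return "check_balance"
    return "unknown"

# Compliance testing would check only the first phrase; user-perspective
# testing checks how people actually talk.
user_phrasings = [
    "check my balance",
    "how much money do I have",
    "what's my balance right now",
]

results = [parse_intent(p) for p in user_phrasings]
print(results)  # ['check_balance', 'check_balance', 'check_balance']
```

In practice the phrase list would come from real user recordings or transcripts, not from the test author's imagination, which is exactly the point: the product has to learn the user.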
Agree? Disagree? I’d love to hear your thoughts on these topics.