Antony Edwards - 1 October 2019
Over the next four years, Gartner predicts that 25 percent of employees will use voice to interact with applications in the workplace—up from less than 2 percent in 2019.
Gartner sees a fairly broad market for adoption, with voice an attractive option across both industries and functions. This stands to reason, given the popularity of Siri, Alexa and similar voice technologies in the consumer realm. As Gartner Vice President Van Baker noted, “We believe that the popularity of connected speakers in the home, such as the Amazon Echo, Apple HomePod and Google Home, will increase pressure on businesses to enable similar devices in the workplace.”
As enterprise use of voice-driven software increases, numerous considerations arise, including how to develop and test these technologies. When interacting with websites and applications, people tend to input information in a structured manner, but that is not necessarily the case with voice.
Let’s take an HR onboarding workflow as an example. If you were to fill out the equivalent form online, it’s likely you’d start with the first box (your first name), then the second (surname), then the third (street address), and so on in order. But it’s anyone’s guess how you would provide this information verbally, and chances are your approach differs from your colleague’s in the next office. When information is not provided in a set order, voice software can easily become confused and produce incorrect results, leading to user frustration and potential abandonment of the technology.
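One common way dialogue systems cope with out-of-order input is slot filling: capture whichever fields the user happens to mention, in any order, then prompt only for what is still missing. The sketch below illustrates the idea for the onboarding example. It is a minimal, hypothetical illustration using simple regular expressions over a transcribed utterance, not a description of any particular product’s implementation; the slot names and phrasings are assumptions.

```python
import re

# Hypothetical slot patterns for an HR onboarding dialogue.
# Each slot can match wherever it appears in the utterance.
SLOT_PATTERNS = {
    "first_name": re.compile(r"first name is (\w+)", re.IGNORECASE),
    "surname": re.compile(r"surname is (\w+)", re.IGNORECASE),
    "street": re.compile(r"live at ([\w\s]+?)(?:,|$)", re.IGNORECASE),
}

def extract_slots(utterance: str) -> dict:
    """Fill whichever slots appear, in any order; unmatched slots stay unset."""
    slots = {}
    for name, pattern in SLOT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            slots[name] = match.group(1).strip()
    return slots

def missing_slots(slots: dict) -> list:
    """Slots the dialogue still needs to prompt the user for."""
    return [name for name in SLOT_PATTERNS if name not in slots]
```

Two users can now supply the same details in completely different orders and still end up with the same filled-in record, and the system knows exactly what follow-up question to ask rather than failing on unexpected ordering.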
Circumventing these issues requires that companies put users at the center of their testing strategies. To develop voice-driven technologies that your employees will actually use, you first need to understand much more about those users: how and why they use the software, and the journeys they typically take. It’s not enough just to have this information; it’s also critical to feed user data back into DevOps in a structured manner so that the voice-driven product itself is designed to reflect users’ behavior.
Another consideration is ensuring that voice technologies can support the modern workforce. As CMSWire’s Kaya Ismail wrote in a recent piece on voice adoption, “The workplace has become increasingly diverse, and this means employees speak different languages with a variety of dialects and accents.” As such, testing the voice recognition capability itself is a critical part of developing voice-driven software.
It’s clear that bringing about the possibilities inherent in voice-driven technologies requires significant changes to the way we build and test software and applications. Learn more about the Eggplant approach, and how we can help you orient your testing strategy around your users.