Alex Painter - 29 January 2019
People make mistakes. Human behavior so often falls short of ‘expected standards’ that it raises the question of why we hold ourselves to such standards at all. Too often, we build systems and processes on the implicit assumption that the people using them will be rational, infallible, and consistent. Of course, the truth is that most of us are anything but.
Our general fallibility is a large part of the case for AI and test automation: automated testing is immune to the unintentional biases and lapses in concentration that affect human testers.
But what about moral failings?
It’s an unattractive thought, but straying from the straight and narrow is just as much a part of the human condition as having a concept of the straight and narrow in the first place.
This is not, perhaps, the biggest issue when it comes to test automation. But when the technology we use in testing finds its way into other areas of the business, the moral dimension is more likely to come into play.
Intelligent test automation solutions such as our own are increasingly used for robotic process automation (RPA). In other words, they don’t just test whether a system is working—they become an actual user of that system. Since our own solution operates very much like a human end user, working across multiple applications and responding to visual cues, it can take on many of the repetitive tasks that used to be the sole preserve of human beings.
And this opens up a new world of possibilities when it comes to privacy and security. We’ve already seen this in the healthcare sector, where, for example, RPA has been used to move sensitive patient images from encrypted hard drives to an archive without a person ever having to view them.
It’s not hard to see the applications in other areas, such as defense and financial services. In fields where security is paramount and the weakest link in the chain is almost always a person, RPA gives organizations a tireless, impartial worker, devoid of greed and not susceptible to blackmail—a worker who will handle all kinds of confidential and sensitive information without the slightest risk of falling prey to temptation.
And aside from the general improvement in efficiency you get from a robotic worker, there is the potential for further savings. You can, to a degree, forget about some of the safeguards required for human workers, such as difficult, drawn-out recruitment processes, oversight, and background checks.
All of this could begin to sound a little ‘anti-human’. It shouldn’t. There are plenty of jobs that require levels of creativity, spontaneity, and empathy that only we (for the time being!) possess. It’s just that there are others—often those that humans don’t particularly enjoy doing—that can be done more cost-effectively, safely, and securely by robotic operators.
Learn more about the capabilities of RPA in our forthcoming webinar: Is a robot going to take my job? What you need to know about robotic process automation.