By improving load times to just under a second, COOK saw a seven-percent increase in conversion rates. At Eggplant, we deliver insights that result in better customer experiences and drive better business results — something we call Customer Experience Optimization (CXO). With this in mind, we recently renamed a number of our solutions to better reflect the outcomes they’re designed to help organizations achieve.
Customer experience transformation is a key initiative for any business that wants to position itself for the 21st century. Two concepts are central: updating and digitizing technology, and building persistent customer relationships. According to Bain & Company, customer experience transformation starts with “… simplifying your core business and digitizing it where it matters.” McKinsey & Company writes that in any customer experience transformation, “… the voice of the customer can be used to identify upstream and cross-functional issues and address the root causes of problems.” In short, to see positive results, you need well-tested, high-quality digital assets that reflect ever-evolving customer needs and desires.
Burndown charts, feature completeness, code quality, pass/fail testing. Dev and test managers have access to lots of data from many sources about an upcoming release. But none of it directly relates to business outcomes. So while data may influence decisions, those decisions remain largely subjective, based more on experience than on evidence.
Just one hour of downtime cost Amazon an estimated $100 million in lost sales. Unless you were completely off the grid, you’re well aware of the performance issues Amazon and its shoppers experienced on what was touted to be the biggest Prime Day in the company’s history.
Testing is critical for organizations like NASA, the US Army, Northrop Grumman, BAE Systems, Lockheed Martin, MBDA, the UK’s Ministry of Defence and the Metropolitan and Scottish Police, where lives are on the line. Working with customers like these over many years, we've seen that testing is about much more than making sure the system works — it’s about testing for mission success and continuously optimizing mission outcomes. Whether you're designing systems for command and control (C2), for supporting complex police operations such as hostage negotiations, or for shooting down an enemy missile, you should plan your testing and monitoring strategy to continuously test against the desired mission outcomes.
If you’re an online retailer, there’s a good chance you’re busy gearing up for the pre-holiday rush. Black Friday and Cyber Monday push retail sites to their limits, straining their ability to cope with surges in visitor numbers.
The weather, the tennis, the football — with all the distractions, you’d think those of us on the Real User Monitoring team would be kicking our feet up, right? Not a chance! I'm super excited to tell you about our latest release: a brand-new version of our Performance Trends Report.
On May 21, 2018, Bank of America announced that it was rolling out its chatbot, Erica, to all its mobile customers. On the surface, the premise makes sense. It makes the bank more relatable. It provides real-time customer support in an era when artificial intelligence (AI) assistants like Siri and Alexa are becoming the norm. It doesn’t have the limitations of many phone-based IVRs, and it aims to provide immediate assistance instead of making us wait for a human (we’ve all shouted “representative” or pressed zero dozens of times to get a real person). Erica is a great way for Bank of America to optimize the customer experience.
But let’s pull back the covers and ask some basic questions. How does Erica know the customer so well? How does Erica pull from different sources of information? How does Erica know what products and services to offer? What systems, both homegrown and third party, does Erica need to be effective?
Quality assurance (QA) used to be a compliance activity. You were releasing a product and needed to test it and stamp it “approved.” QA was about testing that the code worked. You might manually test the code. You might even have tried some automation — coding a set of test scripts to catch regressions or errors that you had eradicated in the past, but which somehow crept back in. All in all, you were reasonably satisfied that you had achieved a level of test coverage that met your goals. Then, you put your code into production and crossed your fingers that nothing went wrong. And if it did, you tried to fix it as quickly as humanly possible.
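That kind of automation can be as simple as a scripted regression check: each assertion pins down the behavior of a bug that was fixed once, so it can't silently creep back in. A minimal sketch in Python — `format_balance` and the bugs noted in the comments are hypothetical, purely for illustration:

```python
# Minimal regression-test sketch: pin down previously fixed bugs
# so a future change can't silently reintroduce them.
# (format_balance is a hypothetical function, shown for illustration.)

def format_balance(cents: int) -> str:
    """Format an integer amount of cents as a dollar string."""
    sign = "-" if cents < 0 else ""
    dollars, remainder = divmod(abs(cents), 100)
    return f"{sign}${dollars}.{remainder:02d}"

def test_regressions():
    # Each case documents a bug that was fixed once already.
    assert format_balance(0) == "$0.00"      # zero once rendered as "$0.0"
    assert format_balance(105) == "$1.05"    # leading zero on cents was dropped
    assert format_balance(-250) == "-$2.50"  # negatives once showed "$-2.50"

test_regressions()
print("all regression checks passed")
```

In practice a test runner such as pytest would discover and run these checks on every build, which is exactly the "stamp it approved" gate the old QA model relied on.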