
Q&A: The emergence of AI in embedded military technology

By Jaspar Casey - 24 June 2021

Test automation is increasingly seen as a strategic advantage in the defense sector. Although the practice has been part of software development for many years, recent advances in artificial intelligence (AI) and machine learning (ML) have allowed software teams working on defense systems to take a significant step up in quality and turnaround time.

Earlier this week, we sat down with SIGNAL Magazine to discuss the growing use of AI and automation in military technology. Below are a few of the questions we were asked, along with highlights from members of Keysight’s Eggplant team. I recommend watching the full panel discussion on SIGNAL Media to get the complete picture of these developments.


Featured in the discussion are Eggplant’s Meghan Danielson, Technical Consultant for Defense & Aerospace; Dr. Peter Cherns, Product Manager for Eggplant DAI; and John Hogan, VP of Sales for Federal, Defense, & Aerospace.

Why is there a need for software automation?

John Hogan: It’s clear that missions today are more reliant than ever on software. This means that speed of delivery is really important — getting those new versions out to warfighters.

Fundamentally, software needs to deliver outcomes from the perspective of the end user. We’re no longer interested in just answering the question “Does it work?” We also need to make sure it’s an effective tool for the warfighter.

Where does testing take place in the program life cycle? We’re seeing it from prototyping to sustainment. These teams are also asked to provide an audit trail of their work and the efforts they are putting in to make sure these digital products are safe.

Another interesting thing is that the modules and services these applications run on are also being updated continuously, often independently of the owners of the applications themselves. This means integration testing, testing the interactions between these systems of systems, is important. Integration testing is a major challenge because of its complexity. It’s also important to note that the software lifecycle has changed substantially in the past 10-15 years: testing now accounts for 75% of program sustainment costs, and automation is necessary to address testing needs while keeping those costs in check.


As AI is introduced into software, how does this affect the testing of these systems?

Peter Cherns: AI has changed the kinds of things that QA professionals, or anyone involved in deploying software, have to care about. John spoke about how software deployment has evolved over the last decade: we see the emergence of systems of systems, the involvement of third-party components in nearly every interaction, and services being delivered over multiple interfaces rather than as single, self-contained applications.

These trends stretch the paradigm of creating manual or automated sets of regression tests to its limit, and incorporating AI components into the mix takes all of those concepts to the next level. You have an enormous potential testing surface area, and a full end-to-end system that exhibits non-deterministic behavior. That’s a frightening prospect for QA professionals, because it removes the reproducibility their testing relies on and makes it very hard to write a representative set of tests.

The second challenge is that the product deployed in the field isn’t what was initially shipped and verified. When you have machine learning models that can learn and adapt, you have components that are continually evolving. The outcomes John talked about can no longer be traced back to original requirements. A lot of previous QA techniques and philosophies rely on that link back to what the software was supposed to do; with AI components, that link starts to break.

We therefore believe it is no longer possible to define a fixed set of linear tests. Part of that is the sheer scale of the surface area, and part is the number of unknowns that these non-deterministic components introduce. Our concept and design of Eggplant DAI has always been built on this basis. The surface area is exploding to the point where we need to use AI to auto-generate the assets we need for testing, and to auto-generate the test cases themselves; we can’t rely on fixed, linear sets. We need fast, dynamic iteration of test scenarios, and we use AI to help both generate and prioritize those scenarios.

That also means needing to get rapid feedback from the field, from real users on where to prioritize testing. That link between production and development and test is key as teams start to build and deploy software that has AI components.
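
To make that idea more concrete, here is a minimal sketch of model-based test generation in the general sense described above: the application is represented as a small state model, and scenarios are generated by walking that model, with weights standing in for feedback from the field. Everything here, including the state names, actions, and weights, is a hypothetical illustration rather than Eggplant DAI’s actual API or algorithm.

```python
import random

# Hypothetical, simplified model of an application's states and actions.
# This is an illustrative sketch of model-based test generation in general,
# not Eggplant DAI's actual API or algorithm.
MODEL = {
    "login":     [("enter_credentials", "dashboard")],
    "dashboard": [("open_map", "map_view"), ("open_reports", "reports"), ("log_out", "login")],
    "map_view":  [("add_marker", "map_view"), ("back", "dashboard")],
    "reports":   [("export_pdf", "reports"), ("back", "dashboard")],
}

# Assumed weights standing in for field feedback: higher numbers mean the
# action is seen more often in production, so it gets explored more often.
FEEDBACK_WEIGHTS = {"open_map": 5, "open_reports": 2, "add_marker": 4}

def generate_test_case(start="login", steps=6, rng=random):
    """Walk the model from the start state, preferring actions that
    field feedback says matter most, and return the path as a test case."""
    state, path = start, []
    for _ in range(steps):
        actions = MODEL.get(state)
        if not actions:
            break
        weights = [FEEDBACK_WEIGHTS.get(action, 1) for action, _ in actions]
        action, state = rng.choices(actions, weights=weights, k=1)[0]
        path.append(action)
    return path

if __name__ == "__main__":
    # Each run produces a different, non-linear user journey to execute.
    for i in range(3):
        print(f"scenario {i + 1}:", " -> ".join(generate_test_case()))
```

Because every run draws a different weighted path through the model, the generated scenarios cover far more user journeys than a single scripted sequence, while still concentrating on the behavior users exercise most.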


How are companies already using AI to aid testing efforts?

Meghan Danielson: We are in the infancy of using AI in testing in defense. Over the last year, teams have been asking for a lot more information on both the modeling tool and the AI approach to testing. Everyone gets really excited about it.

Traditionally, scripts and tests are set up to execute in the same way as manually written tests: step 1, step 2, step 3, always completed in that order. When teams start using Eggplant AI, they can apply the model to all of the interactions within their software. This expands testing to include a much wider range of user paths, and it shifts the focus from strict requirement compliance to the usability experience. I think that’s the shift moving forward in defense, especially thinking back on my experience at a defense contractor, where we had a strict list of 500 or 1,000 requirements that we had to test. As teams embrace model-based methods and AI testing, it really expands the ability to test the software and find those issues and bugs that would otherwise never present themselves until the application is out in the wild.
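
As a rough illustration of that shift (again a hypothetical sketch rather than Eggplant’s implementation, with invented state and action names), compare a fixed three-step script with the number of distinct user paths that even a tiny interaction model implies:

```python
# Hypothetical comparison: a traditional fixed script versus the user
# journeys a simple interaction model implies. Names are illustrative only.

FIXED_SCRIPT = ["enter_credentials", "open_reports", "export_pdf"]  # step 1, 2, 3

INTERACTION_MODEL = {
    "login":     [("enter_credentials", "dashboard")],
    "dashboard": [("open_map", "map_view"), ("open_reports", "reports")],
    "map_view":  [("add_marker", "map_view"), ("back", "dashboard")],
    "reports":   [("export_pdf", "reports"), ("back", "dashboard")],
}

def enumerate_paths(state, depth):
    """Recursively list every distinct action sequence of the given depth
    that the model allows, i.e. the user paths a linear script never visits."""
    if depth == 0:
        return [[]]
    paths = []
    for action, next_state in INTERACTION_MODEL.get(state, []):
        for tail in enumerate_paths(next_state, depth - 1):
            paths.append([action] + tail)
    return paths

if __name__ == "__main__":
    paths = enumerate_paths("login", depth=4)
    print(f"fixed script covers 1 path; the model yields {len(paths)} paths of length 4")
    for p in paths[:5]:
        print("  ", " -> ".join(p))
```

Even this toy model yields many more journeys than the single scripted path, which is where the defects hiding off the happy path tend to surface.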


Learn more about AI's influence on embedded defense software

To hear more from Meghan, John, and Peter, head to SIGNAL’s webinar, where they also address questions from an audience of defense technologists. Additional topics include using automation tools to generate an audit trail, automating workflows so engineers can use their time in the lab more efficiently, and how QA teams can get started with test automation and AI-driven tools. Click here to watch the full discussion.