Six Ways Intelligent Automation Helps You Optimize Mission Outcomes

By John Bates | 8/2/18

Testing is critical for organizations like NASA, the US Army, Northrop Grumman, BAE Systems, Lockheed Martin, MBDA, the UK's Ministry of Defence, and the Metropolitan and Scottish Police, where lives are on the line. As we've worked with customers like these over many years, we've seen that testing is about far more than making sure the system works: it's about ensuring we test for mission success and continuously optimize mission outcomes. Whether you're designing systems for command and control (C2); to support complex police operations, such as hostage negotiations; or to shoot down an enemy missile, you should plan your testing and monitoring strategy to continuously test against the desired mission outcomes.

Based on feedback from our customers in aerospace and defense, we've identified six critical automated testing needs; below, we discuss how each is being addressed with Eggplant solutions.

1. Test in terms of user journeys and desired mission outcomes. You know that a piece of software or system carries out a mission, and you know what success looks like. But can you model your testing to help ensure you achieve that outcome? The mission experts using these systems are likely to be military-qualified personnel or law-enforcement specialists, not programmers, which means writing a set of test scripts isn't always the most natural way to capture the test process. A more natural approach is to provide a scriptless modeling environment that enables non-programmers to capture the system states, interaction points, and desired mission outcomes. Doing so creates a digital twin in which the real system's states, properties, and objectives are captured and visualized. Automation engineers can then attach snippets of automation scripts to bring that model to life. These snippets power an automation engine (see below) that drives the model as if human users and other systems were really interacting with it, as illustrated in the sketch after this item.
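
To make the idea concrete, here is a minimal, hypothetical sketch of such a model in Python. It is not Eggplant's actual modeling format; the state names, actions, and "snippets" are invented purely to show how states, interaction points, a desired mission outcome, and attached automation snippets fit together.

```python
# Minimal, hypothetical sketch of a system model: states, interaction points
# (actions), and a desired mission outcome, with small automation "snippets"
# (plain callables) attached to each action. Illustrative only; this is not
# Eggplant's model format.

def open_c2_console():      print("snippet: launch C2 console")
def request_air_support():  print("snippet: press 'Request air support'")
def confirm_coordinates():  print("snippet: enter and confirm grid reference")

MODEL = {
    "states": ["Idle", "ConsoleOpen", "RequestSent", "SupportConfirmed"],
    "actions": {
        # (from_state, action_name): (snippet, to_state)
        ("Idle", "open_console"):        (open_c2_console,     "ConsoleOpen"),
        ("ConsoleOpen", "request_air"):  (request_air_support, "RequestSent"),
        ("RequestSent", "confirm_grid"): (confirm_coordinates, "SupportConfirmed"),
    },
    "mission_outcome": "SupportConfirmed",   # success = reaching this state
}

def run_journey(model, journey):
    """Drive the model along a named journey and check the mission outcome."""
    state = "Idle"
    for action in journey:
        snippet, next_state = model["actions"][(state, action)]
        snippet()                 # the automation engine executes the snippet
        state = next_state
    return state == model["mission_outcome"]

if __name__ == "__main__":
    ok = run_journey(MODEL, ["open_console", "request_air", "confirm_grid"])
    print("mission outcome achieved:", ok)
```

The point of separating the model from the snippets is that mission experts can describe states and outcomes without programming, while automation engineers supply only the small pieces of glue that drive the real system.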

2. Anticipate real-world stress and unplanned scenarios. Picture this: you're under fire in hostile territory and in charge of calling in air cover. What if you don't always press the right buttons in the right sequence? Military training is designed to instill the right behavior regardless of pressure, but we can't always account for human error. And what if the system behaves in an unexpected way due to geography, temperature, pressure, hardware version, network conditions, or any of an infinite number of other factors? It's nearly impossible to anticipate human and system behavior in all situations, which is why testing only the happy paths isn't good enough. Automated assistance is ideal for exploring these unplanned scenarios, and Eggplant takes the approach of AI-assisted test automation. An AI test recommendation engine can navigate the potential journeys through the system model. It can complement regression testing with automated exploratory testing, guided by a learning model that feeds on data about system changes, historical test results, and real usage patterns. Using AI assistance, it's possible to automatically explore strange permutations that simulate humans and systems under stress, along the lines of the sketch below.
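
As an illustration only, and not Eggplant's recommendation engine, the sketch below shows the general idea: weight each candidate action by simple risk signals such as recent code churn, past failures, and how rarely real users exercise it, then bias an exploratory walk through the model toward the riskiest, least-travelled paths. The signal values and weights are invented.

```python
import random

# Illustrative sketch of exploratory-test recommendation (not Eggplant's engine):
# score each candidate action using simple risk signals, then bias a random
# walk toward high-risk, rarely exercised paths.

# Hypothetical signals per action: recent code churn, historical failures,
# and how often real users exercise it (0..1).
SIGNALS = {
    "open_console": {"churn": 1, "past_failures": 0, "usage": 0.9},
    "request_air":  {"churn": 4, "past_failures": 2, "usage": 0.3},
    "confirm_grid": {"churn": 2, "past_failures": 1, "usage": 0.1},
    "cancel":       {"churn": 0, "past_failures": 0, "usage": 0.05},
}

def risk_score(sig):
    # Higher churn and failure history raise risk; low real-world usage means
    # the path is rarely exercised, so exploring it is worth more.
    return 1 + 2 * sig["churn"] + 3 * sig["past_failures"] + (1 - sig["usage"])

def recommend_walk(candidate_actions, steps=5, seed=42):
    """Pick a sequence of actions, weighted toward risky, under-tested ones."""
    rng = random.Random(seed)
    weights = [risk_score(SIGNALS[a]) for a in candidate_actions]
    return [rng.choices(candidate_actions, weights=weights, k=1)[0]
            for _ in range(steps)]

if __name__ == "__main__":
    print(recommend_walk(["open_console", "request_air", "confirm_grid", "cancel"]))
```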

3. Enable third-party testing but protect classified IP. It's common to bring in different teams to test the product you create, such as a drone control system. While you want that team to be able to interact with your product as a user would, you don't want them gaining access to classified, top-secret intellectual property. It's critical to take a non-invasive automated testing approach, controlling the system through the UI and/or APIs, without the need to look inside the black box or touch the code.
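
For illustration, a black-box check like the hypothetical one below exercises only a public API surface; the URL, endpoint, and payload fields are made up, and the system's code is never imported or inspected, which is what makes this kind of test safe to hand to a third-party team.

```python
import json
import urllib.request

# Hypothetical black-box API check: drives the system only through its public
# HTTP interface. The host, endpoint, and payload fields are placeholders.
BASE_URL = "https://drone-control.example.test"

def send_waypoint(lat, lon, alt_m):
    """POST a waypoint exactly as an external client would; no access to internals."""
    body = json.dumps({"lat": lat, "lon": lon, "alt_m": alt_m}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/waypoints",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status, json.loads(resp.read())

if __name__ == "__main__":
    status, payload = send_waypoint(51.5074, -0.1278, 120)
    assert status == 201, f"unexpected status {status}"
    assert payload.get("accepted") is True, "waypoint was not accepted"
```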

4. Ensure the end-to-end UX won't fail in the field. The last thing you want to see in a tense situation in the field is an error screen. What happens if you don't have system connectivity, you forget your password, or you type it in wrong? This is why it's so critical to test through the eyes of the user. Say you test on a platform running Android version 2, but the tablets your personnel use in the field run version 3. You might assume the two behave the same, but in use a critical screen doesn't render properly and is unreadable. The underlying software may be functioning fine, but it isn't rendering correctly, and you can only discover this by using the software as a user would. Issues can arise around any aspect of the user experience: functionality, performance, usability, connectivity. Eggplant enables what we call fusion testing: functional, performance, and usability testing run in parallel on every platform, through UIs and APIs, using a connected device lab of different device variants (even voice-driven ones).
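
A rough sketch of that idea, with made-up platform names and a stand-in check, is to define a single user journey once and fan it out across a device matrix in parallel. A real device lab would drive actual hardware or emulators; here the Android 3 failure is hard-coded simply to mirror the rendering scenario described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of running one user journey across a device matrix in
# parallel. Platform names and the check are placeholders; a real device lab
# would drive actual hardware or emulators through the UI.
DEVICE_MATRIX = [
    {"platform": "Android", "version": "2", "screen": "7in tablet"},
    {"platform": "Android", "version": "3", "screen": "7in tablet"},
    {"platform": "Windows", "version": "10", "screen": "ruggedised laptop"},
]

def run_login_journey(device):
    """Stand-in for running the same end-to-end journey on one device variant."""
    # Simulate the scenario above: the critical screen fails to render on
    # Android version 3 even though the underlying software works.
    rendered_ok = not (device["platform"] == "Android" and device["version"] == "3")
    return device, rendered_ok

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=len(DEVICE_MATRIX)) as pool:
        for device, ok in pool.map(run_login_journey, DEVICE_MATRIX):
            label = f'{device["platform"]} {device["version"]} ({device["screen"]})'
            print(f"{label}: {'PASS' if ok else 'FAIL - screen unreadable'}")
```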

5. Predict the likelihood of a successful system launch. Before going live, it's critical to get an analysis of the risks of release. Testing often produces lots of technical data about functional quality and performance under different conditions. Taking this data and applying predictive analytics, including a comparison with previous releases and their perceived quality, can give us a measure of the business risk associated with a release. At Eggplant, we created Release Insights for this very purpose, including a dashboard, reporting, and what-if analysis. The more eggplants you see, the better the outcome when you go live.
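
Release Insights itself is a product feature, but the underlying idea can be sketched roughly: combine the current release's test results with how comparable past releases behaved after launch, and reduce that to a single go-live risk figure. The weights, fields, and historical data below are invented for illustration and are not the analytics Release Insights actually uses.

```python
# Rough, invented illustration of turning test data into a release-risk figure.

PREVIOUS_RELEASES = [
    # pass rate, coverage of key journeys, and post-release incidents observed
    {"pass_rate": 0.97, "journey_coverage": 0.90, "incidents": 1},
    {"pass_rate": 0.88, "journey_coverage": 0.70, "incidents": 6},
    {"pass_rate": 0.94, "journey_coverage": 0.85, "incidents": 2},
]

def risk_of_release(pass_rate, journey_coverage, history=PREVIOUS_RELEASES):
    """Score 0 (safe) to 1 (risky) using this release's data plus history."""
    # Baseline risk from this release's own numbers.
    own_risk = 0.6 * (1 - pass_rate) + 0.4 * (1 - journey_coverage)
    # Nudge it using how the most similar past release actually behaved.
    similar = min(history, key=lambda r: abs(r["pass_rate"] - pass_rate))
    historical_penalty = min(similar["incidents"] / 10, 0.3)
    return min(own_risk + historical_penalty, 1.0)

if __name__ == "__main__":
    risk = risk_of_release(pass_rate=0.93, journey_coverage=0.80)
    print(f"estimated go-live risk: {risk:.0%}")
```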

 

6. Track mission progress and recommend improvements. There's lots of pressure to release and go live, but testing doesn't have to end there; we can continue to test in production. There are two aspects here: first, seeing how real users are experiencing the system by measuring whether they're completing key tasks within the desired mission parameters; second, creating synthetic users that interact with the system and test aspects of its functionality and performance in production. If a particular user journey doesn't seem to be working, analysis may identify the problem and suggest a remedy that can be fed back into development to make a measurable improvement to the system. For example, continuous monitoring might reveal that the red text on a blue background in your targeting system's interface is hard to read. By feeding the results of your testing and monitoring back into the dev process, with a recommendation to switch to a more readable combination, targeting effectiveness can be brought back up to successful mission levels.
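
One way to picture the synthetic-user half of this, with an invented threshold and a stubbed journey, is a monitor that repeatedly runs a key journey against the production system and flags it whenever completion time drifts outside the mission parameters. The five-second limit and the journey body are placeholders, not values from any real deployment.

```python
import time

# Illustrative synthetic-user monitor: repeatedly run a key journey against
# production and flag it when completion time exceeds the mission parameter.
MAX_JOURNEY_SECONDS = 5.0   # invented mission parameter

def run_key_journey():
    """Stand-in for driving the real journey (e.g. log in, acquire a target)."""
    time.sleep(0.1)   # placeholder for the real interaction
    return True       # journey completed

def monitor(runs=3):
    results = []
    for i in range(runs):
        start = time.monotonic()
        completed = run_key_journey()
        elapsed = time.monotonic() - start
        within_parameters = completed and elapsed <= MAX_JOURNEY_SECONDS
        results.append(within_parameters)
        print(f"run {i + 1}: completed={completed}, "
              f"{elapsed:.2f}s, within mission parameters={within_parameters}")
    return all(results)

if __name__ == "__main__":
    print("all runs within parameters:", monitor())
```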

Organizations in aerospace and defense are held to higher standards when it comes to testing for accuracy, quality, reliability and delivery. Register now for our webinar on August 7 to explore inherent QA challenges and see for yourself how Eggplant can help you ensure successful mission outcomes.



Written by John Bates

Dr. John Bates is the CEO of Eggplant, a visionary technologist and highly accomplished business leader. He's also the author of the book “Thingalytics: Smart Big Data Analytics for the Internet of Things.”

