The Impact of AI on Software Quality Assurance

By Erik Fogg

Co-Founder and Chief Operating Officer


February 12, 2021



The introduction of AI into software testing looks set to revolutionize software QA.

Software QA is typically seen as an expensive necessity for any development team; testing is costly in terms of time, manpower, and money, while still being an imperfect process subject to human error. By introducing artificial intelligence and machine learning into the testing process, we not only expand the scope of what is testable, but also automate much of the testing process itself.

This can reduce the time and money spent on testing without sacrificing scope or quality, allowing engineering teams to break out of the ‘iron triangle’ of project management. Read on to find out how applying AI and machine learning to the testing process has the potential to completely change the testing landscape.

The problems with testing

Manual testing involves the development of a test suite, as well as the generation of test data to use with the tests. While automatically running test suites on commits can easily be integrated into the development pipeline to prevent failed code from being deployed, a test suite is still only as good as the test cases and test data it contains. The developers or testers who create the tests are human, which means mistakes can be made and test cases can be missed. The number of tests will inevitably grow as the software grows, which makes it even harder to stay on top of a test suite and ensure a good level of code coverage.

These challenges can be overcome with the introduction of AI into the testing process. AI can be applied in a number of ways inside a project, from crawling the software to automatically generating a test suite with test data, to visually analyzing software output to spot errors that are not easily found with traditional functional tests. 

Visual testing

Image-based learning algorithms can be trained to analyze user interfaces, augmenting the testing process to help ensure that everything on a web page displays correctly. This can be done with fewer errors than traditional functional testing and much more quickly than manual testing. It saves time and money, as functional tests for UI validation are time-consuming to develop and quickly become verbose, making them hard to maintain.

Visual AI-powered assertions revolutionize writing UI functional tests by dramatically reducing the amount of code necessary to test assertions. For example, instead of writing lengthy code to inspect DOM elements, AI-powered assertions analyze output against a target expected output, which is typically a screenshot. If the output matches the screenshot, the test passes. This can immediately highlight differences and also makes it much easier to write better tests that can be applied across different devices and screen resolutions.
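The core idea behind a screenshot-based assertion can be sketched in a few lines. Commercial tools use trained vision models to ignore insignificant rendering noise; the simplified version below, which assumes images are plain 2-D grids of RGB tuples, approximates that with a per-pixel tolerance and an allowed fraction of differing pixels.

```python
# Minimal sketch of a visual assertion: compare a rendered page against
# a baseline screenshot, tolerating small per-channel differences
# (e.g. anti-aliasing) rather than requiring exact pixel equality.

def images_match(baseline, actual, tolerance=10, max_diff_ratio=0.01):
    """Compare two images given as 2-D grids of (R, G, B) tuples.

    A pixel counts as matching if every channel differs by at most
    `tolerance`; the assertion passes if no more than `max_diff_ratio`
    of all pixels differ.
    """
    if len(baseline) != len(actual) or len(baseline[0]) != len(actual[0]):
        return False  # dimensions differ: the layout itself shifted
    diffs = 0
    total = len(baseline) * len(baseline[0])
    for row_b, row_a in zip(baseline, actual):
        for pb, pa in zip(row_b, row_a):
            if any(abs(b - a) > tolerance for b, a in zip(pb, pa)):
                diffs += 1
    return diffs / total <= max_diff_ratio

# A tiny 2x2 "screenshot" with one slightly shaded pixel still passes:
base = [[(255, 255, 255), (0, 0, 0)], [(10, 10, 10), (200, 200, 200)]]
new  = [[(255, 255, 255), (0, 0, 0)], [(15, 12, 10), (200, 200, 200)]]
print(images_match(base, new))  # True
```

A real visual-testing tool layers ML on top of this, learning which regions of a page matter and which (timestamps, ads, cursors) should be masked before comparison.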

AI spidering and user analytics

Generating user journeys takes a lot of time, and in sufficiently mature software, the number of possible user journeys rapidly grows beyond what can feasibly be covered with manually created test cases. AI spidering is used to automate app discovery and can be combined with other testing procedures, such as regression testing, to quickly spot errors introduced in the user journey. It leverages machine learning to build a model that can navigate an app by interacting with UI elements. The model creates a series of paths through an app to automatically generate working patterns that tests can be written against. These tests compare current patterns to the expected working patterns to highlight differences as part of the testing process. This method can very quickly spot errors that might otherwise be hidden behind a very specific series of steps in a user journey that would easily be missed with manual testing.
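Once a spider has learned which actions lead to which screens, enumerating journeys is a graph traversal. The sketch below assumes the discovery phase has already produced a hypothetical screen-to-action model and simply walks it breadth-first to generate the action paths tests would be written against.

```python
from collections import deque

# Hypothetical UI model, as an AI spider might learn it: each screen
# maps the actions available on it to the screen that action leads to.
APP_MODEL = {
    "home":      {"click_login": "login", "click_search": "search"},
    "login":     {"submit": "dashboard"},
    "search":    {"select_result": "detail"},
    "dashboard": {},
    "detail":    {},
}

def discover_journeys(model, start="home"):
    """Breadth-first walk of the UI graph, collecting every action path
    from the start screen to a screen with no further actions."""
    journeys = []
    queue = deque([(start, [])])
    while queue:
        screen, path = queue.popleft()
        actions = model.get(screen, {})
        if not actions:
            journeys.append(path)
        for action, target in actions.items():
            queue.append((target, path + [action]))
    return journeys

for journey in discover_journeys(APP_MODEL):
    print(" -> ".join(journey))
```

Each discovered path becomes a candidate regression test: replay the actions, then compare the observed screens against the model.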

ML-enabled usage analytics can be used instead of or in addition to AI spidering. By watching and learning how end-users use the application, a testing system can identify test cases that are actually traversed by users, instead of every possible pathway through the application, greatly reducing the number of tests necessary to provide complete quality assurance.
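The filtering step described above can be illustrated with a frequency count over recorded sessions. The session data and the share threshold here are illustrative assumptions; a production system would learn the threshold from risk and traffic data.

```python
from collections import Counter

# Sketch of usage-driven test selection: given recorded user sessions
# (sequences of screens visited), keep only the journeys that real
# users actually take often enough to deserve a regression test.
sessions = [
    ("home", "login", "dashboard"),
    ("home", "login", "dashboard"),
    ("home", "search", "detail"),
    ("home", "login", "dashboard"),
    ("home", "pricing"),
]

def journeys_to_test(sessions, min_share=0.4):
    """Return journeys taken by at least `min_share` of sessions."""
    counts = Counter(sessions)
    total = len(sessions)
    return [path for path, n in counts.most_common()
            if n / total >= min_share]

print(journeys_to_test(sessions))  # [('home', 'login', 'dashboard')]
```

Here the login journey appears in three of five sessions and is kept, while rarely traversed paths are dropped, shrinking the suite without ignoring what users actually do.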

Codeless testing

Using a record-and-playback interface is a popular method of generating tests, but the resulting tests are difficult to maintain as UI elements change. AI-powered codeless testing opens up the possibility of creating self-healing test cases that require virtually no maintenance. AI can augment the record-and-playback process by dynamically generating object locators as elements are interacted with. All commands, from mouse clicks to keyboard inputs, are recognized, along with the type of object involved, whether a dropdown option, an input field, or something else.

AI-powered codeless tests are capable of self-healing. By building a model of the objects on a page, the test framework can rediscover UI element locators that have been moved or altered in some way, without the need for manual reconfiguration. This is a major time saver when it comes to developing UI/UX tests, which usually require constant maintenance to adapt to changes.
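A self-healing locator can be sketched as a fallback search: when the recorded selector no longer matches, score the elements on the page by attribute similarity and re-bind the test to the closest match. The element dictionaries, attribute weights, and threshold below are illustrative assumptions, not any particular tool's API.

```python
# Sketch of a self-healing locator. The recorded element snapshot is
# what a record-and-playback tool might have captured at record time.
RECORDED = {"id": "submit-btn", "tag": "button", "text": "Sign in",
            "class": "btn primary"}

def similarity(recorded, candidate):
    """Weighted count of matching attributes (weights are assumptions)."""
    weights = {"id": 4, "tag": 1, "text": 3, "class": 2}
    return sum(w for attr, w in weights.items()
               if recorded.get(attr) == candidate.get(attr))

def locate(recorded, page_elements, min_score=3):
    # First, try the exact id the original selector would have used.
    for el in page_elements:
        if el.get("id") == recorded["id"]:
            return el
    # Self-heal: fall back to the most similar element, if it is
    # similar enough to be trusted.
    best = max(page_elements, key=lambda el: similarity(recorded, el))
    return best if similarity(recorded, best) >= min_score else None

page = [
    {"id": "nav-home", "tag": "a", "text": "Home", "class": "nav"},
    # The id was renamed in a refactor, but other attributes survive:
    {"id": "login-btn", "tag": "button", "text": "Sign in",
     "class": "btn primary"},
]
print(locate(RECORDED, page)["id"])  # login-btn
```

The test keeps working across the refactor because matching on several weighted attributes is more robust than any single hard-coded selector.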

Continuous Verification

Rather than defining testing as a specific stage in the CI/CD pipeline, continuous verification allows testing to run at every stage of the development process. AI-driven continuous verification can automatically carry out risk assessment on new releases by tracking thousands of metrics at each stage of the development process, handling machine log data much faster than is manually possible.

These risk assessments can be used as part of an automated decision-making deployment process. If a deployment is judged to be too risky, AI can be used to automatically roll back or roll forward deployments to prevent unstable code from remaining in production. Not only does this save 3 a.m. emergency calls to tech experts, but AI is also capable of error diagnosis and triage, so errors, warnings, and exceptions can automatically be understood and categorized by severity, further reducing the dependency on experts to determine the risk level of errors.
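The deploy-gate logic can be sketched as a drift score: compare each post-deploy metric against its pre-deploy baseline, weight the drifts, and roll back when the aggregate crosses a threshold. The metric names, weights, and threshold here are illustrative assumptions; a real system would track thousands of learned signals rather than three hand-picked ones.

```python
# Sketch of an automated deployment gate driven by metric drift.

def risk_score(baseline, current, weights):
    """Sum of weighted relative drifts of each metric from baseline."""
    score = 0.0
    for metric, weight in weights.items():
        base = baseline[metric]
        drift = abs(current[metric] - base) / base if base else 0.0
        score += weight * drift
    return score

def deploy_decision(baseline, current, weights, threshold=1.0):
    """Return 'rollback' when aggregate risk exceeds the threshold."""
    if risk_score(baseline, current, weights) > threshold:
        return "rollback"
    return "keep"

baseline = {"error_rate": 0.01, "p95_latency_ms": 220, "exceptions_per_min": 2}
after    = {"error_rate": 0.05, "p95_latency_ms": 400, "exceptions_per_min": 9}
weights  = {"error_rate": 0.5, "p95_latency_ms": 0.3, "exceptions_per_min": 0.2}

print(deploy_decision(baseline, after, weights))  # rollback
```

Here the error rate quintupled after the deploy, which alone pushes the weighted score past the threshold, so the gate rolls the release back without waiting for a human to read the dashboards.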

AI and the future of software testing

Many of the potential applications of AI and ML-based approaches to software testing are still in their infancy, and their adoption in the software testing industry is not yet widespread. Even so, AI-driven approaches demonstrate the potential not only to widen the scope of what is testable within software, but also to automate far more of the testing process. Software QA is one of the most expensive pieces of the development process, so the potential savings in both time and money are enormous for development teams.

Erik Fogg is a Co-Founder and Chief Operating Officer at ProdPerfect, an autonomous E2E regression testing solution that leverages live user behavior data.
