By Cassandra Balentine
Automation tools help speed the testing process, eliminating the need to manually perform monotonous tasks by running scripts without human intervention.
According to a recent MarketsandMarkets study, "Automation Testing Market by Component, Testing Types, Endpoint Interface, Organization Size, Vertical, and Region – Global Forecast to 2024," the automation testing market is expected to be worth $28.8 billion by 2024. As artificial intelligence (AI) enters the picture, it is poised to disrupt and evolve how software is tested. AI brings additional promise for improved automation in software testing.
Automation Versus AI
Automation and AI are often trying to solve the same challenge in software testing, eliminating manual touch points and speeding the process so that software releases are turned around faster. However, it is important to differentiate the two terms.
“Automation can solve simple use cases by applying rules to data in order to generate a known response,” offers Oren Rubin, founder/CEO, Testim. For instance, in testing a website, one can automate the action of clicking “add to cart” and then validate that the correct item and price appeared in the shopping cart. “However, suppose that a developer changes the location of the add to cart button, the color of the button, or the text to say ‘add to bag’ instead? Automation wouldn’t be able to complete the test because it would be looking for the wrong thing.”
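The brittleness Rubin describes can be sketched in a few lines. This is an illustrative stand-in, not any vendor's actual API: the "page" is simulated as a list of attribute dictionaries, and the rule-based test locates the button by its exact label, so any label change breaks it.

```python
# Illustrative sketch (hypothetical names, simulated page): a rule-based
# test that finds the button by exact text and so breaks on any change.

def find_by_text(page, text):
    """Return the first element whose label matches exactly, else None."""
    return next((el for el in page if el["text"] == text), None)

def automated_add_to_cart_test(page):
    button = find_by_text(page, "Add to cart")
    if button is None:
        return "FAIL: locator 'Add to cart' not found"
    # ...click the button and validate item and price in the cart...
    return "PASS"

original_page = [{"text": "Add to cart", "id": "buy-btn"}]
renamed_page  = [{"text": "Add to bag",  "id": "buy-btn"}]  # same button, new label

print(automated_add_to_cart_test(original_page))  # PASS
print(automated_add_to_cart_test(renamed_page))   # FAIL: locator 'Add to cart' not found
```

The second run fails even though the renamed button still performs the same function — exactly the scenario Rubin raises.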
AI adds value to automation by making it easier to create and maintain tests while making the overall testing environment more resilient to change. “By leveraging AI to augment testing, the cost and time savings are dramatic,” says Malcolm Isaacs, head of marketing, function testing, Micro Focus.
There are limits to automation in software testing. Rubin points out that a human user could recognize that the “add to cart” and “add to bag” buttons are essentially performing the same function. “It’s here where AI helps simulate human judgement to provide a result that is more likely to be correct.”

For example, one could script a test that opens a browser, simulates a few interactions with page elements—such as clicking and entering data—and verifies an expected result. The test has been automated, saving the manual time of performing this task. However, he says if the application code changes, the test must be updated or it is likely to fail. “Consider a change where a software developer relocates a button to see if it performs better in the new location. The button still performs the intended function, but the new location would likely cause an automated test to fail.” From here, the QA manager needs to troubleshoot the error, determine whether the test or the application is failing, and modify the test for the new button location. The test is automated but has required a significant amount of work for a simple code change.

A human would see the new button and complete the action. In this scenario, AI could help by inspecting hundreds of different attributes to determine that the button is the correct button, even though its location was changed, and complete the test successfully. The result is that the AI-assisted test appropriately adapted to reflect the change, whereas the automation test did not.
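The multi-attribute matching idea can be sketched as a scoring function. This is a toy illustration under assumed attribute names (real tools weigh hundreds of signals with learned models, not a flat fraction): candidates are scored on how many recorded attributes they still share, and the best match above a threshold is accepted even after the button moves and its label changes.

```python
# Illustrative sketch of AI-style element matching (hypothetical names):
# score each candidate on shared attributes instead of one exact locator.

def match_score(candidate, reference):
    """Fraction of the recorded reference attributes the candidate still shares."""
    shared = sum(1 for k, v in reference.items() if candidate.get(k) == v)
    return shared / len(reference)

def find_element(page, reference, threshold=0.5):
    """Return the best-scoring element, or None if nothing is close enough."""
    best = max(page, key=lambda el: match_score(el, reference))
    return best if match_score(best, reference) >= threshold else None

# The button as recorded at test-authoring time...
reference = {"id": "buy-btn", "type": "button", "css_class": "primary",
             "text": "Add to cart", "x": 100}
# ...after the developer moves it and renames its label:
page = [
    {"id": "buy-btn", "type": "button", "css_class": "primary",
     "text": "Add to bag", "x": 480},
    {"id": "search", "type": "input", "css_class": "field",
     "text": "", "x": 10},
]

button = find_element(page, reference)
print(button["id"])  # buy-btn: 3 of 5 attributes still match, so the test proceeds
```

An exact-text locator would have failed here; the scored match recovers because the id, type, and CSS class still agree even though the label and position changed.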
AI can play a big role in basic testing, for example UI-level testing, where even minor changes to the application break existing tests because the tool fails to recognize objects in the application. “This means any minor change to the application will require significant modification to hundreds, if not thousands, of existing tests. In today’s fast-paced environment, this is an unacceptable reality that customers are being asked to accept,” explains Isaacs. “AI-driven smart testing completely eliminates this issue by enabling automation technologies to recognize objects regardless of their device-specific identifiers, position, behavior, and appearance changes. This allows teams to quickly create a single test that can simultaneously ensure quality on multiple platforms and devices that will be resilient to change, even in the most challenging environment.”
Roles of Automation and AI
Let’s first look at the current role of automation in software testing, then determine where AI fits.
Rubin says automation is primarily used in three areas of software testing: creating repeatable tests, connecting the toolchain, and automating manual analysis tasks. He explains that automation is used to create repeatable tests that can exercise the performance, accessibility, or function of a software program. “Whether unit, integration, or end-to-end tests, automation can help accelerate the creation and execution of tests across multiple platforms to validate new feature functionality or to test that new features didn’t break existing functionality.”
In the second case, automation helps connect the tools in the DevOps toolchain so that testing becomes a natural part of the process of code creation and validation.
Third, a lot of time is spent on manually analyzing test failures. Automation can help analyze test results to determine if they all failed for the same reason, or to pinpoint the root cause.
“Combined, automation helps accelerate test creation and execution so organizations can increase testing coverage and improve quality. It also helps reduce the manual effort involved in testing, which lowers costs and helps accelerate release cycles,” says Rubin.
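The third use — automating the manual analysis of test failures — can be sketched simply. This is a hypothetical illustration, not any product's implementation: failures are grouped by a normalized error signature so that one root cause surfaces once rather than once per failing test.

```python
# Hypothetical sketch of automated failure triage: strip volatile details
# (numbers, hex ids) from error messages, then group failures by the
# resulting signature so shared root causes are reported together.
import re
from collections import defaultdict

def signature(message):
    """Normalize volatile details out of an error message."""
    return re.sub(r"0x[0-9a-f]+|\d+", "<n>", message.lower())

def group_failures(failures):
    groups = defaultdict(list)
    for test_name, message in failures:
        groups[signature(message)].append(test_name)
    return groups

failures = [
    ("test_checkout", "TimeoutError: no response after 30s"),
    ("test_login",    "TimeoutError: no response after 45s"),
    ("test_profile",  "ElementNotFound: #avatar"),
]
for sig, tests in group_failures(failures).items():
    print(f"{len(tests)} test(s) share cause: {sig}")
```

Here three failures collapse into two causes: the two timeouts differ only in their durations, so they are flagged as one likely root cause for a human to investigate.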
Organizations need to keep agility and release velocity as the top priorities for test automation. “Too often, automation generates initial improvements, but test instability and slow authoring mean it remains the bottleneck to faster releases,” he offers.
Most AI-based solutions are commercial applications that incorporate AI techniques to improve automation. “The two areas where AI has generated the most impact are accelerating test creation and improving the stability of tests,” comments Rubin.
With AI-assisted record/playback, testers are able to create test flows in minutes as opposed to hours. “AI-assisted stability features help tests to adapt to changes in the application-under-test, rather than simply fail the test. AI can also identify critical user paths for added focus or suggest tests that should be added to increase coverage,” says Rubin. He adds that AI helps address the challenges that automation creates so that teams can scale test automation projects, increase test authoring agility, and improve release velocity.
Isaacs believes the primary role of software testing is to ensure the customer experiences the application as the delivery teams intended, regardless of the device, browser, or operating system. “In today’s world, ensuring that experience is almost impossible to do manually—this is where automation proves invaluable. The role of automation is to scale testing and provide fast and accurate feedback to the software delivery teams when things aren’t going as planned,” he shares.
By automating testing, customers save time and money by rapidly scaling testing, including automating the test environment and test data provisioning. “Thanks to automation, teams execute a range of tests in a short amount of time and quickly provide a reliable picture of the software’s quality to the users,” adds Isaacs.
Isaacs points out that AI concepts such as machine learning (ML), deep learning (DL), and natural language processing (NLP) are already in use today. ML and DL enable test scripts to understand the behaviors of objects, making the test more reliable in the face of change, especially at the user-interface layer. NLP is used to create tests more quickly by describing objects, performing actions, and understanding the consequences of these actions—all in plain English. “These and other AI techniques, such as computer vision, are helping teams build resilient assets that test more quickly, reliably, and intelligently.”
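The NLP idea — describing test steps in plain English and having the tool translate them into actions — can be illustrated with a toy parser. Real tools use trained language models; this pattern-matching stand-in (all names hypothetical) only shows the shape of the mapping from English steps to structured actions.

```python
# Toy sketch of plain-English test authoring: map sentences like
# 'Click the "Add to cart"' onto structured (action, target, ...) tuples.
import re

PATTERNS = [
    (r'click (?:the )?"(.+)"',             lambda m: ("click", m.group(1))),
    (r'enter "(.+)" into (?:the )?"(.+)"', lambda m: ("type", m.group(2), m.group(1))),
    (r'verify (?:the )?"(.+)" is shown',   lambda m: ("assert_visible", m.group(1))),
]

def parse_step(step):
    for pattern, build in PATTERNS:
        m = re.fullmatch(pattern, step.strip(), re.IGNORECASE)
        if m:
            return build(m)
    raise ValueError(f"Don't understand step: {step!r}")

script = [
    'Enter "socks" into the "search box"',
    'Click the "Add to cart"',
    'Verify the "cart badge" is shown',
]
for step in script:
    print(parse_step(step))
```

A genuine NLP engine handles free-form phrasing and infers the consequences of each action, as Isaacs describes; the sketch only conveys why plain-English authoring lowers the barrier to writing tests.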
Adoption and Evolution
Expect further implementation of AI in software testing as the technology continues to evolve and become more widely accepted.
Testing is an important step in the creation and deployment of quality software. “A buggy release can erode customer goodwill and brand reputation far faster than they took to build,” cautions Rubin. With that said, companies are slow to trust AI-based testing tools. “We believe that we are on the edge of the tipping point. The pain of creating new tests, troubleshooting failures, and maintaining tests in traditional test automation solutions is increasing. Costs are going up as more testers are thrown at the problem and business leaders are pressuring software teams to deliver faster,” says Rubin.
All of this pressure is happening at the same time AI-based test automation tools are improving and maturing. “They are filling feature gaps and creating differentiation,” offers Rubin. AI-assisted test authoring is much faster than coding tests. Tests that are created can adapt to changing application code to improve stability and reuse. AI-assisted troubleshooting can also quickly identify the root cause of a failure and aggregate errors to minimize manual labor.
Isaacs feels that most, if not all, software delivery teams will be using AI—either directly or indirectly—in their software testing. “AI test automation will soon be the only way teams will be able to keep up with the rapid pace of change and the demand for near-instant feedback on changes made to the software. Soon, there will be additional applications of AI in software testing beyond automation, such as the ability to optimize which tests should run based on changes to the software delivery pipeline,” he notes.
Furthermore, Isaacs says AI techniques will offer an opportunity for less technical users to use natural language to create robust test automation rather than having to rely on code.
“As companies get comfortable using AI, they will increase adoption from pilot projects in departments to enterprise-wide deployments,” predicts Rubin. When they do this, they will increase the number of tests they run, which helps train the algorithms that make the AI more accurate. As the AI learns to drive more accurate tests, more companies are likely to see the benefit and adopt the solutions. “It’s similar to the Uber/Lyft phenomenon, where more cars result in lower wait times, which increases the value and demand, which leads to more drivers and available cars,” he shares.
Challenges of AI and Software Testing
All new technologies come with challenges—and AI is no exception.
Specific to the testing world, AI is a young technology with many vendors vying for a piece of the market. “Some have immature solutions and others claim to use AI, but are really just using automation,” warns Rubin. “As a result, there are many skeptical buyers who are investigating AI-based solutions but aren’t yet convinced they can fully replace current automation or manual testers,” he admits.
Additionally, many developers are responsible for end-to-end testing and prefer a coded testing solution rather than a code-less, GUI-oriented solution. “As a vendor, we expose our AI-based Smart Locators so that users can see how they are affecting a test. This increased transparency helps customers understand their impact, yet provides the control to override AI-based decisions until they gain confidence in the system.”
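The transparency-plus-override pattern Rubin describes can be sketched as follows. This is a hypothetical illustration, not Testim's actual Smart Locator implementation: each AI match reports which attributes drove the decision, and a user-supplied override pins the element explicitly, bypassing the AI choice.

```python
# Hypothetical sketch of a "transparent" AI locator: the match explains
# itself, and a manual override always wins over the AI decision.

def smart_locate(page, reference, override_id=None):
    if override_id is not None:  # user override bypasses the AI choice
        el = next(e for e in page if e["id"] == override_id)
        return el, {"reason": "manual override"}
    def score(el):
        return sum(1 for k, v in reference.items() if el.get(k) == v)
    best = max(page, key=score)
    matched = [k for k, v in reference.items() if best.get(k) == v]
    return best, {"reason": "ai", "matched_attributes": matched}

page = [{"id": "buy-btn", "type": "button", "text": "Add to bag"},
        {"id": "search",  "type": "input",  "text": ""}]
reference = {"type": "button", "text": "Add to cart"}

el, why = smart_locate(page, reference)
print(el["id"], why)  # AI pick, with the attributes that justified it
el, why = smart_locate(page, reference, override_id="search")
print(el["id"], why)  # user-pinned element, AI decision bypassed
```

Surfacing the `matched_attributes` is what gives users the visibility — and the override path the control — that builds confidence in the system.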
Isaacs says that ironically, one of the biggest challenges for AI in software testing is testing the AI algorithms themselves. “AI needs to be trained on high-quality data, capable of teaching it the correct behavior across a range of situations so that, when presented with a new situation, it comes to the right conclusion. It is often difficult, or even impossible, for humans to understand exactly how an AI algorithm reaches a specific result, and teams must continuously review their AI test automation and fine-tune the algorithms to deliver even more reliable results.”
The window of time allotted for software testing is constantly tightening. Automation tools help reduce manual processes to speed application delivery. In some instances, AI tools are used to aid the automation used in software development, specifically testing. AIA
May2020, AI Applied Magazine