Good test automation is mission-critical for any scaling software business. But there are still many pain points for testers and developers.
The evolution of agile and DevOps has dramatically increased the pace of software development, shortening life cycles, but testing has generally not kept pace.
Test runtime needs to match the pace of weekly, daily, or even continuous releases without sacrificing quality or coverage. To achieve this, QA teams are turning to machine learning to aid in the software delivery process.
How Is ML Used in Software Delivery?
Artificial intelligence and machine learning are already helping in mobile app development, but they are no silver bullet.
Knowing what can and should be automated in software testing is important to reliably create and deliver stable and secure software.
A good analogy is that of the difference between breaking rocks and sculpting marble. Both follow fundamentally the same process, but the difference lies in the engagement of the mind.
Breaking rocks is a mindless task, making it perfect for automation. But the care and creativity required to create art from marble still require the engagement of the human mind.
Parallels to this can be found in many different areas of software delivery and testing.
Machine learning is not a replacement for QA engineers. It is an augmentation that allows engineers to focus on sculpting marble rather than smashing rocks all day.
☛ Automate Software Testing at Speed & Scale
Software testing, and particularly end-to-end testing, is getting increasingly complicated. User journeys are becoming less manageable as the number of potential paths through an app grows.
Manually tasking QA teams with developing sufficient test cases is a very time-consuming endeavour and can often lead to QA burnout.
Instead, machine learning can be used to probabilistically generate and automate E2E test cases and test data based on user analytics data.
Rather than relying on QA engineers to develop test cases based on their understanding of how customers use an app, user behaviour can be used to develop and train models on how users actually use an app.
This can then be applied to software testing to develop user behaviour models and probabilistically determine how users are likely to use novel features, so that test cases can be developed for them before they have even launched.
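One simple way to model user behaviour for test generation is a Markov chain built from recorded navigation sessions: count how often users move from one page to the next, then sample likely paths as candidate E2E test cases. The sketch below is a minimal illustration of this idea; the session data and page names are hypothetical, and a production system would work with far richer event streams.

```python
import random
from collections import defaultdict

def build_transition_model(sessions):
    """Count page-to-page transitions observed in recorded user sessions,
    then normalize the counts into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            counts[current][nxt] += 1
    model = {}
    for page, nexts in counts.items():
        total = sum(nexts.values())
        model[page] = {nxt: n / total for nxt, n in nexts.items()}
    return model

def sample_test_case(model, start, max_steps=10, seed=None):
    """Walk the model probabilistically to produce one candidate E2E path."""
    rng = random.Random(seed)
    path = [start]
    while path[-1] in model and len(path) < max_steps:
        nexts = model[path[-1]]
        pages, weights = zip(*nexts.items())
        path.append(rng.choices(pages, weights=weights)[0])
    return path

# Hypothetical analytics data: each session is a sequence of visited pages.
sessions = [
    ["home", "search", "product", "cart", "checkout"],
    ["home", "search", "product", "cart"],
    ["home", "product", "cart", "checkout"],
]
model = build_transition_model(sessions)
print(sample_test_case(model, "home", seed=1))
```

Because the sampler follows observed probabilities, frequently travelled flows appear in generated test cases more often, mirroring how real users exercise the app.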
☛ Improve the Effectiveness of Automated Accessibility Testing
ADA compliance essentially involves creating a checklist of around 50 tasks to ensure that your app complies with the WCAG 2.1 AA technical standards.
This involves things like checking that images have alt tags, HTML markup is well-formed, audio can be controlled, videos have captions, etc.
These are all relatively easy to automate already, but simply automating this checklist does not necessarily meet compliance standards.
For example, the difference between simple task automation and automation augmented with machine learning comes up when checking that all images and non-text content have an alt tag describing the content.
It’s easy to check that an alt tag exists for multimedia content, but how do you know the content of that alt tag is correct, meaningful, and matches the content?
By taking advantage of natural language processing, we can check that text exists and that it makes sense and has meaning.
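Even before bringing in full NLP models, simple heuristics can go beyond "does an alt attribute exist" and flag alt text that is unlikely to be meaningful. The sketch below uses only the standard library; the HTML snippet, placeholder word list, and minimum-length heuristic are illustrative assumptions, not a compliance-grade checker.

```python
from html.parser import HTMLParser

# Placeholder words that describe nothing about the actual content.
GENERIC_ALTS = {"image", "picture", "photo", "img", "graphic", ""}

class AltTextAuditor(HTMLParser):
    """Flag <img> tags whose alt text is missing or likely meaningless."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        src = attrs.get("src", "<unknown>")
        if alt is None:
            self.issues.append((src, "missing alt attribute"))
        elif alt.strip().lower() in GENERIC_ALTS:
            self.issues.append((src, "generic or empty alt text"))
        elif len(alt.split()) < 3:
            # Heuristic: very short alt text rarely describes the content.
            self.issues.append((src, "alt text too short to be descriptive"))

html = """
<img src="logo.png" alt="image">
<img src="chart.png" alt="Bar chart of monthly revenue for 2023">
<img src="banner.png">
"""
auditor = AltTextAuditor()
auditor.feed(html)
for src, issue in auditor.issues:
    print(src, "->", issue)
```

A real implementation would replace the word-count heuristic with a language model scoring whether the alt text plausibly describes the image, which is exactly where machine learning augments the basic checklist.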
What’s exciting is the potential for combining this with machine vision. In the future, it’s likely that we’ll even automate the generation of alt tag captions for images and other multimedia.
☛ Test Optimization
Any developer who has used a compiler knows that a lot of what they write gets optimized behind the scenes. This same fundamental principle can be applied to optimizing test suites.
Developing tests is an expensive and time-consuming process, and not every test is of equal value. Knowing what tests actually hold value and match how real users use your software is difficult to do manually.
However, you already know exactly how users are using your app, even if you may not realize it.
By tracking and analyzing user behaviour through your app with product analytics toolsets, you can develop patterns of typical use cases.
This can tell you what the happy paths through your app look like and what areas should receive the most coverage by QA. This helps guide your test creation process without guesswork.
What this also does is point out the pain points where things go wrong. Deviations can be automatically monitored to identify problem areas quickly.
With enough data, heuristics can be applied to predict user paths through novel code before it even hits production.
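At its simplest, this kind of analysis is frequency counting over recorded user paths: the most common paths are the happy paths that deserve the most coverage, while rare deviations flag possible pain points. The sketch below illustrates the idea with hypothetical session data; real product analytics would feed in far larger event streams.

```python
from collections import Counter

def prioritize_paths(sessions, top_n=2):
    """Rank user paths by frequency: frequent paths get test coverage first."""
    counts = Counter(tuple(s) for s in sessions)
    ranked = counts.most_common()
    happy = [list(path) for path, _ in ranked[:top_n]]
    # Rare paths may indicate pain points or broken flows worth investigating.
    rare = [list(path) for path, n in ranked if n == 1]
    return happy, rare

# Hypothetical recorded sessions from a product analytics tool.
sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product", "checkout"],
    ["home", "product", "checkout"],
    ["home", "search", "error"],  # a deviation worth a closer look
]
happy, rare = prioritize_paths(sessions)
print("cover first:", happy[0])
print("investigate:", rare)
```

Ranking paths this way turns test creation from guesswork into a data-driven priority list.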
☛ Reduce Test Maintenance
Keeping test runtime to a minimum is critical, but mature software often runs into a common problem. As the software applications grow, so too do test suites. These test cases are not static and require maintenance to stay current.
Areas particularly susceptible to change, such as UI/UX, can be very time-consuming for QA engineers, who must constantly maintain test cases to keep them from breaking when UI/UX changes are made.
In addition to cutting down on test runtime, machine learning is also used to develop self-healing tests. This relies on a combination of regression test suite analysis and real-time monitoring to identify when changes occur, then automatically updates tests so they remain relevant and do not fail. It helps to treat UI/UX elements as a collection of dozens of identifiers, rather than relying on a single ID tag.
By doing so, UI tests can adapt to modifications and change automatically, dramatically reducing the time required for test maintenance.
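The core of a self-healing locator can be sketched as fuzzy matching: record many attributes of an element when the test is created, and at runtime pick the candidate that matches the most of them. The fingerprint, DOM snapshot, and scoring threshold below are all illustrative assumptions; real tools work against live browser DOMs rather than plain dictionaries.

```python
def find_best_match(fingerprint, candidates, threshold=0.5):
    """Self-healing lookup: pick the candidate sharing the most attributes.

    If the original id changed but name/class/text still match, the
    test adapts instead of failing outright."""
    def score(candidate):
        keys = [k for k in fingerprint if fingerprint[k] is not None]
        if not keys:
            return 0.0
        hits = sum(1 for k in keys if candidate.get(k) == fingerprint[k])
        return hits / len(keys)

    best = max(candidates, key=score)
    return best if score(best) >= threshold else None

# Fingerprint captured when the test was first recorded (hypothetical data).
recorded = {"id": "submit-btn", "name": "submit", "class": "btn primary",
            "text": "Place order", "type": "button"}

# After a UI refactor, the id changed but the other attributes survived.
current_dom = [
    {"id": "order-submit", "name": "submit", "class": "btn primary",
     "text": "Place order", "type": "button"},
    {"id": "cancel-btn", "name": "cancel", "class": "btn",
     "text": "Cancel", "type": "button"},
]
match = find_best_match(recorded, current_dom)
```

Here the renamed submit button still scores 4 of 5 attributes, so the locator heals rather than breaking the test; the threshold guards against matching a completely unrelated element.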
Intelligent fuzzing will someday be used to repair tests that need to adapt to a different workflow.
What Lies Ahead: More Sculpting
Automation relieves QA engineers of many of the time-consuming and repetitive tasks involved in software delivery and testing.
We’re still in the early days of a machine learning revolution. Many companies have not yet realized the significance of just what machine learning means for software delivery and testing.
However, teams that fully understand and embrace how machine learning can help developers and testers will create better software and better tests: software that is more reliable, more secure, and takes less time to test.
Purely manual QA roles will become scarcer; engineers will spend more of their time developing code, focusing on DevOps, site reliability, or infrastructure.
Many will become QA analysts or sages, bringing better quality into the development process, rather than simply maintaining tests. They will solve more interesting problems as these technologies mature.
Eventually, developers will be able to write code and get instant feedback on likely bugs before they even commit to the branch, by using a combination of regression analysis and heuristics to spot potential issues without actually having to run entire test suites.
We’re not there yet, but this goal is fast approaching. Those teams still relying on manual processes for their testing will be the ones that get left behind.
Erik Fogg is a Co-Founder and Chief Operating Officer at ProdPerfect, an autonomous E2E regression testing solution that leverages live user behavior data.