
Why Automate Your Quality Assurance? Best Practice for the Best Digital Products

Rather than relying on a person to single-handedly test the functionality of digital products in the same way a user would, automation uses tools to carry out these tasks. Tests run automatically and can be repeated as many times as needed.

If a team is repeating the same tasks and getting the same outcome, those tasks are ideal candidates for automation. This frees up the QA team to focus on complex, hands-on work instead.

As development progresses, QA can call upon previously written scripts to test new features or functionality instead of having to retest everything from scratch.

This is in stark contrast to the norm, where QA teams are given features and components to manually test in both the backend and frontend of software. Individual aspects and elements have to be carefully scrutinised in order to ensure everything works as expected.

On top of that, QA teams also have to do integration testing, which involves checking components alongside one another to ensure everything works in harmony. These two processes are known collectively as end-to-end system testing – a lengthy exercise that involves lots of people, lots of resources, and more often than not, lots of stress caused by tight timeframes.

The benefits of automation: save time and money, increase coverage and morale, and get better quality and feedback.

QA automation aims to overcome these issues and provide value in other areas. However, that doesn’t mean the journey towards implementation and execution is easy or effortless.

Here’s a closer look at some common considerations and misconceptions, as well as test automation best practices to deliver the very best digital products.

Test Automation – Considerations and Misconceptions

Broader skill set

Automation brings different skill requirements. For example, more time will be spent developing the automated tests, alongside making improvements and adjustments as needed. This means team members are essentially required to “write code” in order to deliver automation capabilities.

Investment vs. return

Investing in automation can seem cost-prohibitive, especially if it can’t guarantee a return. On longer projects, automation almost always makes sense, as reusability will outweigh the cost of writing and maintaining scripts. On shorter projects, this might not be the case, so the investment should be assessed for appropriateness.

Human interaction

It’s safe to say that automated testing reduces human error and provides faster, more consistent results. But human interaction remains an essential part of software development, providing the logic, reasoning and expert opinion needed to interpret test outcomes.

Incorrect expectations

Automation is all about repeatability. After all, is it really worth the effort of automating a test if you’re never likely to run it again? Having incorrect expectations from the get-go will negate the benefits of automation.

Incorrect timing

Some testers wait until a feature is fully developed and stable before writing an automated test script to cover it. However, new-feature bugs almost always outnumber regression bugs, so if you’re going to spend time automating tests, it makes sense to write them in parallel with development and determine the appropriate test coverage early – that is, whether the feature should be covered by a unit test, an API/integration test, or a user interface (UI) test.

Team buy-in

Automation is a shared responsibility within the development team, where both streams – development and QA – collaborate to ensure appropriate testing frameworks and coverage are applied.

Test Automation Best Practice

Identify the right tests to automate

Typically, the right test cases for automation are those that need to be run frequently, cover areas of risk, and make configuring test conditions and executing tests more efficient. It is also important to determine the appropriate test type – whether the case should be represented as a unit test, an API/integration test, or a user interface (UI) test – as the sketch below illustrates.
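As a rough illustration, here is how the same behaviour might be covered at each of those three levels. This is a minimal sketch using Playwright and TypeScript; the formatDonationTotal() helper, the /api/totals endpoint and the donation-total test ID are illustrative assumptions rather than part of any real project.

```typescript
import { test, expect } from '@playwright/test';
import { formatDonationTotal } from '../src/format'; // hypothetical module

// Unit level: pure logic, no browser or network required.
test('formats a donation total for display', () => {
  expect(formatDonationTotal(1234.5, 'en-AU')).toBe('$1,234.50');
});

// API/integration level: exercises the service contract directly
// (assumes a baseURL is configured in playwright.config.ts).
test('totals endpoint returns a well-formed payload', async ({ request }) => {
  const response = await request.get('/api/totals');
  expect(response.ok()).toBeTruthy();
  expect(await response.json()).toHaveProperty('total');
});

// UI level: reserved for the flows users actually see.
test('dashboard shows the formatted total', async ({ page }) => {
  await page.goto('/dashboard');
  await expect(page.getByTestId('donation-total')).toHaveText(/\$[\d,]+\.\d{2}/);
});
```

The general rule of thumb is to push coverage as far down this stack as possible: unit tests are cheapest to run and maintain, while UI tests are reserved for behaviour that can only be observed through the interface.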

Test early, test often

The earlier QA automation is implemented, the better. That way developers won’t spend their time and effort building products that are fundamentally flawed. Also, the more testing you perform, the more bugs you’ll find.

Pick a testing tool that suits your requirements

Key points to consider when choosing an automated testing tool include support for your platforms and technology, flexibility for testers of varying ability, as well as features, functionality and ease of use.

Know your script language

QA automation is only as good as the script it’s based on. Therefore, it makes sense to leave the task of writing automated test scripts in the hands of QA engineers with expert knowledge of the language in question.

Create good quality test data

Readily available test data that is well structured and of good quality makes writing scripts a lot easier. It also makes it easier to extend existing automated tests as the application evolves.
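One common way to keep test data consistent is a small factory with sensible defaults, so each test only spells out the fields it actually cares about. The sketch below is illustrative; the User shape and field names are assumptions.

```typescript
// A minimal test-data factory, assuming a hypothetical User shape.
interface User {
  id: string;
  name: string;
  locale: string;
  donationsCompleted: number;
}

// Sensible defaults keep individual tests short; overrides make intent explicit.
export function buildUser(overrides: Partial<User> = {}): User {
  return {
    id: 'user-001',
    name: 'Test User',
    locale: 'en-AU',
    donationsCompleted: 0,
    ...overrides,
  };
}

// Usage: only the fields that matter to this test are spelled out.
const germanUser = buildUser({ locale: 'de-DE', donationsCompleted: 12 });
```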

Ensure testing is resistant to UI changes

It’s normal for an application’s UI to change between builds. But by giving your controls unique, stable identifiers, automated tests become far more resistant to those changes.
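For example, a UI test that locates controls by a dedicated test identifier survives layout and styling changes that would break a selector tied to the page structure. The sketch below uses Playwright; the route, test IDs and the data-testid convention are assumptions for illustration.

```typescript
import { test, expect } from '@playwright/test';

test('user can submit the contact form', async ({ page }) => {
  await page.goto('/contact');

  // Brittle: breaks as soon as layout or styling changes.
  // await page.locator('div.form > div:nth-child(3) > button.btn-primary').click();

  // Resilient: tied to a unique, purpose-built identifier that survives UI changes.
  await page.getByTestId('contact-submit').click();

  await expect(page.getByTestId('contact-confirmation')).toBeVisible();
});
```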

The Value of Automation – Testing Vodafone Foundation’s DreamLab

Testing is an integral and essential part of the end-to-end development process at Transpire. From integration to user interface testing, our approach is completely integrated in order to provide instant feedback to the development team.

Although successful, this process showed its limitations when DreamLab needed to expand to additional regions and be localised to support multiple languages.

The solution to support this requirement and complement the existing automated test script suite was snapshot visual testing.

This form of checking renders screens that would otherwise be difficult to access, injects textual and region-specific content, compares the result to a baseline, and then reports on whether each screen is presenting as expected.
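To give a sense of how locale-driven snapshot checks can be structured, here is a minimal sketch using Playwright’s built-in screenshot comparison. The locales, routes and file names are illustrative assumptions, not the actual DreamLab suite, and stable baselines assume a consistent browser environment in CI.

```typescript
import { test, expect } from '@playwright/test';

const locales = ['en-AU', 'de-DE', 'es-ES', 'pt-BR'];
const screens = ['/home', '/projects', '/settings'];

for (const locale of locales) {
  for (const screen of screens) {
    test(`renders ${screen} correctly in ${locale}`, async ({ browser }) => {
      // Each test runs in a fresh context with the target locale injected.
      const context = await browser.newContext({ locale });
      const page = await context.newPage();
      await page.goto(screen);

      // Compares against a stored baseline image and fails on visual drift.
      await expect(page).toHaveScreenshot(`${locale}${screen.replace('/', '-')}.png`);

      await context.close();
    });
  }
}
```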

Without this form of automation, each language that DreamLab now has available would need to be tested manually, which is neither scalable nor sustainable.

Our snapshot tests now run daily against each screen permutation, for all platforms, on an array of test devices and against each region within a matter of minutes.

158 days – manual regression testing time without automation

20 days – regression testing time with the new automation strategy

84% – reduction in effort per regression cycle

2.5x – increase in device coverage

87% – increase in release frequency

Do you need help with your own test automation strategy? Get in touch with the testing team at Transpire and we’ll show you the way.
