Managing an effective and sustainable Quality Assurance process for websites and apps has become steadily more demanding over the last decade.
As the device landscape has continued to expand and waterfall development strategies have been superseded by more agile methodologies, QA teams have seen the scope of their testing challenge grow even as testing timelines shrink.
The QA Challenge: What is quality assurance?
The demands of QA-ing websites and apps have changed significantly in recent years, not least the practicalities of assembling a bank of real-world devices that reflects the ever-changing face of the digital world. With the huge variety of device/browser combinations that could be accessing your site, simply acquiring and maintaining the hardware needed to test them all can be an impossible challenge for even a moderately sized internal team.
At the same time, the challenge of performing comprehensive testing that keeps pace with the speed of innovation demanded by a hyper-competitive marketplace all too often results in quality compromises, bottlenecks and delays.
Organisations have increasingly had to choose between quality and velocity, putting web applications live that have not been adequately tested simply to maintain momentum and competitive edge.
This, of course, can be counterproductive, as websites and apps littered with defects and usability issues can cause PR and financial disasters.
QA fails: when re-platforming goes wrong
Aviva found this when their own re-platforming project went spectacularly wrong. Their bungled migration between platform service providers resulted in a £7.2 million loss, when whole swathes of functionality on their apps and websites suddenly became unavailable to clients for an extended period. Customer complaints went viral and received widespread media coverage. The problems were so complex that Aviva had to publish a constantly updated online whiteboard listing known issues and when they would be resolved.
Launching without full visibility of potential issues can lead to nasty surprises for brands whose reputations hinge on the success and transparency of their digital customer experience; they need full confidence in their testing capability to drive the right decision making.
Growing your QA team to meet the challenge?
A new approach to testing may be needed to face the QA challenges that have arisen in this changing digital landscape.
But in the last decade, dedicated QA teams within digital companies have largely stayed the same size, shrunk or been swallowed up altogether by development teams as organisations have cut back or restructured. Adding more team members may be an expensive dream for some QA managers, and may be missing the point anyway.
The old waterfall QA model, in which an internal team of testers works with in-house developers to plan exhaustive testing conducted in a linear way before a big release, is not how many companies need to operate. Certainly not when budgets are tight, development is agile, and testing requirements are complex and constantly changing.
Quality assurance at speed and scale
In a world of continuous deployment, businesses need a testing function with tools that can cover the full gamut of approaches, delivering results at speed and scale.
They need the ability to deliver rapid, targeted new-functionality and regression testing both before and after each feature release; a simple sketch of the post-release side follows below.
But QA teams also need skilled, developer-level resource for security testing, API testing and collaborative exploratory testing, which can augment internal efforts at critical points in the life cycle of a digital product (such as a one-off re-platforming event).
And then there is the need for non-functional usability testing. With user experience becoming central to optimising conversion, usability tests can deliver extraordinary insights into how specific changes to customer journeys and design can improve performance and revenue.
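To make the regression point above concrete, here is a minimal sketch, not drawn from any specific tool or service mentioned in this article, of an automated post-release smoke check. The pages listed in KEY_PAGES are hypothetical placeholders, and the sketch assumes Python with the requests and pytest libraries installed.

import requests
import pytest

# Hypothetical list of critical pages to re-check after every release
KEY_PAGES = [
    "https://www.example.com/",
    "https://www.example.com/login",
    "https://www.example.com/checkout",
]

@pytest.mark.parametrize("url", KEY_PAGES)
def test_key_page_is_available(url):
    """Fail quickly if a critical page stops responding after a deployment."""
    response = requests.get(url, timeout=10)
    assert response.status_code == 200

Run with pytest, a check like this gives a fast, repeatable signal after each deployment; the skilled manual and exploratory testing discussed here covers everything that automation of this kind cannot.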
Is outsourcing the answer?
All this is why using outsourced and crowd-sourced testing agencies has become such a vital part of the QA landscape in recent years.
But not all agencies are created equal.
The ability to draw down professional manual testing resources on an ongoing basis, with access to thousands of testers for regression, functionality and usability tests around the clock, can clearly help you scale your QA process and capabilities without increasing the size of your team.
But this scale of testing resource is only useful when it can be translated into prioritised, actionable results. Agile organisations, where the commercial and development functions need to make stop/go decisions on development projects together, need to see and understand the impact of the issues being detected. Aggregated results of mass testing can help with this, as can screenshots and videos of the real-world issues being found.
There is a need for large-scale, repetitive manual testing tasks, as well as highly focused, ongoing monitoring and usability testing that can feed back into an organisation quickly and efficiently. Some agencies can offer the full range of these services in a bespoke and cost-effective way, and some cannot.
Conclusion
Expanding your team with full-time QA testers is often not an option, and bringing contractors in at times of need is an expensive and complicated undertaking. Neither really answers the new dimensions of the quality challenges we face. Using the right combination of outsourced testing resource can bring the right device coverage, pace and scale to the QA process for a digital brand. It can also free up existing QA resource to work more closely with the development and commercial teams, helping them plan strategies more effectively in the long term and focus on ongoing optimisation and conversion challenges.