What is regression testing?
For the uninitiated, let’s start off with the meaning of regression testing.
Regression testing is the testing that follows a deployment or release – be that bug fixes, new features, configuration changes or even content updates – to verify that the system continues to function as it should, and specifically that the areas untouched by that deployment still behave as expected. Have those changes had an adverse impact anywhere? Have they introduced defects or altered the behaviour of a flow or specific function? In a nutshell, it checks whether anything has inadvertently changed.
Now, let’s focus on change for a moment. We live in an age where if you are slow to market, lack innovation or an appetite for change, you will soon fall behind your competitors. This, in turn, impacts the financial success of your company. Regular, iterative innovation is a necessity for success. Peter Drucker, “whose writings contributed to the philosophical and practical foundations of the modern business corporation”, famously said “innovate or die”. If you don’t do something first – and do it well – others will, and they will reap the riches.
In order to drive that innovation there are many components that need to coalesce – some of which aren’t the focus of this particular article – but given we are discussing regression testing, we’ll focus on that.
Regression testing needs to be effective, and it needs to be efficient. “How do I do that?” I hear you ask. I’ll break this down into the following areas:
- How? Is it manual or automated?
- What? What is in scope of the regression testing? How is that regression pack prioritised?
- When? How often does it happen? Or what event triggers it?
- Where? Configuration and release management.
- Reporting. Who, what and how do we convey the output from this testing?
- Review and refine. The regression pack has run – what now? Do we require additional test coverage, does the existing test coverage require review and refinement based on the reporting output?
How? Manual or automated?
Regression testing in any form adds value; however, depending on…
- the function under test
- the layer of application it exists within
- its priority, or proximity to or inclusion in the change deployed
- or even the frequency of change
…each function will require an approach that suits all of the above variables.
Remember, it needs to be not only effective (i.e. the how and what) but also efficient (i.e. the when and where).
Let’s break this down by application layer and the different types of testing that can be used.
Presentation layer – websites, or more specifically what you can see, are often just the presentation layer sitting on top of a range of other layers and backend systems that culminate in what the user sees and interacts with. This can be tested in both automated and manual forms. We’ll touch on this more later on, but if something passes manual testing consistently, it’s a strong candidate for automation.
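For illustration only, a presentation-layer regression check might look something like this hedged sketch in Python with Selenium – the URL and element locator are hypothetical placeholders, not taken from any real project:

```python
# A minimal presentation-layer regression check (Selenium, Python).
# The site URL and the "basket" element ID are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_homepage_still_shows_basket_link():
    driver = webdriver.Chrome()
    try:
        driver.get("https://www.example.com")          # hypothetical site under test
        assert "Example" in driver.title               # the page still loads
        basket = driver.find_element(By.ID, "basket")  # an unchanged element is still present
        assert basket.is_displayed()
    finally:
        driver.quit()
```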
Service layer – this is commonly the set of processes that pass data back and forth (i.e. an API) between the Presentation Layer and the Data Source Layer. Testing in this layer is generally supported by tools such as Postman and falls within the ‘Automated’ category.
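Postman collections are a common way to run these checks; the same idea can be sketched in code too. Here is a hedged example in Python using the requests library – the endpoint URL and field names are hypothetical, purely to show the shape of a service-layer regression test:

```python
# A service-layer regression check: an existing API endpoint should keep
# returning the same status and shape after a release.
# The endpoint URL and field names are hypothetical placeholders.
import requests

def test_orders_endpoint_contract_unchanged():
    response = requests.get("https://api.example.com/v1/orders/123", timeout=10)
    assert response.status_code == 200
    body = response.json()
    # Fields the presentation layer depends on should still be present.
    for field in ("id", "status", "total"):
        assert field in body
```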
Domain Layer – this layer comprises the business logic – this can be manually tested or by using automated tests.
Data Source Layer – this, as you may have guessed, is the data layer – i.e. databases. This can be tested in an automated or manual manner – be that directly (via SQL Studio, for example) or via the Presentation Layer.
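As a rough sketch of a data-layer regression check – written in Python with the built-in sqlite3 module purely as a stand-in driver, and with hypothetical table names – you might assert that a release hasn’t broken referential integrity:

```python
# A data-layer regression check: no order lines should reference a missing order.
# sqlite3 is used only as a stand-in driver; the schema is a hypothetical example.
import sqlite3

def test_no_orphaned_order_lines(db_path="app.db"):
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(
            """
            SELECT COUNT(*) FROM order_lines ol
            LEFT JOIN orders o ON o.id = ol.order_id
            WHERE o.id IS NULL
            """
        )
        orphans = cursor.fetchone()[0]
        assert orphans == 0, f"{orphans} order lines reference missing orders"
    finally:
        conn.close()
```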
Taking a step back from the application layers, a number of other variables influence or determine whether automated or manual testing is used when regression testing a system.
Is the function being regression tested within the same functional space as the change being introduced? Is it upstream or downstream of that change? Does the function – or, more specifically, the requirement it originated from – have a high, medium or low priority (or where does it sit in the MoSCoW prioritisation method)?
Defining the ‘How’ in a regression testing approach or strategy consists of many variables that need to be considered.
Before we move on to the ‘What’, it would be worth reading another article that underpins the efficiency of regression testing and explains how to write test cases with the same effectiveness in your regression testing approach – “The underappreciated skill of test case writing”. Poor test cases = poor testing = inefficient and ineffective testing.
What is in scope of the regression testing and how is that pack prioritised?
We’ve touched on it briefly, but a range of variables will determine what is included in your regression scope – be that automated or manual. However, at a high level – putting those variables aside – there are two predominant options:
- Full Regression
- Partial Regression
We needn’t touch on the ‘Full’ as that’s self-explanatory, but Partial Regression can take a few different forms.
It can be Targeted – focused regression on a number of functions that are directly impacted or dependent on the change introduced – or the areas that immediately surround that function.
It can be Prioritised – the client or business may request that all P1 functions have regression tests, for example.
Or, it can be Progressive – this involves introducing newly written test cases where suitable ones don’t already exist, or where the existing tests no longer cover the change.
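Taking the Prioritised form as an example, here is a minimal, hypothetical sketch of how a priority-tagged pack might be selected using pytest markers (the “p1”/“p3” markers and test names are illustrative assumptions; markers would normally be registered in pytest.ini to avoid warnings):

```python
# A hedged sketch of Prioritised partial regression using pytest markers.
import pytest

@pytest.mark.p1
def test_checkout_accepts_valid_card():
    ...  # critical (P1) flow: included in every regression run

@pytest.mark.p3
def test_footer_links_resolve():
    ...  # low-priority flow: only included in a full regression run

# Run only the P1 subset:  pytest -m p1
# Run the full pack:       pytest
```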
When? How often does it happen or what event triggers it?
Regression testing can be undertaken at any time; however, it is typically done when change is introduced, or during the latter stages of a test cycle or sprint (though with sprints that is less common and can take a number of approaches). The tests provide confidence that the release has not regressed any existing code.
With automation, it’s often tied into a CI/CD pipeline and configured to take a Partial approach, where coverage is mapped to code branches. This limits the regression coverage to the areas where code has been deployed and provides immediate feedback on whether the deployment was successful or requires remedial action – i.e. a roll-back!
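There is no single way to map coverage to code changes, but as a simplified, hypothetical sketch, a pipeline step might select test modules based on the files changed against the main branch (the directory layout and naming convention below are assumptions, not a prescription):

```python
# A simplified sketch of change-based test selection in a CI pipeline:
# map changed source files to their test modules and run only those.
import subprocess
import sys

def changed_files(base_ref="origin/main"):
    """Return the files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def tests_for(paths):
    """Assume src/<module>.py is covered by tests/test_<module>.py."""
    selected = set()
    for path in paths:
        if path.startswith("src/") and path.endswith(".py"):
            module = path[len("src/"):-len(".py")]
            selected.add(f"tests/test_{module}.py")
    return sorted(selected)

if __name__ == "__main__":
    targets = tests_for(changed_files())
    if targets:
        sys.exit(subprocess.call(["pytest", *targets]))
    print("No mapped tests to run for this change.")
```

In practice this mapping is usually handled by the CI tool or a coverage-based selection plugin, but the principle is the same: run the tests closest to the change first and fail fast.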
Where? Configuration and release management.
Configuration and/or release management are vital to ensuring the testing undertaken is both effective and valid. The objective of anyone working in release or configuration management is to give anyone using an environment somewhere that is as “production-like” as possible, and thus representative of what the end user or customer would see, feel and use. To that end, for any regression testing strategy or approach to do its job well, everything from the performance of the kit it’s hosted on to the test data that flows through the system must be carefully considered, planned and implemented. This ensures the testing provides the insight and output you’re after. Put simply, you want to replicate production as closely as possible (without actually testing in production, of course!)
This applies to both automation and manual regression.
Reporting. Who, what and how do we convey the output from this testing?
The ability to gain insights at both a cycle and lifetime level is key to informing decisions in the here and now, and defining long term software testing strategy and planning. Our advanced reporting platform provides trend analysis and comprehensive quality insights at both a test pack and case level to enable this.
Our data analysis produces a wide range of graphs and charts that clearly convey complex data in a consumable and accessible way to any level of stakeholder, who can use them when making key decisions.
That reporting is not just limited to test cases. It can also provide a clear view on what defects were raised during a cycle, and across the lifetime of a project (assuming multiple cycles have been run). You can also gain insights from a range of other perspectives including issue type, severity, feature, browser, device or operating system.
The key to gaining valuable and actionable insights is first defining what your measure of success looks like (i.e. what test metrics do you need to see), and then making use of our advanced reporting to gain that insight in the form you need.
Digivante’s reporting is extensively customisable, allowing you to achieve the view and insight you need.
Next up, it’s time to take that data and act upon it. In an agile testing world this would be the sprint review or retrospective; in a more traditional setting it may be an end-of-test report or a GO/NO GO meeting.
Review and Refine. The regression pack has run – what now?
Let’s start with ‘Review’ – what does the data tell us? Have we met the required quality measures to enable a GO, or conversely a NO GO that initiates a roll-back or delay and the associated processes around that? Have we given stakeholders enough information to have confidence, or enough data to define a risk level (which they’ll compare against their risk appetite)? If it’s a regulatory change, that appetite will be very low, whereas if it’s a minor, low-traffic cosmetic change, the appetite may be higher. In any case, that appetite should not promote or allow corner cutting. Quality in an agile world is a “collective responsibility”, but the same rings true in more traditional projects. Quality is the priority, and reviewing the data should enable you to make the choices that deliver that quality in an effective and efficient manner.
The key thing to take away from a regression test is: “Have these changes caused any adverse impact on the existing system or surrounding functionality?” and if the answer to that is “No”, that’s mission accomplished from a regression testing perspective.
The flip side is that if they have had an adverse impact, plans will need to be initiated to reverse the changes – often as a back-out or roll-back.
Again, the output from the test is the insight that defines whether it’s a positive outcome, or a negative one.
Moving onto ‘Refine’. As we’ve touched on previously, our software testing and reporting platform provides both test cycle and lifetime level insights into your test cases and defects raised across a cycle or project.
Test cases that regularly pass are ideal candidates for automation (assuming they satisfy the other criteria and nothing else excludes them); those that regularly fail are not. From a trend analysis perspective, the percentage pass rate of a test case, or the frequency of its passing or failing (i.e. it hasn’t failed in x runs), may also play a part in defining its risk level. Do we treat this as a P1 function from a risk perspective? Is this the kind of thing we need to test EVERY time we test, or is it part of a pack we run every quarter, or every fourth test run? It all plays into the planning and strategy of your wider testing, but more specifically your regression testing approach. It ensures you don’t become complacent, because if you do, issues will creep in and confidence in your testing function will dwindle. If that happens, the validity and value of your insights falls off a cliff with your stakeholders – meaning that “collective responsibility for quality” becomes more of an aspiration than a sustainable, always-improving reality.
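To make that concrete, here is a rough, hypothetical sketch of how pass-rate history might feed the decision – the thresholds and data shape are illustrative assumptions, not a feature of any particular reporting platform:

```python
# A rough sketch: flag consistently passing test cases as automation candidates.
# Thresholds, minimum run counts and the data shape are illustrative assumptions.
def pass_rate(results):
    """results is a list of booleans: True for a pass, False for a failure."""
    return sum(results) / len(results) if results else 0.0

def automation_candidates(history, threshold=0.95, min_runs=10):
    """Return the IDs of test cases that pass consistently over enough runs."""
    return [
        case_id
        for case_id, results in history.items()
        if len(results) >= min_runs and pass_rate(results) >= threshold
    ]

# Example: "TC-101" has passed all of its last 12 runs, so it is flagged;
# "TC-202" is still flaky, so it is not.
history = {"TC-101": [True] * 12, "TC-202": [True, False, True, True]}
print(automation_candidates(history))  # ['TC-101']
```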
Regression testing strategy
A poor or badly implemented regression testing strategy can destroy the reputation of a QA team, and in turn can have a reputational and financial impact on the company they work for. Their work must be effective and efficient in its application, considered and thorough, but also self-aware: a regression pack shouldn’t be planned, implemented and then left to run repeatedly for the rest of time without review or refinement.
After each test run the pack should be reviewed against the insight sought – the areas where it was successful and where it was not – and refined to ensure its effectiveness is maintained and improved. One of the seven principles of testing relates to the “pesticide paradox”: if a test is repeated over and over, it will eventually cease to detect defects. Review and refinement are essential to the continued effectiveness of your regression tests.
Digivante’s advanced reporting and trend analysis provides you with the quality measures and insights you need in order to make the key decisions pertaining to your regression planning and strategy.
You want to be in a position that, when you go-live, little to no issues are experienced. The steps taken to plan effectively and efficiently, coupled with the review and refinement of your regression pack, as well as its strategy or approach, should achieve the outcome and insight you’re after.
If you go live and find issues, there is an impact on the customer – things aren’t working or don’t look right – but also on the internal teams that have to work rapidly to fix them. You soon front-load the coming months with large amounts of work that may be unachievable, which impacts morale and trust internally. Couple that with poor customer feedback, which may have a material impact on your profit and loss.
It pays to do regression testing properly
Planning from the outset for an efficient and effective regression testing strategy pays dividends in many ways.
We can provide you with the required expertise to run your regression testing. We provide our clients with all the tools – our Portal, Advanced Reporting and Conversion insights – not only to reduce the time taken to perform your software regression tests but also to increase device coverage. When we supported TextLocal’s website regression testing, we were able to reduce their full regression test time from 2 weeks to just 72 hours, 48 hours of which could be done over a weekend.
Our tooling provides you with detailed insights, but also makes it very clear for any stakeholder what testing has been carried out, on which devices, as well as providing demographic and conversion insights on that testing.
Finally, the broad strokes of this advice – around effectiveness and efficiency – are widely applicable and can be applied to many types of testing. Having your objectives clearly defined (your success criteria), the tooling available, the approach and strategy well considered, and a team that can use all of that will deliver the insights you need to make those key decisions.
If you’re looking at ways to improve your regression testing process, get in touch with one of our specialist experts. We can provide expert insight, scalability with unparalleled depth of device and operating system combinations, and speed, with quality assured throughout.