Regression testing in an Agile scenario: Reviewing implementation options
In my last blog on agile regression, we focused on the challenges in getting the regression test suite built the right way.
Today we look at the different options available for including regression testing in the sprint cycle. Comparing the agile practices employed by different organizations, one notices that regression testing falls under two broad categories:
A) Sprint-level regression testing - focused on testing the new functionality incorporated since the last production release.
B) End-to-end regression testing - the “true” regression testing that covers all the core functionality.
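The split between the two categories can be illustrated with a small sketch. Real teams would typically use a framework feature such as test markers for this; the tiny registry below (all names hypothetical) just shows the idea of tagging each test into one or both suites and running them separately.

```python
# Minimal sketch of tagging regression tests into the two categories.
# SUITES, regression_test and run_suite are illustrative names, not a real API.

SUITES = {"sprint": [], "e2e": []}

def regression_test(*suites):
    """Register a test function under one or more suite tags."""
    def decorator(fn):
        for s in suites:
            SUITES[s].append(fn)
        return fn
    return decorator

@regression_test("sprint", "e2e")          # new functionality: in both suites
def test_new_checkout_flow():
    assert 2 + 2 == 4                      # placeholder for a real check

@regression_test("e2e")                    # core functionality: e2e only
def test_core_login():
    assert "user".upper() == "USER"        # placeholder core check

def run_suite(name):
    """Run every test registered under the given suite; return the pass count."""
    passed = 0
    for test in SUITES[name]:
        test()                             # raises AssertionError on failure
        passed += 1
    return passed

print(run_suite("sprint"))  # -> 1 (only the new-functionality test)
print(run_suite("e2e"))     # -> 2 (new + core functionality)
```

The same tagging discipline is what lets an organization run a quick sprint-level suite every cycle while keeping the full end-to-end suite available for release time.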
Mature agile organizations with a good continuous integration process have automated most of their end-to-end regression test cases, so that the suite can be executed in a very short time window. Theory dictates that any check-in of new code requires successful execution of the end-to-end regression suite as a mandatory prerequisite. But then of course, real life hardly bothers to acknowledge theory, does it?
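What that theoretical check-in gate amounts to can be sketched in a few lines. This is a hypothetical stand-in, not any real CI tool's API: the suite invocation is faked, and in a real pipeline it would shell out to the automated end-to-end suite.

```python
# Sketch of the "theory": a check-in gate that accepts new code only if
# the end-to-end regression suite passes first. All names are illustrative.

def e2e_suite_passes() -> bool:
    # In a real pipeline this would invoke the automated suite
    # (e.g. a pytest or CI job) and return its result; here it
    # simply reports success so the sketch is runnable.
    return True

def gate_checkin(changeset: str) -> str:
    """Admit a changeset only when end-to-end regression is green."""
    if not e2e_suite_passes():
        return f"REJECTED {changeset}: end-to-end regression failed"
    return f"ACCEPTED {changeset}"

print(gate_checkin("feature/checkout-redesign"))
```

The gap between this gate and what organizations actually do is exactly what the three scenarios below try to bridge.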
I have tried to outline three practical scenarios adopted by different organizations in tackling regression testing as part of their release cycles.
Scenario A: “Traditional” Agile testing. This is the approach followed by most organizations. In this scenario, each sprint cycle is followed by a brief burst of sprint-level regression testing. The completed code goes through further regression cycles but is not released into production. After a few successful sprint cycles – typically 3 to 4 – the application goes through one round of end-to-end regression testing before being released to production.
This has the advantage of letting the team focus on functional validity for the sprint without the additional burden of completing the end-to-end regression suite. And since the production release is preceded by an end-to-end regression cycle spanning two weeks, there is also flexibility in the degree of automation needed. Detractors will of course point out that this is more a flexible iteration approach than agile – especially as the production release happens only once every 10 weeks. However, it is a more realistic and practical path to Agile for an organization used to either a waterfall or an iterative model of software development.
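The 10-week cadence mentioned above follows directly from the numbers in this scenario, taking the upper end of the “3 to 4” sprints per release:

```python
# Release cadence arithmetic for scenario A; all figures come from the text.

SPRINT_WEEKS = 2           # one sprint cycle
SPRINTS_PER_RELEASE = 4    # "typically 3 to 4" sprints; upper end assumed here
E2E_WEEKS = 2              # end-to-end regression cycle before production

release_window = SPRINTS_PER_RELEASE * SPRINT_WEEKS + E2E_WEEKS
print(release_window)      # -> 10 weeks between production releases
```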
Scenario B: “Delayed Week” approach.
This approach differs from scenario A in that the sprint-level regression continues beyond week 2 and extends to the middle of week 3. This frees the team from having to stop testing abruptly at the end of week 2 and start the next sprint immediately thereafter.
Especially during the early days of an agile transition, following scenario A results either in the testing team regularly spending weekends at the office – especially if a third-party vendor is doing the testing – or in the team parking critical defects with an implicit understanding to take them up at a future date. That future date never comes, due to a packed backlog, and the organization is left to catch up during the end-to-end regression window.
Giving the team an additional 2 or 3 days to continue testing and fixing defects relaxes this constraint; the resulting productivity and quality improvements are usually enough to justify the deviation from theory.
Scenario C: “Delayed Sprint” approach.
In this approach, organizations do not differentiate between sprint-level regression and end-to-end regression. Instead, there is a single regression cycle that lags by one sprint: the regression test cases executed during sprint 2 cover functionality up to and including the stories delivered in sprint 1. This gives the testing team more time to keep their regression test cases current and automated, avoids maintaining two separate types of regression cycles, and removes the need for a long end-to-end regression cycle at the end of the release. However, it is very challenging to maintain the sanctity of regression testing, and the automation maintenance effort typically increases, since the team is still working against a two-week window with a relatively less stable set of requirements. Accordingly, very few organizations adopt scenario C.
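The one-sprint lag at the heart of scenario C can be stated very compactly. The helper below is purely illustrative:

```python
# Sketch of scenario C's lag: the regression suite run during sprint N
# covers stories only through sprint N-1. Hypothetical helper, not a real API.

def regression_scope(current_sprint: int) -> list:
    """Sprints whose stories are in scope for this sprint's regression run."""
    return list(range(1, current_sprint))

print(regression_scope(2))  # -> [1]: during sprint 2, only sprint 1 is covered
print(regression_scope(5))  # -> [1, 2, 3, 4]
```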
It would be presumptuous of me to recommend scenario A, B or C as the best for an organization without understanding the specific constraints and challenges before it. For all you know, if the organization is mature enough, a fourth option – a pure agile process with instant automation and heavy reliance on continuous integration – might be the best choice.
However, one approach I have found uniformly useful is to focus on the production release frequency rather than the sprint frequency when transitioning into an agile organization. It is more effective for the organization to progressively reduce its production release window from the 3-to-4-month window of an iterative model to the ideal window of 1 or 2 weeks. The core sprint and regression process can be determined and adopted from the beginning. As the automation percentage and organizational maturity improve, the release frequency can be increased at pre-set decision points, finally arriving at the two-week release window over a period of 9 to 12 months.