Productive Automation Usage in Agile Testing


Automation is commonly understood as a way to save time, find more bugs, increase test coverage, and replace manual testers.

Let’s understand what each of these means and entails:

  • Save Time – Automation definitely saves a lot of time when running tests, which frees testers to do interesting exploratory testing while the automated tests run. However, writing scripts for a newly developed feature takes far longer than testing that feature manually for the first time. Even when a feature seems frozen in terms of UI, behaviour, and other such aspects, it still goes through many changes before reaching its final shape. The effort of implementing those changes and rescripting eventually dilutes the focus on finding bugs and shifts it to making the scripts work, which consumes even more time.
  • Find More Bugs – So far, no metrics have shown that automation finds more bugs than manual/exploratory testing. In any testing process we find far more bugs in new features than in existing functionality, and automated suites mostly exercise existing functionality.
  • Increase Test Coverage – Does running more tests help find more bugs? With a good automation suite we may run hundreds of tests in a short period of time, but they are not always high-quality tests. Even with good automated coverage, bugs still leak into production. This is because the use cases, environments, test data, and so on of real end users are dynamic, whereas in automation everything is predefined and expected to operate under fixed conditions.
  • Replace Manual Testers – No matter how many automated tests we run, we have always found that manual testing finds bugs more effectively. For a maintenance project of ours, we needed to leverage automation as much as we could so that no feature broke unnoticed. However, script maintenance was an added burden.

How did we use automation to our benefit?

  • Repetitive testing over a period of time
  1. For a complex domain like Unified Endpoint Management, the variety of devices and agents (across operating systems) involved made testing challenging. Despite covering many tests in our regression across many devices, customers still reported issues on similar tests.
  2. One of the primary causes was the testing period and environment. A customer typically sets up our software with custom settings and very rarely changes that setup afterwards, whereas in our test environment we make changes all the time.
  3. We were able to reproduce these issues when we repeated the tests continuously over a period of time in an undisturbed environment. That was practically impossible to do through manual testing, so we automated these tests and used Jenkins to execute the scripts continuously, 24/7, with emails triggered to the respective teams whenever a run failed (a minimal sketch of this loop follows after this list). This approach had another advantage: in case of unexpected server downtime or other application or DB issues, our tests notified us of the failure even before customers faced these issues.
  • Mobile App Data Usage and Battery Test Cases Automation – Data usage and battery testing require repetitive runs that capture data and battery consumption before and after a certain action has been executed, so automation was of great value here (see the second sketch after this list).
  • Regression Automation for Customer Use Cases – To ensure new releases do not break specific customer use cases in our complex domain, we automated the important customer use cases and executed them for every release, big or small.
  • Sanity Test Case Automation – Post production, we had to run sanity checks of over 90 tests across multiple DNS (Domain Name System) in production. Automating the sanity tests brought this effort down considerably.
  • Regression Automation of Stable Modules – Automation of modules that were rarely impacted by changes and had more or less fully evolved.
  • API Automation – Automation of customer-facing APIs.
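
The continuous, around-the-clock execution described in point 3 above boils down to a simple loop: run the suite, notify the team on failure, wait, repeat. The Python sketch below illustrates that idea only; it is not our actual Jenkins job, and the test command, SMTP host, and recipient addresses are placeholders.

```python
# continuous_regression.py - illustrative sketch of the "run the same tests
# 24/7, email on failure" idea. The suite command, SMTP host and recipients
# are placeholders, not real configuration.
import smtplib
import subprocess
import time
from email.message import EmailMessage

TEST_COMMAND = ["pytest", "regression/", "-q"]   # placeholder test suite
RECIPIENTS = ["qa-team@example.com"]             # placeholder recipients
SMTP_HOST = "smtp.example.com"                   # placeholder mail server
INTERVAL_SECONDS = 15 * 60                       # pause between runs


def notify_failure(output: str) -> None:
    """Email the tail of the test log to the team when a run fails."""
    msg = EmailMessage()
    msg["Subject"] = "Continuous regression run failed"
    msg["From"] = "automation@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(output[-5000:])              # last part of the log
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    while True:                                  # run the suite around the clock
        result = subprocess.run(TEST_COMMAND, capture_output=True, text=True)
        if result.returncode != 0:               # failed tests, server/DB outage, etc.
            notify_failure(result.stdout + result.stderr)
        time.sleep(INTERVAL_SECONDS)
```

In practice a CI scheduler such as Jenkins plays the role of this loop and of the email step, but the logic is the same: any failure in the undisturbed environment reaches the team immediately.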
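The data usage and battery tests follow a before/after measurement pattern. The sketch below is an assumption-laden illustration: it presumes an Android device reachable over adb, the dumpsys parsing varies by OS version, and the action being measured stands in for whatever app workflow the test actually drives.

```python
# before_after_capture.py - illustrative sketch of the before/after capture
# pattern for data-usage and battery tests. Assumes an Android device is
# connected via adb; dumpsys output formats differ across OS versions, so
# the parsing here is a placeholder.
import re
import subprocess


def adb(*args: str) -> str:
    """Run an adb command and return its text output."""
    return subprocess.run(
        ["adb", *args], capture_output=True, text=True, check=True
    ).stdout


def battery_level() -> int:
    """Read the current battery level from `adb shell dumpsys battery`."""
    out = adb("shell", "dumpsys", "battery")
    match = re.search(r"level:\s*(\d+)", out)
    return int(match.group(1)) if match else -1


def data_snapshot() -> str:
    """Capture a raw network-usage snapshot for later comparison."""
    return adb("shell", "dumpsys", "netstats")


def measure(action) -> None:
    """Capture battery and data usage before and after the action under test."""
    battery_before, data_before = battery_level(), data_snapshot()
    action()                                     # e.g. trigger the app workflow being measured
    battery_after, data_after = battery_level(), data_snapshot()
    print(f"Battery drop: {battery_before - battery_after}%")
    print(f"Data snapshot changed: {data_before != data_after}")
```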

The ROI for the above types of automation was very good for us. While our manual testing focused on new features and ad hoc testing, the automated regression tests helped us uncover issues that were difficult to reproduce or not easy to find through manual testing.
