Hurrah for Automation, But Neglect Manual Testing at High Cost

Posted on: 24 March '09

Their board of directors had been alarmed by the report of deployment failures and was adamant that this should never happen again. Because of the number of customizations and the frequency of upgrades, the VP believed that the only solution was full automation. He understood that this could only be a long-term goal, but he wanted my company's help in making it happen, along with whatever bridge solutions were required in the meantime.

Their routine for QA and testing, which until now had worked satisfactorily, lacked sophistication. Their development team provided unit testing, and business analysts provided acceptance testing. Their small QA team of four full-time test engineers spent all their time developing and executing test cases for features or fixes that were to be rolled into the next scheduled monthly patch. A week before each scheduled release, everyone in the product development group installed the release candidate patch on their machines and, for a couple of days, ran scenarios selected from a distributed list. The test team barely had time to run their new test cases. They had no modular tests, and they had stopped running regression tests at least two years earlier.
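A modular regression suite need not be elaborate. As a minimal sketch (the feature and function names here are hypothetical, not Buckflow's actual code), each patched feature gets its own small, self-contained test module that can be re-run against every release candidate:

```python
# Minimal regression tests, runnable with pytest or as a plain script.
# `apply_discount` stands in for a hypothetical patched feature.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_invalid_percent_rejected():
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

if __name__ == "__main__":
    # Re-run the whole module against every release candidate.
    test_basic_discount()
    test_zero_discount()
    test_invalid_percent_rejected()
    print("regression suite passed")
```

Because each module is independent, a team can run the whole pile before every patch instead of letting regression coverage lapse.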

Lack of sophistication in test strategy, the obvious problem at Buckflow, is not unusual. I pointed out that bugs found in the design stage are far less expensive to fix than bugs found during product integration testing. Also, and especially applicable to Buckflow, every bug fix is a weak link: because the product's original design did not address it, it must be re-tested in every release. The VP nodded with evident regret and said that they had thought disciplined development combined with unit testing would be sufficient.

It’s also not unusual to find companies that retain a naive faith in automation, in spite of the evidence against such disturbingly resilient illusions as:

– automation eliminates human errors that result from fatigue, boredom, and disinterest;

– automation can be re-used indefinitely (write once, run many);

– automation provides more coverage than manual testing;

– automation eliminates the need for costly manual testing.

Every one of the above statements makes sense if properly qualified. Automation may eliminate some or all of the errors that result from manual testers growing weary, but it may also introduce other errors that are equally due to fatigue, boredom, and disinterest, this time arising in the people who develop the automation.

Automation can be re-used indefinitely, provided that the application or system under test does not change, and that nothing else that might affect execution changes, such as common libraries or runtime environments (e.g. Java). In practice, of course, these things change constantly; whatever return on investment may have been realized from automation is then quickly wiped out by maintenance costs, at which point the only remaining advantages of automation are strategic, not economic.
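The economics can be made concrete with a back-of-the-envelope calculation (all figures here are illustrative assumptions, not Buckflow's numbers): automation pays off only when the saving per run exceeds the per-run maintenance cost, and the break-even point recedes as maintenance grows.

```python
def break_even_runs(build_cost: float, manual_cost_per_run: float,
                    maintenance_per_run: float) -> float:
    """Number of runs needed before automation beats manual testing.

    Returns float('inf') when per-run maintenance eats the entire saving.
    """
    saving_per_run = manual_cost_per_run - maintenance_per_run
    if saving_per_run <= 0:
        return float("inf")
    return build_cost / saving_per_run

# With light maintenance, a $4000 suite pays for itself in 10 runs...
assert break_even_runs(4000, 500, 100) == 10
# ...but if each release forces heavy script rework, it never does.
assert break_even_runs(4000, 500, 500) == float("inf")
```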

If “coverage” means “test coverage” (rather than “code coverage”), then yes, automation can even provide 100% coverage: one need only automate every available test case. A more significant data point, however, is the degree of code function or code path coverage provided by those test cases. While achieving 80% code path coverage may be better than 70%, the more important consideration is what has not been covered, and why.
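A toy example (hypothetical code, not Buckflow's) shows why the uncovered remainder is what matters: a suite can pass while exercising nearly every line, yet miss the one branch that is wrong.

```python
def shipping_fee(subtotal: float, express: bool) -> float:
    """Flat $5 fee, waived over $50; express is meant to double the fee."""
    fee = 0.0 if subtotal > 50 else 5.0
    if express:
        fee = fee * 2 + 10   # BUG: the +10 surcharge was never in the spec
    return fee

# These tests pass and touch almost every line, but never take the
# express branch -- exactly the uncovered path that hides the bug.
assert shipping_fee(60.0, express=False) == 0.0
assert shipping_fee(20.0, express=False) == 5.0

# The missing test would have exposed it: a large express order should
# still ship free, but the buggy surcharge makes it 10.0 instead of 0.0.
assert shipping_fee(60.0, express=True) == 10.0
```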

To avoid manual testing at all costs would be the costliest option, because only in manual testing can all of a test engineer’s understanding, probity, logic, and cleverness be put to work. Security testing of Buckflow at the application level, for example, depends on how the application was developed: where it stores its cookies, what scripts it runs during initialization and various transactions, how stateful connections are established over the inherently stateless HTTP protocol, and so on.

While there are commercial test tools that can verify an application’s defense against cookie-cutter varieties of denial of service, or even cover 50% of the threat model for most applications, interoperation with new applications and with new versions of underlying technologies still requires a significant investment in manual testing.
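As an illustration of the kind of application-level inspection a test engineer performs by hand, here is a small sketch, using Python's standard http.cookies module, that checks whether a session cookie was issued with the Secure and HttpOnly flags (the header values are made-up examples, not from any real application):

```python
from http.cookies import SimpleCookie

def missing_cookie_flags(set_cookie_header: str) -> list:
    """Return the security flags absent from a Set-Cookie header value."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    missing = []
    for name, morsel in jar.items():
        if not morsel["secure"]:
            missing.append(f"{name}: Secure")
        if not morsel["httponly"]:
            missing.append(f"{name}: HttpOnly")
    return missing

# Hypothetical headers captured while probing an application:
assert missing_cookie_flags("sid=abc123; Path=/; Secure; HttpOnly") == []
assert missing_cookie_flags("sid=abc123; Path=/") == ["sid: Secure", "sid: HttpOnly"]
```

A tool can run a check like this mechanically, but deciding which cookies matter, and what an acceptable policy looks like for a given application, remains a human judgment.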

More obvious needs for manual testing include penetration testing, usability testing, and localization testing. But Buckflow had a particularly acute need: testing massively configurable applications in diverse environments. While there was room to talk about keyword-driven automation, it was clear that only manual testing would be able to identify configuration issues. In the end, we agreed that the best approach would be a combination of carefully orchestrated automated tests and rigorous manual testing.
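Keyword-driven automation, mentioned above, can be sketched in a few lines (the keywords and actions here are hypothetical): test steps are written as plain data, and a small interpreter dispatches each keyword to an implementation, so analysts can compose tests while the keyword library is maintained in code.

```python
# A minimal keyword-driven test runner. Keywords map to Python callables;
# test cases are plain data that a business analyst could edit.

state = {}

def open_app(name):
    state["app"] = name

def set_config(key, value):
    state.setdefault("config", {})[key] = value

def verify_config(key, expected):
    actual = state.get("config", {}).get(key)
    assert actual == expected, f"{key}: expected {expected!r}, got {actual!r}"

KEYWORDS = {
    "open app": open_app,
    "set config": set_config,
    "verify config": verify_config,
}

def run(test_case):
    for keyword, *args in test_case:
        KEYWORDS[keyword](*args)

# A test case expressed purely as data:
run([
    ("open app", "billing"),
    ("set config", "currency", "EUR"),
    ("verify config", "currency", "EUR"),
])
```

The design choice is the separation of concerns: the data tables multiply cheaply across configurations, while the keyword implementations are the only code that must track the application.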

Mindtree Blog Archives

The Mindtree Blog Archives are a collection of posts by various authors who independently contributed as thought leaders in the past. We may or may not be in a position to get the authors to respond to your comments.

  • Viplav Anand

    Automation is definitely the most viable solution for any application that is at least 70% stable. By 70% stable, I mean that newly incorporated modules should not interfere with the modules already in production beyond the integration level; otherwise the rework effort on the automation scripts would be so high that it might take more than 20 runs to generate a positive ROI. And if the tool demands a high level of technical capability from the automation tester, the resource crunch can become a paramount concern. I have just two years of experience in automation using multiple tools, which I have shared on my blog: http://www.csesupport.blogspot.com/p/automation-testing-framework-approach.html, but I would highly recommend that only technical people decide on the approach to automation; otherwise it will end up nowhere!