
‘Test Automation ROI’ Webinar Interaction Zone

Posted on: 31 March '09

Welcome to this blog page dedicated to the Test Automation ROI webinar, which I am hosting on 22nd April. We have set up this page to make the interaction even more valuable to the participants. As you will appreciate, ROI in test automation is a vast subject, and it is not practical to cover every viewpoint in one hour. This blog is open immediately and will continue after the webinar, so that we can interact as much as possible on the exciting subject of ROI in test automation.

Please feel free to submit any specific queries or scenarios that you would like us to address in the webinar or discuss here in general. I will try to answer most queries on this page and also take up some common points as part of the webinar.

I hope you will find this useful and that we will have some beneficial discussion around the topic.

Webinar Download Page:
http://www.mindtree.com/itservices/testing/testing_webinar_download.html

John

  • Michelle Knowles

    Hi John,

    Thanks for the invite; I look forward to the webinar.

    I belong to the old school of testing and believe that manual testing cannot be replaced by test automation when we begin application testing. The current testing community seems to be doing a bit of both at the same time, ignoring the impact of the “manual” part of the testing cycle. This raises three questions: a) should we start application testing with automation? b) what should the mix of test automation and manual testing be? c) how do we know that the application is ready for test automation?

    Finally, will there be any impact on the Test Automation ROI if we do these three things correctly?

    What are your thoughts here?

    Michelle

  • Sanjay

    How do you differentiate automation cost between the Dev and Test teams, given that developers also do a lot of automation in a TDD environment?
    In most cases, automation finds almost negligible bugs compared to the large costs associated with its development, maintenance, test environments, setup, etc. Do you have any standard matrix that compares ROI against issues detected by automation, product quality, customer satisfaction index, hot fixes required, reduction in release cycles, and so on?

  • Ravi

    Hi John,

    Queries in my mind,

    1. Will OSS automation tools help to reduce cost?

    2. OSS vs. commercial automation tools?

    3. Is there any stable OSS testing tool available to implement?

    4. Do we really need an automation testing framework for an ERP? If yes, which functional components and other areas need to be automated for an ERP in the distribution domain?

    Ravi

  • “When we begin application testing” is a pretty important phrase here. If testing begins as early in the SDLC as the coding or even the design stages, UI automation may not be possible. However, even unit tests can be automated, and there is a very good argument for using such unit test automation as the basis for regression tests throughout the rest of the SDLC. If the AUT (application under test) has components that do not have, or are not accessible through, a UI, then API tests are in order as soon as the code has been completed, and they are prime candidates for automation.
    Once the AUT has entered integration and system testing, there is usually a period of instability in which automation is either not possible or is very expensive (because of frequent maintenance). At that time, manual testing is undoubtedly the best.
    At any time during the cycle, there is no substitute for manual testing for the exploration of boundary values, interoperability, and complex user scenarios. When your budget is very tight, the expense of automation may appear to argue against it. Rather than reject it outright, though, it’s better to automate as much as you can, starting with high priority and low-hanging fruit like build verification tests and “canary in the coalmine” types of role-based user scenarios.
    To answer your last question – there are three primary inputs for ROI calculations: cost of automation, value of benefits, and reductions in testing tasks’ duration. Knowing what to automate will keep your costs down. Knowing when to develop automation will further reduce your costs.
    Finally, an AUT is ready for test automation when its interfaces are stable. For UI, that is probably after code freeze. For unit test and API test automation, automation is (in theory at least, and sometimes in practice) possible as soon as the relevant code has been checked in (which in Agile and TDD means that initial unit tests have passed).
    Best –
    John
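A minimal sketch of the point above about unit tests doubling as regression tests. The function `parse_amount` is a hypothetical stand-in, not taken from the discussion; the tests follow pytest's discovery conventions:

```python
# Hypothetical unit under test: parse_amount stands in for any small
# piece of checked-in code; it is not taken from the discussion above.
def parse_amount(text):
    """Parse a currency string like '$1,200.50' into a float."""
    return float(text.strip().lstrip("$").replace(",", ""))

# Unit tests written at check-in time double as automated regression
# tests for every later build (pytest discovers test_* functions).
def test_parse_plain():
    assert parse_amount("100") == 100.0

def test_parse_formatted():
    assert parse_amount("$1,200.50") == 1200.5
```

Once such tests exist, rerunning them against each new build costs almost nothing, which is what makes them a natural regression baseline.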

  • A common formula expresses ROI as a percentage: (value of benefits − cost of those benefits) / (cost of those benefits). That’s about as basic as ROI gets, and within a very limited scope it is useful insofar as it is accurate.

    Whether test automation has a high or low ROI depends on how many benefits you have quantified (to the satisfaction of all stakeholders); how much it cost to get those benefits; and — adding another factor to the formula — how much time you’re saving as a result of exercising those benefits.

    Test automation won’t find many bugs the day after it’s been completed, on the same build of the AUT (application-under-test) that has been used in developing the automation. Automation developers have already found those bugs during automation development. (And that in itself is a very good reason to write automation, just in case test engineers have no other motivation to think carefully about the AUT.)

    When developers add new features, though, or when they fix bugs in their code, test automation may find a lot of bugs, and because it does so very quickly, it reduces time-to-market.

    No, I don’t know of any standard matrix for ROI that evaluates test automation in terms of CSI, # hot fixes required (or eliminated), etc.; and I would be very suspicious of such a matrix, because there are so many variables affecting the many aspects of ROI. It is possible though to develop a matrix for an individual AUT that ranks possible benefits of test automation by cost and value, which will help determine how much of the AUT to automate. The procedure for developing such a matrix may be the same from AUT to AUT, but the matrices themselves will necessarily be very different.

    Best –
    John
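One way to sketch the per-AUT matrix described above, ranking candidate automation areas by value per unit cost. All area names and numbers here are invented for illustration:

```python
# Hypothetical candidate test areas for one AUT; the costs (person-days)
# and values (arbitrary benefit points) are invented for illustration.
candidates = [
    {"area": "build verification", "cost": 5, "value": 40},
    {"area": "role-based user scenarios", "cost": 8, "value": 32},
    {"area": "boundary-value exploration", "cost": 20, "value": 15},
]

# Rank by value per unit cost, then automate from the top down until
# the budget runs out.
ranked = sorted(candidates, key=lambda c: c["value"] / c["cost"], reverse=True)
for c in ranked:
    print(f"{c['area']}: value/cost = {c['value'] / c['cost']:.2f}")
```

The ranking procedure stays the same from AUT to AUT; only the rows and their numbers change, which matches the point that the matrices themselves will differ.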

  • Open source software (OSS) tools are as good as their developers make them. Frank Cohen’s PushToTest tool is a great example of one that works well (SOA, RIA apps, etc.). What’s cool about Frank is that he’s also encouraged the use of PushToTest as a teaching framework. Downloading the code was free the last time I looked, and he puts on free workshops in using it. He sells an electronic book for something like $25, cheap at twice the price.
    Comparing OSS to COTS is very dicey. COTS almost always has more bells and whistles than OSS, and a comparison that you find on the web may be written by the only guy in the world who cares whether the OSS you’re interested in supports setting breakpoints in an automation script that is executing Turbo Fortran in debug mode.
    Automation testing framework for ERPs? Depends on what you mean by “framework”. My company has what I would call accelerators, libraries of test cases that cover major areas in a specific domain like Travel and Transport. Someone else might call the same library a framework. I prefer to use “framework” to describe script drivers, in other words test applications whose purpose in life is to provide reporting, debugging, data management, etc. for some kind of scripts. COTS behemoths try to provide support for “all” scripts, but by addressing such a large market they make compromises that your area of specialization may not be willing to make. There is a very large aftermarket for COTS automation tools, providing drivers and plug-ins and libraries to help COTS behemoths meet those specialized needs. OSS in the hands of Class AAA developers can be used to build frameworks for specific purposes, such as the testing of a specific ERP, for less money than the cost of licensing the COTS behemoth.
    The problem of course is that most of us who are in testing test more than one application or type of application, so we want a versatile tool. And that’s where buying licenses for a COTS behemoth can be a great investment.

    Best –
    John

  • Hello Sanjay,

    The method of measuring ROI is entirely up to you. You can keep it really simple, e.g.

    (benefits – cost) / cost

    or

    benefits – cost

    Of course you need to evaluate the value of the benefits in a consistent, justifiable manner, and that is not so simple. If you keep the benefits themselves very simple – “We will save 10 man-days per test cycle by spending $2000 on automating this suite of tests” – the formula stays simple, and it’s easier to defend to management.

    But there are many more possible benefits than cost savings, depending on how you plan to construct your test automation and when you develop it. Suppose you are determining ROI for automation of unit tests; that your developers have written 50% of the unit tests you need, and that your testers are writing the other 50% as well as automating the lot. In the best of circumstances, the automation will find very few bugs when it is executed in each test cycle; instead, most of the bugs will be found during development of the automation itself. I would say that test automation is responsible for finding those bugs, and that you’re saving an enormous amount of money by catching them early (and I can show you a graph demonstrating relative savings across the SDLC). But your manager may say it’s not automation that’s responsible: he could have had the same results had manual testers started at the same time.

    Now, suppose development checks in major bug-fixes, or re-factors big blocks of code for improved performance. Suddenly the same manager is a great believer in the value of test automation. He wants to be able to run regression tests overnight, tests that will validate that there is no collateral damage from the bug-fixes or re-factoring. Those same tests might require a full day or more if run manually. With automation, you can kick off the tests just before leaving for home that night, and have a copy of the logged test results delivered to your manager by email as soon as the tests finish.

    Was it your developers or your testers whose work is measured by your ROI formula? It’s entirely up to you, but if you’re measuring ROI on work performed by the entire production staff, you should include both. If both dev and test are contributing, it may not be possible to equitably discriminate between results due to developers’ and results due to testers’ work.

    Unfortunately the answer to your last question, about whether there is a standard matrix comparing ROI generated by a variety of techniques as measured by a variety of metrics, is “no”. Suppose we develop test automation and decide that one of its benefits will be the reduction of hot-fixes in a new release’s maintenance cycle, as compared to hot fixes required for the last release. For that to be a useful metric to use in calculating ROI, you would need to be sure that no other process in the production environment had changed; and even if you were able to nail that down, how would you demonstrate that the code developed for this release is of comparable complexity, difficulty etc. to the last release? Not possible, I think.

    That doesn’t mean that we shouldn’t think about whether test automation can reduce the number of hot-fixes. I think it can be argued, persuasively, that test automation can catch errors that would otherwise be missed and that would therefore ship with the product; and that such test automation enables development managers to confidently consent to otherwise dangerous last-minute bug-fixes or even features. These gains are, however, qualitative, not quantitative.
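As a sketch, the simple ROI calculation can be put in a few lines of Python, using the common (benefits − cost) / cost form and the $2000 / 10-man-days example from the discussion. The $400 day rate and the number of cycles are assumptions invented for this sketch:

```python
def roi(cost, benefit):
    """ROI as a fraction: (benefit - cost) / cost."""
    return (benefit - cost) / cost

# Numbers from the discussion: $2000 to automate a suite that saves
# 10 man-days per test cycle. The $400 day rate and 6 cycles per
# release are assumptions made up for this sketch.
cost = 2000
benefit = 10 * 400 * 6        # man-days saved * day rate * cycles
print(f"ROI: {roi(cost, benefit):.0%}")
```

The formula is trivial; as the thread makes clear, the hard part is valuing the benefits consistently enough that the inputs survive scrutiny.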

  • Nagendra BS

    1. The “cost” that we are referring to in ROI is, I assume, not a one-time cost, because building the framework and creating scripts might be a one-time activity, but maintaining them is a long-term activity. This becomes more important when you are using these scripts for products/applications that need to be maintained over years with regular feature updates.

    2. Do you have any best practices or tips that apply generically to the automation of any product/application, and that would help bring the quality of testing done by automation (read: execution of test scripts), measured by the probability of uncovering defects, close to that of manual testing? This becomes more important for products/applications that need to be maintained over years with regular feature updates.

  • Pradnesh

    Hello John,
    I wish to bring the element of project management processes into the mix. Often, “automated scripts” are not kept in sync with code changes, and the expected ROI is not achieved. The “Change Management” and “Release Management” processes should ensure that test cases are added or modified to reflect code changes and that the “automated scripts” stay in sync. It is a good idea to keep a traceability matrix. Any experiences?
    Pradnesh

  • Real companies have real problems to solve with real budgets. Project managers may want you to provide data for their ROI model, in which case they’ll have to answer your first question. If you are the decision-maker, it’s up to you to decide what to include in ROI. I would urge including maintenance costs (of all kinds), but you may have reason to prefer only to compare initial labor costs of automated testing vs. manual testing. When you have decided, and computed the ROI, don’t forget what you were measuring when you use the ROI figure to bolster your argument for or against pursuing test automation for your specific project.
    Your second question is more about best practices for test automation. It’s a huge topic, larger than what I can cover here, but I will say that manual testing has the potential to find many more bugs than automation, while test automation can provide the speed and breadth of coverage that will give your test engineers time to probe the program’s functionality, if not its code, by smart manual methods.

  • Major COTS test management apps provide traceability across specifications, functional requirements, use cases, and test cases. “Provide” doesn’t mean you don’t have to work to get it. You still have to enter the data, and create the links that will propagate changes or alerts across the traceability matrix from whatever point you have initiated an update. If you change the requirement, you’re prompted to change the test case, and so on. I imagine it’s possible in a development environment to query the developer for unit cases or a test case when a bug has been fixed, but what I have seen is a “best practice” reinforced by checklists so that if you fix a bug that was discovered by “ad hoc” measures, you’ll not only write a test case to verify that the fix is good in future test cycles, but you’ll back-propagate a test requirement or specification. I have seen the latter implemented through the defect management database when that database is used not only for recording bugs but for recording suggestions for new features or modifications of existing features, tying them to a specific upcoming release or just “post-“.