Test Automation Webinar Q&A Series – Automation Costs

Posted on: 19 May '09

Question: As you build up more automation, shouldn't the cost of maintaining that automation go up?

Response: Yes, overall costs are almost certain to go up, but maintenance cost per added test case may come down. It depends on how you have structured your test automation: whether the automation you have added can leverage code re-use (common libraries), whether your current data management approach can supply data for the additional automation or will require new datasets to be provisioned, and whether your infrastructure can accommodate the added load (in fact, added automation may trigger investment in server consolidation or virtualization, which in the longer run would reduce costs even further).
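To illustrate the data management point, here is a minimal sketch (purely hypothetical, not from the webinar) of a central test-data provider: new automation that can draw on the existing datasets adds little maintenance cost, while automation that needs new data means extending the provider as well as the scripts.

# test_data.py - hypothetical central test-data provider for an automation suite.
# New scripts that reuse these datasets add little maintenance cost; a new kind
# of test means adding (and then maintaining) another dataset here.

DATASETS = {
    "standard_customer": {"name": "Ada", "credit_limit": 1000},
    "overdrawn_customer": {"name": "Bob", "credit_limit": -50},
}

def get_dataset(name):
    """Hand a copy of a named dataset to a test so scripts never hard-code data."""
    return dict(DATASETS[name])

# A newly added automated test reusing existing data:
def test_overdrawn_customer_is_flagged():
    customer = get_dataset("overdrawn_customer")
    assert customer["credit_limit"] < 0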

Question: We have typically missed costing the code maintenance of scripts when system changes happen. What do we highlight in the process to catch this?

Response: Costing code maintenance with any accuracy, like any other estimation, depends on comparing actual costs with estimated costs over time, steadily revising your costing equations so that your estimation (costing) model approaches actual cost, and maintaining the practice so that your costing adapts to organizational or management-model changes.

Whenever your build team releases a new build, they should also send out release notes detailing new features, bugfixes, changes in inter-component interfaces, external interfaces, etc. The more substantial the changes, the more detailed the notes should be. This is the part of your process to highlight – when you get the release note, you kick your code maintenance costing engine into gear.

You can estimate the cost of maintenance by comparing the announced changes with similar past work and its actual cost. In less formal environments – some implementations of Scrum, for example – you can take a similar approach, but you may have to work it out without baselines, retrospectives, etc.
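To make the calibration loop concrete, here is a small sketch (the numbers and function are illustrative assumptions, not from the webinar) of the kind of adjustment described above: each release, a raw estimate is scaled by the average ratio of actual to estimated cost seen so far, so the model converges toward actual cost over time.

# costing.py - illustrative sketch of calibrating maintenance estimates against actuals.

def calibrated_estimate(raw_estimate, history):
    """Scale a raw estimate by the mean actual/estimate ratio from past releases.

    history: list of (estimated_hours, actual_hours) pairs.
    """
    if not history:
        return raw_estimate
    ratios = [actual / estimated for estimated, actual in history if estimated]
    correction = sum(ratios) / len(ratios)
    return raw_estimate * correction


# Example: past releases ran ~20% over estimate, so a 40-hour raw estimate
# becomes roughly 48 hours.
past = [(10, 12), (30, 36), (25, 30)]
print(calibrated_estimate(40, past))  # ~48.0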

I could perhaps be more helpful if I knew the context for your question.

Question: Can you speak to the costs of investigating automation failures on machines that are not physically at your disposal? Or disk imaging system? Do you mark the machine when it has a failure so it doesn’t get wiped out?

Response: About disk images – I'm guessing that you're describing a situation where you are remotely administering execution of test automation in a configuration in which, after test completion, the system on the volume under test is automatically replaced by a disk image, ensuring that the next automation execution session does not inherit any corruption induced by previous testing. I can think of a couple of solutions.

If your test automation's error handling logs the last line executed before failure and dumps system state to file (e.g., in the Windows environment, a user dump or system dump), you should – if timing is not a necessary condition – be able to reproduce the problem and examine it in isolation, set up breakpoints, etc. That's a lot of ifs!

Otherwise, I imagine you're using something like Python, shell script or Perl to set up the remote environment (including system images) before firing off your automation. You could have that script look for a failure flag in the last-run automation suite's logfile: abort on discovery and send a system alert to the test administrator, or proceed if no flag is found.

As far as cost goes, your highest cost would be the time of the engineer who is trying to get a look at what happened. The only way I can think of to keep that cost at a minimum is to figure out the least expensive approach in your environment and embrace that practice as part of the test automation checklist.
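As a rough sketch of the "failure flag in the logfile" idea (the file locations, flag text, and alert mechanism are assumptions, not part of the original setup), the driving script might look something like this before it restores the disk image and launches the next run.

# run_remote_suite.py - hypothetical driver sketch: check the previous run's log
# for a failure marker before re-imaging the machine, so a failed state is
# preserved for investigation instead of being wiped.

import pathlib
import smtplib
import sys
from email.message import EmailMessage

LOGFILE = pathlib.Path("/var/automation/last_run.log")   # assumed location
FAILURE_FLAG = "SUITE-FAILURE"                            # assumed marker text

def notify_admin(text):
    # Assumes a local mail relay; swap in whatever alerting the team already uses.
    msg = EmailMessage()
    msg["Subject"] = "Automation run held for investigation"
    msg["From"] = "automation@example.com"
    msg["To"] = "test-admin@example.com"
    msg.set_content(text)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def main():
    if LOGFILE.exists() and FAILURE_FLAG in LOGFILE.read_text():
        notify_admin(f"Failure flag found in {LOGFILE}; machine left untouched.")
        sys.exit(1)   # abort: do not re-image, do not start the next run
    # ...otherwise restore the disk image and fire off the automation here...

if __name__ == "__main__":
    main()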

  • Ashok

    Hi John: We do about 8 releases a year. Our initial objective was that regression testing cost would come down with automation. However, this has been offset (and made somewhat worse) by the effort required to keep the automation scripts updated as the system changes. Breakages are too frequent. One option we are looking at is making our scripts more modular. Any other best practices or advice?

  • Hi Ashok. Without knowing more about your test automation, I can only respond in a general way.
    1. Modularization helps. You imply that your scripts are already organized by module, though, so you already know that it’s not just modularization that helps, but where you build your walls. Some walls have doors or even huge holes; so if you have only arbitrarily modularized, the tests for Module A may still be hammering Module B.
    2. Apply the wise practice, enforced by some unit test frameworks, of including set_up() and break_down() functions in each script so that you don’t inherit the garbage left by a previous script (see the sketch after this list).
    3. Ideally one test case tests one requirement; and if a script contains more than one test case, failure of one of those test cases will not degrade or block execution of the other(s).
    4. Comment out (or disable in the automation driver) scripts that frequently fail due to code changes (and be sure these are added to manual testing). If you plan to use this test automation for several more releases, find out what is failing in those scripts that fail repeatedly, and work with the development team to see if debug asserts can be used to anticipate possible failures so that the automation driver blocks execution of those scripts but sets a flag that is logged.
    5. Work with the development team lead to write code that is testable. Here, that would mean that development is aware of test automation dependencies. With reasonable diligence the dev team should avoid compromising testability when fixing bugs or when revising features and fine-tuning performance.
    6. Avoid record-and-play as you would avoid drinking barn-yard runoff. Develop automation as you would write executable application code. Wrappers are wonderful.
    7. Use common function libraries so that you can make changes in one place, not in many places.

    Hope some of this is useful.

    Best –
    John