Question: As you build up more automation, shouldn’t the cost of maintaining that automation go up?
Response: Yes, overall costs are almost certain to go up, but the maintenance cost per added test case may come down, depending on how you have structured your test automation: whether the automation you have added can leverage code re-use (common libraries), whether your current data management approach can supply data for the additional automation or will require provisioning new datasets, and whether your infrastructure can accommodate the added load (in fact, added automation may trigger investment in server consolidation or virtualization, which in the longer run would reduce costs even further) – considerations like that.
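As a rough back-of-the-envelope illustration (the figures and the cost split are invented, not drawn from any real project), total maintenance cost rises while the average cost per test case falls once tests share common libraries:

```python
# Rough, purely illustrative arithmetic: total maintenance cost grows as tests
# are added, but the cost *per* test can fall when tests share common libraries.
# All figures are invented for the example.

def maintenance_costs(num_tests, shared_lib_cost=20_000, per_test_cost=150):
    """Return (total yearly cost, average cost per test case)."""
    total = shared_lib_cost + num_tests * per_test_cost
    return total, total / num_tests

for n in (10, 50, 200):
    total, per_test = maintenance_costs(n)
    print(f"{n:>4} tests: total {total:>7,} / per test {per_test:>8,.0f}")
```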
Question: We typically have missed costing code maintenance of the scripts when system changes happen. What do we highlight in the process to catch this?
Response: Costing code maintenance with any accuracy, like any other estimation, depends on comparing actual costs to estimated costs over time, steadily revising your costing equations so that your estimation (costing) model approaches actual cost, and maintaining the practice so that your costing adapts to organizational or management-model changes.
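As a sketch only – the structure, the hours-per-change figure, and the smoothing weight are all assumptions, not a prescribed model – that feedback loop might look something like this:

```python
# Illustrative feedback loop (not a prescribed method): after each maintenance
# job, compare the actual cost to the estimate and nudge a correction factor
# toward what was observed. The smoothing weight alpha is an arbitrary choice.

class MaintenanceCostModel:
    def __init__(self, hours_per_change=4.0, alpha=0.3):
        self.hours_per_change = hours_per_change  # initial guess
        self.correction = 1.0                     # learned multiplier
        self.alpha = alpha

    def estimate(self, num_changes):
        return num_changes * self.hours_per_change * self.correction

    def record_actual(self, num_changes, actual_hours):
        observed = actual_hours / (num_changes * self.hours_per_change)
        # Move the correction factor part of the way toward the observed ratio.
        self.correction = (1 - self.alpha) * self.correction + self.alpha * observed

model = MaintenanceCostModel()
print(model.estimate(12))                 # first-pass estimate: 48.0 hours
model.record_actual(12, actual_hours=60)  # the work actually took 60 hours
print(model.estimate(12))                 # revised estimate: 51.6 hours
```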
Whenever your build team releases a new build, they should also send out release notes detailing new features, bugfixes, changes to inter-component and external interfaces, and so on. The more substantial the changes, the more detailed the notes should be. This is the part of your process to highlight: when you get the release notes, you kick your code-maintenance costing engine into gear.
You can estimate the cost of maintenance by comparing the announced changes against the actual costs of similar past work. In less formal environments – some implementations of Scrum, for example – you can take a similar approach, but you may have to work it out without baselines, retrospectives, and the like.
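For illustration, an analogy-based estimate can be as simple as the following; the change categories, counts and historical hours are hypothetical:

```python
# Illustrative only: estimate maintenance cost for a new release by comparing
# the announced changes against what similar work actually cost in the past.
# The categories, counts, and hours are hypothetical.

actual_hours_per_change = {       # averages taken from past, similar work
    "new_feature": 6.0,
    "bugfix": 0.5,
    "interface_change": 3.0,
}

release_notes = {"new_feature": 2, "bugfix": 10, "interface_change": 3}

estimate = sum(actual_hours_per_change[kind] * count
               for kind, count in release_notes.items())
print(f"Estimated automation maintenance for this release: {estimate:.1f} hours")
```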
I could perhaps be more helpful if I knew the context for your question.
Question: Can you speak to the costs of investigating automation failures on machines that are not physically at your disposal, or on a disk-imaging system? Do you mark the machine when it has a failure so it doesn’t get wiped out?
Response: About disk images – I’m guessing that you’re describing a situation where you are remotely administering execution of test automation in a configuration where, after test completion, the system on the volume under test is automatically replaced by a disk image, ensuring that the next automation run does not inherit any corruption induced by previous testing. I can think of a couple of solutions.

If your test automation’s error handling logs the last line executed before failure and dumps system state to file (e.g., in the Windows environment, a user dump or system dump), you should – provided timing is not a necessary condition – be able to reproduce the problem and examine it in isolation, set up breakpoints, and so on. That’s a lot of ifs!

Otherwise, I imagine you’re using something like Python, shell script or Perl to set up the remote environment (including system images) before firing off your automation. You could have that script look for a failure flag in the last-run automation suite’s logfile: if it finds one, it aborts and sends a system alert to the test administrator; if not, it proceeds with the re-image.

As far as cost goes, your highest cost would be the time of the engineer who’s trying to get a look at what happened. The only way I can think of to keep that cost at a minimum is to figure out the least expensive approach in your environment and embrace that practice as part of the test automation checklist.
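As a sketch of that pre-re-image gate – in Python only because it is one of the languages mentioned above, and with the logfile path, failure marker and alerting mechanism as placeholders for whatever your environment actually uses – the wrapper check might look like this:

```python
#!/usr/bin/env python3
# Hypothetical pre-re-image gate: before the orchestration script restores the
# disk image, check the last automation run's logfile for a failure marker.
# The path, marker string, and alert addresses are placeholders.

import pathlib
import smtplib
import sys
from email.message import EmailMessage

LOGFILE = pathlib.Path("/var/testruns/last_run/suite.log")   # placeholder path
FAILURE_MARKER = "SUITE RESULT: FAIL"                         # placeholder marker

def alert_test_admin(subject, body):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "automation@example.com"      # placeholder addresses
    msg["To"] = "test-admin@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:     # placeholder mail relay
        smtp.send_message(msg)

def main():
    if LOGFILE.exists() and FAILURE_MARKER in LOGFILE.read_text(errors="replace"):
        alert_test_admin(
            "Automation failure: machine held for investigation",
            f"Failure marker found in {LOGFILE}; skipping re-image so state is preserved.",
        )
        sys.exit(1)    # non-zero exit tells the wrapper to abort the re-image
    sys.exit(0)        # clean run: safe to restore the disk image and continue

if __name__ == "__main__":
    main()
```

The non-zero exit code is the piece that lets the surrounding orchestration script skip the re-image and leave the machine marked, untouched, for the investigating engineer.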