Test Automation Mess

Every now and then there is a big initiative to focus on automated testing. A manager decides that our software is too complex and too labour-intensive to regression test manually in detail. Automation seems like the answer, but it’s never that practical.

Our main software, a desktop application, can only be exercised through its UI, which makes automation incredibly slow and unreliable. We used to have a dedicated automation team that maintained the tests, but the runs took several hours and failed randomly; eventually the team disbanded and the tests were declared obsolete. At times we’ve wanted to replace them with the likes of CodedUI (which turned out to have the same issues) and, more recently, FlaUI.

When the last “drive for automation” was announced by the CTO, our most experienced tester wrote an internal blog post which, I thought, had a lot of subtext to it, basically saying “it’s a bad idea”.

Communities of Practice around Test Automation

With all of the new Communities of Practice around Test Automation, I wanted to share some thoughts on whether automation is actually a good idea, drawn from experiences over the years. I hope this saves some people time and provokes conversation.

To automate or not to automate? That is the question…

A common question in a tester’s life: “Should we automate our tests?”

Which of course really means, “Should we write our checks in code?”

This will inevitably give rise to more questions you need to answer:

  • which checks we should automate
  • which checks we should not automate
  • what information running the checks gives us
  • how that information helps us assess risks present in the code
  • which tool is best suited to the job
  • how often we should run the checks

Asking and answering these questions is testing. We have to ask them because no automation comes for free. You have to write it, maintain it, set up your data, set up and maintain your test environment, and triage failures.

So how do you begin to decide which checks to automate?

Reasons for automating:

  • The checks are run frequently enough that if you spent a bit of time automating them then you would save time in the long run (high return on investment)
  • The checks would be relatively easy to write and maintain owing to the product having a scriptable interface (such as a REST API)
  • They can be performed more reliably by a machine (e.g. complex mathematical calculations)
  • They can be performed more precisely by a machine
  • They can be performed faster by a machine
  • You require use of code in order to detect that a problem exists
  • You want to learn how to code, or flex your programming muscles

(Even if you ultimately decide not to automate your checks, you may decide to use code for other purposes, e.g. to generate test data.)
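
To make “writing our checks in code” concrete, here is a deliberately tiny sketch of a coded check of the kind the list above favours: something a machine performs faster, more precisely and more reliably than a person. The command (`expr`) stands in for a scriptable interface to the product, and the expected value is a made-up example:

```shell
#!/bin/sh
# Minimal coded check. `expr` is a stand-in for some scriptable
# interface of the product; the expected value is illustrative only.
actual=$(expr 6 \* 7)
expected=42
if [ "$actual" -eq "$expected" ]; then
  echo "PASS: calculation check"
else
  echo "FAIL: expected $expected, got $actual" >&2
  exit 1
fi
```

A machine will run this identically every time, which is the “more reliably and more precisely” point. The cost side of the ledger still applies: someone has to write it, keep it passing, and triage it when it fails.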

Reasons against automating:

  • There isn’t a scriptable interface; the product code can only be accessed via a User Interface (UI automation is notoriously expensive and unreliable).
  • In order to have a greater chance of finding problems that matter, the check should be carried out by a human being as they will observe things that would matter to a human but not a computer (e.g. flickering on the screen, text that is difficult to read).
  • The checks would have a short shelf life (low return on investment).

Beware of the fallacy that use of code or tools is a substitute for skilled and experienced human beings. If you gave an amateur cook use of a fancy food processor or set of knives, their cooking still wouldn’t be as good as that of a professional chef, even with the latter using blunt knives and an ancient cooker. Code and tools are ultimately extensions of your testing. If your testing is shallow, your automation will be shallow. If your testing is deep, your automation can be deep.

Ultimately, the benefit you derive from writing coded checks has to outweigh the cost, and whether or not to automate is a decision no one else can make for you.

Testers in my Team

Most of the testers we employ aren’t that technical, and most aren’t interested in writing automated tests, since it is coding and therefore requires developer knowledge. One of our testers went on a week-long training course on FlaUI. One of the first things he said afterwards was “FlaUI is not worth its value”, which made me laugh. The course can’t have painted it in a good light! 😂

He was then asked to move teams to do pure automation for a few months. Another tester had no interest at all, but was instructed to “try to learn”.

“writing the steps is fine, it’s just when you go into the code”

Joanne

There was no way she was going to be able to learn it. She isn’t technical, and the desire simply isn’t there. Pressuring testers to move away from “manual” testing to “automated” testing just disrespects them as testers. It’s happened before, and they end up leaving. She eventually moved internally to become a Release Manager.

Automation Mess

The original decision to move to FlaUI was made by a group of testers, who didn’t get any input from the developers.

I think it would be logical to follow the coding standards that we developers have used for years. If developers want or need to help write automated tests, they can fit right in, since the process and code style are the same. Additionally, after years of writing automated tests, a tester might want to switch roles and become a developer, and that transition would be smooth.

Not only did they invent their own coding standards, which meant variables, methods and classes were named differently; there was also a lot of duplicated code to perform basic actions like logging in, selecting a customer record, and so on.

The process, including the branching strategy, was different too. Instead of having a Master branch, project branches for longer-lived changes, and standard user branches for simple short-lived work, they went for a more convoluted strategy with Development, DevUpdate and Master branches. It then became a disorganised mess when work wasn’t merged to the correct branches at the right times.

I can’t even make sense of this:

Before the start of Regression: 

  • 1) Lock the Development branch (no PRs allowed into Development until regression is completed)
  • 2) Development, DevUpdate, Master are up to date by syncing your local with the remote branch and getting all the commits into the local branch
  • 3) Merge from Development to DevUpdate
  • 4) Merge from DevUpdate to MasterUpdate
  • 5) Set <updateTestResults> to true and <testPlanId> (from URL ?planid=12345) in ProjectSettings.xml in MasterUpdate
  • 6) Raise a PR from MasterUpdate against Master. Throughout steps 3 and 4, observe that ‘commits behind’ are equal after the merge process to that of Master.
  • Once the above process is completed, observe that the Master branch is 1 commit ahead of the other branches

After the end of Regression: 

  • 1) Development, DevUpdate, Master are up to date by syncing your local with the remote branch and getting all the commits into the local branch
  • 2) Merge from Master to DevUpdate
  • 3) Change the <testPlanId> to xxxx and <updateTestResults> to false in DevUpdate
  • 4) Raise a PR from DevUpdate against Development. After step 2, observe that ‘commits behind’ are equal after the merge process to that of Master.
  • Once the above process is completed, observe that the Development branch is 1 commit ahead of the other branches
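
As far as I can reconstruct it, the two checklists describe shuttling the same commits up and down three long-lived branches before and after regression. A hedged sketch in plain git, using the branch names from the post; the MasterUpdate branch, the PRs, and the ProjectSettings.xml edits are left out, and the repository here is a throwaway created purely for illustration:

```shell
#!/bin/sh
# Hypothetical reconstruction of the regression merge dance described
# above. Simplified: no MasterUpdate branch, merges instead of PRs.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b Master
git config user.email "demo@example.com"
git config user.name "Demo"
echo base > file.txt
git add file.txt
git commit -qm "initial commit on Master"

# The three long-lived branches from the checklist
git branch Development
git branch DevUpdate

# Day-to-day work lands on Development
git checkout -q Development
echo change > file.txt
git commit -qam "feature work"

# Before regression: flow Development up through DevUpdate to Master
git checkout -q DevUpdate
git merge -q --no-edit Development
git checkout -q Master
git merge -q --no-edit DevUpdate   # a PR from "MasterUpdate" in the real process

# After regression: flow the result back down again
git checkout -q DevUpdate
git merge -q --no-edit Master
git checkout -q Development
git merge -q --no-edit DevUpdate   # again, a PR in the real process

echo "Master, DevUpdate and Development now point at the same commit"
```

All that ceremony ends with the three branches pointing at the same commit, which is the part I struggle with: a single trunk with short-lived branches gets you there with none of the lock-and-shuffle steps.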

Eventually, a few more technical testers were moved into the team and tasked with aligning the process and codebase with our production code – i.e. sorting the mess out.

This is the classic case of managers thinking they can just assign “resource” to a team, give them an aim (“automate this”) and expect results. But you need the technical know-how, and a clear direction.
