We have a small team of people known as Tech Quality Assurance. It's made up of experienced ex-Test Managers, although I often think they can't have learned much from that experience, judging by some of the ideas they come up with and how poorly they communicate them.
A few years back, a project I was working on was nearing completion and I got asked to fill in a "Merge Ready Checklist". I was like "errr, what's this?"
If the QA team are going to introduce a new process, shouldn't it be communicated at the point it's created?
I looked through it, and most of the evidence they wanted covered things you needed to be aware of up front, so you could track them and gather evidence as the project progressed. When I saw it, I thought there was no way we'd be allowed to merge. Even some of the simple stuff, like:
“Branches are expected to be kept in sync with root development branch on weekly basis. Provide evidence of the frequency you were merging from Main and the date of the most recent merge.”
Merge Ready Checklist, branch-merging process
I merged when I wanted to, so what are they gonna do? Reject it because I didn't merge often enough? The branch history is the evidence, so why is it worded as "provide evidence"? Why not just ask for a link to the code branch?
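If they genuinely want evidence, the repository can produce it directly. A minimal sketch, assuming a Git repo and a hypothetical branch name (the exact names will differ):

# Merge commits on the feature branch, with dates - this shows how often
# the root development branch was pulled in.
git log --merges --first-parent --date=short --pretty=format:"%h %ad %s" feature/my-project

# And how many commits the branch is currently behind Main:
git rev-list --count feature/my-project..origin/main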
They then insist on Test Coverage of 80%, which I have always thought is an unreasonable ask. When I joined Development, we had Visual Studio Enterprise licences, which include a Code Coverage tool. However, we have since downgraded to Professional. So I asked what Test Coverage tools we could use, because it needed to be something IT had approved for download and that we had a licence for. We were told we could use AxoCover, but I found it wasn't compatible with Visual Studio 2019 or above, which was an inconvenience.
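For what it's worth, a free collector such as Coverlet would probably have done the job without an Enterprise licence. A rough sketch, assuming a .NET test project and that IT would have approved the package (the project name is made up):

# Add the Coverlet collector to the test project, then run tests with coverage.
dotnet add MyProject.Tests package coverlet.collector
dotnet test --collect:"XPlat Code Coverage"
# Results land under TestResults/<guid>/coverage.cobertura.xml, which
# ReportGenerator can turn into a readable HTML report.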
Ok, so let’s run it and see what happens. Firstly, you are greeted with a phallic symbol.
Test execution started.

[ASCII-art AxoCover logo]

AxoCover Test Runner Console
It’s supposed to look like their logo, which looks more like a fidget-spinner.
Then I can't even make sense of the statistics. Is it saying that I have removed methods and deleted tests? I had added a few classes with several methods each (which obviously contain lines of code and conditional statements), so I expected all of the values to be higher. I had also added some unit tests, so I would maybe have expected around 30% on the new code.
I asked Tech QA to explain the figures to me, and they were like "we dunno, we aren't developers. We just look for the 80% number". Then I pointed out that they were supposed to be judging the 80% coverage on NEW code only, whereas these figures are for the entire solution. A whole-solution number tells you almost nothing about the new work: for example, a large legacy solution sitting at 20% coverage barely moves whether the thousand lines I just added are covered at 80% or at 0%. So this doesn't give them the evidence they want, and it isn't accurate either, so it can't be trusted.
After running it several times and adding/removing code to see how the numbers changed, I was suddenly low on disk space. It turns out AxoCover reports are 252 MB each! Yikes.
They also wanted you to run the static analysis tool Sonar, but with the licence we had paid for, we could only run it on our Main branch. So we needed to merge our project in before we could run Sonar, yet they wanted the Sonar results to authorise whether we could merge it in. The classic chicken-and-egg scenario. Later on, we did get a better licence that let us run Sonar on our project branches.
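Once branch analysis was licensed, pointing Sonar at the project branch was simple enough. A sketch assuming the dotnet-sonarscanner tool, with a made-up project key, server URL and branch name, and the authentication token omitted:

# Branch analysis (sonar.branch.name) needs a Developer Edition licence or above.
dotnet sonarscanner begin /k:"my-project" /d:sonar.host.url="https://sonar.example.local" /d:sonar.branch.name="feature/my-project"
dotnet build
dotnet sonarscanner end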
When we had filled in our Merge Ready Checklist to the best of our ability and submitted it, we got the following feedback:
Surely that is the biggest rejection. Even resolving the issues won’t change their judgement.
We arranged a meeting with them to discuss the way forward. At one point they said:
“These are all things that are too late to address but are common themes we raise time and time again and significantly reduce confidence in the quality of a release.”
TechQA
Surely that's a problem with their communication. The process is just sprung on teams when they want to merge in, the problems are never fed back or communicated to other teams so they can't learn from them, and then those teams go on to fail in exactly the same way.
I then asked the release manager what we could do, and he said that the TechQA team are there just to advise him on what should and shouldn't go into the release. Due to contractual deadlines, he just lets projects go in anyway, as long as the team explains what bugs they are aware of. All the other aspects like Sonar and Test Coverage don't bother him at all, because they are more "Technical Debt" and not really reflective of the user's experience.
So what we're concluding is that the TechQA team are completely pointless and we may as well save the money by binning them off, because they just create processes that no one cares about and that aren't enforced anyway.
It looked like they were also running these checklists for another company we had acquired. I found this on another team's review:
It should come as no surprise that the assessment of the MRC comes out as critical.
- The “Definition of Done” specified in the document is not the one followed by the team as you point out that no testing was part of the user story and a Product Owner was not attached to the project towards the end and therefore could not do the PO review.
- Also, no unit tests exist for the code and there is no ability to confirm the changes have not degraded any quality metrics.
- Elements of the regression plan have not been run and once again the team have not been given sufficient time to consider quality.
- On top of all that the code is written in Visual Foxpro which was EOL’d by Microsoft in Jan 2010, that’s 11.5 years out of support!
Another example from the other week saw the lead developer give a lot of backchat to TechQA.
"Link is to a PR not to a build - where do I go to see the state of the build" > All PR's into master need a passing build. If you would like a hand using Azure DevOps please let me know and I'll see if can find time to show you how it works. "there must be some sort of output that can be used as evidence surely" > Once you have looked at the build, you can also view the 35k passing unit tests. "Need some evidence of your test coverage levels in the form of a report or chart from the text coverage tooling you are using" > You can see the tests written in the PR to master. Each PR for the new work was accompanied by tests. > Coverage Report attached for the impacted code, although code coverage is a useless metric. "Sonar - Please provide evidence when you have it available" > If the work is ever completed, then we will have automatic sonar reports for projects.
No idea why he doesn't have Sonar running, but he obviously doesn't care enough, and definitely doesn't care about Test Coverage. I do find it strange that we have been using Azure DevOps for years now and TechQA still don't know how our development process works, when you would think that would be a prerequisite for doing their job.
They should be liaising with the Development team in order to create processes that are feasible, useful, and accurate.