Merge Ready Checklist

We have a small team of people known as Tech Quality Assurance. They are a group of experienced ex-Test Managers, although given some of the ideas they come up with, and how poorly they communicate, I often think they can’t have learned much from that experience.

A few years back, there was a project I was working on which was near completion, and I got asked to fill in a “Merge Ready Checklist”. I was like “errr, what’s this?”

If the QA team were going to introduce a new process, shouldn’t it have been communicated when it was created?

I looked through it, and most of the evidence they wanted covered aspects you needed to be aware of up front, so you could track them and gather evidence as the project progressed. When I saw it, I couldn’t see how we would ever be allowed to merge. Even some of the simple stuff, like:

“Branches are expected to be kept in sync with root development branch on weekly basis. Provide evidence of the frequency you were merging from Main and the date of the most recent merge.”

TechQA’s process on branch merging

I merged when I wanted, so what are they gonna do? Reject it because I didn’t merge enough? The branch history is the evidence, so why is it worded as “provide evidence”? Why not just ask for a link to the code branch?
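If they really wanted a report rather than a link, the merge history can be pulled straight out of git anyway. Something like the one-liner below lists every merge commit on the branch with its date (a sketch; adjust the branch and format to taste):

    git log --merges --date=short --pretty="%h %ad %s"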

They also insist on Test Coverage of 80%, which I have always thought is an unreasonable ask. When I joined Development, we had Visual Studio Enterprise licences, which include a Test Coverage tool. However, we have since downgraded to Professional. So I asked what Test Coverage tools we could use, because it needed to be something that IT had approved for download and that we had a licence for. We were told we could use AxoCover, but I found it wasn’t compatible with Visual Studio 2019 or above, which was an inconvenience.

Ok, so let’s run it and see what happens. Firstly, you are greeted with a phallic symbol.

Test execution started.
         ___
        /   \
       |     |
        \   /
        /   \
       /     \
   ___/       \___
  /               \
  |    _______    |
  \___/       \___/
AxoCover Test Runner Console

It’s supposed to look like their logo, which looks more like a fidget-spinner.

Then I couldn’t even make sense of the statistics. Was it saying that I had removed methods and deleted tests? I had added a few classes with several methods each (these obviously contain lines of code and conditional statements, so I expected all of those values to be higher). I had also added some Unit Tests, so I would maybe have expected around 30% coverage on the new code.

I asked Tech QA to explain the figures to me, and they were like “we dunno, we aren’t developers. We just look for the 80% number”. Then I pointed out that they were supposed to be judging the 80% coverage on NEW CODE only, whereas this figure was for the entire solution. So it didn’t give them the evidence they wanted, and it wasn’t accurate either and couldn’t be trusted.

After running it several times and adding/removing code to see how the numbers changed, I was suddenly low on disk space. It turns out AxoCover reports are 252MB each! Yikes.

They also wanted you to run the static analysis tool Sonar, but with the licence we had paid for, we could only run it on our Main branch. So we needed to merge our project in to run Sonar, but they wanted the Sonar results before they would authorise the merge. The classic chicken-and-egg scenario. Later on, we did get better licences so we could run Sonar on our project branches.

When we submitted our Merge Ready Checklist to the best of our abilities, we got the following feedback:

“TechQA have reviewed the MRC and assessed it with a critical level of risk. There are a number of items that we would deem vital to address however even resolving these items – it would leave the project at critical.”

We failed the Merge Ready Checklist

Surely that is the biggest rejection. Even resolving the issues won’t change their judgement.

We arranged a meeting with them to discuss the way forward. At one point they said:

“These are all things that are too late to address but are common themes we raise time and time again and significantly reduce confidence in the quality of a release.”

TechQA

Surely that’s a problem with their communication. The process is just sprung on teams when they want to merge in, the problems are never fed back into the system or communicated to other teams so nobody can learn from them, and then those teams proceed to fail in exactly the same way.

I then asked the release manager what we could do, and he said that the TechQA team are there just to advise him on what should and shouldn’t go in the release. Due to contractual deadlines, he just lets projects go in anyway, as long as the team explains what bugs they are aware of. All the other aspects like Sonar and Test Coverage don’t bother him at all, because they are more “Technical Debt” and not really that reflective of the user’s experience.

So what we are concluding is that the TechQA team are completely pointless, and we may as well save money by binning them off, because they are just creating processes that no one cares about and that aren’t enforced anyway.

It looked like they were also running these checklists for another company we had acquired. I found this on another team’s review:

It should come as no surprise that the assessment of the MRC comes out as critical.

  1. The “Definition of Done” specified in the document is not the one followed by the team as you point out that no testing was part of the user story and a Product Owner was not attached to the project towards the end and therefore could not do the PO review.
  2. Also, no unit tests exist for the code and there is no ability to confirm the changes have not degraded any quality metrics.
  3. Elements of the regression plan have not been run and once again the team have not been given sufficient time to consider quality.
  4. On top of all that the code is written in Visual Foxpro which was EOL’d by Microsoft in Jan 2010, that’s 11.5 years out of support!

Another example from the other week saw the lead developer giving a lot of backchat to TechQA.

"Link is to a PR not to a build - where do I go to see the state of the build"
 > All PR's into master need a passing build. If you would like a hand using Azure DevOps please let me know and I'll see if can find time to show you how it works. 
 
"there must be some sort of output that can be used as evidence surely"
 > Once you have looked at the build, you can also view the 35k passing unit tests. 
 
"Need some evidence of your test coverage levels in the form of a report or chart from the text coverage tooling you are using"
 > You can see the tests written in the PR to master. Each PR for the new work was accompanied by tests.
 > Coverage Report attached for the impacted code, although code coverage is a useless metric.
 
 "Sonar - Please provide evidence when you have it available"
 > If the work is ever completed, then we will have automatic sonar reports for projects.

No idea why he doesn’t have Sonar running, but he obviously doesn’t care enough, and definitely doesn’t care about Test Coverage. I do find it strange that we have been using Azure DevOps for years now and TechQA still don’t know how our development process works; you would think that would be a prerequisite for them to do their work.

They should be liaising with the Development team in order to create processes that are feasible, useful, and accurate.

Code Analysis: You Cannot Have More Than 6 Parameters

Recently, a software developer refactored some method calls in a strange way, so I questioned him on it. He stated “you cannot have more than 6 parameters”.

This is nonsense, but what he means is that there is a Code Analysis rule that flags this up. The theory is that if there are many parameters, some of them are probably related, and so they can be grouped into a common object.

public void AddUser(
   string forename,
   string surname,
   DateTime dateOfBirth,
   string addressLine1,
   string addressLine2,
   string town,
   string county)

In the example above, you can group the last four parameters into an Address object.
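For illustration, the grouped version might look something like this (the Address class and its property names are my own sketch, not from the real codebase):

public class Address
{
   public string Line1 { get; set; }
   public string Line2 { get; set; }
   public string Town { get; set; }
   public string County { get; set; }
}

public void AddUser(
   string forename,
   string surname,
   DateTime dateOfBirth,
   Address address)

The call sites get shorter too, and anything else that handles addresses can reuse the same class.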

When the Code Analysis rule flags your code, I think it’s more of a question of “can you group these parameters together?” and not a “you must never have more than 6 parameters”.

If you think the code is fine as it is, you can simply suppress the rule. If you think the rule has a point, you can create a new class to group these parameters together (or use an existing class if there is something relevant).
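Suppression in C# is a one-attribute job, roughly like the sketch below. Note the category and rule ID here are placeholders, since the real ID depends on which analyser flagged it:

using System;
using System.Diagnostics.CodeAnalysis;

public class UserService
{
   [SuppressMessage(
      "Design",
      "XX0000:TooManyParameters",   // placeholder: use the ID your analyser actually reports
      Justification = "The parameters are genuinely unrelated; grouping would be artificial")]
   public void AddUser(
      string forename,
      string surname,
      DateTime dateOfBirth,
      string addressLine1,
      string addressLine2,
      string town,
      string county)
   {
   }
}

A #pragma warning disable around the method works too, if the rule arrives as a compiler warning.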

I’ve written quite a few blogs on developers writing code in strange ways because Code Analysis prompted them to. Code Analysis aims to prompt you to write code in a standard way and to avoid common errors and inefficiencies, but it often seems to have the opposite effect, because many developers don’t think for themselves.

Out of interest, I asked a nerdy developer if there really was a limit to how many parameters you can have. So, like a total nerd, he tested it out. Apparently, in C#, the maximum you can have in a constructor is 65536 parameters. Then he gave me additional nerd stats: his file was 4MB of source code, which compiled to a 1.9MB exe.
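I didn’t see his test, but the obvious way to do it is to generate the source file programmatically. A rough sketch of how that might look (my reconstruction, not his actual code):

using System;
using System.IO;
using System.Linq;
using System.Text;

class ParameterLimitTest
{
   static void Main()
   {
      const int count = 65536;   // the reported maximum; try 65537 to watch the compiler refuse
      var sb = new StringBuilder();
      sb.AppendLine("class Huge");
      sb.AppendLine("{");
      // Emit a constructor with `count` int parameters: Huge(int p0, int p1, ...)
      sb.Append("   public Huge(");
      sb.Append(string.Join(", ", Enumerable.Range(0, count).Select(i => "int p" + i)));
      sb.AppendLine(") { }");
      sb.AppendLine("}");
      File.WriteAllText("Huge.cs", sb.ToString());
      Console.WriteLine("Wrote Huge.cs - now compile it with: csc Huge.cs");
   }
}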

No idea why 65536 is the limit (it is 2^16, so presumably some 16-bit counter in the compiler or file format), and I didn’t spend any time investigating it or quizzing him further. I just accepted that it’s such a large number, you would never reach it. It’s more than 100, so you may as well say there isn’t a limit.

Database Patching – Everything Is Fine

When it comes to changes to the Database, we have a tool (which I will call DBPatcher) which runs your changes, runs the Unit Tests and runs Code Analysis (finds bad formatting, violations of coding standards, common mistakes etc). So it is vital that this passes successfully before you send your code to review.

I was doing a Code Review aka Pull Request (PR) and I saw evidence that they hadn’t run it through this DBPatcher tool.

Ronald was eager to get his code checked in, so he wanted me to approve it. However, I wanted him to run the tool and fix any issues first. The conversation went like this:

[Thursday 8:21 AM] Ronald
    Can we complete the PR? do you have any doubts on it
[Thursday 8:23 AM] Me
    I'm convinced DBPatcher will flag those select statements because there is a mix of tabs and spaces
<yes, this is trivial to spot by eye, but DBPatcher will flag it too, so this is evidence they haven’t run it. There could be other errors as well, but I will let DBPatcher find them>
[Thursday 8:23 AM] Ronald
    OK, thank you. I will complete the PR
[Thursday 8:25 AM] Me
    what? I am saying the DB patcher will give you errors
[Thursday 8:26 AM] Ronald
    sorry for misunderstanding
    I ran it in the morning. We didn't get any error for our DB changes and unit testing also didn't throw any error for our code
<he attempted to send me a screenshot of the final result, but it didn’t seem to transfer>
[Thursday 8:44 AM] Me
    The image isn't showing for me. But since I started running DBPatcher when you messaged me, and mine has just finished, I can only assume you disabled the "Run Code Analysis" option to speed it up
[Thursday 8:45 AM] Me
    In fact, there's some failing unit tests too
<this is contrary to what Ronald claimed: he said there were no Code Analysis errors and no Unit Test failures, and I could see both>
[Thursday 8:45 AM] Ronald
    I have enabled those and haven't unchecked it before running the patch
[Thursday 8:45 AM] Me
    What is in the output window?
[Thursday 8:46 AM] Ronald
    yes there are some errors, but not related to our code and our schema
[Thursday 8:48 AM] Me
    DataWarehouse
    Error on line: 12
    ColumnListFormatting: Select column list incorrectly formatted
<clearly his code>
[Thursday 8:50 AM] Ronald
    oh ok
[Thursday 1:19 PM] Ronald
    we resolved formatting in our SQL commands
    we couldn't find which unit testing is failing and we are not sure if this unit test is part of our project. Can you help us with this one ?
[Thursday 1:21 PM] Me
    |20|[DataWarehouseTest].[Test1] |Error |
    |21|[DataWarehouseTest].[Test2] |Error |
    |22|[DataWarehouseTest].[Test3] |Error |
    |23|[DataWarehouseTest].[Test4] |Error |
    |24|[DataWarehouseTest].[Test5] |Error |
[Thursday 1:26 PM] Ronald
    I ran the DB patcher 20mins ago with the code analysis checked and we checked the output results also, we couldn't find anything related to DataWarehouseTest
    Attached the DB patcher output result we got
    [DBPatcher OutputResult.txt]
<I looked at the file. It had hundreds of errors, so it was hard to make sense of. His database was clearly screwed. No wonder it was running quickly and he couldn’t see any Unit Test errors; they simply weren’t running>
[Thursday 1:31 PM] Me
    your database looks absolutely messed up. You shouldn't have those errors. The unit tests are failing to run

    C:\DatabasePatcher\tSQLt\RunAllUnitTests.sql
    Could not find stored procedure 'tSQLt.RunAll'.

    you need a new database.
[Thursday 5:50 PM] Ronald
    Thanks for notifying us of these issues.
    Now we have fixed these issues and ran the patch, and there were no issues with our project.
[Thursday 5:51 PM] Ronald
    please review it from your side

I then looked through the changes they had made to fix the unit tests. With unit tests, you usually create an “Expected” result and set it manually, then an “Actual” result that is populated by running the actual code, and the test compares the two. They had those statements as normal, but then they had added this:

update #ActualResult set SessionGuid = '38090f0d-3496-48c3-a991-a0220fe3b58f', SlotGuid = '0b817794-7ffb-4ae3-8013-a7847a1b2139';

So this means their code isn’t returning the correct result, but they are then manipulating the result (#ActualResult) to force it to match, so the test passes. They could have just changed the Expected result instead, but that would be sabotage anyway. Why would they knowingly break a feature like this?
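For context, a well-behaved tSQLt test is shaped roughly like the sketch below (the procedure, table, and function names are my own illustration, not their actual test). Their rogue UPDATE was wedged in between populating #ActualResult and the assertion, so the comparison could never fail:

CREATE PROCEDURE [DataWarehouseTest].[Test_SessionSlots]
AS
BEGIN
   -- Expected: written out by hand; this is the answer the feature should produce
   CREATE TABLE #ExpectedResult (SessionGuid uniqueidentifier, SlotGuid uniqueidentifier);
   INSERT INTO #ExpectedResult
   VALUES ('38090f0d-3496-48c3-a991-a0220fe3b58f', '0b817794-7ffb-4ae3-8013-a7847a1b2139');

   -- Actual: produced by running the code under test (dbo.GetSessionSlots is hypothetical)
   SELECT SessionGuid, SlotGuid
   INTO #ActualResult
   FROM dbo.GetSessionSlots();

   -- <their UPDATE #ActualResult statement was inserted here, overwriting the real output>

   -- The test only passes if the two tables match
   EXEC tSQLt.AssertEqualsTable '#ExpectedResult', '#ActualResult';
END;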

Anyone who is serious about software development shouldn’t be doing this. They have “Senior” in their job title, and this change was approved by two of their team members. It was up to me to be the gatekeeper and reject this change.

[3:51 PM] Ronald
Sorry for the unit test update statement, I have removed those and all the unit tests are passing correctly.
Sorry, that was some typo.

A typo!? How can you possibly claim that was a typo? “Sorry, I accidentally bashed the keyboard and somehow produced a sequence of characters that was not only valid enough to execute without error, but made the unit tests pass too.”

I also don’t understand how you can have hundreds of errors and just continue working like everything is fine. Then, when someone is telling you something is wrong, you still pretend everything is fine. When I told him he hadn’t run DBPatcher, why didn’t he respond with “I did, but there were loads of errors. Can you help me fix this?” Proceeding like he did just wasted my time, created unnecessary friction, and made him look like a complete idiot.