Reviewing AxoCover, the Code Coverage Tool

A while ago, I wrote a blog about the Merge Ready Checklist, a process we have to follow (and prove we have followed) before we can complete and release a software project.

The process was created by some experienced former Software Testers who are now, in effect, Quality Assurance Managers.

As part of the checklist, they insist on Test Coverage of 80%, which I have always thought is an unreasonable ask. When I joined Development, we had Visual Studio Enterprise licences, which include a Test Coverage tool. However, we have since downgraded to Professional. So I asked what Test Coverage tools we could use, because it needed to be something IT had approved for download and that we had a licence for. We were told we could use AxoCover, but I found it isn’t compatible with Visual Studio 2019 or above, which was an inconvenience.

Ok, so let’s run it and see what happens. Firstly, you are greeted with a phallic symbol.

Test execution started.
      ___
     /   \
     |   |
     \   /
     /   \
    /     \
___/       \___
/               \
|    _______    |
\___/       \___/
AxoCover Test Runner Console

It’s supposed to look like their logo, which looks more like a fidget-spinner.

Here are the metrics it produces, before and after my changes.

         | Coverage | Uncovered | Total
Classes  | 11.4%    | 3040      | 3433
Methods  | 14.4%    | 17784     | 20770
Branches | 12.7%    | 29085     | 33308
Lines    | 13.4%    | 98844     | 114114

Baseline figures. Code from the Main branch before my changes.

         | Coverage | Uncovered | Total
Classes  | 11.9%    | 3034      | 3443
Methods  | 13.3%    | 17747     | 20473
Branches | 12.0%    | 29020     | 32985
Lines    | 13.4%    | 98786     | 114118

Codebase with my Project merged in.

Then I can’t even make sense of the statistics. Is it saying that I have removed methods (the total has gone down!)? I added a few classes with several methods each; these obviously contain lines of code and conditional statements, so I expected every total to be higher. Yet despite more Classes, the numbers of methods, branches and lines have all decreased. I had also added some Unit Tests, but then maybe I would have expected around 30% coverage on the new code.

I asked Tech QA to explain the figures to me, and they were like “we dunno, we aren’t developers. We just look for the 80% number“. Then I pointed out that they were supposed to be judging the 80% coverage on NEW CODE only, whereas these figures are for the entire solution. So the report doesn’t give them the evidence they want, and since it isn’t accurate either, it cannot be trusted.

After running it several times and adding/removing code to see how the numbers changed, I was suddenly low on disk space. It turns out AxoCover reports are 252MB each! Yikes.

Testing AxoCover

Since the numbers were nonsensical, I decided to create a simple test project and run it on simple examples. Let’s see how it judges what counts as a line/branch/method/class.

namespace Axo
{
	public class ExampleClass
	{
	}
}

0 methods, 0 branches, 0 lines

So a class definition with no methods and no actual lines of code results in zeroes all round. It must ignore boilerplate class code.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{

		}
	}
}

1 method, 1 branch, 3 lines

So now I have a method, but it is empty. It seems the idea of ignoring boilerplate code doesn’t apply to methods. It must be counting the method definition plus the braces inside the method for the line count, but counting braces doesn’t make sense since they just group related code. 1 branch is weird too; branches should be for IF statements, which we will test soon.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;
		}
	}
}

1 method, 1 branch, 6 lines

So now I have added 3 lines to the method. The line count has increased by 3, so that makes sense.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				result = 0;
		}
	}
}

1 method, 3 branches, 8 lines

I’ve added 2 lines, one of which is an If statement. So the branch count has increased, but by 2. This must be the “implicit else”: “result” is either greater than 10 or it isn’t, so there are 2 paths. I’d still call that 1 branch though.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();

		}

		private void MethodA()
		{

		}
	}
}

2 methods, 4 branches, 10 lines

I’ve replaced one line with a call to a new method. The method count increasing by 1 makes sense, and given the previous examples, adding a new method apparently adds 1 to the branch count for some reason. In the earlier empty-method example we got +3 to the line count, but now we only get +2, which seems wrong. I don’t think an empty method should increase the line count or the branch count at all, so the figures are becoming increasingly nonsensical.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();
			else
				MethodB();
		}

		private void MethodB()
		{
		}

		private void MethodA()
		{
		}
	}
}

3 methods, 5 branches, 13 lines

So now, instead of an implicit else, I’ve made it explicit, and created another method. The method count makes sense. The branch count has increased by 1, which I think is for the new method rather than the else. We get +3 to the line count, but by the earlier pattern shouldn’t that be 2 for the else plus up to 3 more for the new method?

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();
			else
				MethodB();
		}

		private void MethodB()
		{
		}

		private void MethodA()
		{
			{ }
		}
	}
}

3 methods, 5 branches, 15 lines

I was intrigued whether it really was counting braces. A pair of random braces gives +2 lines.

	public class ExampleClass
	{
		public ExampleClass()
		{

		}

		public ExampleClass(string test)
		{

		}
	}

2 methods, 2 branches, 6 lines

I thought I’d reset and try again. So we have 2 methods which, as we have discovered, means 2 branches in AxoCover’s metrics. It seems to count each constructor as 3 lines.

namespace Axo
{
	public class ExampleClass
	{
		public void ExampleClassMethodA()
		{

		}

		public void ExampleClassMethodB(string test)
		{

		}
	}
}

2 methods, 2 branches, 4 lines

Looking back through the examples, I wondered if it was actually counting an empty constructor as +3 lines but an empty method as +2. This example has ordinary methods rather than constructors, and at 2 lines each it seems to confirm my theory.

Discussion

When it comes to counting metrics in code, I think there is some degree of subjectivity to it. What even is a line of code? You could technically add many “statements” and put them all on one physical line. So you could look at the line numbers and see 1 line, but when actually reading the line of code, you can read multiple instructions. The analogy could be that you expect a list to be one item per line but someone could write a list on paper as a comma-separated list on one line. One is more readable than the other but it’s still valid either way. If someone asked you how many items were on the list, you could count them either way and end up with the same number. Therefore, I think the line count should actually be “statement” count.
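For example, a naive line counter sees one line in the first snippet below and three in the second, even though both contain exactly the same three statements (a trivial illustration I’ve made up):

// three statements crammed onto one physical line
var a = 3; var b = 4; var result = a * b;

// the same three statements, one per line
var a2 = 3;
var b2 = 4;
var result2 = a2 * b2;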

I do think AxoCover’s definition of a branch is wrong, and what it interprets as lines is inconsistent, possibly a bug in its logic.

On a larger and more complex codebase, the statistics it produces seem really nonsensical. So I think I have proved this tool isn’t worth using, and we definitely shouldn’t be using it as a gatekeeper on our Merge Ready Checklist.

SonarCloud, the Static Analysis Tool

SonarCloud is a static analysis tool. It runs over your codebase and points out where the code deviates from “best practices”, flags possible bugs, and reports code coverage.

When my employer bought a SonarCloud licence, there was a large emphasis on fixing existing errors and having no errors in new code. I knew emphasising this would be a bad idea because of the classic “you get what you measure”. If you tell people the goal is to have no errors, they will do whatever they can to have no errors. The point of a tool like SonarCloud is to improve code quality, but since the metric is a number, the number becomes the goal rather than the quality. The aim should be to improve the code, optimising for readability, maintainability, scalability, and code coverage.

As more people began working with SonarCloud, we saw more and more changes with titles along the lines of “Fix Sonarcloud issues“.

Human Analysis

SonarCloud is just a static analysis tool; it can only spot problems and suggest pre-defined improvements. These improvements are not always in the best interest of the codebase. You have to ask yourself objectively, “Does the change I’m making make the code better or worse?” Before you can answer that, you need to understand why Sonar thinks there is a problem with the code; then you can decide whether Sonar is right. Some of the rules are not applicable everywhere.

I have seen a number of changes where the code is actually made worse in order to satisfy Sonar.

Example 1: 7 Argument Limit

A constructor or method with loads of parameters/arguments can be a sign of bad design; maybe some of the parameters can be grouped together inside an object. Once you reach 8 arguments, Sonar flags it. A simple “fix” is to create a class and throw a couple of the parameters into it. That satisfies the rule, but it doesn’t make logical sense unless the parameters really are related. Adding a new single-use class can just make the codebase more cluttered and seemingly more complicated. I would rather see a method with 8 values than some convoluted extra type that exists “just because of Sonar”. Mark Seemann has a good blog post about Curbing code rot with thresholds.
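To caricature the “fix” (all names here are hypothetical, not from our codebase): the rule stops firing, but nothing is actually better.

// Before: 8 arguments, so Sonar flags the method.
public void CreateOrder(int id, string customer, decimal price, int quantity,
    bool express, string currency, DateTime placedAt, string notes)
{
}

// "Fixed": two unrelated arguments stuffed into a grab-bag class.
// Sonar is happy; the reader is not.
public class OrderExtras
{
    public bool Express { get; set; }
    public string Notes { get; set; }
}

public void CreateOrder(int id, string customer, decimal price, int quantity,
    string currency, DateTime placedAt, OrderExtras extras)
{
}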

Example 2: TryParse

Another example: I once wrote some code containing the following:

// The non-short-circuiting & means both TryParse calls always run,
// so both x and y are definitely assigned afterwards.
var ok = int.TryParse(a, out var x) & int.TryParse(b, out var y);
//something with x and y

Sonar complained that the bitwise & was confusing and suggested I use && instead. However, if I did that, then “y” wouldn’t always be definitely assigned because of the short-circuiting, and the code wouldn’t compile. I was about to reject the Sonar issue as “suggests code that doesn’t compile” and just keep my version.

Then I thought, “if Sonar cannot understand my code to make a decent suggestion, maybe other developers can’t either“. I was trying to be too clever with my code.

Instead I changed the code to:

var ok1 = int.TryParse(a, out var x);
var ok2 = int.TryParse(b, out var y);
var ok = ok1 && ok2;
//something with x and y

It wasn’t as terse as my original version, but it was certainly easier to read and understand, and Sonar didn’t have a problem with it any more either.

Example 3: Cyclomatic Complexity

When a method has loads of If statements, it creates loads of paths that can be executed, which means that if you are aiming for true 100% test coverage, you have loads of tests to write. It can easily make the code hard to read and understand too. Past a certain threshold, Sonar suggests breaking the method into smaller methods. I have seen people take this extremely literally, and you end up with a design that looks like:

  SetPersonalDetailDifferences1(returnList);
  SetPersonalDetailDifferences2(region, returnList);


So there is no logical grouping for what goes in part 1 and part 2; it’s just that enough code gets placed in the first method and everything else goes in the second. Now the original single method is half the size for no logical reason other than to satisfy the Sonar rule (a more logical split is sketched after the exchange below).

Me (rhetorically)
Are these fields logically grouped or is this just to satisfy sonar?

Brijesh
It is just to satisfy sonar
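A split along logical seams might instead look something like this (the method names are my invention; I haven’t seen the real fields):

  // grouped by what the fields mean, not by where the method happened to be cut in half
  SetNameDifferences(returnList);
  SetAddressDifferences(region, returnList);
  SetContactDifferences(returnList);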

Example 4: Making The Code Worse

Nullable DateTime

A DateTime always has a value; it is a value type. However, you can declare a nullable DateTime in C# by appending a question mark: DateTime?

The existing code was checking a standard DateTime for null, which can never happen.

if (startDate == null)
   throw new ArgumentNullException("startDate");

The Code Analysis report was correctly flagging this code as completely unnecessary. Instead of removing the code, the developer then changed it to

if ((DateTime?)startDate == null)
    throw new ArgumentNullException("startDate");

The method was still accepting startDate as a non-nullable DateTime, so it could never be null. But now it was being cast to a nullable DateTime, so the check against null is technically valid (the two defensible fixes are sketched after the exchange below).

Me:
Why are you casting to nullable datetime?

Chandeshwar
dateTime is value type , it can never be null value.
if check dateTime == null, it's always return false .

Me:
Yes, that is correct Chandeshwar. That’s why you can delete the code completely. Your code is always false too.
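To spell out the two defensible fixes (a sketch; the surrounding method is my invention):

// Option 1: the parameter is a non-nullable DateTime, so delete the check entirely.
public void Schedule(DateTime startDate)
{
    // no null check needed: a DateTime always has a value
}

// Option 2: if "no start date" is genuinely a valid input, make the parameter
// nullable, and then the null check actually means something.
public void Schedule(DateTime? startDate)
{
    if (startDate == null)
        throw new ArgumentNullException(nameof(startDate));
}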

Sometimes this type of scenario leads to multiple attempts, either:

  1. because they make some changes and still get the same Sonar error,
  2. because they introduce a new problem,
  3. or because the reviewer dislikes the changes and wants them changed back.

So there are plenty of files where our Change History looks like this:

Fix Code Smell Issue <- doing what they think Sonar wanted them to do
Additional Static Analysis Fixes <-they messed up, so tried again
Addressing code review comments <- I pointed out it still wasn’t correct

Example 5: Magic Numbers

Magic numbers should not be used

A magic number is a number that comes out of nowhere, and is directly used in a statement. Magic numbers are often used, for instance to limit the number of iterations of a loop, to test the value of a property, etc.

Using magic numbers may seem obvious and straightforward when you’re writing a piece of code, but they are much less obvious and straightforward at debugging time

That is why magic numbers must be demystified by first being assigned to clearly named variables before being used.

-1, 0 and 1 are not considered magic numbers.

Vignesh changed code like

dosagesCount == 1

to

dosagesCount == Constants.Single

Me:
Constants are for a different purpose! They are not for replacing all the numbers in the codebase. It's a good idea to give numbers clear names, and if the same value is used multiple times, it means if you need to change the number, it will update in all places.

Vignesh
This is the rule we follow (quotes the magic number description shown above)... and I got comments from my senior level.

WHAT DOES IT SAY, VIGNESH?
“-1, 0 and 1 are not considered magic numbers”

And you have replaced 1 with the word “SINGLE”
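For contrast, here is a named constant actually pulling its weight (a made-up example, nothing from our codebase):

// 3 on its own tells the reader nothing; the name carries the meaning,
// and changing the policy means changing one line.
private const int MaxRetryAttempts = 3;

if (attempts >= MaxRetryAttempts)
    throw new InvalidOperationException("Too many retry attempts");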

Example 6: Lack of Pragmatism

It frustrates me so much that so many developers don’t even agree with Sonar, or don’t understand the change, yet still attempt to make it anyway. Their sole goal becomes removing the error, to the point that they lose sight of the true aim: writing great code that works.

Dave
OK, but why did it suggest it, and why does it make the code better? Did you understand the suggestion or just blindly do what it said?

Pavel
I think I understood it. The comment was: "Add the default parameter value defined in the overridden method". Default arguments are determined by the static type of the object. If a default argument is different for a parameter in an overriding method, the value used in the call will be different when calls are made via the base or derived object, which may be contrary to developer expectations.
That's why it suggested it. But in my opinion this change is redundant and doesn't make the code better.

There was some code with loads of if statements. It was checking the types of “nodes” in a tree structure by attempting to cast to a type: if the cast succeeded, it would go into that code block; otherwise it would attempt to cast to a different type.

Even though the developer didn’t need to change this code, he changed it anyway to try to resolve the Sonar issues around the casting. However, he only updated some lines and not others, so now the codebase is inconsistent, more confusing, and still contains Sonar errors. He also moved some lines out of the if statements and performed all the casting at the top of the method. So it became less efficient too, because once you have found a match you don’t need to keep attempting casts.

var creterionTag = row.Tag as LinkedCriterion;
var relationshipTag = row.Tag as Relationship;
var attributeTag = row.Tag as Attribute;
var resultSetRuleTag = row.Tag as ResultSetRule;
var conceptOrderingTag = row.Tag as ConceptOrdering;

Me
why change to "as casts" for most, but then not for Table, Concept and SharingDisplayValue?
I'm not even sure if there is an advantage to doing it this way. We now have several variables where only 1 will be set, and the rest are null.
Might be better spending more time refactoring it out to get rid of the branching.
Pattern matching is probably the neatest way for now. https://docs.microsoft.com/en-us/dotnet/csharp/pattern-matching

Kalya
I just simply resolved some existing SonarQube issue which not raised because of our changes. It is a kind of help to resolving existing issues, It is very difficult to resolving all the issues as of now

So he tried to help, found it too difficult, gave up, but still decided to submit the changes for review, despite making the code worse.
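For what it’s worth, the pattern matching I linked to would collapse that block into something like the following (a sketch based only on the as-cast fragment quoted earlier; I haven’t shown the real handling code):

switch (row.Tag)
{
    case LinkedCriterion criterion:
        // handle a linked criterion; the cast happens once, only when it matches
        break;
    case Relationship relationship:
        // handle a relationship
        break;
    case Attribute attribute:
        // handle an attribute
        break;
    // ...the remaining node types follow the same shape
}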

Example 7: Just Suppress It!

Of course, you can still get rid of the error without actually fixing anything, and merely hide it from the managers who only look at the figures:

#pragma warning disable S1075 // URIs should not be hardcoded
public const string InfoUrl = "http://med.info";
#pragma warning restore S1075 // URIs should not be hardcoded

Conclusion

The goal isn’t to make Sonar happy; the goal is to write good, clean code. Sonar is a guide to help you do that, but it doesn’t guarantee success.

Me

Merge Ready Checklist

We have a small team of people known as Tech Quality Assurance, made up of experienced ex-Test Managers, although I often think they can’t have learned much from that experience, given some of the ideas they come up with and how poor their communication is.

A few years back, there was a project I was working on which was near completion, and I got asked to fill in a “Merge Ready Checklist“. I was like “errr, what’s this?”

If the QA team are going to introduce a new process, shouldn’t it have been communicated when it was created?

I looked through it, and most of the evidence they wanted was for things you needed to be aware of up front, so you could track them and gather evidence as the project progressed. When I saw it, I thought we wouldn’t possibly be allowed to merge. Even some of the simple stuff like:

“Branches are expected to be kept in sync with root development branch on weekly basis. Provide evidence of the frequency you were merging from Main and the date of the most recent merge.”

process on branch-merging

I merged when I wanted, so what are they going to do? Reject the project because I didn’t merge enough? The branch history is the evidence, so why is it worded as “provide evidence“? Why not just ask for a link to the code branch?

They then insist on having Test Coverage of 80%, which I always think is an unreasonable ask; that demand is what sent me down the whole AxoCover rabbit hole covered earlier, complete with the nonsensical statistics and the 252MB reports.

They also wanted us to run the static analysis tool Sonar, but with the licence we had paid for, we could only run it on our Main branch. So we needed to merge our project in to run Sonar, but they wanted the Sonar results before authorising the merge. The classic chicken-and-egg scenario. Later on, we did get better licences so we could run Sonar on our project branches.

When we submitted our Merge Ready Checklist to the best of our abilities, we got the following feedback:

“TechQA have reviewed the MRC and assessed it with a critical level of risk. There are a number of items that we would deem vital to address however even resolving these items – it would leave the project at critical.”

We failed the Merge Ready Checklist

Surely that is the biggest rejection. Even resolving the issues won’t change their judgement.

We arranged a meeting with them to discuss the way forward. At one point they said:

“These are all things that are too late to address but are common themes we raise time and time again and significantly reduce confidence in the quality of a release.”

TechQA

Surely that’s a problem with their communication. The process is just sprung on teams when they want to merge in, the problems aren’t fed back into the system or communicated to other teams so they can’t learn from them, and then those teams proceed to fail too.

I then asked the release manager what we could do, and he said that the TechQA team are there just to advise him on what should and shouldn’t go in the release. Due to contractual deadlines, he lets projects go in anyway, as long as the team explains what bugs they are aware of. All the other aspects like Sonar and Test Coverage don’t bother him at all, because they are more “Technical Debt” and not really reflective of the user’s experience.

So what we are concluding is that the TechQA team are completely pointless, and we may as well save money by binning them off, because they are creating processes that no one cares about and that they aren’t enforcing anyway.

It looked like they were also doing these for another company we acquired. I found this on another team’s review:

It should come as no surprise that the assessment of the MRC comes out as critical.

  1. The “Definition of Done” specified in the document is not the one followed by the team as you point out that no testing was part of the user story and a Product Owner was not attached to the project towards the end and therefore could not do the PO review.
  2. Also, no unit tests exist for the code and there is no ability to confirm the changes have not degraded any quality metrics.
  3. Elements of the regression plan have not been run and once again the team have not been given sufficient time to consider quality.
  4. On top of all that the code is written in Visual Foxpro which was EOL’d by Microsoft in Jan 2010, that’s 11.5 years out of support!

An example from the other week saw a lead developer giving a lot of backchat to TechQA.

"Link is to a PR not to a build - where do I go to see the state of the build"
 > All PR's into master need a passing build. If you would like a hand using Azure DevOps please let me know and I'll see if can find time to show you how it works. 
 
"there must be some sort of output that can be used as evidence surely"
 > Once you have looked at the build, you can also view the 35k passing unit tests. 
 
"Need some evidence of your test coverage levels in the form of a report or chart from the text coverage tooling you are using"
 > You can see the tests written in the PR to master. Each PR for the new work was accompanied by tests.
 > Coverage Report attached for the impacted code, although code coverage is a useless metric.
 
 "Sonar - Please provide evidence when you have it available"
 > If the work is ever completed, then we will have automatic sonar reports for projects.

No idea why he doesn’t have Sonar running, but he obviously doesn’t care enough, and he definitely doesn’t care about Test Coverage. I do find it strange that we have been using Azure DevOps for years now and TechQA still don’t know how our development process works, when you would think that would be a prerequisite for doing their job.

They should be liaising with the Development team in order to create processes that are feasible, useful, and accurate.

Code Analysis: You Cannot Have More Than 6 Parameters

Recently, a software developer refactored some method calls in a strange way, so I questioned him on it. He stated “you cannot have more than 6 parameters”.

This is nonsense, but what he means is that there is a Code Analysis rule that flags this up. The theory is that if there are many parameters, some of them are probably related, and so they can be grouped into a common object.

public void AddUser(
   string forename,
   string surname,
   Date dateOfBirth,
   string addressLine1,
   string addressLine2,
   string town,
   string county)

In the example above, you can group the last 4 fields into an Address object.
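A rough sketch of that refactoring (keeping the Date type from the example above):

public class Address
{
   public string Line1 { get; set; }
   public string Line2 { get; set; }
   public string Town { get; set; }
   public string County { get; set; }
}

public void AddUser(
   string forename,
   string surname,
   Date dateOfBirth,
   Address address)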

When the Code Analysis rule flags your code, I think it’s more of a question of “can you group these parameters together?” and not a “you must never have more than 6 parameters”.

If you think the code is fine, you can simply suppress the rule. If you think the rule has a point, you can create a new class to group the parameters together (or use an existing class if there is something relevant).

I’ve written quite a few blogs on Developers writing code in strange ways because Code Analysis prompted them to. I think Code Analysis aims to prompt you to write code in a standard way, and avoid common errors/inefficiencies. It seems to have the opposite effect because many developers don’t think for themselves.

Out of interest, I asked a nerdy developer if there really was a limit to how many parameters you can have. So, like a total nerd, he tested it. Apparently, in C#, the most you can have is 65536 parameters in a constructor. Then he gave me additional nerd stats: his test file was 4MB of source code, which compiled to a 1.9MB exe.

No idea why 65536 is the limit (it is 2^16, so presumably some 16-bit count in the compiler or file format), and I didn’t spend any time investigating it or quizzing him further. I just accepted that it’s such a large number you would never reach it. It’s more than 100, so you may as well say there isn’t a limit.

Database Patching – Everything Is Fine

When it comes to changes to the Database, we have a tool (which I will call DBPatcher) which applies your changes, runs the Unit Tests, and runs Code Analysis (finding bad formatting, violations of coding standards, common mistakes, etc.). So it is vital that this passes successfully before you send your code for review.

I was doing a Code Review, aka Pull Request (PR), and I saw evidence that the team hadn’t run their changes through this DBPatcher tool.

Ronald was eager to get his code checked in, so he wanted me to approve it. However, I wanted them to run the tool and fix any issues first. The conversation went like this:

[Thursday 8:21 AM] Ronald
     Can we complete the PR? do you have any doubts on it 
​[Thursday 8:23 AM] Me
    I'm convinced DBPatcher will flag those select statements because there is a mix of tabs and spaces
<yes it is trivial to flag, but DBPatcher will flag this, so this is evidence they haven’t run it. There could be other errors too, but I will let DBPatcher find them>
​[Thursday 8:23 AM] Ronald
    OK, thank you. I will complete the PR 
​[Thursday 8:25 AM] Me
    what? I am saying the DB patcher will give you errors
​[Thursday 8:26 AM] Ronald
    sorry for misunderstanding 
    I ran it in the morning. We didn't get any error for our DB changes and unit testing also didn't throw any error for our code
<he attempts to send me a screenshot of the final result but it didn’t seem to transfer>
​[Thursday 8:44 AM] Me
   The image isn't showing for me. But since I started running DBPatcher when you messaged me, and mine has just finished, I can only assume you disabled the "Run Code Analysis" to speed it up
​[Thursday 8:45 AM] Me
    In fact, there's some failing unit tests too
<this is contrary to what Ronald claimed. He said there were no Code Analysis errors and no Unit Test failures, and I can see both>
[Thursday 8:45 AM] Ronald
   I have enabled those and haven't unchecked it before running the patch 
​[Thursday 8:45 AM] Me
    What is in the output window?
​[Thursday 8:46 AM] Ronald
    yes there are some errors, but not related to our code and our schema 
​[Thursday 8:48 AM] Me    
DataWarehouse
Error on line: 12
ColumnListFormatting: Select column list incorrectly formatted
<clearly his code>
​[Thursday 8:50 AM] Ronald
    oh ok 
​[Thursday 1:19 PM] Ronald
    we resolved formatting in our SQL commands 
    we couldn't find which unit testing is failing and we are not sure if this unit test is part of our project. Can you help us with this one ?
​[Thursday 1:21 PM] Me
    
|20|[DataWarehouseTest].[Test1] |Error |
|21|[DataWarehouseTest].[Test2] |Error |
|22|[DataWarehouseTest].[Test3] |Error |
|23|[DataWarehouseTest].[Test4] |Error |
|24|[DataWarehouseTest].[Test5] |Error |
​[Thursday 1:26 PM] Ronald
    
I ran the DB patcher 20mins ago with the code analysis checked and we checked the output results also, we couldn't find anything related to DataWarehouseTest 
Attached the DB patcher output result we got 
[DBPatcher OutputResult.txt] 
<I look at the file. It has hundreds of errors, so it is hard to make sense of. His database is clearly screwed. No wonder it was running quick and he couldn’t see any Unit Test errors; they simply weren’t running>
​[Thursday 1:31 PM] Me
    your database looks absolutely messed up. You shouldn't have those errors. The unit tests are failing to run

C:\DatabasePatcher\tSQLt\RunAllUnitTests.sql
Could not find stored procedure 'tSQLt.RunAll'.

    you need a new database.
[Thursday 5:50 PM] Ronald
    Thanks for notifying us of these issues.
    Now we have fixed these issues and ran the patch, and there were no issues with our project.
​[Thursday 5:51 PM] Ronald
    please review it from your side 

I then looked through their changes which “fixed” the unit tests. With Unit Tests, you usually create a variable called “Expected” and set it manually; then you create an “Actual” variable, which is set by running the actual code. They had those statements as normal, but then they had added this:

update #ActualResult set SessionGuid = '38090f0d-3496-48c3-a991-a0220fe3b58f', SlotGuid = '0b817794-7ffb-4ae3-8013-a7847a1b2139';

So their code isn’t returning the correct result, and they are manipulating the result (#ActualResult) to force it to match, so the test passes. They could have just changed the Expected result instead, but that would be sabotage all the same. Why would you knowingly break a feature like this?
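In C# terms, what they did is the moral equivalent of this (method names hypothetical):

var expected = GetExpectedResult();
var actual = RunCodeUnderTest();
actual = expected;                 // the "fix": the assertion below can never fail
Assert.AreEqual(expected, actual); // always passes, so it proves nothing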

Anyone who is serious about software development shouldn’t be doing this. They have “Senior” in their job title, and this change was approved by two of their team members. It was up to me to be the gatekeeper and reject this change.

[3:51 PM] Ronald
Sorry for the unit test update statement, I have removed those and all the unit tests are passing correctly.
Sorry, that was some typo.

A typo!? How can you possibly claim that was a typo? “Sorry, I accidentally bashed the keyboard and somehow produced a sequence of characters that was not only valid and executed without error, but made the unit tests pass too.”

I also don’t understand how you can have hundreds of errors and just continue working like everything is fine. Then, when someone tells you something is wrong, you still pretend everything is fine. When I told him he hadn’t run DBPatcher, why didn’t he respond with “I did, but there were loads of errors. Can you help me fix this?” Proceeding like he did just wasted my time, created unnecessary friction, and made him look like a complete idiot.