Time

When it comes to software, the concept of time can cause problems. There are some genuinely interesting scenarios, but even in simple applications, some developers really struggle with the basics.

In terms of standard problems, the client and server clocks can be out of sync. This can simply be because they are set incorrectly, or because they are using different timezones. As a developer, if you are looking at times in log files across the client and server, it causes confusion when the timestamps don’t line up. A common thing I have seen is that some servers don’t observe the Daylight Saving Time we have in the UK, but the clients often do. So the server can be an hour out.

Daylight Saving Time is interesting because time shifts forward or backwards by one hour. So time isn’t linear.

I recall reading a blog about time by Jon Skeet, which discussed how historical dates can also suddenly change. If a country switched to a different calendar system entirely, then moving forward a single day could suddenly jump years to align with the new system.
Computerphile also have a discussion on this: The Problem with Time & Timezones – Computerphile.

Leap Years

We once had a leap year bug because someone created a new date using the current day and month, and added 1 to the year. So when it was 29th Feb, it tried to create a date of 29th Feb for the following year, which isn’t a valid date, and the feature crashed. Everyone was panicking trying to rush out a fix, but then we realised the earliest we could get the fix out to our customers was tomorrow, by which point the bug couldn’t happen. Not for another 4 years anyway. It was hilarious.
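
I don’t have the original code to hand, but the bug boils down to something like this sketch (the dates here are hypothetical). The fix relies on the fact that DateTime.AddYears clamps to the last valid day of the month:

using System;

class LeapYearExample
{
	static void Main()
	{
		var today = new DateTime(2024, 2, 29);

		// The buggy approach: rebuild the date from its parts with year + 1.
		// 29th February 2025 doesn't exist, so this line throws ArgumentOutOfRangeException:
		// var nextYear = new DateTime(today.Year + 1, today.Month, today.Day);

		// The safe approach: AddYears clamps to the last valid day of the month,
		// so 29th Feb 2024 becomes 28th Feb 2025.
		var nextYear = today.AddYears(1);
		Console.WriteLine(nextYear.ToString("yyyy-MM-dd")); // 2025-02-28
	}
}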

-1

One weird mistake I saw recently: a developer defined a variable and set it to 5. The code was supposed to make sure that we never make an API call more than once every 5 minutes. However, they then subtracted 1, so it was checking every 4 minutes instead.

var minimumNumberOfSecondsRequiredBetweenEachAPICall = (NumberOfMinutesRequiredBetweenEachAPICall - 1) * 60;

Ages

You would think everyone would understand the concept of ages, since everyone has an age and it increases by 1 every time you have a birthday. However, many developers seem to struggle with it. The last implementation I saw had the following:

int age = DateTime.Now.Year - dateOfBirth.Year;

So it can be one year out, because it basically assumes your birthday is on 1st January.
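
The usual fix is to knock a year off when this year’s birthday hasn’t happened yet. A minimal sketch:

public static int CalculateAge(DateTime dateOfBirth, DateTime today)
{
	int age = today.Year - dateOfBirth.Year;

	// If this year's birthday hasn't occurred yet, they are still a year younger.
	if (today < dateOfBirth.AddYears(age))
		age--;

	return age;
}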

It reminds me of an exchange on Twitter that I saw years ago. It was in the context of football.

PA: Why are Arsenal paying £25m for a 29 year old striker?
G: he’s 28 btw
PA: He’s a lot nearer to 29 than 28, that’s a fact
G: He’s 28, that’s a fact
PA: Why am I not surprised that fractions are beyond you. The day after his birthday, he is no longer 28.
G: He’s 28 until he becomes 29. That’s how it works
PA: Perhaps if you had paid more attention in Maths lessons? You might remember “round up or down to the nearest whole number”
G: He’s 28. That’s a fact.
PA: No, it is not. £1.75 is not one pound. You don’t even understand what a fact is now.
G: Until he is 29, he is 28.

When it is the day after your birth, are you 1 day old? Technically, you could be just a minute old but claim you are 1 day old.

My instinct when performing mathematics on dates would be to use an existing date library. Another developer tried to write something themselves. It seemed a bit complex to me, but I think it actually worked, or at least seemed reasonable for how they wanted to use it.


public static double AgeInYearsAtDate(DateTime effectiveDate, DateTime dateOfBirth)
{
	double daysInYear = 365.25;
	int completeYears = Age.GetYears(dateOfBirth, effectiveDate);

	dateOfBirth = dateOfBirth.AddYears(completeYears);

	double proportion = effectiveDate == dateOfBirth ? 0 : Age.GetDays(dateOfBirth, effectiveDate) / daysInYear;

	return completeYears + proportion;
}

public static string ConvertCurrentAgeToYearsAndMonths(double age)
{
	int monthsInYear = 12;
	int years = (int)age;
	int months = (int)Math.Round((age - (int)age) * monthsInYear);

	return $"{years} year{(years == 1 ? String.Empty : "s")} and {months} month{(months == 1 ? String.Empty : "s")}";
}

Ages Part 2

Another developer was testing his age code and wrote this:

new object[]
{
	new DateTime(2010, 05, 31),
	new DateTime(2009, 06, 01),
	AgeRange.UpToOneYear,
	"52 weeks and 0 days"
},

If there are 52 weeks in a year, then is that a year? It kinda looks 1 day short to me. Time is mental, isn’t it?

Incompetent Developer Tales Part 2

Although junior developers can be useful, I have been against my employer’s over-reliance on them. If you hire someone cheap with high potential, then as long as you reward them, you end up with someone who knows your system, loves the company, and is a great developer.

The problem is that we love hiring them but not rewarding them, which means the best ones leave and the bad ones remain. The focus on cheap wages has led to more and more offshoring, which has led to the rapid expansion of our Indian office. How do you hire so many people quickly? Lower the standards. So now you have a large number of incompetent people, but they are cheap.

It’s not the fact they are Indian that is the problem; it is the fact that demand is high and our hiring standard is low. The effect on the work culture is that it is easy to see a discrepancy in quality between the UK and Indian developers as a whole, which means you end up seeing them as inferior, despite some of them actually being good. The good ones tend to be the ones we hired into senior positions, so they would naturally have higher wages anyway.

So, building on from my recent blog on one particular developer, here’s a collection of things other people have done:

Rollbacks

James has just rolled back the changes of someone who merged into the wrong folders. Didn’t they think something wasn’t right when it showed [add] next to the main Database folder? It looks like they copied the folders up one level, so it was re-adding everything as a duplicate.

Dean 16:27:
haha
Me 16:28:
that change by Portia is mad when you look at the changesets
original patch
change to patch,
fix db patching errors,
rollback,
rollback other changes,
rollback from xml file,
then Chris comes into undo the rollback
Dean 16:30:
it's just wrong that we've got people who don't know what they're doing
Me 16:30:
but it's cheap
Portia went wild on that second “rollback” and manually reverted the files.
removed 8 patches and added 1, instead of removing 1
it's amazing how many rollbacks happen these days
Dean 16:40:
Rollbacks worry me in general

“used for identification”

In general, SQL databases are designed to reduce redundancy. So if you have a table storing a list of “job roles”, and another table needs that information, you link them together via the ID of the row. What you shouldn’t do is copy the data into the other table, because if the data needs to be updated, you then have to remember to update it in both places, and it doubles the storage space too.

I saw a developer doing exactly this. It was only one column of text, but why were they copying it over into their new table instead of just referencing it?

Me
Is there a reason why this isn't being taken from JobCategory?
It is never returned in a stored proc call so there is no need for it

Vignesh
JobCategoryName used for identification of JobCategoryID and not used in stored proc. Thanks

Me
Regardless if the system uses the data, or if it is there for our Staff to read in the database; you would just write a query that joins onto the JobCategory table.
what if the JobCategoryName in JobCategory is updated? The names in your new table won't be accurate

Vignesh
JogCategoryID only used in stored proc/code, JobCategoryName is just an identification for JogCategoryID in the table. Thanks.

Me
So it needs to be removed?

Vignesh
JobCategoryName was removed since it is used in code or stored proc. Thanks.

Refresh Link

private void llbl_RefreshList_LinkClicked(Object sender, LinkLabelLinkClickedEventArgs e)
{
	IEnumerable<ExecutionScheduleDetail> runningItems = _service.GetAllScheduleSearches(AuthenticatedUser.Organisation.Guid);
	int count = (from runningitem in runningItems select runningitem).Count();
	if (count > 0)
	{
		LoadRunningSearches();
	}
	else
	{
		MessageBox.Show(
			"This run has just completed and details can no longer be viewed.",
			"Running Searches",
			MessageBoxButtons.OK,
			MessageBoxIcon.Error);
		LoadRunningSearches();
	}
}

Instead of just getting the count from the IEnumerable, they are selecting all the items they already have. And since they are calling the LoadRunningSearches method in both branches of the IF, it may as well be moved out. The code would then just be:

if count == 0, show message box.
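
A minimal sketch of that simplification, keeping their names (Any() is clearer still, since we only care whether there are zero items):

private void llbl_RefreshList_LinkClicked(Object sender, LinkLabelLinkClickedEventArgs e)
{
	IEnumerable<ExecutionScheduleDetail> runningItems = _service.GetAllScheduleSearches(AuthenticatedUser.Organisation.Guid);

	// No re-select needed; ask the sequence directly whether it has items.
	if (!runningItems.Any())
	{
		MessageBox.Show(
			"This run has just completed and details can no longer be viewed.",
			"Running Searches",
			MessageBoxButtons.OK,
			MessageBoxIcon.Error);
	}

	// Called unconditionally, since it appeared in both branches of the original IF.
	LoadRunningSearches();
}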

When a review comment was left about not specifying the method call twice, he just moved the GetAllScheduleSearches call to a field, which meant it would no longer get the latest searches every time you clicked. Since the link is there to “Refresh”, it was no longer doing what it was supposed to.

If Constraint Exists

 if exists (Select * from sys.check_constraints where name = 'DF_Templates_AgeRange_AgeTo ')
 alter table Templates.AgeRange drop constraint DF_Templates_AgeRange_AgeTo;

I noticed there was a redundant space at the end of the constraint name, so I thought it was likely that the check would never be true (unless SQL ignores it automatically?).

Me 4/28/2022
Is this space intentional?
Vignesh 4/28/2022
Removed
Joel 4/28/2022
Do you actually need the select check? I think you should be able to use the dependent patch mechanism instead.
Vignesh 4/28/2022
I think it is not needed, we checked the condition to avoid any error. if its true it will execute
Joel 4/28/2022
The constraint is added in patch 7.809, on which you've marked this patch as dependent. So this patch will literally only run if the constraint was created successfully.

We have an attribute in the XML where you can state dependent patches, so a patch will only run if the prerequisite patch has run. Vignesh was aware of it because he had used it, but then he also had this guard clause that possibly didn’t even work.

When told he didn’t need it, he agreed that it wasn’t needed, yet he had put it in there to “avoid error”. Does he mean there was an error? Or was he just being overly cautious?

Spelling

inActive ? ResourceStatus.Inactive : ResourceStatus.Active

Sometimes, it’s the little things like this that annoy me. How can you write “inActive” and not realise that either:

  • you have spelt “inactive” wrong,
  • or, alternatively, someone else has when they created this enum.

So why did they not fix it? There’s clearly an inconsistency there.

In a similar fashion, I saw this recently:

//Recieved and Transalted

Both words are spelt wrong. It was also copied and pasted from another file. It does pose a good question though: if you copy and paste, should you correct the spelling, or leave it for maximum laziness? I guess the advantage of leaving it is that if you later search for that text to find the original code, it’s better to match it as closely as possible.

throw new NotSupportedException("Can't able to fetch template consultation details!");

Our Indian colleagues always seem to write “Can able” and “Can’t able” instead of just “can” and “unable”.

Untested code

string.Format("{0} {1}", _isRegistered ? "Not Registered:" : "Registered:", statusCode)

The logic was consistently backwards. It wasn’t a case of typing it wrong and not bothering to test it; there were several files with the same inverted logic. I pointed it out and they rewrote the entire thing.

Reviewing AxoCover, the Code Coverage Tool

A while ago, I wrote a blog about the Merge Ready Checklist, a process we have to follow, and prove we have followed, to be able to complete and release our software projects.

The process was created by some experienced former Software Testers, who are now basically Quality Assurance Managers.

As part of the checklist, they insist on Test Coverage of 80%, which I have always thought is an unreasonable ask. When I joined Development, we had Visual Studio Enterprise licences, which include a Test Coverage tool. However, we have since downgraded to Professional. So I asked what Test Coverage tools we could use, because it needs to be something IT have approved for download and that we have a licence for. We were told we could use AxoCover, but I found it wasn’t compatible with Visual Studio 2019 or above, which was an inconvenience.

OK, so let’s run it and see what happens. Firstly, you are greeted with a phallic symbol.

Test execution started.
      ___
     /   \
     |   |
     \   /
     /   \
    /     \
___/       \___
/               \
|    _______    |
\___/       \___/
AxoCover Test Runner Console

It’s supposed to look like their logo, which looks more like a fidget-spinner.

Here are the metrics it produces, before and after.

            Coverage   Uncovered   Total
Classes     11.4%      3040        3433
Methods     14.4%      17784       20770
Branches    12.7%      29085       33308
Lines       13.4%      98844      114114

Baseline figures. Code from the Main branch before my changes.

            Coverage   Uncovered   Total
Classes     11.9%      3034        3443
Methods     13.3%      17747       20473
Branches    12.0%      29020       32985
Lines       13.4%      98786      114118

Codebase with my Project merged in.

Then I can’t even make sense of the statistics. Is it saying that I have removed methods (the total has gone down)? I added a few classes with several methods each, and these obviously contain lines of code and conditional statements, so I would expect every value to be higher. But despite more classes, the number of methods and branches has decreased. I had also added some unit tests, so I might have expected something like 30% coverage on the new code.

I asked Tech QA to explain the figures to me, and they were like “we dunno, we aren’t developers. We just look for the 80% number“. Then I pointed out that they were supposed to be judging the 80% coverage on NEW CODE only, and these figures are for the entire solution. So this doesn’t give them the evidence they want, it’s not accurate, and it cannot be trusted.

After running it several times and adding/removing code to see how the numbers changed, I was suddenly low on disk space. It turns out AxoCover reports are 252MB each! Yikes.

Testing AxoCover

Since the numbers were nonsensical, I decided to create a simple test project and run it on simple examples, to see how it judges what counts as a line/branch/method/class.

namespace Axo
{
	public class ExampleClass
	{
	}
}

0 methods, 0 branches, 0 lines

So a class definition with no methods and no actual lines of code results in zeroes all round. It must ignore boilerplate class code.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{

		}
	}
}

1 method, 1 branch, 3 lines

So now I have a method, but it is empty. It seems the idea of ignoring boilerplate code doesn’t apply to methods. It must count the method definition plus the braces inside the method for the line count, but it doesn’t make sense to count braces, since they just group related code. 1 branch is weird too; branches should relate to IF statements, which we will test soon.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;
		}
	}
}

1 method, 1 branch, 6 lines

So now I have added 3 lines to the method. The line count has increased by 3, so that seems to make sense.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				result = 0;
		}
	}
}

1 method, 3 branches, 8 lines

I’ve added 2 lines, one of which is an If statement. So the branch count has gone up, but it has increased by 2. This must be the “implicit else”: “result” is either greater than 10 or it isn’t, so there are 2 paths. I’d still say that is 1 branch though.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();

		}

		private void MethodA()
		{

		}
	}
}

2 methods, 4 branches, 10 lines

I’ve replaced one line with a call to a new method. The method count increasing by 1 makes sense. Given the previous examples, adding a new method adds 1 to the branch count for some reason. In the empty method example, we got +3 to the line count, but now we only get +2, so that seems wrong. I don’t think an empty method should increase the line count or the branch count at all, so the figures are becoming increasingly nonsensical.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();
			else
				MethodB();
		}

		private void MethodB()
		{
		}

		private void MethodA()
		{
		}
	}
}

3 methods, 5 branches, 13 lines

So now, instead of an implicit else, I’ve made it explicit and created another method. The method count makes sense. The branch count has increased by 1, which I think is for the new method and not the else. We have +3 to the line count, but shouldn’t that be +2 for the else plus up to +3 for the new method?

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();
			else
				MethodB();
		}

		private void MethodB()
		{
		}

		private void MethodA()
		{
			{ }
		}
	}
}

3 methods, 5 branches, 15 lines

I was intrigued as to whether it really was counting braces. Some random braces give +2.

	public class ExampleClass
	{
		public ExampleClass()
		{

		}

		public ExampleClass(string test)
		{

		}
	}

2 methods, 2 branches, 6 lines

I thought I’d reset and try again. So we have 2 methods, which, as we have discovered, means 2 branches in AxoCover’s metrics. It seems to count each method as +3 lines.

namespace Axo
{
	public class ExampleClass
	{
		public void ExampleClassMethodA()
		{

		}

		public void ExampleClassMethodB(string test)
		{

		}
	}
}

2 methods, 2 branches, 4 lines

Looking back through the examples, I wondered whether it was actually counting an empty constructor as +3 lines but an empty method as +2. This example uses normal methods rather than constructors, and it seems to confirm my theory.

Discussion

When it comes to counting metrics in code, I think there is some degree of subjectivity. What even is a line of code? You could technically write many “statements” and put them all on one physical line. So you could look at the line numbers and see 1 line, but when actually reading that line of code, you are reading multiple instructions. The analogy would be a list: you expect one item per line, but someone could write it on paper as a comma-separated list on a single line. One is more readable than the other, but both are valid, and if someone asked you how many items were on the list, you would get the same number either way. Therefore, I think the line count should really be a “statement” count.
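
For example, this is one physical line, but three statements:

int a = 1; int b = 2; Console.WriteLine(a + b); // 1 line, 3 statements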

I do think AxoCover’s definition of a branch is wrong, and what it interprets as lines seems inconsistent, possibly a bug in its logic.

On a larger and more complex codebase, the statistics it produces seem truly nonsensical. So I think I have proved this tool isn’t worth using, and we definitely shouldn’t be using it as a gatekeeper on our Merge Ready Checklist.

Avoiding another boolean check

A developer wrote some code like this:

	public void RunScheduledJob()
	{
		if (_loggingEnabled)
			Log.Write("Main Job Complete", _category, _severity, _customData);
	}

	public void LogWrite(string message)
	{
		if (_loggingEnabled)
			Log.Write(message, _category, _severity, _customData);
	}

I’ve removed some extra code to make the example clearer, but RunScheduledJob would do something, then write to a log if the feature is enabled. The LogWrite method writes to a log if the feature is enabled.

Although it’s not a major improvement, the obvious thing to do would be to use the LogWrite method in the RunScheduledJob method like this:

	public void RunScheduledJob()
	{
		LogWrite("Main Job Complete");
	}

	public void LogWrite(string message)
	{
		if (_loggingEnabled)
			Log.Write(message, _category, _severity, _customData);
	}

So the reviewing developer, Jim, pointed this out to Rich:

Jim: Could you call the LogWrite method here?
Rich: I could do but it would then evaluate _loggingEnabled twice for no reason.

Now, Jim and I were baffled by what he meant. Even if it did have to check _loggingEnabled twice, it is a simple boolean, so evaluating it again costs a negligible amount of time. There’s no question of performance here; only clarity.

Rich then suggested this code as an improvement:

	public void RunScheduledJob()
	{
		if (_loggingEnabled)
			LogWriteNoCheck("Main Job Complete");
	}

	public void LogWrite(string message)
	{
		if (_loggingEnabled)
			LogWriteNoCheck(message);
	}

	private void LogWriteNoCheck(string message)
	{
		Log.Write(message, _category, _severity, _customData);
	}

So we have lost a bit of clarity.

It’s weird how sometimes developers have moments of madness and over-complicate simple things. This particular developer has 30 years of programming experience!

Crazy Code Design

Here is a selection of poor code design that I have come across over the years.

Opacity Slider

Sometimes, programmers write simple programs to help themselves or their team. When I first joined this company, they were using a source control system more basic than SVN, and someone had written a program to help with Code Reviews. The tool had a slider that changed the opacity of the dialog, so you could make it see-through. It seemed like a case of the developer doing it simply because they could, not because anyone actually found the feature useful. I suppose you can be more creative when something is only used internally and not by a customer; but still, incredibly pointless.

Law of Demeter

The Law of Demeter, also known as the principle of least knowledge, advocates a design where objects are loosely coupled and only communicate with their immediate acquaintances. This means that a method in a class should only call methods on:

  • Its direct components.
  • The object itself.
  • Objects passed as parameters.
  • Any objects it creates.

In practice, this means you shouldn’t have a massive chain of method/property calls.

One example of breaking this law that I saw in our code looked like this:

responseMessage.acknowledgements.conveyingTransmission.controlActEvent.reason[0].detectedEvent.code.code

Awful.
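
A sketch of the usual remedy, using hypothetical simplified types: push the digging into the object that owns the data, so callers only talk to their immediate acquaintance.

using System;

public class DetectedEvent
{
	public string Code { get; set; }
}

public class ResponseMessage
{
	private readonly DetectedEvent[] _detectedEvents;

	public ResponseMessage(DetectedEvent[] detectedEvents)
	{
		_detectedEvents = detectedEvents;
	}

	// The message digs through its own internals, so callers don't have to.
	public string GetFirstDetectedEventCode()
	{
		return _detectedEvents.Length > 0 ? _detectedEvents[0].Code : null;
	}
}

class Demo
{
	static void Main()
	{
		var message = new ResponseMessage(new[] { new DetectedEvent { Code = "AE" } });
		Console.WriteLine(message.GetFirstDetectedEventCode()); // one dot instead of eight
	}
}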

Partial Record

One of our modules can potentially show a large amount of data. The data can come from the local record or, if there is a sharing agreement, from other companies that they share with. On certain screens you don’t need all the information, so to try to cut down loading times, we have the concept of a Partial record and a Full record.

	public IRecordPartial Partial
	{
		get
		{
			return IsFullLoaded ? Local : EnsurePartial();
		}
	}

The logic and the wording get confusing really fast. The property above is called Partial, but there is a check for IsFullLoaded, which implies that Local can be Full, not just Partial like the property name says. When you look further into the code, Local might not even be local, because it can contain shared data. Mind-bending.

PercentNull

Coming up with descriptive names is always a challenge in programming, and there are occasions where naming is ambiguous. However, I have no idea what a method called PercentNull does here:

transactionOut.Surname = helperFacade.PercentNull(customerDetail.Surname);

If it is assigning the result to Surname, then it should be returning text. Nothing immediately obvious comes to mind when you pass a Surname into a method called PercentNull and get a Surname back. So it’s not like it returns a percentage if it can, or null if it cannot. Or returns the percentage of the text that is whitespace. 🤷

How High

	public enum SeverityScale
	{
		None = 0,
		Low = 1,
		Medium = 2,
		High = 3,
		ExtraHigh = 5,
		VeryHigh = 6
	}

We have this enum to represent severity. It makes sense until you get past High. Extra High makes sense as more severe than High, but should Very High be higher than Extra High? Extra and Very sound like synonyms; you need something more extreme, like Extremely High, to make the ranking immediately obvious. (Notice that the value 4 is skipped entirely, too.)

Focussed or not?

	public void SetSearchBox()
	{
		SearchFocused = false;
		SearchFocused = true;
	}

When you see code like the above, it is most likely that the code is doing something weird, and because no one worked out how to fix the original issue, someone wrote more weird code to work around it. These “hacks” just stack up and are hard to remove. If you try to do the right/honourable thing and remove a hack, you will probably see some strange behaviour, which means you remove more code, and then more again when you find more problems. Repeat until you are left with the original issue, which you then have to fix.

So how can setting a property to false, then immediately setting it to true, actually do something? Probably because the property has loads of logic and side-effects. Probably best not to look. 🙈

Random Parameter

		public Organisation(bool randomParamToChangeSignatureForFasterConstruction)
		{
			// Had to create a new constructor that doesn't initialise internal state, param isn't used.
		}

The scary thing about this strange code is that it was written by one of our smartest developers. I have no idea why he had to create an extra constructor. Presumably the parameter is only there because the “default” constructor already existed. Intentionally not initialising data sounds like a recipe for bugs though. Maybe it needs a redesign.

Slow Request

I remember my Dad telling me a story of a software team putting a delay into their code. Then each month, they would simply reduce the delay a bit and tell their managers they have been working hard on performance improvements.

if (Helper.SlowRequest)
	Thread.Sleep(15000);

I found the above code in our codebase; it relies on a configuration value being present in the config file. The value was present and set to true by default for developers, so it would always be slow. Changing the value to false would speed it up, but you have to know it exists to change it.

<add key="SlowRequest" value="true"/>

Although it doesn’t affect our customers, there’s always the chance that something will go wrong one day and it will start to.

Boolean “Mode”

If you want to handle many options, you normally use an “enum”. Using a Boolean, which represents exactly 2 values (true/false), is a very weird choice…

 <param name="mode">If set to <c>true</c> then [mode] is a delete operation, else add, edit and none operations</param>

So:

true = delete

false = add OR edit OR none.

If you pass the wrong value, there’s an if statement afterwards anyway. So if you say it’s a delete and it isn’t, then things don’t get deleted.

		if(!mode)
		{
			if (@event.updateMode != vocUpdateMode.delete)
			{
			}
			else
			{
				if (@event.updateMode == vocUpdateMode.delete)
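
For reference, this is the kind of enum that would normally model it, sketched from the vocUpdateMode values visible above:

// One value per operation; no "true means delete" convention to remember.
public enum UpdateMode
{
	None,
	Add,
	Edit,
	Delete
}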

Not Supported Property

	public virtual String UploadedBy
	{
		get { return _document.Observation.EnteredByUser.DisplayName; }
		set { throw new NotSupportedException("Setting UploadedBy is not supported"); }
	}

Isn’t that setter a complete dick-move? If you call it, it will crash. If the setter wasn’t there at all, you would know at compile time that it cannot be set.

I guess it could be designed with the expectation that you would override the property. However, I suspect it couldn’t be set correctly anyway, because the “get” returns a massive chain of properties. So it’s not just a case of setting a variable; it’s actually _document.Observation.EnteredByUser.DisplayName that would need to be set, and that chain breaks the Law of Demeter anyway.

Mutually Exclusive

This is gonna end in tears

private Boolean _isSingleClick;
private Boolean _isDoubleClicked;

Not only are there better ways of detecting clicks; when you have multiple variables tracking overlapping concepts like this, you can easily end up in invalid states. If _isDoubleClicked is true, then you would expect _isSingleClick to always be false. But it is easy to make a mistake in the code and forget to set one, which then leads to a bug.

Notifications

		public bool IsNotificationConfigEmailEnabled()
		{
			if (!_configSmsEnabled.HasValue)
				_configSmsEnabled = NotificationConfiguration.IsEmailEnabled;

			return _configSmsEnabled.Value;
		}

The fact that this code has been there for years suggests it must work. But when the method is supposed to check whether Email is enabled, yet the code only looks at the SMS flag, who knows how it works.

Resizing

int parentSize = Parent != null
                ? Parent.Width
                : Screen.PrimaryScreen.Bounds.Width;
 
var availableSpace = parentSize - _clientLocation.Y - _yOffset;

What a nonsense calculation that is! We want to calculate the height of a pop-up box and make sure it fits within the window. So we look at the width of the control, or maybe the width of a monitor the user might not even be using (if they actually have the program on their secondary monitor), then subtract the Y coordinate of the window, which relates to height, not width.

Splitting

Sometimes programmers like jamming as much code as they can onto what is technically a single line. Chained logic like this is an absolute nightmare to debug:

return
	RequiresSingleItemOrder(item, customerRequestsSplit)
	?
	null
	:
	batchOrders
	.FirstOrDefault(
		orderInstance =>
			(orderInstance.IssueMethod != IssueMethod.Electronic || orderInstance.Items.Count() < 4)
			&&
			!orderInstance.Items.Any(m => RequiresSingleItemOrder(m, customerRequestsSplit))
			&&
			orderInstance.IssueMethod == issueMethod
			&&
			orderInstance.OrderType == OrderType
			&&
			orderInstance.IsUrgent == isUrgent
			&&
			(!ShouldSeparateControlledItems
			||
			orderInstance.AnyControlledItems == item.IsControlledItem)
			&&
			orderInstance.Warehouse == Warehouse
			&&
			!Authorisation.Separator.ShouldBeSeparated(authoriser: authoriser, orderAuthoriser: orderInstance.Authoriser)
			&&
			(!ShouldSeparatePrivateItems
			||
			orderInstance.IsPrivate == isPrivate)
			&&
			MatchesForRepeatOrdering(item, orderInstance)
			&&
			NullAndNonsequentialEqual(separationTypeIds, orderInstance.SeparationTypeIds));

Funny Youtube Comment

I saw this comment on a programming video on YouTube, remarking on how you can write really confusing code as a way to increase your job security, because you end up in charge of code that only you can read:

And remember, kids – if you nest multiple null coalescing operators into a single line of poly-nested ternary operators, that’s called “job security” – cause ain’t no one wanna maintain that code when you’re gone.

return a> b ? a < c ? a != d ? e ?? f ?? 0 : f ?? g ?? : 0 e ?? g ?? 0;

Typescript Example

Years ago, we started rewriting our C# program using web technologies. Since everyone was new to JavaScript and TypeScript, everyone wrote awful code, but at the time you couldn’t tell whether it was good or bad. One of the first bits of code I saw when I joined their team was this:

export const Actions: { [key: string]: <T extends {}> (value: T) => IAction<T> } = {
 setModuleName: <T extends {}>(value: T): IAction<T> => CreateAction<T>(ActionTypes.setModuleName, value),
};

Don’t ask me what it says, because I have no idea.

Contradiction

InsertUpdateResourceImmutableProperties

It’s so frustrating to read, it makes you want to punch someone. Immutability means something shouldn’t change after it has been created. So how can you Update something Immutable? Luckily, it has a comment explaining it:

   -- If we are about to insert the most upto date version of the resource, then we should update the
    -- resources immutable properties, because they might have changed in the source system. (Even though they shouldn't).

Well, I am still none the wiser.

Boolean Extensions

If you have a boolean variable and you want to check it is false, you can just write:

			bool example = false;

			if (example == false)
			{
			}

Now with this handy extension method:

public static class BooleanExtensions
{
	public static bool IsFalse(this bool boolValue)
	{
		return !boolValue;
	}
}

You can now write:

bool example = false;

if (example.IsFalse())
{
}

What’s the point in that? There’s no real advantage, is there?

Not Set Up

		get
		{
			if (_setup)
			{
				_setup = false;
				return true;
			}

			return _noDocumentsInError;
		}

In the getter, we check the value of _setup. If it is true, then we set it to false and the overall result is true. However, if we immediately call the same property again, it will just return the value of _noDocumentsInError instead. It’s bad design to have side-effects in a getter: getters are supposed to return a value, not set things. Then we have different fields tracking different concepts, which just looks prone to errors.

Reload

public void ReloadTriggers()
{
	// To be implemented
	return;
}

This code doesn’t even need a return statement; it is just there for the bantz. When this code lives in a module that is very error-prone, and you have a method that is not implemented and does nothing at all, it doesn’t give a good impression, does it?

Dialog Events

	_msgBox.ShowDialog();
	return _continueWithSelection ? DialogResult.OK : DialogResult.Cancel;
}

private void Cancel_Selection(object sender, EventArgs e)
{
	_continueWithSelection = false;
	_msgBox.Dispose();
}

You can get the dialog result when the dialog is closing. However, this developer decided to create their own event handlers for the button clicks, then used a field to decide whether the result is DialogResult.OK or Cancel.
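
For comparison, a minimal sketch of the conventional WinForms approach (the dialog here is hypothetical): assign a DialogResult to each button, and ShowDialog returns it when that button closes the form.

using System.Windows.Forms;

public class SelectionDialog : Form
{
	public SelectionDialog()
	{
		// Clicking either button closes the dialog automatically and becomes
		// the return value of ShowDialog; no fields or handlers required.
		var okButton = new Button { Text = "OK", DialogResult = DialogResult.OK };
		var cancelButton = new Button { Text = "Cancel", DialogResult = DialogResult.Cancel, Left = 100 };
		Controls.Add(okButton);
		Controls.Add(cancelButton);
	}
}

public static class Demo
{
	public static DialogResult AskUser()
	{
		using (var dialog = new SelectionDialog())
		{
			return dialog.ShowDialog();
		}
	}
}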

NoEncryptionEncryptor

new NoEncryptionEncryptor() 

Is this an Encryptor or not?🤔

SillyException

You throw an Exception when something bad has happened. However, in our code there is the concept of a SillyException, which you are supposed to ignore because, apparently, it isn’t a real error. It’s only used in one specific part of our codebase though, so it goes undiscovered for ages. A colleague, Michael, found it again, and we were reminiscing about a bug I once found.

Michael 11:33: 
if (mapped is SillyException) return 0;         // Avoid throwing the SillyException

Me 11:33:
I love sillyexception

Michael 11:33:
/// <summary>Represents a data provider exception that doesn't indicate an actual error,
/// and should be ignored.</summary>
[Serializable]
public class SillyException

Me 11:34:
I remember David going on about SillyExceptions and it took me ages to realise he was referring to an actual thing

Michael 11:35:
"I keep getting this SillyException"
what kind of exception is it David?
IT'S A SILLYEXCEPTION
YES, BUT WAT KIND

Me 11:35:
yeah, it did go something like that

Michael 11:35:
haha

Me 11:37:
I think it was around the time he spent weeks investigating that bug I found that no one else could recreate
and I thought he had worked it out when he said he was getting a SillyException

Lame Async Call

var result = client.BeginUploadFile((Stream)payload, endpoint);
bool uploadCompleted = false;
while (!uploadCompleted)
{
	uploadCompleted = true;
	if (!result.IsCompleted)
	{
		uploadCompleted = false;
	}
}
if (!uploadCompleted)
{
	throw new Exception($"Failure occurred while uploading the data");
}
client.EndUploadFile(result);

It’s weird that they set it to true, then set it back to false. And it will never reach the exception: the check is outside the loop, and the loop only exits once uploadCompleted is true. So it just spins, burning CPU until the upload completes (or forever, if it never does).

Is this just a reimplementation of a lame async call?
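
Assuming BeginUploadFile follows the standard Begin/End (APM) pattern, the spin loop could be replaced with a blocking wait on the wait handle, something like:

var result = client.BeginUploadFile((Stream)payload, endpoint);

// Block on the kernel wait handle instead of spinning the CPU.
result.AsyncWaitHandle.WaitOne();

// EndUploadFile rethrows any exception that occurred during the upload,
// so no separate "did it fail" flag is needed.
client.EndUploadFile(result);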

Code Comments

When writing code, some developers like to leave notes to themselves or others in the actual code, using “comments”. The intent is documentation: to explain what the code is doing. It is often argued that code should be “self-describing”, an idea I agree with. However, code can still end up complex, whether because the problem is technical, the developer struggled to come up with a simpler version, or the domain logic is just that way.

I’ve collated various examples, with varying degrees of humour.

Confusing

Comments are supposed to add clarity to the code. When you see code that looks simple, but the comment seems to describe something else, or is ambiguous or misleading, then the comment has decreased clarity.

// Display save dialog
RefreshFilters();
...
// Save over existing filter
RefreshFilters(); 

You would think RefreshFilters would simply reload the filters. From the comments, it sounds like it prompts the user with a save dialog, and even overwrites the existing filter.

Self-deprecating

//Well I've messed this up!!!
//Ultimately a filter ends up adding something to the where clause,
//  or if it's a relationships filter, then the AND clause of the join.
//The way I'm adding it to these is not clean and just sucks.  (two constructors with lots
//  of private variables that aren't needed half the time...yuk.
 
//Reading the above comment ages later, I'm not sure why relationships have to be on the
//join clause?  If it's a left one then yes but you can have a left join to a logical table.
//TODO: I've messed up
/// Ok.  I've messed up.  I've messed up good and proper.
/// Aggregate reports wants to format a date as a year, say.  The formatting stuff
/// is buried in compare functions (because ranges was the only place that I needed to do this)
/// There is the column.AddOutputFormatInformation that might be useful when this is fixed????
// This looks like a hack, and, frankly, it is. I'm sorry, I can't work out how to make this not stupid.
// If you're reading this and have some brilliant insight into how to make this work in a way that doesn't
// make people sad, please go nuts and fix it.

Criticising Others

(cg) => { return cg.DisplayName; })); // Wow this is awful
matchingSlotEntity.JobCategoryName = string.Empty; // This doesn't make sense. JobCategory belongs to a person, not a slot.
/// <summary>This is awful.  Get Ryan to support Guids for Mail Merge</summary>
//Please. Remove word interop. Not even Microsoft want us to use it.
TaskCount TaskCount { get; set; } // TODO: Again, this should be readonly. Something mucks with it, though.
MessageBox.Show("Sorry this feature hasn't been implemented yet... :(", "Sad Info..!!");

Funny

// Reverse the order as they are stored arse about face
this.ribbonBars.Reverse();
// Show yourself!
/// <summary>
/// Builds trees for a living.
/// </summary>
internal static class MailMergeTreeBuilder

this.AllowDrop = false;  // No dropping on this tree thankyou.

Even Microsoft throw in the occasional gem.

// At this point, there isn't much we can do.  There's a
// small chance the following line will allow the rest of
// the program to run, but don't get your hopes up.

Useful Warnings

This one warns you that the logic is confusing.

/**********************************************************************************
Description
-----------
You'd think that a stored procedure called 'GetNextBusinessDay' would return the
next business day for an organisation. That would be too easy.
 
What this stored procedure does, is return a day that is not explicitly marked
as 'closed'.
 
It also doesn't return the 'next' business day in the default case where the parameter @BusinessDayOccurrence is set to zero - in that case it returns the current day, or the next non-closed day if the current day has a closure defined on it.
 
@BusinessDayOccurrence doesn't find the first non-closed day after X days from today, incidentally. It returns the Xth non-closed day, which is an important difference. If you have closed bank holidays, and want to know the 2nd non-closed day after 24th December, it's not 27th December but 28th December Confusing!
 
**********************************************************************************/

Null checking

Introduction – The Basics

A NullReferenceException is an incredibly common mistake and probably the first problem new developers encounter. If you have a reference to an object, but the object is null, you cannot call instance methods on it without it throwing an exception.

So in C#, you could have:

Dog myDog = null;
myDog.Bark();

Which will throw an exception because myDog isn’t initialised.

Dog myDog = new Dog();
myDog.Bark();

This is fine because a dog has been initialised and holds a reference to an object.

If you allow the possibility of nulls, then whenever you want to call a method, you end up checking for null.

if (myDog != null)
	myDog.Bark();

A more concise syntax uses the null-conditional operator, a question mark, which only makes the call if the reference is not null:

myDog?.Bark()

So the question mark acts as the IF statement, but in a more concise way.

New “Nullable Reference Types” & Improved Design

A cleaner and safer design, if you can manage it, is to never allow nulls; then you never have to check for them. In the latest versions of C#, you can make the compiler assume things shouldn’t be null unless you explicitly specify that they can be; see Introducing Nullable Reference Types in C# – .NET Blog (microsoft.com).

Our Problem

I work on some software which is over 10 years old, using an older version of .NET. So things can be null, and are often designed around null. We noticed that newer developers (it seems to be common among our Indian developers for some reason) put null checks everywhere, regardless of whether the reference can actually be null. This makes the code much harder to read, and possibly to debug, because you are writing misleading code and adding extra branches that can never execute.

In a recent code review, the developer added an if statement to check for null:

if (user == null)
   return;

So if execution gets past this code, user cannot be null; yet for every single statement afterwards, he added the question mark to check for null! E.g.

var preferences = user?.Preferences;

Although it’s also bad design to chain loads of properties together, it is sometimes necessary. Being concise and adding the question mark to check for nulls can make it hard to understand what actually executes. Combined with LINQ’s FirstOrDefault, you get even less clarity. FirstOrDefault returns the first item in a list, and if there are no matches, it returns null. So when you see all the conditionals plus a method like FirstOrDefault, a glance at the statement suggests it is very likely to produce null.

var result = value?.Details?.FirstOrDefault() is Observation

So the chained expression is null if: value is null, Details is null, or Details contains no items.

“everything is null these days, or not sure what type it is”

Me

A longer example is the following:

if (recordData != null && (_templateDataSource?.EventLinks?.ContainsKey(recordData) ?? false))

Because of the null checks, they have added the “null coalescing operator” (??) to fall back to false when the chain is null. However, none of this data should ever have been null in their code, so it should just be:

if (_templateDataSource.EventLinks.ContainsKey(recordData))

The Lead Developer made a good point: if this data were null, it would be a major bug, but if you put in the null checks, the logic silently gets skipped and the bug is hidden. It would be preferable to actually crash, so you know something is very wrong.

Lead Developer

Should this ever be null? Or the EventLinks on it?
Remember, all these null safety checks could allow the software to continue on invalid data conditions and produce wrong answers instead of crash.
Please re-check ALL the ones you have added in these changes, and if something should never be null remove it.
Unless you are planning to test the behaviour of this when null is passed in and that it is correct.

When so many objects can actually be null, it is easy to miss a check. There was one code review where I questioned the original design, which seemed to have excessive null checks (implemented by the same author as this new change). His new change was to add a null check that had been missed. After making some dramatic changes based on my feedback, he ironically removed his new null check, and thus reintroduced the issue he was attempting to fix!

Developer: "have added a null check to prevent the issue."
Me: "then it will crash because you haven't added null checks"

Another good example of having excessive null checks, but somehow still missing them:

context.NotGiven = resource.NotGiven;
context.Date = Helper.GetDateTimeFromString(resource.Date);
context.IdentifierSystem = resource.Identifier?.FirstOrDefault().System;
context.IdentifierValue = resource.Identifier?.FirstOrDefault().Value;
context.Status = resource.Status;
Coding productCode = resource.productCode?.Coding?.FirstOrDefault(x => x.System == Helper.SystematicInfoUrl);
context.productCode = new FilerCommon.Coding(productCode?.System, productCode?.Code, productCode?.Display);
Coding productProcedureCode = (resource.GetExtension(Helper.ProductProcedureUrl)?.Value as CodeableConcept)?.Coding?.FirstOrDefault(x => x.System == Helper.SystematicInfoUrl);
context.productProcedureCode = new FilerCommon.Coding(productProcedureCode?.System, productProcedureCode?.Code, productProcedureCode?.Display);
Coding site = resource.Site?.Coding?.FirstOrDefault(x => x.System == Helper.SystematicInfoUrl);
context.Site = new FilerCommon.Coding(site?.System, site?.Code, site?.Display);
Coding route = resource.Route?.Coding?.FirstOrDefault(x => x.System == Helper.SystematicInfoUrl);
context.Route = new FilerCommon.Coding(route?.System, route?.Code, route?.Display);
Coding explanation = (context.NotGiven.HasValue && context.NotGiven.Value ? resource.Explanation?.ReasonNotGiven : resource.Explanation?.Reason)?.FirstOrDefault()?.Coding?.FirstOrDefault(x => x.System == Helper.SystematicInfoUrl);
context.Explanation = new FilerCommon.Coding(explanation?.System, explanation?.Code, explanation?.Display);
context.Quantity = new FilerCommon.Quantity(resource.Quantity?.Code, resource.Quantity?.Value, resource.Quantity?.Unit);            
context.LotNumber = resource.LotNumber;
context.Manufacturer = GetManufacturerName(context, resource);
context.OrganisationDetail = GetOrganizationDetails(context, resource);

Some of the FirstOrDefault() calls then access a property without a null check:

FirstOrDefault().System

Also, because of the null checks when they are setting “site”, it looks like it can be null. So when they set “context.Site”, there are null checks in the constructor for Coding. However, that means they are passing null into the constructor, which means the result is probably an invalid object.

Null flavours

I think the idea of a “Null flavor” is very interesting: NullFlavor – FHIR v4.0.1 (hl7.org). Null means “nothing”, but there are different meanings within it, i.e. the reason why the value is missing. Some examples are:

  • missing, omitted, incomplete, or improper
  • invalid
  • not provided by the sender due to security, privacy, or other reasons
  • unknown
  • not applicable (e.g. last menstrual period for a male)
  • not available at this time, but expected to be available later
  • not available at this time (with no expectation regarding whether it will become available)
  • the content is greater than zero, but too small to be quantified

Closing Words

Null checks are one of the simplest concepts in object-oriented programming, so it’s bizarre to see so many modern programmers struggling to understand when to use them. Even when they find a bug in their code, they still fail to learn the lesson and continue writing inappropriate null checks. A better design is to avoid the possibility of nulls by always dealing with a valid object reference (even if the object is one that simply represents “null”).
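
A minimal sketch of that “object representing null” idea, the Null Object pattern, with hypothetical types:

using System;

public interface ILogger
{
	void Write(string message);
}

public class ConsoleLogger : ILogger
{
	public void Write(string message) => Console.WriteLine(message);
}

// Represents "no logger" while still being a perfectly valid object.
public class NullLogger : ILogger
{
	public void Write(string message) { /* intentionally does nothing */ }
}

public class JobRunner
{
	private readonly ILogger _logger;

	public JobRunner(ILogger logger)
	{
		// Substitute the null object once, at the boundary...
		_logger = logger ?? new NullLogger();
	}

	public void Run()
	{
		// ...so there is no "if (_logger != null)" scattered through the code.
		_logger.Write("Job complete");
	}
}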

SonarCloud, the Static Analysis Tool

SonarCloud is a static analysis tool. It runs over your codebase and points out where code deviates from “best practices”, flags possible bugs, and reports code coverage.

When my employer bought a SonarCloud licence, there was a large emphasis on getting existing errors fixed and having no errors in new code. I knew this would be a bad thing to emphasise, because of the classic “you get what you measure”. If you tell people the goal is to have no errors, they will do whatever they can to have no errors. The idea of a tool like SonarCloud is to improve code quality, but since the metric is a number, the number becomes the goal rather than the quality. The aim should be to improve the code itself, optimising for readability, maintainability, scalability, and coverage.

As more people began working with SonarCloud, we saw more and more changes with titles along the lines of “Fix Sonarcloud issues“.

Human Analysis

SonarCloud is just a static analysis tool; it can only spot problems and suggest pre-defined improvements. These improvements are not always in the best interest of the codebase. You have to objectively ask yourself, “Does the change I’m making make the code better or worse?” Before you can answer that, you need to understand why Sonar thinks there is a problem with the code. Then you can decide whether Sonar is right or not. Some of the rules are not applicable everywhere.

I have seen a number of changes where the code is actually made worse by the changes to satisfy Sonar.

Example 1: 7 Argument Limit

A constructor or method with loads of parameters can be a sign of bad design; maybe some of the parameters can be grouped together inside an object. Once you reach 8 arguments, Sonar will flag it. A simple fix is to create a class and throw a couple of the parameters in there. That satisfies the rule, but it doesn’t make logical sense unless the parameters are actually related. Adding a single-use class can just make the codebase more cluttered and seemingly more complicated. I would rather see a method with 8 values than some convoluted class with extra types in the way “just because of Sonar”. Mark Seemann has a good blog post about this: Curbing code rot with thresholds.
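
When the parameters genuinely belong together, though, the refactor does make sense. A hypothetical sketch:

using System;

public class BookingServiceBefore
{
	// 8 loose arguments; Sonar would flag this.
	public void CreateBooking(string firstName, string lastName, DateTime dateOfBirth,
		string street, string city, string postcode, DateTime start, DateTime end)
	{
	}
}

// The address fields genuinely belong together, so grouping them improves
// the design rather than just silencing the rule.
public class Address
{
	public string Street { get; set; }
	public string City { get; set; }
	public string Postcode { get; set; }
}

public class BookingServiceAfter
{
	public void CreateBooking(string firstName, string lastName, DateTime dateOfBirth,
		Address address, DateTime start, DateTime end)
	{
	}
}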

Example 2: TryParse

As another example, I once wrote some code containing the following:

var ok = int.TryParse(a, out var x) & int.TryParse(b, out var y);
//something with x and y

Sonar complained that the use of the bitwise & was confusing and suggested I use && instead. However, if I did that, then “y” wouldn’t always be definitely assigned, because of short-circuiting, and the code wouldn’t compile. I was about to reject the Sonar issue as “suggests code that doesn’t compile” and just keep my version.

Then I thought, “if Sonar cannot understand my code well enough to make a decent suggestion, maybe other developers can’t either“. I was trying to be too clever with my code.

Instead I changed the code to:

var ok1 = int.TryParse(a, out var x);
var ok2 = int.TryParse(b, out var y);
var ok = ok1 && ok2;
//something with x and y

It wasn’t as terse as my original version, but it was certainly easier to read and understand, and Sonar didn’t have a problem with it any more.

Example 3: Cyclomatic Complexity

When a method has loads of If statements, it creates loads of execution paths, which means that if you are aiming for true 100% test coverage, you have loads of tests to write. It can easily make the code hard to read and understand too. At a certain point, Sonar suggests breaking the method into smaller methods. I have seen people take this extremely literally, and you end up with a design that looks like:

  SetPersonalDetailDifferences1(returnList);
  SetPersonalDetailDifferences2(region, returnList);


So there is no logical grouping behind what goes in part 1 and part 2; it’s just that enough code gets placed in the first method, and everything else goes in the second. Now the original single method is half the size, for no logical reason other than to satisfy the Sonar rule.

Me (rhetorically)
Are these fields logically grouped or is this just to satisfy sonar?

Brijesh
It is just to satisfy sonar

Example 4: Making The Code Worse

Nullable DateTime

A plain DateTime always has to have a value. However, you can declare a nullable DateTime in C# by appending a question mark: DateTime?.

The existing code was checking a standard DateTime for null, which can never be true:

if (startDate == null)
   throw new ArgumentNullException("startDate");

The Code Analysis report was correctly flagging this code as completely unnecessary. Instead of removing it, the developer changed it to:

if ((DateTime?)startDate == null)
    throw new ArgumentNullException("startDate");

The method was still accepting startDate as a non-nullable DateTime, so it could never be null. But it was now being cast to a nullable DateTime, so the check against null is technically valid.

Me:
Why are you casting to nullable datetime?

Chandeshwar
dateTime is value type , it can never be null value.
if check dateTime == null, it's always return false .

Me:
Yes, that is correct Chandeshwar. That’s why you can delete the code completely. Your code is always false too.

Sometimes, this type of scenario leads to multiple attempts. Either

  1. because they make some changes and they still get the same Sonar error,
  2. they introduce a new problem,
  3. or maybe the reviewer dislikes the changes and wants them to change it back.

So there are plenty of files where our Change History looks like this:

Fix Code Smell Issue <- doing what they think Sonar wanted them to do
Additional Static Analysis Fixes <- they messed up, so tried again
Addressing code review comments <- I pointed out it still wasn’t correct

Example: Magic Numbers

Magic numbers should not be used

A magic number is a number that comes out of nowhere, and is directly used in a statement. Magic numbers are often used, for instance to limit the number of iterations of a loop, to test the value of a property, etc.

Using magic numbers may seem obvious and straightforward when you’re writing a piece of code, but they are much less obvious and straightforward at debugging time

That is why magic numbers must be demystified by first being assigned to clearly named variables before being used.

-1, 0 and 1 are not considered magic numbers.

Vignesh changed code like

dosagesCount == 1

to

dosagesCount == Constants.Single

Me:
Constants are for a different purpose! They are not for replacing all the numbers in the codebase. It's a good idea to give numbers clear names, and if the same value is used multiple times, it means if you need to change the number, it will update in all places.

Vignesh
This is the rule we follow (quotes the magic number description shown above)... and I got comments from my senior level.

WHAT DOES IT SAY, VIGNESH?
“-1, 0 and 1 are not considered magic numbers”

And you have replaced 1 with the word “SINGLE”

Example: Lack of pragmatism

It frustrates me so much that many developers don’t even agree with Sonar, or don’t understand the change, but still attempt to make it anyway. Their sole goal becomes removing the error, to the point that they lose sight of the true aim: writing great code that works.

Dave
OK, but why did it suggest it, and why does it make the code better? Did you understand the suggestion or just blindly do what it said?

Pavel
I think I understood it. The comment was: "Add the default parameter value defined in the overridden method". Default arguments are determined by the static type of the object. If a default argument is different for a parameter in an overriding method, the value used in the call will be different when calls are made via the base or derived object, which may be contrary to developer expectations.
That's why it suggested it. But in my opinion this change is redundant and doesn't make the code better.

There was some code with loads of if statements. It was checking the types of “nodes” in a tree structure by attempting a cast. If the cast succeeded, it would go into that code block; otherwise, it would attempt to cast to a different type.

Even though the developer didn’t need to change this code, he did, attempting to resolve Sonar issues around the casting. However, he only updated some lines and not others, which made things inconsistent, so the codebase is more confusing and still contains Sonar errors. He also pulled the casts out of the if statements and performed them all at the top of the method. So now it is actually less efficient, because once you have found a match, you do not need to keep attempting to cast.

  {
            var creterionTag = row.Tag as LinkedCriterion;
            var relationshipTag = row.Tag as Relationship;
            var attributeTag = row.Tag as Attribute;
            var resultSetRuleTag = row.Tag as ResultSetRule;
            var conceptOrderingTag = row.Tag as ConceptOrdering;

Me
why change to "as casts" for most, but then not for Table, Concept and SharingDisplayValue?
I'm not even sure if there is an advantage to doing it this way. We now have several variables where only 1 will be set, and the rest are null.
Might be better spending more time refactoring it out to get rid of the branching.
Pattern matching is probably the neatest way for now. https://docs.microsoft.com/en-us/dotnet/csharp/pattern-matching
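
A pattern matching version might look something like this (a sketch; the Handle methods are hypothetical). Each branch casts exactly once, and evaluation stops at the first match:

switch (row.Tag)
{
    case LinkedCriterion criterion:
        HandleCriterion(criterion);
        break;
    case Relationship relationship:
        HandleRelationship(relationship);
        break;
    case Attribute attribute:
        HandleAttribute(attribute);
        break;
    // ...and so on for the remaining node types
}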

Kalya
I just simply resolved some existing SonarQube issue which not raised because of our changes. It is a kind of help to resolving existing issues, It is very difficult to resolving all the issues as of now

So he tried to help, found it too difficult, and gave up, but still decided to submit the changes for review, despite making the code worse.

Example: Just Suppress it!

Of course, you can still get rid of the error without actually fixing anything, and merely hide it from the managers who are only looking at the figures:

#pragma warning disable S1075 // URIs should not be hardcoded
public const string InfoUrl = "http://med.info";
#pragma warning restore S1075 // URIs should not be hardcoded

Conclusion

The goal isn’t to make Sonar happy; the goal is to write good, clean code. Sonar is a guide to help you do that, but it doesn’t guarantee success.

Me

Extension methods

Even as an experienced software developer, it is amazing when you discover some really trivial things, or some interesting quirk of a programming language.

I was looking at a Code Review the other week and I saw some code that looked really pointless. It was testing that some code throws an ArgumentNullException.

[Fact]
public void LogWarningDetails_WithNullLogger_ThrowsArgumentNullException()
{
	ILogger logger = null;
	Assert.Throws<ArgumentNullException>(() => logger.LogWarning("Test Error Message"));
}

A NullReferenceException is an incredibly common mistake and probably the first problem new developers encounter. If you have a reference to an object, but the object is null, you cannot call instance methods on it.

Therefore if logger is null, then you cannot call LogWarning without an error being thrown.

So at first glance, this test looks like it is testing the basic fundamentals of the C# programming language. However, it is testing for ArgumentNullException rather than NullReferenceException.

LogWarning was actually defined as an extension method, and extension methods do allow you to call them on null references. I’d never realised this, or even thought about it. It works because an extension method is really just a static method: the reference is passed in as a parameter rather than being dereferenced at the call site.

So if you have an extension method (as indicated with the this keyword):

	public static bool IsNull(this object x) 
	{
		return x == null; 
	}

This can be called like this:

	static void Main() 
	{
		object y = null;
		Console.WriteLine(y.IsNull()); 
		y = new object(); 
		Console.WriteLine(y.IsNull());
	} 

This would output True, then False, which illustrates that the extension method does not crash when the reference y is null, and that the logic correctly returns true in that case.
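
That also explains why the test at the top expects an ArgumentNullException: presumably the LogWarning extension validates its parameter with a guard clause along these lines (a sketch; the exact implementation in the logging library will differ):

public static void LogWarning(this ILogger logger, string message)
{
    // 'logger' arrives as an ordinary parameter, so this body still runs
    // when the caller's reference is null, and can validate it explicitly
    if (logger == null)
        throw new ArgumentNullException(nameof(logger));

    logger.Log(LogLevel.Warning, message);
}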

Conclusion:

Understanding NullReferenceExceptions is basically day 1 of learning to code in an object-oriented language like C#, but I’d never even considered that there was an exception to the rule. A method call on a null reference won’t cause a NullReferenceException if the method is an extension method!

Atalasoft DPI

We use a software library from Atalasoft in our product to allow users to add annotations to PDFs.

One of our Indian developers posted on Slack to ask a question about a bug he was assigned to fix. It was quite hard to understand what he wanted, but it sounded like the quality of users’ PDFs was being lowered to the point that they were blurry and unusable.

Hi Everyone, Here is my doubt was more of a generic one. In Document Attachment Module, I’m trying to attach a PDF. The attached PDF gets depreciated in the doc viewer.. After analysis came to a conclusion that, the Atalasoft Viewer we are using Document Attachment viewer should pass only with 96dpi(dots per inch).

However in the Atalasoft Documentation itself was given that inorder to increase the quality of the document inside the Document Viewer of Atalasoft we need to pass on the default or hardcoded resolution as attached.

With respect to this have attempting a bug in which need to fix this depreciation not in a hardcoded format.

Is there any way to calculate a PDF file’s DPI through its file size. (Note: Since PDF file was vector based and doesn’t posses any information related to dpi).Can anyone please guide me on this ? Apart from hardcoding and passing on a resolution value.

After he struggled with it, another developer started working on it, but then went on annual leave, so yet another developer took over. None of them had put much thought into what they were doing, because when I asked them to explain the code, they couldn’t. I then googled the code and found it on the Atalasoft website: https://www.atalasoft.com/kb2/KB/50067/HOWTO-Safely-Change-Set-Resolution-of-PdfDecoder

using (var annotateViewer = new AnnotateViewer())
{
    annotateViewer.DataImporters.Add(new Atalasoft.Annotate.Importers.PdfAnnotationDataImporter { SkipUnknownAnnotationTypes = false });                
    using (var pdfDec = new PdfDecoder())
    {
        pdfDec.RenderSettings = new RenderSettings { AnnotationSettings = AnnotationRenderSettings.RenderAll };
        Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders.Add(pdfDec);
        SetPdfDecoderResolution();
    }     
    
    annotateViewer.Open(filePath);                
    var printer = new Printer(annotateViewer);
    printer.Print(printerSettings, documentName, printContext);
}

static readonly object pdfLock = new object();

private static void SetPdfDecoderResolution()
{
    int standardResolution = 300;
    lock (pdfLock)
    {
        foreach (Atalasoft.Imaging.Codec.ImageDecoder rawDecoder in Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders)
        {
            if (rawDecoder is PdfDecoder)
            {
                //By default PdfDecoder sets to lower resolution of 96 dpi
                //Reason for PDF depreciation
                ((PdfDecoder)rawDecoder).Resolution = standardResolution;
                return;
            }
        }
        Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders.Add(new PdfDecoder() { Resolution = standardResolution });
    }
}

The code instantly stood out as convoluted: we create a PdfDecoder called pdfDec, then instead of just setting properties on it, we add it to the RegisteredDecoders and call our SetPdfDecoderResolution, which loops through the decoders to find the one we just added. If it can’t find it (which surely is impossible), it adds a new one.

I was talking to a Lead Developer about a completely different bug fix, and he said

“People just don’t think about what they write”

Lead Developer

So I decided to bring up this Atalasoft problem…

He said 

When I saw the lock I wanted to ask, “Which Stack Overflow post did you find this on?”

Lead Developer

So I told him they got it from the Atalasoft website!

So they had blindly pasted this Atalasoft code in without thinking. They could just set the Resolution property in the existing code, since we already create the object and so already hold a reference to it. If this code means we could end up with multiple registered decoders (which you aren’t supposed to have), then we could write a method similar to SetPdfDecoderResolution which checks whether a decoder is already registered and adds one if none exists, ensuring all the correct properties are set in one place.

They need to think

Lead Developer

I think the problem the Lead Developer had with the lock is that you use lock when you need to guarantee that only one thread accesses a resource or section of code at a time, but this code wasn’t used in a multi-threaded context. So by blindly pasting in code without thinking, they were adding redundant lines and creating confusion.
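
For contrast, here is a minimal sketch (a hypothetical counter, nothing to do with this bug) of the situation lock is actually for, where several threads mutate shared state concurrently:

private static readonly object counterLock = new object();
private static int sharedCounter;

public static void Increment()
{
    lock (counterLock)
    {
        // Only one thread at a time can execute this block, so the
        // read-modify-write below cannot interleave with another thread's
        sharedCounter++;
    }
}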

The actual fix was just

private const int highDotsPerInch = 300;
pdfDec.Resolution = highDotsPerInch;

But to reach this outcome, it took 3 developers to look at it, then 2 to review it.