Cloud Nomenclature

When people start talking about the cloud, they quickly start dropping in jargon. Sometimes they use multiple terms in the same sentence, and it quickly becomes hard to follow if you aren’t familiar with the cloud providers. Even if you are familiar with one particular provider, other providers use different names for their equivalent services. I think AWS is particularly bad here, as its names often aren’t intuitive. So when people start talking about Elastic Beanstalk, Route 53 and Redshift, it’s hard to grasp what the hell they are talking about.

Here’s an example of equivalent services by four different cloud providers.

AWS | Azure | Google Cloud | Huawei Cloud
Elastic Compute Cloud (EC2) | Virtual Machine | Compute Engine | Elastic Cloud Server
Elastic Beanstalk | Cloud Services | Google App Engine | ServiceStage
Lambda | Functions | Cloud Functions | FunctionGraph
Elastic Container Service | Container Service | Container Engine | Cloud Container Engine
Auto Scaling | Autoscale | Autoscaler | Autoscaling
Simple Storage Service | Blob Storage | Cloud Storage | Object Storage Service
Elastic Block Storage | Managed Storage | Persistent Disk | Elastic Volume Service
CloudFront | CDN | Cloud CDN | Content Delivery Network
RDS | SQL Database | Cloud SQL | RDS
DynamoDB | DocumentDB | Cloud Datastore | Document Database Service
VPC | Virtual Network | Cloud Virtual Network | Virtual Private Cloud
Direct Connect | ExpressRoute | Cloud Interconnect | Direct Connect
Route 53 | Traffic Manager | Cloud DNS | Cloud DNS
CloudTrail | Operational Insights | Cloud Logging | Cloud Trace
CloudWatch | Application Insights | Stackdriver Monitoring | Cloud Eye
Identity and Access Management (IAM) | Azure Active Directory | Cloud Identity and Access Management (IAM) | Identity and Access Management
CloudHSM | Azure Trust Center | Google Cloud Platform Security | Data Encryption Workshop
Kinesis | Stream Analytics | Cloud Dataflow | Data Ingestion Service
OpsWorks | Automation | Compute Engine Management | Cloud Service Engine
CloudFormation | Resource Manager | Cloud Deployment Manager | Application Orchestration Service
Simple Notification Service | Notification Hub | — | Simple Message Notification
Elastic Load Balancing | Load Balancing | Cloud Load Balancing | Elastic Load Balancing

How Not To Write Unit Tests

Introduction

Unit testing is a software testing technique where individual components of a program are tested in isolation. These components or “units” can be functions, methods, or classes.

When implemented correctly, Unit testing is a crucial practice in modern software development. It helps ensure the quality, reliability, and maintainability of your code. By incorporating unit testing into your development workflow, you can catch bugs early, improve code quality, and ultimately deliver better software to your users.

When I first heard about unit tests, they seemed awesome. But the more I used them, the more my opinion of them declined. I find that quite hard to explain, though.

I think in general, to make things testable, you have to split the logic up into smaller methods. But when methods are smaller, A) they are easier to understand and B) they are less likely to change. So if a developer has looked at that code, what is the chance they are gonna change it and break it? If you have a unit test and it never fails in the software’s lifetime, has it provided any benefit?

Then, in the case where you do decide to change the behaviour, you have the overhead of rewriting all the unit tests, which can basically double the development time.

When there are scenarios which would take ages to test manually, unit tests are very beneficial. When there are loads of permutations or optional aspects to the logic, it is a prime candidate for unit tests. Without unit tests, retesting every time you make a simple change is incredibly tedious. With unit tests, you just click a button and wait a few seconds.

Unit tests give you confidence that you can refactor without risk. However, they are not automatically a silver bullet. Well-written, fast, reliable tests accelerate development and save time. Poorly-written, slow, flaky tests hinder development and waste time.

Fast Tests

A test that takes a second to run doesn’t sound slow, but what if you have hundreds or thousands of tests? If the tests take a long time to run, developers won’t run them as often, or at all, and then what value do they serve?

They should also run as part of the build, to ensure only quality releases actually go live, but you want your release process to be fast.

There was a recent change where a developer claimed to have sped up a long-running call; however, he hadn’t carried that performance mindset over to the tests, and had actually increased their running time by 6 seconds.

Thread.Sleep can be used in threaded code to intentionally introduce a delay. I’ve seen many developers add it to a unit test. Tests are supposed to be fast, so you should never put a Thread.Sleep in one.
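To illustrate with a contrived sketch (the Worker class here is made up, not from our codebase): instead of sleeping for a fixed period and hoping the work is done, wait on something that signals completion, with a timeout as the failure case.

using System.Threading.Tasks;
using Xunit;

public class Worker
{
	// Stand-in for real asynchronous work; the test awaits its Task
	// rather than guessing how long the work takes.
	public async Task RunAsync() => await Task.Yield();
}

public class WorkerTests
{
	[Fact]
	public async Task Run_Completes_WithoutThreadSleep()
	{
		var worker = new Worker();
		var work = worker.RunAsync();

		// Wait for completion (or time out after a second), instead of
		// paying a fixed Thread.Sleep cost on every single run.
		var finished = await Task.WhenAny(work, Task.Delay(1000));
		Assert.Same(work, finished);
	}
}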

Measuring Tests & ExcludeFromCodeCoverage

When people write unit tests, they want to understand how much of their code is covered by tests. We have the Code Coverage metric for this, but it has some severe limitations in the way it is measured. It’s often a simple check of “does the line get hit by at least one test”, but since methods can be executed with different combinations of variables, you can end up with 100% statement coverage without actually testing many combinations at all.
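As a contrived sketch of how that happens (not real production code): a single test can execute every line of this method while leaving three of the four input combinations untested.

using Xunit;

public class DiscountTests
{
	private static decimal Discount(bool isMember, bool hasVoucher)
	{
		var discount = 0m;
		if (isMember) discount += 0.10m;
		if (hasVoucher) discount += 0.05m;
		return discount;
	}

	[Fact]
	public void MemberWithVoucher_GetsBothDiscounts()
	{
		// This one test hits every line, so the method reports 100%
		// statement coverage, yet (true, false), (false, true) and
		// (false, false) are never exercised.
		Assert.Equal(0.15m, Discount(isMember: true, hasVoucher: true));
	}
}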

The metric is one that impresses managers, so you often see developers writing bad tests simply to game the coverage number. This is bad because you end up being misled into thinking your code changes haven’t caused any bugs, when they could have introduced something severe, because the unit tests weren’t adequate.

I’ve seen quite a few code changes purely to increase the code coverage. So the title of the change would be like:

“Added more code coverage”

Then when I check the build output:

“There might be failed tests”

How can you be adding more tests and then not actually run them before submitting the change for review? Madness. The explanation is that their focus is purely on coverage and not on quality. Maybe a bit of arrogance and laziness too.

This week I worked with a team to get code coverage over 80% (a corporate minimum). The problem with this effort: Code coverage can be gamed. Sure, low code coverage means there’s a lot of untested code. But, high code coverage doesn’t mean the code is well tested.

Cory House

You can add the ExcludeFromCodeCoverage attribute to your code, which tells the code coverage tool to ignore it. It’s a simple way of reducing the number of lines that get flagged as untested.
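For reference, it looks like this (the class is a made-up example):

using System.Diagnostics.CodeAnalysis;

// The coverage tool skips this entire class when calculating the stats,
// so none of its lines show up as untested.
[ExcludeFromCodeCoverage]
public class StartupWiring
{
	public void Configure()
	{
		// untested plumbing, now invisible to the coverage report
	}
}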

Here’s one of our principal developer’s opinion on this attribute:

“Using ExcludeFromCodeCoverage is only good when the goal is 100% coverage. That should never be the goal. The goal should be a test suite that prevents bugs from ever going live. I’m happy never using it and just having coverage reports flag things that are not actually covered, it is a more realistic representation of what the tests cover and makes me much more cautious about changing them as I know I don’t have test coverage. Never add Exclude from Code Coverage, it’s just lying to everyone. Why suppress things that might be a problem, if they are, we need to fix them.”

Principal Developer

Personally, I think adding suppressions/attributes just clutters the code base. I’d rather just treat the stats as relative to each release. The numbers have gone up/down, but why? If we can justify them, then it’s all good. Chasing 0 code smells and a specific test coverage means you can just cheat and add the likes of ExcludeFromCodeCoverage to meet such metrics.

Another developer said:

I value a holistic set of metrics that help us understand quality in software development. Code coverage is a single metric that can be part of that set of metrics you monitor. No single metric can stand by itself, and be meaningful. Nothing is perfect, which is why we should value a toolbox. I don’t believe in gaming the system and “hiding” uncovered code to get to 100%.

You need engineering teams who are prepared and confident enough to publicly share their coverage reports. This sets the tone of the culture. Context is needed, always. There will be reasons why the coverage is as it is. Use tools that help engineering teams with confidence/delivering at pace and ultimately delivering customer satisfaction. You cannot compare reports from different teams or projects.

Useful Tests

You need to make sure your tests actually test some logic. Sometimes people end up writing tests that really test the programming language itself, though I suspect it is just to fool the Code Coverage metric. Code Coverage checks whether lines of code are “covered” by tests, but the simplistic nature of the check just ensures that a line of code is executed whilst a test is running, rather than that it was meaningfully tested.

So for example:

[Fact]
public void DefaultConstructor_ReturnsInstance()
{
        var redisMgr = new RedisStateManager();
        Assert.NotNull(redisMgr);
}

So there you are instantiating an object, then checking it is not null. That’s just how objects work in C#: you instantiate an object, and then you have an object. I suppose an exception could be thrown in the constructor so the object is never created, but that is generally considered bad practice, and there was no other test checking for a situation like that anyway, so they haven’t tested all scenarios.

largeResourceDefinitionHandler.MissingResource = _missingResourceDefinitions;
 
Assert.NotEmpty(largeResourceDefinitionHandler.MissingResource);

Setting a property, then checking it is set. Unless the property has loads of logic (which you could argue is bad design anyway), checking it is set is really testing the .NET Framework. If you think you need this, it means you don’t trust the fundamental features of the programming language you are using. You are supposed to be testing the logic of your code, not the programming language.

If there’s lots of setup and the Assert is just checking for null, then it’s likely just there to fudge the code coverage. Another classic that I’ve seen is loads of setup, ending with:

Assert.IsTrue(true);

So as long as the test didn’t throw an exception along the way, then it would just always pass because True is definitely equal to True.

Those ones seem intentionally malicious to me, but maybe the following example is more of a case of a clear typo:

Assert.Same(returnTrigger, returnTrigger);

This following one looks like a typo, but it’s actually comparing two different variables. You need to look closely (one has a single ‘s’ in “Transmission”). 🧐

Assert.Equal(organisationTransmissionStatus.Enabled, organisationTransmisionStatus.Enabled);

What goes through people’s heads? How can you write code like that and carry on like nothing is weird?

Sometimes tests look a bit more complicated but on analysis they still don’t really test much:

[CollectionDefinition(nameof(LoggerProviderTests), DisableParallelization = true)]
public class LoggerProviderTests : IDisposable
{
	[Theory]
	[InlineData("Verbose")]
	[InlineData("Debug")]
	[InlineData("Fatal")]
	[InlineData("Information")]
	[InlineData("InvalidLogLevel")] // Test with an invalid log level
	public void GetMinimumLevel_ReturnsCorrectLogLevel(string logLevelSetting)
	{
		// Arrange
		System.Configuration.ConfigurationManager.AppSettings["DistributedState.LogLevel"] = logLevelSetting;

		// Act
		var firstInstance = LoggerProvider.Instance;
		var secondInstance = LoggerProvider.Instance;

		// Assert
		Assert.Same(firstInstance, secondInstance);
	}
}

So this sets a value in AppSettings, presumably used by the LoggerProvider. However, all they are really testing is that calling the Instance property twice returns the same object both times. So setting the different log levels is completely irrelevant. The log level could be completely wrong, but you are comparing “is the wrong value of A the same as the wrong value of B”, and it will still pass.

Another common aspect is when you use a mocking library like Moq, which lets you create objects and essentially say “when I call some code with these specific parameters, give me this value back”. The thing is, when developers use this as the actual thing they are testing, they are testing Moq, and not their actual logic.

[Fact]
public void JobReposReturnsValidJobTest()
{
	// Arrange
	ScheduledJob job = new ScheduledJob() { ScheduledJobId = Guid.Parse("e5ee8f7410dc405fb2169ae2ff086310"), OrganisationGUID = Guid.Parse("fbee8f7410dc405fb2169ae2ff086310") };
	_mockJobsRepo.Object.Add(job);
	_mockJobsRepo.Setup(e => e.GetById(Guid.Parse("e5ee8f7410dc405fb2169ae2ff086310"))).Returns(job);

	// Act
	var resultJob = _unitOfWork.JobsRepo.GetById(Guid.Parse("e5ee8f7410dc405fb2169ae2ff086310"));

	// Assert
	Assert.Equal(Guid.Parse("fbee8f7410dc405fb2169ae2ff086310"), resultJob.OrganisationGUID);
}

“I think all this test is doing – is testing that JobsRepo returns an object that was passed into the constructor on line 22. The GetById is redundant, it will always work if it returns that object because the Moq was configured to return that value. That is testing Moq, and not our code. But then if you are just asserting a property returns an object, you are just testing that C# properties work.”

Me

“yes you are right , I am just testing if JobsRepo could return a value, so that it helps me in code coverage for get functionality of JobsRepo , as it is just set in the constructor of the class and there is no call for get”

Developer who wrote bad tests

So I think they are saying “I am just fudging the coverage”. They checked it in anyway.

There’s been loads of tests where you could actually cut out large parts of the method they are testing and the tests still pass. Again, sometimes you point this out to developers and they still want to check it in, purely for the statistics, and not for any benefit to any developer.

“do these tests add value? a quick glance suggests this is very dependent on your mock object. It might be the case that the production code can be changed without breaking these tests.”

Me

yeah, they’re kind of meaningless. Merging code to use as templates for future, better, tests.

Developer who wrote bad tests

Here is a rant I left on a similar review:

This name implies that it should always be disabled, especially because there’s no coverage for the case where it is true. However, these tests aren’t really testing anything. I think you’re basically testing that Moq works and the default boolean is false. I think the most you can really do is call Verify on Moq to ensure the correct parameters are passed into the GetBool call.

If you replace the contents of IsRequestingFeatureEnabledForOrganisation with return false, your tests pass which illustrate the coverage isn’t complete, or you aren’t guaranteeing the configuration manager code is even called at all. Personally, I don’t think it is worth testing at all though. All your class does is call the FeatureDetails class so you aren’t testing any logic on your side.

I think people are too concerned about getting code coverage up, so they insist on writing tests, even if it makes things more confusing.

I suppose it is up to you and your team to decide what you want to do, but I occasionally question people just to make them think if it is actually adding any value. I’ve seen tests where they simply assert if an object is not null, but it could literally return an object with all the wrong values and still pass (and the method always returned an object anyway so could never fail). If you see a method has tests, it gives you a false sense of security that you think it is going to catch any mistake you make, but it just always passes anyway

Always think whether your tests will add value and whether it’s worth adding them. If you need to mock everything then they’re not very valuable, or you’re testing at the wrong level (too high) and you’re better off with integration tests than unit tests. 100% code coverage is a really bad idea for complex software; there are massive diminishing returns in value the higher you try to push it. We change stuff all the time in our software too, so if everything has high-level unit tests then you spend more time fixing those tests. I tend to find you spend ages writing tests, then if you change the implementation you have to change the tests, and you can’t run them to see if you broke anything because you had to change the tests to run them.

Me

Test Driven Development (TDD)

There’s a methodology called Test Driven Development where you write a test first. It will then fail if you run it because there’s no functionality to run. Then you write the implementation and get it to pass. Then move onto writing the next test, and repeat. So you build up your suite of tests and get feedback if your new changes have broken previous logic you wrote.
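A minimal sketch of one red/green cycle (Calculator is a made-up example):

using Xunit;

public class CalculatorTests
{
	// Step 1 (red): write the test first. It fails (it won't even
	// compile) until Calculator.Add exists.
	[Fact]
	public void Add_ReturnsSum() => Assert.Equal(5, Calculator.Add(2, 3));
}

public static class Calculator
{
	// Step 2 (green): write just enough implementation to make the
	// test pass, then move on to the next test and repeat.
	public static int Add(int a, int b) => a + b;
}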

I was recently listening to a podcast and the guest said that he always writes code first, then adds tests after. If he can’t write tests, then he will make a change to bypass code just for the tests. I wasn’t sure what he meant by this, maybe it’s like when people write a new constructor which is only ever called by the tests. But that’s bad design.

I thought he may as well just do TDD from the start, instead of always going through that struggle. He says TDD doesn’t often lead to good design because you aren’t thinking about design, you just think of how to make the tests pass. 

But doesn’t the design organically come from TDD? And his way of changing the design just for the tests is exactly what he argues against TDD for. TDD often slightly over-engineers the solution with the likes of interfaces; but by avoiding TDD and writing the tests after, his way adds technical debt in the form of extra constructors that are only used by the tests.

“I’ll add tests in a separate change later”.

 5 reasons to add tests before merge:

1. Clear memory: Before merge, everything is fresh in my mind. I know what the code is supposed to do, because I wrote it. So I also know what tests I should write to assure it works. Every minute that passes after merge, I will understand the feature less, and thus, be less equipped to add proper test coverage.

2. More effective reviews: If I write the tests before merge, then anyone reviewing my code can use my tests to help them understand the code, and to watch the feature run.

3. Faster development: If I write tests during development, I can use the tests to accelerate my development. I can “lean” on my tests as I refactor. Faster feedback loops = faster development.

4. Better design: Writing tests during dev encourages me to write code that is testable. It makes me consider accessibility too since that tends to make automated testing easier by providing well-labeled targets.

5. Changing priorities: After merge, there’s no guarantee that I’ll have time to write the tests at all. I may get pulled away for other more “urgent” tasks.

Bottom line: The proper time to add tests is *before* merge.

Cory House

I recently saw the following conversation. A developer was basically saying he didn’t have time to write the tests, and that adding them might require some drastic refactoring, which would be risky. The plan was to rely on manual testers and get the changes released. Then the next part probably won’t happen (because important features will be prioritised), but his suggestion was that he would make the changes for the next release with good unit test coverage.

Senior Developer:
This domain supports unit testing, you should be able to add tests to cover the changes you made to make sure it behaves as you expect

Developer
Currently there are no unit test cases available for the changes made class, and the class is tightly coupled. I have written some draft tests and will check them next month as a priority.

Architect
IMO, given the pressure, timescales and urge to complete this, I think we can defer for now and stress the testers to pay more attention to the areas that we have low confidence.

Senior Developer:
So instead of checking if it is correct by adding tests that we can be sure exercise the code changes, we just merge it and hope that the manual testers find any bugs over the next day or so, and if they do, then it is back to the dev team and another change?

Time In Unit Tests

Tests should be deterministic. If a test is run and passes, then if no changes have been made and we run it again, it should also pass (obviously). An unreliable test doesn’t give you confidence in the code changes you make. It’s a surprisingly common occurrence that you make a change and an unrelated test breaks, and you are left thinking “how can those changes break that test?” Then you look at what the test is doing, and it’s often something to do with time.

You see something like:
data is "BirthDate":"1957-01-15T00:00:00"
And the test result says:
Expected: "Age":"67y"
Actual: "Age":"68y"
Today is their birthday!

What you need to do is put a “wrapper” around the code that gets the current date. So instead of calling DateTime.Now directly, you create a class called something like DateTimeProvider, and in the production code, it returns DateTime.Now. Then in your unit tests, you create a MockDateTimeProvider and make it return a hard-coded date. That way, no matter when you run the test, it always gets the same date, and the test is deterministic.
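A minimal sketch of that wrapper (the interface matches the code below; the two implementations are my own illustration):

using System;

public interface IDateTimeProvider
{
	DateTime Now { get; }
}

// Production implementation: simply passes through to the system clock.
public class SystemDateTimeProvider : IDateTimeProvider
{
	public DateTime Now => DateTime.Now;
}

// Test implementation: always returns the same hard-coded instant, so
// date-based assertions cannot drift depending on when the tests run.
public class FixedDateTimeProvider : IDateTimeProvider
{
	private readonly DateTime _fixedNow;
	public FixedDateTimeProvider(DateTime fixedNow) => _fixedNow = fixedNow;
	public DateTime Now => _fixedNow;
}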

I recently fixed some tests that were failing between 9pm-12am. I found that a developer had changed the MockDateTimeProvider to return DateTime.Now, completely rendering it pointless. Other parts of the test were adding 3 hours to the current time, and because 9pm+3 hours is tomorrow’s date, the date comparison it was doing then failed.

public class MockDateTimeProvider : IDateTimeProvider
{
        public DateTime Now { get { return DateTime.Now; } }
}

I think another red flag in unit tests is conditional statements. Logic should be in your production code, not in your tests. Not only does the following code have DateTime.Now in it, they have also put a conditional if statement in there, so where the test would normally fail, it now just executes the other branch and passes. So maybe the test can never fail.


[Fact]
public void ExpiryDateTest()
{
        DateTime? expiryDate = (DateTime?)Convert.ToDateTime("12-Dec-2012");
        _manageSpecialNoteViewModel = new ManageSpecialNoteViewModel(_mockApplicationContext.Object);
        _manageSpecialNoteViewModel.ExpiryDate = Convert.ToDateTime(expiryDate);
        
        if (_manageSpecialNoteViewModel.ExpiryDate < DateTime.Now.Date)
                Assert.True(_manageSpecialNoteViewModel.IsValid());
        else
                Assert.False(_manageSpecialNoteViewModel.IsValid());
}

Other Bad Unit Tests

Maybe the most obvious red flag, even to non-programmers, is testing that the feature is broken. The developer has even left a code comment to say it looks wrong!

Assert.Equal("0", fileRecordResponse.Outcome); // I would have thought this should have been -1

The One Line Test

How do you even read this? Is that actually one line? 🤔🧐

_scheduledJobsRepo.Setup(r => r.GetAllAsNoTracking(It.IsAny<Expression<Func<ScheduledJob, bool>>>(),
        It.IsAny<Func<IQueryable<ScheduledJob>, IOrderedQueryable<ScheduledJob>>>(),
        It.IsAny<int>(),
        It.IsAny<int>(),
        It.IsAny<Expression<Func<ScheduledJob, object>>>()))
        .Returns((Expression<Func<ScheduledJob, bool>> expression,
        Func<IQueryable<ScheduledJob>, IOrderedQueryable<ScheduledJob>> orderBy,
        int page, int pageSize,
        Expression<Func<ScheduledJob, object>>[] includeProperties)
        =>
        {
                var result = _scheduledJobs.AsQueryable();
                if (expression != null)
                {
                        result = result.Where(expression);
                }
                result = orderBy(result);
                result = result.Skip(page * pageSize).Take(pageSize);
                return result;
        });

When it is that hard to read, I wonder how long it took to write it.

Other Common Mistakes

I think tests can be unclear if you use a unit testing library without understanding what features are available. For instance, instead of using the library’s built-in expected-exception check, they may come up with some convoluted solution like a try/catch block that flags the test as passed or failed.

try 
{ 
        Helper.GetInfoArticles(articleName, _httpWebRequest.Object); 
        Assert.IsFalse(true); 
} 
catch 
{ 
        Assert.IsTrue(true); 
} 
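With xUnit, which the other tests here use, the idiomatic version is Assert.Throws. Here is a self-contained stand-in (using int.Parse rather than the real Helper call):

using System;
using Xunit;

public class ExceptionTests
{
	[Fact]
	public void Parse_WithInvalidInput_Throws()
	{
		// Assert.Throws fails the test if no exception (or the wrong
		// exception type) is thrown; no try/catch gymnastics required.
		Assert.Throws<FormatException>(() => int.Parse("not a number"));
	}
}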

Naming tests can be tricky: the name needs to make it clear what the test does and differentiate it from other tests. The worst is when the name says something completely different, most likely from a copy-and-paste mistake.

I’ve talked about how using DateTime can make tests fail at certain times. You can also end up with tests that rely on some shared state, where the order you run the tests in causes failures, because a test expects data to be set or not set.

Bank

Tests are even more important in certain domains. You know, like when money is involved. Commercial Bank Of Ethiopia allowed customers to withdraw more cash than they had.

“I’m really interested how a bank managed to deploy code that didn’t have tests for “can a user withdraw money when they don’t have enough balance.” Development teams at banks are usually conservative, process-heavy and slow-moving with changes exactly to avoid this. Wow”

Conclusion

Unit tests can be a useful and helpful tool to developers. However, there is an art to writing them and they have to be written with good intentions. If they aren’t written to be useful, fast, and reliable, then developers either won’t run them or won’t trust them.

Time

When it comes to software, the concept of time can cause problems. There are actually some really interesting scenarios, but even in simple applications, some developers really struggle with basic concepts.

In terms of standard problems, the client and server times can be out of sync. This can be because they are simply set incorrectly, or because they are using different timezones. As a developer, if you are looking at times in log files across the client and server, it can cause confusion when the timestamps don’t line up. A common thing I have seen is that some servers don’t use the Daylight Saving Time we have in the UK, but the clients usually do. So the server can be an hour out.

Daylight Saving Time is interesting, as time shifts forwards or backwards an hour. So time isn’t linear.

I recall reading a blog about time by Jon Skeet, which discussed how historical dates can also suddenly shift. For example, if a country switched to a different calendar system entirely, moving a single day could suddenly jump years to align with the new system. Computerphile also have a discussion on this: The Problem with Time & Timezones – Computerphile.

Leap Years

We once had a leap year bug because someone created a new date using the current day and month, and added a year. So on 29th Feb, it tried to create a date of 29th Feb next year, which isn’t a valid date, and the feature crashed. Everyone was panicking trying to rush out a fix, but then we realised we could only get the fix out to our customers the next day, when the bug wouldn’t happen anyway. Not for another 4 years, at least. It was hilarious.
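A hedged reconstruction of the bug (I don’t have the original code, but it behaved like this):

using System;

class LeapYearBug
{
	static void Main()
	{
		var today = new DateTime(2024, 2, 29);

		// What the buggy code effectively did: 29th Feb 2025 doesn't
		// exist, so this constructor throws ArgumentOutOfRangeException.
		// var nextYear = new DateTime(today.Year + 1, today.Month, today.Day);

		// DateTime.AddYears clamps to the last valid day of the month
		// instead, giving 28th Feb 2025.
		Console.WriteLine(today.AddYears(1));
	}
}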

-1

One weird mistake I saw recently: a developer defined a variable and set it to 5. The code was supposed to make sure that we never make an API call more than once every 5 minutes. However, they then subtracted 1, so it was actually checking every 4 minutes instead.

var minimumNumberOfSecondsRequiredBetweenEachAPICall = (NumberOfMinutesRequiredBetweenEachAPICall - 1) * 60;

Ages

You would think everyone would understand the concept of ages since everyone has an age and it increases by 1 every time you have a birthday. However, many developers seem to struggle with the concept. The last implementation I saw had the following:

int age = DateTime.Now.Year - dateOfBirth.Year;

So it can be one year out because it basically assumes your birthday is 1st January.
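One common correct version (a sketch, not our production code) is to check whether the birthday has happened yet this year:

using System;

public static class AgeCalculator
{
	public static int Age(DateTime dateOfBirth, DateTime today)
	{
		int age = today.Year - dateOfBirth.Year;

		// If the birthday hasn't happened yet this year, knock one off.
		if (dateOfBirth.Date > today.AddYears(-age))
			age--;

		return age;
	}
}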

It reminds me of an exchange on Twitter that I saw years ago. It was in the context of football.

PA: Why are Arsenal paying £25m for a 29 year old striker?
G: he’s 28 btw
PA: He’s a lot nearer to 29 than 28, that’s a fact
G: He’s 28, that’s a fact
PA: Why am I not surprised that fractions are beyond you. The day after his birthday, he is no longer 28.
G: He’s 28 until he becomes 29. That’s how it works
PA: Perhaps if you had paid more attention in Maths lessons? You might remember “round up or down to the nearest whole number”
G: He’s 28. That’s a fact.
PA: No, it is not. £1.75 is not one pound. You don’t even understand what a fact is now.
G: Until he is 29, he is 28.

When it is the day after your birth, are you 1 day old? Technically you could be just a minute old but claim you are 1 day old.

My instinct to perform mathematics on dates would be to use an existing date library. Another developer tried to make something themselves. This seemed a bit complex to me, but I think it actually worked, or at least seemed reasonable for how they wanted to use it.


public static double AgeInYearsAtDate(DateTime effectiveDate, DateTime dateOfBirth)
{
	double daysInYear = 365.25;
	int completeYears = Age.GetYears(dateOfBirth, effectiveDate);

	dateOfBirth = dateOfBirth.AddYears(completeYears);

	double proportion = effectiveDate == dateOfBirth ? 0 : Age.GetDays(dateOfBirth, effectiveDate) / daysInYear;

	return completeYears + proportion;
}

public static string ConvertCurrentAgeToYearsAndMonths(double age)
{
	int monthsInYear = 12;
	int years = (int)age;
	int months = (int)Math.Round((age - (int)age) * monthsInYear);

	return $"{years} year{(years == 1 ? String.Empty : "s")} and {months} month{(months == 1 ? String.Empty : "s")}";
}

Ages Part 2

Another developer was testing his age code and wrote this:

new object[]
            {
                new DateTime(2010, 05, 31),
                new DateTime(2009, 06, 01),
                AgeRange.UpToOneYear,
                "52 weeks and 0 days"
            },

If there are 52 weeks in a year, then is that really a year? It looks 1 day short to me. Time is mental, isn’t it?

Incompetent Developer Tales Part 2

Although Junior developers can be useful, I have been against my employer’s over-reliance on them. If you get someone cheap with high potential, as long as you reward them, you end up with someone that knows your system, loves the company and is a great developer.

The problem is that we love hiring them but not rewarding them, which means the best ones leave and the bad ones remain. The focus on cheap wages has led to more and more offshoring, which has led to the rapid expansion of our Indian office. How do you hire so many people quickly? Lower the standards. So now you have a high number of incompetent people, but they are cheap.

It’s not the fact they are Indian that is the problem; it is that the demand is high and the hiring standard is low. The problem this causes in the work culture is that it is easy to see a discrepancy in quality between the UK and Indian developers as a whole, which means people end up seeing them as inferior, despite some of them actually being good. The good ones tend to be the ones we hired into senior positions, so they would naturally have higher wages anyway.

So, building on from my recent blog on one particular developer, here’s a collection of things other people have done:

Rollbacks

James has just rolled back the changes of someone who merged into the wrong folders. Don’t they think something isn’t right when it shows [add] next to the main Database folder? It looks like they copied the folders up one level, so now it is re-adding everything as a duplicate.

Dean 16:27:
haha
Me 16:28:
that change by Portia is mad when you look at the changesets
original patch
change to patch,
fix db patching errors,
rollback,
rollback other changes,
rollback from xml file,
then Chris comes into undo the rollback
Dean 16:30:
it's just wrong that we've got people who don't know what they're doing
Me 16:30:
but it's cheap
Portia went wild on that second “rollback” and manually reverted the files.
removed 8 patches and added 1, instead of removing 1
it's amazing how many rollbacks happen these days
Dean 16:40:
Rollbacks worry me in general

“used for identification”

In general, SQL databases are designed to reduce redundancy. So if you have a table storing a list of “job roles”, and another table references that information, you link them together via the ID of the row. What you shouldn’t do is copy the data into the other table. Copying it means that if the data needs to be updated, you need to remember to update both places, and it doubles the storage space too.

I saw that a developer was doing this. It was only one column of text, but why were they copying it over into their new table instead of just referencing it?

Me
Is there a reason why this isn't being taken from JobCategory?
It is never returned in a stored proc call so there is no need for it

Vignesh
JobCategoryName used for identification of JobCategoryID and not used in stored proc. Thanks

Me
Regardless if the system uses the data, or if it is there for our Staff to read in the database; you would just write a query that joins onto the JobCategory table.
what if the JobCategoryName in JobCategory is updated? The names in your new table won't be accurate

Vignesh
JogCategoryID only used in stored proc/code, JobCategoryName is just an identification for JogCategoryID in the table. Thanks.

Me
So it needs to be removed?

Vignesh
JobCategoryName was removed since it is used in code or stored proc. Thanks.

Refresh Link

private void llbl_RefreshList_LinkClicked(Object sender, LinkLabelLinkClickedEventArgs e)
{
	IEnumerable<ExecutionScheduleDetail> runningItems = _service.GetAllScheduleSearches(AuthenticatedUser.Organisation.Guid);
	int count = (from runningitem in runningItems select runningitem).Count();
	if (count > 0)
	{
		LoadRunningSearches();
	}
	else
	{
		MessageBox.Show(
			"This run has just completed and details can no longer be viewed.",
			"Running Searches",
			MessageBoxButtons.OK,
			MessageBoxIcon.Error);
		LoadRunningSearches();
	}
}

Instead of just getting the count from the IEnumerable, they are selecting all the items they already have. And since they are calling the LoadRunningSearches method in both branches of the if, it may as well be moved out. Then the code would just be:

if count == 0, show message box.

When a review comment was left about not specifying the method call twice, he just moved the GetAllScheduleSearches call into a field, which meant it would no longer fetch the latest searches every time you clicked. Since the link is a “Refresh” link, it no longer did what it was supposed to.

If Constraint Exists

 if exists (Select * from sys.check_constraints where name = 'DF_Templates_AgeRange_AgeTo ')
 alter table Templates.AgeRange drop constraint DF_Templates_AgeRange_AgeTo;

I noticed there was a redundant space at the end of the constraint name so I thought it was likely that the check would never be true (unless SQL ignores it automatically?)

Me 4/28/2022
Is this space intentional?
Vignesh 4/28/2022
Removed
Joel 4/28/2022
Do you actually need the select check? I think you should be able to use the dependent patch mechanism instead.
Vignesh 4/28/2022
I think it is not needed, we checked the condition to avoid any error. if its true it will execute
Joel 4/28/2022
The constraint is added in patch 7.809, on which you've marked this patch as dependent. So this patch will literally only run if the constraint was created successfully.

We have an attribute in the patch XML where you can state dependent patches, so a patch will only run if the prerequisite patch has run. Vignesh was aware of it because he had used it, but then he also had this guard clause that possibly didn’t even work.

When told he didn’t need it, he agreed that it isn’t needed, yet he put it in there to “avoid any error”. Does he mean there was an error? Or was he just being overly cautious?

Spelling

inActive ? ResourceStatus.Inactive : ResourceStatus.Active

Sometimes, it’s the little things like this that annoy me. How can you write “inActive”, and not realise that you either:

  • have spelt “inactive” wrong,
  • or alternatively, someone else has when they created this enum.

Either way, why did they not fix it? There’s clearly an inconsistency there.

In a similar fashion, I saw this recently:

//Recieved and Transalted

Both words are spelt wrong. It was also copied and pasted from another file. It does pose a good question though: if you copy and paste, should you correct the spelling or leave it for maximum laziness? I guess the advantage of leaving it is that if you search for that text to try and find the original code, it’s better to match it as closely as possible.

throw new NotSupportedException("Can't able to fetch template consultation details!");

Indians always seem to write “Can able” and “Can’t able” instead of just “can” and “unable”.

Untested code

string.Format("{0} {1}", _isRegistered ? "Not Registered:" : "Registered:", statusCode)

The logic was consistently backwards. It wasn’t a case that they typed it wrong and didn’t bother testing it. There were several files with the same type of logic. I pointed it out and they rewrote the entire thing.

Reviewing AxoCover, the Code Coverage Tool

A while ago, I wrote a blog about the Merge Ready Checklist, which was a process we have to prove we have followed to be able to complete and release our software project.

The process was created by some experienced former Software Testers, who are now basically Quality Assurance Managers.

As part of the checklist, they insist on having test coverage of 80%, which I have always thought is an unreasonable ask. When I joined Development, we had Visual Studio Enterprise licences, which include a test coverage tool. However, we have since downgraded to Professional. So I asked what test coverage tools we could use, because it needs to be something that IT have approved to download, and that we have a licence for. We were told we could use AxoCover, but I found it isn’t compatible with Visual Studio 2019 or above, which was an inconvenience.

Ok, so let’s run it and see what happens. Firstly, you are greeted with a phallic symbol.

Test execution started.
      ___
     /   \
     |   |
     \   /
     /   \
    /     \
___/       \___
/               \
|    _______    |
\___/       \___/
AxoCover Test Runner Console

It’s supposed to look like their logo, which looks more like a fidget-spinner.

Here are the metrics it produced, before and after.

Metric | Coverage | Uncovered | Total
Classes | 11.4% | 3040 | 3433
Methods | 14.4% | 17784 | 20770
Branches | 12.7% | 29085 | 33308
Lines | 13.4% | 98844 | 114114
Baseline figures. Code from the Main branch before my changes.

Metric | Coverage | Uncovered | Total
Classes | 11.9% | 3034 | 3443
Methods | 13.3% | 17747 | 20473
Branches | 12.0% | 29020 | 32985
Lines | 13.4% | 98786 | 114118
Codebase with my Project merged in.

Then I can’t even make sense of the statistics. Is it saying that I have removed methods (the total has gone down)? I added a few classes with several methods each, which obviously contain lines of code and conditional statements, so I expected all the totals to be higher; but despite there being more classes, the number of methods and lines has decreased. I had also added some unit tests, so I would maybe have expected around 30% coverage on the new code.

I asked Tech QA to explain the figures to me, and they were like “we dunno, we aren’t developers. We just look for the 80% number”. Then I pointed out that they were supposed to be judging the 80% coverage on NEW CODE only, whereas this is for the entire solution file. So this doesn’t give them the evidence they want, and it’s not accurate either, so it cannot be trusted.

After running it several times and adding/removing code to see how the numbers changed, I suddenly found I was low on disk space. It turns out AxoCover reports are 252MB each! Yikes.

Testing AxoCover

Since the numbers were nonsensical, I decided to create a simple test project and run it on simple examples. Let’s see how it judges what counts as a line/branch/method/class.

namespace Axo
{
	public class ExampleClass
	{
	}
}

0 methods 0 branches 0 lines

So a class definition with no methods and no actual lines of code results in zeroes all around. It must ignore boilerplate class code.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{

		}
	}
}

1 method 1 branch 3 lines

So now I have a method, but it is empty. It seems the idea of ignoring boilerplate code doesn’t apply to methods. It must count the method definition plus the braces inside the method for the line count, but it doesn’t make sense to count braces, since they just group related code. 1 branch is weird too; branches should be for if statements, which we will test soon.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;
		}
	}
}

1 method 1 branch 6 lines

So now I have added 3 lines to the method. The line count has increased by 3, so that seems to make sense.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				result = 0;
		}
	}
}

1 method 3 branches 8 lines

I’ve added 2 lines, where 1 is an if statement. So we have increased the branch count, but it has increased by 2. This must be the “implicit else”: “result” is either greater than 10 or it isn’t, so there are 2 paths. I’d still say that is 1 branch though.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();

		}

		private void MethodA()
		{

		}
	}
}

2 methods 4 branches 10 lines

I’ve replaced one line with a call to a new method. The method count increasing by 1 makes sense. Given the previous examples, adding a new method adds 1 to the branch count for some reason. In the empty method example, we got +3 to the line count, but now we only get +2, so that seems wrong. I don’t think an empty method should increase the line count or the branch count at all, so the figures are becoming increasingly nonsensical.

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();
			else
				MethodB();
		}

		private void MethodB()
		{
		}

		private void MethodA()
		{
		}
	}
}

3 methods 5 branches 13 lines

So now, instead of an implicit else, I’ve made it explicit and created another method. The method count makes sense. The branch count has increased by 1, which I think will be for the new method and not the else. We have +3 to the line count, but shouldn’t we have +2 for the else, plus up to +3 for the new method?

namespace Axo
{
	public class ExampleClass
	{
		public ExampleClass()
		{
			var a = 3;
			var b = 4;
			var result = a * b;

			if (result > 10)
				MethodA();
			else
				MethodB();
		}

		private void MethodB()
		{
		}

		private void MethodA()
		{
			{ }
		}
	}
}

3 methods 5 branches 15 lines

I was intrigued whether it really was counting braces. Some random braces give +2.

	public class ExampleClass
	{
		public ExampleClass()
		{

		}

		public ExampleClass(string test)
		{

		}
	}

2 methods 2 branches 6 lines

I thought I’d reset and try again. So we have 2 methods, which, as we have discovered, means 2 branches in AxoCover’s metrics. It seems to count each method as +3 lines.

namespace Axo
{
	public class ExampleClass
	{
		public void ExampleClassMethodA()
		{

		}

		public void ExampleClassMethodB(string test)
		{

		}
	}
}

2 methods 2 branches 4 lines

Looking back through the examples, I wondered if it was actually counting an empty constructor as +3 lines, but an empty method as +2. This example has normal methods rather than constructors, and it seems to confirm my theory.

Discussion

When it comes to counting metrics in code, I think there is some degree of subjectivity. What even is a line of code? You could technically write many “statements” on one physical line. So you could look at the line numbers and see 1 line, but when actually reading that line of code, you read multiple instructions. The analogy is a list: you would expect one item per line, but someone could write the list on paper as comma-separated items on a single line. One is more readable than the other, but both are valid, and if someone asked you how many items were on the list, you would count the same number either way. Therefore, I think the line count should really be a “statement” count.
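For example, a line counter sees one line here, but a statement counter sees three:

// One physical line, three statements. A line-based metric counts this
// the same as a single simple statement.
int a = 1; int b = 2; if (a > b) { b = a; }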

I do think AxoCover’s definition of a branch seems wrong, and what they interpret as lines seems inconsistent and a possible bug in their logic.

On a larger and more complex codebase, the statistics it produces seem truly nonsensical. So I think I have proved this tool isn’t worth using, and we definitely shouldn’t be using it to gatekeep our Merge Ready Checklist.

Is Changing Text A Difficult Thing To Do?

Even though our UX Team has been around for a while, they never seem to understand what is possible in our software, so they end up designing things that we cannot accurately recreate from their Figma designs.

I often think their standards change over time so it’s hard to predict what they would come up with. 

The UX Team asked what kind of formatting is possible in a Tooltip. You’d think they would know what is possible, and have plenty of old designs to refer to.

They said they had some upcoming projects that required tooltips containing large amounts of text, often with legal statements. They shared an example which had 3 sentences, then a name, ID, phone number, and address. So it was a large amount of text for a tooltip, and some of the words were formatted.

I thought it wasn’t good UX to have loads of info inside a tooltip. Also, wouldn’t it be better to have that address somewhere the user can copy and paste it from? That seemed like a useful thing to have.

I often think it is good to evaluate the UX designs and give your own opinion on what’s possible to implement, but also to suggest what would improve the user experience. You’d think the person employed in the UX team is the expert on user experience, but it’s best not to blindly accept their designs.

Cory House also seems to share this thought:

As a developer, I know I’m not a designer. But that doesn’t mean I should blindly implement designs.

I push back on designs that are:

  • Insecure
  • Confusing
  • Incomplete
  • Inaccessible
  • Inconsistent
  • Not performant

Assuring a good user experience is everyone’s job.

Cory House

More specifically, if we have an existing dialog and the UX team decides to change what it says, you would think this is the simplest change possible. However, there can be a bit more to it than you would think.

I was explaining this concept to a Junior Developer. I was saying how I loved working with a Product Owner called Rob, who always asked “is that a hard thing to do?” no matter how trivial something sounded. He understood that there could be all kinds of crazy designs in the code.

In theory, it should never be hard. But sometimes adding more words means the text needs to wrap onto the next line, and if the dialog hasn’t been coded to resize itself, it might be a manual resize job. And if the “design view” is broken, that makes it even more complicated.

The text might not just be set to specific words in the file where the control is defined. It could be dynamically generated and passed in from another method, or it might even be set and read from a database. It’s still easy to change, but if you search the source code for specific words, you might not find them if the text is dynamic or stored in the database.

I’m sure there have been times where, after investigation, you are like

“Rob, can’t we just keep the words as they are, I don’t have the skills to add a few more words!”.

Avoiding another boolean check

A developer wrote some code like this:

	public void RunScheduledJob()
	{
		if (_loggingEnabled)
			Log.Write("Main Job Complete", _category, _severity, _customData);
	}

	public void LogWrite(string message)
	{
		if (_loggingEnabled)
			Log.Write(message, _category, _severity, _customData);
	}

I’ve removed some extra code to make the example clearer, but RunScheduledJob would do something and then write to a log if the feature is enabled. The LogWrite method writes to a log if the feature is enabled.

Although it’s not a major improvement, the obvious thing to do would be to use the LogWrite method in the RunScheduledJob method like this:

	public void RunScheduledJob()
	{
		LogWrite("Main Job Complete");
	}

	public void LogWrite(string message)
	{
		if (_loggingEnabled)
			Log.Write(message, _category, _severity, _customData);
	}

So the reviewing developer, Jim, pointed this out to Rich:

Jim: Could you call the LogWrite method here?
Rich: I could do but it would then evaluate _loggingEnabled twice for no reason.

Now, Jim and I were baffled by what he meant. Even if it did have to check _loggingEnabled twice, it is a simple boolean, so evaluating it again costs practically nothing. There’s no question of performance here; only clarity.

Rich then suggested this code as an improvement:

	public void RunScheduledJob()
	{
		if (_loggingEnabled)
			LogWriteNoCheck("Main Job Complete");
	}

	public void LogWrite(string message)
	{
		if (_loggingEnabled)
			LogWriteNoCheck(message);
	}

	private void LogWriteNoCheck(string message)
	{
		Log.Write(message, _category, _severity, _customData);
	}

So we have lost a bit of clarity.

It’s weird how sometimes developers have moments of madness and over-complicate simple things. This particular developer has 30 years of programming experience!

Hidden Scanning Portal

Many years ago, my colleague Andy came up with a great software hack to fix a bug. I didn’t understand the fix at the time, so I don’t remember the details, but the bug manifested as a red box replacing a UI control whilst the user was scanning a paper document.

Andy implemented a solution dubbed the “Hidden Scanning Portal,” a dialog box that remained invisible until the scan was complete, after which it was disposed of.

After a few months, another developer, Joe, convinced he had discovered a more permanent solution, removed Andy’s Hidden Scanning Portal. This inadvertently introduced a new bug, so the Hidden Scanning Portal was swiftly restored, averting further complications.

Our Team Lead, Matt, revealed that failing to fix the original issue could have resulted in a fine of £16,000. This revelation cast Andy’s quick fix in a new light, attributing significant value to what might otherwise have been seen as a mere temporary solution.

Andy’s reaction to the situation was a mix of pride and frustration. Despite his contribution to saving the company from a hefty fine, he lamented the lack of recognition in the form of a modest pay rise.

“and they didn’t even give me a measly 3% pay rise”

Andy

Quick fixes might not be ideal, and they increase “technical debt”, but they can provide immediate relief, avoid hefty fines, and make for great stories to reminisce about.

Crazy Code Design

Here is a selection of poor code design that I have come across over the years.

Opacity Slider

Sometimes, programmers write simple programs to help themselves or their team. When I first joined this company, they were using a more basic form of source control, like SVN, and someone had designed a program to help with code reviews. The tool had a slider that changed the opacity of the dialog, so you could make it see-through. It seemed like a case of the developer doing it simply because they could, not because anyone actually found the feature useful. I suppose you can be more creative when something is only used internally and not by a customer; but still, incredibly pointless.

Law of Demeter

The Law of Demeter, also known as the principle of least knowledge, advocates a design where objects are loosely coupled and only communicate with their immediate acquaintances. This means that a method in a class should only call methods on:

  • Its direct components.
  • The object itself.
  • Objects passed as parameters.
  • Any objects it creates.

In practice, this means you shouldn’t have a massive chain of methods/properties.

One example of breaking this law that I saw in our code looked like this:

responseMessage.acknowledgements.conveyingTransmission.controlActEvent.reason[0].detectedEvent.code.code

Awful.
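The usual fix (sketched here with made-up wrapper types, not our real classes) is to ask the nearest object for what you need, so the chain lives in one place:

// Callers only talk to ResponseMessage; the rest of the chain becomes
// an implementation detail hidden behind one method.
public class ResponseMessage
{
	private readonly Acknowledgements _acknowledgements = new Acknowledgements();

	public string FirstDetectedEventCode() =>
		_acknowledgements.FirstDetectedEventCode();
}

public class Acknowledgements
{
	// Stand-in for walking conveyingTransmission.controlActEvent
	// .reason[0].detectedEvent.code.code internally.
	public string FirstDetectedEventCode() => "AB12";
}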

Partial Record

One of our modules can potentially show a large amount of data. The data can come from the local record, or, if there is a sharing agreement, from other companies that share with us. There are certain screens where you don’t need all the information, so to try to cut down loading times, we have the concept of a Partial Record and a Full Record.

	public IRecordPartial Partial
	{
		get
		{
			return IsFullLoaded ? Local : EnsurePartial();
		}
	}

The logic and wording get confusing real fast. The property above is called Partial, but there is a check for IsFullLoaded, which implies that Local can be Full, and not just Partial like the property name says. When you look further into the code, Local might not even be local, because it can contain shared data. Mindbending.

PercentNull

Coming up with descriptive names is always a challenge in programming, and there are occasions where naming is ambiguous. However, I have no idea what a method called PercentNull does here:

transactionOut.Surname = helperFacade.PercentNull(customerDetail.Surname);

It is assigning the result to Surname, so it should be returning text. But nothing immediately obvious comes to mind when you pass a surname into a method called PercentNull and get a surname back. It’s not like it is returning a percentage if it can, or null if it cannot. Or returning the percentage of the text that is whitespace. 🤷

How High

	public enum SeverityScale
	{
		None = 0,
		Low = 1,
		Medium = 2,
		High = 3,
		ExtraHigh = 5,
		VeryHigh = 6
	}

We have this enum to represent severity. It makes sense until you get past High. ExtraHigh makes sense as more severe than High, but should VeryHigh be higher than ExtraHigh? “Extra” and “Very” sound like synonyms. You need something more extreme, like ExtremelyHigh, to make the ranking immediately obvious.

Focussed or not?

	public void SetSearchBox()
	{
		SearchFocused = false;
		SearchFocused = true;
	}

When you see code like the above, it is most likely that the code is doing something weird, and because no one worked out how to fix the original issue, someone wrote more weird code to work around it. These “hacks” just stack up and become hard to remove. If you try to do the right/honourable thing and remove one, you will probably see some strange behaviour, which means you remove more code, and then more again as you find more problems. Repeat until you are left with the original issue, which you then have to fix.

So how can setting a property to false, then immediately setting it to true, actually do something? Probably because the property has loads of logic and side-effects. Probably best not to look. 🙈

Random Parameter

		public Organisation(bool randomParamToChangeSignatureForFasterConstruction)
		{
			// Had to create a new constructor that doesn't initialise internal state, param isn't used.
		}

The scary thing about this strange code is that it was written by one of our smartest developers. I have no idea why he had to create an extra constructor. Presumably the parameter is there because a parameterless constructor already existed. Intentionally not initialising internal state sounds like a recipe for bugs, though. Maybe it needs a redesign.

Slow Request

I remember my Dad telling me a story of a software team putting a delay into their code. Then each month, they would simply reduce the delay a bit and tell their managers they had been working hard on performance improvements.

if (Helper.SlowRequest)
	Thread.Sleep(15000);

I found the above code in our codebase. It relies on a configuration value being present in the config file; it was present, and set to true by default, for developers, so requests would always be slow. Changing the value to false speeds things up, but you have to know it exists to change it.

<add key="SlowRequest" value="true"/>

Although it doesn’t affect our customers, there’s always the chance that something goes wrong one day and it does.
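
A slightly safer variant would treat a missing or unparseable key as “not slow”, so the delay can only ever be opted into. A sketch, assuming System.Configuration is available:

using System.Configuration;
using System.Threading;

// An absent key defaults to false, so forgetting the setting can't slow anyone down.
bool slowRequest =
	bool.TryParse(ConfigurationManager.AppSettings["SlowRequest"], out var parsed) && parsed;

if (slowRequest)
	Thread.Sleep(15000); // simulated slowness, for testing only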

Boolean “Mode”

If you want to handle several options, you normally use an enum. Using a Boolean, which represents exactly two values (true/false), is a very weird choice…

 <param name="mode">If set to <c>true</c> then [mode] is a delete operation, else add, edit and none operations</param>

So:

true = delete

false = add OR edit OR none.

If you pass the wrong value, there’s an if statement afterwards. So if you say it’s a delete and it isn’t, things don’t get deleted.

		if (!mode)
		{
			if (@event.updateMode != vocUpdateMode.delete)
			{
			}
			else
			{
				// Redundant: we are already in the else branch,
				// so updateMode must equal delete here.
				if (@event.updateMode == vocUpdateMode.delete)
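
The code clearly already has a vocUpdateMode enum, so the obvious fix is to pass that around instead of collapsing it to a bool. A sketch (the method name is hypothetical):

	public void ApplyUpdate(vocUpdateMode mode)
	{
		if (mode == vocUpdateMode.delete)
		{
			// delete path
		}
		else
		{
			// add, edit and none paths
		}
	}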

Not Supported Property

	public virtual String UploadedBy
	{
		get { return _document.Observation.EnteredByUser.DisplayName; }
		set { throw new NotSupportedException("Setting UploadedBy is not supported"); }
	}

Isn’t that setter a complete dick-move? Call it and it crashes. If the setter wasn’t there at all, you would know the property shouldn’t be set.

I guess it could be designed with the expectation that you override the property. Even so, setting it correctly wouldn’t be simple, because the get returns the end of a long property chain: it’s actually _document.Observation.EnteredByUser.DisplayName that would need setting, and that chain breaks the Law of Demeter anyway.
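
If the property isn’t meant to be settable, the language can say so, and the runtime crash becomes a compile-time error. A minimal sketch:

	// A get-only property: callers physically can't assign to it.
	public virtual String UploadedBy
	{
		get { return _document.Observation.EnteredByUser.DisplayName; }
	}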

Mutually Exclusive

This is gonna end in tears:

private Boolean _isSingleClick;
private Boolean _isDoubleClicked;

Not only are there better ways of detecting clicks than this; when you have multiple variables tracking overlapping concepts, you can easily end up in invalid states. If _isDoubleClicked is true, you would expect _isSingleClick to always be false. But it is easy to forget to set one of them somewhere, and that leads to a bug.
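
One hedged alternative is a single field whose type makes the states mutually exclusive, so the contradictory combination can’t even be represented (names hypothetical):

private enum ClickState { None, Single, Double }

// One field instead of two booleans: _clickState can never claim
// "single" and "double" at the same time.
private ClickState _clickState = ClickState.None;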

Notifications

		public bool IsNotificationConfigEmailEnabled()
		{
			if (!_configSmsEnabled.HasValue)
				_configSmsEnabled = NotificationConfiguration.IsEmailEnabled;

			return _configSmsEnabled.Value;
		}

The fact that this code has been there for years suggests it must work somehow. But when the method is supposed to check whether Email is enabled, yet caches the answer in a field named for SMS, who knows what it actually does. If anything else uses _configSmsEnabled for the real SMS setting, whichever runs first wins.
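
Presumably the intent was something like this, with a field whose name matches what it actually caches (a sketch; the original field may of course be shared with real SMS logic elsewhere):

		private bool? _configEmailEnabled;

		public bool IsNotificationConfigEmailEnabled()
		{
			// Cache the email setting in a field named for the email setting.
			if (!_configEmailEnabled.HasValue)
				_configEmailEnabled = NotificationConfiguration.IsEmailEnabled;

			return _configEmailEnabled.Value;
		}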

Resizing

int parentSize = Parent != null
                ? Parent.Width
                : Screen.PrimaryScreen.Bounds.Width;
 
var availableSpace = parentSize - _clientLocation.Y - _yOffset;

What a nonsense calculation! We want to calculate the height of a pop-up box and make sure it fits within the window. So we take the width of the parent control, or failing that the width of the primary monitor (which the user might not even have the program on, if they use a secondary monitor), then subtract the Y coordinate of the window, which relates to height, not width.
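
If the question really is “how much vertical space is left below this point”, a height-based sketch (assuming WinForms, and that _clientLocation is a Point in screen coordinates) might look like:

// Use heights, not widths, and the screen the app is actually on.
int parentSize = Parent != null
	? Parent.Height
	: Screen.FromPoint(_clientLocation).Bounds.Height;

var availableSpace = parentSize - _clientLocation.Y - _yOffset;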

Splitting

Sometimes programmers like jamming as much code as they can into what is technically a single statement. Chained logic like this is an absolute nightmare to debug:

return
	RequiresSingleItemOrder(item, customerRequestsSplit)
	?
	null
	:
	batchOrders
	.FirstOrDefault(
		orderInstance =>
			(orderInstance.IssueMethod != IssueMethod.Electronic || orderInstance.Items.Count() < 4)
			&&
			!orderInstance.Items.Any(m => RequiresSingleItemOrder(m, customerRequestsSplit))
			&&
			orderInstance.IssueMethod == issueMethod
			&&
			orderInstance.OrderType == OrderType
			&&
			orderInstance.IsUrgent == isUrgent
			&&
			(!ShouldSeparateControlledItems
			||
			orderInstance.AnyControlledItems == item.IsControlledItem)
			&&
			orderInstance.Warehouse == Warehouse
			&&
			!Authorisation.Separator.ShouldBeSeparated(authoriser: authoriser, orderAuthoriser: orderInstance.Authoriser)
			&&
			(!ShouldSeparatePrivateItems
			||
			orderInstance.IsPrivate == isPrivate)
			&&
			MatchesForRepeatOrdering(item, orderInstance)
			&&
			NullAndNonsequentialEqual(separationTypeIds, orderInstance.SeparationTypeIds));
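
A hedged refactor for debuggability: pull the predicate into a named method with one clause per statement, so you can set a breakpoint on the exact condition that fails (the method name and the Order/OrderItem types here are assumptions):

	// Hypothetical extraction: each early return is a breakpoint-able statement.
	private bool CanBatchWith(Order orderInstance, OrderItem item, bool customerRequestsSplit, IssueMethod issueMethod)
	{
		if (orderInstance.IssueMethod == IssueMethod.Electronic && orderInstance.Items.Count() >= 4)
			return false;

		if (orderInstance.Items.Any(m => RequiresSingleItemOrder(m, customerRequestsSplit)))
			return false;

		if (orderInstance.IssueMethod != issueMethod || orderInstance.OrderType != OrderType)
			return false;

		// ...and so on for the remaining clauses, one per statement...

		return true;
	}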

Funny YouTube Comment

I saw this comment on a programming video on YouTube, remarking on how writing really confusing code is a form of job security, because you end up in charge of code that only you can read:

And remember, kids – if you nest multiple null coalescing operators into a single line of poly-nested ternary operators, that’s called “job security” – cause ain’t no one wanna maintain that code when you’re gone.

return a > b ? a < c ? a != d ? e ?? f ?? 0 : f ?? g ?? 0 : e ?? g ?? 0 : 0;

TypeScript Example

Years ago, we started rewriting our C# program using web technologies. Since everyone was new to JavaScript and TypeScript, everyone wrote awful code, and at the time you couldn’t tell whether it was good or bad. One of the first bits of code I saw when I joined the team was this:

export const Actions: { [key: string]: <T extends {}> (value: T) => IAction<T> } = {
 setModuleName: <T extends {}>(value: T): IAction<T> => CreateAction<T>(ActionTypes.setModuleName, value),
};

Don’t ask me what it says, because I have no idea. (As far as I can decode it now, it declares a map of action-creator functions, each taking a value and wrapping it in an IAction via CreateAction; the generic constraint soup makes that far harder to see than it should be.)

Contradiction

InsertUpdateResourceImmutableProperties

It’s so frustrating to read that it makes you want to punch someone. Immutable means it shouldn’t change after it has been created, so how can you Update something Immutable? Luckily there’s a comment explaining it:

    -- If we are about to insert the most upto date version of the resource, then we should update the
    -- resources immutable properties, because they might have changed in the source system. (Even though they shouldn't).

Well, I am still none the wiser.

Boolean Extensions

If you have a boolean variable and you want to check it is false, you can just write:

bool example = false;

if (example == false)
{
}

Now with this handy extension method:

public static class BooleanExtensions
{
	public static bool IsFalse(this bool boolValue)
	{
		return !boolValue;
	}
}

You can now write:

bool example = false;

if (example.IsFalse())
{
}

What’s the point in that? There’s no real advantage, is there?

Not Set Up

		get
		{
			if (_setup)
			{
				_setup = false;
				return true;
			}

			return _noDocumentsInError;
		}

In the getter, we check the value of _setup. If it’s true, we set it to false and return true. But call the same property again immediately and it instead returns _noDocumentsInError. Having side effects in a get is bad design: gets are supposed to return a value, not change state. On top of that, different fields tracking different concepts behind one property looks prone to errors.

Reload

	public void ReloadTriggers()
	{
		// To be implemented
		return;
	}

The return statement isn’t even needed; it’s just there for the bantz. When this code lives in a module that is very error-prone, and you have a method that is not implemented and does nothing at all, it doesn’t give a good impression, does it?

Dialog Events

	_msgBox.ShowDialog();
	return _continueWithSelection ? DialogResult.OK : DialogResult.Cancel;
}

private void Cancel_Selection(object sender, EventArgs e)
{
	_continueWithSelection = false;
	_msgBox.Dispose();
}

You can normally get the dialog result when the dialog closes. This developer instead wired up their own event handlers for the button clicks, then used a field to decide whether the result is DialogResult.OK or Cancel.
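
For comparison, WinForms supports this directly: give each button a DialogResult and ShowDialog returns it when that button is clicked, no flag or hand-rolled handler needed (a minimal sketch, assuming _msgBox is a Form with these two buttons):

// The framework closes the modal form and returns the clicked
// button's DialogResult from ShowDialog().
okButton.DialogResult = DialogResult.OK;
cancelButton.DialogResult = DialogResult.Cancel;

DialogResult result = _msgBox.ShowDialog();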

NoEncryptionEncryptor

new NoEncryptionEncryptor() 

Is this an Encryptor or not? 🤔

SillyException

You throw an exception when something bad has happened. In our code, however, there is the concept of a SillyException, which you are supposed to ignore because apparently it isn’t a real error. It’s only used in one specific part of our codebase, so it goes undiscovered for ages. A colleague, Michael, found it again and we reminisced about a bug I once found.

Michael 11:33: 
if (mapped is SillyException) return 0;         // Avoid throwing the SillyException

Me 11:33:
I love sillyexception

Michael 11:33:
/// <summary>Represents a data provider exception that doesn't indicate an actual error,
/// and should be ignored.</summary>
[Serializable]
public class SillyException

Me 11:34:
I remember David going on about SillyExceptions and it took me ages to realise he was referring to an actual thing

Michael 11:35:
"I keep getting this SillyException"
what kind of exception is it David?
IT'S A SILLYEXCEPTION
YES, BUT WAT KIND

Me 11:35:
yeah, it did go something like that

Michael 11:35:
haha

Me 11:37:
I think it was around the time he spent weeks investigating that bug I found that no one else could recreate
and I thought he had worked it out when he said he was getting a SillyException

Lame Async Call

var result = client.BeginUploadFile((Stream)payload, endpoint);
bool uploadCompleted = false;
while (!uploadCompleted)
{
	uploadCompleted = true;
	if (!result.IsCompleted)
	{
		uploadCompleted = false;
	}
}
if (!uploadCompleted)
{
	throw new Exception($"Failure occurred while uploading the data");
}
client.EndUploadFile(result);

It’s weird that they set it to true, then immediately set it back to false. And the exception can never be thrown: the only way out of the loop is uploadCompleted being true, so the check after the loop is dead code. If the upload never completes, the loop just spins forever, burning CPU.

Isn’t this just a busy-waiting reimplementation of a blocking call?
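
Assuming this follows the standard .NET Begin/End (APM) pattern, the blocking wait already exists: the End call blocks until the operation finishes and rethrows any failure. The whole loop collapses to:

var result = client.BeginUploadFile((Stream)payload, endpoint);

// EndUploadFile blocks until the upload completes and rethrows
// any exception the upload produced.
client.EndUploadFile(result);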

Incompetent Developer Tales

We had an Indian developer who wasn’t very good, and he also wasn’t great at communicating in English. I don’t mind people starting out as juniors and improving over time, but you need a mindset for quality and to care about your work, and Manikandan didn’t seem to have that.

Text

Making a change that just updates text is one of the easiest changes in software development. This particular area of the code had unit tests checking the message we displayed to the user, so if the requirement changed, you would update the unit tests and then change the actual logic. Manikandan changed the text, but for some reason added an extra space at the end, and didn’t update the unit tests.

“As suggested the changes were made and PR got patched.” 

Manikandan

Another change involved moving the file path where a specific file (an SQLite database) was meant to live. He ended up changing the path of a different file instead. I pointed it out and he claimed he had tested it.

“I have dev tested this yesterday, it looks like i have missed sqlite folder path.

Mistakenly i have taken up the cache folder.

Will update and patch the PR. Apologies as i have misplaced the path location mistakenly and have corrected it now and done dev testing”

Manikandan

Corrupt Files

When viewing files that users have attached, some we support natively, like text and images, whereas specialist files we allow opening externally. Some we blocked completely.

A new change would allow files to be saved regardless of whether they were unsupported or suspected to be corrupt. We would inform the user of this, though, so these were the two requirements:

When you attach:

This file “xxxxx.tif” has been successfully attached. However the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file.

When you select “open file”:

This file “xxx.tif” cannot be opened because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file.

He writes the following note in the bug report:

Made a proper message information need to be displayed to the user as well as restricted further attachment of the corrupted file into user record history.

Manikandan

So that sounds like he is blocking suspected corrupt files, which is not the requirement. When I looked at his code changes, it did seem he had done the wrong thing.

So I ask him:

“Have the requirements changed since the product owner’s comment? It looks like you have the message that should be displayed, and it sounds like the attachment should still be saved.”

Me

he replies

“Corrupted Document will not be attached to the record. Once again checked in the database to confirm”.

Manikandan

So I reply again:

“Who has decided that it shouldn’t be attached to the record? The Product Owner said it should be attached and there should be a specific message shown to the user.”

Me

“Apologies for that, will check once again with product owner”

Manikandan

“Got the update and have made the changes as per Document gets attached to the record irrespective of the document was corrupt or not. Will show a message to the user regarding the file corruption but allows the user to attach the doc irrespective of its correctness”

Manikandan

So it seems he finally understood the requirement. However, the change he submitted for review showed a message on loading (but not the one the Product Owner asked for) and still crashed on save, so he hadn’t really fixed anything.

Finishing my work

I had made loads of changes for a new set of validation but didn’t have time to finish it off, as I got moved onto a different project and had to hand my work over to him. He made some changes and submitted them to me for review. In one file I had already written the code to show the new message, but he had added another if statement containing the exact same message, except with some extra spaces in it! Redundant code, and wrong too.

Another requirement was:

“This Warning should be available irrespective of Admin Org flag...”

Yet he had an if statement checking the flag:

else if (IsAdminOrg)
{

There should be no if statement, because the warning should appear regardless of whether the flag is on or off.

In another code block, his logic was based on this check:

specialisedCode.Equals(SpecialisedCode.HDS3)

This Equals check could never evaluate to true, because it compared values of different types, so it always returned false. And even if it had worked, wouldn’t it display the same text twice? He had already added the name to the list, then added it again when the if statement was true:

if (matchedItems != null)
{
	var displayName = matchedItems.GetAttribute("displayName");
	var specialisedCode = matchedItems.GetAttribute("code"); ;
	if (!allComponentNames.Contains(displayName))
	{
		allComponentNames.Add(displayName);

		if (specialisedCode.Equals(SpecialisedCode.HDS3) || specialisedCode.Equals(SpecialisedCode.HDS4))
		{
			allComponentNames.Add(displayName);
		}
	}
}

Another part of his change contained this:

var hasExceptionalComponents = HasExceptionalComponents(sectionDefinition);

if(!data.IsSystemLibraryItem || (data.IsSystemLibraryItem && hasExceptionalComponents))

If the first part of the if statement (“!data.IsSystemLibraryItem”) is false, then data.IsSystemLibraryItem must be true, so the “data.IsSystemLibraryItem &&” in the second part is always satisfied and can be dropped. Yet more redundant code from Manikandan. It could simply be written as:

if(!data.IsSystemLibraryItem || hasExceptionalComponents)

But what was he doing when the statement was true or false? Here is more of the code:

if (!data.IsSystemLibraryItem || hasExceptionalComponents)
	FormListFromTemplateSectionDefinition(sectionDefinition, allComponentNames);
else
	FormListFromTemplateSectionDefinition(sectionDefinition, allComponentNames);

That’s right: both branches do exactly the same thing, so the whole block can be one line:

FormListFromTemplateSectionDefinition(sectionDefinition, allComponentNames);

So the majority of the code he wrote was just redundant. How can you be so incompetent?

When I flagged it to Manikandan, he said:

“or i am i missing it somewhere, sorry. lil bit confused on this, as i am alone working on this project.”

Manikandan

This isn’t a misunderstanding of the requirements; it’s just a lack of care and thought.