Overtime

In a follow-up to my blog on Work Hours, I wanted to discuss Overtime. From my experience, Overtime has rarely been worth it for the employer, and my strong guilty conscience prevents me from taking advantage.

I haven’t researched the topic, but the widely adopted “9-5 work hours, 5 days a week” schedule must have emerged from years of theorising and research. Working past this limit will lead to burnout. After working for a few months in the 9-5 structure without additional time off, I reach a point where I need time away. Working overtime just brings that point forward.

Recently, my project has been approaching the final deadline, so overtime has been available. Some team members have taken advantage of this, whereas others have avoided it completely.

One day, I was helping out with testing and finding appropriate Test Cases to link to the Work Item. I saw that some tests I had previously written were now incorrect. I checked the history, and the tester who was doing overtime (I’ll call him Brian) had changed them late on Sunday night. He turned up late to work on Monday looking really fatigued, and several times during the day he called me over to explain Work Items to him, which was unusual. So basically, the employer paid him for a full day’s work on Sunday, and then he failed to perform on the Monday when he was actually contracted to work. It seemed that we gained a day, then lost it straight away.

There was another Work Item I was looking at, and I saw an Overtime Task from Colin assigned to it. Weird, since the tests were written by Brian, someone else wrote the fix, and it hadn’t been tested yet. I checked the history, and Brian had removed all the tests that Colin had written and rewritten them during normal work time. Colin was claiming 7 hours of overtime on it. A full day at the weekend for no benefit.

There was a day when I overheard Colin telling the Product Owner that he had lost his work over the weekend. Now, I didn’t hear his reason, but I was thinking, “how the hell can you lose your work?”. Let’s say it was a hard-disk failure; well, you wouldn’t be complaining that you lost your work – you would be moaning about losing everything. Let’s say it was a server failure and his Shelveset was gone; many other people would be complaining too. So the only thing I can think of is that he didn’t create a Shelveset, then undid his changes, which is just incompetence. The only other alternative is that he had nothing to show for his several hours of development, so came up with a “dog ate my homework” excuse to try and get paid anyway.

Yesterday, I was sat with Colin discussing the work we had left. He told me that he was sick of the project and felt overworked, and since we were roughly on track, he was going to take a couple of days off work. He then asked me why I don’t do overtime. Well, I think this proves my point.

We were seemingly behind; he did all this overtime, and the end result is that we’d probably be in the same position if no overtime had been available – except now we have fatigued, demotivated team members.

I don’t know what the end result is going to be, but there was a time when promotions were dished out, and I didn’t get one despite it being a widely accepted fact that I was among the highest performers in the team. In a meeting with my line manager, he said the reason Tim was picked over me was that Tim did overtime when the company really needed it. I was like, “WTF, I’m contracted to do a job, and I do it in normal hours, or stay behind and get it done. Tim wastes time doing personal shopping on Amazon, then when his work isn’t done, he volunteers to do overtime at an extra cost to the company”. Nice guys finish last.

Work Hours

I always wonder what an acceptable amount of breaks is, because people have their own views on what is efficient. Some people like regular breaks, whereas others prefer fewer but longer breaks.

Often, managers say:

“I’m not bothered, as long as the work gets done”

but it would be strange if someone did 3 hours of perfect work, then left for the day.

Personally, I rarely leave my desk, but I sometimes procrastinate by peeking at other teams’ work to learn what else is going on.

Derek takes 20 minute smoke breaks at certain times of the day. He will also have a 20 minute break with his best work-mate, then take an hour lunch even though contractually we get 30 minutes. On a good day, he will have 2.5 hours out of his 7.5 hour day away from his desk, but on bad days he can have up to 4 hours.

Now, Derek’s output isn’t actually the lowest in the team, although you can often question the quality of his work. My view is that because Derek spends so much time trying to avoid work, he has never become very good at it. So when he actually does work, he has to cut corners and cobble something together.

Meanwhile, Colin comes into work late and takes far fewer breaks, but his breaks are longer. He is known to work obscure hours, remoting in from home. When available, he will also claim as much overtime as possible, so he appears dedicated. His output is much higher than Derek’s, but the quality is often erratic.

If I were a manager, I think I would much prefer people to be around their desks, so that even if they are chilling out, they are free to answer questions from team members who are trying to get something done. Ultimately, I think a person’s overall output should be analysed to provide evidence for their approach. Low quality or low quantity is poor, so I think Derek’s and Colin’s approaches need to be questioned.

Polymorphism

This blog entry probably requires some basic understanding of programming in an Object Oriented language.

A technique in Object Oriented programming is to take advantage of Base Classes or Interfaces to make code more reusable.

So you could define an Interface with a method on it, let’s say IAnimate, and create a class, let’s say “Animator”, which takes in some kind of object that can be animated. The constructor would look something like this:

private IAnimate _animatedObject;

public Animator(IAnimate animatedObject)
{
    _animatedObject = animatedObject;
}

Then let’s say this class also has a method that calls the method defined in the interface.

public void PlayAnimation()
{
    _animatedObject.Animate();
}

As long as your defined objects implement the IAnimate interface, you can pass them into the Animator class. The Animator class doesn’t need to know anything about which classes implement it. It doesn’t have any dependency on Cat, Dog, Fish etc. In the future, more animals can be added to the program, and the Animator never needs to change. Simple enough for a Junior Programmer to understand.
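To show the whole pattern end to end, here is a minimal, self-contained sketch (the Dog and Fish classes here are hypothetical stand-ins, and Animate writes to the console just so there is something observable):

```csharp
using System;

public interface IAnimate
{
    void Animate();
}

public class Dog : IAnimate
{
    public void Animate() => Console.WriteLine("Playing the Dog animation");
}

public class Fish : IAnimate
{
    public void Animate() => Console.WriteLine("Playing the Fish animation");
}

public class Animator
{
    private readonly IAnimate _animatedObject;

    public Animator(IAnimate animatedObject)
    {
        _animatedObject = animatedObject;
    }

    // Animator only ever talks to the interface; it has no idea
    // (and doesn't care) which concrete animal it was given.
    public void PlayAnimation()
    {
        _animatedObject.Animate();
    }
}

public static class Program
{
    public static void Main()
    {
        new Animator(new Dog()).PlayAnimation();  // Playing the Dog animation
        new Animator(new Fish()).PlayAnimation(); // Playing the Fish animation
    }
}
```

Adding a Cat later means writing one new class that implements IAnimate; Animator itself stays untouched.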

Colin came up with a class which completely rendered this concept pointless.

private Dog _dog;
private Cat _cat;
private Fish _fish;

public Animator(IAnimate animatedObject)
{
    _dog = animatedObject as Dog;
    _cat = animatedObject as Cat;
    _fish = animatedObject as Fish;
}

public void PlayCatAnimation()
{
    _cat.Animate();
}

public void PlayDogAnimation()
{
    _dog.Animate();
}

public void PlayFishAnimation()
{
    _fish.Animate();
}

So with this design, we are still passing in an interface, but there is some bizarre casting going on in the constructor, and we save the result to the appropriate variable. We also have a method for each implementation of IAnimate, so there’s a separate method for Cats, Dogs and Fish, even though they do exactly the same thing. Whatever class actually uses Animator has to know what type of animal it is for, because creating an Animator for a Fish, then calling PlayDogAnimation(), will crash: the “as” cast leaves _dog as null, so calling Animate() on it throws a NullReferenceException. In the future, when new animals are added, Animator will not work without extra code changes; you would need a new variable and a new method for each type of animal.

Any advantage that Polymorphism brings has been completely eradicated with this design. All that has happened is that there’s extra complexity and crashes. It shows a complete misunderstanding of such a fundamental technique to Object Oriented Programming.

Inverted Logic

A few months back, Derek was writing some logic to decide if something was valid or not. This would then be used to display warning messages to the user.

He ended up writing a method like this:

public bool NotValid()

Now, if you aren’t a programmer, this sounds fine. But when you come to use it, things can get confusing. When you want the Valid case, you will have to invert it using the logical negation operator “!” (i.e. “!NotValid()”) which then reads “if not not valid”. A double negative.

I flagged this up on his Code Review, but Derek left it in. This led to a bug, because he had used it in a method and didn’t use the “!” to invert it.

I was at his desk the following week discussing a problem he was having. I saw a work-in-progress method which had the same double negative problem (he still hadn’t learned his lesson). I was about to comment on it, but then he turned to me and said “I know you don’t like that negative logic, but I find it easier”.

How can it be easier? Having a method called “Valid()”, then using “!Valid()” when you want the opposite, is perfectly fine. It is easy to understand and easy to read: “Valid” and “Not Valid”; completely fine. Versus “Not Valid” and “Not Not Valid”; confusing. I don’t see any way to justify that it is easier.
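To make the comparison concrete, here is a hypothetical Valid() predicate (the Form class and its validation rule are made up for illustration) with both call sites next to each other:

```csharp
using System;

public class Form
{
    public string Name { get; set; } = "";

    // Positive name: the method states what we actually want to know.
    public bool Valid() => !string.IsNullOrEmpty(Name);
}

public static class Program
{
    public static void Main()
    {
        var form = new Form { Name = "Derek" };

        if (form.Valid())            // reads as "if valid"
            Console.WriteLine("Saving");

        if (!form.Valid())           // reads as "if not valid" - one negation at most
            Console.WriteLine("Showing warning");
    }
}
```

Had the method been NotValid(), the save branch would have to read “if (!form.NotValid())” – the “not not valid” double negative described above.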

On a similar note, there was a Code Review from Colin for checking that one date wasn’t greater than the other – the common validation scenario where you have two Date controls and want to enter a date range: a From date and a To date.

public bool IsFutureDate(DateTime startDate, DateTime endDate)
{
    return !(startDate <= endDate);
}

Here, the method signature is fine (maybe their naming was better than mine), but the logic inside is using a negative approach which makes it hard to read. “Start date not less than or equal to end date”. Why not simply “start date greater than end date”?

return startDate > endDate;

Lessons Learned

Logic within programs can be naturally complex, but a skilful programmer can break it down into small, manageable functions. There is no point adding complexity by writing simple statements in an illogical, hard-to-read way.

Time In Ints

I wanted to keep my stories anonymous, but there is a very specific developer I want to write many stories about. I want to attribute those stories to the same person to build up his character, so I’ll always blog about him under the name Derek. Any other story, I’ll blog under the name Colin, and collectively, maybe Colin’s persona will match the insanity of Derek.

So here is a Colin story.

Just some background for those not familiar with C# or programming languages: if you want to store a number, you will often use the type “int”, which is short for Integer – a whole number.

Colin had to program a countdown timer which took the time from a config file. Here was the Property he chose:

public int TimeInInts

The second word is the return type, so as we begin to read this, we know the Property returns an integer. What does this integer represent? A time. Fair enough, but in what unit? Ints.

Ints!

Absolutely useless! Is it hours, minutes, seconds, milliseconds, something else? We don’t know without reading more code. That’s the telling sign of a badly named Variable/Property/Method: if you have to dig deeper to work out how you can use it.

I assumed it was minutes, but then I looked in the same file and found another method.

public int GetTimeInMinutes() 
{
return TimeInInts * 60 * 1000;
}

So GetTimeInMinutes uses TimeInInts in order to calculate minutes, so TimeInInts isn’t minutes.

Hang on, “* 60 * 1000“. That’s the calculation to convert minutes to milliseconds. So TimeInInts IS time in minutes and GetTimeInMinutes returns the time in milliseconds!
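A corrected sketch, assuming the config value really is minutes (CountdownConfig and both member names are my own suggested renames, not the actual code):

```csharp
public class CountdownConfig
{
    // The unit lives in the name, so no digging through code required.
    public int TimeInMinutes { get; set; }

    // 1 minute = 60 seconds = 60 * 1000 milliseconds.
    public int GetTimeInMilliseconds()
    {
        return TimeInMinutes * 60 * 1000;
    }
}
```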

What.

The.

Hell.

How can you even write that, use it, and possibly get the feature working when everything you are dealing with has the wrong name?

Unbelievable.

Anyway, if you wondered where my username came from: now you know.

Lessons Learned

Always try to come up with the best variable names possible. Anything ambiguous, misleading or wrong will lead to confusion and bugs.

Derek Origins

There was a project I was working on which appeared to be low priority. When the Lead Developer left the company, he wasn’t replaced. Additionally, another developer was reassigned to another team. This left us with three developers, but one soon handed in his notice. That meant I had suddenly become the Lead Developer in the team by default, but there was only so much we could do in a team which essentially had 2 developers.

I always find it inspiring working with better developers, because you can aim to be as good as, or better than, them. So when the news broke that we were getting a Senior Developer and Derek, a Developer who had just been promoted from his Junior position, I was excited. The business seemed to have a great track record of bringing in good Juniors who proved their worth (myself, for example), so I had high hopes indeed.

However, Derek instantly made a bad impression, and he hasn’t improved in the following years.

One of my earliest memories of Derek was when he was tasked with fixing a trivial bug but came up with a solution I just didn’t expect anyone with more than a few weeks’ experience to even consider.

We had a simple feature: you select an entry in the grid, click a button, fill in some text and click save. The text, along with an ID, is passed into a stored procedure to save the data to the database. It was a simple bit of code to understand: the SQL took in the ID of the row and updated one text field. However, it seemed some recent changes had caused the ID to be Null, so we were asking the database to save some text against an unspecified row. As a result, the application crashed.

The solution is obvious to everyone: find the line of code that should set the ID, and make sure it is populated with the correct ID. The SQL doesn’t even need to be changed.

Derek changed the SQL.

He decided that because the Bug Report was reporting a crash, the solution was simply to stop the crash. His solution was to put an If Statement in: “If the ID is null, then return”. Crash averted. However, the feature is still broken. All that happens is that the user is no longer told something is wrong; the system just pretends it has saved (deceiving the user) and carries on as normal. This could have serious implications, because this was actually a fairly important bit of data that should have been saved. There was a bug in the system, and Derek had made the problem worse.

I obviously dismissed the Code Review, but to my surprise, he had checked it in anyway. I went over to his desk and told him in person that this needed to be looked at. He got angry and argued that I was changing the requirements, because the bug report never said this needed to work; it only said there was a crash and that this shouldn’t happen. He had met the requirements and advised that I needed to log a new User Story. I think he was actually deadly serious, but I walked off, deciding to wait until the next day before pursuing the issue further.

Luckily, Derek must have reflected on the incident somewhat and did revisit it, although he gave me a flippant comment about only doing it to “make me happy”.

Lessons Learned

Everyone understands that an application crash is bad. The software isn’t working as intended, and the user will become frustrated when there’s a feature they cannot use. However, a crash informs the user that something is wrong, and the user has full knowledge that they have to perform this action again when the issue is resolved. Hiding the error from the user means the user has lost their work and is unaware. Depending on the feature, the user may take a while to realise the data is lost, and by that point, maybe they don’t recall what they entered to be able to redo it. The missing data could mean someone makes a different decision, potentially having serious implications.

There may be times when problems can safely be hidden from the user. Careful thought needs to be applied in order to determine whether the functionality is trivial enough, and the problem should still be logged (in some kind of log file) for developers to investigate.
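As a sketch of that alternative (SaveService, SaveComment and its parameters are hypothetical, standing in for the real stored-procedure call), the fix validates the ID, logs the defect, and fails loudly rather than silently returning:

```csharp
using System;

public class SaveService
{
    public void SaveComment(int? rowId, string text)
    {
        if (rowId == null)
        {
            // Record the defect somewhere developers will see it...
            Console.Error.WriteLine("SaveComment called with a null rowId");

            // ...and fail loudly, so the user knows nothing was saved,
            // instead of pretending the save succeeded.
            throw new InvalidOperationException("Cannot save: no row was selected.");
        }

        // Normal path: pass rowId.Value and text to the stored procedure.
    }
}
```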

Data Table Madness

The following story is true, but is anonymised. The domain of Animals wasn’t used in the exact code, and the requirements about Identifiers for England and Scotland is purely fictional. However, if you find and replace the fictional terms for the actual ones, then the story is 100% true.

So we have a dialog which displays various animals, which basically come under three types: Dogs, Cats, and Fishes. The Dogs and Cats both have Mammal Identification Numbers, although as per requirements we only display them for the Dogs.

Now Scotland has come along with the requirement that Dogs need to display alternate identifiers (ScottishPrimaryIdentifier and BreedID), so there are two different identifiers compared to the one in England.

This new information is displayed in a dialog, and one of the buttons launches a different dialog which shows a slightly different view but still needs this information. You would think it was simple, but when Derek is on the case, you know things can get messy.

So on the first dialog, Derek adds two columns and programs them to show for both mammals, which was incorrect; and due to the way it tried to populate the information, it crashed (obviously no regression testing was done on Derek’s part). Not only that, but he didn’t actually add two columns for the new identifiers like you would expect. He added a duplicate column for the “Animal Name” (hiding the existing “Animal Name” column in England and showing the new one for Scotland), then renamed the EnglishPrimaryIdentifier column and shared it with BreedID, with a new column being added for the ScottishPrimaryIdentifier.

This is absolute madness of course, because if you felt the need to share a column, the ScottishPrimaryIdentifier would be the only logical column to share with EnglishPrimaryIdentifier, since they are both primary identifiers and somewhat logically equivalent. It looks even more stupid in the code, since he ended up writing “breedIDCellValue = englishPrimaryIdentifier” for the English case. Further to this, he had all kinds of wrongly named local variables, so you even saw lines of code like “englishPrimaryIdentifierCellValue = animalName” even though it was actually the cell for Animal Name. So I end up rewriting it: correcting all the names, removing the duplicate column, separating the shared column, and correcting the logic to only apply to Dogs; then I notify Derek that we must ensure we only change these dialogs for Dogs.

The next day, a Code Review comes through for the second dialog which looks like it won’t work for Cats (yet again), so I leave a comment on the Code Review asking him to check the logic. This gets checked in, but is reopened by the tester because, to Derek’s surprise, it didn’t work for Cats.

So Derek picks up the work item for his second attempt. Instead of using the same logic I had written, he decides he will use his own approach. He creates a property called IsCat and uses that to set the visibility of the columns. This obviously fixes the Cat problem, but breaks it for Fishes because obviously Fishes aren’t Cats. Fishes now show extra columns with no data in them. It gets reopened again.

So Derek picks it up for the third time and creates a new property called IsDog, which sounds correct, but he still insists on using the redundant IsCat property; i.e. IsDog = (!IsCat && animalID == dogTypeID). Obviously stupid, but it works.

When analysing the rest of the final code, I notice that he has also pursued a very stupid design using more redundancy, plus misleading method signatures. There’s a method AddIdentifierRow(string animalName, string primaryId, string breedID, bool isCat). Remember how I said we don’t show identifiers for Cats? Well, when you look at the code inside the method: if isCat is true, then the two data cells for identifiers are set to string.Empty, rather than using the two identification parameters that you passed in. So basically the boolean isCat means “ignore the previous parameters”. It’s also misleading that it is named AddIdentifierRow when it isn’t going to display Identification Numbers in some cases.

Even more misleading is the fact that there was an overload of AddIdentifierRow which didn’t even take the isCat parameter, and its only usage was inside an If Statement where it was always guaranteed to be a Cat. But how does that AddIdentifierRow overload know it is dealing with a Cat rather than a Dog? Well, this time it checks if BreedID is null. Technically Cats do have a BreedID; it’s just business logic not to show it, so that’s not a great way of determining it wasn’t a Dog. So we have the same method name with different ways of determining the intent of the caller! So confusing! To be fair, this was existing code before Derek got his mitts on it, but a good developer would have refactored the code to make the intent clearer. Simply putting up with pre-existing poor code and making it worse is unacceptable.


Lessons Learned

I’m never a fan of passing booleans into a method. In most cases, booleans will result in an If statement with two outcomes. If we already know what the outcome should be, then we should be able to call a method directly which describes exactly what we want to do. So AddIdentifierRow becomes AddCatRow, AddDogRow and AddFishRow, and the parameters you pass in will always be used. AddDogRow will have a signature like (string animalName, string primaryId, string breedID), yet AddCatRow can just have (string animalName).
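A sketch of that refactor, using the anonymised Animal domain (IdentifierGrid is hypothetical, with the grid simplified to a list of cell-value arrays; the real code would add rows to the actual data table):

```csharp
using System.Collections.Generic;

public class IdentifierGrid
{
    public List<string[]> Rows { get; } = new List<string[]>();

    // Each method states the caller's intent, and every parameter
    // that is passed in is actually used.
    public void AddDogRow(string animalName, string primaryId, string breedId)
    {
        Rows.Add(new[] { animalName, primaryId, breedId });
    }

    // Business rule: identifiers are never displayed for Cats,
    // so the method simply never asks for them.
    public void AddCatRow(string animalName)
    {
        Rows.Add(new[] { animalName, string.Empty, string.Empty });
    }

    public void AddFishRow(string animalName)
    {
        Rows.Add(new[] { animalName, string.Empty, string.Empty });
    }
}
```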