This one has a selection of emoji keys. For someone who loves emojis, it sounds like a great idea in theory. However, at work I tend to react to messages on Slack/Teams, and this keyboard would only add emojis while you are writing text. They have also chosen very generic ones which they presumably think are popular. Although I might use a laughing emoji, I prefer more obscure ones based on inside jokes. So it wouldn’t really work for me.
Lego
There is also a Lego-like keyboard which looks bizarre; the straight layout and limited keys make it much harder to type. A total gimmick.
When writing code, some developers like to write notes to themselves or others directly in the code using “comments”. The intent is documentation: explaining what the code is doing. It is often argued that code should be “self-describing”, an idea I agree with. However, functionality can still end up complex: because it is inherently technical, because the developer struggled to come up with a simpler version, or because the domain logic is just that way.
I’ve collated various examples, with varying degrees of humour.
Confusing
Comments are supposed to add clarity to the code. When the code looks simple but the comment seems to describe something else, is ambiguous, or is misleading, the comment has actually decreased clarity.
// Display save dialog
RefreshFilters();
...
// Save over existing filter
RefreshFilters();
You would think RefreshFilters would simply reload the filters. Yet according to the comments, it prompts the user with a save dialog and even overwrites the existing filter.
Self-deprecating
//Well I've messed this up!!!
//Ultimately a filter ends up adding something to the where clause,
// or if it's a relationships filter, then the AND clause of the join.
//The way I'm adding it to these is not clean and just sucks. (two constructors with lots
// of private variables that aren't needed half the time...yuk.
//Reading the above comment ages later, I'm not sure why relationships have to be on the
//join clause? If it's a left one then yes but you can have a left join to a logical table.
//TODO: I've messed up
/// Ok. I've messed up. I've messed up good and proper.
/// Aggregate reports wants to format a date as a year, say. The formatting stuff
/// is buried in compare functions (because ranges was the only place that I needed to do this)
/// There is the column.AddOutputFormatInformation that might be useful when this is fixed????
// This looks like a hack, and, frankly, it is. I'm sorry, I can't work out how to make this not stupid.
// If you're reading this and have some brilliant insight into how to make this work in a way that doesn't
// make people sad, please go nuts and fix it.
Criticising Others
(cg) => { return cg.DisplayName; })); // Wow this is awful
matchingSlotEntity.JobCategoryName = string.Empty; // This doesn't make sense. JobCategory belongs to a person, not a slot.
/// <summary>This is awful. Get Ryan to support Guids for Mail Merge</summary>
//Please. Remove word interop. Not even Microsoft want us to use it.
TaskCount TaskCount { get; set; } // TODO: Again, this should be readonly. Something mucks with it, though.
MessageBox.Show("Sorry this feature hasn't been implemented yet... :(", "Sad Info..!!");
Funny
// Reverse the order as they are stored arse about face
this.ribbonBars.Reverse();
// Show yourself!
/// <summary>
/// Builds trees for a living.
/// </summary>
internal static class MailMergeTreeBuilder
this.AllowDrop = false; // No dropping on this tree thankyou.
Even Microsoft throw in the occasional gem.
// At this point, there isn't much we can do. There's a
// small chance the following line will allow the rest of
// the program to run, but don't get your hopes up.
Useful Warnings
This one warns you that the logic is confusing.
/**********************************************************************************
Description
-----------
You'd think that a stored procedure called 'GetNextBusinessDay' would return the
next business day for an organisation. That would be too easy.
What this stored procedure does, is return a day that is not explicitly marked
as 'closed'.
It also doesn't return the 'next' business day in the default case where the parameter @BusinessDayOccurrence is set to zero - in that case it returns the current day, or the next non-closed day if the current day has a closure defined on it.
@BusinessDayOccurrence doesn't find the first non-closed day after X days from today, incidentally. It returns the Xth non-closed day, which is an important difference. If you have closed bank holidays, and want to know the 2nd non-closed day after 24th December, it's not 27th December but 28th December. Confusing!
**********************************************************************************/
When things go wrong with your software, it’s obviously good practice for the developer to log relevant information to an error log. You then know not only when users are affected, but also how many are affected – to understand how widespread the issue is. With that information, it becomes easier to triage. There can be loads of other projects to work on, and bugs to fix, so being able to prioritise them is key.
Choosing what to log, and how often, can require some thought. You can come up with categories for your logging such as Information, Warning, and Error. Error means something has gone wrong; Warning means you suspect something has gone wrong, like missing config; and Information can be useful for debugging – for example, if there is an optional service the user connects to, you can log “user connected“ and “user disconnected“.
We have chat functionality which uses a PubSub (publish/subscribe) model, and we were logging status changes and connection statuses. If you blindly log scenarios like this, it can be counterproductive. If the statuses change frequently, and there are thousands of users, you can spam the error log and make it harder to see the real problems. If you see the same entries logged again and again, you start to think “we expect that“ and just ignore them.
There can be extra costs associated with logging too. Data takes storage space, and adding thousands of rows to a database per day can quickly increase its size. All those extra network calls can be excessive too.
We have had a few projects recently aimed at cutting down the number of errors logged.
When there is a genuine problem, fixing the root cause is obviously the best strategy. If the logs aren’t useful, it’s best to stop logging them.
If the logs are useful, sometimes it’s better to cut them down rather than stop completely. If you log “failed to connect“ and the code retries a few seconds later, do you really want to log another “failed to connect“? Maybe the functionality should try 5 times then give up until the user manually attempts to reconnect. Maybe the logs could remain on the user’s computer and be submitted once with the number of failed attempts: instead of 5 separate entries, a single entry saying it tried 5 times then gave up.
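As a sketch of that last idea (the type and method names here are hypothetical, not our actual logging code), a connection routine could count failures locally and emit a single summary entry instead of one entry per attempt:

```csharp
using System;

// Hypothetical sketch: retry a connection a few times, but log a single
// summary entry instead of one entry per failed attempt.
class ConnectionRetrier
{
    private readonly Func<bool> _tryConnect;
    private readonly Action<string> _log;

    public ConnectionRetrier(Func<bool> tryConnect, Action<string> log)
    {
        _tryConnect = tryConnect;
        _log = log;
    }

    // Returns true if connected; logs once, summarising all failures.
    public bool ConnectWithRetries(int maxAttempts = 5)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            if (_tryConnect())
            {
                if (attempt > 1)
                    _log($"Connected after {attempt - 1} failed attempt(s).");
                return true;
            }
        }
        // One entry instead of maxAttempts separate "failed to connect" rows.
        _log($"Failed to connect after {maxAttempts} attempts; giving up until the user reconnects manually.");
        return false;
    }
}
```

The same shape works for any repeated, expected failure: accumulate locally, report once.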
On a large-scale system like ours, the number of entries in the databases is crazy. Read this statement from a concerned Support team member (which I think reflects the stats 1 month after a recent release):
Based on the daily volume of errors logged over the past few days I’m expecting the number of errors logged in Monitoring to increase by 82% over the course of a month.
A NullReferenceException is an incredibly common mistake and probably the first problem new developers encounter. If you have a reference to an object, but the reference is null, you cannot call instance methods on it without an exception being thrown.
So in C#, you could have
Dog myDog = null;
myDog.Bark();
This will throw an exception because myDog is null.
Dog myDog = new Dog();
myDog.Bark();
This is fine because a Dog has been constructed and myDog holds a reference to it.
If you allow the possibility of nulls, then whenever you want to call a method, you end up checking for null.
if (myDog != null)
myDog.Bark();
A more concise syntax uses a question mark (the null-conditional operator), which only makes the call if the reference is not null:
myDog?.Bark();
So the question mark acts as the if statement, but expresses it more concisely.
New “Nullable Reference Types” & Improved Design
A cleaner and safer design, if you can manage it, is to never allow nulls; then you never have to check for them. In recent versions of C#, you can make the compiler assume references shouldn’t be null unless you explicitly specify that they can be; see Introducing Nullable Reference Types in C# – .NET Blog (microsoft.com).
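As a minimal sketch (reusing the Dog example from earlier), with nullable reference types enabled the type itself documents whether null is allowed, and the compiler warns when a possibly-null reference is dereferenced:

```csharp
#nullable enable

// With nullable reference types enabled, "Dog" means "never null" and
// "Dog?" means "may be null" - the compiler warns if you mix them up.
public class Dog
{
    public string Bark() => "Woof";
}

public static class Kennel
{
    // Counts how many dogs actually barked.
    public static int Demo(Dog dog, Dog? maybeDog)
    {
        int barks = 0;
        if (dog.Bark() == "Woof") barks++;       // fine: never null, by its type
        // maybeDog.Bark();                      // compiler warning: may be null
        if (maybeDog?.Bark() == "Woof") barks++; // null-conditional: skipped when null
        return barks;
    }
}
```

The payoff is that null checks only appear where the type says they are needed, rather than scattered defensively everywhere.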
Our Problem
I work on software which is over 10 years old, using an older version of .NET. So things can be null, and are often designed to be null. We noticed that newer developers (this seems common among our Indian developers for some reason) tend to put null checks everywhere, regardless of whether the reference can actually be null. This makes the code much harder to read, and possibly to debug, because you are writing misleading code and adding extra branches that will never execute.
In a recent code review, the developer added an if statement to check for null:
if (user == null)
return;
So if execution gets past this check, user cannot be null, yet for every single statement afterwards, he added the question mark to check for null again! E.g.
var preferences = user?.Preferences;
Although it’s also bad design to chain loads of properties together, it is sometimes necessary. Being concise and adding the question mark to check for nulls can make it hard to understand what actually executes. Combined with LINQ’s FirstOrDefault, you get even less clarity. FirstOrDefault returns the first item in a list, or null if there are no matches. So when you see all the conditionals plus a method like FirstOrDefault, a glance at the statement suggests it is very likely to return null.
var result = value?.Details?.FirstOrDefault() as Observation;
So “result” is null if: value is null; Details is null; Details contains no items; or the first item is not an Observation.
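To make that concrete, here is a runnable sketch with hypothetical stand-in types; each link in the chain is a separate way to end up with null:

```csharp
#nullable enable
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in types for the ones in the real code review.
public class Observation { }

public class Value
{
    public List<object>? Details { get; set; }
}

public static class ChainDemo
{
    // Null can arise three ways here: value is null, Details is null,
    // or Details is empty (FirstOrDefault returns null for reference
    // types). "as" adds a fourth: the first item isn't an Observation.
    public static Observation? Result(Value? value) =>
        value?.Details?.FirstOrDefault() as Observation;
}
```

Packed into one line, four distinct failure modes all collapse into the same null result, which is exactly why these chains are hard to debug.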
“everything is null these days, or not sure what type it is”
Me
A longer example is the following:
if (recordData != null && (_templateDataSource?.EventLinks?.ContainsKey(recordData) ?? false))
Due to the null checks, they have added the null-coalescing operator (??), defaulting to false when the expression evaluates to null. However, none of this data should have been null in their code, so it should simply be:
if (_templateDataSource.EventLinks.ContainsKey(recordData))
The Lead Developer made a good point: if this data was null, it would be a major bug, but with the null checks in place the logic gets silently skipped and the bug is hidden. It would be preferable to actually crash so you know something is very wrong.
Lead Developer
Should this ever be null? Or the EventLinks on it? Remember, all these null safety checks could allow the software to continue on invalid data conditions and produce wrong answers instead of crash. Please re-check ALL the ones you have added in these changes, and if something should never be null remove it. Unless you are planning to test the behaviour of this when null is passed in and that it is correct.
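A guard clause that fails fast captures that advice. This is a sketch with hypothetical names, not our actual code:

```csharp
using System;
using System.Collections.Generic;

public static class Guards
{
    // Fail fast: if these arguments should never be null, crash loudly at
    // the boundary instead of silently skipping the logic with "?." checks.
    public static bool ContainsLink(Dictionary<string, string> eventLinks, string recordData)
    {
        if (eventLinks is null) throw new ArgumentNullException(nameof(eventLinks));
        if (recordData is null) throw new ArgumentNullException(nameof(recordData));
        return eventLinks.ContainsKey(recordData);
    }
}
```

The exception points straight at the invalid data condition, rather than letting the software continue and produce wrong answers.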
When so many objects can actually be null, it is easy to miss a check. In one code review I questioned the original design, which seemed to have excessive null checks (implemented by the same author as the new change). His new change added a null check that had been missed. After making some dramatic changes based on my feedback, he ironically removed his new null check, thus reintroducing the very issue he was attempting to fix!
Developer: "have added a null check to prevent the issue." Me: "then it will crash because you haven't added null checks"
Another good example of having excessive null checks, but somehow still missing some:
Some of the FirstOrDefault() calls then access a property without a null check:
FirstOrDefault().System
Also, because of the null checks when they set “site”, it looks like it can be null. So when they set “context.Site”, there are null checks in the constructor for Coding. However, that means they may be passing null into the constructor, which probably produces an invalid object.
Null flavours
I thought the idea of a “Null flavor” was very interesting: NullFlavor – FHIR v4.0.1 (hl7.org). Null means “nothing”, but there can be different meanings within that – different reasons why the value is missing. Some examples are:
missing, omitted, incomplete, improper
invalid
not provided by the sender due to security, privacy or other reasons
unknown
not applicable (e.g., last menstrual period for a male)
not available at this time, but expected to be available later
not available at this time, with no expectation regarding future availability
greater than zero, but too small to be quantified
Closing Words
Null checks are one of the simplest concepts in object-oriented programming, so it’s bizarre that we see so many modern programmers struggling to understand when to use them. Even when they find a bug in their own code, they fail to learn the lesson and continue to write bad code with inappropriate null checks. A better design is to avoid the possibility of nulls by always dealing with a valid object reference (even if the object is one that simply represents “null”, as in the Null Object pattern).
Performance of SQL database queries is an interesting topic that I have never fully understood. When dealing with a large amount of data, we are often told it is beneficial to process the results in batches. So if you want to update or delete rows based on some calculation, for example, we may be told to iterate through groups of 1,000 rows.
You can’t just put your query in a batch and assume it is efficient. There was one attempted data fix where the reviewing Database Expert reckoned it would take 2h 46min to execute:
“It doesn’t matter if you select 1 row or 1m rows – it still takes 10 seconds. So 1000 batches x 10 sec = 2h 46min processing time just working out what to delete. We need to change the proc so that it gets all 1m rows in one hit at the start into a temp table (should take 10 seconds) and then use that list to work from when batch deleting the rows”
Database Expert
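The pattern the Database Expert describes – run the expensive selection once, then work through the saved list in batches – can be sketched in C#. The delegate names are hypothetical; in the real fix, the cached list would be a SQL temp table:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BatchDeleter
{
    // Sketch of the pattern: run the expensive "what to delete" selection
    // once up front, then process the cached ids in batches, instead of
    // re-running the slow selection for every single batch.
    public static int DeleteInBatches(
        Func<IReadOnlyList<int>> selectAllIdsOnce, // the 10-second query, run once
        Action<IReadOnlyList<int>> deleteBatch,    // cheap delete by key
        int batchSize = 1000)
    {
        var ids = selectAllIdsOnce(); // one hit, like the temp table
        for (int i = 0; i < ids.Count; i += batchSize)
            deleteBatch(ids.Skip(i).Take(batchSize).ToList());
        return ids.Count;
    }
}
```

The batching still limits lock duration per delete; the saving comes purely from not repeating the selection cost 1,000 times.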
Performance Tales: LINQ to SQL
Although we mainly use SQL Stored Procedures to retrieve data from the database, some of our code uses LINQ to SQL, which dynamically generates SQL from code. Sometimes handcrafting your own query (a Stored Procedure) gives better and more consistent results, since the SQL is the same every time. And sometimes the performance issue isn’t with the query itself, but with a column lacking an appropriate index to allow fast lookups.
There was a problem where our default 30-second timeout was being reached for some users. If they attempted to load a large record, it would take 30 seconds and then error out. One developer suggested increasing the timeout from 30 seconds to 5 minutes.
“i’d like the customers to still be able to see the user’s record.”
Developer
The record would presumably load eventually, if the user actually waited 5 minutes without closing the application thinking it had crashed (in this example, the user interface would often just show “Not Responding“).
This idea doesn’t fix the root problem though, and the software still seems unusable from the user’s point of view.
An Architect stated:
“there is something very wrong with the query if it’s taking longer than 30 seconds, particularly when the use case of using LINQ2SQL is commonly for small batches of data.”
Software Architect
The default is usually a good indicator that something is wrong; if we increase the timeout across the board, we will miss such problems.
The reason the timeout is being breached needs to be investigated:
Is it general: applying to any query seemingly at random?
Is it during the connection to SQL Server or during the execution of a query?
Could a particular problem query be optimised, be made asynchronous, or the timeout be altered for that individual query?
The Database Expert stated:
Nothing by default should take over 30 seconds – it’s a good default. If we have particular problems with SQL taking over 30s it should be investigated and addressed. It is possible that some features should be allowed to take over 30s (e.g. not user facing, known to take a long time). Allowing >30s means more chance for blocking and wider impacts.
Having queries run longer than 30s increases the amount of time they are running in SQL – this could lead to more blocking, more CPU/memory demand which could make the entire server unreliable, so we go from 1 user complaining to many.
LINQtoSQL can be optimised, we’ve done it many times over the years. The simplest is to replace the queries with Stored procedures – but it is possible to improve bad LINQtoSQL too. It depends what is wrong. None of daily operations should be anywhere near that.
SQL can do near 0ms operations on many more rows of data than we have. It isn’t a problem because we have more data, it will be something with that query. Poor plan choice or Blocking.
Database Expert
Performance Tales: Limit User Input
A developer looked into an issue related to our Searches module, which allows the user to query their data and generate tables and charts to create their own reports. The developer claimed their new changes gave performance “up to 10 times better“.
The Database Expert looked at his changes and stated:
“I think that this might not be the right way to approach this issue in isolation. The issue with this procedure is not that it is slow in general, it is that our software allows you to call it in a huge variety of ways, some of which can only produce terrible execution plans as it stands. We should be analysing what kinds of searches that users do, and see if we can restrict some of the sillier searches that they can do. For example, users can right now do an audit search filtered on “anything before today’s date” with no further filters.
For example, see this snippet of a live trace done right now: <shows screenshot>
Clearly, some calls are very well optimised and others are terrible.
We should be looking at restricting what users can do, for example requiring people to search on a more focussed timeframe.
The way I would approach this issue would be to look at trace data and look for two main things:
-The styles of queries that are run most often
-The styles of queries that run the worst
Both need focus. The ones that run most often, even if they are relatively quick, can yield a big improvement simply from scale alone. The ones that run the worst improve the overall user experience as well as the impact on the system.
Of course improving a query might not involve any SQL changes at all, instead they might involve app changes to prevent silly queries from being run.”
Database Expert
Keep lowering the number with each MI
There was a Major Incident (MI) regarding a feature. The stored procedure used already had a limit on the amount of data selected in one go. A developer changed the number from 50 to 10. Since these are such small numbers, I couldn’t understand why the difference mattered. A code comment next to the limit said it was there to “improve performance”. I looked at the file history to see if I could find out why the line was added in the first place, and saw a very interesting trend: that line was the only change in each of the previous two revisions:
original: select top 250
24 Sept 2015: select top 100
2 Oct 2015: select top 50
25 May 2022: select top 10
The developer did explain that the data is passed to another process that takes 90 seconds to process 10 instances. It’s essentially a queue that is polled every 10 minutes and would only have a small number of tasks each time.
The concerning thing is that the number keeps being lowered and the performance is still not deemed good enough. Maybe the overall implementation needs revising.
“I hope you want to test infinite scroll by scrolling down to the last element. You can use mockIsIntersecting(dom.getByTestId(“datatestId”), true) from library react-intersection-observer/test-utils to test this.”
This made me think deeply:
Is it using infinite scroll if you can scroll to the last element? 🤔
Me
Maybe in most situations, an “infinite scroll” feature isn’t truly infinite unless it dynamically generates content. But in some cases it’s near-infinite, like Twitter, where your feed could seemingly go on forever because there is far more content than a human can read.
However, in that case you could never run a test on real data that scrolls to the bottom. With mock data, you could test that scrolling through the limited data does reach the bottom.
I told a colleague about this, which got him thinking about other claims of infinity. He said it’s like the grains-of-sand idea: there’s a massive amount, but it’s not really infinite. Or the size of the universe: constantly expanding, but finite at any given time.
Mike: A line on a graph that extends out to infinity, it doesn’t though; it extends out as far as you graph.
Me: Buzz Lightyear can go beyond infinity (“To infinity, and beyond!”).
Recently, our CEO has become obsessed with the idea of a “Growth Mindset”, which she seems to crowbar into all her company updates, and we received a talk from an external speaker. The actual talk seemed like a lot of waffle to me, but the general idea sounds like the type of life lessons Simon Sinek gives (he calls his philosophy “The Infinite Game“). Although I normally find ideas and mentalities like this pretentious, I respect Simon and think there’s probably something in this way of thinking, so I shouldn’t be put off by the external speaker’s presentation. Many colleagues said they were already aware of the idea of a Growth Mindset and cited the book “Mindset” by Dr Carol Dweck.
Join us on the latest and most exciting stage of our journey with Growth Mindset. We have been working with the NeuroLeadership Institute on embedding a growth mindset culture in order to increase employee engagement, equip our people with tools and techniques for personal and professional development and to help everyone in the organisation navigate change. We have been running Growth Mindset sessions with our leadership team since September last year and we will be making a set of incredible tools and resources available for everyone to benefit from.
Company announcement
A growth mindset is thinking that every day is a learning experience and that you don’t know everything. When you change your perspective, it changes outcomes. You have to allow for failure, which is a learning opportunity, and not strive for perfection. In software development, you may never release anything if you always strive for perfection.
“Growth Mindset can be applied to all aspects of your life. It’s easier to think about what a growth mindset is by thinking about what it isn’t. So if you think about a fixed mindset, a fixed mindset says this is the way it’s always done and this is the way it will always be. I can only do this, I can’t do that. I don’t want to share. I don’t want to learn. I’m in my swim-lane. I’m not getting out of it, and I’m not interacting with anyone else. A Growth Mindset is the opposite of all of that. It’s like, we’ve done it this way, but there are other ways to do it. I can learn to do anything. I believe that I can learn to do anything that I put my mind to. It’s genuinely sharing in other people’s success, making it not just about you. The fact that other people have achieved something based on what you’ve kind of started or an idea that you might have had. Let yourself be happy about other people’s success.“
NeuroLeadership Institute
The application of a growth mindset is not confined to any single aspect of life. It can be equally impactful at work, in personal relationships, or in pursuing personal hobbies and interests. The concept of a growth mindset is a transformative and powerful approach to personal development. It’s the belief that one’s abilities and intelligence can be developed through dedication, hard work, and perseverance. While we may have inherent talents, our abilities are not fixed. Unlike a fixed mindset, which limits potential and discourages risk-taking, a growth mindset thrives on challenge and sees failure not as evidence of unintelligence but as a springboard for growth and for stretching existing abilities.
I believe the growth mindset is an antidote to cynicism. I hate cynicism – it doesn’t lead anywhere and worse than that it spreads. Nobody in life gets exactly what they thought they were going to get. But if you work hard, smart and you’re kind, amazing things can happen!
Joshua (colleague)
A new scheme, which I expect to be fairly short-lived, is the idea of mentorship. Several people across the business put themselves forward as mentors, and anyone could ask one of them to be their mentor. To put yourself forward, you had to create a profile of your skills showing what you were offering mentoring support on. One colleague came up with the following:
Key skills, experience and behaviours:
Growth Mindset
People Management
Interpersonal skills development
Commercial awareness and negotiation
Presentation creation and delivery
What sort of person puts “Growth Mindset” at the top of their list of skills? Surely number 1 should be your strongest skill. It seems like he is just sucking up to the CEO.