The Outage Part 2: Feedback on the new process

In my blog, The Outage, I described a Major Incident and a knee-jerk response from the CTO.

He described this situation as a

“major incident that impacted the whole estate, attributed directly to a failed Change. We recognise that the change was not intended to have the adverse impact that it did, but sadly the consequences have been a major blow to Users and us. Therefore, we are seeking to create immediate stability across our estate, and are implementing several amendments to the way Technology Changes are approved and implemented”

CTO

He came up with five changes, presumably without consulting anyone. I gave my view on them in the blog. After a few months of carnage, the CTO has put out some revisions to the process.

CTO = Chief Technology Officer

SLT = Senior Leadership Team.

ELT = Executive Leadership Team

BAU = Business as usual

Suggestion from CTO:
“There will be a comprehensive change freeze for the month of June, with only changes meeting enhanced criteria being passed for implementation.”

My view at the time:
The size of the release wasn’t the problem, so cutting it down won’t solve anything. It might annoy the users even more if we then delay features that we announced.

CTO’s update:
“as a knock-on effect, we have also reduced our delivery capacity and timescales.”

Suggestion from CTO:
“Pre-approved changes are suspended”

My view at the time:
The idea of a “pre-approved” change is that it is something often run on the live servers to fix common issues and is low risk, hence it is pre-approved (e.g. the ability to restart a crashed server/service). This is just going to annoy staff members in Deployment. The CTO also remarks: “Preapproved changes are wonderful. They have been reviewed and tested to death. My goal is to increase the number of preapproved changes in the future. It’s just with the existing ones, we don’t know if they have been reviewed or not”. You don’t know if they have been “reviewed”, but they have been run hundreds of times and never caused an issue. So you are temporarily banning them on the grounds that they could cause an issue?

CTO’s update:
“The door for pre-approved Standard Change has been re-opened. Standard Change templates can be proposed and put forward as before. As part of our continued governance and enhanced view of change taking place, we do ask for the following: Each Standard Change template requires approval from one SLT or ELT member. A full review of both the implementation and rollback steps needs to have been undertaken.”

Suggestion from CTO:
“Any changes submitted for approval will require TWO members of SLT.”

My view at the time:
How many times has there been some kind of approval process and the people with authorisation are too busy or on annual leave? Why are we going from 0 approvers to 2? Would the managers understand a change to enable a feature for users belonging to company A, B and C? Would they go “hang on, C don’t have the main feature! I’m rejecting this”? It’s going to be a box-ticking exercise. We already have a problem when changes are Code Reviewed by Developers – there aren’t enough “expert” people who can review them in the required level of detail. So how would a manager understand the change and its technical impact? It will be more like “does this make us money? Yes, we like money”; approved.

CTO’s update:
“A significant challenge impacting time to deliver has been the ‘two eyes on’ stipulation. We recognise that not every type of Change requires two sets of eyes and so are refining this slightly. Standard Changes will need to follow the above process. Where ‘two eyes on’ is not deemed necessary, two SLT approvers will need including in the template submission verifying that this is not required. Normal Changes will follow the BAU process. Where ‘two eyes on’ is not deemed necessary, two SLT approvers will need including in the submission verifying that this is not required.”

Suggestion from CTO:
“Implementation activity must be witnessed by two or more staff members. Screen sharing technology should be used to witness the change. No additional activities are carried out that are not explicitly in the documentation.”

My view at the time:
This might actually help, although it might be patronising for Deployment. The CTO made a comment on the call about having “Competent” people involved in the deployment process. So if a Developer has to watch a member of Deployment click a few buttons, it feels like babysitting and not respecting them as employees.

CTO’s update:
No specific comment was made.

Suggestion from CTO:
“All changes must have a comprehensive rollback plan, with proof of testing. The rollback plan must be executable within 50% of the approved change window.”

My view at the time:
The rollback plan is one of those ideas that sounds logical and great in theory, but it is the biggest concern for the technical people in Development.

CTO’s update:
No specific comment was made.

So in conclusion, it seems I was correct.

Strava

Tweets:

I was looking through some old Twitter bookmarks and found this interesting thread on the running app Strava.

Note: Strava have apparently drastically improved their privacy options and default settings since this discussion. There are now options to hide your home and workplace using a buffer zone, so your activity near those locations isn’t shown.

“Out running this morning on a new route and a lady runs past me. Despite only passing, when I get home Strava automatically tags her in my run. If I click on her face it shows her full name, picture and a map of her running route (which effectively shows where she lives). This is despite the fact that I don’t follow her and she doesn’t share her activity publicly. So basically if someone sees a woman running alone there’s an app they can go to see her name, picture and address”

Andrew Seward

Other people pointed out that all visibility settings defaulted to “Everyone”, and that the feature, called “Flyby”, didn’t make it clear that strangers would be able to see your running route and similar details.

Discussion:

Even when a feature is designed with no bad intentions, an idea that sounds great on paper can, with more thought, turn out to have negative implications. In this case, the feature sounds like a great social aspect: maybe runners can learn better running routes and compete for the best times. However, it can be used for nefarious purposes:

  • A stalker can learn where you will be and at what time, and can even determine where the most secluded area will be. 
  • A thief will know when your house could be vacant and for how long.

This doesn’t just apply to running apps; caution should be used with any app. The classic example is not posting on social media about how excited you are for your holiday, and instead posting about it when you come back. Exposing when you will leave your house is useful to burglars.

Of course, features can have more nefarious purposes by design. People often accuse Google of collecting data to feed its advertising business, which essentially makes money off your data. These features are often framed as being for your own benefit, with claims of a “personalised experience”.

Features are often enabled by default, which takes advantage of people’s reluctance to read the options and turn them off. However, even if you do check the settings, you might not understand what a feature actually is, just as people didn’t fully understand Strava’s “Flyby” feature.

Notes On: The Art of Captivating Conversation – Patrick King

Introduction

I’ve finished reading The Art of Captivating Conversation by Patrick King and made notes on the most interesting points and ideas. I’ve always found small talk awkward, and the author gives tips on how to make conversation flow, sound more interesting, and come across as more interested in the other person.

Conversations 

Conversations are the threads that weave the fabric of social interaction, and they serve two primary purposes: entertainment and utility. The art of conversation lies in the delicate balance between these two elements, ensuring that our interactions are both enjoyable and productive.

At the heart of our interactions are the six primary emotions: happiness, sadness, fear, anger, surprise, and disgust. These emotions are universal and often drive the direction and tone of our conversations. Recognizing and responding to these emotions in others can lead to more meaningful and empathetic communication.

Small talk

Small talk plays a crucial role in initiating conversations and building rapport. Common small talk questions include inquiries about one’s day, weekend, work, family, and plans. Small talk, often seen as a necessary evil, is widely disliked for its superficial nature. It’s a societal construct designed to convey politeness, yet it often feels insincere. The key to transcending small talk lies in personalising the conversation with genuine interest and shared stories.

To engage effectively in small talk, one should aim to provide entertainment, make the other person feel good, and offer substantial content that allows the conversation to flow with minimal effort. This can be achieved through two methods:

1. Answering a fuzzy version of the question: This involves focusing on a keyword from the question and expanding on it with a more interesting or entertaining anecdote. For example, if asked about the weekend, one might share a memorable weekend experience from the past rather than a mundane recount of the past days.

2. Completely redirecting the conversation: By briefly acknowledging the question and then pivoting to a more engaging topic, one can steer the conversation away from generic small talk. Using transitional phrases like “it was good, but did you hear about…” can quickly shift the focus to something of mutual interest.

What Would Jay Leno Do?

“You can make more friends in two months by becoming truly interested in other people than you can in two years by trying to get other people interested in you.” 

Dale Carnegie

When people sense you care, they respond in kind and open up. The best way to articulate this is to picture your favourite talk show host. The guest is the centre of his world for the next ten minutes. His genuine curiosity, enthusiastic reactions, and positive demeanour not only make his guests feel valued but also entertain and engage his audience. This approach is highlighted as a model for personal interactions, where showing real interest in others can lead to more meaningful and reciprocal relationships.

Everyone has unique knowledge and experiences. By being curious about others, we acknowledge that every person we meet can teach us something new, thereby enriching our own lives. This mindset encourages a sense of humility and openness to learning from others.

Be aware of social narcissism, where conversations are dominated by one’s own interests, disregarding the value of others’ experiences. This behaviour is characterised by listening only to respond rather than to understand, and it hinders the development of genuine connections.

Break The Ice

Social interactions, especially in settings such as networking events or parties, can often feel like navigating a minefield. The challenge of breaking into a conversation group can seem daunting, as if invisible barriers are erected around them. Common internal objections include the fear of interrupting, appearing awkward, or being perceived as strange.

However, the key to overcoming these social hurdles lies in establishing a “Social Goal.” This goal acts as a beacon, overriding any social defence mechanisms. It could be as specific as learning about an individual, collecting a set of business cards, or memorising names at a gathering.

To facilitate this process, icebreakers can be invaluable. They can be categorised into three types:

1. Subjective Queries: These involve asking for personal opinions on topics of mutual interest, such as the music at a party. It’s a way to show curiosity and invite others to share their passions.   

2. Objective Inquiries: These are questions about factual information, like the time, directions to the nearest café, or the location of the host. Such questions are non-threatening and serve as a natural entry point into a conversation.   

3. Comments on Shared Reality: Observations about the immediate environment or universally acknowledged truths can also serve as icebreakers. By expressing an opinion on something already within the other person’s awareness, it opens up the floor for a shared discussion.

Interestingly, it’s perfectly acceptable to ask questions to which you already know the answers. The primary aim is not to seek information but to initiate interaction and establish a connection.

In essence, breaking the ice is less about the content of the conversation and more about the willingness to engage. Remember, the objective is to engage, not to impress. With practice, the art of conversation becomes less of a challenge and more of a rewarding journey.

Never Laugh First

Initiating laughter in a conversation might inadvertently pressure others to conform to your emotional state, potentially creating discomfort. Moreover, it hinders your ability to assess the genuine humour of your remarks.

Belief Police

We feel that since we know so much better than the other person, we have some sort of responsibility to correct them. We then take it upon ourselves to prove to them just how smart we are. We can’t stand someone believing something contrary to what we believe. This habit makes us obnoxious to talk to.

Questions

When you ask a general question, you get a general answer. Questions like “what do you do for fun?” are hard to answer because no one thinks about their life in such broad terms. You want to enable people to be lazy; open-ended questions make us think quite a bit and inject lulls into the conversation. “What is your favourite movie of all time?” is hard because it demands a single answer that also represents you in a positive light, and it can be hard to think of one movie. A better question is “what’s a good movie you have seen recently?”. You can easily recall a movie you have seen recently, and it doesn’t have to be the best. So the advice is to put boundaries and qualifiers on your questions to make them easier to answer. You can even provide prompts: “what do you do for fun?” becomes “what do you do for fun? Playing sports, going outdoors, music?”.

Take The Hint

Recognizing cues of disinterest, such as a lack of engagement, prolonged silences, or shifting to generic topics, is crucial in respecting the other person’s boundaries and maintaining a comfortable conversation flow.

Eye Contact

Balancing eye contact is key; too much can be as disconcerting as too little. A general guideline is to maintain eye contact 80% of the time when listening and 50% when speaking to foster a sense of ease and attentiveness.

HPM, SBR, & EDR

HPM emphasises the use of personal experiences (History), personal opinions (Philosophy), and associative thinking (Metaphor) to engage in a conversation. 

SBR is a method of guiding a conversation by asking questions. ‘Specific‘ questions delve into the details of a topic, ‘Broad‘ questions open up new avenues for discussion, and ‘Related‘ questions tie in relevant but potentially separate ideas, allowing the conversation to flow naturally and informatively.

EDR focuses on emotional intelligence, asking for specifics, and confirming understanding. By acknowledging emotions (Emotion), probing for more information (Detail), and paraphrasing what has been said (Restatements), a person can demonstrate empathy, interest, and attentiveness, which are crucial for meaningful interactions.

Together, these strategies provide a comprehensive framework for effective communication, whether in casual conversations or more formal discussions. They encourage a deeper connection between individuals by fostering an environment where personal stories, emotions, and details are valued and explored.

Storytelling

1. Detail-Oriented Approach: Instead of crafting a full narrative, focus on providing five distinct, specific details. These serve as hooks, leading the listener from one piece of information to another, creating a chain of engaging tidbits.

2. Emotion-Driven Narrative: Concentrate on encapsulating a single emotion in one sentence. Stories should evoke emotional responses, such as happiness, empathy, surprise, or curiosity.

Breaking into banter

Use light misunderstandings, double entendres, puns, and comical confusion to break the ice and introduce humour into the conversation.

Flow

Avoid stagnation by shifting the conversation to related topics, delving deeper into subjects, sharing personal experiences, inquiring about favourites, discussing emotions, expressing nuanced opinions, posing hypothetical questions, or referencing friends and articles.

Conversation Threading

This technique enhances your ability to respond quickly and thoughtfully in conversations. As a listener, use the storytelling method to pick up on topics and steer the conversation in a direction that interests you. For instance, if skiing is mentioned but holds no interest for you, pivot the discussion to talk about mountains or related experiences.

By employing these methods, you can transform simple exchanges into memorable conversations that resonate with those involved. Whether you’re a storyteller or a keen listener, the key is to keep the conversation moving, engaging, and full of life. Remember, the goal is not just to talk but to connect.

This is very concerning to hear

On a code review, a Senior Developer, Lee, questioned why there were no database changes when the Developer, Neil, had removed all the related C# server code. Neil replied that he “wasn’t sure how the patching process worked” (despite being here for years, in a team with experienced developers), and wasn’t sure if there were any backwards-compatibility issues to consider.

So what was his plan? Just hope it gets past the code review stage unchallenged? Then we would have obsolete stored procedures and unused data lingering in the database for years.

I initially thought his claim about backwards-compatibility issues was nonsensical, but from an architectural standpoint it makes sense given how our system works. The server code doesn’t call the other version’s server; it goes direct to the database. That means if the old version calls the new version’s database, it would expect the stored procedures and data to exist. However, for this particular feature there were no cross-database calls at all.

I suppose being cautious and not deleting the data makes sense from a rollback point of view. It’s hard to restore the data if it is lost, but easy to restore the C# code. I have never seen us use this approach though.

The Senior Developer said:

This is very concerning to hear, can you please work with your team lead to understand how our versions are deployed, and if they are unable to answer all the questions, please reach out to someone. We do not support any version changes by default, though there are cases where we do have cross version server/database calls, but these are for specific cross organisation activities.
You can safely remove these columns, update these stored procedures.
There is no value in leaving something half in the system, if it is no longer needed, remove all references, database rows/columns/tables, class Properties, etc.

In my previous blog, I discussed Project vs Domain Teams. This is kinda linked in the sense that specialising in a certain area of the system means you gain knowledge of the functionality and architecture of that area. There would be less chance of this scenario happening where the developer is questioning if there could be backwards compatibility issues. However, he could have also found this information out by raising questions.

This example does cover many topics I have discussed on this blog:

  • Poor communication
  • Bad decisions
  • Funny quote from a senior developer: “This is very concerning to hear”

Domain Teams, Project Teams & Cross-Cutting

In the world of Software Development, there are often differing views on how to arrange teams. Regardless of the approach, people will leave and join over time, so team members need to be replaced and teams need to adapt.

There was a time when we were arranged into teams that were assigned to a Project, then moved onto a completely different one once complete. Any bugs introduced by the projects then get assigned to a “Service Improvement” team who only deal with bugs (and possibly ad-hoc user requests).

Then after a few years, maybe under a new Development Manager, they would restructure into Domain Teams, where you take ownership of a group of features and only projects related to those are assigned to your team. Any bugs introduced by the projects stay with the team, which gives you a greater incentive to fix them as early as possible. People build up knowledge of their areas and become experts.

Then a few years later, we will switch back to Project teams.

There are pros and cons to each structure, and there are always edge cases that pose a management problem. Even in a Domain Team, there will be certain features that don’t neatly fit into the groups you defined, or ones that apply to many modules, e.g. Printing.

Sometimes we have called a team that handles the miscellaneous features “Cross-Cutting”. Managers would sell it on being for features like Printing that really are used by many areas of the system, but we all know it becomes a team that gets miscellaneous and unrelated projects. They end up being like the “Service Improvement” team that deals with random bugs, and work no one else wants to do.

Cross-Cutting

There was a meeting where managers were announcing the new Domain Teams and I got assigned to Cross-Cutting. One developer was voicing his concerns about having a Cross-Cutting team. He wanted to point out that Domain Teams are supposed to have specialist knowledge on the Domains but most people that were assigned to their teams had little-to-no knowledge. For some reason he chose my name to make a point.

“What does TimeInInts know about Cross-Cutting?”

Which received a room full of laughter. I’m sure some were laughing at his point, some laughed at his emphasis and delivery, and others probably saw it as an attack on my knowledge. I was probably one of the best people for it really, given my experience in the previous Service Improvement teams.

The whole idea of keeping Domain knowledge in the team only works if there is a true commitment to keep the teams stable over years. However, people will leave the business, some will want to move to a different project to broaden their skills, or people could just fall out with their team members.

Another concern this developer had was with his own team. He was assigned to the Domain Team he was the expert for, but he was used to working with a couple of developers in the UK, and this new team had two Indian developers. We had recently acknowledged that the distributed teams weren’t really working, so these new Domain Teams were supposed to be co-located. This setup seemed to signal that he was there merely to train these developers up so the Domain could essentially be offshored. Since he was the expert, and proud of it, he still wanted to work in that area; but he can’t work on the same software forever.

In the Cross-Cutting team, we had an open slot labelled “new starter” so we were going to get a new hire in. You have to start somewhere, but again, this doesn’t help the teams specialise if they don’t already start with the knowledge.

Colleagues’ Opinions:

Developer 1:

Me 13:39: what does a new starter know about Cross-Cutting? 
Mark 13:39: sounds more like Cost Cutting! 

Developer 2:

It’s infinitely harder to build something if you don’t understand the thing you’re building. Hard to catch issues and make sense of designs if you had no opportunity to learn the domain.

Developer 3:

isn’t one of our major issues is we’ve lost domain expertise for core/bread and butter modules.  For any “module”, there’s a combination of what the requirements are/how it should work, and what the code is actually doing. Without “domain teams”/ownership – we’ve lost a large part of the puzzle (how module should work).

Developer 4:

our teams are completely ineffective, expertise has been spread too thin. We probably need to reorganise the department again with who is remaining.

Build stronger teams first that only have one junior-ish person, then have weaker teams helping out where possible. It will be very hard for the weaker teams, but unless we do this, we’ll lose the stronger people.

The weaker teams should be given appropriate projects with longer timescales, and given as much help as possible while ultimately having to struggle their own way, stronger people who put in the effort will begin to emerge from those teams.

Extension methods

Even as an experienced software developer, it is amazing when you discover some really trivial thing, or some interesting quirk of a programming language.

I was looking at a Code Review the other week and saw some code that looked really pointless. It was testing that some code throws an ArgumentNullException.

[Fact]
public void LogWarningDetails_WithNullLogger_ThrowsArgumentNullException()
{
	ILogger logger = null;
	Assert.Throws<ArgumentNullException>(() => logger.LogWarning("Test Error Message"));
}

A NullReferenceException is an incredibly common mistake and probably the first problem new developers encounter. If you have a reference to an object, but the object is null, you cannot call instance methods on it.

Therefore if logger is null, then you cannot call LogWarning without an error being thrown.

So on first glance, this test looks like it is testing the basic fundamentals of the C# Programming language. However, this is testing for ArgumentNullException rather than NullReferenceException.

LogWarning is actually defined as an extension method, and extension methods do allow you to call them on null references. I’d never realised this, or even thought about it. It works because an extension method actually passes the receiver in as a parameter.
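This also explains why the test expects an ArgumentNullException: the extension method’s body still runs with a null receiver, so it can validate its own “this” parameter. Here is a hypothetical sketch of that pattern (my own illustration, not the real Microsoft.Extensions.Logging source):

```csharp
using System;

// Stand-in interface for illustration; the real ILogger lives in
// Microsoft.Extensions.Logging.
public interface ILogger { }

public static class LoggerExtensions
{
    // Because the receiver arrives as an ordinary parameter, this body
    // runs even when 'logger' is null, so it can throw
    // ArgumentNullException instead of a NullReferenceException.
    public static void LogWarning(this ILogger logger, string message)
    {
        if (logger == null)
            throw new ArgumentNullException(nameof(logger));
        Console.WriteLine($"warn: {message}");
    }
}
```

With that guard clause in place, the seemingly pointless unit test is actually verifying real behaviour of the method, not a language fundamental.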

So if you have an extension method (as indicated with the this keyword):

	public static bool IsNull(this object x) 
	{
		return x == null; 
	}

This can be called like this:

	static void Main() 
	{
		object y = null;
		Console.WriteLine(y.IsNull()); 
		y = new object(); 
		Console.WriteLine(y.IsNull());
	} 

This would output True, then False, which illustrates that the extension method does not crash when the reference y is null, and that the logic correctly returns true in that case.
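The reason this works is that the compiler rewrites the extension-syntax call into a plain static method call, so the null reference is simply passed along as an argument. A small sketch showing that the two call styles are equivalent:

```csharp
using System;

public static class ObjectExtensions
{
    // Same extension method as above.
    public static bool IsNull(this object x) => x == null;
}

public static class Demo
{
    public static void Main()
    {
        object y = null;

        // Extension syntax and explicit static syntax compile to the same call:
        Console.WriteLine(y.IsNull());                 // prints True
        Console.WriteLine(ObjectExtensions.IsNull(y)); // prints True

        // By contrast, an instance method on a null reference throws:
        // y.ToString(); // NullReferenceException
    }
}
```

So nothing is ever “called on” the null object at runtime; y is just an argument to a static method.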

Conclusion:

Understanding NullReferenceExceptions is basically day one of learning to code in an Object-Oriented language like C#, but I’d never even considered that there is an exception to the rule. A method call on a null reference won’t cause a NullReferenceException if the method is an extension method!

Atalasoft DPI

We use a software library from Atalasoft in our product to allow users to add annotations to PDFs.

One of our Indian developers posted on Slack to ask a question about a bug he was assigned to fix. It was quite hard to understand what he wanted, but it sounded like the quality of users’ PDFs was being lowered to the point that they were blurry and unusable.

Hi Everyone, Here is my doubt was more of a generic one. In Document Attachment Module, I’m trying to attach a PDF. The attached PDF gets depreciated in the doc viewer.. After analysis came to a conclusion that, the Atalasoft Viewer we are using Document Attachment viewer should pass only with 96dpi(dots per inch).

However in the Atalasoft Documentation itself was given that inorder to increase the quality of the document inside the Document Viewer of Atalasoft we need to pass on the default or hardcoded resolution as attached.

With respect to this have attempting a bug in which need to fix this depreciation not in a hardcoded format.

Is there any way to calculate a PDF file’s DPI through its file size. (Note: Since PDF file was vector based and doesn’t posses any information related to dpi).Can anyone please guide me on this ? Apart from hardcoding and passing on a resolution value.

After struggling with it, another developer started working on it, but then went on annual leave so yet another developer took over. None of them had put much thought into what they were doing because when I asked them to explain the code, they couldn’t seem to. I then googled the code and found it on the Atalasoft website. https://www.atalasoft.com/kb2/KB/50067/HOWTO-Safely-Change-Set-Resolution-of-PdfDecoder

using (var annotateViewer = new AnnotateViewer())
{
    annotateViewer.DataImporters.Add(new Atalasoft.Annotate.Importers.PdfAnnotationDataImporter { SkipUnknownAnnotationTypes = false });                
    using (var pdfDec = new PdfDecoder())
    {
        pdfDec.RenderSettings = new RenderSettings { AnnotationSettings = AnnotationRenderSettings.RenderAll };
        Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders.Add(pdfDec);
        SetPdfDecoderResolution();
    }     
    
    annotateViewer.Open(filePath);                
    var printer = new Printer(annotateViewer);
    printer.Print(printerSettings, documentName, printContext);
} 
 
 
static readonly object pdfLock = new object();

private static void SetPdfDecoderResolution()
{
    int standardResolution = 300;
    lock (pdfLock)
    {
        foreach (Atalasoft.Imaging.Codec.ImageDecoder rawDecoder in Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders)
        {
            if (rawDecoder is PdfDecoder)
            {
                //By default PdfDecoder sets to lower resolution of 96 dpi
                //Reason for PDF depreciation
                ((PdfDecoder)rawDecoder).Resolution = standardResolution;
                return;
            }
        }
        Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders.Add(new PdfDecoder() { Resolution = standardResolution });
    }
}

The code instantly stood out as convoluted: we create a PdfDecoder called pdfDec, then instead of just setting properties on it, we add it to RegisteredDecoders and call SetPdfDecoderResolution, which loops through the decoders to find the one we just added. If it can’t find it (which surely is impossible), it adds one.

I was talking to a Lead Developer about a completely different bug fix, and he says

“People just don’t think about what they write”

Lead Developer

So I decided to bring up this Atalasoft problem…

He said 

When I saw the lock I wanted to ask, “Which Stack Overflow post did you find this on?”

Lead Developer

So I told him they got it from the Atalasoft website!

So they had blindly pasted this Atalasoft code in without thinking. They could just set the Resolution property in the existing code, since we already create the object and therefore hold a reference to it. If the code really can end up with multiple decoders registered (which you aren’t supposed to do), then we could write a method similar to SetPdfDecoderResolution that checks whether a decoder exists and adds one if not, ensuring all the correct properties are set.

They need to think

Lead Developer

I think the problem the Lead Developer had with the lock is that you use lock when you need to guarantee that only one thread accesses a resource or section of code at a time; but this code wasn’t used in a multi-threaded context. So by blindly pasting in code without thinking, they were adding redundant lines and creating confusion.
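For contrast, here is a minimal sketch (my own, not from our codebase) of the situation lock actually exists for: multiple threads mutating shared state at the same time:

```csharp
using System;
using System.Threading.Tasks;

public static class LockDemo
{
    private static readonly object counterLock = new object();

    public static int Run()
    {
        int counter = 0;
        // Many threads incrementing shared state concurrently. Without
        // the lock, increments could interleave and some would be lost.
        Parallel.For(0, 1000, _ =>
        {
            lock (counterLock)
            {
                counter++;
            }
        });
        return counter; // reliably 1000 with the lock in place
    }
}
```

In single-threaded code like our printing path, the lock never contends with anything; it just adds noise.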

The actual fix was just

private const int highDotsPerInch = 300;
pdfDec.Resolution = highDotsPerInch;

But to reach this outcome, it took 3 developers to look at it, then 2 to review it. 

Missed Deadline For the Proof Of Concept

As a software developer, you are usually given projects without knowing the contractual details involved. However, for one project that I was originally assigned, I was forwarded some fairly formal documents about it, which included some pricing.

The project was actually straightforward because we already had the functionality for users in England and they wanted users in Wales to use similar functionality. It was the same for the most part, but there was some minor customisation required. So it mainly involved deleting or tweaking a few files to remove the validation based on the country. Then there would be some testing involved to make sure the feature really did work when configured for Wales.

Some Senior Developers and Architects had estimated the project at 6 months, which seemed a bit extreme, and reckoned the cost of development at £442,404; with some miscellaneous costs for “platform, network and migration”, the total came to £445,620!

On the face of it, that sounds expensive. But when I think of the labour cost involved, where I work, a Junior might earn £25k a year, then Seniors are more like £32k-£45k. So if you have a few developers and testers on a project, with some managers involved, and it really does take 6 months, then the costs soon add up. Then you want to make a decent profit on it too.

I guess the cheeky thing is, the customer might not know what you already have; so you could charge as if it was new but you are just recycling/reusing existing code. 

The end result is the same for the new customer, isn’t it?

What I didn’t understand in the document is that there was a line that said:

“The requirements described within this CCN must be delivered by January 2024 in order to support a proof of concept with a limited number of users in a live environment. Once the proof of concept is complete, an implementation plan will be defined by the programme team to determine the pace of the national rollout, to be complete by January 2026.”

My question is, does it make sense to create a proof of concept (POC) that works well enough, but then have 2 years to actually complete the work? 

Well people don’t have any experience of what they are suggesting so are just making it up. I agree though, if you have a proof of concept you’re kind of almost there. Depends on how hacky the POC is I suppose

Robert (Senior Developer)

Even more confusing is that we didn’t deliver the POC by January, but we did deliver the completed feature by the end of March.

Performance Tales:  Out of Memory Fixes

We have an area of our system that is a major pain for memory usage. We allow users to create what we can generically refer to as “Resources” and new users will then download the entire set of them, which are then cached locally.

The initial download is very large, and the resources are then loaded into memory the next time the application starts. Most of the time this loading is done on demand, but it can be slow and consume a lot of memory.

Another problem is that due to various bugs, sometimes these resources can be missing/corrupted and have to be downloaded again.

Over time, this area of code has been cobbled together by developers who don’t really understand the system, which has perpetuated the inefficiency and bugs: an endless cycle of making the system worse.

There was a big push to improve this area of the system, but no one has learned their lesson, so many juniors got assigned to fix the problem.

When it comes to code reviews, code can be surprising, and the surprise comes either from the developer being significantly smarter than me, or significantly dumber. So sometimes I find myself looking at a change and wondering whether it really is bonkers, or some genius understanding of the code that I need to learn. In those cases it’s best to ask your colleagues to check your understanding.

I don’t remember seeing a cast in a property before:

public IEnumerable<MissingResource> MissingResources
{
    get { return _missingResources; }
    private set { _missingResources = (List<MissingResource>)value; }
}

So it’s either incredibly smart, or incredibly dumb.

“That cast is mental!

You could set it to anything that implements IEnumerable<MissingResource>, but it had better be a List<>.”

Dave (translation of what the code is saying)
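To make Dave’s point concrete, here is a minimal sketch of the trap (MissingResource and ResourceReport here are stand-ins, not our real types, and I’ve made the setter public so it can be exercised): the property’s type promises to accept any IEnumerable<MissingResource>, but the cast in the setter blows up at runtime for anything that isn’t actually a List<>.

```csharp
using System.Collections.Generic;

public class MissingResource { }

public class ResourceReport
{
    private List<MissingResource> _missingResources = new List<MissingResource>();

    public IEnumerable<MissingResource> MissingResources
    {
        get { return _missingResources; }
        // Compiles fine, but throws InvalidCastException for any
        // IEnumerable<MissingResource> that is not a List<> (an array,
        // a LINQ iterator, etc.).
        set { _missingResources = (List<MissingResource>)value; }
    }
}
```

Assigning a List<> works; assigning an array or the result of a Where/Select throws InvalidCastException. A safer setter would be `_missingResources = new List<MissingResource>(value);`, which accepts any sequence.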

Is the following just a lack of trust that .NET will clean up the old objects? To me, this code makes it seem like there is a bug they are working around, or they are just going wild nulling everything out to save memory.

public void ClearData()
{
	NewResources = null;
	ExistingResources = null;
	MissingResources = null;
	SkippedResources = null;
	NewResources = new List<IResource>();
	ExistingResources = new List<IResource>();
	MissingResources = new List<MissingResource>();
	SkippedResources = new List<IResource>();
	IndexedResources = new List<Guid>();
}

trust understanding

Dave

Does that finally block do anything? It’s a local variable, so it should be eligible for the garbage collector at that point anyway.

finally
{
	bulkResources = BulkResource.Empty();
}

Yes, it does something.

That something is worse than doing nothing!!!!

The finally allocates another instance and drops the reference to the current one, meaning there are now two things for the GC to collect.

Dave

I do wonder whether sometimes they don’t really understand what you are asking, but just change stuff anyway. After I pointed out that their use of null didn’t achieve anything, we now create some empty lists and then clear them if they are not null (they aren’t null, and are definitely empty, because we just created them).

public virtual bool PrepareItemsForImport(ImportProcessParameters parameters)
{
	DialogService.SetProgressFormText("Preparing to import...");
	_newResources = new List<IResource>();
	_existingResources = new List<IResource>();
	_missingResources = new List<MissingResource>();
	_skippedResources = new List<IResource>();
	_indexedResources = new List<Guid>();
	ClearData();
	_importStartDateTime = DateTime.Now;
	_mappingInformation = RemappingService.MappingIdentifiersForOrganisation;
	return true;
}

public void ClearData()
{
	NewResources?.Clear();
	ExistingResources?.Clear();
	MissingResources?.Clear();
	SkippedResources?.Clear();
	IndexedResources?.Clear();
}
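For comparison, here is a sketch of what I would expect instead (the types are stand-ins for ours): allocate the lists once, never null them out, and let Clear() do the clearing.

```csharp
using System;
using System.Collections.Generic;

public interface IResource { }
public class MissingResource { }

public class ImportState
{
    // Allocated once; never null, so no null-conditional operators needed.
    public List<IResource> NewResources { get; } = new List<IResource>();
    public List<MissingResource> MissingResources { get; } = new List<MissingResource>();
    public List<Guid> IndexedResources { get; } = new List<Guid>();

    public void ClearData()
    {
        NewResources.Clear();
        MissingResources.Clear();
        IndexedResources.Clear();
    }
}
```

Clear() drops the references each list holds, which is all the null-then-reallocate dance achieved, minus the extra allocations. (One caveat: List<T>.Clear keeps the backing array’s capacity, so if a list has grown huge and you genuinely want the memory back, follow it with TrimExcess().)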

Does “ClearDataInViewModel” do anything? You call it right before the view model goes out of scope and is eligible for garbage collection anyway?

Me
using (var dialogService = new DialogService())
{
    var viewModel = new ImportDetailsDialogViewModel(dialogService);
    viewModel.InitializeFromImportProvider(importProvider);
    var dialog = new ImportDetailsDialog();
    dialog.DataContext = viewModel;
    Application.ShowModal(dialog);
    viewModel.ClearDataInViewModel();
}

Remember what the point of this work was: to reduce memory leaks, and to improve performance in other ways (fixing bugs in the caching, reducing server calls, removing redundant code). What they have done so far is add more redundant code and show a complete lack of understanding of how and when the garbage collector in C# works and runs. The garbage collector is the mechanism by which memory (RAM) is freed up.

public IEnumerable<TemplateHeader> GetMobileTemplateHeaders()
{
	List<TemplateHeader> headers = Retrieval.GetMobileTemplateHeaders().ToList();

	return headers;
}

The above code was changed to this:

public IEnumerable<TemplateHeader> GetMobileTemplateHeaders()
{
	IEnumerable<UserTemplateDefinition> mobileUserTemplateDefinitions =
		Retrieval.GetMobileTemplateHeaders();

	IEnumerable<TemplateHeader> mobileTemplateHeaders =
		mobileUserTemplateDefinitions
		.Select(
			template =>
			new TemplateHeader(
				id: template.Identifier,
				title: template.Name));

	return mobileTemplateHeaders;
}
Me
Retrieval.GetMobileTemplateHeaders doesn't seem to return TemplateHeaders anymore

Jaz
Fixed this

Me
You are still taking the output from a method called GetMobileTemplateHeaders and converting them to TemplateHeaders. Seems like the method should be renamed, or the return type changed

Jaz
It is returning template headers enabled for mobile. So it was named as GetMobileTemplateHeaders.

Me
This was the code before. It's of type TemplateHeaders
List<TemplateHeader> headers = Retrieval.GetMobileTemplateHeaders().ToList();

This is the code now
IEnumerable<UserTemplateDefinition> mobileUserTemplateDefinitions = Retrieval.GetMobileTemplateHeaders();
It isn't of type TemplateHeaders
but you want TemplateHeaders. So you then take the output of Retrieval.GetMobileTemplateHeaders and convert it to TemplateHeaders, storing it in a variable called mobileTemplateHeaders.

The code looks strange to have a call to GetMobileTemplateHeaders then the line straight after it creates a variable called mobileTemplateHeaders.

Surely we expect the code to be more like IEnumerable<TemplateHeader> mobileTemplateHeaders = Retrieval.GetMobileTemplateHeaders();?

Jaz
Change done.

Another developer pointed out they had introduced another inefficiency by grabbing ALL resources and not just the ones they were interested in. So they aimed to cut down memory usage but actually managed to increase it!


Gary
Are you sure you want to do a get bulk resources to only just get the templates out?

You are getting all types of resources ~20k+ items etc to only throw the majority of that data away?

Jaz
Checked with the team and changed the approach to get templates only

Conclusion

It is very easy to understand why this particular area of the system is such a massive problem. If you tell these developers to look into improving performance, they end up changing random bits of code and hoping it somehow works. And even when a change is half-decent, they don’t put much thought into the naming, so it ends up hard and confusing to read.

What we need to do is assign some smarter developers to the project: ones who understand how memory leaks occur, who will look at the number of resources being loaded at certain points, and who will analyse the SQL queries used for the initial retrieval.

Balance in Teamfight Tactics

I’ve read about, and watched videos on, computer game balance, and find it such an interesting topic: how you can measure and perceive the strength of each character or unit, and how you can attempt to rebalance the game.

Second Wind have made a video on Teamfight Tactics.

I’ve never played this game, or even similar games, but it has the same general problems to solve in its design that many games do.

So, taking the transcript and running it through AI, I’ve made a good blog post on it.

Teamfight Tactics

Teamfight Tactics (TFT) by Riot Games is a strategic auto-battler, inspired by the League of Legends universe and drawing elements from Dota Auto Chess. In this competitive online game, players are pitted against seven adversaries, each vying to construct a dominant team that outlasts the rest.

In a game like League of Legends, a single overpowered champion can only be selected by one player, and would be banned in competitions once discovered. In TFT, all champions and items are available to everyone at once, creating many opportunities for players to find exploits.

Balancing Teamfight Tactics (TFT) is a compelling challenge. Compared to card games like Hearthstone, where adjustments are made through a limited set of variables, TFT presents a stark contrast with its myriad of factors: health, armour, and animation speed, to name a few.

Initially, it might seem that having numerous variables at one’s disposal would simplify the balancing process. In practice, the opposite is true: even minor adjustments can significantly influence the game’s equilibrium. For instance, a mere 0.25-second reduction in a character’s animation speed can transform an underperforming champion into an overwhelmingly dominant force.

The sensitivity of each variable is due to the intricate interconnections within the game. A single element that is either too weak or too strong, regardless of potential counters, can trigger a cascade of effects that alter the entire gameplay experience.

Consider the analogy of a card game where an overpowered card exists. In such a scenario, there are usually counters or alternative strategies to mitigate its impact. However, if a card is deemed too weak, it’s simply excluded from a player’s deck without much consequence. Contrast this with a game like Teamfight Tactics, where the strength of a champion is intrinsically linked to its traits and the overall synergy within a team composition. If a champion is underpowered, it doesn’t just affect the viability of that single unit; it extends to the entire trait group, potentially diminishing the strength of related champions. This interconnectedness makes balancing challenging, but manageable through data analysis. Player perceptions of balance are shaped by this data.

Vladimir The Placebo, and Vain the Unappreciated

The character Vladimir in League of Legends had become notably powerful, overshadowing others in the game’s “meta”. To address this, developers proposed minor tweaks to balance his abilities. However, when the update was released, Vladimir’s dedicated players were outraged, believing their favourite character had been weakened to the point of being nonviable. But, in an unexpected turn of events, the nerf was never actually implemented due to an oversight. The players’ reactions were solely based on the anticipated changes they read about, not on any real modification to Vladimir’s capabilities. This psychological effect influenced Vladimir users to play more cautiously, while their opponents became more bold, illustrating how perception can shape reality.

Data only reflects the current state, not the potential. Particularly in a strategy game like Teamfight Tactics, which is complex and “unsolved”, players’ understanding and use of characters can be heavily swayed by their perceptions. Perception often becomes the player’s reality.

In the fifth instalment of the game, there emerged a low-cost champion named Vain. Initially, after the game’s release, the consensus was that Vain was underperforming—deemed the least desirable among her tier. The development team had reservations; they believed she wasn’t as ineffective as portrayed. Consequently, a minor enhancement was scheduled for Vain. However, before the update could go live, feedback from players in China indicated they had discovered a potent strategy for Vain. This revelation transformed her status drastically within three days, elevating her from the least favoured to potentially one of the most overpowering champions ever introduced.

This scenario underscores the limitations of relying solely on data, whether from players or developers, as it may not reveal the full picture. Balancing in gaming is often perceived in black and white terms by the player base—they view a character as either strong or weak, which leads to calls for nerfs or buffs. However, they frequently overlook the subtle intricacies and minute adjustments that can have significant impacts on gameplay.

Different Players

In competitive games like League of Legends, different balance parameters are set for various levels of play. A character might dominate in lower ranks but may not be as effective in higher tiers of play. 

When it comes to balancing games like Teamfight Tactics, developers have taken the approach of balancing the game as if computers were playing it. The game is designed to test strategic thinking rather than reflexes and mechanical skill.

In a fight between Army A and Army B, the outcome is predetermined by the compositions. However, this does not mean we should nerf an army simply because it performs well at a lower skill level. Instead, it presents a learning opportunity for players to improve their skills.

Interestingly, perceived imbalances can serve as educational tools. As players engage with the game, they gain knowledge through experimentation. For example, if a player tries a certain composition with specific items and it fails, they can reflect on whether it was a misstep or an unforeseen event. Learning that a champion doesn’t synergize well with a particular item is valuable knowledge to carry into future games.

There are build combinations that could potentially disrupt the game’s balance if the perfect mix is achieved. This aspect works well in single-player modes like Roguelikes, where the aim is to become overwhelmingly powerful. However, the challenge arises in maintaining this sense of excitement while ensuring these powerful builds don’t lead to exploitation in a multiplayer setting. 

Risks & Rewards

Balancing isn’t merely about pitting one army against another to see the outcome. It’s also about the risks involved in reaching that point. For instance, if there’s a build that appears once in every 10,000 games, requiring a perfect alignment of circumstances, it’s only fair that such a build is more potent than one that’s easily attainable in every game. Therefore, in games like TFT, balancing involves weighing the relative power against the rarity of acquisition, ensuring that when a player encounters a significantly rare build, it feels justified because of the risks taken or the innovative strategies employed.

TFT thrives on the abundance of possible outcomes, with a multitude of combinations and variables at play. It’s crucial for these games to offer not just a handful of ‘high roll’ moments but a wide array, potentially hundreds, allowing for diverse gameplay experiences. TFT reaches its pinnacle when players are presented with numerous potential strategies and must adapt their approach based on the augments, items, and champions they encounter in a given game, crafting their path to victory with the resources at hand.

New Content Updates

The allure of both playing and developing this game lies in its inherent unpredictability. Each session is a unique experience, a stark contrast to many Roguelike games that, despite their initial promise of variety, tend to become predictable after extensive play. Teamfight Tactics, however, stands out with its vast array of possible combinations. Just when you think you’ve seen it all, a new set is introduced, refreshing the game entirely. This happens every four months, an impressive feat that adds a fresh roster of champions, traits, and augments.

The question arises: how is it possible to introduce such a significant amount of content regularly while maintaining balance and preventing the randomness from skewing too far towards being either underwhelming or overpowering? The answer lies in ‘Randomness Distribution Systems’. These systems are designed to control the frequency and type of experiences players encounter. As a game designer, the instinct might be to embrace randomness in its purest form, but the key is to harness it. By setting minimum and maximum thresholds for experiences, we ensure that all elements of randomness fall within these bounds, creating a balanced and engaging game environment.

In Mario Party, have you ever noticed that you never seem to roll the same number on the dice four times consecutively? This isn’t a coincidence; it’s actually by design. Nintendo has implemented a system of controlled randomness to prevent such repetition, as it could lead to a frustrating gaming experience.
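The idea can be sketched in a few lines of code. This is my own illustration of the general technique, not Nintendo’s actual algorithm: a six-sided die that rerolls whenever a value would repeat for a fourth consecutive time, capping any streak at three.

```csharp
using System;

public class ControlledDie
{
    private readonly Random _rng;
    private int _lastValue = -1;
    private int _runLength;

    public ControlledDie(int seed)
    {
        _rng = new Random(seed);
    }

    public int Roll()
    {
        int value = _rng.Next(1, 7); // 1..6
        // Cap any run of identical rolls at three: keep rerolling
        // until a would-be fourth repeat becomes a different value.
        while (value == _lastValue && _runLength >= 3)
        {
            value = _rng.Next(1, 7);
        }
        _runLength = (value == _lastValue) ? _runLength + 1 : 1;
        _lastValue = value;
        return value;
    }
}
```

Over any number of rolls, no value ever appears four times in a row; each individual roll still feels random, but the frustrating extremes have been clamped away. This is exactly the min/max thresholding described above, applied to the simplest possible random event.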

This concept is akin to a crafted ‘Ludo-narrative’, where game designers aim to shape player experiences through seemingly random events, but with a controlled distribution to keep the gameplay enjoyable and engaging. The goal is to allow players to encounter extreme situations, but these are skewed towards positive outcomes rather than negative ones.

This scenario might distort the essence of randomness, but surprisingly, players may not voice their dissatisfaction. Despite the statistical improbability, with millions of players engaging in a game daily, someone is bound to encounter this experience. Even odds as low as 1 in 10,000 can impact thousands of players at scale, highlighting the importance of considering player frustration as a crucial aspect of the gaming experience.

Perfectly Balanced

When discussing game balance, it’s not just about whether a feature is frustrating; it’s about recognising that frustration indicates a flaw in the design that needs to be addressed and learned from. Game balance is a complex, ever-evolving challenge that developers continuously tweak, hoping to align with player expectations. However, there will always be criticism, no matter the adjustments made.

The perception of balance is significant, and within any gaming community, you’ll find voices claiming that perfectly balanced video games don’t exist. Some players set such lofty standards for balance that they seem nearly impossible to meet. The key is establishing a solid foundation that dictates how the game should unfold, ensuring that the core gameplay aligns with the intended player experience.

In Teamfight Tactics, the ideal duration for a round is targeted at between 18 and 25 seconds, which is considered the standard for a well-paced battle. By setting such benchmarks, developers can align the game’s balance with this envisioned state, which is key to achieving a finely tuned game.

Conclusion

It’s essential to have a clear, balanced vision for the game and to persistently follow through with it. Balancing a game is a complex and dynamic challenge, not merely a matter of adjusting to data but also managing player perceptions and their experiences of frustration. Navigating this ever-changing landscape is no easy feat, especially when the development team must juggle multiple roles at a rapid pace. However, it’s precisely this complexity that adds to the excitement and enjoyment of Teamfight Tactics.