In the world of Software Development, there are often differing views on how to arrange teams. Regardless of the approach, people will leave and join over time, so team members need to be replaced and teams need to adapt.
There was a time when we were arranged into teams assigned to a Project, then moved onto a completely different one once it was complete. Any bugs introduced by those projects were then assigned to a “Service Improvement” team who dealt only with bugs (and possibly ad-hoc user requests).
Then, after a few years, and maybe under a new Development manager, we would restructure into Domain teams, where you take ownership of a group of features and only projects related to those are assigned to your team. Any bugs introduced by the projects stay with the team, which gives you a greater incentive to fix them as early as possible. People build up knowledge of their areas and become experts.
Then, a few years later, we would switch back to Project teams.
There are pros and cons to each structure, and there are always edge cases which pose a management problem. Even in a Domain Team, there will be certain features that don’t neatly fit into the groups you defined, or ones that apply to many modules, e.g. Printing.
Sometimes we have called the team that handles the miscellaneous features “Cross-Cutting”. Managers would sell it as being for features like Printing that really are used by many areas of the system, but we all know it becomes a team that gets miscellaneous and unrelated projects. They end up like the “Service Improvement” team that deals with random bugs and the work no one else wants to do.
Cross-Cutting
There was a meeting where managers were announcing the new Domain Teams, and I got assigned to Cross-Cutting. One developer voiced his concerns about having a Cross-Cutting team at all. He wanted to point out that Domain Teams are supposed to have specialist knowledge of their Domains, but most people assigned to the teams had little-to-no knowledge of theirs. For some reason he chose my name to make his point.
“What does TimeInInts know about Cross-Cutting?”
It received a room full of laughter. I’m sure some were laughing at his point, some at his emphasis and delivery, and others probably saw it as an attack on my knowledge. I was probably one of the best people for it really, given my experience in the previous Service Improvement teams.
The whole idea of keeping Domain knowledge in the team only works if there is a true commitment to keep the teams stable over years. However, people will leave the business, some will want to move to a different project to broaden their skills, or people could just fall out with their team members.
Another concern this developer had was with his own team. He was assigned to the Domain team he was the expert on, but he was used to working with a couple of developers in the UK; this new team had two Indian developers. The company had recently acknowledged that distributed teams weren’t really working, so these new Domain teams were supposed to be co-located. This setup seemed to signal that he was there merely to train them up so the Domain could essentially be offshored. Since he was the expert, and proud of it, he still wanted to work in that area. But he can’t work on the same software forever.
In the Cross-Cutting team, we had an open slot labelled “new starter” so we were going to get a new hire in. You have to start somewhere, but again, this doesn’t help the teams specialise if they don’t already start with the knowledge.
Colleagues’ Opinions:
Developer 1:
Me 13:39: what does a new starter know about Cross-Cutting?
Mark 13:39: sounds more like Cost Cutting!
Developer 2:
It’s infinitely harder to build something if you don’t understand the thing you’re building. Hard to catch issues and make sense of designs if you had no opportunity to learn the domain.
Developer 3:
isn’t one of our major issues that we’ve lost domain expertise for core/bread-and-butter modules? For any “module”, there’s a combination of what the requirements are/how it should work, and what the code is actually doing. Without “domain teams”/ownership – we’ve lost a large part of the puzzle (how the module should work).
Developer 4:
our teams are completely ineffective, expertise has been spread too thin. We probably need to reorganise the department again with who is remaining.
Build stronger teams first that only have one junior-ish person, then have weaker teams helping out where possible. It will be very hard for the weaker teams, but unless we do this, we’ll lose the stronger people.
The weaker teams should be given appropriate projects with longer timescales, and given as much help as possible while ultimately having to struggle their own way, stronger people who put in the effort will begin to emerge from those teams.
Even as an experienced software developer, it is amazing when you discover some really trivial thing, or some interesting quirk of a programming language.
I was looking at a Code Review the other week and saw some code that looked really pointless. It was testing that some code throws an ArgumentNullException.
A NullReferenceException is an incredibly common mistake and probably the first problem new developers encounter. If you have a reference to an object, but the object is null, you cannot call instance methods on it.
Therefore if logger is null, then you cannot call LogWarning without an error being thrown.
So at first glance, this test looks like it is testing the basic fundamentals of the C# programming language. However, it is testing for ArgumentNullException rather than NullReferenceException.
LogWarning was actually defined as an extension method, and extension methods do allow you to call methods on null references. I’d never realised this, or even thought about it. It works because an extension method actually passes the reference in as a parameter.
So if you have an extension method (as indicated with the this keyword):
public static bool IsNull(this object x)
{
    return x == null;
}
This can be called like this:
static void Main()
{
    object y = null;
    Console.WriteLine(y.IsNull());
    y = new object();
    Console.WriteLine(y.IsNull());
}
This outputs true, then false, which illustrates that the extension method does not crash when y is null, and that the logic correctly returns true in that case.
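To make the mechanics concrete, here is a minimal self-contained sketch (the class names are mine, not from the original review) showing that an extension-method call is just sugar for a static call, so a null receiver is simply passed along as an argument:

```csharp
using System;

public static class ObjectExtensions
{
    // An extension method: the "receiver" is really just the first parameter.
    public static bool IsNull(this object x)
    {
        return x == null;
    }
}

public static class Program
{
    public static void Main()
    {
        object y = null;

        // These two calls compile to the same thing, which is why
        // no NullReferenceException is thrown for a null receiver.
        Console.WriteLine(y.IsNull());                 // True
        Console.WriteLine(ObjectExtensions.IsNull(y)); // True
    }
}
```

The second call is what the compiler actually emits for the first, so the null check (if you want one) has to happen inside the extension method itself.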
Conclusion:
Understanding NullReferenceExceptions is basically day 1 of learning to code in an Object-Oriented language like C#, but I’d never even considered that there is an exception to the rule. A method call on a null reference won’t cause a NullReferenceException if the method is an extension method!
We use a software library from Atalasoft in our product to allow users to add annotations to PDFs.
One of our Indian developers posted on Slack to ask a question about a bug he was assigned to fix. It was quite hard to understand what he wanted, but it sounded like the quality of users’ PDFs was lowered to the point that they were blurry and unusable.
Hi Everyone, Here is my doubt was more of a generic one. In Document Attachment Module, I’m trying to attach a PDF. The attached PDF gets depreciated in the doc viewer.. After analysis came to a conclusion that, the Atalasoft Viewer we are using Document Attachment viewer should pass only with 96dpi(dots per inch).
However in the Atalasoft Documentation itself was given that inorder to increase the quality of the document inside the Document Viewer of Atalasoft we need to pass on the default or hardcoded resolution as attached.
With respect to this have attempting a bug in which need to fix this depreciation not in a hardcoded format.
Is there any way to calculate a PDF file’s DPI through its file size. (Note: Since PDF file was vector based and doesn’t posses any information related to dpi).Can anyone please guide me on this ? Apart from hardcoding and passing on a resolution value.
After he struggled with it, another developer started working on it, but then went on annual leave, so yet another developer took over. None of them had put much thought into what they were doing, because when I asked them to explain the code, they couldn’t. I then googled the code and found it on the Atalasoft website: https://www.atalasoft.com/kb2/KB/50067/HOWTO-Safely-Change-Set-Resolution-of-PdfDecoder
using (var annotateViewer = new AnnotateViewer())
{
    annotateViewer.DataImporters.Add(new Atalasoft.Annotate.Importers.PdfAnnotationDataImporter { SkipUnknownAnnotationTypes = false });
    using (var pdfDec = new PdfDecoder())
    {
        pdfDec.RenderSettings = new RenderSettings { AnnotationSettings = AnnotationRenderSettings.RenderAll };
        Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders.Add(pdfDec);
        SetPdfDecoderResolution();
    }
    annotateViewer.Open(filePath);
    var printer = new Printer(annotateViewer);
    printer.Print(printerSettings, documentName, printContext);
}

static readonly object pdfLock = new object();

private static void SetPdfDecoderResolution()
{
    int standardResolution = 300;
    lock (pdfLock)
    {
        foreach (Atalasoft.Imaging.Codec.ImageDecoder rawDecoder in Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders)
        {
            if (rawDecoder is PdfDecoder)
            {
                //By default PdfDecoder sets to lower resolution of 96 dpi
                //Reason for PDF depreciation
                ((PdfDecoder)rawDecoder).Resolution = standardResolution;
                return;
            }
        }
        Atalasoft.Imaging.Codec.RegisteredDecoders.Decoders.Add(new PdfDecoder() { Resolution = standardResolution });
    }
}
The code instantly stood out as convoluted: we create a PdfDecoder called pdfDec, then instead of just setting properties on it, we add it to the RegisteredDecoders, then call our SetPdfDecoderResolution, which loops through the decoders to find the one we just added. If it can’t find it (which surely is impossible), it adds one.
I was talking to a Lead Developer about a completely different bug fix, and he said
“People just don’t think about what they write”
Lead Developer
So I decided to bring up this Atalasoft problem…
He said
When I saw the lock I wanted to ask, “Which Stack Overflow post did you find this on?”
Lead Developer
So I told him they got it from the Atalasoft website!
So they had blindly pasted this Atalasoft code in without thinking. They could have just set the Resolution property in the existing code, since we already create the object and therefore hold a reference to it. If this code really can result in multiple decoders being registered (which you aren’t supposed to do), then we could create a method similar to SetPdfDecoderResolution that checks whether a decoder exists, adds one if none does, and ensures all the correct properties are set.
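If multiple registrations really were possible, the tidier pattern described above (check for an existing decoder, otherwise add a fully configured one) could be sketched generically like this; all of the types here are simplified stand-ins, not Atalasoft’s real API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for a decoder type with a configurable resolution.
public class Decoder
{
    public int Resolution { get; set; }
}

public static class DecoderRegistry
{
    public static readonly List<Decoder> Decoders = new List<Decoder>();

    // Find-or-add: ensures exactly one decoder is registered,
    // with the correct resolution, in a single place.
    public static Decoder EnsureDecoder(int resolution)
    {
        var existing = Decoders.FirstOrDefault();
        if (existing != null)
        {
            existing.Resolution = resolution;
            return existing;
        }

        var created = new Decoder { Resolution = resolution };
        Decoders.Add(created);
        return created;
    }
}
```

Calling EnsureDecoder twice returns the same instance rather than registering a duplicate, which is the guarantee the pasted loop-plus-fallback code was fumbling towards.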
They need to think
Lead Developer
I think the problem the Lead Developer had with the lock is that you use lock when you want to guarantee that only one thread accesses a resource or section of code at a time; but this code wasn’t used in a multi-threaded context. So by blindly pasting code in without thinking, they were adding redundant lines and creating confusion.
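For contrast, here is a minimal sketch of what lock is actually for (my own toy example, unrelated to the PDF code): many threads bump a shared counter, and the lock makes each read-modify-write atomic so no increment is lost.

```csharp
using System;
using System.Threading.Tasks;

public static class SafeCounter
{
    private static readonly object counterLock = new object();

    // Increments a counter from many threads. Without the lock, two threads
    // could read the same value, both add one, and lose an increment.
    public static int CountTo(int iterations)
    {
        int counter = 0;
        Parallel.For(0, iterations, _ =>
        {
            lock (counterLock)
            {
                counter++;
            }
        });
        return counter;
    }

    public static void Main()
    {
        Console.WriteLine(SafeCounter.CountTo(100_000)); // always 100000
    }
}
```

In single-threaded code like the Atalasoft snippet, the lock is pure noise: it always succeeds immediately and protects nothing.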
The actual fix was just
private const int highDotsPerInch = 300;
pdfDec.Resolution = highDotsPerInch;
But to reach this outcome, it took 3 developers to look at it, then 2 to review it.
As a software developer, you are usually given projects without knowing the contractual details involved. However, there was one project I was originally assigned to, and I was forwarded some documents about it, including some fairly formal ones with pricing.
The project was actually straightforward because we already had the functionality for users in England and they wanted users in Wales to use similar functionality. It was the same for the most part, but there was some minor customisation required. So it mainly involved deleting or tweaking a few files to remove the validation based on the country. Then there would be some testing involved to make sure the feature really did work when configured for Wales.
Some Senior Developers and Architects had estimated the project at 6 months, which seemed a bit extreme, and reckoned the cost of development was £442,404, plus some miscellaneous costs for “platform, network and migration” which took the total to £445,620!
On the face of it, that sounds expensive. But when I think of the labour cost involved, where I work, a Junior might earn £25k a year, then Seniors are more like £32k-£45k. So if you have a few developers and testers on a project, with some managers involved, and it really does take 6 months, then the costs soon add up. Then you want to make a decent profit on it too.
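As a back-of-the-envelope sketch (every number here is made up for illustration, loosely based on the salary ranges above, and not from the real project), raw labour alone doesn’t get near the quoted figure, which suggests overheads, non-development roles, and profit margin account for a large share:

```csharp
using System;

public static class ProjectCostSketch
{
    public static void Main()
    {
        // Hypothetical team: all figures are illustrative.
        double averageAnnualSalary = 38_000; // somewhere between junior and senior
        int teamSize = 6;                    // developers, testers, a manager
        int months = 6;

        double labourCost = averageAnnualSalary / 12 * months * teamSize;
        Console.WriteLine($"Raw labour: £{labourCost:N0}"); // roughly £114,000

        // Employers typically add overheads (NI, pension, office, licences)
        // before any profit margin goes on top.
        double withOverheads = labourCost * 1.5;
        Console.WriteLine($"With ~50% overheads: £{withOverheads:N0}"); // roughly £171,000
    }
}
```

Even with generous overheads, that is well short of £442,404, so the quoted price presumably bakes in margin and the commercial value of functionality that already existed.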
I guess the cheeky thing is, the customer might not know what you already have; so you could charge as if it were new when you are just recycling/reusing existing code.
The end result is the same for the new customer isn’t it?
What I didn’t understand in the document is that there was a line that said:
“The requirements described within this CCN must be delivered by January 2024 in order to support a proof of concept with a limited number of users in a live environment. Once the proof of concept is complete, an implementation plan will be defined by the programme team to determine the pace of the national rollout, to be complete by January 2026.”
My question is, does it make sense to create a proof of concept (POC) that works well enough, but then have 2 years to actually complete the work?
Well people don’t have any experience of what they are suggesting so are just making it up. I agree though, if you have a proof of concept you’re kind of almost there. Depends on how hacky the POC is I suppose
Robert (Senior Developer)
Even more confusing is that we didn’t deliver the POC by January, but we did deliver the completed feature by the end of March.
We have an area of our system that is a major pain for memory usage. We allow users to create what we can generically refer to as “Resources” and new users will then download the entire set of them, which are then cached locally.
The initial download is very large, and the resources are then loaded into memory the next time the application starts. Most of the time this loading is on-demand, but it can be slow and very memory-consuming.
Another problem is that due to various bugs, sometimes these resources can be missing/corrupted and have to be downloaded again.
Over time, this area of code has been cobbled together by developers that don’t really understand the system, and so it has perpetuated the inefficiency and bugs, which becomes an endless cycle of making the system worse.
There was a big push to improve this area of the system, but no one has learned their lesson, so many juniors got assigned to fix the problem.
When it comes to code reviews, code can be surprising, and the surprise comes from the fact that the developer is either significantly smarter than me or significantly dumber. Sometimes I find myself looking at it and wondering whether it really is bonkers, or some genius understanding of code that I need to learn. So it’s best to ask your colleagues to check your understanding.
I don’t remember seeing a cast in a property before:
public IEnumerable<MissingResource> MissingResources
{
    get { return _missingResources; }
    private set { _missingResources = (List<MissingResource>)value; }
}
So it’s either incredibly smart, or incredibly dumb.
“That cast is mental!
You could set it to anything that implements IEnumerable<MissingResource> – but it better be a List<>”
Dave (translation of what the code is saying)
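To check my understanding of why the cast is dangerous, I can reproduce it with a toy class (hypothetical names, and a public setter so the failure is visible from outside; the real property’s setter was private):

```csharp
using System;
using System.Collections.Generic;

public class ResourceTracker
{
    private List<string> _missing = new List<string>();

    public IEnumerable<string> Missing
    {
        get { return _missing; }
        // Compiles fine, but only succeeds at runtime if the caller
        // happens to supply a List<string>.
        set { _missing = (List<string>)value; }
    }
}

public static class Program
{
    public static void Main()
    {
        var tracker = new ResourceTracker();
        tracker.Missing = new List<string> { "a" }; // fine: it really is a List<>

        try
        {
            // string[] implements IEnumerable<string>, so this compiles...
            tracker.Missing = new[] { "b" };
        }
        catch (InvalidCastException)
        {
            // ...but the cast blows up at runtime.
            Console.WriteLine("InvalidCastException, as feared");
        }
    }
}
```

So the compiler lets any IEnumerable<string> through, and the downcast turns a type error into a runtime crash: the “incredibly dumb” reading wins.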
Is the following just a lack of trust that .NET will clear the old objects? To me, this code makes it seem like there is a bug they are working around, or they are just going wild nulling everything out to save memory.
public void ClearData()
{
    NewResources = null;
    ExistingResources = null;
    MissingResources = null;
    SkippedResources = null;
    NewResources = new List<IResource>();
    ExistingResources = new List<IResource>();
    MissingResources = new List<MissingResource>();
    SkippedResources = new List<IResource>();
    IndexedResources = new List<Guid>();
}
trust understanding
Dave
Does that finally block do anything? It’s a local variable, so it should be marked for the garbage collector at that point anyway.
finally
{
    bulkResources = BulkResource.Empty();
}
Yes it does something.
That something is worse than doing nothing!!!!
the finally allocates another instance and loses scope of the current one, meaning there are 2 things to GC now
Dave
I do wonder if sometimes they don’t really know what you are asking but just change stuff anyway. After I pointed out that their use of null didn’t achieve anything, we now create some empty lists and then clear them if they are not null (they aren’t null, and they are definitely empty, because we just created them).
public virtual bool PrepareItemsForImport(ImportProcessParameters parameters)
{
    DialogService.SetProgressFormText("Preparing to import...");
    _newResources = new List<IResource>();
    _existingResources = new List<IResource>();
    _missingResources = new List<MissingResource>();
    _skippedResources = new List<IResource>();
    _indexedResources = new List<Guid>();
    ClearData();
    _importStartDateTime = DateTime.Now;
    _mappingInformation = RemappingService.MappingIdentifiersForOrganisation;
    return true;
}

public void ClearData()
{
    NewResources?.Clear();
    ExistingResources?.Clear();
    MissingResources?.Clear();
    SkippedResources?.Clear();
    IndexedResources?.Clear();
}
Does “ClearDataInViewModel” do anything? You call it right before the view model goes out of scope and is eligible for garbage collection anyway?
Me
using (var dialogService = new DialogService())
{
    var viewModel = new ImportDetailsDialogViewModel(dialogService);
    viewModel.InitializeFromImportProvider(importProvider);
    var dialog = new ImportDetailsDialog();
    dialog.DataContext = viewModel;
    Application.ShowModal(dialog);
    viewModel.ClearDataInViewModel();
}
Remember what the point of this work was: to reduce memory leaks, and to improve performance in other ways (fixing bugs in the caching, reducing server calls, removing redundant code). What they have done so far is add more redundant code and show a complete lack of understanding of how and when the garbage collector in C# works and runs. The garbage collector is how memory (RAM) is freed up in .NET.
Me Retrieval.GetMobileTemplateHeaders doesn't seem to return TemplateHeaders anymore
Jaz Fixed this
Me You are still taking the output from a method called GetMobileTemplateHeaders and converting them to TemplateHeaders. Seems like the method should be renamed, or the return type changed
Jaz It is returning template headers enabled for mobile. So it was named as GetMobileTemplateHeaders.
Me This was the code before. It's of type TemplateHeaders List<TemplateHeader> headers = Retrieval.GetMobileTemplateHeaders().ToList();
This is the code now IEnumerable<UserTemplateDefinition> mobileUserTemplateDefinitions = Retrieval.GetMobileTemplateHeaders(); It isn't of type TemplateHeaders but you want TemplateHeaders. So you then take the output of Retrieval.GetMobileTemplateHeaders and convert it to TemplateHeaders, storing it in a variable called mobileTemplateHeaders.
The code looks strange to have a call to GetMobileTemplateHeaders then the line straight after it creates a variable called mobileTemplateHeaders.
Surely we expect the code to be more like IEnumerable<TemplateHeader> mobileTemplateHeaders = Retrieval.GetMobileTemplateHeaders();?
Jaz Change done.
Another developer pointed out they had introduced another inefficiency by grabbing ALL resources and not just the ones they were interested in. So they aimed to cut down memory usage but actually managed to increase it!
Gary Are you sure you want to do a get bulk resources to only just get the templates out?
You are getting all types of resources ~20k+ items etc to only throw the majority of that data away?
Jaz Checked with the team and changed the approach to get templates only
Conclusion
It is very easy to understand why this particular area of the system is a massive problem. If you tell the developers to look into improving performance, they just end up changing random bits of code and hoping it somehow works. Even when a change is half-decent, they don’t put much thought into the naming, so it ends up hard and confusing to read.
What we need to do is actually assign some smarter developers to the project; ones that understand how memory leaks can occur, look at the number of resources being loaded at certain points, and analyse the SQL queries to do the initial retrieval.
I’ve read about, and watched videos on, computer game balance, and I find it such an interesting topic: how you can measure and perceive the strength of each character or unit, and how you might attempt to fix issues to rebalance the game.
Second Wind have made a video on Teamfight Tactics.
I’ve never played this game, or even similar games, but it has the same general problems to solve in its design that many games do.
So, taking the transcript and running it through AI, I’ve put together a blog post on it.
Teamfight Tactics
Teamfight Tactics (TFT) by Riot Games is a strategic auto-battler, inspired by the League of Legends universe and drawing elements from Dota Auto Chess. In this competitive online game, players are pitted against seven adversaries, each vying to construct a dominant team that outlasts the rest.
In a game like League of Legends, a single overpowered champion can only be selected by one player and would be banned in competitions once discovered. In TFT, all Champions and items are available all at once creating many possibilities for players to find exploits in.
Balancing Teamfight Tactics (TFT) is a compelling challenge. Compared to card games like Hearthstone, where adjustments are made through a limited set of variables, TFT presents a stark contrast with its myriad factors, such as health, armour, and animation speed, to name a few.
Initially, it might seem that having numerous variables at one’s disposal would simplify the balancing process, but even minor adjustments can significantly influence the game’s equilibrium. For instance, a mere 0.25-second reduction in a character’s animation speed can transform an underperforming champion into an overwhelmingly dominant force.
The sensitivity of each variable is due to the intricate interconnections within the game. A single element that is either too weak or too strong, regardless of potential counters, can trigger a cascade of effects that alter the entire gameplay experience.
Consider the analogy of a card game where an overpowered card exists. In such a scenario, there are usually counters or alternative strategies to mitigate its impact. However, if a card is deemed too weak, it’s simply excluded from a player’s deck without much consequence. Contrast this with a game like Teamfight Tactics, where the strength of a champion is intrinsically linked to its traits and the overall synergy within a team composition. If a champion is underpowered, it doesn’t just affect the viability of that single unit; it extends to the entire trait group, potentially diminishing the strength of related champions. This interconnectedness makes balancing challenging, but it is manageable through data analysis. Player perceptions of balance are shaped by this data.
Vladimir The Placebo, and Vain the Unappreciated
The character Vladimir in League of Legends had become notably powerful, overshadowing others in the game’s “meta”. To address this, developers proposed minor tweaks to balance his abilities. However, when the update was released, Vladimir’s dedicated players were outraged, believing their favourite character had been weakened to the point of being nonviable. But, in an unexpected turn of events, the nerf was never actually implemented due to an oversight. The players’ reactions were solely based on the anticipated changes they read about, not on any real modification to Vladimir’s capabilities. This psychological effect influenced Vladimir users to play more cautiously, while their opponents became more bold, illustrating how perception can shape reality.
Data only reflects the current state, not the potential. Particularly in a strategy game like Teamfight Tactics, which is complex and “unsolved”, players’ understanding and use of characters can be heavily swayed by their perceptions. Perception often becomes the player’s reality.
In the fifth instalment of the game, there emerged a low-cost champion named Vain. Initially, after the game’s release, the consensus was that Vain was underperforming—deemed the least desirable among her tier. The development team had reservations; they believed she wasn’t as ineffective as portrayed. Consequently, a minor enhancement was scheduled for Vain. However, before the update could go live, feedback from players in China indicated they had discovered a potent strategy for Vain. This revelation transformed her status drastically within three days, elevating her from the least favoured to potentially one of the most overpowering champions ever introduced.
This scenario underscores the limitations of relying solely on data, whether from players or developers, as it may not reveal the full picture. Balancing in gaming is often perceived in black and white terms by the player base—they view a character as either strong or weak, which leads to calls for nerfs or buffs. However, they frequently overlook the subtle intricacies and minute adjustments that can have significant impacts on gameplay.
Different Players
In competitive games like League of Legends, different balance parameters are set for various levels of play. A character might dominate in lower ranks but may not be as effective in higher tiers of play.
When it comes to balancing games like Teamfight Tactics, developers have taken the approach of balancing the game as if computers were playing it. The game is designed to test strategic thinking rather than reflexes and mechanical skill.
In a matchup of Army A versus Army B, played optimally, the outcome is effectively predetermined. However, this does not mean we should nerf an army simply because it performs well at a lower skill level. Instead, it presents a learning opportunity for players to improve their skills.
Interestingly, perceived imbalances can serve as educational tools. As players engage with the game, they gain knowledge through experimentation. For example, if a player tries a certain composition with specific items and it fails, they can reflect on whether it was a misstep or an unforeseen event. Learning that a champion doesn’t synergize well with a particular item is valuable knowledge to carry into future games.
There are build combinations that could potentially disrupt the game’s balance if the perfect mix is achieved. This aspect works well in single-player modes like Roguelikes, where the aim is to become overwhelmingly powerful. However, the challenge arises in maintaining this sense of excitement while ensuring these powerful builds don’t lead to exploitation in a multiplayer setting.
Risks & Rewards
Balancing isn’t merely about pitting one army against another to see the outcome. It’s also about the risks involved in reaching that point. For instance, if there’s a build that appears once in every 10,000 games, requiring a perfect alignment of circumstances, it’s only fair that such a build is more potent than one that’s easily attainable in every game. Therefore, in games like TFT, balancing involves weighing the relative power against the rarity of acquisition, ensuring that when a player encounters a significantly rare build, it feels justified because of the risks taken or the innovative strategies employed.
TFT thrives on the abundance of possible outcomes, with a multitude of combinations and variables at play. It’s crucial for these games to offer not just a handful of ‘high roll’ moments but a wide array, potentially hundreds, allowing for diverse gameplay experiences. TFT reaches its pinnacle when players are presented with numerous potential strategies and must adapt their approach based on the augments, items, and champions they encounter in a given game, crafting their path to victory with the resources at hand.
New Content Updates
The allure of both playing and developing this game lies in its inherent unpredictability. Each session is a unique experience, a stark contrast to many Roguelike games that, despite their initial promise of variety, tend to become predictable after extensive play. Teamfight Tactics, however, stands out with its vast array of possible combinations. Just when you think you’ve seen it all, a new set is introduced, refreshing the game entirely. This happens every four months, an impressive feat that adds a fresh roster of champions, traits, and augments.
The question arises: how is it possible to introduce such a significant amount of content regularly while maintaining balance and preventing the randomness from skewing too far towards being either underwhelming or overpowering? The answer lies in ‘Randomness Distribution Systems’. These systems are designed to control the frequency and type of experiences players encounter. As a game designer, the instinct might be to embrace randomness in its purest form, but the key is to harness it. By setting minimum and maximum thresholds for experiences, we ensure that all elements of randomness fall within these bounds, creating a balanced and engaging game environment.
In Mario Party, have you ever noticed that you never seem to roll the same number on the dice four times consecutively? This isn’t a coincidence; it’s actually by design. Nintendo has implemented a system of controlled randomness to prevent such repetition, as it could lead to a frustrating gaming experience.
This concept is akin to a crafted ‘Ludo-narrative’, where game designers aim to shape player experiences through seemingly random events, but with a controlled distribution to keep the gameplay enjoyable and engaging. The goal is to allow players to encounter extreme situations, but these are skewed towards positive outcomes rather than negative ones.
This scenario might distort the essence of randomness, but surprisingly, players may not voice their dissatisfaction. Despite the statistical improbability, with millions of players engaging in a game daily, someone is bound to encounter this experience. Even odds as low as 1 in 10,000 can impact thousands of players at scale, highlighting the importance of considering player frustration as a crucial aspect of the gaming experience.
Perfectly Balanced
When discussing game balance, it’s not just about whether a feature is frustrating; it’s about recognising that frustration indicates a flaw in the design that needs to be addressed and learned from. Game balance is a complex, ever-evolving challenge that developers continuously tweak, hoping to align with player expectations. However, there will always be criticism, no matter the adjustments made.
The perception of balance is significant, and within any gaming community, you’ll find voices claiming that perfectly balanced video games don’t exist. Some players set such lofty standards for balance that they seem nearly impossible to meet. The key is establishing a solid foundation that dictates how the game should unfold, ensuring that the core gameplay aligns with the intended player experience.
In Teamfight Tactics, the ideal duration for rounds is targeted at between 18 and 25 seconds, which is considered the standard for a well-paced battle. By setting these benchmarks, developers can align the game’s balance with this envisioned state, which is key to achieving a finely-tuned game.
Conclusion
It’s essential to have a clear, balanced vision for the game and to persistently follow through with it. Balancing a game is a complex and dynamic challenge, not merely a matter of adjusting to data but also managing player perceptions and their experiences of frustration. Navigating this ever-changing landscape is no easy feat, especially when the development team must juggle multiple roles at a rapid pace. However, it’s precisely this complexity that adds to the excitement and enjoyment of Teamfight Tactics.
A few years ago, I was talking to my manager about how we had a lack of people who like to perform Code Reviews, and how that needed to change. He said that since I had gained a reputation as a good code reviewer, maybe I should do a presentation. I said it probably wasn’t that easy to come up with something generic, and that it would need a lot of thought; maybe I would attempt to write a guide one day. Over the years, I have bookmarked a few blogs and collated some notes and ideas, but never put them together. So here is my attempt.
What Does Code Review Mean?
When a software developer has code they think is ready for Testing, they need to commit those changes to the main branch. Before this happens, there is often an approval process by their peers. This is the Code Review. The GitHub terminology is Pull Request. I’ve always found that to be a strange name.
Andy: pull requests?! since when do we use that term?
Me: when nerds go dating
I suppose you could say the aim is to attempt to perfect the code. But “Perfect” is not defined; does perfect code even exist? The goal is to always write better code.
There’s the 80/20 rule of Project Management, which states that 80% of a feature can be written in 20% of the time; perfecting it then takes the remaining 80% of the time. Sometimes you have to be pragmatic and balance programmer effort against actually delivering features on time.
Code Reviews are a great opportunity to learn, whether as the reviewer or the reviewee. Receiving questions, suggestions, or instructions from your team members helps you learn about the programming language, design patterns, common practices, alternate ways of approaching problems, or the actual domain. As the reviewer, you are practising critical thinking and essentially debugging in your head, a skill you use every single day, since more time is spent reading code than writing it.
We all benefit from an extra pair of eyes. Sometimes you think you have written some great code, and keep reading over it and think it is fine. Then another team member glances at it and spots improvements which you agree on. I’ve seen plenty of smart developers write poor code which I know they would flag up if their colleagues wrote it.
Looking at Code Reviews helps team members gain context on other pieces of the code base. Having more awareness of what code is changing gives you a headstart in fixing future bugs when you know exactly where to investigate.
Code Reviews take time, and time is money. Formal reviews are costly; that cost has to be justified by the nature of the software project. Delaying new features and bug fixes has an impact on users, but also on other team members: higher code churn causes merge issues when the change is finally checked in. The longer code changes sit in a branch, the more likely you are to run into these merge issues.
Readable code
I have heard some developers say “reading code is way more difficult for me (than writing)”. Given that programming actually involves more reading than writing, I think developers who struggle to read code need to be questioned!
Since code is read many more times than it is written, you need to write code that is easily read by humans. If we were writing code just for the computer, we’d be writing in binary. Programming languages are meant to communicate with humans first, and computers after that. Although you don’t want to be too verbose, if you are optimising for the fewest lines of code, you are probably optimising the wrong thing.
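As a hypothetical sketch of this trade-off (Python here, though the point applies to any language; the function names and the tuple shape are made up):

```python
# One-liner that saves lines but costs the reader:
def actv(u): return [x for x in u if x[1] and x[2] > 2]

# The same logic, written to be read. Each user is a
# (name, is_active, years) tuple -- a shape invented for this example.
def active_senior_users(users):
    return [user for user in users
            if user[1]           # is_active
            and user[2] > 2]     # more than two years' tenure
```

Both behave identically; only the second one tells the next reader what the filter actually means.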
Tone of your comments
The art in reviewing is to flag all possible problems, but you need to be careful how you word comments, because finding problems in someone’s code can be seen as a personal attack, and people can feel like you are questioning their coding abilities. Reviewers can easily come across as condescending and end up demoralising their team members. The aim of the review is to improve the code and grow your team members’ skills. It’s not about the reviewer’s ego or a chance to show off.
However, developers should take pride in their work, so attempting to submit code that took no thought probably deserves a wake-up call. The severity of bad code differs between systems. Does the software involve the potential loss of life (e.g. a medical device or vehicle safety system) or the handling of millions of pounds of assets? Or is it a simple command-line utility that your colleagues use? In the severe situations, I can understand how reviewers lose their patience and write brutal comments. See the Brutal PR Comments section.
The reviewer often holds a senior position, but could also be on the same level as the author. Regardless of any power dynamics, you need to bear in mind that the author may have far more context, and may have been involved in other meetings about the work, leading them to write the code the way they did. Given that possibility, it’s better to phrase comments as questions rather than stating something with authority. Instead of “You should do it Y way” you could say “Can you talk about your reasons for choosing X? In most cases I’d use Y, is there a reason not to here?”. This approach comes across as more collaborative and friendly. In the case your suggestion was correct, they should realise when they try to answer. In the case that you are wrong, they can explain the additional information they have, and so all participants can learn.
You don’t always have to be negative, either. You can add comments to give praise, or to admit that you learned something by reading their code. A bit of positivity helps offset any negative tone in your other comments.
Good reviewers are empathetic towards recent joiners, who might not be aware of all the coding guidelines, especially informal ones that aren’t written down anywhere. They are also well aware that a new joiner is still ramping up on the codebase, and won’t be up to speed with its conventions and functionality.
Where To Start
Sometimes I just dive straight into the code and start reading. This can give you the best judgement as to whether the code is clear or not. If it isn’t immediately obvious, or the code looks/“feels” strange, then I will try to gain more context. I read the Title and the description provided. I look at the linked Bug/Requirement, read the information the developer had, and understand the problem they were trying to solve.
To get an overview of the change, you can glance at which files were changed/added/deleted. Sometimes reading the folder names gives you more context. Are you looking at client or server code? Are there database changes? Do the files match up with what you expect? Maybe they added a file by mistake, or left in a temporary “hack” they made to test their code and forgot to delete.
Read In More Detail
Two questions to keep in mind are:
• How could I improve that?
• What could go wrong?
You can approach the review from two angles: readability and functionality.
If you can’t understand the code now, in one year you won’t get it either. If code is hard to understand, it is harder to change and more error prone. Easy things to look for are typos, unclear or ambiguous names, and wrongly named code (names should reveal intent).
Small functions are easy to read, are less likely to need code comments, and are also easy to test. You can look for large functions and consider whether they can be broken down.
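A minimal sketch of the kind of split a reviewer might suggest (Python, with made-up function names and a made-up "name,age" record format):

```python
# Before: one function parses, validates, and formats in a single blob.
def process(raw):
    parts = raw.split(",")
    if len(parts) != 2:
        raise ValueError("expected 'name,age'")
    name, age = parts[0].strip(), int(parts[1])
    if age < 0:
        raise ValueError("age must be non-negative")
    return f"{name} ({age})"

# After: each step gets a name, so the code reads like the author's intent,
# and each piece can be unit-tested on its own.
def parse_record(raw):
    parts = raw.split(",")
    if len(parts) != 2:
        raise ValueError("expected 'name,age'")
    return parts[0].strip(), int(parts[1])

def validate_age(age):
    if age < 0:
        raise ValueError("age must be non-negative")

def format_record(name, age):
    return f"{name} ({age})"

def process_record(raw):
    name, age = parse_record(raw)
    validate_age(age)
    return format_record(name, age)
```

The behaviour is unchanged; the refactor only buys readability and testability.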
If there are code comments, are they needed? Do they add value? Comments are good for explaining hard-to-read code, or weird design decisions the author made because they couldn’t find a better solution. Maybe there is a way to make the code readable enough that the comment isn’t needed at all.
Does the code conform to your “coding standards”? One example is casing, e.g.:
// camelCase  (e.g. userName)
// PascalCase (e.g. UserName)
// snake_case (e.g. user_name)
// kebab-case (e.g. user-name)
Your team may have other rules about styling, such as:
• returning early from methods,
• using certain syntax,
• keeping argument lists small.
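As an illustration of the “returning early” rule, here is a hypothetical Python sketch (the discount logic and field names are invented for the example):

```python
# Nested version a reviewer might flag:
def discount_nested(user):
    if user is not None:
        if user.get("active"):
            if user.get("loyalty_years", 0) >= 2:
                return 0.10
    return 0.0

# Guard-clause version, returning early. Each precondition is
# dealt with on its own line, so the happy path is left unindented.
def discount(user):
    if user is None:
        return 0.0
    if not user.get("active"):
        return 0.0
    if user.get("loyalty_years", 0) < 2:
        return 0.0
    return 0.10
```

Whether your team prefers one style over the other is exactly the kind of convention worth writing down.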
However, if a given piece of syntax should never show up in your codebase, you should really add an automatic “linter” rule that will either flag it or automatically fix it. It’s a waste of time to make this a manual process, and it doesn’t provide a ton of value. You could say “if it’s not worth adding the rule, then it’s probably not worth highlighting in the code review either”. Not all things can be linted, though, such as coming up with good names for variables/methods/classes.
Sometimes, you may have a recommendation that should not prevent the code from moving forward, but you want to note anyway. Marking these things with a prefix such as “NB” or non-blocking can be a great way to flag a small annoyance that the author can ignore if they don’t want to fix now. You might do this if you don’t feel too strongly about the issue, or think it’s not worth holding up the change. You always need to remember to be pragmatic.
A little code review habit I appreciate:
When my teammates make a minor suggestion, they often prefix it with "Nit:"
It's a short, polite way to say "I know this is minor, but I suggest changing this."
Useful for misspellings, typos, naming suggestions, etc
The functionality approach considers how the code meets the requirements, as well as “non-functional” requirements like scalability and security. Is there any code that is redundant or duplicated? Are there any obvious bugs like the classic Null Reference or Index Out Of Bounds? You could also ask yourself “How would I have done it? Is my way better? Could that logic improve the current one?”
• Has the person added any Unit Tests, and if not, can they? If tests have been deleted, is this the correct thing to do?
• Does this change impact another system?
• Are errors handled correctly?
• Is the functionality over-engineered? Are there new third-party dependencies that are unnecessary?
• Are they using a design pattern incorrectly?
• Does the feature cause problems as the number of users scales? It might work on their machine, but will it work in live?
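The “obvious bugs” mentioned above can be sketched in Python, where AttributeError on None and IndexError play the roles of Null Reference and Index Out Of Bounds; the function here is made up for illustration:

```python
# Buggy version: crashes on an empty list (IndexError) or a None
# element (AttributeError) -- exactly the kind of thing a reviewer
# should catch by mentally running the edge cases.
def first_upper(names):
    return names[0].upper()

# Defensive version that handles both edge cases:
def first_upper_safe(names):
    if not names or names[0] is None:
        return None
    return names[0].upper()
```

Asking "what happens if this list is empty?" while reading is the head-debugging skill mentioned earlier.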
What should I do when I don’t know the language of the code?
There can be scenarios where you don’t know a certain coding language or technology, but you have to review it. You can make it clear that you have limited knowledge before making comments. If there is no automatic linting on the code, a good starting point is the superficial review: look for typos and check that variables are well named. Try to ask questions so you can learn, and also check their understanding. Sometimes asking questions gets them thinking and they find flaws in their own code.
A related point: if someone is writing in a language they aren’t fluent in, they can write against its conventions. We had a developer writing C# who was fluent in C++, so we would often see him write if statements backwards, like “if (false == value)”, which is a C++ convention (a “Yoda condition”, used there to guard against accidental assignment).
“If you’ve ever seen Java code written in a C++ style or vice versa, you’ll know what I mean. I’ve previously referred to this in terms of speaking a language with an accent – you can speak C# with a Java accent just as you can speak French with an English accent. Neither is pleasant.” – Jon Skeet
Approve/With Suggestions/Reject
Once you have written your comments, you can set an overall status of the review. The terms differ depending on the system (ADO/GitHub etc) but it generally follows Approve/With Suggestions/Reject.
It’s possible to select an overall status that doesn’t match your comments.
Andy rejected the pull request
“The change is implemented perfectly, I’m just thinking we could alter the design slightly to provide better flexibility.”
One developer explained how he chooses a status. He can Approve but still leave comments, and that sends a different message from choosing “With Suggestions” and leaving comments.
The way I do code reviews is as follows:
• Just comments – I’m starting a conversation on the change, and will Finish it later. Usually I am expecting you to reply, so feel free to reply to my comments. I usually choose a “finish reason” after discussion.
• “Looks Good” – just check it in.
• “Looks Good” + Comments – just check it in, but I had something to say.
• “With Comments” + Comments – there are minor things, style/formatting, that I'd like changing; please make where appropriate (I can be wrong) and check in. I don't need another review.
• “With comments” + No comments – I am agreeing with someone else’s comments, or if I was first, I probably clicked the wrong button – check with me for clarification.
• “Needs work” + Comments – please make the suggested changes and submit a new Code Review.
• “Needs work” + No comments – agreeing with someone else’s comments, or if I was first, I probably clicked the wrong button – check with me for clarification.
Brutal PR Comments
John If you want something done, do it yourself. Yesterday at 10:06
John Well, this shows that you did not even attempt to OPEN the DB Utility Tool, let alone test via the DB Utility Tool. It would crash opening this. Line 351 | 5 hours ago
John I have not reviewed any of the C# code, I expect it to be as depressing as the DB code though. 5 hours ago
John What the hell is this doing in here? Also why have you stopped inserting the ONE important thing from this trigger - the change to the organisation! Line 101 | 5 hours ago
When To Do A Project Review
When it comes to merging an entire project, this can consist of hundreds of files. Despite being reviewed by team members, the final merge will be reviewed by an Expert. We have tried to get Experts involved early in projects, but since a project can take a long time and the deadline is far away, they aren’t inclined to do it. Then, when you want to release in the next few weeks, they review it and dump loads of comments on it, blocking the merge.
“This is taking a long time and there are quite a few problems with it, nothing that can’t be fixed in a week or so, but some things that should have been flagged up by someone / thing before it gets to me. This process has to change.” – Expert
You probably need to ensure that each team has an Expert Reviewer in the team, so that quality reviews are done throughout the project. We often didn’t have teams structured in this way.
“they need to stop having teams that don’t have at least one person who knows what they’re doing” – Dan
One of my colleagues wrote the following about this issue. He often gets blamed for holding up projects when he is being asked to review with limited time. Any feedback he gives then blocks the project, and if he doesn’t review in time, then he also blocks the project:
Mike’s Pull Request (PR) ideas
For the most part we are doing a good job with pull requests, but occasionally I feel we can do better. I’ve thought of some useful guidelines that will ensure quality. Again, most people are following these, so great job, but please keep them in mind.
PR Guidelines
Your team should be self-sustaining:
As a developer you should always be able to send your PR to someone in your team for a thorough review.
If you’re regularly having to pull in external resource to review your code, you should make your team leader/scrum-master aware, so they can discuss this with resource managers.
Code should always be reviewed internally before someone external to the team is added to the review, this ensures that the external reviewer only sees code which has survived the initial review pass.
If external expertise is required:
Let your team leader/scrum-master know that expertise is required, identify the person with expertise and contact them to let them know you will require their time, preferably a sprint in advance, so they can discuss with their team and prioritise time in the next sprint.
Your PR is not “urgent” unless its SLA is at risk of expiring.
You are not to refer to external reviewers as “a blocker”. If external expertise is required, then it is an essential part of the development process, and they are only seen as blockers due to poor planning.
Draft PRs are not very useful to external reviewers, since you can only comment on them, not approve them; but they’re great for sharing day-to-day progress updates between remote developers.
They should be used to update your team’s senior developers and technical architects on your progress, and receive feedback.
I would say that in a well-oiled team, developers should share code each day, by some mechanism that makes it visible to their seniors for feedback. This ensures valuable short feedback cycles and is the most cost-effective way of ensuring quality during development.
Respect the reviewer
I think a key takeaway from this idea is that you need to respect the reviewer. They are kindly dedicating their time. You also need to understand that the review process is there to improve code quality and not just a box-ticking exercise.
I find that sometimes people create a review, then instantly message you about it – even though you are often notified through alerts you have set up, or will check your review list when you have free time. Being nagged is not nice.
There have been times where I have submitted comments and am then messaged a few minutes later asking me to re-review. If you ask that quickly, then I know you didn’t even build your changes, never mind test them to see if they work. Should I really approve something you took no care with? (Maybe a 100% unit-tested solution would make this possible, though.)
We also usually have a rule that two people need to review, so even if I approve it, it still cannot go in; I hate being nagged to approve when there is still time. Sometimes code needs more thought, to consider whether there are more aspects to it than initially apparent. A rushed review isn’t a quality review.
Making statements like “please approve this, it needs to be released tomorrow” isn’t good for the reviewer. I want to be able to review it properly, leave comments as I wish, and even Reject it if I really don’t think it will work.
Conclusion
If you see reviews as just a box-ticking exercise, then that defeats the whole point of the review. It really needs buy-in throughout the team. If you want quality and continuous improvement, then support the review process. If you want speed, you can sacrifice the process, but at the expense of quality and the other benefits.
The code review process has a wide range of benefits and outcomes: teams see improved code quality, increased knowledge transfer within and across teams, more significant opportunities for collaboration and mentorship, and improved solutions to problems.
In my blog How To Make Your Team Hate You #3, I wrote about Barbara, a Tester who I used to work with that caused a lot of conflict and was constantly trying to get out of doing work, whilst taking credit for other people’s work.
Recently, when going through old chat logs, I found some brilliant “dirt” which, in hindsight, I could probably have used to get her sacked, because it was fairly strong evidence that not only was she not doing work, she was falsely passing Test Cases. When you are paid to check that the software behaves correctly, falsely claiming you have tested it is very negligent.
When running test cases, if you pass each step separately, and haven’t disabled the recording feature, Microsoft Test Manager would record your clicks and add it as evidence to the test run.
I think the feature worked really well for web apps because it can easily grab the name of all the components you clicked, whereas on our desktop app, it mainly just logged when the app had focus and read your keystrokes.
The bad news for Barbara, is that she liked going on the internet for personal use, and liked chatting using instant messenger as we will see.
The Remedy
Type 'Hi Gavin. ' in 'Chat Input. Conversation with Gavin Ford' text box Type 'Hi Gavin. I've been telling everyone about this concoction and it really worked wonders for everyone that's tried it, myself included. This is for cold, cough and general immunity. 1 cup of milk + 1 tablespoon honey + 1/4 teaspoon of turmeric - bring to a rolling boil. Add grated root ginger (2 teaspoons or 1 tablespoon) and let it boil for another 5 mins. Put thru sieve and discard root ginger bits (or drink it all up if you fancy), but drink it hot before you sleep every night and twice a day if symptoms are really bad. Hope you feel better soon. 🙂 ' in 'Chat Input.
Pumpkins & Tetris
Type 'Indian pumpkin growing{Enter}' in 'Address and search bar' text box Type '{Left}{Left} {Right} {Left}{Left} {Up}{Up}{Up}{Up}{Up}{Up}{Left}{Left} {Up}{Up}{Up}{Right} {Up}{Up}{Left} {Right}{Right} {Up}{Right}{Left}{Left}{Left}{Left} {Right}{Up}{Left}{Left}' in '(1) Tetris Battle on Facebook - Google Chrome' document
Me 11:26: Barbara has been doing the Assessment regression pack for 3 days she says there is only a few left in this morning's standup. There's 15 left out of 27
Dan Woolley 11:28: lol
Me 11:29: I don't even think she is testing them either. It looks like she is dicking about then clicking pass
Click 'Inbox (2,249) - [Barbara@gmail.com]Barbara@gmail.com - Gmail' label Click 'Taurus Horoscope for April 2017 - Page 4 of 4 - Su...' tab Click 'Chrome Legacy Window' document Click 'Chrome Legacy Window' document Click 'Close' button Click 'Paul' label in the window 'Paul' Click image Type 'Morning. ' in 'Chat Input. Conversation with Paul' text box Type '{Enter}' in 'Chat Input. Conversation with Paul' text box Step Completed : Repeat steps 6 to 19 using the Context Menu in the List Panel End testing
Next Day
Me 12:42: Barbara said this morning that all the Assessments test cases need running. She has just removed them instead
Greek Salad
Type 'greek salad{Enter}' in 'Chrome Legacy Window' document Type 'cous cous salad' in 'Chrome Legacy Window' document Type 'carrots ' in 'couscous with lemon and coriander - Google Search ...' document
Click 'Vegetable Couscous Recipe | Taste of Home' tab Click 'Woman Traumatized By Chimpanzee Attack Speaks Out ...' tab
Marshall 11:50: oh damn haha these are things that were inadvertently recorded?
Me 11:51: yeah
Marshall 11:51: ha you've stumbled upon a gold mine
Me 11:53: I don't think she is actually testing anything. I think she just completes a step now and then the other day Rob went to PO approve an item and he couldn't see the changes because they hadn't even patched
Haven’t Been Testing From The Start
we are in Sprint 8 and Barbara suggested Matt does a demo on the project so we know how it works; it’s a right riot
Me. 4 months into a project
Bad Audits
I wonder if Barbara was inconsistent with how she ran the test cases, or realised by the end that it tracked you. So near the end of her time, she was just hitting the main Pass button rather than passing each individual step. Managers liked the step-by-step way because if you mark a step as failed, it is clearer what the problem is.
Me 16:15: Barbara called me. Matt is monitoring our testing!
Dan Woolley 16:15: how?
Me 16:17: looking at the run history she said he was complaining it wasn't clear which step failed because we were just using the main pass button, and also bugs weren't linked when they had been failed I told Barbara I linked mine, then she checked and said it was Sam that didn't. I checked and saw it was Sam and Barbara so only the developer did testing properly 😀 you just can't get the staff
Obviously The Wrong Message
Me 09:12: Bug 35824: Legal Basis text needs to be clear what's all that about?
Barbara Smith 09:12: Charlotte asked me to raise it for visibility We need to fix the text that appears on that tab
Me 09:13: what's wrong with it?
Barbara Smith 09:21: It says that on the Bug LOL And with a screenshot (mm)
Me 09:22: it says "needs to be clear" and has a screenshot with a part of it underlined. But it doesn't say what the text should be instead.
She rarely logged bugs because she did minimal testing. Then when she did log something it didn’t have enough info to be useful.
Karma
Barbara got well conned in the end. She was going to take the entire December off but delayed it until the end of the project, and then she was told she had lost her job, so they told her to take the holiday straight away. She had just bought a house, so she would have been relying on the money for the mortgage payments. Luckily for her she got accepted for a new job, but she was looking for a brand new way of getting out of it, as we will see below.
Tax Fraud
Type 'what if I don't contact hrmc about my tax{Enter}' in 'Address and search bar' text box
Sam 11:23: Ha ha You are savage
Me 11:24: she is gonna get jailed for tax evasion
Recently, my employer has been looking to analyse their impact on the environment and the aim is to become carbon-neutral. A group of people have taken ownership of this idea and call themselves “Green Champions”.
During the launch of our Sustainability Strategy, we announced our environmental goal: “Environmental sustainability is an integral part of our operations and value chain delivered through steady, measurable improvement”.
I find a few of their announcements a bit misleading, or fairly random with what they take issue with.
For example, someone requested a “sharable greeting card” idea. These would either be physical cards people can send, or something similar to email templates we can send to each other for events such as Christmas. This idea was declined.
“Due to the environmental impacts from sending mass communications through mail or email, this will not go ahead”
We keep hearing about how we need to cut down the number of emails we send because of how bad they are for the environment – but I don’t understand the logic.
Me: Why are emails always said to be bad anyway? does sending a Slack message cause the ozone layer to deplete as well? Can you architect me a Green Email system? think this is gonna be the next big idea GreE-nm@il The latest big tech company
Architect: what a load of absolute bollocks! just justifying not spending money if emails are so expensive how much electricity is wasted by the "cameras-on policy"
So emails are bad. Instant messaging is fine. Video calls are encouraged.
Travel
Is that really what we should be focussing on anyway? Recently, the entire UK business travelled to one location for some presentations which we could easily have done remotely. Then a few months later, most of the directors and some senior leaders flew to India to do the same presentations. The emissions caused by all the cars/coaches/planes etc, and all the money wasted on hotels and food expenses, are surely a bigger problem than sending a few emails for special occasions.
Cars
We have also replaced all our company cars with electric ones, and discounts were available for people to personally purchase an electric car. We now have charging stations at the office, and a few people seemed quite eager to travel in to the office just so they could charge their car for free. Isn’t that encouraging more unnecessary travel, and increasing the company’s electricity bill?
Whose problem?
“our estate is now fully in the AWS cloud, a huge milestone on our road to net zero”
Green Champion
Isn’t that like dumping your rubbish in your neighbour’s garden?
This brings us to another point: if you have transferred a carbon footprint from one company/person to another, then the problem still exists. We claimed that moving our servers from on-premise to the cloud reduced our carbon footprint, but the servers are still there; they just belong to a different company. There could be savings elsewhere, though, because our servers were on 24/7, and a big selling point of the cloud is auto-scaling (high demand uses more servers, low demand uses fewer). Then again, surely you could use that approach on your own servers; it was just that we didn’t.
Are Electric Cars even environmentally friendly?
Let’s call upon AI to write part of the blog…
Electric vehicles (EVs) have been hailed as a cornerstone of the transition to a more sustainable future, promising a reduction in the carbon footprint associated with personal transportation. However, the environmental impact of EVs is a complex subject, with various factors that could potentially diminish their green credentials.
One of the primary concerns is the carbon emissions associated with the production of EVs, particularly the batteries. The manufacturing process for EV batteries is energy-intensive, often relying on electricity generated from fossil fuels. Studies suggest that the emissions from producing an electric car can be up to 70% higher than those from manufacturing a traditional petrol vehicle.
Another point of contention is the source of electricity used to charge EVs. In regions where renewable energy sources like wind or solar power are less prevalent, the advantages of EVs in reducing greenhouse gas emissions may not be as pronounced.
Furthermore, there is the issue of battery disposal and recycling. EV batteries contain hazardous materials, and improper disposal can lead to environmental contamination. While recycling programs are developing, the infrastructure is not yet widespread, and the process itself can be resource-intensive.
Earth Day Blog #1
A colleague posted an internal blog on what they did for Earth Day.
Here in my local town, they had an event at the Town Hall where lots of local groups gathered to raise awareness and share what they do in particular.
The Thirsk Wombles work tirelessly to clear rubbish from our town. I had no idea what a problem the disposable vape containers are. The Thirsk Wombles have collected a really big boxful in the first 20 days of April and the lady I talked to reckons they will be able to do that and more every month.
I then had a lovely long talk about "North Yorkshire Contented Bee Project" and bought some amazing local honey - very few food miles, masses of taste and it'll help with my hayfever.
Earth Day Blog #2
My personal passions are aligned with the department I work within. I wanted to share today an aligned post for Stress Awareness Month and Earth Day next week about eco-anxiety.
Eco-anxiety (or climate anxiety) is a feeling of distress that comes from thinking about environmental breakdown, based on what we see happening around us.
It is impossible to ignore the information we receive via news, social media etc. that our planet earth is in trouble. We hear that the planet is warming up, about freak weather conditions, wildlife species declining and becoming extinct, overpopulation, deforestation, and the list heartbreakingly continues. The effect our modern lives are having on the planet is now catching up with us and it is hard to ignore the information we are seeing. So much is now being documented via TV programmes such as Planet Earth and The Earthshot Prize initiative.
I, myself, hold my hand up and admit I have feelings of sadness and guilt about the impact modern life is having on the planet. Every day I make a conscious effort to review my recycling, plant more native biodiverse plants, use less aerosol products, review the products used in my home to reduce the amount of microplastics and chemicals down the drain, say no to fast fashion, reduce my heating by 1 degree and have No Meat Mondays.
Despite all this I know I can do more. But where do I start and do my small actions help?
To all of you reading today...every small action helps. As the famous saying goes ….Knowledge is power. I have learnt so much through the Green Champions about what else can be done, alternative products and, more importantly, that there is a group of people who have the same passions and feel the same. The Wellbeing Programme and the Mental Health First Aiders’ invaluable content during Stress Awareness Month assist me in navigating eco-anxiety.
If I may pass on any nuggets of inspiration to you today, it is that you are not alone in any types of stress or anxiety felt. I assure you many people feel the same and change is possible and 100% can be achieved.
Fear of Climate Change - Climate change and the state of nature is having an impact on mental health - Watching the world change sometimes combines with feelings of personal guilt. - Witnessing climate indifference may evoke feelings of anger, powerlessness and hopelessness. Leading to being uncomfortable and overwhelmed - Aligns with Stress Awareness Month and Earth Day - Speak up and seek support. Take action — even the smallest contributions make a difference. - Your feelings are a healthy response to this topic. Our MHFAs are 100% available to talk to. - You are not alone
Closing Thoughts
Eco-anxiety sounds very problematic. How can you live life with that much worry? It’s really not a healthy mindset to have. There are loads of other issues in the world too. Does she spend all her time crying over child poverty, which she has no control over, whenever she sees food? How many things does she do that are actually bad for the environment without being aware of it? Does she drive an electric car, thinking it is 100% eco-friendly?