What happened to the revolutionary software?

I work at a software company whose flagship product (which I’ll refer to as SystemNow) has been around for maybe 20 years now. Way back in 2016, the software architects began a project to look into what the next product would look like (ProjectFutures), and development really started ramping up around late 2018/early 2019.

I reckon it took around 5 years of development for SystemNow to be initially released (then continuously enhanced and fixed), so you might think 2023 could have been a good release date for ProjectFutures – its core features at least.

However, in 2025, as I began drafting this blog, managers were claiming that ProjectFutures had been “released” and was a “success”. In my opinion, it’s about 10% of the scope, and it’s barely spoken about. 2025–26 has seen little progress.

I think it’s been an absolute disaster, and I do wonder where it stands among other high-profile software disasters. So I thought I’d reminisce and go back through the many poor decisions and failures along the way. There were all kinds of failures, and you probably can’t point your finger at any one aspect or decision, but many intersect.

Why Rewrite It?

When we have a very successful software product, why rewrite it? Here’s a quote from Cory House that explains some of the reasons behind the initial drive:

“Each year, I work with a few teams that are basically doing the same thing: Rewriting old web apps to use React. The old apps use ASP NET Web Forms, JSP, jQuery, Cold Fusion, Perl, etc. Often, the requirements are simple: “Make it work like the old one”. 

This might seem silly. If it ain’t broke, don’t fix it, right? But there are good reasons:

  • They’re having a hard time hiring and keeping good help.
  • It’s harder for them to add new modern features compared to modern technology.
  • The old tech isn’t actively supported.”

Internally, I’ve heard several of our Software Developers complain about “Technical Debt” (or the perceived amount of it), which was the excuse for features coming out slowly, or for introducing more bugs as we fixed other bugs. Common excuses: “this code is messy and hard to read or change”, “there’s a lack of tests, so if I change it, some requirement I didn’t know about may break”. The “old tech” was often criticised for contributing to a slow deployment process. We did lose a few developers who wanted to go and work on modern web technology using Cloud and other buzzwords.

So for a company with previous experience making a market-leading software product, and with a large team of passionate developers who think they know what great code looks like, it should be pretty easy to deliver, right?

Failure Reasons

Reason 1: Skills in the wrong areas

SystemNow was a C# front end with a SQL Server backend. The vision for ProjectFutures was a React (JavaScript) frontend using various Cloud services (Lambdas, APIs, DynamoDB). Some developers have the capacity to learn quickly and could instantly claim to be a Senior in the new language, but most people basically go from Senior in C# to Junior in React. If most people are Juniors, you end up writing code that is full of technical debt, causing the exact same problem you criticised the old SystemNow code for. (I think this was proven by the restarted and cancelled projects, as explained later.)

As a similar point, we also hired people too quickly. We severely reduced hiring in the UK and went for hiring in India. Being Indian isn’t inherently bad; the issues were that some hires were poor communicators, and people had to get used to Indian English grammar and phrases (e.g. “do the needful”, “preponing meetings”). The real problem is that as a business we wanted to hire large numbers at once, which leads to an “accept all” type of hiring. Combine that with wanting to pay low wages, and you end up attracting people who can’t get a job elsewhere. So you get loads of Junior developers all starting at once, some with no experience of making software at all, and some with only a basic understanding of a programming language we aren’t even using, e.g. Python. There were some good hires, but I think it would have been better to go with far fewer staff but higher-quality engineers who could really drive the decision making.

Reason 2: The Cloud

The established way of working was that developers created the software, then gave the build to the Deployment team to release. The clear separation means you can specialise your knowledge.

The mindset of moving to “the cloud” means that Developers now have to have more understanding of how their code is actually deployed.

Historically, specialist “Frontend” developers and specialist “Backend” developers were gradually replaced with more generalist “Full Stack” Developers. Now you are throwing servers and deployment processes into the mix (DevOps), so it’s like “Full Stack plus DevOps”.

Due to this, you spend more time learning, more time investigating, and produce worse output because you don’t specialise in any of it.

The world of cloud computing is full of buzzwords, especially if you use multiple providers (Azure vs AWS naming is completely different). This is extremely daunting and gives you a massive list of things to learn before you can even begin to communicate with people.

No idea where I stole this from, but I think it sums up the attitude that managers had:

The company’s technical leadership thrives on keeping tabs on the rapidly growing technology industry and new innovations built on top of fully-managed services provided by cloud providers. Like many other technical leaders, they are keen on learning new buzzwords and leaving it up to their developers to do something useful with them. The hottest trend these days seems to be ‘serverless’. The promise of consumption-based pricing, where you only pay for what you use and nothing more, is enticing. Furthermore, they have heard many good things about how serverless platforms help you prototype and develop faster by reducing the amount of code and configuration required by more traditional web platforms. The cherry on top is the fact that serverless platforms also tend to manage scaling resources to meet demand automatically, giving them dreams of releasing a wildly popular new ride share app and enjoying near-instantaneous customer growth.

We were a massive company and had our own Data Centres. I think Cloud is vastly beneficial to small companies which cannot invest in the up-front cost of their own server room, but can easily scale up if demand drastically increases. We already had the resources for large demand, but then moved to the Cloud just for the sake of it. I think managers were so obsessed with this “serverless” buzzword that they demanded it be the focus, rather than designing a solution with an open mind and using whatever was necessary to achieve the objective.

Reason 3: Fail Fast, the problematic developer, and working on tools/process

When we began development on ProjectFutures, managers kept telling us that a “culture of innovation” means “there is no such thing as failure”: you can try new things, and as long as you feel you have learnt something, it’s not a bad thing, because failure then leads you to doing the correct thing.

This flexibility of being able to choose our tools and approach was taken a bit too literally, or to an extreme. You had multiple teams investigating the same ideas. You also had teams successfully implementing something, then rewriting it using something else just to compare.

There was one problematic Principal Developer they hired, called Liam, who seemed to have massive influence. Liam always persuaded people to investigate a different tool from whatever we were currently using. Then, once people had made progress, he would recommend something else. He wasted so much time, and he’d argue his case using some strange analogies too.

They are the same as chocolate bar wrappers. You need it for the delivery of chocolate, but once used, you get rid, otherwise it’s just messy 😉  

Also I see a load of chocolate bar wrappers on a desk and I instantly check them all for chocolate.. Friends don’t let friends do this.

He’d often cause conflict, switch teams, then repeat his behaviour of changing all the tools and processes before moving on once more. Instead of just cracking on and actually writing the software, we wasted months, even years, switching between different tools which were then abandoned.

“these guys makes me wonder if I can even call myself a developer if I just don’t care about processes and tooling. To me right, I consider us to be like master carpenters building beautiful cabinets. And these guys are having a massive debate about the lockup out the back where we keep the wood. Another problem is people just getting excited over every different bit of tech 

grunt 

WTF IS GRUNT????????? 

gulp too 

I’m sure some of these are being made up just to piss me off. I bet a lot of people can’t even remember how to code any more. All they can do now is manage Nuget packages”

Dean (Senior Developer)

I’m sure most developers don’t actually care how their code is deployed, just that it is deployed and is easy to sort out when things go awry. Most teams were using GitHub Actions, but Liam demanded that teams he worked with switch to Jenkins, and even convinced many managers that this should eventually be pushed out to all teams. However, as teams began to switch over, managers realised that Jenkins’ self-hosted nature was costly in resources and comes with even more overhead of actually maintaining it. So then they had a change of plan.

“The current Jenkins solution is estimated to cost $2,126.83 a month just in AWS resources.”

“(By not using Jenkins) we can become a leaner outfit that can deliver value to the business quicker by focusing on our own strengths which is delivering software, not CI/CD software”

After exploring different options, they realised that GitHub Actions made the most sense to use after all because our code is in GitHub so there’s no cross-account security issues to worry about, and we already had some free-build minutes in our subscription.

 “So far we’ve been told to do ADO -> AWS CodePipeline -> Jenkins -> ADO. So we’re excited to try Github Actions next!” – Lead Developer

A month later, the manager who announced we should be using GitHub Actions then said his team had been exploring using Netlify for his next project. Facepalm.

When the problematic Developer Liam eventually did quit the business (or, we think, was forced out), one manager seemed hell-bent on undoing everything he implemented. Another idea Liam had was a Tech Radar. It was basically a decision log of sorts, listing all the software/tools we had tried, the pros and cons, and why they were chosen or declined. If other teams looked at it and maintained it, they shouldn’t waste time exploring tools that had already had a decision made about them. This was also binned off due to:

“The Tech Radar required a significant amount more administration and the rules around reviewing were not sustainable, we were not able to commit to the reviews when it was in full flow.”
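For what it’s worth, the decision-log idea itself is sound. Here’s a minimal sketch of what such a log could look like; the entry shape and the adopt/trial/assess/hold rings are my own illustration (loosely borrowed from the well-known Thoughtworks Tech Radar), not our actual implementation:

```typescript
// A Tech Radar entry is essentially a decision-log record.
type Ring = "adopt" | "trial" | "assess" | "hold";

interface RadarEntry {
  tool: string;
  ring: Ring;
  decidedOn: string; // ISO date of the last review
  rationale: string; // why it was chosen or declined
}

const radar: RadarEntry[] = [
  { tool: "GitHub Actions", ring: "adopt",
    decidedOn: "2023-01-15", rationale: "Code already in GitHub; free build minutes" },
  { tool: "Jenkins", ring: "hold",
    decidedOn: "2023-01-15", rationale: "Self-hosting cost and maintenance overhead" },
];

// Teams check the log before spiking a tool that was already rejected.
function lookUp(tool: string): RadarEntry | undefined {
  return radar.find(e => e.tool.toLowerCase() === tool.toLowerCase());
}
```

The administration burden the quote complains about is real, but even an unreviewed log like this would have saved the teams who re-investigated Jenkins from scratch.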

Reason 4: Thinking other tools are expensive

If you asked me to come up with a plan to remake SystemNow, or similar existing large-scale complex software, I would try to limit the initial scope by reusing as much of it as possible. You could create a new user interface, but when it comes to saving the data, still call your current servers. So we could have replaced the UI with a website using React, but still used our existing C# server code and kept the SQL Server database.
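As a rough sketch of that “new UI, old backend” idea: the React front end would just call the existing C# endpoints. The endpoint path and record shape below are made up for illustration; the real SystemNow API will obviously differ:

```typescript
// Hypothetical shape of a record returned by the existing C# server.
type UserRecord = { id: number; name: string };

// Assumed route on the old server, e.g. https://host/api/users/42.
function legacyUserUrl(baseUrl: string, userId: number): string {
  return `${baseUrl}/api/users/${userId}`;
}

// The new React UI fetches from the old backend instead of a rewritten one.
async function loadUser(baseUrl: string, userId: number): Promise<UserRecord> {
  const res = await fetch(legacyUserUrl(baseUrl, userId));
  if (!res.ok) throw new Error(`Failed to load user ${userId}: ${res.status}`);
  return (await res.json()) as UserRecord;
}
```

The point is that nothing in the React layer cares whether the data comes from a shiny serverless stack or the twenty-year-old C# one, so you can modernise the UI without rewriting the backend at the same time.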

I’d also look to basically outsource part of it too. If there is a company that specialises in something, it makes sense to pay them to do it, rather than making an inferior version yourself, and having the overhead of maintaining it. 

You can always decide to implement these yourself later if you have no other projects to do.

If you try to replace the entire system yourself, it will extend the timescales further and you run the risk of failing to deliver at all.

To me, authentication is an ideal candidate for this. Originally we used Okta, but a few months later, a Software Architect created a project to build our own authentication. So that’s several developers spending months of their time making their own security solution, and none of them are security experts. What could possibly go wrong?

The reason I heard for this quick switch is that Okta was far too expensive: it was projected to cost around £15k per month. This sounds mad. However, when I mentioned it to one of our Architects, he said “that equates to around 1p per user per month”. I suppose when you look at it like that, it does sound cheap.
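The back-of-envelope maths supports him (the figures below are the rough numbers quoted internally, not exact pricing):

```typescript
// "£15k per month" vs "around 1p per user per month":
const monthlyCostPence = 15_000 * 100; // ~£15k per month, in pence
const costPerUserPence = 1;            // the architect's "1p per user"
const impliedUsers = monthlyCostPence / costPerUserPence;
// impliedUsers works out to 1,500,000 users. At that scale, 1p a head is
// hard to beat with months of in-house developer time.
```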

I don’t know how much we truly saved once you account for paying several developers for the length of time it took to build the solution. It’s not even that simple, because you then have to hope it works, and hope we never get fined for a security issue. There’s also the “opportunity cost”: they could have been working on actual functionality, not just allowing the user to log in. It surely wasn’t a priority at that moment, because we had Okta working and only Developers were using it to log in as we developed.

UI Suite

Another thing we made in-house was the “UI Suite”. This refers to the basic components like text boxes, combo boxes, buttons, icons, and the styling. Now, what is the cost of using a third-party library? Most of them are free, yet it took YEARS for us to create something decent. Every time I heard about a basic bug we’d introduced, I was just facepalming, since we wouldn’t have had any of these problems if we’d just used MaterialUI.

As the UI Suite was being built, teams were reluctant to use it because it barely had any components. It really needed to be feature-complete by the time teams actually started their projects. When components did exist, the bugs slowed down all the development teams, which caused some to use something else as a stop-gap. But without many teams using it, adoption by other teams was stifled too. Whose idea was the “UI Suite” anyway? Yeah, it was Liam’s again!

Within a few days of me using it, I reported that the combo box did not allow you to select an item, and the text box didn’t allow you to change text programmatically. Another team reported that the Table component didn’t display unless you manually called the Resize method as a workaround. The Table also was a fixed size due to a hardcoded value. Then, once that was fixed, the column headers were not resizing correctly. Another team reported that the List component’s itemSelected property didn’t work when set programmatically.
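I can only guess at the internal causes, but the “set it programmatically and nothing happens” symptom usually comes from a component copying a prop into internal state once and never re-reading it. A hypothetical sketch in plain TypeScript (no React; all names invented):

```typescript
// Buggy: captures the prop in the constructor and never refreshes it.
class BuggyList {
  private selected: string;
  constructor(public props: { itemSelected: string }) {
    this.selected = props.itemSelected; // copied once...
  }
  setProps(props: { itemSelected: string }) {
    this.props = props;                 // ...and forgotten about here
  }
  getSelected(): string {
    return this.selected;               // stale after setProps()
  }
}

// Fixed: always derives the selection from the current props.
class FixedList {
  constructor(public props: { itemSelected: string }) {}
  setProps(props: { itemSelected: string }) {
    this.props = props;
  }
  getSelected(): string {
    return this.props.itemSelected;
  }
}
```

With `BuggyList`, setting `itemSelected` programmatically after construction has no visible effect, which matches the behaviour that was reported.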

To me, it’s inexcusable that a combo box won’t allow you to select an item. Severe bugs like that just slow down your development. Then there’s the opportunity cost: the team could have been assigned to a meaningful project, and we could have been using a popular UI library like Material UI that we know works.

At a Development Department meeting, one angry developer asked:

“Is having a UI Suite team a pragmatic allocation of an entire team’s knowledge and resources?”

This question triggered a 30-minute meeting the following week to try to justify why the UI Suite was important. It was full of blagging from the manager, because no one could really think it was a good idea. In my opinion, if you have to create a presentation 3 years into a project to convince people it is a good idea – it probably isn’t a good idea. He acknowledged there was “growing concern” across the department that people “didn’t think it was a productive use of our developers and there are already ready-made solutions available”.

I’m not even sure the team making it believed they could actually complete it. Two members had left, which meant they were down to: an experienced Developer, another Developer who had spent most of his career in UX design (drawing, not coding), and a Junior Developer who had just begun her year off for maternity leave. Then they had a Product Owner, a Technical Manager, and a Manager overseeing it, which inflated the costs and bureaucracy.

His main slides stated the following:

Why UI Suite?
Why not? We're not all the same.
• There are several design systems out there already such as Bootstrap and Material - They all have their pros and cons
• Someone else's idea of a design system may not always work for us; having our system gives us the flexibility to define our own style
• Creative freedom to ensure our products stand-out from the rest, and are easy to use
• Large organisations such as the NHS, Government, Atlassian and Uber have their own design systems because it is right for them.

“is that right for us? eeeeerm…………………..”

manager struggling to blag

My View: I think he was blagging justifications. MaterialUI has been around for years and you can apply your own theme, so I don’t understand this “design system” nonsense. You could also make tweaks to it because it is open source, and even fork it if there are things you don’t like.

He then implied they were having a reboot of sorts and changing direction, but I’m not sure what that entailed. But if the UI kit isn’t ready, then how can people develop their apps effectively?

As more teams used it, the backlog of requests grew, and other teams were asked to fix bugs themselves. The claim was that the project was always intended to be “open inner source” and all teams needed to contribute. In September 2024, the UI Suite team disbanded, and now every single new feature or bug fix needs to be implemented by the teams that want it (or you could just abandon it and use Material UI). The messaging from the managers was that a “lack of shared ownership” was why the UI Suite failed in its current form.

“This change is made with the best intentions… It’s also just the start of a journey. As we learn and identify challenges, we will adapt and improve the approach.”

More recently, I noticed that some of the components were now simple tweaks of another third-party UI library, so I think they binned off their original implementations in their last reboot. Now they are on a second reboot. “It’s also just the start of a journey.” What a waste of 5–6 years.

Reason 5: Deprecating unreleased functionality

Due to the idea of “microservices”, people went wild creating repositories for everything. It does make sense to have general shared functionality, like “logging” for error logging, which everyone can use. But some projects ended up consisting of multiple repositories because the idea was taken to extremes. With the sheer volume of repositories under our organisation, it became difficult to see what was actually useful. Just like the UI Suite, we had rebooted projects multiple times, so there were several repositories for “logging”. You’d see similar names like “logging-sdk”, “logging-sdk-typescript”, “logger”, “monitoring”. Did one replace the other? Are they for different scenarios? Are they just poorly named?

Occasionally, someone would announce something like the following and you do wonder who was using it and why no one had deleted it earlier:

“Hello, the client-logging package exists within the logging-sdk-typescript. It hasn’t been updated in almost two years, and more importantly it offers no functionality to store logs so it’s not fit for purpose. This client-logging package was meant as a proof of concept. We have no more need of it and all it offers is a false sense of security, so we will be deleting this package at the end of the month”

Intermission: Q4 2021 – Q2 2022 – Hyping Up Basic Features: Powered By ProjectFutures (and the framework reboot)

“This week we activated <new feature> which is a new web based application, powered by ProjectFutures cloud technology, and integrated into SystemNow.”

“Powered by ProjectFutures” sounds like jargon to sell to customers, but it was repeated by the managers internally. It was like we were being gaslit about our actual progress; there was nothing noteworthy at this point. I got sick of hearing it, and we all knew it didn’t mean anything. Even when they didn’t use that phrase, it was just the usual corporate buzzwords and pretentious tech jargon:

“The teams working in this Application Composition Platform space are doing some amazing, innovative work focused on future technology that will allow us to respond rapidly to a changing market. Keep coming back to see how this develops and please ask any questions you might have.”

Check out this guff:

“I’ve worked here for nearly 20 years and have played a part in, or witnessed, a number of momentous achievements during that time. I’ve seen us adapt to a constantly changing market, grow as a business and continuously offer new and exciting challenges for our people. However, we have often struggled to move at the pace our users demand and this becomes more of a competitive challenge as new players move into the market. We have to continue evolving and never allow ourselves to become complacent about our position in the market.

Technology is a major enabler in allowing us to achieve business agility, giving us the ability to rapidly respond to market changes and user demand. We have recently taken an important step forward in using modern tech to our advantage with the first release of the Application Composition Platform (ACP), powered by ProjectFutures.

The ACP is a suite of technology enablers that allow us to rapidly build high quality cloud hosted web applications in a consistent design system. It is essentially a lego kit that gives Product, UX and Development the tools to quickly go from idea to design to implementation and then getting it into our users’ hands. It allows us to provide apps that are standalone or integrated/embedded within our existing product suite

Our first release provides a simple view of the SystemNow news feed in a standalone web app. This has been released to two sites so far and the rollout will continue once we've had a bit of feedback. Although this feature isn’t ground breaking, it allows us to prove the technology and confidently move forward with rapid app development. There are a number of dependencies to overcome, so that apps created in the ACP can easily access data through consistent APIs and this work is going on alongside development of the ACP.

This is the beginning of a journey to modernise our technology stack and take us to the next level of business agility. I will be providing further details this month on the next plans for the ACP during Q4 and into 2022. I have an incredibly talented team working in the app development space and I’m excited about what we can achieve.”

So what does all this jargon mean?

When I worked on ProjectFutures during 2019, I was working on what we called the Application Shell. It seems this was rebooted and renamed the Application Composition Platform (ACP) (powered by ProjectFutures, of course). All it was back then was basically a Home Page with some “Context”, so it could access the User Token to see which modules the user was allowed to access, and would populate the menu with the apps they could use. So a simple explanation: it was just the basic framework for embedding other applications in.

The original idea was that ProjectFutures would be a separate application that would only really be used once all the major modules were available (what we were calling “the Big Bang”: one day the user would just completely switch over and stop using SystemNow). Now the plan is that ProjectFutures is a “Companion” app, so it will run alongside SystemNow. We also have the option of embedding it inside SystemNow, so if we actually make an amazing module, we can replace SystemNow’s version. So now it’s a gradual phasing out.

Note: In the intro, I said the core modules should have been out by 2023. At this point, they have quickly pushed out an RSS news feed, by January 2022.

Dean:
Did they scrap your framework and do it again?

Me:
I think they scrapped everything we did. We called it Application Shell

Dean:
If you put it to your ear you can hear the distant sounds of developers crying.

In Summer 2022, our CEO was hyping up ProjectFutures to our customers. She was really fixated on the concept of Cloud, and on it not being a “big bang” approach. So she announced there is no longer a brand-new product; we will now just migrate the existing product “module by module”. The phrase “Powered By ProjectFutures” was going strong. It’s still just a meaningless buzzword, because the “technology” is just AWS. We haven’t got any innovative proprietary technology at all, but we just have to pretend. We do have the RSS feed, though.

“We’ve got an interesting and enjoyable journey with ProjectFutures which has evolved over the last 2-3 years. The Key to ProjectFutures strategy is moving systems into the cloud but we will avoid disruption by doing things in a gradual way.”

Our CEO made an announcement which was available to view on YouTube. The gist was that COVID had created extra demand on certain services, which required companies to increase innovation in technology and processes, and that this change had to happen rapidly. This was presented as evidence that software changes in today’s world need to happen faster: actual development time needs to come down, but so does the delivery time to get the change to the end user.

“Companies had seen time scales that would have previously been thought impossible”

Most of it was nonsense, like she claimed we had evolved our existing software to use ProjectFutures’s “Cloud Infrastructure”.

“So now we are “using the latest, trusted secure technology… we are building new modular components powered by ProjectFutures to enhance existing applications in the cloud that can support improved performance and enhancements….our development to release cycle is reduced so we can bring innovation to our customers faster. We no longer need to have a big bang approach. Instead we can build cloud modules that will interface with our existing systems and also with other supplier systems using industry standards retaining the familiar look and feel and the workflows for our users but at the same time being able to deliver additional functionality without impacting performance. We can reduce the impact of change on our customers releasing enhancements in an incremental way rather than building the entire new systems”

After that, customers will surely expect the Companion (ACP) to be populated with features pretty consistently, right? I mean, we are bringing innovation to our customers faster, and delivering in an incremental way.

Q4 2022

The Internationalisation project for SystemNow was a massive one, lasting a couple of years. They had already assigned more people to it, putting other projects on hold for it. Then, around October, they temporarily assigned developers to do code reviews and help out on various tasks to get it complete. However, a few months later, it was announced to be scrapped.

Instead, ProjectFutures was the focus, and would initially target England only. The focus would then return to Internationalisation, but those customers would get ProjectFutures instead.

Since SystemNow was the system bringing in money for the company, I imagine cancelling the contract was a massive loss. Surely there are financial penalties for the cancellation, then the loss of any future sales in other regions, plus the cost of all those developers working across all those years. They have really put faith in ProjectFutures bringing in the money years down the line.

Reason 6: The Big Restart (x2)

In addition to the cancellation of the Internationalisation project, there was some shuffling around of managers and teams, announced in a Development Department meeting. You’d think managers would be sacked rather than moved around, because surely having only a News Feed after 4 years is a shambles.

I do wonder if managers just blag each other. At the Department meeting, the newly promoted ProjectFutures director said “subsequent difficult market challenges that have led to attrition globally, this has led to delays in delivering the solution”. What is he on about? ProjectFutures hasn’t been impacted by any outside influences at all.

He then explained a new plan. It sounded like all work so far would be rebooted under a new internal moniker “ProjectFutures2”.

On the roadmap, launching straight away is a ProjectFutures product which is actually from a company we bought out last year, but we are slapping our branding on it so we can say it is Powered by ProjectFutures. Then there will be 3 small features embedded into SystemNow which are also Powered by ProjectFutures. 

In 2023, we will have 4 more inside SystemNow, as well as one that will actually run in the ProjectFutures framework alongside the News Feed: Instant Messaging.

In 2024, 3 massive core features will be delivered as well as some secondary modules.

By the end of 2025 we will have the remaining modules completed. So the start of 2026 will have the full system launch for new customers. 

Why will we succeed this time?
- Commercial driver to get it done
- Resourcing appropriately
- Focus on successful onboarding
- Well defined roadmap and features
- User research and design
- Mature technology choices
- Strong architecture input
- Proven approach with ACP and Companion

He said that with the extra Developers from the Internationalisation project plus a few new hires, there will be 134 people assigned to ProjectFutures.

“We also continued to work on our core UI capability which gives teams a consistent, modern suite of components built in React. These form the lego blocks of every app we’ve developed, and the integration between our UX designers and the team who looks after UI has been phenomenal” (as previously described, this “phenomenal” team disbanded in September 2024). 

Reason 7: LATENCY IS NOT A PROBLEM Q3 2023 

The original plan was that ProjectFutures would have its own data storage, but the obvious problem is “how do we get users to switch over to the new system?” There will be a period of time when they have to run them side-by-side in a training period and data needs to be available in both. 

Since we then had the philosophy of being able to migrate modules independently, we really needed ProjectFutures to write into the SystemNow database.

I don’t understand the complexities involved, because using the same database sounds like a simple solution. However, the Database Experts came up with the idea that ProjectFutures would write to the Primary database, and that data would then be copied to a read-only duplicate called a “Secondary”, which ProjectFutures would read from.

When teams were developing and testing on internal development environments, it all seemed to work fine: they would update data and see it reflected straight away in the User Interface. However, when they tried it on a demo site, they found that the data didn’t update until you manually refreshed. What was happening was that after the update (which goes into the Primary database), ProjectFutures then requested the new data from the readable Secondary database, which hadn’t yet synced with the Primary, and so retrieved the old data.

So the solution was to make sure that every page had a refresh button, and the user had to click refresh after saving any data. The data should be available a second later, so there shouldn’t be a need to keep refreshing the same page.
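This is textbook read-after-write inconsistency. A toy simulation (the stores below are hypothetical, nothing like our real schema) shows the mechanics, and the usual “read your own writes” mitigation:

```typescript
// Writes go to the Primary; the Secondary only sees them when sync() runs.
class Primary {
  private data = new Map<string, string>();
  write(key: string, value: string) { this.data.set(key, value); }
  read(key: string) { return this.data.get(key); }
  snapshot() { return new Map(this.data); }
}

class Secondary {
  private data = new Map<string, string>();
  sync(primary: Primary) { this.data = primary.snapshot(); } // replication
  read(key: string) { return this.data.get(key); }
}

const primary = new Primary();
const secondary = new Secondary();

primary.write("user:1:name", "Alice");
secondary.sync(primary);             // replication has caught up

primary.write("user:1:name", "Bob"); // the user saves a change...
const stale = secondary.read("user:1:name");
// stale is still "Alice": the replica lags behind, which is exactly
// what the refresh button papered over.

// "Read your own writes": after a write, serve that session's reads
// from the Primary until the replica catches up.
const afterWrite = primary.read("user:1:name");
```

The mitigation sketched at the end is one standard answer: route a user’s reads to the Primary for a short window after their own write, rather than making them press refresh.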

What makes this situation funny is that ProjectFutures is supposed to be an improvement in every way, and not seeing the current data when the user has just changed it locally is a massive step backwards. Another reason it’s funny: when the “readable Secondary” idea was announced, I asked a Software Architect about it because it sounded problematic to me. He said he had been arguing with the other Architects about it because the latency would be an issue, and he was repeatedly told that the delay is only a few milliseconds and so wouldn’t be a problem.

Missing features and various quirks just keep adding up to make the new modules inferior to the current solutions.

Reason 8: Not prioritising the right projects

Due to the restarts, the basic prerequisites weren’t there ahead of time, so you end up needing teams to do work almost in parallel.

One significant module was released by August 2023. In the internal announcement, the Development Director said that 11 teams were involved, alongside auxiliary staff in UX, Product and Architecture. Changes were needed in the Users API, SystemNow, Tracking API, Person API, Product API, Documents API, ACP, and UI Suite.

The way I see it, the ACP changes should already be complete, and be linked with a working Users and Person API. The UI Suite controls should be available and working. The Tracking (monitoring and auditing) is another fundamental module that should already be available. 

Due to delays, restarts, and prioritising modules that aren’t important, you get chaotic development just to deliver one module. Hopefully, now that the work is done, other modules should be able to be delivered without as many changes to other components.

We’d barely got a few modules out before managers were then talking about planning brand new functionality. A couple of teams had been assigned to research how AI could be used in our products. Speech to text is cool, right? How about predicting what tasks you need to complete next and drafting them out?

Why would we be working on brand new features rather than converting all the existing modules over? Because “AI” is the cool buzzword. 

Isn’t it just “shiny object syndrome” again? “Oooh Jenkins, let’s use that.”

Reason 9: The Load Q1 2024

One reason why SystemNow is criticised is the poor performance seen by many customers. There can be various reasons for this. Sometimes it is caused by a change that introduces more server calls, and it’s quite noticeable when it happens.

Rather than architecting the ProjectFutures “Companion” to share data with SystemNow, the Companion performs its own server calls. So if you log into SystemNow and select to view a User’s record, it makes a server call to get the data. The Companion then makes its own call to the server to get the same data so that the two apps are in sync. If we enable the Companion for all users in England, this is essentially going to double the server load in most cases.
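One way to avoid the duplicate call would be a shared cache that de-duplicates requests for the same record, so only the first lookup hits the server. A toy sketch, assuming a synchronous `fetchRecord` stand-in for the real server call:

```typescript
// Toy request de-duplication. If SystemNow and the Companion shared a cache
// like this, the second request for the same record would be served locally
// instead of hitting the server again. `fetchRecord` is hypothetical.
function makeDedupingFetcher<T>(fetchRecord: (id: string) => T) {
  const cache = new Map<string, T>();
  let serverCalls = 0;

  return {
    get(id: string): T {
      if (!cache.has(id)) {
        serverCalls += 1; // only a cache miss results in a server call
        cache.set(id, fetchRecord(id));
      }
      return cache.get(id)!;
    },
    callCount: () => serverCalls,
  };
}
```

With something like this, opening the same User’s record in both apps would cost one server call instead of two; cache invalidation is of course the hard part, but doubling every read is the worst of both worlds.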

In 2022, the Companion was released with an RSS feed, but this was just to a few pilot users. Then Instant Messaging followed much later. At the start of 2024, the decision was made to release it nationally to all customers in England at once. There were a few other modules available, albeit in Read-Only mode. 

“ProjectFutures is behaving extremely well, however, there is an incident currently opened against Identity Service in SystemNow as it isn’t handling the load from all the sites.”

Who would have predicted that?

It wasn’t the first issue like that. 

As an aside, someone shared the AWS bill. It was $20k a day! Imagine the cost when everything goes live. There are modules that are finished but not enabled. Or are enabled but in Read-Only mode. When there’s more requests and more users, then that price is gonna jump significantly.

We hyped up the functionality to say that if you have two monitors, you can have SystemNow on one screen and the Companion on another screen to view information as you are working in SystemNow. What was even more amazing is that you can copy and paste information across! Wow!

“This is a hugely valuable milestone for the business – our customers love this new tech, this is what will allow us to retain our existing customers and go out to win new business across the UK. The Companion App is the proving ground for ProjectFutures2”

Reason 10: Cumbersome development set up

“for anyone wondering why ProjectFutures itself seems slow, this is our integration environment. It’s lightning fast in Production. Just thought I’d add that – can’t help being protective over it.” – Development Manager

Why don’t we get a good test system? Isn’t it costly to slow our developers and testers down?

I get frustrated that we have authentication enabled on our environments too. When I’m developing, I don’t want to constantly log in. I don’t want to get logged out if I leave it for a few minutes either. 

Also, instead of having a set-up where we can run changes in isolation, we end up running most of our changes on a shared environment. Some weeks it has been a daily occurrence that the environment is broken in some way. You can’t log in. You can log in, but cannot select a customer. You can select a customer but the module won’t load.

We are constantly hindering ourselves.

Conclusion

In “The Big Restart” the Development Manager said: “the start of 2026 will have the full system launch for new customers”.

It’s now Q2 2026, and there hasn’t been much progress. There are still a few modules in Read-Only mode, and some that I heard are finished but were never released. Maybe they will be released. Maybe they will be scrapped. Time will tell.

There are many reasons that are related and tie into each other. I think they can be summarised by the following:

  1. Skills in the wrong areas: too many juniors, not enough experts to drive decision making.
  2. Wanting to use certain technology, i.e. Cloud, AI, rather than choosing technology based on the requirements.
  3. Developers taking the idea of “fail fast” to the extreme. Spending time on tooling rather than making features.
  4. Reinventing the wheel: not using out of the box features like Authentication, UI controls.
  5. Restarting projects.
  6. No manager accountability, so they can make mistakes and carry on making them.
  7. Changing fundamental ideas of architecture: big bang vs incremental. Performance issues: latency, plus high server load.
  8. Not prioritising the right projects, prerequisites blocking other projects, core modules not delivered to customers.
  9. Cumbersome development set up so development of any feature is slower than it should be.

Slowness Complaints

There was a time when our software was becoming infamous for being slow. Some managers wanted to actively address users’ concerns. Some changes were made; some with improvements for all users, and some that were situational – such as only affecting calls where the returned data was large.

Soon, managers were boasting about a graph they had put together, which showed the number of complaints made by users containing the word “slow”; it generally showed a downward trend.

I wasn’t sure I understood the graph too well. How many users call Support multiple times for the same complaint? If you have already complained about something, you might not call back. So after a week, all the complaints might already have come in, and you would expect a drop-off anyway.

So if no fixes were produced, I would expect a downward trend because everyone that was willing to complain about it will have done so.

There were some spikes in the graph, but managers did not explain the increases.

There have been times when figures have been manipulated, such as when we would log new bug tickets rather than re-open the closed one we knew had been incorrectly closed. There was even a time when bugs were closed then re-logged just to restart the SLA (Service Level Agreement) timer.

It would be funny if Support were told they would be monitored on the word “slow”, so they logged all new complaints under “not fast”.

“In other news, cases mentioning “snail’s pace” or “never loads” have risen by 300%”.

Fake quote

Self-inflicted problems

Instead of reinventing the wheel, Software Developers like reusing code. When using JavaScript, a common source of such code is NPM, the Node Package Manager.

The recent NPM incident caused malware to be shared via many NPM packages. As far as I understood, GitHub security tokens were stolen, which allowed attackers to impersonate maintainers, check in malware, and propagate it to more packages.

As a precaution, we were told to cease development so you had hundreds of developers sitting idle.

We knew our teams didn’t use these packages directly, but I suppose investigation was needed for transitive dependencies. There was always the risk of packages we did use suddenly becoming infected, but we could just stop upgrading to new versions.

We thought it would only take a day or two, so I took the day off and would come back over the weekend to normality.

However more investigation was needed, then we had to switch over to a different internal NPM server with only approved packages being available.

I had no idea what the criteria are for approved packages. How do you know if something is secure other than by the currently known, published security issues? Typically, bugs and fixes are added all the time.

I think there was some kind of lead time before new versions were accepted. I suppose it prevents the quick propagation of malware if you only take packages that are more than a month old.
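If that is the rule, it’s trivial to express; a hypothetical check like this (the 30-day threshold is my guess at the policy, not the real criteria):

```typescript
// Hypothetical "quarantine" rule: only allow package versions published more
// than `minDays` ago, so a freshly published (possibly malicious) version
// can't enter the internal registry immediately. Threshold is an assumption.
function isVersionAllowed(publishedAt: Date, now: Date, minDays = 30): boolean {
  const ageMs = now.getTime() - publishedAt.getTime();
  return ageMs >= minDays * 24 * 60 * 60 * 1000;
}
```

It obviously doesn’t make a package safe; it just buys time for the ecosystem to spot and yank a compromised version before you pick it up.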

Since we had a new NPM server, we needed to publish packages to the new one, but we weren’t allowed to use the old ones. So new versions had to be published, which means you need to update all your software, and it’s a massive chain of dependencies involving all teams in the department.

I wasn’t personally involved but it sounded horrendous to coordinate and we were delayed by around 3.5 weeks.

We weren’t even affected by malware. We have loads of sub-companies in the overall group and no real incidents between them. Hundreds of developers were idle, and it was self-inflicted.

KM to miles

We had a form where the user can search for certain types of businesses in the area, and it displayed the distance from your location in miles.

One day, a user was posting all kinds of angry abuse because they said the distance was wildly inaccurate.

We were using a 3rd party API that was supposed to return the distances to the businesses in KM. It seemed that at some point, without us knowing, the API had changed to return the distance in miles. Our code was still expecting KM and converting it to miles, so the value was now coming in as miles but we were manipulating it into the wrong value.

My reaction was that we should just ask the 3rd party to change it back. If their documentation says it is KM, then they can’t change it without informing all the consumers. If we changed our code, and the 3rd party then reverted without informing us again, the values would be wrong again.

A manager decided to go against my advice and told a team they needed to fix it. Soon, a Junior developer contacted me to say he was doing the fix and wanted me to explain the situation and be available to review the code.

It was a simple change, and he added some unit tests. I pointed out a few more scenarios that he could add, since our code formats to 2 decimal places: tests for inputs with 3 or more decimal places, and larger values like >10 so there are 2 digits before the decimal point.
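For illustration, the conversion and 2-decimal-place formatting under discussion might look something like this sketch (the factor is the standard km-to-miles constant; the function names and exact rules are my assumptions):

```typescript
// Hypothetical sketch of the distance formatting in question.
// KM_TO_MILES is the standard conversion factor; everything else is assumed.
const KM_TO_MILES = 0.621371;

// Old behaviour: the API returned KM, so we converted and formatted to 2 dp.
function formatDistanceFromKm(km: number): string {
  return (km * KM_TO_MILES).toFixed(2);
}

// Behaviour after the API silently switched to miles: format only, no conversion.
function formatDistanceFromMiles(miles: number): string {
  return miles.toFixed(2);
}
```

The extra test cases were about formatting edge cases: inputs with 3+ decimal places exercise the rounding, and values over 10 put two digits before the decimal point.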

So as far as I was concerned, if we HAVE TO make a change, this was the simplest change, and he had improved the codebase with the unit tests.

I approved the changes, and 2 other Senior Developers approved too.

When it was ready to go into the Main branch, another Senior flagged some duplicate code. He was correct, but it wasn’t a big deal because it’s one line of code. To “fix” it, you would need to create a new file for one method with one line of code which seemed overkill.

Then another Senior Developer questioned why one class called a method just to delegate to another method, when it could go direct. That scenario happens a lot, especially with design patterns where you want a strict separation of responsibilities, so the UI shouldn’t contain logic. This was a very opinionated change, but since a Senior Developer had said it, the Junior felt like they should do it.

So after the initial simple change, and 3 developers requesting some very simple follow-up changes, he was finally ready to merge. Then a Principal Developer saw the change and said it was definitely the wrong thing to do, since we need to get the 3rd party to revert their change.

That was exactly what I said initially.

Secrets in code 

Recently, we received the following message sent to the entire Software Development department from a manager:

Over the last couple of days, we have witnessed multiple instances of sensitive credentials committed into GitHub. This has been noticed by chance whilst supporting teams / reviewing PRs. Instances have included an individual’s GitHub PAT & test user account password. 

The security team are likely to enable a secret scanning tool in GitHub to help identify instances of this in future. However, this isn’t guaranteed to spot all issues and by the time it does; we are already potentially at risk. 

Please ensure that you are vigilant in reviewing your own / other’s code before committing / merging code. If a sensitive credential is identified; please ensure that this is removed and revoked immediately to prevent misuse of the secret. 

I’m really surprised that multiple people have done this, because we keep talking about improving security and improving processes. Adding private keys/credentials etc. to your repository is a common cliché and is like Security Lesson 1. We have talked about having security scanners on our code from the very start of the project.

I thought GitHub automatically scanned for credentials but it might just be for public repositories, and ours are private.
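For what it’s worth, these scanners mostly work by pattern-matching known token formats, such as the `ghp_` prefix GitHub uses for classic personal access tokens. A toy sketch; real tools use far more patterns plus entropy checks:

```typescript
// Toy secret scanner: flags strings matching known credential patterns.
// The patterns are deliberate simplifications of what real scanners check.
function findLikelySecrets(text: string): string[] {
  const patterns = [
    /ghp_[A-Za-z0-9]{36}/g,     // classic GitHub personal access token format
    /password\s*[:=]\s*\S+/gi,  // naive "password = ..." assignment
  ];
  const hits: string[] = [];
  for (const pattern of patterns) {
    hits.push(...(text.match(pattern) ?? []));
  }
  return hits;
}
```

Which is also why scanning can never be a complete answer: a credential in an unfamiliar format sails straight through.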

Security keys/tokens can often easily be regenerated, so it’s easily fixed. However, the mistake of committing them remains in the history of the file. There are technically ways to rewrite the history, but at a massive inconvenience.

Remember when people used to know what they were doing?

Remember when people used to know what they were doing? Those were the days.

“what concerns me the most is that there was a time where everything almost worked like clockwork and now it seems like more ruins every day”

Software Architect

“I am more surprised when something works”

Me

We used to be a company full of smart people, working effectively. Now we work slowly, and people just cut corners and do incredibly dumb things. In more recent times, people don’t think for themselves because they ask AI what code to write. Sometimes it’s absolute rubbish, but they never reviewed it themselves, so it really is zero thought. You point out to them that it’s not going to work, and they respond with an overly polite message, clearly written by ChatGPT, which just adds insult to injury.

So it’s like developers don’t even develop because AI does it. Then they don’t do any dev-testing. Then the Testers don’t know what they are doing either.

Recently Testers have been installing our software on the application servers.

Even though one of the Lead Testers has been posting angry rants about it, it keeps on happening. The Lead Tester’s points were that it’s not representative of live, and it takes up RAM and processing time, lagging out the app server for everyone else.

I don’t get why people got the idea to install the client on the app server and remote on. You can’t think that is official. The servers were always configured to only allow 2 people on at once, so it’s not like the entire department could log on to test if it were the official process.

I just hate what this company has become. I feel like it’s just gonna keep getting worse with managers constantly encouraging people to use AI.

Let’s read the words, the words, the words, of the developer

Introduction

When working with Indian developers, their English skills can vary. You also need to be aware of certain words exclusive to Indian English, some of which I actually like. For example, they have the word “prepone”, which is the opposite of “postpone”; in UK English, we don’t seem to have a single word for that.

Some phrases seem more like poor grammar. An example of that is “Can able” or “Can’t able”, where we would say “I’m able/unable”.

  • “i think you can able to see the second image is it?”
  • “I can’t able to find any relationship between those two codes” 
  • “still we can’t able to recreate the issue”

“For the same” is an interesting phrase because it just refers to something earlier in the sentence without having much meaning. It’s similar to when they say “do the needful” which just means “do whatever is required” but often doesn’t really add anything to the instruction; if they have requested something from you, then surely you will do it if you can.

There are a few strange greetings, like saying “good noon”, which I’d assume is just a shortened version of “good afternoon” rather than being appropriate for a very specific time period. There are a few people who have a strange greeting of “Ho!”

“Ho!! Is it please can you share those knowledge with me…”

To take time off, they like to “avail”. As a bonus, here’s a strange request:

Morning Team,
I have picked up fever and heavy cold. Availing AL today.
Please conduct stand up and end call.
Available over mobile for any urgent issues.
Thanks and Regards,
Jeeva

I’m glad you told me to end the call Jeeva, because I’d have stayed on it all day otherwise.

Indian Pull Requests

When it comes to the Code Review process aka Pull Requests (PRs), it can be hard to ask them why they are making certain changes. Sometimes asking questions can just lead to further confusion. Also, sometimes I’m sure some developers try to blag and hope you move on.

I was discussing this with a Lead Developer and he agreed that asking questions can result in one of the following:

  • Blagging
  • Reverting the code and hoping it works
  • Or you actually get a good answer. But then if it’s not clear why the code was written like that, maybe it does need a code comment or some documentation so others don’t get confused in future.

Even though I often got frustrated with their comments, in recent times a lot of them have used AI like ChatGPT to rewrite their responses, or sometimes I get the impression they just put your question into the AI and hope it comes out with a good response. So instead of poorly written English, it’s all robotic jargon and blag. So you can’t win really.

Row

“Refresh on special while saving special note, row background, Radio button alignment based on include exclude” 

Blagging with Words on PRs

I questioned their pointless try/catch blocks which were catching an exception then rethrowing the exact same type of exception.

“Yes, as I couldnt use the dll in the resourcepicker project, so we can thrown the exception and catched it in resourcepicker class”

And

“The resources can be used due to filecahe, but no changes can be saved, when service is down. The above message is already used in Picker solution.”

Then when their project was being merged into the main branch, another developer questioned the same code. This time they said:

“To restrict that, have drilled up the ux tree and displayed the error message.”

Observation 

“Found an observation while testing 12602 in 9.3.6 branch”

What does that even mean? I assume “observation” means “bug” or “potential problem”.

Bad Refactoring

He refactored some existing code but also changed the return type of the method, which meant the caller’s logic had to change, causing cascading changes which weren’t really relevant to his main change. Also, the logic didn’t look equivalent, so I wouldn’t call it refactoring: more like introducing a bug. He then claimed he hadn’t changed it…

Me: "is this equivalent? It was checking >1 not >=1"
Them: "Actually, I haven't attempted to modify that as the logic written working as per acceptance criteria, and it already tested"
Me: "I don't understand, this method has been changed in this PR"
Them: "Just used expression for methods as commented by Andy. Apart from that i haven't changed any logic around that."

Down Merge

Vignesh
Here after no comments fixed against assurance branch?
Just need information about down merge

Andy:
sorry I'm not sure what you mean?

Vignesh
Two comments pending for our side... if any one raise PR I will raise PR also. Because of down merge... Incase only I will raise PR again do down merge that's why I am asking

IsMobileEnabled 

IsMobileEnabled needs to return boolean value, so removed exception caused by null and also the GetResources during Trigger prompting needs to include Template also along with Protocols.

Didn’t Launch The Portal

me: “where is this used?”

developer: “This is used at TryLaunchPortal()…. At this point of time we never know the portal type to compare and verify the condition because the user didn’t launch any portal

Walkie Talkie Comms

This reminds me of walkie-talkies, stating “over” so you know it’s the end of the message.

Roshni 
give line break after method over

Shoban
Ok Roshni, Updated the changes

Shoban
Completed with the Changes

Roshni
give line break after method over not before the method over

Shoban
Thanks Roshni, Got your point. Made Changes

Roshni
and again please remove the empty line no 267

Shoban
Code changes completed as mentioned

Welsh 

PR: Updated the Walsh text

Description: Updated the resource file with Walesh text

Do you think the text is gonna be accurate if he can’t even get the title correct in English? It should say “Welsh text”, as in “the Welsh language”.

Customer

Merge from Curomer first branch to main

Accelerator Keys

To define an accelerator key (which allows you to use the Alt key to select the control), you place an & character before the letter. So Export has E defined. Edit can’t use E because Export has taken it, so they have chosen D. N seemed an odd choice for Cancel.

btnBackup.Text = "&Export";
btnContinue.Text = "E&dit";
btnCancel.Text = "Ca&ncel";
btnBackup.DialogResult = DialogResult.None;

Me
can't C be used as an accelerator key?

Kalyanaraman
C for Continue

Me
what is the continue button? Isn't this it? btnContinue.Text = "E&dit"; that is using D
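Clashes like this are easy to catch mechanically: pull out the character after each `&` and flag duplicates. A hypothetical helper, not part of any real tooling we had:

```typescript
// Hypothetical helper: given labels using the WinForms "&" accelerator
// convention, return any accelerator letters defined more than once.
function findDuplicateAccelerators(labels: string[]): string[] {
  const seen = new Set<string>();
  const dupes = new Set<string>();
  for (const label of labels) {
    const i = label.indexOf("&");
    if (i === -1 || i === label.length - 1) continue; // no accelerator defined
    const key = label.charAt(i + 1).toUpperCase();
    if (seen.has(key)) dupes.add(key);
    seen.add(key);
  }
  return [...dupes];
}
```

Run over the three labels above it reports no clashes (E, D, N), which also confirms C really was free for Cancel.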

SQL is up to 10 times better

yes i have tried with mocked 10 lacks data in local
and while this query the data was well optimized.
For data, I ran sp thrice

I bet you can’t tell if this is from some old children’s folk tale or an Indian’s PR

Always Run SQL Code Analysis

Roshni has worked here several years, and when she started, I’m sure she made the same mistake several times. When making a database patch, we have a Patching Tool that not only applies the patch but also runs some code analysis to make sure it conforms to coding standards.

Many times when developers have reviewed her code, Roshni has been told her patch would have been flagged by the tool if she had run it as part of her Developer Testing.

When I was a junior, once the Seniors told me, I never forgot to run it again. It’s like the embarrassment/shame makes you remember. Also, I cared about quality, and this was a simple process that ensured quality and standardisation of our SQL code.

Recently, she had merged her fix ready for release, and a Tester, Mick, pointed out there were patching errors, so her SQL patch could not have been run through the Patching Tool, or even tested.

She claimed it had been tested, and that it was a problem between SQL versions. Her claim was that both her local machine and the test server it was run on (by another tester in her team) were a different version to what was on the main test environment we use before releasing the software.

So Mick looked at the SQL patch and saw the error was about a missing namespace. The patch was inserting XML, and XML has a namespace attribute on the first line. So then he looked at what data is currently in the table, and saw that all the existing entries had a namespace declared, and this was missing from Roshni’s patch.

So Mick, embarrassingly for her, pointed this out. She had lied about testing the patch locally, she must have lied about it being tested in her team, and she had lied that it was an SQL version issue.

She then submitted a brand new patch which conditionally checks whether the previous patch had created the entries. If it hadn’t, this new patch would insert them; if it had already added them, her new patch would run an update statement instead.

Mick then pointed out that this was nonsense, because the original patch had failed, so it would have just rolled back and stopped patching. What she needed to do was fix the original patch so it would run. So she quickly deleted her new patch and updated the original one.

Although it’s what we wanted, the speed with which she did it makes me think she hadn’t run the Patching Tool, because it can be very slow to run. So yet again, we had told her it is important to run it through the Patching Tool, and she hadn’t bothered.

Although I think nothing was actually wrong with her new change, another tester pointed out that her changes were across two repositories, and her changes in the other repository were also flagging errors in the Patching Tool. So it’s not like she just forgot to run it once; no matter how many times in the past we have told her YOU MUST RUN THE PATCHING TOOL, she never does.

It’s just infuriating that we keep employing people like that, who don’t listen or care about the work they are doing.

Innovation shambles

Recently, managers decided that every few months we should have an Innovation Week. The idea is that you can work on ideas that can improve our work processes or even add a new feature to our products. However, the time limit of one week is a bit limited to actually get something complete in my opinion.

To be efficient, we really needed to come up with a great list of ideas before the innovation week started, otherwise it cuts into the week. Some people did submit ideas beforehand, and others on the day.

The initial meeting quickly became a bit of a shambles. Paul had created a Miro board under a different account that the attendees didn’t have write permissions for. Even when we clicked the link to request access, and Paul claimed he approved it, it still didn’t work.

He then tried creating a different board, but that didn’t work either. To avoid wasting further time, we just posted ideas into the Microsoft Teams chat, which he then transferred onto Miro.

Since the ideas were essentially just titles on the board, people were supposed to explain their ideas, but I don’t think many explained them too well. We probably needed some kind of formal process to:

  1. describe the problem, 
  2. ideas on how to solve,
  3. pros and cons, 
  4. any possible costs like software licences,
  5. prerequisites to be able to investigate or implement the idea.

Another thing that was missed is that you have to have accounts to use many of the AI tools, and that was a focus of this month’s innovation. A lot of software needs a special licence for commercial use, and we weren’t advised how to acquire licences. We had GitHub Copilot and Office Copilot, but what about other AI tools?

One guy apologised for misunderstanding that the ideas should be process improvements; he had come up with an idea for our software that our users would use. Paul said he hadn’t misunderstood at all and we could suggest either process improvements or new features… but that’s not what the Miro board said. It was only for process improvements, and so all but one idea was for process.

We needed to assign our names to them, so initially Paul tried to create a spreadsheet, but he couldn’t work out how to share it so we could all edit at the same time. He ended up pasting the ideas into a Microsoft Teams “Whiteboard”, which I had never used before, but it looked like the Miro boards.

There were loads of ideas, but many were of debatable value. However, as I stated, we never discussed them effectively. Without knowing the pros and cons, or prioritising the business value, there were loads of ideas that definitely weren’t strong enough. So with a large list, it was hard to pick something to work on. Some of them would need more than one person, but what guarantee is there that the team will be full? Less likely when the list is so big.

So I asked whether we should only put our name against 1 item, or vote for several so we could see which teams were full, with the full teams then getting approved. Paul said to only vote once, otherwise it would look like teams were full when people would end up dropping out if another of their votes was successful. I suppose that’s a good point, but only voting once means you could be the only person to vote on a team project, and would then have to choose something else anyway, or gamble and go it alone.

With most people finally assigned (and many just disappearing, presumably to slack off), with many going solo, and some probably having more team members than required; we got told to communicate with our team members.

I was in a team of 3, but I thought the ideal team would just be a pair. I waited for 30 mins or so, but the guy who came up with the idea hadn’t contacted me, and you would assume he would take the team leader position.

I then took the initiative and created a group chat with my 2 team members, and after another 1.5 hours, I finally got a response from one person, who asked how we should begin to plan. I responded with the notes I had created to set the scene. He suggested one extra point for my notes, then I didn’t hear from him for the rest of the day. The other team member didn’t respond at all.

The next day, my manager contacted me and said I was assigned to help finish a project that was behind schedule so my “innovating” had come to an end.

Absolute shambles really.

The Chop

I was once contacted by my manager to review a particular code change, but with the instruction to actually select the Reject option if there was anything wrong with it.

Things seemed a little odd, so I was suspicious about the request. Usually you would only use the Reject option if it was completely the wrong approach, not just if one thing could be improved.

The change was an SQL data fix, but it seemed to be for a bug that would rarely occur, so my instinct was that it should be run manually on the afflicted sites rather than sent out as part of the normal patching process.

The normal process would mean the script would be run on all servers and be subject to the usual slow “roll out” process, therefore delaying its application to the affected site.

Looking at the comment from Support, it sounded like it was possibly just on one site.

There were 3 cases linked to it: 2 from the same site, and 1 with a title referencing a completely different error number. The workaround was stated as “Re-add the Default Location to the Template”, so they had probably fixed it already. So maybe we didn’t even need to do anything.

Looking at the dates it was logged, it seemed it had been classed as a minor bug, so it had a long time period to fix per the Service Level Agreement, and I was sceptical it was still an issue after 2 years.

So my initial instinct said it should be applied as a manual patch; then, reading the details from Support, it sounded like it was just on one site and they had already fixed it manually, presumably via the UI.

So I asked my manager if the site has been contacted to see if it is still a problem? Then we can just close it.

And that’s when my manager said

“His skills are currently being assessed so he’s been left to figure it out and told to ask for help if/when he needs it.”

Manager

So it seems they had given him a bug to investigate then fix/close. Since he had struggled to resolve items before and been reluctant to ask for help, they chose this one to test whether he would collaborate with the correct people. He didn’t, so they sacked him.

I haven’t encountered many sackings; everyone seems to imply it’s a lot of hassle, but certain managers are more willing to do it. Another approach is to declare that a “new opportunity” has come up and they will be placed in a new team to see if that causes an improvement.