It’s been a long time since I wrote about Colin, a pretty incompetent software developer who seemed to be good friends with one of the high-ranking managers, a friendship that appeared to lead to some bizarre promotions: to Senior Developer, to Principal Developer, then eventually a switch to a managerial role. Talk about failing upwards.
The thing is, he came across as a bit scatter-brained, so I couldn’t imagine him actually being a good manager.
Here are some random stories I found from my notes and chat logs about how he performed as a manager.
Salary
Mike said Colin began sharing his screen in a meeting and had a list of salary changes in a spreadsheet. Interesting how they had salaries lined up BEFORE the reviews, which we hadn’t even had yet. Just as I had previously suspected. Other managers have messed up like this in the past too, such as when a promotion was announced a week prior to the performance reviews.
I can imagine Colin eventually getting sacked for that type of mistake. It’s classic Colin, and as I predicted, the mistake did happen again a few months later. This time it was in a meeting I was involved in: we were trying to hire new developers for Colin’s teams, and Colin was sharing his screen with a list of employees that were leaving and their salaries.
When it came to my reviews, Colin kept on saying I was doing a great job but then pointing out one thing that was holding me back. It always seemed like an excuse to not give me more money or promote me. When I did switch managers, my new manager promoted me within a few months and gave me a £14k raise due to how behind I was compared to my peers.
Arranging Meetings
Colin often arranged meetings then didn’t turn up, or turned up late. He was constantly saying he was busy all day with meetings so sometimes scheduled meetings at bizarre times.
I was particularly annoyed when he arranged a weekly update meeting during lunchtimes, then half of the time didn’t even show up. The update was mainly for him to collate info to take to his manager, but he said we had to give our updates to the other teams, much like an Agile Scrum of Scrums meeting. So the meeting went ahead regardless of whether he was there.
There were some other meetings which he arranged, and where he was an important attendee and he turned up 25 mins late.
One time, I was about to leave for the day, and Colin said he had an end-of-year review meeting with someone in Chennai. That would be 10:30pm on a Friday for them. Indian colleagues often have a dedicated attitude towards work, but just because they would agree to something like that doesn’t mean you should actually book it.
An example of his scatter-brained or panicky behaviour was when he started a meeting and shared the wrong screen. He declared he was “sharing the wrong screen”, but instead of simply stopping the share, he left the meeting entirely, then took a few minutes to rejoin the call, where he carried on like nothing weird had happened.
Informing & Criticising
Colin: "he is coming in as a Solutions Architect rather than Technical Architect" Me: "what's the difference in the roles?" Colin: "I don't know, I'm just telling you the news"
I thought it was funny when he gave an update on the performance of the teams he was managing. “Last week was pretty bad for us. You guys don’t know this“, then he said there were 8 Major Incidents, which got escalated to the Directors. What made it funnier to me was that the CEO had given out bonuses to his teams for apparently doing a great job. It was a fairly small bonus, like a £50 Amazon gift card, but still probably a regrettable action. I’ve said many times that managers seem to reward the wrong behaviour and struggle to identify the best performers, and that’s another example. How can you go from doing a great job to creating 8 disasters in one week?
I often found that Colin didn’t practice what he preached. He might lecture people about needing to improve code quality, yet when he was a software developer, he was constantly cowboying solutions. Another example: he said we should never put off taking our annual leave because doing so can hide problems (being away would illustrate any reliance on one person), and because teams show higher output for months, then suddenly drop towards the end of the year when everyone takes annual leave at once. Then, straight after his lecture, he admitted he hadn’t taken even 1 day off and we were 75% through the year.
Colin complained that Rob and I hadn’t handled the project well, and that it overran by over a month. A week or so later, the team was on a call with other stakeholders and he said “you guys have done a tremendous job”, then said the delay was caused purely by scope creep and nothing to do with the developers at all. I don’t know what to believe there. Maybe he did believe it was our fault but didn’t want to berate us publicly, so he was deflecting like a good manager. However, not clarifying that to us meant we got mixed messages.
Near the end of that project, Colin showed me the items we had remaining and was like “you only have a few left to do…surely you can complete it all quickly”. I told Rob and he was as annoyed as me:
Rob: It’s things like that that really make me nervous
Blind hope without actually looking into the problems
SURELY you can do it quickly right? If not you must be crap!
Thanks for the morale boost!
The problem is, the project had dragged on due to complications, so the remaining work was probably quite difficult, but Colin was just seeing simple numbers: “3 tasks left; that’s not a lot”. Each task could take a week or two to get right, so even split between the 2 of us, it could take 2 weeks, while Colin was setting the expectation it could all be done within the week.
Closing Thoughts
When people have done a job for a while, then become a manager of those people, you would expect them to be great managers because they understand the work involved, the process, and problems they have faced with previous managers. However, time and time again, it’s like people forget their experiences and end up becoming bad managers.
The debate about generative AI for images is an interesting one because it’s clear it can easily take work away from human artists. A few years ago, when AI was a bit inconsistent and drew obvious errors like humans with extra/missing fingers, you couldn’t use these images in a professional context without editing them, but then maybe you would need to hire someone with those editing skills to fix the image anyway.
With how creative these AI models can be, it has the likes of JimllPaintIt fearing for the future. Images can be generated in a famous artist’s style, so what happens if people can just generate ones in the style of JimllPaintIt?
In a now deleted thread, he stated:
“My attitude towards AI “art” has – in a short space of time – gone from mild disinterest to acute irritation to absolute despair for the future of humanity. The most depressing thing is seeing artists embrace it. Talk about turkeys voting for Christmas.”
JimllPaintIt
Some others raised a good point, that the person typing the prompts still needs to be creative:
“The irony I have seen so far is that the best results from it come from talented artists. I don’t think it’s the awful thing you think it is. Talent is still needed for vision. I think it just opens up art to more people who have vision but not necessarily the physical skills.”
The animator Cyriak then chimes in:
I’m sure musicians have great record collections as well. The idea that “skills” and “talent” are magical properties some people are born with is rubbish. “talent” is just being bothered to keep trying, and skill accumulates as you keep trying.
Cyriak
Which I think isn’t correct. It’s more like a combination of what you are born with and learned skill (nature/nurture), as someone else points out:
In that case, if you kept practising you could run faster than Usain Bolt? or is he just naturally faster than you?
Matt_Francis
“I don’t draw pictures by running in a straight line with pencils tied to my shoes. I’m not sure anyone does”
Cyriak
I’m not sure what Cyriak’s response even means. Is he saying it’s a completely different skill, so art comes from practice but physique is natural?
People keep talking about how AI will take away Software Developers’ jobs, but at the moment, I think it can be used to take away some of the tedious aspects, and also give a good starting point (boilerplate code) to then enhance with your skills. You also need to understand how to ask the AI to realise your vision. I think there are comparisons in the Art world, but it’s easier to understand how artists’ jobs are impacted more directly, i.e. hiring an artist for one (or a few) images when you can use AI, versus hiring a developer for a few weeks to get a fully working program/website.
We recently had staff from GitHub do a presentation on how Copilot can be useful to Software Developers. I found their answers to be a bit wishy-washy. It’s a really complex topic, and having what I think were essentially sales managers trying to pitch something technical to us was a challenge. They didn’t have a full understanding of how it actually worked.
Someone asked a question to clarify if Copilot just looked at your open documents, or if it had the permission to see all the other files in your repository. Their answer was vague, along the lines of “it might do. Could come down to chance“.
For it to be effective, it really does need to look at your codebase to see what your product does, what features are already developed, and for bonus points, your coding style.
When it needs to suggest calling third-party code and installing additional libraries, does it understand that you may need to abide by a certain licence (pay some fee, or not use it in open source, etc.)? Does it know that you may be limited to a certain version of a library due to other dependencies? When features and the API (required parameters etc.) can change drastically between versions, does Copilot understand that?
It’s probably the same scenario Wolfram Alpha described when they came to our company to do a presentation on AI. They were emphasising how standard language models often suggest text which reads like it makes sense, but is actually nonsense. They gave an example where a model quoted a real journal from a particular country, stated the title of a chart that exists, and quoted some figures and years, but the figures were fictional.
I saw a news article about how a lawyer presented some documentation to a judge about similar cases, but it turns out the lawyer had used ChatGPT and it had made up the case numbers and years.
The way those models work is that they know some related words and they know sentence structure, but the likes of ChatGPT doesn’t understand that something like a legal citation needs to be accurate and you can’t make stuff up. So Wolfram were saying their plugin can be combined with ChatGPT’s conversational structure to plug in actual figures and make accurate essays. TEAMWORK.
I would imagine there’s a good chance Copilot has exactly the same issue. It knows a bit of structure and slaps in the correct programming language, but it has no idea that a suggestion comes from a different library version than the one you are using.
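As a hypothetical sketch (this is not how Copilot works internally, and the package name below is made up), one defensive habit is to check the installed major version before trusting a suggestion that uses a version-specific API:

```python
# Hedged sketch: guard against code written for a library version you
# don't actually have installed. "some-hypothetical-lib" is illustrative.
from importlib import metadata

def installed_major_version(package: str):
    """Return a package's installed major version, or None if not installed."""
    try:
        return int(metadata.version(package).split(".")[0])
    except metadata.PackageNotFoundError:
        return None

# Only rely on a v2-only API when v2 is genuinely what's installed.
if installed_major_version("some-hypothetical-lib") == 2:
    pass  # safe to call the v2-only function here
```

Copilot has no such check built into its suggestions, so this kind of verification remains the developer’s job.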
From what I have seen of Copilot, it is very impressive, but it does often give you code that doesn’t quite compile; still, it gives you a good template and inspiration for how to progress.
In the past I have seen people blindly copy code from the internet, or just do what a colleague suggests without actually thinking about it. I think we are gonna be seeing this more from now on, but it’s gonna be the AI’s fault.
I am not against AI in programming because it can speed up development in certain tedious areas, but it always comes down to the idea that the best programmers are ones with a certain mindset of quality, and I think AI is gonna produce more developers with the wrong mindset because it’s about speed and cutting corners.
I’ve heard people suggest that the next wave of developers can be so dependent on AI, that they will be unable to come up with a solution when the AI doesn’t get it right.
There was an internal meeting where a new product called “Recruit” was announced. The first question was: “it sounds like it could be confused with a method of recruiting staff to work for us, so was that discussed?”
The manager said “to be honest, I never considered that“.
He then added that there were around 20 people in the meetings, and no one had questioned it or raised any objections.
A few months prior, there was an announcement about a new team that was handling branding in Marketing. We were told we couldn’t create any names without going via them. The last product names they came up with were ASSistant, and ANALytics.
I thought that if the software isn’t well received, it could easily gain a negative nickname, and people could make statements like “the software is ass”.
A Product Owner recently stated that the Assistant branding will soon be phased out, and it will just be merged into our main software’s branding. The decision came about when another Product Owner was doing a demo and had created a test user with the name “ass”. A manager flagged it as unprofessional and was concerned that we could easily demo something like that to external clients.
“you probably want to change those Ass users”
Manager
So far, the marketing naming team hasn’t got a good track record.
Datadog is a monitoring tool my employer purchased licences for, and quickly became the cool thing to use and impress the senior managers with (see Datadog, and Datadog – The Smooth Out).
I discussed problems in both those blogs, but a concern with all metrics is:
What do you want to measure?
Who is viewing the data? And when?
What does “good” and “bad” look like, and who acts when that state is shown?
In “Datadog Knee Jerk“, I explained how our CTO and Technical Director demanded that everyone create a Datadog dashboard to monitor all services, regardless of what they are.
If we don’t have a clear idea of what to measure, who needs to view it, and how they know whether it is good or bad, then aren’t we just throwing money away? (Even where the dashboard itself doesn’t cost anything, you still spend the time to create one, and some dashboards would require additional logging to be effective.) Surely an obvious problem with wanting to monitor everything is that it can become quite costly when you look into Datadog’s pricing model.
Easy To Make Nonsense Dashboards
From my brief time making Datadog dashboards and analysing other teams’ dashboards, I realised that the data can often look wrong, and it’s really easy to misinterpret the metrics due to the jargon used, and when/how the data is actually collected.
“I know nothing about Datadog, yet have been told to make a dashboard”
Principal Tester
Surely the worst case is to make dashboards that show nonsense data. You will waste time investigating problems that don’t exist, or not be alerted to actual problems that do happen. So once we create a dashboard, who checks that it is valid?
Look at this one that I saw:
This is supposed to be plotting a line (purple) for failures in the time frame specified, then another (blue) for “week_before“.
It looks wrong at a single glance. If I have set the time-frame combo box to show the “previous month”, should week_before be last week, or the week before last month? It seemed to be neither. Also, notice that the two lines are exactly the same shape/numbers. It just seems to be plotting the exact same data but pretending it is a week later.
Jargon
You would think you just need some understanding of statistics to draw some charts, but in the usual nerd fashion, they throw around jargon to be cool. So people end up saying stuff like this:
What is Datadog? Fundamentally, a platform like Datadog provides us with a scalable solution for ingesting observability data from our services. Datadog is built upon the three pillars of observability:
Metrics provide numerical measurements that allow us to assess our system performance and behaviour
Traces allow us to understand the flow of a request or transaction through our systems
Logs allow us to capture the details of system events and errors
When you read the official documentation, it’s difficult to understand what it actually can do. It’s the combination of jargon plus hyping up features to be powerful:
Datadog Vector
Vector is a high-performance observability data pipeline that puts organizations in control of their observability data. Collect, transform, and route all your logs, metrics, and traces to any vendors you want today and any other vendors you may want tomorrow.
Imagine sending your metrics to vendors that you want in the future. They are like “mate, stop spamming us with your info, you aren’t our customer“.
Then you are given the implication that this is the ultimate solution that can somehow solve some of the major problems with our system:
Having access to this data provides us with opportunities to understand the inner workings of our complex and distributed systems in a way that we haven’t been able to before. However, the data alone is limited in its usefulness, and it is the insights from this data that offer the greater value. Datadog provides the tooling to surface these insights in a way that enables proactive support and improvement of our systems.
DevOps Engineer
The bad thing about overhyping a tool like this is that you then have to manage expectations and make it clear what the scope is, otherwise your interactions with managers are more difficult than they should be. One of the DevOps engineers made a vague statement like:
“Our dashboards monitor everything”
So they got a question from a manager: “Can you tell me who uses our API?”
“no, our dashboards can’t see that”
What we have enabled so far:
Configured service metadata to populate service ownership details
Enabled traces
Enabled RUM (Real User Monitoring) traces to provide full end to end tracing
Improved our service & environment tagging
Enabled version tracking so that we can observe version related anomalies
Defined a baseline set of monitors to cover availability, anomalous throughput, errors, latency and infrastructure performance
Defined strict availability & latency SLOs
Implemented 24 SLOs & 264 monitors
Configured PagerDuty automatic incident creation and resolution
Enabled logging
Driven several key Information Governance decisions
Established a Data asset inventory to provide more objectivity as to what data can be stored in Datadog
Performance Issues
One problem with our system is, well, performance issues. Although we have blamed all kinds of things, performance issues remain in general. There have been claims that Datadog could help us diagnose where the performance issues are, but the agents have also increased network traffic and server resource usage, so they have caused performance issues of their own!
The DD agent is using a lot of resources on our test systems and looks to be causing performance issues. I have stopped the agent multiple times when testing, as the CPU and memory usage is maxed out. This has been raised before.
Tester
Architect: Datadog seems to be showing memory usage on all app servers is high, wonder why?
Me: Does it only happen when Datadog is watching it? We licence Datadog to prevent Major Incidents and performance issues… Datadog causes Major Incidents and performance issues and tells us about it
Another aspect is that some things we wanted to measure required querying our SQL databases. For an SQL query to be efficient, the columns you filter on need indexes, but indexes themselves take up space. And we are always moaning about the cost of storage.
We wanted to look at adding Datadog to monitor the usage of a particular feature that managers were making a lot of fuss about. So we asked the Database Administrators about the repercussions of adding an index to our tables. It soon adds up to be absurd.
I checked a random server, and a new index on RecordID (int, 4 bytes), Method (tinyint, 1 byte) and AvailabilityTimeStamp (datetime, 8 bytes) would be around 2.5GB for a server. There are 60 servers, so we need around 150GB for an extra index across Live. Testing the Datadog query before and after the creation of this index shows a 98.6% improvement in total execution time.
Deployment Engineer
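A quick back-of-envelope check of those numbers (a rough sketch: real SQL Server index sizes also include row locators and page overhead, so the per-row byte count here is an assumption):

```python
# Key width for the proposed index:
# RecordID (int, 4 bytes) + Method (tinyint, 1 byte) + AvailabilityTimeStamp (datetime, 8 bytes)
KEY_BYTES = 4 + 1 + 8

INDEX_GB_PER_SERVER = 2.5  # quoted estimate for one server
SERVERS = 60

total_gb = INDEX_GB_PER_SERVER * SERVERS
print(total_gb)  # 150.0, matching the quoted ~150GB across Live

# Implied rows per server at 13 key bytes per row (ignoring overhead):
rows_per_server = (INDEX_GB_PER_SERVER * 1024**3) / KEY_BYTES
print(f"{rows_per_server:.1e}")  # on the order of 2e8 rows
```

So the quoted figures hang together: the cost is driven almost entirely by the sheer row count multiplied across 60 servers.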
Architect: I wondered if anyone else had noticed (and looked into?) poor performance spikes occurring every 2 hours; they seem to present on most servers I checked.
Me: no one actually looks at Datadog. Can you create a Meta Dashboard, so it shows you the time since dashboards were looked at?
Architect: I can only assume it's genuinely the case that no one actually looks at the dashboards. I've raised 4 issues now, purely from observing the trends. In the last 2 weeks we've had:
- wrong servers in the public and private pools
- Windows Updates running in the day and killing servers
- servers sat idle with no traffic hitting them
- SQL Server spikes on F: drive access
- these spikes every 2 hours, don't know what they're doing
I've had a look at the Monitoring DB for Server07 this afternoon, and I'm absolutely appalled at how horrendous it is; I can't see the wood for the trees. I can only assume that users aren't getting any work done.
Me: Interesting that the spikes are exactly 2h apart, but at different base minutes between servers.
Architect: it is interesting, but we're still no closer to anyone paying attention to the issue. Philip will probably sort it; he sorted the last DB-related issue.
Datadog pricing
The following are our discounted rates, Per month costs as follows (Sept ‘22 - Sept ‘23):
•Infrastructure $11.75
•Network Performance Monitoring (NPM) $4.30
•Application Performance Monitoring (APM) $29.00
•Custom metrics $5 (per 100, per month)
•High use of logs (>1m/month) $1.52 (per 1m, per 15 days)
•Database Monitoring $77.28 (not discounted)
“Related to this, the Azure Pipelines integration for CI Visibility starting September 1st, 2023 will have a cost of $8 per committer per month (on an annual plan, or $12 per committer per month on-demand). Additionally, 400,000 CI Pipeline Spans are included per Pipeline Visibility committer per month. Based on our June usage data, our monthly cost for Azure Pipelines integration for CI Visibility would have been $644.74. We’ve had this enabled for sometime now, is anybody actively using this?”
CTO
Product – Charges ($)
APM Hosts – $2,320.00
Audit Trail – $1,846.54
Database Monitoring – $463.68
Fargate Tasks (APM) – $128.06
Fargate Tasks (Continuous Profiler) – $70.84
Fargate Tasks (Infra) – $145.73
Infra Host – $42,206.00
Log Events – 15 Days – $10,265.18
Log Ingestion – $28.20
NetFlow Monitoring – 30 Days – $507.80
Network Devices – $993.14
Network Hosts – $1,242.70
Pipeline Visibility – $574.08
Profiled Hosts – $275.08
RUM Browser or Mobile Sessions – $1,186.91
RUM Session Replay – $48.74
Sensitive Data Scanner – $2,414.65
Serverless APM – $48.77
Serverless Workloads (Functions) – $1,385.52
Synthetics – API Tests – $1,825.25
Synthetics – Browser Tests – $0.06
Workflow Automation – $13.03
Grand Total – $67,989.96
These were our monthly charges at one point (although one entry only covers 15 days, so double it). If you estimate the yearly cost, it’s going to be around $936k, unless we cut down what we are logging, make efficiency improvements, or avoid scaling up by adding more servers. So roughly $1m a year just for monitoring. How ridiculous is that!?
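The arithmetic behind that estimate (assuming the bill stays flat month to month, which it clearly didn’t):

```python
# Project a yearly cost from the monthly bill above. The "Log Events – 15 Days"
# line only covers half a month, so it is counted a second time.
monthly_grand_total = 67989.96
log_events_15_days = 10265.18

approx_full_month = monthly_grand_total + log_events_15_days
yearly = approx_full_month * 12
print(round(yearly))  # roughly 939,000, i.e. close to $1m a year
```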
Obviously, once the CTO became fully aware of these costs, he called an All Hands meeting.
CTO: Tells everyone that Datadog should be used by everyone
Also CTO: Help! the costs are spiralling out of control
Jim
Lol - classic.
Jan: We are investing in this awesome new technology.
Apr: Please stop using this technology - it's too expensive.
Me
ha yeah, that was yesterday's meme for me
we've got this amazing feature
but there's not enough licences to use it properly
so we only partially use it
classic
part of my Dev Haiku collection
We even did that in relation to Datadog. It integrated with a product called PagerDuty to notify the team that something is bad, but there weren’t enough licences to alert everyone involved! What’s the point of even paying for it at all if it is half done? You can’t get the value. It is bonkers.
One of my colleagues who works on a different project to do with an API said it only costs $2,500 to run the API for a month and it’s used by millions of people. Yet here we are spending $78k on monitoring alone.
Service owners and leads have been granted additional permissions to access more detailed billing and account information. Please take some time this next week to:
- Confirm that all active services and features are necessary and being actively used
- Identify any areas where we can optimise our setup to avoid unnecessary costs
- Prioritise production system costs over other environments
This is a critical opportunity for us to do some housekeeping and ensure our resources are being used efficiently. We spend a significant amount of money on Datadog (7 figures), and we need to ensure that we are getting as much bang for our buck! If anyone has any questions about the above, please ask in here or reach out directly!
The Costly INFO
“Please consider very carefully if storing success status logs is required for your applications.”
As predicted, encouraging people to log everything to Datadog, without thinking about whether it is useful and without checking on it, soon led to an angry message from the CTO.
You know how it was ridiculous that we were spending $10k on logs for 15 days? Well, it quickly grew to $15k and counting. On investigation, it was caused by one particular feature that was constantly logging the status INFO or OK.
327.59m OKs were logged in 1 day.
I looked at the total logs across a 1 week period and it showed as 1.18G. There’s a Gagillion logs!
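Using the quoted rate of $1.52 per 1m log events per 15 days, a rough sanity check (assuming the weekly volume is steady; real retention and indexing charges differ, so treat this as a lower bound for that one feature):

```python
logs_per_week = 1.18e9            # ~1.18G log events observed in a week
rate_per_million = 1.52           # $ per 1m events, per 15 days (quoted rate)

logs_per_15_days = logs_per_week * 15 / 7
cost_per_15_days = (logs_per_15_days / 1e6) * rate_per_million
print(round(cost_per_15_days))    # ≈ $3,843 for this one source's events
```

Even at discounted event rates, pointless INFO/OK logging translates directly into thousands of dollars per billing period.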
How does it get to this point that no one noticed the excessive logging though? I suppose the Datadog company aren’t gonna alert you to it. They love money.
It proves I was right about the danger of getting everyone to create dashboards and increase logging without actually having an owner to check and respond to them.
Costly Switch On
This is a big mea culpa. In an effort to get early visibility in Datadog of AWS, I enabled the AWS Integration (many months ago). This means that lots of metrics, hosts, logs etc come from these accounts automatically (adding to our costs!).
I’d like to undo this mistake but want to understand if anyone is using these monitors (for example NLB monitors).
Any views?
The problem is we pay $11.75/host/month whether or not we use the metrics
CTO
James
Both. You pay per metric and for polling the API.
Neal
Didn't we find this out the hard way with granular replicator CW metric capture? (swiftly removed though)
John
yep, that gets expensive if you're pulling from Datadog, which is why there is the Kinesis stream option
James
Yes - we added very granular cloudwatch custom metrics, as we didn't have a direct Datadog connection. This pushed up our AWS spend significantly, so we turned that off. Custom metrics direct in Datadog is significantly cheaper, but still worth keeping an eye on. E.g. we wanted to track latency, error rate and a few others at a per org level - that quickly pushed up to 40K metrics. In DD you pay $5 per month per 200 custom metrics. So we had to change our approach to metrics / observability to only surface details for error situations.
CTO
I've disabled the metric collection for all these accounts now. That saves at least $6,800 per year. Every little helps!
Who is viewing the data? And when?
Next challenge is this: I want the rest of the Senior Leadership Team (who are mostly not as technical as me) to be able to connect to DataDog and be able to understand how well our systems are working. I would suggest using the Golden Signals as a standard for all the systems. The dashboard that’s created needs to be easily consumed, consistent across products and reflective of customers’ experience. Can we make this happen?
CTO
Me:
Are the Directors actually gonna connect to DataDog? Trying to picture how this can be useful, and how they should get the info.
Architect
it's all laughable really
I was cringing when he put in the initial request to move it to single sign-on; could see where it was heading!
I don't think they need access to Datadog, surely they just want:
-Everything is ok
-Some things are broken (these things...)
-Everything is in ruins
everything else requires some level of investigation and interpretation
and we should probably have that information on our business status page, unless we're scared about how frequently we have problems
Me
That's true. People probably already tell the CEO if things aren't fine
and if they want to fiddle the figures, they can still do it in the dashboard that she sees
yep, that dashboard was a first attempt to get us a first step towards what you describe. The problem for us at the deep technical level is knowing what data SLT / GXT find useful. A count of active alerts? Just a simple RAG status? Average response time?
DevOps Engineer
The conclusion was, the metrics that we create should have the following properties:
Consistent – the same interpretation should be made across all products
Comparative – we can tell from the metric how close we are to having an issue (ie percentage of link utilisation is better than Mbps)
Trending – we can see the past and if there is an underlying trend that points to an issue in the future the metric would make that obvious
RAG status’d – you can tell if a metric is ok quickly by seeing if it’s red, amber or green.
Relatable – the metric is connected to the experience by a customer, partner or patient.
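A minimal sketch of how the “Comparative” and “RAG status’d” properties might combine for a metric like percentage link utilisation (the thresholds are illustrative assumptions, not our real ones):

```python
def rag_status(utilisation_pct: float, amber: float = 70.0, red: float = 90.0) -> str:
    """Map a comparative metric (e.g. % link utilisation) to red/amber/green."""
    if utilisation_pct >= red:
        return "red"
    if utilisation_pct >= amber:
        return "amber"
    return "green"

print(rag_status(45.0))   # green
print(rag_status(82.0))   # amber
print(rag_status(95.0))   # red
```

Expressing the metric as a percentage of capacity rather than raw Mbps is what makes a single threshold scheme like this reusable across products.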
We Monitored Everything But Not That Important Bit
Hi all, just letting you know about a lesson learnt from last weekend’s Mobile disruption incident: an internal service Mobile relies on had a dependency on a 3rd-party service endpoint that went into an error state. Unfortunately, we weren’t monitoring that as part of the service monitoring, and therefore we had a significant delay in detecting that downstream failure. We also didn’t have the Mobile service mapped as depending on the other internal service within PagerDuty, so even if an alert had fired, we wouldn’t have seen that incident bubble up from Identify Link into Mobile as the cause of the Mobile disruption.
CTO
Conclusion
I say in a lot of these blogs that you really need to understand the problem you are trying to solve; otherwise you end up wasting time and money, and causing more problems. It’s ridiculous that we have spent $1m a year on monitoring, yet we can’t predict or react to Major Incidents. There are gaps in the monitoring, incidents caused by the monitoring, and people not looking at the monitoring.
Also in predictable fashion, we are moving away from Datadog to Dynatrace which is supposed to be much cheaper. However, all the dashboards will have to be remade so there’s going to be lots of time wasted.
I’ve written blogs about how our CTO tried to change our release process and announced it on a “Town Hall” call with the entire department; then loads of teams told him it couldn’t be done, so he had to back down.
Then later, on another Town Hall, he tried to change the Change Control process, but wouldn’t back down when we told him it wouldn’t work. He claimed it would be a sackable offence if we didn’t follow it. Then a month later, he found out someone couldn’t turn a server on because of the Change Control process. He said it was “malicious compliance” and that it would be a sackable offence in future. Within a few months, nearly the entire process had been reverted.
Last week, he was talking about how we needed to move a Data Centre to a new location. He said he preferred to move to the Cloud since it is “inline with our strategic targets”. However, after having several meetings with the experts involved in the Data Centre, they decided the best solution would be to move to another physical data centre. The CTO didn’t like this because it wasn’t in line with their strategy, and he thought the move would be too slow.
Therefore, he took the executive decision to overrule them and demand they move to the cloud.
“Do I know we can move all servers to the cloud in time? No. But I was prepared to take the risk. I would rather make decisions and be wrong, than sit by and not make any”
CTO
It seemed strange to me to claim that moving to a physical data centre would be too slow, when by his own admission the move to the Cloud probably couldn’t be done in time either.
He then claimed that
“we have wasted enough time deciding what the plan should be; to move to the cloud or to move to a physical data centre”.
CTO
Isn’t this the worst of both worlds, though? He could have made the decision before any meeting was arranged. Instead, it sounds like they debated the decision and came to a conclusion, he told them he didn’t like their conclusion, and then he moaned that they had wasted time debating.
So they had meetings with the experts and concluded the data centre was the best option, but since the CTO loves the Cloud, he overruled them. What was the value of those meetings? And will the staff be motivated to do something they don’t believe in?
“I think I’m compassionate with my team. It’s what binds us together as a team. Otherwise we are a bunch of individuals.”
CTO
I don’t get how he can make these statements and not realise the hypocrisy. How can you be compassionate if you have shown no confidence in their opinions and decision making?
One of the latest buzzwords to be thrown around is “Customer experience”. My understanding is that it’s a focus on customer interactions, from awareness of the product to purchase. This covers brand perception, sales process, and customer service.
Customer Experience is shortened to the acronym CX, because using the letter X is always cooler. For some reason, we went a bit further and put a hyphen in there for good measure: “C-X Experience Centre”.
The weird thing is that it looks like a letter is missing, as if you are supposed to pronounce it like “sex”; and a Sex Experience Centre is a different thing entirely. Does the name even make sense? Expanded, it reads “Customer Experience Experience Centre”.
“The Customer-Xcellence Programme is all about putting our customers and users at the heart of everything we do. It directly supports our strategic priority of delighting our customers and partners. But we can only do that if we really put ourselves in their shoes and truly understand what day-to-day working life is like for them. By doing so, we can ensure the products and solutions we design, enhance and implement are directly informed by their experiences.”
We lost even more office space to create this C-X Experience Centre. Since most of us worked from home, they made the desks more spacious for those who did go into the office, then over time reassigned the freed-up meeting rooms to nonsense like this.
To make it more pretentious, we invited a local politician for the grand opening:
“The C-X experience Centre is a real gamechanger in how we immerse ourselves in the experiences of our customers and users.”
As far as I can tell, it’s just a few computers in a room decorated to look like a customer’s office.
“This will help everyone learn about the challenges our customers and users face, and how our solutions help them provide a better service.”
As well as showcasing our solutions to customers and key stakeholders, it will be used for:
onboarding new starters
supporting sales enablement training
launching and testing new solutions and products
“Thank you to the whole Customer-Xcellence team for turning this vision into reality – it will make such a difference in how we understand our customer’s and user’s challenges.”
Big changes coming from the Employee Forum. We are getting a variety of skin-tone plasters (band-aid) for the first aid kit. What sort of insane social justice warrior asked for that?
If anything, the default plaster is brown so us white folk need lighter ones.
This is the most extreme woke thing I have ever heard of. I don’t think we will beat it.
Skin Tone Plasters. A great shout and a big thank you for the lack of variety being highlighted to the Employee Forum. Aligning to our environmental credentials, the incumbent plasters will remain while they’re within a 3yr use-by lifespan. As we move to replace, this will be done with a wide variety of skin colour matching plasters.
I asked a friend what he thought of this:
Jack: I've heard of this before, so dumb
Me: Bet we get sacked for using the black plaster
Jack: Haha I would, just to make a point. Whoever came up with this has too much time on their hands, and whoever gets upset about wearing a wrong coloured plaster is a melt
Me: I should swap 'em with kids ones with cartoon characters on them
I do raise a good point there. If you did use the wrong colour plaster, would people get offended? What happens if you took the last dark-skinned one and someone who wanted it saw you?
How often do you need a plaster when you are in the office anyway? If the plaster is on a visible part of your body, isn’t its presence uncomfortable or embarrassing regardless of colour? I think the default brown one is probably a good compromise for all skin types, though I suppose modern ones can be white or transparent.
This surely has to be a case of a white person suggesting this, using their wokeness to raise an injustice against darker-skinned people, even though no dark-skinned person was actually offended. However, if you now take the plaster that is reserved for them, then they might be offended.
Do we have the same policy in regards to bandages? They are usually white too.
We had our daily “Stand-up” meetings at 9:45, where the team states what they did yesterday and what they aim to complete today. In our “Retrospective” meeting, where the team reflects on what went well and what didn’t over the last two weeks, and what improvements could be made, one developer questioned why we hold our Stand-up at 9:45 rather than 9:00.
Most people in the team start their day at 9:00, so doing a bit of work and then breaking for the meeting felt like a distraction. After thinking about it, the longest-serving team members said the time had probably been set because the previous Product Owner had several teams, and 9:45 was the best slot for them to attend. Then no one questioned it, so it remained.
I suggested that we start at 9:05, given that if we turn on our laptops at 9:00, we will end up being late. If team members request help at the start of the day, they often reply “let me get my coffee”, so it made sense to allow people to finish their breakfast and get a drink first. People seemed to think it was a good idea.
However, the next week, the meeting invite was set at 9:00. I questioned some of the team members.
[Monday 09:19] Me has Sam ignored my idea of starting at 9:05 instead
[Monday 09:20] Dennis must've
[Monday 09:24] Dean what was the 9:05 idea? can't remember that
[Monday 09:25] Me I said if we start at 9, it gives us a chance to get logged in, and you get your coffee then everyone was like, "yeah dude that is sick idea mate", and you kicked off about being falsely accused of wanting coffee
[Monday 09:26] Dean haha i don't fully understand i get logged in and get my coffee based on what time the call is T - 5 mins doesn't matter whether that's 9am or 9:07
Dean had previously criticised me for being late to a meeting that was arranged for 8:30 (before my 9:00 start), and I had missed another that was arranged with less than an hour’s notice. I felt like he was using this as another opportunity to question my attitude towards meetings.
On the third day, Dean said he was finding it really hard to get used to starting at 9:00 and might ask for the meeting to be moved back to 9:45. He finds it hard to wake up and be alert that early, and some days he is late because getting back from childcare is a rush and the traffic can be bad.
Quite interesting, given he had made out that he always starts working before 9:00 and is ready for any meeting five minutes before it starts. When it came down to it, he admitted he starts late some days due to childcare, and on other days he isn’t alert enough to work efficiently. So when our meeting was at 9:45, it seems he was never really working at 9:00 like he claimed, whereas I would have been ready at 9:05.
Recently, we were filling in our forms for the end-of-year performance reviews. We have tried all kinds of formats in the past, but have settled on something simple in recent years. It’s basically structured around the open questions “What went well?”, “What didn’t go well?”, “What have you learned?”, and “What would you like to learn?”.
Since we had just evaluated ourselves, it was a surprise to get an email directly from the CTO wanting us to evaluate ourselves again.
Hope you are well.
We are currently conducting an assessment exercise across our Portfolio to establish our strengths and areas for improvement, with the goal of growing our capability as a department and to drive to our long term vision of moving to development on our new product.
To facilitate this process, we have developed an assessment questionnaire that will help us understand your capabilities and your career trajectory.
Could you please complete this form by selecting the option that best reflects your current capability or skill.
It’s an unexpected email, it states urgency, and it contains a suspicious link: all the hallmarks of a phishing email. I waited for a colleague to click the link before clicking mine. Given that it asks similar questions to our performance review, as well as many others specific to our job role, why wouldn’t they just standardise the review process to gather this information?
Clicking the link loads up a Microsoft Form with the Employee ID and Name pre-filled in editable fields, but the question says “Please do not change this”. My name had double spaces in it, which was really annoying. What would happen if I did correct it? Does Microsoft Forms not allow non-editable fields? It seems a weird limitation regardless.
The questions were labelled with the following categories:
Delivery, Code Quality, Problem Solving, Accountability, Technical Proficiency, Domain Proficiency, Cloud Knowledge, New Gen Tech Stack Proficiency, Joined Up, Process and Communication, Innovation.
I really didn’t like the way the questions were written. There are 5 answers labelled A–E, but C is often written to sound like a brilliant option when you would expect it to be average. A and B just sound like behaviour reserved for the Architects/Engineering Managers/Principal Developers.
Given that the answers seem to map directly to job roles, it reminded me of those online quizzes that decide which TV character or superhero you are: you can easily bias your answers because you can see exactly where it is going. In this case, the assessment just seems like it is gonna rank you Architect, Expert, Senior or Junior based on your answers.
Some of the wording for the lowest answers seems like a strange thing to admit.
“Only engages in innovation efforts when directly instructed, showing a complete lack of accountability.”
Why would you admit to showing a complete lack of accountability? Most people probably don’t “innovate” but selecting an answer with “showing a complete lack of accountability” seems crazy.
So given that some answers will never be selected because they are difficult things to admit, and others clearly map to job descriptions, people will just select answers based on what they SHOULD be doing rather than what they ACTUALLY do. That makes it a pretty pointless survey. There is also a bias in that it was sent out during the review period, so people suspected it would be used to decide pay rises and promotions rather than just some team reshuffle.
This one on Code Quality is weird because B and C seem similar in standard, but when you read D, it sounds like you are admitting to being an incompetent Software Developer.
Code Quality
(cq.a) Established as code guru and plays a key role in shaping optimal code quality in the team through effective reviews, acting on insights from tools, identifying and resolving inefficiencies in the software and process.
(cq.b) Effectively uses insights from tools like Sonarcloud and influences team quality positively by enforcing standards and showing an upward trend of improved quality and reduced rework.
(cq.c) Upholds the highest standards of unit testing, coding practices, and software quality in self-delivery and ensuring the same from the team through effective code reviews.
(cq.d) Rarely identifies refactoring opportunities, misses critical issues in code reviews, and struggles to positively influence the team's approach to code quality.
(cq.e) Engages minimally in code reviews, allowing issues to slip through; unit tests are skipped and/or yet to begin influencing the code quality of the team.
This one seems applicable to only the top people, or ones that love the limelight and want attention from the managers.
Joined-up
(ju.a) Designs personalised learning paths for team members to ensure comprehensive skill development.
(ju.b) Takes ownership of training needs, seeking opportunities for personal growth. Takes the initiative to identify advanced training opportunities for skill enhancement.
(ju.c) Demonstrate robust team communication, encourage team to contribute in weekly Lunch and Learn sessions, actively recognising peers, support juniors wherever needed. Be active in recruitment.
(ju.d) While excelling as an individual contributor, there is an opportunity to engage more with team members by sharing ideas, seeking input, recognition and offering support in team/organisation initiatives.
(ju.e) Need to start taking on a mentoring role by sharing knowledge, providing guidance, and offering constructive feedback to the juniors to help them grow and succeed.
I think it is difficult to make meaningful surveys or assessments, but you need to put some thought into the value and the accuracy of the results.