Remember when people used to know what they were doing?

Remember when people used to know what they were doing? Those were the days.

“what concerns me the most is that there was a time where everything almost worked like clockwork and now it seems like more ruins every day”

Software Architect

“I am more surprised when something works”

Me

We used to be a company full of smart people, working effectively. Now we work slowly, people cut corners and do incredibly dumb things. More recently, people don’t even think for themselves because they ask AI what code to write. Sometimes it’s absolute rubbish, but they never review it themselves, so it really is zero thought. You point out that it’s not going to work, and they respond with an overly polite message, clearly written by ChatGPT, which just adds insult to injury.

So it’s like developers don’t even develop because AI does it. Then they don’t do any dev-testing. Then the Testers don’t know what they are doing either.

Recently Testers have been installing our software on the application servers.

Even though one of the Lead Testers has been posting angry rants about it, it keeps on happening. The Lead Tester’s points were that it’s not representative of live, and that it takes up RAM and processing time and lags out the app server for everyone else.

I don’t get where people got the idea to install the client on the app server and remote on. You can’t seriously think that’s the official process. The servers were always configured to allow only two people on at once, so it’s not as if the entire department could log on to test even if it were.

I just hate what this company has become. I feel like it’s just gonna keep getting worse with managers constantly encouraging people to use AI.

Innovation shambles

Recently, managers decided that every few months we should have an Innovation Week. The idea is that you can work on ideas that improve our work processes or even add a new feature to our products. However, in my opinion, one week is a bit limited for actually getting something complete.

To be efficient, we really need to come up with a good list of ideas before the week starts, otherwise it cuts into the week itself. Some people did submit ideas beforehand, and others on the day.

The initial meeting quickly became a bit of a shambles. Paul had created a Miro board under a different account that the attendees didn’t have write permissions for. Even when we clicked the link to request access, and Paul claimed he had approved it, it still didn’t work.

He then tried creating a different board, but that didn’t work either. To avoid wasting further time, we just posted ideas into the Microsoft Teams chat, which he then transferred onto Miro.

Since the ideas were essentially just titles on the board, people were supposed to explain them, but I don’t think many explained theirs very well. We probably needed some kind of formal process to:

  1. describe the problem,
  2. propose ideas on how to solve it,
  3. list the pros and cons,
  4. note any possible costs such as software licences,
  5. identify prerequisites for investigating or implementing the idea.

Another thing that was missed is that you have to have accounts to use many of the AI tools, and AI was a focus of this round of innovation. A lot of software needs a special licence for commercial use, and we weren’t advised how to acquire licences. We had GitHub Copilot and Office Copilot, but what about other AI tools?

One guy apologised for misunderstanding that the ideas should be process improvements, when he had come up with an idea for a feature of our software that our users would use. Paul said he hadn’t misunderstood at all and that we could suggest either process improvements or new features… but that’s not what the Miro board said. It was only for process improvements, and so all but one idea was a process improvement.

We needed to assign our names to them, so initially Paul tried to create a spreadsheet, but he couldn’t work out how to share it so we could all edit at the same time. He ended up pasting the ideas into a Microsoft Teams “Whiteboard”, which I had never used before but which looked similar to the Miro boards.

There were loads of ideas, but many were of debatable value. However, like I said, we never discussed them effectively. Without knowing the pros and cons or prioritising the business value, there were plenty of ideas that definitely weren’t strong enough. So with such a large list, it was hard to pick something to work on. Some of them would need more than one person, but what guarantee is there that the team will be full? Less likely when the list is so big.

So I asked whether we should only put our name against one item, or vote for several so we could see which teams were full and then approve the full ones. Paul said to vote only once, otherwise it would look like teams were full when people would end up dropping out if another of their votes was successful. I suppose that’s a good point, but voting only once means you could be the only person to vote for a team project, so you would have to choose something else anyway, or gamble and go it alone.

With most people finally assigned (and many just disappearing, presumably to slack off), many going solo, and some teams probably having more members than required, we were told to communicate with our team members.

I was in a team of three, but I thought the ideal team would just be a pair. I waited for 30 minutes or so, but the guy who came up with the idea hadn’t contacted me, and you would assume he would take the team leader position.

I then took the initiative and created a group chat with my two team members, and after another hour and a half, I finally got a response from one person, who asked how we should begin to plan. I responded with the notes I had created to set the scene. He suggested one extra point for my notes, and then I didn’t hear from him for the rest of the day. The other team member didn’t respond at all.

The next day, my manager contacted me and said I was assigned to help finish a project that was behind schedule, so my “innovating” had come to an end.

Absolute shambles really.

Cutting Corners With AI

I’ve written about how Large Language Model AI is great for generating text but not necessarily accurate when it comes to facts, because it’s essentially just plugging in grammatically correct words. When people don’t understand this, there can be some embarrassing failures, like lawyers producing case numbers that don’t exist.

A scary thought is students getting expelled for plagiarism because an AI was asked to be the judge and falsely claimed credit for their work.

There was a report of a teacher who seemed to think that running portions of a student’s essay through ChatGPT would allow ChatGPT to tell them whether the AI wrote the paper. ChatGPT lies and will just give you an answer that may not be true!

https://x.com/paleofuture/status/1658227685548785665

Students need to be aware that if they use AI, then teachers can too.

Recently, we were using a third-party API and realised it was accepting a message that it should reject, because the content wasn’t signed with the correct digital signature. We had previously been using the Sandbox environment, which we assumed had basically no validation, but we were then sending the same message to the Integration environment and expected a Rejected status.
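A rough sketch of the check we expected to fail is below. The endpoint path, header name, payload shape and status field are all made up for illustration; they are not the real API.

import requests  # assumed HTTP client

INT_BASE_URL = "https://api.example.com/int"  # hypothetical Integration environment

def submit_order(base_url: str, payload: dict, signature: str) -> requests.Response:
    """Send an order message along with its digital signature header."""
    return requests.post(
        f"{base_url}/order",
        json=payload,
        headers={"X-Signature": signature},
        timeout=30,
    )

# The same message that Sandbox happily accepted, but signed incorrectly
payload = {"orderId": "TEST-001", "items": []}
bad_signature = "EXAMPLE_SIGNATURE_FROM_THE_DOCS"

response = submit_order(INT_BASE_URL, payload, bad_signature)
status = response.json().get("status")

# In Integration we expected signature validation to be enforced,
# so anything other than Rejected is the behaviour we reported.
assert status == "Rejected", f"Expected Rejected, got {status}"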

So we logged a helpdesk ticket with them and quickly got a reply. It was a long-winded answer that basically agreed with us that it was incorrect behaviour. The wording was clearly AI, but at least they admitted they had just used AI.

It might have been the case that their technical staff were unavailable so a manager made an attempt to deal with it without actually reading or understanding the problem at all.

I have just ran this through the chatbot and have had the following response: The details provided do not specifically mention the behavior of the Order endpoint in relation to the signature from the example request, especially in the context of different environments such as INT. However, it is common in software development to have different configurations for different environments such as development, testing (often referred to as int for integration), staging, and production.   

In many cases, the integration or testing environment is configured to be more permissive to facilitate testing and debugging, which might include accepting a static example signature for all requests. This would not be the case in a production environment, where security and proper verification of digital signatures are critical for ensuring non-repudiation and the integrity of the order. 

Based on standard practices, if the order endpoint is accepting a specific example signature in an integration environment, it could be by design for testing purposes. However, for a production environment, this would be a significant security risk and would not be by design. 

Since the provided documentation does not explicitly confirm the intentional behavior for the order endpoint in the int environment, it is advisable to refer to the specific implementation details or consult with the development team responsible for the API to understand the intended behavior.  

Can you confirm if this answers your query, please? 

AI Art 

The debate about generative AI for images is an interesting one, because it’s clear it can easily take work away from human artists. A few years ago, when AI was a bit inconsistent and drew obvious errors like humans with extra or missing fingers, you couldn’t use these images in a professional context without editing them, but then maybe you would need to hire someone with those editing skills to fix them.

With how creative these AI models can be, the likes of JimllPaintIt are fearing for the future. Images can be generated in a famous artist’s style, so what happens when people can just generate ones in the style of JimllPaintIt?

In a now deleted thread, he stated:

“My attitude towards AI “art” has – in a short space of time – gone from mild disinterest to acute irritation to absolute despair for the future of humanity. The most depressing thing is seeing artists embrace it. Talk about turkeys voting for Christmas.”

JimllPaintIt

Some others raised a good point, that the person typing the prompts still needs to be creative:

“The irony I have seen so far is that the best results from it come from talented artists. I don’t think it’s the awful thing you think it is. Talent is still needed for vision. I think it just opens up art to more people who have vision but not necessarily the physical skills.”

The animator Cyriak then chimes in:

I’m sure musicians have great record collections as well. The idea that “skills” and “talent” are magical properties some people are born with is rubbish. “talent” is just being bothered to keep trying, and skill accumulates as you keep trying.

Cyriak

I don’t think that’s correct. It’s more like a combination of what you are born with and learned skill (nature and nurture), as someone else points out:

In that case, if you kept practising you could run faster than Usain Bolt? or is he just naturally faster than you?

Matt_Francis

“I don’t draw pictures by running in a straight line with pencils tied to my shoes. I’m not sure anyone does”

Cyriak

I’m not sure what Cyriak’s response even means. Is he saying it’s a completely different skill, so art comes from practice but physique is natural?

People keep talking about how AI will take away Software Developers’ jobs, but at the moment I think it can be used to take away some of the tedious aspects and give a good starting point (boilerplate code) to then enhance with your skills. You also need to understand how to ask the AI to realise your vision. I think there are comparisons in the art world, but it’s easier to understand how artists’ jobs are impacted more directly, i.e. hiring an artist for one (or a few) images when you can use AI, versus hiring a developer for a few weeks to get a fully working program or website.

GitHub Copilot

We recently had staff from GitHub do a presentation on how Copilot can be useful to Software Developers. I found their answers to be a bit wishy-washy. It’s a really complex topic, and having what I think were essentially sales managers trying to pitch something technical to us was a challenge. They didn’t have a full understanding of how it actually worked.

Someone asked to clarify whether Copilot just looked at your open documents, or whether it had permission to see all the other files in your repository. Their answer was vague, along the lines of “it might do. Could come down to chance“.

For it to be effective, it really does need to look at your codebase to see what your product does, what features are already developed, and for bonus points, your coding style.

When it suggests calling third-party code and installing additional libraries, does it understand that you may need to abide by a certain licence (pay a fee, or not use it in open source, etc.)? Does it know that you may be limited to a certain version due to other dependencies? And when features and the API (required parameters etc.) can change drastically between versions, does Copilot understand that?

It’s probably the same scenario Wolfram Alpha were talking about when they came to our company to do a presentation on AI. They were emphasising how standard language models often suggest text that reads like it makes sense but is actually nonsense. They gave an example where the model cited a real journal from a particular country, stated the title of a chart that exists, and quoted some figures and years – but the figures were fictional.

I saw a news article about how a lawyer presented some documentation to a judge about similar cases, but it turned out the lawyer had used ChatGPT and it had made up the case numbers and years.

The way those models work is that they know some related words and sentence structure, but the likes of ChatGPT doesn’t understand that something like that needs to be accurate and that you can’t just make stuff up. So Wolfram were saying their plugin can be combined with ChatGPT’s conversational structure to plug in actual figures and make accurate essays. TEAMWORK.

I would imagine there’s a good chance Copilot has exactly the same issue. It knows a bit of structure and slaps in the correct programming language, but it has no idea that what it’s suggesting comes from a different library version than the one you are actually using.
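A hypothetical illustration of that kind of version drift (not something Copilot actually produced for us): pandas removed DataFrame.append() in version 2.0, so a suggestion learned from older code simply won’t run on a current install.

# Hypothetical example of a version-mismatch suggestion.
import pandas as pd

df = pd.DataFrame({"name": ["Alice"], "score": [10]})
new_row = pd.DataFrame({"name": ["Bob"], "score": [12]})

# A suggestion in the style of pandas 1.x; raises AttributeError on pandas >= 2.0:
# df = df.append(new_row, ignore_index=True)

# The equivalent that works on current versions:
df = pd.concat([df, new_row], ignore_index=True)
print(df)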

From what I have seen of Copilot, it is very impressive. It does often give you code that doesn’t quite compile, but it provides a good template and inspiration for how to progress.

In the past I have seen people blindly copy code from the internet, or just do what a colleague suggests without actually thinking about it. I think we are gonna be seeing this more from now on, but it’s gonna be the AI’s fault.

I am not against AI in programming because it can speed up development in certain tedious areas, but it always comes down to the idea that the best programmers are ones with a certain mindset of quality, and I think AI is gonna produce more developers with the wrong mindset because it’s about speed and cutting corners.

I’ve heard people suggest that the next wave of developers could be so dependent on AI that they will be unable to come up with a solution when the AI doesn’t get it right.

The Troubling Rise of Artificial Intelligence: A Mirror of Modern Ideologies

Artificial intelligence is becoming increasingly prevalent, no longer confined to science fiction. In some industries it may just enhance minor aspects, but in others, it is replacing processes. AI programs today can generate realistic deep-fake videos, simulate voices, create images, write essays, and even compose poems or songs.

While these creations are (mostly) not yet indistinguishable from reality, they are convincing enough to deceive many. Major companies are integrating AI into their operations, often at the expense of human jobs. Customer support has been partially replaced by chatbots. Websites are now flooded with AI-generated articles, and the likes of X are full of bots that use AI to repost a summary just to farm engagement.

The influence of AI on our future is undeniable. Even this blog was partially written by AI. I stole a transcript from a YouTube video that made some good points, and turned it into a blog using AI. Deleted some parts, rewrote others, shuffled paragraphs around, and mixed in some of my own opinion and information from other sources. 

One thing to always keep in mind is how the creator’s ideas can then become a bias in the algorithms. This was seen in Google’s Gemini, which severely favoured political correctness over accuracy.

Like many discovered weaknesses, it was exploited by people for humorous effect. People asked Gemini to make an image of the Pope or a World War II German soldier and got a dark-skinned version. Even the founders of Google were turned into Asians. The bias was so extreme that the only reliable way to get white people was to ask the AI to generate absurd racial stereotypes, which would then be switched for a white person. This could be abused by asking for a person doing a mild stereotype: it would instinctively choose to draw a dark-skinned person, whereas an unbiased algorithm might have just defaulted to white.

So using Gemini, you couldn’t get a historically accurate image of most people, and could only create images of white people if it was in jest.

It was so bad that you couldn’t even ask for white actors, because that isn’t inclusive. But you could ask for only black actors, because that is inclusive.

When it came to impersonating celebrities, it made out that it couldn’t impersonate people who held right-wing views or voiced controversial opinions, but it had no problem with left-wing opinions.

In a now-deleted thread, one of Gemini’s creators had posted several tweets about left-wing politics, systemic racism, and white privilege: all the usual phrases of woke activists.

Young people in particular will use AI for information, potentially at the expense of critical thinking. As AI becomes more integrated into education, media, and social platforms, it has the power to influence societal narratives, political opinions, and perceptions of history. This raises questions about the ethical responsibilities of AI developers and the potential consequences of unchecked technological influence. Prominent figures like Elon Musk have voiced such concerns.

As AI continues to be a prominent part of our lives, it’s important to understand any underlying bias, and any limitations of what it can and cannot do.

AI-Generated Games: Cutting Through the Hype

I watched this video and found it incredibly interesting.

Explaining the Rise of AI Generated ‘Games’ | AI and Games #78

One fascinating development has been AI-generated game simulations—models trained to generate video game frames based on existing gameplay footage. These systems often resemble playable versions of classic titles like Minecraft or Doom, but in reality, they’re more akin to interactive videos than fully functioning games.

While visually impressive, these AI-generated experiences are fundamentally limited. They operate by predicting the next frame, rather than running a dynamic game engine. This means they perform well under predictable conditions—opening doors, shooting enemies—but can quickly “hallucinate” inconsistencies when a player moves unpredictably, leading to bizarre visual errors.
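To make the distinction concrete, here is a toy sketch, purely illustrative and not how any of these models are actually implemented, contrasting a persistent engine state with frame-by-frame prediction:

# Toy contrast between an engine update and frame prediction -- illustrative only.

def engine_step(state: dict, action: str) -> dict:
    """A real game engine mutates a persistent world state, so facts can't drift."""
    if action == "open_door":
        state["door_open"] = True
    return state

def model_step(recent_frames: list, action: str) -> list:
    """A generative model only predicts the next frame from a short window of
    recent frames; anything outside that window can be 'hallucinated' back
    inconsistently."""
    next_frame = f"frame predicted from {len(recent_frames)} frames and action '{action}'"
    return recent_frames[1:] + [next_frame]

state = engine_step({"door_open": False}, "open_door")
frames = model_step(["f1", "f2", "f3"], "open_door")
print(state, frames)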

Despite their novel appeal, these AI-driven simulations are not the future of gaming. The attention they’ve received is largely driven by press releases from AI companies eager to position themselves at the forefront of technological innovation. While these projects highlight advancements in AI model sophistication, they’re far from replacing traditional game engines.

There are three key reasons why these systems are unlikely to make a lasting impact on game development:

1. Consistency Issues

   These AI models generate frames based on prior footage, rather than maintaining a persistent world state. This can result in sudden shifts in location or enemies appearing out of nowhere, making gameplay unpredictable and unreliable.

2. Data and Hardware Limitations

   Training these models requires vast amounts of gameplay data—often spanning years of recorded footage. Additionally, they demand immense computational power, making them incredibly resource-intensive and requiring a prohibitively expensive graphics card. While players love running Doom on unconventional devices, this AI-driven Doom may be the most expensive version yet.

3. Game Stability and Evolution  

   Unlike traditional game engines, these AI models depend on static, unchanging titles for training. They can’t adapt to evolving game mechanics or updates, making them ill-suited for modern game development, which often sees drastic iteration throughout production. Older games like Doom, or the more recent Bleeding Edge, have ceased development, making them good choices for this research.

As these AI-generated game simulations continue to emerge, it’s important to approach them with a degree of skepticism. While industry figures like Phil Spencer and Satya Nadella may speculate on AI’s role in game preservation, the real question is: what do the creators of these systems say? The truth is, they aren’t making bold claims about AI reshaping game development—because they understand the inherent limitations.

Video game creation and development remain largely unaffected by these experimental AI-driven projects. Even as generative AI becomes a hot topic, separating hype from reality is crucial. These systems don’t threaten traditional gaming, but they do provide a fascinating glimpse into AI’s capabilities—and its current shortcomings.

Sheilas’ Wheels Renewal

I always hate talking on the phone, and I think that some companies’ policy of “auto-renew unless you phone up to cancel” takes advantage of that mindset. My Home Insurance is with Sheilas’ Wheels, and they auto-renewed last time at an increased cost (even though I would have another year’s “no claims” bonus, so I would expect a discount).

It was approaching the renewal date, and I thought I probably needed to call them to cancel. However, I received a letter:

“Due to our renewal criteria, we’re unable to renew your policy”.

No explanation given. Very weird.

I went on their website, and my account showed that it was set to auto-renew. There was a message saying that it would auto-renew, and there should have been a link to see my new quote – but the link was disabled.

So is it set to renew or not?

I used their live webchat, and after “checking my account” for five minutes they confirmed it wasn’t set to auto-renew. So I told them that it showed as auto-renew on my account page, and they then disabled it for me. I thought it was weird that they were so happy to disable it and not try to keep me for another year. Don’t they want my business? I then asked why I couldn’t renew, since I still had no idea what the problem was.

Thanks for looking to renew with us, it’s great to hear you want to stay with us for another year. We won’t be able to renew your policy this year. This is due to an upgrade of our internal system and we’re sorry that this has impacted you. You can visit a price comparison website to get a new policy quote. Your cover details and how much you paid this year are available in your insurance schedule in your Account to help you create a like-for-like quote and compare your price. You can also be assured that a new esure quote will not be higher than any renewal quote we would have offered you at this time. Thanks again for choosing esure. We can’t wait to see you return.

How bonkers is that? They cannot renew existing customers in their new system, and they are confident that if I go to a price comparison website, their prices will be competitive enough that I might come back. Surely they have to pay a commission to the price comparison websites, and there’s a good chance I won’t come back.

As it goes, I went to a price comparison website and didn’t even see Sheilas’ Wheels/esure on there, so they have lost me as a customer.

Recently, I have been making jokes with my colleagues about how badly we write our “User Stories”. I joked about some user stories for Sheilas’ Wheels’ new computer system:

As an existing customer,
I don't want my policy to be renewed
because the system has been upgraded

As an existing customer
when I ask to renew
I want to be told to take my business elsewhere

I’ve also been playing around with AI (Bing Copilot) quite a lot recently and keep testing it out.

 

I gave it the instruction:

“I contacted my Home Insurance company about why I couldn’t renew with them. Their response was <insert quote from above>. Write about how ridiculous it is that upgrading a system means that they are unable to renew their policy.”

Funny – Blog Post – Long

In the whimsical world of insurance, where logic often takes a backseat to the mysterious workings of "system upgrades," we find ourselves in a delightful conundrum. Picture this: you're cozied up with your favorite insurance policy, dreaming of another year of uneventful coverage, when suddenly – bam! – you're hit with the "it's not you, it's our internal system" breakup line.

It's almost comedic, isn't it? The idea that a company, entrusted with the oh-so-serious business of insuring our worldly possessions, can't figure out how to keep a customer due to an upgrade. What are they upgrading to – a system that only accepts carrier pigeons? Are they swapping out their computers for abacuses? One can only imagine the boardroom conversation: "Folks, we've got two options – renew these policies, or play with our shiny new system. I say we go for the shiny!"

Let's not forget the helpful suggestion to visit a price comparison website. It's as if they're saying, "We've enjoyed taking your money, but why don't you go see if someone else wants a turn?" And the assurance that a new quote won't be higher than the renewal quote they would have offered is the cherry on top of this farcical sundae. It's like a restaurant saying, "We can't serve you dinner tonight, but rest assured, if we did, it wouldn't have been more expensive than last night's meal."

The silver lining? This situation provides a perfect opportunity for a humorous blog post or a stand-up comedy bit. Imagine the laughs you could get with lines like, "I tried to renew my insurance policy, but my company was too busy updating their system to take my money. I guess I'll just wrap my house in bubble wrap and hope for the best!"

In the end, while the situation may be frustrating, it's also a reminder that sometimes, life throws you a curveball that's so absurd, you can't help but laugh. And who knows, maybe that laughter is the best policy of all. So here's to system upgrades and the unexpected comedy they bring into our lives – may they always be as entertaining as they are inconvenient.

AI Writes Woke Blogs

I’ve recently written about how my employer has been promoting “woke” issues and making out that we need to combat discrimination in the workplace even though it’s not a problem where we work. 

Some colleagues have written blogs about struggles they have had in life based on various issues, be it gender, ADHD, Autism, Depression etc.

I was joking with a colleague about how easy it is to claim you are being discriminated against. I’ve written blogs about how I’ve been overlooked for promotions and pay rises multiple times even though my peers have often considered me a high-performing individual. Sometimes I consider how I can be shy and non-confrontational and don’t suck up to managers, so I often put it down to that. I could easily see how someone in that situation could claim it was because of their race, religion, gender or health status. You could also claim it was about the way you look or how old you are.

My colleague also gets overlooked for promotions and has never reached the rank of Senior, even though he is probably around 50 years old. He is incredibly tall, so he asked the AI to generate him a blog post about the struggles of being a person of height. Then he tried it from the opposite perspective, and then as a “white person of colour”.

Here is what it came up with:

### A Day in the Life at Altitude: Reflections of a Person of Height

#### Morning Musings: The Highs and Lows 

My mornings commence with what I’ve affectionately dubbed “The Great Duck,” a necessary dance with my environment to avoid the ever-threatening ceiling fan and the shower that seems to mock me with its disdain for elevation. This daily ritual, though humorous at times, is a vivid reminder of the mismatch between my height and the world’s standard measurements. It sets the tone for the day—a day where my stature both defines and challenges me in equal measure.

#### The Commuter’s Waltz: A Tall Order

Navigating public transport as a person of height is an exercise in humility and patience. With overhead straps laughingly out of reach and seats seemingly designed for a different species, my commute often feels like a puzzle where I don’t quite fit any of the pieces. Yet, there’s a certain grace to be found in this daily ballet, a reminder of the adaptability and resilience that have become second nature.

#### Workplace Woes: Above the Crowd

At work, my vertical advantage becomes a public commodity. While I’m all too aware of my ability to retrieve objects from lofty places, there’s an unspoken expectation that I should always be willing to do so. The truth is, the novelty of being the resident retriever wears thin, especially when it overshadows my professional contributions. And so, the search for a workspace where I’m not just seen as a tall figure but as a valued team member continues—a quest for a place where my height is incidental, not defining.

#### Lunchtime Lament: Seeking Common Ground

Lunchtime is no escape from the day’s taller challenges. The act of finding a seat where my knees aren’t pressed against the underside of a table is a daily trial. But it’s in these moments of shared struggle that I find unexpected camaraderie with others of similar stature. There’s comfort in mutual understanding, in knowing that you’re not alone in your lofty perspective.

#### An Evening Plea: A Shelf Too Far

As the day winds down and I venture into the supermarket, my height becomes a beacon for requests. “Could you just grab that from the top shelf?”—a question I hear all too often and one that I wish would cease. It’s not that I mind helping others; rather, it’s the assumption that I’m always available for such tasks that weighs heavily. I long for a shopping experience where I’m just another customer, navigating aisles without the added expectation of being a human ladder.

#### Reflections from Above: Life as a Towering Presence

This blog has become a sanctuary where I share the nuances of living as a person of height. It’s not just about the inconvenient light fixtures or the cramped seats; it’s about navigating a world that doesn’t quite fit. Each day is a lesson in adaptation, a testament to the human spirit’s capacity to find humor and grace in the face of constant reminders of difference.

Being tall is an integral part of who I am, but it’s not my defining feature. I am more than my ability to reach high places; I am a person with aspirations, thoughts, and feelings that soar even higher than my physical stature. And if there’s one wish I could have granted, it would be to navigate a day without the expectation to retrieve items from the unreachable realms of supermarket shelves. To those who see me in the aisles, know that I, too, am there to shop, not to extend my reach into the upper echelons of grocery goods.

In sharing these experiences, I hope to offer a glimpse into the life of a person of height, with all its trials and triumphs. It’s a journey of learning to stand tall, not just in stature but in spirit, embracing the view from above while staying grounded in the reality of everyday challenges

### A Day in the Life of a Vertically Challenged Person

Life from a different perspective can bring its own set of challenges and rewards. Being vertically challenged, or shorter than the average height, means navigating a world that seems designed for taller people. This blog explores a day in the life of someone who views the world from a lower vantage point but stands tall in spirit and resilience.

#### Morning Routine

The day begins with the usual morning scramble. For someone vertically challenged, this might involve a step stool strategically placed in the kitchen. Reaching for the top shelf where the cereal is stored or the cupboard for a favorite mug is the first small victory of the day. It’s a reminder that even the simple act of making breakfast requires a bit of ingenuity and adaptation.

#### Commute and Work

Next comes the commute. Public transportation can be a battleground, with overhead handrails just out of comfortable reach and crowds that seem to swallow you whole. Yet, there’s a certain skill in weaving through the masses, finding those small openings that taller people might overlook.

At work, the office environment poses its own challenges. The ergonomic design of desks and chairs often forgets those of us who can’t quite touch the ground with our feet or who need to adjust monitors lower to avoid neck strain. Yet, these challenges are met with personalized solutions, from footrests to keyboard risers, showcasing the creativity that comes from having to constantly adapt to a world not designed for your stature.

#### Social Interactions

Social interactions can be a mixed bag. There’s the inevitable moment when someone comments on your height, thinking it’s the first time you’ve heard that joke or observation. But there’s also the camaraderie found in meeting someone who shares your perspective, both literally and figuratively. Conversations often start with shared experiences of being vertically challenged but quickly move beyond height, to deeper connections and friendships.

#### Shopping and Errands

Shopping is an adventure, with clothes that seem to cater to a taller clientele and shelves stocked just beyond reach. But there’s a sense of achievement in finding that perfect outfit that fits just right, or the last item on the grocery list tucked away on a high shelf, secured with a bit of determination and perhaps a helpful fellow shopper.

#### Leisure and Self-care

Leisure time and self-care are tailored to suit personal comfort and accessibility. Exercise equipment is adjusted, and books and devices are positioned for easy access. It’s a time to stretch out, literally and figuratively, embracing hobbies and activities that bring joy and relaxation.

#### Reflections

As the day winds down, it’s a time for reflection. Being vertically challenged in a world built for the average height has its frustrations, but it also instills a sense of resilience, creativity, and a unique perspective on life. It’s a reminder that while we might not see eye to eye with everyone, we can stand tall in our own way, embracing the challenges and celebrating the victories, no matter how small they may seem.

### Conclusion

A day in the life of someone vertically challenged is filled with small adaptations, challenges overcome, and the constant reminder that perspective is everything. It’s a life viewed from a different angle, rich with experiences that remind us that it’s not the height that defines us, but how we choose to navigate the world around us

Navigating the Nuances: Mental Health in the Age of Wokeness

In today’s rapidly evolving cultural landscape, the conversation around identity, race, and mental health has never been more complex. As someone who identifies as a white person of color, the intersectionality of my identity often places me in a unique—and sometimes challenging—position, especially in the context of the current discourse on wokeness. This term, originally rooted in a genuine and necessary awareness of social injustices, has morphed in various ways, impacting many, including myself, in deeply personal ways.

### The Weight of Wokeness

The drive towards a more aware and inclusive society is something I wholeheartedly support. The ideals of equality, understanding, and respect are pillars that should underpin any progressive community. However, the journey towards these ideals is fraught with complexity and, at times, contradiction, particularly for those of us whose identities defy simple categorization.

As a white person of color, I find myself navigating a maze of expectations and assumptions. On one hand, my white heritage seems to place me in a position of privilege—a fair assumption in many societal contexts. On the other, my connection to a minority group imbues me with a personal understanding of racial challenges and injustices. This duality often makes me a target for criticism from all sides, with my identity being scrutinized and, at times, invalidated by those who view the world through a strictly binary lens of race and privilege.

### The Impact on Mental Health

The constant need to justify my identity and experiences takes a significant toll on my mental health. The feeling of not fully belonging to any one community, combined with the pressure to always say the “right” thing, is isolating. It fosters a sense of alienation and anxiety that shadows many of my social interactions. The fear of misspeaking, of inadvertently offending, or of not meeting someone else’s standards of “wokeness,” creates a pervasive sense of walking on eggshells.

This environment, where every word and action is heavily scrutinized, exacerbates feelings of anxiety and depression. The mental exhaustion of constantly navigating these social minefields, of trying to align with a set of ideals that sometimes seem at odds with the complexity of human identity, cannot be overstated.

### Seeking Balance and Understanding

Finding balance in this charged atmosphere is an ongoing process. It involves setting boundaries for engaging in discussions about race and identity, prioritizing self-care, and seeking out supportive communities that recognize and respect the complexity of individual experiences. It also means acknowledging my own feelings of confusion, frustration, and exhaustion as valid responses to a complex social dynamic.

One of the most significant steps towards mental wellness has been learning to embrace my unique perspective. Recognizing that my voice, informed by a diverse background, has value in conversations about race, identity, and social justice. It’s about finding strength in my nuanced understanding of these issues and using it to foster empathy and inclusivity.

### Moving Forward

As we navigate the complexities of identity and social awareness, it’s crucial to remember that the human experience is diverse and multifaceted. Embracing this diversity, with all its challenges and contradictions, is key to building a truly inclusive society. For those of us caught at the intersection of various identities, finding a way to voice our experiences without fear of judgment or backlash is essential for our mental health and for the broader conversation.

In the end, the goal should be to create a space where all voices are heard, where the complexity of human identity is acknowledged and celebrated, and where understanding and empathy prevail over divisiveness. As we strive towards this ideal, let us remember to approach each other with kindness and an open heart, recognizing the profound impact our words and actions have on the mental well-being of those around us.

Closing Thoughts

It’s crazy how good AI is at writing blogs. It seems to have the content and structure, sometimes coming up with good puns, and seems to perfectly emulate a victim mentality.

Humane AI & Rabbit R1: What Are These Companies Hiding?


Dave2D made a video covering two AI assistant devices, the Rabbit R1 and the Humane AI Pin, which are both launching very soon. He was suspicious because he wasn’t aware of anyone getting a review unit to promote them, and the existing marketing materials have been a bit vague or suspiciously misleading.

It reminds me of when there was some controversy over a computer game that wasn’t sent out for review before release, which caused suspicion that it was going to be released in a broken state; if the company really believed in it, they would send it out for review to be praised. Restricting reviews seems like an admission that the product isn’t good, or isn’t what it seems.

The examples they have shown are that you can use voice commands just like an Alexa: book trips, bring up nutritional information for food by recognising what it is, and get live translations. You don’t need your phone, and you don’t have to launch any apps.

The Rabbit R1 is the more popular of the two (100,000 units on preorder) and is a $200 device featuring a cute design, small screen, camera, analogue scroll wheel, speaker and button; very simple. The Humane AI Pin is a $700 device that also needs a $25 monthly subscription. Instead of using a screen, it projects the image onto your hand, so you clip it to your clothes and hold your hand out to view it. It’s a neat party trick, maybe a little gimmicky.

In the marketing, both companies deflect questions with AI buzzwords and very specific examples. It might be the case that they just do a subset of what your phone can do, but worse.

Personally, I never understood what most wearable tech like the Apple Watch was for. In that example, it seemed to be sold on the idea that you no longer had to take your phone out of your pocket to check a message, which seems a thin value proposition for a device that costs hundreds.

A mobile phone is more versatile and often more private, because it’s easier to hide the screen. If a device relies purely on voice, people can overhear, it causes a scene, and it could be hard to hear in a busy environment. The projection idea might even be difficult to see in certain lighting, although it might be easier to hide from others’ view, even if you look a bit weird using it.

Since these devices can’t do everything your phone can do, you still need your phone with you. Certain devices, like the MP3 player, became obsolete when smartphones were widely adopted because people didn’t want to charge and carry two devices around… or most people, anyway; I don’t like the idea that listening to music is draining the battery I might need for phone calls and text messages. I listen to a lot of music, so I like to keep the devices separate.

So back to these two AI devices: a good question is “why isn’t this just an app?” Google already has Google Assistant that you can talk to, although I have never tried it out. With how AI has progressed, the likes of Bing give you easy access to quick searches, summaries, image generation and so on, so it sounds like most of this could just be an additional app to download, or would quickly become obsolete if Google just added the feature natively to Android. The possible limiting factor that Dave mentioned is that, with all the permissions these features need, it might be seen as a security risk for Android to grant an app access to your location, images, contacts, microphone, camera, passwords and so on. If it’s the company’s own device, they can access all of its features.

When you are on your phone, you have the flexibility to react on-screen and look at images, reviews and follow links in order to make a decision. Having something that’s basically fully voice controlled has less flexibility. You are kind of forced down a specific path.

Another thing to bear in mind is that the marketing materials have often shortened the interactions to make them seem more impressive than they are. Wait time is a massive factor in the user experience; if there’s too much delay, the usefulness and appeal are reduced.

The size and weight of these devices can also be a problem. If you are expected to pin the Humane AI device to your clothes, it needs to be light enough not to tug on them.

If these devices are using their own AI models, then another question is “how good are they?”. What is their training data, and are the initial versions going to be poor but improve over time as they learn from early adopters?

Personally, I could see this being like Google Glass, where it sounds like a cool idea, but it’s actually a bit limited, overpriced, and you look silly using it.