Recently, we had the classic debate about what to do when, while testing a bug fix, you discover an (unrelated) problem. Should the fix:
be rejected/marked as failed,
or be passed, with the new issue logged as a separate bug?
Sam said that if his testing revealed a problem, he would fail the fix and never log a new bug, even when the issue wasn’t called out anywhere in the bug report – not in the description, nor the recreation steps.
So to explain using this situation: I had fixed the Entity Framework code which saved a new row to the database table. The bug was about passing the correct values into it, and my change was fine; the correct values were now saved in the appropriate columns of the new row. However, Sam noticed that if you sent multiple calls at once, the number wasn’t incremented by 2 as expected, only by 1 (the first call was essentially being overwritten – a classic concurrency problem).
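To illustrate the race (a minimal sketch with invented names – not the actual production code), the save logic presumably read the current maximum, incremented it, and saved, so two simultaneous calls could both read the same value:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public int Number { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public static class OrderWriter
{
    // Two concurrent calls can both read the same current maximum,
    // both compute the same "next" number, and the counter ends up
    // advancing by 1 instead of 2 - a classic lost update.
    public static async Task AddOrderAsync(AppDbContext db)
    {
        var current = await db.Orders.MaxAsync(o => (int?)o.Number) ?? 0; // read
        db.Orders.Add(new Order { Number = current + 1 });                // increment
        await db.SaveChangesAsync();                                      // write
    }
}
```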
I think it’s reasonable to use the original Work Item (Bug Report) to flag any issues testers find, but once we confirm the new issue isn’t caused by my change, it should go into its own Work Item. You might then assess that the new bug can be fixed in a later release. It makes no sense to fail bug A because you found bug B.
Sam is basically increasing the scope, then moaning that we might fail the sprint because we cannot complete the new item within the two-week deadline.
I always find it interesting when people work in a particular job and then get promoted into management. It’s a completely different set of skills, and even if it’s a fair promotion, the idea of getting so good at your job that you no longer do that job any more is another illogical aspect of it.
One thing that always amazes me is when people make decisions that they know are a bad idea from their experience doing the job.
When I worked as a software tester, my view was that we were essentially there to find any bugs that existed. Part of finding them is documenting how to recreate the bug so that developers can fix it. Extending this process so it is more complex, has more stages, or involves more people causes people to not want to find bugs.
There were times when I witnessed people doing the bare minimum, ignoring bugs that didn’t appear severe to them.
One of the worst people I’ve worked with was an average tester who wanted to become a Test Manager, and he ended up trying to make the process more complex, often announcing changes in a condescending way.
When testers found a bug and wanted to investigate it, they would often try to recreate it, sometimes under different scenarios, to work out its scope and impact, then tell a developer their findings, and only then log it.
Therefore there was a delay between finding the bug and actually logging it. So we got an email from the Test Manager like so:
All,
It is important that as soon as you discover a defect, you raise a defect for this BEFORE you speak with the developer. Any defects raised can easily be closed if they have been raised in error or discovered by the developer to not be an issue. We run the risk of releasing with defects that could potentially cause serious issues for our customers.
I understand his point: if managers are checking the system to see what bugs are outstanding and they don’t see them all, then the software could potentially end up being released with bugs. However, the process got worse from then on:
Please can you include myself and Becky on any emails that are discussing a defect with a developer. This is so that we are both kept updated with any defects that could cause issues. Also, for every defect you raise, I’d like an email to myself and Becky with the following information:
- WorkItem ID
- Title
- Area
- Any other information you feel relevant.
So now when we discovered a bug, we had to log it straight away without any investigation, email two Test Managers, then copy them into any further emails. Then, as more information became known, we had to update the bug report, making sure we also documented an appropriate workaround in case the bug got released (or was already released).
All,
When you are filling out the SLA tab for a defect you need to ensure that if you’ve specified that there is a workaround available that the Workaround box is filled in with the Workaround.
If you’ve raised any defect that is a Severity 3 this MUST be fixed before the branch is signed off. This is our exit criteria, we do not sign a release off with any Sev 1s, 2s or 3s. If the developer disagrees with this, escalate it to myself and Becky and we’ll deal with it.
Often when we logged a bug, he would either email you or come to your desk to ask why you hadn’t triaged it with a developer yet – sometimes within 10 minutes of you logging it. So he wanted you to log it before triaging, but would then demand that you triage it even if you hadn’t had a chance to contact an appropriate developer.
You’d also have other test cases to run, and he was always on your back for constant status reports. It was hard to win: if you had tests to run and had found bugs, he would want you to triage them, but sometimes helping the developer could take hours, which meant you weren’t testing, so then he would be asking why you hadn’t run your tests.
That level of micromanaging and demanding updates wasn’t great for morale, and it encouraged Software Testers to stop logging the bugs they found, because logging them just added to their own workload and stress.
It seemed better just to steadily get through the tests, but if you didn’t want to log bugs, then what was the point in actually running them? I did suspect some people just marked tests as passed and hoped there wasn’t an obvious bug they had missed.
In my blog How To Make Your Team Hate You #3, I wrote about Barbara, a Tester I used to work with who caused a lot of conflict and was constantly trying to get out of doing work, whilst taking credit for other people’s work.
Recently, when going through old chat logs, I found some brilliant “dirt” which, in hindsight, I could probably have used to get her sacked, because it was fairly strong evidence that not only was she not doing the work, she was falsely passing Test Cases. When you are paid to check that software behaves correctly, falsely claiming you have tested it is seriously negligent.
When running test cases, if you passed each step separately and hadn’t disabled the recording feature, Microsoft Test Manager would record your clicks and add them as evidence to the test run.
I think the feature worked really well for web apps because it could easily grab the name of every component you clicked, whereas on our desktop app it mainly just logged when the app had focus and recorded your keystrokes.
The bad news for Barbara is that she liked going on the internet for personal use, and liked chatting over instant messenger, as we will see.
The Remedy
Type 'Hi Gavin. ' in 'Chat Input. Conversation with Gavin Ford' text box
Type 'Hi Gavin. I've been telling everyone about this concoction and it really worked wonders for everyone that's tried it, myself included. This is for cold, cough and general immunity. 1 cup of milk + 1 tablespoon honey + 1/4 teaspoon of turmeric - bring to a rolling boil. Add grated root ginger (2 teaspoons or 1 tablespoon) and let it boil for another 5 mins. Put thru sieve and discard root ginger bits (or drink it all up if you fancy), but drink it hot before you sleep every night and twice a day if symptoms are really bad. Hope you feel better soon. 🙂 ' in 'Chat Input.
Pumpkins & Tetris
Type 'Indian pumpkin growing{Enter}' in 'Address and search bar' text box
Type '{Left}{Left} {Right} {Left}{Left} {Up}{Up}{Up}{Up}{Up}{Up}{Left}{Left} {Up}{Up}{Up}{Right} {Up}{Up}{Left} {Right}{Right} {Up}{Right}{Left}{Left}{Left}{Left} {Right}{Up}{Left}{Left}' in '(1) Tetris Battle on Facebook - Google Chrome' document
Me 11:26: Barbara has been doing the Assessment regression pack for 3 days she says there is only a few left in this morning's standup. There's 15 left out of 27
Dan Woolley 11:28: lol
Me 11:29: I don't even think she is testing them either. It looks like she is dicking about then clicking pass

Click 'Inbox (2,249) - [Barbara@gmail.com]Barbara@gmail.com - Gmail' label
Click 'Taurus Horoscope for April 2017 - Page 4 of 4 - Su...' tab
Click 'Chrome Legacy Window' document
Click 'Chrome Legacy Window' document
Click 'Close' button
Click 'Paul' label in the window 'Paul'
Click image
Type 'Morning. ' in 'Chat Input. Conversation with Paul' text box
Type '{Enter}' in 'Chat Input. Conversation with Paul' text box
Step Completed : Repeat steps 6 to 19 using the Context Menu in the List Panel
End testing
Next Day
Me 12:42: Barbara said this morning that all the Assessments test cases need running. She has just removed them instead
Greek Salad
Type 'greek salad{Enter}' in 'Chrome Legacy Window' document
Type 'cous cous salad' in 'Chrome Legacy Window' document
Type 'carrots ' in 'couscous with lemon and coriander - Google Search ...' document
Click 'Vegetable Couscous Recipe | Taste of Home' tab
Click 'Woman Traumatized By Chimpanzee Attack Speaks Out ...' tab
Marshall 11:50: oh damn haha these are things that were inadvertently recorded?
Me 11:51: yeah
Marshall 11:51: ha you've stumbled upon a gold mine
Me 11:53: I don't think she is actually testing anything. I think she just completes a step now and then
the other day Rob went to PO approve an item and he couldn't see the changes because they hadn't even patched
Haven’t Been Testing From The Start
we are in Sprint 8 and Barbara suggested Matt does a demo on the project so we know how it works; it’s a right riot
Me. 4 months into a project
Bad Audits
I wonder if Barbara was inconsistent with how she ran the test cases, or realised by the end that it tracked you. Near the end of her time, she was just hitting the main Pass button rather than passing each individual step. Managers liked the step-by-step approach because if you mark a step as failed, it is clearer what the problem is.
Me 16:15: Barbara called me. Matt is monitoring our testing!
Dan Woolley 16:15: how?
Me 16:17: looking at the run history
she said he was complaining it wasn't clear which step failed because we were just using the main pass button, and also bugs weren't linked when they had been failed
I told Barbara I linked mine, then she checked and said it was Sam that didn't. I checked and saw it was Sam and Barbara so only the developer did testing properly 😀 you just can't get the staff
Obviously The Wrong Message
Me 09:12: Bug 35824: Legal Basis text needs to be clear
what's all that about?
Barbara Smith 09:12: Charlotte asked me to raise it for visibility
We need to fix the text that appears on that tab
Me 09:13: what's wrong with it?
Barbara Smith 09:21: It says that on the Bug LOL
And with a screenshot (mm)
Me 09:22: it says "needs to be clear" and has a screenshot with a part of it underlined. But it doesn't say what the text should be instead.
She rarely logged bugs because she did minimal testing. Then, when she did log something, it didn’t have enough info to be useful.
Karma
Barbara got well conned in the end. She was gonna take the entire December off but delayed it until the end of the project, and then she was told she had lost her job, so they told her to take the holiday now. She had just bought a house, so she would be relying on the money for the mortgage payments. Luckily for her, she got accepted for a new job, but she was looking for a brand new way of getting out of it, as we will see below.
Tax Fraud
Type 'what if I don't contact hrmc about my tax{Enter}' in 'Address and search bar' text box

Sam 11:23: Ha ha You are savage
Me 11:24: she is gonna get jailed for tax evasion
One of our Senior Testers wrote a blog detailing how she found an obscure bug. When I was a software tester, I often said that even though you spend a large amount of your time writing Test Cases and running them, the majority of bugs I found actually came from performing actions off-script.
The reason for this is that if you have a certain requirement, the developer writes enough code to pass that requirement as it is written. A better developer may even write some automated tests to cover that scenario, to prove that it works and won’t break in future. Therefore, running a manual test that describes that same behaviour won’t find a bug now, and it won’t when you rerun that test in the future (during regression testing).
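For example (a hypothetical class and limit, purely to illustrate the point), a developer’s automated check for a requirement might look like this – and once it exists, a scripted manual test of the same written behaviour is unlikely to ever find anything:

```csharp
using Xunit;

public class DescriptionValidator
{
    public const int MaxLength = 255;
    public static bool IsValid(string text) => text != null && text.Length <= MaxLength;
}

public class DescriptionValidatorTests
{
    // The requirement exactly as written: up to 255 characters are accepted...
    [Fact]
    public void Accepts_a_description_at_the_limit() =>
        Assert.True(DescriptionValidator.IsValid(new string('a', 255)));

    // ...and anything longer is rejected.
    [Fact]
    public void Rejects_a_description_over_the_limit() =>
        Assert.False(DescriptionValidator.IsValid(new string('a', 256)));
}
```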
Being able to freestyle your steps means you can come up with obscure scenarios, experiment, and do far more testing than you would following a strict, heavily documented process.
This was the main problem I had working as a Software Tester. Managers wanted the documentation, and if you told them you had been testing without it, you were sometimes told to stop, or to spend time writing Test Cases for ALL the additional scenarios you came up with. All that does is encourage people to be lazy and do the minimum amount of testing, covering just the basic scenarios.
You also get into situations where, if there is a bug in live, it’s easy to make stupid claims in hindsight. I remember a colleague being absolutely furious at the criticism. They had done loads of testing, but there was a bug in live in a very specific scenario:
“I’m disappointed in the level of testing” – Test Manager
Here is our Senior Tester’s blog:
I found a deliciously elusive bug last week. The feeling a tester gets when this happens is joy at your good luck, satisfaction at solving a fiendish puzzle, and relief at preventing harm. We feel useful!
The bug was to do with online visibility data. My team is implementing the ability to right-click items and set Online Visibility. Sounds simple in theory - but data is complicated and the code base is large.
How was I going to approach this? It was an intimidating piece of work – and I was tired. My normal process would be to come up with some ideas for testing, document them, then interact with the product, make notes, fill out the report. But that day, I just couldn’t face doing the documentation and planning I would normally do before the testing. I decided to just test, not worry too much about documentation, and have fun.
I sought out a Record with a rich data set and played around, selecting things, deselecting them, selecting parent entries, child entries, single entry, multiple entries. I didn’t have any defined agenda in mind except to explore and see what would happen.
One minute in, I was rewarded with a beautiful crash!
I hadn’t taken a note of my steps – but I knew I could probably find the path again. I set up and recorded a Teams meeting with myself, as I didn’t want to have to pause to note down every step I took – that would take a long time and risk my mindset changing to a formal, rigid, structured view – which I didn’t want. I needed that playful freedom. The system crashed again! As there were so many variables at play, I didn’t know what the exact cause was, but I now had proof that it hadn’t been a magical dream.
I spent the rest of the afternoon trying to determine the exact circumstances in vain. I spoke to the programmer, and showed him my recording. He took the issue seriously, and tried to recreate it himself. We both struggled to do so, and decided to wait until the morning.
The following day, we got on a call and went over the recording again. What exactly had I done before the crash? I had selected the parent entry, then two child entries, right clicked but not changed anything, deselected the parent, selected another child, unselected it, selected a different child, selected the parent again and then right clicked and changed the Online Visibility - crash. We tried that again on the developer’s machine, on the same type of report, break points at the ready. Crash! Got it!
The developer eventually narrowed it down to two conditions: child entries had to have a GroupingDisplayOrder index beginning with 1, and the user had to select the parent entry after its child.
It seemed sheer luck that I had found this. But was it luck? No. It was luck by design – I had created a rich data set, and done lots of different actions in different orders, been creative and diverse in my testing. And it had only taken a minute to yield results!
So what did I learn? Reflecting, I noted my preference for highly structured documentation – of tables with colour highlighting, documenting each test in high detail, strictly in order, changing one condition at a time. The result of this was that I tested in a highly formal, structured way to fit the documentation, and only did informal testing as an afterthought. And yet I had most often found bugs during the informal testing!
I had made a god of documentation and lost sight of what mattered most. If you need me, I’ll be testing. And trying not to make too many pivot tables.
What Are Software Testers Really?
The same tester once came out with this quote:
“testers are ultimately critics. Developers are artists. Testers are there to give information. What you do with that information is up to you.”
That’s quite an interesting perspective. I think it mainly comes from the idea that Testers can find 10 bugs, but maybe you decide you will only fix 6 of them; a few you might fix later, and 2 you decide aren’t a problem, or are so unlikely to happen that it’s not worth the effort and risk of fixing them.
“we are the headlights of the car, driving into the darkness”
Software Testers In Game Development
“She was the one who taught me the importance of testers and how they are a critical gear in the machinery that makes up making a game. Testers aren’t just unit tests in human form. They have a unique perspective on the game and poke not only at the bugs but also the design and the thought process of playing a game.”
Ron Gilbert, creator of Monkey Island
Another interesting discussion of the role software testers play comes from Mark Darrah, who has worked on games like Dragon Age: Origins. He does seem to agree with the idea that Testers are merely critics.
Mark Darrah – Don’t Blame QA
When encountering bugs during gameplay, it’s often misconceived that the quality assurance (QA) team is to blame. However, it’s more likely that the QA team identified and reported the bug, but it remained unresolved due to various factors. For instance, a more critical bug could have emerged from the attempted fix, leading to a strategic decision to tolerate the lesser bug. Additionally, project leaders may assess the bug during triage and conclude that its impact is minimal (affecting a small number of users), opting to proceed with the game’s release.
Such scenarios are more common than one might expect, and they typically occur more frequently than QA overlooking a bug altogether. If a bug did slip through QA, it’s usually not the fault of any single individual. The bug might result from a vast number of possible combinations (a combinatorial explosion) of in-game elements, making it impractical to test every scenario. Your unique combination of in-game items and actions may have simply gone untested, not due to negligence, but due to limited resources.
Complex game designs can introduce bugs that are difficult to detect, such as those that only appear in multiplayer modes. Budget constraints may force QA to simulate multiplayer scenarios solo (a single person playing all four or eight different players at once), significantly reducing the scope of testing.
Furthermore, bugs can be hardware-specific, and while less common now, they do occur. It’s improbable that QA had access to the exact hardware configuration of your high-end gaming setup.
The term ‘Quality Assurance’ (QA) can often be a misnomer within the development industry. While ‘assurance’ suggests a guarantee of quality, the role of QA is not to ensure the absence of issues but to verify the quality by identifying problems. It is the collective responsibility of the development team to address and resolve these issues.
Understanding the semantics is crucial because language shapes perception. The term ‘QA’ may inadvertently set unrealistic expectations of the role’s responsibilities. In many development studios, QA teams are undervalued, sometimes excluded from team meetings, bonuses, and even social events like Christmas parties. Yet, they are expected to shoulder the criticism for any flaws or bugs that remain in the final product, which is both unfair and inappropriate.
Developers, it’s essential to recognize that QA is an integral part of your team. The effectiveness of your QA team can significantly influence the quality of your game. Encourage them to report both qualitative and quantitative bugs, engage with them about the complexities of different systems, and heed their warnings about testing difficulties. Disregarding their insights can lead to overlooked bugs and ultimately, a compromised product.
For those outside the development sphere, it’s important to understand that if you encounter a bug in a game, it’s likely that QA was aware of it, but there may have been extenuating circumstances preventing its resolution. Treat QA with respect; they play a pivotal role in maintaining the game’s integrity.
Remember, a strong QA team is the bulwark against the chaos of a bug-ridden release. Appreciate their efforts, for they are a vital component in the creation of seamless gaming experiences.
Just like my last blog, this one is based on an internal blog that our most experienced software tester wrote. She seems to love Michael Bolton – not the singer. Michael Bolton is also the name of a software tester who is the co-author of Rapid Software Testing (see About the Authors — Rapid Software Testing (rapid-software-testing.com)).
Michael Bolton
She said that Michael Bolton was asked the following question:
Q: My client wants to do risk analysis for the whole product, they have outlined all modules. I got asked to give input. Do we have a practical example for that? I want to know more about it.
Tester
Michael: Consider the basic risk story –
Some victim will suffer a problem because of a vulnerability in the product (or system) which is triggered by some threat.
Start with any of those keywords, and imagine how it connects with the others.
Who might suffer loss, harm, bad feelings, diminished value, trouble?
How might they suffer?
What kinds of problems might they experience? What Bad Things could happen? What Good Things might fail to happen?
Where are there vulnerabilities or weaknesses or bugs in the product, such that the problem might manifest? What good things are missing?
What combinations of vulnerability plus specific conditions could allow the problem to actually happen?
When might they happen? Why? On what platforms? How?
Our tester stated “This is a brilliant definition of risk. It is also a somewhat intimidating list of questions. If you are looking at this and thinking, “That’s hard!” you’re absolutely right. Good testing is hard. It’s deep, challenging, exhausting. It will make you weep, laugh, sigh from relief. But it’s also tremendous fun.”
Every now and then, there is a big initiative to focus on Automated Testing. A manager will decide that our software is too complex and too manually intensive to regression test in detail. Automation seems like the answer, but it’s never that practical.
Our main software, a desktop application, requires interaction through the UI, which is incredibly slow and unreliable to automate. We used to have a dedicated Automation team that maintained the tests, but the tests took several hours to run and would randomly fail; eventually the team disbanded and declared them obsolete. There have been times we wanted to replace them with the likes of CodedUI (which turned out to have the same issues), and more recently FlaUI.
When the last “drive for automation” was announced by the CTO, our most experienced tester wrote an internal blog which I thought had a lot of subtext to it, basically saying “it’s a bad idea”.
Communities of Practice around Test Automation
With all of the new Communities of Practice around Test Automation*, I wanted to share some thoughts on whether automation is actually a good idea. This comes from experiences over the years. I hope this saves some people time, and provokes conversations.
To automate or not to automate? That is the question…
A common question in a tester’s life: “Should we automate our tests?”
Which of course really means, “Should we write our checks in code?”
This will inevitably give rise to more questions you need to answer:
which checks we should automate
and which we should not automate
and what information running the checks gives us
and how does that information help us assess risks present in the code
and which is the best tool to use
and how often we should run the checks
Asking and answering these questions is testing. We have to ask them because no automation comes for free. You have to write it, maintain it, set up your data, set up and maintain your test environment, and triage failures.
So how do you begin to decide which checks to automate?
Reasons for automating:
The checks are run frequently enough that if you spent a bit of time automating them then you would save time in the long run (high return on investment)
The checks would be relatively easy to write and maintain owing to the product having a scriptable interface, such as a REST API (see the sketch after this post)
They can be performed more reliably by a machine (e.g. complex mathematical calculations)
They can be performed more precisely by a machine
They can be performed faster by a machine
You require use of code in order to detect that a problem exists
You want to learn how to code, or flex your programming muscles. (Even if you ultimately decide not to automate your checks, you may decide to use code for other purposes, e.g. to generate test data.)
Reasons against automating:
There isn’t a scriptable interface; the product code can only be accessed via a User Interface (UI automation is notoriously expensive and unreliable).
In order to have a greater chance of finding problems that matter, the check should be carried out by a human being as they will observe things that would matter to a human but not a computer (e.g. flickering on the screen, text that is difficult to read).
The checks would have a short shelf life (low return on investment).
Beware of the fallacy that use of code or tools is a substitute for skilled and experienced human beings. If you gave an amateur cook use of a fancy food processor or set of knives, their cooking still wouldn’t be as good as that of a professional chef, even with the latter using blunt knives and an ancient cooker. Code and tools are ultimately extensions of your testing. If your testing is shallow, your automation will be shallow. If your testing is deep, your automation can be deep.
Ultimately the benefit you derive from writing coded checks has to outweigh the cost, and to automate or not is a decision no one else can make for you.
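To make her REST API point concrete, here is a minimal sketch of the kind of coded check that is cheap to write and maintain when a scriptable interface exists (the endpoint, port and fields are invented for illustration):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class CustomerApiChecks
{
    // Hypothetical test environment URL.
    private static readonly HttpClient Client = new() { BaseAddress = new Uri("https://localhost:5001") };

    private record Customer(int Id, string Name);

    // No UI to drive: one HTTP call and a couple of assertions, so the check
    // is fast, precise, and far less likely to fail at random than UI automation.
    [Fact]
    public async Task Getting_a_customer_returns_the_expected_record()
    {
        var response = await Client.GetAsync("/api/customers/42");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        var customer = await response.Content.ReadFromJsonAsync<Customer>();
        Assert.Equal(42, customer!.Id);
    }
}
```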
Testers in my Team
Most of the testers we employ aren’t that technical, and most aren’t interested in writing Automated Tests, since that requires developer knowledge – it is coding. One of our testers went on a week-long training course about FlaUI. One of the first things he said afterwards was “FlaUI is not worth its value”, which made me laugh. The course can’t have painted it in a good light! 😂
He then got asked to move teams to do pure automation for a few months. Another tester had no interest at all, but was instructed to “try learn”.
“writing the steps is fine, it’s just when you go into the code”
Joanne
There was no way she was gonna be able to learn it. She isn’t technical, and the desire isn’t there at all. Being pressured by managers to move away from “manual” testing to “automated” just disrespects them as testers. It’s happened before, and those people end up leaving. She eventually moved internally to become a Release Manager.
Automation Mess
The original decision to move to FlaUI was made by a group of Testers, and they didn’t get input from the Developers.
I think it would be logical to code using the Coding Standards that we Developers have followed for years. If Developers want or need to help write Automated Tests, they can fit right in, since the process and code style are the same. Additionally, after years of writing Automated Tests, maybe a Tester wants to switch roles and become a Developer, and it would be a smooth transition.
Not only did they invent their own Coding Standards, which meant variables/methods/classes were named differently; there was also a lot of duplicated code to perform basic actions like logging in, selecting a customer record, etc.
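The usual cure for that duplication is a shared helper that every test calls – something like the FlaUI sketch below (the executable path and automation IDs are placeholders, not our real ones):

```csharp
using FlaUI.Core;
using FlaUI.Core.AutomationElements;
using FlaUI.UIA3;

// One shared helper instead of every test script re-implementing login.
public static class AppSession
{
    private static readonly UIA3Automation Automation = new();

    public static Window LoginAsTester(string userName, string password)
    {
        var app = Application.Launch(@"C:\Program Files\OurApp\OurApp.exe"); // placeholder path
        var window = app.GetMainWindow(Automation);

        // Placeholder automation IDs - whatever the real login screen exposes.
        window.FindFirstDescendant(cf => cf.ByAutomationId("UserName")).AsTextBox().Enter(userName);
        window.FindFirstDescendant(cf => cf.ByAutomationId("Password")).AsTextBox().Enter(password);
        window.FindFirstDescendant(cf => cf.ByAutomationId("LoginButton")).AsButton().Invoke();
        return window;
    }
}
```

Every test then starts with one call to AppSession.LoginAsTester, and the naming, waits and error handling only need fixing in one place.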
The process, including the branching strategy, was different too. Instead of having a Master branch, Project branches for longer-lived changes, and standard User branches for simple short-lived work, they went for a more convoluted strategy with Development, DevUpdate and Master branches. Then it became a disorganised mess when work wasn’t merged to the correct branches at the right times.
I can’t even make sense of this:
Before the start of Regression:
1) Lock the Development Branch (no PRs to be allowed to come in to Development till regression is completed)
2) Development, Devupdate, Master are up-to-date by syncing your local with remote branch and get all the commits into local branch
3) Merge from Development to DevUpdate
4) Merge from DevUpdate to MasterUpdate
5) Set <updateTestResults> to true and <testPlanId> (from URL ?planid=12345) in ProjectSettings.xml in MasterUpdate
6) Raise a PR from MasterUpdate against Master. Throughout step 3, step 4, observe that ‘commits behind’ are equal after the merge process to that of master.
Once the above process is completed, observe that Master branch is 1 commit ahead of other branches
After the end of Regression:
1) Development, DevUpdate, Master are up-to-date by syncing your local with remote branch and get all the commits into local branch
2) Merge from Master to DevUpdate
3) Change the <testPlanId> to xxxx and <updateTestResults> to false in DevUpdate
4) Raise PR from DevUpdate against Development. After Step 2, observe that ‘commits behind’ are equal after the merge process to that of master.
Once the above process is completed, observe that Development branch is 1 commit ahead of other branches
Eventually, a few more technical testers were moved into the team and tasked with aligning the process and codebase with our production code – i.e. sorting the mess out.
This is the classic case of managers thinking they can just assign “resource” to a team, give them an aim (“automate this”), and expect results. But you need the technical know-how and a clear direction.
Many years ago, when I was a Software Tester, I remember we had to write a Test Specification based on the work that the Developers had planned. This was for both Enhancements and Bug Fixes (so new features and changes to old ones).
It would be a Word document, with the Item Number, Title, and then a description of what you would test (the description being more high level than the step-by-step detail featured in an actual Test Case).
You would spend weeks writing it, then you had to get it approved by all the developers, or the Dev Lead. The developer would often then tell you the feature was nothing like you had imagined, so you had to rewrite it.
Sometimes they would demo the feature to you so you had a better idea. If they had no comments, I often suspected they hadn’t read it.
When there was a new Developer who wasn’t familiar with the process, he “rejected” a Test Specification because of some grammatical issues – I think it was something to do with the wrong tense, and not using semicolons. The Tester was fuming, but then he was quite a belligerent character.
I think we often worked from the Bug description, or some comments from the Developer. Quite often, though, the comment section would just be the Developer writing something generic like “test bug is fixed” or “check data is saved”. If it was more detailed, sometimes you would paste the developer’s notes, change a few words, and have no idea what they meant until you saw the finished thing.
The Verdict
I think both Developers and Testers saw Test Specifications as a waste of time. The Developers weren’t enthused to read them, especially when most people just rewrote what the Developer had provided, which might not be the full test coverage needed. The Testers should have been adding more value by using their existing knowledge of the system to come up with additional scenarios to cover in regression testing.
I think the only advantage was being able to quickly verify that the Developers and Testers were on the “same page”, but that only works if the Tester has not just reused the developer’s words and has tried to illustrate that they genuinely understand the upcoming features.
I think it eventually got binned off in favour of what we called “Cycle Zero Testing”, where the developer quickly demoed their changes. I think the Developers still hated it, but it was easier to see its value, and it was more collaborative between the two roles.
Occasionally, we may be asked to help our Software Testers run through their manual regression test cases.
When I was a tester, even though writing test cases should be easy, you often found they were tedious to write if you wanted to accurately describe every single step. Therefore, you might choose to be more concise with your wording, or assume that the person running through the test will understand what to click.
Sometimes you think you have written a brilliant test, but when you come to run it again later, you realise it was ambiguous, and you might end up looking at the code to work out how it was meant to work at the time.
If the test case is misleading, the tester will sometimes modify it to be “less ambiguous”/“correct”, but there are times when they have changed it incorrectly, causing further confusion.
I ran a test called “Enter 1020 characters into the Description Textbox ensuring to include numbers and special characters (namely ‘&’)”
However the expected result was “Textbox will only accept the first 260 characters”
Why would we be entering 1020 characters if the textbox is gonna stop at 260? Clearly something is up with this test.
So I looked at the history to see if someone had changed it. It used to say “enter 260, and 255 is accepted”, but then Sarah changed it to “enter 1020 and 260 is accepted”.
So I looked at the linked change to see what it should have been changed to (or whether it should have been changed at all). The item was called “Extend description from 255 to 1023 characters”.
That seemed really random. Why 1023 characters? And why did the tester change the test case to 1020 (and 260) when that still isn’t enough?
Even more confusing was the developer didn’t even change it to 1023 – it was set to 1000 in the database.
\(〇_o)/
So we wanted 1023, the developer provided 1000, and the tester either tried 1020 or 260 and passed it.
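As a sketch of how the three numbers can drift apart (the entity, context and UI line are invented; I never saw the real schema), the Work Item asked for 1023, the database got 1000, and the UI apparently still enforced something else entirely:

```csharp
using Microsoft.EntityFrameworkCore;

public class RecordEntity
{
    public int Id { get; set; }
    public string Description { get; set; } = "";
}

public class CatalogueContext : DbContext
{
    public DbSet<RecordEntity> Records => Set<RecordEntity>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The Work Item said "extend from 255 to 1023"; the column got 1000.
        modelBuilder.Entity<RecordEntity>()
            .Property(r => r.Description)
            .HasMaxLength(1000);
    }
}

// Meanwhile, a UI-layer limit like this would explain the test's 260:
// descriptionTextBox.MaxLength = 260; // hypothetical WinForms setting
```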
When I was a Software Tester, one of the first bugs I found was in an appointment booking system. There was a concept called the “Assignment List”, which held a list of Patients that required appointments. You would drag and drop them into an appointment slot to book them in, and a tick/checkmark would appear next to their name. I then printed the appointment book and saw that most of the printout was in the Wingdings font!
I thought the cause was pretty clear, so I typed up the basic information in the Bug Report, even suggesting what I thought the problem was. My (correct) assumption was that the font had been switched to Wingdings to print the tick/checkmark, and then never set back to normal for the next bit of information, resulting in a full page of Wingdings symbols! There’s a sketch of the likely code further below.
Ensure patients are present in the assignment list. Book patients into session. Press the print button – App book is printed and includes patient details – Details for some patients appear in Windings font in assignment lists. This is possibly related to the tick (shown by using Windings font?).
My bug report
For some reason, the lead developer decided to be a bit aggressive and added extra information to my report:
CRITICAL MISSING INFORMATION: This ONLY happens if a patient in the assignment list has been assigned and has a tick next to them. When printing it prints the tick but it appears that the rest of the details for that patient are left in the wingdings font.
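For what it’s worth, here is the likely shape of that bug, sketched with invented names (I never saw the actual print routine): the tick is drawn in Wingdings and the code forgets to switch the font back.

```csharp
using System.Drawing; // classic GDI+ drawing, as a desktop app of that era would use

public static class AppointmentBookPrinter
{
    // Hypothetical reconstruction: once a ticked patient is printed,
    // 'font' is left as Wingdings for every line that follows.
    public static void PrintAssignmentList(Graphics g, (string Details, bool Ticked)[] patients)
    {
        var normal = new Font("Arial", 10);
        var font = normal;
        float y = 0;

        foreach (var patient in patients)
        {
            if (patient.Ticked)
            {
                font = new Font("Wingdings", 10);
                g.DrawString("ü", font, Brushes.Black, 0, y); // 'ü' renders as a tick in Wingdings
                // BUG: no `font = normal;` here, so the details below
                // (and every subsequent line) print as Wingdings symbols.
            }
            g.DrawString(patient.Details, font, Brushes.Black, 20, y);
            y += font.GetHeight(g);
        }
    }
}
```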
Back in June 2021, I wrote about how we wanted to go 100% automation, which basically meant:
forcing the manual testers out of the company,
or to switch roles,
or learn automation.
There was a meeting, called “Software Developer assignment during the release regression window”, to discuss how Testing could be completed for our next major release.
“30% of the regression pack is covered by automated tests. It takes around 9 days to run due to random failures”
Manager
They did want Developers to help run manual Regression Tests for the next release, but going forward they wanted Developers to help improve the current Automated Tests and add more test coverage.
I think the problem is that the Testers don’t have much experience writing Automated Tests, so they end up writing brittle and messy tests. Then, when they ask Developers to help out, we don’t have time, since we are too busy fixing Bugs and working on new Projects.
7 months later, still not much improvement.
I knew the situation wouldn’t change. We keep highlighting this as a problem, but we don’t have the skills and time to actually do anything about it. So all that really happens is that a few of the experienced manual Testers leave because they think they aren’t needed/respected any more (see the intro paragraph). Then it takes longer to run the manual tests, and the desire for more Automation increases.