The “Test Governance Lead” announced that all developers will need to report on Unit Test Coverage. We explored similar demands several years back, but we soon realised it was a rubbish metric. How do you measure unit test coverage? There are a few ways, and they all have limitations on their accuracy and usefulness. Teams were also adding “attributes” to the code (ExcludeFromTestCoverage), which further skewed the figures.
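To illustrate the attribute trick, here is a minimal sketch (a hypothetical module, using coverage.py’s “pragma: no cover” marker as a rough analogue of that .NET-style exclusion attribute): the untested branch is simply dropped from the denominator, so the reported percentage goes up without a single new test being written.

    def parse_amount(raw):
        try:
            return float(raw)
        except ValueError:  # pragma: no cover
            # The error path is never exercised by any test, but the pragma tells
            # coverage.py to ignore it, so the module still reports 100% coverage.
            return 0.0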
My opinion is that it’s okay to report on, as long as you just look at the overall trends and don’t overreact to the actual figures. If a figure says 50%, there are never any problems with the code, and the figure is still 50% in three months’ time, then it’s all good. The problem is that managers just look at the figure and start putting pressure on developers to do something about it, because they want the number to be higher.
An experienced developer responded to the “Test Governance Lead” with a blog post that opens with the line:
“Let me use one of my son’s toys to explain…”.
https://danashby.co.uk/…/code-coverage-vs-test-coverage/
What a great opening line for a blog. It’s a passive-aggressive statement from the developer: he didn’t directly insult the “Test Governance Lead”, but indirectly called her naive by pointing her to the blog.
Years ago, on multiple occasions, I witnessed a developer boasting to one of the high-ranking managers about his 100% test coverage. He was obviously trying to manipulate the manager’s impression of him to improve his chances of promotion.
Not so long ago, there was a team that kept on boasting about their 100% test coverage, yet developers using their code library were telling them that major functionality was broken.
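That combination is easier to achieve than it sounds. A minimal sketch (hypothetical function and test, not from that team’s code): the test executes every line, so a line-coverage tool reports 100%, but it never checks that the result is correct, so the bug sails through.

    def apply_discount(price, percent):
        # Bug: subtracts the percentage as an absolute amount instead of a fraction.
        return price - percent

    def test_apply_discount():
        # Running this test executes every line of apply_discount, so a
        # line-coverage tool reports 100%...
        result = apply_discount(200.0, 10)
        # ...but the assertion is so weak that the wrong answer
        # (190.0 instead of 180.0) never fails the test.
        assert result is not None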