We have a report that analyses our code and runs the usual static analysis checks: Code Coverage, “Code Smells” and so on.
One team boasted that their report showed 100% code coverage and zero Code Smells; a perfect score.
I thought that was strange, because over the previous three weeks many people had pointed out the many bugs in their project. If they really do have 100% code coverage, then they aren’t testing the right things. Zero Code Smells seems suspicious too.
I had a little peek at their repository to spot any suspicious activity. One of the recent bug reports stated “we have run out of switch statements”. Obviously, to us software developers, that is a very funny statement to make. What he meant was that they have a huge switch with more than 30 conditions, and the report flags up any switch statement that exceeds 30 conditions. This is a “Code Smell” because it implies a bad design.
You could address this warning by:
- rewriting a large part of it to implement a better design
- doing nothing, and letting the warning keep appearing in the report
- suppressing the warning so the report doesn’t show it (a sketch of this follows the list)
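As an illustration of that last option: assuming a Java codebase and a SonarQube-style analyser, suppression usually amounts to a single annotation on the offending method. The class, method and data here are hypothetical stand-ins, and the rule key is SonarQube’s check for switches with too many cases; treat it as an assumption if your analyser differs.

```java
public class OrderRouter {

    // Hides the "too many case clauses" finding from the report without
    // touching the switch itself. The code stays readable, but anyone
    // relying on the report no longer sees the smell.
    @SuppressWarnings("java:S1479")
    public String route(int orderType) {
        switch (orderType) {
            case 1:  return "email";
            case 2:  return "post";
            // ... the other thirty-odd cases omitted for brevity
            default: return "manual";
        }
    }
}
```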
The developer decided to make a code change, but didn’t bother with a better design. His new switch had 30 cases, but the “default case” then called another method which contained several more conditions. It gets rid of the warning, but it makes the code harder to read, so he has made it worse. Suppressing the warning keeps the code readable, but you will still be perceived to be fiddling the report.
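To picture that change, here is a minimal sketch of the same hypothetical OrderRouter, assuming an analyser that only counts the cases inside a single switch: the first thirty cases stay where they were, and the default case forwards the overflow to a second method.

```java
public class OrderRouter {

    // Trimmed to sit at the analyser's threshold, so the warning disappears...
    public String route(int orderType) {
        switch (orderType) {
            case 1:  return "email";
            case 2:  return "post";
            // ... cases 3 to 30 omitted for brevity
            default: return routeOverflow(orderType); // ...and the rest hide here
        }
    }

    // The overflow cases the report no longer counts. A reader now has to
    // follow two methods to understand what used to be one switch.
    private String routeOverflow(int orderType) {
        switch (orderType) {
            case 31: return "courier";
            case 32: return "pallet";
            default: return "manual";
        }
    }
}
```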
Looking at other recent check-ins, the team had “temporarily” suppressed warnings and then logged bugs to address them later. This is definitely fiddling the report: it makes the report look good, but it increases their backlog of work and wastes time. They could have done meaningful work and let the report do its job of reporting accurate figures.
They then highlight their team’s “success” to the managers. The managers don’t look much further than the report, so they are happy.