Everything started with the measurement of JavaDoc coverage. The equation was simple: more coverage means better software quality. But the result is actually sad. If developers are forced to comment every piece of code, they will comment getters and setters as well, and it's hardly possible to provide useful comments for straightforward code. The problem is not just the effort of writing - if you comment obvious code, someone will also have to read your documentation....
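A hypothetical example of what such a policy produces (class and field names are made up): the JavaDoc on the accessors restates what the signatures already say, so every comment is pure noise.

```java
/** Hypothetical bean showing enforced JavaDoc on trivial accessors. */
public class Customer {
    private String name;

    /**
     * Returns the name.
     *
     * @return the name
     */
    public String getName() {
        return name;
    }

    /**
     * Sets the name.
     *
     * @param name the name
     */
    public void setName(String name) {
        this.name = name;
    }
}
```

The JavaDoc coverage metric is satisfied, but every reader now has to scan twice as many lines to learn nothing beyond `getName` returns the name.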
Recently QA departments spotted the opportunity to measure unit-test coverage. This gets even worse. Because writing unit tests takes time, it's often much easier to test obvious code to raise the coverage number. Meanwhile, I get suspicious when an average corporate project shines with test coverage > 80%. The question is: what are the remaining 20%? A more serious problem I encountered in an "agile" project (with a huge amount of unit tests :-)) was considerable coverage of CRUD cases (like masterdata management) ...but some really hard-to-test algorithms were simply skipped ...and caused trouble in production. They just didn't contribute enough to the code-coverage results :-).
You could even generate your tests to increase the coverage. Then even System.out.println could be tested - and the acronym would be cool as well: Code Driven Test Generation (CDTG).
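A sketch of what such a generated test could look like (the class and test names are invented for illustration): it executes every line, so line coverage hits 100%, but it asserts nothing about behavior.

```java
/** Hypothetical class under "test". */
class Greeter {
    String greet(String who) {
        String msg = "Hello, " + who;
        System.out.println(msg); // even this line counts as "covered"
        return msg;
    }
}

/** Generated-style test: calls every method, verifies nothing. */
public class GreeterGeneratedTest {
    public static void main(String[] args) {
        Greeter g = new Greeter();
        g.greet("world"); // 100% line coverage of Greeter, zero checked behavior
        System.out.println("1 test passed");
    }
}
```

The coverage report looks perfect; a bug in the greeting format would still sail straight through.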
...instead of believing in numbers - sometimes a portion of common sense could really streamline your development. The problem here: then you could get rid of many buzzwords, acronyms, processes and tools :-).