One thing I’ve seen over and over again is that best practices or coding standards that aren’t backed by some sort of tooling are not consistently followed. I’m sure everyone has been on a team where one person insists on prefixing private field names with ‘m’ instead of ‘_’ despite everyone else’s consistent use of the opposite convention.
Fortunately, there are tools that can flag this non-conformance and even prevent someone from checking in code that doesn’t meet those standards. Many of these tools are well known and readily available for use on product code. Fewer tools and techniques are known for encouraging good testing practices.
One good rule for writing automated tests, whether unit, functional, or otherwise, is to avoid logic in tests. After all, we write tests to validate the logic in our products. We don’t write tests to validate the logic in our tests, so it is best to simply avoid it. Check out Google’s testing blog for a more in-depth treatment: Don’t Put Logic in Tests
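To make the rule concrete, here is a small sketch of what "logic in a test" looks like. The function and test names are invented for illustration; the point is the contrast between a test that rebuilds its expected value and one that states it as a literal.

```python
# Hypothetical function under test (not from the original article).
def format_greeting(name):
    return "Hello, " + name + "!"

# Bad: the test duplicates the production code's string-building logic,
# so a bug shared by both would pass undetected.
def test_greeting_with_logic():
    name = "World"
    expected = "Hello, " + name + "!"  # logic smuggled into the test
    assert format_greeting(name) == expected

# Good: the expected value is a plain literal. A reader can verify the
# test is correct by inspection, with no paths to reason about.
def test_greeting_without_logic():
    assert format_greeting("World") == "Hello, World!"
```

Both tests pass today, but only the second one would catch a bug in the greeting format, because it doesn't share any logic with the code it checks.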
So, now that we have agreed on this rule, what metric can we use to ensure we follow it? Cyclomatic complexity. For anyone not familiar with cyclomatic complexity, it is the number of linearly independent paths through a particular block of code. If there are no decision points (if, switch, etc.) in a given method, then the complexity of that method is 1, and each decision point adds another path. You can find static code analyzers that calculate this metric for just about any language, so what are you waiting for?
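As a rough sketch of what those analyzers compute, the snippet below approximates cyclomatic complexity for Python source by counting decision-point nodes in the AST and adding one. Real analyzers handle more constructs and edge cases than this; the set of node types chosen here is a simplification for illustration.

```python
import ast

# Node types treated as decision points in this simplified sketch.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

straight_line = "def f(x):\n    return x + 1\n"
branching = "def g(x):\n    if x > 0:\n        return x\n    return -x\n"

print(cyclomatic_complexity(straight_line))  # 1: no decision points
print(cyclomatic_complexity(branching))      # 2: one if statement
```

A test suite could run a check like this over its own test files and fail the build whenever any test function scores above 1, which is exactly the kind of tooling-backed enforcement the opening paragraphs argue for.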