Code coverage

Code coverage is a metric that shows us which lines of our code are run during the tests.
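For example, here's a minimal sketch, assuming a Jest/Istanbul-style test runner (the `getDiscount` module and its test are hypothetical):

```js
// discount.js: a hypothetical module with two branches
function getDiscount(user) {
  if (user.isMember) {
    return 0.1
  }
  return 0 // never executed by the test below
}
module.exports = {getDiscount}

// discount.test.js: exercises only the member branch
const {getDiscount} = require('./discount')

test('members get a 10% discount', () => {
  expect(getDiscount({isMember: true})).toBe(0.1)
})
```

Running something like `npx jest --coverage` would report `discount.js` at less than 100% line coverage and point at the `return 0` line as uncovered.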

The code coverage report helps give us an idea of where tests are needed, but it does NOT tell us what's important about a given function, nor does it tell us the use cases that function supports, which is the most important consideration to keep in mind as we write tests.

The coverage report, then, helps us identify what code in our codebase is missing tests. When you look at a coverage report and note the lines that are missing tests, don't think about the ifs/elses, loops, or lifecycles those lines contain. Instead, ask yourself:

What use cases are these lines of code supporting, and what tests can I add to support those use cases?
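Concretely, suppose the report flags the non-member branch of the hypothetical `getDiscount` sketch above as uncovered. The fix isn't a test named after the branch; it's a test named after the use case that branch supports:

```js
// Thinking in branches: says nothing about why the code exists
test('covers the else branch', () => {
  expect(getDiscount({isMember: false})).toBe(0)
})

// Thinking in use cases: documents the behavior users rely on
test('non-members pay full price', () => {
  expect(getDiscount({isMember: false})).toBe(0)
})
```

Both tests execute the same line, but only the second tells the next reader which use case that line supports.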

"Use Case Coverage" tells us how many of the use cases our tests support. Unfortunately, there's no such thing as an automated "Use Case Coverage Report." We have to make that up ourselves. Code coverage is not a perfect metric, but it can be a useful tool in identifying what parts of our codebase are missing "use case coverage".

Mandating 100% code coverage for applications is a bad idea. The problem is that you get diminishing returns on your tests as coverage increases much beyond 70%. Why is that? When you strive for 100% all the time, you find yourself spending time testing things that don't need to be tested: things that have no logic in them at all, where any bugs would be caught by ESLint and Flow. Maintaining tests like that actually slows you and your team down.
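Here's a sketch of the kind of test that chasing 100% tends to produce (the config module is hypothetical):

```js
// config.js: no logic at all; the realistic mistakes here
// (typos, wrong types) are the kind ESLint and Flow catch
module.exports = {
  apiUrl: 'https://example.com/api',
  maxRetries: 3,
}

// config.test.js: merely restates the file above, so it has to
// be edited on every config change and can never catch a real bug
const config = require('./config')

test('config has the right values', () => {
  expect(config.apiUrl).toBe('https://example.com/api')
  expect(config.maxRetries).toBe(3)
})
```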

You may also find yourself testing implementation details just to cover that one line of code that's hard to reproduce in a test environment. Avoid testing implementation details: such tests give you little confidence that your application is working, and they slow you down when refactoring. You should very rarely have to change tests when you refactor code.
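To illustrate, compare two tests of a hypothetical `Cart` class: the first pins an internal detail and breaks on refactor, the second pins a use case and survives it:

```js
class Cart {
  constructor() {
    this.items = []
    this.total = 0
  }
  add(item) {
    this.items.push(item)
    this._recalculate()
  }
  _recalculate() {
    this.total = this.items.reduce((sum, item) => sum + item.price, 0)
  }
}

// Implementation detail: fails if _recalculate is renamed or
// inlined, even though the cart still works perfectly
test('add() calls _recalculate()', () => {
  const cart = new Cart()
  const spy = jest.spyOn(cart, '_recalculate')
  cart.add({price: 5})
  expect(spy).toHaveBeenCalled()
})

// Use case: keeps passing through any refactor that preserves
// the behavior, which is exactly when tests should stay green
test('adding an item updates the total', () => {
  const cart = new Cart()
  cart.add({price: 5})
  expect(cart.total).toBe(5)
})
```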
