Every time we write a test like this:
public class ModelObjectTest
{
    @Test
    public void testModelObject() {
        ModelObject mo = new ModelObject();
        mo.setA(1);
        mo.setB("wharrgarrbl");
        assertThat(mo.getA()).isEqualTo(1);
        assertThat(mo.getB()).isEqualTo("wharrgarrbl");
    }
}
Kent Beck sheds a single, solitary tear.
Every time we do this:
public class ModelObjectTest
        extends PojoTest // where PojoTest is some kind of reflection-magic POJO set/get test
{
    @Test
    public void testModelObject() {
        testPojo(new ModelObject()); // haha, JaCoCo! now we have coverage!
    }
}
*I* get upset. That thing up there tests nothing, and is worse than useless.
But why does it upset my delicate sensibilities? We need to get our code coverage figure up to 90%, don’t we?
Sure we do. But what does code coverage mean, in a world of Test-Driven Development? Wait, let’s take a step back and remind ourselves what Test-Driven Development actually is. In its simplest form, it looks like this.
- Decide what the logic should do.
- Write a unit test that describes what the code (which does not exist yet) should do.
- Run the unit test and watch it fail.
- Write some code that passes the test.
Note that none of this mentions POJOs ("Plain Old Java Objects" - what we used to call beans, things that just have a bunch of fields, and methods that set and get those fields). We make POJOs as a way of passing data around. And we make them because “logic” code, code that does stuff and is described by unit tests, needs them.
If our logic code is being created by TDD, that means we only ever create or add to a POJO because a test of some other class is failing and we need to change the POJO to make that test pass.
So this means the POJO should always have 100% test coverage already. It doesn’t need a test class of its own.
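To make that concrete, here is a minimal sketch. The names (a Greeter logic class, a Greeting POJO) are hypothetical, and a plain main() stands in for a real JUnit test so the example is self-contained:

```java
public class GreeterTest {

    // The POJO. It only exists because Greeter needs something to return.
    static class Greeting {
        private String text;
        String getText() { return text; }
        void setText(String text) { this.text = text; }
    }

    // The logic class, driven into existence by the test in main().
    static class Greeter {
        Greeting greet(String name) {
            Greeting greeting = new Greeting();
            greeting.setText("Hello, " + name);
            return greeting;
        }
    }

    public static void main(String[] args) {
        // Testing the *logic* exercises Greeting's constructor, setter and
        // getter, so the POJO ends up fully covered as a side effect.
        Greeting greeting = new Greeter().greet("world");
        if (!"Hello, world".equals(greeting.getText())) {
            throw new AssertionError("greeting text was wrong");
        }
        System.out.println("ok");
    }
}
```

Every line of Greeting is executed by the test of Greeter; a separate ModelObjectTest-style test would add nothing.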
Aha! But. Sometimes we have these model classes that only exist to be serialized into JSON and returned by web services, or deserialized and consumed. They’re not covered by our logic, so they won’t get tested.
This is fine.
If they really are all just data transfer objects, their job is simply to have a certain structure. They effectively have no logic and - this is going to sound heretical but bear with me - they do not need unit tests.
But what about JaCoCo? They’ll bring our coverage metrics down! 90% or die, right?
Put them in their own Maven submodule - you should be doing this anyway, so the API model can be shared as a JAR. Set this module’s JaCoCo limit accordingly. Be happy.
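As a sketch of what "set the limit accordingly" might look like, here is a hypothetical jacoco-maven-plugin rule for the model submodule's pom.xml (the 0.00 minimum is an illustrative choice for a module of pure DTOs, not a recommendation for modules with logic):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>check</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <!-- pure DTOs: no logic here, so no coverage required -->
                <minimum>0.00</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```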
Having a huge number of model classes with 100% coverage in the same module as classes with real logic is actually bad, because it artificially inflates the coverage numbers. I recently moved the model classes out of a project so they could be shared, and was surprised to find that the JaCoCo check started failing. It turns out some of the logic was completely untested, but the 90% mark was being met because of all the useless POJO test coverage.
Let’s talk about JaCoCo for a second, by the way.
This is not TDD:
- Write some code.
- Run JaCoCo and watch it fail.
- Write some tests that increase the coverage metric until JaCoCo passes.
JaCoCo is actually a decent tool, but we are abusing it. It exists because we are fallible: because writing good tests is sometimes quite difficult, because TDD is a hard discipline, and because it's good to have backup.
JaCoCo produces an HTML report, and I like to look at it after my test run and make sure there are no surprises. “Hey, this looks weird - that line is not covered, but my test passes. What’s it doing there? Either my test or my code is wrong, let’s investigate.”