A Guide to Writing Bad Unit Tests
14 November
[✔︎ PASS] it sums positive integers
[✔︎ PASS] it sums negative integers
[✔︎ PASS] it sums one positive and one negative integer
[✔︎ PASS] it sums with zero
[× FAIL] it sums positive integers
[× FAIL] it sums negative integers
[× FAIL] it sums one positive and one negative integer
[× FAIL] it sums with zero
We’ve all seen the articles, watched the videos: unit tests are the shit. Whatever your testing strategy is, I’m sure unit tests are part of it. Maybe you even practice test-driven development, or test-first development, or the popular favorite, test-first-but-not-really-I-like-it-though development.
Once you’ve caught this wave there’s no going back. Of course I’m talking about the wave of bad unit tests.
Getting Started
The approach is straightforward:
- Isolate the unit.
- Test the public methods.
- Mock only what you need to make the test pass.
This maximizes speed, which is critical - you don’t want unit-test execution time to slow down your development.
Done.
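The whole recipe fits in a few lines. Here’s a minimal sketch in Python using `unittest.mock` (the `OrderService` unit and its `payments` dependency are made-up names, not from any real codebase):

```python
from unittest.mock import Mock

# Hypothetical unit under test; names are illustrative only.
class OrderService:
    def __init__(self, payments):
        self.payments = payments

    def checkout(self, amount):
        self.payments.charge(amount)  # the unit's only dependency call
        return "ok"

# 1. Isolate the unit: stub the dependency.
payments = Mock()

# 2. Test the public method.
service = OrderService(payments)
result = service.checkout(100)

# 3. Mock only what you need to make the test pass, then assert.
assert result == "ok"
payments.charge.assert_called_once_with(100)
```

Fast, isolated, and green. What could go wrong?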
But why are these “bad” tests?
They’re fragile, implementation-dependent tests that make every code change take longer as the test suite grows.
If that sounds fun to you, let’s dive in.
1. Isolate the Unit
Stub everything. A test needs to fail if the code uses anything unexpectedly. A test especially needs to fail if any change to the code changes how any dependency is used.
Did the order of parallel network calls change? The unit tests should fail.
Did the asynchronous code run one event-loop later than before? The tests should fail.
Certainly the test that specifically verifies timing should fail, and if you do it right:
You can get most of your unit tests to fail with every change!
This is what I call hidden coupling. More on this when we “mock only what you need” in section 3 below.
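Here’s a sketch of what that hidden coupling looks like in Python with `unittest.mock` (`load_dashboard` and the endpoints are hypothetical names for illustration):

```python
from unittest.mock import Mock, call

# Hypothetical code under test: fetches two resources in sequence.
def load_dashboard(client):
    users = client.get("/users")
    orders = client.get("/orders")
    return {"users": users, "orders": orders}

client = Mock()
client.get.side_effect = [["alice"], ["order-1"]]
result = load_dashboard(client)
assert result == {"users": ["alice"], "orders": ["order-1"]}

# Asserting the exact call order pins the test to an implementation
# detail: swap the two fetches and this assertion fails, even though
# the returned dict is identical.
assert client.get.call_args_list == [call("/users"), call("/orders")]
```

Reorder the fetches, parallelize them, cache one of them - the output never changes, but the test breaks every time.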
2. Test the public methods
And make sure you test all of them in every way you can think of. That’s what we call “good coverage”.
The more tests you have, the more that can fail when implementation details change – but this only works if you’ve also done step 1 above and step 3 below.
This step compounds the effects of hidden coupling so that any code change will take weeks to update all the failing unit tests.
3. Mock only what you need to make the test pass
Testing the output when the 2nd network call fails? Mock the first network call just enough to get the code to run to the 2nd, then mock the 2nd, then assert. Done.
Testing the UI updates after the user clicks a button? Mock only the things that fail along the way, then assert. Done.
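The 2nd-network-call case might look like this sketch (`sync_profile` and its endpoints are hypothetical; the chained `side_effect` is doing the "just enough" mocking):

```python
from unittest.mock import Mock

# Hypothetical code under test: two sequential network calls.
def sync_profile(client):
    token = client.post("/auth")        # 1st network call
    try:
        client.post("/profile", token)  # 2nd network call
    except ConnectionError:
        return "retry-later"
    return "synced"

client = Mock()
# Mock the 1st call just enough to reach the 2nd,
# then make the 2nd call blow up.
client.post.side_effect = ["fake-token", ConnectionError("boom")]

assert sync_profile(client) == "retry-later"
```

Note how the stub encodes the exact number and order of calls the current implementation happens to make. That’s the trap.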
How do I know I’m doing it?
Change the implementation in a business-value-neutral way - a way that still meets all the requirements, just differently. If tests fail, congratulations, you’ve done it!
If a lot of tests fail, you’ve mastered it.
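A toy demonstration of such a business-value-neutral change (all names here are made up):

```python
from unittest.mock import Mock

# Two hypothetical implementations with identical observable behavior.
def total_v1(repo):
    return sum(repo.prices())

def total_v2(repo):
    # Business-value-neutral rewrite: same result,
    # but it happens to call prices() twice.
    if not repo.prices():
        return 0
    return sum(repo.prices())

def run_fragile_test(total):
    repo = Mock()
    repo.prices.return_value = [10, 20, 30]
    assert total(repo) == 60            # the actual requirement
    assert repo.prices.call_count == 1  # an implementation detail

run_fragile_test(total_v1)  # passes
try:
    run_fragile_test(total_v2)
except AssertionError:
    print("v2 fails the fragile test despite meeting the requirement")
```

The requirement ("the total is 60") never changed; only the call-count assertion did the failing.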
The end. Time for a promotion for all your hard work writing all those tests, responding quickly to the CI test failures, and spending so much time keeping them up-to-date.
Joking Aside, “Hidden Coupling” is a Time Suck
Above, the “properly isolated” unit’s tests were tightly coupled to the exact stub and mock setup used during the tests. And those stubs and mocks were tightly coupled to the code being tested.
Considering this from step 3 above:
Testing the UI updates after the user clicks a button? Mock only the things that fail along the way, then assert. Done.
It’s an example of “programming by coincidence”. Instead of writing test code according to requirements (business or technical), the test code is written to match whatever the behavior happens to be at the time the tests are written. This is the source of hidden coupling in tests.
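For contrast, a requirement-driven test asserts on what the code must produce, not on how it got there. A sketch, with hypothetical names (`apply_discount`, `discount_rate`):

```python
from unittest.mock import Mock

# Hypothetical code under test.
def apply_discount(order, pricing):
    rate = pricing.discount_rate(order["customer"])
    return round(order["total"] * (1 - rate), 2)

pricing = Mock()
pricing.discount_rate.return_value = 0.10

# Requirement-driven assertion: a 10% discount on 50.00 yields 45.00.
assert apply_discount({"customer": "c1", "total": 50.00}, pricing) == 45.00

# A coincidence-driven test would instead pin down *how* the rate was
# obtained (call order, call count, argument shapes), coupling the
# test to today's implementation.
```

The stub still exists, but the assertion is about the outcome the business cares about, so a refactor that preserves the outcome keeps the test green.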
Hidden coupling in tests is a type of technical debt. It makes a codebase fragile and hard to change, and that chunk of code inevitably stagnates shortly thereafter (months, not years).
Further reading
See this writeup from Alexey Golub on this topic. While I don’t agree 100% with everything there, I’m at least 80% on each topic… and I’ve read at least 5% of the article. Note point #4:
4. Unit tests rely on implementation details
The unfortunate implication of mock-based unit testing is that any test written with this approach is inherently implementation-aware. By mocking a specific dependency, your test becomes reliant on how the code under test consumes that dependency, which is not regulated by the public interface.
That about sums it up.
I’m not against unit tests - just those that follow this guide.