By the way, three interesting thoughts about testing came up at work yesterday: nominal test coverage, the illusion of test coverage, and fitting tests to code.

Nominal test coverage is when you've written tests but they don't check anything; the most exaggerated example is a test whose body is just assert True. Formally the test exists, but it actually verifies nothing. As a result, when that section of code really changes, the tests won't help you detect the breakage, because they'll keep glowing green. This is a broader thought about "being, not seeming" (which, in my worldview, is indirectly connected with the thought about why you shouldn't lie in your resume: lying in a resume means passing off the desired as the actual).
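A minimal sketch of the difference, in pytest-style Python (the function and test names here are hypothetical):

    # Hypothetical function under test.
    def apply_discount(price: float, percent: float) -> float:
        return price * (1 - percent / 100)

    def test_discount_nominal():
        # Nominal coverage: this always passes and never calls the code,
        # so any breakage in apply_discount stays invisible.
        assert True

    def test_discount_real():
        # A real check: it exercises the code and pins the expected result,
        # so a behavior change turns this test red.
        assert apply_discount(100.0, 25.0) == 75.0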

The illusion of test coverage: we know there are, say, 60 situations that can arise while the tested section of code runs. But depending on circumstances (no time or desire, other burning tasks), we deliberately write tests for only 20 of them. To everyone else, this section/module will look tested and covered (I'm assuming a baseline of trust in the team, where "done" means the truth, and you don't spend the end of the day going through all your teammates' commits with a coverage tool in search of it). That is, if you wrote tests for new functionality, it's taken to mean that the tests anticipate all the situations that can arise (all those you can foresee at the development stage; I'm not counting bugs and unplanned behavior). In short: you wrote new functionality and covered it with tests, so other team members will assume your tests are objective and will turn red on breakage. But if only 20 of the 60 cases are tested, there's a real chance a bug will get into the main branch, because the place it affects wasn't covered. The result is an illusion that the code is covered by tests, which creates false confidence that everything works correctly, when in fact it doesn't (again returning to the thought of why soft skills are sometimes valued higher than hard skills: teaching a person to write code is faster than teaching them to be honest and responsible).
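As a rough sketch of what "only 20 of 60" looks like in a test file (the validator and cases are hypothetical): the test is green and the module looks covered, but most of the known situations are simply absent.

    import pytest

    # Hypothetical function with many distinct input situations:
    # empty string, whitespace only, unicode, very long input, and so on.
    def normalize_username(raw: str) -> str:
        return raw.strip().lower()

    # Only a couple of the known situations are written down; the file
    # looks tested, but the untested inputs (e.g. empty or whitespace-only
    # strings) can still let a bug slip into the main branch.
    @pytest.mark.parametrize("raw, expected", [
        ("Alice", "alice"),
        ("  Bob ", "bob"),
    ])
    def test_normalize_username(raw, expected):
        assert normalize_username(raw) == expected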

How to solve such a problem: admit that you can't write the tests right now and leave that section of code uncovered. Then everyone working with it will be more careful with their changes, instead of placing mistaken trust in incomplete tests. Such situations are rare in general, but that's exactly the point: sometimes it's better to admit a bitter truth than to create a beautiful illusion that everything is covered.
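One possible way to make such a gap explicit rather than silent, assuming pytest (the test name and reason here are hypothetical): a skipped placeholder shows up in every test run as a visible reminder, instead of quietly pretending the area is covered.

    import pytest

    @pytest.mark.skip(reason="retry/backoff branches are NOT covered yet; "
                             "review changes to this flow extra carefully")
    def test_payment_retry_paths():
        ...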

Fitting tests to code: there's a temptation to simply restate the algorithm used inside a function/method/flow in the test and check that "everything works." But then every change to the internal implementation forces you to rewrite the tests, and that is the first sign the tests are being written incorrectly. Ideally, tests should be as general as possible and describe only what goes in and what result comes out. Then, when the implementation changes, such tests will still show whether the code works as intended. That's why the idea of writing tests before code looks tempting: while you don't yet know the internal implementation, you can write a test against the program's requirements, and then build any implementation that simply works as needed.
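A small sketch with a hypothetical requirement ("return the three highest scores, descending"): the test pins only inputs and outputs, so either implementation passes unchanged.

    import heapq

    # Two interchangeable implementations of the same requirement.
    def top_three_v1(scores: list[int]) -> list[int]:
        return sorted(scores, reverse=True)[:3]

    def top_three_v2(scores: list[int]) -> list[int]:
        # Different internal algorithm; the test below doesn't care.
        return heapq.nlargest(3, scores)

    def test_top_three_behavior():
        # Only the contract is checked: what goes in, what comes out.
        # Swapping v1 for v2 requires no test changes.
        for impl in (top_three_v1, top_three_v2):
            assert impl([5, 1, 9, 3, 7]) == [9, 7, 5]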

Perhaps all of this was already in the book about Billy the tester, but I picked up all these thoughts in half an hour of a daily standup. I don't understand why everyone dislikes standups so much; they're a treasure trove of knowledge. Before that there was an infrastructure discussion about how application instances are added and removed under load, and how the load is scaled that way. It feels like university: you can just sit and absorb knowledge with minimal effort. It's as if the environment itself lets you learn.

Ilia Kaziamov @ 2025
v.0.2