For the most reliable software, don't choose between testing approaches; use them all. Testing is not like choosing an architecture or a programming language, where you have to commit to one: the more approaches the merrier. As each testing approach hits diminishing returns, that's a signal to pick up a different approach and start near the top of its curve.
Unit testing enables hidden functionality to be tested. This can prevent future bugs when changes in the system suddenly uncloak those functions, exposing them to higher-level tests.
I don't fully agree. Yes, employ as many testing approaches as possible (e2e tests, property-based testing, golden tests, whatever), but only test behavior that you expose and want to guarantee will stay as-is. Otherwise you end up in a situation where every refactor requires changing tests.
Unit testing is fine, if you do it against your public interface. If you are writing a math library, then sure, unit test that `add(1, 2) == 3`. But if that's just an internal helper function, think about whether you really want to lock its existence and behavior into place, or whether that would just hinder future architectural changes.
You can always test the exposed functionality that uses the helper and achieve full coverage of it that way. If you can't, then you have dead code.
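As a sketch of what that looks like in practice (the `Mixer` class and `_clamp` helper here are invented for illustration, not from any particular codebase): the internal helper gets full coverage purely through tests of the public method that uses it.

```python
# Hypothetical module: a public set_volume() backed by a private _clamp() helper.

def _clamp(value, low, high):
    """Internal helper: restrict value to the range [low, high]."""
    return max(low, min(value, high))

class Mixer:
    def __init__(self):
        self.volume = 0

    def set_volume(self, value):
        # The helper's behavior is fully exercised through this public method.
        self.volume = _clamp(value, 0, 100)

# Tests target only the public interface, yet every branch of _clamp runs:
mixer = Mixer()
mixer.set_volume(150)
assert mixer.volume == 100   # upper clamp
mixer.set_volume(-5)
assert mixer.volume == 0     # lower clamp
mixer.set_volume(42)
assert mixer.volume == 42    # pass-through
```

If some branch of `_clamp` could never be reached this way, that branch is dead code and can be deleted rather than tested.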
Of course, this is all a bit more nuanced. Past a certain size it might make sense to, e.g., consider one module's interface public to the rest of your application and test it. But you can definitely overdo it, and testing every single function you write (as I've seen people unironically suggest) is very likely detrimental.
While that is a good rule for the general case, there are exceptional circumstances where testing an internal helper function can improve development velocity. One should not shy away from using what is useful. What is important to remember is that "public" tests are documentation that remains for the lifetime of your application; "private" tests are throwaway.
A good language will provide clear boundaries such that it is obvious which is which.
Alternatively, if an internal function is important enough to need good coverage, it should be pulled out into an internal "library" that exposes the interface explicitly (even if this is just a separate file or folder with limited visibility to the rest of the codebase).
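A minimal sketch of that idea in Python (the module and function names are invented): move the helper into its own internal module whose explicit interface is then fair game for tests.

```python
# _geometry.py — an internal "library" within the application.
# The leading underscore in the module name signals limited visibility,
# while __all__ declares the explicit, testable interface.

__all__ = ["normalize_angle"]

def normalize_angle(degrees):
    """Map any angle to the range [0, 360)."""
    return degrees % 360

# The rest of the codebase imports only what __all__ exposes, so tests
# for this module target a deliberate interface, not an incidental helper:
assert normalize_angle(370) == 10
assert normalize_angle(-90) == 270
assert normalize_angle(360) == 0
```

The point is that the test now pins down an interface someone chose to expose, rather than freezing an implementation detail in place.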
Testing internals is almost always a code organization smell IMO.
If you seek good coverage, you would undoubtedly be better off directing that effort at the public interface.
"private" tests are more for like when you're having trouble figuring out an edge case failure and want to narrow it down to a specific helper function to aid debugging or if you need help coming up with the right design for an internal function. As before, we're talking exceptional circumstances. Rarely would you need such a thing. But if it helps, no need to fear it.
Either way, I'm not sure you would be looking for good coverage, only the bare necessities to reach the goal. Once settled, the tests are disposable. Organizing your project into internal libraries in case you encounter a debugging problem in need of assistance, for example, is extreme overkill.
Hmm, I guess I don't really consider throwaway assertions in pursuit of debugging "tests" in the typical sense.
I certainly wouldn't advocate for code reorganization for that case, but if there is a property that is important to maintain over time that isn't easily expressed by exercising the public API, it does suggest that reorganization is probably in order.
> Hmm, I guess I don't really consider throwaway assertions in pursuit of debugging "tests" in the typical sense.
I agree it is not "tests" in the documentation sense, and I did mention that, but, regardless, you do seem to align on "throwaway". Now you have me curious, what kind of ephemeral "tests" were you imagining if not something similar to what I described in more detail later?
I suppose I was imagining tests that are created to validate an internal piece of code when it's first built or refactored, not necessarily in pursuit of a specific observed bug.
> But if you just have an internal helper function for that, then think about if you really want to lock its existence and behavior into place, or if that would just hinder future architectural changes.
I agree with everything you’ve said up until this point. The add function should be locked down, whether or not it’s internal code. I’ve written some physics libraries for games I’ve coded in the past, and you can bet that I wanted to lock in the functionality of that complex physics code by working some examples by hand and then codifying them into unit tests!
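That workflow can be sketched like this (a hedged illustration; the projectile function and numbers are invented, though they follow the standard kinematics formula h = v·t − g·t²/2): work an example by hand, then codify it as a unit test.

```python
# Hand-worked example: a projectile launched straight up at 20 m/s with
# g = 10 m/s^2 reaches h = 20*2 - 0.5*10*4 = 20 m at t = 2 s.

def height_at(v0, g, t):
    """Height of a projectile launched upward at speed v0, after time t."""
    return v0 * t - 0.5 * g * t * t

# Codify the hand-worked results as unit tests:
assert height_at(20, 10, 2) == 20.0
# A second point checked by hand: at t = 1 s, h = 20 - 5 = 15 m.
assert height_at(20, 10, 1) == 15.0
```

Whether `height_at` is public or internal, those hand-checked values are worth locking in, because the cost of silently breaking physics code is much higher than the cost of updating a test.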
I think you’re right at the end: everything is nuanced. Using the right tool for the right job is easier said than done :)
How do you propose testing defensive programming? In a good program, most of those checks are not coverable from the public interfaces.
The other problem is that finding tests that cover everything from the public interfaces can be extremely difficult. Most test suites don't achieve full coverage even of theoretically reachable points in the code.
> In a good program, most of those checks are not coverable from the public interfaces.
If they aren't, then you are defending against situations that can never occur. In that case the check seems unnecessary, and I at least wouldn't prioritize covering it. Those checks are also usually simple enough to reason about directly, which again makes test coverage less of a priority.
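To make the point concrete (a hedged sketch with invented names): a defensive check inside a private helper, whose only caller already guarantees the precondition, can never fire through the public API.

```python
def _average(values):
    # Defensive check: unreachable through the public API below,
    # because mean_or_zero() already filters out the empty case.
    if not values:
        raise ValueError("values must be non-empty")
    return sum(values) / len(values)

def mean_or_zero(values):
    """Public API: mean of values, or 0.0 for an empty sequence."""
    if not values:
        return 0.0
    return _average(values)

# Every public input either hits the early return or reaches _average()
# with a non-empty sequence, so no test through mean_or_zero() can
# cover the ValueError branch.
assert mean_or_zero([]) == 0.0
assert mean_or_zero([2, 4]) == 3.0
```

Here the uncovered branch is exactly the kind of check that is either redundant (delete it) or simple enough to verify by inspection.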
> The other problem is that finding tests that cover everything from the public interfaces can be extremely difficult. Most test suites don't achieve full coverage even of theoretically reachable points in the code.
Yes, testing is hard. That is not a reason to write more detrimental tests though.
Yeah, let's only do defensive programming in the places where it will actually end up being needed. Brilliant idea! In the same vein, let's only write tests for the places where bugs will actually be.
This insight of yours cannot help but save on costs. Be sure to suggest it to management.
> In the same vein, let's only write tests for the places where bugs will actually be.
Sounds like a good idea, actually. These places being the entire surface of your code that is exposed to the real world. So not unlike what I have been suggesting.
I mean, if you really want to start every function with multiple `assert true == true` statements, feel free to do so. But I will question its usefulness.
Do you have any particular example in mind that is not superfluous and really can not be exercised by the public API? I have a hard time thinking of any.