I'll be quite radical here, but please read this fully before criticising.
In short: test-driven development is a kind of myth. It is impossible to apply, even to new code, unless all specifications are already fixed and will never change (which is the classic "Waterfall" case, not the "Agile" one, contrary to TDD's usually declared application area). When it works outside of one-shot projects, it works only because its principles are violated, not satisfied.
To understand this, imagine that the specification changes and a new requirement is added, but it is already satisfied by the existing code. Should the existing code be dropped and rewritten from scratch? TDD demands that every test fail before the code that makes it pass is written, so a test that passes immediately has no place in the cycle. The classic TDD description cannot answer this at all; it simply does not deal with the existing-code case.
The second question is what exactly shall be tested. If a piece of code shall satisfy its tests and do nothing more, could it simply be the listed test cases in one big switch-case? If not, why not? Doing nothing beyond the tests is exactly what is explicitly requested (at least in the classical compact explanation).
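To make the switch-case point concrete, here is a minimal sketch (all inputs and names are hypothetical, invented for illustration): a "multiplication" function that satisfies exactly two enumerated test cases and nothing else.

```python
# A deliberately literal "implementation" that satisfies the listed
# test cases and nothing more -- the big switch-case described above.
# The test inputs (2,3) and (4,5) are made-up examples.

def multiply(a, b):
    # Only the enumerated test inputs are handled; everything else is
    # outside the "specification" formed by the tests.
    known = {(2, 3): 6, (4, 5): 20}
    return known.get((a, b), 0)

# Both listed tests pass, yet the function cannot multiply in general:
assert multiply(2, 3) == 6
assert multiply(4, 5) == 20
assert multiply(7, 8) != 56   # the tests never asked for this input
```

Nothing in the "tests first, minimal code second" rule, taken literally, forbids this implementation.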
The third question is how to check all the marginal cases. If a routine multiplies numbers, shall it be tested with INT_MIN * INT_MIN? Knowledge of marginal cases is mostly discovered through usage experience, not a priori. (Yes, the more expertise a programmer has, the better he can find such cases before writing the code. But 90% of any expertise is knowing what one shall not do, not what one shall.)
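As an illustration of that marginal case: Python integers do not overflow, so this sketch emulates the C situation by checking the product against explicit 32-bit two's-complement bounds (the function name and bounds are assumptions for the example).

```python
# Sketch of the INT_MIN * INT_MIN marginal case, assuming 32-bit
# two's-complement integers as in typical C.

INT_MIN, INT_MAX = -2**31, 2**31 - 1

def checked_mul(a, b):
    """Multiply two 32-bit ints; raise OverflowError if the
    mathematical product does not fit back into 32 bits."""
    result = a * b
    if not (INT_MIN <= result <= INT_MAX):
        raise OverflowError(f"{a} * {b} overflows 32-bit int")
    return result

assert checked_mul(1000, 1000) == 1_000_000

# The marginal case: INT_MIN * INT_MIN = 2**62, far outside the range.
try:
    checked_mul(INT_MIN, INT_MIN)
except OverflowError:
    pass  # a test for this exists only if someone thought to write it
```

A test suite written a priori will contain this case only if its author already knew to look for it, which is exactly the point.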
My answer is that TDD fulfils two roles:
- It shows an ideal, unreachable in practice (like infinity in mathematics) but easy to understand, and therefore one that can be approximated as closely as needed.
- It provides an excuse for managers to require testing that is not postponed until "good times" but done right now, before a task is reported finished.
So it shall be treated accordingly - "with enthusiasm but without fanaticism".
My personal attitude to TDD is formed by specific experience of building complex systems which shall work "24*7*365", with an acceptance style approaching military grade. Such systems require, on the developers' side, integrated testing of component chains and regular testing on the fly, using the same components and the same algorithms but with different tags (so that only the final decision checker knows it was not real data). For our systems, TDD is intentionally replaced with the following approach:
- Of course, the principle that code shall do what it is intended to do is more important than the TDD (or analogous) principle that code be as small and simple as possible. This also means that code readability and debuggability are very important. (None of this is a Captain Obvious comment; it is a written rule for those who tend to follow instructions in the most literal way and who, regrettably, are present in every large team.)
- Each component starts with a reasonably small test set that allows an experienced programmer to say it has been checked, given the current conditions and resources.
- A part of the time (of each programmer separately, or of the whole team) is dedicated to expanding the tests of already existing components. Depending on the project's goals, this can be 20% to 80%, but definitely not less than that. This work shall not stop until development of the whole project is abandoned.
- To answer the question of whether a test itself works (which TDD answers in a one-shot, unscalable manner), the test shall contain techniques to validate it. For this we use so-called inversions. An inversion is a change in the incoming data or environment that shall break the test in a known way. For example, if a function shall convert A1 to B1 and A2 to B2, then its test validating that B1 is the response to A1 shall fail if the input becomes A2; moreover, one can check the failure details (the generated exception type, the error count, etc.). Not all tests shall have inversions, but the more complex a test is, the more essential inversions become.
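The inversion idea can be sketched as follows (a minimal illustration with the A1/B1 names from the example above; `convert` and `check_convert` are hypothetical): the base test checks the A1 case, and the inversion deliberately feeds A2 and verifies that the check now fails in exactly the expected way, proving the test actually exercises the code.

```python
# Minimal sketch of an "inversion": a test that validates itself by
# breaking in a known way on deliberately wrong input.

def convert(x):
    # The function under test: A1 -> B1, A2 -> B2.
    return {"A1": "B1", "A2": "B2"}[x]

def check_convert(input_value):
    """The test body: passes silently if the result is B1,
    raises AssertionError otherwise."""
    assert convert(input_value) == "B1"

# Base test: shall pass.
check_convert("A1")

# Inversion: the same test fed A2 shall fail with exactly
# AssertionError (not, say, KeyError); otherwise the test is vacuous.
try:
    check_convert("A2")
    raise RuntimeError("inversion did not break the test")
except AssertionError:
    pass  # failed in the known way, as required
```

Checking the *type* of the failure, not merely that something failed, is what distinguishes an inversion from simply running the test on garbage.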
P.S. For an initial guide to adding tests to an existing project, you could start with Michael Feathers' book "Working Effectively with Legacy Code". It is not based on TDD; instead, it is very practical in suggesting how to reform existing code in an evolutionary manner.