
We have a new (quite big) project starting that we planned to develop using TDD.

The idea of TDD failed (for many business and non-business reasons), but now we are debating whether we should still write unit tests. My friend says there is no (or close to zero) sense in writing unit tests without TDD, and that we should focus only on integration tests. I believe the opposite: there is still value in writing plain unit tests, if only to make the code more future-proof. What do you think?

I'm curious what reasoning your friend has for such an absurd stance... –  Telastyn 16 hours ago
possible duplicate of Difference Between Unit Testing and Test Driven Development –  gnat 16 hours ago
I'd wager that the vast majority of projects with some unit tests are not using TDD. –  emodendroket 9 hours ago
What will be your integration levels? What will be your units? How often will you be refactoring below each test level? How fast will your integration tests be to run? How easy will they be to write? How many combinatorial cases do different parts of your code generate? etc... If you don't know the answers to these, then maybe it's too early to make firm decisions. Sometimes TDD is great. Sometimes the benefits are less clear. Sometimes Unit tests are essential. Sometimes a decent set of integration tests buys you almost as much, and are much more flexible... Keep your options open. –  topo morto 8 hours ago

3 Answers

Accepted answer (21 votes)

TDD is used mainly (1) to ensure coverage, and (2) to drive a maintainable, understandable, testable design. If you don't use TDD, you don't get guaranteed code coverage. But that in no way means you should abandon that goal and blithely live on with 0% coverage.

Regression tests were invented for a reason: in the long run, they save you more time in prevented errors than they cost in additional effort to write. This has been proven over and over again. Therefore, unless you are seriously convinced that your organization is much, much better at software engineering than all the gurus who recommend regression testing (or unless you plan on folding very soon, so that there is no long run for you), yes, you should absolutely have unit tests, for exactly the reason that applies to virtually every other organization in the world: they catch errors earlier than integration tests do, and that will save you money. Not writing them is like passing up free money lying in the street.

"If you don't use TDD, you don't get guaranteed code coverage.": I do not think so. You can develop for two days, and for the next two days you write the tests. The important point is that you do not consider a feature finished until you have the wanted code coverage. –  Giorgio 16 hours ago
@Giorgio: the word "guaranteed" makes all the difference. With TDD, you get full test coverage because you only code toward tests. Without TDD, you only have the coverage you add on after. –  DougM 15 hours ago
@DougM - In an ideal world maybe... –  Telastyn 14 hours ago
Sadly TDD goes hand-in-hand with mocking and until people stop doing that all it proves is that your test runs faster. TDD is dead. Long live testing. –  Micky Duncan 14 hours ago
TDD does not guarantee code coverage. That's a dangerous assumption. You can code against tests, pass those tests, but still have edge cases. –  Robert Harvey 12 hours ago

I have a relevant anecdote about something that's going on for me right now. I'm on a project that does not use TDD. Our QA folks are moving us in that direction, but we're a small outfit and it has been a long, drawn-out process.

Anyway, I was recently using a third-party library to do a specific task. There was an issue with that library, so it fell to me to essentially write my own version of it. In total, it ended up being about 5,000 lines of executable code and about two months of my time. I know lines of code is a poor metric, but for this answer I feel it's a decent indicator of magnitude.

There was one particular data structure I needed that would let me keep track of an arbitrary number of bits. Since the project is in Java, I chose Java's BitSet and modified it a bit (I needed the ability to track the leading 0s as well, which Java's BitSet doesn't do for some reason...). After reaching ~93% coverage, I started writing tests that would actually stress the system I had written. I needed to benchmark certain aspects of the functionality to ensure they would be fast enough for my end requirements. Unsurprisingly, one of the functions I had overridden from the BitSet interface was absurdly slow when dealing with large bit sets (hundreds of millions of bits in this case). Other overridden functions relied on it, so it was a huge bottleneck.
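To illustrate the kind of modification described here (a hypothetical sketch, not the actual code from this project): `BitSet.length()` only reports one past the highest *set* bit, so a wrapper must track a logical length separately if a run of leading zero bits is meaningful.

```java
import java.util.BitSet;

// Hypothetical sketch: a wrapper that tracks a logical length so that
// leading zero bits are preserved. Plain BitSet.length() would report 0
// for a bit set whose highest bits were all explicitly set to false.
public class SizedBitSet {
    private final BitSet bits = new BitSet();
    private int logicalLength = 0; // width including leading zeros

    // Setting an index, even to false, extends the tracked width.
    public void set(int index, boolean value) {
        bits.set(index, value);
        logicalLength = Math.max(logicalLength, index + 1);
    }

    public boolean get(int index) {
        return bits.get(index);
    }

    // Unlike BitSet.length(), this reports the full tracked width.
    public int length() {
        return logicalLength;
    }
}
```

For example, after `set(7, false)` this wrapper reports a length of 8, whereas a plain `BitSet` with only bit 3 set would report 4.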

What I ended up doing was going back to the drawing board and figuring out a way to manipulate the underlying structure of BitSet, which is a long[]. I designed the algorithm, asked colleagues for their input, and then set about writing the code. Then I ran the unit tests. Some of them broke, and the ones that did pointed me exactly to where I needed to look in my algorithm to fix it. After fixing all the errors the unit tests caught, I was able to say that the function works as it should. At the very least, I could be as confident that the new algorithm worked as well as the previous one.
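As a rough illustration of why dropping down to the backing long[] pays off (a minimal sketch, not the author's actual algorithm), here is a population count done bit-by-bit versus word-by-word. With hundreds of millions of bits, the first loop makes ~100M `get()` calls while the second makes ~1.5M `Long.bitCount()` calls:

```java
import java.util.BitSet;

// Minimal illustration of word-level manipulation on a BitSet's backing
// long[] words, versus testing each bit individually.
public final class WordLevelOps {
    // Slow: one method call and index check per bit.
    static long countPerBit(BitSet bits, int length) {
        long count = 0;
        for (int i = 0; i < length; i++) {
            if (bits.get(i)) count++;
        }
        return count;
    }

    // Fast: process 64 bits at a time via the backing words.
    static long countPerWord(BitSet bits) {
        long count = 0;
        for (long word : bits.toLongArray()) {
            count += Long.bitCount(word);
        }
        return count;
    }
}
```

The point of the anecdote holds either way: both versions must produce identical results, and the existing unit tests are what let you swap the slow loop for the word-level one with confidence.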

Of course, this is not bulletproof. If there's a bug in my code that the unit tests aren't checking for, I won't know about it. But that exact same bug could have been in my slower algorithm as well. However, I can say with a high degree of confidence that I don't have to worry about wrong output from that particular function. The pre-existing unit tests saved me hours, perhaps days, of trying to verify that the new algorithm was correct.

That is the point of having unit tests regardless of TDD: unit tests will do this for you in TDD and outside of TDD all the same, whenever you end up refactoring or maintaining the code. Of course, this should be paired with regular regression testing, smoke testing, fuzz testing, etc., but unit testing, as the name states, tests things at the smallest, most atomic level possible, which tells you where errors have popped up.

In my case, without the existing unit tests, I would somehow have had to come up with a method of ensuring the algorithm works all of the time. Which, in the end... sounds a lot like unit testing, doesn't it?


You can break code roughly into 4 categories:

  1. Simple and rarely changes.
  2. Simple and frequently changes.
  3. Complex and rarely changes.
  4. Complex and frequently changes.

Unit tests become more valuable (more likely to catch important bugs) the further down the list you go. In my personal projects, I almost always do TDD on category 4. On category 3, I usually do TDD unless manual testing is simpler and faster. For example, antialiasing code would be complex to write but much easier to verify visually than with a unit test, so a unit test would only be worth it to me if that code changed frequently. The rest of my code I only put under unit test after I find a bug in that function.

It's sometimes difficult to know beforehand which category a certain block of code fits into. The value of TDD is that you don't accidentally miss any of the complex unit tests. The cost of TDD is all the time you spend writing the simple ones. However, people experienced with a project usually know with a reasonable degree of certainty which category different parts of the code fall into. If you aren't doing TDD, you should at least try to write the most valuable tests.

When working on code like you suggest with your antialiasing example, I find the best thing is to develop the code experimentally, then add some characterization tests to ensure that I don't accidentally break the algorithm later. Characterization tests are very quick and easy to develop, so the overhead of doing this is very low. –  Jules 1 hour ago
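A characterization test along these lines might look like the following (hypothetical sketch; the blur function is a stand-in for an experimentally developed algorithm, and the "golden" values are assumed to have been captured from a trusted run rather than derived from a spec):

```java
import java.util.Arrays;

// Hypothetical sketch of a characterization test. The algorithm below is
// a stand-in for code that was developed experimentally and verified
// visually, as described in the comment above.
public class CharacterizationExample {
    // A simple 1-D smoothing filter, standing in for the real algorithm.
    static int[] blur(int[] pixels) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int left = pixels[Math.max(i - 1, 0)];
            int right = pixels[Math.min(i + 1, pixels.length - 1)];
            out[i] = (left + 2 * pixels[i] + right) / 4;
        }
        return out;
    }

    // Characterization test: the expected output was recorded from a run
    // of the visually verified version, not derived from a specification.
    // If a later refactoring changes the behavior, this fails loudly.
    static void checkGolden() {
        int[] golden = {25, 50, 25, 0}; // captured output for this input
        if (!Arrays.equals(blur(new int[] {0, 100, 0, 0}), golden)) {
            throw new AssertionError("behavior changed from recorded golden output");
        }
    }
}
```

The test asserts nothing about what the output *should* be, only that it stays what it *was*, which is exactly the cheap safety net wanted here.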
