Cram your test driven development up your ass....
Sam Hart2008-04-03 14:57:09
I vow today that I will never again allow some blinkered, narrow-visioned, elitist Test-Driven-Development (TDD), Agile or XP (EXTREEEM PROGRAMMING!!!11one1eleventeen) cock-sucker to push me around. I also vow that I will fight these religious zealots, with their pigheaded belief that TDD is the end-all, be-all solution to everything, until my last dying breath.
Seriously, not since... well... never... have I felt so strongly about a particular technical methodology. I may rant about the cults of elitism surrounding git or vi, but they can't even remotely approach the TDD crowd's level of elitism and self-assured certainty of correctness in spite of ample contradictory evidence. For the practitioners and proselytizers of TDD, you're either with them or you're some sort of mentally retarded misanthrope who writes shitty code and probably eats babies.
What is TDD, and why do I hate it so? Well, let's explore the subject a bit deeper, shall we?
TDD, or test driven development, is the process by which you solve programmatic problems by first developing test code that tests for an expected condition, and then write your code towards the test. When the test succeeds, you're done. Generally speaking, the process runs like this:
- Develop a test
- Run all the tests and watch the new test fail
- Develop code to satisfy the test
- Run all the tests again and verify that they succeed
- If not, refactor code until they do
- Rinse and repeat
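The cycle above can be sketched in a few lines of Python with the standard `unittest` module. The `add` function and its expected values are my own toy example, not anything from this post:

```python
import unittest

# Step 1: write the test first, for a function that does not exist yet.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2: at this point in the cycle, running the suite fails
# (NameError: 'add' is not defined) -- that's the expected "red" state.

# Step 3: write just enough code to satisfy the test.
def add(a, b):
    return a + b

# Step 4: run the suite again and watch it go "green".
if __name__ == "__main__":
    unittest.main(exit=False)
```

When the test passes, TDD says you're done with that increment; if it still fails, you refactor until it doesn't, and then move on to the next test.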
On paper and in theory, this sounds very appealing. You have a verifiable measurement of success (the new test succeeding without breaking the old ones), you have to think about the problem up front and form a solution before even starting to hack on the code (in order to write a test), and you are forced to break larger tasks into smaller, more manageable ones. Wonderful, no? Just looking at it makes one all wet and creamy, don't it?
Well, like a lot of things, the method breaks down in practice, and breaks down in ways that the vast majority of TDD practitioners are blind to. Furthermore, the process becomes a crutch by which mediocre developers can "get by" without really having to strive very hard at bettering themselves.
The Problems of TDD
So let's take a look at just a few of the specific problems of TDD. This won't be a complete list by any means; rather, it's some of the important ones, in my view, that TDD-zealots often overlook.
- Extra, often useless, up-front development
Development, if you're doing it right, involves a great deal of code refactoring as you discover problems and dilemmas with your design while you implement it. Unless you're coding something whose problem-space is well-known and well-understood, you're going to have a lot of initial conceptions about the software that will change or vanish as you develop it. This isn't specific to software development; it happens more generally in other fields, too (science, engineering, writing, you name it).
Applied to software development this means that you will invariably have "prototyped" or "proof-of-concept" code initially that, while serving as a nice road-map of the project, will ultimately be discarded or replaced as the development process continues.
TDD, by its definition, pre-loads this initial "prototyped" or "proof-of-concept" code with useless test cases that will probably not be applicable to the final project. But at the same time, writing these tests up front won't help you avoid the inevitable refactoring, as they are just testing the inaccurate preconceived notions you held at the start.
The time spent coming up with this quickly discarded test code could and should have been better spent actually hacking on the prototype.
- Development with blinders on
Race horses wear special tack, blinders, that limits their vision to a fairly narrow field in front of them. The purpose is to keep the horse focused on the path ahead and not distracted by the other horses, crowds, and objects around it. In every respect, TDD has the same effect on developers.
When you write your tests up front, you are making assumptions that your expected results are the ones you should be getting. Thus, it is very easy (and common) to accept them as gospel and let them guide the rest of your development (I know many TDDs will claim this isn't so and that TDD actually requires you not to do this, but the fact remains that this winds up happening anyway).
If you get the expected results right the first time, then peachy. However, as I said above, if you're doing it right you won't get these results correct the first time. These pre-written tests can thus guide you into inadequate solutions, and make it very hard to see the alternatives outside of your narrowly defined tests.
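Here's a contrived Python sketch of that blinders effect. Every name and value here (the `order_total` function, the shipping rule) is invented for illustration, not taken from any real project:

```python
# The developer assumes up front that shipping is always a flat $5 fee,
# and enshrines that assumption in a test before writing any code.
def test_order_total():
    assert order_total(items=20.00) == 25.00  # baked-in flat-fee assumption

# The implementation is then bent toward the test, not toward the real
# requirement (which, say, turns out to be free shipping over $15).
def order_total(items):
    return items + 5.00  # the test passes, so this "must" be right

test_order_total()  # green bar, wrong behavior
```

The suite goes green, the developer moves on, and the flawed assumption is now defended by a passing test instead of being questioned.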
- Tests are weighted more heavily than the code
The emphasis on composing tests before the actual code of the application gives the tests more importance than the code itself. This is another reason why the tests are so commonly accepted as gospel in spite of evidence to the contrary. Weighing the tests more heavily than the actual code of the application is idiotic and wrong in every way, shape, and form.
Think about it, what is the ultimate goal when you develop software? Is it to develop a large testing framework that tests pre-conceived notions about the software, or is it to actually develop the software? Placing so much emphasis on the tests shifts the focus of the development process to something that is important to the developer and the developer alone.
- Coding yourself into a corner
Perhaps my biggest problem with TDD is how easily the above problems allow you to paint yourself into a corner, programmatically speaking.
Often it is much easier to refactor your code to match your expected results from a test you've written, than to scrap the test and design and start over again. TDD's emphasis on the tests will make it very hard to accept that the tests themselves could be what's wrong in a design, and you'll often find test-driven developers more willing to tweak their final code to match a flawed test than actually fix the test and change their initial design.
- Narrowly applicable uses
There is a lot online about how TDD really doesn't work well for certain classes of applications like GUI and WUI (web user interface) apps. But, really, that's just the tip of the iceberg. Honestly, other than library and API testing, pre-loading your tests up-front can only distract rather than help with development, and I'd suggest that even with library and API-testing they are a stretch.
What this means is that TDD is only applicable for a very narrow segment of software development, which certainly doesn't support how much publicity and press it's gotten.
- Tests solve problems that don't exist
TDD does not prevent bugs, and even the TDD-zealots don't claim that it does. After your software gets into the hands of testers and users, they will undoubtedly find bugs and corner cases you hadn't thought of, regardless of how you first developed the software.
The process by which these bugs are resolved generally follows the TDD one detailed above. In fact, it's where TDD came from! When a bug is reported, you often write a test that demonstrates the bug's existence, and then code up a solution. The test stays around to prevent regression of the bug.
Well, the TDD tests that were developed up front have nothing to do with the bugs that will be revealed once the software is released. What happens when a bug conflicts with a TDD-test? What happens when a bug is caused by a TDD-test? Which is more important: the tests you used to develop the software, or the problems and issues found by testers and users after it's been released?
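For contrast, here's the one place where this test-first motion does make sense: the regression-test loop for a reported bug. The `parse_price` function and the bug report are hypothetical examples of mine, not anything from the post:

```python
# Bug report: parse_price("1,234.50") blows up on thousands separators.

# First, a test written straight from the bug report, demonstrating it.
def test_parse_price_with_thousands_separator():
    assert parse_price("1,234.50") == 1234.50

# Then the fix: strip separators before converting.
def parse_price(text):
    return float(text.replace(",", ""))

# The test stays in the suite forever, so the bug can't silently return.
test_parse_price_with_thousands_separator()
```

Note the difference: here the test encodes a real, observed failure reported by an actual user, not a pre-conceived notion dreamt up before any code existed.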
- Scope creep
Part of TDD's method involves breaking up larger tasks into smaller, more easily testable, ones. This generally implies insufficient design and requirements documentation up front, which invariably leads to scope creep.
If the same area of code needs to be re-engineered via pre-loaded, test-driven development over and over again because of things like scope creep, that is much less efficient than just planning the project out beforehand and sticking to the project design documentation all along.
- Impossible to develop realistic estimates of work
This is another big problem that TDD-zealots do a lot of hand-waving over. Without clear design and requirement documentation up front, it's virtually impossible to estimate how much time and effort will be required to get something up and running.
This may work fine if you're doing some sort of deadline-less project (like a Free- or Open-Source Software project), but imagine you're a contractor who needs to give a realistic estimate of how much time you'll spend on a project so you can be paid up-front. With TDD it's entirely too easy to overestimate the time it will take to do something, or, worse for you, underestimate it.
I could go on, but I grow weary of it. If you think about the above, you should be able to see plenty of other problems with TDD.
Richard Dawkins once described himself as militantly atheistic. As loath as I am to honestly claim to be militantly anything, I feel I must proclaim that I am militantly anti-TDD. The TDD movement has entirely too much momentum when you consider its flaws and shortcomings, and it simply must be knocked down a peg.