Ok, I have run into the anti-TDD arguments on and off over the years. Generally they come from two primary sources. First there is the person who doesn’t want to do TDD no matter what, nor test, nor learn anything about the latest and greatest tech; they simply want to be left in their corner of the universe to code whatever they are coding until they drop dead. Then there is the other individual who is a decent developer, but just has honest hesitations about the style.
The developer who has honest hesitations I can understand, can talk to, and can usually at least get a “Well, that sounds reasonable, but I’m still not sure” response from, which is much better than the alternative of disregarding TDD outright. The hope is that this individual will at least try out TDD and see if it works for them.
Even with the more open-minded developer it is still difficult to put TDD into practice. Why? That’s simple: TDD is hard and forces a developer to step back to the basics and really think the problem through first. Most developers (at least currently, maybe it’ll change) have a very “just attack the problem with code” type of response. How to break down problems is taught in computer science 101 level classes, but after that it is often lost in all the cramming for math, algorithms, and other such material. The core fundamental idea of thinking the problem through clearly still needs to be reinforced, and TDD puts that idea at the forefront of the process by making a developer write the test that verifies the problem statement first.
With all this stated, there are a few myths that I’d like to bust. These myths are often what cause the first person to cringe away cowardly in their corner of the universe, and what cause the hesitations in the second, more open-minded developer. After busting these myths I hope more of the open-minded developers can use the power and thinking that TDD encourages. As for the first corner-cringing coder, they’re a dying breed and I wouldn’t bet on their recovery.
The Myths
- MYTH: But I can’t test every single thing that is written, so TDD just isn’t for me. The reality is, you can’t test every single thing and you shouldn’t. You should stick to the logic, business process, and similar parts of the code. Don’t worry about the configuration, individual database nodes, or other arbitrary or configuration-based bits. The idea is to make you think through the problem spaces and the architecture needed to work through those problem spaces. So TDD is still something you should try, but don’t try to test every single line of code. Focus first on just getting through the logical, flowing, business-related bits of the code you are writing with tests first. This will get you appropriately experienced with TDD.
- MYTH: You can’t test against boundaries, that becomes an integration test, so why even mock/stub/fake if I’m not testing the real framework/parts/features? Again, why worry about testing parts that you do not own or are not responsible for? Don’t test the integration parts themselves; there is a reason QA exists, and defining and testing boundaries is what QA does great. But you still need to confirm that your code works against those boundaries. Mocking or stubbing them gives you confidence that your code logic will work against these integration points and also gives you a basis to work from. If you write the tests first, you also know you’ve thought through these integration points to a fairly thorough level. (There’s a small sketch of both of these ideas right after this list.)
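To make those two myths concrete, here’s a minimal sketch in Java with JUnit of what that looks like: the business logic gets the test, and the boundary (a repository that would normally hit a database) is stubbed by hand. The `TaxRateRepository` and `InvoiceCalculator` names are invented purely for illustration.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical boundary: the real implementation might talk to a database or web service.
interface TaxRateRepository {
    double rateFor(String region);
}

// The business logic we actually own and want to drive out with a test first.
class InvoiceCalculator {
    private final TaxRateRepository rates;

    InvoiceCalculator(TaxRateRepository rates) {
        this.rates = rates;
    }

    double totalWithTax(double subtotal, String region) {
        return subtotal * (1 + rates.rateFor(region));
    }
}

public class InvoiceCalculatorTest {

    @Test
    public void addsRegionalTaxToSubtotal() {
        // Stub the boundary by hand: no database, no framework, just a canned answer.
        TaxRateRepository stubRates = new TaxRateRepository() {
            public double rateFor(String region) {
                return 0.10;
            }
        };

        InvoiceCalculator calculator = new InvoiceCalculator(stubRates);

        // Only the logic we own is under test; the real integration point is QA's territory.
        assertEquals(110.0, calculator.totalWithTax(100.0, "WA"), 0.001);
    }
}
```

The configuration of the real repository, the database nodes behind it, and the framework wiring never enter the picture; the test only pins down the logic that the stub feeds into.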
Do you have any more myths that should be busted? Please feel free to toss a myth or two into the comments, I’d love to hear others’ input on the matter.
The most common argument against TDD that I hear all the time is that the upfront costs are way too high. My counter-argument to that is always: development costs are much cheaper than maintenance costs. To find, debug, and schedule a patch for something in production is wildly more expensive than doing the same while a system is in the coding stage.
Not only that, but the pure act of TDD avoids defects rather than resolves defects. A mind shift for sure, but an important one.
Yeah, TDD has up-front costs as you point out. The question often comes up of whether a company wants to pay that or not. Once it is paid though, it rarely even comes up again, and the benefits pay off very quickly. Rarely, though, can TDD be implemented by itself; often it has to go with other good practices such as pair programming.
I have heard things like this from other developers:
“We can’t test the class, because the Logger is private static and injected by the framework; if I run the tests, I’ll get an NPE.”
“The project manager told us he doesn’t want any unit tests. So we don’t do any.”
“There were like 10 unit tests, which tested parts of the code that weren’t used. I can’t rely on unit tests.”
“It’s so hard to unit test. We’ve got a JBoss 6 where you can’t write JUnit tests like that. It would take 2-3 days to set up the environment for writing tests. The customer doesn’t pay for it. (We are external partners.)”
“The requirements change so often, I would need to rewrite many of my unit tests each time, plus the code.”
Another way in which TDD should always be leveraged is in defect resolution. When you find a defect, do not try to solve it first. Instead, write a test that duplicates the defect and fails (because there is a defect), and then change the code so that the test passes. You end up with an isolated test environment in which to solve the defect, and going forward you now have a test that will always check for that defect. (Your tests are automated against CI builds, right?) I have used this practice for many years and it works great, but most other developers I have worked with (at large and small companies) are not willing to do this at all. Many developers just lack the maturity and discipline required for great testing. So, in addition to being open-minded, there is a level of maturity and an amount of discipline that is also required of a developer for TDD to be adopted.
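As a sketch of that workflow (plain Java and JUnit, with a hypothetical `DateRange` class standing in for whatever production code actually has the defect): write the failing test that reproduces the report first, fix the code until it passes, and leave the test in the CI suite as a permanent regression guard.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical production class. The reported defect: a range that starts and
// ends on the same day claims it does not contain that day.
class DateRange {
    private final int startDay;
    private final int endDay;

    DateRange(int startDay, int endDay) {
        this.startDay = startDay;
        this.endDay = endDay;
    }

    boolean contains(int day) {
        // The original buggy line was: return day > startDay && day < endDay;
        // The fix makes the boundaries inclusive, which is what makes the test below pass.
        return day >= startDay && day <= endDay;
    }
}

public class DateRangeDefectTest {

    // Step 1: write this test against the buggy code and watch it fail (reproducing the defect).
    // Step 2: change the code until it passes.
    // Step 3: leave it in the automated suite so the defect can never quietly return.
    @Test
    public void singleDayRangeContainsItsOwnDay() {
        DateRange range = new DateRange(15, 15);
        assertTrue(range.contains(15));
    }
}
```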
Yeah, for sure. When a code base goes untested, and issues come up again, that’s a key indicator of two things. One is that the code base is obviously in use and needs to be maintained. The ideal is to increase the maintainability of that code by dropping a test on it to ensure that, if things come up again, they’re instantaneously fixable! (at least relatively 😉)
The other cool thing about throwing a test on it at that point is that it dramatically increases refactoring confidence. If the code is throwing errors, it’s likely code that needs refactoring more than other code. Refactoring an existing code base without tests is practically suicide. The likelihood of introducing more errors than fixes is huge without tests.
Good to hear from ya Wade. Hope the shredding is good and the code is wicked elegant! Cheers!
Agreed. Yep, the shredding continues. Metal doesn’t die! I did get rid of half of my guitars though… I had waaaayyyy too many to maintain and to play.
\m/
You make an excellent point, Wade. I was able to sway opinion toward testing/TDD recently by using this technique. I had written a library of semi-complex logic that was thoroughly tested to begin with. As another developer was consuming the library, they would come up with edge-case bugs. As the other developer reported the bugs to me, I would always ask, “what are you passing to the library?”
Using this information, it became easy to duplicate the problem with a failing test. Making the test pass added another test to the suite, which solidified the library. By the time we were done integrating the library into the other developer’s app, the library was solid and (to date) bug-free.
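That conversation can be captured almost verbatim as tests: each answer to “what are you passing to the library?” becomes another failing case, roughly like this (Java/JUnit, with an invented `QuantityParser` standing in for the real library):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Invented stand-in for the library call that was reported as buggy for certain inputs.
class QuantityParser {
    static int parse(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return 0; // edge case surfaced by the consuming developer
        }
        return Integer.parseInt(raw.trim());
    }
}

public class QuantityParserEdgeCaseTest {

    // Each reported input becomes its own test, so once the fix is in,
    // that particular edge case can never silently regress.
    @Test
    public void handlesSurroundingWhitespace() {
        assertEquals(42, QuantityParser.parse(" 42 "));
    }

    @Test
    public void treatsEmptyInputAsZero() {
        assertEquals(0, QuantityParser.parse(""));
    }

    @Test
    public void treatsNullAsZero() {
        assertEquals(0, QuantityParser.parse(null));
    }
}
```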
TDD can stand either for:
1) Test driven *Development* OR
2) Test driven *Design*
You didn’t specify explicitly what you mean.
In the first case (development) it’s a perfectly valid technique, assuming you already have a clear vision of the public interfaces of your classes – i.e. your model is already designed (at least partially), which is the hardest part of development, by the way.
But test driven *design* is a myth and rubbish. You need a model, and then you need to design it. There is no evidence that tests can help you come up with a good design, and the problem is that evangelists (frauds) claim otherwise.
However, there is an obvious correlation between good design and the ability to think abstractly, which in turn can be improved by the “math, algorithms, and other such material” which you mentioned rather scornfully.
I didn’t go with the second definition because it’s rarely used that way; if anything, design is merely one element inside of one’s practice when coding or preparing to code. But I digress, I see your point, and it is very safe to assume TDD stands for Test Driven Development, not the second definition.
As for my reference to “math, algorithms, and other such material” being scornful, it is anything but. I merely state that the focus on that material often leaves students ill-prepared to actually put together solutions once they’re employed. It’s great that students are loaded up with math, algorithms, and related material, but it’s all useless on the job if they can’t actually get to reasonable solutions with all that math and algorithm knowledge. No scorn intended, just that there is a bias that doesn’t work so well going from academia to the workforce and producing working solutions.
Also, I’ve seen more than a few people do design using development practices around testing first. I’m not saying there is much of a “Test Driven Design” wording, as you point out, but there are damn well plenty of coders that implement solutions – or more specifically designs and patterns – based on what their tests inform them of. Adamantly stating, as if it were an axiom, that design isn’t associated with TDD (Test Driven Development) is an extremely limiting way to look at things, and it disregards a large part of the developer community that builds things that way. The claims that “test driven *design* is a myth and rubbish” and that “there is no evidence that tests can help you to come up with good design” are themselves pretty absurd. There is evidence; I’ve seen and led teams that use TDD (Test Driven Development) to build out and reinforce the architectural designs that are put into place in companies all the time. But I digress, maybe there is a pedantic difference between whatever “Test Driven Design” is that you refer to and what is actually put into place by developers.
…and even though I wrote this blog entry 4 years ago, I’ve still seen zero evidence that would lead me to believe teams that don’t do TDD or BDD or some type of testing perform as well or produce anything as good as teams that do. If a code product or service needs to last any amount of time, TDD and BDD practices should be used regularly.
…thanks for commenting KolA.
I do apologize if I was rude. All I want to say is:
1) How people sell TDD: “do TDD/BDD and your design will be good.” How it is in practice: “Work harder on your design and maybe you’ll be able to make it testable and do TDD.”
i.e., the ability to do TDD is a logical outcome of good, decoupled design – not the cause.
2) Testability / test coverage is not the holy grail. The holy grail is the ability to prove that your program is correct by obvious reasoning / induction. We need to strive for programs which obviously have no mistakes rather than programs which have no obvious mistakes (yes, I’m a fan of Dijkstra).
I’m not saying automated tests are useless, but they can’t prove more than that (a lack of obvious mistakes). There is a good paper on this topic, “Why Most Unit Testing is Waste” by James O. Coplien.
Hey no worries!
1) I generally agree with this. It’s very much a give and take in this space. One has to have a good design; doing TDD helps you get to a good design, but without a good design TDD can end up useless, and TDD alone doesn’t make a good design.
2) +1, yes, Dijkstra. I also happen to agree that most unit testing does end up being waste if it isn’t practiced and used actively. It’s a mixed boat and I can argue it many ways. TDD and especially BDD are tools I often use to force newcomers into figuring out what exactly they’re doing. Many newcomers just don’t have enough practice thinking through problems to be able to just sling some code out. I’ve also noticed – with or without automated tests – senior developers almost always have some strict, effective, and comprehensive way to test the code they write. So in the end, having some type of testing – and these days the easiest route to achieve this is usually TDD/BDD of some sort – is fundamental to getting to the goal and having a good product.
I’m betting – we’re agreeing on much of this. Testing does generally remove obvious mistakes – which are absurdly common in much software these days. Catching them during development is exponentially cheaper/easier/better than letting them drop to QA/Test/UAT/Users.
As for the “Why Most Unit Testing is Waste” paper by James O. Coplien, it just went on my list of “next things to read”! 🙂
> I’m betting – we’re agreeing on much of this
I think we are, indeed 🙂
I totally support TDD as a good learning tool for juniors/newcomers to develop a habit of slowing down and thinking before jumping into cowboy-style coding. It’s interesting to watch someone’s brain smoking when he’s trying to come up with a nice and clean test for his own API that he thought was great 🙂
The paper I mentioned provides some healthy skepticism about what tests can achieve and how to recognize and focus on the tests that have the highest return on investment (quite often they’re system-level rather than unit tests). It’s rather biased / not scientific, though it has links to formal studies, but it’s worth reading anyway.