June 4, 2014 · development-process testing tdd

How I Test My Code

There has been a lot of talk about testing, and specifically TDD, in the past several weeks. I think it has a bit to do with DHH saying that TDD is dead. I've gone back and forth on how I test my code and just want to share those thoughts.

Test First

One of the more extreme (in my opinion) views on TDD is:

Never write a line of production code until you have a failing test for it.

This is something that never really appealed to me because it didn't help me create better quality code. I have no real rule about this because I don't think there needs to be one. Sometimes I'll dive into the code first and sometimes I'll write the tests first. I generally write tests first when I'm modifying a piece of functionality and write code first when I'm writing new functionality, but there's no hard rule that I follow.

Unit Test Execution Speed

Another thing I hear from the TDD camp is that your unit tests should run so fast that they can be executed on every code change. Why? How does that help me? Some editors auto-save when switching between applications. If I'm in the middle of making changes and I switch to another application, do I really need the tests to run and tell me they are failing (something I already know because I'm not done coding)?

Let's say for the sake of argument that I thought this was a good idea: how do we get that kind of speed? Well, we mock everything besides what we are testing. Network calls, I/O calls, database calls, etc. I'm not always a fan of this because mocking out everything besides the extremely small piece of code I'm testing decreases my confidence that the code is working properly even if the tests pass.

Let's say I have a database call that I'm mocking for my tests. I then have a feature request that requires a change to the database structure. I update my mock for the tests, make my changes, and my tests now pass. If I forget to make the change to the actual database, when I push my code, it's going to break. "But that is what integration tests are for." Why have 2 sets of tests that test the same thing? I like to get the most out of my time, and having over 100% code coverage doesn't make sense (I don't even care about getting to 100% code coverage a lot of the time).
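To make that concrete, here's a rough sketch of the kind of mocked test I'm talking about (the getUser function, the hand-rolled fake, and the users table shape are all made up for illustration):

```js
var assert = require('assert');

// Production code under test: look a user up by id through whatever
// database client gets passed in.
function getUser(db, id) {
  return db.query('SELECT id, name FROM users WHERE id = ?', [id]);
}

// The "mock": a hand-rolled fake that returns whatever shape I tell it to.
var fakeDb = {
  query: function (sql, params) {
    // If the real users table no longer has a name column, this fake (and
    // the test below) will still happily pass -- the mismatch only shows up
    // when the code runs against the actual database.
    return { id: params[0], name: 'Alice' };
  }
};

assert.equal(getUser(fakeDb, 1).name, 'Alice'); // green, regardless of the real schema
```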

That is not to say I don't mock anything. I mock stuff if the value it adds outweighs the costs (like most things). Mocking database calls or I/O calls doesn't add much value. It might make things faster, but in this day and age, computing power is relatively cheap. I can run about 1000 tests per minute with my local code making database calls to an external server (if I were to set up the database locally on a VM or something, I imagine it would be faster). While this speed might be way too slow for some, I'm more than fine with it.
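Those tests are just regular mocha tests that talk to a real test database instead of a mock. Something along these lines (the connection settings, the driver choice, and the users table here are placeholders, not my actual setup):

```js
var assert = require('assert');
var mysql = require('mysql'); // any real driver works; mysql is just an example

describe('user lookup', function () {
  var connection;

  before(function () {
    connection = mysql.createConnection({
      host: 'test-db.example.com', // external test database server
      user: 'tester',
      password: 'secret',
      database: 'app_test'
    });
  });

  after(function () {
    connection.end();
  });

  it('returns the row that actually exists in the database', function (done) {
    connection.query('SELECT id, name FROM users WHERE id = 1', function (err, rows) {
      assert.ifError(err);
      assert.equal(rows.length, 1); // fails if the real schema or data drifts
      done();
    });
  });
});
```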

Something that I do mock is HTTP requests. I use the supertest library to mock HTTP requests for my NodeJS code. Why do I mock HTTP requests but not database calls? In my experience, these tests are closely tied to the production code and I can't remember a time where I had a mocked HTTP request test pass where the actual code failed. I'll also mock services that I don't have control over. If there's some 3rd party API that I don't control, then I'll probably mock that too.
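A typical supertest test looks something like this (the /ping route and the tiny express app are just a stand-in for the real application):

```js
var request = require('supertest');
var express = require('express');

// Stand-in app; in the real tests this would be the actual application.
var app = express();
app.get('/ping', function (req, res) {
  res.json({ ok: true });
});

describe('GET /ping', function () {
  it('responds with ok', function (done) {
    request(app)
      .get('/ping')
      .expect('Content-Type', /json/)
      .expect(200, { ok: true }, done); // runs against the app without manually starting a server
  });
});
```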

Mocking for me is done on a case-by-case basis. Just like with most things, I'll weigh the pros and cons of mocking something, and if the pros outweigh the cons (and different people will evaluate this differently), I'll mock it.

Code Coverage

Another thing I hear is how close people are to 100% code coverage. Well, good for you. If you're building a control system for an aircraft, you probably want to make sure everything works for any possible scenario that anyone can possibly think of, and then some. Not all applications need that level of testing, and the ones I generally work on don't.

I don't spend a huge amount of time trying to think of every single use case that my code could possibly encounter. The amount of up-front time I put into thinking of use cases for my tests is determined by a number of factors: the life expectancy of the code, the worst thing that could happen if something breaks, how often the code is going to run, etc. I'm going to spend more time thinking about use cases for an ORM or a payment system than I would for a command line utility that simplifies the use of rsync. I'll make sure to write tests for all the use cases I can think of and then add tests as issues come up.

Why don't I care about 100% code coverage? Well, it doesn't increase my confidence in the code. Having tests that run every single line of code in your application doesn't mean your code is bug free. You could have a method where one parameter value passes the test while 5 other parameter values would fail in different ways, and you just don't know it.
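Here's a contrived example of what I mean (the perItemCost function is made up purely to illustrate the point):

```js
var assert = require('assert');

function perItemCost(total, quantity) {
  return total / quantity; // the single test below covers every line
}

assert.equal(perItemCost(10, 2), 5); // passes, and coverage reports 100%

// ...yet perItemCost(10, 0) returns Infinity, and perItemCost(10, '2') only
// "works" through type coercion -- neither case is actually tested.
```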

If you have the time to invest to get to 100% code coverage and you think it's worth it, then by all means, go for it. Just don't think 100% code coverage === bug free code because it doesn't.

Testing Levels

This is more of a general testing thing, not specific to TDD. There are many levels of testing; some of the more common ones I hear about are:

Unit tests
Integration tests
End-to-end tests

Now some people would say that all of your code should have tests at all of these levels, but that is something I disagree with, because I don't see the point of testing the same code with 3 separate test suites. I'll generally have 2 different automated test suites: one for unit/integration tests and another for end-to-end tests.

First of all, I write my unit/integration tests as one set of tests because I don't believe everything should be mocked (as mentioned above). I'll then write end-to-end tests as a separate set because they do take a bit longer to run, since they're interacting with the end application (which for me is generally the browser).
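In practice that just means two separate commands to run, something like these npm scripts (the paths and the dalek invocation are a sketch of how I'd lay it out, not necessarily your setup):

```json
{
  "scripts": {
    "test": "mocha test/unit-integration --recursive",
    "test-e2e": "dalek test/e2e/*.js"
  }
}
```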

Overlapping Tests

Something that some people advocate for, and that I generally don't do intentionally, is having the unit/integration tests and end-to-end tests test the same thing, just at different levels (the end-to-end tests can test a little more since they interact with the end application). If my code doesn't interact with the DOM, then I'll write unit/integration tests. If I have some code that is DOM-heavy, I'll write end-to-end tests. Sometimes I'll have code that does both, in which case I'll have both types of tests and they'll overlap a little bit. The point is that I don't go out of my way to prevent unit/integration tests and end-to-end tests from overlapping, but I also don't go out of my way to make sure they do overlap.

For example, I have this extend text component for AngularJS that just includes DalekJS tests (an end-to-end testing tool similar to Selenium). There was a point where I ran into an issue and probably spent 20-30 minutes trying to figure out why a test was failing. Having unit/integration tests would've probably reduced the time it took to figure out where the issue was. At that point I was like, "I need to make all my end-to-end tests have corresponding unit/integration tests". The more I thought about it though, the amount of time I would have to invest in writing unit/integration tests that correspond with my end-to-end tests would probably not be worth it in the long run.
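For reference, a DalekJS test looks roughly like this (the URL and selectors here are made up, not the ones from that component):

```js
module.exports = {
  'Component is rendered on the page': function (test) {
    test
      .open('http://localhost:8000/example.html') // made-up demo page
      .assert.visible('#demo', 'the component is visible')
      .done();
  }
};
```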

Adding tests to make the code slightly easier to debug doesn't seem to be worth the effort in the long run from what I've seen.


Testing is important and I'm definitely not suggesting that you don't do it. Always make sure you write tests as you are developing code. Always make sure that when a bug does come up, you either add or update a test to make sure that case is covered, to help prevent it from showing up again. I just don't agree with the notion that you write code to make your tests pass. You should be writing tests to help increase your confidence that your code is working properly when you make a change.
