Test-driven development (TDD) tames some of the chaos introduced by an agile software development methodology by forcing the devs to consider the behaviour they're looking to achieve before writing code.
However, it's certainly no silver bullet. Taking a TDD approach adds a decent amount of overhead, and there are times when it doesn't make as much sense.
Our team is always searching for the best way to manage our projects. Gathering a few hundred complex requirements and working through them efficiently while coordinating with a handful of colleagues is really hard and easy to mess up. So we've tried pretty much every management approach under the sun: test-driven development, behaviour-driven development, acceptance criteria-driven development, agile, scrum, kanban, extreme programming, waterfall, hybrid, and a bunch of less structured approaches.
Whenever we try a new approach, I try to do a bit of research to see if it's going to be worth the effort. This might sound a bit cynical, but it's all about the time and effort required to properly learn and implement a business process. It takes my team months to get good at a new process, and I think they're sick of me asking them to learn new things!
So in the spirit of asking questions and weighing up our options, this article will be dedicated to figuring out if test-driven development is worth asking my team to implement.
What is TDD & what problem is it solving?
TDD is a software development practice in which you write unit tests first and code second.
The tests tell development how to proceed. If a new feature is developed but its test fails, additional development is required. Every requirement and feature must have full test coverage that defines how it should behave.
TDD is a response to getting to the end of a project and having to frantically rush through a bunch of testing to figure out if everything works properly. It forces the project team to create a set of tests that can be automatically executed to check if bugs have been introduced accidentally.
This kind of automated unit testing is pretty much necessary for any larger application. While manual testing will always be required to some extent, relying on manual testing alone for a large application where a lot of changes are happening is a nightmare. It's super easy to introduce regression issues, knowing whether the backend is working correctly is virtually impossible, and the effort involved in testing thoroughly can be extreme.
Since automated testing needs to happen for the project to stay healthy and developers to remain sane, TDD takes the stance that it should happen up front and drive the whole process.
Test-driven development cycle
The test-driven development cycle looks like this:
- Write tests
- Red: tests fail because the code isn't complete
- Green: tests pass when the code is complete; rerun the entire test suite
- Refactor: rework the code to ensure optimization and best practices
The red state is essentially everything before the complete code is implemented: it spans from the moment the test is written until the test passes.
The green state is when the code has met the conditions of the test and it's passing. Once the individual test passes, the entire suite needs to be re-run to ensure no regression issues have been introduced. If something fails, you're back in the red state until there are no problems.
Once all the tests have passed, it's time to refactor the code. Since the goal of TDD is to pass individual tests, there can be some inflexibility in how code is written. Often, the first way you write a unit of code to pass a test isn't the most optimal and it will need a bit of reworking, which is the point of this phase.
Accounting for refactoring every unit of code creates room to build a high-quality, well-thought-out application, but it also adds effort and overhead to the project.
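The cycle can be sketched in a few lines of Python. This is a minimal, hypothetical example (a made-up `slugify` function, plain `assert` instead of a test runner) just to show the three phases:

```python
# Red: write the test first -- it fails because slugify() doesn't exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Green: the simplest implementation that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Refactor: same behaviour, but also strips surrounding whitespace and
# collapses repeated spaces. Rerun the whole suite after this change.
def slugify(title):
    return "-".join(title.lower().split())
```

In a real project each version would replace the last rather than sit side by side, and the suite would be rerun between every step.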
The bigger picture TDD process
I find it more helpful to consider TDD at a slightly less granular level so you can see how it fits in with defining requirements.
- Define the user flow, or the steps they will take to achieve their goal
- Break the user flow into stories
- Decompose the stories into features
- Write tests for each unit of code required for the feature
- Write code
- Execute the tests
- Refactor code until the test passes
Looking at TDD this way helps paint a more accurate picture. Most of the time, the development team will have some involvement in figuring out what features are required to achieve a story and how that should be broken down into units of code.
We've written a pretty detailed explanation about writing user flows and stories so we won't rehash that topic. Rather, we'll jump into the actual testing part of TDD.
Designing for testability
The biggest value of test-driven development is its emphasis on forcing developers to think through the design of their code instead of taking a cowboy approach. The process of thinking through the logic needed to create a feature is decoupled from implementing the code. As a result, the development team can focus on achieving the conditions of the test case, heavily reducing the amount of code written and building only what's needed.
By forcing this process of thinking through the code design, a number of other benefits are unlocked:
- Less code to maintain
- Higher-quality releases with fewer bugs
- Self-documented code that will help future devs figure out what's happening in the codebase
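A rough sketch of what designing for testability looks like in practice: keeping the pure logic separate from I/O so a test can exercise it directly. The function names and CSV layout here are hypothetical, not from any particular codebase:

```python
# Hard to test: the calculation is tangled up with file and console I/O.
def report_total_untestable():
    with open("orders.csv") as f:
        print(sum(float(line.split(",")[1]) for line in f))

# Testable: the calculation is a pure function; I/O stays at the edges.
def order_total(rows):
    """Sum the amount column of (id, amount) rows."""
    return sum(amount for _, amount in rows)

def test_order_total():
    assert order_total([("a", 10.0), ("b", 2.5)]) == 12.5
```

Writing the test first makes the second shape the natural one, because the untestable version can't be called from a test without a real file on disk.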
Will my team hate me for implementing TDD?
There are two situations when your team will hate you for implementing test-driven development:
- The project is small and doesn't justify the overhead that TDD adds
- Your team is working on an existing project and you're asking them to retroactively implement TDD
It takes a bit of time to get used to any new process, and writing tests for every unit of code can make the speed of development feel relatively slow. However, I've found that most development teams embrace the idea of writing tests since it cuts down on the number of bugs they have to deal with down the line.
Protests around implementing TDD tend to happen when the project needs to move as quickly as possible and doesn't justify an automated suite of tests. The popular project management approaches often don't consider these kinds of projects, but we all know they happen.
The other situation when you'll feel backlash from your team for implementing TDD is on an existing project. Retroactively writing tests for someone else's code sucks. It takes forever and can be quite tricky to get right. While it's possible, your team will hate every minute of it :).
Testing: fakes, mocks and stubs
Test-driven development revolves around writing unit tests and getting them to pass. Most functionality requires some data to work with in order to test whether it's working. The problem is that during development, it's rarely possible to work with actual production data and services. There are three main ways that TDD solves this problem:
Fakes are a watered-down version of a production object, where the code is representative and slimmed down in some way. For example, a fake database object would return the same values as the real object, except it could be called without requiring a database connection and everything that comes with it.
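For instance, a fake repository might keep records in an in-memory dict instead of hitting a real database. The class and method names below are hypothetical, chosen only to illustrate the idea:

```python
class FakeUserRepository:
    """Behaves like the production repository, minus the database connection."""

    def __init__(self):
        self._users = {}  # in-memory stand-in for the users table

    def save(self, user_id, user):
        self._users[user_id] = user

    def get(self, user_id):
        return self._users.get(user_id)  # None if not found, like a real lookup

repo = FakeUserRepository()
repo.save(1, {"name": "Ada"})
assert repo.get(1)["name"] == "Ada"
```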
Mocks follow a similar principle, except they register any calls they receive. A good example of this is a mock for a form submission. Writing out the form data every time is a massive time sink, so automating the process and setting up the mock to register the submission makes sense.
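Python's standard library ships a `Mock` object that records every call made to it, so the test can assert the submission was actually sent. The `submit_form` handler here is a made-up example, not a real library function:

```python
from unittest.mock import Mock

def submit_form(form_data, send):
    """Validate the form, then hand it off to a transport callable."""
    if not form_data.get("email"):
        raise ValueError("email is required")
    send(form_data)

send = Mock()
submit_form({"email": "ada@example.com"}, send)

# The mock registered the call, so we can assert on it.
send.assert_called_once_with({"email": "ada@example.com"})
```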
Finally, stubs are objects configured with canned test data that they return when called. They're helpful when the database contains real data that shouldn't be returned, or when the database isn't yet configured to return data.
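A stub, by contrast, just hands back canned data so the code under test can run without touching a real service. Again, the names here are illustrative only:

```python
class StubWeatherService:
    """Always returns the same canned forecast -- no network call involved."""

    def forecast(self, city):
        return {"city": city, "temp_c": 21.0}

def packing_advice(service, city):
    # The logic under test; the stub keeps its input predictable.
    return "jacket" if service.forecast(city)["temp_c"] < 15 else "t-shirt"

assert packing_advice(StubWeatherService(), "Canberra") == "t-shirt"
```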
Leveraging these test doubles is necessary to ensure that tests can pass and the project can continue to roll forward. However, setting up test doubles is another aspect of TDD that can feel like an investment of effort that's slowing everything down.
Another option that's less "by the book" is seeding the database with test data. This requires a bit more messing around and may still need to be combined with the methods we just covered if the database schema isn't complete for the feature you're writing tests for.
Backend frameworks like Laravel make seeding data super simple, and it's a good practice to get into even if you're only doing it so the manual tests can be more meaningful.
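A rough equivalent in plain Python is seeding an in-memory SQLite database before the tests run. The schema and rows below are invented for illustration:

```python
import sqlite3

def seed(conn):
    """Create a minimal users table and load predictable test rows."""
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany(
        "INSERT INTO users (id, name) VALUES (?, ?)",
        [(1, "Ada"), (2, "Grace")],
    )

conn = sqlite3.connect(":memory:")
seed(conn)
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 2
```

Because the seed data is fixed, tests can assert on exact values instead of whatever happens to be in a shared development database.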
TDD vs ATDD - How do they stack up?
The difference between TDD and ATDD is the granularity at which the tests are written. Under the test-driven development approach, a test is written for each unit of code, whereas under acceptance test-driven development, each feature has an acceptance test that covers all the units of code within the feature.
TDD is arguably a more thorough methodology because it questions and strengthens the quality assurance behind each small unit of code. However, as we've mentioned a couple of times, this can be a pain in the butt and sometimes uncalled for. ATDD on the other hand sets a definition of done for multiple units of code, and is much quicker to implement across a project. However, it's easier to achieve false positives where even though the acceptance criteria are being met, something within the code is actually not working correctly.
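The difference in granularity might look like this (a hypothetical checkout feature): TDD pins down each unit individually, while ATDD only checks the end-to-end outcome.

```python
def apply_discount(total, code):
    """10% off with the right code; otherwise unchanged."""
    return total * 0.9 if code == "SAVE10" else total

def add_shipping(total):
    return total + 5.0

# TDD: one test per unit of code.
def test_apply_discount():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_add_shipping():
    assert add_shipping(90.0) == 95.0

# ATDD: one acceptance test covering the whole feature.
def test_checkout_total_with_discount():
    assert add_shipping(apply_discount(100.0, "SAVE10")) == 95.0
```

Note the false-positive risk: if `apply_discount` silently ignored an invalid code instead of raising an error, the acceptance test would still pass, but a per-unit test could catch it.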
TDD vs BDD
Personally, I prefer behaviour driven development as a way to write requirements and drive the direction of a project.
It was created in the early 2000s as a response to the rise of TDD. Rather than writing out a never-ending list of unit tests, BDD focuses on defining all the ways a feature (or story) will be used. The scenarios that describe the story follow a syntax called Gherkin that looks like this:
GIVEN some condition
AND additional information about the condition
WHEN something happens
THEN there will be a predictable outcome
The benefit of behaviour-driven development over test-driven development is creating a shared understanding between the developers and business about exactly how the feature should work and what kind of edge cases should be considered.
Popular TDD tools
Selenium and Cucumber are the two tools we're most familiar with for supporting TDD, but there are tons of other great options.
Limitations of Test-Driven Development
There aren't really any limitations to TDD, but there are a few "considerations":
- You're spending testing time up front rather than at the end
- Tests can pass without the user's requirement being met
- There's a learning curve
TDD takes time and effort. This isn't a bad thing, but it can feel like it's slowing your team's velocity down. In reality, you're building the time your team would have spent testing into your development sprints (assuming the project is following an agile or hybrid methodology).
A problem that's less obvious is that your application can have great coverage but not meet the user's requirements. Unit tests do a great job of breaking a functional goal into digestible chunks of code, but there's no mechanism that ensures the business and development team are on the same page about the requirements. These problems are addressed by BDD, and there can be an advantage to combining the two approaches.
Finally, training your team to use test-driven development takes time. Tests are a kind of code too, and they need to be written carefully. Since they add time and effort, there can be a reluctance to adopt the approach.
Test-driven development is a great way to ensure your next agile application project achieves a high standard and enjoys easy maintenance once deployed. We've covered all the important business considerations about TDD. If we haven't answered your questions, please leave a comment below and someone from our team will get back to you in a day or two.
Have an idea you want to discuss?
We’re based in Canberra, Australia and we LOVE working with locals… but we work with clients all around the world.
From the U.S. to the U.K. From Norway to New Zealand. Where there’s a problem to solve, we’ll be there!