Gone are the days when the Waterfall model reigned supreme – we’re in a new era where quality assurance isn’t an afterthought.

To bring QA to the forefront, many methodologies, such as Test-Driven Development (TDD), Acceptance Test-Driven Development (ATDD), Behavior-Driven Development (BDD), Specification-Driven Development (SDD), and more, have become prevalent. These methodologies support Agile principles and make the process efficient.

In this article, we’ll dive into the finer points of how to make ATDD and TDD work for your project.

What is TDD?

As the name suggests, in TDD, tests drive development. You can also think of TDD as the Red, Green, Refactor method. This is because:

  • Unit tests are written first, even before there’s any code.
  • Then, the tests are run. Naturally, they fail. This is the Red stage.
  • Now, you write just enough code to make the tests pass, that is, make it Green.
  • Developers run these tests multiple times to make them pass but also Refactor code wherever necessary to make it efficient.
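The cycle above can be sketched with a trivial, hypothetical `add` function (the names are illustrative, not from any particular codebase):

```python
# Red: write the test first; it fails because add() does not exist yet.
def test_add_returns_sum_of_two_numbers():
    assert add(2, 3) == 5

# Green: write just enough production code to make the test pass.
def add(a, b):
    return a + b

# Refactor: with the test green, clean up freely; the test guards the behavior.
test_add_returns_sum_of_two_numbers()
```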

TDD is known to be efficient. But doesn’t this seem a bit cumbersome, especially if you’re working in an environment where changes are frequent? Well, fret not.

We will be looking at some great tips in the sections below, based on Urs Enzler’s clean TDD and ATDD cheat sheet.

General TDD Principles

  • A test checks one feature: A test checks exactly one feature of the testee. It tests everything included in that feature, but nothing more. This may include more than one call to the testee. This way, tests serve as samples and documentation of the testee’s usage.
  • Baby steps: Make tiny steps. Add only a little code to the test before writing the required production code, then repeat. Add only one assert per step.
  • Keep tests simple: Whenever a test gets complicated, check whether you can split the testee into several classes.
  • Prefer verifying state over behavior: Use behavior verification only if there is no state to verify. State verification yields more robust tests and makes refactoring easier because it is less coupled to the implementation.
  • Test Domain-Specific Language (DSL): Use test DSLs to simplify reading tests, builders to create test data via fluent APIs, and assertion helpers for concise assertions.
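The test-data builder idea can be sketched like this. The `OrderBuilder` class and its defaults are hypothetical, made up purely to show the fluent-API pattern:

```python
class OrderBuilder:
    """Fluent builder that hides irrelevant setup details from the test."""

    def __init__(self):
        self._customer = "anonymous"  # sensible default: tests override only what matters
        self._items = []

    def for_customer(self, name):
        self._customer = name
        return self

    def with_item(self, sku, qty=1):
        self._items.append((sku, qty))
        return self

    def build(self):
        return {"customer": self._customer, "items": self._items}

# The test reads like a sentence; only the relevant detail (two items) stands out.
order = OrderBuilder().with_item("SKU-1").with_item("SKU-2").build()
assert len(order["items"]) == 2
```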

The Relation Between TDD and Unit Tests

If you’ve understood TDD, then you’ve realized that we need some form of test cases to get coding started. These are going to be unit tests. However, the two differ from one another: TDD is a methodology that requires writing tests before code, whereas unit testing can also be done after the code is written. In other words, TDD uses unit tests.

Unit Testing Principles

  • Fast: Unit tests have to be fast so that they can be executed often.
  • Isolated: No dependencies between unit tests; it should be clear where a failure happened.
  • Repeatable: No assumed initial state, nothing left behind, no dependency on external services that might be unavailable (databases, file system, …).
  • Self-validating: Tests should be clear-cut: either pass or fail, with no in-betweens.
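Here is a small sketch of a test honoring these principles, using Python’s standard `unittest` module. The `FileStoreTest` scenario is invented for illustration:

```python
import os
import tempfile
import unittest


class FileStoreTest(unittest.TestCase):
    """Isolated and repeatable: each test gets a fresh temp dir and cleans it up."""

    def setUp(self):
        self.dir = tempfile.mkdtemp()  # no assumed initial state

    def tearDown(self):
        # Leave nothing behind, so the test can be repeated anywhere.
        for name in os.listdir(self.dir):
            os.remove(os.path.join(self.dir, name))
        os.rmdir(self.dir)

    def test_write_then_read_roundtrip(self):
        path = os.path.join(self.dir, "data.txt")
        with open(path, "w") as f:
            f.write("hello")
        with open(path) as f:
            self.assertEqual(f.read(), "hello")  # self-validating: pass or fail


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(FileStoreTest).run(result)
assert result.wasSuccessful()
```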

Use of Fakes in Unit Tests

Unit testing relies heavily on test doubles such as stubs, fakes, spies, and mocks. You use them to ensure isolation in your unit tests. Here are some do’s and don’ts for managing them.

Do’s ✅

  • Isolation from environment: Use fakes to simulate all dependencies of the testee.
  • Faking framework: Use a dynamic fake framework for fakes that show different behavior in different test scenarios (little behavior reuse).
  • Manually written fakes: Use manually written fakes when they can be used in several tests and have only slightly different behavior in these scenarios (behavior reuse).

Don’ts ❌

  • Mixing stubbing and expectation declaration: Make sure that you follow the AAA (arrange, act, assert) syntax when using fakes. Keep it clean by allotting one block to setting up stubs so the testee can function as intended, then listing your expectations of the testee separately.
  • Checking fakes instead of testee: Avoid tests that do not check the testee but rather the values returned by fakes. Normally, this is due to excessive fake usage.
  • Excessive fake usage: If your test needs a lot of fakes or fake setup, consider splitting the testee into several classes or providing an additional abstraction between your testee and its dependencies.
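A stub in an AAA-structured test might look like the following. `WeatherReport` and its sensor dependency are hypothetical; the stub uses `unittest.mock.Mock` from the standard library:

```python
from unittest.mock import Mock


class WeatherReport:
    """Testee: formats a reading obtained from an injected sensor dependency."""

    def __init__(self, sensor):
        self._sensor = sensor

    def summary(self):
        return f"Current temperature: {self._sensor.read_celsius()}°C"


# Arrange: stub the dependency so the test is isolated from real hardware.
sensor = Mock()
sensor.read_celsius.return_value = 21

# Act
testee = WeatherReport(sensor)
result = testee.summary()

# Assert: check the testee's output, not the value returned by the fake.
assert result == "Current temperature: 21°C"
```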

Unit Test Smells

Unit test smells tell you that something is wrong with the way you’ve implemented the unit tests. Though the system might continue to work, you might observe some of the following signs:

  • Test not testing anything: A passing test that at first sight appears valid but does not test the testee.
  • Test needing excessive setup: A test that needs dozens of lines of code to set up its environment. This noise makes it difficult to see what is really tested.
  • Large test with assertions for multiple scenarios: While the test might be valid, it might be too large. Reasons can be that this test checks for more than one feature or the testee does more than one thing.
  • Checking internals: A test that directly accesses the internals (private/protected members) of the testee. This is a refactoring killer.
  • Test only runs on developer’s machine: A test that is dependent on the development environment and fails elsewhere. Use continuous integration to catch them as soon as possible.
  • Test checking more than necessary: A test that checks more than it is dedicated to. The test fails whenever something changes that it checks unnecessarily. Especially probable when fakes are involved or when checking for item orders in unordered collections.
  • Irrelevant information: The test contains information that is not relevant to understand it.
  • Chatty test: A test that fills the console with text – probably used once to check for something manually.
  • Test swallowing exceptions: A test that catches exceptions and lets the test pass.
  • Test not belonging in host test fixture: A test that tests a completely different testee than all other tests in the fixture.
  • Obsolete test: A test that checks something no longer required in the system. It may even prevent the clean-up of production code because it is still referenced.
  • Hidden test functionality: Test functionality is hidden in either the SetUp method, base class, or helper class. The test should be clear by looking at the test method only – there should be no initialization or assertions elsewhere.
  • Bloated construction: The construction of dependencies and arguments used in calls to the testee makes the test hardly readable. Extract to helper methods that can be reused.
  • Unclear failure reason: Split test or use assertion messages.
  • Conditional test logic: Tests should not have any conditional test logic because it’s hard to read.
  • Test logic in production code: Tests depend on special logic in production code.
  • Erratic test: The test sometimes passes and sometimes fails due to leftovers or the environment.
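To make one of these smells concrete, consider the test-swallowing-exceptions smell: a `try/except` that quietly absorbs a failure lets a broken test pass. The fix is to assert the exception explicitly, as in this minimal sketch (the `divide` example is invented):

```python
def divide(a, b):
    return a / b


# A try/except that merely swallowed ZeroDivisionError would let this test pass
# even if divide() stopped raising. Instead, fail loudly when nothing is raised.
def test_divide_by_zero_raises():
    try:
        divide(1, 0)
    except ZeroDivisionError:
        return  # expected path: the test passes only because the error occurred
    raise AssertionError("expected ZeroDivisionError")


test_divide_by_zero_raises()
```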

Now, let’s look at some points to keep in mind at each stage of TDD.

Test Writing

Design tests keeping testability in mind

  • A constructor should be simple: Objects have to be easily creatable; otherwise, easy and fast testing is not possible.
  • Constructor lifetime: Pass into the constructor only those dependencies and configuration/parameters whose lifetime is equal to or longer than the created object’s. For other values, use methods or properties.
  • Understand the algorithm: Code that merely works is not enough; make sure you understand why it works.

Test structure

  • Arrange, Act, Assert: Always structure tests by AAA. Never mix these three blocks.
  • Test namespace: Put the tests in the same namespace as their associated testee.
  • Unit test methods show the whole truth: Unit test methods show all the parts needed for the test. Do not use the SetUp method or base classes to perform actions on the testee or its dependencies.
  • SetUp / TearDown for infrastructure only: Use the SetUp / TearDown methods only for the infrastructure that your unit test needs. Do not use them for anything that is under test.
  • Test method naming: Use a pattern that reflects the behavior of the tested code, e.g., Behaviour[_OnTrigger][_WhenScenario], with [] marking optional parts.
  • Resource files: Keep a test and the resource files it uses together.

Naming

  • Naming SUT variables: Always give the variable holding the System Under Test the same name (e.g., testee or sut). This clearly identifies the SUT and is robust against refactoring.
  • Naming result values: Always give the variable holding the result of the tested method the same name (e.g., result).
  • Anonymous variables: Always use the same name for variables holding uninteresting arguments to tested methods (e.g., anonymousText, anyText).

Red Bar Patterns

These patterns occur when tests are failing, typically in the “Red” phase of TDD. This phase helps identify the problem you’re solving and ensures your test setup is meaningful.

  • One step test: Pick a test you are confident you can implement and that maximizes the learning effect (e.g., impact on design).
  • Partial test: Write a test that does not fully check the required behavior but brings you a step closer to it. Then extend the test, as described below.
  • Extend test: Extend an existing test to better match real-world scenarios.
  • Another test: If you think of new tests, put them on a TO DO list and don’t lose focus on the current test.
  • Learning test: Write tests against external components to make sure they behave as expected.
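A learning test documents your assumptions about an external component. As a sketch, here is one pinning down a real quirk of Python’s standard `json` module (integer dict keys come back as strings after a round trip):

```python
import json


# Learning test: verify our assumption about an external component's behavior.
def test_json_round_trip_converts_int_keys_to_strings():
    result = json.loads(json.dumps({1: "a"}))
    assert result == {"1": "a"}


test_json_round_trip_converts_int_keys_to_strings()
```

If the component’s behavior ever changes (e.g., after a version upgrade), this test fails and tells you exactly which assumption broke.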

Green Bar Patterns

These patterns occur when tests are passing, typically in the “Green” phase. This phase focuses on ensuring the implementation satisfies the test, often with minimal effort.

  • Fake it (till you make it!): Return a constant to get the first test running. Refactor later.
  • Triangulate to drive abstraction: Write a test with at least two sets of sample data, then abstract the implementation so it satisfies both.
  • Obvious implementation: If the implementation is obvious, just implement it and see whether the test passes. If not, step back, just get the test running, and refactor afterwards.
  • One to many (drive collection operations): First implement the operation for a single element, then step up to several elements (and no element).
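Fake-it and triangulation fit together like this (a hypothetical Roman-numeral kata fragment):

```python
# Fake it: the first test, to_roman(1) == "I", could pass with `return "I"`.
# Triangulate: a second sample, to_roman(2) == "II", forces the constant out
# and drives a (still tiny) real implementation covering both data points.
def to_roman(n):
    return "I" * n  # only valid for small n; later tests would drive "IV", "V", ...


assert to_roman(1) == "I"
assert to_roman(2) == "II"
```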

What is ATDD?

ATDD (Acceptance Test-Driven Development) follows a trajectory similar to TDD’s. The difference is that here, you write acceptance tests before you start coding. Acceptance tests are derived from the acceptance criteria agreed upon when the requirement is formulated.

Here’s the general essence of ATDD:

  • You collaborate with your team of developers, testers, and business stakeholders to get a list of acceptance criteria. This is the process of discussing and defining.
  • Then, you write the actual tests. These are written before any code is developed.
  • Developers now write the code to pass these tests.
  • Now, run the tests to ensure that they pass. This is a cyclic process. Once tests start to run, you can refactor them too, over time.
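The steps above can be sketched as a single acceptance test written in Given/When/Then style. The acceptance criterion, the `Account` class, and its behavior are all hypothetical:

```python
# Acceptance criterion (agreed with stakeholders, invented for this sketch):
# "A user who deposits money sees it reflected in their balance;
#  non-positive deposits are rejected."


class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


def test_deposit_increases_balance():
    # Given a new account
    account = Account()
    # When the user deposits 100
    account.deposit(100)
    # Then the balance shows 100
    assert account.balance == 100


test_deposit_increases_balance()
```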

Tips for Better ATDD

  • Use acceptance tests to drive your TDD tests: Acceptance tests check for the required functionality. Let them guide your TDD.
  • User feature test: An acceptance test is a test for a complete user feature, from top to bottom, that provides business value.
  • Automated ATDD: Use automated acceptance tests for regression testing and as executable specifications.
  • Component acceptance tests: Write acceptance tests for individual components or subsystems so that these parts can be combined freely without losing test coverage.
  • Simulate system boundaries: Simulate system boundaries such as the user interface, databases, the file system, and external services to speed up your acceptance tests and to be able to check exceptional cases (e.g., a full hard disk). Use system tests to check the boundaries themselves.
  • Avoid an acceptance test spree: Do not write acceptance tests for every possibility. Write acceptance tests only for real scenarios; exceptional and theoretical cases can be covered more easily with unit tests.

Conclusion

This article highlights some of the ways you can ensure effective TDD and ATDD. But remember that your tests will only be as helpful as your code is clean. While there are many ways to test code, make sure to pick approaches that fit your specific needs.