Test-driven development: programming with confidence

The Greek myth about navigating between Scylla and Charybdis comes to mind when I add new features to a heavily used business application. Lack of documentation, technical debt and vague planning are my Scylla. Corrupted data and regressions resemble Charybdis. So what should I sacrifice now for a successful release?

Regressions and data corruption are severe consequences of an unfortunate deployment. They are hard to notice and time-consuming to fix. These are the failures we should try to avoid by all means practically possible. Luckily, both of them can be prevented by employing the test-driven development process.

Writing a test is exhausting and time-consuming. It requires a lot of willpower and self-confidence to type a 50-100 line regression test for a five-line change in the application code. Your reviewers might either like the change or want to adjust it. The latter might force you to throw away your unit test and write a completely new one. Thinking about this would hardly increase your dopamine level or help you feel secure.

Unit test refactoring is by far my favourite way of procrastination. It can keep me busy for hours and, what is more exciting, I feel that I am doing something essential. Still, I would prefer to avoid it. Broadly, tests support refactoring of the application code. However, what should help us refactor the tests? How do we keep tests clean and tidy?

I believe that the lack of tests should never be technical debt, but test refactoring might be.

My checklist for writing tests should help you stay focused on what to do to get the best from your tests. It states what should be in your tests and what should stay safe from Scylla to make your tests valuable. Having your test suite in good shape might come as a side effect of applying these rules.

Write your tests

Write a broken test before making any change, and then fix it. Simple. Let me tell you a story of how this works for me; the same approach might suit your mindset as well.

Tests shorten the distance to your final destination by replacing it. Let me rephrase this for clarity. When you fix a bug in the application, your final destination is a production release with the defect fixed. The application contains several services. You change the code for one of the services, build a Docker image, and finally run another service using your updated image. Or something along those lines. You do a lot before you can verify that your changes really work. When you write a broken test for the defect, you shift the goal. Now your first aim is to make the test green.

Tests hold the knowledge about your application. When you worked on the program a lifetime ago, you knew what it should do. Now you do not. They say that it should behave as it did before, but now it behaves differently. From this, you only know that you broke something; you have no idea what exactly is wrong. You have to revert your recent changes, and data corruption hits you when you are vulnerable and helpless. A sad story. Of course, this can happen even when you have 100% code coverage. You add a regression test, fix your code and keep believing that you have reduced the probability of kaboom happening again.

Tests help you to keep focused. You fix one test at a time by doing the minimum needed change in the code. This makes your work more predictable by setting checkable milestones. Lighting the green bulbs on the tests brings a feeling of progression and assures you that you will complete the task.

I hope that by now I have convinced you that tests are great and that we should write them whenever practically possible. Or at least you now know what gets me through my day.

Make your tests easy to work with

The tests should be easy to extend. For example, adding a new use case for a function or an API method should be possible without copying/pasting or writing new code.

Make your tests open to additions and modifications by design. This is a bold way of saying: use test.each() instead of duplicating code. You add new elements to the test case array or modify their structure, but the test code stays untouched.

Consider an example below as proof of concept and source of inspiration.

We do integration tests for the REST API. First, we define the data structure that contains all the data required to initialise the application, run the test and validate the result.

/**
 * Integration controller HTTP POST test case:
 * 1. Prepare test data, populate MongoDB with `datumBefore`
 *    and initialise all services.
 * 2. Optionally change current date and time.
 * 3. Start application.
 * 4. Make HTTP POST request to given URL with given payload
 *    and check HTTP response (status and body content).
 * 5. Optionally check MongoDB database.
 */
export interface ControllerPostTestCase<Req extends object, Resp> {
    advanceTo?: Date;

    datumBefore: StorageDatum;

    requestUrl: string;

    requestBody: Req;

    responseStatus: number;

    responseBody: Resp | ErrorResponse;

    expectedDatum?: StorageDatum;
}

Second, we instantiate the generic interface for each API method we are testing.

export type FindOrderTestCase =
    ControllerPostTestCase<FindOrderRequest, FindOrderResponse>;

Third, we write test cases. In the example below, findOrderTestCases is an array of test cases for the API method; each test case implements the FindOrderTestCase interface. I will show you one test case as an example. It is a regression test for a bug fix.

export const findOrderTestCases: FindOrderTestCase[] = [{
    /**
     * Test #9 (regression test)
     * Data: Two orders:
     * Order 1: ReadyToPack, fully picked, has packing type FastTrack.
     * Order 2: ReadyToPick, not picked, has packing type Normal.
     * Both orders have only one and the same product.
     * Request: Find order to pick for the product with packing
     * type Normal.
     */
    datumBefore: {
        ...emptyStorageDatum,
        orderDatum: [fastTrackReadyToPack, normalReadyToPickDavina],
        productDatum: [davina250x300],
    },

    requestUrl: "/api/orders/find",

    requestBody: {
        packingType: OrderType.Normal,
        productId: davina250x300Id,
        userId: 1,
        warehouse: Warehouse.FH,
    },

    responseStatus: FindOrderError.HTTP_STATUS,

    responseBody: findOrderError("Orders with this product have different packing type."),
}];

Running all tests for a specific API method is just one line of code.

/**
 * Find order for packing.
 * @see OrderController::find()
 */
test.each(findOrderTestCases)("POST /api/orders/find $#",
    te.controllerPostTest);

The te.controllerPostTest function does all the magic. I do not include it here because it is very application-specific.

Here is how we use the process to extend the test cases when adding a bug fix. Obviously, we start by locating the broken API method. We prepare the minimum data set required to reproduce the issue, the request, and the expected response. Finally, we add these data as a new item to the test cases array. No new code is needed. If we are lucky, we can reuse already created data snippets. We discuss test data under the next principle.

Value your test data

Always add type definitions to your test data. When you change the data structure in your application, and I assure you that you will, this should be reflected in the tests. Of course, the compiler and the IDE must be the ultimate tools to support these changes. But without type annotations to guide them, they are helpless.

Having test data in JSON files might sound appealing until you have to change the structure of the data. Updating 20-30 JSON files without compile-time validation and auto-completion is a real pain. So instead, make it TypeScript with all types defined, even if the files contain no code, only constants.

export const farahanZiegler: ProductData = {
    _id: farahanZieglerId,
    positions: [{
        pallet: "SHOW",
        quantity: 2,
        warehouse: "RG",
    }, {
        pallet: "C320",
        quantity: 12,
        warehouse: "RG",
    }],
    synchronisedAt: new Date("2018-12-11T21:03:16+00:00"),
};

Next time you change the ProductData type from my example and compile your code, you get tens of compilation errors in your test data. This is frustrating. And yet, compared to the JSON solution, you immediately know the scope of the problem: how many tests should be updated.

Try to avoid test data duplication whenever possible, for precisely the same reason. When you change your data structure, you have fewer test cases to keep in sync.

Consider an example of data snippet usage to initialise the application before the test.

export const putBackTestCases: PutBackTestCase[] = [{
    datumBefore: {
        ...emptyStorageDatum,
        orderDatum: [normalReadyToPack],
        productDatum: [damask],
    },
    // ...
}];

We populate the database before the integration test with the order and the product. At the moment of writing, the normalReadyToPack constant is used 48 times in different tests. Thus, having it in one place significantly reduces the amount of code and, more importantly, the number of changes needed when the type for the order is modified.

Test stable interfaces

Put most of your efforts into testing the most stable layer of the application.

Business logic in an application might be implemented in several places. For instance, in an MVC-style REST API service, you will most likely find it in the services and models. Controllers validate client requests, forward data to the services and return data back to the clients. Controllers do almost nothing, while services and models do all the heavy lifting. And still, I would suggest putting the most effort into testing controllers and implementing most of the business logic tests as controller tests. We do this because controllers in the MVC architecture have the most stable interface.

Think about a function in a model or a service that is heavily tested. Most likely, this means that it does something important. The tests define its behaviour, and they are valuable too. Now we work on a new feature, and we plan to refactor the function in some way. For example, we move it inside another method, wrap it in a new class, or split it. For every refactoring we plan to do, we should always consider the tests. We have to refactor the tests to match the refactoring of the main code. This might be a problem.

In a nutshell, when planning tests, target the most stable code in the application for the most significant business logic and regression tests.

Things to remember

  • Write your tests to document the application behaviour. Do regression tests.
  • Design your tests for changes. Try to make it easy to add new test cases and to modify the data structure.
  • Test stable interfaces. Try to make your tests less vulnerable to application changes.
  • Think about tests as milestones in tracking progress. You can make the test green, and each green test brings you closer to your final goal.