A Deep Dive into Table-Driven Testing in Golang

This article is the 22nd entry in Merpay Advent Calendar 2021 by @adlerhsieh from the Payment Platform Team.

We write a significant amount of Go code in the Merpay backend. While it’s fun, maintaining the production code quality is a challenge. One essential component to make that happen is to write effective unit tests. Great tests help developers create great results. They work as documentation, encourage clear design, and boost our productivity if used well.

While performing code reviews, we make sure tests consistently follow a specific coding style. One of the basic techniques we encourage developers to use is table-driven testing. It is a technique widely adopted across open-source Go projects and commercial codebases. It offers better readability, extendability, and maintainability, and it makes parallel testing easy to implement.

The following is a quick walkthrough of how we do it in Merpay.

What is Table-Driven Testing?

The goal of table-driven testing is to extract the pattern that repeats across test cases into a single table. For example, the following is a file sum.go:

package sum

func Sum(a, b int) int {
    return a + b
}

To add test cases, we create a test file sum_test.go:

package sum

import (
    "testing"
)

func TestSum_One(t *testing.T) {
    result := Sum(1, 2)
    if result != 3 {
        t.Errorf("expected 3, but got %d", result)
    }
}

func TestSum_Two(t *testing.T) {
    result := Sum(3, 4)
    if result != 7 {
        t.Errorf("expected 7, but got %d", result)
    }
}

We create one test case in one function and prefix each function name with Test. With these two test cases, we have basic coverage. In order to further validate the function, we might also need to add cases such as negative numbers.

However, the more test functions we add in this style, the more duplicate code we accumulate. Each function needs to call the same Sum() function with different arguments, and each one needs its own assertion and error message. That is not a big deal in this example, but in the real world, a codebase can grow fast.

To reduce the duplication, it is time to introduce table-driven testing. It uses a slice of anonymous structs to define all the arguments and expectations we need:

package sum

import (
    "testing"
)

func TestSum(t *testing.T) {
    cases := []struct {
        description string
        num1        int
        num2        int
        expected    int
    }{
        {
            description: "1 + 2",
            num1:        1,
            num2:        2,
            expected:    3,
        },
        {
            description: "3 + 4",
            num1:        3,
            num2:        4,
            expected:    7,
        },
    }

    for _, tt := range cases {
        t.Run(tt.description, func(t *testing.T) {
            result := Sum(tt.num1, tt.num2)
            if result != tt.expected {
                t.Errorf("expected %d, but got %d", tt.expected, result)
            }
        })
    }
}

With this style, we can add as many test cases as we need without creating a new function. We specify all arguments in the anonymous struct, with a description providing the case summary. We probably don’t need a description for the Sum() function because the logic is straightforward, but it would be helpful when we have more complicated test cases.
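
For example, the negative-number case mentioned earlier can be covered by simply appending one more entry to the cases slice; the values here are illustrative:

        {
            description: "negative numbers: -1 + (-2)",
            num1:        -1,
            num2:        -2,
            expected:    -3,
        },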

I would recommend using tools that generate test skeletons from our source code. One great example is gotests. Some text editors even have this functionality built in, such as VS Code. For instance, assuming gotests is installed, a command along the following lines generates table-driven test skeletons for every function in the file and writes them to sum_test.go (check the tool's help output for the exact flags in your version):
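
gotests -all -w sum.go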

However, it doesn’t end here. There are still more treasures to dig up in table-driven testing.

Running Tests in Parallel

One of the features we can use to boost the testing speed is parallel testing. Parallelism is achieved by adding a t.Parallel() function call:

func TestSum(t *testing.T) {
    t.Parallel() // this
    result := Sum(1, 2)
    if result != 3 {
        t.Errorf("expected 3, but got %d", result)
    }
}

By calling Parallel(), a test runs in parallel with other tests that also call Parallel(). It is usually called at the beginning of a test function, but when writing table-driven tests, it goes at the beginning of the t.Run() function body:

    for _, tt := range cases {
        tt := tt // this
        t.Run(tt.description, func(t *testing.T) {
            t.Parallel() // and this
            result := Sum(tt.num1, tt.num2)
            if result != tt.expected {
                t.Errorf("expected %d, but got %d", tt.expected, result)
            }
        })
    }

As with calling Parallel() for an entire test function, each call inside the loop makes that subtest run in its own goroutine. It also DRYs up the code, since we write the call once instead of once per test function.

Without the tt := tt line, which shadows the loop variable with a local copy, there is a race: the loop keeps reassigning tt while the parallel subtests read it, so by the time a subtest actually runs, tt may already hold a later element and the results become unexpected. I would recommend reading this short gist to see a walkthrough of this issue.
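
To make the pitfall concrete, here is a sketch of the broken version without the shadowing line. Because t.Parallel() pauses the subtest until the parent test function returns, the loop has already finished by the time the subtests run:

    // Broken: every subtest captures the same loop variable tt.
    for _, tt := range cases {
        t.Run(tt.description, func(t *testing.T) {
            t.Parallel() // pauses this subtest until the parent test function returns
            // By the time this runs, the loop has moved on, so tt likely holds the last case.
            result := Sum(tt.num1, tt.num2)
            if result != tt.expected {
                t.Errorf("expected %d, but got %d", tt.expected, result)
            }
        })
    }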

Handling Race Conditions

In addition to speed, another great benefit of running tests in parallel is detecting race conditions. There are cases where we use shared variables, structs, or global state, and it would be hard to see those races occurring without real-world traffic. The -race flag built into go test helps us catch these possible bugs.

Here’s an example of a race condition. Let’s say we have the following code:

package counter

type Counter struct {
    count int
}

func (c *Counter) Add() {
    c.count++
}

func (c *Counter) GetCount() int {
    return c.count
}

And test:

package counter

import "testing"

func TestCounter(t *testing.T) {
    c := &Counter{}
    cases := []struct {
        callCount int
        expected  int
    }{
        {
            callCount: 3,
            expected:  3,
        },
        {
            callCount: 5,
            expected:  8,
        },
    }
    for _, tt := range cases {
        t.Run("test", func(t *testing.T) {
            t.Parallel()
            for i := 0; i < tt.callCount; i++ {
                c.Add()
            }
            result := c.GetCount()
            if result != tt.expected {
                t.Errorf("expected %d, but got %d", tt.expected, result)
            }
        })
    }
}

We can then run the tests with the -race flag to check thread safety.

go test ./... -race

It will give us a nice error message if there is a race condition.

==================
WARNING: DATA RACE
Read at 0x00c000120128 by goroutine 9:
  github.com/adlerhsieh/counter.(*Counter).Add()
      /Users/adlerhsieh/go/src/github.com/adlerhsieh/counter/counter.go:8 +0x6a
  github.com/adlerhsieh/counter.TestCounter.func1()
      /Users/adlerhsieh/go/src/github.com/adlerhsieh/counter/counter_test.go:25 +0x65
  testing.tRunner()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1259 +0x22f
  testing.(*T).Run·dwrap·21()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1306 +0x47

Previous write at 0x00c000120128 by goroutine 8:
  github.com/adlerhsieh/counter.(*Counter).Add()
      /Users/adlerhsieh/go/src/github.com/adlerhsieh/counter/counter.go:8 +0x7c
  github.com/adlerhsieh/counter.TestCounter.func1()
      /Users/adlerhsieh/go/src/github.com/adlerhsieh/counter/counter_test.go:25 +0x65
  testing.tRunner()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1259 +0x22f
  testing.(*T).Run·dwrap·21()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1306 +0x47

Goroutine 9 (running) created at:
  testing.(*T).Run()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1306 +0x726
  github.com/adlerhsieh/counter.TestCounter()
      /Users/adlerhsieh/go/src/github.com/adlerhsieh/counter/counter_test.go:21 +0x1ce
  testing.tRunner()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1259 +0x22f
  testing.(*T).Run·dwrap·21()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1306 +0x47

Goroutine 8 (finished) created at:
  testing.(*T).Run()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1306 +0x726
  github.com/adlerhsieh/counter.TestCounter()
      /Users/adlerhsieh/go/src/github.com/adlerhsieh/counter/counter_test.go:21 +0x1ce
  testing.tRunner()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1259 +0x22f
  testing.(*T).Run·dwrap·21()
      /usr/local/Cellar/go/1.17.3/libexec/src/testing/testing.go:1306 +0x47
==================
--- FAIL: TestCounter (0.00s)
    --- FAIL: TestCounter/test (0.00s)
        counter_test.go:28: expected 8, but got 0
    --- FAIL: TestCounter/test#01 (0.00s)
        counter_test.go:28: expected 8, but got 0
        testing.go:1152: race detected during execution of test
FAIL
FAIL    github.com/adlerhsieh/counter   0.643s
FAIL

The amount of information is overwhelming. But it is not intimidating if we break it down. There are four blocks under the WARNING: DATA RACE message, containing different information.

Quick tip: we can ignore the unreadable memory addresses as well as internal packages. It’s easier to find the cause if we focus on the files that we create in our projects.

  • The first block starts with Read at. This is the memory access where the race detector reports the conflict, and it gives us a clear trace of which line in which file triggers it.
  • The second block starts with Previous write at. It shows the goroutine that previously wrote to the same memory. By comparing the line numbers in the two blocks, we know which parts of the code are accessing the same resource.
  • The third and fourth blocks describe the goroutines involved. If the line numbers and file paths in the first two blocks are not enough, these tell us where the program created those goroutines.

In our test case, both subtests call the Add() method on the same Counter to increment it. That’s why the report shows both goroutines executing line 8 of counter.go, which is c.count++.
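
This article focuses on detecting the race rather than fixing it, but for completeness, one common way to make this particular Counter safe for concurrent use is to guard the count field with a sync.Mutex. The following is a minimal sketch, not necessarily how a production counter should be designed:

package counter

import "sync"

// Counter is safe for concurrent use by multiple goroutines.
type Counter struct {
    mu    sync.Mutex
    count int
}

// Add increments the counter while holding the lock.
func (c *Counter) Add() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

// GetCount reads the counter while holding the lock.
func (c *Counter) GetCount() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

Note that the mutex only removes the data race; because the test cases above share a single Counter and run in parallel, the expected values still depend on the order in which the subtests execute.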

Refactoring with Testing Frameworks

Up to this point, we have structured our tests around the patterns in our actual code and used parallelism to help detect race conditions. Now it’s time for refactoring. However, instead of refactoring the test code ourselves, we can import external libraries to help.

There is a wide range of testing frameworks and libraries in Golang. One risk of using a testing framework is adding a layer of dependency to the codebase: if the framework introduces a breaking change, we may have to rework a large amount of test code.

My personal favorite is testify. It effectively reduces the amount of code we need to write, and it does not force tests into a style that is hard to read for developers with no background in the framework. Even though parts of the Go community frown on assertion libraries in testing, in my opinion, as long as failures are reported properly, it’s fine to use assertions to make the tests more readable.

In the real world, we often need to make many assertions for a single test case, such as:

if user.FirstName != "John" {
    t.Errorf("FirstName mismatch. expected 'John', but got %s", user.FirstName)
}
if user.LastName != "Doe" {
    t.Errorf("LastName mismatch. expected 'Doe', but got %s", user.LastName)
}
if user.EmailAddress != "johndoe@gmail.com" {
    t.Errorf("EmailAddress mismatch. expected 'johndoe@gmail.com', but got %s", user.EmailAddress)
}

The list can get long, and each assertion costs three lines of code. Why not make it one line when we can? With testify, we can refactor the code into the following:

assert.Equal(t, "John", user.FirstName, "FirstName mismatch")
assert.Equal(t, "Doe", user.LastName, "LastName mismatch")
assert.Equal(t, "johndoe@gmail.com", user.EmailAddress, "EmailAddress mismatch")

Now we convey the same information with far less repetition. With a proper failure message on each assertion, we significantly reduce the amount of code.
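
Putting this together with the earlier table-driven test, the whole check inside t.Run() collapses into a single line. The following is a sketch that assumes github.com/stretchr/testify/assert is imported as assert:

    for _, tt := range cases {
        tt := tt
        t.Run(tt.description, func(t *testing.T) {
            t.Parallel()
            // assert.Equal prints a readable failure message and lets the test continue.
            assert.Equal(t, tt.expected, Sum(tt.num1, tt.num2), tt.description)
        })
    }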

Conclusion

Testing is and will always be an essential part of our programs. We also use other strategies to keep our codebase readable and performant, such as gomock and end-to-end tests against gRPC servers. Apart from that, since we adopt cutting-edge technologies like GCP Spanner and Pub/Sub, we have other testing-specific issues to resolve. There is much more to explore. If you’re interested in how we maintain our codebase and what we are doing at Merpay, keep an eye on our Mercari Engineering Blog!
