Module 2 · Lesson 4 · ~25 min read

Tests: Table-Driven, Subtests, Fuzzing

Go's test framework ships in the standard library and is intentionally minimal: the entire API is a handful of small types (*testing.T for tests, *testing.B for benchmarks, *testing.F for fuzzing). The idioms covered here (table-driven tests, subtests, helpers, fuzzing) are conventions built on top of them.

The shape

A test file is named foo_test.go and lives in the same directory as the code it tests. Test functions take a *testing.T:

package ledger

import "testing"

func TestParseOffset(t *testing.T) {
    got, err := ParseOffset("00100")
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if got != 100 {
        t.Errorf("got %d, want 100", got)
    }
}

Run with go test ./... (the pattern means "this directory and everything below it"). Test function names must start with Test. The go tool compiles a test binary per package, runs it, and reports the results.

t.Errorf vs t.Fatalf

t.Errorf records a failure and lets the test keep running, so a single run can surface several broken assertions. t.Fatalf records the failure and stops the current test immediately (other tests still run). Use Fatalf when continuing is pointless, such as after a failed setup step or when a later line would dereference a nil result; use Errorf for independent value checks.

Table-driven tests — the canonical pattern

One test function, many cases. Read it once and you'll see it everywhere in Go.

func TestRetryDelay(t *testing.T) {
    cases := []struct {
        name      string
        attempt   int
        baseDelay time.Duration
        want      time.Duration
    }{
        {"first attempt — base delay", 1, 100 * time.Millisecond, 100 * time.Millisecond},
        {"second — doubled",         2, 100 * time.Millisecond, 200 * time.Millisecond},
        {"fourth — 8x",              4, 100 * time.Millisecond, 800 * time.Millisecond},
        {"capped at 10s",            99, 1 * time.Second, 10 * time.Second},
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            got := RetryDelay(c.attempt, c.baseDelay)
            if got != c.want {
                t.Errorf("got %v, want %v", got, c.want)
            }
        })
    }
}

t.Run creates a subtest. Each case becomes its own runnable, named, individually-failable unit. Run a single case with go test -run TestRetryDelay/capped. Get a clean per-case pass/fail in the output.

Test helpers

If a test calls a helper function and the helper records a failure, by default the failure is reported at the helper's line — useless for debugging which test case actually broke. Mark the helper with t.Helper():

func mustParse(t *testing.T, raw string) Offset {
    t.Helper()  // failures will be attributed to the caller's line
    o, err := ParseOffset(raw)
    if err != nil {
        t.Fatalf("parse %q: %v", raw, err)
    }
    return o
}

Setup and teardown

Use t.Cleanup for per-test teardown (registered at any point during the test, runs at the end). For per-package setup, define TestMain(m *testing.M):

func TestMain(m *testing.M) {
    setupTestDatabase()
    code := m.Run()
    tearDownTestDatabase()
    os.Exit(code)
}

Black-box vs white-box

Test files can live in the same package (package foo, white-box) or in the external package foo_test (black-box). White-box tests can reach unexported functions and internal state. Black-box tests sit in the same directory but import foo and see only its exported API, which keeps them honest about what callers can actually do. Prefer black-box tests for public behavior; drop to white-box for internals that are hard to exercise through the API.
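A sketch of what a black-box test file looks like (the module path example.com/m/ledger is hypothetical):

```go
// offset_api_test.go: lives next to offset.go, but in package ledger_test
package ledger_test

import (
	"testing"

	"example.com/m/ledger" // import the package under test by its module path
)

// Only exported names are visible here: ledger.ParseOffset compiles,
// an unexported ledger.parseDigits would not.
func TestParseOffsetAPI(t *testing.T) {
	if _, err := ledger.ParseOffset("42"); err != nil {
		t.Errorf(`ParseOffset("42"): %v`, err)
	}
}
```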

Mocking — usually not needed

The interface idiom from Module 1 makes mocks usually unnecessary. To test code that depends on a Submitter, write a fake Submitter directly in the test file:

type fakeSubmitter struct { calls []Command }

func (f *fakeSubmitter) Submit(cmd Command) error {
    f.calls = append(f.calls, cmd)
    return nil
}

func TestPipeline(t *testing.T) {
    f := &fakeSubmitter{}
    pipeline := New(f)
    pipeline.Run()
    if len(f.calls) != 1 {
        t.Fatalf("pipeline submitted %d commands, want 1", len(f.calls))
    }
}

A dozen lines, no library. Mocking frameworks like gomock exist for cases where you need verification matchers and expectation DSLs (typically when hand-written fakes get unwieldy across many tests), but in idiomatic Go they are a last resort.

Benchmarks

Same file, function name starts with Benchmark, takes *testing.B:

func BenchmarkSubmit(b *testing.B) {
    c := NewClient()
    cmd := Command{ID: "x"}
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        c.Submit(cmd)
    }
}

Run with go test -bench=. -benchmem. The framework chooses b.N to get a stable measurement.

Fuzzing (Go 1.18+)

Fuzzing feeds randomly mutated inputs to your code, looking for one that panics or violates a property you assert.

func FuzzParseOffset(f *testing.F) {
    f.Add("0")        // seed corpus
    f.Add("100")
    f.Add("-1")
    f.Fuzz(func(t *testing.T, raw string) {
        _, err := ParseOffset(raw)
        // We don't care if it errors, only that it doesn't panic.
        _ = err
    })
}

go test -fuzz=FuzzParseOffset runs the fuzzer until you stop it, saving any crashing input to testdata/fuzz/ so it replays as a regression test on future plain go test runs. Fuzzing is underused; reach for it on parsers, decoders, and anything else that handles untrusted input.

Race detector

go test -race ./... runs every test with the race detector enabled. The runtime instruments memory access and reports any concurrent unsynchronized read/write. Run this in CI for any package that touches goroutines.

This is not optional for production code. Many real-world data-race bugs hide behind tests that pass deterministically without -race and consistently fail with it.

What good test coverage looks like

Coverage percentage (go test -cover) is a useful indicator but a terrible target. Aim for "every meaningful path is covered," not for a number.

Takeaways