Module 2 · Lesson 4 · ~25 min read
Go's test framework is in the standard library and is intentionally minimal. The core API is essentially one type, *testing.T (with siblings *testing.B, *testing.M, and *testing.F for benchmarks, setup, and fuzzing). The idioms — table-driven tests, subtests, helpers, fuzzing — are conventions on top of that one type.
A test file is named foo_test.go and lives in the same directory as the code it tests. Test functions take a *testing.T:
```go
package ledger

import "testing"

func TestParseOffset(t *testing.T) {
	got, err := ParseOffset("00100")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got != 100 {
		t.Errorf("got %d, want 100", got)
	}
}
```
Run with `go test ./...`. Test function names must start with Test. The go tool compiles a test binary, runs it, and reports the results.
Note the choice between the two failure calls above: t.Fatalf records the failure and stops the test immediately (there's no point checking the value when parsing failed), while t.Errorf records the failure and keeps going, so one run can report several problems.

Table-driven tests are the next idiom: one test function, many cases. Read the shape once and you'll see it everywhere in Go.
```go
func TestRetryDelay(t *testing.T) {
	cases := []struct {
		name      string
		attempt   int
		baseDelay time.Duration
		want      time.Duration
	}{
		{"first attempt — base delay", 1, 100 * time.Millisecond, 100 * time.Millisecond},
		{"second — doubled", 2, 100 * time.Millisecond, 200 * time.Millisecond},
		{"fourth — 8x", 4, 100 * time.Millisecond, 800 * time.Millisecond},
		{"capped at 10s", 99, 1 * time.Second, 10 * time.Second},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			got := RetryDelay(c.attempt, c.baseDelay)
			if got != c.want {
				t.Errorf("got %v, want %v", got, c.want)
			}
		})
	}
}
```
t.Run creates a subtest: each case becomes its own runnable, named, individually failable unit. Run a single case with `go test -run 'TestRetryDelay/capped'` (subtest names are matched as regular expressions, with spaces in the name replaced by underscores), and get a clean per-case pass/fail in the output.
If a test calls a helper function and the helper records a failure, by default the failure is reported at the helper's line — useless for debugging which test case actually broke. Mark the helper with t.Helper():
```go
func mustParse(t *testing.T, raw string) Offset {
	t.Helper() // failures will be attributed to the caller's line
	o, err := ParseOffset(raw)
	if err != nil {
		t.Fatalf("parse %q: %v", raw, err)
	}
	return o
}
```
Use t.Cleanup for per-test teardown (registered at any point during the test, runs at the end). For per-package setup, define TestMain(m *testing.M):
```go
func TestMain(m *testing.M) {
	setupTestDatabase()
	code := m.Run()
	tearDownTestDatabase()
	os.Exit(code)
}
```
Test files can be in the same package or in a separate one:

- Internal tests (package ledger in ledger_test.go) can access unexported identifiers. White-box, useful for testing internals.
- External tests (package ledger_test) only see exported identifiers. Black-box, forces you to use the public API the way callers will.

Many Go projects use both side by side.

The interface idiom from Module 1 makes mocks usually unnecessary. To test code that depends on a Submitter, write a fake Submitter directly in the test file:
```go
type fakeSubmitter struct{ calls []Command }

func (f *fakeSubmitter) Submit(cmd Command) error {
	f.calls = append(f.calls, cmd)
	return nil
}

func TestPipeline(t *testing.T) {
	f := &fakeSubmitter{}
	pipeline := New(f)
	pipeline.Run()
	if len(f.calls) != 1 {
		t.Fatalf("got %d calls, want 1", len(f.calls))
	}
}
```
15 lines, no library. Mocking frameworks like gomock exist for cases where you need verification matchers and fancy expectation DSLs (typically when fakes get unwieldy across many tests), but in idiomatic Go they're a last resort.
Benchmarks go in the same _test.go files: the function name starts with Benchmark and takes a *testing.B:
```go
func BenchmarkSubmit(b *testing.B) {
	c := NewClient()
	cmd := Command{ID: "x"}
	b.ResetTimer() // exclude the setup above from the measurement
	for i := 0; i < b.N; i++ {
		c.Submit(cmd)
	}
}
```
Run with go test -bench=. -benchmem. The framework chooses b.N to get a stable measurement.
Fuzzing feeds randomly mutated inputs to your code, hunting for cases that crash it or violate a property.
```go
func FuzzParseOffset(f *testing.F) {
	f.Add("0") // seed corpus
	f.Add("100")
	f.Add("-1")
	f.Fuzz(func(t *testing.T, raw string) {
		_, err := ParseOffset(raw)
		// We don't care if it errors, only that it doesn't panic.
		_ = err
	})
}
```
`go test -fuzz=FuzzParseOffset` runs the fuzzer until you stop it, saving any crashing input to testdata/fuzz/. Fuzzing is underused; reach for it on parsers, decoders, and anything that handles untrusted input.
`go test -race ./...` runs every test with the race detector enabled: the build instruments memory accesses, and the runtime reports any unsynchronized concurrent read/write. Run this in CI for any package that touches goroutines.
This is not optional for production code. Many real-world data-race bugs hide behind tests that pass deterministically without -race and consistently fail with it.
Coverage percentage (go test -cover) is a useful indicator but a terrible target. Aim for "every meaningful path is covered," not for a number.
In short:

- Tests live in *_test.go files alongside the code.
- Functions named TestXxx take a *testing.T.
- t.Helper() for helpers, t.Cleanup() for teardown.
- Run with -race in CI when concurrency is involved.