Module 3 · Lesson 3 · ~20 min read
Channels are great for orchestration. Sometimes you just need a lock around a counter. The sync package gives you small, well-tested primitives. Pick the right one and your concurrent code stays boring (in a good way).
| Primitive | Use when | Don't use when |
|---|---|---|
| sync.Mutex | Protect bounded shared state from concurrent reads and writes. | You're trying to coordinate goroutines (use a channel). |
| sync.RWMutex | Many concurrent readers, occasional writer, and reads are non-trivial work (hashing, lookups). | Reads are cheap (just a field access); the RWMutex bookkeeping costs more than it saves, so use Mutex. |
| sync.WaitGroup | Wait for a known set of goroutines to finish. | The set is dynamic and ongoing; use a channel-based coordinator. |
| sync.Once | Run an initialization exactly once across many goroutines. | The init isn't racy, or it's only ever called from one place anyway. |
| sync.Map | Many concurrent readers/writers on disjoint key sets, or append-only caches (each key written once, read many times). | General-purpose use: overlapping keys with regular writes, or you want type safety. Plain map + RWMutex wins. |
| sync/atomic | A single integer or pointer counter under heavy contention. | Multi-field invariants; atomic doesn't help there, you need a Mutex. |
| sync.Cond | (Almost never.) Goroutines waiting on a condition variable. | Almost any time. Use a channel. |
type Counter struct {
	mu sync.Mutex
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func (c *Counter) Read() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}
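As a quick demonstration, here's a minimal sketch that hammers Inc from several goroutines. The goroutine and iteration counts are arbitrary, and it reuses the Counter type above plus the fmt and sync imports:

```go
func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				c.Inc()
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.Read()) // always 10000; remove the lock and -race will flag it
}
```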
Notes:

- Always defer the Unlock. If the function returns through any other path (early return, panic), the lock is still released.
- Don't copy a Counter after first use; copying a mutex by value is a bug, and go vet catches it.
- Read must lock too: reading c.n while another goroutine writes is still a data race.

Same shape as Mutex, but with separate read and write locks:
c.mu.RLock() // many concurrent readers OK
defer c.mu.RUnlock()
return c.cache[key]

c.mu.Lock() // exclusive writer
defer c.mu.Unlock()
c.cache[key] = value
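Fleshed out into a complete type, the fragments above might look like the sketch below; the Cache name and string value type are illustrative, not prescribed:

```go
// Cache is a read-mostly map guarded by an RWMutex.
type Cache struct {
	mu    sync.RWMutex
	cache map[string]string
}

func NewCache() *Cache {
	return &Cache{cache: make(map[string]string)}
}

// Get takes the read lock, so lookups from many goroutines proceed in parallel.
func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.cache[key]
	return v, ok
}

// Set takes the write lock, excluding all readers and other writers.
func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[key] = value
}
```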
Don't reach for RWMutex by default. The bookkeeping overhead is real, and for cheap reads it loses to plain Mutex. Profile before assuming RWMutex helps.
Already covered in Lesson 1. Three rules:
- Call Add(n) before spawning the goroutine, not inside it. Otherwise Wait can return before the Add lands.
- Put defer wg.Done() as the first line of the goroutine body.
- A WaitGroup is reusable, but reuse isn't concurrency-friendly: once Wait has returned you can Add again, but you generally don't want to.

For lazy, one-time initialization, there's sync.Once:

var (
	cfg     *Config
	cfgOnce sync.Once
)

func GetConfig() *Config {
	cfgOnce.Do(func() {
		cfg = loadConfig()
	})
	return cfg
}
The function passed to Do runs exactly once, no matter how many goroutines call GetConfig simultaneously. Other callers block until the first finishes. Useful for lazy initialization.
var requests atomic.Int64

func handler(...) {
	requests.Add(1)
}

func stats() {
	n := requests.Load()
	log.Printf("served %d", n)
}
Lock-free for single-value updates. Faster than Mutex for hot counters. Don't try to coordinate multi-field invariants with atomic — you'll write subtle bugs. For "increment one number a lot," it's perfect.
Go 1.19+ provides typed atomic wrappers (atomic.Int64, atomic.Bool, atomic.Pointer[T]) — use those over the older atomic.AddInt64(&x, 1)-style functions.
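Here's one self-contained way the snippet above could look in a full program. The HTTP server and the once-a-second stats loop are assumptions made for this sketch, not part of any particular codebase:

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

var requests atomic.Int64 // the zero value is ready to use

func handler(w http.ResponseWriter, r *http.Request) {
	requests.Add(1) // lock-free increment, safe from any number of goroutines
	w.Write([]byte("ok"))
}

func main() {
	go func() {
		for range time.Tick(time.Second) {
			log.Printf("served %d", requests.Load()) // Load reads a consistent snapshot of the single value
		}
	}()
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```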
A concurrent map. The standard library's recommendation is: don't use it unless your workload is one of:

- A key is written once but read many times, as in caches that only grow.
- Multiple goroutines read, write, and overwrite entries for disjoint sets of keys.
For everything else (which is most workloads), map[K]V + sync.RWMutex is faster and the API is what you already know.
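If your workload really is one of those two cases, the API is method-based rather than map indexing. A sketch of the append-only-cache shape, with resolve standing in for whatever expensive lookup you are caching:

```go
var dnsCache sync.Map // behaves like map[string]string, but untyped

func lookup(host string) string {
	// Fast path: most calls hit an entry that was stored once and never rewritten.
	if v, ok := dnsCache.Load(host); ok {
		return v.(string)
	}
	addr := resolve(host)
	// LoadOrStore keeps the first value if two goroutines race to fill the same key.
	actual, _ := dnsCache.LoadOrStore(host, addr)
	return actual.(string)
}

// resolve is a stand-in for the real (slow) lookup.
func resolve(host string) string { return "198.51.100.7" }
```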
Condition variables exist in the package for completeness, and the standard library docs all but tell you to avoid them. Whatever you'd build with a Cond can usually be expressed more clearly with channels. If you reach for Cond, think twice — there's almost always a channel design that's easier to reason about.
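For example, the classic Cond scenario of "wake every waiter when something becomes ready" can be a closed channel instead. A sketch of that pattern (names are illustrative):

```go
var ready = make(chan struct{}) // closed exactly once, when initialization finishes

func waitForReady() {
	<-ready // blocks until the channel is closed; receives on a closed channel return immediately
}

func markReady() {
	close(ready) // acts as a broadcast: every current and future waiter is released
}
```

Closing a channel is a one-shot broadcast, so if markReady could be called from more than one place, guard the close with a sync.Once.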
The slogan ("share memory by communicating, don't communicate by sharing memory") makes channels sound like they're always the right answer. They're not. The actual rule:
| Use a channel for… | Use a mutex for… |
|---|---|
| Passing ownership of data between goroutines | Protecting access to shared state |
| Coordinating workflow ("here's the next batch") | Cache, config, counters, registry |
| Signaling completion or cancellation | Anything where two reads back-to-back must see consistent values |
| Distributing work to a pool | Cheap, frequently-accessed fields |
Real Canton-adjacent code uses both. A gRPC client probably has a connection pool guarded by a Mutex, a worker pool fed by a channel, a config protected by RWMutex, request counters in atomic. Each tool fits its job.
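A sketch of that shape, with every name and type invented purely for illustration:

```go
// Placeholder types a real client would define properly.
type Conn struct{}
type Request struct{}
type Config struct{}

type Client struct {
	mu    sync.Mutex // guards conns: shared state, so a plain Mutex
	conns []*Conn

	work chan Request // worker pool input: ownership of each Request passes over the channel

	cfgMu sync.RWMutex // config is read constantly and swapped rarely
	cfg   *Config

	requests atomic.Int64 // hot counter: a single value, so atomic
}
```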
Run go test -race ./... in CI. The race detector catches almost all incorrect synchronization. Don't trust concurrent code that hasn't been exercised under -race.
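As a reminder of what the detector flags, here's a deliberately racy test (snippet-style, like the other examples; it belongs in a _test.go file with testing and sync imported). It usually passes without the flag and reliably fails under go test -race:

```go
func TestRacyCounter(t *testing.T) {
	var n int // shared by two goroutines with no synchronization: a data race by construction
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				n++ // unsynchronized read-modify-write
			}
		}()
	}
	wg.Wait()
	t.Log(n) // without -race this often "works"; with -race the run fails with a race report
}
```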