Module 3 · Lesson 3 · ~20 min read

sync Primitives — When to Reach for Them

Channels are great for orchestration. Sometimes you just need a lock around a counter. The sync package gives you small, well-tested primitives. Pick the right one and your concurrent code stays boring (in a good way).

The cheat sheet

| Primitive | Use when | Don't use when |
|---|---|---|
| sync.Mutex | Protect shared state from concurrent reads and writes. | You're trying to coordinate goroutines (use a channel). |
| sync.RWMutex | Many concurrent readers, occasional writer, and reads are non-trivial work (hashing, lookups). | Reads are cheap (just a field access) — RWMutex overhead exceeds savings; use Mutex. |
| sync.WaitGroup | Wait for a known set of goroutines to finish. | The set is dynamic and ongoing — use a channel-based coordinator. |
| sync.Once | Run an initialization exactly once across many goroutines. | The init isn't racy, or is only called from one place anyway. |
| sync.Map | Append-only caches (keys written once, read many times), or goroutines working on disjoint key sets. | General mixed read/write workloads on overlapping keys — plain map+RWMutex wins, and is type-safe. |
| sync/atomic | Single integer or pointer counter under heavy contention. | Multi-field invariants — atomics don't help; you need a Mutex. |
| sync.Cond | (Almost never.) Goroutines waiting on a condition variable. | Almost any time. Use a channel. |

Mutex

type Counter struct {
    mu sync.Mutex
    n  int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

func (c *Counter) Read() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.n
}

Notes:

  1. The zero value of sync.Mutex is ready to use — no constructor needed.
  2. The methods take a pointer receiver (*Counter). Copying a Counter would copy its Mutex, and sync types must not be copied after first use (go vet's copylocks check flags this).
  3. Keep the critical section small: lock, touch the field, unlock.

RWMutex

Same shape as Mutex but with separate read and write locks:

c.mu.RLock()         // many concurrent readers OK
defer c.mu.RUnlock()
return c.cache[key]

c.mu.Lock()          // exclusive writer
defer c.mu.Unlock()
c.cache[key] = value

Don't reach for RWMutex by default. The bookkeeping overhead is real, and for cheap reads it loses to plain Mutex. Profile before assuming RWMutex helps.
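The read/write pair above, in context — a minimal cache sketch. The Cache type and its method names are ours, for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a hypothetical string→int cache guarded by an RWMutex.
type Cache struct {
	mu sync.RWMutex
	m  map[string]int
}

func NewCache() *Cache {
	return &Cache{m: make(map[string]int)}
}

// Get takes the read lock, so many readers proceed in parallel.
func (c *Cache) Get(key string) (int, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

// Set takes the exclusive write lock; all readers wait.
func (c *Cache) Set(key string, value int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}

func main() {
	c := NewCache()
	c.Set("hits", 42)
	if v, ok := c.Get("hits"); ok {
		fmt.Println("hits =", v) // prints: hits = 42
	}
}
```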

WaitGroup

Already covered in Lesson 1. Three rules:

  1. Call Add(n) before spawning the goroutine, not inside it. Otherwise Wait can return before the Add lands.
  2. Always defer wg.Done() as the first line of the goroutine body.
  3. A WaitGroup can be reused for a second batch, but only after Wait has returned for the first — overlapping Add and Wait calls from different batches is a race. In practice, prefer a fresh WaitGroup per batch.
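The three rules in one runnable sketch. ProcessAll is a made-up name; it sums a slice across goroutines:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// ProcessAll fans out one goroutine per item and waits for all of them.
func ProcessAll(items []int) int64 {
	var sum atomic.Int64
	var wg sync.WaitGroup
	for _, it := range items {
		wg.Add(1) // rule 1: Add before the goroutine starts
		go func(n int) {
			defer wg.Done() // rule 2: Done runs even if the body panics
			sum.Add(int64(n))
		}(it)
	}
	wg.Wait() // returns only after every Done has landed
	return sum.Load()
}

func main() {
	fmt.Println(ProcessAll([]int{1, 2, 3, 4})) // prints: 10
}
```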

Once

var (
    cfg     *Config
    cfgOnce sync.Once
)

func GetConfig() *Config {
    cfgOnce.Do(func() {
        cfg = loadConfig()
    })
    return cfg
}

The function passed to Do runs exactly once, no matter how many goroutines call GetConfig simultaneously. Other callers block until the first finishes. Useful for lazy initialization.
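If you're on Go 1.21 or later, sync.OnceValue folds the Once and the cached result into a single function. A sketch of the same lazy pattern — here `calls` is an illustrative counter just to show the initializer runs once:

```go
package main

import (
	"fmt"
	"sync"
)

var calls int

// loadConfig stands in for an expensive, run-once initializer.
func loadConfig() string {
	calls++ // safe: OnceValue serializes and caches the first call
	return "config-data"
}

// getConfig returns the cached result; loadConfig runs at most once.
var getConfig = sync.OnceValue(loadConfig)

func main() {
	fmt.Println(getConfig()) // runs loadConfig
	fmt.Println(getConfig()) // returns the cached value
	fmt.Println("calls:", calls) // prints: calls: 1
}
```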

sync/atomic

var requests atomic.Int64

func handler(w http.ResponseWriter, r *http.Request) {
    requests.Add(1)
}

func stats() {
    n := requests.Load()
    log.Printf("served %d", n)
}

Lock-free for single-value updates. Faster than Mutex for hot counters. Don't try to coordinate multi-field invariants with atomic — you'll write subtle bugs. For "increment one number a lot," it's perfect.

Go 1.19+ provides typed atomic wrappers (atomic.Int64, atomic.Bool, atomic.Pointer[T]) — use those over the older atomic.AddInt64(&x, 1)-style functions.
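A sketch combining both ideas — a hot counter plus a one-shot flag flipped with CompareAndSwap. The `run` and `winners` names are ours; the point is that exactly one goroutine wins the swap, with no lock anywhere:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// run fires n goroutines that each bump a shared counter and race to
// flip a one-shot flag. Returns the final count and how many goroutines
// won the CompareAndSwap (always exactly one).
func run(n int) (int64, int32) {
	var requests atomic.Int64
	var started atomic.Bool
	var winners atomic.Int32
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			requests.Add(1) // lock-free increment
			// CompareAndSwap flips false→true atomically; only the
			// first goroutine to arrive sees it succeed.
			if started.CompareAndSwap(false, true) {
				winners.Add(1)
			}
		}()
	}
	wg.Wait()
	return requests.Load(), winners.Load()
}

func main() {
	total, winners := run(100)
	fmt.Println(total, winners) // prints: 100 1
}
```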

sync.Map

A concurrent map. The standard library's recommendation is: don't use it unless your workload is one of:

  1. A given key is written once but read many times, as in an append-only cache.
  2. Multiple goroutines read, write, and overwrite entries for disjoint sets of keys.

For everything else (which is most workloads), map[K]V + sync.RWMutex is faster and the API is what you already know.

sync.Cond — almost never use

Condition variables exist in the package for completeness, and the standard library docs all but tell you to avoid them. Whatever you'd build with a Cond can usually be expressed more clearly with channels. If you reach for Cond, think twice — there's almost always a channel design that's easier to reason about.
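For instance, the closest channel equivalent of Cond.Broadcast is closing a channel: every blocked receive unblocks at once, and there's no lock to hold while waiting. A sketch — ReleaseAll is a made-up helper that returns how many waiters woke:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// ReleaseAll starts n waiters blocked on a channel, then "broadcasts"
// by closing it. Returns how many waiters observed the close.
func ReleaseAll(n int) int {
	ready := make(chan struct{})
	var woke atomic.Int32
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-ready // blocks until close(ready)
			woke.Add(1)
		}()
	}

	close(ready) // wakes every waiter at once, like Cond.Broadcast
	wg.Wait()
	return int(woke.Load())
}

func main() {
	fmt.Println(ReleaseAll(3)) // prints: 3
}
```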

The "channel vs mutex" decision

The slogan ("Don't communicate by sharing memory; share memory by communicating") makes channels sound like the right answer always. They're not. The actual rule:

| Use a channel for… | Use a mutex for… |
|---|---|
| Passing ownership of data between goroutines | Protecting access to shared state |
| Coordinating workflow ("here's the next batch") | Caches, config, counters, registries |
| Signaling completion or cancellation | Anything where two back-to-back reads must see consistent values |
| Distributing work to a pool | Cheap, frequently-accessed fields |

Real Canton-adjacent code uses both. A gRPC client probably has a connection pool guarded by a Mutex, a worker pool fed by a channel, a config protected by RWMutex, request counters in atomic. Each tool fits its job.

What to remember about correctness

  1. Always pair Lock with a deferred Unlock; one missed Unlock deadlocks every later caller.
  2. Never copy a Mutex, WaitGroup, or other sync value after first use — pass pointers (go vet's copylocks check catches this).
  3. Run your tests with go test -race; the race detector finds real bugs cheaply.

Takeaways