Module 4 · Lesson 1 · ~20 min read
Two interfaces, one method each, and basically every byte that flows through a Go program touches one of them. Files, network sockets, gzip wrappers, HTTP bodies, JSON encoders — all read or write through these two interfaces. Master them.
```go
type Reader interface {
    Read(p []byte) (n int, err error)
}

type Writer interface {
    Write(p []byte) (n int, err error)
}
```
That's it. Two methods total. Anything in the Go ecosystem that makes bytes available is a Reader; anything that consumes bytes is a Writer.
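A quick way to see this concretely: the compile-time assertions below pin a few standard-library types to these interfaces. (An illustrative sketch, not an exhaustive list.)

```go
package main

import (
    "bytes"
    "io"
    "os"
    "strings"
)

// Compile-time checks: each value on the right satisfies the interface on the left.
var (
    _ io.Reader = strings.NewReader("hello") // *strings.Reader serves bytes from a string
    _ io.Reader = &bytes.Buffer{}            // *bytes.Buffer is a Reader...
    _ io.Writer = &bytes.Buffer{}            // ...and a Writer
    _ io.Writer = os.Stdout                  // *os.File is both, too
)

func main() {}
```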
The Read contract is subtle and getting it wrong is a common bug:
- A single call may return fewer than len(p) bytes even if more data is available.
- n > 0 AND err != nil can both be true; process the bytes before handling the error.
- End of stream is signaled with io.EOF. EOF is a normal, expected error, not a failure.

The canonical read loop:

```go
buf := make([]byte, 4096)
for {
    n, err := r.Read(buf)
    if n > 0 {
        handle(buf[:n]) // always slice to n; bytes past n are stale leftovers, not fresh data
    }
    if err == io.EOF {
        break // clean end of stream
    }
    if err != nil {
        return fmt.Errorf("read: %w", err)
    }
}
```
You almost never need to write this loop yourself: io.Copy, io.ReadAll, and bufio.Scanner all do it for you. The table below lists the workhorses; a sketch of a few in action follows it.
| Function | Use for |
|---|---|
| io.Copy(dst, src) | Stream all of src into dst. Returns bytes copied. |
| io.ReadAll(r) | Read the entire stream into a byte slice. Use when you need the whole thing in memory. |
| io.LimitReader(r, n) | Wrap a reader so it stops after n bytes. Critical when you don't trust the input's size. |
| io.MultiWriter(a, b) | A single writer that fans out to multiple destinations. |
| io.TeeReader(r, w) | Read from r AND copy everything through w as a side effect. |
| bufio.NewReader(r) | Wrap with buffering; read in chunks instead of byte-at-a-time. |
| bufio.NewWriter(w) | Buffer writes; flush in chunks. Don't forget Flush(). |
| bufio.NewScanner(r) | Iterate over lines/tokens. scanner.Scan() + scanner.Text(). |
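Here's a minimal sketch of a few of these helpers working together. The inputs are inlined strings so it runs standalone; swap in your own readers.

```go
package main

import (
    "bufio"
    "fmt"
    "io"
    "os"
    "strings"
)

func main() {
    // io.Copy + io.MultiWriter: stream src into two destinations at once.
    src := strings.NewReader("hello, streams\n")
    var backup strings.Builder
    n, err := io.Copy(io.MultiWriter(os.Stdout, &backup), src)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fmt.Printf("copied %d bytes; backup holds %q\n", n, backup.String())

    // io.LimitReader + bufio.Scanner: read at most 1 MiB, line by line.
    lines := strings.NewReader("first\nsecond\nthird\n")
    scanner := bufio.NewScanner(io.LimitReader(lines, 1<<20))
    for scanner.Scan() {
        fmt.Println("line:", scanner.Text())
    }
    if err := scanner.Err(); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}
```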
Because everything is just Reader/Writer, you can stack them:
```go
resp, err := http.Get("https://example.com/data.json.gz")
if err != nil {
    return err
}
defer resp.Body.Close()
gz, err := gzip.NewReader(resp.Body) // io.Reader wrapping io.Reader
if err != nil {
    return err
}
defer gz.Close()
var data MyShape
err = json.NewDecoder(gz).Decode(&data) // JSON decoder over io.Reader; handle this error the same way
```
Three layers of streams: HTTP body → gzip decoder → JSON decoder. No intermediate buffer ever holds the whole response; each layer pulls bytes from the one below it on demand. Memory use stays low even for a 10GB compressed payload, because only the decoded Go value has to fit in memory, never the raw or decompressed byte stream.
Compare to languages where each layer would force you to "read full body, then decompress, then parse." Go's stream-everything approach scales without thinking.
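One caveat: Decode(&data) above still materializes the full decoded value. When the payload is one huge JSON array, json.Decoder can also consume it element by element, so not even the decoded values accumulate. A sketch, with Item standing in for whatever your element type is:

```go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "strings"
)

// Item is a stand-in for your real element type.
type Item struct {
    ID int `json:"id"`
}

func main() {
    r := strings.NewReader(`[{"id":1},{"id":2},{"id":3}]`)
    dec := json.NewDecoder(r)

    if _, err := dec.Token(); err != nil { // consume the opening '['
        log.Fatal(err)
    }
    for dec.More() { // one element at a time; never the whole array in memory
        var it Item
        if err := dec.Decode(&it); err != nil {
            log.Fatal(err)
        }
        fmt.Println(it.ID)
    }
    if _, err := dec.Token(); err != nil { // consume the closing ']'
        log.Fatal(err)
    }
}
```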
To make a type a Reader, give it a Read([]byte) (int, error) method. Same for Writer.
```go
type Counter struct{ N int64 }

func (c *Counter) Write(p []byte) (int, error) {
    c.N += int64(len(p))
    return len(p), nil
}

// Use it as a writer that just counts bytes:
c := &Counter{}
io.Copy(c, src)                      // now c.N == bytes streamed from src
io.Copy(io.MultiWriter(dst, c), src) // or write AND count
```
That's how you'd build a "bytes transferred" metric for an HTTP transfer or a gRPC stream — five lines.
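The read side is just as short. Here's the mirror-image wrapper as a sketch; countingReader is a made-up name, not a stdlib type:

```go
package main

import (
    "fmt"
    "io"
    "strings"
)

// countingReader wraps any io.Reader and tallies bytes as they pass through.
type countingReader struct {
    r io.Reader
    n int64
}

func (c *countingReader) Read(p []byte) (int, error) {
    n, err := c.r.Read(p)
    c.n += int64(n) // count what actually arrived, even when err != nil
    return n, err
}

func main() {
    cr := &countingReader{r: strings.NewReader("some payload")}
    io.Copy(io.Discard, cr) // drain the stream
    fmt.Printf("read %d bytes\n", cr.n)
}
```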
- Forgetting to Close(). Anything that is also an io.Closer (HTTP response body, file, gzip reader) leaks if not closed. Calling defer resp.Body.Close() the moment you've checked the error is the safest pattern.
- Calling ioutil.ReadAll(r) on untrusted input. Wrap with io.LimitReader(r, maxBytes) first or you'll OOM on a giant payload. (Also: ioutil is deprecated; use io.ReadAll.)
- Forgetting bufio.Writer.Flush(). Buffered writes sit in memory until flushed; if your program exits without flushing, you lose data. defer w.Flush().
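All three fixes fit in one small program. A sketch; the file names and the 10 MiB cap are placeholders:

```go
package main

import (
    "bufio"
    "io"
    "log"
    "os"
)

const maxBytes = 10 << 20 // 10 MiB cap; placeholder, size it to your use case

func run() error {
    in, err := os.Open("input.dat") // placeholder path
    if err != nil {
        return err
    }
    defer in.Close() // pitfall 1: close what you open

    out, err := os.Create("output.dat") // placeholder path
    if err != nil {
        return err
    }
    defer out.Close()

    w := bufio.NewWriter(out)
    defer w.Flush() // pitfall 3: flush before returning (defers run LIFO, so this runs before out.Close)

    // pitfall 2: bound the read so a giant input can't OOM the process
    _, err = io.Copy(w, io.LimitReader(in, maxBytes))
    return err
}

func main() {
    if err := run(); err != nil {
        log.Fatal(err)
    }
}
```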
Canton's gRPC streams (the transaction stream, the completion stream) deliver data that you'll often want to decode, transform, count, or tee off to a log. All of those compose cleanly when the underlying source is a stream. Even when gRPC gives you typed messages rather than raw bytes, the architectural muscle of "treat data as a stream and apply transformations" carries over.
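This isn't Canton's actual API; it's just the shape of the idea, assuming a recv function like the Recv() method that gRPC client streams expose. Notice the contract mirrors io.Reader: io.EOF means a clean end of stream.

```go
package main

import (
    "errors"
    "fmt"
    "io"
)

// Msg is a placeholder for a typed stream message (e.g. a transaction).
type Msg struct{ ID string }

// consume drains a message stream one item at a time, applying handle to each.
// recv models a gRPC stream's Recv(): it returns io.EOF when the stream ends cleanly.
func consume(recv func() (*Msg, error), handle func(*Msg)) error {
    for {
        m, err := recv()
        if errors.Is(err, io.EOF) {
            return nil // clean end of stream, same contract as io.Reader
        }
        if err != nil {
            return err
        }
        handle(m) // transform/count/log here, exactly like a stacked Reader
    }
}

func main() {
    msgs := []*Msg{{"tx-1"}, {"tx-2"}}
    i := 0
    recv := func() (*Msg, error) {
        if i == len(msgs) {
            return nil, io.EOF
        }
        m := msgs[i]
        i++
        return m, nil
    }
    if err := consume(recv, func(m *Msg) { fmt.Println(m.ID) }); err != nil {
        fmt.Println("stream error:", err)
    }
}
```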
- A single Read can return both n > 0 and an error; process the bytes before handling the error.
- io.EOF is a normal end-of-stream signal, not a failure.
- Reach for the helpers (io.Copy, io.ReadAll, bufio.Scanner) instead of raw Read loops.
- Wrap io.LimitReader around any untrusted input.