Module 5 · Lesson 4 · ~20 min read
The cross-cutting concerns of any production gRPC service: middleware (interceptors), authentication, TLS, and connection lifecycle. Canton-grade infrastructure code lives or dies on getting these right.
Interceptors are gRPC's version of HTTP middleware. They wrap an RPC call to add cross-cutting behavior: logging, metrics, tracing, auth, retries.
Client-side and server-side interceptors come in pairs, one for unary calls and one for streaming (the unary examples are shown first; a streaming sketch follows them):
| | Client side | Server side |
|---|---|---|
| Unary | UnaryClientInterceptor | UnaryServerInterceptor |
| Streaming | StreamClientInterceptor | StreamServerInterceptor |
func loggingInterceptor(
	ctx context.Context,
	req any,
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (any, error) {
	start := time.Now()
	resp, err := handler(ctx, req)
	log.Printf("%s took %v err=%v", info.FullMethod, time.Since(start), err)
	return resp, err
}
// Wire up:
s := grpc.NewServer(grpc.UnaryInterceptor(loggingInterceptor))
The signature is verbose but straightforward: receive the context, the request, the call info, and a "handler" function that represents "the rest of the chain." Call handler at the appropriate time. Log/decorate around it.
func authInterceptor(token string) grpc.UnaryClientInterceptor {
	return func(
		ctx context.Context,
		method string,
		req, reply any,
		cc *grpc.ClientConn,
		invoker grpc.UnaryInvoker,
		opts ...grpc.CallOption,
	) error {
		ctx = metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+token)
		return invoker(ctx, method, req, reply, cc, opts...)
	}
}
// Wire up:
conn, _ := grpc.NewClient(addr,
	grpc.WithUnaryInterceptor(authInterceptor("abc.def.ghi")),
	grpc.WithTransportCredentials(creds),
)
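The streaming variants from the table follow the same wrap-the-call pattern, except the handler runs the entire stream before returning. A minimal server-side logging sketch:

func streamLoggingInterceptor(
	srv any,
	ss grpc.ServerStream,
	info *grpc.StreamServerInfo,
	handler grpc.StreamHandler,
) error {
	start := time.Now()
	err := handler(srv, ss)
	log.Printf("stream %s lasted %v err=%v", info.FullMethod, time.Since(start), err)
	return err
}
// Wire up:
s := grpc.NewServer(grpc.StreamInterceptor(streamLoggingInterceptor))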
google.golang.org/grpc/metadata handles per-RPC headers — auth tokens, request IDs, trace headers.
// Client adds metadata before sending
ctx = metadata.AppendToOutgoingContext(ctx, "x-request-id", reqID)
// Server reads from incoming metadata
md, ok := metadata.FromIncomingContext(ctx)
if ok {
	reqIDs := md.Get("x-request-id") // zero or more values for this key
	log.Printf("x-request-id: %v", reqIDs)
}
Canton's Ledger API uses metadata for authentication tokens (typically JWT) and trace propagation.
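For an individual call, you can attach the token and a request ID on the outgoing context just before invoking the stub. A sketch; jwt, reqID, client, and req are assumed from your own setup (client stands in for a generated Ledger API stub, like the command client created later in this lesson):

// Attach the Ledger API token and a request ID for this one call
ctx := metadata.AppendToOutgoingContext(context.Background(),
	"authorization", "Bearer "+jwt,
	"x-request-id", reqID,
)
if _, err := client.Submit(ctx, req); err != nil {
	log.Printf("submit failed: %v", err)
}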
Canton in production runs over TLS. Your client needs proper credentials.
// Plaintext (local sandbox only)
conn, _ := grpc.NewClient(addr,
	grpc.WithTransportCredentials(insecure.NewCredentials()))
// TLS with system roots (most production setups)
creds := credentials.NewClientTLSFromCert(nil, "")
conn, _ := grpc.NewClient(addr, grpc.WithTransportCredentials(creds))
// TLS with custom CA cert (private/internal Canton deployment)
caBytes, _ := os.ReadFile("/etc/canton/ca.crt")
pool := x509.NewCertPool()
pool.AppendCertsFromPEM(caBytes)
creds := credentials.NewClientTLSFromCert(pool, "")
conn, _ := grpc.NewClient(addr, grpc.WithTransportCredentials(creds))
// Mutual TLS (mTLS) — when the server requires client certs
clientCert, _ := tls.LoadX509KeyPair("client.crt", "client.key")
config := &tls.Config{
	Certificates: []tls.Certificate{clientCert},
	RootCAs:      pool,
}
creds := credentials.NewTLS(config)
conn, _ := grpc.NewClient(addr, grpc.WithTransportCredentials(creds))
Real Canton deployments will give you a CA certificate to trust and (often) a client certificate to present. The mTLS path is the most common for production participant-to-participant or external-integration-to-participant connections.
A grpc.ClientConn is a long-lived object that internally manages a pool of HTTP/2 connections to a target. Treat it like a database connection pool: create once, reuse forever.
conn, err := grpc.NewClient("canton.example.com:5011",
	grpc.WithTransportCredentials(creds),
	grpc.WithKeepaliveParams(keepalive.ClientParameters{
		Time:                30 * time.Second, // ping if idle this long
		Timeout:             10 * time.Second, // fail if no pong
		PermitWithoutStream: true,             // keep pinging even with no active call
	}),
)
if err != nil {
	return err
}
defer conn.Close()

// Now share `conn` across goroutines, calls, requests, etc.
client := ledger.NewCommandServiceClient(conn)
streamClient := ledger.NewUpdateServiceClient(conn)
Keepalives matter for long-lived connections crossing NATs and load balancers — without them, idle connections get killed by intermediaries and the next call fails for confusing reasons.
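If you also run a gRPC server of your own (a proxy, a test double, an integration service), the matching server-side options live in the same keepalive package. A sketch; the exact durations are assumptions to tune per deployment:

s := grpc.NewServer(
	grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
		MinTime:             15 * time.Second, // reject client pings arriving more often than this
		PermitWithoutStream: true,             // allow pings even when no RPC is in flight
	}),
	grpc.KeepaliveParams(keepalive.ServerParameters{
		Time:    60 * time.Second, // server-initiated ping on idle connections
		Timeout: 10 * time.Second, // close the connection if the ping isn't acknowledged
	}),
)

If the server's enforcement policy is stricter than the client's ping interval, the server answers with a GOAWAY ("too_many_pings") and the connection drops, so keep the two sides consistent.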
gRPC returns errors with structured status codes. google.golang.org/grpc/status and codes let you inspect them.
import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

resp, err := client.Submit(ctx, req)
if err != nil {
	st, ok := status.FromError(err)
	if ok {
		switch st.Code() {
		case codes.DeadlineExceeded:
			// retry or surface as timeout
		case codes.Unavailable:
			// transient, retry with backoff
		case codes.PermissionDenied:
			// auth issue, don't retry
		}
	}
}
Memorize the categories: transient (Unavailable, DeadlineExceeded, ResourceExhausted) → retry. Permanent (InvalidArgument, NotFound, PermissionDenied) → don't retry, surface to caller.
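That categorization is easy to encode once and reuse. A small sketch of a helper, using the status and codes packages imported above (the exact code set is an assumption; extend it to match your idempotency guarantees):

// isRetryable reports whether an RPC error is worth retrying with backoff.
func isRetryable(err error) bool {
	st, ok := status.FromError(err)
	if !ok {
		return false // not a gRPC status error
	}
	switch st.Code() {
	case codes.Unavailable, codes.DeadlineExceeded, codes.ResourceExhausted:
		return true
	default:
		return false
	}
}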
For transient failures, you want exponential backoff. gRPC has built-in retry support via service config:
conn, _ := grpc.NewClient(addr,
	grpc.WithTransportCredentials(creds),
	grpc.WithDefaultServiceConfig(`{
		"methodConfig": [{
			"name": [{"service": "example.v1.Submitter"}],
			"retryPolicy": {
				"maxAttempts": 4,
				"initialBackoff": "0.1s",
				"maxBackoff": "5s",
				"backoffMultiplier": 2.0,
				"retryableStatusCodes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"]
			}
		}]
	}`),
)
This built-in retry covers the simple, declarative case. For more sophisticated retry logic (per-call decisions, idempotency-aware retries), implement it as a client interceptor; a sketch follows.
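A sketch of such an interceptor, reusing the hypothetical isRetryable helper from above with a simple exponential backoff (a real implementation would also honor per-method idempotency rules):

func retryInterceptor(maxAttempts int) grpc.UnaryClientInterceptor {
	return func(
		ctx context.Context,
		method string,
		req, reply any,
		cc *grpc.ClientConn,
		invoker grpc.UnaryInvoker,
		opts ...grpc.CallOption,
	) error {
		backoff := 100 * time.Millisecond
		var err error
		for attempt := 1; ; attempt++ {
			err = invoker(ctx, method, req, reply, cc, opts...)
			if err == nil || !isRetryable(err) || attempt >= maxAttempts {
				return err // success, permanent failure, or out of attempts
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // give up if the caller's context is done
			case <-time.After(backoff):
				backoff *= 2 // exponential backoff between attempts
			}
		}
	}
}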
The standard gRPC health check protocol (grpc.health.v1.Health) is a tiny service every gRPC server should implement. It's how Kubernetes probes, load balancers, and Envoy decide if a backend is healthy.
import (
	"google.golang.org/grpc/health"
	"google.golang.org/grpc/health/grpc_health_v1"
)
s := grpc.NewServer()
hs := health.NewServer()
grpc_health_v1.RegisterHealthServer(s, hs)
hs.SetServingStatus("", grpc_health_v1.HealthCheckResponse_SERVING)
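On the probing side, the check is just another unary RPC. A quick sketch of calling it from Go (recent Kubernetes versions can also hit this endpoint natively with a grpc liveness/readiness probe):

hc := grpc_health_v1.NewHealthClient(conn)
resp, err := hc.Check(ctx, &grpc_health_v1.HealthCheckRequest{Service: ""})
if err != nil {
	log.Fatalf("health check failed: %v", err)
}
log.Printf("health: %s", resp.GetStatus()) // expect SERVING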
ClientConn is long-lived; share it. Set keepalives for production. Classify errors with status.FromError + codes. Retry transient, fail permanent.