Caching Systems (Part 3): Building Cache-Aside in Go with Redis

Part 2 covered invalidation strategies. This part shows a cache-aside implementation in Go that is safe in production: TTL jitter, request coalescing, and defensive fallbacks.


Design goals

  • Cache reads are fast and non-fatal
  • Cache misses do not stampede the DB
  • Writes invalidate or refresh cache deterministically

Assumptions:

  • Redis as the shared cache
  • JSON serialization
  • go-redis client


Cache key strategy

Keep it boring and deterministic:

user:{id}

If you later adopt versioned caches, make it:

user:{id}:v{version}
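
A minimal sketch of that versioned helper, assuming "fmt" is imported and that userSchemaVersion is a hypothetical constant you bump whenever the cached shape of User changes:

const userSchemaVersion = 2 // hypothetical; bump when the cached User shape changes

func versionedUserKey(id string) string {
    return fmt.Sprintf("user:%s:v%d", id, userSchemaVersion)
}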

Go implementation (cache-aside + singleflight)

package cache

import (
    "context"
    "encoding/json"
    "math/rand"
    "time"

    "golang.org/x/sync/singleflight"
    "github.com/redis/go-redis/v9"
)

type User struct {
    ID    string `json:"id"`
    Name  string `json:"name"`
    Email string `json:"email"`
}

type Store struct {
    rdb   *redis.Client
    group singleflight.Group
}

func NewStore(rdb *redis.Client) *Store {
    return &Store{rdb: rdb}
}

func (s *Store) cacheKey(id string) string {
    return "user:" + id
}

func ttlWithJitter(base time.Duration) time.Duration {
    // Add up to 10% jitter to spread expirations
    jitter := time.Duration(rand.Int63n(int64(base / 10)))
    return base + jitter
}

func (s *Store) GetUser(ctx context.Context, id string, dbFetch func(context.Context, string) (*User, error)) (*User, error) {
    key := s.cacheKey(id)

    // 1) Try cache
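    // Any cache error (including a miss) falls through to the DB path below;
    // the cache is never allowed to fail a read on its own.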
    if b, err := s.rdb.Get(ctx, key).Bytes(); err == nil {
        var u User
        if jsonErr := json.Unmarshal(b, &u); jsonErr == nil {
            return &u, nil
        }
    }

    // 2) Singleflight to avoid stampede
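    // Note: every coalesced caller shares the first caller's ctx; if that
    // request is canceled, the whole flight returns its error.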
    v, err, _ := s.group.Do(key, func() (any, error) {
        u, err := dbFetch(ctx, id)
        if err != nil {
            return nil, err
        }
        if u == nil {
            return nil, nil
        }
        if b, err := json.Marshal(u); err == nil {
            _ = s.rdb.Set(ctx, key, b, ttlWithJitter(10*time.Minute)).Err()
        }
        return u, nil
    })
    if err != nil {
        return nil, err
    }
    if v == nil {
        return nil, nil
    }
    return v.(*User), nil
}

func (s *Store) InvalidateUser(ctx context.Context, id string) error {
    return s.rdb.Del(ctx, s.cacheKey(id)).Err()
}
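
For orientation, here is how a caller might wire this up. The module path, Redis address, and the hard-coded user in the closure are illustrative stand-ins for your real wiring and DB query:

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"

    "example.com/yourapp/cache" // hypothetical module path for the package above
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    store := cache.NewStore(rdb)

    // dbFetch runs only on a cache miss, and only once per key at a time.
    u, err := store.GetUser(ctx, "42", func(ctx context.Context, id string) (*cache.User, error) {
        return &cache.User{ID: id, Name: "Ada", Email: "ada@example.com"}, nil // stand-in for a real query
    })
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    fmt.Printf("user: %+v\n", u)
}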

Write path: update DB, then invalidate

Golden rule: DB is source of truth. Cache follows.

func (s *Store) UpdateUser(ctx context.Context, id string, dbUpdate func(context.Context, string) error) error {
    if err := dbUpdate(ctx, id); err != nil {
        return err
    }
    return s.InvalidateUser(ctx, id)
}

If you need read-your-writes consistency, consider:

  • Versioned keys (Part 2)
  • Updating the cache immediately after the write (sketched below)
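
A minimal sketch of the second option, assuming it lives in the same cache package and that your update function can return the updated row. UpdateUserAndRefresh and its dbUpdate signature are illustrative, not part of the Store API above:

func (s *Store) UpdateUserAndRefresh(ctx context.Context, id string, dbUpdate func(context.Context, string) (*User, error)) error {
    u, err := dbUpdate(ctx, id)
    if err != nil {
        return err
    }
    b, err := json.Marshal(u)
    if err != nil {
        // Marshal failure: fall back to invalidation so readers never see stale data.
        return s.InvalidateUser(ctx, id)
    }
    return s.rdb.Set(ctx, s.cacheKey(id), b, ttlWithJitter(10*time.Minute)).Err()
}

Refreshing keeps the next read warm, but two concurrent writers can race and leave the cache holding the older value; invalidation remains the safer default.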


Production notes

  • Negative caching: cache “not found” for a short TTL to reduce DB hits (sketched after this list)
  • Timeouts: enforce short Redis timeouts; cache failures should not take down reads (also sketched below)
  • Metrics: track hit rate, miss rate, and stampede events
  • Backoff: if Redis is unhealthy, bypass cache temporarily
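
Two of those points are easy to sketch, assuming the same cache package; the sentinel value, 30-second TTL, and 50-millisecond budget below are illustrative, not recommendations:

const notFoundSentinel = "__not_found__" // illustrative marker for "known missing"

// cacheNotFound records a miss for a short TTL so repeated lookups for a
// missing user stop hitting the DB.
func (s *Store) cacheNotFound(ctx context.Context, id string) error {
    return s.rdb.Set(ctx, s.cacheKey(id), notFoundSentinel, 30*time.Second).Err()
}

// getFromCache bounds how long a read is allowed to wait on Redis.
func (s *Store) getFromCache(ctx context.Context, key string) ([]byte, error) {
    cctx, cancel := context.WithTimeout(ctx, 50*time.Millisecond)
    defer cancel()
    return s.rdb.Get(cctx, key).Bytes()
}

In GetUser, check for the sentinel before unmarshalling and return the "not found" result directly.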

Final takeaway

Cache-aside is simple, but production-safe cache-aside is deliberate. Add jitter, coalescing, and clear invalidation, and you get the best of both worlds: speed without hidden correctness debt.
