
Query Caching

Reduce DynamoDB costs and improve latency by caching query results. Dynorm provides a pluggable cache interface with a built-in in-memory implementation.

Overview

flowchart TD
    Query[Query Request] --> Check{Cache Hit?}
    Check -->|Yes| Return[Return Cached Results]
    Check -->|No| DDB[(DynamoDB)]
    DDB --> Store[Store in Cache]
    Store --> Return
    Write[Write Operation] --> Invalidate[Invalidate Cache]
    Invalidate --> DDB2[(DynamoDB)]

Query caching helps when:

  • You have read-heavy workloads with repeated queries
  • You want to reduce DynamoDB costs (every read consumes billed read-request capacity)
  • You need lower latency for frequently accessed data
  • Your data has acceptable staleness windows

Quick Start

Enable Global Cache

import "github.com/go-gamma/dynorm/pkg/cache"

func init() {
    // Enable in-memory caching globally
    cache.SetGlobal(cache.NewMemory())
}

Cache Individual Queries

// Cache this query for 5 minutes
users, err := UserRepo.
    Cache(5 * time.Minute).
    Where("Status", "=", "active").
    GetAll()

The first request fetches from DynamoDB and stores the result in the cache. Subsequent identical queries return cached results until the TTL expires.

Cache Interface

Dynorm's cache is pluggable. Implement the Cache interface to use Redis, Memcached, or any other cache:

type Cache interface {
    // Get retrieves a value by key
    Get(key string) ([]byte, bool)

    // Set stores a value with optional TTL
    Set(key string, value []byte, ttl time.Duration)

    // Delete removes a value by key
    Delete(key string)

    // DeletePrefix removes all values matching a key prefix
    DeletePrefix(prefix string)

    // Clear removes all values
    Clear()
}

Built-in Cache Implementations

Memory Cache

Thread-safe in-memory cache with TTL support:

import "github.com/go-gamma/dynorm/pkg/cache"

// Create memory cache
mem := cache.NewMemory()

// Set as global cache
cache.SetGlobal(mem)

Features:

  • Zero external dependencies
  • Automatic expiration
  • Thread-safe for concurrent access
  • Copies values to prevent mutation

No-op Cache

The default cache that does nothing (caching disabled):

// This is the default - caching is disabled
cache.SetGlobal(cache.NewNoop())

// Explicitly disable caching
cache.SetGlobal(nil)  // Falls back to Noop

Using the Cache

Repository-Level Caching

Start a cached query directly from a repository:

// Cache queries for 10 minutes
users, err := UserRepo.
    Cached(10 * time.Minute).
    Where("Status", "=", "active").
    GetAll()

Query-Level Caching

Cache specific queries with different TTLs:

// Hot data - short TTL
activeUsers, _ := UserRepo.
    Cache(1 * time.Minute).
    Where("Status", "=", "active").
    GetAll()

// Reference data - longer TTL
categories, _ := CategoryRepo.
    Cache(1 * time.Hour).
    GetAll()

Skip Cache

Force a fresh read from DynamoDB:

// Always fetch fresh data
user, _ := UserRepo.
    NoCache().
    Where("Email", "=", email).
    GetFirst()

Custom Cache Keys

Specify a custom cache key for fine-grained control:

// Use a custom cache key
users, _ := UserRepo.
    CacheWithKey("active-admins", 5*time.Minute).
    Where("Status", "=", "active").
    Where("Role", "=", "admin").
    GetAll()

Cache Invalidation

Automatic Invalidation

Dynorm automatically invalidates relevant cache entries when you write:

// This automatically invalidates the cache for this table
UserRepo.Save(user)
UserRepo.Delete(user)

Write operations call DeletePrefix with the table's key prefix (for example, users:) so that every cached query for that table is cleared.

Manual Invalidation

Invalidate cache manually when needed:

// Invalidate all cached queries for a repository
UserRepo.InvalidateCache()

// Global cache operations
cache.Delete("specific-key")
cache.DeletePrefix("users:")
cache.Clear()  // Clear everything

Cache Key Generation

Cache keys are automatically generated from query parameters:

// These generate different cache keys
UserRepo.Where("Status", "=", "active").GetAll()
// Key: users:<hash of conditions>

UserRepo.Where("Status", "=", "inactive").GetAll()
// Key: users:<different hash>

UserRepo.Where("Status", "=", "active").Limit(10).GetAll()
// Key: users:<yet another hash>

The key includes:

  • Table name
  • Index name (if using GSI)
  • All conditions
  • Limit
  • Order direction

Redis Implementation Example

Here's how to implement a Redis cache:

package mycache

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

type RedisCache struct {
    client *redis.Client
    ctx    context.Context
}

func NewRedis(addr string) *RedisCache {
    return &RedisCache{
        client: redis.NewClient(&redis.Options{Addr: addr}),
        ctx:    context.Background(),
    }
}

func (r *RedisCache) Get(key string) ([]byte, bool) {
    val, err := r.client.Get(r.ctx, key).Bytes()
    if err != nil {
        // redis.Nil (key absent) and transport errors both count as a miss
        return nil, false
    }
    return val, true
}

func (r *RedisCache) Set(key string, value []byte, ttl time.Duration) {
    r.client.Set(r.ctx, key, value, ttl)
}

func (r *RedisCache) Delete(key string) {
    r.client.Del(r.ctx, key)
}

func (r *RedisCache) DeletePrefix(prefix string) {
    // SCAN iterates incrementally instead of blocking the server like KEYS
    iter := r.client.Scan(r.ctx, 0, prefix+"*", 0).Iterator()
    for iter.Next(r.ctx) {
        r.client.Del(r.ctx, iter.Val())
    }
}

func (r *RedisCache) Clear() {
    r.client.FlushDB(r.ctx)
}

Usage:

import "github.com/go-gamma/dynorm/pkg/cache"

func init() {
    cache.SetGlobal(mycache.NewRedis("localhost:6379"))
}

Global Cache Functions

Convenience functions for working with the global cache:

import "github.com/go-gamma/dynorm/pkg/cache"

// Get current global cache
c := cache.Global()

// Direct cache operations
cache.Set("my-key", data, 5*time.Minute)
data, found := cache.Get("my-key")
cache.Delete("my-key")
cache.DeletePrefix("user:")
cache.Clear()

Memory Cache Management

The in-memory cache provides additional management methods:

mem := cache.NewMemory()

// Get current size
size := mem.Size()
fmt.Printf("Cache has %d entries\n", size)

// Clean up expired entries (call periodically)
mem.CleanExpired()

Periodic Cleanup

Set up periodic cleanup to free memory:

func init() {
    mem := cache.NewMemory()
    cache.SetGlobal(mem)

    // Clean expired entries every minute
    go func() {
        ticker := time.NewTicker(1 * time.Minute)
        for range ticker.C {
            mem.CleanExpired()
        }
    }()
}

Caching Strategies

Cache-Aside Pattern

The default pattern - cache on read, invalidate on write:

// Read: Check cache, fetch if miss
users, _ := UserRepo.
    Cache(5 * time.Minute).
    Where("Status", "=", "active").
    GetAll()

// Write: Update DB, invalidate cache
UserRepo.Save(user)  // Auto-invalidates

Write-Through Pattern

For critical data, refresh the cache after writes:

func UpdateUser(user *User) error {
    // Update database
    if err := UserRepo.Save(user); err != nil {
        return err
    }

    // Refresh common queries in cache
    UserRepo.
        Cache(5 * time.Minute).
        Where("Status", "=", "active").
        GetAll()

    return nil
}

TTL Strategies

Choose TTLs based on data characteristics:

// Real-time data - no cache or very short TTL
session, _ := SessionRepo.
    Cache(10 * time.Second).
    Where("Token", "=", token).
    GetFirst()

// User data - moderate TTL
user, _ := UserRepo.
    Cache(5 * time.Minute).
    Find(userID)

// Reference data - long TTL
countries, _ := CountryRepo.
    Cache(24 * time.Hour).
    GetAll()

// Static data - very long TTL
config, _ := ConfigRepo.
    Cache(7 * 24 * time.Hour).
    Where("Name", "=", "settings").
    GetFirst()

Lambda Considerations

In AWS Lambda, each execution environment has its own memory:

flowchart TD
    subgraph "Lambda Environment 1"
        L1[Lambda Handler] --> C1[Memory Cache]
    end

    subgraph "Lambda Environment 2"
        L2[Lambda Handler] --> C2[Memory Cache]
    end

    subgraph "Lambda Environment 3"
        L3[Lambda Handler] --> C3[Memory Cache]
    end

    C1 -.->|No sharing| C2
    C2 -.->|No sharing| C3

    L1 --> DDB[(DynamoDB)]
    L2 --> DDB
    L3 --> DDB

For Lambda:

  • Memory cache: Works within a single warm instance
  • Redis/ElastiCache: Shared across all instances (requires VPC)

Select the cache backend at startup:

func init() {
    if os.Getenv("USE_REDIS") != "" {
        cache.SetGlobal(mycache.NewRedis(os.Getenv("REDIS_URL")))
    } else {
        cache.SetGlobal(cache.NewMemory())
    }
}

Monitoring and Debugging

Cache Hit Ratio

Track cache effectiveness:

type MonitoredCache struct {
    cache.Cache
    hits   int64
    misses int64
}

func (m *MonitoredCache) Get(key string) ([]byte, bool) {
    val, found := m.Cache.Get(key)
    if found {
        atomic.AddInt64(&m.hits, 1)
    } else {
        atomic.AddInt64(&m.misses, 1)
    }
    return val, found
}

func (m *MonitoredCache) HitRatio() float64 {
    h := atomic.LoadInt64(&m.hits)
    miss := atomic.LoadInt64(&m.misses)
    total := h + miss
    if total == 0 {
        return 0
    }
    return float64(h) / float64(total)
}

Debug Logging

type DebugCache struct {
    cache.Cache
}

func (d *DebugCache) Get(key string) ([]byte, bool) {
    val, found := d.Cache.Get(key)
    if found {
        log.Printf("CACHE HIT: %s", key)
    } else {
        log.Printf("CACHE MISS: %s", key)
    }
    return val, found
}

func (d *DebugCache) Set(key string, value []byte, ttl time.Duration) {
    log.Printf("CACHE SET: %s (ttl=%v, size=%d)", key, ttl, len(value))
    d.Cache.Set(key, value, ttl)
}

Best Practices

Do

  • Use caching for read-heavy, repeated queries
  • Choose appropriate TTLs for your data's freshness requirements
  • Monitor cache hit ratios
  • Use Redis for cross-instance caching in Lambda
  • Clean up expired entries periodically

Don't

  • Cache data that must always be fresh
  • Use very long TTLs without invalidation strategy
  • Forget to handle cache misses gracefully
  • Cache large result sets that consume too much memory
  • Rely on cache for data consistency

Cost Optimization

A well-tuned cache can significantly reduce your DynamoDB costs:

  • Read-heavy workloads benefit most
  • Even short TTLs (30s-1min) help with burst traffic
  • Monitor your DynamoDB read capacity to measure impact

API Reference

Cache Interface

Method                 Description
Get(key)               Retrieve cached value
Set(key, value, ttl)   Store value with TTL
Delete(key)            Remove specific key
DeletePrefix(prefix)   Remove all keys with prefix
Clear()                Remove all cached values

Global Functions

Function                     Description
cache.SetGlobal(c)           Set the global cache instance
cache.Global()               Get the global cache instance
cache.Get(key)               Get from global cache
cache.Set(key, value, ttl)   Set in global cache
cache.Delete(key)            Delete from global cache
cache.DeletePrefix(prefix)   Delete by prefix from global cache
cache.Clear()                Clear global cache

Query Methods

Method                   Description
Cache(ttl)               Enable caching with TTL
CacheWithKey(key, ttl)   Cache with custom key
NoCache()                Skip cache for this query

Repository Methods

Method              Description
Cached(ttl)         Start a query with caching enabled
InvalidateCache()   Clear all cached queries for this table

Memory Cache Methods

Method           Description
NewMemory()      Create a new memory cache
Size()           Get the number of cached entries
CleanExpired()   Remove expired entries
