Cache Efficiency
How v.recipes uses a multi-layer caching strategy to answer DNS queries as close to the edge as possible, minimising upstream lookups.
Cache Lookup Scenarios
L1 Cache Hit
Fastest: The domain was recently resolved by this Worker isolate. The response is served directly from in-memory storage.
L2 Cache Hit
Fast: The L1 cache missed (isolate was cold or entry expired), but another Worker in the same data centre had the answer.
Full Cache Miss
Upstream: Neither cache layer has the answer. The query is forwarded to the upstream resolver, and the result is stored in both layers.
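The three scenarios above can be sketched as one lookup cascade. This is a minimal illustration, not v.recipes' actual code: the layer interfaces are injected, and in a real Worker `l1` would be an isolate-local map, `l2` the Cache API, and `upstream` a DNS-over-HTTPS fetch.

```typescript
// Illustrative cache layer interface (an assumption, not a real API).
type Layer = {
  get(key: string): Promise<string | null>;
  set(key: string, answer: string): Promise<void>;
};

async function resolveQuery(
  key: string,
  l1: Layer,
  l2: Layer,
  upstream: (key: string) => Promise<string>,
): Promise<{ answer: string; source: "l1" | "l2" | "upstream" }> {
  const fromL1 = await l1.get(key);
  if (fromL1 !== null) return { answer: fromL1, source: "l1" }; // L1 hit

  const fromL2 = await l2.get(key);
  if (fromL2 !== null) {
    await l1.set(key, fromL2); // backfill L1 for the next query
    return { answer: fromL2, source: "l2" }; // L2 hit
  }

  const answer = await upstream(key); // full cache miss
  await Promise.all([l1.set(key, answer), l2.set(key, answer)]); // fill both layers
  return { answer, source: "upstream" };
}
```

Note that a full miss populates both layers at once, so a repeat query from the same isolate hits L1 and a query from a sibling Worker in the same PoP hits L2.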
Typical Cache Performance
Because most DNS queries are for popular domains (search engines, CDNs, social media), the vast majority of requests are served from the L1 or L2 cache without touching the upstream resolver. This reduces latency for users and spares upstream providers unnecessary load.
Key Concepts
L1 Cache (Edge)
In-memory storage within the Worker isolate. Scoped to a single instance. Extremely fast but ephemeral — cleared when the isolate recycles.
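An isolate-local L1 cache can be as simple as a `Map` with per-entry expiry, checked at read time. The class and key scheme below are illustrative assumptions:

```typescript
// Sketch of an isolate-local L1 cache. State lives only as long as the
// isolate itself, matching the "ephemeral" behaviour described above.
type L1Entry = { answer: string; expiresAt: number };

class L1Cache {
  private store = new Map<string, L1Entry>();

  get(key: string, now = Date.now()): string | null {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // honour the upstream TTL: drop expired entries
      return null;
    }
    return entry.answer;
  }

  set(key: string, answer: string, ttlSeconds: number, now = Date.now()): void {
    this.store.set(key, { answer, expiresAt: now + ttlSeconds * 1000 });
  }
}
```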
L2 Cache (Regional)
Shared across all Workers in the same Cloudflare data centre via the Cache API. Persists longer than L1 and benefits all users routed to that PoP.
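With the Cache API, entries are keyed by synthetic `Request` URLs and expiry is driven by a `Cache-Control: max-age` header derived from the DNS TTL. The key scheme (`dns-cache.local`) and helper names below are assumptions for illustration; in a Worker the cache object would be `caches.default`, so a Map-backed stub stands in here:

```typescript
// Minimal shape of the Workers Cache API that these helpers rely on.
type CacheLike = {
  match(req: Request): Promise<Response | undefined>;
  put(req: Request, res: Response): Promise<void>;
};

// Map-backed stand-in for `caches.default` so the sketch runs anywhere.
function memoryCache(): CacheLike {
  const store = new Map<string, Response>();
  return {
    async match(req) { return store.get(req.url)?.clone(); },
    async put(req, res) { store.set(req.url, res); },
  };
}

function cacheKey(name: string, type: string): Request {
  // Synthetic URL: the Cache API keys entries by Request, not by string.
  return new Request(`https://dns-cache.local/${name}/${type}`);
}

async function l2Get(cache: CacheLike, name: string, type: string): Promise<string | null> {
  const hit = await cache.match(cacheKey(name, type));
  return hit ? hit.text() : null;
}

async function l2Put(cache: CacheLike, name: string, type: string, answer: string, ttl: number): Promise<void> {
  // max-age from the DNS TTL lets the PoP cache evict the entry when the record expires.
  const res = new Response(answer, { headers: { "Cache-Control": `max-age=${ttl}` } });
  await cache.put(cacheKey(name, type), res);
}
```

Because `caches.default` is scoped to the data centre, a write from one Worker is visible to every other Worker routed to the same PoP, which is what makes the L2 layer shared.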
TTL Respect
Cache entries honour the TTL returned by the upstream resolver. Low-TTL records expire quickly; high-TTL records benefit from longer caching. Stale entries may be served briefly during revalidation.
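The TTL behaviour above can be sketched as a freshness check with a short grace window during which a stale entry may still be served while revalidation runs. The grace duration and helper names are illustrative assumptions:

```typescript
// Illustrative grace window for serving stale entries during revalidation.
const STALE_GRACE_MS = 2_000;

type Entry = { answer: string; expiresAt: number };

function makeEntry(answer: string, ttlSeconds: number, now = Date.now()): Entry {
  return { answer, expiresAt: now + ttlSeconds * 1000 };
}

// "fresh" → serve; "stale" → serve, but revalidate upstream in the background;
// "expired" → treat as a miss and query upstream before responding.
function classify(entry: Entry, now = Date.now()): "fresh" | "stale" | "expired" {
  if (now < entry.expiresAt) return "fresh";
  if (now < entry.expiresAt + STALE_GRACE_MS) return "stale";
  return "expired";
}
```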
Cache Backfill
When L2 serves a hit, the result is written back to L1 so the next query from the same isolate is even faster. This cascading fill keeps the hottest data at the closest layer.