
Cache Efficiency

How v.recipes uses a multi-layer caching strategy to answer DNS queries as close to the edge as possible, minimising upstream lookups.

Multi-Layer Cache Architecture

[Diagram: a client DNS query first checks the L1 edge cache; on a MISS it falls through to the L2 regional cache, and on a second MISS the query is forwarded to the upstream DNS resolver. Each layer stores the result as it replies (the upstream FILL populates L2, which in turn backfills L1), and the resolved response is delivered to the client.]

Cache Lookup Scenarios

L1 Cache Hit (Fastest)

The domain was recently resolved by this Worker isolate. The response is served directly from in-memory storage.

1. Client sends DNS query to v.recipes.
2. L1 (edge) cache has a fresh entry for this domain.
3. Response returned immediately from memory — sub-millisecond overhead.
4. No L2 lookup, no upstream query. Fastest possible path.
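The L1 hit path above can be sketched as a plain in-memory map keyed by query name and type. The names and shapes here are illustrative, not the actual v.recipes code:

```typescript
// Hypothetical L1 entry: the cached DNS answer plus its absolute expiry time.
interface L1Entry {
  answer: string;    // e.g. an A record's IP address (simplified)
  expiresAt: number; // epoch milliseconds
}

// Module-level map: lives exactly as long as the Worker isolate does.
const l1 = new Map<string, L1Entry>();

// Return the cached answer if present and still fresh, else undefined.
function l1Lookup(key: string, now: number = Date.now()): string | undefined {
  const entry = l1.get(key);
  if (entry === undefined) return undefined; // L1 miss: no entry
  if (entry.expiresAt <= now) {
    l1.delete(key);                          // L1 miss: entry expired
    return undefined;
  }
  return entry.answer;                       // L1 hit: serve from memory
}

function l1Store(key: string, answer: string, ttlSeconds: number, now: number = Date.now()): void {
  l1.set(key, { answer, expiresAt: now + ttlSeconds * 1000 });
}
```

Because the map is scoped to one isolate, a hit costs only a `Map.get` and a timestamp comparison, which is where the sub-millisecond figure comes from.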

L2 Cache Hit (Fast)

The L1 cache missed (isolate was cold or entry expired), but another Worker in the same data centre had the answer.

1. Client sends DNS query to v.recipes.
2. L1 cache miss — entry absent or expired in this isolate.
3. L2 (regional) cache is checked via the Cloudflare Cache API.
4. L2 returns a valid response. L1 is backfilled for next time.
5. Response returned to client — typically <5ms overhead.

Full Cache Miss (Upstream)

Neither cache layer has the answer. The query is forwarded to the upstream resolver, and the result is stored in both layers.

1. Client sends DNS query to v.recipes.
2. L1 cache miss — no entry in this isolate.
3. L2 cache miss — no entry in this data centre.
4. Query forwarded to the upstream resolver (e.g., Cloudflare 1.1.1.1).
5. Upstream responds. Result stored in both L2 and L1.
6. Response returned to client. Subsequent queries hit cache.
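Putting the three scenarios together, the lookup cascade might look like the following sketch. The `L2Cache` interface and `upstream` function are hypothetical stand-ins for the Cloudflare Cache API and an upstream resolver call; TTL expiry is omitted for brevity:

```typescript
// Simplified cached answer: payload plus remaining TTL in seconds.
interface DnsAnswer { answer: string; ttl: number; }

// Hypothetical interfaces standing in for the Cache API and upstream resolver.
interface L2Cache {
  get(key: string): Promise<DnsAnswer | undefined>;
  put(key: string, value: DnsAnswer): Promise<void>;
}
type Upstream = (key: string) => Promise<DnsAnswer>;

// Per-isolate L1 store.
const l1Map = new Map<string, DnsAnswer>();

async function resolveDns(key: string, l2: L2Cache, upstream: Upstream): Promise<DnsAnswer> {
  // 1. L1: in-memory, per-isolate, fastest.
  const fromL1 = l1Map.get(key);
  if (fromL1 !== undefined) return fromL1;

  // 2. L2: shared across Workers in the data centre; backfill L1 on a hit.
  const fromL2 = await l2.get(key);
  if (fromL2 !== undefined) {
    l1Map.set(key, fromL2);
    return fromL2;
  }

  // 3. Full miss: ask upstream, then fill both layers before replying.
  const fresh = await upstream(key);
  await l2.put(key, fresh);
  l1Map.set(key, fresh);
  return fresh;
}
```

Note the fill order on a full miss: both layers are written before the reply, so the very next query, from this isolate or any other in the same data centre, stays local.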

Typical Cache Performance

Overall hit rate: >90%
L1 hit latency: <1ms
L2 hit latency: <5ms

Because most DNS queries are for popular domains (search engines, CDNs, social media), the vast majority of requests are served from the L1 or L2 cache without touching the upstream resolver. This reduces latency for users and eliminates unnecessary load on upstream providers.
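A rough weighted-average calculation shows why hit rate dominates average latency. The split between L1 and L2 hits below is assumed for illustration; only the combined >90% hit rate comes from the figures above:

```typescript
// Illustrative latency model: weighted average over the three lookup paths.
// Only the combined >90% hit rate is stated in the document; the 70/25/5
// split and the 50ms upstream figure are assumptions for this example.
function expectedOverheadMs(
  l1Rate: number, l1Ms: number,
  l2Rate: number, l2Ms: number,
  upstreamMs: number,
): number {
  const missRate = 1 - l1Rate - l2Rate;
  return l1Rate * l1Ms + l2Rate * l2Ms + missRate * upstreamMs;
}

// With 70% L1 hits at ~1ms, 25% L2 hits at ~5ms, 5% misses at ~50ms:
// 0.70*1 + 0.25*5 + 0.05*50 = 4.45ms average overhead per query.
```

Even with a pessimistic upstream cost, a high combined hit rate keeps the average overhead within a few milliseconds.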

Key Concepts

L1 Cache (Edge)

In-memory storage within the Worker isolate. Scoped to a single instance. Extremely fast but ephemeral — cleared when the isolate recycles.

L2 Cache (Regional)

Shared across all Workers in the same Cloudflare data centre via the Cache API. Persists longer than L1 and benefits all users routed to that PoP.

TTL Respect

Cache entries honour the TTL returned by the upstream resolver. Low TTL records expire quickly; high TTL records benefit from longer caching. Stale entries may be served briefly during revalidation.
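The TTL handling described above can be sketched as a freshness classifier. The stale-while-revalidate window is an illustrative parameter, not the actual v.recipes value:

```typescript
// States a cached entry can be in relative to its upstream TTL.
type Freshness = "fresh" | "stale" | "expired";

// Illustrative policy: honour the upstream TTL, but allow a short
// grace period in which a stale answer may be served while revalidating.
function classify(
  storedAt: number,         // epoch seconds when the record was cached
  ttl: number,              // TTL returned by the upstream resolver, seconds
  now: number,              // current epoch seconds
  staleWindow: number = 30, // assumed revalidation grace period, seconds
): Freshness {
  const age = now - storedAt;
  if (age < ttl) return "fresh";               // within the upstream TTL
  if (age < ttl + staleWindow) return "stale"; // serve, but revalidate
  return "expired";                            // must re-resolve upstream
}
```

A low-TTL record (say 60s) cycles through these states quickly, while a high-TTL record (say 86400s) stays fresh all day, which is what makes popular, stable domains so cache-friendly.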

Cache Backfill

When L2 serves a hit, the result is written back to L1 so the next query from the same isolate is even faster. This cascading fill keeps the hottest data at the closest layer.