At Kinsta, we’re obsessed with speed: Our Application Hosting, Database Hosting, and Managed WordPress Hosting services all run on the Google Cloud Platform’s fastest Premium Tier Network and C2 Machines, and we rely on Cloudflare to keep the pedal to the metal for tens of thousands of customers who want to deliver their content around the world with speed and security.

While making that happen, we’ve learned a thing or two about using Cloudflare Workers and Workers KV to provide optimized caching rules for static and dynamic content.

In early 2023, we doubled down on Cloudflare cache wrangling, making caches more responsive to client-side configuration changes while also shifting the heavy lifting of broadcasting feature updates away from our admins on the backend and into Cloudflare Workers. A key result was a dramatic rise in the share of customer data successfully cached, up 56.3% between October 2022 and March 2023.

Data showing increase in percentage of successful cache hits over time.
Optimization via Cloudflare Workers became a stronger focus in January of 2023.

Cloudflare Workers and Workers KV allow us to programmatically customize every request and response with minimal effort and lower latency. We no longer need to deploy changes to hundreds of thousands of containers when we want to implement new features; we can replicate or implement the feature with Workers and deploy it everywhere with a few commands and clicks, saving us days of work and maintenance.

Request Routing With Workers KV and Workers

Every Kinsta-hosted domain is a key in Workers KV, and its value contains at least the core settings, like the origin’s IP and port, and a unique random ID. With this data readily available in Workers KV, we can use Workers to analyze, manipulate, and route requests to their expected backend. We also use Workers KV to store customer optimization options like Polish, Image Resizing, and Auto Minify.
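
As a rough sketch of what reading one of these entries inside a Worker can look like (the example JSON values and the fallback response below are illustrative assumptions, not our actual schema):

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event));
});

async function handleRequest(event) {
  const request = event.request;
  const host = new URL(request.url).hostname.toLowerCase();

  // Example stored value (values are illustrative):
  // { "kinstaConf": { "customBackend": "origin-abc123.example.net", "id": "abc123", "polish": "lossy" } }
  const kvdata = await KV_NAMESPACE.get(host, { type: "json" });
  if (!kvdata) {
    return new Response("Unknown domain", { status: 404 });
  }

  // ... analyze, manipulate, and route the request based on kvdata
  return fetch(request);
}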

To route requests to custom IPs and ports, we use resolveOverride, a Cloudflare-specific property set through the request’s cf object. Here’s an example:

// Assign KV values to variables
const { customBackend } = kvdata.kinstaConf;

// Override the backend: resolveOverride is passed to fetch()
// through the Cloudflare-specific cf object
const response = await fetch(request, {
  cf: { resolveOverride: customBackend },
});

However, while Workers KV worked well for routing requests, we soon noticed inconsistent responses in our cache. Sometimes a customer activated Polish and, due to Workers KV’s one-minute cache, new requests arrived before the change had fully propagated, causing us to cache non-optimized assets. When this happened, the client had to clear their cache again manually. Not the ideal scenario. Clients got frustrated, and we wasted API operations and GCP bandwidth constantly purging caches.

Cache Key Is the Key

Since we always read the domain’s Workers KV data, we realized we could route requests and customize the cache key, appending things like the domain’s ID and features that could affect the asset, like Polish. Today, our cache key is heavily customized to quickly reflect every change a client makes in our panel or API. Because the cache key is built from Workers KV’s data, nobody needs to worry about clearing the cache anymore. As soon as Workers KV propagates the changes, the cache key also changes, and we request and cache a fresh asset.

The easiest way to customize the cache key is to append query params to it. For example:

let cacheKey = `${request.url}?custom-cache-param-polish=lossy`

Of course, you need to check the URL for existing parameters to determine which separator to use (? or &), and make sure the parameter name is unique so it doesn’t clash with the site’s own query strings.
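
One way to handle that, as a minimal sketch reusing the illustrative parameter from the example above, is to let the URL API manage the query string for you, since searchParams picks the right separator and handles encoding:

const url = new URL(request.url);
url.searchParams.set("custom-cache-param-polish", "lossy");
const cacheKey = url.toString();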

Then, you can use this new cache key to save the response with Cache API or Fetch — or both.
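
As a minimal sketch of the Cache API path (assuming the request, event, and cacheKey variables from above, plus a placeholder TTL; the cf: { cacheKey } option on fetch() is an alternative on plans that support it):

const cache = caches.default;
let response = await cache.match(cacheKey);

if (!response) {
  // Cache miss: fetch from the origin, then store a copy under the custom key
  response = await fetch(request);

  // Re-create the response so its headers are mutable
  response = new Response(response.body, response);
  response.headers.set("Cache-Control", "public, s-maxage=3600");

  event.waitUntil(cache.put(cacheKey, response.clone()));
}

return response;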

Workers KV Cache

Workers KV operations are affordable, but the numbers pile up when you trigger billions of read operations daily.

Thanks to our cache key customization, we realized we could cache the Workers KV data with the Cache API, saving on read operations and potentially lowering latency by avoiding multiple Workers KV GET requests per visitor. Since the cached response is now based on the request’s URL combined with KV data, we no longer need to worry about caching stale content.

Chart showing process flow when caching Workers KV data.
The process flow with caching of Workers KV data included.
Chart showing TTFB responses for various caching scenarios.
Average time to first byte in various caching scenarios.

However, unlike many applications, we can’t cache Workers KV for extended periods. Kinsta’s customers are constantly trying new features, changing Polish and Auto Minify settings, sometimes excluding pages or extensions from being cached, and they want to see their changes in production as soon as possible.

That’s when we decided to microcache our Workers KV data: caching dynamic or constantly changing content for a very short period of time, usually less than 60 seconds.

It’s pretty simple to implement your own Workers KV caching logic. For example:

const handleKVCache = async (event, myCustomDomain) => {
  const kvCacheKey = `https://${myCustomDomain}/some-string-ID-kv-data/`;

  // Try to get the KV data from the cache first
  const cache = caches.default;
  const cachedResponse = await cache.match(kvCacheKey);

  // Valid KV cache match
  if (cachedResponse && cachedResponse.status === 200) {
    // ... modify your cached data if necessary, then return it
    return cachedResponse.json();
  }

  // Invalid cache (expired, miss, etc.), get the data from the KV namespace as parsed JSON
  const site_data = await KV_NAMESPACE.get(myCustomDomain.toLowerCase(), { type: "json" });

  // Cache valid KV responses with the Cache API for 30 seconds
  if (site_data) {
    const kvResponse = new Response(JSON.stringify(site_data), { status: 200 });
    kvResponse.headers.set("Cache-Control", "public, s-maxage=30");
    event.waitUntil(cache.put(kvCacheKey, kvResponse));
  }

  return site_data;
};
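
For context, a hypothetical call from a fetch handler (assuming the handler has the event and derives the hostname from the request) could look like this:

const site_data = await handleKVCache(event, new URL(event.request.url).hostname);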

(Optionally, you could use FlareUtils’ BetterKV.)

At Kinsta, we implemented a 30-second cache TTL for Workers KV data, decreasing read operations by about 80%.

Chart showing change in read operations over time.
Drop in read operations after implementing a TTL of 30 seconds for Workers KV data cache.

Learn More

Want to learn more about Workers and Workers KV? Check out the Cloudflare Workers KV developer documentation, or start by visiting Cloudflare’s dedicated Workers KV homepage.

This article was originally published on the Cloudflare website.

Paulo Paracatu

Paulo is a seasoned DevOps Engineer at Kinsta with a solid web hosting and optimization background. Equipped with Bash and JavaScript expertise, he uses Cloudflare Workers to continually improve user experiences in hosting.