Easy Cache Layer for Next.js - Instantly Scale to Millions of Requests per Second

Introduction

Recently on X there was a firestorm of posts about the difficulty of scaling Next.js outside of Vercel. The biggest complaint, or sales pitch for competitor tooling, was that a shared cache is hard. However, I'm here to show you how easy it actually is with the services available today (it used to be much more difficult).

tl;dr

Use Cloudflare as your nameserver and use its built-in cache rules to serve cached responses for requests matching conditions you specify.

Step 1: Open your domain in your Cloudflare account

Step 2: Navigate to Caching > Cache rules

Step 3: Create rules as needed

Create a new rule and enter the settings shown in the screenshot: add a rule name, match on URI path, mark it as eligible for cache, set Edge TTL to 2 hours, and set Browser TTL to respect origin.
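If you prefer the custom filter expression editor over the form fields, the match can be written in Cloudflare's rules expression language as something like `(starts_with(http.request.uri.path, "/blog")) and not starts_with(http.request.uri.path, "/api")` (the `/blog` prefix and `/api` exclusion are placeholders; swap in the paths you actually want cached or skipped). Once the rule is live, a repeat request should come back with a `cf-cache-status: HIT` response header.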

Results

My SSR pages, each with multiple queries and 4x 1920px-wide images, return first paint in 0.6s and full paint in 1.3s.

Conclusion

And you're done. Check your server logs and you'll see that Next.js doesn't receive repeat requests for 2 hours. You have now taken care of a large majority of the scaling issues you may encounter should you get at least one customer. 🫢
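If your app doesn't already log incoming requests, a minimal middleware like the sketch below (a hypothetical addition, not something Cloudflare requires) makes it easy to see which requests actually reach the origin; hits served from the edge cache never show up here.

```ts
// middleware.ts -- minimal request logging so you can see which requests
// actually reach the origin (edge cache hits never appear in this log).
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  console.log(`[origin hit] ${request.method} ${request.nextUrl.pathname}`);
  return NextResponse.next();
}

// Skip static assets and Next.js internals to keep the log readable.
export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};
```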

I use SSR in place of static routes because you can use the cache to get a similar result while having tighter, less hacky control over content updates.
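As a rough sketch of what that combination can look like in an App Router project (the `/blog` route and `getPosts` helper are hypothetical), the page below is rendered on every request that reaches the origin, while Cloudflare's edge cache absorbs the repeats for the 2-hour Edge TTL:

```tsx
// app/blog/page.tsx
// Force SSR so any request that reaches the origin gets fresh data;
// Cloudflare's edge cache (2h Edge TTL) handles repeat traffic.
export const dynamic = "force-dynamic";

// Hypothetical data helper -- replace with your own queries.
async function getPosts(): Promise<{ id: string; title: string }[]> {
  const res = await fetch("https://api.example.com/posts", { cache: "no-store" });
  return res.json();
}

export default async function BlogPage() {
  const posts = await getPosts();
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```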

Notes

  1. Use matching rules to skip pages where you don't want caching (like API endpoints, auth routes, etc.).
  2. You can also set a Cache-Control header on routes to fine-tune how long responses are cached (or whether they're cached at all), as shown in the sketch after this list. For server components you will need to create another rule if you need different cache lengths.
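Here is a minimal sketch of note 2 using an App Router Route Handler; the `/feed` path and the TTL values are just examples, not a recommendation:

```ts
// app/feed/route.ts -- hypothetical route; adjust the path, data, and TTLs.
export async function GET() {
  const body = JSON.stringify({ updatedAt: new Date().toISOString() });
  return new Response(body, {
    headers: {
      "Content-Type": "application/json",
      // s-maxage governs shared caches (Cloudflare honors it when the rule's
      // Edge TTL respects origin); max-age=0 keeps browsers from caching.
      "Cache-Control": "public, s-maxage=600, max-age=0",
    },
  });
}
```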