All Content APIs are served via our globally distributed edge cache. Whenever a query is sent to the API, its response is cached in over 190 edge POPs around the globe.
GraphCMS handles all cache management for you! For even faster queries, use GET requests so browsers can leverage HTTP caching via response headers. See Browser Caching below.
The following chapters will highlight the benefits of our caching strategy.
We use an industry-leading multi-tiered caching approach. The first layer sits close to the user, on 190 edge servers around the world. If those edge servers don't have a cached response for the requested query, they retrieve it from a globally distributed second caching layer before eventually falling back to our origin servers. This method ensures fast distribution of your content from the second request onwards.
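As a rough sketch of the tiered lookup described above (the names `resolve`, `edgeCache`, `regionalCache`, and `origin` are illustrative, not GraphCMS APIs):

```javascript
// Hypothetical sketch of a two-tier cache lookup: edge -> regional -> origin.
function resolve(queryKey, edgeCache, regionalCache, origin) {
  if (edgeCache.has(queryKey)) return edgeCache.get(queryKey); // tier 1: edge POP
  if (regionalCache.has(queryKey)) {
    const res = regionalCache.get(queryKey);
    edgeCache.set(queryKey, res); // warm the edge for subsequent requests
    return res;
  }
  const res = origin(queryKey); // miss on both tiers: fall through to the origin
  regionalCache.set(queryKey, res);
  edgeCache.set(queryKey, res);
  return res;
}

const edge = new Map();
const regional = new Map();
let originHits = 0;
const origin = (key) => { originHits += 1; return `result:${key}`; };

resolve('q1', edge, regional, origin); // first request: hits the origin
resolve('q1', edge, regional, origin); // second request: served from the edge
console.log(originHits); // 1
```

Note how both tiers get populated on the first miss, so the second request never leaves the edge.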
Our custom cache doesn't rely on a simplistic TTL (Time-to-Live) strategy, but instead uses Smart Invalidation whenever the content or underlying schema changes. Smart Invalidation ensures that no stale content is delivered to your connected applications.
Our system detects mutations flowing through the cache and immediately invalidates the affected query responses.
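A minimal sketch of that idea, assuming each cached response records the models it depends on (the names `cacheResponse` and `onMutation` are illustrative, not GraphCMS APIs):

```javascript
// Hypothetical sketch: each cached entry tracks the models its query touched,
// so a mutation on a model invalidates exactly the affected responses.
const cache = new Map(); // queryKey -> { response, models }

function cacheResponse(queryKey, response, models) {
  cache.set(queryKey, { response, models: new Set(models) });
}

function onMutation(model) {
  for (const [key, entry] of cache) {
    if (entry.models.has(model)) cache.delete(key); // drop only stale entries
  }
}

cacheResponse('allPosts', { posts: [] }, ['Post']);
cacheResponse('allAuthors', { authors: [] }, ['Author']);
onMutation('Post'); // a Post mutation flows through the cache

console.log(cache.has('allPosts'));   // false — invalidated
console.log(cache.has('allAuthors')); // true — untouched
```

Unrelated queries stay cached, which is what distinguishes this from a blanket TTL flush.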
We analyze incoming GraphQL queries on the edge server closest to the user. During optimization we normalize and compress the incoming query to accelerate requests to our API. Queries that differ only in whitespace or code comments are served the same previously cached response.
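The effect of this normalization can be illustrated with a toy example (GraphCMS's actual normalization is internal; this only demonstrates the idea that cosmetic differences map to the same cache key):

```javascript
// Hypothetical sketch: strip comments and collapse whitespace so that
// cosmetically different queries produce the same cache key.
function normalizeQuery(query) {
  return query
    .replace(/#[^\n\r]*/g, '') // drop GraphQL # comments
    .replace(/\s+/g, ' ')      // collapse runs of whitespace
    .trim();
}

const verbose = `{
  posts {    # only need the titles
    title
  }
}`;
const compact = '{ posts { title } }';

console.log(normalizeQuery(verbose) === normalizeQuery(compact)); // true
```

Both forms would therefore hit the same cached response.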
Additionally, we support very fast browser caching based on ETag headers when GET requests are used.
If you're using Apollo to consume your GraphCMS API, you can enable the useGETForQueries option in apollo-link-http.
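A minimal client setup might look like the following; the endpoint URI is a placeholder you'd replace with your own project's:

```javascript
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { createHttpLink } from 'apollo-link-http';

const client = new ApolloClient({
  link: createHttpLink({
    uri: 'https://<region>.graphcms.com/v2/<project-id>/master', // placeholder: your endpoint
    useGETForQueries: true, // queries go out as GET; mutations still use POST
  }),
  cache: new InMemoryCache(),
});
```

With this flag set, query requests become GET requests and can benefit from the ETag-based browser caching described above.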