Imagine you’ve built an awesome financial web application for a large enterprise. The frontend is a beautiful React app, and the backend is an API powered by Apollo GraphQL Server. In your testing, everything worked fantastically, so with full confidence you scheduled a live demo with upper management. At the beginning of the meeting, you proudly asked all the participants (~15 folks) to open the app in their browsers… Two minutes later, the demo had become a disaster. The web app did not load for the majority of participants, the server crashed several times, and you had to cut the demo short. So what went wrong?
You knew upfront that the application would be used by hundreds of business analysts and upper managers, so you put together a horizontally scalable infrastructure.
The landing page of the app is a customizable dashboard with a dozen widgets that show various graphs and tables of useful information. A handful of widgets need data that requires some serious crunching, so you used a combination of the apollo-server-plugin-response-cache and apollo-server-cache-redis NPM packages to set up a distributed cache. This way the most time- and resource-consuming operations are computed once, cached, and served quickly afterwards.
Aside from the fact that you did not run a load test, the biggest mistake you made was overlooking the scenario where multiple users open the app at the same time. Your horizontally scalable infrastructure handled all the requests quite well; however, because no cache had been built yet, each request executed the same expensive operation in parallel, and that is what caused the failure.
Caching, when done correctly, is a powerful tool; however, its weakest link is the first-time (cold) request. For example, say we have a request that loads the company’s spending trend for the past 12 months. To prepare this data, the server has to crunch a few million records, which may take up to 20 seconds. If somebody else requests the same information within those 20 seconds, the server will crunch the same data all over again, because the cache has not been built yet, right?
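The core of the fix is to coalesce in-flight requests: the first caller computes the result, and every concurrent caller for the same key awaits the same pending promise instead of recomputing. Here is a minimal sketch in plain JavaScript (the `coalesce` function and its key are illustrative, not part of any Apollo package):

```javascript
// Hypothetical in-flight request coalescer: concurrent calls that share a key
// also share one promise, so the expensive computation runs only once.
const inFlight = new Map();

async function coalesce(key, compute) {
  // Another caller already started this computation — join it.
  if (inFlight.has(key)) return inFlight.get(key);

  const promise = (async () => {
    try {
      return await compute();
    } finally {
      // Once settled, drop the entry so a later request can refresh the data.
      inFlight.delete(key);
    }
  })();

  inFlight.set(key, promise);
  return promise;
}
```

With this in place, twenty parallel requests for the 12-month spending trend would trigger a single 20-second computation; the other nineteen callers simply await the same promise.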
It appears that Apollo GraphQL does not ship a solution to this type of problem, which inspired me to put together a small apollo-server-cache-directive package. It lets you manage the cache life cycle down to an individual field, and it prevents the same data from being resolved more than once.
Let us build a quick application that serves a “Library API”. You can find the complete code for the demo in the GitHub repository below.
Our GraphQL schema declares one query, library, that fetches a library’s details by its unique id. The collection of books comes from an external API, and we know that it sometimes takes a while to respond because that API is hosted on a super old server.
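The schema described above might look like this (type and field names other than library and books are illustrative):

```graphql
type Book {
  title: String!
  author: String!
}

type Library {
  id: ID!
  name: String!
  books: [Book!]! # fetched from the slow external API
}

type Query {
  library(id: ID!): Library
}
```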
We are going to use a combination of Apollo Server and Express, with a distributed Redis cache to store the cached data. We will throttle the books resolver for 10 seconds on purpose to emulate the slow external API response.
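The throttled resolver can be sketched as a plain resolver map (the data shapes are made up for illustration; only the 10-second delay in books matters for the demo):

```javascript
// Hypothetical resolver map for the Library API demo.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const resolvers = {
  Query: {
    // Return the library details; a real app would look them up in a database.
    library: (_parent, { id }) => ({ id, name: `Library #${id}` }),
  },
  Library: {
    // Deliberately slow: emulates the external API hosted on the super old server.
    books: async (library) => {
      await sleep(10_000);
      return [{ title: 'Sample Book', author: 'Jane Doe', libraryId: library.id }];
    },
  },
};
```

This resolver map plugs straight into the Apollo Server constructor alongside the schema.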
Now if you send a request to the server to fetch the library, you’ll receive the details in about 10 seconds; however, any other request for exactly the same library details will wait until the first resolver caches the result, so the others can reuse it. In other words, the books resolver will be invoked only once, no matter how many parallel requests you send to the server within the 10-second window.
Conclusion. The Apollo GraphQL server has several packages that allow caching a full response in memory or in distributed cache storage like Redis or Memcached. However, there is no built-in way to prevent the server from building the same data more than once when the cache is empty and several parallel requests for that data arrive at the same time. Besides, there is also no elegant way to choose exactly what you want to cache, down to an individual field.
The apollo-server-cache-directive package lets you declare in your GraphQL schema which field or query is cacheable with a simple @cache directive. It also accepts several arguments to customize the cache behavior your way.
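Assuming the directive is applied like a typical schema directive, annotating just the expensive field might look like the sketch below (the ttl argument name is an assumption for illustration; check the package README for the actual argument names):

```graphql
type Library {
  id: ID!
  name: String!
  # Cache only this expensive field; `ttl` here is illustrative,
  # not a confirmed argument of the package.
  books: [Book!]! @cache(ttl: 600)
}
```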