Two weeks ago I presented Spryker at the code.talks commerce edition. In my talk I mentioned full page caching as an anti-pattern. Surprisingly, I got quite a bit of feedback on this single statement.
To put it another way: in my opinion, caching dynamic content is a dirty workaround for the poor performance of badly designed software. A full page cache introduces a bunch of new hard problems, and it limits your options for optimizing the conversion rate.
Server-side execution time matters!
The performance of a shop is one of the most important metrics in e-commerce. In this post I am talking about the server-side execution time, which is usually measured in milliseconds (ms). Most web applications have an execution time between 50 ms (fast) and 500 ms (slow) when there is no high load. During a traffic peak the servers must share their resources, and execution times increase by a factor of 5 to 10, which results in a bad user experience and a poor conversion rate. Even worse, at some point the server reaches its capacity limits and the shop stops responding altogether. This is particularly annoying during a TV spot or other expensive marketing campaigns.
Full page cache
A common practice is full page caching: the dynamic content (~HTML) is cached for a given URL so that subsequent requests get a very fast response with the same content. This is a typical use case for reverse proxies like Varnish. The proxy holds the cached data and handles the request without involving the web server.
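The mechanics can be sketched in a few lines. The following is a minimal, illustrative Python version of a URL-keyed cache with a TTL; `render_page` is a hypothetical stand-in for the expensive application rendering, not for Varnish itself:

```python
import time

CACHE = {}          # url -> (expires_at, html)
TTL_SECONDS = 300   # cached pages live for five minutes

def render_page(url):
    # Placeholder for the slow, dynamic page rendering.
    return f"<html><body>content for {url}</body></html>"

def handle_request(url, now=None):
    now = now if now is not None else time.time()
    entry = CACHE.get(url)
    if entry is not None and entry[0] > now:
        return entry[1]                   # cache hit: skip rendering
    html = render_page(url)               # cache miss: render and store
    CACHE[url] = (now + TTL_SECONDS, html)
    return html
```

The first request for a URL pays the full rendering cost; every request within the TTL is served from memory.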
While this approach makes a lot of sense for mostly static news websites like Spiegel.de, I want to list some reasons why a full page cache is an anti-pattern in e-commerce.
A well-done modern e-commerce website provides optimized content to each user. You will find personal product recommendations, search results that are ranked based on the customer’s history, product detail pages that show cross-selling products matching the last search keywords, and so on. On top of this, you usually present the user’s cart and a customized tracking pixel on every page. As you can imagine, there is not much left that can be cached without drawbacks, and obviously there is no way to use caching mechanisms for the user account, cart, and checkout.
I know there are ways to solve most of these problems. For instance, you can load user-specific information from a cookie, or use more advanced techniques like Edge Side Includes (ESI) for hole-punching.
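To illustrate the hole-punching idea, here is a hedged Python sketch: the page shell is served from the cache, and each `<esi:include>` placeholder is replaced with a freshly rendered, per-user fragment on every request. All names are illustrative; a real ESI implementation (for example in Varnish) is considerably more involved:

```python
import re

# Cached page shell containing an ESI placeholder for the cart.
CACHED_SHELL = (
    '<html><body>'
    '<esi:include src="/fragments/cart"/>'
    '<h1>Product page</h1>'
    '</body></html>'
)

def render_fragment(src, user):
    # Per-user fragment, rendered on every request, never cached.
    if src == "/fragments/cart":
        return f"<div>Cart of {user}: 2 items</div>"
    return ""

def assemble(shell, user):
    # Replace each ESI include tag with its freshly rendered fragment.
    return re.sub(
        r'<esi:include src="([^"]+)"/>',
        lambda m: render_fragment(m.group(1), user),
        shell,
    )
```

The shell stays cacheable, but every request still triggers backend work for each punched hole, so the personalization problem is only shifted, not solved.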
In addition, there are more non-trivial problems like the cold cache. This describes the state when the cache does not contain any values and needs to be filled (“warmed up”), which typically happens after you flush the cache. For a period of time your backend application has to handle the entire traffic on its own, which often results in downtime. To solve this, you again need to implement advanced caching strategies, or you need to out-scale the event with more servers.
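A common mitigation is a warm-up script that requests the most important URLs before the proxy is put back in front of real traffic. A naive sketch, with `fetch` as a placeholder for an HTTP client hitting the cache-enabled endpoint:

```python
# Illustrative list of the highest-traffic URLs to prime first.
TOP_URLS = ["/", "/category/shoes", "/product/123"]

def warm_cache(urls, fetch):
    # The first request per URL fills its cache entry ("warms" it).
    primed = []
    for url in urls:
        fetch(url)
        primed.append(url)
    return primed
```

Of course this only covers the URLs you thought of in advance; the long tail of pages still hits a cold cache.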
Another very hard problem is cache invalidation: how do you deal with outdated content? What happens to products that are sold out? It is not enough to invalidate the related product detail page; you also need to flush all category pages which contain the product.
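One way to make this tractable is to track, for every cached page, which products it contains, so that a sold-out product can purge its detail page and all affected category pages in one step. A simplified sketch of such a reverse index (all names are illustrative, and this ignores the distributed-purge problems a real proxy setup adds):

```python
from collections import defaultdict

CACHE = {}                            # url -> cached html
PAGES_BY_PRODUCT = defaultdict(set)   # product_id -> set of urls

def store_page(url, html, product_ids):
    # Cache the page and remember which products appear on it.
    CACHE[url] = html
    for pid in product_ids:
        PAGES_BY_PRODUCT[pid].add(url)

def invalidate_product(product_id):
    # Purge the detail page *and* every category page showing the product.
    for url in PAGES_BY_PRODUCT.pop(product_id, set()):
        CACHE.pop(url, None)
```

Even this tiny sketch shows the cost: every cache write now needs to know its product dependencies, and that bookkeeping has to stay correct forever.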
As a result of such an implementation, the overall complexity of your system increases because it is no longer deterministic. The behaviour in the production environment differs from that on the development machines, which drives your developers crazy. In the worst case you even get different results on different servers, and your QA team will file bug reports which are impossible to reproduce in the development environment.
To sum up: a full page cache introduces some hard problems. As a result, you waste your developers’ time fighting these problems instead of building new features with high business value for your customers. Even worse, these “solutions” make your shop more complex and your website very static, so you are limited in future optimizations of your shop front-end.
For these reasons, at Spryker we provide high performance and great scalability by default. In the next blog post I will explain the main concepts, and afterwards we are going to publish the results of a series of load tests.