When you build a web application, you may reach a point where some performance problems could be solved by adding a cache layer. One of the most common ways to solve that is to add a server-side cache like Redis or Memcached. Still, there's another way: you could leverage the HTTP Cache-Control header to cache in the HTTP layer. But when should you choose the HTTP Cache-Control header over a server-side cache?

## Server-side Cache

A server-side cache usually needs an extra tool like Redis, but it gives you much more control over your cache. The way it's used is that you have some object (let's call it `cache`) and a few methods to get, set, or delete a value from it. You can usually also attach a TTL (time to live) to an entry, so it expires after a specific time.

## HTTP Cache

You already have an HTTP layer, and most likely your app is behind a CDN, so you could use the Cache-Control header to cache in the CDN directly. The main benefit is that you don't need to add another tool to your stack.

A public, or shared, cache is used by more than one client. As such, it gives a greater performance gain and a much greater scalability gain, as a user may receive cached copies of representations without ever having obtained a copy directly from the origin server. A private cache is used by only one client, the one it was created for.

Location.Any (Cache-Control set to public) indicates that the client or any intermediate proxy may cache the value, including the Response Caching Middleware. Location.Client (Cache-Control set to private) indicates that only the client may cache the value.

To make safe choices around HTTP caching, we use an HTTPCacheControlMiddleware class to control whether a response should be considered public or private.
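The server-side cache API described above (an object with get, set, and delete, plus a TTL) can be sketched as a minimal in-memory class. This is an illustration of the interface, not Redis itself; the class name `TTLCache` and its lazy-expiry strategy are assumptions for the example.

```python
import time


class TTLCache:
    """Minimal in-memory sketch of a server-side cache API
    (get/set/delete with an optional TTL), standing in for a
    real tool like Redis or Memcached."""

    def __init__(self):
        # key -> (value, expires_at or None for "never expires")
        self._store = {}

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            # Lazily drop the entry once its TTL has elapsed.
            del self._store[key]
            return None
        return value

    def delete(self, key):
        self._store.pop(key, None)
```

A typical call site would look like `cache.set("user:42", profile, ttl=60)` followed later by `cache.get("user:42")`, which returns `None` once the 60 seconds have passed.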
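The public/private decision can be centralized in a middleware, as the article suggests. The original does not show the HTTPCacheControlMiddleware implementation, so the sketch below is a hypothetical version: the path allowlist, the `max_age` default, and the header-dict interface are all assumptions made for illustration.

```python
class HTTPCacheControlMiddleware:
    """Hypothetical sketch of a middleware that decides whether a
    response may be cached publicly (by a CDN or any shared proxy)
    or only privately (by the requesting client)."""

    # Assumed allowlist: only these path prefixes are safe to share
    # across users; everything else defaults to private.
    PUBLIC_PATHS = ("/static/", "/assets/")

    def __init__(self, max_age=300):
        self.max_age = max_age

    def cache_control_for(self, path, authenticated):
        # Authenticated or non-allowlisted responses must never be
        # stored by a shared cache, so they are marked private.
        if authenticated or not path.startswith(self.PUBLIC_PATHS):
            return f"private, max-age={self.max_age}"
        return f"public, max-age={self.max_age}"

    def apply(self, path, authenticated, headers):
        headers["Cache-Control"] = self.cache_control_for(path, authenticated)
        return headers
```

Defaulting to `private` is the safe choice here: a wrongly public response could leak one user's data to another through a shared cache, while a wrongly private one only costs performance.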