I have the following endpoints:
/orders
/orders?status=open
/orders?status=open&my_orders=true
The third example uses request headers to determine the user and return that user's specific items.
Obviously this is poor API design, but we want to cache the first two and not the third. The caching policy can be modified to either whitelist or exclude query string parameters, but based on my understanding this won't help. If we include the user-specific header, then the first two URIs will also be cached per user.
Is there an option I am missing that allows me to avoid caching the 3rd endpoint while still caching the first two? Another option is to cache the 3rd but include the user-specific headers in the cache key.
If you exclude the my_orders query string from the cache policy, CloudFront will not include that value in the cache key. That means that, all else being equal, these two URIs will share the same cache key:
/orders?status=open
/orders?status=open&my_orders=true
That doesn't sound like what you want - you do want to treat requests with my_orders=true as separate cache keys, but you also need to account for a specific request header whose value changes the cache key. If that's the case, you need to include that request header as part of your cache key (which will also ensure CloudFront passes it through to your origin).
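For reference, a cache policy along those lines can be created with the AWS CLI. This is only a sketch: the policy name, the TTLs, and the X-User-Id header are assumptions, so substitute the real header your origin uses to identify the user.

aws cloudfront create-cache-policy --cache-policy-config file://orders-cache-policy.json

where orders-cache-policy.json contains something like:

{
  "Name": "orders-cache-policy",
  "MinTTL": 0,
  "DefaultTTL": 300,
  "MaxTTL": 86400,
  "ParametersInCacheKeyAndOriginRequestPolicy": {
    "EnableAcceptEncodingGzip": true,
    "EnableAcceptEncodingBrotli": true,
    "HeadersConfig": {
      "HeaderBehavior": "whitelist",
      "Headers": { "Quantity": 1, "Items": ["X-User-Id"] }
    },
    "CookiesConfig": { "CookieBehavior": "none" },
    "QueryStringsConfig": {
      "QueryStringBehavior": "whitelist",
      "QueryStrings": { "Quantity": 2, "Items": ["status", "my_orders"] }
    }
  }
}

Note the trade-off already raised in the question: with a single cache policy, the header becomes part of the key for all three URIs, so /orders and /orders?status=open would also be stored once per header value. Cache behaviors can only be split by path pattern, not by query string, which is why including the header in the key (your second option) is the practical route here.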
I searched a lot but couldn't find a single page on the internet that explained the difference between the X-Cache and X-Cache-Remote Akamai headers.
Every time, I receive two different values for these headers, which indicates that they are not the same. Any information regarding the difference between the two would be of great help.
As you probably know, Akamai does two levels of redirection.
DNS points the client to one of the Akamai addresses closest to it.
But that is not the address of the actual server that services the request. Rather, the request gets serviced by one of the "edge" servers.
There is a possible third level. Sometimes, if the content is not in its cache, the edge server, instead of sending the request to the origin server, redirects it to another edge server in the hope that the latter may have the content in its cache. "X-Cache" and "X-Cache-Remote" are the statuses of the cache check on these two edge servers, respectively. If the first edge server serves the request from its cache, or if it fetches from the origin directly, the "X-Cache-Remote" header is absent.
There is practically no difference between the first and the second edge servers except in one aspect: on the second edge server, user-location detection does not work, and any check related to user location returns "false". For example, the criterion "Is the user country one of ("US")?" would return "false", and the opposite, "Is the user country NOT one of ("US")?", would also return "false". So if you have rules that use user location, you have to somehow pass that information from the first edge server to the second. Custom outgoing request headers can be used for this.
None of the above is from the Akamai documentation. Rather, it is based on a series of experiments performed against Akamai. Akamai does give a clue to this effect by emitting a warning: "The behaviors and matches enclosed within a User Location Data match will only be executed by the Akamai edge server that receives the client request. If the request is forwarded to another Akamai server, the matches and behaviors enclosed will be ignored. If you are unsure about how this will affect your property please contact your Akamai Technical representative."
There is a single page that explains all of the various x-akamai-* headers (if you're logged in to the Akamai Customer Community) that you can use with Akamai.
The possible values of those two specific headers (x-akamai-cache and x-akamai-cache-remote) are available in a separate Customer Community document.
In short, the x-akamai-cache header tells you how the initially responding Edge server handled the object. The x-akamai-cache-remote header tells you how the Parent Tier handled the object.
In many circumstances, your configuration may have something called "Tiered Distribution" (or "Cache Hierarchy") enabled, which uses a multi-layered caching system. There's a good video created by Akamai employees that talks about Tiered Distribution and other caching behaviors available to you through the Akamai platform. There's also a little more on this multi-tiered caching system on Akamai's Developer site.
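If you want to experiment with the X-Cache and X-Cache-Remote headers from the question, note that they are debug headers Akamai only returns when the request asks for them via Pragma directives (and, on newer configurations, only when debug headers are permitted for the property). A rough example, with www.example.com standing in for your own Akamai-fronted hostname:

curl -s -o /dev/null -D - -H 'Pragma: akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key' https://www.example.com/ | grep -iE 'x-cache|x-check-cacheable'

Consistent with the answer above, a TCP_HIT in X-Cache with no X-Cache-Remote at all means the first edge server answered from its own cache (or went straight to origin), while a TCP_MISS in X-Cache alongside a hit in X-Cache-Remote means the request was forwarded to a second, parent-tier server that had the object.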
In OkHttp, the cache key is the URL. But every time I request the same URL I have to add a "distinctRequestId" parameter to distinguish each request, so my URL is different every time even though the content is the same. I want to set a custom cache key so that the cached entry stays unique. Thanks.
When you configure a cache in Apigee Edge, you give it some key fragments (e.g. request.uri, request.header.Accept, request.header.Accept-Language, etc.). To clear that key, you pass the same key fragments.
If I have 5,000 elements cached, how can I clear the entire cache without generating 5,000 calls to my API with all the possible cache keys?
You can use the clear all cache entries API call, documented here. If you don't pass in the prefix query parameter, it should remove all entries.
The InvalidateCache policy is used to explicitly invalidate the cache entry for a given CacheKey (where the CacheKey is the combination of Prefix and KeyFragment), not for clearing all of the entries associated with the given cache resource. Please go through the document here to understand more about 'Invalidate Cache'.
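For illustration only, an InvalidateCache policy keyed the same way as the lookup described in the question might look roughly like this (the policy name, cache resource name, and key fragments are assumptions, not your actual configuration):

<InvalidateCache async="false" continueOnError="false" enabled="true" name="IC-RemoveEntry">
  <CacheResource>myCache</CacheResource>
  <Scope>Exclusive</Scope>
  <CacheKey>
    <Prefix>orders</Prefix>
    <KeyFragment ref="request.uri"/>
    <KeyFragment ref="request.header.Accept-Language"/>
  </CacheKey>
  <PurgeChildEntries>false</PurgeChildEntries>
</InvalidateCache>

Each execution removes only the entry for that exact combination of fragments, which is why it doesn't help with wiping 5,000 entries at once; the clear-all management API or the UI option below is the right tool for that.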
The cache can also be cleared from the UI.
You can log in to the UI, go to the APIs tab, and under that there will be an "Environment Configuration" tab.
There you will get the option to clear the entire cache.
The following API call will also allow you to delete all your cache entries:
curl -v -u admin 'https://api.enterprise.apigee.com/v1/organizations/{org-name}/environments/{env-name}/caches/{cache-name}/entries?action=clear' -X POST
I am trying to use Varnish to cache a page that has some user-specific text and links on it. The best way to cache such pages is via Edge Side Includes (ESI).
Context
My web application is RESTful and does not support sessions, or even cookies for that matter. Every source URL is complete in the sense that it contains a user-specific query parameter to identify a unique user. The pages that see the most visits in the web application are listing pages. I just need to show the user's email in the header, and the links on the page must also carry the user-specific query parameter forward so as to simulate logged-in behavior. Page contents are supposed to be the same for every user except for the header and those internal links.
I tried to use <esi:include /> for such areas on the page but, obviously, could not include the user-specific parameter in the page source (else the first user-specific hit would be cached with the first user's parameter and served the same for every subsequent user). Further, I tried to strip the user-specific parameter in Varnish's vcl_recv subroutine and store it temporarily in a header such as req.http.X-User just before the lookup. Each source URL then gets hashed with a req.url that doesn't contain any user-specific parameters and hence does not create duplicate cache objects for each unique user.
Question
I would like to read the user-specific parameter from req.http.X-User and hash user-specific ESI requests by adding this value to each ESI URL as a query parameter. I do not see a way in which one could share query parameters between a source request and its included ESI requests. Could someone help?
I have tried to depict my objective in the following diagram:
I guess your problem is that the ESI call itself is going to be cached, including any query strings in the URL.
I can't remember the specifics, but I think you can get Varnish to pass cookies through to the ESI requests, so you could store the value in a cookie (encrypted?) and then it can be read by whatever is handling the ESI call.
Or maybe you can get it to pass the HTTP headers through, in which case the value can be read directly from the HTTP header.
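If you go the header route, one way to wire it up in VCL is sketched below. It assumes the user parameter is literally called "user" (adjust to your real parameter name) and relies on the fact that, as far as I recall, ESI subrequests in Varnish 4+ inherit the parent request's headers, so X-User set on the page request is still visible when the fragment request passes through vcl_recv and vcl_hash again. Treat it as a starting point, not a drop-in config.

vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    if (req.esi_level == 0 && req.url ~ "[?&]user=") {
        # Remember the user on the top-level request...
        set req.http.X-User = regsub(req.url, ".*[?&]user=([^&]+).*", "\1");
        # ...and strip it from the URL so all users share one page object.
        set req.url = regsuball(req.url, "([?&])user=[^&]*&?", "\1");
        set req.url = regsub(req.url, "[?&]$", "");
    }
}

sub vcl_hash {
    # Only the ESI fragments are per-user; add the user value to the hash
    # for subrequests only, so the parent listing page stays shared.
    # Falling through (no return) still appends the builtin url/host hash.
    if (req.esi_level > 0) {
        hash_data(req.http.X-User);
    }
}

sub vcl_backend_response {
    # Let Varnish parse <esi:include/> tags; in practice you would
    # restrict this to the listing pages rather than every response.
    set beresp.do_esi = true;
}

The fragment backend then reads the user from the X-User request header (which is forwarded to the origin with the ESI subrequest) instead of from a query parameter, which is essentially the header-passing idea suggested above.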
To improve performance, I'd like to add a fairly long Cache-Control (up to 30 minutes) to each page, since they do not change often. However, each page also displays the name of the user logged in (like this website).
The problem is when the user logs in or logs out: the user name must change. How can I change the user name after each login/logout action while keeping a long Cache-Control?
Here are the solutions I can think of:
An Ajax request (not cached) to retrieve and display the user name. If I have two requests (/user?registered and /user?new), they could be cached as well. But I am afraid this extra request would nullify my caching gains performance-wise.
Add a unique URL variable (?time=) to make the URL different and defeat the cache. However, I would have to add this variable to all links on my webpage, which is not very convenient code-wise.
This problem becomes greater if I actually have more content that is not the same for registered users and new users.
Cache-Control: private
is usually enough in practice. It's what SO uses.
In theory, if you needed to allow for the case of different logins from the same client, you should probably set Vary on Cookie (assuming that's the mechanism you're using for login). However, this value of Vary (along with most others) messes up IE's caching completely, so it's generally avoided. Also, it's often desirable to allow the user to step through the back/forward list, including logged-in/out pages, without having to re-fetch.
For situations where enforcing proper logged-in-ness for every page is critical (such as banking), a full Cache-Control: no-cache is typically used instead.
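To make those options concrete, the response headers would look roughly like this, with 1800 seconds matching the 30 minutes from the question.

For the ordinary pages, with the user name handled separately or tolerated as slightly stale:

Cache-Control: private, max-age=1800

The Vary-on-Cookie variant mentioned above (with the IE caveat):

Cache-Control: private, max-age=1800
Vary: Cookie

And for pages where logged-in state must be enforced on every view (the banking case):

Cache-Control: no-cache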