Wikimedia API query getting cached and inconsistent

After running some queries against the Wikimedia API, I am no longer getting any results; all requests seem to be cached or otherwise wrong.
The Sandbox API request returned a response for me yesterday, but today, after trying other queries, I am not getting a proper response for any of them.
Is there a limit on the number of queries we can make from one IP? And if all my requests are being cached, how do I clear the cache?

Recent changes are stored for 30 days on Wikimedia projects (the MediaWiki default is 90). You're requesting data older than that. Did you perhaps intend to get changes newer than your date? In that case, you need to specify rcdir=newer.
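For example, a minimal sketch of such a query against the recentchanges list (the endpoint, timestamp, and limit below are placeholders; adjust them to the wiki and date range you actually need):

```typescript
// Sketch: fetch changes *newer* than a given timestamp from a MediaWiki API.
// The endpoint and rcstart value are placeholders, not the asker's actual query.
const params = new URLSearchParams({
  action: "query",
  list: "recentchanges",
  rcstart: "2024-01-01T00:00:00Z", // starting point of the range
  rcdir: "newer",                  // walk forward in time from rcstart
  rclimit: "10",
  format: "json",
  origin: "*",                     // needed for anonymous cross-origin browser requests
});

const res = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
const data = await res.json();
console.log(data.query.recentchanges);
```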

Related

Fetch Still Hit After Purged Edge-Cache

I have an API URL that Cloudflare caches at the edge, and every time the data is updated the backend calls Cloudflare's API to purge that URL.
When I try it with Postman it works perfectly fine: I get a MISS when new data arrives and a cached HIT when I fetch again afterwards.
But when I build the frontend website using fetch, no matter how hard I try it always gets a "HIT" from Cloudflare's cache, even though Postman gets a "MISS".
(Image) Postman "MISS"
(Image) Fetch Always "HIT"
Providing more details about your testing method would allow a better guess. But offhand, it sounds like after purging the cache you are testing with Postman first each time? In that case it makes sense that Postman would get a MISS, and by the time you follow up with the browser test, the response has already been loaded into the edge cache by the Postman request, so the browser sees a HIT.
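If you want to rule out the browser's own HTTP cache while testing, a rough sketch like the following (the URL is a placeholder) bypasses it and reads Cloudflare's cache status directly; note that cf-cache-status is only readable from cross-origin JavaScript if the backend exposes it via Access-Control-Expose-Headers:

```typescript
// Sketch: skip the browser's HTTP cache so the request reaches the edge,
// then inspect Cloudflare's cache status header. The URL is a placeholder.
const res = await fetch("https://example.com/api/data", { cache: "no-store" });
console.log("edge cache:", res.headers.get("cf-cache-status")); // e.g. HIT, MISS, EXPIRED
```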

Why are GraphQL queries POST requests even when we are trying to fetch data and not update/submit new data?

I am using Postman to fetch data from my server. When I use a REST call it is a GET request, but when I use a GraphQL API call it has to be a POST request. Why is that?
The GraphQL spec itself is transport-agnostic; however, the convention adopted by the community has been to use POST requests. As pointed out in the comments, some libraries support GET requests. However, when doing so, the query has to be sent as a URL query parameter, since GET requests don't carry a body. This can be problematic with bigger queries, since you can easily hit a 414 URI Too Long status on certain servers.
The best practice is to always use POST requests with an application/json Content-Type.
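A minimal sketch of that convention, assuming a generic /graphql endpoint (the URL, query, and variables are placeholders):

```typescript
// Sketch: GraphQL over HTTP as a POST with an application/json body.
// Endpoint, query, and variables are placeholders.
const res = await fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: "query GetUser($id: ID!) { user(id: $id) { name } }",
    variables: { id: "1" },
  }),
});
const { data, errors } = await res.json();
```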

Prevent AJAX POST responses from being cached

I have AJAX POST requests generated from my webpage, and there may be multiple POST requests with the same post data. But the response may vary, and I want to make sure I am not getting cached responses to any of these requests. I need each request to actually hit the server.
Am I right in assuming that responses to POST requests will not be cached?
There are two levels of caching involved in this process:
Browser caching
Server caching
To eliminate the first one, you have to trick your browser by adding a throwaway parameter to your AJAX request so that it treats each request as unique, e.g.
www.example.com/api/ajax?123
www.example.com/api/ajax?1234
At the server level, you have to make sure that no caching has been configured for that URL; for example, some developers cache any file ending in .json, and a service like Cloudflare will automatically cache static content.
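A rough sketch of the cache-busting idea with fetch (the URL and payload are placeholders; jQuery's cache: false option does something similar for GETs by appending a timestamp parameter):

```typescript
// Sketch: append a throwaway query parameter so each request URL is unique,
// and tell the browser not to use its HTTP cache at all.
async function postUncached(url: string, payload: unknown): Promise<Response> {
  const sep = url.includes("?") ? "&" : "?";
  return fetch(`${url}${sep}_=${Date.now()}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    cache: "no-store", // ask the browser to bypass its cache entirely
    body: JSON.stringify(payload),
  });
}
```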

Why does Google Analytics use a GET request for its gif instead of a POST request?

Google Analytics uses a GET request for a .gif image to its server:
http://www.google-analytics.com/__utm.gif?utmwv=4&utmn=769876874&utmhn=example.com&utmcs=ISO-8859-1&utmsr=1280x1024&utmsc=32-bit&utmul=en-us&utmje=...
We can observe that all parameters are sent in this GET request, and the requested image itself serves no visible purpose (it is just a 1px by 1px image).
Known information: if the query string is large, Google switches to a POST request.
Now the question is: why not always use a POST request, irrespective of whether the query string is large?
Sending data via a GET request leads to security issues, since the parameters will be stored in the browser history and in web server logs.
Could someone give supporting reasons why Google Analytics relies on both approaches?
Because GET requests are what you use for retrieving information that does not alter anything.
Please note that the use of POST has quite a few downsides: the browser usually warns against reloading a resource requested via POST (to prevent double data entry), and POST requests are not cached (which is why some analytics tools misuse them) or proxied, etc.
If you want to retrieve a LOT of data using a URL (advice: rethink whether there might be a better option), then it is necessary to use POST. From Wikipedia:
There are times when HTTP GET is less suitable even for data retrieval. An example of this is when a great deal of data would need to be specified in the URL. Browsers and web servers can have limits on the length of the URL that they will handle without truncation or error. Percent-encoding of reserved characters in URLs and query strings can significantly increase their length, and while Apache HTTP Server can handle up to 4,000 characters in a URL, Microsoft Internet Explorer is limited to 2048 characters in any URL. Equally, HTTP GET should not be used where sensitive information, such as user names and passwords have to be submitted along with other data for the request to complete. In these cases, even if HTTPS is used to encrypt the message body, data in the URL will be passed in clear text and many servers, proxies, and browsers will log the full URL in a way where it might be visible to third parties. In these cases, HTTP POST should be used.
A POST request would require an AJAX call and would not work because of the same-origin policy (http://en.wikipedia.org/wiki/Same-origin_policy). But images can easily be loaded cross-site, so they just need to add an img tag to the DOM with the required URL and the browser will load it, sending the needed information to their servers for tracking.
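A rough sketch of that pixel technique (the endpoint and parameter names below are made up for illustration, not the real Google Analytics ones):

```typescript
// Sketch: encode the data as query parameters on a 1x1 image URL and let the
// browser perform the cross-site GET. Endpoint and parameters are placeholders.
function sendPixel(endpoint: string, data: Record<string, string>): void {
  const img = new Image(1, 1);
  img.src = `${endpoint}?${new URLSearchParams(data)}`; // request fires on src assignment
}

sendPixel("https://collector.example.com/pixel.gif", {
  page: location.pathname,
  res: `${screen.width}x${screen.height}`,
});
```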

Conditional GET Request and Expiry Header testing with Firebug-NET

I'm using Firebug's Net panel to measure the performance of our application, and I'm a bit confused by the way it displays the timeline. We have enabled an Expires header for all static files (set to 30 days from the current date). Now, even when the resource is available in the cache, it still makes a conditional GET (that is what I think is happening). Ideally it shouldn't open a connection to the server at all, yet it takes 93ms to create a connection. Please see the image I've attached.
Can someone please help me understand this better?
The HTTP response contains an "ETag" header. The ETag is a cache validator.
On seeing this response, the HTTP client will always check with the server whether the content has been updated.
The cache validator takes precedence over the other cache control headers.
If you want content to be served from the cache without being validated on the server side, keep only the Expires header and remove the ETag header.
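As a rough illustration of that idea (a made-up Node.js handler, not the asker's actual server), the response carries an Expires/Cache-Control header but no validators, so the client can reuse its cached copy without a conditional GET; on Apache the usual equivalent is removing ETags in the configuration (e.g. FileETag None):

```typescript
// Sketch (Node.js): serve a file with a far-future Expires/Cache-Control header
// and no ETag or Last-Modified, so clients should not revalidate until expiry.
// File path and port are placeholders.
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

createServer((req, res) => {
  const body = readFileSync("./static/app.js");
  res.writeHead(200, {
    "Content-Type": "application/javascript",
    "Cache-Control": "public, max-age=2592000",                  // 30 days
    "Expires": new Date(Date.now() + 30 * 864e5).toUTCString(),  // 30 days out
    // deliberately no ETag / Last-Modified, so no conditional GET validators
  });
  res.end(body);
}).listen(8080);
```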
