UID tag : 04 7F C7 BA 20 4B 80
Message : 04 7F C7 BA 20 4B 80TCP connection ready
Sending..
Packet sent
+IPD,210:HTTP/1.1 200 OK
Server: nginx/1.14.0
Date: Tue, 05 Jun 2018 11:51:29 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
X-Acc-Exp: 600
X-Proxy-Cache: HIT localarra.com
0
The above is the output. I am able to call my API, but only once. How do I disable the cache in my ESP8266 Arduino project? All my code works properly; the ESP8266 just shows a cached response every time. I tried with Postman and it calls the API fresh every time, but not from the ESP8266. Please comment if you can help.
Those are headers returned by the web server you're accessing.
The cache isn't in the ESP8266; it's on the server side. The web site is being served through a proxy server which caches pages. This is usually done to improve performance, as in a Content Delivery Network.
It's possible that if you append a URL query parameter, the proxy server will serve you a fresh copy of the page rather than a cached one.
You didn't share the URL you're accessing, so suppose it's
http://www.example.com/page
In that case,
http://www.example.com/page?foo=1
might cause the proxy to bypass the cache.
If that doesn't work, then you're probably not going to be able to bypass the server's cache.
However, the web site operator likely has a good reason for using a cache; it's not something you should normally need to bypass.
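The asker's project is Arduino code, but the cache-busting idea is language-independent. Here is a minimal Python sketch (the URL and the `_` parameter name are illustrative, not from the question) that appends a unique timestamp parameter so each request looks new to the proxy:

```python
import time
import urllib.parse

def cache_busted(url: str) -> str:
    """Append a unique query parameter so a caching proxy treats the request as new."""
    parts = urllib.parse.urlsplit(url)
    query = urllib.parse.parse_qsl(parts.query)
    query.append(("_", str(int(time.time() * 1000))))  # millisecond timestamp
    return urllib.parse.urlunsplit(parts._replace(query=urllib.parse.urlencode(query)))

print(cache_busted("http://www.example.com/page"))
# e.g. http://www.example.com/page?_=1709290000000
```

On the ESP8266 side, the same effect can be had by formatting the request path with the value of `millis()` appended as a query parameter.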
This is so frustrating. The result of an API should not be cached, unless we explicitly say so.
However, on this server, in this particular instance of my deployment, the result gets cached. It's so annoying, and it causes a huge number of bugs.
Here's the API endpoint example:
http://example.com/user/list
And Google Chrome keeps showing (from disk cache). I checked, and there isn't any caching header anywhere in this request. The point is that we have more than 10 servers and more than 50 instances of our API deployed, and only this one shows this stupid behavior. What can be wrong, and where should I check?
Update: Response headers are:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Server: Kestrel
Date: Sun, 30 Sep 2018 05:20:36 GMT
Content-Length: 355
This one has me perplexed...
On my website I am getting Mixed Content errors in my console, yet when I inspect the source, the URLs it says are http are showing as https.
In fact, a search of the source for anything with http:// returns nothing.
Inspection shows:
<img src="https://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default" data-lazy="https://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default" alt="2 Bedroom Apartment for Sale in Strand North" title="2 Bedroom Apartment for Sale in Strand North" class="lazy loading-F5F5F5">
Yet I get this error:
Mixed Content: The page at 'https://www.immoafrica.net/residential/for-sale/south-africa/?advanced-search=1&st=' was loaded over HTTPS, but requested an insecure image 'http://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default'. This content should also be served over HTTPS.
The page is requesting the following https URL:
https://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default
…but the server is redirecting that https URL to the following http URL:
http://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default
Paste that https URL into your browser address bar and you’ll see you end up at the http URL.
Or try it from the command line with something like curl:
$ curl -i 'https://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default'
HTTP/2 301
date: Sat, 06 Jan 2018 01:56:57 GMT
cache-control: max-age=3600
expires: Sat, 06 Jan 2018 02:56:57 GMT
location: http://images.immoafrica.net/aHR0cHM6Ly9yZXZvbHV0aW9uY3JtLXJldm9sdXRpb24tcHJvcGltYWdlcy5zMy5hbWF6b25hd3MuY29tLzU2LzE3MTk4OC8xMjcxOTk0X2xhcmdlLmpwZw==/fb5c609f3c1506a8798dfa620ccf8a15?1=1&width=420&height=310&mode=crop&scale=both&404=default
server: cloudflare
cf-ray: 3d8b1051cfbf84fc-HKG
…and notice the server sends back a 301 response with a Location header pointing at the http URL.
So the problem seems to be that the images.immoafrica.net site isn't served over HTTPS/TLS and instead redirects all requests for https URLs to their http equivalents.
There's nothing you can do on your end to fix that, other than creating or using some kind of HTTPS proxy through which you make the requests for images.immoafrica.net URLs.
Instead of using https://, use // (a protocol-relative URL). This will stop mixed content issues.
I have a server serving static files with an expiry of one year, but my browsers still request the file and receive a 304 Not Modified. I want to prevent the browser from even attempting the connection. I see this happen in several different setups (Ubuntu Linux) with both Chrome and Firefox.
My test is as follows:
$ wget -S -O /dev/null http://trepalchi.it/static/img/logo-trepalchi-black.svg
--2016-03-14 19:56:14-- http://trepalchi.it/static/img/logo-trepalchi-black.svg
Resolving trepalchi.it (trepalchi.it)... 213.136.85.40
Connecting to trepalchi.it (trepalchi.it)|213.136.85.40|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx/1.2.1
Date: Mon, 14 Mar 2016 18:55:29 GMT
Content-Type: image/svg+xml
Content-Length: 25081
Last-Modified: Sun, 13 Mar 2016 23:03:53 GMT
Connection: keep-alive
Expires: Tue, 14 Mar 2017 18:55:29 GMT
Cache-Control: max-age=31536000
Cache-Control: public
Accept-Ranges: bytes
Length: 25081 (24K) [image/svg+xml]
Saving to: "/dev/null"
100%[==================================================================================================================================================================>] 25.081 --.-K/s in 0,07s
2016-03-14 19:56:14 (328 KB/s) - "/dev/null" saved [25081/25081]
That shows the server correctly sending Expires and Cache-Control headers (via nginx).
If I go to the browser with caching enabled and open the diagnostic tools, on the first hit I see a 200 return code; then I refresh the page (Ctrl-R) and see a request with a 304 Not Modified return code.
Inspecting the Firefox cache (about:cache), I found the entry with the correct expiry, and by clicking the link on that page I was able to view it without hitting the remote server.
I also tested pages where the images are loaded from image tags (as opposed to being requested directly, as in the example above).
All the literature I have read states that with such an expiry the browser should not even attempt a connection. What's wrong? RFC 2616 states:
HTTP caching works best when caches can entirely avoid making requests
to the origin server. The primary mechanism for avoiding requests is
for an origin server to provide an explicit expiration time in the
future, indicating that a response MAY be used to satisfy subsequent
requests. In other words, a cache can return a fresh response without
first contacting the server.
Note: another question addresses how the 304 is generated; I just want to prevent the connection from being made at all.
Sandro
Thanks
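The freshness rule the RFC describes can be sketched in Python. This is a simplification (it ignores the Age header, heuristic freshness, and directives such as no-cache), applied to the header values from the wget output above:

```python
from datetime import datetime
from email.utils import parsedate_to_datetime

def is_fresh(headers: dict, now: datetime) -> bool:
    """Simplified RFC 2616 freshness check: Cache-Control max-age wins, else Expires."""
    date = parsedate_to_datetime(headers["Date"])
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return (now - date).total_seconds() < int(directive.split("=", 1)[1])
    if "Expires" in headers:
        return now < parsedate_to_datetime(headers["Expires"])
    return False

headers = {
    "Date": "Mon, 14 Mar 2016 18:55:29 GMT",
    "Cache-Control": "max-age=31536000, public",   # the two header lines, merged
    "Expires": "Tue, 14 Mar 2017 18:55:29 GMT",
}
# One day after the response the entry is still fresh, so no request should be needed.
print(is_fresh(headers, parsedate_to_datetime("Tue, 15 Mar 2016 18:55:29 GMT")))  # True
```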
How can I make my Image cached by browser and expire after particular period of time
There are several HTTP headers that you can use to effect changes to the content caching policies.
This one:
Cache-Control: no-store
instructs the browser not to cache the content at all. (Note that no-cache, despite its name, does not forbid caching; it requires the cached copy to be revalidated with the server before each reuse.)
This one:
Expires: Wed, 20 Mar 2024 02:00:00 GMT
instructs the browser to expire its cached copy by the given time.
This one:
ETag: ab10be20
instructs the browser to treat ab10be20 as a validator for the content; on subsequent requests the browser sends it back (in an If-None-Match header), and only if the value has changed does it need to download the content again.
Note that all of these are effectively advisory; there is no way to force remote caches to be purged.
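The ETag mechanism described above is a two-sided handshake. Here is a minimal sketch of the server side in Python (the respond helper and its signature are made up for illustration):

```python
def respond(request_headers: dict, current_etag: str, body: bytes) -> tuple:
    """Honor an If-None-Match conditional request against the current ETag."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""      # client's cached copy is still valid: send no body
    return 200, body         # first request, or content changed: send everything

# First request gets a full download; revalidation with a matching ETag gets 304.
print(respond({}, '"ab10be20"', b"...contents..."))
print(respond({"If-None-Match": '"ab10be20"'}, '"ab10be20"', b"...contents..."))
```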
The following is an HTTP response header from an image on our company's website.
HTTP/1.1 200 OK
Content-Type: image/png
Last-Modified: Thu, 03 Dec 2009 15:51:57 GMT
Accept-Ranges: bytes
ETag: "1e61e38a3074ca1:0"
Date: Wed, 06 Jan 2010 22:06:23 GMT
Content-Length: 9140
Is there any way to know whether this image is publicly cacheable by some proxy server? The RFC definition seems ambiguous: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1 and http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.4.
Run RED on your URL and it'll tell you whether the response is cacheable, among other information.
The headers you show appear to be cacheable.
If you would like to control the caching behavior of correctly configured proxies and web browsers, you might investigate using the Cache-Control and Expires headers to gain additional control.
Here is a webpage I had bookmarked that has one person's opinion of how to interpret the specifications you list (plus some other ones):
http://www.web-caching.com/mnot_tutorial/how.html
If you need to guarantee that someone sees a completely new image each time (even with misconfigured devices between you and them), you may want to consider using a randomized or GUID value as part of the URL.
Here is a tutorial on setting headers for proxy caching. Be sure to read the part about setting cookies!
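To make the proxy-versus-browser distinction concrete, here is a small Python sketch (the function and its policy split are illustrative, not from the tutorial) of how Cache-Control directives differ for shared and private caches:

```python
def cache_headers(shared: bool, seconds: int) -> dict:
    """Cache-Control for proxy-cacheable (shared) vs browser-only (private) responses."""
    if shared:
        # s-maxage applies only to shared caches such as proxies and CDNs.
        return {"Cache-Control": f"public, max-age={seconds}, s-maxage={seconds}"}
    # private forbids shared caches from storing the response at all.
    return {"Cache-Control": f"private, max-age={seconds}"}

print(cache_headers(True, 3600))
print(cache_headers(False, 3600))
```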