Should I disable WebClient caching? - windows-phone-7

WebClient.DownloadStringAsync does cache the server response.
Once I have received a response from the server, I keep getting that same response even without an internet connection!
Is WebClient caching smart enough to determine from the server response how long to cache?
Or is it buggy, and should I disable caching?
Background info:
Url: http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml
Fiddler trace:
GET /stats/eurofxref/eurofxref-daily.xml HTTP/1.1
Accept: */*
Referer: file:///Applications/Install/4D0DF1F7-1481-45CA-86BE-C14FF5CCD955/Install/
Accept-Encoding: identity
User-Agent: NativeHost
Host: www.ecb.europa.eu
Connection: Keep-Alive
HTTP/1.1 200 OK
Date: Sun, 25 Mar 2012 08:54:40 GMT
Server: Apache/2.2.3 (Linux/SUSE)
Last-Modified: Fri, 23 Mar 2012 13:31:39 GMT
ETag: "19d4e5-6a9-4bbe90b5904c0"
Accept-Ranges: bytes
Content-Length: 1705
Keep-Alive: timeout=3, max=200
Connection: Keep-Alive
Content-Type: text/xml
Set-Cookie: BIGipServerPOOL.www.ecb.europa.eu_HTTP=2684883628.16415.0000; path=/
...
Disabling caching via Headers does not work:
.Headers("cache-control") = "no-cache"
.Headers("HttpRequestHeader.IfModifiedSince") = DateTime.UtcNow.ToString()
Disabling caching by appending a unique parameter works:
"http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml" & "?MakeRequestUnique=" & Environment.TickCount

The integrated cache isn't smart at all. So if you expect different results when querying the page, you have to bypass it. I say 'bypass' because there's no way I know of to disable it with the WebClient (I don't think it's enabled if you directly use the HttpWebRequest class).
So if you want to use the WebClient, the best way is to append a random parameter to the request.

Related

How to capture the APIs if the APIs have no endpoints and just show base URLs?

I need help or any suggestions; I have no idea how to do this.
Request URL: https://www.vizofly.com/NTU/Stress/StreamingAssets/Schools.json
Request Method: GET
Status Code: 200 (from disk cache)
Remote Address: 172.66.43.59:443
Referrer Policy: strict-origin-when-cross-origin
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
cf-cache-status: DYNAMIC
cf-ray: 6d4c32aacdb1926d-FRA
content-encoding: br
content-type: application/json
date: Fri, 28 Jan 2022 18:14:00 GMT
etag: W/"1261-61f2caaf-3e2d8e;;;"
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
last-modified: Thu, 27 Jan 2022 16:39:11 GMT
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
report-to: {"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v3?s=aVZAHndifoZtrY0MH3O1WlauF71saxbdUuS7eBS0tReoUi5fGDG3zSlxFCTvbIwJxvGeVeiQyjT%2FVIUWKfUpxNbRT1jUi%2F9VEOvJnaBBRtJKapsW8RBKeLUxqP%2FusLzYEHQ%3D"}],"group":"cf-nel","max_age":604800}
server: cloudflare
x-content-type-options: nosniff
x-frame-options: ALLOW-FROM https://www.ntu.edu.sg
If you're asking about how to configure JMeter to send the request:
Add a Thread Group to your Test Plan.
Add an HTTP Request sampler and fill in the server name and path from the request URL above.
You may also want to add an HTTP Cache Manager to represent the browser cache, an HTTP Cookie Manager to automatically handle cookies, and so on, in order to configure JMeter to behave more like a real browser.

Varnish failing to detect valid probe page

I'm having a problem with a Varnish 3.x health probe against a Spring Boot (1.4) application. Varnish fails to detect the probe page (it returns 503 SERVICE NOT AVAILABLE) and consequently fails to route.
When I manually hit the probe URL, it works fine, but Varnish flags the probe page as being down.
If I remove the probe, everything works fine.
Pointing the probe at a static page (my.css) or any other static or dynamic URL also fails.
Looking at the logs, the response header looks like this:
HTTP/1.1 200
Content-Type: text/plain;charset=utf-8
Content-Length: 72
Date: Wed, 01 Feb 2017 15:20:48 GMT
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Comparing this to other working (non Spring Boot) applications, the only difference is that the working applications include the reason phrase "OK" after the status code, and the bad ones don't:
HTTP/1.1 200 OK
Does that mean anything?
For example, here is a good one:
HTTP/1.1 200 OK
Date: Wed, 01 Feb 2017 14:04:18 GMT
Last-Modified: Fri, 11 Nov 2016 22:00:02 GMT
Content-Type: text/css
Content-Length: 129
Server: Jetty(9.3.z-SNAPSHOT)
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Is this a Spring Boot issue? Not sure where else to look!
Any clues?
Varnish support has replied:
You found the issue: https://github.com/varnishcache/varnish-cache/issues/2069
This is "fixed" in master. But maybe you can fix your backend?
Now to see how to change the Spring Boot response message!
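One way to "fix the backend" is to get the embedded Tomcat to send the reason phrase again. The following is only a rough sketch, not a verified fix: it assumes the app runs on the embedded Tomcat that ships with Boot 1.4, and that this Tomcat version still honours the deprecated sendReasonPhrase connector attribute (the attribute was removed in later Tomcat versions).
import org.apache.catalina.connector.Connector;
import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReasonPhraseConfig {

    // Ask the embedded Tomcat to emit the reason phrase in the status line,
    // so the probe response reads "HTTP/1.1 200 OK" instead of "HTTP/1.1 200".
    @Bean
    public EmbeddedServletContainerFactory servletContainer() {
        TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
        factory.addConnectorCustomizers((Connector connector) ->
                connector.setProperty("sendReasonPhrase", "true"));
        return factory;
    }
}
The alternative, as Varnish support suggests, is to pick up the fix from Varnish master, where a status line without a reason phrase is accepted.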

What would cause SmartCloud to redirect a REST call to a login page?

I am not using the SBT, but making direct REST calls with Abdera to the current version of Connections on IBM SmartCloud. REST URL in question: https://apps.na.collabserv.com/search/serviceconfigs
Observations
When testing from my laptop (using Firefox and the REST client add-on), this works as expected. I get back an ATOM feed.
When testing from a server (on a different network), using the same method (Firefox + REST client), I get back HTML that is a log-in page.
In addition, I get this same result when I call the URL from a Java program on the same server.
In all cases, I am using the same credentials with basic authentication.
Update: If I log into SmartCloud first, on a separate tab in Firefox on the server, then call the URL as before, from another tab, it works. I get the ATOM feed as desired. Naturally, this is unsuitable as a solution, but I present it as additional information that could lead to an actual solution.
Update: Further testing shows that the local (laptop) log-in exhibits the same behavior as the server. A form-based log-in is required from the same browser, then subsequent REST calls work.
Update: Here is a relevant simplified code snippet:
private static Abdera ABDERA = new Abdera();
private static AbderaClient ABDERA_CLIENT = new AbderaClient(ABDERA);
...
String host = "https://apps.na.collabserv.com";
ABDERA_CLIENT.addCredentials(host, AuthScope.ANY_REALM, "basic", new UsernamePasswordCredentials("user", "password"));
...
ClientResponse response = ABDERA_CLIENT.get("https://apps.na.collabserv.com/search/serviceconfigs");
Summary
It appears that something about the originating server or the call is causing SmartCloud to respond with a log-in page, whereas the same call and credentials from my laptop work as expected.
Question
Where should I start troubleshooting this? How can I build the client credentials to allow programmatic log-in?
Response Headers
If it helps, here are the response headers that I get back in each case.
Unsuccessful
Status Code: 200 OK
Cache-Control: no-cache
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 1850
Content-Type: text/html
Date: Tue, 08 Oct 2013 14:15:03 GMT
Pragma: no-cache
Server: WebSEAL/6.1.1.3 (Build 110428)
Set-Cookie: PD-H-SESSION-ID=4_0_IR4***masked***oRKlJI;secure; Path=/; HttpOnly BIGipServerE3A-WebSEAL-80-fe=2132806922.20480.0000;secure; path=/
Vary: Accept-Encoding
p3p: CP="NON CUR OTPi OUR NOR UNI"
Successful
Status Code: 200 OK
Cache-Control: public, max-age=86400, s-maxage=86400, no-cache=set-cookie, private, must-revalidate
Content-Encoding: gzip
Content-Language: en-US
Content-Length: 1164
Content-Type: application/atom+xml; charset=UTF-8
Date: Mon, 07 Oct 2013 17:21:12 GMT
Expires: Tue, 08 Oct 2013 17:21:12 GMT
Server: WebSphere Application Server/8.0
Vary: Accept-Encoding
p3p: CP="NON CUR OTPi OUR NOR UNI"
x-lconn-auth: true
x-powered-by: Servlet/3.0
@Grant, is your login using SAML? I could see this redirect happening. It could also be TFIM related... maybe you should grab the auth on a different page, store the cookies, and then try connecting to the endpoint above.
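Building on that suggestion, here is a rough sketch of the cookie-then-request idea combined with pre-emptive Basic auth. It is only an illustration under assumptions: whether WebSEAL on SmartCloud accepts pre-emptive Basic auth at all, and whether your Abdera version exposes usePreemptiveAuthentication, both need to be verified.
import org.apache.abdera.Abdera;
import org.apache.abdera.protocol.client.AbderaClient;
import org.apache.abdera.protocol.client.ClientResponse;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;

public class ServiceConfigsClient {
    public static void main(String[] args) throws Exception {
        Abdera abdera = new Abdera();
        AbderaClient client = new AbderaClient(abdera);

        String host = "https://apps.na.collabserv.com";
        client.addCredentials(host, AuthScope.ANY_REALM, "basic",
                new UsernamePasswordCredentials("user", "password"));

        // Send the Authorization header up front instead of waiting for a 401
        // challenge; WebSEAL answers with its HTML login form rather than a
        // challenge, so non-preemptive auth never gets a chance to kick in.
        client.usePreemptiveAuthentication(true);

        // A first call lets the underlying HttpClient collect the WebSEAL
        // session cookie (PD-H-SESSION-ID), which is then replayed on the
        // follow-up request -- the programmatic equivalent of "log in first
        // in another tab".
        ClientResponse warmUp = client.get(host + "/search/serviceconfigs");
        warmUp.release();

        ClientResponse response = client.get(host + "/search/serviceconfigs");
        System.out.println(response.getStatus() + " " + response.getContentType());
        response.release();
    }
}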

Redirects stopped working in Firefox

I'm stumped: my website was working fine, and now in Firefox the redirects have suddenly stopped working.
I've tested IE and Chrome, and going to /login redirects me to /dashboard; however, in Firefox the page is blank (no output sent) and no errors are logged, which is why I'm assuming it's a browser-related issue. It might be due to a Firefox update, but I'm not sure how to confirm that.
Here are the headers:
Request Headers
GET /login HTTP/1.1
Host: local.example.com
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0 FirePHP/0.7.4
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Cookie: __utma=34805930.947644602.1372214584.1380730296.1380733154.30; __utmz=34805930.1378700053.15.2.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided); __utma=214248714.242656582.1377296111.1380047082.1380734348.30; __utmz=214248714.1377296111.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __qca=P0-705514134-1378344178153; __utmc=34805930; __utmb=34805930.15.10.1380733154; __utmb=214248714.5.10.1380734348; __utmc=214248714; PHPSESSID=lli8i30qkhvohfm9ufkbdvbki0
x-insight: activate
Connection: keep-alive
Response Headers
HTTP/1.1 302 Found
Date: Wed, 02 Oct 2013 17:30:58 GMT
Server: Apache/2.4.3 (Win32) OpenSSL/1.0.1c PHP/5.4.7
X-Powered-By: PHP/5.4.7
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Location: /dashboard
Content-Length: 0
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
It all looks pretty standard to me; however, Firefox stays stuck on /login. Am I missing something?
This behaviour occurs both on my local Windows host and on my remote Amazon Linux web server. The body is empty...
How could I go about debugging this?
The Expires header field in the response is really off. Firefox probably does not bother to render stale responses.
Please check the system time on your server. It is possible it is an Amazon problem, but it is also possible that one of the server users set the system time.
You can also look into setting up a Network Time Protocol (NTP) client (ntpd) to run regularly, if you don't have that yet.
I would fire up Fiddler to see what bits actually went over the wire. Among other information, Fiddler will show what content type is actually used during the HTTP request / response.
This could be related to the fact that there is no extension: Firefox could be having trouble determining whether this is a document or a folder. Try Firebug and see what URL Firefox tries to request after the redirect.

Convince Firefox to send an If-Modified-Since header over HTTPS

How can I convince Firefox (3.0.1, if it matters) to send an If-Modified-Since header in an HTTPS request? It sends the header if the request uses plain HTTP and my server dutifully honors it. But when I request the same resource from the same server using HTTPS instead (i.e., simply changing the http:// in the URL to https://) then Firefox does not send an If-Modified-Since header at all. Is this behavior mandated by the SSL spec or something?
Here are some example HTTP and HTTPS request/response pairs, pulled using the Live HTTP Headers Firefox extension:
HTTP request/response:
http://myserver.com:30000/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30000
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
If-Modified-Since: Tue, 19 Aug 2008 15:57:30 GMT
If-None-Match: "a0501d1-300a-454d22526ae80"-gzip
Cache-Control: max-age=0
HTTP/1.x 304 Not Modified
Date: Tue, 19 Aug 2008 15:59:23 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Connection: Keep-Alive
Keep-Alive: timeout=5, max=99
Etag: "a0501d1-300a-454d22526ae80"-gzip
HTTPS request/response:
https://myserver.com:30001/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30001
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
HTTP/1.x 200 OK
Date: Tue, 19 Aug 2008 16:00:14 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Last-Modified: Tue, 19 Aug 2008 15:57:30 GMT
Etag: "a0501d1-300a-454d22526ae80"-gzip
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Length: 3766
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/javascript
UPDATE: Setting browser.cache.disk_cache_ssl to true did the trick (which is odd because, as Nickolay points out, there's still the memory cache). Adding a "Cache-control: public" header to the response also worked. Thanks!
HTTPS requests are not cached so sending an If-Modified-Since doesn't make any sense. The not caching is a security precaution.
Not caching on disk is a security precaution, but it seems it does indeed affect the If-Modified-Since behavior (glancing over the code).
Try setting the Firefox preference (in about:config) browser.cache.disk_cache_ssl to true. If that helps, try sending Cache-Control: public header in your response.
UPDATE: Firefox behavior was changed for Gecko 2.0 (Firefox 4) -- HTTPS content is now cached.
