Varnish cache real (body) size vs Content-Length

Sometimes, when an object is not in the cache, Varnish sends a response body that is smaller than the size declared in the Content-Length header - for example, only part of an image.
Is it possible to construct a rule along these lines (pseudocode)?
if (beresp.http.Content-Length != real_object_body_size) { return(retry); }
I wrote a script that tests the same request against both Varnish and the backend and compares the downloaded size with the Content-Length header. The backend, unlike Varnish, sometimes ends in a timeout, but the size is always correct. The problem is rare but annoying because the objects are served with a long client-side cache lifetime.
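For reference, a script along those lines might look like this minimal Python sketch (the hosts, ports, and test path are assumptions; point them at your own Varnish and backend):

import requests

# Hypothetical endpoints: Varnish in front, backend reachable directly.
HOSTS = {
    "varnish": "http://127.0.0.1:80",
    "backend": "http://127.0.0.1:8080",
}
PATH = "/images/example.jpg"  # hypothetical object to test

for name, base in HOSTS.items():
    # Request an uncompressed response so Content-Length matches the raw body.
    r = requests.get(base + PATH,
                     headers={"Accept-Encoding": "identity"},
                     timeout=30)
    declared = int(r.headers.get("Content-Length", -1))
    actual = len(r.content)
    verdict = "OK" if declared == actual else "MISMATCH"
    print(f"{name}: declared={declared} actual={actual} {verdict}")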

After a few days I can say that the problem was a combination of occasional backend trouble and Varnish streaming the response as a chunked transfer when the object is not yet in the cache.
Thank you @Thijs Feryn for pointing this out. I knew about that property, but until I read it here I didn't connect it to my problem at all.
It seems that "set beresp.do_stream = false;" (in vcl_backend_response) solved the problem: Varnish then fetches the complete object from the backend before delivering it, instead of streaming it to the client while the backend fetch is still in progress.

Related

Firefox always caches when Last-Modified header is set [duplicate]

I want to find a minimal set of headers that works with "all" caches and browsers (also when using HTTPS!).
On my web site, I'll have three kinds of resources:
(1) Forever cacheable (public / equal for all users)
Example: 0A470E87CC58EE133616F402B5DDFE1C.cache.html (auto generated by GWT)
These files are automatically assigned a new name when their content changes (based on the MD5 hash).
They should get cached as much as possible, even when using HTTPS (so I assume I should set Cache-Control: public, especially for Firefox?).
They shouldn't require the client to make a round-trip to the server to validate whether the content has changed.
(2) Changing occasionally (public / equal for all users)
Examples: index.html, mymodule.nocache.js
These files change their content without changing the URL, when a new version of the site is deployed.
They can be cached, but probably need a round-trip to be revalidated every time.
(3) Individual for each request (private / user specific)
Example: JSON responses
These resources must never be cached unencrypted to disk, under any circumstances. (Except maybe I'll have a few specific requests that could be cached.)
I have a general idea on which headers I would probably use for each type, but there's always something I could be missing.
I would probably use these settings:
Cache-Control: max-age=31556926 – Representations may be cached by any cache. The cached representation is to be considered fresh for 1 year:
To mark a response as "never expires," an origin server sends an Expires date approximately one year from the time the response is sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one year in the future.
Cache-Control: no-cache – Representations are allowed to be cached by any cache. But caches must submit the request to the origin server for validation before releasing a cached copy.
Cache-Control: no-store – Caches must not cache the representation under any condition.
See Mark Nottingham’s Caching Tutorial for further information.
Cases one and two are actually the same scenario.
You should set Cache-Control: public and then generate a URL which includes the build number / version of the site, so that you have immutable resources that can potentially last forever.
You also want to set the Expires header up to a year in the future, so that the client will not need to issue a freshness check.
For case 3, you could use all of the following for maximum flexibility:
"Cache-Control", "no-cache, must-revalidate"
"Expires", 0
"Pragma", "no-cache"

Classic ASP cache busting (& yet still satisfying PageSpeed score)

Scenario:
I am working with IIS and ASP, and we need to cache the site (to make Google Page Speed, and my boss, happy). We currently have IIS caching everything (asp/JS/CSS) for a period of 1 week.
Problem:
After updating the HTML content on the ASP pages, my boss sees the old version of the page until he does a (force) refresh.
Question:
How can I (force) update the server cache after I make a change to the ASP HTML content?
I would like my peers and managers to see the latest changes without making them do a forced browser refresh.
Are you configured to use the "If-Modified-Since" HTTP Header?
This explanation on Scott Hanselman's blog gives you an idea of what you are looking for - Forcing an update of a cached JavaScript file in IIS.
This page also provides a useful primer for the "If-Modified-Since" HTTP Header
Let's see if we can make the boss happy. Like you, I have a few people who think F5 or Ctrl+F5 is annoying.
Quick review: to be sure the output cache on your IIS server updates on change, let's set it to "Cache until Change".
I read that you clear it every week but if things don't change... Why?
Let's set the client browser caching defaults.
And use the following for all your page headers, letting the page expire after 30 minutes (GMT time).
Master header:
Dim dtmExp
Response.Buffer = True
Response.CharSet = "UTF-8"
' Expire the page 30 minutes from now
dtmExp = DateAdd("n", 30, Now())
Response.ExpiresAbsolute = dtmExp  ' absolute expiry time
Response.Expires = 30              ' relative expiry, in minutes
We have several options and methods to trigger our header change.
You can use sessions, cookies, DB updates, etc. In this example I'm using sessions; feel free to change things around to fit your application better.
PageEdit.asp
Session("EditedPageFullURL") = "/yourpage.asp"
In a common functions page add the following.
Function EditorsReload(eChk, erURL)
    ' Remember whether an edit is in progress
    If IsNumeric(eChk) Then
        Session("Editing") = eChk
    End If
    If Len(erURL) = 0 Then
        Exit Function
    End If
    ' Fire only on the page that was just edited
    If Session("Editing") <> "" Then
        If Session("Editing") = 1 Then
            If LCase(erURL) = LCase(Request.ServerVariables("SCRIPT_NAME")) Then
                Session("Editing") = ""
                Session("EditedPageFullURL") = ""
                ' Send do-not-cache headers once, forcing a fresh copy
                Response.Expires = -1
                Response.ExpiresAbsolute = Now() - 1
                Response.AddHeader "pragma", "no-store"
                Response.AddHeader "cache-control", "no-store, no-cache, must-revalidate"
            End If
        End If
    End If
End Function
Place the following in your page just below any headers you might have.
Call EditorsReload(1,Session("EditedPageFullURL"))
You can wrap it in a Session("AUTH") check if your site has login and member sessions set up.
Other than that, this will fire only when Session("EditedPageFullURL") is non-empty.
This will update the boss's browser headers, forcing the browser to refresh its local cache.
It is a one-time deal, so any additional page refreshes use the standard headers.
There are many ways of doing this so be creative!

AJAX query weird delay between DNS lookup and initial connection on Chrome but not FF, what is it?

I have an AJAX query on my client that passes two parameters to a server:
var url = window.location.origin + "/instanceStats";
$.getJSON(url, { "unit": unit, "stat": stat }, function(data) {
    instanceData[key] = data;
    var count = showInstanceStats(targetElement, unit, stat, limiter);
});
The server itself is a very simple Python Flask application. On that particular URL, it reads the "unit" and "stat" parameters from the query string to determine the name of a CSV file and a line within that file, reads that line, and sends the data back to the client formatted as JSON (roughly 1 KB).
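For context, a server along those lines might look roughly like this minimal Flask sketch (the CSV naming scheme and the lookup by first column are assumptions, not the asker's actual code):

import csv
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/instanceStats")
def instance_stats():
    unit = request.args.get("unit")  # selects the CSV file (assumed naming)
    stat = request.args.get("stat")  # selects a line within that file
    # NOTE: real code should sanitize "unit" before using it in a path.
    with open(f"{unit}.csv", newline="") as f:
        for row in csv.reader(f):
            if row and row[0] == stat:
                return jsonify(row)
    return jsonify([]), 404

if __name__ == "__main__":
    app.run(port=5000)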
Here is the funny thing: When I measure the time it takes for the data to come back, I observe that some queries are fast (between 20 and 40 ms), and some queries are slow (between 320 and 350 ms). Varying the "stat" parameter (i.e. selecting a different line in the CSV) doesn't seem to have any impact. The fast and slow queries usually switch back and forth (i.e. all even queries are fast, all odd ones are slow). The Python server itself reports roughly the same time for each query.
AJAX itself doesn't seem to have any impact either, as I can take the url that is constructed in the JS and paste it into the browser myself and get the same behavior. Here are some measurements from two subsequent queries:
Fast: http://i.imgur.com/VQ7qopd.png
Slow: http://i.imgur.com/YuG0ROM.png
This seems to be Chrome-specific: I've tried it on Firefox, and the same experiment yields roughly the same query time every time (between 30 and 50 ms). This is unfortunate, as I want to deploy on both Chrome and Firefox.
What's causing this behavior, and how can I fix it?
I've run into this also. It only seems to happen when using localhost. If you use 127.0.0.1 (or even the computer name), it will not have the extra delay.
I'm having it too, and it's exactly the same: my Node.js application serves Ajax requests, and no matter which URL I request, it takes either 30 ms or 300 ms, switching back and forth: odd requests are slow, even requests are fast.
The thing I see in Chrome Web Inspector (aka Chrome DevTools) is that there is a long gap between "DNS lookup" and "Initial Connection".
They say it's OCSP related here:
http://www.webpagetest.org/forums/showthread.php?tid=12357
OCSP is some kind of certificate validation protocol:
https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol
Moving from localhost to 127.0.0.1 seems to fix it: response times are 30ms now.
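As a side note, you can sanity-check the server outside the browser with a small timing loop like the following (the URL and port are assumptions): if the server answers consistently fast here, the alternating delay is client-side.

import time
import requests

URL = "http://127.0.0.1:5000/instanceStats?unit=cpu&stat=usage"  # assumed endpoint

for i in range(10):
    t0 = time.perf_counter()
    requests.get(URL)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    print(f"request {i}: {elapsed_ms:.1f} ms")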

Is there a maximum size for an ajax call (JSF)?

When I perform an Ajax call that loads part of the contents of a database, it works fine until the number of loaded entries exceeds about 400. They are mostly just text, and I'm wondering whether there is a limit?
The Ajax status bar tells me "success" and no error is thrown; maybe there is a server setting I have configured incorrectly?
PS: javax.faces.STATE_SAVING_METHOD is set to "client"; should I set it to "server"?
Thanks for any help!

How do I overcome the XHR FF POST size limit?

My XHR POST request is cut off. When I try to reload my page, information is missing. Firebug shows the following message:
Firebug request size limit has been reached by Firebug.
My question is: What are my options?
Would it work if I declared the Content-Length in the header?
I added a line to my apache config file and restarted it: LimitRequestBody 0
I increased the allowed transfer size in the MySQL config file.
Or is it a browser issue?
The only solution I could think of was to cut the data into pieces and transmit the array one piece at a time, but I don't like this idea. The content length is 91691 according to Firebug.
Any suggestions?
You just need to modify Firebug's settings. In the browser's address bar go to about:config, then look for the option extensions.firebug.netDisplayedPostBodyLimit.
Increase its value to see non-truncated requests; set it to 65535, for example.
Here you can find many other Firebug options you may want to change: http://getfirebug.com/wiki/index.php/Firebug_Preferences
