Why does Apache treat some files differently with regards to caching? - ajax

Base problem: I'm working on a web app that pulls in an XML file via Ajax. Initially my issue was that the file seemed to be cached indefinitely in the browser and never issued an HTTP request at all. That is easily mitigated by adding the 'Cache-Control: max-age=0' request header. However, now I'm at the other end of the spectrum: the server always returns 200 along with the file contents, and never a 304.
Debugging this with Fiddler has left me even more perplexed. Eventually I selected a request in Fiddler for a PNG file that was returning 304. I duplicated the request and changed only the URL to point at the XML file instead. The XML file still returns 200, despite having request headers identical to those of the 304-returning PNG file. Apache also returns many additional response headers for the XML file. The PNG file only gets Date, ETag, Server, Connection, and Keep-Alive; the XML file has all of these plus Vary, Content-Length, Last-Modified, and Accept-Ranges.
I just want the XML file to be cached like any other file: if it's new on the server, return the new content; otherwise respond with 304. So my question is, what settings are causing Apache to treat the XML file differently from the PNG file? Thanks.
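For context on what "respond with 304" involves: the browser resends the cached validator in an If-None-Match request header, and the server compares it against the resource's current ETag. A minimal sketch of that decision (a hypothetical helper for illustration, not Apache's actual implementation):

```javascript
// Sketch of HTTP revalidation: decide between 304 and 200 based on the
// If-None-Match request header and the resource's current ETag.
// Hypothetical helper for illustration only.
function revalidate(ifNoneMatch, currentETag) {
  if (ifNoneMatch) {
    // If-None-Match may list several ETags, e.g. '"abc-gzip", "abc"'
    const candidates = ifNoneMatch.split(",").map((t) => t.trim());
    if (candidates.includes(currentETag) || candidates.includes("*")) {
      return { status: 304, body: null }; // validator matches: send no body
    }
  }
  return { status: 200, body: "<fresh file contents>" };
}
```

Note that if the validator the browser sends back differs even slightly from the ETag the server currently computes, the comparison fails and a full 200 is sent every time, which is the symptom described here.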
EDIT - Per Covener's questions
No, I didn't switch out the ETag; I can try doing that when I get back in the office.
Vary: Accept-Encoding,User-Agent
No idea; I'm running a default install of Uniform Server with Apache for local development, with no modifications. (Content-Length is probably absent since the server is returning 304 with no content.)
EDIT-2
I noticed that Apache doesn't gzip the PNG file (according to Fiddler it would actually come out larger, so that makes sense). If I remove Accept-Encoding from the request, as well as the '-gzip' suffix from the ETag, Apache will return 304 for the XML file. The gzip compression is 10 to 1 for this file, so ideally it would be nice to keep the compression but still send 304s when appropriate.
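A commonly cited workaround for this behavior (mod_deflate appends "-gzip" to the ETag it serves, so the If-None-Match the browser sends back no longer matches the original validator) is to edit the incoming If-None-Match so both forms are offered. A sketch, assuming mod_headers is enabled; adapt before relying on it:

```apache
# Sketch: rewrite an incoming If-None-Match of '"xyz-gzip"' into
# '"xyz-gzip", "xyz"' so revalidation can match whether or not
# mod_deflate altered the ETag. Requires mod_headers.
RequestHeader edit "If-None-Match" '^"((.*)-gzip)"$' '"$1", "$2"'
```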

Related

Prevent truncated HTTP response from being cached

We saw this issue on one of our dev machines: the vendor.js bundle in our Angular project had somehow been cached while truncated, which breaks the web app until you clear the cache.
We do use browser caching (together with URL-hashing so caching doesn't prevent app updates).
Is there any way to prevent the browser from caching a truncated response? Actually, I would have thought the browser has this built in (i.e. it won't cache a response whose Content-Length header doesn't match the number of bytes downloaded).
The browser where we reproduced the problem was Chrome.
I think I found the issue - for whatever reason, our HTTP Response was missing the "Content-Length" header in the Response Headers.
The response passes through 2 proxies so one of them might remove the "Content-Length" header.
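As a rough illustration of why the missing header matters: a client can only detect truncation by comparing the declared Content-Length against what actually arrived, so once a proxy strips the header there is nothing left to compare. A hypothetical check:

```javascript
// Hypothetical completeness check. With no Content-Length header there
// is no declared size, so truncation cannot be detected this way at all.
function looksComplete(contentLengthHeader, bytesReceived) {
  if (contentLengthHeader == null) return null; // unknown: header missing
  return Number(contentLengthHeader) === bytesReceived;
}
```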
What we did in such a case was to add a version parameter to the library's request URL.
You just need to raise the number, and next time the browser and the caches in between will fetch a new version from the server:
e.g. www.myserver.com/libs/vendor.js?t=12254565
www.myserver.com/libs/vendor.js?t=12254566
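The pattern in those URLs can be generated rather than bumped by hand; a small sketch (hypothetical helper, and any unique build number or content hash works as the value):

```javascript
// Append a cache-busting version parameter to a URL. Changing `version`
// changes the full URL, so browsers and intermediate caches treat it as
// a brand-new resource and fetch it from the origin.
function cacheBust(url, version) {
  const sep = url.includes("?") ? "&" : "?";
  return `${url}${sep}t=${version}`;
}
```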

413 request entity too large jetty server

I am trying to make a POST request to an endpoint served by a Jetty server. The request errors out with "413 Request entity too large". But the Content-Length is only 70 KB, which I see is way below the default limit of 200 KB.
I have tried serving via an nginx server and adding client_max_body_size at the desired level, but that didn't work. I have set setMaxFormContentSize on the WebContext, and that didn't help either. I followed https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size, and that didn't help me either.
Does anyone have any solution to offer?
wiki.eclipse.org is OLD and only covers Jetty 7 and Jetty 8 (long past EOL/End of Life). The giant red box at the top of the page you linked even tells you this, and gives you a link to the up-to-date documentation.
If you see a "413 Request entity too large" from Jetty, then it refers to the Request URI and Request Headers.
Note: some 3rd party libraries outside of Jetty's control can also use HttpServletResponse.sendError(413) which would result in the same response status message as you reported.
Judging by your screenshot, which does not include all of the details (it's really better to copy/paste the text when asking questions on Stack Overflow; screenshots often hide details that are critical to getting a direct answer), your Cookie header is massive and is causing the 413 error by pushing the Request Headers over 8 KB in size.
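For current Jetty versions, the limit that matters for the URI-plus-headers case is requestHeaderSize on HttpConfiguration, which defaults to 8 KiB. A sketch of raising it, assuming the module properties used by Jetty 9.3+ start.ini/start.d files (the exact file and property name should be checked against the documentation for your version):

```ini
# start.ini / start.d sketch: raise the request header limit from the
# 8 KiB default so an oversized Cookie header no longer trips a 413.
jetty.httpConfig.requestHeaderSize=16384
```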

Web page content is not gzipped on every computer

I have configured my Apache XAMPP server to serve gzipped content as described [here][1]. This works successfully in some Firefox and Chrome browsers I have tried on different PCs (Windows and Ubuntu). I verified this by looking at the Network tab in DevTools in Firefox and Chrome, where I can see the reduced transfer size and also the header Content-Encoding: gzip. I also passed the [GIDZipTest][2].
The problem is that on my PC, and also on another laptop I tried (Windows 10), the content is not received as gzip in any browser, although the browsers send a request header saying they accept gzip. The weird thing is that when I test this in Firefox in my Ubuntu VM it receives gzipped content, but when I test it in a browser on the PC that hosts the VM it does not.
I attach some pictures.
[Firefox on my PC][3], [Chrome on my PC][4], [Firefox on VM][5]

[1]: https://ourcodeworld.com/articles/read/503/how-to-enable-gzip-compression-in-xampp-server
[2]: http://www.gidnetwork.com/tools/gzip-test.php
[3]: https://i.stack.imgur.com/9KLSO.png
[4]: https://i.stack.imgur.com/gcLsW.png
[5]: https://i.stack.imgur.com/UO9fA.png
Finally it worked after replacing http with https. I don't know why this is required, since I'm using exactly the same browser version in both cases!
P.S. CBroe was right. If I look at the request headers more carefully, I can see that in the browser that was not receiving gzipped content, the Accept-Encoding header had one more value besides gzip and deflate: br, aka Brotli, which seems to support compression only over HTTPS. Could this be the explanation? Although I didn't configure anything in XAMPP for Brotli..

F5 BIG IP - ajax POST with HTTP response truncated

JMeter 2.12.
I have a scenario that was fully functional in front of an Apache reverse proxy. Recently we replaced the reverse proxy with F5 BIG-IP technology, and now my scenario hangs.
The problem is that for one particular Ajax POST request the HTTP response is truncated: I receive a 200 OK, but the HTML content is not complete (no HTML tags, for example). When I post the same request with Firefox, the full content arrives fine.
Note that I don't receive the HTTP header Transfer-Encoding: chunked.
In this case, what can the difference be between Firefox and JMeter?
Does anyone have an idea how I could get the full HTML response?
Thanks for any reply.
That completely depends on the settings on your F5 and on what exactly you mean by "response is truncated" and "no html tags". Do you get the correct response but with the HTML tags stripped out? Or is the response just truncated, so that you only get, say, the first n bytes?
The best way to find out what is actually going wrong is to put something like Fiddler in between and look for the real difference between the responses, especially in the response headers (Content-Length, Transfer-Encoding, etc.).
When you have found the actual difference, please post it here so we can help you further.
On a side note, do you by any chance have custom coding on the F5 (iRules) that reacts to different User-Agent settings?
Given you send identical requests you should be receiving identical responses.
Use JMeter's View Results Tree listener to inspect request details, or even better compare requests which are being sent by Firefox and JMeter using a lower level network sniffer tool like Wireshark, detect the differences and configure JMeter accordingly to send the same request(s) as Firefox does.
The other reason might be JMeter truncating a large response; by default JMeter displays only the first 10 megabytes in the View Results Tree listener. If this is the case, you can add the next line to the user.properties file:
view.results.tree.max_size=0
and restart JMeter to pick up the property - it will suppress response truncation and you will be able to view the full response data.
An alternative way of setting the property is passing it via the -J command-line argument, like:
jmeter -Jview.results.tree.max_size=0 ....
References:
Full list of command-line options
Apache JMeter Properties Customization Guide

Debug static file requests from IIS6

How can I debug what is being returned by IIS(6) when the response goes through proxies before getting to the browser?
I have requests for static files which are sent with the 'Accept-Encoding: gzip' header. These are gzipped correctly. However, if a 'Via:' header (added when the response passes through a proxy) is also included, the content is not received gzipped by the browser.
I want to know if the issue is with IIS not applying the compression or related to something the proxy is doing.
How can I investigate this problem?
This is related to IIS6 not doing gzip compression when the request includes a Via header.
In case anyone else hits this problem: I believe it's due to the "HcNoCompressionForProxies" option, configurable in the metabase. It specifically lets you disable compression for proxied requests.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/05f67ae3-fab6-4822-8465-313d4a3a473e.mspx?mfr=true
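If that property turns out to be the culprit, it can be flipped with the stock adsutil.vbs admin script. A sketch only: the metabase path shown is an assumption from memory and should be verified against the linked documentation before use, and IIS needs a restart afterwards.

```
rem Sketch: disable "no compression for proxied requests" in the IIS6
rem metabase (verify the path for your install before running).
cscript.exe adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcNoCompressionForProxies FALSE
```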
If you're still interested, my answer would be to install Fiddler, probably on the client first. For HTTP snooping you can't do much better.
That would be my first port of call.
