How do I configure Sinatra to omit the Date & Server HTTP response headers? I also want to omit the Content-Type & Content-Length headers when there's no response body. I'm building a REST API server for an iPhone app. My iPhone app doesn't use these headers, and I want to be as efficient as possible.
I tried adding the following after filter, but the headers are still included.
after do
  response.headers.delete('Date')
  response.headers.delete('Server')
end
A header can be effectively deleted from a Sinatra response by setting it to an empty string (not nil, but ''), e.g.:
get '/myroute/nodate' do
  response.headers['Date'] = ''
  body "Hello, No Date header in my header!"
end
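Building on that trick, below is a minimal, untested sketch of a global after filter: it clears Date and Server for every route and drops Content-Type/Content-Length when the body is empty. Bear in mind that Date and Server are often added by the web server itself (Thin, WEBrick, nginx, ...) after the Rack application returns, so clearing them inside Sinatra may not remove them from the final response on every stack.

after do
  # Clear the headers the question asks about (empty string, not nil).
  response.headers['Date'] = ''
  response.headers['Server'] = ''

  # With no body there is nothing for Content-Type/Content-Length to describe.
  if response.body.nil? || Array(response.body).join.empty?
    response.headers.delete('Content-Type')
    response.headers.delete('Content-Length')
  end
end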
I'm developing an application which is supposed to serve different content for "normal" browser requests and for AJAX requests to the same URL
(in fact, to encapsulate the response HTML in a JSON object when the request is AJAX).
For this purpose, I'm detecting an AJAX request on the server side, and processing the response appropriately, see the pseudocode below:
function process_response(request, response)
{
  if (request.is_ajax)
  {
    response.headers['Content-Type'] = 'application/json';
    response.headers['Cache-Control'] = 'no-cache';
    response.content = JSON( some_data... );
  }
}
The problem is that when the first AJAX request to the currently viewed URL is made, strange things happen in Google Chrome. If, right after the response comes back and is processed by JavaScript, the user clicks some link (a static one that navigates to another page) and then clicks the back button in the browser, he sees the returned JSON code instead of the rendered website (from the server logs I can tell that no new request is made). It seems to me that Chrome stores the latest response for the specific URL and doesn't take into account that it has a different content type, etc.
Is this a bug in Chrome, or am I misusing the HTTP protocol?
--- update 12 11 2012, 12:38 UTC
Following PatrikAkerstrand's answer, I've found the following Chrome bug: http://code.google.com/p/chromium/issues/detail?id=94369
Any ideas how to avoid this behaviour?
You should also include a Vary header:
response.headers['Vary'] = 'Content-Type'
Vary is a standard way to control caching context in content negotiation. Unfortunately, it also has buggy implementations in some browsers; see "Browser cache vary broken".
I would suggest using unique URLs.
Depending on your framework's capabilities, you can redirect (302) the browser to URL + .html to force the response format and make the cache key unique within the browser session. For AJAX requests you can still keep the suffix-less URL, or alternatively suffix the AJAX URL with .json instead.
Other options are prefixing AJAX request URLs with /api, or adding a cache-busting query parameter such as ?rand=1234.
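To make that concrete, here is a hypothetical Ruby/Sinatra-style sketch of the unique-URL idea; the routes, template, and parameters are invented for illustration and are not taken from the question:

require 'sinatra'
require 'json'

# HTML representation: ordinary browser navigation ends up here.
get '/items/:id.html' do
  erb :item                      # assumes an :item template exists
end

# JSON representation for AJAX: a different URL, hence a different cache key.
get '/items/:id.json' do
  content_type :json
  cache_control :no_cache
  { id: params[:id], html: erb(:item, layout: false) }.to_json
end

# Bare URL: redirect browsers (302 by default for GET) to the .html form so
# the rendered page and the JSON response can never share a cache entry.
get '/items/:id' do
  redirect to("/items/#{params[:id]}.html")
end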
Setting Cache-Control to no-store did the trick in my case, while no-cache didn't. This may have unwanted side effects, though.
no-store: The response may not be stored in any cache. Although other directives may be set, this alone is the only directive you need in preventing cached responses on modern browsers.
Source: Mozilla Developer Network - HTTP Cache-Control
I am using valums file-uploader to upload files. This works great if my Spring controller returns void. If I add a @ResponseBody object to my controller, IE thinks that I am about to download a file instead of uploading one and launches a download dialog.
The reason I would like to have a @ResponseBody object and not void is for error handling. How can I trick IE in this case?
I'm assuming that Spring is automagically setting the content-type to application/json for you, which will not work in IE. Ensure the content-type of your response is text/plain. Some will say that text/html is correct, and that is true for most cases. However, text/html will cause you problems if your JSON response contains HTML as IE will mess with the response. So, your safest bet is to ensure the content-type of your response is text/plain.
While we are on the topic of IE quirkiness, also be sure that you only return a 200 response if you intend to also include JSON in your response. IE will, by default, replace the content of "small" non-200 responses with a "friendly" message. "Small", I believe, is defined as a response that is less than 512 (or possibly 256) bytes.
For a list of all things you should be aware of when using IE, have a peek at the "limitations of IE" section in the Fine Uploader readme.
Each time $this->session->set_userdata() or $this->session->set_flashdata() is used in my controller, another identical "Set-Cookie: ci_session=..." header is added to the HTTP response the server sends.
Multiple Set-Cookie fields with the same cookie name in the HTTP header are not allowed according to RFC 6265.
So is there a way to use CodeIgniter sessions without it creating multiple identical Set-Cookie headers?
(I've used curl to verify the HTTP headers.)
Check https://github.com/EllisLab/CodeIgniter/pull/1780
By default when using the cookie session handler (encrypted or unencrypted), CI sends the entire "Set-Cookie" header each time a new value is written to the session. This results in multiple headers being sent to the client.
This is a problem because if too many values are written to the session, the HTTP headers can grow quite large, and some web servers will reject the response. (see http://wiki.nginx.org/HttpProxyModule#proxy_buffer_size)
The solution is to only run sess_save() once, right after all the other headers are set and before the page contents are output.
I believe you can pass an array to $this->session->set_userdata(); I haven't tested this code so it is merely a suggestion to try something along these lines:
$data = array(
    'whatever' => 'somevalue',
    'youget'   => 'theidea'
);
$this->session->set_userdata($data);
NB: When I say I haven't tested the code, I mean I haven't tested whether it reduces the number of headers sent; I have used this code before and I know it works.
In my case, the error was in the browser (Chrome). It stored two cookies and sent both to the server, which made the server create a new session every time.
I fixed it by clearing the cookies in the browser.
Hope it helps someone. :)
I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this?
When the call is made for the first time, my Linux-based Apache server returns a response from the CGI Perl script it is running. However, subsequent runs of the script seem to reuse the same response as the first time, so the data is being cached somewhere. My server logs confirm that the server is not being hit on those subsequent runs, only the first time.
This is what I am doing. I am using the following code from within a commercial application (I don't wish to name the application; it's probably not relevant to my problem):
With CreateObject("MSXML2.XMLHTTP")
    .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False
    .send
    nsrresponse = .responseText
End With
Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off caching on the response object before making the request?
I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read.
I am also trying to prevent caching by using HTTP header settings and HTML document head metadata:
Snippet of the server-side Perl CGI script that returns the response to the calling client, setting the expiry to 0 seconds:
print $httpGetCGIRequest->header(
    -type    => 'text/html',
    -expires => '+0s',
);
HTML document sent back to the client, with a cache-control meta tag in its head:
<html>
  <head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head>
  <body>
    response message generated from server
  </body>
</html>
The above http header and html document head settings haven't worked, hence my question.
I don't think that the XMLHTTP object itself even implements caching.
You send a fresh request as soon as you call .send() on it. The whole point of caching is to avoid sending requests, but that does not happen here (as far as your code sample goes).
But if the object is used in a browser of some sort, then the browser may implement caching. In that case the common approach is to include a cache-breaker in the request: a random URL parameter you change every time you make a new request (for example, by appending the current time to the URL).
Alternatively, you can make your server send a Cache-Control: no-cache, no-store HTTP header and see if that helps.
The <meta http-equiv="CACHE-CONTROL" content="NO-CACHE"> is probably useless and you can drop it entirely.
You could use WinHTTP, which does not cache HTTP responses. You should still add the cache control directive (Cache-control: no-cache) using the SetRequestHeader method, because it instructs intermediate proxies and servers not to return a previously cached response.
If you have control over the application targeted by the XMLHTTP Request (which is true in your case), you could let it send no-cache headers in the Response. This solved the issue in my case.
Response.AppendHeader("pragma", "no-cache");
Response.AppendHeader("Cache-Control", "no-cache, no-store");
As an alternative, you could also append a query string containing a random number to each requested URL.
I have a Ruby script that goes and saves web pages from various sites. How do I make sure that it checks whether the server can send gzipped files, and saves them when available?
Any help would be great!
You can send custom headers as a hash:
custom_request = Net::HTTP::Get.new(url.path, {"Accept-Encoding" => "gzip"})
You can then check the response by defining a response object:
response = Net::HTTP.new(url.host, url.port).start do |http|
  http.request(custom_request)
end
p response['Content-Encoding']
Thanks to those who responded...
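Putting the pieces together, here is a rough, untested sketch; the URL and the output filename are placeholders only. It asks for gzip and decompresses the body only when the server actually used it:

require 'net/http'
require 'uri'
require 'zlib'
require 'stringio'

url = URI.parse('http://example.com/')   # placeholder URL

# Ask the server for gzip; it may still reply uncompressed.
request = Net::HTTP::Get.new(url.path, { 'Accept-Encoding' => 'gzip' })

response = Net::HTTP.new(url.host, url.port).start do |http|
  http.request(request)
end

# Decompress only if the server actually sent gzip.
body = if response['Content-Encoding'] == 'gzip'
         Zlib::GzipReader.new(StringIO.new(response.body)).read
       else
         response.body
       end

File.write('page.html', body)            # placeholder filename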
You need to send the following header with your request:
Accept-Encoding: gzip,deflate
However, I am still learning how to code in Ruby and don't know how to set the header in the net/http library (which I assume you are using to make the request).
Edit:
Actually, according to the Ruby docs, it appears this header is part of the default headers sent if you don't specify another 'Accept-Encoding' header.
Then again, as I said in my original answer, I am still just reading up on the subject, so I could be wrong.
For grabbing web pages and doing stuff with them, ScrubyIt is terrific.