When using nginx fastcgi_cache, I cache HTTP 200 responses for longer than any other status code, and I want to set the Expires header conditionally based on that code.
For example:
fastcgi_cache_valid 200 302 5m;
fastcgi_cache_valid any 1m;
if ($HTTP_CODE = 200) {
    expires 5m;
}
else {
    expires 1m;
}
Is something like the above possible (inside a location container)?
Sure, from http://wiki.nginx.org/HttpCoreModule#Variables:
$sent_http_HEADER
The value of the HTTP response header HEADER when converted to lowercase and
with 'dashes' converted to 'underscores', e.g. $sent_http_cache_control,
$sent_http_content_type...;
so you could match on $sent_http_response in an if-statement
There's a gotcha, though, since http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires doesn't list if as an allowed context for the expires directive.
You can work around that by setting a variable in the if block and then referring to it later, like so:

set $expires_time 1m;
if ($sent_http_response ~* "200") {
    set $expires_time 5m;
}
expires $expires_time;
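As an alternative to the if block, on nginx 1.7.9 and later (where expires accepts a variable) a map keyed on $status can do the same thing; map variables are evaluated lazily, so the final response status is known by the time expires reads the value. A minimal sketch, assuming a hypothetical fastcgi_cache zone named my_zone and a PHP-FPM backend on 127.0.0.1:9000:

# In the http block: choose an expires value from the response status.
map $status $expires_time {
    default 1m;
    200     5m;
}

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;      # hypothetical backend
        fastcgi_cache my_zone;            # hypothetical cache zone
        fastcgi_cache_valid 200 302 5m;
        fastcgi_cache_valid any 1m;
        expires $expires_time;            # evaluated when headers are sent
    }
}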
In image #1, as you can see, I am getting a valid ES response when firing a GET request. However, if I try doing the same thing through the NGINX reverse proxy that I have created and hit myip/elasticsearch, it returns the error shown in image #2. Can someone help me with this?
server {
    listen 80;
    server_name myip;

    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200;
    }

    location /kibana/ {
        proxy_pass http://127.0.0.1:5601;
    }
}
The right way is to specify both of those slashes. The slash after 127.0.0.1:9200 is essential: without it, your request /elasticsearch/some/route would be passed as-is, while with the slash it is passed as /some/route.

In nginx terms this means you have specified a URI after the backend name. In that case, the URI prefix given in the location directive (/elasticsearch/) is stripped from the original URI (leaving some/route at this stage), and the URI specified after the backend name (/) is prepended to it, giving / + some/route = /some/route. You can specify any path in a proxy_pass directive; for example, with proxy_pass http://127.0.0.1:9200/prefix/ the request would be passed to the backend as /prefix/some/route.

Once you see how this works, it also follows that specifying location /elasticsearch { ... } instead of location /elasticsearch/ { ... } would give you //some/route instead of /some/route. I'm not sure that is exactly the cause of your problem, but configurations like
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;
}
are more correct.
Now may I ask what you get with exactly this configuration in response to curl -i http://localhost:9200/ and curl -i http://localhost/? I want to see all the headers (of course, except those containing private information).
The problem is the path: nginx is passing it unmodified. Add a slash to the proxy_pass URLs.
server {
    listen 80;
    server_name myip;

    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200/;
    }

    location /kibana/ {
        proxy_pass http://127.0.0.1:5601/;
    }
}
From the documentation:
Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here the request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of URI to be replaced, the full request URI is passed (possibly, modified).
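To make that replacement rule concrete, here is a sketch of how a few request URIs would map under different proxy_pass spellings (the /prefixed/ and /raw/ location names are illustrative only, not from the question):

location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;         # /elasticsearch/some/route -> /some/route
}

location /prefixed/ {
    proxy_pass http://127.0.0.1:9200/prefix/;  # /prefixed/some/route -> /prefix/some/route
}

location /raw/ {
    proxy_pass http://127.0.0.1:9200;          # no URI part: /raw/some/route passed as-is
}

The rule is simply whether proxy_pass carries a URI part to substitute for the matched location prefix.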
I have a web2py configuration, operating on top of nginx, which produces a 404 error when browser caching is enabled for certain static files. The problem is described here, and I'm now asking this question in a web2py context, because that may be relevant to the issue, or because there may be some web2py-specific workaround or solution.
nginx.conf looks like this:
worker_processes 3;

events {
    worker_connections 1024;
}

http {
    access_log [/...];
    error_log [/...] crit;
    include mime.types;
    sendfile on;

    server {
        server_name [...] [...];
        return 301 [...] $request_uri;
    }

    server {
        listen 127.0.0.1:[...];
        root [/...];

        location / {
            include uwsgi_params;
            uwsgi_pass [.../uwsgi.sock];
        }
    }
}
Adding the following location block, either before or after the "location" block above, causes the server to stop serving the static files that match the pattern in question:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 1d;
}
It was suggested in the previous thread that this may be a uwsgi issue, although it's possible that the problem has some other cause. How can I implement browser caching without causing the 404 issue?
It seems to me that you are serving only dynamic content. Also, nginx selects a single location block to process a request, and that block needs to contain the complete configuration for handling it.
In your case, the uwsgi configuration from the location / block needs to be replicated across any new dynamic locations you may add. For example:
server {
    ...
    include uwsgi_params;

    location / {
        uwsgi_pass [.../uwsgi.sock];
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 1d;
        uwsgi_pass [.../uwsgi.sock];
    }
}
As in the example above, you can move the include statement into the outer block and allow its statements to be inherited by both locations (assuming it only contains uwsgi_param statements).
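To confirm the caching headers are actually being applied once this is in place, a quick check with curl against a static asset (the path here is hypothetical) should show an Expires date roughly one day ahead:

curl -sI http://localhost/welcome/static/images/logo.png | grep -i -E 'expires|cache-control'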
I am using Sinatra as a web service and AngularJS to make the calls:
post '/loginUser' do
  session[:cui] = user['cui']
end

get '/cui' do
  return session[:cui].to_s
end
But it doesn't seem to work: the '/cui' call returns an empty string. Any help would be greatly appreciated.
UPDATE:
Setting headers['Access-Control-Allow-Credentials'] = 'true' in Sinatra allows me to send the session, but it seems like the $http directive is not using the browser's cookies.
On the Sinatra app:
before do
  headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS'
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Headers'] = 'accept, authorization, origin'
  headers['Access-Control-Allow-Credentials'] = 'true'
end
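Depending on the request, the browser may also issue a CORS preflight (an OPTIONS request) before the real call, and Sinatra has to answer it with the same headers; a before filter alone won't help if no route matches. A minimal sketch (the splat route and plain 200 response are assumptions, not from the original app):

options '*' do
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS'
  headers['Access-Control-Allow-Headers'] = 'accept, authorization, origin'
  headers['Access-Control-Allow-Credentials'] = 'true'
  200
end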
AngularJS app:
host = 'http://127.0.0.1:5445/'

@viewController = ($scope, $http) ->
  $scope.getCui = () ->
    $http.get(host + 'cui', { withCredentials: true }).success (data) ->
      $scope.cui = data
      console.log data
Explanation:
AngularJS uses its own cookie system, so we need to specify that the cookies can be passed through the $http.get call using the { withCredentials: true } configuration object. Sinatra needs to accept the cross-domain cookies, so we need the headers mentioned above.
Note: the 'Access-Control-Allow-Origin' header cannot be a wildcard when credentials are involved.
One option around this would be to configure an HTTP server with a proxy pass, so you could hit the same domain without incurring a cross-origin error. That way you can continue to properly maintain your abstractions as two separate apps.
Here is a brief example with nginx:
upstream angular_app {
    server localhost:3003;
}

upstream sinatra_app {
    server localhost:3004;
}

server {
    listen 80;
    server_name local.angular_app.com;
    root /Users/username/source/angular_app/;

    location / {
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://angular_app;  # everything else goes to the Angular app
    }

    location ~ ^/api/(.*)$ {
        proxy_set_header Host $http_host;
        proxy_read_timeout 1200;
        proxy_pass http://sinatra_app/;
    }
}
By routing at the server level, you can successfully bypass domain restrictions AND you can keep the applications separate.
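With the proxy in place, the AngularJS call can drop withCredentials and the absolute host and just use a relative path; a sketch against the /api/ location above, reusing the cui route from the earlier question:

$scope.getCui = () ->
  $http.get('/api/cui').success (data) ->
    $scope.cui = data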
I want to check whether a page view on localhost encountered a cache hit or a cache miss. I'm running Varnish on my local machine, and I want to check the X-Cache header in the response. But I can't see any X-Cache header there; I can see Server, ETag, X-Runtime, etc., but not X-Cache.
How can I see the X-Cache header?
By adding it:
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
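A quick way to verify is to request the same page twice and watch the header flip from MISS to HIT (assuming Varnish is what's listening on localhost port 80 and the page is cacheable):

curl -sI http://localhost/ | grep -i x-cache    # first request: X-Cache: MISS
curl -sI http://localhost/ | grep -i x-cache    # repeat request: X-Cache: HIT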
Because currently only Chrome and Opera support WebP, I was wondering if I could target those two particular browsers and redirect them to fetch another version of my website, to make my site load faster.
Thanks.
I solved this problem like this:

- Check if the client advertises "image/webp" in its Accept header
- If WebP is supported, check if the local WebP file is on disk, and serve it
- If the server is configured as a proxy, append a "WebP: true" header and forward to the backend
- Append "Vary: Accept" if a WebP asset is served
In nginx:

location / {
    if ($http_accept ~* "webp") { set $webp "true"; }
    # Use the $webp variable to add the correct image.
}
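If the WebP variants live next to the originals on disk, one way to act on that check is to turn the Accept header into an optional file suffix via map and let try_files pick the .webp sibling when it exists. A sketch, assuming a foo.jpg.webp naming scheme on disk (the map goes in the http block):

map $http_accept $webp_suffix {
    default  "";
    "~*webp" ".webp";
}

server {
    location ~* \.(jpg|jpeg|png)$ {
        add_header Vary Accept;                  # caches must key on Accept
        try_files $uri$webp_suffix $uri =404;    # prefer foo.jpg.webp, fall back to foo.jpg
    }
}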
In my case, I use thumbor software to convert images.
https://github.com/globocom/thumbor
pip install thumbor
My conf:
upstream thumbor {
    server 127.0.0.1:9990;
    server 127.0.0.1:9991;
    server 127.0.0.1:9992;
    server 127.0.0.1:9993;
    server 127.0.0.1:9994;
}

location / {
    if ($http_accept ~* "webp") {
        set $webp "T";
    }
    if ($uri ~* "(jpg|jpeg)$") {
        set $webp "${webp}T";
    }
    proxy_cache_key $host$request_uri$webp;
    if ($webp = "TT") {
        rewrite ^(.*)$ "/unsafe/smart/filters:format(webp)/exemple.com$uri" break;
        proxy_pass http://thumbor;
        add_header Content-Disposition "inline; filename=image.webp";
    }
    if ($webp != "TT") {
        proxy_pass http://exemple.com;
    }
}
For a while now, thumbor has supported automatic WebP conversion:
https://github.com/thumbor/thumbor/wiki/Configuration#auto_webp
You'll still have to configure the load balancer to pass the WebP Accept header, but other than that, thumbor will take care of everything for you.
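If you go that route, the relevant switch in thumbor.conf is a single setting (per the linked wiki page):

AUTO_WEBP = True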
Hope that helps!