I seem to be having some problems with my Varnish setup. I have a clean install of Varnish and Nginx running on Ubuntu. Everything seems to be running, but I don't seem to be actually caching anything.
This is what I'm seeing:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Powered-By: PHP/5.5.9-1ubuntu4.14
Cache-Control: no-cache
Date: Tue, 02 Feb 2016 10:15:17 GMT
Content-Encoding: gzip
X-Varnish: 196655
Age: 0
Via: 1.1 varnish-v4
Accept-Ranges: bytes
Connection: keep-alive
I'm almost certain the problem has to do with the Age response header being 0. I have read that the Cache-Control header can be the culprit, and I have spent some time configuring both nginx and my VCL file with solutions I have read online, none of which have worked.
I'm open to any ideas, even ones I have tried before (hence why I'm not listing the steps I have already taken).
Thanks in advance for any thoughts you might have.
Remove "no-cache" and set "max-age=120" (in seconds) in the Cache-Control header instead.
Also note that if the request contains any cookies or if the response sets any cookies than by default varnish is not gonna cache.
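If you cannot change the backend, a minimal VCL sketch for Varnish 4 that overrides the header instead (the 120-second TTL is just an example value):
sub vcl_backend_response {
    if (beresp.http.Cache-Control ~ "no-cache") {
        # ignore the backend's no-cache and cache for two minutes
        unset beresp.http.Cache-Control;
        set beresp.http.Cache-Control = "max-age=120";
        set beresp.ttl = 120s;
    }
}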
I'm interacting with a server (that is out of my control) which does not perform the protocol upgrade if a request contains content (POST, PUT, PATCH with a payload). It's unclear exactly what the issue with the server is, but I noticed that when I query with --http2-prior-knowledge, the protocol is upgraded:
❯ curl -i -PUT --http2-prior-knowledge http://localhost:8081/document/v1/foo -d '{"fields": {"docid": "123"}}'
HTTP/2 200
date: Tue, 08 Nov 2022 13:26:50 GMT
content-type: application/json;charset=utf-8
vary: Accept-Encoding
content-length: 78
The same request without --http2-prior-knowledge is stuck at HTTP/1.1. This seems closer to the default behaviour of Go's HTTP client:
❯ curl -i -PUT --http2 http://localhost:8081/document/v1/foo -d '{"fields": {"docid": "123"}}'
HTTP/1.1 200 OK
Date: Tue, 08 Nov 2022 01:37:17 GMT
Content-Type: application/json;charset=utf-8
Vary: Accept-Encoding
Content-Length: 78
When I call this same API using Go's default client, the protocol is not upgraded. I've tried setting ForceAttemptHTTP2: true on the transport, but each http.Response contains a .Proto of HTTP/1.1.
I think what I need to understand is how I can mimic curl's prior-knowledge flag in Go. Is this possible?
I solved this issue by specifying a custom http2.Transport that skips the TLS dial. The ideal solution, in retrospect, is to use an SSL certificate (self-signed is sufficient), which would better guarantee the use of HTTP/2. Leaving some links for posterity.
import (
    "crypto/tls"
    "net"
    "net/http"
    "golang.org/x/net/http2"
)

c := &http.Client{
    Transport: &http2.Transport{
        // allow plain "http://" URLs (h2c)
        AllowHTTP: true,
        // skip the TLS handshake: dial a plain TCP connection instead
        DialTLS: func(netw, addr string, cfg *tls.Config) (net.Conn, error) {
            return net.Dial(netw, addr)
        },
    },
}
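For completeness, using that client against the endpoint from the question looks roughly like this (add fmt, log, and strings to the imports):
req, err := http.NewRequest(http.MethodPut,
    "http://localhost:8081/document/v1/foo",
    strings.NewReader(`{"fields": {"docid": "123"}}`))
if err != nil {
    log.Fatal(err)
}
resp, err := c.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()
fmt.Println(resp.Proto) // prints "HTTP/2.0" once h2c is in effect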
And links:
Why do web browsers not support h2c (HTTP/2 without TLS)?
https://github.com/golang/go/issues/14141
I've been pulling my hair out for days trying to serve brotli-compressed files through my local nginx install.
My configuration :
macOS 12.6, Homebrew, Laravel Valet for managing sites and SSL
default nginx install replaced with the nginx-full Homebrew formula, which allows recompiling nginx with modules -> installed with the brotli module
I have tried different nginx brotli configurations, like this one (the global directives I ended up with are sketched after the location blocks below).
I don't think I should have to do this, but I still tried to add specific proxy configurations for the files I want served with brotli:
location ~ [^/]\.data\.br(/|$) {
    add_header Content-Encoding br;
    default_type application/octet-stream;
}
location ~ [^/]\.js\.br(/|$) {
    add_header Content-Encoding br;
    default_type application/javascript;
}
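On the global side, the configuration I tried looks roughly like this (directives from the ngx_brotli module; the values are examples):
brotli on;
brotli_comp_level 6;
# also serve pre-compressed .br files when they exist next to the original
brotli_static on;
brotli_types text/css application/javascript application/json image/svg+xml;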
In the end, the HTTP response does not contain content-encoding: br.
nginx shows the module is installed:
$ nginx -V 2>&1 | tr ' ' '\n' | egrep -i 'brotli'
--add-module=/usr/local/share/brotli-nginx-module
When testing with curl, it works for gzip but not for brotli (the exact commands are sketched at the end of this post):
HTTP/2 200
server: nginx/1.23.1
date: Thu, 20 Oct 2022 09:57:20 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
x-powered-by: PHP/8.1.10
access-control-allow-origin: *
content-encoding: gzip
HTTP/2 200
server: nginx/1.23.1
date: Thu, 20 Oct 2022 09:57:21 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
x-powered-by: PHP/8.1.10
access-control-allow-origin: *
HERE IT SHOULD BE "content-encoding: br" BUT IT'S NOT
Any idea is welcome; I don't understand what is going on. Cheers.
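For reference, the curl tests force the encoding explicitly (mysite.test stands in for my local Valet domain):
curl -sI -H 'Accept-Encoding: gzip' https://mysite.test/ | grep -i content-encoding
curl -sI -H 'Accept-Encoding: br' https://mysite.test/ | grep -i content-encoding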
I have 3 Marathon servers running in HA. When I reach the REST API on the leader, it returns good data, but when I try it against one of the non-leader nodes, I do not get any data back: no strings at all. The headers say 200, but there is no data. Has anybody experienced this before?
Here is what I see on the leader:
# curl -i http://10.0.0.1:8080/v2/apps
HTTP/1.1 200 OK
X-Marathon-Leader: http://x1-master-0:8080
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Content-Type: application/json; qs=2
Connection: close
Server: Jetty(8.y.z-SNAPSHOT)
{"apps":[]}
Here is the data from the non-leader:
# curl -i http://10.0.0.2:8080/v2/apps
HTTP/1.1 200 OK
Connection: close
Server: Jetty(8.y.z-SNAPSHOT)
The problem was that the Marathon servers could not resolve each other by name. Adding the hostnames of the other Marathon servers to each server's /etc/hosts file fixed the problem.
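For illustration, each node got /etc/hosts entries along these lines (x1-master-0 is the leader name from the X-Marathon-Leader header above; the other hostnames and the third IP are placeholders):
10.0.0.1   x1-master-0
10.0.0.2   x1-master-1
10.0.0.3   x1-master-2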
I'm new to NGINX. I don't know a lot about it yet, but I'm trying to learn.
I'm curious what the best way is to serve the static content of my page using NGINX. The main reasons I want to serve the static content this way are to put less load on my application servers and to increase the page load speed.
I came across a couple of good articles that helped me put this together: here, here, here, and here.
But everything is still a little unclear.
Configuration
File path: /etc/nginx/default
server {
    listen 80 default_server;
    server_name default;
    root /home/forge/site/public;

    location / {
        proxy_pass http://43.35.49.160/;
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Media: images, icons, video, audio, HTC
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    # CSS and Javascript
    location ~* \.(?:css|js)$ {
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
    }
}
Test/Result
After saving my file, I ran service nginx reload.
Next, I ran: curl -X GET -I http://45.33.69.160/index.php
I got:
HTTP/1.1 200 OK
Server: nginx/1.6.3
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-cache
Date: Fri, 08 May 2015 15:14:55 GMT
Set-Cookie: XSRF-TOKEN=eyJpdiI6IkhPa2kwK1wvd2kxMFV0TURzYnMwSXFnPT0iLCJ2YWx1ZSI6IkFpSFpvakNjcGp0b0RWcVViYXJcLzRHbmo3XC9qbStYc2VzYVh4ZHVwNW45UGNQMmltZEhvSys1NjhZVzZmckhzOGRBUk5IU1pGK084VDF1ZmhvVkZ4MlE9PSIsIm1hYyI6IjliMzc5NWQ4MWRiMjM1NzUxNjcyNGNmYWUzMGQyMDk3MjlkYTdhYzgxYTI0OGViODhlMTRjZTI4MWE5MDU2MGYifQ%3D%3D; expires=Fri, 08-May-2015 17:14:55 GMT; Max-Age=7200; path=/
Set-Cookie: laravel_session=eyJpdiI6Iklhb041MkVBak0rVm5JeUZ0VVwvZ3pnPT0iLCJ2YWx1ZSI6IitRUFlzQzNmSm1FZ0NQVVFtaTJ4cG1hODlDa2NjVDgzdXBcLzRcL0ZSM1ZPOTRvRGo5QjQ1REluTUM3Vjd3cFptV3dWdHJweTY3QW5QR2lwTkZMUlNqbnc9PSIsIm1hYyI6IjIxOTZkYzM5ODE0N2E4YmQzODMxZGYzMDY3NjI4ODM1YWQxNGMxNDRlZDZmMGE1M2IwZWY2OTU4ZmVjOTIyMjkifQ%3D%3D; expires=Fri, 08-May-2015 17:14:55 GMT; Max-Age=7200; path=/; httponly
Then, I ran curl -X GET -I http://45.33.69.160/css/custom.css
I got:
HTTP/1.1 200 OK
Server: nginx/1.6.3
Date: Fri, 08 May 2015 15:16:03 GMT
Content-Type: text/css
Content-Length: 2890
Last-Modified: Thu, 07 May 2015 03:02:38 GMT
Connection: keep-alive
ETag: "554ad5ce-b4a"
Accept-Ranges: bytes
Why do I see Cache-Control: no-cache when I just set up caching?
Everything is just unclear to me right now.
Questions
Can someone please make it clear how to do the following?
configure this properly
test whether the configuration works
see the difference between caching and not caching
benchmark it and print that report on a page or in the CLI
Cache-Control: no-cache
As said in this answer about no-cache, which links to the spec, Cache-Control: no-cache tells the user agent and in-between caches which caching style to use (namely, to revalidate with the server each time). This applies if you use nginx exclusively. If you use it as a pass-through proxy, you need to set proxy_ignore_headers, like
proxy_ignore_headers Cache-Control;
Config
Apart from that: in the NGINX reference about content caching, it says to put the line
proxy_cache_path /data/nginx/cache keys_zone=one:10m;
in the http part, followed by
proxy_cache one;
in the server part.
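Putting those together, a minimal sketch (the path, the one:10m zone, and the 10-minute validity are example values to adjust):
http {
    # where cached responses are stored, plus a shared memory zone for keys
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;

    server {
        proxy_cache one;
        # don't let the backend's Cache-Control: no-cache disable caching
        proxy_ignore_headers Cache-Control;
        # with Cache-Control ignored, cache 200 responses for 10 minutes
        proxy_cache_valid 200 10m;
    }
}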
Testing
In this SF question, it says to test caching behavior by adding the X-Cache-Status header via the config file
add_header X-Cache-Status $upstream_cache_status;
Its answer states that you can view headers with:
the Firefox addon Firebug
the Chrome debugging console
cURL (curl -I)
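For example, requesting the same proxied asset twice (URL taken from the question) should show the status flip once caching works; $upstream_cache_status reports values such as MISS, HIT, and EXPIRED:
curl -I http://45.33.69.160/css/custom.css    # first request: X-Cache-Status: MISS
curl -I http://45.33.69.160/css/custom.css    # repeat within 10m: X-Cache-Status: HIT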
I'm trying to figure out a cross-domain API issue.
I have an application created with Sencha Touch 2.3.1 that is using Ajax to fetch data from remote server.
The issue I am facing is that responses to Ajax requests against the local server do not contain all the headers.
On remote server, all works fine and headers are ok.
Here are two captures that show the headers sent and received for each server individually:
1 - headers sent and received from localhost (http://local.api - vhost)
Headers received:
Connection Keep-Alive
Content-Length 274
Content-Type text/html; charset=iso-8859-1
Date Mon, 07 Jul 2014 10:58:54 GMT
Keep-Alive timeout=5, max=100
Location http://local.api/fa/?ref.agent/lista-clienti&_dc=1404730734262
Server Apache/2.2.17 (Win32) PHP/5.3.3
Headers sent:
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language ro-ro,ro;q=0.8,en-us;q=0.6,en-gb;q=0.4,en;q=0.2
Content-Length 33
Content-Type application/x-www-form-urlencoded; charset=UTF-8
Host local.api
Origin http://sencha.local
Referer http://sencha.local/fisa-agenti/index.html
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0
2 - headers sent and received from remote server (http://adgarage.ro)
Headers received
Accept-Ranges bytes
Access-Control-Allow-Credentials true
Access-Control-Allow-Origin *
Age 0
Connection keep-alive
Content-Length 375
Content-Type application/json
Date Mon, 07 Jul 2014 10:58:52 GMT
Server Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5
Via 1.1 varnish
X-Powered-By PHP/5.3.13
X-Varnish 562862498
Headers sent
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language ro-ro,ro;q=0.8,en-us;q=0.6,en-gb;q=0.4,en;q=0.2
Host adgarage.ro
Origin http://sencha.local
Referer http://sencha.local/fisa-agenti/index.html
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0
Note the Access-Control-Allow-* headers.
They are missing from the headers received from localhost.
And here is my .htaccess file:
Header set Access-Control-Allow-Origin *
Header set Access-Control-Allow-Credentials: true
this file is the same on both servers.
I have the headers_module active on the local machine.
Another thing I noticed is that the response status from local is 301 Moved Permanently, while the response status received from the remote server is 200 OK.
What am I missing?
Thank you!
I've identified the problem.
As discussed in this topic, headers were not sent because of the 301 Moved Permanently status.
My local requests were made to http://local.api/fa?ref.agent/... instead of http://local.api/fa/?ref.agent/... - notice the trailing slash missing after /fa in the first link.
Everything is OK now.
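As a side note, if the redirect cannot be avoided, mod_headers applies Header set only to successful responses by default; the always condition attaches the headers to the 301 as well:
Header always set Access-Control-Allow-Origin *
Header always set Access-Control-Allow-Credentials true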