I have a website www.mysite.com running behind a load balancer. There are two servers in the load balancer cluster, each running Varnish 3.0 and Apache/PHP (I know Varnish could load balance for me, but we have a preference for a different LB technology).
Every now and again I need to purge a URL or two...
In my VCL I have 127.0.0.1 as a trusted IP for PURGEs, and a standard purge config:
acl trusted {
    "127.0.0.1";
}

sub vcl_recv {
    # ...
    if (req.request == "PURGE") {
        # Allow requests from trusted IPs to purge the cache
        if (!client.ip ~ trusted) {
            error 405 "Not allowed.";
        }
        return (lookup); # see vcl_hit
    }
    # ...
}
sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged (via vcl_hit)";
    }
    if (!(obj.ttl > 0s)) {
        return (pass);
    }
    return (deliver);
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 404 "Not in Cache";
    }
    return (fetch);
}
Now, from a shell script, I want to invalidate a URL.
curl -X PURGE http://127.0.0.1/product-47267.html
Doesn't work, but
curl -X PURGE http://www.mysite.com/product-47267.html
Does work. The problem is that I need to invalidate on each local machine in the cluster, not have the request go out and back in via the load balancer (because I don't know which machine will take the PURGE).
Hope this makes sense
LW
You need to connect to localhost, but Varnish still needs to know which Host you want to PURGE.
I'm not sure, but try something like the following:
curl -X PURGE -H "Host: www.mysite.com" http://127.0.0.1/product-47267.html
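If you need to hit every node in the cluster directly, rather than going through the load balancer, a small loop over the nodes does it. A rough sketch, with node1.internal and node2.internal standing in for whatever your two servers are actually called:

# purge one URL on every Varnish node directly (hypothetical hostnames)
for node in node1.internal node2.internal; do
    curl -X PURGE -H "Host: www.mysite.com" "http://$node/product-47267.html"
done

Note that the trusted ACL then has to include the address those requests arrive from, not just 127.0.0.1.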
I'm running CentOS 7 with DirectAdmin and have created some users with websites. This works fine on httpd, but after installing Varnish I get the notification "Apache is functioning normally".
How can I configure Varnish to send domainone.com to /var/html/www/domainone.com/public_html and domaintwo.com to /var/html/www/domaintwo.com/public_html?
I've already tried pointing the backend to the right address and port, but the page still ends up at the Apache notification.
Any help is much appreciated.
Thanks in advance.
How is your Apache configured?
The generic answer to your question would be something like:
sub vcl_recv {
    if (req.http.host == "www.domainone.com") {
        set req.url = "/var/html/www/domainone.com/public_html" + req.url;
    } else if (req.http.host == "www.domaintwo.com") {
        set req.url = "/var/html/www/domaintwo.com/public_html" + req.url;
    } else {
        return (synth(404));
    }
}
but it doesn't seem right, because Varnish passes the Host header along by default, so if your Apache virtual hosts work on their own, putting Varnish in front shouldn't change that. Have a look at varnishlog -d -q 'BereqURL' -g request and see what gets sent to the backend.
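The more usual arrangement is to leave req.url alone and let Apache's name-based virtual hosts map the Host header to the right document root; Varnish only needs a backend pointing at Apache. A minimal sketch, assuming Apache has been moved to 127.0.0.1:8080 (adjust to whatever port DirectAdmin actually puts it on):

vcl 4.0;

# Apache on the same machine, moved off port 80 (assumed port)
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

# No req.url rewriting needed: the Host header is passed through unchanged,
# so Apache's virtual hosts select /var/html/www/<domain>/public_html themselves.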
My idea is to configure Varnish Cache so that when the primary backend fails (HTTP 503, for example), it first tries another backend, and if that also fails, it serves a static error message.
Is it possible to configure it that way? P.S. I don't want Varnish to use the emergency backend unless the primary has really failed. The emergency backend always has slightly outdated data.
I am using Varnish 4, planning to move to 5.x soon. The backend is a Java or PHP application.
Sure, you can do that. You should adjust your vcl_backend_response code, using bereq.retries and return(retry):
sub vcl_backend_response {
    if (beresp.status == 503 && bereq.retries == 0) {
        set bereq.http.Host = "myNewHost";
        return (retry);
    }
    if (beresp.status == 503 && bereq.retries > 0) {
        # synth() is not available on the backend side; abandoning the
        # fetch makes Varnish give up and answer the client with a 503
        return (abandon);
    }
}
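If the intent is to fail over to a genuinely different backend rather than just change the Host header, the retry can be paired with a backend swap in vcl_backend_fetch. A sketch for Varnish 4, with hypothetical backend names and addresses (primary/emergency, 10.0.0.x):

vcl 4.0;

backend primary   { .host = "10.0.0.1"; .port = "8080"; }  # hypothetical
backend emergency { .host = "10.0.0.2"; .port = "8080"; }  # hypothetical

sub vcl_recv {
    # use the primary backend unless a retry decides otherwise
    set req.backend_hint = primary;
}

sub vcl_backend_fetch {
    # a retry triggered below comes back through here with retries > 0
    if (bereq.retries > 0) {
        set bereq.backend = emergency;
    }
}

sub vcl_backend_response {
    # primary answered 503: retry once, this time against emergency
    if (beresp.status == 503 && bereq.retries == 0) {
        return (retry);
    }
}

sub vcl_backend_error {
    # the fetch itself failed (backend unreachable): serve a static message
    synthetic("Sorry, the site is temporarily unavailable.");
    return (deliver);
}

If the emergency backend answers with its own 503 page, this sketch delivers it as-is; turning that case into the static message too takes the abandon branch shown above.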
We have two file servers (Apache, port 82) running behind a load balancer, and I have configured Varnish successfully for a domain (imgs.site.com) on both servers (port 80). It works properly when I put in a host entry pointing straight at a server, but when I access it globally (through the LB) I get an aborted request. I guess there is something missing in my configuration. Please help.
Here is my VCL configuration; I have the same configuration on both the file1 and file2 servers:
backend default {
    .host = "127.0.0.1";
    .port = "82";
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

sub vcl_recv {
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    }
    if (req.http.host == "imgs.site.com") {
        set req.http.host = "imgs.site.com";
        set req.backend = default;
        return (lookup);
    }
}
It may be a basic question, but since we're new to Varnish, we don't know how to solve it.
So to clarify: you have a load balancer for the domain imgs.site.com passing requests to port 80 on two machines. Each of those runs Varnish and routes requests back to itself on port 82. If some request gets routed to HTTP server A, and the same request later comes in and gets routed to HTTP server B, the second request will be as slow as the first, and you'll end up with the same object cached on both machines. You would get better cache performance if you set up a single Varnish and used it as your load balancer in a round-robin configuration.
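For reference, a single-Varnish round-robin setup in the Varnish 3 syntax used above would look roughly like this (the backend addresses are made up):

backend web1 { .host = "10.0.0.1"; .port = "82"; }  # hypothetical
backend web2 { .host = "10.0.0.2"; .port = "82"; }  # hypothetical

director balanced round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

sub vcl_recv {
    set req.backend = balanced;
}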
But to solve it the way it is set up now: you can get diagnostic information about how Varnish responds to a request by running varnishlog while the request comes in. You can also verify that a request from the Varnish machine to its backend (in this case, itself) works by running, from a shell on the Varnish machine:
$ telnet 127.0.0.1 82
and if you see a success message, enter a basic GET command (with two returns afterward):
GET / HTTP/1.0
You can test more complex requests requiring authentication or POST payloads using wget or curl commands.
And of course, verify that the http server is receiving the request by checking the logs.
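For example, the telnet check can be done with curl instead, either against the backend on port 82 or against Varnish itself on port 80 (the image path here is made up):

curl -I -H "Host: imgs.site.com" http://127.0.0.1:82/some-image.jpg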
I have been Googling aggressively, but without luck.
I'm using Varnish with great results, but I would like to host multiple websites on a single (Apache) server without Varnish caching all of them.
Can I specify, by URL, which websites to cache?
Thanks
(edited after comment) It's req.http.host, so in your VCL file (e.g. default.vcl) do:
sub vcl_recv {
    # don't cache foo.com or bar.com - optional www
    if (req.http.host ~ "(www\.)?(foo|bar)\.com") {
        pass;
    }
    # cache foobar.com - optional www
    if (req.http.host ~ "(www\.)?foobar\.com") {
        lookup;
    }
}
And in Varnish 3 VCL:
sub vcl_recv {
    # don't cache foo.com or bar.com - optional www
    if (req.http.host ~ "(www\.)?(foo|bar)\.com") {
        return (pass);
    }
    # cache foobar.com - optional www
    if (req.http.host ~ "(www\.)?foobar\.com") {
        return (lookup);
    }
}
Yes,
in vcl_recv you just match the hosts that you would like not to cache and pass them. Something like this (untested):
sub vcl_recv {
    # don't cache foo.com or bar.com - optional www
    if (req.http.host ~ "(www\.)?(foo|bar)\.com") {
        return (pass);
    }
}
For Varnish 4, replace lookup with hash.
default.vcl:
sub vcl_recv {
    # don't cache foo.com or bar.com - optional www
    if (req.http.host ~ "(www\.)?(foo|bar)\.com") {
        return (pass);
    }
    # cache foobar.com - optional www
    if (req.http.host ~ "(www\.)?foobar\.com") {
        return (hash);
    }
}
We used to have a caching proxy setup using a very early version of Varnish (0.5ish, I think) which used the 'restart' action to send requests to a second backend in the case of a 404 on the first.
The new version of Varnish doesn't seem to support this - the 'restart' action no longer seems to be supported, and the 'req.restarts' variable is no longer recognised. Is such behaviour possible?
The documentation seems to be out of date, as do many of the online examples. man 7 vcl seems to reflect current behaviour though.
If it's not possible with Varnish, can you suggest another solution?
Here are the relevant bits of our old Varnish config:
sub vcl_recv {
    # remove cookies
    remove req.http.Cookie;

    if (req.restarts == 0) {
        set req.backend = backend1;
    } else if (req.restarts == 1) {
        set req.backend = backend2;
    }

    # remove any query strings
    set req.url = regsub(req.url, "\?.*", "");

    # force lookup even when cookies are present
    if (req.request == "GET" && req.http.cookie) {
        lookup;
    }
}

sub vcl_fetch {
    # we might set a cookie from the Rails app
    remove obj.http.Set-Cookie;

    # force minimum ttl of 1 year
    if (obj.ttl < 31536000s) {
        set obj.ttl = 31536000s;
    }

    if (obj.status != 200 && obj.status != 302) {
        restart;
    }
}
It seems this behaviour has been reinstated in more recent versions of Varnish.
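In Varnish 3, for example, the same restart-on-404 pattern can be written roughly as follows (the backend addresses are placeholders):

backend backend1 { .host = "10.0.0.1"; .port = "80"; }  # hypothetical
backend backend2 { .host = "10.0.0.2"; .port = "80"; }  # hypothetical

sub vcl_recv {
    if (req.restarts == 0) {
        set req.backend = backend1;
    } else {
        set req.backend = backend2;
    }
}

sub vcl_fetch {
    # 404 from the first backend: restart the request against the second
    if (beresp.status == 404 && req.restarts == 0) {
        return (restart);
    }
}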