Varnish under LB server - caching

We have two file servers (Apache on port 82) running behind a load balancer. I have configured Varnish successfully for a domain (imgs.site.com) on both servers (port 80), and it works properly when I add a host entry pointing at an individual server, but when I access it globally (through the LB) the request is aborted. I guess there is something missing in my configuration. Please help.
Here is my VCL configuration; it is identical on both file1 and file2 servers:
backend default {
    .host = "127.0.0.1";
    .port = "82";
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}
sub vcl_recv {
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    }
    if (req.http.host == "imgs.site.com") {
        set req.http.host = "imgs.site.com";
        set req.backend = default;
        return (lookup);
    }
}
It may be a basic question, but since we're new to Varnish we don't know how to solve it.

So to clarify: you have a load balancer for the domain imgs.site.com passing requests along to port 80 on two machines, and each of these runs Varnish and routes requests back to itself on port 82. If a new request gets routed to HTTP server A, and the same request comes in again later and gets routed to HTTP server B, the second request will be as slow as the first, and you end up with the same lookup cached on two machines. You would get better cache performance by setting up a single Varnish instance and using it as your load balancer in a round-robin configuration, as sketched below.
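A rough Varnish 3 sketch of that single-Varnish round-robin setup (the backend addresses are placeholders for your two Apache servers):
backend web1 { .host = "10.0.0.1"; .port = "82"; }
backend web2 { .host = "10.0.0.2"; .port = "82"; }
director filecluster round-robin {
    { .backend = web1; }
    { .backend = web2; }
}
sub vcl_recv {
    # Send every request through the round-robin director
    set req.backend = filecluster;
}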
But to solve it the way it is, you can get diagnostic information about how Varnish responds to a request by running varnishlog while the request comes in. You can further verify that a request from the Varnish machine to its backend (in this case, itself) works by running, from a shell on the Varnish machine:
$ telnet 127.0.0.1 82
and if you see a success message, enter a basic GET command (with two returns afterward):
GET / HTTP/1.0
You can test more complex requests requiring authentication or POST payloads using wget or curl commands.
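For example, a rough curl equivalent of the telnet test above, run on the Varnish machine itself (the hostname comes from your question; the path is just an example), would be:
$ curl -v -H "Host: imgs.site.com" http://127.0.0.1:82/some-image.jpg -o /dev/null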
And of course, verify that the http server is receiving the request by checking the logs.

Related

Varnish configuration with virtual hosts on CentOS 7 / DirectAdmin

I'm running CentOS 7 with DirectAdmin. I have created some users with websites. This works fine on httpd, but after installing Varnish I get the notification "Apache is functioning normally".
How can I configure Varnish to send domainone.com to /var/html/www/domainone.com/public_html and domaintwo.com to /var/html/www/domaintwo.com/public_html?
I've already tried pointing the backend at the right address and port, but the page still redirects to the Apache notification.
Any help is much appreciated.
Thanks in advance.
How is your Apache configured?
The generic answer to your question would be something like:
sub vcl_recv {
    if (req.http.host == "www.domainone.com") {
        set req.url = "/var/html/www/domainone.com/public_html" + req.url;
    } else if (req.http.host == "www.domaintwo.com") {
        set req.url = "/var/html/www/domaintwo.com/public_html" + req.url;
    } else {
        return (synth(404));
    }
}
but it doesn't seem right, because Varnish passes the Host header along by default, so if your Apache virtual hosts work when accessed directly, Varnish shouldn't change that. Have a look at varnishlog -d -q 'BereqURL' -g request and see what gets sent to the backend.
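In other words, the usual setup is to point Varnish at Apache and let Apache's own virtual hosts pick the document root from the Host header Varnish forwards. A minimal sketch (Varnish 4 syntax, assuming Apache has been moved to 127.0.0.1:8080):
backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}
sub vcl_recv {
    # The Host header (domainone.com / domaintwo.com) is passed through
    # unchanged, so Apache's virtual hosts choose the right public_html.
    set req.backend_hint = apache;
}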

Varnish emergency backend or serve static error

My idea is to configure Varnish so that when the primary backend fails (returns HTTP 503, for example), it first tries another backend, and if that fails too, serves a static error message.
Is it possible to configure it that way? P.S. I don't want Varnish to use the emergency backend unless the primary has really failed; the emergency backend always has slightly outdated data.
I am using Varnish 4, planning to move to 5.x soon. The backend is Java or PHP applications.
Sure, you can do that. You should change your vcl_backend_response code, tuning it with bereq.retries and return(retry):
sub vcl_backend_response {
    if (beresp.status == 503 && bereq.retries == 0) {
        # First 503: point the request at the emergency host and retry
        set bereq.http.Host = "myNewHost";
        return (retry);
    }
    if (beresp.status == 503 && bereq.retries > 0) {
        # synth() is not a valid return on the backend side;
        # abandon makes Varnish answer the client with a 503.
        return (abandon);
    }
}
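For the static error message itself, my understanding is that abandon hands the request to vcl_synth on the client side with a 503 status, so you can build the message there (a sketch, Varnish 4 syntax):
sub vcl_synth {
    if (resp.status == 503) {
        set resp.http.Content-Type = "text/html; charset=utf-8";
        # Hard-coded maintenance page served straight from Varnish
        synthetic({"<html><body><h1>Sorry, we are temporarily unavailable.</h1></body></html>"});
        return (deliver);
    }
}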

How do I access Varnish admin area?

This is a stupid question, sorry... I have tried googling, to no avail.
I thought it would just be visiting example.com:6082 but that doesn't seem to load anything.
# # Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
Also, as a side question (I'm still working on getting it working), will varnish cache ANY file type, even if it's an RSS feed or a .php file or anything?
Varnish doesn't have an admin area. The admin port is for the CLI varnishadm tool. It will normally pick up the port automatically. You can also use the admin port to connect to Varnish from custom tools and issue admin commands.
Check out the docs for the varnishadm tool. Here's an example of specifying the port:
varnishadm -T localhost:6082
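Once connected you can issue management commands, for example (the secret file path below is the usual default and may differ on your system):
varnishadm -T localhost:6082 -S /etc/varnish/secret status
varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.list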
There is a tool called the VAC (Varnish Administration Console) that provides a web based admin console, but it's quite expensive, and is part of Varnish Plus.
As for the other part of your question, Varnish will cache anything it thinks is safe to cache. It doesn't look so much at file types, but more at HTTP headers. If the user sends cookies for example, Varnish won't cache the page by default as the cookies may indicate the user is on a dynamic page. Varnish also only caches GET requests by default.
Check out the default vcl. For version 3:
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }
    return (lookup);
}

PURGE on a Varnish 3.0.3 cluster

I have a website, www.mysite.com, running behind a load balancer. There are two servers in the load-balanced cluster, each running Varnish 3.0 and Apache/PHP (I know Varnish could load balance for me, but we have a preference for a different LB technology).
Every now and again I need to purge a URL or two...
In my VCL I have 127.0.0.1 as a trusted IP for PURGEs, and a standard purge config:
vcl_recv:
....
if (req.request == "PURGE") {
    # Allow requests from trusted IPs to purge the cache
    if (!client.ip ~ trusted) {
        error 405 "Not allowed.";
    }
    return (lookup); # see vcl_hit
}
...
sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged (via vcl_hit)";
    }
    if (!(obj.ttl > 0s)) {
        return (pass);
    }
    return (deliver);
}
sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 404 "Not in Cache";
    }
    return (fetch);
}
Now, from a shell script, I want to invalidate a URL.
curl -X PURGE http://127.0.0.1/product-47267.html
doesn't work, but
curl -X PURGE http://www.mysite.com/product-47267.html
does work. The problem here is that I need to invalidate on each local machine in the cluster, not have the request go out and come back in via the load balancer (because I don't know which machine will receive the PURGE).
Hope this makes sense.
LW
You need to connect to localhost, but Varnish still needs to know which Host you want to PURGE.
I'm not sure, but try something like the following:
curl -X PURGE -H "Host: www.mysite.com" http://127.0.0.1/product-47267.html
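If that works, a small script can push the PURGE to every node in the cluster directly, bypassing the load balancer. A sketch (the node names are made up; since your ACL only trusts 127.0.0.1, the curl runs on each node itself over ssh):
#!/bin/sh
# Purge one path on every Varnish node without going through the LB
URL="/product-47267.html"
for NODE in web1 web2; do
    ssh "$NODE" "curl -s -X PURGE -H 'Host: www.mysite.com' http://127.0.0.1$URL"
done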

Using a second backend with Varnish 1.0.3-2 in case of 404 from first backend

We used to have a caching proxy setup using a very early version of Varnish (0.5ish, I think) which used the 'restart' action to send requests to a second backend in the case of a 404 on the first.
The new version of Varnish doesn't seem to support this - the 'restart' action no longer seems to be supported, and the 'req.restarts' variable is no longer recognised. Is such behaviour possible?
The documentation seems to be out of date, as do many of the online examples. man 7 vcl seems to reflect current behaviour though.
If it's not possible with Varnish, can you suggest another solution?
Here are the relevant bits of our old Varnish config:
sub vcl_recv {
    # remove cookies
    remove req.http.Cookie;
    if (req.restarts == 0) {
        set req.backend = backend1;
    } else if (req.restarts == 1) {
        set req.backend = backend2;
    }
    # remove any query strings
    set req.url = regsub(req.url, "\?.*", "");
    # force lookup even when cookies are present
    if (req.request == "GET" && req.http.cookie) {
        lookup;
    }
}
sub vcl_fetch {
    # we might set a cookie from the Rails app
    remove obj.http.Set-Cookie;
    # force minimum ttl of 1 year
    if (obj.ttl < 31536000s) {
        set obj.ttl = 31536000s;
    }
    if (obj.status != 200 && obj.status != 302) {
        restart;
    }
}
It seems this behaviour has been reinstated in more recent versions of Varnish.
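For reference, in Varnish 3 syntax the same fallback looks roughly like this (a sketch reusing the backend names from the old config):
sub vcl_recv {
    if (req.restarts == 0) {
        set req.backend = backend1;
    } else {
        set req.backend = backend2;
    }
}
sub vcl_fetch {
    # On a 404 from the first backend, restart and try the second one
    if (beresp.status == 404 && req.restarts == 0) {
        return (restart);
    }
}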
