I've spun up a new Amazon Linux AMI EC2 instance and installed varnish on top of it via the following process:
Installed varnish with sudo yum install varnish and set it to run on startup
Set up /etc/sysconfig/varnish to have Varnish listen on port 80
Added an off-the-shelf WordPress /etc/varnish/default.vcl from this repo
Modified /etc/httpd/conf/httpd.conf to have Apache listen on port 8080 (the relevant lines from both files are sketched after these steps)
Restarted httpd and varnish
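For reference, the two port changes boil down to something like the following; the stock files differ a little between package versions, so these are just the relevant lines rather than full copies of my configs:

# /etc/sysconfig/varnish -- have varnishd accept client traffic on port 80
VARNISH_LISTEN_PORT=80

# /etc/httpd/conf/httpd.conf -- move Apache out of the way onto 8080
Listen 8080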
When running /usr/sbin/varnishd -V I get the following: varnishd (varnish-3.0.7 revision f544cd8)
And it works. Kind of. It seems to be having the following issues:
When up and running, if I hit a page it will load fully and then serve the Varnish-cached page as expected, with the headers all set up correctly. However, the cache expires after 120 seconds, not after the time set in the default.vcl file (see the sketch after this list), yet if I make a deliberate error in default.vcl, Varnish refuses to start, so the file is clearly being read.
When using varnishstat, varnishtop or varnishsizes, nothing shows up. I get the default screen for each tool, but no hits, misses or sizes appear at all.
Varnish does not seem to want to run properly on startup, and after being left ticking over with no access to the server for a week, Varnish seems to have stopped working. On top of that, when running service varnish restart, the "Stopping the cache" step always fails.
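In case it is relevant: in Varnish 3, 120 seconds is the default TTL (the VARNISH_TTL value that /etc/sysconfig/varnish passes to varnishd as -t), which gets applied whenever the backend response doesn't carry a usable Expires/Cache-Control header and the VCL doesn't override it. A minimal sketch of forcing a longer TTL from the VCL, with the 24-hour value purely as an example rather than anything from my actual file:

sub vcl_fetch {
    # Varnish 3 syntax: override the TTL regardless of what Apache/WordPress sends back
    set beresp.ttl = 24h;
}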
/etc/sysconfig/varnish https://pastebin.com/UmamLFMG
/etc/varnish/default.vcl https://pastebin.com/C3JdkuUe
/etc/httpd/conf/httpd.conf https://pastebin.com/TzvDVhkq
I developed a Ruby website and, in order to deploy it, I followed this tutorial. Everything went well, but I skipped all the user-switching steps because I really didn't see the point of them; to be honest, maybe that is the problem.
My problem is that the server is running, as we can see
And I can do a successful curl on it using the local machine, but I can't access it online.
To be honest, it's the first time I'm deploying a website ever, so I'm sure I'm just missing something obvious (DNS, maybe), but I don't really know what.
The problem might also come from the fact that I'm using the Passenger and Nginx binaries provided by the installed passenger gem. I didn't install Passenger or Nginx system-wide, so it's using the binaries bundled with the gem.
EDIT:
Thanks all for the current answers. I think the problem, as stated in the first comment under this question, is that I'm not using the default server port configured by Nginx but another one, so I'm going to try adding my port to the Nginx config file.
And to clarify a bit, because I don't have a server name, I'm running my tests using:
curl ipaddress:port
EDIT 2:
I just tried looking at the config file, and it appears that Passenger generates its own Nginx config file (because it runs with its own standalone Nginx binary) that looks like that, so the port must not be the problem.
Maybe I really have no choice but to use port 80, but now I'm not even sure I can reach the standalone Nginx from outside my VM. I'm a total beginner with Nginx.
EDIT 3:
A netstat gives me this
So the nginx server really is running, but how can I access it? curl ipaddress:8080 (I changed the port since the first try: 90 is now replaced with 8080) is not working, but on the local machine a curl on 0.0.0.0:8080 still works.
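For what it's worth, these are the sorts of checks I gather are worth running; the port is my current 8080, PUBLIC_IP is a placeholder for the VM's public address, and the firewall tooling is an assumption since I don't know what the VM image ships with:

sudo ss -tlnp | grep 8080        # confirm nginx is bound to 0.0.0.0:8080 and not only 127.0.0.1
sudo iptables -L -n | grep 8080  # check whether a host firewall rule is dropping the port
curl -v http://PUBLIC_IP:8080/   # from outside the VM; a hang usually points at a firewall or cloud security-group rule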
I have an nginx server deployed as a reverse proxy, and everything works great as long as I use the service regularly.
The issue happens when the nginx service I have deployed is inactive, i.e. no requests processed, for a few days.
When I then open the application through nginx, the static files take a very long time to download even though they are only a few bytes in size.
The issue goes away after I restart my nginx server.
I am using OpenResty version 1.15.8.3.
Any suggestions/help will be highly appreciated.
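In case it helps narrow things down, curl's timing variables are a simple way to see where the time goes on one of the slow static files; the URL below is only a placeholder for one of the affected files:

curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' http://proxy-host/static/app.css

A big gap between connect and ttfb would point at the upstream (or the proxy waiting on it), while a fast ttfb with a slow total would point at the transfer itself.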
So I rebooted an EC2 instance. When I went to pull up the website that points to that instance, it was down with a 521 error saying the web server is down. We use Nginx as the web server.
I haven't tried much, as I am not familiar with this issue. I do know that I should try restarting nginx; I just do not know from which directory to do it.
If everything was working fine for you before, then you can simply bring nginx back up on the EC2 machine using the command below:
sudo service nginx restart
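The service command works from any directory, so there is no particular place you need to cd into first. On images that use systemd (an assumption about which distro/Amazon Linux version is involved), the equivalent commands are:

sudo systemctl restart nginx
sudo systemctl status nginx   # confirm the service actually came back up
sudo nginx -t                 # validate the config if you suspect it was changed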
The config file for NGINX was not pointing to the correct website. I guess someone had changed the file, and my rebooting the instance activated the changes. I just went into the config file, changed the site back to the one I was using, restarted, and bingo.
I have installed Magento 2 on EC2, with a MySQL database hosted separately on an RDS instance. I have configured Varnish on my EC2 server.
Whenever I open multiple tabs on the website, Varnish crashes on a couple of product pages with the following error:
Error 503 Backend fetch failed
Varnish version: 4.0.4 on CentOS 6
I had the same problem and finally figured out what the reason was. We had the website silently running for testing, without problems, for a week. When we pointed our domain at the Amazon EC2 instance, Varnish crashed after a few hours.
But that may have been a coincidence. I found out someone was scanning the IP looking for a way in; the logs were full of attempted requests to /sql, /phpmyadmin, /administrator and so on.
That is what was causing Varnish to crash.
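One way to keep that kind of probing off the backend entirely is to reject the scanner paths in vcl_recv so they never reach Apache/PHP. This is only a sketch in Varnish 4 syntax, and the path list is just an example based on the paths mentioned above, not a complete rule set:

sub vcl_recv {
    # short-circuit obvious scanner/probe URLs before they hit the backend
    if (req.url ~ "^/(sql|phpmyadmin|administrator)") {
        return (synth(403, "Forbidden"));
    }
}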
I have a rather strange issue that appears to have only started in the last few days (no changes have been made in that time). I'll try and describe it as best I can, but please let me know if I need to post particular configs etc.
I have a setup with an nginx load balancer in front of 2 backend nginx web servers running on CentOS 6.3. This site has been running fine up until now, but recently, when visiting the home page, it has been showing me the page I created as the default site on the webserver (a discreet notice that says the host header has not been recognised on the webserver). When you visit a sub-page, e.g. domain.com/about-us, it is fine, but visiting domain.com shows the default site. It appears that this problem can be solved by disabling the caching on the load balancer for this particular site; as soon as the cache is not used, the homepage shows correctly.
The cache on the load balancer is held in a RAM drive mapped to /cache with a total available size of 256Mb. The cache configuration is as follows
proxy_cache_path /cache levels=1:2 keys_zone=app-cache:30m max_size=126m inactive=10m;
proxy_temp_path /cache/tmp;
proxy_buffer_size 8192;
proxy_max_temp_file_size 1m;
I reduced the total size of the cache last night (hence the 126m max_size above), as I have a feeling it was causing another problem I saw recently: under extreme load the cache filled to its maximum allowed size of 256Mb, which left no room on the mount point for the temp path to buffer files from the upstreams, and nginx started serving empty files to clients.
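For reference, a sketch of how the temp path could be moved off the RAM drive altogether, so that cache growth can't starve the upstream buffering; the temp directory here is just an example location, not something I've deployed:

proxy_cache_path /cache levels=1:2 keys_zone=app-cache:30m max_size=126m inactive=10m;
proxy_temp_path  /var/lib/nginx/proxy_temp;   # on the root filesystem rather than the 256Mb /cache RAM drive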
I've looked in the error log for the particular site and found a number of instances of
2013/01/03 05:47:24 [crit] 22889#0: *352983 pwrite() "/cache/tmp/0000028264" failed (28: No space left on device) while reading upstream, client: 121.58.173.7, server: www.domain.com, request: "GET /images/slider-plus.gif HTTP/1.1", upstream: "http://10.0.100.193:80/images/slider-plus.gif", host: "www.domain.com", referrer: "http://www.domain.com/example-page"
A df -h shows the /cache mount to only be using 1% of the available space at the moment, so I'm not sure why I am still getting this error.
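For completeness, "No space left on device" can also mean the filesystem has run out of inodes rather than bytes, so the fuller check on the RAM drive would be something like:

df -h /cache   # byte usage on the mount
df -i /cache   # inode usage, which can also trigger the same ENOSPC error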
The nginx version on the load balancer is 1.0.11 (built from source)
The nginx version on the web server is 1.0.15 (installed from package from epel)
I realise that nginx is in need of updating; I only realised this yesterday. If this is the problem, then I can bring forward the plans to update it, but it would be useful to have confirmation of what this problem might be and how it can be solved.
Please feel free to request any extra info needed to help diagnose this situation.
Many thanks
Eric
Edit: I thought I should add that the problem of the site not showing the homepage presents itself even when the server is under very little load; the load balancer is idle most of the time, with not much traffic. The empty-file problem only showed up when the servers were under much higher load than usual.