Our backend currently uses Route53 to route requests to Tomcat servers running on EC2 instances.
I am trying to set up nginx as a load balancer (reverse proxy) to route requests to our Tomcat servers.
Here are the instance types:
Tomcat server instance type: m3.2xlarge
nginx server instance type: c3.large
When I run ab (Apache Benchmark) with 100 concurrent connections and no keep-alive, I see that a single Tomcat instance performs better than two Tomcat servers behind an nginx server. I am now wondering if there is something wrong with my nginx config. I checked the error.log file on the nginx instance and there are no errors. Also, the CPU on the nginx instance does not go above 30% while the benchmark is running. Here is my nginx config:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 32768;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    upstream backend {
        server x.x.x.x:443;
        server x.x.x.x:443;
        keepalive 1024;
    }

    server {
        listen 443;
        server_name localhost;

        ssl on;
        ssl_certificate /etc/nginx/certs/ssl-bundle_2015_2018.crt;
        ssl_certificate_key /etc/nginx/certs/chewie.key;
        ssl_dhparam /etc/nginx/certs/dhparam.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache shared:SSL:10m;
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 10m;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4";

        location / {
            proxy_pass https://backend;
            proxy_cache_bypass true;
            proxy_no_cache true;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here are the Apache Benchmark results without nginx (a single Tomcat instance):
Concurrency Level: 100
Time taken for tests: 8.393 seconds
Complete requests: 800
Failed requests: 0
Total transferred: 368000 bytes
HTML transferred: 16800 bytes
Requests per second: 95.32 [#/sec] (mean)
Time per request: 1049.083 [ms] (mean)
Time per request: 10.491 [ms] (mean, across all concurrent requests)
Transfer rate: 42.82 [Kbytes/sec] received
These are the results with nginx in front of 2 Tomcat servers:
Concurrency Level: 100
Time taken for tests: 23.494 seconds
Complete requests: 800
Failed requests: 0
Total transferred: 381600 bytes
HTML transferred: 16800 bytes
Requests per second: 34.05 [#/sec] (mean)
Time per request: 2936.768 [ms] (mean)
Time per request: 29.368 [ms] (mean, across all concurrent requests)
Transfer rate: 15.86 [Kbytes/sec] received
Any thoughts on where I should be looking to optimize are appreciated!
Here are some things done so far to improve performance:
- Convert the traffic between nginx and the upstream servers from https to http (see the sketch below).
- Use the right SSL ciphers in nginx. Run the SSL test at www.ssllabs.com to make sure the ciphers used are secure.
- Increase the file descriptor limits on the nginx server as well as the Tomcat instances to a high number.
Will keep updating as I find more things.
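A minimal sketch of what the first item looks like, adapted from the config above; the plain-HTTP backend port (8080) is an assumption, and the proxy_http_version/Connection lines are what allow the upstream keepalive connections to actually be reused:
upstream backend {
    server x.x.x.x:8080;    # assumed plain-HTTP Tomcat connector port
    server x.x.x.x:8080;
    keepalive 1024;
}
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;            # keep-alive to upstream requires HTTP/1.1
    proxy_set_header Connection "";    # don't forward "Connection: close"
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}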
I've tried increasing the buffer sizes as suggested in other threads. Here is the output of
sudo nginx -T | grep buffer
fastcgi_buffer_size 4096k;
fastcgi_buffers 128 4096k;
fastcgi_busy_buffers_size 4096k;
proxy_buffer_size 4096k;
proxy_buffers 128 4096k;
proxy_busy_buffers_size 4096k;
I've restarted Valet, but I am still getting the error when submitting a POST request after adding something to my cart using darryldecode/laravelshoppingcart.
Any suggestions?
Thanks in advance!
I was getting this same error on my local Drupal 9 site. I spent a few hours myself trying to find a solution that actually works for Valet (Plus). After some trial and error, the following steps solved the problem for me. Hope this helps anyone facing the same issue:
Go to the nginx config folder:
cd /usr/local/etc/nginx
Edit the nginx.conf file:
sudo nano nginx.conf
Add these lines to the http {} section and save the file:
proxy_buffer_size 4096k;
proxy_buffers 128 4096k;
proxy_busy_buffers_size 4096k;
http {
    include mime.types;
    default_type application/octet-stream;
    proxy_buffer_size 4096k;
    proxy_buffers 128 4096k;
    proxy_busy_buffers_size 4096k;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    ...
Edit the fastcgi_params file:
sudo nano fastcgi_params
Append these lines to the end of the file:
fastcgi_buffer_size 4096k;
fastcgi_buffers 128 4096k;
fastcgi_busy_buffers_size 4096k;
The end of the fastcgi_params file should then look like this:
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param REDIRECT_STATUS 200;
fastcgi_param HTTP_PROXY "";
fastcgi_buffer_size 4096k;
fastcgi_buffers 128 4096k;
fastcgi_busy_buffers_size 4096k;
Restart Valet:
valet restart
Refresh the site in your browser
I'm trying to cache static files on a server rather than going to the 'upstream' server each time. This upstream server happens to be CloudFront.
Here is my nginx configuration:
nginx.conf http context:
proxy_cache_key "$scheme$host$request_uri";
proxy_cache_path /var/spool/nginx levels=1:1 keys_zone=oly_zone:1000m;
proxy_cache_use_stale updating;
proxy_cache_valid 200 301 302 10m;
proxy_cache_valid any 10s;
proxy_cache oly_zone;
website.conf:
location /gameimages/stock/ {
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_cache_valid 404 1s;
    proxy_cache_valid any 15d;
    proxy_cache oly_zone;
    proxy_pass http://d34sdfsfsadfasdfmhbsdafirsdfsdffelaut.cloudfront.net/;
}
I thought this worked, but an example response header shows this:
Accept-Ranges:bytes
Age:11515
Connection:keep-alive
Content-Length:11577
Content-Type:image/jpeg
Date:Mon, 08 Aug 2016 19:25:16 GMT
ETag:"57a47349-2d39"
Last-Modified:Fri, 05 Aug 2016 11:06:49 GMT
Server:nginx/1.4.1
Via:1.1 3ba457b8dbcd4sadfsdfe93515e26caad.cloudfront.net (CloudFront)
X-Amz-Cf-Id:N0Dlk5c28sdfsf5Cvfskb3-T6PRBfSXfEPsdfasfuOLW7SHa1hjQ==
X-Cache:Hit from cloudfront
X-Proxy-Cache:HIT
It seems to be hitting both CloudFront and the cache on the server. Am I doing something wrong?
Thanks,
Michael
If this issue is still relevant, I have found a solution.
Generally, CloudFront should be excluded from the asset loading path and the S3 bucket should be used directly as the data source.
Solution: https://dpb587.me/blog/2015/06/20/using-nginx-to-reverse-proxy-and-cache-s3-objects.html
In my case, only one line had to be added to get the caching working:
# use google as dns
resolver 8.8.8.8;
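In case it helps, here is a rough sketch of the approach from that post, proxying straight to the S3 bucket and caching the objects in the same cache zone used above; the bucket hostname is a made-up placeholder:
location /gameimages/stock/ {
    resolver 8.8.8.8;
    proxy_cache oly_zone;
    proxy_cache_valid 200 15d;
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass http://example-bucket.s3.amazonaws.com/;
}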
Also, sometimes SELinux requires some tuning to prevent nginx (13: Permission denied) errors:
sudo setsebool httpd_can_network_connect on -P
sudo semanage permissive -a httpd_t
I have nginx in front of a Spring Boot 1.3.3 application with the Tomcat access log enabled, but the log always shows the proxy IP address (127.0.0.1) instead of the real client IP.
Is the X-Real-IP header used to get the real client IP?
Is this header used by Tomcat to write the IP address in the access log?
I have this configuration:
application.properties
server.use-forward-headers=true
server.tomcat.internal-proxies=127\\.0\\.0\\.1
server.tomcat.accesslog.enabled=true
Nginx configuration:
location / {
    proxy_pass http://127.0.0.1:8091;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header Host $host;
}
The real client IP is available in the $proxy_add_x_forwarded_for variable, i.e. the X-Forwarded-For header. It contains comma-separated entries, and the very first value is the real client IP.
To log the real client IP in Tomcat's access logs, modify the pattern value in the AccessLogValve as:
%{X-Forwarded-For}i %l %u %t "%r" %s %b
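Since the question configures Tomcat through application.properties, the same pattern can also be set there rather than in server.xml; a minimal sketch, assuming your Spring Boot version exposes the server.tomcat.accesslog.pattern property:
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%{X-Forwarded-For}i %l %u %t "%r" %s %b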
I'm running a website on a VPS with Plesk 11, which uses nginx as a reverse proxy. In my web server settings I enabled gzip compression and browser caching with the following code:
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
location ~* ^/(.*\.(js|css|png|jpg|jpeg|gif|ico))$ {
    expires 2w;
    log_not_found off;
}
After editing my CSS I want the cache to be cleared so the changes take effect immediately. I'm not familiar with SSH, and I was wondering how to clear the nginx cache from the Plesk Admin Panel.
I need to enable gzip compression on an nginx server. From the Net panel in Firefox's Firebug I can see that the HTML files are gzip-compressed, but not the JavaScript and CSS files.
I have already checked mime.types and the nginx configuration file /etc/nginx/nginx.conf and found no issues, yet I still do not see gzip compression for the CSS and JavaScript files.
My nginx.conf entries are as below:
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
This is a working config that I currently use in production.
http://pastie.org/10870547
gzip on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/json
application/xml
application/rss+xml
image/svg+xml;
This config was tested via tools.pingdom.com.
You can find an example configuration in the HTML5 Boilerplate code.
# Enable Gzip
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_min_length 1100;
gzip_buffers 4 8k;
gzip_proxied any;
gzip_types
# text/html is always compressed by HttpGzipModule
text/css
text/javascript
text/xml
text/plain
text/x-component
application/javascript
application/json
application/xml
application/rss+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
gzip_static on;
gzip_proxied expired no-cache no-store private auth;
gzip_disable "MSIE [1-6]\.";
gzip_vary on;
If some of your files are compressed and some are not, then gzip is working but you might have missed a definition in gzip_types. For example, JavaScript files may be returned with any of the following content types in their headers:
application/javascript
application/x-javascript
text/javascript
To compress all JavaScript files, all three definitions should be included in gzip_types.
Check the response headers to see which Content-Type is returned for an uncompressed file, then make sure that type is also defined in gzip_types.
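As a sketch, a gzip_types line that covers all three JavaScript content types alongside the usual text types could look like this:
gzip_types text/plain text/css text/xml application/json
           application/javascript application/x-javascript text/javascript;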
Are your gzip entries within the nginx configuration "scope" where the js, css, etc. assets are being served? I ask because if you're using some sort of framework, there are sometimes different location {...} blocks that handle HTML requests vs. asset requests.
Also, when you're testing in a browser, make sure you do a hard refresh to force the server to give you a "fresh copy" of whatever you're looking at.
Finally, you can use gzip_types * to allow anything to be gzipped. Perhaps someone else can chime in on whether or not this is good practice.
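For illustration, a minimal sketch of what that looks like when assets are served from their own location block (the /assets/ path and the root are placeholders):
location /assets/ {
    root /var/www/example;
    gzip on;
    gzip_types text/css application/javascript application/x-javascript text/javascript;
}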
To compress SVG, this mime.types line is correct (pair it with image/svg+xml in gzip_types):
image/svg+xml svg svgz;