I have a frontend hosted under nginx and a backend hosted under Tomcat.
I get a 405 Method Not Allowed error when I try to log in to the application.
Here is the config file nginx.conf:
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
client_max_body_size 300M;
location / {
proxy_http_version 1.1;
proxy_request_buffering off;
expires off;
try_files $uri $uri/ /index.html;
}
location /api/app/ {
proxy_http_version 1.1;
proxy_request_buffering off;
expires off;
proxy_method GET;
proxy_pass http://192.168.57.129:8080/backend-1.0/api/app/;
}
}
I also added the following lines (the readonly init-param of Tomcat's DefaultServlet) to web.xml so that the server accepts GET, POST, and other request methods:
<init-param>
<param-name>readonly</param-name>
<param-value>false</param-value>
</init-param>
The database connection works properly when I start Tomcat, but the error persists.
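Note that proxy_method GET; makes nginx send GET upstream regardless of the method the browser used, so a POST login would reach Tomcat as GET; that is one plausible cause of the 405, though only an assumption here. For comparison, a location that passes the client's original method through would look roughly like this (same upstream as above, offered only as a sketch):
location /api/app/ {
    proxy_http_version 1.1;
    proxy_request_buffering off;
    expires off;
    # no proxy_method here: the client's method (e.g. POST for login) is forwarded as-is
    proxy_pass http://192.168.57.129:8080/backend-1.0/api/app/;
}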
I am using nginx and a Spring Boot application with a Netty server, but for some requests nginx is returning a 502 error even though the Netty access logs show 200 OK for the same request. So essentially the response packet is being dropped somewhere between the Netty server and nginx.
This is my nginx.conf:
daemon off;
worker_processes 4;
worker_rlimit_nofile 100000;
pid /var/run/nginx.pid;
error_log /opt/logs/myservice/nginx-error.log warn;
events {
worker_connections 512;
use epoll;
multi_accept off;
}
http {
#server_tokens off;
#include mime.types;
default_type application/octet-stream;
################# Gzip Settings ################
gzip on;
gzip_comp_level 4;
gzip_min_length 1024;
gzip_proxied any;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js application/soap+xml;
gzip_disable "MSIE [1-6]\.";
####################################################
log_format upstreamlog '$time_local $status $remote_addr to:- $upstream_addr $request -- upstream_response_time:$upstream_response_time request_time:$request_time tid_header:$http_tid status:$upstream_cache_status slot:$http_slot slotTime:$http_slotstarttime ttlReq:$http_ttl ttlResp:$upstream_http_x_accel_expires jobFlag:$http_jobflag cookies:"$http_cookie" bytes_sent:$bytes_sent gzip_ratio:$gzip_ratio "$http_referer" "$http_user_agent" $http_x_forwarded_for';
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 30;
keepalive_requests 10000;
reset_timedout_connection on;
client_body_timeout 30;
send_timeout 300;
set_real_ip_from 10.117.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_scheme;
proxy_set_header X-Real-IP $remote_addr;
server {
listen 80;
server_name 127.0.0.1;
client_header_buffer_size 64k;
large_client_header_buffers 4 64k;
client_max_body_size 2M;
if ($host ~* ^(example)) {
rewrite ^/(.*)$ https://www.example.com/$1 permanent;
}
access_log /opt/logs/myservice/nginx-frontend.log upstreamlog;
location / {
# Proxy Settings
proxy_pass http://127.0.0.1:8000$request_uri;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_buffering off;
proxy_buffers 8 16k;
proxy_buffer_size 16k;
proxy_set_header Cookie "";
fastcgi_read_timeout 120;
proxy_read_timeout 120;
client_max_body_size 500M;
add_header Cache-Control "no-cache, max-age=0, must-revalidate, no-store";
################# Gzip Settings ################
gzip on;
gzip_comp_level 4;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js application/soap+xml;
###################################################
gzip_disable "MSIE [1-6]\.";
set_real_ip_from 10.117.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
}
location /nginx_status {
stub_status on;
access_log off;
}
}
}
I am using Spring Boot version 2.5.0.
During the issue, the CPU and memory usage are below 10%.
I have already tried changing the number of worker processes and the reverse proxy timeouts, and I have tried increasing the keepalive timeouts and the keepalive connection count.
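For context, a sketch of what upstream keepalive tuning can look like in a setup like this; the upstream name and numbers are illustrative, not my exact configuration:
# illustrative sketch: keep a pool of idle connections to the backend open
upstream netty_backend {
    server 127.0.0.1:8000;
    keepalive 32;                       # idle upstream connections kept per worker
}

server {
    location / {
        proxy_http_version 1.1;         # required for upstream keepalive
        proxy_set_header Connection ""; # do not forward "Connection: close"
        proxy_pass http://netty_backend;
    }
}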
Does nginx's error.log explain the reason for the 502? See if this helps you:
NGINX returning HTTP 502, but HTTP 200 in the logs
I do not know the exact reason why this is happening, but upgrading my Netty server version does solve the problem.
We would like to launch a NextJS 10 app using NGINX so we use a configuration similar to:
location /_next/static/ {
alias /home/ec2-user/my-app/.next/static/;
expires 1y;
access_log on;
}
It works great: it caches our static assets for a year. But since we use NextJS images, I'm failing to add an expires tag to the on-the-fly resized images.
If I do:
location /_next/image/ {
alias /home/ec2-user/my-app/.next/image;
expires 1y;
access_log on;
}
It just returns a 404 on images.
Here is the server part of my NGINX config:
server {
listen 80;
server_name *.my-website.com;
# root /usr/share/nginx/html;
# root /home/ec2-user/my-app;
charset utf-8;
client_max_body_size 20M;
client_body_buffer_size 20M;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
underscores_in_headers on;
add_header X-Frame-Options SAMEORIGIN always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "same-origin" always;
location = /robots.txt {
proxy_pass https://api.my-website.com/robots.txt;
}
location /_next/static/ {
alias /home/ec2-user/my-app/.next/static/;
expires 1y;
access_log on;
}
location / {
# reverse proxy for merchant next server
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass_request_headers on;
proxy_cache_bypass $http_upgrade;
proxy_buffering off;
}
}
Here is an example of how you can rely on the upstream Content-Type header to set up the Expires and Cache-Control headers:
map $upstream_http_content_type $expire {
~^image/ 1y; # 'image/*' content type
default off;
}
server {
...
location / {
# reverse proxy for merchant next server
proxy_pass http://localhost:3000;
...
expires $expire;
}
}
In the same way you can tune cache control headers for any other content type of the proxied response. The $upstream_http_<name> nginx variable is described here.
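For instance, an extra pattern in the same map could give stylesheets their own lifetime (the 30d value is purely illustrative):
map $upstream_http_content_type $expire {
    ~^image/    1y;    # 'image/*' content type
    ~^text/css  30d;   # illustrative: cache CSS for a month
    default     off;
}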
Update
To add cache control headers only by specific URIs you can use two chained map blocks:
map $uri $expire_by_uri {
~^/_next/image/ 1y;
default off;
}
map $upstream_http_content_type $expire {
~^image/ $expire_by_uri;
default off;
}
And if you don't expect anything but images from /_next/image/... URIs, you can just use the following:
map $uri $expire {
~^/_next/image/ 1y;
default off;
}
I have a Rails application that is running on Nginx and Puma in a production environment.
There is a problem with web page loading (a TTFB delay), and I am trying to figure out the reason.
On the backend side, in production.log, I see that my web page is rendered fast enough, in 134 ms:
Completed 200 OK in 134ms (Views: 49.9ms | ActiveRecord: 29.3ms)
But in the browser I see that the TTFB is 311.49 ms.
I understand that there may be a problem in the settings, or the process count may not be optimal, but I cannot find the reason for the ~177 ms delay. I would be grateful for some advice.
My VPS properties and configurations are listed below.
Environment
Nginx 1.10.3
Puma 3.12.0 (rails 5.2)
PostgreSQL
Sidekiq
ElasticSearch
VPS properties
Ubuntu 16.04 (64-bit)
8 cores (2.4 GHz)
16 GB of RAM
Network Bandwidth: 1000 Mbps
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 8096;
multi_accept on;
use epoll;
}
http {
# Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging Settings
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip Settings
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
web_app.conf
upstream puma {
server unix:///home/deploy/apps/web_app/shared/tmp/sockets/web_app-puma.sock fail_timeout=0;
}
log_format timings '$remote_addr - $time_local '
'"$request" $status '
'$request_time $upstream_response_time';
server {
server_name web_app.com;
# SSL configuration
ssl on;
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
ssl_buffer_size 4k;
ssl_certificate /etc/ssl/certs/cert.pem;
ssl_certificate_key /etc/ssl/private/key.pem;
root /home/deploy/apps/web_app/shared/public;
access_log /home/deploy/apps/web_app/current/log/nginx.access.log;
error_log /home/deploy/apps/web_app/current/log/nginx.error.log info;
access_log /home/deploy/apps/web_app/current/log/timings.log timings;
location ^~ /assets/ {
#gzip_static on;
expires max;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
access_log off;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_request_buffering off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_body_buffer_size 8K;
client_max_body_size 10M;
client_header_buffer_size 1k;
large_client_header_buffers 2 16k;
client_body_timeout 10s;
keepalive_timeout 10;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
}
puma.rb
threads 1, 6
port 3000
environment 'production'
workers 8
preload_app!
before_fork { ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord) }
on_worker_boot { ActiveRecord::Base.establish_connection if defined?(ActiveRecord) }
plugin :tmp_restart
Check the true response time of the backend
The backend might claim it's answering/rendering in 134 ms, but that doesn't mean it's actually doing that. You can define a log format like this:
log_format timings '$remote_addr - $time_local '
'"$request" $status '
'$request_time $upstream_response_time';
and apply it with:
access_log /var/log/nginx/timings.log timings;
This will tell you how long the backend actually takes to respond.
Other possible ways to debug
Check the raw latency between you and the server (e.g. with ping, or by querying from the server itself)
Check how fast static content is served, to get a baseline
Use caching
Add a cache zone to your http block (proxy_cache_path is only valid there) and enable it in your location block:
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
inactive=60m use_temp_path=off;
proxy_cache my_cache;
If your backend supports conditional requests (If-Modified-Since):
proxy_cache_revalidate on;
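Note that unless the backend sends explicit cache headers, you may also need proxy_cache_valid for anything to be cached at all (the times below are only examples):
# only needed if the backend does not send Cache-Control/Expires itself
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404      1m;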
Disable buffering
You can instruct nginx to forward the responses from the backend without buffering them. This might reduce response time:
proxy_buffering off;
Since version 1.7.11 there is also a directive that lets nginx forward the request body to the backend without buffering it:
proxy_request_buffering off;
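Applied to the configuration above, both directives would sit in the proxied location, roughly:
location @puma {
    proxy_buffering off;            # stream the response to the client as it arrives
    proxy_request_buffering off;    # stream the request body to Puma as it arrives
    proxy_pass http://puma;
}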
First off: I don't have much experience with Nginx.
I'll just proceed directly to the problem though:
Nginx config:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
}
http {
proxy_cache_path /var/nginx_cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=10g;
upstream server {
server -removed-;
}
server {
listen 80;
server_name -removed-;
location / {
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_http_version 1.1;
gzip_min_length 500;
gzip_vary on;
gzip_proxied any;
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy
text/js
text/xml
text/javascript;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache STATIC;
proxy_set_header Host $host;
----> proxy_ignore_headers Vary; <-----
proxy_cache_key $host$uri$is_args$args;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_pass -removed-;
}
}
}
When the line 'proxy_ignore_headers Vary;' is set, everything gets cached, including the HTML pages. When I remove this line, everything gets cached EXCEPT the HTML pages. Why is this?
I would like Nginx to cache the HTML pages even when Vary headers are being sent by the origin server.
I hope someone can help me :).
The response headers are:
Vary: Host, Content-Language, Content-Type, Content-Encoding
Fixed:
In the Nginx source code there is a maximum of 42 characters for the Vary header value that gets stored. In my case there were 51 characters, so my Vary header was being handled as Vary: * (no cache). Setting the maximum to 84 fixed it for me.
This article explains it in more depth:
https://thedotproduct.org/nginx-vary-header-handling/
Credit to the author of that short article.
We have a Grails application running on a Tomcat server behind nginx for multiple subdomains:
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$tempRequest" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" \n';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.1;
gzip_min_length 1000;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6] \.";
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_vary on;
upstream main {
server localhost:8081;
}
include /etc/nginx/conf.d/*.conf;
# First server config to listen top level domain request (https/http) & redirect to mnop.com
server {
listen 80;
listen 443 ssl;
server_name xyz.com www.xyz.com;
ssl_certificate /etc/nginx/server.crt;
ssl_certificate_key /etc/nginx/server.key;
return 301 https://mnop.com;
}
# Second server config to redirect all non https requests to https request.
server {
listen 80;
# Remember wildcard domain with "*" doesn't listen top level domain.
# Hence no conflict with first server config.
server_name *.xyz.com;
rewrite ^ https://$host$request_uri? permanent;
}
# Third server config to listen https request & serves all html with nginx.
server {
listen 443 ssl;
server_name *.xyz.com;
ssl on;
ssl_certificate /etc/nginx/server.crt;
ssl_certificate_key /etc/nginx/server.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
location / {
set $tempRequest $request;
if ($tempRequest ~ (.*)j_password=[^&]*(.*)) {
# Mask spring authentication password param.
set $tempRequest $1j_password=****$2;
}
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://main;
proxy_redirect http://$host https://$host;
}
location /ng/app {
root /usr/share/apache-tomcat-7.0.54/webapps/ROOT;
}
}
}
A Tomcat app is running on port 8081, and subdomains like a.xyz.com or b.xyz.com are working fine and sharing the same session.
But we need to use the same session and app from a different domain like abc.com. How can I achieve that? I tried setting up virtual hosts and proxy_cookie_domain, but nothing worked.
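For reference, the kind of configuration I tried for the extra domain looks roughly like this (the abc.com server block and certificate paths are illustrative, not my exact config):
server {
    listen 443 ssl;
    server_name abc.com www.abc.com;
    ssl_certificate     /etc/nginx/abc.crt;   # assumed separate certificate for abc.com
    ssl_certificate_key /etc/nginx/abc.key;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://main;
        # rewrite the Domain attribute of cookies set for xyz.com
        proxy_cookie_domain .xyz.com .abc.com;
    }
}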