nginx forwards every incoming request to the upstream if it's returning 5xx; can it be throttled? (caching)

I use nginx to cache responses for 1 minute. The following is the config:
proxy_pass http://myupstream.com;
proxy_set_header Host myupstream.com;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache mycache;
proxy_cache_key $scheme$proxy_host$uri$http_accept_encoding;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
proxy_cache_lock on;
With this config, nginx serves from the cache when the upstream returns 5xx responses, which is good. However, my problem is that when the upstream starts returning 5xx and the cache has expired, every incoming request is forwarded to the upstream, putting immense load on upstream servers that are already in a bad state (hence the 5xx responses). Can nginx be configured to forward a request to the upstream only once every minute, even when the upstream responds with 5xx? I have added the following line, but to no avail:
proxy_cache_valid 500 502 503 504 1m;
Would appreciate any suggestions.
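One direction that is often suggested for this kind of setup (a sketch, not verified against the exact config above): let a single request refresh an expired entry while all other clients keep getting the stale copy, and cache the 5xx responses themselves for a short time so that a miss is not retried upstream on every hit. The proxy_cache_background_update directive assumes nginx 1.11.10 or newer; the other names are reused from the config in the question.

proxy_cache mycache;
proxy_cache_key $scheme$proxy_host$uri$http_accept_encoding;
proxy_cache_lock on;                   # only one request per key populates the cache
proxy_cache_lock_timeout 5s;           # how long other requests wait for that one
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;      # refresh expired entries in the background (nginx >= 1.11.10)
proxy_cache_valid 200 1m;              # cache good responses for 1 minute
proxy_cache_valid 500 502 503 504 1m;  # cache error responses too, so they are not refetched per request

With this combination, once an entry (even an error entry) is in the cache, roughly one upstream request per key per minute should be made, while everything else is answered from the cache.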

Related

How to replicate sessions in memory on Oracle WebLogic?

I want to set up high availability with Oracle WebLogic. First, I create a cluster called MyCluster and add two servers (Server1 and Server2) to MyCluster. I use Nginx as a load balancer.
I follow the tutorial from https://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/12-ManageSessions--4478/session.htm#t1 to replicate session in memory.
Here is my nginx config:
upstream myweb {
    server server1:38080 weight=1;
    server server2:38080 weight=1;
    server server3:38080 weight=1;
}
server {
    listen 80;
    server_name nginxHost;
    access_log /var/log/nginx/nginxHost.access.log main;
    error_log /var/log/nginx/nginxHost.error.log warn;
    location / {
        proxy_pass http://myweb/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
When I test session replication, I run into a problem. With Server1 running and Server2 shut down, I connect to my application, which is served by Server1. I then power on Server2 and wait for it to finish starting up. When I shut down Server1 and refresh the browser, the session disappears.
Finally, I found that I have to refresh the browser after Server2 is running (while Server1 is still up) for the session to survive. Is there any way to replicate the session when a server starts?
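Not a direct answer to the startup-timing question, but worth noting: WebLogic in-memory replication generally also assumes the load balancer keeps sending a client back to the same (primary) server as long as it is alive. Plain open-source nginx does not do that with a round-robin upstream; a common workaround is to pin clients either by IP or by the JSESSIONID cookie. A minimal sketch, reusing the server names from the config above (the hash directive needs nginx 1.7.2 or newer):

upstream myweb {
    hash $cookie_JSESSIONID consistent;   # pin each session to one server; ip_hash; is a simpler alternative
    server server1:38080;
    server server2:38080;
    server server3:38080;
}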

Where is the response coming from: Nginx? App? Kubernetes? Other?

I have an app providing a RESTful API in a Google Kubernetes cluster.
In front of the application I have nginx acting as a reverse proxy (proxy_pass).
The problem is that roughly one request out of a few thousand (1000-2000) gets bad data in the response (another user's data). Analysing the logs showed that the request behind the bad response never reaches the application at all.
But it does reach nginx:
2019/05/08 13:48:03 [warn] 5#5: *28350 delaying request, excess: 0.664, by zone "one", client: 10.240.0.23, server: myportal.com, request: "GET /api/myresource?testId=10 HTTP/1.1"
At the same time there are no logs in the app for testId=10 (but there are for testId=9 and testId=11 when I run a sequential test 1..1000).
The nginx configuration is almost default:
limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s;
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name myportal.com;
    if ($http_x_forwarded_proto = "http") {
        return 308 https://$server_name;
    }
    charset utf-8;
    access_log on;
    server_tokens off;
    location /api {
        proxy_pass http://backend-service:8000;
        limit_req zone=one burst=10;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
There is no caching configured (or maybe it's on by default?).
The application runs in a Google Kubernetes environment, so the request chain looks like this:
(k8s ingress, nginx-service) -> nginx -> (k8s backend-service) -> backend
The backend app is written in Spring and runs on Jetty.
The Nginx version was updated from 1.13.X to 1.15.12, but both have the same issue.
I have no idea what or where I should check to find the cause of the problem.
The log line you see comes from Nginx because of the directives limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s; and limit_req zone=one burst=10;.
Read more here: http://nginx.org/ru/docs/http/ngx_http_limit_req_module.html
Did you put them there for a reason?
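For context (a hedged sketch, not an explanation of the bad-data issue itself): with that zone, requests above 4 r/s are queued up to burst=10 and delayed, which is exactly the "delaying request" warning above, and anything beyond the burst is rejected with 503 by default. If the delay is unwanted, the same zone can be tuned roughly like this; the backend address is copied from the question, and limit_req_status needs nginx 1.3.15 or newer:

limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s;

location /api {
    limit_req zone=one burst=10 nodelay;   # serve up to the burst immediately, reject the rest
    limit_req_status 429;                  # make rejected requests visible as 429 instead of 503
    proxy_pass http://backend-service:8000;
}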

Trying to avoid "attack reported by Rack::Protection::AuthenticityToken" message

I have a new Padrino 0.13.1 project that I am hosting on an AWS Elastic Beanstalk worker instance. The worker instance has a cron job that POSTs to my Padrino app every 5 minutes. I have defined the routine as follows:
post :myroutine, :with => :myparams, :csrf_protection => false do
# ... do some stuff
status 200
end
I have also configured /config/apps.rb as follows:
Padrino.configure_apps do
set :session_secret, '...'
set :protection, :except => :path_traversal
set :protection_from_csrf, true
set :allow_disabled_csrf, true
end
The worker instance does a POST to http://localhost:80/myroutine/somevar every 5 minutes. The nginx access.log file shows:
127.0.0.1 - - [21/Mar/2016:04:49:59 +0000] "POST /myroutine/01234 HTTP/1.1" 200 0 "-" "aws-sqsd/2.0" "-"
But in my AWS production.log file, I also see this come up every 5 minutes:
WARN - 21/Mar/2016 04:49:59 attack reported by Rack::Protection::AuthenticityToken
Strangely, the routine executes fine, and does what it is supposed to do. I would just like to stop my log file from filling up with the Rack::Protection error every 5 minutes.
Is this because of a misconfigured CSRF setting somewhere, or a bug?
It is caused by the nginx reverse proxy settings, which may drop HTTP-related information and result in lost session information.
https://github.com/znc/znc/issues/946
I just added the lines below and it works:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_pass_header Set-Cookie;
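For completeness, a minimal sketch of where those directives would sit in a full server/location block in front of the Padrino app; the upstream address 127.0.0.1:3000 and the server name are assumptions, not taken from the question:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_pass_header Set-Cookie;
        proxy_pass http://127.0.0.1:3000;   # assumed address of the Padrino/Rack app
    }
}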

Nginx proxy cache invalidating request to s3

I have set up a proxy from my application server to a private S3 bucket to cache requests. I was having some trouble with it: S3 was rejecting my download requests (403 Forbidden), and after some experimentation it seems that disabling caching allows the valid request to go through. But the entire purpose of the proxy is to be a cache. I guess the proxy is altering the request in some way, but I don't understand how. Does anyone have any insight into how enabling caching in nginx alters requests, and whether there is some way to overcome this?
Here is the relevant config.
http {
    proxy_cache_path /home/cache levels=1:2 keys_zone=S3_CACHE:10m inactive=24h max_size=500m;
    proxy_temp_path /home/cache/tmp;
    server {
        server_name my-cache-server.com;
        listen 80;
        proxy_cache S3_CACHE;
        location / {
            proxy_buffering on;
            proxy_pass http://MY_BUCKET.s3.amazonaws.com/;
            proxy_pass_request_headers on;
        }
    }
}
The requests go through fine if I remove the line proxy_cache S3_CACHE;.
Here is the difference between the nginx access logs with proxy_cache disabled vs. enabled. In the first case the headers are passed and accepted, and then a GET request is made that returns the image. In the second case (with the cache enabled) the headers are sent and then rejected, resulting in a 403 error which stops the performance.vidigami.com test server from running.
WORKING...
MY_IP - - [09/Nov/2014:23:19:04 +0000] "HEAD https://MY_BUCKET.s3.amazonaws.com/Test%20image.jpg
HTTP/1.1" 200 0 "-" "aws-sdk-nodejs/2.0.23 darwin/v0.10.32"
MY_IP - - [09/Nov/2014:23:19:04 +0000] "GET https://MY_BUCKET.s3.amazonaws.com/Test%20image.jpg
HTTP/1.1" 200 69475 "-" "aws-sdk-nodejs/2.0.23 darwin/v0.10.32"
NOT WORKING...
MY_IP - - [09/Nov/2014:23:20:08 +0000] "HEAD https://MY_BUCKET.s3.amazonaws.com/Test%20image.jpg
HTTP/1.1" 403 0 "-" "aws-sdk-nodejs/2.0.23 darwin/v0.10.32"
If AWS S3 rejects requests (HTTP 403), the origin call is invalid; this is not a caching or Nginx problem. In your case Nginx itself accesses S3 over http (port 80), so make sure your S3 URL is created to be accessed without HTTPS. Otherwise, change proxy_pass to https://...
The proxy_pass_request_headers directive is not required, and proxy buffering is on by default. It's highly recommended to enable access/error logs.
To use HTTP/1.1 keep-alive with the backend and perform caching, use the following directives:
location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host 'MY_BUCKET.s3.amazonaws.com';
    proxy_set_header Authorization '';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers Set-Cookie;
    proxy_cache S3_CACHE;
    proxy_cache_valid 200 24h;
    proxy_cache_valid 403 15m;
    proxy_cache_bypass $http_cache_purge;
    add_header X-Cached $upstream_cache_status;
    proxy_pass http://MY_BUCKET.s3.amazonaws.com/;
    access_log s3.access.log;
    error_log s3.error.log;
}
Cache invalidation works via the Cache-Purge HTTP header, and the X-Cached response header displays MISS or HIT depending on whether the response came from a full upstream request or from the cache. To perform cache invalidation just do:
curl -I 'http://your_server.com/file' -H 'Cache-Purge: 1'
It's important to choose the appropriate S3 endpoint to avoid a DNS redirect:
us-east-1 s3.amazonaws.com
us-west-2 s3-us-west-2.amazonaws.com
us-west-1 s3-us-west-1.amazonaws.com
eu-west-1 s3-eu-west-1.amazonaws.com
eu-central-1 s3.eu-central-1.amazonaws.com
ap-southeast-1 s3-ap-southeast-1.amazonaws.com
ap-southeast-2 s3-ap-southeast-2.amazonaws.com
ap-northeast-1 s3-ap-northeast-1.amazonaws.com
sa-east-1 s3-sa-east-1.amazonaws.com
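To check that the cache is actually being used once the 403 issue is sorted out, the X-Cached header added in the config above can be inspected directly (hostname reused from the purge example; the first request should be a MISS and a repeat within the validity window a HIT):

curl -sI 'http://your_server.com/file' | grep -i X-Cached   # expect: X-Cached: MISS
curl -sI 'http://your_server.com/file' | grep -i X-Cached   # expect: X-Cached: HIT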

Nginx 502 Bad Gateway error ONLY in Firefox

I am running a website locally; all the traffic is routed through Nginx, which dispatches requests for PHP pages to Apache and serves static files itself. It works perfectly in Chrome, Safari, IE, etc.
However, whenever I open the website in Firefox I get the following error:
502 Bad Gateway
nginx/0.7.65
If I clear out the cache and cookies and then restart Firefox, I am able to load the site once or twice before the error returns. I've tried both Firefox 3.6 and 3.5, and both have the same problem.
Here is what my Nginx config looks like:
worker_processes 2;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name local.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://local.mysite.amc:8080;
        }
        include /opt/local/etc/nginx/rewrite.txt;
    }
    server {
        include /opt/local/etc/nginx/mime.types;
        listen 80;
        server_name local.static.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
    }
}
And here are the errors that Firefox generates in my error.log file:
[error] 11013#0: *26 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream
[error] 11013#0: *30 upstream sent too big header while reading response header from upstream
[error] 11013#0: *30 no live upstreams while connecting to upstream
I am completely at a loss as to why a browser would cause a server error. Can someone help?
I seem to have found a workaround that fixed my problem. After some additional Google research, I added the following lines to my Nginx config:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
However, I still don't know why this worked and why only Firefox seemed to have problems. If anyone can shed light on this, or offer a better solution, it would be much appreciated!
If you have FirePHP, disable it. Big headers cause problems in nginx's communication with PHP.
Increasing the size of your proxy buffers solves this issue. Firefox allows large cookies (up to 4k each) that are attached to every request. The Nginx default config has small buffers (only 4k). If your traffic uses big cookies, you will see the error "upstream sent too big header while reading response header" in your nginx error log, and Nginx will return an HTTP 502 error to the client. What happens is that Nginx runs out of buffer space while parsing and processing the request.
To solve this, change your nginx.conf file:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
-or-
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
Open /etc/nginx/nginx.conf and add the following lines to the http section:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
This fix worked for me in a CI web application. Read more at http://www.adminsehow.com/2012/01/fix-nginx-502-bad-gateway-error/
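A minimal sketch of where these directives live, using the sizes quoted in the answers above (which pair applies depends on whether nginx talks to the backend via proxy_pass, as in this question, or via FastCGI):

http {
    # proxied backend (Apache behind proxy_pass), as in this question
    proxy_buffer_size 32k;    # buffer for the upstream response headers
    proxy_buffers 8 16k;      # buffers for the rest of the response

    # FastCGI backend (e.g. PHP-FPM): the equivalent directives
    fastcgi_buffer_size 32k;
    fastcgi_buffers 8 16k;
}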
