Nginx 502 Bad Gateway error ONLY in Firefox

I am running a website locally; all traffic is routed through Nginx, which dispatches requests for PHP pages to Apache and serves static files itself. It works perfectly in Chrome, Safari, IE, etc.
However, whenever I open the website in Firefox I get the following error:
502 Bad Gateway
nginx/0.7.65
If I clear the cache and cookies and then restart Firefox, I am able to load the site once or twice before the error returns. I've tried both Firefox 3.6 and 3.5, and both have the same problem.
Here is what my Nginx config looks like:
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name local.mysite.amc;

        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://local.mysite.amc:8080;
        }

        include /opt/local/etc/nginx/rewrite.txt;
    }

    server {
        include /opt/local/etc/nginx/mime.types;
        listen 80;
        server_name local.static.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
    }
}
And here are the errors that Firefox generates in my error.log file:
[error] 11013#0: *26 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream
[error] 11013#0: *30 upstream sent too big header while reading response header from upstream
[error] 11013#0: *30 no live upstreams while connecting to upstream
I am completely at a loss why a browser would cause a server error. Can someone help?

I seem to have found a workaround that fixed my problem. After some additional Google research, I added the following lines to my Nginx config:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
However, I still don't know why this worked, or why only Firefox seemed to have problems. If anyone can shed light on this, or offer a better solution, it would be much appreciated!

If you have FirePHP, disable it. Its big headers cause problems in nginx's communication with PHP.

Increasing the size of your proxy buffers solves this issue. Firefox allows large cookies (up to 4k each) that are attached to every request. The Nginx default config has small buffers (only 4k). If your traffic uses big cookies, you will see the error "upstream sent too big header while reading response header" in your nginx error log, and Nginx will return an HTTP 502 error to the client. What happened is that Nginx ran out of buffer space while parsing and processing the request.
To solve this, change your nginx.conf file:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
-or-
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
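To get a feel for the arithmetic behind this, here is a small back-of-the-envelope sketch. The header names, cookie sizes, and buffer values below are illustrative assumptions for demonstration, not measurements from this setup:

```python
# Hypothetical illustration: estimate whether a header block fits in a buffer.
# Cookie sizes and buffer values are assumptions chosen for demonstration.

def header_block_size(headers):
    """Bytes used by 'Name: value\r\n' lines plus the terminating CRLF."""
    return sum(len(name) + 2 + len(value) + 2 for name, value in headers) + 2

# A response echoing a few near-4k cookies, as Firefox permits:
headers = [("Content-Type", "text/html")] + [
    ("Set-Cookie", "session%d=%s" % (i, "x" * 4000)) for i in range(3)
]

default_buffer = 4 * 1024   # on the order of nginx's default (one memory page)
tuned_buffer = 32 * 1024    # the proxy_buffer_size suggested above

size = header_block_size(headers)
print(size, size > default_buffer, size > tuned_buffer)
# → 12096 True False: overflows the default-sized buffer, fits after tuning
```

Three big cookies alone blow well past a 4k buffer, which is consistent with the error only appearing in the browser that accumulated the largest cookies.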

Open /etc/nginx/nginx.conf and add the following lines to the http section:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
This fix worked for me in a CodeIgniter web application. Read more at http://www.adminsehow.com/2012/01/fix-nginx-502-bad-gateway-error/

Related

HTTP/2 server pushed assets fail to load (HTTP2_CLIENT_REFUSED_STREAM)

I have the following error, ERR_HTTP2_CLIENT_REFUSED_STREAM, in the Chrome DevTools console for all or some of the assets pushed via HTTP/2.
Refreshing the page and clearing the cache randomly fixes this issue, sometimes partially and sometimes completely.
I am using nginx with HTTP/2 enabled (SSL via Certbot) and Cloudflare.
server {
    server_name $HOST;
    root /var/www/$HOST/current/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    set $auth "dev-Server.";
    if ($request_uri ~ ^/opcache-api/.*$) {
        set $auth off;
    }
    auth_basic $auth;
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    location ~* \.(?:css|js)$ {
        access_log off;
        log_not_found off;
        # Let laravel query strings burst the cache
        expires 1M;
        add_header Cache-Control "public";
        # Or force cache revalidation.
        # add_header Cache-Control "public, no-cache, must-revalidate";
    }

    location ~* \.(?:jpg|jpeg|gif|png|ico|xml|svg|webp)$ {
        access_log off;
        log_not_found off;
        expires 6M;
        add_header Cache-Control "public";
    }

    location ~* \.(?:woff|ttf|otf|woff2|eot)$ {
        access_log off;
        log_not_found off;
        expires max;
        add_header Cache-Control "public";
        types {font/opentype otf;}
        types {application/vnd.ms-fontobject eot;}
        types {font/truetype ttf;}
        types {application/font-woff woff;}
        types {font/x-woff woff2;}
    }

    listen 443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/$HOST/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/$HOST/privkey.pem; # managed by Certbot
    # include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = $HOST) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name $HOST;
    listen 80;
    return 404; # managed by Certbot
}
Googling this error doesn't return many results. If it helps, it's a Laravel 6 app that pushes those assets; if I disable asset pushing in Laravel, then all assets load correctly.
I don't even know where to start looking.
Update 1
I enabled Chrome logging and inspected the logs using Sawbuck, following the instructions provided here, and found that the actual error has some relation to a 414 HTTP response, which implies some caching problem.
Update 2
I found this great article, The browser can abort pushed items if it already has them, which states the following:
Chrome will reject pushes if it already has the item in the push cache. It rejects with PROTOCOL_ERROR rather than CANCEL or REFUSED_STREAM.
It also links to some Chrome and Mozilla bugs.
This led me to disable Cloudflare completely and test directly against the server. I tried various Cache-Control directives and also tried disabling the header, but the same error occurs on a refresh after a cache clear.
Apparently, Chrome cancels the HTTP/2 pushed asset even when it is not present in the push cache, leaving the page broken.
For now, I'm disabling HTTP/2 server push in the Laravel app as a temporary fix.
We just got the exact same problem you're describing. We got "net::ERR_HTTP2_CLIENT_REFUSED_STREAM" on one of our JavaScript files. Reloading and clearing the cache worked, but then the problem came back, seemingly randomly. The same issue occurred in Chrome and Edge (Chromium-based). Then I tried Firefox and got the same behavior, but Firefox complained that the response for that URL was "text/html". My guess is that for some reason we had gotten a "text/html" response cached for that URL in Cloudflare. When I opened that URL directly in Firefox I got "application/javascript", and then the problem went away. Still not quite sure how this all happened, though.
EDIT:
In our case it turned out that the response for a .js file was blocked by the server with a 401, and we didn't send out any cache headers. Cloudflare tried to be helpful: because the browser was expecting a .js file, the response was cached even though the status was 401. This later failed for others, because we tried to HTTP/2 push a text/html response with status 401 as a .js file. Luckily, Firefox gave us a better, actionable error message.
EDIT2:
Turns out it wasn't an HTTP header cache issue. It was that we had cookie authentication on .js files, and it seems that HTTP/2 push requests don't always include cookies. The fix was to allow cookieless requests on the .js files.
For anyone who comes here using Symfony's local development server (symfony server:start): I could not fix it, but this setting (which is the default) stops the server from trying to push preloaded assets, and the error goes away:
config/packages/dev/webpack_encore.yaml
webpack_encore:
    preload: false

Gitlab Client in Login Redirect Loop

I have been doing some work trying to update our GitLab servers. Somewhere along the line, something in the configuration changed and now I can't access the web client. The backend starts up correctly, and when I run rake gitlab:check everything comes back green. Same for nginx; as far as I can tell it is working correctly. When I try to go to the landing page in the browser, though, I keep getting an error about 'too many redirects'.
Looking at the browser console, I can see that it repeatedly tries to redirect to the login page until the browser gives up and throws an error. I did some looking around, and most of the answers seem to involve going to the login page directly and then changing the landing page from the admin settings. When I tried that, I got the same problem. Apparently any page on my domain wants to redirect to the login, leaving me with an infinite loop.
I'm also seeing some potentially related errors in the nginx logs. When I try to hit the sign in page the error log is showing
open() "/usr/local/Cellar/nginx/1.15.9/html/users/sign_in" failed (2: No such file or directory)
Is that even the correct directory for the GitLab HTML views? If not, how do I change it?
Any help on this would be greatly appreciated.
Environment:
OSX 10.11.6 El Capitan
Gitlab 8.11
nginx 1.15.9
My config files. I have removed some commented out lines to save on space.
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include servers/*;
}
nginx/servers/gitlab
upstream gitlab-workhorse {
    server unix:/Users/git/gitlab/tmp/sockets/gitlab-workhorse.socket fail_timeout=0;
}

server {
    listen 0.0.0.0:8081;
    listen [::]:8081;
    server_name git.my.server.com; ## Replace this with something like gitlab.example.com
    server_tokens off; ## Don't show the nginx version number, a security best practice

    ## See app/controllers/application_controller.rb for headers set

    ## Individual nginx logs for this GitLab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        client_max_body_size 0;
        gzip off;

        ## https://github.com/gitlabhq/gitlabhq/issues/694
        ## Some requests take more than 30 seconds.
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        proxy_http_version 1.1;

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_pass http://gitlab-workhorse;
    }
}
I finally found the answer after several days of digging. At some point my default config file (/etc/default/gitlab) got changed. For whatever reason, my text editor decided to split gitlab_workhorse_options into two lines. As a result, GitLab was missing the arguments for authSocket and document root and was just using the default values. If that wasn't bad enough, the line split started on a $ character, so it looked like nano was just doing a word wrap.

Where is the response coming from - Nginx? App? Kubernetes? Other?

I have an app providing a RESTful API in a Google Kubernetes cluster.
In front of the application I have nginx acting as a reverse proxy (proxy_pass).
The problem is that roughly one request in a few thousand (1,000-2,000) has bad data in the response (another user's data). Analysing the logs showed that the request behind the bad response doesn't reach the application at all.
But it comes to nginx:
2019/05/08 13:48:03 [warn] 5#5: *28350 delaying request, excess: 0.664, by zone "one", client: 10.240.0.23, server: myportal.com, request: "GET /api/myresource?testId=10 HTTP/1.1"
At the same time, there are no logs in the app for testId=10 (but there are for testId=9 and testId=11 when I run a sequential test 1..1000).
The nginx configuration is almost default:
limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s;

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name myportal.com;

    if ($http_x_forwarded_proto = "http") {
        return 308 https://$server_name;
    }

    charset utf-8;
    access_log on;
    server_tokens off;

    location /api {
        proxy_pass http://backend-service:8000;
        limit_req zone=one burst=10;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
There is no caching configured (or maybe it's on by default?).
The application is running in a Google Kubernetes environment, so the request chain looks like this:
(k8s ingress, nginx-service) -> nginx -> (k8s backend-service) -> backend
The backend app is written in Spring and runs on Jetty.
The nginx version was updated from 1.13.x to 1.15.12, but both have the same issue.
I have no idea what I should check, or where, to find the cause of the problem.
The error you see comes from nginx because of the directives limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s; and limit_req zone=one burst=10;
Read more here: http://nginx.org/ru/docs/http/ngx_http_limit_req_module.html
Did you put them there for a reason?
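The "delaying request, excess: 0.664" warning is what limit_req logs when its leaky bucket is non-empty. As a rough sketch of the behaviour those two directives produce (a simplified model for intuition, not nginx's actual millisecond-based accounting):

```python
# Simplified model of nginx's limit_req leaky bucket (rate=4r/s, burst=10).
# This approximates the algorithm for intuition; nginx's real implementation
# tracks excess in thousandths per millisecond.

def simulate(arrival_times, rate=4.0, burst=10):
    """Classify each request as 'pass', 'delayed', or 'rejected'."""
    excess, last, results = 0.0, None, []
    for t in arrival_times:
        if last is not None:
            excess = max(excess - (t - last) * rate, 0.0)  # bucket drains at `rate`
        last = t
        excess += 1.0
        if excess > burst + 1:
            results.append("rejected")   # nginx answers 503 (or limit_req_status)
            excess -= 1.0                # a rejected request doesn't fill the bucket
        elif excess > 1:
            results.append("delayed")    # logged as "delaying request, excess: ..."
        else:
            results.append("pass")
    return results

# 20 requests arriving in the same instant:
counts = simulate([0.0] * 20)
print(counts.count("pass"), counts.count("delayed"), counts.count("rejected"))
# → 1 10 9: one passes immediately, ten queue within the burst, the rest are rejected
```

Requests spaced at or below the configured rate (here, 0.25s apart) all pass untouched, which is why a slow sequential test mostly works while a fast one silently loses requests to delays and 503s.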

Trouble getting a file from Node.js using an nginx reverse proxy

I have set up an nginx reverse proxy to Node, essentially using the setup reproduced below:
upstream nodejs {
    server localhost:3000;
}

server {
    listen 8080;
    server_name localhost;
    root ~/workspace/test/app;

    location / {
        try_files $uri $uri/ @nodejs;
    }

    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel just fine to Node with this setup, but afterward I am polling for files that I cannot find when I make a client-side AJAX GET request to the Node server (via this nginx proxy).
For example, for a client-side JavaScript request like .get('Users/myfile.txt'), the browser will look for the file on localhost:8080 but won't find it, because it's actually written to localhost:3000:
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?
Okay, I got it working. The setup in the nginx.conf file posted above is just fine. This problem was never an nginx problem. The problem was in my index.js file over on the Node server.
When I got nginx to serve all the static files, I commented out the following line from index.js:
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this, as I was pretty wrapped up in understanding nginx. My thinking at the time was that if nginx is serving static files, why would I need Express to serve them? Without this line, however, Express won't serve any files at all.
Now, with Express serving static files properly, nginx handles all the static files from the web app, Node handles all the files from the backend, and all is good.
Thanks to Keenan Lawrence for the guidance and AR7 for the config!

Upstream too big - nginx + codeigniter

I am getting this error from Nginx but can't seem to figure it out! I am using CodeIgniter with the database for sessions, so I'm wondering how the header can ever be too big. Is there any way to check what the header is, or to see what I can do to fix this error?
Let me know if you need me to put up any conf files or whatever, and I'll update as you request them.
2012/12/15 11:51:39 [error] 2007#0: *5778 upstream sent too big header while reading response header from upstream, client: 24.63.77.149, server: jdobres.xxxx.com, request: "POST /main/login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "jdobres.xxxxx.com", referrer: "http://jdobres.xxxx.com/"
UPDATE
I added the following to the conf:
proxy_buffer_size 512k;
proxy_buffers 4 512k;
proxy_busy_buffers_size 512k;
And now I still get the following:
2012/12/16 12:40:27 [error] 31235#0: *929 upstream sent too big header while reading response header from upstream, client: 24.63.77.149, server: jdobres.xxxx.com, request: "POST /main/login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "jdobres.xxxx.com", referrer: "http://jdobres.xxxx.com/"
Add this to the http {} block of your nginx.conf file, normally located at /etc/nginx/nginx.conf:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
Then add this to your PHP location block. This will be located in your vhost file; look for the block that begins with location ~ \.php$ {:
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
Modify your nginx configuration and change/set the following directives:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
using nginx + fcgiwrap + request too long
I had the same problem because I use an nginx + fcgiwrap configuration:
location ~ ^.*\.cgi$ {
    fastcgi_pass unix:/var/run/fcgiwrap.sock;
    fastcgi_index index.cgi;
    fastcgi_param SCRIPT_FILENAME /opt/nginx/bugzilla/$fastcgi_script_name;
    include /etc/nginx/fastcgi_params;

    # attachments can be huge
    client_max_body_size 0;
    client_body_in_file_only clean;

    # this is where requests body are saved
    client_body_temp_path /opt/nginx/bugzilla/data/request_body 1 2;
}
and the client was making a request with a URL that was about 6000 characters long (a Bugzilla request).
Debugging...
location ~ ^.*\.cgi$ {
    error_log /var/log/nginx/bugzilla.log debug;
    # ...
}
This is what I got in the logs:
2015/03/18 10:24:40 [debug] 4625#0: *2 upstream split a header line in FastCGI records
2015/03/18 10:24:40 [error] 4625#0: *2 upstream sent too big header while reading response header from upstream, client: 10....
can I have "414 request-uri too large" instead of "502 bad gateway"?
Yes you can!
I had been reading How to set the allowed url length for a nginx request (error code: 414, uri too large) because I thought "hey, the URL's too long", but I was getting 502s rather than 414s.
large_client_header_buffers
Try #1:
# this goes in http or server block... so outside the location block
large_client_header_buffers 4 8k;
This fails: my URL is 6000 characters, which is < 8k, so nginx still accepts the request. Try #2:
large_client_header_buffers 4 4k;
Now I don't see a 502 Bad Gateway anymore; instead I see a 414 Request-URI Too Large.
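The interplay between the two buffers can be summed up in a few lines. This is a toy model using the sizes discussed in this answer; real nginx compares whole header blocks against its buffers, not just the URI length:

```python
# Toy model of which error a long request URI produces, depending on
# nginx's client-header buffers vs. the FastCGI response-header buffer.
# Sizes are the ones from this answer; the comparison is a simplification.

def status(uri_len, client_buf=8 * 1024, fastcgi_buf=4 * 1024):
    if uri_len > client_buf:
        return 414   # nginx rejects the request before it reaches the upstream
    if uri_len > fastcgi_buf:
        return 502   # the upstream's header block overflows fastcgi_buffer_size
    return 200

print(status(6000))                          # large client buffers, small fastcgi buffer
print(status(6000, client_buf=4 * 1024))     # Try #2: shrink client buffers -> 414
print(status(6000, fastcgi_buf=8 * 1024))    # the eventual fix: enlarge fastcgi buffer
```

So the 6000-character URI slips past an 8k large_client_header_buffers setting, then blows up a 4k fastcgi_buffer_size downstream; you can either reject it earlier (414) or make room for it (the fix below).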
"upstream split a header line in FastCGI records"
Did some research and found somewhere on the internet:
http://forum.nginx.org/read.php?2,4704,4704
https://www.ruby-forum.com/topic/4422529
http://mailman.nginx.org/pipermail/nginx/2009-August/014709.html
http://mailman.nginx.org/pipermail/nginx/2009-August/014716.html
This was sufficient for me:
location ~ ^.*\.cgi$ {
    # holds request bigger than 4k but < 8k
    fastcgi_buffer_size 8k;
    # getconf PAGESIZE is 4k for me...
    fastcgi_buffers 16 4k;
    # ...
}
I have proven that this error is also sent when an invalid header is transmitted. Invalid characters or formatting in the HTTP headers, a cookie expiration set back by more than a month, etc. will all cause: upstream sent too big header while reading response header from upstream
I encountered this problem in the past (not using CodeIgniter, but it happens whenever the responses contain a lot of header data) and got used to tweaking the buffers as suggested here. But recently I was bitten by this issue again, and the buffers were apparently okay.
It turned out to be the fault of SPDY, which I was using on this particular project; I solved it by enabling SPDY header compression like this:
spdy_headers_comp 6;
Problem: upstream sent too big header while reading response header from upstream, with Nginx and Magento 2.
Solution: in the nginx.conf.sample file, replace the setting below:
fastcgi_buffer_size 4k;
with
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
