Upstream sent too big header - nginx + CodeIgniter

I am getting this error from nginx but can't seem to figure it out. I am using CodeIgniter with the database for sessions, so I'm wondering how the header can ever be too big. Is there any way to check what the header is, or to see what I can do to fix this error?
Let me know if you need me to put up any conf files or whatever and I'll update as you request them
2012/12/15 11:51:39 [error] 2007#0: *5778 upstream sent too big header while reading response header from upstream, client: 24.63.77.149, server: jdobres.xxxx.com, request: "POST /main/login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "jdobres.xxxxx.com", referrer: "http://jdobres.xxxx.com/"
UPDATE
I added the following to my conf:
proxy_buffer_size 512k;
proxy_buffers 4 512k;
proxy_busy_buffers_size 512k;
And now I still get the following:
2012/12/16 12:40:27 [error] 31235#0: *929 upstream sent too big header while reading response header from upstream, client: 24.63.77.149, server: jdobres.xxxx.com, request: "POST /main/login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "jdobres.xxxx.com", referrer: "http://jdobres.xxxx.com/"

Add this to the http {} block of your nginx.conf file, normally located at /etc/nginx/nginx.conf:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
Then add this to your PHP location block. This will be located in your vhost file; look for the block that begins with location ~ \.php$ {
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
}
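Putting the two pieces together, the relevant parts of the config end up looking roughly like this (a sketch; the fastcgi_pass target and the rest of the location block are whatever your vhost already uses):
http {
    # ... existing http-level settings ...
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    server {
        # ... existing server settings ...
        location ~ \.php$ {
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_pass 127.0.0.1:9000;  # or your php-fpm socket
            include fastcgi_params;
        }
    }
}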

Modify your nginx configuration and change/set the following directives:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
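Whichever values you settle on, it is worth validating the config and reloading nginx afterwards, roughly like this (the service name may differ on your system):
nginx -t && sudo service nginx reload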

Using nginx + fcgiwrap + a request that is too long
I had the same problem because I use an nginx + fcgiwrap configuration:
location ~ ^.*\.cgi$ {
fastcgi_pass unix:/var/run/fcgiwrap.sock;
fastcgi_index index.cgi;
fastcgi_param SCRIPT_FILENAME /opt/nginx/bugzilla/$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
# attachments can be huge
client_max_body_size 0;
client_body_in_file_only clean;
# this is where request bodies are saved
client_body_temp_path /opt/nginx/bugzilla/data/request_body 1 2;
}
and the client was making a request with a URL that was about 6000 characters long (a Bugzilla request).
debugging...
location ~ ^.*\.cgi$ {
error_log /var/log/nginx/bugzilla.log debug;
# ...
}
This is what I got in the logs:
2015/03/18 10:24:40 [debug] 4625#0: *2 upstream split a header line in FastCGI records
2015/03/18 10:24:40 [error] 4625#0: *2 upstream sent too big header while reading response header from upstream, client: 10....
Can I have "414 Request-URI Too Large" instead of "502 Bad Gateway"?
Yes you can!
I had been reading How to set the allowed url length for a nginx request (error code: 414, uri too large) because I thought "hey, the URL's too long", but I was getting 502s rather than 414s.
large_client_header_buffers
Try #1:
# this goes in http or server block... so outside the location block
large_client_header_buffers 4 8k;
This fails: my URL is 6000 characters, which is less than 8k, so it still fits and I keep getting the 502. Try #2:
large_client_header_buffers 4 4k;
Now I don't see a 502 Bad Gateway anymore; instead I see a 414 Request-URI Too Large.
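For reference, a quick way to reproduce the long-URL request from the command line and see which status code comes back (the host and CGI path here are placeholders, and the snippet assumes bash):
# Send a request with a ~6000-character query string and print only the HTTP status code
curl -s -o /dev/null -w '%{http_code}\n' "http://bugzilla.example.com/buglist.cgi?$(printf 'a%.0s' {1..6000})"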
"upstream split a header line in FastCGI records"
Did some research and found the following on the internet:
http://forum.nginx.org/read.php?2,4704,4704
https://www.ruby-forum.com/topic/4422529
http://mailman.nginx.org/pipermail/nginx/2009-August/014709.html
http://mailman.nginx.org/pipermail/nginx/2009-August/014716.html
This was sufficient for me:
location ~ ^.*\.cgi$ {
# holds a response header bigger than 4k but smaller than 8k
fastcgi_buffer_size 8k;
# getconf PAGESIZE is 4k for me...
fastcgi_buffers 16 4k;
# ...
}
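If it helps, the page size assumed in the comment above and the rough size of the response headers can both be checked from the shell (the URL is a placeholder):
# Page size used as the unit for fastcgi_buffers above
getconf PAGESIZE
# Approximate size in bytes of the response headers as seen through nginx
curl -s -D - -o /dev/null "http://bugzilla.example.com/show_bug.cgi?id=1" | wc -c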

I have proven that this is also sent when an invalid header is transmitted. Invalid characters or formatting in the HTTP headers, a cookie expiration set back by more than a month, etc. will all cause: upstream sent too big header while reading response header from upstream
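If you want to see exactly what headers the upstream is producing, one option is to bypass nginx and query PHP-FPM directly with cgi-fcgi (a sketch; it assumes the fcgi tools are installed and the script path is just an example):
SCRIPT_NAME=/index.php \
SCRIPT_FILENAME=/var/www/ci/index.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect 127.0.0.1:9000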

I encountered this problem in the past (not using CodeIgniter, but it happens whenever the responses contain a lot of header data) and got used to tweaking the buffers as suggested here, but recently I got bitten by this issue again and the buffers were apparently fine.
It turned out to be the fault of SPDY, which I was using on this particular project, and I solved it by enabling SPDY header compression like this:
spdy_headers_comp 6;
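For context, spdy_headers_comp goes next to the spdy listener in the server (or http) block; roughly like this (certificate paths are placeholders):
server {
    listen 443 ssl spdy;
    ssl_certificate /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    spdy_headers_comp 6;
    # ...
}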

Problem: "upstream sent too big header while reading response header from upstream" with Nginx and Magento 2
Solution: in the nginx.conf.sample file, replace the setting below
fastcgi_buffer_size 4k;
with
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;

Related

Downtime when deploying Laravel to Azure

I'm deploying a Laravel site to an Azure Web App (running Linux).
After upgrading to PHP 8 and nginx I experience a lot more downtime after deployment: several minutes of nginx's Bad Gateway error.
In order to get Laravel working with nginx I need to copy an nginx conf file from my project to nginx's config on the server.
I'm running a startup.sh after deploy that has the following commands as its first lines:
cp /home/site/wwwroot/devops/nginx.conf /etc/nginx/sites-available/default;
service nginx reload
Content of my nginx.conf:
server {
# adjusted nginx.conf to make Laravel 8 apps with PHP 8.0 features runnable on Azure App Service
# see https://laravel.com/docs/8.x/deployment
listen 8080;
listen [::]:8080;
root /home/site/wwwroot/public;
index index.php;
client_max_body_size 100M;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
gzip on;
gzip_proxied any;
gzip_min_length 256;
gzip_types
application/atom+xml
application/geo+json
application/javascript
application/x-javascript
application/json
application/ld+json
application/manifest+json
application/rdf+xml
application/rss+xml
application/xhtml+xml
application/xml
font/eot
font/otf
font/ttf
image/svg+xml
text/css
text/javascript
text/plain
text/xml;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param HTTP_PROXY "";
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
fastcgi_intercept_errors on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 3600;
fastcgi_read_timeout 3600;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
}
}
I've also tried to use Azure Deployment Slots but the swap is happening before the Bad Gateway error has gone away.
Is there something else I can do to minimize the downtime/time for the project to get up and running again?
The "Bad Gateway" error suggests that Nginx is unable to connect to the backend, which in this case is PHP-FPM.
There are a few things you can try to minimize the downtime:
Increase the fastcgi_connect_timeout, fastcgi_send_timeout, and fastcgi_read_timeout values in your nginx configuration file. This will give PHP-FPM more time to start up and respond to requests.
Optimize your PHP code. Make sure your code is optimized for performance, as this will help reduce the time it takes for the site to start up.
Use Azure Deployment Slots for testing. Deployment slots allow you to test your code in a staging environment before deploying it to production. This can help reduce the risk of downtime in your production environment.
Try to make sure that your PHP-FPM and nginx services are always running, and that they are started automatically when the server boots up (see the startup script sketch after this list).
Try to reduce the number of restarts needed during deployment by using a deployment process that utilizes rolling upgrades.
Finally, you can try deploying a simple HTML file first and then the Laravel codebase. This will confirm that the web server and PHP are working before the full application is deployed.
Use trial and error to find out the best solution for your use case.
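As a concrete version of some of these suggestions, here is a hedged sketch of a startup.sh that only reloads nginx once the config is valid and PHP-FPM is answering (the paths and FPM port are taken from the question; the nc-based wait loop assumes nc is available in the image):
#!/bin/sh
cp /home/site/wwwroot/devops/nginx.conf /etc/nginx/sites-available/default

# Wait up to 60 seconds for PHP-FPM to accept connections on 127.0.0.1:9000
i=0
while ! nc -z 127.0.0.1 9000 && [ $i -lt 60 ]; do
    i=$((i+1))
    sleep 1
done

# Only reload if the new config is valid; otherwise keep serving with the old one
nginx -t && service nginx reload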

Purging nginx cache files does not always work

I run an nginx server + PHP webservices API. I use nginx's fastcgi_cache to cache all GET requests, and when certain resources are updated, I purge one or more related cached resources.
The method I'm using to do this is to calculate the nginx cache file name for each resource I want to purge, and then deleting that file. For the most part, this works well.
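(For reference, this is roughly how I compute the file name, based on nginx's documented md5-of-cache-key layout and the levels=1:2, cache path and fastcgi_cache_key shown in the config below; the key in this example is made up:)
# Cache key as built by: fastcgi_cache_key "$scheme$request_method$host$request_uri"
KEY='httpGETapi.example.com/v1/widgets/42'
MD5=$(printf '%s' "$KEY" | md5sum | cut -d' ' -f1)
LEVEL1=$(echo "$MD5" | cut -c32)      # last hex character
LEVEL2=$(echo "$MD5" | cut -c30-31)   # the two characters before it
echo "/var/nginx/cache/$LEVEL1/$LEVEL2/$MD5"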
However, I've found that sometimes, even after the cache file is deleted, nginx will still return data from cache.
This is not a problem with selecting the correct cache file to delete -- as part of my testing, I've deleted the entire cache directory, and nginx still returns HIT responses.
Is anyone aware of why this might be happening? Is it possible that another cache is involved? E.g., could it be that the OS is returning a cached version of the cache file to nginx, so nginx is not aware that it's been deleted?
I'm running this on CentOS, and with this nginx config (minus irrelevant parts):
user nginx;
# Let nginx figure out the best value
worker_processes auto;
events {
worker_connections 10240;
multi_accept on;
use epoll;
}
# Maximum number of open files should be at least worker_connections * 2
worker_rlimit_nofile 40960;
# Enable regex JIT compiler
pcre_jit on;
http {
# TCP optimisation
sendfile on;
tcp_nodelay on;
tcp_nopush on;
# Configure keep alive
keepalive_requests 1000;
keepalive_timeout 120s 120s;
# Configure SPDY
spdy_headers_comp 2;
# Configure global PHP cache
fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=xxx:100m inactive=24h;
# Enable open file caching
open_file_cache max=10000 inactive=120s;
open_file_cache_valid 120s;
open_file_cache_min_uses 5;
open_file_cache_errors off;
server {
server_name xxx;
listen 8080;
# Send all dynamic content requests to the main app handler
if (!-f $document_root$uri) {
rewrite ^/(.+) /index.php/$1 last;
rewrite ^/ /index.php last;
}
# Proxy PHP requests to php-fpm
location ~ [^/]\.php(/|$) {
# Enable caching
fastcgi_cache xxx;
# Only cache GET and HEAD responses
fastcgi_cache_methods GET HEAD;
# Caching is off by default and can only be enabled using Cache-Control response headers
fastcgi_cache_valid 0;
# Allow only one identical request to be forwarded (others will get a stale response)
fastcgi_cache_lock on;
# Define conditions for which stale content will be returned
fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
# Define cache key to uniquely identify cached objects
fastcgi_cache_key "$scheme$request_method$host$request_uri";
# Add a header to response to indicate cache results
add_header X-Cache $upstream_cache_status;
# Configure standard server parameters
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
# php-fpm config
fastcgi_param SCRIPT_URL $fastcgi_path_info;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param REMOTE_USER $remote_user;
# Read buffer sizes
fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
# Keep connection open to enable keep-alive
fastcgi_keep_conn on;
# Proxy to PHP
fastcgi_pass unix:/var/run/php-fpm/fpm.sock;
}
}
}
Now that I look at this, could the open_file_cache parameters be affecting the cache files?
Any ideas?
No, the OS does not cache files.
However, the reason this might be happening is that files are not actually fully deleted until both the link count and the number of processes holding the file open go down to zero.
The unlink(2) manual page, which documents the system call used by tools like rm, reads as follows:
The unlink() function removes the link named by path from its directory and decrements the link count of the file which was referenced by the link. If that decrement reduces the link count of the file to zero, and no process has the file open, then all resources associated with the file are reclaimed. If one or more processes have the file open when the last link is removed, the link is removed, but the removal of the file is delayed until all references to it have been closed.
Depending on the system, you can actually still recover such open files fully without any data loss, for example, see https://unix.stackexchange.com/questions/61820/how-can-i-access-a-deleted-open-file-on-linux-output-of-a-running-crontab-task.
So, indeed, open_file_cache would effectively preclude your deletion from having any effect within the processes that still have relevant file descriptors in their cache. You may want to use a shorter open_file_cache_valid if urgent purging after deletion is very important to you.
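To confirm this on a running system, you can look for cache files that have been unlinked but are still held open by nginx workers, for example:
# Open files whose on-disk link count has dropped below one (i.e. deleted but still open)
lsof +L1 | grep /var/nginx/cache
# Or inspect a worker's descriptors directly (replace <worker-pid>)
ls -l /proc/<worker-pid>/fd | grep deleted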

Nginx proxy cache invalidating request to s3

I have set up a proxy from my application server to a private S3 bucket to cache requests. I was having some trouble with it where S3 was rejecting my download requests (403 Forbidden), and after some experimentation it seems that disabling caching allows the valid request to go through. But the entire purpose of the proxy is to be a cache. I guess the proxy is altering the request in some way, but I don't understand how. Does anyone have any insight into how enabling caching in nginx alters requests, and whether there is some way to overcome this?
Here is the relevant config.
http {
proxy_cache_path /home/cache levels=1:2 keys_zone=S3_CACHE:10m inactive=24h max_size=500m;
proxy_temp_path /home/cache/tmp;
server {
server_name my-cache-server.com;
listen 80;
proxy_cache S3_CACHE;
location / {
proxy_buffering on;
proxy_pass http://MY_BUCKET.s3.amazonaws.com/;
proxy_pass_request_headers on;
}
}
}
It works if I remove the line proxy_cache S3_CACHE;
Here is the difference between the nginx access logs with proxy_cache disabled vs. enabled. In the first case the headers are passed and accepted, and then a GET request is made that returns the images. In the second case (with cache enabled) the headers are sent and then rejected, resulting in a 403 error, which stops the performance.vidigami.com test server from running.
WORKING...
MY_IP - - [09/Nov/2014:23:19:04 +0000] "HEAD https://MY_BUCKET.s3.amazonaws.com/Test%20image.jpg
HTTP/1.1" 200 0 "-" "aws-sdk-nodejs/2.0.23 darwin/v0.10.32"
MY_IP - - [09/Nov/2014:23:19:04 +0000] "GET https://MY_BUCKET.s3.amazonaws.com/Test%20image.jpg
HTTP/1.1" 200 69475 "-" "aws-sdk-nodejs/2.0.23 darwin/v0.10.32"
NOT WORKING...
MY_IP - - [09/Nov/2014:23:20:08 +0000] "HEAD https://MY_BUCKET.s3.amazonaws.com/Test%20image.jpg
HTTP/1.1" 403 0 "-" "aws-sdk-nodejs/2.0.23 darwin/v0.10.32"
If AWS S3 rejects requests (HTTP 403), the origin call is invalid; this is not a caching or Nginx problem. In your case Nginx itself accesses S3 over HTTP (port 80), so make sure your S3 URL is meant to be accessed without HTTPS. Otherwise, change proxy_pass to https://...
The proxy_pass_request_headers directive is not required, and proxy buffering is on by default. It's highly recommended to enable access/error logs.
To use HTTP/1.1 keep-alive with the backend and perform caching, use the following directives:
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host 'MY_BUCKET.s3.amazonaws.com';
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache S3_CACHE;
proxy_cache_valid 200 24h;
proxy_cache_valid 403 15m;
proxy_cache_bypass $http_cache_purge;
add_header X-Cached $upstream_cache_status;
proxy_pass http://MY_BUCKET.s3.amazonaws.com/;
access_log s3.access.log;
error_log s3.error.log;
}
Cache invalidation works via the Cache-Purge HTTP header, and the X-Cached header shows MISS or HIT depending on whether the response came from the upstream or from the cache. To perform cache invalidation just do:
curl -I 'http://your_server.com/file' -H 'Cache-Purge: 1'
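And to see whether a request was served from the cache at all, the X-Cached header added above can be checked directly; the first request should show MISS and a repeat should show HIT:
curl -s -D - -o /dev/null 'http://your_server.com/file' | grep -i x-cached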
It's important to choose the appropriate S3 endpoint to avoid a DNS redirect:
us-east-1 s3.amazonaws.com
us-west-2 s3-us-west-2.amazonaws.com
us-west-1 s3-us-west-1.amazonaws.com
eu-west-1 s3-eu-west-1.amazonaws.com
eu-central-1 s3.eu-central-1.amazonaws.com
ap-southeast-1 s3-ap-southeast-1.amazonaws.com
ap-southeast-2 s3-ap-southeast-2.amazonaws.com
ap-northeast-1 s3-ap-northeast-1.amazonaws.com
sa-east-1 s3-sa-east-1.amazonaws.com

Nginx + PHP-FPM is very slow on Mountain Lion

I have set up Nginx with PHP-FPM on my MacBook running Mountain Lion. It works fine, but it takes between 5 and 10 seconds to connect when I load a page in my browser. Even the following PHP script:
<?php
die();
takes about 5 seconds to connect. I am using Chrome and I get the "Sending request" message in the status bar for around 7 seconds. If I refresh again it seems to work instantly, but if I leave it for around 10 seconds it will "sleep" again. It is as if nginx or PHP is going to sleep and then taking ages to wake up again.
Edit: This is also affecting static files on the server so it would seem to be an issue with DNS or nginx.
Can anyone help me figure out what is causing this?
nginx.conf
worker_processes 2;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type text/plain;
server_tokens off;
sendfile on;
tcp_nopush on;
keepalive_timeout 1;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css text/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;
index index.html index.php;
upstream www-upstream-pool{
server unix:/tmp/php-fpm.sock;
}
include sites-enabled/*;
}
php-fpm.conf
[global]
pid = /usr/local/etc/php/var/run/php-fpm.pid
; run in background or in foreground?
; set daemonize = no for debugging
daemonize = yes
include=/usr/local/etc/php/5.4/pool.d/*.conf
pool.conf
[www]
user=matt
group=staff
pm = dynamic
pm.max_children = 10
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500
listen = /tmp/php-fpm.sock
;listen = 127.0.0.1:9000
php_flag[display_errors] = off
sites-available/cft
server {
listen 80;
server_name cft.local;
root /Users/matt/Sites/cft/www;
access_log /Users/matt/Sites/cft/log/access.log;
error_log /Users/matt/Sites/cft/log/error.log;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
include fastcgi_php_default.conf;
}
fastcgi_php_default.conf
fastcgi_intercept_errors on;
location ~ \.php$
{
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
fastcgi_read_timeout 300;
fastcgi_pass www-upstream-pool;
fastcgi_index index.php;
}
fastcgi_param REDIRECT_STATUS 200;
One reason could be - as already suspected above - that your server works perfectly but there is something wrong with DNS lookups.
Such long times are usually caused by a try that times out, followed by a retry via another route that works and whose result is then cached.
Caching of the working lookup would explain why your second HTTP request is fast.
I am almost sure that this is caused by a DNS lookup which tries to query an unreachable service, gives up after a timeout period, then tries a working service and caches the result for a couple of minutes.
Apple has recently made a significant change in how the OS handles requests for ".local" name resolution that can adversely affect Active Directory authentication and DFS resolution.
When processing a ".local" request, the Mac OS now sends a Multicast DNS (mDNS) or broadcast, then waits for that request to timeout before correctly sending the information to the DNS server. The delay caused by this results in an authentication failure in most cases.
http://www.thursby.com/local-domain-login-10.7.html
They are offering to set the timeout to the smallest possible value, which apparently is still 1 second - not really satisfying.
I suggest using localhost or 127.0.0.1, or trying http://test.dev as a local domain:
/etc/hosts
127.0.0.1 localhost test.dev
EDIT
In OS X, .local really does seem to be a reserved TLD for LAN devices. Using another domain as suggested above will definitely solve this problem:
http://support.apple.com/kb/HT3473
EDIT 2
Found this great article which exactly describes your problem and how to solve it
http://www.dmitry-dulepov.com/2012/07/os-x-lion-and-local-dns-issues.html?m=1
I can't see anything in your configuration that would cause this behaviour alone. Since the configuration of Nginx looks OK, and this affects both static and CGI requests, I would suspect it is a system issue.
An issue that might be worth investigating is whether the problem is being caused by IPv6 negotiation on your server.
If you are using loopback (127.0.0.1) as your listen address, have a look in /etc/hosts and ensure that the following lines are present:
::1 localhost6.localdomain6 localhost6
::1 site.local cft.local
If this doesn't resolve the issue, I'm afraid you'll need to look at more advanced diagnostics, perhaps using strace or similar on the Nginx instance.
(this started as a comment but it's getting a bit long)
There's something very broken here, but I don't see anything obvious in your config to explain it. I'd start by looking at top and netstat while the request is in progress, and checking your logs (web server and system) after the request has been processed. If that still reveals nothing, the next stop would be to capture all the network traffic - the most likely causes for such a long delay are failed ident/DNS lookups.
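A couple of concrete commands to run while a slow request is in flight (the hostname is the one from the question):
# How long does a single request really take end to end?
time curl -s -o /dev/null http://cft.local/
# Is name resolution the slow part? Query the OS X resolver directly
dscacheutil -q host -a name cft.local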
Barring any configuration-related issues, it may be a compile issue.
I would advise that you install nginx from Homebrew, the OS X open source package manager.
If you don't yet have Homebrew (and every developer should!), you can install it by running this in a terminal:
ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
Then run
brew install nginx
And nginx will be deployed from the Homebrew repository. Obviously, you'll want to make sure that you remove your old copy of nginx first.
The reason I recommend this is that homebrew has a number of OS X-specific patches for each open source library that needs it and is heavily used and tested by the community. I've used nginx from homebrew in the past on OS X and had no problems to speak of.

Nginx 502 Bad Gateway error ONLY in Firefox

I am running a website locally; all the traffic is routed through Nginx, which dispatches requests for PHP pages to Apache and serves static files itself. It works perfectly in Chrome, Safari, IE, etc.
However, whenever I open the website in Firefox I get the following error:
502 Bad Gateway
nginx/0.7.65
If I clear out the cache and cookies and then restart Firefox, I am able to load the site once or twice before the error returns. I've tried both Firefox 3.6 and 3.5 and both have the same problem.
Here is what my Nginx config looks like:
worker_processes 2;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name local.mysite.amc;
root /Users/joshmaker/Sites/mysite;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://local.mysite.amc:8080;
}
include /opt/local/etc/nginx/rewrite.txt;
}
server {
include /opt/local/etc/nginx/mime.types;
listen 80;
server_name local.static.mysite.amc;
root /Users/joshmaker/Sites/mysite;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
}
}
And here are the errors that Firefox generates in my error.log file:
[error] 11013#0: *26 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream
[error] 11013#0: *30 upstream sent too big header while reading response header from upstream
[error] 11013#0: *30 no live upstreams while connecting to upstream
I am completely at a loss as to why a browser would cause a server error. Can someone help?
I seem to have found a workaround that fixed my problem. After some additional Google research, I added the following lines to my Nginx config:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
However, I still don't know why this worked and why only Firefox seemed to have problems. If anyone can shed light on this, or offer a better solution, it would be much appreciated!
If you have FirePHP, disable it. Big headers cause problems in nginx's communication with PHP.
Increasing the size of your proxy buffers solves this issue. Firefox allows large cookies (up to 4k each) that are attached to every request. The Nginx default config has small buffers (only 4k). If your traffic uses big cookies, you will see the error "upstream sent too big header while reading response header" in your nginx error log, and Nginx will return an HTTP 502 error to the client. What happened is that Nginx ran out of buffer space while parsing and processing the request.
To solve this, change your nginx.conf file
proxy_buffers 8 16k;
proxy_buffer_size 32k;
-or-
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
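To test the large-cookie theory without Firefox, you can send an oversized Cookie header by hand and see whether the same 502 comes back (whether this reproduces it depends on how the backend handles the headers; the snippet assumes bash):
# Send a ~6KB cookie and print only the status code
curl -s -o /dev/null -w '%{http_code}\n' -H "Cookie: test=$(printf 'a%.0s' {1..6000})" http://local.mysite.amc/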
Open /etc/nginx/nginx.conf and add the following lines into the http section:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
This fix worked for me in a CodeIgniter web application. Read more at http://www.adminsehow.com/2012/01/fix-nginx-502-bad-gateway-error/
