GitLab Client in Login Redirect Loop - macOS

I have been doing some work to update our GitLab servers. Somewhere along the line, something in the configuration changed and now I can't access the web client. The backend starts up correctly, and when I run rake gitlab:check everything comes back green. The same goes for nginx; as far as I can tell it is working correctly. When I try to go to the landing page in the browser, though, I keep getting an error about 'too many redirects'.
Looking at the browser console, I can see that it repeatedly tries to redirect to the login page until the browser gives up and throws an error. I did some looking around, and most of the answers seem to involve going to the login page directly and then changing the landing page from the admin settings. When I tried that, I got the same problem: apparently every page on my domain redirects to the login page, leaving me with an infinite loop.
I'm also seeing some potentially related errors in the nginx logs. When I try to hit the sign-in page, the error log shows
open() "/usr/local/Cellar/nginx/1.15.9/html/users/sign_in" failed (2: No such file or directory)
Is that even the correct directory for the GitLab HTML views? If not, how do I change it?
Any help on this would be greatly appreciated.
Environment:
OS X 10.11.6 (El Capitan)
GitLab 8.11
nginx 1.15.9
My config files (I have removed some commented-out lines to save space):
nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include servers/*;
}
nginx/servers/gitlab
upstream gitlab-workhorse {
    server unix:/Users/git/gitlab/tmp/sockets/gitlab-workhorse.socket fail_timeout=0;
}
server {
    listen 0.0.0.0:8081;
    listen [::]:8081;
    server_name git.my.server.com; ## Replace this with something like gitlab.example.com
    server_tokens off; ## Don't show the nginx version number, a security best practice

    ## See app/controllers/application_controller.rb for headers set

    ## Individual nginx logs for this GitLab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        client_max_body_size 0;
        gzip off;

        ## https://github.com/gitlabhq/gitlabhq/issues/694
        ## Some requests take more than 30 seconds.
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        proxy_http_version 1.1;

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_pass http://gitlab-workhorse;
    }
}

I finally found the answer after several days of digging. At some point my default config file (/etc/default/gitlab) got changed. For whatever reason, my text editor decided to split gitlab_workhorse_options onto two lines. As a result, GitLab was missing the arguments for the auth socket and document root and was just using the default values. If that wasn't bad enough, the line split started on a $ character, so it looked like nano was just doing a word wrap.
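For anyone who hits the same thing: the workhorse options in /etc/default/gitlab have to stay on a single line. As a rough sketch (modelled on the example defaults that ship with GitLab 8.x source installs; the exact paths, backend port and umask are assumptions and will differ per install), the restored line should look something like:

# /etc/default/gitlab -- keep this on ONE line, however long it gets
gitlab_workhorse_options="-listenUmask 0 -listenNetwork unix -listenAddr $app_root/tmp/sockets/gitlab-workhorse.socket -authBackend http://127.0.0.1:8080 -authSocket $socket_path -documentRoot $app_root/public"

With the tail of that line gone, workhorse falls back to its built-in defaults for the auth socket and document root, which would explain why GitLab itself reported healthy while every page bounced into the sign-in redirect.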

Related

nginx prod setup for Clojure WebSocket app

I'm trying to deploy my first Clojure WebSocket app and I think I'm getting close. I get a good response locally, and it looks like the endpoint wants to face the outside world (I can see that the port is open when I run netstat), but I get no response from outside. I'm certain I have something set up incorrectly in nginx.
I already host a few other websites on this server; I just want to add the config needed to get requests made to wss://domain.com:8001 through to my app.
Here is the location entry I'm using now:
location / {
    proxy_pass http://localhost:8001;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    access_log /var/www/logs/test.access.log;
    error_log /var/www/logs/test.error.log;
}
Could anyone help point me in the right direction? My guess is that I actually have too much in the config, and what's there is probably not correct.
EDIT: For interested parties, I put up my working config (based on Erik Dannenberg's answer) in a gist.
You are missing two more headers; a minimal working config:
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    # add the two below
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection upgrade;
    # optional, but helpful if you run into timeouts
    proxy_read_timeout 86400;
}
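For context, here is a sketch of how that location could sit inside a full server block when nginx terminates TLS for wss:// itself. The listen port, certificate paths and backend port are placeholders (nginx and the app can't both bind 8001 on the same interface), and it passes the client's Upgrade header through via $http_upgrade, as the nginx docs do, instead of hardcoding it:

server {
    listen 8001 ssl;
    server_name domain.com;

    ssl_certificate     /etc/nginx/ssl/domain.com.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/domain.com.key;   # placeholder

    location / {
        proxy_pass http://localhost:9001;                # assumed app port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;          # forward the client's Upgrade header
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;                        # keep idle WebSocket connections open
    }
}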

Django media files not served in production but serving in development (localhost)?

Django media files are served in development but not in production. Whatever image I upload through the Django admin shows up on the site on localhost, but when I put the site live on DigitalOcean it doesn't display. How can I solve this issue? My website URL: http://139.59.56.161 (click on the book test menu).
Resurrecting a long-dead question which was all I could find to help me out here; recording my answer for posterity. My "production" environment uses nginx as a reverse proxy in front of uwsgi hosting my Django application. The answer is that Django simply does not serve files in production; instead, you should configure your web server to do that.
Django's documentation is slightly unhelpful here: it talks about static files at length and then says, in effect, 'media files: same.'
So I believe it's best to catch file requests up front, in my case in nginx, to reduce double-handling, and because your front-end web server is the most optimised for the job.
To do this:
Within a server block in your /etc/nginx/sites-available/[site.conf], define the webroot (the directory on your server's file system that covers everything) with the declaration 'root [dir]'.
server {
    listen 80;
    server_name example.com www.example.com;
    root /srv/;
This next block tells nginx to send all the traffic to the uwsgi service running Django - I lifted it holus bolus from an example, probably on digitalocean.com.
    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite6.sock;
    }
Now, here are the blocks we need to serve files when they are requested. try_files attempts to serve $uri and then $uri/. It would also be a good idea to put a file like 'resource_not_found.html' in /srv and set it as the last fallback for try_files, so the user knows that this part has been unintentionally left blank (there's a sketch of that after the block below).
    location /static/ {
        try_files $uri $uri/ ;
    }
    location /media/ {
        try_files $uri $uri/ ;
    }
}
That concludes our server block for http, hence the extra close "}".
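As a sketch of that fallback idea (resource_not_found.html is just the example name from above; you would create that file under /srv yourself, and do the same for location /media/):

location /static/ {
    # file, then directory, then a friendly "nothing here" page served from the /srv root
    try_files $uri $uri/ /resource_not_found.html;
}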
Alternatively, you can have uwsgi do it by setting 'static-map' or 'static-map2'. 'static-map' "eats" the mapped URL part, whereas 'static-map2' adds it.
static-map /files=/srv/files
means a request for /files/funny.gif will serve /srv/files/funny.gif.
static-map2 /files=/srv
will do the same thing, because it will take a request for /files/funny.gif and look for /srv/files/funny.gif. As per the uwsgi docs, you can create as many of these mappings as you want, even to the same URI, and they will be checked in order of appearance. Damnit, I've just now finally found the docs for nginx open source.
uwsgi docs
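If you go the uwsgi route instead, here is a minimal sketch of what those mappings could look like in an ini-style uwsgi config (the paths, and the assumption that you configure uwsgi via an ini file at all, are mine, not from the original answer):

[uwsgi]
; serve /static/* and /media/* straight from disk, bypassing Django entirely
static-map = /static=/srv/static
static-map = /media=/srv/media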

Trouble getting a file from node.js using nginx reverse proxy

I have set up an nginx reverse proxy to node, essentially using the setup reproduced below:
upstream nodejs {
    server localhost:3000;
}
server {
    listen 8080;
    server_name localhost;
    root ~/workspace/test/app;
    location / {
        try_files $uri $uri/ @nodejs;
    }
    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel just fine to node with this setup, but afterward I poll for files that I cannot find when I make a client-side AJAX GET request to the node server (via this nginx proxy).
For example, for a client-side JavaScript request like .get('Users/myfile.txt'), the browser will look for the file on localhost:8080 but won't find it, because it's actually written to localhost:3000:
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?
Okay, I got it working. The setup in the nginx.conf file posted above is just fine. This was never an nginx problem; the problem was in my index.js file over on the node server.
When I got nginx to serve all the static files, I had commented out the following line in index.js:
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this, as I was pretty wrapped up in understanding nginx. My thinking at the time was: if nginx is serving static files, why would I need express to serve them? Without this line, however, express obviously won't serve any files at all.
Now, with express serving static files properly, nginx handles all the static files for the web app, node handles the files from the backend, and all is good.
Thanks to Keenan Lawrence for the guidance and AR7 for the config!
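For completeness: if you preferred nginx to serve those generated files directly instead of proxying the GET through to express, one option would be an alias onto the directory node writes into. A sketch (the filesystem path is a made-up example, and nginx needs an absolute path here rather than ~):

location /Users/ {
    # hypothetical absolute path to the directory the node app writes its files into
    alias /home/me/workspace/test/app/Users/;
}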

Nginx/Django Admin POST https only

I've got an Nginx/Gunicorn/Django server deployed on a CentOS 6 machine with only the SSL port (443) visible to the outside world. So unless the server is called with https://, you won't get any response. If you call it with http://domain:443, you'll merely get a 400 Bad Request message. Port 443 is the only way to hit the server.
I'm using Nginx to serve my static files (CSS, etc.) and all other requests are handled by Gunicorn, which is running Django at http://localhost:8000. Navigating to https://domain.com works just fine, as do links within the admin site, but when I submit a form in the Django admin, the https is lost on the redirect and I'm sent to http://domain.com/request_uri, which fails to reach the server. Even so, the POST action does work properly and the database is updated.
My configuration file is listed below. The location / section is where I feel like the solution should be found, but it doesn't seem like the proxy_set_header X-* directives have any effect. Am I missing a module or something? I'm running nginx/1.0.15.
Everything I can find on the internet points to X-Forwarded-Protocol https as though it should do something, but I see no change. I'm also unable to get debugging working on the remote server, though my next step may have to be compiling locally with debugging enabled to get some more clues. The last resort is to expose port 80 and redirect everything... but that requires some paperwork.
My nginx configure arguments: http://pastebin.com/Rcg3p6vQ
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /path/to/cert.crt;
    ssl_certificate_key /path/to/key.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    server_name example.com;
    root /home/gunicorn/project/app;
    access_log /home/gunicorn/logs/access.log;
    error_log /home/gunicorn/logs/error.log debug;

    location /static/ {
        autoindex on;
        root /home/gunicorn;
    }

    location / {
        proxy_pass http://localhost:8000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol https;
    }
}
Haven't had time yet to understand exactly what these two lines do, but removing them solved my problems:
proxy_redirect off;
proxy_set_header Host $host;
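My reading of why removing them helps (this is an interpretation, not something the answer states): with proxy_redirect off, nginx passes Django's Location: http://localhost:8000/... header back to the browser untouched, so the browser leaves https. With the directive removed, the default proxy_redirect rewrites that localhost:8000 prefix back to the location path, so the browser resolves the redirect against the https origin it is already on; and with the Host header left at nginx's default ($proxy_host), the backend's redirects really do carry that prefix for nginx to match. Spelled out explicitly, the equivalent would be something like this (example.com standing in for the real domain):

location / {
    proxy_pass http://localhost:8000/;
    # rewrite backend redirects onto the public https origin instead of passing them through
    proxy_redirect http://localhost:8000/ https://example.com/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}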

Nginx 502 Bad Gateway error ONLY in Firefox

I am running a website locally; all the traffic is routed through nginx, which dispatches requests for PHP pages to Apache and serves static files itself. It works perfectly in Chrome, Safari, IE, etc.
However, whenever I open the website in Firefox I get the following error:
502 Bad Gateway
nginx/0.7.65
If I clear out the cache and cookies and then restart Firefox, I am able to load the site once or twice before the error returns. I've tried both Firefox 3.6 and 3.5, and both have the same problem.
Here is what my Nginx config looks like:
worker_processes 2;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name local.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://local.mysite.amc:8080;
        }

        include /opt/local/etc/nginx/rewrite.txt;
    }
    server {
        include /opt/local/etc/nginx/mime.types;
        listen 80;
        server_name local.static.mysite.amc;
        root /Users/joshmaker/Sites/mysite;
        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;
    }
}
And here are the errors that Firefox generates in my error.log file:
[error] 11013#0: *26 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream
[error] 11013#0: *30 upstream sent too big header while reading response header from upstream
[error] 11013#0: *30 no live upstreams while connecting to upstream
I am completely at a loss as to why a browser would cause a server error. Can someone help?
I seem to have found a workaround that fixed my problem. After some additional Google research, I added the following lines to my Nginx config:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
However, I still don't know why this worked and why only Firefox seemed to have problems. If anyone can shed light on this, or offer a better solution, it would be much appreciated!
If you have FirePHP, disable it. Big headers cause problems in nginx's communication with PHP.
Increasing the size of your proxy buffers solves this issue. Firefox allows large cookies (up to 4k each) that are attached to every request. The Nginx default config has small buffers (only 4k). If your traffic uses big cookies, you will see the error "upstream sent too big header while reading response header" in your nginx error log, and Nginx will return an HTTP 502 error to the client. What happened is that Nginx ran out of buffer space while parsing and processing the request.
To solve this, change your nginx.conf file:
proxy_buffers 8 16k;
proxy_buffer_size 32k;
-or-
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
Open /etc/nginx/nginx.conf and add the following lines to the http section:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
This fix worked for me in a CI web application. Read more at http://www.adminsehow.com/2012/01/fix-nginx-502-bad-gateway-error/
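For reference, a sketch of where those directives end up; the surrounding http block is just a stock outline, and whether you need the proxy_* or fastcgi_* pair depends on whether nginx talks to the backend via proxy_pass or fastcgi_pass:

http {
    include       mime.types;
    default_type  application/octet-stream;

    # room for requests/responses with big headers (e.g. several 4k Firefox cookies)
    proxy_buffers     8 16k;
    proxy_buffer_size 32k;

    # ... your existing server blocks ...
}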
