I need to pass/set a custom header from an Nginx server block to the upstream application (as a request header, not a response header) so the app can detect the tenant.
I tried this with an Nginx proxy but it failed. Here is my config:
server {
    server_name app.another.com www.app.another.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header tenant-id 1001;
        proxy_pass ...domain.com;
        proxy_redirect ...domain.com ....app.another.com;
    }
}
Error:
"786 worker_connections are not enough while connecting to upstream"
I also changed worker_connections to 20000, but then it showed:
"... accept4() failed (24: Too many open files) ..."
and
"... socket() failed (24: Too many open files) while connecting to upstream ..."
After raising the open-file limit, the previous worker_connections error came back again.
I also tried without the proxy, but could not pass a custom header on the request (add_header only sets response headers):
location / {
    add_header tenant-id 10010;
    try_files $uri $uri/ /index.php?$query_string;
}
*** It's a Laravel-based application.
Issue fixed:
-- You must proxy_pass to the local host with the port, like: localhost:808 / 127.0.0.1:808 (if on the same IP). Proxying to the public domain sends each request back into the same server block, which is what exhausted the worker connections.
-- Laravel generates URLs with http (not https) because we proxy_pass to http://localhost.
To fix this, add URL::forceScheme('https') to AppServiceProvider, inside the boot method.
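Putting the fix together, here is a minimal sketch of the working config (the port 808 backend and the tenant-id value are taken from the question; adjust to your setup):

```nginx
server {
    server_name app.another.com www.app.another.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Custom request header the Laravel app reads to detect the tenant
        proxy_set_header tenant-id 1001;
        # Proxy to the local backend by host:port, NOT the public domain,
        # otherwise requests loop back into this same server block
        proxy_pass http://127.0.0.1:808;
    }
}
```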
I'm new to Nginx and EC2 and am trying to add some simple authentication as below. It's a one-page app, and I want to secure access to the page but not the tile server. With no authentication, all works well. With authentication as below, I get back an error saying:
http://map.domain.org.uk is requesting your username and password. The site says: “GeoServer Realm”
I think this is because I've set authentication for every location, and the tiles sit under that. How would I set it up to require authentication only for the equivalent of a landing page?
server {
    listen 80;
    listen [::]:80;
    root /var/www/domain.org.uk/public_html;
    index index.html;
    server_name domain.org.uk www.domain.org.uk map.domain.org.uk;
    access_log /var/log/nginx/domain.org.uk.access.log;
    error_log /var/log/nginx/domain.org.uk.error.log;
    # auth_basic "Server level Password required to proceed";
    # auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    location /geoserver/gwc/service/wmts {
        auth_basic off;
        # also tested without "auth_basic off;"
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080/geoserver/gwc/service/wmts;
    }
    location / {
        try_files $uri $uri/ =404;
        auth_basic "Location level Password required to proceed";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    }
}
Try running http://localhost:8080/geoserver/wms?request=GetCapabilities
Also, I think this is useful for you in this case: it uses the curl utility to issue HTTP requests that test authentication (install curl before proceeding).
Also, check /etc/nginx/sites-available/example.com here on Linode.
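To address the original question directly, here is an untested sketch (using the paths from the question) that prompts for a password only on the landing page itself, via an exact-match location, while everything else, including the tile endpoints, stays open:

```nginx
# Exact match: only "/" (the landing page) requires credentials
location = / {
    auth_basic "Location level Password required to proceed";
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    try_files /index.html =404;
}
# All other paths (including /geoserver/...) are served without auth
location / {
    try_files $uri $uri/ =404;
}
```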
Example
upstream appcluster {
    server linode.example.com:8801;
    server linode.example.com:8802 weight=1;
}
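For context, an upstream block like this is then referenced by name in a proxy_pass; a minimal sketch:

```nginx
server {
    listen 80;
    location / {
        # Requests are load-balanced across the two servers
        # declared in the "appcluster" upstream group above
        proxy_pass http://appcluster;
    }
}
```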
So I have a web app project on my server with 2 directories:
1) frontend - a Node.js app (Next.js) that I start with pm2.
2) api - the Laravel backend API.
I also have nginx installed and tried this as my server settings:
server {
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    listen 80;
    listen [::]:80;
    root /project/api/public;
    index index.php index.html index.htm;
    location / {
        try_files $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Now when I go to my web address, the frontend part works: when I go to example.com I see my web app (so the config for the proxy to port 3000 works).
However, the backend/api does not work. I think it does not route to the API when I perform a backend task, e.g.
http://example.com:3000/api/auth/login/ should go to my Laravel app.
"/api/..." holds the backend endpoints.
I think I need to somehow define a location /api {} that goes to the Laravel app.
Any help getting this working would be appreciated.
You have 2 "location /" definitions... One is clobbering the other.
Try using the following for your second definition:
location /api {
    try_files $uri/ /index.php?$query_string;
}
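For completeness, the whole server block might then look like this (a sketch based on the question's config, with the duplicate location / resolved; paths and socket are the asker's):

```nginx
server {
    listen 80;
    listen [::]:80;
    root /project/api/public;
    index index.php index.html index.htm;

    # Next.js frontend served by pm2 on port 3000
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # Laravel API: hand /api/* requests to index.php
    location /api {
        try_files $uri /index.php?$query_string;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```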
Django serves media files in development but not in production. Whatever image I upload through the Django admin is served on the website on localhost, but when I put my site live on DigitalOcean it is not displayed. How do I solve this issue, can anyone tell me? My website URL: http://139.59.56.161 (click on the "book test" menu).
Resurrecting a long-dead question which was all I could find to help me out here. Recording my answer for posterity. My "production" environment uses nginx as a reverse proxy in front of uWSGI hosting my Django application. The solution is that Django simply does not serve files in production; instead you should configure your web server to do that.
The Django docs are slightly unhelpful here, talking at length about static files and then saying 'media files: same.'
So I believe it's best to catch file requests up front, in my case in the nginx server, to reduce double-handling; your front-end web server is also the most optimised for the job.
To do this:
Within a server definition block in your /etc/nginx/sites-available/[site.conf], define the webroot (the directory on your server's file system that covers everything) with the declaration 'root [dir]':
server {
    listen 80;
    server_name example.com www.example.com;
    root /srv/;
This next block tells nginx to send all the traffic to the uwsgi service running django - I lifted it holus bolus from an example, probably on digitalocean.com.
    location / {
        # proxy_pass http://localhost:8000;  # conflicts with uwsgi_pass below;
        # a location can only have one *_pass content handler
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite6.sock;
    }
Now, here are the bits we need to serve files when they are requested. try_files attempts to serve $uri and then $uri/, and it would be a good idea to put a file like 'resource_not_found.html' in /srv and set it as the last fallback for try_files, so the user knows that this part has been unintentionally left blank.
    location /static/ {
        try_files $uri $uri/;
    }
    location /media/ {
        try_files $uri $uri/;
    }
}
That concludes our server block for http, hence the extra close "}".
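Following the fallback-page suggestion above, the static and media locations could be sketched like this (assuming a resource_not_found.html placed in /srv):

```nginx
location /static/ {
    # Serve the file if it exists; otherwise show a friendly placeholder
    try_files $uri $uri/ /resource_not_found.html;
}
location /media/ {
    try_files $uri $uri/ /resource_not_found.html;
}
```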
Alternatively, you can get uwsgi doing it by setting 'static-map' or 'static-map2'. 'static-map' "eats" the mapped url part, whereas static-map2 adds it.
static-map /files=/srv/files
means a request for /files/funny.gif will serve /srv/files/funny.gif (the /files part of the URL is "eaten" and the remainder appended to /srv/files).
static-map2 /files=/srv
will do the same thing, because it will take a request for /files/funny.gif and look for /srv/files/funny.gif. As per the uwsgi docs, you can create as many of these mappings as you want, even to the same uri, and they will be checked in order of appearance. Damnit, I've just now finally found the docs for nginx open source.
uwsgi docs
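As a sketch, the equivalent entries in a uwsgi ini file would look like this (paths are the hypothetical ones from the examples above):

```ini
[uwsgi]
; /files is stripped, remainder appended to /srv/files
static-map = /files=/srv/files
; /files is kept and appended whole to /srv
static-map2 = /files=/srv
```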
Is there a way to avoid the CORS issue using nginx? One of my applications runs on the Netty server that comes with the Play framework, using joc.lan as its domain name. The other application is on a PHP web server; I have integrated it into my application, which loads it in an iframe, and it uses chat.joc.lan, a subdomain of joc.lan, as its domain name.
So, when either of my applications tries to access data from the other, the error I get on the console is:
Uncaught SecurityError: Blocked a frame with origin
"http://chat.joc.lan" from accessing a frame with origin
"http://joc.lan". Protocols, domains, and ports must match.
I resolved this error by setting document.domain in both applications to the main domain name, joc.lan.
And for AJAX requests I am using JSONP.
I had read somewhere that it's not supported on Firefox and IE.
The first one is for my main application, joc.lan:
server {
    listen 80;
    server_name joc.lan;
    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support (nginx 1.4)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
The second one I am integrating inside joc.lan using an iframe:
server {
    listen 80;
    server_name chat.joc.lan;
    root /opt/apps/flyhi/chat;
    index index.php;
    # caching for images and disable access log for images
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml|ttf|eot)$ {
        access_log off;
        expires 360d;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9011
    location ~ \.php {
        fastcgi_pass 127.0.0.1:9011;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        access_log off;
    }
    location / {
        try_files $uri $uri/ /index.php?r=$request_uri;
    }
}
I am not sure, but you can set parameters in the nginx config file to allow CORS in all browsers.
This link can help: it gives an nginx config file that allows CORS.
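As a sketch of that approach (the origin and the allowed methods/headers here are illustrative, based on the domains above), the chat.joc.lan server could send CORS headers allowing the main domain:

```nginx
location / {
    # Allow the main application's origin to read responses from this subdomain
    add_header Access-Control-Allow-Origin "http://joc.lan" always;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
    # Answer CORS preflight requests without hitting the backend
    if ($request_method = OPTIONS) {
        return 204;
    }
    try_files $uri $uri/ /index.php?r=$request_uri;
}
```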
I've got an Nginx/Gunicorn/Django server deployed on a Centos 6 machine with only the SSL port (443) visible to the outside world. So unless the server is called with the https://, you won't get any response. If you call it with an http://domain:443, you'll merely get a 400 Bad Request message. Port 443 is the only way to hit the server.
I'm using Nginx to serve my static files (CSS, etc.) and all other requests are handled by Gunicorn, which is running Django at http://localhost:8000. So, navigating to https://domain.com works just fine, as do links within the admin site, but when I submit a form in the Django admin, the https is lost on the redirect and I'm sent to http://domain.com/request_uri which fails to reach the server. The POST action does work properly even so and the database is updated.
My configuration file is listed below. The location / section is where I feel the solution should be found. But the proxy_set_header X-* directives don't seem to have any effect. Am I missing a module or something? I'm running nginx/1.0.15.
Everything I can find on the internet points to X-Forwarded-Protocol https as if it should do something, but I see no change. I'm also unable to get debugging working on the remote server, though my next step may have to be compiling locally with debugging enabled to get more clues. The last resort is to expose port 80 and redirect everything... but that requires some paperwork.
[My nginx configure arguments](http://pastebin.com/Rcg3p6vQ)
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /path/to/cert.crt;
    ssl_certificate_key /path/to/key.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    server_name example.com;
    root /home/gunicorn/project/app;
    access_log /home/gunicorn/logs/access.log;
    error_log /home/gunicorn/logs/error.log debug;
    location /static/ {
        autoindex on;
        root /home/gunicorn;
    }
    location / {
        proxy_pass http://localhost:8000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol https;
    }
}
Haven't had time yet to understand exactly what these two lines do, but removing them solved my problems:
proxy_redirect off;
proxy_set_header Host $host;
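A likely explanation, for posterity (my reading of the nginx docs, not verified against this exact setup): with proxy_redirect off, nginx passes the backend's Location: http://localhost:8000/... redirect headers to the browser untouched, which is how the https scheme was being lost. Removing that line restores the default proxy_redirect behaviour, roughly equivalent to:

```nginx
location / {
    proxy_pass http://localhost:8000/;
    # Default behaviour when proxy_redirect is not disabled:
    # "Location: http://localhost:8000/..." from the backend is rewritten
    # to a relative "/..." redirect, so the browser stays on https
    proxy_redirect http://localhost:8000/ /;
}
```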