Geoserver NGINX configuration - amazon-ec2

I'm new to Nginx and EC2 and am trying to add some simple authentication, as below. It's a one-page app and I want to secure access to the page but not the tile server. With no authentication everything works well. With authentication as below, I get a prompt saying:
http://map.domain.org.uk is requesting your username and password. The site says: “GeoServer Realm”
I think this is because I've set authentication for every location, and the tiles sit under that. How would I set it up to require authentication only for the equivalent of a landing page?
server {
    listen 80;
    listen [::]:80;

    root /var/www/domain.org.uk/public_html;
    index index.html;
    server_name domain.org.uk www.domain.org.uk map.domain.org.uk;

    access_log /var/log/nginx/domain.org.uk.access.log;
    error_log /var/log/nginx/domain.org.uk.error.log;

    # auth_basic "Server level Password required to proceed";
    # auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

    location /geoserver/gwc/service/wmts {
        auth_basic off;
        # also tested without auth_basic off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080/geoserver/gwc/service/wmts;
    }

    location / {
        try_files $uri $uri/ =404;
        auth_basic "Location level Password required to proceed";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    }
}

Try running http://localhost:8080/geoserver/wms?request=GetCapabilities to confirm that GeoServer itself is responding.
The curl utility is also useful here for issuing HTTP requests that test authentication; install curl before proceeding.
Also, check the /etc/nginx/sites-available/example.com example in Linode's documentation.
Example
upstream appcluster {
    server linode.example.com:8801;
    server linode.example.com:8802 weight=1;
}
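One detail worth noting: nginx's auth_basic prompt would show the realm string configured in the directive, so a prompt naming "GeoServer Realm" suggests the 401 challenge is coming from GeoServer itself, proxied back through nginx. Separately, restricting nginx's own authentication to just the landing page can be sketched with an exact-match location; this is a minimal sketch built on the question's config, and the assumption that the landing page is index.html served at / is mine:

```nginx
server {
    listen 80;
    server_name map.domain.org.uk;
    root /var/www/domain.org.uk/public_html;
    index index.html;

    # Exact-match location: auth applies only to the landing page itself.
    location = / {
        auth_basic "Password required to proceed";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
        try_files /index.html =404;
    }

    # Tile requests bypass auth entirely; nothing is inherited here
    # because auth_basic is no longer set at server level.
    location /geoserver/gwc/service/wmts {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080/geoserver/gwc/service/wmts;
    }
}
```

With auth set only at the exact-match location, requests under /geoserver/ never see the password prompt from nginx.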

Related

Laravel Forge default site

We use Laravel Forge on a Load Balancer to handle a lot of sites on there. We always had one of the sites as a default/catch-all when a domain is pointed at us with no site conf set. Recently, that site's SSL expired. Took us a little bit but we got it back. Ever since then though, it has stopped being the catch-all. So if a site isn't pointing right, the invalid domain gets redirected to the first site in the list.
Here's an nginx conf for a site that redirects to the first server in the list:
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/www.accuproadvisors.com/before/*;
# FORGE CONFIG (DO NOT REMOVE!)
include upstreams/www.accuproadvisors.com;

server {
    listen 80;
    listen [::]:80;

    server_name www.accuproadvisors.com accuproaccounting.com;
    server_tokens off;

    # FORGE SSL (DO NOT REMOVE!)
    # ssl_certificate
    # ssl_certificate_key

    ssl_protocols TLSv1.2;
    charset utf-8;

    access_log off;
    error_log /var/log/nginx/www.accuproadvisors.com-error.log error;

    # FORGE CONFIG (DO NOT REMOVE!)
    include forge-conf/www.accuproadvisors.com/server/*;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://1127640_app/;
        proxy_redirect off;

        # Handle Web Socket Connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/www.accuproadvisors.com/after/*;
We have a 000-catch-all file and it contains:
server {
    listen 80;
    server_name _;

    root /home/forge/catch-all;
    index index.html index.htm;
    error_page 404 /404.html;

    location / { }

    # return 404;
}
The folder /home/forge/catch-all contains the default index.html that was always the default until the SSL expired. Anyone have any tips? Anything is appreciated. Thanks!
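For context on the behavior described: when no server_name matches a request, nginx hands it to the default server for that listen socket, and without an explicit default_server flag that is simply the first server block defined, which is why a regenerated or reordered site config can silently "steal" unmatched domains. A minimal sketch of pinning the catch-all explicitly (an assumption on my part that Forge leaves the custom 000-catch-all file untouched):

```nginx
server {
    # Mark this block as the explicit default for port 80, so site configs
    # that come first alphabetically can no longer catch unmatched domains.
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    root /home/forge/catch-all;
    index index.html index.htm;
    error_page 404 /404.html;

    location / { }
}
```

Since the problem started when an SSL certificate expired, the 443 socket may need its own default_server block too (with some certificate, even self-signed, since nginx requires one to accept the TLS handshake).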

Serve single application from multiple domain/subdomain in Nginx?

I need to pass/set a custom header from the Nginx block to the server (a request header, not a response header) to detect the tenant.
I have tried with an Nginx proxy but failed. Here is my code:
server {
    server_name app.another.com www.app.another.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header tenant-id 1001;
        proxy_pass ...domain.com ;
        proxy_redirect ...domain.com ....app.another.com;
    }
}
Error:
" 786 worker_connections are not enough while connecting to upstream "
I also changed worker_connections to 20000, but then it shows
" ... accept4() failed (24: Too many open files) ..."
and
"... socket() failed (24: Too many open files) while connecting to upstream ..."
After fixing that error, the previous one arises again.
I also tried without the proxy, but could not pass a custom header on the request (only on the response):
location / {
    add_header tenant-id 10010;
    try_files $uri $uri/ /index.php?$query_string;
}
*** It's a Laravel-based application.
Issue fixed:
-- You must proxy_pass to the local host (with port), e.g. localhost:808 / 127.0.0.1:808 (if on the same IP).
-- Laravel generates URLs with http (not https) because we proxy_pass to http://localhost. To fix this, add URL::forceScheme('https') to the boot method of AppServiceProvider.
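The notes above can be sketched as a config. Proxying to the site's own public domain name sends each request back through nginx itself, which loops until worker_connections and file descriptors run out; that matches both error messages quoted above. The local port 8080 below is a placeholder, not from the original post:

```nginx
server {
    server_name app.another.com www.app.another.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Custom request header the application reads to resolve the tenant.
        proxy_set_header tenant-id 1001;
        # Proxy to the local application port, not the public domain name;
        # proxying back to the same public name loops the request through
        # nginx again and exhausts worker_connections.
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Note that add_header (tried in the second attempt) only sets response headers, which is why it could not deliver the tenant id to the application; proxy_set_header is the request-side directive.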

Nginx fails to load static after putting caching configuration

I am trying to implement browser caching ("leverage browser caching") for my Flask project's nginx. But when I insert the caching block into the conf file, static files are no longer served by nginx; I get a 403 Permission Denied error.
This is my conf file for the site in sites-enabled:
server {
    listen 80;
    server_name site.in;

    root /root/site-demo/;

    access_log /var/log/site/access_log;
    error_log /var/log/site/error_log;

    location / {
        proxy_pass http://127.0.0.1:4000/;
        proxy_redirect http://127.0.0.1:4000 http://site.in;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public";
    }
}
When I remove the cache-expire part, everything works fine. I tried answers from similar questions and added a root directive, but the error remains the same. The user specified in nginx.conf is www-data. Do I have to change the user?
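A likely explanation (an assumption, not confirmed in the post): without the regex location, every request, static files included, is proxied to Flask, so nginx never touches the disk. Once the regex location matches, nginx serves those files itself from the configured root, and since the root lives under /root, which the www-data worker cannot traverse, the open fails with permission denied. A minimal sketch with the files moved somewhere world-readable (the /var/www path is a placeholder):

```nginx
server {
    listen 80;
    server_name site.in;

    # Moved out of /root: every directory component of the path must be
    # traversable (executable) by the nginx worker user (www-data).
    root /var/www/site-demo;

    location / {
        proxy_pass http://127.0.0.1:4000/;
        proxy_set_header Host $host;
    }

    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires 30d;
        add_header Cache-Control "public";
    }
}
```

Alternatively the existing path could be opened up with chmod o+x on /root, but relocating the project out of root's home directory is the more conventional fix.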

Nginx CORS issue

Is there a way around the CORS issue using nginx? One of my applications is running on the Netty server that comes with the Play Framework, using joc.lan as its domain name. The other application is on a PHP web server, which I have integrated into my application using an iframe; it uses chat.joc.lan, a subdomain of joc.lan.
So when either of my applications tries to access any data from the other, the error I get on the console is
Uncaught SecurityError: Blocked a frame with origin
"http://chat.joc.lan" from accessing a frame with origin
"http://joc.lan". Protocols, domains, and ports must match.
I resolved this error by setting document.domain on both applications to the main domain name, joc.lan.
And for AJAX requests I am using JSONP.
I had read somewhere that this is not supported in Firefox and IE.
The first one is for my main application, joc.lan:
server {
    listen 80;
    server_name joc.lan;

    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support (nginx 1.4)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
The second one I am integrating inside joc.lan using an iframe:
server {
    listen 80;
    server_name chat.joc.lan;

    root /opt/apps/flyhi/chat;
    index index.php;

    # caching for images and disable access log for images
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml|ttf|eot)$ {
        access_log off;
        expires 360d;
    }

    # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9011
    location ~ \.php {
        fastcgi_pass 127.0.0.1:9011;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?r=$request_uri;
    }
}
I am not sure, but you can set response headers in the nginx config file to allow CORS in all browsers; there are published nginx config examples for enabling it.
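The suggestion above can be sketched as follows. Two caveats: the quoted SecurityError is the frame same-origin policy, which CORS headers do not affect (document.domain, as used here, is the usual workaround for that), so these headers only help for XHR/fetch-style requests; and the origin value below is an assumption, not from the original post:

```nginx
location / {
    # Allow the main application's origin to make cross-origin requests
    # to this subdomain; add further headers/methods as needed.
    add_header Access-Control-Allow-Origin "http://joc.lan";
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
    add_header Access-Control-Allow-Headers "Content-Type, Authorization";

    # Answer CORS preflight requests directly.
    if ($request_method = OPTIONS) {
        return 204;
    }

    try_files $uri $uri/ /index.php?r=$request_uri;
}
```

With proper CORS headers in place, plain JSON responses can replace JSONP for the AJAX calls.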

Nginx/Django Admin POST https only

I've got an Nginx/Gunicorn/Django server deployed on a Centos 6 machine with only the SSL port (443) visible to the outside world. So unless the server is called with the https://, you won't get any response. If you call it with an http://domain:443, you'll merely get a 400 Bad Request message. Port 443 is the only way to hit the server.
I'm using Nginx to serve my static files (CSS, etc.) and all other requests are handled by Gunicorn, which is running Django at http://localhost:8000. So, navigating to https://domain.com works just fine, as do links within the admin site, but when I submit a form in the Django admin, the https is lost on the redirect and I'm sent to http://domain.com/request_uri which fails to reach the server. The POST action does work properly even so and the database is updated.
My configuration file is listed below. The location / section is where I feel the solution should be found, but it doesn't seem like the proxy_set_header X-* directives have any effect. Am I missing a module or something? I'm running nginx/1.0.15.
Everything I can find on the internet points to X-Forwarded-Protocol https as though it should do something, but I see no change. I'm also unable to get debugging working on the remote server, though my next step may have to be compiling locally with debugging enabled to get more clues. The last resort is to expose port 80 and redirect everything... but that requires some paperwork.
My nginx configure arguments: http://pastebin.com/Rcg3p6vQ
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /path/to/cert.crt;
    ssl_certificate_key /path/to/key.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    server_name example.com;
    root /home/gunicorn/project/app;

    access_log /home/gunicorn/logs/access.log;
    error_log /home/gunicorn/logs/error.log debug;

    location /static/ {
        autoindex on;
        root /home/gunicorn;
    }

    location / {
        proxy_pass http://localhost:8000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol https;
    }
}
Haven't had time yet to understand exactly what these two lines do, but removing them solved my problems:
proxy_redirect off;
proxy_set_header Host $host;
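One observation on the original config: it sends X-Forwarded-Protocol, but the header Django conventionally inspects (via its SECURE_PROXY_SSL_HEADER setting, which defaults to unset) is X-Forwarded-Proto. A minimal sketch of the proxy block using the conventional spelling, assuming SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') is added to settings.py so Django builds https:// redirect URLs even though Gunicorn sees plain http:

```nginx
location / {
    proxy_pass http://localhost:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # The header Django's SECURE_PROXY_SSL_HEADER setting is normally
    # configured to trust; tells Django the original request was https.
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

This is a sketch of the usual nginx-plus-Django convention, not a confirmed fix for the poster's exact setup.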
