I've been fighting with this for a while now and still cannot get it to run. I have nginx running on openSUSE Tumbleweed with a CodeIgniter application.
I get to the front page with no problem, but any attempt to open a controller fails with a 404 error.
When I was working on Ubuntu I was able to solve this within minutes with a little bit of googling, but here for some reason I am struggling. I am attaching my default config file for nginx. There must be something I am really not seeing here, since I got this working in the past. Is nginx on openSUSE really such a pain?
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html/;
index index.php;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /usr/share/nginx/html/;
index index.php index.html index.htm;
## Handling of CORS
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
#
# Om nom nom cookies
#
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
#
# Custom headers and headers various browsers *should* be OK with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#fastcgi_ignore_client_abort off;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
Right, I eventually found an answer to the problem. I pretty much completely erased the contents of the config file and replaced them with what I found on the nginx website (here).
The funny thing is, as you may notice, there are no longer any directives for handling CORS in the config file, yet all the content such as fonts, scripts, etc. works fine without a glitch on localhost, presumably because everything is same-origin there and no CORS headers are needed.
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
autoindex on;
index index.php;
location / {
try_files $uri $uri/ /index.php;
location = /index.php {
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
include fastcgi_params;
}
}
location ~ \.php$ {
return 444;
}
}
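By the way, the try_files fallback above is what actually routes the controller URLs to index.php. If your CodeIgniter install also relies on query-string parameters, a variation that forwards the query string as well could look like this (just a minimal sketch, assuming the same document root and the default index.php front controller):
location / {
    # hand anything that is not a real file or directory to the front
    # controller, keeping the original query string intact
    try_files $uri $uri/ /index.php$is_args$args;
}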
Hope that will be of use to someone ;)
Related
I have a subdomain at
https://numan-rest.allrestaurants.us/
However, if I try to open it over http, like
http://numan-rest.allrestaurants.us/
or with www, as www.numan-rest.allrestaurants.us,
I get redirected to the main domain at allrestaurants.us/.
I want to stay on the subdomain even if I request it over HTTP, without any protocol, or with www.
This is what my nginx conf looks like in /etc/nginx/sites-available/default:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /var/www/allrest/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
charset utf-8;
index index.html index.htm index.php;
# Enable nginx status page for Zabbix
location = /basic_status {
stub_status;
allow 127.0.0.1;
allow ::1;
deny all;
}
# Enable php-fpm status page for Zabbix
location ~ ^/(status|ping)$ {
## disable access logging for request if you prefer
access_log off;
## Only allow trusted IPs for security, deny everyone else
allow 127.0.0.1;
deny all;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
## Now the port or socket of the php-fpm pool we want the status of
# fastcgi_pass 127.0.0.1:9000;
fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_connect_timeout 3600;
fastcgi_send_timeout 3600;
fastcgi_read_timeout 3600;
fastcgi_buffering off;
}
location ~ /\.(?!well-known).* {
deny all;
}
location ~* \.(png|jpg|jpeg|gif|svg|ico)$ {
expires 30d;
add_header Cache-Control "public, no-transform";
}
error_log /var/log/nginx/allrest_error.log;
access_log /var/log/nginx/allrest_access.log;
}
And these are the Laravel routes:
Route::group(['domain' => '{subdomain}.' . config('allrest.app_domain')], function () {
Route::get('/', 'SubdomainController@show');
});
Route::group(['domain' => 'www.{subdomain}.' . config('allrest.app_domain')], function () {
Route::get('/', 'SubdomainController@show');
});
I tried adding
return 301 https://$host$request_uri;
at the end of the nginx conf, but that prevents accessing the website completely,
and Chrome gives the error "Too many redirects".
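For context, a blanket return 301 https://$host$request_uri; placed in the same server block that already handles the HTTPS traffic (or in a server sitting behind a TLS-terminating proxy or load balancer) also fires for requests that are already secure, which is exactly what produces that redirect loop. A minimal sketch, assuming TLS for the subdomain is terminated on this server itself, keeps the redirect in a separate plain-HTTP server block for that hostname only:
server {
    # plain-HTTP server for the subdomain: it only redirects, it never serves the app
    listen 80;
    listen [::]:80;
    server_name numan-rest.allrestaurants.us www.numan-rest.allrestaurants.us;
    return 301 https://numan-rest.allrestaurants.us$request_uri;
}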
This is how my DigitalOcean DNS rules look:
(screenshot: DigitalOcean DNS records)
There is no load balancer at DigitalOcean:
(screenshot: DigitalOcean load balancer)
However, I found some rules in the firewall settings; I don't know if they have something to do with this.
Here is the picture: (screenshot: DigitalOcean firewall rules)
How do I configure Orchid (and nginx if needed) so that Orchid loads the JavaScript and CSS files?
Under Ubuntu 18.04 running the Vesta control panel, Orchid does not load the JavaScript and CSS content at somesite.com/dashboard.
Since nginx properly serves the CSS and JavaScript at somesite.com/, it appears that the nginx conf is not the cause.
Any help would be greatly appreciated.
/home/user/conf/web/somesite.com.nginx.conf:
server {
listen some-server-ip:443 ssl;
server_name somedomain.com www.somedomain.com;
root /home/user/web/somedomain.com/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.php index.html index.htm;
access_log /var/log/nginx/domains/somedomain.com.log combined;
access_log /var/log/nginx/domains/somedomain.com.bytes bytes;
error_log /var/log/nginx/domains/somedomain.com.error.log error;
ssl_certificate /home/user/conf/web/ssl.somedomain.com.pem;
ssl_certificate_key /home/user/conf/web/ssl.somedomain.com.key;
location / {
# added the following line to allow nginx to recognize laravel dynamically created directories
try_files $uri $uri/ /index.php?$query_string;
location ~* ^.+\.(jpeg|jpg|png|gif|bmp|ico|svg|css|js)$ {
expires max;
}
location ~ [^/]\.php(/|$) {
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
fastcgi_pass 127.0.0.1:9002;
fastcgi_index index.php;
include /etc/nginx/fastcgi_params;
}
}
error_page 403 /error/404.html;
error_page 404 /error/404.html;
error_page 500 502 503 504 /error/50x.html;
location /error/ {
alias /home/user/web/somedomain.com/document_errors/;
}
location ~* "/\.(htaccess|htpasswd)$" {
deny all;
return 404;
}
location /vstats/ {
alias /home/user/web/somedomain.com/stats/;
include /home/user/conf/web/somedomain.com.auth*;
}
include /etc/nginx/conf.d/phpmyadmin.inc*;
include /etc/nginx/conf.d/phppgadmin.inc*;
include /etc/nginx/conf.d/webmail.inc*;
include /home/user/conf/web/snginx.somedomain.com.conf*;
}
Orchid relies entirely on Laravel settings:
server {
listen 80;
server_name example.com;
root /example.com/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
Pay attention to the documentation.
Solution: orchid.software/en/docs/installation > Publishing resources: php artisan orchid:link.
For my backend API application I have Laravel 6 with the Laravel Passport OAuth2 package. In my routes/web.php I'm using Auth::routes(); to register all the OAuth routes. My nginx config (running on an Amazon instance):
/etc/nginx/conf.d/app.conf
server {
server_name my-app-domain.net;
listen 80;
client_max_body_size 20M;
include /etc/nginx/default.d/*.conf;
root /var/www/app/public;
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.php;
location ~ /\. {
deny all;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
error_page 500 502 503 504 /index.php;
location ~* \.(?:ico|css|otf|gif|jpe?g|png)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public";
root /var/www/app/public;
}
location ~ \.php$ {
fastcgi_index index.php;
fastcgi_pass unix:/run/php-fpm/www.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
}
If I send a POST with grant_type and the other login credentials to http://<domain name>/oauth/token, I get this weird error:
"Symfony \ Component \ HttpKernel \ Exception \ MethodNotAllowedHttpException:
The GET method is not supported for this route. Supported methods: POST."
Aside from that, the other API requests are working fine, so it's probably not a CORS restriction.
However, if I run php artisan serve and send the POST to http://localhost:8080/oauth/token, it works as expected.
I was wrong about CORS. Even though I had installed the barryvdh/laravel-cors package and its settings were set to "*", CORS was still the issue, so I had to tweak nginx a bit:
server {
server_name my-app-domain.net;
listen 80;
client_max_body_size 20M;
include /etc/nginx/default.d/*.conf;
root /var/www/app/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Headers' '*' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
index index.html index.php;
charset utf-8;
.....
The barryvdh/laravel-cors package has to be removed as well.
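If preflight requests still give trouble with a setup like this, another option is answering OPTIONS directly in nginx, so the preflight never reaches PHP at all. This is only a minimal sketch, assuming the headers should mirror the ones already added at the server level:
location / {
    if ($request_method = 'OPTIONS') {
        # short-circuit the CORS preflight before it hits the application
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Length' 0;
        return 204;
    }
    try_files $uri $uri/ /index.php?$query_string;
}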
I have a Laravel site running nginx 1.15.0. The site config specifies HSTS (HTTP Strict Transport Security) headers at the server level. This works just fine for all valid URLs.
However, when requesting a resource that results in a 404, the HSTS header is not returned with the response. This is also true of other headers set by add_header in the server block.
What I'm trying to do is get the HSTS header included in all responses, even for errors. To be honest, it's mostly to satisfy the security scanners flagging it as a medium-level vulnerability. It may be security theater, but I'd still like to understand what's going on here.
With one explicitly-defined exception for .json URLs, there are no other add_header directives that would be interfering with those in the server level.
Here is the content of my nginx configuration for this site. The includes before/* and after/* do not appear to issue any add_header directives, so I'm not expanding those here.
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/example.com/before/*;
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name .example.com;
root /home/forge/example.com/current/public;
client_max_body_size 100M;
# FORGE SSL (DO NOT REMOVE!)
ssl_certificate /etc/nginx/ssl/example.com/302491/server.crt;
ssl_certificate_key /etc/nginx/ssl/example.com/302491/server.key;
ssl_protocols TLSv1.2;
# Updated cipher suite per Mozilla recommendation for Modern compatibility
# https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff";
add_header Vary "Origin";
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Credentials 'true';
add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
add_header Access-Control-Allow-Headers 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
add_header Referrer-Policy "strict-origin-when-cross-origin";
add_header Public-Key-Pins 'pin-sha256="hpkppinhash="; pin-sha256="anotherpinhash="; pin-sha256="yetanotherpinhash="; pin-sha256="anotherpinhash="; pin-sha256="lastpinhash="; max-age=86400';
index index.html index.htm index.php;
charset utf-8;
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/example.com/server/*;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /var/log/nginx/example.com-error.log error;
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
location ~* \.json {
add_header Cache-Control "no-store,no-cache";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
add_header Referrer-Policy "strict-origin-when-cross-origin";
}
}
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/example.com/after/*;
You need to add the always parameter as stated in the documentation:
Adds the specified field to a response header provided that the response code equals 200, 201 (1.3.10), 204, 206, 301, 302, 303, 304, 307 (1.1.16, 1.0.13), or 308 (1.13.0). The value can contain variables.
...
If the always parameter is specified (1.7.5), the header field will be added regardless of the response code.
So change your config to this:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
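One more detail that often bites here: add_header directives are inherited from the server level only when a location defines no add_header of its own, so a location such as the ~* \.json block above has to repeat every server-wide header it still wants, and those repeats need always as well if they should survive error responses. Roughly, under that assumption:
location ~* \.json {
    # defining any add_header here switches off inheritance of the
    # server-level headers, so repeat the ones still needed, with 'always'
    add_header Cache-Control "no-store,no-cache" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
}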
I'm switching a Laravel website from Apache (.htaccess) to nginx. I have built an image-serving mechanism that produces a URI with the appropriate parameters for resizing, e.g. pics/images/max24h/music_video_red_icon.png.
If Apache does not find the file, it redirects the request to the route image/images/max24h/music_video_red_icon.png, where an action in Laravel creates and returns the image. In .htaccess it works with this code:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^pics(.*)\.(jpe?g|png|gif|ico|bmp)$ /image$1.$2 [R=307,L]
Now, in the nginx conf, how do I redirect it properly? I tried a lot of suggestions, like:
# Redirect pics file not found to php laravel route to create the image
location @img_proxy {
rewrite ^/pics(.*)\.(jpe?g|png|gif|ico|bmp)$ /image$1.$2;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
}
It does not work at all; the server works if I remove those lines. I'm using macOS High Sierra 10.13.6. Could it be some config conflict? Here is the full nginx.conf:
user _www _www;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80 default_server;
server_name localhost;
root /var/www/megalobiz/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
gzip on;
gzip_vary on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss;
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
# removes trailing slashes (prevents SEO duplicate content issues)
#if (!-d $request_filename)
#{
# rewrite ^/(.+)/$ /$1 permanent;
#}
# Redirect pics file not found to php laravel route to create the image
location ~ ^/pics(.*)\.(jpe?g|png|gif|ico|bmp)$ {
try_files $uri @img_proxy;
}
location @img_proxy {
rewrite ^/pics(.*)\.(jpe?g|png|gif|ico|bmp)$ /image$1.$2;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|svg|woff|woff2|ttf)\$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
location ~* \.(?:css|js)\$ {
expires 7d;
access_log off;
add_header Cache-Control "public";
}
location ~ /\.(?!well-known).* {
deny all;
}
location ~ /\.ht {
deny all;
}
}
include servers/*;
}
The main difference between the function of the .htaccess file and your rewrite statement is that Apache will perform an external redirect using a 307 response, while Nginx performs an internal redirect instead. See this document for details.
To force Nginx to perform an external redirect, append the permanent or redirect flag to the rewrite statement, which generates a 301 or 302 response respectively.
For example:
location @img_proxy {
rewrite ^/pics(.*)\.(jpe?g|png|gif|ico|bmp)$ /image$1.$2 redirect;
}
The difference between a 302 and a 307 response is that the latter will redirect a POST request without changing it to a GET.
If you need to redirect POST requests, you will need to capture the relevant parts of the URI with a regular expression location or if block and use a return 307 instead.
For example:
location @img_proxy {
if ($uri ~ ^/pics(.*)\.(jpe?g|png|gif|ico|bmp)$) {
return 307 /image$1.$2$is_args$args;
}
}
See this caution on the use of if.