Bash function to perform a sanity check on nginx files

I am trying to write a function that performs a sanity check on files before I move them into /etc/nginx/sites-available.
They are located in my home directory and are modified regularly.
The only modification made to those files is adding a server_name.
They look like:
server {
    listen 80;
    server_name domain.com;
    server_name www.domain.com;
    server_name mysite1.com;
    server_name www.mysite1.com;
    server_name mysite2.com;
    server_name www.mysite2.com;
    server_name mysite3.com;
    server_name www.mysite3.com;
    server_name mysite4.com;
    server_name www.mysite4.com;

    access_log /var/log/nginx/domain.com-access.log main;
    error_log /var/log/nginx/domain.com-error.log warn;

    root /var/www/docroot;
    index index.php index.html index.htm;

    location / {
        try_files $uri /app_dev.php;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
Here is the function I have right now:
verify_nginx()
{
    if [ ! -s "$file1" ]; then
        echo "-> File \"$file1\" is empty or does not exist" | ts
        exit 1
    elif [ ! -s "$file2" ]; then
        echo "-> File \"$file2\" is empty or does not exist" | ts
        exit 1
    fi
}
I would also like to add nginx -t -c /homedir/file1 to the function, but I get the following error:
nginx: [emerg] "server" directive is not allowed here in /homedir/file:1
nginx: configuration file /homedir/file test failed
Indeed, nginx -c expects a full nginx.conf, and the files in my home directory are not included from one.
I could put my files in /etc/nginx/sites-available, which is included from nginx.conf, but I want to perform the sanity check before I move the files to the correct location.
My questions:
Is there a way to use nginx to test a configuration file located somewhere other than /etc/nginx/sites-available?
What kind of sanity checks should be performed on nginx files?

The files you're trying to sanity-check are not nginx config files, so (understandably) nginx -t says they're invalid. The -c flag expects "an alternative configuration file", not a single server block; server blocks live inside an http block.
If you want to run nginx -t, you need to pass it a proper config file, one that includes the files you're attempting to modify. You could, as Etan Reisner suggests, simply write a dummy nginx.conf that includes your files. Something like this might work (I'm not on a machine with nginx installed at the moment, so you may have to add a few more stub directives):
events {}   # nginx requires an events block, even an empty one

http {
    include path/to/files/*;
}
Then you can run nginx -t -c dummy_nginx.conf.
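Putting that together, the check might look like the sketch below (untested; make_dummy_conf and verify_nginx_syntax are names I made up, nginx must be installed, and nginx -t may still complain about paths it cannot access as a non-root user):

```shell
#!/bin/bash
# Sketch: build a throwaway nginx.conf that includes the files under test,
# then ask nginx to validate the whole thing.

make_dummy_conf() {
    # Print a minimal nginx.conf including every file in the given directory.
    local dir="$1"
    cat <<EOF
events {}
http {
    include $dir/*;
}
EOF
}

verify_nginx_syntax() {
    local dummy
    dummy=$(mktemp) || return 1
    make_dummy_conf "$1" > "$dummy"
    nginx -t -c "$dummy"        # requires nginx; may require root
    local rc=$?
    rm -f "$dummy"
    return $rc
}
```

For example, verify_nginx_syntax /homedir before copying the files into place.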
This approach has a problem, though: you could still have any number of errors that are only revealed when your real config file is loaded.
Instead, you can verify your real config file with your changes before they're loaded by simply calling nginx -t before you reload; you could wrap this in a bash function if you wanted:
safe_reload() {
    nginx -t &&
        service nginx reload    # only runs if nginx -t succeeds
}
You should also have some sort of backup or restore mechanism. This could be as simple as copying your old config files to parallel *.bak files, but much more pleasant is using a VCS like Mercurial or Git. You check in each iteration of your configs that succeed, and then you can easily revert to the previous known good configuration if anything goes wrong.
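With Git, for example, the checkpoint step might look like this (a sketch; checkpoint_config is a made-up name and /etc/nginx is only the conventional location):

```shell
#!/bin/bash
# Sketch: commit the current state of a config directory as a known-good
# checkpoint. Run nginx -t first so you only checkpoint configs that pass.
checkpoint_config() {
    local dir="${1:-/etc/nginx}"
    git -C "$dir" add -A
    git -C "$dir" commit -q -m "config update $(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
# One-time setup:  cd /etc/nginx && git init && git add -A && git commit -m initial
# Roll back with:  git -C /etc/nginx checkout -- .
```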

Related

Changing the nginx fcgiwrap user

I'm trying to use FastCGI with a shell script and make it callable via nginx.
No matter what I do, though, the script is run as the www-data user. I need it to run as the nginx user that nginx itself runs as.
nginx 1.15.1
Installed fcgiwrap:
apt-get install fcgiwrap
Config is following:
nginx.conf:
user nginx nginx;
worker_processes 1;
worker_rlimit_nofile 100000;

http {
    location ~ (\.sh)$ {
        gzip off;
        root /home/nginx/www;
        autoindex on;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        include /usr/local/nginx/conf/fastcgi_params;
        fastcgi_param DOCUMENT_ROOT /home/nginx/www;
        fastcgi_param SCRIPT_FILENAME /home/nginx/www$fastcgi_script_name;
    }
}
The problem I need to fix is access to files created by other scripts that run as the nginx user.
I also tried editing /etc/init.d/fcgiwrap to change
FCGI_USER="nginx"
FCGI_GROUP="nginx"
# Socket owner/group (will default to FCGI_USER/FCGI_GROUP if not defined)
FCGI_SOCKET_OWNER="nginx"
FCGI_SOCKET_GROUP="nginx"
But it had no effect. The script:
#!/bin/bash -e
echo 'Content-Type: text/plain'
echo ''
echo $(whoami)
echo $(groups)
The output is:
www-data
www-data
Find the path to the fcgiwrap.service file:
systemctl status fcgiwrap.service
Change the User and the Group in the fcgiwrap.service file:
nano /lib/systemd/system/fcgiwrap.service
Then run systemctl daemon-reload and restart the service so the change takes effect.
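On a systemd-based distribution, a drop-in override is safer than editing the file under /lib, since package updates can overwrite it. Assuming the unit is named fcgiwrap.service, create the drop-in with systemctl edit fcgiwrap and then restart the service:

```ini
# /etc/systemd/system/fcgiwrap.service.d/override.conf
[Service]
User=nginx
Group=nginx
```

If your distribution uses socket activation (a separate fcgiwrap.socket unit), you may also need SocketUser=nginx and SocketGroup=nginx there so nginx can write to the socket.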

Laravel 5 with Heroku: "/" route returns 403 Forbidden

As the title says, I'm using Heroku with the following Procfile:
web: vendor/bin/heroku-php-nginx -C nginx.conf public/
After I push to Heroku, I'm getting the following screen:
Given my experience with Linux, I was sure it had something to do with permissions, so little by little I tested chmod 777 on folders recursively, and at some point ended up doing 777 on the entire project. Needless to say, it didn't work.
Any ideas?
EDIT: Here's my nginx.conf as per Laravel 5 docs
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
With the -C switch, you're overriding the default Nginx config "include snippet", which also contains this:
location / {
    index index.php index.html index.htm;
}
So your problem now is that your rewrites likely work just fine (try a URL like /foobar); only / fails, because the try_files directive matches (the directory / exists), but directory indexes are not allowed, hence the 403.
The easiest way to fix that is to add an index directive:
location / {
    index index.php;
    try_files $uri $uri/ /index.php?$query_string;
}
That should do the trick.

How to redirect ALL requests, even index.php to go to another php file (parse.php)

Right now I have this in my nginx config:
location / {
    rewrite ^(.*)$ /parse.php;
}
Then further down:
location ~ \.php$ {
    root /var/www/site.com/public/;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
}
If I browse to
site.com/example/this
It does as it's supposed to and goes to parse.php ($_SERVER['REQUEST_URI'] is properly set to '/example/this'). The same goes for site.com/images/test.jpg, it will work as intended and pass it to the parse.php script.
However, if I go to 'site.com/another.php', it doesn't go to parse.php; instead it says:
No input file specified.
Any idea how to get this to work? I removed the try_files clause and still no luck.
Well, if anyone is wondering, this is the solution: simply merge both location blocks together, then add break; to the rewrite, forcing everything to go to the PHP script.
For some reason, when the blocks are separate, the first location (which rewrites everything to parse.php) gets overridden by the second location (targeting .php files only), which reverts the request back to the original PHP file instead of parse.php.
location / {
    rewrite ^(.*)$ /parse.php break;
    root /var/www/site.com/public/;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
}
This will effectively parse every single request to your parse.php for the given server{} block.
Now you can determine the original request via $_SERVER['REQUEST_URI']; in your parse.php script and do whatever you want to from there on.
Actually, you don't need the rewrite rule, because every request is sent to parse.php anyway, and a regex is slow. You may also try this:
location / {
    root /var/www/site.com/public;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/parse.php;
    fastcgi_param SCRIPT_NAME parse.php;
    fastcgi_pass 127.0.0.1:9000;
}
The $uri argument just adds static-asset support: images and other static files are served directly if they exist on disk; otherwise the request is passed to parse.php:
location / {
    try_files $uri /parse.php$request_uri;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
}
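To see which requests that last config would serve statically and which it would hand to parse.php, here is a small shell simulation of the try_files decision (illustrative only; route is a made-up helper, not part of nginx):

```shell
#!/bin/bash
# Simulates try_files $uri /parse.php$request_uri: serve the file if it
# exists under the docroot, otherwise fall back to parse.php.
route() {
    local docroot="$1" uri="$2"
    if [ -f "$docroot$uri" ]; then
        echo "static: $uri"
    else
        echo "parse.php (REQUEST_URI=$uri)"
    fi
}
```

With a docroot containing images/test.jpg, route prints static: /images/test.jpg for that path, and falls back to parse.php for a path like /example/this.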

php-fpm and nginx session problems

I've been having this problem for the past week or so. I've been working on a PHP project that relies heavily on sessions, and for some reason we've been having trouble getting sessions to save for the past few days. Any idea why?
Here's the error:
Warning: Unknown: open(/tmp/sess_mmd0ru5pl2h2h9bummcu1uu620, O_RDWR) failed: Permission denied (13) in Unknown on line 0
Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/tmp) in Unknown on line 0
Warning: session_start(): open(/tmp/sess_mmd0ru5pl2h2h9bummcu1uu620, O_RDWR) failed: Permission denied (13)
nginx version:
nginx version: nginx/1.0.11
PHP-FPM config:
;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;
; All relative paths in this configuration file are relative to PHP's install
; prefix.
; Include one or more files. If glob(3) exists, it is used to include a bunch of
; files from a glob(3) pattern. This directive can be used everywhere in the
; file.
include=/etc/php-fpm.d/*.conf
;;;;;;;;;;;;;;;;;;
; Global Options ;
;;;;;;;;;;;;;;;;;;
[global]
; Pid file
; Default Value: none
pid = /var/run/php-fpm/php-fpm.pid
; Error log file
; Default Value: /var/log/php-fpm.log
error_log = /var/log/php-fpm/error.log
; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
;log_level = notice
; If this number of child processes exit with SIGSEGV or SIGBUS within the time
; interval set by emergency_restart_interval then FPM will restart. A value
; of '0' means 'Off'.
; Default Value: 0
;emergency_restart_threshold = 0
; Interval of time used by emergency_restart_interval to determine when
; a graceful restart will be initiated. This can be useful to work around
; accidental corruptions in an accelerator's shared memory.
; Available Units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;emergency_restart_interval = 0
; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;process_control_timeout = 0
; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
; Default Value: yes
;daemonize = yes
;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;
; See /etc/php-fpm.d/*.conf
nginx.conf:
#######################################################################
#
# This is the main Nginx configuration file.
#
# More information about the configuration options is available on
# * the English wiki - http://wiki.nginx.org/Main
# * the Russian documentation - http://sysoev.ru/nginx/
#
#######################################################################
#----------------------------------------------------------------------
# Main Module - directives that cover basic functionality
#
# http://wiki.nginx.org/NginxHttpMainModule
#
#----------------------------------------------------------------------
user nginx nginx;
worker_processes 5;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
#----------------------------------------------------------------------
# Events Module
#
# http://wiki.nginx.org/NginxHttpEventsModule
#
#----------------------------------------------------------------------
events {
    worker_connections 4096;
}
#----------------------------------------------------------------------
# HTTP Core Module
#
# http://wiki.nginx.org/NginxHttpCoreModule
#
#----------------------------------------------------------------------
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    index index.php index.html index.htm;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name stats.smilingdevil.com;
        error_page 404 /404.php;
        root /var/www;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        location / {
            set $page_to_view "/index.php";
            try_files $uri $uri/ @rewrites;
            root /var/www/;
            index index.php;
        }

        location @rewrites {
            if ($uri ~* ^/([a-z0-9]+)$) {
                set $page_to_view "/$1.php";
                rewrite ^/([a-z]+)$ /$1.php last;
            }
        }

        location ~ \.php$ {
            include /etc/nginx/fastcgi.conf;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
        }
    }
}
Just change the ownership of /var/lib/php/session/ from apache to nginx instead of giving world read access:
sudo chown -R nginx:nginx /var/lib/php/session/
I found that my php.ini was attempting to save sessions to /var/lib/php/session rather than /tmp.
So check your ini file to see where sessions are being saved (or set the path to somewhere else), then make sure that directory is writable by the appropriate processes.
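A quick way to check is a small helper like this (a sketch; check_session_path is a made-up name, and the argument is whatever session.save_path points to in your php.ini):

```shell
#!/bin/bash
# Report whether a session save path exists and is writable by the current user.
check_session_path() {
    local dir="$1"
    if [ ! -d "$dir" ]; then
        echo "missing: $dir"
        return 1
    elif [ ! -w "$dir" ]; then
        echo "not writable: $dir"
        return 1
    fi
    echo "ok: $dir"
}
```

Run it as the same user the PHP workers run as (e.g. via sudo -u nginx), since writability depends on the user.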
TL;DR: Add nginx user to apache group
RHEL has decided that /var/lib/php/session is owned by the php package. That package has decided that it will always recreate the /var/lib/php/session directory when installed and will always return the directory to being owned by root with group set to apache with full permissions for each and no permissions for anything else. Therefore, while many suggested solutions here suggest changing the permissions of /var/lib/php/session, that will cause problems in the future.
https://bugzilla.redhat.com/show_bug.cgi?id=1146552
The RHEL suggested way of fixing this issue is to create your own session directory wherever you'd like to store it and set the permissions as necessary. Future php updates won't affect that new location and everything should stay working.
An alternative that has worked quite well for me is to simply add nginx to the apache group.
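The Red Hat-suggested route of a dedicated session directory can be sketched like this (the paths and the setup_session_dir name are illustrative; chown requires root):

```shell
#!/bin/bash
# Sketch: create a dedicated session directory owned by the PHP-FPM user.
setup_session_dir() {
    local dir="$1" owner="$2"
    mkdir -p "$dir" || return 1
    chmod 700 "$dir"
    # chown needs root; ignore the failure when run unprivileged
    chown "$owner" "$dir" 2>/dev/null || true
    echo "created $dir"
}
# e.g. setup_session_dir /var/lib/php/nginx-session nginx
# then point the pool at it: php_value[session.save_path] = /var/lib/php/nginx-session
```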
Chris Rutledge is right; PHP sometimes saves sessions in the /var/lib/php/session/ directory.
Check your php.ini file, or create the directory with 777 permissions:
mkdir /var/lib/php/session
chmod -R 777 /var/lib/php/session
This error occurs when the user running the PHP process does not have permission to write to the /tmp directory.
To make it writable by every user, use this command:
chmod 777 /tmp
Another cause of the same issue is a read-only file system.
If /dev/sda1 is mounted on /tmp, heavy writes can leave the file system mounted read-only.
To make it writable again, use this command:
mount -t ext3 -o rw,remount /dev/sda1 /tmp
I found something interesting on Linux: in a chroot, php-cgi produces the same errors when PHP software tries to read or write a session. I thought this was a permission issue, but it persisted even after setting 777 on /tmp and making the web server its owner. After many hours I found that the urandom device in /dev is needed for this to work. Make sure it exists, or copy/create it, and loosen its permissions temporarily (just to check; then change them back to something safe):
chmod 777 /dev/urandom
Strangely, this wasn't required in some PHP 5.x versions, but in some PHP 7.x versions it needs to be there.
I just went through a PHP upgrade on CentOS. I had to change /etc/php-fpm.d/www.conf and set the php_value[session.save_path] variable to /tmp:
php_value[session.save_path] = /tmp
This works fine.
I don't think it is a security hazard.
You might get this error when you're using nginx and the server gives permission to apache instead of nginx.
My fix is:
chown -R nginx:nginx /var/lib/php/
With chown you are changing the owner of that specific folder, and -R makes it recursive.

Ajax not working on Nginx / Wordpress

Ahoy,
I'm running WordPress 3.x on nginx and all my Ajax calls are broken. The exact same WordPress install runs fine on Apache.
I've managed to get one Ajax call working with nginx by removing 'index.php' from the jQuery.post() call, but I couldn't fix the other calls the same way.
Basically, the change was: for nginx, the line
jQuery.post( 'index.php?ajax=true', form_values, function(returned_data) {
was replaced with:
jQuery.post( '?ajax=true', form_values, function(returned_data) {
I suspect the problem lies in the rewrite rules in the nginx config file. Here is my configuration:
if (!-e $request_filename) {
    rewrite ^.+/?(/wp-.*) $1 last;
    rewrite ^.+/?(/.*\.php)$ $1 last;
    rewrite ^(.+)$ /index.php?q=$1 last;
}
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /xxx/public$fastcgi_script_name;
    include fastcgi_params;
}
}
Could it be that you are in a directory, or a "virtual" directory, in the browser URL?
If, for example, you are at www.myblog.com, this should work, but on www.myblog.com/my-category/my-post/ it probably wouldn't.
Have you done your testing from the exact same URL location on the Apache site and the nginx site?
Have you tried a leading slash in front of the path, to ensure that the script at the site root is being called?
jQuery.post( '/index.php?ajax=true', form_values, function(returned_data) {
