Say I have a config in /etc/nginx/conf.d/myscript.conf
server {
listen 8080;
server_name _;
location = /a {...} # <-- needs to be disabled during maintenance
location = /b {...}
location = /c {...} # <-- needs to be enabled during maintenance
}
For maintenance I need to disable the /a location, run some commands/deployments, then enable the /a location again.
Can this be done automatically via bash, without programmatically modifying the config?
You can use includes and then just deal with creating and removing symlinks. Usually you see this done with server blocks (the base nginx.conf actually just includes conf.d/*, which is how it loads your server blocks), but it can be done with anything.
Basically you'll have two folders, named something like locations-available and locations-enabled, and put all of your location blocks in individual files in locations-available. In your server block, include locations-enabled/* and then symlink all the locations you want enabled from locations-available to locations-enabled. Every time you add or remove symlinks, just reload nginx and you should be good to go.
In your case, just rm the symlink, reload, do whatever you want, recreate the symlink, reload.
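For example, a minimal sketch of the maintenance cycle in bash, assuming the server block contains include /etc/nginx/locations-enabled/*.conf; and the /a block lives in a file named a.conf (both names are hypothetical):
#!/usr/bin/env bash
set -euo pipefail

avail=/etc/nginx/locations-available
enabled=/etc/nginx/locations-enabled

# Disable /a for maintenance.
rm -f "$enabled/a.conf"
nginx -t && nginx -s reload

# ... run the deployment commands here ...

# Re-enable /a.
ln -sf "$avail/a.conf" "$enabled/a.conf"
nginx -t && nginx -s reload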
I was trying to set certain kernel parameters using the /etc/sysctl.conf file on CentOS 7.5. I copied /etc/sysctl.conf to /etc/sysctl.d/sysctl.conf, updated certain parameters, and reloaded the settings using "sysctl --system".
But I see that the parameters inside /etc/sysctl.conf override those present inside /etc/sysctl.d/sysctl.conf. (I can also see this when I run the command: settings from /etc/sysctl.d/sysctl.conf get applied first and then settings from /etc/sysctl.conf get applied, which causes the issue.)
But according to the man page, sysctl --system should have ignored the settings inside /etc/sysctl.conf, because I created a file with the same name inside /etc/sysctl.d/, which gets read first. (Reference: http://man7.org/linux/man-pages/man8/sysctl.8.html)
--system
Load settings from all system configuration files. Files are
read from directories in the following list in given order
from top to bottom. ***Once a file of a given filename is
loaded, any file of the same name in subsequent directories is
ignored.***
/run/sysctl.d/*.conf
/etc/sysctl.d/*.conf
/usr/local/lib/sysctl.d/*.conf
/usr/lib/sysctl.d/*.conf
/lib/sysctl.d/*.conf
/etc/sysctl.conf
The man page does not agree with the source code (sysctl.c). According to the source of the PreloadSystem() function, it processes the *.conf files in the various sysctl.d search directories (skipping any *.conf filename that has already been seen, as described in the man page). It then processes the default /etc/sysctl.conf file, if it exists, without checking whether the sysctl.conf filename has already been seen.
In summary, the settings in /etc/sysctl.conf cannot be overridden by the *.conf files in /etc/sysctl.d/ and other sysctl.d directories, because the settings in /etc/sysctl.conf are always applied last.
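If you want to verify this yourself, a quick check along these lines shows that whatever /etc/sysctl.conf sets ends up winning (vm.swappiness is just a hypothetical example key):
# Show where the key is defined and in what order the files are applied.
grep -H 'vm.swappiness' /etc/sysctl.d/*.conf /etc/sysctl.conf
sysctl --system          # note that /etc/sysctl.conf is applied last
sysctl vm.swappiness     # reports the value that /etc/sysctl.conf set
The practical workaround follows from this: remove or edit the conflicting keys in /etc/sysctl.conf itself, since files under /etc/sysctl.d/ cannot override it.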
I have a location on my nginx server like /folder/. In that folder there are .jpg files and a .php script.
What I am trying to achieve is the following:
When a user accesses a URL like http://mywebsite.com/folder/10-11, nginx will search for a 10-11 file, a 10-11.png file, or a 10-11.jpg file and display it (if the file is on the server). This part works just fine.
Now, when a user accesses the same URL and the file does not exist, I want the user to be redirected to that PHP script.
Below is the part of the nginx config which works. All the image files in that folder are named like 10-10, 21-19, 90-20, etc. (two-digit number - two-digit number).
location /app/comics/lqthumb/ {
try_files $uri /$uri.png /$uri.jpg;
}
The other nginx section should look like this:
location /app/comics/lqthumb/ {
rewrite ^/app/comics/lqthumb/(.*?)-(.*?)$ /app/comics/lqthumb/thumbgen.php?chapter=$1&page=$2 permanent;
}
The idea is to combine these rules. How can I do an if/else statement or something similar? I have tried several combinations and read a lot of forum posts and answers from here, but I can't make it work.
If you have questions please let me know!
You combine these by using a named location. The last element of try_files can be a default action that jumps to a named location, which then processes the rewrite.
root /path/to/root;
location /app/comics/lqthumb {
try_files $uri $uri.png $uri.jpg @thumbgen;
}
location @thumbgen {
rewrite ^/app/comics/lqthumb/(.*?)-(.*?)$ /app/comics/lqthumb/thumbgen.php?chapter=$1&page=$2;
}
location ~ \.php$ { ... }
I have removed the spurious / you placed before $uri. I assume you have a location which invokes the PHP interpreter; otherwise, you could put those directives into the named location. I wasn't sure if you intended the redirect to be permanent; with no flag, an internal redirect is performed transparently to the user.
See try_files documentation.
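As a quick sanity check after reloading nginx (the host and paths below are just illustrative, based on the question):
curl -I http://mywebsite.com/app/comics/lqthumb/10-11   # existing image: served directly by try_files
curl -I http://mywebsite.com/app/comics/lqthumb/99-99   # missing image: falls through to @thumbgen and thumbgen.php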
Looking for a little help with Zeus Rewrite.Script for multiple directories. I have a working script for when I place WordPress in a sub-directory; however, if I want to run another WordPress installation in another directory, I can't get the rewrite to work for that one too. Any help would be greatly appreciated!
My rewrite for one directory:
#Zeus webserver version of basic Wordpress mod_rewrite rules
map path into SCRATCH:path from %{URL}
look for file at %{SCRATCH:path}
if exists then goto END
look for dir at %{SCRATCH:path}
if exists then goto END
##### FIX FOR LOGIN/FORGOTTEN PASSWORD/ADMIN ETC #####
match URL into $ with ^/site1/wp-.*$
if matched then goto END
##### FIX TO ALLOW SEARCH TO WORK #####
match URL into $ with ^/site1/(.*)
set URL = /site1/index.php/$1
So site1 is the first directory - can anyone suggest how I can make this work for site2 as well?
Thanks
The script below is one that may help with Zeus server redirects in general.
It ignores the folder called myfolder (replace this with the one you want to exclude from any redirect; you can still get to it directly).
The rest of the file redirects to the .com version of the website, both with www (rule 2) and without www (rule 1).
Save the file as rewrite.script and upload it to the root directory of the server.
#ignore anything to do with myfolder
match URL into $ with ^/myfolder(\/)?.*
if matched then goto END
RULE_1_START:
match IN:Host into $ with ^mywebsite\.net$
if matched then
match URL into $ with ^/(.*)$
if matched then
set OUT:Location = http://www.mywebsite.com/$1
set OUT:Content-Type = text/html
set RESPONSE = 301
set BODY = Moved
endif
endif
RULE_1_END:
RULE_2_START:
match IN:Host into $ with ^www\.mywebsite\.net$
if matched then
match URL into $ with ^/(.*)$
if matched then
set OUT:Location = http://www.mywebsite.com/$1
set OUT:Content-Type = text/html
set RESPONSE = 301
set BODY = Moved
endif
endif
RULE_2_END:
I have some nginx config in a repo, and the application root is sometimes different on different machines and setups:
server {
listen 80;
server_name admin.triface.local;
root /Users/xxxxxx/Sites/triface-admin/public;
index triface.html;
}
I want to set a variable somewhere (like a bash environment variable or equivalent) that lets me avoid hardcoding the server root. It seems like this should be straightforward, but I can't find anything on it. Any clues?
So the answer is, there isn't one! Intentionally! And once I read the reasoning, it actually made sense. It is a bummer that I can't do local nginx installs into people's $HOME dir, but I can live with that.
See this stackoverflow answer:
How do I pass ImageMagick environment variables to nginx mongrels?
Sure:
set $homedir /Users/xxxxxx/Sites/triface-admin/public;
Then just reference $homedir wherever you need it.
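If the goal is still to drive the path from the environment at deploy time, one common workaround (not built into nginx, and not what the answer above does) is to render the config from a template in a small shell step, for example with envsubst; the template name and variable below are hypothetical:
# triface-admin.conf.template contains the line:  root ${TRIFACE_ROOT};
export TRIFACE_ROOT=/Users/xxxxxx/Sites/triface-admin/public
envsubst '$TRIFACE_ROOT' < triface-admin.conf.template > /etc/nginx/conf.d/triface-admin.conf
nginx -t && nginx -s reload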
Is it possible to configure lighttpd so that a request for a file succeeds if the file exists, but is handled and redirected, for example to a cgi script, if the file does not exist?
What I'm trying to achieve is having a set of image files on disk which are generated by a script and served directly. On a request, if the file does not exist, the script will generate the image, save it to disk (for future requests), and then either serve the image directly or redirect back to the same URL, which will succeed this time. I'm essentially caching the generated output on disk.
I currently have a prototype in which the script always handles the request, reading and echoing the file if it exists, but I'd rather save the overhead and have lighttpd serve it directly if possible.
You can have the best of both worlds. Lighttpd will serve the file if your script responds with an
X-Sendfile: path to file
header. See http://redmine.lighttpd.net/wiki/1/X-LIGHTTPD-send-file; there's a PHP example on the documentation page.
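As a rough sketch of the same idea in shell rather than PHP (the path scheme and the generate_image command are made up, and lighttpd has to be configured to honour X-Sendfile for this handler, as described on the linked page):
#!/bin/sh
# Hypothetical CGI handler: generate the image on first request,
# then let lighttpd send the file itself via X-Sendfile.
img="/srv/images/${QUERY_STRING}.png"          # assumed path scheme

if [ ! -f "$img" ]; then
    generate_image "$QUERY_STRING" > "$img"    # placeholder for the real generator
fi

printf 'Content-Type: image/png\r\n'
printf 'X-Sendfile: %s\r\n\r\n' "$img"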
You can set the
server.error-handler-404
config option to a script which will do what you want.
See http://redmine.lighttpd.net/wiki/1/Server.error-handler-404Details
This question may be old, but it asked exactly what I wanted an answer to. Here is the solution that I came up with...
Here is a complete, and minimal, working configuration file for Lighttpd.
server.document-root = "/srv/http"
server.port = 80
server.username = "nobody"
server.groupname = "nobody"
server.dir-listing = "enable"
server.stream-response-body = 2
server.modules = (
"mod_rewrite",
"mod_alias",
"mod_cgi"
)
url.rewrite-if-not-file = ( "^/alpine/.*\.apk$" => "/fecher" )
alias.url += ( "/fecher" => "/bin/fecher" )
$HTTP["url"] =~ "^/fecher$" {
cgi.assign = ( "" => "" )
}
This sits on a server where I store package files. It directly serves any files it has, and requests for anything it doesn't have are delegated to a CGI script called /bin/fecher.
The url.rewrite-if-not-file rule rewrites any URL matching the given regex to /fecher.
The URL /fecher is aliased (changing its document root) to /bin/fecher.
CGI is enabled for URLs matching the regex ^/fecher$ (i.e. only /fecher).
The server.stream-response-body setting prevents Lighttpd from buffering the CGI output into a temporary file (I did not want to give it write access to /var/tmp or anywhere else).
If the server encounters a URL matching the first expression but lacks the corresponding file, the URL gets rewritten and mapped to the CGI script, which is then executed.
On my server /bin/fecher is a shell script that pulls the missing package from upstream, returns it to the client and stores it locally for future requests.
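For reference, here is a minimal sketch of what such a script could look like. The upstream URL is made up, and it assumes lighttpd passes the original (pre-rewrite) request path to the CGI script as REQUEST_URI, which it normally does; it is not the actual script from the server above.
#!/bin/sh
# Hypothetical /bin/fecher: fetch a missing .apk from an upstream mirror,
# cache it under the document root, and stream it back to the client.
# Sketch only: no input validation or locking.
upstream="https://example.org/alpine"      # assumed upstream mirror
docroot="/srv/http"

req="${REQUEST_URI:?}"
local_file="$docroot$req"

mkdir -p "$(dirname "$local_file")"
if ! wget -q -O "$local_file" "$upstream$req"; then
    rm -f "$local_file"
    printf 'Status: 404 Not Found\r\n\r\n'
    exit 0
fi

printf 'Content-Type: application/octet-stream\r\n\r\n'
cat "$local_file"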