Shell Script To Automate Nginx Blocks - shell

I'm trying to automate the nginx setup with the script below.
Command - SECRET_KEY='xxxx' HTTP='$http_update' AMCU_HOST='$host' AMCU_RURI='$redirect_uri' sh ./script.sh domain.com port username
In the generated server block file I want the literal text $http_update to appear (hence HTTP='$http_update'), but the script replaces it and just leaves '' blank. The same happens for AMCU_HOST='$host' and AMCU_RURI='$redirect_uri'. I also tried it without those assignments on the command line, but the same thing happens again: the values come out as '' because the shell treats them as $ variables to expand.
script.sh
#!/bin/bash
domain=$1
port=$2
user=$3
block="/etc/nginx/sites-available/$domain"
ssh="/home/$user/.ssh/authorized_keys"
#Create User
echo "▶ Creating User"
sudo useradd $user
#User mkdir
echo "▶ Updating home dir"
sudo mkdir /home/$user
#Create .ssh/authkeys
echo "▶ Updating SSH dir"
cd /home/$user && mkdir .ssh/
#Create the SSH Auth file:
echo "▶ Updating SSH AuthKey"
sudo tee $ssh > /dev/null <<EOF
$SECRET_KEY
EOF
#Create the Nginx server block file:
echo "▶ Updating NGINX Server Block"
sudo tee $block > /dev/null <<EOF
server {
    listen 80;
    server_name $domain;
    return 301 https://$domain$AMCU_RURI;
}
server {
    # Secure HTTP (HTTPS)
    listen 443 ssl;
    server_name $domain;
    error_page 500 502 503 504 /500.html;
    location /500.html {
        root /var/www/html;
        internal;
    }
    ssl_certificate /etc/letsencrypt/live/$domain/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/$domain/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        proxy_pass http://localhost:$port;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $HTTP;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $AMCU_HOST;
        proxy_cache_bypass $HTTP;
    }
}
EOF
#Link to make it available
echo "▶ Linking Server Blocks"
sudo ln -s $block /etc/nginx/sites-enabled/
#Test configuration and reload if successful
echo "▶ Reloading Server"
sudo nginx -t && sudo service nginx reload
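For reference, an unquoted heredoc delimiter (<<EOF) makes the shell expand every $name before tee sees it, which is why unset variables come out as empty strings. A minimal sketch of the two usual ways to keep nginx's own variables literal, reusing the $domain/$port parameters from the script above and nginx's $http_upgrade/$host names (which is presumably what $http_update/$host are meant to be):
#!/bin/bash
domain=$1
port=$2
# Option 1: escape the dollar sign for anything nginx (not the shell) should expand
sudo tee "/etc/nginx/sites-available/$domain" > /dev/null <<EOF
server {
    listen 443 ssl;
    server_name $domain;                         # expanded by the shell
    location / {
        proxy_pass http://localhost:$port;       # expanded by the shell
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade; # \$ keeps it literal for nginx
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
EOF
# Option 2: quote the delimiter ('EOF') to disable all expansion -- then nothing
# is substituted, so $domain and $port would also have to be filled in some other way.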

Related

Puma - No such file or directory - connect(2). No idea where it's getting this location from

So, I'm trying to deploy a Sinatra app with Capistrano. I have deployed the app successfully, however I am unable to start Puma. When I enter my app's current directory and run pumactl -F config/puma.rb start I get the following error:
ubuntu@ip-10-0-0-195:/srv/apps/cx/current$ pumactl -F config/puma.rb start
[18512] Puma starting in cluster mode...
[18512] * Version 4.3.5 (ruby 2.5.1-p57), codename: Mysterious Traveller
[18512] * Min threads: 2, max threads: 6
[18512] * Environment: staging
[18512] * Process workers: 1
[18512] * Phased restart available
No such file or directory - connect(2) for /srv/apps/cx/releases/shared/tmp/sockets/puma.sock
I have no idea how or why it's looking in the cx/releases directory. I've attached some of my files below and maybe someone can tell me what I'm doing wrong here.
Puma.rb
# Change to match your CPU core count
workers 1
# Min and Max threads per worker
threads 2, 6
app_dir = File.expand_path('../../..', __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to production
rails_env = ENV['RAILS_ENV'] || 'staging'
environment rails_env
# Set up socket location
bind "unix://#{shared_dir}/tmp/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/logs/puma.stdout.log", "#{shared_dir}/logs/puma.stderr.log", true
daemonize
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
require 'active_record'
ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/current/config/database.yml")[rails_env])
end
lowlevel_error_handler do |ex, env|
Raven.capture_exception(
ex,
message: ex.message,
extra: { puma: env },
transaction: 'Puma'
)
# note the below is just a Rack response
[500, {}, ['An error has occurred, and engineers have been informed. Please reload the page']]
end
Puma.service
[Unit]
Description=Connect-Puma Server
After=network.target
[Service]
Type=simple
User=ubuntu
# EnvironmentFile=/srv/apps/cx-api/current/.rbenv-vars
Environment=RAILS_ENV=staging
WorkingDirectory=/srv/apps/cx/current/
ExecStart=/usr/bin/rbenv exec bundle exec puma -C /srv/apps/cx/current/config/puma.rb
ExecStop=/usr/bin/rbenv exec bundle exec pumactl -F /srv/apps/cx/current/config/puma.rb stop
ExecReload=/usr/bin/rbenv exec bundle exec pumactl -F /srv/apps/cx/current/config/puma.rb phased-restart
TimeoutSec=15
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
etc/nginx/sites-enabled/cx
upstream sinatra {
    server unix:/srv/apps/cx/shared/tmp/puma.sock;
}
server {
    root /srv/apps/cx/current/public;
    server_name staging.ldelivers.com;
    location / {
        try_files $uri $uri/index.html @puma;
    }
    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://sinatra;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/staging.lmdelivers.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/staging.lmdelivers.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = staging.lmdelivers.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name staging.lmdelivers.com;
    return 404; # managed by Certbot
}
As you can see I'm not calling to the /releases directory anywhere. If anyone can get me pointed in the right direction I would appreciate it soo soo much.
Thanks
Try this
app_dir = File.expand_path('../../../..', __FILE__)
The error path suggests config/puma.rb is being resolved under releases/<timestamp>/, so '../../..' only climbs to /srv/apps/cx/releases and shared_dir becomes the .../releases/shared path in the error; one more '..' points app_dir back at /srv/apps/cx. Also, the socket path is .../tmp/puma.sock in the nginx upstream but .../tmp/sockets/puma.sock in puma.rb; the two need to match.
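If it helps to see the difference, the two expressions can be evaluated against a release path from the shell (the timestamped directory below is hypothetical; substitute a real one from /srv/apps/cx/releases/):
F=/srv/apps/cx/releases/20200101000000/config/puma.rb
ruby -e "puts File.expand_path('../../..', '$F')"     # => /srv/apps/cx/releases  (shared_dir = .../releases/shared, the broken path)
ruby -e "puts File.expand_path('../../../..', '$F')"  # => /srv/apps/cx           (shared_dir = /srv/apps/cx/shared)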

Why is nginx running on port 8080 but not 81?

Based on brew info nginx the terminal output is telling me that nginx is running on port 8080 by default:
The default port has been set in /usr/local/etc/nginx/nginx.conf to
8080 so that nginx can run without sudo.
This is the full output:
$ brew info nginx
nginx: stable 1.19.0 (bottled), HEAD
HTTP(S) server and reverse proxy, and IMAP/POP3 proxy server
https://nginx.org/
/usr/local/Cellar/nginx/1.19.0 (25 files, 2.1MB) *
Poured from bottle on 2020-06-16 at 17:55:46
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/nginx.rb
==> Dependencies
Required: openssl#1.1 ✔, pcre ✔
==> Options
--HEAD
Install HEAD version
==> Caveats
Docroot is: /usr/local/var/www
The default port has been set in /usr/local/etc/nginx/nginx.conf to 8080 so that
nginx can run without sudo.
nginx will load all files in /usr/local/etc/nginx/servers/.
To have launchd start nginx now and restart at login:
brew services start nginx
Or, if you don't want/need a background service you can just run:
nginx
==> Analytics
install: 33,973 (30 days), 101,534 (90 days), 407,985 (365 days)
install-on-request: 33,387 (30 days), 99,128 (90 days), 394,576 (365 days)
build-error: 0 (30 days)
My Mac OS is Catalina 10.15
However, when I look at /usr/local/etc/nginx/nginx.conf I do not see the port set to 8080; I see it listening on port 81:
server {
listen 81;
server_name localhost;
....
....
When I go to visit http://localhost:8080/ I get the nginx welcome message. However when I go to visit http://localhost:81/ I get a "site can't be reached" ERR_CONNECTION_REFUSED error.
How is nginx running on port 8080 without such a specification in the nginx.conf file? And why is nginx not running on port 81, which the conf appears to say it should?
Here's the full nginx.conf:
# cat /usr/local/etc/nginx/nginx.conf
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 81;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
include servers/*;
}
Have you tried restarting/reloading nginx yet? nginx keeps serving with the configuration it was started with, so an edit to nginx.conf (changing the listen port from 8080 to 81, for example) has no effect until you reload it.
You can use this command on macOS to reload nginx: sudo nginx -s reload
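A minimal check-and-reload sequence for Homebrew's nginx (the exact command depends on whether it was started directly or via brew services):
# Verify the edited config parses
nginx -t
# Reload the running master process
sudo nginx -s reload
# Or, if it was started as a Homebrew service:
brew services restart nginx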

How to append multi-lines to file in a dockerfile? [duplicate]

This question already has answers here:
launch a CAT command unix into Dockerfile
(7 answers)
Closed 7 months ago.
I have a Dockerfile and can't seem to embed the nginx configuration in it so that it gets appended to /etc/nginx/nginx.conf.
I tried the following formats:
RUN cat <<EOT >> /etc/nginx/nginx.conf
user www;
worker_processes auto; # it will be determinate automatically by the number of core
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid; # it permit you to use /etc/init.d/nginx reload|restart|stop|start
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
access_log /var/log/nginx/access.log;
keepalive_timeout 3000;
server {
listen 80;
root /usr/local/www;
index index.html index.htm;
server_name localhost;
client_max_body_size 32m;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/lib/nginx/html;
}
}
}
EOT
and
RUN echo $
'user www; \n
worker_processes auto; # it will be determinate automatically by the number of core \n
error_log /var/log/nginx/error.log warn; \n
pid /var/run/nginx.pid; # it permit you to use /etc/init.d/nginx reload|restart|stop|start \n
events { \n
worker_connections 1024; \n
} \n
http { \n
include /etc/nginx/mime.types; \n
default_type application/octet-stream; \n
sendfile on; \n
access_log /var/log/nginx/access.log; \n
keepalive_timeout 3000; \n
server { \n
listen 80; \n
root /usr/local/www; \n
index index.html index.htm; \n
server_name localhost; \n
client_max_body_size 32m; \n
error_page 500 502 503 504 /50x.html; \n
location = /50x.html { \n
root /var/lib/nginx/html; \n
} \n
} \n
}'
> /etc/nginx/nginx.conf
However, with either of the two examples I get the following error, which looks like Docker is treating each line of the nginx config as its own Dockerfile instruction:
Sending build context to Docker daemon 33.28 kB
Error response from daemon: Unknown instruction: WORKER_PROCESSES
Docker version is 1.13.1, build 07f3374/1.13.1 and the distro I am using is CentOS Atomic Host 7.1902, while docker base image is alpinelinux.
Thanks
That should do the trick:
RUN echo $'first line \n\
second line \n\
third line' > /etc/nginx/nginx.conf
Basically it's wrapped in $'...' and uses \n plus a trailing \ (a Dockerfile line continuation) for new lines.
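If you want to sanity-check the escaping outside Docker first, the same $'...' string can be tested in a local bash shell; the trailing backslashes in the RUN line are consumed by the Dockerfile parser as line continuations, so locally you can keep it on one line. /tmp/nginx-test.conf is just a scratch path:
# ANSI-C quoting: bash converts each \n into a real newline
echo $'user www;\nworker_processes auto;\nerror_log /var/log/nginx/error.log warn;' > /tmp/nginx-test.conf
cat /tmp/nginx-test.conf
# user www;
# worker_processes auto;
# error_log /var/log/nginx/error.log warn;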
I was looking to create & append lines to my .npmrc to install private packages. The only syntax that worked for me was:
RUN echo @myscope:registry=https://gitlab.com/api/v4/packages/npm/ > .npmrc \
&& echo //gitlab.com/api/v4/packages/npm/:_authToken=${MY_TOKEN} >> .npmrc \
&& echo strict-ssl=false >> .npmrc

creating file in shell then replacing text inside

I have a script where I declare variables, then create a file, and then replace a variable within that file. This is my example script:
#!/bin/bash
DMNAME = mydomain.com
cat <<EOF > /etc/nginx/conf.d/default.conf
server_name DMNAME;
root /usr/share/nginx/html/;
index index.php index.html index.htm;
ssl_certificate /etc/letsencrypt/live/DMNAME/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/DMNAME/privkey.pem;
EOF
sed -i 's/DMNAME/mydomain.com/g' /etc/nginx/conf.d/default.conf
#
Would this be the correct way of replacing DMNAME with mydomain.com?
#!/bin/bash
DMNAME="mydomain.com"
cat <<EOF > /etc/nginx/conf.d/default.conf
server_name $DMNAME;
root /usr/share/nginx/html/;
index index.php index.html index.htm;
ssl_certificate /etc/letsencrypt/live/$DMNAME/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/$DMNAME/privkey.pem;
EOF
Bor is right that his version fills /etc/nginx/conf.d/default.conf with the correct values when creating it.
When you want to reuse default.conf as a template more than once, you shouldn't let sed change the file in place with the -i option; redirect the result to the file you want instead.
# Do not assign DMNAME=mydomain.com here; leave the literal DMNAME placeholder in the template:
cat <<EOF > /etc/nginx/conf.d/default.conf
server_name DMNAME;
root /usr/share/nginx/html/;
index index.php index.html index.htm;
ssl_certificate /etc/letsencrypt/live/DMNAME/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/DMNAME/privkey.pem;
EOF
# Now substitute DMNAME="mydomain.com" (no spaces around the =; the quotes are optional here),
# or generate a file per domain:
confpath="/etc/nginx/conf.d"
for domain in mydomain.com yourdomain.com hisdomain.com; do
    sed "s/DMNAME/${domain}/g" "${confpath}"/default.conf > "${confpath}"/${domain%.*}.conf
done
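A different option, not used in the answers above, is envsubst from GNU gettext; it substitutes only the variables you list, so nginx's own $variables in a fuller config would be left alone. A sketch, with the .template path being an arbitrary choice:
export DMNAME="mydomain.com"
# The template uses ${DMNAME} (shell syntax) as the placeholder
envsubst '$DMNAME' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf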

How to run a shell script on every request?

I want to run a shell script every time my nginx server receives any HTTP request. Any simple ways to do this?
You can execute a shell script via Lua code from the nginx.conf file to achieve this. You need to have the HttpLuaModule to be able to do this.
Here's an example to do this.
location /my-website {
content_by_lua_block {
os.execute("/bin/myShellScript.sh")
}
}
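A quick sanity check for whether the Lua module is available at all (OpenResty ships it by default; a stock distro nginx only shows it here if it was compiled in):
nginx -V 2>&1 | grep -i lua || echo "no Lua support listed"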
I found the following information online at this address: https://www.ruby-forum.com/topic/2960191
This does expect that you have fcgiwrap installed on the machine. It is really as simple as:
sudo apt-get install fcgiwrap
Example script (Must be executable)
#!/bin/sh
# -*- coding: utf-8 -*-
NAME="cpuinfo"
echo "Content-type:text/html\r\n"
echo "<html><head>"
echo "<title>$NAME</title>"
echo '<meta name="description" content="'$NAME'">'
echo '<meta name="keywords" content="'$NAME'">'
echo '<meta http-equiv="Content-type" content="text/html;charset=UTF-8">'
echo '<meta name="ROBOTS" content="noindex">'
echo "</head><body><pre>"
date
echo "\nuname -a"
uname -a
echo "\ncpuinfo"
cat /proc/cpuinfo
echo "</pre></body></html>"
I'm also using this as an include file; it's not restricted to shell scripts only.
location ~ (\.cgi|\.py|\.sh|\.pl|\.lua)$ {
gzip off;
root /var/www/$server_name;
autoindex on;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
include /etc/nginx/fastcgi_params;
fastcgi_param DOCUMENT_ROOT /var/www/$server_name;
fastcgi_param SCRIPT_FILENAME /var/www/$server_name$fastcgi_script_name;
}
I found it extremely helpful for what I am working on; I hope it helps you out with your Raspberry Pi project.
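Assuming the example script is saved under the docroot that the location block above points at (server_name and paths below are placeholders), it can be exercised with curl once fcgiwrap's socket is up:
# Placeholder paths; match them to your server_name / root
sudo install -m 755 cpuinfo.sh /var/www/example.com/cpuinfo.sh
sudo systemctl start fcgiwrap   # Debian/Ubuntu ship this unit with the fcgiwrap package
curl http://example.com/cpuinfo.sh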
1. Install OpenResty (an enhanced version of nginx with add-on modules); see https://openresty.org/en/getting-started.html
2. Configure the AWS CLI on the instance
3. Write a shell script that downloads a file from the specified S3 bucket (a sketch follows the config below)
4. Make the required changes in the nginx.conf file
5. Restart the nginx server
I have tested the HTTP request with curl and the file gets downloaded into the /tmp directory of the respective instance.
Output of curl -I http://localhost:8080/:
HTTP/1.1 200 OK
Server: openresty/1.13.6.2
Date: Tue, 14 Aug 2018 07:34:49 GMT
Content-Type: text/plain
Connection: keep-alive
Content of nginx.conf file:
worker_processes 1;
error_log logs/error.log;
events {
    worker_connections 1024;
}
http {
    server {
        listen 8080;
        # Note: nginx allows only one "location /" per server block, so the
        # hello-world test location is commented out and the script-running
        # location below is the one that answers requests.
        #location / {
        #    default_type text/html;
        #    content_by_lua '
        #        ngx.say("<p>hello, world</p>")
        #    ';
        #}
        location / {
            content_by_lua_block {
                os.execute("sh /tmp/s3.sh")
            }
        }
    }
}
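The /tmp/s3.sh script itself isn't shown above; a minimal sketch of what it could look like, assuming the AWS CLI is already configured and using placeholder bucket/key names:
#!/bin/sh
# Download one object from S3 into /tmp (bucket and key are placeholders)
aws s3 cp s3://my-example-bucket/path/to/file.txt /tmp/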
If you prefer full control in Python:
Create /opt/httpbot.py:
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import subprocess
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._handle()

    def do_POST(self):
        self._handle()

    def _handle(self):
        try:
            self.log_message("command: %s", self.path)
            if self.path == '/foo':
                subprocess.run(
                    "cd /opt/bar && GIT_SSH_COMMAND='ssh -i .ssh/id_rsa' git pull",
                    shell=True,
                )
        finally:
            self.send_response(200)
            self.send_header("content-type", "application/json")
            self.end_headers()
            self.wfile.write('{"ok": true}\r\n'.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 4242), Handler).serve_forever()
No concurrency/parallelism here, so httpbot runs one command at a time, no conflicts.
Run apt install supervisor
Create /etc/supervisor/conf.d/httpbot.conf:
[program:httpbot]
environment=PYTHONUNBUFFERED="TRUE"
directory=/opt
command=/opt/httpbot.py
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/httpbot.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
Add to your nginx server:
location /foo {
proxy_pass http://127.0.0.1:4242/foo;
}
Run:
chmod u+x /opt/httpbot.py
service supervisor status
# If stopped:
service supervisor start
supervisorctl status
# If httpbot is not running:
supervisorctl update
curl https://example.com/foo
# Should return {"ok": true}
tail /var/log/httpbot.log
# Should show `command: /foo` and the output of the shell script
You can also use the nginx mirror module and proxy_pass it to a web script that runs whatever you need. In my case I just added this to my main site's location { ...
mirror /mirror;
mirror_request_body off;
and then added a new location called /mirror that runs a PHP script which executes whatever I need:
location = /mirror {
internal;
proxy_pass http://localhost/run_script.php;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
}
https://nginx.org/en/docs/http/ngx_http_mirror_module.html
You can use nginx's perl module, which is usually available as a distro package and can be installed easily. Sample that calls the system curl command:
location /mint {
perl '
sub {
my $r = shift;
$r->send_http_header("text/html");
$r->print(`curl -X POST --data \'{"method":"evm_mine"}\' localhost:7545`);
return OK;
}
';
}
