How to run a shell script on every request? - shell

I want to run a shell script every time my nginx server receives any HTTP request. Any simple ways to do this?

You can execute a shell script via Lua code from the nginx.conf file to achieve this. This requires nginx to be built with the HttpLuaModule (lua-nginx-module), which provides the content_by_lua_block directive.
Here's an example:
location /my-website {
    content_by_lua_block {
        os.execute("/bin/myShellScript.sh")
    }
}
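Note that os.execute runs synchronously (the nginx worker is blocked until the script exits) and the location above sends no response body. A slightly fuller sketch (the script path is an example) that reports the exit status back to the client:

```nginx
location /my-website {
    content_by_lua_block {
        -- os.execute blocks this worker until the script finishes
        local ok = os.execute("/bin/myShellScript.sh")
        -- Lua 5.1 returns a number; LuaJIT/Lua 5.2+ may return a boolean
        if ok == 0 or ok == true then
            ngx.say("script ran")
        else
            ngx.status = 500
            ngx.say("script failed")
        end
    }
}
```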

I found the following information at https://www.ruby-forum.com/topic/2960191
This approach expects fcgiwrap to be installed on the machine. It is really as simple as:
sudo apt-get install fcgiwrap
Example script (must be executable):
#!/bin/sh
# -*- coding: utf-8 -*-
NAME="cpuinfo"
echo "Content-type: text/html\r\n"
echo "<html><head>"
echo "<title>$NAME</title>"
echo '<meta name="description" content="'$NAME'">'
echo '<meta name="keywords" content="'$NAME'">'
echo '<meta http-equiv="Content-type" content="text/html;charset=UTF-8">'
echo '<meta name="ROBOTS" content="noindex">'
echo "</head><body><pre>"
date
echo "\nuname -a"
uname -a
echo "\ncpuinfo"
cat /proc/cpuinfo
echo "</pre></body></html>"
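Before wiring the script up to fcgiwrap, you can sanity-check it from the command line: the first line it prints must be a CGI Content-type header. A trimmed-down copy of the script (paths and content are examples) run directly:

```shell
#!/bin/sh
# write a minimal version of the CGI script to a throwaway path and
# check that its first output line is the Content-type header
cat > /tmp/cpuinfo.sh <<'SCRIPT'
#!/bin/sh
echo "Content-type: text/html"
echo ""
echo "<html><body><pre>"
uname -a
echo "</pre></body></html>"
SCRIPT
chmod +x /tmp/cpuinfo.sh
/tmp/cpuinfo.sh | head -n 1
```

If the header line is missing, fcgiwrap will return a 502 instead of the page.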
You can also use this location block as an include file; it is not restricted to shell scripts:
location ~ (\.cgi|\.py|\.sh|\.pl|\.lua)$ {
    gzip off;
    root /var/www/$server_name;
    autoindex on;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include /etc/nginx/fastcgi_params;
    fastcgi_param DOCUMENT_ROOT /var/www/$server_name;
    fastcgi_param SCRIPT_FILENAME /var/www/$server_name$fastcgi_script_name;
}
I found it extremely helpful for what I am working on; I hope it helps you out with your Raspberry Pi project.

1. Install OpenResty (OpenResty is an enhanced version of nginx bundled with add-on modules such as the Lua module). Refer to https://openresty.org/en/getting-started.html for this.
2. Configure the AWS CLI on the instance.
3. Write a shell script that downloads a file from the specified S3 bucket.
4. Make the required changes in the nginx.conf file.
5. Restart the nginx server.
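The download script in the third step might look like the sketch below. The bucket name and key are placeholders, and it assumes the AWS CLI was configured in the previous step; the `|| echo` fallback keeps the script's exit status at 0 so a failed download does not matter to the os.execute call in nginx.conf:

```shell
#!/bin/sh
# hypothetical /tmp/s3.sh: download one object from S3 into /tmp
BUCKET="my-bucket"      # placeholder
KEY="my-file.txt"       # placeholder
# `|| echo` means the script exits 0 even when the download fails
aws s3 cp "s3://$BUCKET/$KEY" "/tmp/$KEY" || echo "download failed: s3://$BUCKET/$KEY"
```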
I tested the HTTP request using curl, and the file gets downloaded into the /tmp directory of the respective instance:
curl -I http://localhost:8080/
Output:
HTTP/1.1 200 OK
Server: openresty/1.13.6.2
Date: Tue, 14 Aug 2018 07:34:49 GMT
Content-Type: text/plain
Connection: keep-alive
Content of nginx.conf file (note that a server block cannot contain two identical location / blocks, so the script call and the response go in one block):
worker_processes 1;
error_log logs/error.log;
events {
    worker_connections 1024;
}
http {
    server {
        listen 8080;
        location / {
            default_type text/html;
            content_by_lua_block {
                os.execute("sh /tmp/s3.sh")
                ngx.say("<p>hello, world</p>")
            }
        }
    }
}

If you prefer full control in Python:
Create /opt/httpbot.py:
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import subprocess
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._handle()

    def do_POST(self):
        self._handle()

    def _handle(self):
        try:
            self.log_message("command: %s", self.path)
            if self.path == '/foo':
                subprocess.run(
                    "cd /opt/bar && GIT_SSH_COMMAND='ssh -i .ssh/id_rsa' git pull",
                    shell=True,
                )
        finally:
            self.send_response(200)
            self.send_header("content-type", "application/json")
            self.end_headers()
            self.wfile.write('{"ok": true}\r\n'.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 4242), Handler).serve_forever()
There is no concurrency/parallelism here, so httpbot runs one command at a time, with no conflicts.
Run apt install supervisor
Create /etc/supervisor/conf.d/httpbot.conf:
[program:httpbot]
environment=PYTHONUNBUFFERED="TRUE"
directory=/opt
command=/opt/httpbot.py
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/httpbot.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
Add to your nginx server:
location /foo {
    proxy_pass http://127.0.0.1:4242/foo;
}
Run:
chmod u+x /opt/httpbot.py
service supervisor status
# If stopped:
service supervisor start
supervisorctl status
# If httpbot is not running:
supervisorctl update
curl https://example.com/foo
# Should return {"ok": true}
tail /var/log/httpbot.log
# Should show `command: /foo` and the output of the shell script

You can also use the nginx mirror module and proxy_pass the mirrored request to a web script that runs whatever you need. In my case I just added this to my main site location { ... }:
mirror /mirror;
mirror_request_body off;
and then added a new location called /mirror that runs a PHP script executing whatever I need:
location = /mirror {
    internal;
    proxy_pass http://localhost/run_script.php;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
https://nginx.org/en/docs/http/ngx_http_mirror_module.html
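For completeness: the mirror target above proxies to http://localhost/run_script.php, so something listening on localhost must actually handle that path. A hypothetical backend location (the php-fpm socket path is an example) might look like:

```nginx
location = /run_script.php {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;  # example php-fpm socket
}
```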

You can use nginx's perl module, which is usually available from distribution repositories and can be easily installed. Sample that calls the system curl command:
location /mint {
    perl '
        sub {
            my $r = shift;
            $r->send_http_header("text/html");
            $r->print(`curl -X POST --data \'{"method":"evm_mine"}\' localhost:7545`);
            return OK;
        }
    ';
}

Related

Let's Encrypt Unauthorized 404 nginx no such file or directory but get 200 response

I have a bash script that creates nginx virtual hosts and then requests an SSL certificate via certbot (Let's Encrypt).
The problem I'm running into: the bash script works great for the first 3-5 domains, but then it starts throwing an unauthorized error. The weird thing is that the error at the end says 404, not 403, and the letsencrypt log shows a 200 response (I've replaced some sensitive info for privacy). Also, possibly relevant to an answer: all these virtual hosts point to the same root directory, so let me know if that plays a part:
HTTP 200
Server: nginx
Date: Sun, 01 May 2022 14:30:03 GMT
Content-Type: application/json
Content-Length: 1036
Connection: keep-alive
Boulder-Requester: 494001960
Cache-Control: public, max-age=0, no-cache
Link: <https://acme-v02.api.letsencrypt.org/directory>;rel="index"
Replay-Nonce: asdfasdfasdfasdfasdf
X-Frame-Options: DENY
Strict-Transport-Security: max-age=604800
{
  "identifier": {
    "type": "dns",
    "value": "www.1.1.1.1.1(hiddenfor privacy).com"
  },
  "status": "invalid",
  "expires": "2022-05-08T14:29:24Z",
  "challenges": [
    {
      "type": "http-01",
      "status": "invalid",
      "error": {
        "type": "urn:ietf:params:acme:error:unauthorized",
        "detail": "1.1.1.1.1(hiddenfor privacy): Invalid response from http://www.example.com/.well-known/acme-challenge/asdfasdfdasfsdf: 404",
        "status": 403
      },
      "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/asdfasdf/KIm4sQ",
      "token": ";alskdjf;laskdjf;lka",
      "validationRecord": [
        {
          "url": "http://www.example.com/.well-known/acme-challenge/a;sldkfja;sldkfja;lsdkfj",
          "hostname": "www.example.com",
          "port": "80",
          "addressesResolved": [
            "1.1.1.1.1(hiddenfor privacy)"
          ],
          "addressUsed": "1.1.1.1.1(hiddenfor privacy)"
        }
      ],
      "validated": "2022-05-01T14:30:02Z"
    }
  ]
}
I also get this in the nginx error log:
2022/05/01 14:36:50 [error] 148403#148403: *1255 open() "/var/www/example_lander/example_example/.well-known/acme-challenge/asdfasdf" failed (2: No such file or directory), client: 35.89.74.29, server: example.com, request: "GET /.well-known/acme-challenge/asdfasdfasdfasdfasdf HTTP/1.1", host: "www.example.com"
Here's the bash script I use to create the virtual host and request the SSL certificate:
# process for creating ssl and domain settings
cd /etc/nginx/sites-available/
wait
sudo sh -c "echo 'server {
    server_name $1 www.$1;
    root /var/www/example_lander/example_example;
    index index.html index.htm index.php;
    location ^~ /.well-known {
        allow all;
    }
    location / {
        try_files \$uri \$uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    }
    location ~ /\.ht {
        deny all;
    }
}' > $1"
wait
sudo sh -c "ln -s /etc/nginx/sites-available/$1 /etc/nginx/sites-enabled/"
wait
sudo sh -c "service nginx reload"
wait
sudo sh -c "certbot --nginx -d $1 -d www.$1 --redirect --agree-tos"
wait
So, as you can see, I've also added the .well-known location that fixes this for a lot of other Let's Encrypt users, but I still get stuck with this random 404/403/200 behavior after 3-5 successful runs of the bash script.
Have you tried the standalone challenge? That way you don't have to create a server block for each domain; as long as the DNS record of the domain points to the server running the challenge, it should do the job.
Example:
#!/bin/bash
certbot certonly --standalone -d "$1" -d "www.$1" \
--non-interactive --agree-tos --email <your_email> # \
# --pre-hook="<stop_nginx_maybe>" \ # stop anything that is running on port 80
# --post-hook="<start_nginx_maybe>" # start anything that was running on port 80
# print certificate
cat /etc/letsencrypt/live/$1/{fullchain.pem,privkey.pem}
And then you can add an entry to crontab that runs a renew script which after renew runs some code for each certificate.
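The renew step mentioned above might look like this hypothetical crontab entry (the schedule and deploy hook are examples):

```shell
# renew twice daily; --deploy-hook runs only for certificates that actually renewed
0 3,15 * * * certbot renew --deploy-hook "systemctl reload nginx"
```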
So it ended up being that the CPU usage was high due to certbot. In AWS I enabled unlimited mode for my t3.medium to allow the certs to be issued (since there are a lot of them), and that seemed to solve the problem. My theory is that, because of certbot's high CPU usage, the challenge files most likely could not be deployed quickly enough by the time the challenge happened, so validation failed.

Shell Script To Automate Nginx Blocks

I'm trying to automate nginx setup through a script.
Command: SECRET_KEY='xxxx' HTTP='$http_update' AMCU_HOST='$host' AMCU_RURI='$redirect_uri' sh ./script.sh domain.com port username
In the server block file I want the literal $http_update, but the script replaces it and just leaves a blank ''. The same happens for AMCU_HOST='$host' and AMCU_RURI='$redirect_uri'. I tried without the attributes in the command, but the same thing happens again: it leaves '' because the script treats them as shell variables.
script.sh
#!/bin/bash
domain=$1
port=$2
user=$3
block="/etc/nginx/sites-available/$domain"
ssh="/home/$user/.ssh/authorized_keys"
#Create User
echo "▶ Creating User"
sudo useradd $user
#User mkdir
echo "▶ Updating home dir"
sudo mkdir /home/$user
#Create .ssh/authkeys
echo "▶ Updating SSH dir"
cd /home/$user && mkdir .ssh/
#Create the SSH Auth file:
echo "▶ Updating SSH AuthKey"
sudo tee $ssh > /dev/null <<EOF
$SECRET_KEY
EOF
#Create the Nginx server block file:
echo "▶ Updating NGINX Server Block"
sudo tee $block > /dev/null <<EOF
server {
    listen 80;
    server_name $domain;
    return 301 https://$domain$AMCU_RURI;
}
server {
    # Secure HTTP (HTTPS)
    listen 443 ssl;
    server_name $domain;
    error_page 500 502 503 504 /500.html;
    location /500.html {
        root /var/www/html;
        internal;
    }
    ssl_certificate /etc/letsencrypt/live/$domain/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/$domain/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        proxy_pass http://localhost:$port;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $HTTP;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $AMCU_HOST;
        proxy_cache_bypass $HTTP;
    }
}
EOF
#Link to make it available
echo "▶ Linking Server Blocks"
sudo ln -s $block /etc/nginx/sites-enabled/
#Test configuration and reload if successful
echo "▶ Reloading Server"
sudo nginx -t && sudo service nginx reload
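Regarding the blank values: inside an unquoted heredoc the shell expands every $variable before tee sees the text, and any variable that is unset in the script's environment expands to an empty string. One robust alternative (a sketch, not the exact script) is to skip the environment variables entirely and escape nginx's own variables with a backslash so they reach the config file literally:

```shell
#!/bin/sh
# inside an unquoted heredoc, $domain is expanded by the shell,
# while \$host survives as the literal nginx variable $host
domain="example.com"
tee /tmp/demo.conf > /dev/null <<EOF
server_name $domain;
proxy_set_header Host \$host;
EOF
cat /tmp/demo.conf
```

The generated file contains `server_name example.com;` and the literal `proxy_set_header Host $host;`.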

Puma - No such file or directory - connect(2). No idea where it's getting this location from

So, I'm trying to deploy a Sinatra app with Capistrano. I have deployed the app successfully; however, I am unable to start Puma. When I enter my app's current directory and run pumactl -F config/puma.rb start I get the following error:
ubuntu@ip-10-0-0-195:/srv/apps/cx/current$ pumactl -F config/puma.rb start
[18512] Puma starting in cluster mode...
[18512] * Version 4.3.5 (ruby 2.5.1-p57), codename: Mysterious Traveller
[18512] * Min threads: 2, max threads: 6
[18512] * Environment: staging
[18512] * Process workers: 1
[18512] * Phased restart available
No such file or directory - connect(2) for /srv/apps/cx/releases/shared/tmp/sockets/puma.sock
I have no idea how or why it's looking in the cx/releases directory. I've attached some of my files below and maybe someone can tell me what I'm doing wrong here.
Puma.rb
# Change to match your CPU core count
workers 1
# Min and Max threads per worker
threads 2, 6
app_dir = File.expand_path('../../..', __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to production
rails_env = ENV['RAILS_ENV'] || 'staging'
environment rails_env
# Set up socket location
bind "unix://#{shared_dir}/tmp/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/logs/puma.stdout.log", "#{shared_dir}/logs/puma.stderr.log", true
daemonize
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/current/config/database.yml")[rails_env])
end
lowlevel_error_handler do |ex, env|
  Raven.capture_exception(
    ex,
    message: ex.message,
    extra: { puma: env },
    transaction: 'Puma'
  )
  # note the below is just a Rack response
  [500, {}, ['An error has occurred, and engineers have been informed. Please reload the page']]
end
Puma.service
[Unit]
Description=Connect-Puma Server
After=network.target
[Service]
Type=simple
User=ubuntu
# EnvironmentFile=/srv/apps/cx-api/current/.rbenv-vars
Environment=RAILS_ENV=staging
WorkingDirectory=/srv/apps/cx/current/
ExecStart=/usr/bin/rbenv exec bundle exec puma -C /srv/apps/cx/current/config/puma.rb
ExecStop=/usr/bin/rbenv exec bundle exec pumactl -F /srv/apps/cx/current/config/puma.rb stop
ExecReload=/usr/bin/rbenv exec bundle exec pumactl -F /srv/apps/cx/current/config/puma.rb phased-restart
TimeoutSec=15
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
etc/nginx/sites-enabled/cx
upstream sinatra {
    server unix:/srv/apps/cx/shared/tmp/puma.sock;
}
server {
    root /srv/apps/cx/current/public;
    server_name staging.ldelivers.com;
    location / {
        try_files $uri $uri/index.html @puma;
    }
    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://sinatra;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/staging.lmdelivers.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/staging.lmdelivers.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = staging.lmdelivers.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name staging.lmdelivers.com;
    return 404; # managed by Certbot
}
As you can see, I'm not referencing the /releases directory anywhere. If anyone can point me in the right direction I would appreciate it so much.
Thanks
Try this
app_dir = File.expand_path('../../../..', __FILE__)
Also, the socket path is set as .../tmp/puma.sock in one configuration file and .../tmp/sockets/puma.sock in another.

How to append multi-lines to file in a dockerfile? [duplicate]

This question already has answers here:
launch a CAT command unix into Dockerfile
(7 answers)
Closed 7 months ago.
I have a Dockerfile and can't seem to embed the nginx configuration file in it so that it gets appended to /etc/nginx/nginx.conf.
I tried the following formats:
RUN cat <<EOT >> /etc/nginx/nginx.conf
user www;
worker_processes auto; # determined automatically from the number of cores
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid; # lets you use /etc/init.d/nginx reload|restart|stop|start
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    access_log /var/log/nginx/access.log;
    keepalive_timeout 3000;
    server {
        listen 80;
        root /usr/local/www;
        index index.html index.htm;
        server_name localhost;
        client_max_body_size 32m;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/lib/nginx/html;
        }
    }
}
EOT
and
RUN echo $
'user www; \n
worker_processes auto; # it will be determinate automatically by the number of core \n
error_log /var/log/nginx/error.log warn; \n
pid /var/run/nginx.pid; # it permit you to use /etc/init.d/nginx reload|restart|stop|start \n
events { \n
worker_connections 1024; \n
} \n
http { \n
include /etc/nginx/mime.types; \n
default_type application/octet-stream; \n
sendfile on; \n
access_log /var/log/nginx/access.log; \n
keepalive_timeout 3000; \n
server { \n
listen 80; \n
root /usr/local/www; \n
index index.html index.htm; \n
server_name localhost; \n
client_max_body_size 32m; \n
error_page 500 502 503 504 /50x.html; \n
location = /50x.html { \n
root /var/lib/nginx/html; \n
} \n
} \n
}'
> /etc/nginx/nginx.conf
However, with either of the two examples I get the following error, which looks like Docker is trying to treat the nginx config lines as its own instructions:
Sending build context to Docker daemon 33.28 kB
Error response from daemon: Unknown instruction: WORKER_PROCESSES
Docker version is 1.13.1, build 07f3374/1.13.1, and the distro I am using is CentOS Atomic Host 7.1902, while the Docker base image is Alpine Linux.
Thanks
That should do the trick:
RUN echo $'first line \n\
second line \n\
third line' > /etc/nginx/nginx.conf
Basically it's wrapped in a $'' and uses \n\ for new lines.
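The $'...' form is bash's ANSI-C quoting: escape sequences like \n are translated when the word is parsed, so the string written to the file already contains real newlines. A quick way to convince yourself, writing to a throwaway path:

```shell
#!/bin/bash
# $'...' turns \n into real newlines at parse time
s=$'first line\nsecond line\nthird line'
printf '%s\n' "$s" > /tmp/ansi_c_demo.conf
wc -l < /tmp/ansi_c_demo.conf
```

The file ends up with three lines, one per \n-separated segment.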
I was looking to create & append lines to my .npmrc to install private packages. The only syntax that worked for me was:
RUN echo @myscope:registry=https://gitlab.com/api/v4/packages/npm/ > .npmrc \
 && echo //gitlab.com/api/v4/packages/npm/:_authToken=${MY_TOKEN} >> .npmrc \
 && echo strict-ssl=false >> .npmrc

403 Forbidden on Rails app w/ Nginx, Passenger, unix

Hi, I am having the 403 error despite following the steps from here:
403 Forbidden on Rails app w/ Nginx, Passenger
My app folder permissions
namei -l /home/ubuntu/resume_consumer/current/public
f: /home/ubuntu/resume_consumer/current/public
drwxr-xr-x root root /
drwxr-xr-x root root home
drwxr-xr-x ubuntu ubuntu ubuntu
drwxrwxr-x ubuntu ubuntu resume_consumer
lrwxrwxrwx ubuntu ubuntu current -> /home/ubuntu/resume_consumer/releases/20150815211156
drwxr-xr-x root root /
drwxr-xr-x root root home
drwxr-xr-x ubuntu ubuntu ubuntu
drwxrwxr-x ubuntu ubuntu resume_consumer
drwxrwxr-x ubuntu ubuntu releases
drwxrwxr-x ubuntu ubuntu 20150815211156
drwxrwxr-x ubuntu ubuntu public
The Nginx app is running as nobody
ps waux | grep nginx
root 12005 0.0 0.0 42480 900 ? Ss Jul28 0:00 nginx: master process /opt/nginx/sbin/nginx
nobody 12006 0.0 0.1 42804 2016 ? S Jul28 0:00 nginx: worker process
My nginx config looks as follows
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
    passenger_root /home/ubuntu/.rvm/gems/ruby-2.2.1/gems/passenger-5.0.14;
    passenger_ruby /home/ubuntu/.rvm/wrappers/ruby-2.2.1/ruby;
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;
    server {
        listen 80;
        server_name qa.enterprise.getmeed.com;
        root /home/ubuntu/resume/current/public;
        passenger_enabled on;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        #location / {
        #    root html;
        #    index index.html index.htm;
        #}
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #
        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root html;
        #}
        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}
    }
    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    server {
        listen 80;
        server_name qa.getmeed.com;
        root /home/ubuntu/resume_consumer/current/public;
        index index.html index.htm;
        passenger_enabled on;
        rails_env production;
        passenger_friendly_error_pages on;
        #location / {
        #    root html;
        #    index index.html index.htm;
        #}
    }
    # HTTPS server
    #
    #server {
    #    listen 443;
    #    server_name localhost;
    #    ssl on;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;
    #    ssl_session_timeout 5m;
    #    ssl_protocols SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers on;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
}
When I look at the logs, I find an alert that says PassengerAgent was not found, and another error saying the directory is forbidden. I am not sure if the alert is related.
2015/08/15 23:40:41 [notice] 20858#0: signal process started
2015/08/15 23:40:41 [alert] 12005#0: Unable to start Phusion Passenger: Support binary PassengerAgent not found (tried: /home/ubuntu/.rvm/gems/ruby-2.2.1/gems/passenger-5.0.14/buildout/support-binaries/PassengerAgent and /root/.passenger/support-binaries/5.0.14/PassengerAgent). This probably means that your Phusion Passenger installation is broken or incomplete, or that your 'passenger_root' setting contains the wrong value. Please reinstall Phusion Passenger or adjust the setting (see: https://www.phusionpassenger.com/documentation/Users%20guide%20Nginx.html#PassengerRoot). (-1: Unknown error)
2015/08/15 23:45:04 [error] 20859#0: *375 directory index of "/home/ubuntu/resume_consumer/current/public/" is forbidden, client: 104.135.15.7, server: qa.getmeed.com, request: "GET / HTTP/1.1", host: "qa.getmeed.com"
You simply need to see if the nobody user can read the files inside /home/ubuntu/resume_consumer/current/public. You are showing us the permissions on the directories leading up to public, but the files inside will have their own permissions as well.
First, assuming there is an index.html inside public, does the following return an error:
sudo -u nobody cat /home/ubuntu/resume_consumer/current/public/index.html
If so then you have a permission problem. You could resolve this by adding the nobody user to the same ubuntu group:
sudo adduser nobody ubuntu
and then ensure the group ubuntu users have the same access as the ubuntu user.
sudo chmod -R g=u /home/ubuntu/resume_consumer
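chmod g=u copies the owner's permission bits onto the group for every file it touches, which is what makes the adduser approach above work. A small illustration on a throwaway file:

```shell
#!/bin/sh
# g=u copies the user (owner) permission bits to the group
touch /tmp/perm_demo
chmod 744 /tmp/perm_demo   # rwxr--r--
chmod g=u /tmp/perm_demo   # now rwxrwxr--
ls -l /tmp/perm_demo
```

After this, any member of the file's group gets the same access as the owner.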
Thanks all! It turns out there was no issue with nginx. There was an error about Passenger in the nginx logs, which I wasn't sure was related initially, but it turned out to be the root cause.
I had Passenger installed, but it was not installed for the application specifically. I had to go to the application directory, install the passenger gem there, and then point passenger_root in the nginx config at that gem. That fixed the issue.
