How to install a service and launch it during a cloud-init phase via an AWS CloudFormation user data bash script? - bash

I needed to add a shutdown hook on my EC2 instances to do some resource clean-up stuff.
Moreover, I also wanted to be able to start and stop my instance manually for testing purposes, and I wanted the startup and shutdown hooks to be triggered the same way as on the initial bootstrap.
I therefore decided to install a script as a service on an AWS EC2 Ubuntu 16.04 LTS instance via a CloudFormation user-data bash script.
Here is the first naive version of the script:
UserData:
  "Fn::Base64":
    !Sub
    - |
      #!/usr/bin/env bash
      BOOTSTRAP_SCRIPT_NAME=bootstrap
      BOOTSTRAP_SCRIPT_PATH=/etc/init.d/${BOOTSTRAP_SCRIPT_NAME}
      cat > ${BOOTSTRAP_SCRIPT_PATH} <<EOF
      #!/bin/bash
      ### BEGIN INIT INFO
      # Provides:          ${BOOTSTRAP_SCRIPT_NAME}
      # Required-Start:    \$local_fs \$remote_fs \$network \$syslog \$named
      # Required-Stop:     \$local_fs \$remote_fs \$network \$syslog \$named
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: Bootstrap an instance
      # Description:       Bootstrap an instance
      ### END INIT INFO
      function start() {
        echo "STARTUP on \$(date)"
      }
      function stop() {
        echo "SHUTDOWN on \$(date)"
      }
      case "\$1" in
        start)
          start | tee -a /var/log/${BOOTSTRAP_SCRIPT_NAME}.log
          ;;
        stop)
          stop | tee -a /var/log/${BOOTSTRAP_SCRIPT_NAME}.log
          ;;
      esac
      EOF
      chmod +x ${BOOTSTRAP_SCRIPT_PATH}
      update-rc.d -f ${BOOTSTRAP_SCRIPT_NAME} remove
      update-rc.d ${BOOTSTRAP_SCRIPT_NAME} defaults
With this version, the bootstrap script is never started.
I quickly understood that the bootstrap script was installed during the cloud-init phase, i.e. during the Linux SysV init phase itself, and therefore would not take part in the init phase that was already running ... (If this is wrong, tell me ;-))
I then decided to start it manually, the way apache2 is started in the CloudFormation bash examples. I added the following line at the end of the script:
${BOOTSTRAP_SCRIPT_PATH} start
I tested it again, and this time I saw the "STARTUP on XXX" log in the bootstrap.log file.
But when I tried to stop the instance in the console, no "SHUTDOWN on XXX" log appeared in the bootstrap.log file ...
I logged into the instance and tried to start/stop the script manually ... all the startup and shutdown logs appeared 8-O. I then supposed that, as the bootstrap script was not identified as an init script, the stop callback would not be called on instance stop or terminate ... (If this is wrong, tell me ;-))
I then started and stopped the instance several times from the AWS console, and both STARTUP and SHUTDOWN messages appeared in the logs.
This confirmed my hypothesis: the logs are only missing during the first init and shutdown cycle.
So I did something weird and ugly ... I replaced the start command on the last line with this one:
reboot -n
The script now works as I need, but I think there should be a cleaner way to enable my script for init, or at least for the shutdown phase, during cloud-init without rebooting ...
Does anyone have a better solution or more details on the issue?
PS: I tried init u and telinit u instead of reboot, with no success.

The reason for this seems to be that the bootstrap is not started as a service the first time. It is run as a normal script. Instead of ${BOOTSTRAP_SCRIPT_PATH} start, try adding the following line to your user-data:
sudo service ${BOOTSTRAP_SCRIPT_NAME} start
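Put differently, the end of the user-data script would look roughly like this (a sketch reusing the variables defined in the question):
chmod +x ${BOOTSTRAP_SCRIPT_PATH}
update-rc.d -f ${BOOTSTRAP_SCRIPT_NAME} remove
update-rc.d ${BOOTSTRAP_SCRIPT_NAME} defaults
# start it through the init system so it is registered as a running service,
# instead of invoking the script file directly
sudo service ${BOOTSTRAP_SCRIPT_NAME} start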

Related

call a script automatically in container before docker stops the container

I want a custom bash script in the container that is called automatically before the container stops (docker stop or Ctrl+C).
According to this Docker doc and multiple StackOverflow threads, I need to catch the SIGTERM signal in the container and then run my custom script when the signal arrives. As far as I know, SIGTERM is only delivered to the root process with PID 1.
Relevant part of my Dockerfile:
...
COPY container-scripts/entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
I use the [] (exec) form to define the entrypoint; as far as I know, this runs my script directly, without a /bin/sh -c wrapper, so the script itself is PID 1, and when the script eventually execs another process, that process becomes the main process and will receive the docker stop signal.
entrypoint.sh:
...
# run the external bash script if it exists
BOOT_SCRIPT="/boot.sh"
if [ -f "$BOOT_SCRIPT" ]; then
    printf ">> executing the '%s' script\n" "$BOOT_SCRIPT"
    source "$BOOT_SCRIPT"
fi
# start something here
...
The boot.sh is used by child containers to execute something else that the child container wants. Everything is fine, my containers work like a charm.
ps axu in a child container:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
134 root 0:25 /usr/lib/jvm/java-17-openjdk/bin/java -server -D...
...
421 root 0:00 ps axu
Before stopping the container I need to run some commands automatically, so I created a shutdown.sh bash script. The script works fine and does what I need, but at the moment I execute it manually, this way:
$ docker exec -it my-container /bin/bash
# /shutdown.sh
# exit
$ docker container stop my-container
I would like to automate the execution of the shutdown.sh script.
I tried to add the following to the entrypoint.sh but it does not work:
trap "echo 'hello SIGTERM'; source /shutdown.sh; exit" SIGTERM
What is wrong with my code?
Your help and comments guided me in the right direction.
I went through the official documentation again (here, here, and here) and finally found what the problem was.
The issue was the following:
My entrypoint.sh script, which kept the container alive, executed the following command at the end:
# start the ssh server
ssh-keygen -A
/usr/sbin/sshd -D -e "$@"
The -D option runs the SSH daemon in the foreground, so sshd does not detach and become a daemon. That was actually my intention; this is how I kept the container alive.
But this foreground process prevented the trap command from being executed properly. I changed the way I start sshd, and now it runs as a normal background process.
Then, I added the following command to keep alive my docker container (this is a recommended best practice):
tail -f /dev/null
But of course the same issue appeared: tail runs as a foreground process and the trap command does not do its job.
The only way I can keep the container alive and let entrypoint.sh run as the foreground process in Docker is the following:
while true; do
sleep 1
done
This way the trap command works fine and my bash function that handles the SIGINT, etc. signals runs properly when the time comes.
But honestly, I do not like this solution. This endless loop with a sleep looks ugly, but I have no idea at the moment how to manage it in a nice way :(
But that is another question that does not belong to this thread (though it would be great if you could suggest a better solution).
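Putting the pieces described above together, a minimal entrypoint.sh following this pattern might look like the sketch below (the /boot.sh and /shutdown.sh paths are the ones used earlier; the rest is illustrative):
#!/bin/bash
# sketch of the pattern described above: trap the stop signal,
# keep the service in the background, and keep PID 1 in a sleep loop

cleanup() {
    echo ">> caught a stop signal, executing /shutdown.sh"
    [ -f /shutdown.sh ] && source /shutdown.sh
    exit 0
}
trap cleanup SIGTERM SIGINT

# optional boot hook provided by child images
[ -f /boot.sh ] && source /boot.sh

# start sshd without -D so it detaches and PID 1 stays in this script
ssh-keygen -A
/usr/sbin/sshd -e

# keep the container alive; the short sleep lets the trap handler run
# shortly after docker stop sends SIGTERM
while true; do
    sleep 1
done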

Bash script works in while but not in until?

This is very simple. I run this as a systemd service.
#!/bin/bash
until ping -c1 $1 www.google.com &>/dev/null
do protonvpn c -f
done
My systemd service file is:
### BEGIN INIT INFO
# Provides: protonvpn
# Required-Start: $all
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: autostartvpn
### END INIT INFO
[Unit]
After=remote-fs.target
[Service]
ExecStart=/dir/startupscript
[Install]
WantedBy=default.target
The script above does not work when I execute it as a systemd service, and it generally stops working after a reboot for an unknown reason.
I want to run the command protonvpn c -f on boot as soon as I get an internet connection, and I want it to loop until a connection is found (then the kill switch controls the app, and everything works indefinitely).
Can you help me make it work?
If you want to run a service after having an active network connection, you can use this in your systemd service file:
After=network-online.target
I think that would kill two birds with one stone, because with that behaviour you should no longer need any kind of check in your script to make sure it runs after the network connection is established.
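For example, the unit from the question might then look like this (the Wants= line is the usual companion to After= for network-online.target; the file name is just an example):
# /etc/systemd/system/autostartvpn.service
[Unit]
Description=autostartvpn
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/dir/startupscript
[Install]
WantedBy=default.target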
#hads0m has the right idea, but the loop pattern is also wrong.
until command1
do
command2
done
does not do what you think. If command1 succeeds the first time, it never runs command2. If command1 does not succeed, it runs command2, and when that command exits or goes to the background, it runs command1 again. What you want instead is
until command1
do
sleep 1
done
command2
to not run a busy loop and to only run command2 when the prerequisite is met.
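Applied to the script from the question, that pattern would look something like this (keeping the ping check and the protonvpn command from above):
#!/bin/bash
# wait until the network answers, then connect once
until ping -c1 www.google.com &>/dev/null
do
    sleep 1
done
protonvpn c -f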

Started JBOSS Service via Shell Script, but hanging

I managed to start the JBoss service through a shell script running locally on the server.
if [ $? -eq 0 ]; then
{
sh /jboss-6.1.0.Final/bin/run.sh -c server1 -g app1 -u x.x.x.x -b x.x.x.x -Djboss.messaging.ServerPeerID=1 &
}; fi
My problem is that I am able to start the service and the application works, but once the script finishes running, it does not return to the shell ($ prompt) and keeps hanging there forever. When I run the same command directly (without the script), I can hit the Enter key after the command finishes and get my $ prompt back to do other work.
Can someone tell me what I am missing in my code so that I can get back to my $ prompt?
Remove the & from the shell script. Also remove the {} from the if block; it is not needed.
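Applied to the snippet from the question, that suggestion would look roughly like this:
if [ $? -eq 0 ]; then
    sh /jboss-6.1.0.Final/bin/run.sh -c server1 -g app1 -u x.x.x.x -b x.x.x.x -Djboss.messaging.ServerPeerID=1
fi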

Embedded linux application start script works better from command line

I'm running embedded Linux on an Altera FPGA. It uses systemd to run startup, and I have a script in the "multi-user.target.wants" section that runs my application.
When it runs at startup, my code runs slower than when I run the identical script from an SSH shell.
I have checked that the paths are the same, that permissions are correct on the scripts, and that full paths are used in the scripts. Using 'top' I can see that priorities are set the same for the various threads started, yet somehow performance is completely different between the two ways of starting.
The script in full is:
#!/bin/sh
sleep 5s
mount /dev/mmcblk0p5 /home/root/linux
cd /home/root/linux/mem_driver
./memdev_load
cd /home/root/linux/gpio_driver
insmod ./gpiodev.ko
mknod /dev/gpiodev c 249 0
sleep 5s
cd /home/root/src/control
mysqld_safe &
up=0
while [ $up -ne 2 ]
do
    up=$(pgrep mysql | wc -l)
    echo $up
done
sleep 3s
cd /home/root/studio_web/myapp
npm start &
sleep 1s
cd /home/root/src/control
#sleep 1s
./control > /home/root/linux/output.log
Various sleep commands have been inserted to try to make sure things start up in the right order.
Any help in diagnosing why this behaves differently would be greatly appreciated.
Is that the only shell script you are using, or do you have a systemd service file that executes that single shell script?
Using sleep is unreliable here. You should split the script into separate parts and then use systemd to ensure that they are run in order.
For example, we want to mount the directory first, because if this fails then nothing following will be successful. So we create a systemd mount unit:
# home-root-linux.mount
[Unit]
Description=Mount /home/root/linux
Before=gpiodev.service
[Mount]
What=/dev/mmcblk0p5
Where=/home/root/linux
Options=defaults
[Install]
WantedBy=multi-user.target
Then we can create another systemd service which depends on the mount above before executing the three parts of the shell script which were previously separated by sleep to ensure that they were run in order.
# gpiodev.service
[Unit]
Description=Handle gpiodev kernel module
After=home-root-linux.mount
Before=mysqlsafe.service
[Service]
Type=oneshot
ExecStartPre=/home/root/linux/mem_driver/memdev_load
ExecStart=/sbin/insmod gpiodev.ko; /bin/mknod /dev/gpiodev c 249 0
WorkingDirectory=/home/root/linux/gpio_driver
RemainAfterExit=yes
StandardOutput=journal
[Install]
WantedBy=multi-user.target
The second part of the systemd service (following the first sleep). We use a separate shell script, placed in /sbin/ in this example, as it contains a while loop and is best kept separate:
# mysqlsafe.service
[Unit]
Description=MySQL safe
After=gpiodev.service
Before=npmoutput.service
[Service]
Type=oneshot
ExecStart=/sbin/mysqlsafe.sh
WorkingDirectory=/home/root/src/control
RemainAfterExit=yes
StandardOutput=journal
[Install]
WantedBy=multi-user.target
The second part of the original shell script, which is executed by the systemd service above (moved into a separate file due to its complexity):
#!/bin/sh
# /sbin/mysqlsafe.sh
mysqld_safe &
up=0
while [ $up -ne 2 ]
do
    up=$(pgrep mysql | wc -l)
    echo $up
done
Third part of the systemd service (the third section of the original shell script which was separated by sleep):
# npmoutput.service
[Unit]
Description=npm and output to log
After=mysqlsafe.service
[Service]
Type=oneshot
ExecStartPre=/bin/sh -c '/usr/bin/npm start &'
ExecStart=/bin/sh -c '/home/root/src/control/control > /home/root/linux/output.log'
WorkingDirectory=/home/root/studio_web/myapp
RemainAfterExit=yes
StandardOutput=journal
[Install]
WantedBy=multi-user.target
The idea behind this approach is that systemd recognises the importance of each service and its reliance on the preceding one, i.e. if one service fails, the following services in the queue will not execute. You can then check this using systemctl and see the logging in journalctl.
Just a quick copy, paste and edit. Could contain errors as it was not tested or checked.
More reading can be found here regarding systemd service files: https://www.freedesktop.org/software/systemd/man/systemd.service.html
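For completeness, enabling the units from the examples above and checking on them afterwards would look roughly like this (unit names as used above):
# enable the units so they are started at boot
systemctl enable home-root-linux.mount gpiodev.service mysqlsafe.service npmoutput.service

# inspect the state of a unit and follow its log output
systemctl status gpiodev.service
journalctl -u mysqlsafe.service -f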

Run java jar file on a server as background process

I need to run a Java jar on a server in order to communicate between two applications. I have written two shell scripts to run it, but once I start the script I can't shut down / terminate the process. If I press Ctrl+C or close the console, the server shuts down. Could anyone help me modify this script to run as a normal server?
#!/bin/sh
java -jar /web/server.jar
echo $!
#> startupApp.pid
You can try this:
#!/bin/sh
nohup java -jar /web/server.jar &
The & symbol switches the program to run in the background.
The nohup utility keeps the command passed as an argument running even after you log out.
Systemd, which now runs in the majority of distros:
Step 1:
Find your user-defined services directory; mine was at /usr/lib/systemd/system/
Step 2:
Create a text file with your favorite text editor and name it whatever_you_want.service
Step 3:
Put the following template into the file whatever_you_want.service:
[Unit]
Description=webserver Daemon
[Service]
ExecStart=/usr/bin/java -jar /web/server.jar
User=user
[Install]
WantedBy=multi-user.target
Step 4:
Run your service as super user:
$ systemctl start whatever_you_want.service # starts the service
$ systemctl enable whatever_you_want.service # auto starts the service
$ systemctl disable whatever_you_want.service # stops autostart
$ systemctl stop whatever_you_want.service # stops the service
$ systemctl restart whatever_you_want.service # restarts the service
If you're using Ubuntu and have "Upstart" (http://upstart.ubuntu.com/) you can try this:
Create /etc/init/yourservice.conf
with the following content
description "Your Java Service"
author "You"
start on runlevel [3]
stop on shutdown
expect fork
script
cd /web
java -jar server.jar >/var/log/yourservice.log 2>&1
emit yourservice_running
end script
Now you can issue the service yourservice start and service yourservice stop commands. You can tail /var/log/yourservice.log to verify that it's working.
If you just want to run your jar from the console without it hogging the console window, you can just do:
java -jar /web/server.jar > /var/log/yourservice.log 2>&1
Run it in the background and send the logs to a log file using the following:
nohup java -jar /web/server.jar > log.log 2>&1 &
