Bash script works in while but not in until?

This is very simple. I run this as a systemd service.
#!/bin/bash
until ping -c1 $1 www.google.com &>/dev/null
do protonvpn c -f
done
My systemd unit is:
### BEGIN INIT INFO
# Provides: protonvpn
# Required-Start: $all
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: autostartvpn
### END INIT INFO
[Unit]
After=remote-fs.target
[Service]
ExecStart=/dir/startupscript
[Install]
WantedBy=default.target
The script above does not work when I run it via systemctl, and it stops working entirely after a reboot, for reasons unknown.
I want to run the command protonvpn c -f at boot as soon as I have an internet connection, and I want it to loop until a connection is found (then the kill switch controls the app, and everything works indefinitely).
Can you help me make it work?

If you want to run a service only after an active network connection is available, you can use this in your systemd service file:
After=network-online.target
I think that would kill two birds with one stone, because that behaviour implies you should no longer need any kind of check in your script to make sure it runs after the network connection is established.
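A minimal sketch of the unit with that change (paths as in the question; Wants= is added because After= only orders units and does not by itself pull network-online.target into the boot):
[Unit]
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/dir/startupscript
[Install]
WantedBy=default.target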

@hads0m has the right idea, but the loop pattern is also wrong.
until command1
do
command2
done
does not do what you think. If command1 succeeds the first time, command2 never runs. If command1 fails, command2 runs, and once command2 backgrounds itself or exits, command1 is tried again. What you want instead is
until command1
do
sleep 1
done
command2
so that you avoid a busy loop and run command2 only once the prerequisite is met.
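Applied to the script in the question (keeping the hard-coded hostname and dropping the stray $1 from the original), that gives:
#!/bin/bash
# Poll once per second until the network is reachable.
until ping -c1 www.google.com &>/dev/null
do
    sleep 1
done
# Start the VPN client only once the network is up.
protonvpn c -f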


How to run 2 commands on bash concurrently

I want to test a server program I just made (let's call it A). When A is executed with this command
$VALGRIND ./test/server_tests 2 >>./test/test.log
it blocks, listening for connections. After that, I want to connect to the server in A using
nc 127.0.0.1 1234 < ./test/server_file.txt
so A can be unblocked and continue. The problem is that I have to type these commands manually in two different terminals, since both of them block. I have not figured out a way to automate this in a single shell script. Any help would be appreciated.
You can use & to run the process in the background and continue using the same shell.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
If you want the server to continue running even after you close the terminal, you can use nohup:
nohup $VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
For further reference: https://www.computerhope.com/unix/unohup.htm
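If you also want to check whether the server exited cleanly, a minimal sketch using $! and wait with the same commands (server_pid is just an illustrative variable name):
$VALGRIND ./test/server_tests 2 >>./test/test.log &
server_pid=$!
nc 127.0.0.1 1234 < ./test/server_file.txt
wait $server_pid
echo "server exited with status $?"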
From the question, it looks as if the goal is to build a test script for the server that will also capture the memory check.
For that specific case, it makes sense to extend the referenced question in the comment and add some commands to make it unlikely for the test script to hang. The script caps the time for running the client and the server, and if the tests complete ahead of time, it attempts to shut the server down.
# Put the server in the background, capped at 15 seconds
timeout 15 $VALGRIND ./test/server_tests 2 >>./test/test.log &
svc_pid=$!
# Run the test client, capped at 5 seconds
timeout 5 nc 127.0.0.1 1234 < ./test/server_file.txt
# ... additional tests here
# Terminate the server, if still running. May use other commands/signals, based on the server.
kill -0 $svc_pid && kill $svc_pid
wait $svc_pid
# Check the log file for errors
# ...

Embedded linux application start script works better from command line

I'm running embedded Linux on an Altera FPGA. It uses systemd for startup, and I have a script in the "multi-user.target.wants" section that runs my application.
When it runs at startup, my code runs slower than when I run the identical script from an SSH shell.
I have checked that the paths are the same, that permissions are correct on the scripts, and that full paths are used in the scripts. Using top I can see that priorities are set the same for the various threads started, yet somehow performance is completely different between the two ways of starting.
The script in full is:
#!/bin/sh
sleep 5s
mount /dev/mmcblk0p5 /home/root/linux
cd /home/root/linux/mem_driver
./memdev_load
cd /home/root/linux/gpio_driver
insmod ./gpiodev.ko
mknod /dev/gpiodev c 249 0
sleep 5s
cd /home/root/src/control
mysqld_safe &
up=0
while [ $up -ne 2 ]
do
up=$(pgrep mysql | wc -l);
echo $up
done
sleep 3s
cd /home/root/studio_web/myapp
npm start &
sleep 1s
cd /home/root/src/control
#sleep 1s
./control > /home/root/linux/output.log
Various sleep commands have been inserted to try to make sure things start up in the right order.
Any help in diagnosing why this behaves differently would be greatly appreciated.
Is that the only shell script you are using, or do you have a systemd service file that executes that single shell script?
Using sleep is ineffective here. You should split the script into separate shell scripts and use systemd to ensure they are run in order.
For example, we want to mount the directory first, because if this fails then nothing that follows will be successful. So we create a systemd mount unit:
# home-root-linux.mount
[Unit]
Description=Mount /home/root/linux
Before=gpiodev.service
[Mount]
What=/dev/mmcblk0p5
Where=/home/root/linux
Options=defaults
[Install]
WantedBy=multi-user.target
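Note that systemd requires a mount unit's file name to be derived from its Where= path; systemd-escape can generate the correct name:
systemd-escape -p --suffix=mount /home/root/linux
which prints home-root-linux.mount, the file name used above.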
Then we can create another systemd service which depends on the mount above and runs the first of the three parts of the shell script that were previously separated by sleep to keep them in order.
# gpiodev.service
[Unit]
Description=Handle gpiodev kernel module
After=home-root-linux.mount
Before=mysqlsafe.service
[Service]
Type=oneshot
ExecStartPre=/home/root/linux/mem_driver/memdev_load
# Type=oneshot runs multiple ExecStart lines in order.
ExecStart=/sbin/insmod /home/root/linux/gpio_driver/gpiodev.ko
ExecStart=/bin/mknod /dev/gpiodev c 249 0
WorkingDirectory=/home/root/linux/gpio_driver
RemainAfterExit=yes
StandardOutput=journal
[Install]
WantedBy=multi-user.target
The second part of the shell script (what followed the first sleep) gets its own service. Because it contains a while loop, the shell code lives in a separate script, placed in /sbin/ in this example:
# mysqlsafe.service
[Unit]
Description=MySQL safe
After=gpiodev.service
Before=npmoutput.service
[Service]
Type=oneshot
ExecStart=/sbin/mysqlsafe.sh
WorkingDirectory=/home/root/src/control
RemainAfterExit=yes
StandardOutput=journal
[Install]
WantedBy=multi-user.target
The shell script executed by the service above (moved to its own file because of the loop):
# /sbin/mysqlsafe.sh
#!/bin/sh
mysqld_safe &
# Poll until both mysql processes are up; the sleep avoids a busy loop.
up=0
while [ $up -ne 2 ]
do
    up=$(pgrep mysql | wc -l)
    echo $up
    sleep 1
done
The third systemd service (the third section of the original shell script, which followed the last sleep):
# npmoutput.service
[Unit]
Description=npm and output to log
After=mysqlsafe.service
[Service]
Type=oneshot
# systemd Exec lines don't understand shell syntax such as & or >,
# so wrap those parts in a shell.
ExecStartPre=/bin/sh -c '/usr/bin/npm start &'
ExecStart=/bin/sh -c 'cd /home/root/src/control && exec ./control > /home/root/linux/output.log'
WorkingDirectory=/home/root/studio_web/myapp
RemainAfterExit=yes
StandardOutput=journal
[Install]
WantedBy=multi-user.target
The idea behind this approach is that systemd understands the dependency of each service on the one before it: if one service fails, the services queued after it will not execute. You can then check the state using systemctl and see the logging in journalctl.
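For example, after installing the unit files you can enable them and inspect the ordering and logs (unit names as above):
systemctl enable home-root-linux.mount gpiodev.service mysqlsafe.service npmoutput.service
systemctl status gpiodev.service
journalctl -u mysqlsafe.service -b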
Just a quick copy, paste and edit. Could contain errors as it was not tested or checked.
More reading can be found here regarding systemd service files: https://www.freedesktop.org/software/systemd/man/systemd.service.html

How to install a service and launch it during a cloud-init phase via AWS CloudFormation user data bash script?

I needed to add a shutdown hook to my EC2 instances to do some resource clean-up.
Moreover, I also wanted to be able to start and stop my instance manually for testing purposes, with the startup and shutdown hooks triggered the same way as on the initial bootstrap.
I therefore decided to install a script as a service on an AWS EC2 Ubuntu 16.04 LTS instance via a CloudFormation bash script.
Here is the first naive version of the script:
UserData:
"Fn::Base64":
!Sub
- |
#!/usr/bin/env bash
BOOTSTRAP_SCRIPT_NAME=bootstrap
BOOTSTRAP_SCRIPT_PATH=/etc/init.d/${BOOTSTRAP_SCRIPT_NAME}
cat > ${BOOTSTRAP_SCRIPT_PATH} <<EOF
### BEGIN INIT INFO
# Provides: ${BOOTSTRAP_SCRIPT_NAME}
# Required-Start: \\\$local_fs \\\$remote_fs \\\$network \\\$syslog \\\$named
# Required-Stop: \\\$local_fs \\\$remote_fs \\\$network \\\$syslog \\\$named
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Bootstrap an instance
# Description: Bootstrap an instance
### END INIT INFO
function start() {
echo "STARTUP on $(date)"
}
function stop() {
echo "SHUTDOWN on $(date)"
}
case "\$1" in
start)
start | tee -a /var/log/${BOOTSTRAP_SCRIPT_NAME}.log
;;
stop)
stop | tee -a /var/log/${BOOTSTRAP_SCRIPT_NAME}.log
;;
esac
EOF
chmod +x ${BOOTSTRAP_SCRIPT_PATH}
update-rc.d -f ${BOOTSTRAP_SCRIPT_NAME} remove
update-rc.d ${BOOTSTRAP_SCRIPT_NAME} defaults
With this version, the bootstrap script never gets started.
I quickly understood that the bootstrap script was installed during the cloud-init phase, i.e. after the Linux SysV init phase had already run, and so it would not take part in the current init phase ... (if this is wrong, tell me ;-))
I then decided to start it manually, the way apache2 is started in the CloudFormation bash examples. I added the following line at the end of the script:
${BOOTSTRAP_SCRIPT_PATH} start
I tested it again and saw the "STARTUP on XXX" log in the bootstrap.log file after this fix.
But when I stopped the instance from the console, no "SHUTDOWN on XXX" log appeared in the bootstrap.log file ...
I logged into the instance and tried to start/stop the script manually ... all the startup and shutdown logs appeared 8-O. I then supposed that, as the bootstrap script was not registered as an init script, the stop callback would not be called on instance stop or terminate ... (if this is wrong, tell me ;-))
I then started and stopped the instance several times from the AWS console, and both STARTUP and SHUTDOWN messages appeared in the logs.
This confirmed my hypothesis: the logs are only missing during the first init and shutdown cycle.
So I did something weird and ugly ... I replaced the start command on the last line with this one:
reboot -n
The script now works as I need, but I think there should be a cleaner way to enable my script for init, or at least for the shutdown phase, during cloud-init without rebooting ...
Does anyone have a better solution or more details on the issue?
PS: I tried init u and telinit u instead of reboot, with no success.
The reason for this seems to be that the bootstrap script is not started as a service the first time; it is run as a normal script. Instead of ${BOOTSTRAP_SCRIPT_PATH} start, try adding the following line to your user data:
sudo service ${BOOTSTRAP_SCRIPT_NAME} start
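As a quick sanity check that the script is registered with SysV init (names as in the script above), you can list the runlevel symlinks that update-rc.d created:
ls -l /etc/rc2.d/ | grep bootstrap   # S* link: started in runlevel 2
ls -l /etc/rc0.d/ | grep bootstrap   # K* link: stopped at halt/shutdown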

Run / Close Programs over and over again

Is there a way I can write a simple script to run a program, close it about 5 seconds later, and then repeat?
I just want to be able to run a program I wrote over and over again, but to do so I'd have to close it about 5 seconds after running it.
Thanks!
If your command is non-interactive (requires no user interaction):
Launch your program in the background with control operator &, which gives you access to its PID (process ID) via $!, by which you can kill the running program instance after sleeping for 5 seconds:
#!/bin/bash
# Start an infinite loop.
# Use ^C to abort.
while :; do
# Launch the program in the background.
/path/to/your/program &
# Wait 5 seconds, then kill the program (if still alive).
sleep 5 && { kill $! && wait $!; } 2>/dev/null
done
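Alternatively, if the coreutils timeout command is available, the same non-interactive loop can be written more compactly; a sketch:
#!/bin/bash
# Run the program repeatedly, killing it after 5 seconds each round.
while :; do
    timeout 5 /path/to/your/program
done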
If your command is interactive:
More work is needed if your command must run in the foreground to allow user interaction: then it is the command to kill the program after 5 seconds that must run in the background:
#!/bin/bash
# Turn on job control, so we can bring a background job back to the
# foreground with `fg`.
set -m
# Start an infinite loop.
# CAVEAT: The only way to exit this loop is to kill the current shell.
# Setting up an INT (^C) trap doesn't help.
while :; do
# Launch program in background *initially*, so we can reliably
# determine its PID.
# Note: The command line being sent to the background is invariably printed
# to stderr. I don't know how to suppress it (the usual tricks
# involving subshells and group commands do not work).
/path/to/your/program &
pid=$! # Save the PID of the background job.
# Launch the kill-after-5-seconds command in the background.
# Note: A status message is invariably printed to stderr when the
# command is killed. I don't know how to suppress it (the usual tricks
# involving subshells and group commands do not work).
{ (sleep 5 && kill $pid &) } 2>/dev/null
# Bring the program back to the foreground, where you can interact with it.
# Execution blocks until the program terminates - whether by itself or
# by the background kill command.
fg
done
Check out the watch command. It lets you run a program repeatedly and monitor its output. You might have to get a little fancy if you need to kill the program manually after 5 seconds.
https://linux.die.net/man/1/watch
A simple example:
watch -n 5 foo.sh
To literally answer your question:
Run 10 times, sleeping 5 seconds between runs:
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do
# your script
sleep 5
let COUNTER=COUNTER+1
done
Run continuously:
#!/bin/bash
while true; do
# your script
sleep 5
done
If your program requires no input, you can simply do
#!/bin/bash
while true
do
    ./exec_name
    if [ $? -eq 0 ]
    then
        sleep 5
    fi
done

How to switch a sequence of tasks to background?

I'm running two tests on a remote server, here is the command I used several hours ago:
% ./test1.sh; ./test2.sh
The two tests are supposed to run one after the other. If the second runs before the first completes, everything will be ruined and I'll have to restart the whole procedure.
The dilemma is that these two tasks take many hours to complete, and now I'd like to log out of the server and just wait for the result. I don't know how to move both of them to the background ... If I use Ctrl+Z, only the first task is suspended, and the second starts immediately, doing nothing useful while wiping out the current data.
Is it possible to move both of them to the background, preserving their order? I know I should have put the two tasks in the same process group, like (./test1.sh; ./test2.sh) &, but sadly the first test has already been running for several hours, and it would be a pity to restart it.
One option is to kill the second test before it starts, but is there a mechanism to cope with this?
First rename ./test2.sh to ./test3.sh, so that when you suspend the first test the shell's attempt to run ./test2.sh simply fails instead of clobbering your data. Then press Ctrl+Z, followed by bg and disown -h, to resume test1.sh in the background and protect it from hangup when you log out. Then save this script (test4.sh):
while :; do
    sleep 5
    # Once test1.sh is no longer running, start the renamed second test.
    if ! pgrep -f test1.sh &> /dev/null; then
        nohup ./test3.sh &
        break
    fi
done
Then run: nohup ./test4.sh &
and you can log out.
First of all, screen or tmux are your friends here, if you don't already use them (they make working on remote machines an order of magnitude easier).
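For example, a future run could be started inside tmux and safely detached (the session name is arbitrary):
tmux new -s tests             # open a session on the remote machine
./test1.sh && ./test2.sh      # run the tests inside it
# Detach with Ctrl+B then d; reattach later with: tmux attach -t tests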
To use conditional consecutive execution you can write:
./test1.sh && ./test2.sh
which will only execute test2.sh if test1.sh exits with status 0 (conventionally meaning: no error). Example:
$ true && echo "first command was successful"
first command was successful
$ ! true && echo "ain't gonna happen"
More on control operators: http://www.humbug.in/docs/the-linux-training-book/ch08s01.html
