How to get a systemd service running on boot with tmux - bash

I'm trying to have a systemd service run on boot on a Raspberry Pi. It is a bash script that checks a GitHub repository and keeps a Python script running and up to date. I have tried various configurations for the service file after noticing that some network errors occurred, but I can't get it working: the tmux session doesn't start properly.
[Unit]
Description=Sensor API Service
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=simple
Restart=always
ExecStart=/usr/bin/bash /home/pi/sensor/update.sh
[Install]
WantedBy=network-online.target
After boot, tmux doesn't even start properly, and I get the following error:
error connecting to /tmp//tmux-100/default (No such file or directory)
This is the bash script I'm trying to run.
#!/bin/bash
session="sensorInput"
workingDir="/home/pi/sensor"
if [ ! -d $workingDir ]; then
    mkdir $workingDir
    cd $workingDir
else
    cd $workingDir
fi
function UOI () {
    cd $workingDir
    git clone {githublink}
    shopt -s dotglob
    mv -u sensor/* ./
    rm -fr sensor
    git reset --hard
    git pull --force
    git checkout .
    pip3 install -r requirements.txt
}
function run () {
    git fetch
    if [ ! -f "app.py" ]; then
        echo "Python app not found, running update/install function."
        UOI
        echo "Finished installing the script."
        tmux new -d -s $session 'python3 app.py'
        echo "Script is now running in session \'sensorInput\'"
    elif git status --branch --porcelain -uno | grep behind; then
        echo "Differences from main branch found, updating script."
        echo "Terminating python script session."
        tmux kill-session -t $session
        UOI
        echo "Finished updating the script."
        tmux new -d -s $session 'python3 app.py'
        echo "Script is now running in session \'sensorInput\'."
    else
        echo "No differences found, no action taken."
        tmux has-session -t $session 2>/dev/null
        if [ $? != 0 ]; then
            echo "The script is not running, rebooting the script."
            tmux new -d -s $session 'python3 app.py'
            echo "Script is now running in session \'sensorInput\'."
        fi
    fi
}
while true; do run & sleep 60m; done
I have honestly run out of ideas. I searched through quite a few other questions and tried modifying the service file, but I can't seem to figure it out.
I'm sorry if I somehow missed this question already being answered here before. I hope someone can help me.

Related

Linux script run with run-this-one doesn't work with docker

I'm experiencing an issue where I run a command in a cronjob and want to make sure that it's not already being executed. I achieve that by running it as run-one [command] (man-page).
If I want to cancel the already running command and force the new command to run, I run it as run-this-one [command].
At least, that's what I expected. But if the command runs a Docker container, the already-running process seems to be terminated (but isn't): the terminal shows Terminated, yet it keeps showing the output of the command still running in the container (and the commands after the container finishes are not executed). In that case, the command launched with run-this-one is not executed either, which is not what I expected.
Example:
/path/to/file.sh
#!/bin/bash
set -eou pipefail
echo "sleep started..." >&2
docker run --rm alpine /bin/sh -c 'echo "sleep started inside..." && sleep 5 && echo "sleep ended inside..."'
echo "sleep ended..." >&2
If I run sudo run-one /path/to/file.sh in a terminal window and then, before the previous command finishes, run sudo run-one /path/to/file.sh in another terminal, the second command is not executed, as expected, and the first command ends successfully.
Terminal1:
user@host:/path$ sudo run-one /path/to/file.sh
sleep started...
sleep started inside...
sleep ended inside...
sleep ended...
user@host:/path$
Terminal2:
user@host:/path$ sudo run-one /path/to/file.sh
user@host:/path$
But if I run sudo run-one /path/to/file.sh in one terminal window and then, before it finishes, run sudo run-this-one /path/to/file.sh in another terminal, the second command is not executed, which is not expected: it just prints Terminated and returns to the prompt, while the output of the container started from the first terminal keeps appearing (the command is still running in the container created by the first terminal).
Terminal1:
user@host:/path$ sudo run-one /path/to/file.sh
sleep started...
sleep started inside...
Terminated
user@host:/path$ sleep ended inside...
# terminal doesn't show new input from the keyboard, but I can run commands after
Terminal2:
user@host:/path$ sudo run-this-one /path/to/file.sh
user@host:/path$
It works if the file is changed to:
/path/to/file.sh
#!/bin/bash
set -eou pipefail
echo "sleep started..." >&2
sleep 5
echo "sleep ended..." >&2
The script file above with docker is just an example; my case is different, but the problem is the same and occurs regardless of whether the container is run with or without -it.
Does someone know why this is happening? Is there a (not very complex and not very hackish) solution to this problem? I executed the above commands on Ubuntu 20.04 inside a VirtualBox machine (with Vagrant).
Update (2021-07-15)
Based on @ErikMD's comment and @DannyB's answer, I put a trap and a cleanup function in place to remove the container, as can be seen in the script below:
/path/to/test
#!/bin/bash
set -eou pipefail
trap 'echo "[error] ${BASH_SOURCE[0]}:$LINENO" >&2; exit 3;' ERR
RED='\033[0;31m'
NC='\033[0m' # No Color
function error {
    msg="$(date '+%F %T') - ${BASH_SOURCE[0]}:${BASH_LINENO[0]}: ${*}"
    >&2 echo -e "${RED}${msg}${NC}"
    exit 2
}
file="${BASH_SOURCE[0]}"
command="${1:-}"
if [ -z "$command" ]; then
    error "[error] no command entered"
fi
shift;
case "$command" in
    "cmd1")
        function cleanup {
            echo "cleaning $command..."
            sudo docker rm --force "test-container"
        }
        trap 'cleanup; exit 4;' ERR
        args=( "$file" "cmd:unique" )
        echo "$command: run-one ${args[*]}" >&2
        run-one "${args[@]}"
        ;;
    "cmd2")
        function cleanup {
            echo "cleaning $command..."
            sudo docker rm --force "test-container"
        }
        trap 'cleanup; exit 4;' ERR
        args=( "$file" "cmd:unique" )
        echo "$command: run-this-one ${args[*]}" >&2
        run-this-one "${args[@]}"
        ;;
    "cmd:unique")
        "$file" "cmd:container"
        ;;
    "cmd:container")
        echo "sleep started..." >&2
        sudo docker run --rm --name "test-container" alpine \
            /bin/sh -c 'echo "sleep started inside..." && sleep 5 && echo "sleep ended inside..."'
        echo "sleep ended..." >&2
        ;;
    *)
        echo -e "${RED}[error] invalid command: $command${NC}"
        exit 1
        ;;
esac
If I run /path/to/test cmd1 (run-one) in one terminal and /path/to/test cmd2 (run-this-one) in another, it works as expected (the cmd1 process is stopped and removes the container, and the cmd2 process runs successfully).
If I run /path/to/test cmd2 in 2 terminals, it also works as expected (the 1st cmd2 process is stopped and removes the container, and the 2nd cmd2 process runs successfully).
But it's not perfect: in the 2 cases above, the 2nd process sometimes stops with an error before the 1st one removes the container (this occurs intermittently, probably due to a race condition).
And it gets worse: if I run /path/to/test cmd1 in 2 terminals, both commands fail, although the 1st cmd1 should run successfully (it fails because the 2nd cmd1 removes the container in its cleanup).
I tried to put the cleanup in the cmd:unique command instead (removing it from the other 2 places), so that it would be called only by the single running process and avoid the problem above, but weirdly the cleanup is not called there, even though the trap is also defined there.
Just to simplify your question, I would use this command to reproduce the problem:
run-one docker run --rm -it alpine sleep 10
As can be seen, with either run-one or run-this-one, the behavior is definitely not the desired one.
Since the command creates a process managed by docker, I suspect that the run-one set of tools is not the right tool for the job: docker containers should not be killed with pkill, but rather with docker kill.
One relatively easy solution is to embrace the way docker wants you to kill containers, and create your own short run-one-style scripts that handle docker properly.
run-one-docker.sh
#!/usr/bin/env bash
if [[ "$#" -lt 2 ]]; then
    echo "Usage: ./run-one-docker.sh NAME COMMAND"
    echo "Example: ./run-one-docker.sh temp alpine sleep 10"
    exit 1
fi
name="$1"
command=("${@:2}")
container_is_running() {
    [ "$( docker container inspect -f '{{.State.Running}}' "$1" 2> /dev/null)" == "true" ]
}
if container_is_running "$name"; then
    echo "$name is already running, aborting"
    exit 1
else
    docker run --rm -it --name "$name" "${command[@]}"
fi
run-this-one-docker.sh
#!/usr/bin/env bash
if [[ "$#" -lt 2 ]]; then
    echo "Usage: ./run-this-one-docker.sh NAME COMMAND"
    echo "Example: ./run-this-one-docker.sh temp alpine sleep 10"
    exit 1
fi
name="$1"
command=("${@:2}")
container_is_running() {
    [ "$( docker container inspect -f '{{.State.Running}}' "$1" 2> /dev/null)" == "true" ]
}
if container_is_running "$name"; then
    echo "killing old $name"
    docker kill "$name" > /dev/null
fi
docker run --rm -it --name "$name" "${command[@]}"

CMD does not run if used after ENTRYPOINT

I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
    echo "In" >> log.log
    echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter" >> log.log
    echo "Going to sleep for $sleep_second seconds" >> log.log
    sleep $sleep_second
    echo "Finished sleeping" >> log.log
    ((sleep_second_counter+=$sleep_second))
    echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The entrypoint gets called correctly, but CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile, but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish
I am trying to have a single Docker container image that starts the kafka-connect server (ENTRYPOINT) and then, via a bash file (CMD), configures the plugins. The requirement is that the same sequence of steps gets executed every time the container restarts.
CMD does not run after ENTRYPOINT; its value is appended to ENTRYPOINT as arguments, like parameters after a function invocation, forming a single command line.
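For example, with your Dockerfile the two instructions combine into one command line rather than running as two separate steps:
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
# effectively executes:
#   ./etc/confluent/docker/run /usr/share/kafka-connect-script/plugins-config.sh
So your plugins-config.sh is passed as an argument to the run script instead of being executed on its own.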
In your case you want two different commands running sequentially. Then, you may add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run &                       # run in the background so we don't get stuck here
/usr/share/kafka-connect-script/plugins-config.sh  # apply the plugin configuration
sleep 100000000                                    # keep the startup script from exiting, since that would stop the container
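The Dockerfile then needs to run that script instead of the separate ENTRYPOINT/CMD pair. A minimal sketch (the path used for startup_script.sh is an assumption; adjust it to wherever you copy the file):
COPY startup_script.sh /usr/share/kafka-connect-script/startup_script.sh
RUN chmod +x /usr/share/kafka-connect-script/startup_script.sh
ENTRYPOINT ["/bin/bash", "/usr/share/kafka-connect-script/startup_script.sh"]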

Docker Check if DB is Running

entrypoint.sh contains various cqlsh commands that require Cassandra. Without something like script.sh, cqlsh commands fail because Cassandra doesn't have enough time to start. When I execute the following locally, everything appears to work properly. However, when I run via Docker, script.sh never finishes. In other words, $status never changes from 1 to 0.
Dockerfile
FROM cassandra
RUN apt-get update && apt-get install -y netcat
RUN mkdir /dir
ADD ./scripts /dir/scripts
RUN /bin/bash -c 'service cassandra start'
RUN /bin/bash -c '/dir/scripts/script.sh'
RUN /bin/bash -c '/dir/scripts/entrypoint.sh'
script.sh
#!/bin/bash
set -e
cmd="$@"
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
    sleep 3s
    status=$(nc -z localhost 9042; echo $?)
    echo $status
done
exec $cmd
Alternatively, I could do something like until cqlsh -e 'some code'; do ..., as noted here for psql, but that doesn't appear to work for me. I'm wondering how best to approach the problem.
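For reference, that psql-style wait loop adapted to cqlsh would look roughly like this (a sketch; the probe query and the sleep interval are arbitrary choices):
until cqlsh -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do
    echo "Cassandra is unavailable - sleeping"
    sleep 3
done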
You're misusing the RUN command in your Dockerfile. It's not for starting services, it's for making filesystem changes in your image: each RUN step executes in a temporary container whose processes are gone once the step finishes, so nothing started there is still running later. That's why $status never updates: you can't start Cassandra via a RUN command.
You should add service cassandra start and /dir/scripts/entrypoint.sh to your script.sh file, and make that the CMD that's executed by default:
Dockerfile
CMD ["/bin/bash", "-c", "/dir/scripts/script.sh"]
script.sh
#!/bin/bash
set -e
# NOTE: I removed your `cmd` processing in favor of invoking entrypoint.sh
# directly.
# Start Cassandra before waiting for it to boot.
service cassandra start
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
    sleep 3s
    status=$(nc -z localhost 9042; echo $?)
    echo $status
done
exec /bin/bash -c /dir/scripts/entrypoint.sh
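In other words, drop the RUN lines that tried to start Cassandra and the scripts, and end the Dockerfile with that CMD. Then build and run as usual (my-cassandra is just an arbitrary image tag here):
docker build -t my-cassandra .
docker run --rm my-cassandra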

Run go app by service

On CentOS 6.8 I have a golang app that I run with go run main.go, and I need to create a system service to run it at boot, like service httpd.
I know that I have to create a file like /etc/rc.d/init.d/httpd, but I don't know how to write it so that it runs that command.
First, you will need to build your Go binary and put it in your path.
go install main.go
If your "main" file is called main, go install will place a binary called "main" in your path, so I suggest you rename your file to whatever you call your project/server.
mv main.go coolserver.go
go install coolserver.go
You can run coolserver to make sure everything is fine. It will work if your $GOPATH is set up properly (with $GOPATH/bin on your PATH).
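For example (a quick check, assuming the default GOPATH layout):
export PATH="$PATH:$(go env GOPATH)/bin"
coolserver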
Here is an example of an init.d service script, called service.sh:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          <NAME>
# Required-Start:    $local_fs $network $named $time $syslog
# Required-Stop:     $local_fs $network $named $time $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Description:       <DESCRIPTION>
### END INIT INFO
SCRIPT=<COMMAND>
FLAGS="--auth=user:password"
RUNAS=<USERNAME>
PIDFILE=/var/run/<NAME>.pid
LOGFILE=/var/log/<NAME>.log

start() {
    if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE"); then
        echo 'Service already running' >&2
        return 1
    fi
    echo 'Starting service…' >&2
    local CMD="$SCRIPT $FLAGS &> \"$LOGFILE\" & echo \$!"
    su -c "$CMD" $RUNAS > "$PIDFILE"
    echo 'Service started' >&2
}

stop() {
    if [ ! -f "$PIDFILE" ] || ! kill -0 $(cat "$PIDFILE"); then
        echo 'Service not running' >&2
        return 1
    fi
    echo 'Stopping service…' >&2
    kill -15 $(cat "$PIDFILE") && rm -f "$PIDFILE"
    echo 'Service stopped' >&2
}

uninstall() {
    echo -n "Are you really sure you want to uninstall this service? That cannot be undone. [yes|No] "
    local SURE
    read SURE
    if [ "$SURE" = "yes" ]; then
        stop
        rm -f "$PIDFILE"
        echo "Notice: the log file is not removed: '$LOGFILE'" >&2
        update-rc.d -f <NAME> remove
        rm -fv "$0"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    uninstall)
        uninstall
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|uninstall}"
esac
Copy to /etc/init.d:
cp "service.sh" "/etc/init.d/coolserver"
chmod +x /etc/init.d/coolserver
Remember to replace
<NAME> = coolserver
<DESCRIPTION> = Describe your service here (be concise)
<COMMAND> = /path/to/coolserver
<USERNAME> = login of the system user the script should be run as
Start and test your service and install the service to be run at boot-time:
service coolserver start
service coolserver stop
update-rc.d coolserver defaults
I assume you tried to use the Apache web server. Actually, the Go web server is enough by itself; the main point is just to run the Go web server as a system service. So you can use tmux (https://tmux.github.io/) or nohup to keep it running in the background. You can also put the Apache or nginx web server in front of it as a proxy.
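For example, either of these would keep the server running after you log out (assuming the binary is called coolserver, as above):
nohup /path/to/coolserver > /var/log/coolserver.log 2>&1 &
tmux new -d -s coolserver '/path/to/coolserver'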

run inotifywait on background

I have this code, copied from linuxaria.com as an example, and it works just fine in my case. The problem is that when I exit the terminal, inotifywait stops. I want it to run in the background even after I exit the terminal. How can I do that?
#!/bin/sh
# CONFIGURATION
DIR="/tmp"
EVENTS="create"
FIFO="/tmp/inotify2.fifo"

on_event() {
    local date=$1
    local time=$2
    local file=$3
    sleep 5
    echo "$date $time File created: $file"
}

# MAIN
if [ ! -e "$FIFO" ]
then
    mkfifo "$FIFO"
fi

inotifywait -m -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" &
INOTIFY_PID=$!

while read date time file
do
    on_event $date $time $file &
done < "$FIFO"
You can run the script with screen or nohup but I'm not sure how that would help since the script does not appear to log its output to any file.
nohup bash script.sh </dev/null >/dev/null 2>&1 &
Or
screen -dm bash script.sh </dev/null >/dev/null 2>&1 &
Disown could also apply:
bash script.sh </dev/null >/dev/null 2>&1 & disown
You should just test which one would not allow the command to suspend or hang up when the terminal exits.
If you want to log the output to a file, you can try these versions:
nohup bash script.sh </dev/null >/path/to/logfile 2>&1 &
screen -dm bash script.sh </dev/null >/path/to/logfile 2>&1 &
bash script.sh </dev/null >/path/to/logfile 2>&1 & disown
I made a 'service' out of it, so I could stop/start it like a normal service and it would also start after a reboot.
This was made on a CentOS distro, so I'm not sure if it works on others right away.
Create a file with execute rights in the service directory:
/etc/init.d/servicename
#!/bin/bash
# chkconfig: 2345 90 60
case "$1" in
    start)
        nohup SCRIPT.SH > /dev/null 2>&1 &
        echo $! > /var/run/SCRIPT.SH.pid
        ;;
    stop)
        pkill -P `cat /var/run/SCRIPT.SH.pid`
        rm /var/run/SCRIPT.SH.pid
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    status)
        if [ -e /var/run/SCRIPT.SH.pid ]; then
            echo SCRIPT.SH is running, pid=`cat /var/run/SCRIPT.SH.pid`
        else
            echo SCRIPT.SH is not running
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
Everything in caps should be changed to your script's name.
The line # chkconfig: 2345 90 60 makes it possible to start the service when the system is rebooted; this probably doesn't work on Ubuntu-like distros.
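On CentOS you would typically register the script with chkconfig so the runlevel links get created (a sketch, using servicename as the file name from above):
chmod +x /etc/init.d/servicename
chkconfig --add servicename
chkconfig servicename on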
The best way I found is to create a systemd service.
Create systemd file in /lib/systemd/system/checkfile.service:
sudo vim /lib/systemd/system/checkfile.service
And paste this there:
[Unit]
Description = Run inotifywait in background
[Service]
User=ubuntu
Group=ubuntu
ExecStart=/bin/bash /path_to/script.sh
RestartSec=10
[Install]
WantedBy=multi-user.target
and in /path_to/script.sh, you can have this:
inotifywait -m /path-to-dir -e create -e moved_to |
while read dir action file; do
    echo "The file '$file' appeared in directory '$dir' via '$action'" >> /dir/event.txt
done
Make sure that your file is executable by the user:
sudo chmod +x /path_to/script.sh
After creating two files, reload systemd manager configuration with:
sudo systemctl daemon-reload
Now you can use start/stop/enable to your script:
sudo systemctl enable checkfile
sudo systemctl start checkfile
Make sure to replace file/directory/user/group values before executing.
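To confirm the service is running and to follow the script's output, the standard systemd tools apply:
sudo systemctl status checkfile
journalctl -u checkfile -f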
Replace -m with
-d -o /dev/null
i.e.:
inotifywait -d -o /dev/null -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" & INOTIFY_PID=$!
You can check the inotifywait help manual at:
https://helpmanual.io/help/inotifywait/
A method that will work even if the file to be watched is not there yet, or gets deleted in between (just watch the whole directory instead of a single file, and then act on the particular file):
nohup inotifywait -m -e close_write /var/opt/some_directory/ |
while read -r directory events filename; do
    if [ "$filename" = "file_to_be_watched.log" ]; then
        # do your stuff here; I'm just printing the events to file
        echo "$events" >> /tmp/events.log
    fi
done &
