Inotifywait executes bash script first time only

I am running inotifywait on CentOS 6 with the following script:
#!/bin/bash
#
# Try to run inotifywait to execute .build each time a file is changed
watch=~/vuejs/files
outfile=/dev/null
build_script=~/.build
inotifywait --monitor --daemon --outfile $outfile --event modify,create,delete --recursive $watch && $build_script
This references a build_script variable which contains:
#!/bin/bash
function build {
    echo $'#\n# Started Build: ' $(date +"%Y-%m-%d %T") >> $output
    cp -r $src_dir $dest_dir
    npm run --prefix $dest_dir build
    if [[ -d $dest_dir/dist ]]; then
        cp -r $dest_dir/dist/* $out_dir
    else
        echo '# Build Error: ' $(date +"%Y-%m-%d %T") >> $output
    fi
    echo '# Build Completed: ' $(date +"%Y-%m-%d %T") >> $output
}
output=/home/vue/www/.build.txt
src_dir=/home/vue/vuejs/files/*
dest_dir=/home/vue/vuejs/www
out_dir=/home/vue/www/
build $1 $2 $3
When I run ./.listen from the command line everything seems to run fine. I can even do
ps aux | grep inotifywait
and I see that the process is still running, but it only executes the .build script once. Is this because I need to run .listen as a service? How do I make inotifywait execute the .build shell script every time?

The --daemon option makes inotifywait put itself into the background and detach immediately: the foreground process exits right away with success, so the && $build_script part runs exactly once and never again.
You should point $outfile at a real file instead of /dev/null, follow it, and run the build script every time a line is written:
inotifywait --monitor --daemon --outfile "$outfile" --event modify,create,delete --recursive "$watch"
tail -f "$outfile" | while read -r line; do
    "$build_script"
done
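Alternatively, and arguably simpler, you can drop --daemon and --outfile entirely and pipe inotifywait straight into the loop. This is a minimal sketch using the same paths as above, not the original answer's exact code:
#!/bin/bash
watch=~/vuejs/files
build_script=~/.build
# One line is printed per modify/create/delete event; each line triggers one build.
inotifywait --monitor --event modify,create,delete --recursive "$watch" |
while read -r line; do
    "$build_script"
done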

Related

Cron + nohup = script in cron cannot find command?

There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv
if [ ! -f $NPID ]
then
    echo "Not started"
    echo "Starting"
    nohup home/user/server/websrv &> my_script.out &
else
    NUM=$(ps ax | grep $(cat $NPID) | grep -v grep | wc -l)
    if [ $NUM -lt 1 ]
    then
        echo "Not working"
        echo "Starting"
        nohup home/user/server/websrv &> my_script.out &
    else
        ps ax | grep $(cat $NPID) | grep -v grep
        echo "All Ok"
    fi
fi
websrv gets JSON from the user and runs the work.sh script itself.
The problem is that the sh script invoked by websrv "does not see" commands and stops with exit 1.
The script work.sh is like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Using full paths does not help.
If the script is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup home/user/server/websrv & it works as well.
However, if this whole chain of tools is started by cron on boot, work.sh is not able to perform any command except cp, pwd, which, etc. Invoking ssh-add, git, cp, rm, make, etc. forces an exit 1 status of the script. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log that might explain the particular errors.
Try adding the PATH from the session that runs the script correctly to the cron entry (or inside the script).
Get the current path (where the script runs fine) with echo $PATH and add it to the crontab, replacing the placeholder below (<REPLACE_WITH_OUTPUT_FROM_ABOVE>) with that output:
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.)...
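The "(or inside the script)" variant would look roughly like this; a minimal sketch, assuming the standard system directories from the echo $PATH output above are what work.sh needs in order to find ssh-add, git, rm, and friends:
#!/usr/bin/env bash
# Prepend the directories that cron's minimal PATH is missing.
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH"
# ... rest of run.sh (or work.sh) unchanged ...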

CMD does not run if used after ENTRYPOINT

I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The entrypoint gets called correctly, but CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile,
but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish:
I am trying to have a single Docker image that starts the kafka-connect server (ENTRYPOINT) and then configures the plugins via a bash script (CMD). The requirement is that the same sequence of steps gets executed every time the container restarts.
CMD is not run after ENTRYPOINT as a second command; its values are appended to ENTRYPOINT, like parameters after a function invocation, forming a single command line.
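With the Dockerfile above, the container therefore effectively starts this single command, so plugins-config.sh is merely passed as an argument to the Confluent run script and never executed:
./etc/confluent/docker/run /usr/share/kafka-connect-script/plugins-config.sh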
In your case you want two different commands to run sequentially, so you can put them in a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run & # run in the background so we do not block here
/usr/share/kafka-connect-script/plugins-config.sh # apply configuration
sleep 100000000 # keep the startup script from exiting, since that would stop the container
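The Dockerfile would then use that wrapper as its only entrypoint. A sketch, assuming you copy startup_script.sh next to the existing plugins-config.sh and drop the old CMD line:
COPY startup_script.sh /usr/share/kafka-connect-script/startup_script.sh
RUN chmod +x /usr/share/kafka-connect-script/startup_script.sh
ENTRYPOINT ["/usr/share/kafka-connect-script/startup_script.sh"]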

Inotifywait won't run

inotifywait won't run the command.
"Setting up watches.
Watches established" is printed, then the script just exits:
#!/bin/bash
while $(inotifywait -e modify,close_write /home/centos/test.txt);
do
touch /home/centos/log.txt
done
but when I modify test.txt, log.txt is not created.
Tried this version:
#!/bin/bash
inotifywait -e modify,close_write /home/centos/test.txt |
while read output; do
touch /home/centos/log.txt;
done
I tried this as well:
inotifywait -e modify,close_write /home/centos/test.txt |
while read -r filename event; do
echo "test" # or "./$filename"
done
Solved it by adding -m /folder
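For reference, a minimal sketch of what that fix looks like applied to the second version above (the directory path is an assumption; the filename check is added so that touching log.txt in the watched directory does not re-trigger the loop):
inotifywait -m -e modify,close_write /home/centos/ |
while read -r dir event file; do
    if [ "$file" = "test.txt" ]; then
        touch /home/centos/log.txt
    fi
done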

Ubuntu BASH inotifywait to trigger another script

I am trying to use inotifywait within a bash script to monitor a directory for a file with a certain tag in it (*SDS.csv).
I also only want it to execute once (once, when the file is written to the data directory).
example:
#! /bin/bash
inotifywait -m -e /home/adam/data | while read LINE
do
if [[ $LINE == *SDS.csv ]]; then
./another_script.sh
fi
done
While this may not be the ideal solution, it may do the trick:
#! /bin/bash
while true
do
    FNAME="$(inotifywait -e close_write /home/adam/data | awk '{ print $NF }')"
    if [ -f "/home/adam/data/$FNAME" ]
    then
        if grep -q 'SDS.csv' "/home/adam/data/$FNAME"
        then
            ./another_script.sh
        fi
    fi
done
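If the goal is to match the file name (rather than grep the file contents for 'SDS.csv'), a minimal sketch using inotifywait's --format to print just the filename, with the same paths assumed as above:
inotifywait -m -e close_write --format '%f' /home/adam/data |
while read -r FNAME; do
    case "$FNAME" in
        *SDS.csv) ./another_script.sh ;;
    esac
done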

run inotifywait on background

I have this code, copied from linuxaria.com as an example, and it works just fine in my case. The problem is that when I exit the terminal, inotifywait stops. I want it to run in the background even after I exit the terminal. How can I do that?
#!/bin/sh
# CONFIGURATION
DIR="/tmp"
EVENTS="create"
FIFO="/tmp/inotify2.fifo"
on_event() {
    local date=$1
    local time=$2
    local file=$3
    sleep 5
    echo "$date $time File created: $file"
}
# MAIN
if [ ! -e "$FIFO" ]
then
    mkfifo "$FIFO"
fi
inotifywait -m -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" &
INOTIFY_PID=$!
while read date time file
do
    on_event $date $time $file &
done < "$FIFO"
You can run the script with screen or nohup but I'm not sure how that would help since the script does not appear to log its output to any file.
nohup bash script.sh </dev/null >/dev/null 2>&1 &
Or
screen -dm bash script.sh </dev/null >/dev/null 2>&1 &
Disown could also apply:
bash script.sh </dev/null >/dev/null 2>&1 & disown
You should just test which one keeps the command from being suspended or hung up when the terminal exits.
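One way to test: start the script with one of the commands above, close the terminal, open a new one, and check that the processes are still alive, for example:
ps aux | grep -E 'script.sh|inotifywait' | grep -v grep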
If you want to log the output to a file, you can try these versions:
nohup bash script.sh </dev/null >/path/to/logfile 2>&1 &
screen -dm bash script.sh </dev/null >/path/to/logfile 2>&1 &
bash script.sh </dev/null >/path/to/logfile 2>&1 & disown
I made a 'service' out of it, so I could stop/start it like a normal service and it would also start after a reboot.
This was made on a CentOS distro, so I'm not sure if it works on others right away.
Create a file with execute rights in the service directory:
/etc/init.d/servicename
#!/bin/bash
# chkconfig: 2345 90 60
case "$1" in
start)
nohup SCRIPT.SH > /dev/null 2>&1 &
echo $!>/var/run/SCRIPT.SH.pid
;;
stop)
pkill -P `cat /var/run/SCRIPT.SH.pid`
rm /var/run/SCRIPT.SH.pid
;;
restart)
$0 stop
$0 start
;;
status)
if [ -e /var/run/SCRIPT.SH.pid ]; then
echo SCRIPT.SH is running, pid=`cat /var/run/SCRIPT.SH.pid`
else
echo SCRIPT.SH is not running
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
Everything in caps should be changed to your script's name.
The line # chkconfig: 2345 90 60 makes it possible to start the service when the system is rebooted. This probably doesn't work on Ubuntu-like distros.
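On CentOS you would then register and start it the usual SysV way; a sketch, assuming the file above is saved as /etc/init.d/servicename:
chkconfig --add servicename
chkconfig servicename on
service servicename start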
The best way I found is to create a systemd service.
Create systemd file in /lib/systemd/system/checkfile.service:
sudo vim /lib/systemd/system/checkfile.service
And paste this there:
[Unit]
Description = Run inotifywait in background
[Service]
User=ubuntu
Group=ubuntu
ExecStart=/bin/bash /path_to/script.sh
RestartSec=10
[Install]
WantedBy=multi-user.target
and in /path_to/script.sh, you can have this:
inotifywait -m /path-to-dir -e create -e moved_to |
while read dir action file; do
echo "The file '$file' appeared in directory '$dir' via '$action'" >> /dir/event.txt
done
Make sure that your file is executable by the user:
sudo chmod +x /path_to/script.sh
After creating two files, reload systemd manager configuration with:
sudo systemctl daemon-reload
Now you can use start/stop/enable to your script:
sudo systemctl enable checkfile
sudo systemctl start checkfile
Make sure to replace file/directory/user/group values before executing.
Replace -m with
-d -o /dev/null
i.e.:
inotifywait -d -o /dev/null -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" & INOTIFY_PID=$!
You can check the inotifywait help manual at:
https://helpmanual.io/help/inotifywait/
A method that will work even if the file to be watched is not there yet, or gets deleted in between: watch the whole directory instead of a single file, and then act only on the particular file:
nohup inotifywait -m -e close_write /var/opt/some_directory/ |
while read -r directory events filename; do
    if [ "$filename" = "file_to_be_watched.log" ]; then
        # do your stuff here; I'm just printing the events to file
        echo "$events" >> /tmp/events.log
    fi
done &
