"inotifywait -e close_write" not triggering even when tail shows new lines - bash

I have a script:
nohup tail -f /somefile >> /someotherfile.dat &
nohup while inotifywait -e close_write /someotherfile.dat; do ./script.sh; done &
but it seems that script.sh is never activated despite input arriving at the tail of /somefile every 5 minutes. What is wrong with my script above?

From the inotifywait docs:
close_write
A watched file or a file within a watched directory was closed, after being opened in writeable mode. This does not necessarily imply the file was written to.
close_write only triggers when a file is closed.
tail -f /somefile >> /someotherfile.dat
...continually appends to someotherfile.dat. It does not close it after each individual write.
Probably you want the modify event instead.
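A minimal sketch of the corrected pair of commands, assuming the file names from the question (two changes: the event is modify, and because nohup expects a command rather than the while keyword, the loop is wrapped in bash -c):

nohup tail -f /somefile >> /someotherfile.dat &
nohup bash -c 'while inotifywait -e modify /someotherfile.dat; do ./script.sh; done' &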

On what occasion does 'tee' delete the file it was writing to?

bash 4.2, CentOS
The script:
#!/bin/bash
LOG_FILE=$homedir/logs/result.log
exec 3>&1
exec > >(tee -a ${LOG_FILE}) 2>&1
echo
end_shell_number=10
for script in `seq -f "%02g_*.sh" 0 $end_shell_number`; do
    if ! bash $homedir/$script; then
        printf 'Script "%s" failed, terminating...\n' "$script" >&2
        exit 1
    fi
done
It basically runs through sub-shells numbered 00 to 10 and logs everything to LOG_FILE while also displaying it on stdout.
I was watching the log grow with tail -F ./logs/result.log,
and it was working nicely until the log file suddenly got removed.
The sub-shells do nothing related to file descriptors or the log file; they remotely restart Tomcats via ssh commands.
Question:
tee was writing to the log file successfully until the file got erased, and logging stopped from then on.
Is there a filesize limit or timeout in tee? Is there any known behavior where tee deletes a file?
On what occasion does 'tee' delete the file it was writing to?
tee neither deletes nor truncates the file once it has started writing.
Is there a filesize limit or timeout in tee?
No.
Is there any known behavior where tee deletes a file?
No.
Note that the file can be removed while the process (tee) keeps writing to the open file descriptor; the writes still succeed, but the file is no longer accessible by name (see man 3 unlink).
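A quick way to see this on Linux (a minimal sketch; /tmp/demo.log is a throwaway path chosen for illustration):

yes | tee /tmp/demo.log > /dev/null &
teepid=$!                # $! of a backgrounded pipeline is its last process, i.e. tee
sleep 1
rm /tmp/demo.log         # unlink the name; tee's open fd keeps accepting writes
ls -l /proc/$teepid/fd   # one entry points to "/tmp/demo.log (deleted)"
kill $teepid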

How to run an Inotify shell script as an asynchronous process

I have an inotify shell script which monitors a directory and executes certain commands when a new file comes in. I need to make this inotify script a parallelized process, so that the script doesn't wait for one file's processing to complete when multiple files come into the directory.
I have tried using nohup, &, and xargs to achieve this. The problem was that xargs runs the same script as a number of processes, and whenever a new file comes in, all n running processes try to process it. Essentially I only want one of the processes, whichever is idle, to process the new file. Something like a worker pool: whichever worker is free or idle picks up the task.
This is my shell script.
#!/bin/bash
# script.sh
inotifywait --monitor -r -e close_write --format '%w%f' ./ | while read FILE
do
    echo "started script";
    sleep $(( $RANDOM % 10 ))s;
    #some more process which takes time when a new file comes in
done
I tried to execute the script with xargs like this:
xargs -n1 -P3 bash sample.sh
So whenever a new file comes in, it gets processed three times because of -P3, but ideally I want whichever process is idle to pick up the task.
Please shed some light on how to approach this problem.
There is no reason to have a pool of idle processes. Just run one per new file when you see new files appear.
#!/bin/bash
inotifywait --monitor -r -e close_write --format '%w%f' ./ |
while read -r file
do
    echo "started script";
    ( sleep $(( $RANDOM % 10 ))s
      #some more process which takes time when a new "$file" comes in
    ) &
done
Notice the addition of & and the parentheses to group the sleep and the subsequent processing into a single subshell which we can then background.
Also, notice how we always prefer read -r, and the lower-case variable name (see "Correct Bash and shell script variable capitalization").
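If you do want to cap concurrency with plain bash instead of spawning one job per file, a sketch (assuming bash 4.3+ for wait -n; the cap of 3 is arbitrary) is to block until a slot frees up before backgrounding the next job:

#!/bin/bash
inotifywait --monitor -r -e close_write --format '%w%f' ./ |
while read -r file
do
    while [ "$(jobs -pr | wc -l)" -ge 3 ]; do
        wait -n    # block until any one background job finishes
    done
    ( sleep $(( RANDOM % 10 ))s ) &    # placeholder for the real per-file work
done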
Maybe this will work:
https://www.gnu.org/software/parallel/man.html#EXAMPLE:-GNU-Parallel-as-dir-processor
If you have a dir in which users drop files that need to be processed, you can do this on GNU/Linux (if you know what inotifywait is called on other platforms, file a bug report):
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
parallel -u echo
This will run the command echo on each file put into my_dir or subdirs of my_dir.
To run at most 5 processes use -j5.
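For the asker's case, echo would be replaced with the real per-file command (a sketch; process_file.sh is a hypothetical name for the processing script):

inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
parallel -u -j3 ./process_file.sh {}

parallel keeps at most three jobs running and hands each new filename to whichever slot is free, which is the worker-pool behavior asked for.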

Continuously watch a socket file, run command when it doesn't exist?

It seems that when my screen is locked for some period of time, my S.gpg-agent.ssh disappears, and so in order to continue using my key I have to re-initialise it.
Obviously, this is a relatively frequent occurrence, so I've written a function for my shell to kill gpg-agent, restart it, and reset the appropriate environment variables.
This may be a bit of an 'X-Y problem', X being above this line, but I think Y below is more generally useful to know anyway.
How can I automatically run a command when an extant file no longer exists?
The best I've come up with is:
nohup echo "$file" | entr $command &
at login. But entr runs a command when files change, not just deletion, so it's not clear to me how that will behave with a socket.
According to your comment, the cron daemon does not fit.
Watch socket file deletion
Try auditd
# auditctl -w /var/run/<your_socket_file> -p wa
$ tail -f /var/log/audit/audit.log | grep 'nametype=DELETE'
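Here -w adds a watch on the given path and -p wa limits auditing to write and attribute-change operations; deleting the watched file then shows up in the audit log as a record with nametype=DELETE, which is what the grep picks out.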
How to run a script when the event occurs
If you want to run a script on socket file deletion, you can use a while loop, e.g.:
tail -Fn0 /var/log/audit/audit.log |
grep --line-buffered 'name=<your_socket_file>' |
grep --line-buffered 'nametype=DELETE' |
while IFS= read -r line; do
    # your script here
done
Thanks to Tom Klino and his answer.
You don't mention the OS you're using, but if it's Linux, you can use inotifywait from the inotify-tools package:
#!/bin/sh
while inotifywait -qq -e delete_self /path/to/S.gpg-agent.ssh; do
    echo "Socket was deleted!"
    # Recreate it.
done
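For the "Recreate it." step, with GnuPG 2.1+ the agent and its sockets can usually be brought back with gpgconf (an assumption about the asker's setup, not part of the original answer):

gpgconf --launch gpg-agent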

Triggering Event in BASH with inotifywait

I'm monitoring a log file (it's a PBX queue file; it's written to when a call comes in and records the result of what happens to the call, whether the caller hangs up, etc.)
Here's what I have:
while inotifywait -m -e close_write /var/log/asterisk/queue_log_test;
do
    if [ tail -n1 /var/log/asterisk/queue_log | grep EXITWITHTIMEOUT ];
    then
        php /usr/local/scripts/queue_monitor/pbx_queue_timeout.php
    elif [ tail -n1 /var/log/asterisk/queue_log | grep ABANDON ];
    then
        php /usr/local/scripts/queue_monitor/pbx_queue_abandon.php
    elif [ tail -n1 /var/log/asterisk/queue_log | grep COMPLETE ];
    then
        php /usr/local/scripts/queue_monitor/pbx_queue_complete.php
    else
        # Don't do anything unless we've found one of those
        :
    fi
done
Now, if I run the script, it successfully sets up the watches and detects the change/close (I've tried both MODIFY and CLOSE_WRITE, both work)
Setting up watches.
Watches established.
/var/log/asterisk/queue_log_test CLOSE_WRITE,CLOSE
But the event is never triggered (I have tested the PHP scripts outside of the inotify script and they execute splendidly)
If I run tail by hand on the file that's being watched, it works and the phrase is there:
[root@pbx ...local/scripts/queue_monitor]: tail /var/log/asterisk/queue_log_test
ABANDON
[Load: 0.00] [Time: 19:04:43]
[root@pbx ...local/scripts/queue_monitor]:
What is it I'm missing here?
You are using the -m switch of inotifywait, which makes it run indefinitely:
-m, --monitor
Instead of exiting after receiving a single event, execute
indefinitely. The default behaviour is to exit after the first
event occurs.
And the while loop is waiting for it to finish, so it can evaluate its exit code and decide whether the loop should continue.
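A minimal sketch of the fix, assuming the paths from the question: drop -m so inotifywait exits after each event, and test the line with grep -q directly rather than inside [ ] (which cannot contain a pipeline):

#!/bin/bash
while inotifywait -e close_write /var/log/asterisk/queue_log_test; do
    line=$(tail -n1 /var/log/asterisk/queue_log)
    if echo "$line" | grep -q EXITWITHTIMEOUT; then
        php /usr/local/scripts/queue_monitor/pbx_queue_timeout.php
    elif echo "$line" | grep -q ABANDON; then
        php /usr/local/scripts/queue_monitor/pbx_queue_abandon.php
    elif echo "$line" | grep -q COMPLETE; then
        php /usr/local/scripts/queue_monitor/pbx_queue_complete.php
    fi
done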

background shell script terminating automatically

I've created a background shell script to watch a folder (with inotifywait) and execute a process (a php script which sends information to several other servers and updates a database, but I don't think that's relevant) when a new file is created in it.
My problem is that after some time the script is actually terminated, and I don't understand why (I redirected the output to a file so as not to fill up the buffer, even for the php execution).
I'm using Ubuntu 12.04 server and the latest version of php.
Here is my script:
#!/bin/sh
#get the script directory
SCRIPT=$(readlink -f "$0")
script_path=$(dirname "$SCRIPT")
for f in `ls "$script_path"/data/`
do
    php myscript.php "$script_path"/data/$f &
done
#watch the directory for file creation
inotifywait -q -m --format %w%f -e create "$script_path"/data/ | while read -r line; do
    php myscript.php "$line" &
done
You should take a look at nohup and screen; this is exactly what you are looking for.
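For example (a sketch, assuming the script above is saved as watch.sh):

nohup ./watch.sh > watch.log 2>&1 &

nohup makes the script immune to the hangup signal sent when the terminal session ends, which is the most common reason a backgrounded watcher silently dies.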
OK, after hours and hours I finally found a solution. It might (must) be a bit dirty, but it works! As I said in a previous comment, I used the trap command. Here is my final script:
#!/bin/sh
#get the script directory
SCRIPT=$(readlink -f "$0")
script_path=$(dirname "$SCRIPT")
#trap SIGHUP SIGINT SIGTERM and relaunch the script
trap "pkill -9 inotifywait;($SCRIPT &);exit" 1 2 15
for f in `ls "$script_path"/data/`
do
    php myscript.php "$script_path"/data/$f &
done
#watch the directory for file creation
inotifywait -q -m --format %w%f -e create "$script_path"/data/ | while read -r line; do
    php myscript.php "$line" &
done
Hope it will help shell beginners like me :)
Edit: added pkill -9 inotifywait to make sure inotifywait processes won't stack up, the parentheses to make sure the new process is not a child of the current one, and exit to make sure the current process stops running.
