Continuously watch a socket file, run command when it doesn't exist? - shell

It seems that when my screen is locked for some period of time, my S.gpg-agent.ssh disappears, and so in order to continue using my key I have to re-initialise it.
Obviously, this is a relatively frequent occurrence, so I've written a function for my shell to kill gpg-agent, restart it, and reset the appropriate environment variables.
This may be a bit of an 'X-Y problem' (X being the situation above), but I think Y below is more generally useful to know anyway.
How can I automatically run a command when an extant file no longer exists?
The best I've come up with is:
nohup echo "$file" | entr $command &
at login. But entr runs a command when files change, not just deletion, so it's not clear to me how that will behave with a socket.

According to your comment, the cron daemon does not fit your use case.
Watch socket file deletion
Try auditd
# auditctl -w /var/run/<your_socket_file> -p wa
$ tail -f /var/log/audit/audit.log | grep 'nametype=DELETE'
How to run a script when the event occurs
If you want to run a script on socket file deletion, you can pipe the matching audit log lines into a while loop (note the pipe into the loop, and --line-buffered so grep doesn't hold events back), e.g.:
tail -Fn0 /var/log/audit/audit.log | grep --line-buffered 'name=<your_socket_file>' | grep --line-buffered 'nametype=DELETE' |
while IFS= read -r line; do
    # your script here
done
Thanks to Tom Klino and his answer

You don't mention the OS you're using, but if it's Linux, you can use inotifywait from the inotify-tools package:
#!/bin/sh
while inotifywait -qq -e delete_self /path/to/S.gpg-agent.ssh; do
    echo "Socket was deleted!"
    # Recreate it.
done
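For the "Recreate it" step, and as a fallback where inotify-tools isn't installed, a plain polling loop does the same job. This is only a sketch: the 2-second interval, the helper name, and the gpg-connect-agent restart command are assumptions, not from the question.

```shell
#!/bin/sh
# watch_socket FILE CMD...: block until FILE disappears, then run CMD.
watch_socket() {
    file=$1; shift
    while [ -e "$file" ]; do
        sleep 2             # polling interval; tune to taste
    done
    "$@"                    # e.g. restart gpg-agent to recreate the socket
}

# Hypothetical usage:
# watch_socket "$HOME/.gnupg/S.gpg-agent.ssh" gpg-connect-agent updatestartuptty /bye
```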

Related

Does this bash script using grep look correct?

I am having some strange rebooting issues and I think it is due to an error in my shell script.
#!/bin/bash
if ps -a | grep -q gridcoin
then
echo nothing
else
sudo reboot -h now
fi
The script is supposed to work like this:
Run the ps -a command so that all processes are listed. Pipe the results to grep and have grep check to see if any of the processes have the word "gridcoin" in them. If gridcoin is running, do nothing. If gridcoin is not one of the running processes, reboot the system.
I have a cron job that runs this script once every five minutes; however, my system keeps rebooting about every five minutes even when I know for a fact that gridcoin is running.
Please take a look at the code and see if it looks right before I start trying to herd other code cats.
Respectfully,
chadrick
I see at least two problems here. First, ps -a will not show processes that don't have a controlling terminal (so, basically, non-interactive processes); you want ps -ax to show all processes.
Second, depending on the timing, the grep -q gridcoin command may be listed as a running process, and of course it finds itself, meaning that it mistakes itself for the gridcoin process. If you have the pgrep program available, replace both ps and grep with it, since it automatically avoids listing itself. Unfortunately, it doesn't have a -q option like grep, so you have to redirect its output to /dev/null:
if pgrep gridcoin >/dev/null
If pgrep is not available, there are two standardish idioms to prevent grep from finding itself:
if ps -ax | grep -q '[g]ridcoin' # Adding brackets prevents it from matching itself
if ps -ax | grep gridcoin | grep -vq grep # Remove grep processes from consideration
Also, as David C. Rankin pointed out, rebooting the entire system to restart one process is overkill; just restart that one process.
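Putting both fixes together, a corrected check might look like the sketch below, with echo standing in for the reboot so it can be run safely (the function name is made up for illustration):

```shell
#!/bin/sh
# check_and_act NAME: report whether NAME is running; act if it isn't.
check_and_act() {
    if pgrep "$1" >/dev/null; then
        echo "running"        # gridcoin found: do nothing
    else
        echo "not running"    # the real script would run: sudo reboot
    fi
}

check_and_act "${1:-gridcoin}"
```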

How to run an Inotify shell script as an asynchronous process

I have an inotify shell script which monitors a directory and executes certain commands when a new file comes in. I need to parallelise this inotify script so that its execution doesn't block waiting for each file to be processed when multiple files come into the directory.
I have tried using nohup, & and xargs to achieve this. The problem was that xargs starts the same script as a number of processes, and whenever a new file comes in, all n running processes try to process it. Essentially I want only one of the processes, whichever is idle, to process the new file. Something like a worker pool: whichever worker is free or idle picks up the task.
This is my shell script.
#!/bin/bash
# script.sh
inotifywait --monitor -r -e close_write --format '%w%f' ./ | while read FILE
do
    echo "started script";
    sleep $(( $RANDOM % 10 ))s;
    #some more process which takes time when a new file comes in
done
I did try to execute the script like this with xargs =>
xargs -n1 -P3 bash sample.sh
So whenever a new file comes in, it is processed three times because of -P3, but ideally I want whichever process is idle to pick up the task.
Please shed some light on how to approach this problem.
There is no reason to have a pool of idle processes. Just run one per new file when you see new files appear.
#!/bin/bash
inotifywait --monitor -r -e close_write --format '%w%f' ./ |
while read -r file
do
    echo "started script";
    ( sleep $(( $RANDOM % 10 ))s
      #some more process which takes time when a new "$file" comes in
    ) &
done
Notice the addition of & and the parentheses, which group the sleep and the subsequent processing into a single subshell that we can then background.
Also, notice how we always prefer read -r, and lower-case variable names (see Correct Bash and shell script variable capitalization).
Maybe this will work:
https://www.gnu.org/software/parallel/man.html#EXAMPLE:-GNU-Parallel-as-dir-processor
If you have a dir in which users drop files that needs to be processed you can do this on GNU/Linux (If you know what inotifywait is called on other platforms file a bug report):
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
parallel -u echo
This will run the command echo on each file put into my_dir or subdirs of my_dir.
To run at most 5 processes use -j5.

"inotifywait -e close_write" not triggering even when tail shows new lines

I have a script:
nohup tail -f /somefile >> /someotherfile.dat &
nohup while inotifywait -e close_write /someotherfile.dat; do ./script.sh; done &
but it seems that script.sh is never activated despite input arriving at the tail of /somefile every 5 minutes. What is wrong with my script above?
From the inotifywait docs:
close_write
A watched file or a file within a watched directory was closed, after being opened in writeable mode. This does not necessarily imply the file was written to.
close_write only triggers when a file is closed.
tail -f /somefile >> /someotherfile.dat
...continually appends to someotherfile.dat. It does not close it after each individual write.
Probably you want the modify event instead.
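A corrected sketch of the watcher, then: switch to the modify event. Note also that nohup cannot prefix a shell keyword like while directly, so the loop has to be wrapped in sh -c (paths as in the question; untested beyond syntax):

```shell
nohup tail -f /somefile >> /someotherfile.dat &
nohup sh -c 'while inotifywait -qq -e modify /someotherfile.dat; do ./script.sh; done' &
```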

Using bash and watch to monitor qemu-kvm

I'm trying to monitor some qemu-kvm processes using a bash script and watch, showing details like memory/CPU use, resources, ports, etc. Everything goes great until I try to get the IMG file that qemu-kvm is using.
Initially I thought I could get that information from the running process cmd:
ps -p PID -o cmd | tail -1
qemu-kvm -name TEST -drive file=/img/test.qcow2,if=ide,media=disk,cache=none -daemonize
So after plenty of tests I have written this little script just to synthesise the problem:
#!/bin/bash
lsof -p PID | grep '/img/' | awk '{print $9}'
The output of this script looks totally fine when I execute it from the command line:
bash testscript
/img/test.qcow2
The problem comes when I try to run the script along with watch:
watch -n1 "bash test-script" or watch -n1 ./test-script
The output is just empty...
Why don't I get any results? I'd be really glad if somebody could help me understand this.
EDIT: I have found an alternative solution. I am now getting the information by parsing procfs with some arrays to find the IMG info:
OIFS=$IFS;
IFS='-';
Array=($(cat /proc/PID/cmdline))
for ((i=0; i<${#Array[@]}; ++i))
do
    if [[ "${Array[$i]}" == *drive* ]]; then
        image=${Array[$i]}
        echo $image
    fi
done
IFS=$OIFS;
This works fine combined with watch, but I would still like to know what the problem with the other method is. Is lsof limited somehow?
I tried the same process as you; the only difference in my case was the need for sudo. With that it worked. Same issue?
#!/bin/sh
sudo lsof | grep .B.disk.img
watch -n1 sh test

background shell script terminating automatically

I've created a background shell script to watch a folder (with inotifywait) and execute a process (a php script that sends information to several other servers and updates a database, but I don't think that's relevant) when a new file is created in it.
My problem is that after some time the script is actually terminated, and I don't understand why (I redirected the output to a file so as not to fill up the buffer, even for the php execution).
I'm using Ubuntu 12.04 server and latest version of php.
Here is my script:
#!/bin/sh
#get the script directory
SCRIPT=$(readlink -f "$0")
script_path=$(dirname "$SCRIPT")
for f in `ls "$script_path"/data/`
do
    php myscript.php "$script_path"/data/$f &
done
#watch the directory for file creation
inotifywait -q -m --format %w%f -e create "$script_path"/data/ | while read -r line; do
    php myscript.php "$line" &
done
You should take a look at nohup and screen; this is exactly what you are looking for.
OK, after hours and hours I finally found a solution. It might (must) be a bit dirty, but it works! As I said in a previous comment, I used the trap command; here is my final script:
#!/bin/sh
#get the script directory
SCRIPT=$(readlink -f "$0")
script_path=$(dirname "$SCRIPT")
#trap SIGHUP SIGINT SIGTERM and relaunch the script
trap "pkill -9 inotifywait;($SCRIPT &);exit" 1 2 15
for f in `ls "$script_path"/data/`
do
    php myscript.php "$script_path"/data/$f &
done
#watch the directory for file creation
inotifywait -q -m --format %w%f -e create "$script_path"/data/ | while read -r line; do
    php myscript.php "$line" &
done
Hope it will help shell beginners like me :)
Edit: added "pkill -9 inotifywait" to make sure inotifywait processes won't stack up, the parentheses to make sure the new process is not a child of the current one, and exit to make sure the current process stops running.
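An alternative to trapping signals and re-launching from inside the script is a small supervisor loop around it. A minimal sketch (the function name and the restart cap are made up; in production you would more likely use a real process supervisor such as systemd):

```shell
#!/bin/sh
# supervise MAX CMD...: run CMD, restarting it each time it exits,
# at most MAX times, with a short pause between restarts.
supervise() {
    max=$1; shift
    i=0
    while [ "$i" -lt "$max" ]; do
        "$@"
        i=$((i + 1))
        sleep 0.1           # avoid a tight respawn loop
    done
}

# Hypothetical usage: supervise 1000 ./watcher.sh
```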
