I thought this was going to be rather easy, but it's turned out not to be.
I have a mounted network server with inotifywait running, watching a folder. I want to set this up so that anyone on the network with access to the server can drop a file into the watched folder and have it execute a script. Unfortunately, it doesn't seem to trigger unless the file is moved on the server itself.
Is there any way to make inotify watch a folder so that a file arriving from anywhere, not just locally, triggers the event? Or should I look into something else?
For reference, here is what I'm using in a shell script:
inotifywait -m --format '%f' -e moved_to "/mnt/server/folder/" |
while read -r NAME
do
    echo "$NAME"
done
I ended up setting up rsync in a cron job to copy the folder over from the network share every few minutes, then using inotifywait to pick up new files from the local copy. (inotify only reports changes that go through the local kernel, so changes made by other machines on a network filesystem generally don't generate events.)
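For anyone wanting to reproduce that workaround, here is a minimal sketch; the schedule, the mirror path, and the handler are placeholders rather than my original setup:
# cron entry: mirror the network share to a local directory every 5 minutes
# */5 * * * * rsync -a /mnt/server/folder/ /var/local/mirror/

# watcher on the local mirror, where inotify events do fire;
# rsync writes to a temporary file and renames it, so moved_to usually catches new arrivals
inotifywait -m --format '%f' -e moved_to -e close_write "/var/local/mirror/" |
while read -r NAME
do
    echo "new file: $NAME"   # replace with the real handler script
done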
inotifywait workaround for mounted drives :/
saveloop2 ./03_packages.js "clear" "node ./03_packages.js"
#!/usr/bin/env bash
if [[ "$#" -lt 2 ]]; then
    echo -e "\e[91;48;5;52musage: saveloop <watchfile> <execution script> [another] [another] ✗✗✗\e[0m\n"
    echo -e "example: saveloop .gitalias \"foo param1 param2\" [\"command2\"] [\"command3\"]"
    echo -e "Note: each command with its params must be in quotes."
    exit 1
fi
while :
do
    oldTimestamp=$timestamp
    timestamp=$(stat -c %Y "$1")
    if [[ $oldTimestamp != "$timestamp" ]]; then
        $2
        $3
        $4
    fi
    sleep 1
done
I have a shell script with inotifywait set up as follows:
inotifywait -r -e close_write,moved_to -m "<path>/upload" --format '%f######%e######%w'
There are some docx files in the watched directory, and a script converts them to PDF with the command below:
soffice --headless --convert-to pdf:writer_pdf_Export <path>/upload/somedoc.docx --outdir <path>/upload/
Somehow the event is triggered twice as soon as the PDF is generated. The entries are:
somedoc.pdf######CLOSE_WRITE,CLOSE######<path>/upload/
somedoc.pdf######CLOSE_WRITE,CLOSE######<path>/upload/
What else is wrong here?
Regards
It's triggered twice because this is how soffice appears to behave internally.
One day it may start writing the file 10 times, with a 2-second sleep between writes, during a single run; our program can't anticipate that, and I believe it shouldn't depend on it.
So I'd try solving the problem from a different angle: let's just put the converted file into a temporary directory and then move it into the target directory, like this:
soffice --headless --convert-to pdf:writer_pdf_Export <path>/upload/somedoc.docx --outdir <path>/tempdir/ && mv <path>/tempdir/somedoc.pdf <path>/upload/
and use inotifywait in the following way:
inotifywait -r -e moved_to -m "<path>/upload" --format '%f######%e######%w'
The advantage is that you no longer depend on soffice's internal logic.
If you can't adjust the behavior of the script producing the PDF files, then indeed you'll need to resort to a workaround like the one @Tarun suggested.
I don't think you can control the external program as such. But I assume you are piping this output somewhere and processing it further; in that case you can skip an event that repeats within a span of a few seconds.
So we add %T to --format and --timefmt "%s" to get the epoch time. Below is the updated command:
$ inotifywait -r -e close_write,moved_to --timefmt "%s" -m "/home/vagrant" --format '%f######%e######%w##T%T' -q | ./process.sh
test.txt######CLOSE_WRITE,CLOSE######/home/vagrant/
Skipping this event as it happened within 2 seconds. TimeDiff=2
test.txt######CLOSE_WRITE,CLOSE######/home/vagrant/
This was done by running touch test.txt multiple times within a second, and as you can see the second event was skipped. process.sh is a simple bash script:
#!/bin/bash
LAST_EVENT=
LAST_EVENT_TIME=0
while read -r line
do
    DEL="##T"
    # split each line on the "##T" delimiter: a[1] is the event, a[2] is the epoch time
    EVENT_TIME=$(echo "$line" | awk -v delimiter="$DEL" '{split($0,a,delimiter)} END{print a[2]}')
    EVENT=$(echo "$line" | awk -v delimiter="$DEL" '{split($0,a,delimiter)} END{print a[1]}')
    TIME_DIFF=$(( EVENT_TIME - LAST_EVENT_TIME ))
    if [[ "$EVENT" == "$LAST_EVENT" ]]; then
        if [[ $TIME_DIFF -gt 2 ]]; then
            echo "$EVENT"
            LAST_EVENT_TIME=$EVENT_TIME   # remember when we last let this event through
        else
            echo "Skipping this event as it happened within 2 seconds. TimeDiff=$TIME_DIFF"
        fi
    else
        echo "$EVENT"
        LAST_EVENT_TIME=$EVENT_TIME
    fi
    LAST_EVENT=$EVENT
done < "${1:-/dev/stdin}"
In your actual script you would drop the echo in the skip branch; it is only there for demonstration purposes.
I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js"   # might want to change this
function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        "$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh   # might want to change this too
function build {
    bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
    build
done
It uses inotify-tools. Check the inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
username@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get the status-change time (ctime) of a file and runs a command whenever that time changes.
#!/bin/bash
while true
do
    ATIME=$(stat -c %Z /path/to/the/file.txt)
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo "RUN COMMAND"
        LTIME=$ATIME
    fi
    sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
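If you do go that route, a toy sketch might look like this (the log path and the handler are made up for illustration; tail -f only notices appended lines, not arbitrary changes):
# print a message whenever new lines are appended to the watched file
tail -n0 -F /var/log/app.log | while read -r line
do
    echo "file grew: $line"   # run your command here instead
done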
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"                # Name of file
command="${*:2}"         # Command to run on change (takes rest of line)
t1="$(ls --full-time "$file" | awk '{ print $7 }')"       # Get latest save time
while true
do
    t2="$(ls --full-time "$file" | awk '{ print $7 }')"   # Compare to new save time
    if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi    # If different, run command
    sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for macOS, change the ls lines to:
"$(ls -lT "$file" | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
    if [ -z "$1" -o -z "$2" ]; then
        echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
    elif ! [ -r "$1" ]; then
        echo "Can't react to $1, permission denied"
    else
        TARGET="$1"; shift
        ACTION="$*"   # the remaining arguments form the command to run
        while sleep 1; do
            ATIME=$(stat -c %Z "$TARGET")
            if [[ "$ATIME" != "${LTIME:-}" ]]; then
                LTIME=$ATIME
                $ACTION
            fi
        done
    fi
}
A quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if [ "$hash" != "$old_hash" ]
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 if you're on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do what you need.
Here is a typical example:
inotifywait -m /path -e create -e moved_to | # -m is --monitor, -e is --event
while read -r path action file; do
    if [[ "$file" =~ .*rst$ ]]; then      # if the suffix is '.rst'
        echo "${path}${file}: ${action}"  # execute your command here
        echo 'make html'
        make html
    fi
done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ "$t_ref" != "$t_curr" ]; then t_ref=$t_curr; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works between any filesystem and virtual machines (shared folders on VirtualBox using Vagrant); so you can use a text editor on your Macbook and run the tests on Ubuntu (virtual box), for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
I want a launcher that runs a Bash command to toggle a setting; switching the setting one way requires one command, and switching it the other way requires another. If there is no easy way to query the system for the current status of that setting, how should Bash remember the status so that it can run the appropriate command?
An obvious solution would be to save the status in files and then check for their existence to determine the appropriate command to run, but is there a neater way, perhaps one that uses volatile memory?
Here's an attempt at a toggle script using temporary files:
#!/bin/bash
main(){
    settingOn="/tmp/red_on.txt"
    settingOff="/tmp/red_off.txt"
    if [[ ! -e "${settingOff}" ]] && [[ ! -e "${settingOn}" ]]; then
        echo "no prior use detected -- creating default off"
        touch "${settingOff}"
    fi
    if [ -f "${settingOff}" ]; then
        echo "switch on"
        redshift -o -t 1000:1000 -l 0.0:0.0
        rm -f "${settingOff}"
        touch "${settingOn}"
    elif [ -f "${settingOn}" ]; then
        echo "switch off"
        redshift -x
        rm -f "${settingOn}"
        touch "${settingOff}"
    fi
}
main
Is it possible to save the last value of a variable entered by the user in the bash script itself, so that I can reuse that value the next time the script is executed?
Eg:
#!/bin/bash
if [ -d "/opt/test" ]; then
echo "Enter path:"
read path
p=$path
else
.....
........
fi
The above script is just a sample (which may be wrong). Is it possible to save the value of p permanently in the script itself, so that I can use it later in the script even when the script is re-executed?
EDIT:
I am already using sed to overwrite lines in the script while it executes. This works, but as noted it is not good practice. Replacing the lines in the same file as described in the answer below is much better than what I am currently using, which looks like this:
...
....
PATH=""; #This is line no 7
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )";
name="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")";
...
if [ condition ]
fi
path=$path
sed -i '7s|.*|PATH='$path';|' $DIR/$name;
Something like this should do what you're asking:
#!/bin/bash
ENTERED_PATH=""
if [ "$ENTERED_PATH" = "" ]; then
    echo "Enter path"
    read -r path
    ENTERED_PATH=$path
    # use | as the sed delimiter so paths containing / don't break the substitution
    sed -i "s|ENTERED_PATH=\"\"|ENTERED_PATH=\"$path\"|" "$0"
fi
This script will ask the user for a path only if ENTERED_PATH has not been defined previously, and it stores the value directly in the current file via the sed line.
Maybe a safer way to do this would be to write a config file somewhere with the data you want to save and source it (. data.saved) at the beginning of your script.
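A minimal sketch of that config-file idea, assuming the saved data lives in a file called data.saved next to the script (the variable name is just an example):
#!/bin/bash
conf="$(dirname "$0")/data.saved"
# load the previously saved value, if the config file exists
[ -r "$conf" ] && . "$conf"
if [ -z "$ENTERED_PATH" ]; then
    read -r -p "Enter path: " ENTERED_PATH
    # persist the value for the next run; %q quotes it safely for sourcing
    printf 'ENTERED_PATH=%q\n' "$ENTERED_PATH" > "$conf"
fi
echo "Using path: $ENTERED_PATH"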
In the script itself? Yes, with sed, but it's not advisable.
#!/bin/bash
test='0'
echo "test currently is: $test"
test=$(( test + 1 ))
echo "changing test to: $test"
sed -i "s/test='[0-9]*'/test='$test'/" "$0"
Preferable method:
Try saving the value in a separate file; then you can easily do
myvar=$(cat varfile.txt)
and whatever was in the file is now in your variable.
I would suggest using the /tmp/ dir to store the file in.
Another option would be to save the value as an extended attribute attached to the script file. This has many of the same problems as editing the script's contents (permissions issues, weird for multiple users, etc) plus a few of its own (not supported on all filesystems...), but IMHO it's not quite as ugly as rewriting the script itself (a config file really is a better option).
I don't use Linux, but I think the relevant commands would be something like this:
path="$(getfattr --only-values -n "user.saved_path" "${BASH_SOURCE[0]}")"
if [[ -z "$path" ]]; then
read -p "Enter path:" path
setfattr -n "user.saved_path" -v "$path" "${BASH_SOURCE[0]}"
fi
Does anyone know how to lock a function in a bash script?
I want to do something like Java's synchronized, ensuring that when several files are saved in the monitored folder, only one at a time uses the submit function.
An excerpt from my script:
(...)
on_event () {
    local date=$1
    local time=$2
    local file=$3
    sleep 5
    echo "$date $time New file created: $file"
    submit "$file"
}
submit () {
    local file=$1
    python avsubmit.py -f "$file" -v
    python dbmgr.py -a "$file"
}
if [ ! -e "$FIFO" ]; then
mkfifo "$FIFO"
fi
inotifywait -m -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" &
INOTIFY_PID=$!
trap "on_exit" 2 3 15
while read -r date time file
do
    on_event "$date" "$time" "$file" &
done < "$FIFO"
on_exit
I'm using inotify to monitor a folder for newly saved files. Each saved (received) file is submitted to the VirusTotal service (avsubmit.py) and to ThreatExpert (dbmgr.py).
Concurrent access would be ideal, to avoid blocking on every new file created in the monitored folder, but locking the submit function should be sufficient.
Thank you guys!
Something like this should work:
lockfile="/tmp/mylock"   # placeholder: pick a suitable lock file path
if (set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    # Your code here
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire $lockfile. Held by $(cat "$lockfile")"
fi
Any code using rm in combination with trap or similar facility is inherently flawed against ungraceful kills, panics, system crashes, newbie sysadmins, etc. The flaw is that the lock needs to be manually cleaned after such catastrophic event for the script to run again. That may or may not be a problem for you. It is a problem for those managing many machines or wishing to have an unplugged vacation once in a while.
A modern solution using a file descriptor lock has been around for a while; I detailed it here, and a working example is on GitHub here. If you do not need to track the process ID for monitoring or other reasons, there is an interesting suggestion for a self-lock (I have not tried it, so I'm not sure of its portability).
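For reference, here is a minimal sketch of the file-descriptor approach using flock(1); the lock path and the guarded commands are placeholders, not part of the original answer:
#!/bin/bash
lockfile="/tmp/submit.lock"   # placeholder path; pick one suitable for your setup
# open the lock file on file descriptor 9 and try to take an exclusive lock;
# the kernel releases the lock automatically when the process exits, even on a crash
exec 9>"$lockfile"
if ! flock -n 9; then
    echo "another instance holds $lockfile, exiting" >&2
    exit 1
fi
# critical section: only one instance gets here at a time
Because nothing has to be deleted, there is no stale lock file to clean up by hand after an ungraceful kill.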
You can use a lock file to determine whether or not the file should be submitted.
Inside your ON_EVENT function, you should check if the appropriate lock file exists before calling the submit function. If it does exist, then return, or sleep and check again later to see if it's gone. If it doesn't exist, then create the lock and call submit. After the submit function completes, then delete the lock file.
See this thread for implementation details.
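As a rough illustration of that check / sleep / retry idea (the lock path and the submit call are placeholders, and the atomic noclobber write is borrowed from the answer above to avoid a race between checking and creating the lock):
lockfile="/tmp/submit.lock"
wait_and_submit() {
    local file=$1
    # keep sleeping until we manage to create the lock file ourselves
    while ! (set -o noclobber; echo "$$" > "$lockfile") 2>/dev/null; do
        sleep 1
    done
    submit "$file"        # the actual work, one file at a time
    rm -f "$lockfile"     # release the lock for the next waiter
}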
But I'd like files that can't get the lock to stay on a waiting list (cache) so they can be submitted later.
I currently have something like this:
lockfile="./lock"

on_event() {
    local date=$1
    local time=$2
    local file=$3
    sleep 5
    echo "$date $time New file created: $file"
    if (set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
        trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
        submit_samples "$file"
        rm -f "$lockfile"
        trap - INT TERM EXIT
    else
        echo "Failed to acquire lockfile: $lockfile."
        echo "Held by $(cat "$lockfile")"
    fi
}

submit_samples() {
    local file=$1
    python avsubmit.py -f "$file" -v
    python dbmgr.py -a "$file"
}
Thank you once again ...
I had problems with this approach and found a better solution:
Procmail comes with a lockfile command which does what I wanted:
lockfile -5 -r10 /tmp/lock.file
do something very important
rm -f /tmp/lock.file
lockfile will try to create the specified lock file. If it already exists, it will retry after 5 seconds, repeating for a maximum of 10 attempts. If it can create the file, it goes on with the script.
Another solution is the lockfile-progs package in Debian; this example is taken directly from the man page:
Locking a file during a lengthy process:
lockfile-create /some/file
lockfile-touch /some/file &
# Save the PID of the lockfile-touch process
BADGER="$!"
do-something-important-with /some/file
kill "${BADGER}"
lockfile-remove /some/file
If you have GNU Parallel (http://www.gnu.org/software/parallel/) installed, you can do this:
inotifywait -q -m -r -e CLOSE_WRITE --format %w%f $DIR |
parallel -u python avsubmit.py -f {}\; python dbmgr.py -a {}
It will run at most one python per CPU when a file is written (and closed). That way you can bypass all the locking, and you get the added benefit of avoiding a potential race condition where a file is immediately overwritten (how do you make sure that both the first and the second version were checked?).
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1