Bash script for inotifywait - How to write deletes to log file, and cp close_writes? - bash

I have this bash script:
#!/bin/bash
inotifywait -m -e close_write --exclude '\.sw??$' . |
#adding --format %f does not work for some reason
while read dir ev file; do
cp ./"$file" zinot/"$file"
done
Now, how would I have it do the same thing but also handle deletes by writing the filenames to a log file?
Something like?
#!/bin/bash
inotifywait -m -e close_write --exclude '\.sw??$' . |
#adding --format %f does not work for some reason
while read dir ev file; do
# if DELETE, append $file to /inotify.log
# else
cp ./"$file" zinot/"$file"
done
EDIT:
By looking at the messages generated, I found that inotifywait generates CLOSE_WRITE,CLOSE whenever a file is closed. So that is what I'm now checking in my code.
I tried also checking for DELETE, but for some reason that section of the code is not working. Check it out:
#!/bin/bash
fromdir=/path/to/directory/
inotifywait -m -e close_write,delete --exclude '\.sw??$' "$fromdir" |
while read dir ev file; do
if [ "$ev" == 'CLOSE_WRITE,CLOSE' ]
then
# copy entire file to /root/zinot/ - WORKS!
cp "$fromdir""$file" /root/zinot/"$file"
elif [ "$ev" == 'DELETE' ]
then
# trying this without echo does not work, but with echo it does!
echo "$file" >> /root/zinot.txt
else
# never saw this error message pop up, which makes sense.
echo Could not perform action on "$ev"
fi
done
In the dir, I do touch zzzhey.txt. File is copied. I do vim zzzhey.txt and file changes are copied. I do rm zzzhey.txt and the filename is added to my log file zinot.txt. Awesome!

You need to add -e delete to your monitor, otherwise DELETE events won't be passed to the loop. Then add a conditional to the loop that handles the events. Something like this should do:
#!/bin/bash
inotifywait -m -e delete -e close_write --exclude '\.sw??$' . |
while read -r dir ev file; do
if [ "$ev" = "DELETE" ]; then
echo "$file" >> /inotify.log
else
cp ./"$file" zinot/"$file"
fi
done
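The comment in the question says --format %f "does not work"; with a reasonably recent inotify-tools, --format '%e %f' does work and makes the loop's parsing trivial. A sketch, with the watcher's output simulated by a here-document so the parsing logic stands on its own:

```shell
#!/usr/bin/env bash
# Parsing sketch for: inotifywait -m -e delete -e close_write --format '%e %f' .
# The events are simulated below with a here-document; in real use, pipe
# inotifywait's output into handle instead.
handle() {
  while read -r ev file; do
    case $ev in
      DELETE) echo "log: $file" ;;        # would append to the log file
      CLOSE_WRITE*) echo "copy: $file" ;; # would cp into zinot/
    esac
  done
}
handle <<'EOF'
CLOSE_WRITE,CLOSE zzzhey.txt
DELETE zzzhey.txt
EOF
```

Because file is the last field read, filenames containing spaces are captured intact.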


Are delete events considered close_write in inotifywait?

I have a simple inotifywait script that watches for FTP file uploads to be closed and then moves them to an AWS S3 bucket. It seems to be working, except that the inotify logs indicate the file was not found (although the file was indeed uploaded to S3).
The s3 move command copies the file to the cloud and then deletes it locally. Could this be because inotifywait detects the deletion of the file as a close_write event?
Why does inotifywait seem to be executing the commands twice?
TARGET=/home/*/ftp/files
inotifywait -m -r -e close_write $TARGET |
while read directory action file
do
if [[ "$file" =~ .*mp4$ ]]
then
echo COPY PATH IS "$directory$file"
aws s3 mv "$directory$file" s3://bucket
fi
done
example logs:
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
COPY PATH IS /home/user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4
COPY PATH IS /home/user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4
COPY PATH IS /home/user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4
move: ../user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4 to s3://bucket/user-cam-1_00_20220516114055.mp4
upload: ../user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4 to s3://bucket/user-cam-1_00_20220516114055.mp4
move failed: ../user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4 to s3://bucket/user-cam-1_00_20220516114055.mp4 [Errno 2] No such file or directory: '/home/user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4'
rm: cannot remove '/home/user/ftp/files/2022/05/16/user-cam-1_00_20220516114055.mp4': No such file or directory
Cleaned up your script and added some safety: quoting, and a check for an already-processed file in case the filesystem triggers duplicate events for the same file.
#!/usr/bin/env bash
# Prevents expanding pattern without matches
shopt -s nullglob
# Expands pattern into an array
target=(/home/*/ftp/files/)
# Creates temporary directory and cleanup trap
declare -- tmpdir=
if tmpdir=$(mktemp -d); then
trap 'rm -fr -- "$tmpdir"' EXIT INT
else
# or exit error if it fails
exit 1
fi
# In case no target matches, exit error
[ "${#target[@]}" -gt 0 ] || exit 1
s3move() {
local -- p=$1
local -- tmp="$tmpdir/$p"
printf 'Copy path is: %s\n' "$p"
# Moves the file to temporary dir
# so it is away from inotify watch dir ASAP
mv -- "$p" "$tmp"
# Then perform the slow remote copy to s3 bucket
# Remove the echo once it is OK
echo aws s3 mv "$p" s3://bucket
# File has been copied to s3, tmp file no longer needed
rm -f -- "$tmp"
}
while read -r -d '' p; do
# Skip if file does not exist, as it has already been moved away
# case of a duplicate event for already processed file
[ -e "$p" ] || continue
s3move "$p"
done < <(
# Good practice to spell long option names in a script
# --format will print null-delimited full file path
inotifywait \
--monitor \
--recursive \
--event close_write \
--includei '.*\.mp4$' \
--format '%w%f%0' \
"${target[@]}" 2>/dev/null
)

Send files to folders using bash script

I want to copy the functionality of a windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script but it throws out errors sometimes while still accomplishing what I want
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
mkdir $i
mv $i.* ./$i
done
rm *.txt
is there a better way of doing this? Thanks
EDIT: My script failed with real world filenames as they contained more than one . so I had to use a different sed command which makes it work. this is an example filename I'm working with
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
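Trimming a fixed number of characters with sed is exactly what breaks on names like this; Bash parameter expansion strips only the last dot-suffix regardless of how many dots the name contains. A quick sketch (the filename here is made up):

```shell
#!/usr/bin/env bash
# ${name%.*} removes the shortest trailing '.*' match, i.e. only the
# final extension; ${name%%.*} would remove the longest match instead.
name='Some.Movie.2011.2160p.X265.mkv'
echo "${name%.*}"    # -> Some.Movie.2011.2160p.X265
echo "${name%%.*}"   # -> Some
```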
I guess you are getting errors on . and .. so change your call to ls to:
ls -A > list.txt
-A List all entries except for '.' and '..'. Always set for the super-user.
You don't have to create a file to achieve the same result, just assign the output of your ls command to a variable. Doing something like this:
files=`ls -A`
for file in $files; do
echo $file
done
You can also check if the resource is a file or directory like this:
files=`ls -A`
for res in $files; do
if [[ -d $res ]];
then
echo "$res is a folder"
fi
done
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
for file; do
dir="${file%.*}"
{ ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
echo mkdir -p -- "$dir"
echo mv -n -- "$file" "$dir/"
done
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove all echo

shell script for checking new files [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you've to use it in the loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
inotifywait --recursive \
--event modify,move,create,delete \
$DIRECTORY_TO_OBSERVE
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
bash $BUILD_SCRIPT
}
build
while block_for_change; do
build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked Github page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
username@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get the status-change time of a file (stat -c %Z) and runs a command whenever that time changes.
#!/bin/bash
while true
do
ATIME=`stat -c %Z /path/to/the/file.txt`
if [[ "$ATIME" != "$LTIME" ]]
then
echo "RUN COMMAND"
LTIME=$ATIME
fi
sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try and earn hacker XP by judicious application of tail -f.
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1" # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time $file | awk '{ print $7 }')" # Get latest save time
while true
do
t2="$(ls --full-time $file | awk '{ print $7 }')" # Compare to new save time
if [ "$t1" != "$t2" ];then t1="$t2"; $command; fi # If different, run command
sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: Above tested on Ubuntu 12.04, for Mac OS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
if [ -z "$1" -o -z "$2" ]; then
echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
elif ! [ -r "$1" ]; then
echo "Can't react to $1, permission denied"
else
TARGET="$1"; shift
ACTION="$@"
while sleep 1; do
ATIME=$(stat -c %Z "$TARGET")
if [[ "$ATIME" != "${LTIME:-}" ]]; then
LTIME=$ATIME
$ACTION
fi
done
fi
}
Quick solution for fish shell users who want to track a single file:
while true
set old_hash $hash
set hash (md5sum file_to_watch)
if [ "$hash" != "$old_hash" ]
command_to_execute
end
sleep 1
end
Replace md5sum with md5 on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read -r path action file; do
if [[ "$file" =~ .*rst$ ]]; then # if suffix is '.rst'
echo "${path}${file}: ${action}" # execute your command
echo 'make html'
make html
fi
done
Suppose you want to run rake test every time you modify any ruby file ("*.rb") in app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ "$t_ref" != "$t_curr" ]; then t_ref=$t_curr; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (e.g. VirtualBox shared folders with Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu in the virtual box, for example.
Warning
The -printf option works well on Ubuntu but does not work on macOS.
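One way to keep the idea portable is to have stat print the epoch mtime instead of relying on find -printf; a sketch (newest_mtime is a made-up helper name), assuming GNU stat on Linux and BSD stat on macOS:

```shell
#!/usr/bin/env bash
# newest_mtime (hypothetical helper): print the most recent modification
# time, in epoch seconds, among *.rb files under the given directories.
newest_mtime() {
  if stat -c %Y . >/dev/null 2>&1; then
    # GNU stat (Linux)
    find "$@" -type f -name '*.rb' -exec stat -c %Y {} + | sort -rn | head -n1
  else
    # BSD stat (macOS)
    find "$@" -type f -name '*.rb' -exec stat -f %m {} + | sort -rn | head -n1
  fi
}
```

The polling loop stays the same: store the previous value and re-run the tests whenever newest_mtime app/ test/ changes.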

Need a script that copies a single file to multiple directories

I'm stuck with this script and would like some help on the same!
I want to make a folder called "upload" which will contain a script that copies a .jar file from there to multiple directories (See below)
/home/minecraft/multicraft/servers/EUSim1
/home/minecraft/multicraft/servers/EUSim2
/home/minecraft/multicraft/servers/EUSim3
and so on.
A Quick Script, for reference:
script
#!/bin/bash
inputfile=$1
for var in "$@"
do
if [[ $2 == $3 ]]; then
exit 1
fi
cp -v "$inputfile" "$2"
shift
done
Command
./script simple.jar \
/home/minecraft/multicraft/servers/EUSim1/simple.jar \
/home/minecraft/multicraft/servers/EUSim2/simple.jar \
/home/minecraft/multicraft/servers/EUSim3/simple.jar \
Output
'simple.jar' -> '/home/minecraft/multicraft/servers/EUSim1/simple.jar'
'simple.jar' -> '/home/minecraft/multicraft/servers/EUSim2/simple.jar'
'simple.jar' -> '/home/minecraft/multicraft/servers/EUSim3/simple.jar'
This is a simple script. You can make small tweaks, add a --prefix option, or make the script read the input from a file.
(or)
use cp with xargs:
echo dir1 dir2 dir3 | xargs -n 1 cp file
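The earlier suggestion to make the script read its input from a file could be sketched like this; copy_to_listed_dirs and dirs.txt are made-up names:

```shell
#!/usr/bin/env bash
# copy_to_listed_dirs FILE LIST: copy FILE into every directory named
# in LIST, one destination directory per line.
copy_to_listed_dirs() {
  local inputfile=$1 list=$2 dest
  while IFS= read -r dest; do
    [ -d "$dest" ] || continue   # skip blank lines and missing directories
    cp -v -- "$inputfile" "$dest/"
  done < "$list"
}
```

Invoked as copy_to_listed_dirs simple.jar dirs.txt, it copies simple.jar into each directory listed in dirs.txt.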
Very simple script:
How to use it
touch simpleScript.sh
vim simpleScript.sh
Copy/Paste the line below
Update TRX_SOURCE_PATH, DEST_PATH, DEST_PATH1, DEST_PATH2
Save
chmod +x ./simpleScript.sh
#!/bin/bash
TRX_SOURCE_PATH='/Path/Test.pdf'
DEST_PATH='/Path/Test'
DEST_PATH1='/Path/Test1'
DEST_PATH2='/Path/Test2'
echo "Starting copy"
echo "Destination:" $DEST_PATH
cp $TRX_SOURCE_PATH $DEST_PATH
echo "copy done for folder:" $DEST_PATH
echo "Destination:" $DEST_PATH1
cp $TRX_SOURCE_PATH $DEST_PATH1
echo "copy done for folder:" $DEST_PATH1
echo "Destination:" $DEST_PATH2
cp $TRX_SOURCE_PATH $DEST_PATH2
echo "copy done for folder:" $DEST_PATH2
echo "All Copy done"
Hope this script helps you.

Shell script that watches HTML templates and compiles with Handlebars

I'm trying to create a script that watches my HTML template files in a directory and, when it notices changes, compiles the template. I can't get it working; this is what I got:
#!/bin/sh
while FILENAME=$(inotifywait --format %w -r templates/*.html)
do
COMPILED_FILE=$(echo "$FILENAME" | sed s/templates/templates\\/compiled/g | sed s/.html/.js/g)
handlebars $FILENAME -f $COMPILED_FILE -a
done
I use inotifywait to watch the current dir, although I want it also to check for sub directories. The compiled files then need to be saved in a sub directory called templates/compiled with optionally the sub directory.
So templates/foo.html needs to be compiled and stored as templates/compiled/foo.js
So templates/other/foo.html needs to be compiled and stored as templates/compiled/other/foo.js
As you can see, I tried to watch the directory and replace the templates/ part of the name with templates/compiled.
Any help is welcome!
A few observations, then a solution:
Passing the argument -r templates/*.html only matches .html files in templates/ — not in templates/other/. Instead we're going to do -r templates which notifies us of changes to any file anywhere under templates.
If you don't use inotifywait in --monitor mode, you will miss any files that are changed in the brief period that handlebars is running (which could happen if you save all your open files at once). Better to do something like this:
#!/bin/bash
watched_dir="templates"
while read -r dirname events filename; do
printf 'File modified: %s\n' "$dirname$filename"
done < <(inotifywait --monitor --event CLOSE_WRITE --recursive "$watched_dir")
Then, as for transforming the paths, you could do something like:
$ dirname=templates/other/
$ echo "${dirname#*/}"
other/
$ echo "$watched_dir/compiled/${dirname#*/}"
templates/compiled/other/
$ filename=foo.html
$ echo "${filename%.html}"
foo
$ echo "${filename%.html}.js"
foo.js
$ echo "$watched_dir/compiled/${dirname#*/}${filename%.html}.js"
templates/compiled/other/foo.js
Notice that we can leverage Bash's builtin parameter expansion — no need for sed.
Putting it all together, we get:
#!/bin/bash
watched_dir="templates"
while read -r dirname events filename; do
[[ "${filename##*.}" != 'html' ]] && continue
output_path="$watched_dir/compiled/${dirname#*/}${filename%.html}.js"
handlebars "$dirname$filename" -f "$output_path" -a
done < <(inotifywait --monitor --event CLOSE_WRITE --recursive "$watched_dir")
