I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You can try the entr tool, which runs arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or to print Changed! when the file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories, use -d, but you have to run it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
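entr can also supervise a long-running process: with -r it restarts the command on each change instead of waiting for it to exit. A minimal sketch, assuming a hypothetical server.py:
$ ls *.py | entr -r python server.py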
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js"  # might want to change this
function block_for_change {
  inotifywait --recursive \
    --event modify,move,create,delete \
    "$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh  # might want to change this too
function build {
  bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
  build
done
This uses inotify-tools. Check the inotifywait man page for how to customize what triggers the build.
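For example, to trigger only when files have been completely written (avoiding repeated builds while a file is still being saved), you could watch just close_write; a minimal variation of block_for_change above:
function block_for_change {
  inotifywait --recursive --event close_write "$DIRECTORY_TO_OBSERVE"
}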
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
  --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
  -e close_write /tmp/test |
while read -r date time dir file; do
  changed_abs=${dir}${file}
  changed_rel=${changed_abs#"$cwd"/}
  rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
    username@example.com:/backup/root/dir && \
  echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to poll a file's timestamp and runs a command whenever that timestamp changes. Note that %Z is actually the status-change time (ctime), not the access time; use %X for the access time or %Y for the modification time.
#!/bin/bash
while true
do
  ATIME=$(stat -c %Z /path/to/the/file.txt)
  if [[ "$ATIME" != "$LTIME" ]]
  then
    echo "RUN COMMAND"
    LTIME=$ATIME
  fi
  sleep 5
done
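If you want to react to content modifications rather than metadata changes, the same loop works with the modification time; only the stat line changes (a sketch, using GNU stat's %Y):
ATIME=$(stat -c %Y /path/to/the/file.txt)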
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XPs by judicious application of tail -f.
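A minimal sketch of that idea, assuming the watched file only ever grows by appends (the echo is a placeholder for your command):
tail -F -n0 /path/to/watched/file | while read -r line; do
  echo "file grew: $line"  # run your command here
done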
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"         # Name of file
command="${*:2}"  # Command to run on change (takes rest of line)
t1="$(ls --full-time "$file" | awk '{ print $7 }')"    # Get latest save time
while true
do
  t2="$(ls --full-time "$file" | awk '{ print $7 }')"  # Compare to new save time
  if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi   # If different, run command
  sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: The above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT "$file" | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
  if [ -z "$1" -o -z "$2" ]; then
    echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
  elif ! [ -r "$1" ]; then
    echo "Can't react to $1, permission denied"
  else
    TARGET="$1"; shift
    ACTION="$@"
    while sleep 1; do
      ATIME=$(stat -c %Z "$TARGET")
      if [[ "$ATIME" != "${LTIME:-}" ]]; then
        LTIME=$ATIME
        $ACTION
      fi
    done
  fi
}
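Usage then looks like this (file name and command are hypothetical, just to show the argument order):
react myfile.txt echo myfile.txt changed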
Quick solution for fish shell users who want to track a single file:
while true
  set old_hash $hash
  set hash (md5sum file_to_watch)
  if [ "$hash" != "$old_hash" ]
    command_to_execute
  end
  sleep 1
end
Replace md5sum with md5 if on macOS.
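For bash users, a rough equivalent of the same hash-polling idea (file_to_watch and command_to_execute are placeholders, as above):
old_hash=""
while true; do
  new_hash=$(md5sum file_to_watch)
  if [ "$new_hash" != "$old_hash" ]; then
    old_hash=$new_hash
    command_to_execute  # like the fish version, this also fires once at startup
  fi
  sleep 1
done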
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a common example:
inotifywait -m /path -e create -e moved_to -e close_write |  # -m is --monitor, -e is --event
while read -r path action file; do
  if [[ "$file" =~ \.rst$ ]]; then  # if the suffix is '.rst'
    echo "${path}${file}: ${action}"  # execute your command
    echo 'make html'
    make html
  fi
done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ "$t_ref" != "$t_curr" ]; then t_ref="$t_curr"; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (e.g. shared folders on VirtualBox using Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu in the VM, for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
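On macOS, you can get the same most-recent-modification value with BSD stat instead of -printf; a sketch under that assumption:
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m" {} + | sort -rn | head -n1)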
Related
I have 50 files with the .ar extension in a directory.
My idea is to make a list of these file names, read each one, go back to the directory, and run the following two commands on each file,
where $i is the .ar filename:
paz -r -L -e clean $i
psrplot -pF -j CDTp -j 'C max' -N2,1 -D $i.ps/cps -c set=pub -c psd=0 $i $i.clean
Using *.ar does not work, as it just keeps overwriting the first file and gives no proper output. Can someone please help with a bash script?
The bash script I used, without making a list and running directly in the directory, is:
#!env bash
for i in $#
do
  outfile=$(basename $i).txt
  echo $i
  paz -r -L -e clean $i
  psrplot -pF -j CDTp -j 'C max' -N2,1 -D $i.ps/cps -c set=pub -c psd=0 $i $i.clean
done
Please help, I have been trying for a while.
You want to process each file, one at a time. The safest way to do this is to use find ... -print0 with a while read .... Like this:
#!/bin/bash
#
ardir="/data"

# Basic validation
if [[ ! -d "$ardir" ]]
then
  echo "ERROR: the directory ($ardir) does not exist."
  exit 1
fi

# Process each file
find "$ardir" -type f -name "*.ar" -print0 | while IFS= read -r -d '' arfile
do
  echo "DEBUG file=$arfile"
  paz -r -L -e clean "$arfile"
  psrplot -pF -j CDTp -j 'C max' -N2,1 -D "$arfile.ps/cps" -c set=pub -c psd=0 "$arfile" "$arfile.clean"
done
This method (and so many more!) is documented here: http://mywiki.wooledge.org/BashFAQ/001
I want to copy the functionality of a windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script, but it sometimes throws errors while still accomplishing what I want:
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
  mkdir $i
  mv $i.* ./$i
done
rm *.txt
Is there a better way of doing this? Thanks.
EDIT: My script failed with real-world filenames, as they contained more than one ., so I had to use a different sed command to make it work. This is an example filename I'm working with:
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
I guess you are getting errors on . and .., so change your ls call to:
ls -A > list.txt
-A List all entries except for . and ... Always set for the super-user.
You don't have to create a file to achieve the same result; just assign the output of your ls command to a variable, like this:
files=$(ls -A)
for file in $files; do
  echo $file
done
You can also check whether each entry is a file or a directory, like this:
files=$(ls -A)
for res in $files; do
  if [[ -d $res ]]; then
    echo "$res is a folder"
  fi
done
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
for file; do
  dir="${file%.*}"
  { ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
  echo mkdir -p -- "$dir"
  echo mv -n -- "$file" "$dir/"
done
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove both echo commands.
I am running the commands below in a script:
move_jobs() {
  cd $JOB_DIR
  for i in `cat $JOBS_FILE`
  do
    if [ `ls | grep -i ^${i}- | wc -l` -gt 0 ]; then
      cd $i
      if [ ! -d jobs ]; then
        mkdir jobs && cd .. && mv "${i}"-* "${i}"/jobs/
      else
        cd .. && mv "${i}"-* "${i}"/jobs/
      fi
      error_handler $?
    fi
  done
}
but it is failing with:
mv: cannot stat `folder-*': No such file or directory
I am not sure why the mv command is failing on the wildcard.
Your script is overly complicated and has several issues, one of which will be the actual problem; I'd guess it's the ls | grep ... part, but to find out you should add some debug logging.
for i in $(cat ...) loops through words, not lines.
Do not parse ls.
And if you do anyway, never grep for filenames; include the pattern in the ls call itself: ls "${i}"-* | wc -l.
You do not need to check whether a folder exists when the only difference is that you create it; use mkdir -p instead.
Jumping around directories makes your script almost unreadable, because the reader has to keep track of every cd command.
You could simply write the following, which I think will do what you want:
xargs -a "$JOBS_FILE" -I{} \
sh -c "
mkdir -p '$JOB_DIR/{}/jobs';
mv '$JOB_DIR/{}-'* '$JOB_DIR/{}/jobs';
"
or if you need more control:
while IFS= read -r jid; do
  if ls "$JOB_DIR/$jid-"* &>/dev/null; then
    TARGET_DIR="$JOB_DIR/$jid/jobs"
    mkdir -p "$TARGET_DIR"
    mv "$JOB_DIR/$jid-"* "$TARGET_DIR"
    echo "OK"
  else
    echo "No files to move."
  fi
done < "$JOBS_FILE"
I am using the ampersand (&) to place a command in the background, but in this script, for some reason, it doesn't work. My programming skills are not great, so please remember I'm a noob trying to get stuff working.
#!/bin/bash
# Date in format used by filenaming
date=$(date '+%Y%m%d')
# Location where the patch files should be downloaded
patches=~/lists/patches
# Location of the full list
blacklist=~/lists/list
while :
do
  # Fetching last download date from downloaded patches
  ldd=$(cd $patches && printf '%s\n' * | sed "s/[^0-9]*//g"); echo $ldd
  if [ "$ldd" = "" ]
  then
    break
  else
    if [ "$ldd" = "$date" ]
    then
      break
    else
      ndd=$(date +%Y%m%d -d "${ldd}+1 days")
      # Can't have multiple patches in $patches directory, otherwise the script won't work
      rm -rf $patches/*
      sleep 1
      file=$patches/changes-$ndd.diff.gz
      curl -s -o "$file" "http://url.com/directory/name-$ndd.diff.gz" &
      sleep 1
      done=$(jobs -l | grep curl | wc -l)
      until [ "$done" == 1 ]
      do
        echo "still here"
      done
      gunzip "$file"
      # Apply patch directory to list's file directories
      cat $(echo "$file" | sed "s/.gz//g") | sed 's/.\/yesterday//' | sed 's/.\/today//' > $patches/$ndd.diff
      rm $(echo $file | sed "s/.gz//g")
      cd $blacklist
      patch -p1 --batch -r /root/fail.patch < $patches/$ndd.diff
      rm /root/fail.patch
    fi
  fi
done
What I want to do is make the script wait for each command until the previous one is finished. As you can see, I used sleep in places, but I know that isn't a solution. I also read about the wait command, but then you have to place a command in the background using the ampersand, and that's the problem: for some reason this script doesn't recognize the ampersand at the end of my curl command. I also tried wget, with the same results. Who can point me in the right direction?
The done variable would never change after the first check. You need to re-check on every iteration, which is why you should test the command itself, not a variable. And while is better than until here, because you need the check before entering the loop:
while [ "$(jobs -l | grep curl | wc -l)" -ne 0 ]; do
echo "Still there"
sleep 1
done
I've added sleep because otherwise it would just flood your console.
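That said, since the question already mentions it, wait is the most direct way to block on a background job; a minimal sketch of the same download step:
curl -s -o "$file" "http://url.com/directory/name-$ndd.diff.gz" &
wait $!  # block until the backgrounded curl exits
gunzip "$file"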
I have this bash script:
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
# adding --format %f does not work for some reason
while read dir ev file; do
  cp ./"$file" zinot/"$file"
done
Now, how would I have it do the same thing but also handle deletes by writing the filenames to a log file?
Something like this?
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
# adding --format %f does not work for some reason
while read dir ev file; do
  # if DELETE, append $file to /inotify.log
  # else
  cp ./"$file" zinot/"$file"
done
EDIT:
By looking at the messages generated, I found that inotifywait generates CLOSE_WRITE,CLOSE whenever a file is closed. So that is what I'm now checking in my code.
I also tried checking for DELETE, but for some reason that section of the code was not working. Check it out:
#!/bin/bash
fromdir=/path/to/directory/
inotifywait -m -e close_write,delete --exclude '\*.sw??$' "$fromdir" |
while read dir ev file; do
  if [ "$ev" == 'CLOSE_WRITE,CLOSE' ]
  then
    # copy entire file to /root/zinot/ - WORKS!
    cp "$fromdir""$file" /root/zinot/"$file"
  elif [ "$ev" == 'DELETE' ]
  then
    # trying this without echo does not work, but with echo it does!
    echo "$file" >> /root/zinot.txt
  else
    # never saw this error message pop up, which makes sense.
    echo Could not perform action on "$ev"
  fi
done
In the dir, I do touch zzzhey.txt. File is copied. I do vim zzzhey.txt and file changes are copied. I do rm zzzhey.txt and the filename is added to my log file zinot.txt. Awesome!
You need to add -e delete to your monitor, otherwise DELETE events won't be passed to the loop. Then add a conditional to the loop that handles the events. Something like this should do:
#!/bin/bash
inotifywait -m -e delete -e close_write --exclude '\*.sw??$' . |
while read dir ev file; do
  if [ "$ev" = "DELETE" ]; then
    echo "$file" >> /inotify.log
  else
    cp ./"$file" zinot/"$file"
  fi
done