Bash: Concatenate a wildcard to a variable in a for loop

Don't be too hard on me; this is my first bash attempt since school (about 9 years ago).
I'm trying to create a script that will rsync remote git repos locally.
I want to store my local directory in a variable because I use it several times, but I can't figure out how to combine it with the wildcard I use in the for loop.
I tried concatenating the "/*/" and some other things that I didn't really understand, but nothing worked.
Here is my script:
#!/bin/bash
if [ "$#" -lt 1 ]; then
echo "Usage :"
echo "syncDown <repo1> <repo2> ..."
echo "syncDown --all"
exit 1;
fi
LOCAL_PATH="/Volumes/Case Sensitive/repos"
# Sync all local repos
if [ $1 = "--all" ]; then
# for dir in "$LOCAL_PATH/*/";do <-- What I tried
for dir in /Volumes/Case\ Sensitive/repos/*/;do # <-- What works
folder=$(echo $dir | rev | cut -d'/' -f2 | rev);
rsync -avzh --delete-after devweb:$folder --exclude ".git" --exclude ".idea" "$LOCAL_PATH"
done
# Sync specific repos
else
for repo in "$@"; do
rsync -avzh --delete-after devweb:$repo --exclude ".git" --exclude ".idea" "$LOCAL_PATH"
done
fi
Thanks for your help.

You should be using:
for dir in "$LOCAL_PATH"/*/; do
Make sure the glob character stays outside the quotes: quote only the variable, so the shell can still expand the wildcard.
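Put together, the --all branch could look like this (a sketch based on your script; basename replaces the rev | cut | rev pipeline and does the same job of extracting the last path component):
for dir in "$LOCAL_PATH"/*/; do
    folder=$(basename "$dir")   # the trailing slash is stripped, leaving the repo name
    rsync -avzh --delete-after "devweb:$folder" --exclude ".git" --exclude ".idea" "$LOCAL_PATH"
done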

Related

Bash: splitting filename by space? A backup rollback script

I need some help with Bash. I am a Python/Rust guy and don't know bash very well. I have a "backup" script which copies a selected file to a "$filename $datetime.backup" file. Now I need to write a rollback script which copies the latest backup file over the original (the name without the space, datetime, and .backup suffix). Any guides will be appreciated.
Backup script, for your convenience:
set -e
DT=$(date --iso=seconds)
for f in "$@"
do
OLD="${f%/}"
NEW="${f%/} $DT.backup"
cp --no-clobber --recursive "$OLD" "$NEW"
done
Use parameter expansion to get the original name back.
for b in *.backup ; do
original=${b% *}
cp "$b" "$original"
done
${b% *} removes the last space and everything after it from $b (the shortest suffix matching ' *').
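For example (a hypothetical name, just to show the expansion):
b='notes.txt 2024-01-01T12:00:00+00:00.backup'
echo "${b% *}"   # prints: notes.txt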
Solved! Yay!
set -e
LB=$(ls "$1 "*.backup | sort --reverse | head -n 1)  # latest backup of $1; ISO dates sort lexicographically
echo "Moving $1 to trash for safe keeping"
trash "$1"
echo "Copying from $LB"
cp --no-clobber --recursive "$LB" "$1"
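Note that trash is a third-party utility (e.g. from trash-cli), not a shell built-in. Usage, assuming the script above is saved as rollback.sh and run from the directory containing the backups:
./rollback.sh notes.txt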

bash check for subdirectories under directory

This is my first day scripting. I use Linux and have been racking my brain over this script until finally asking for help. I need to check a directory that already contains known subdirectories and see whether any new, unexpected directories have been added.
OK, I think I have got this as simple as possible. The script below works, but it displays all files in the directory as well. I will keep working at it unless someone can tell me how not to list the files too. I tried ls -d, but then it just echoes "nothing new". I feel like an idiot and should have got this sooner.
#!/bin/bash
workingdirs=`ls ~/ | grep -viE "temp1|temp2|temp3"`
if [ -d "$workingdirs" ]
then
echo "nothing new"
else
echo "The following Direcetories are now present"
echo ""
echo "$workingdirs"
fi
If you want to take some action when a new directory is created, use inotifywait. If you just want to check that the directories that exist are the ones you expect, you could do something like:
Workingdir=${Workingdir:-"$HOME"}   # directory to watch; placeholder, adjust as needed
TMPDIR=${TMPDIR:-/tmp}              # where the manifest lives
trap 'rm -f "$TMPDIR/manifest"' 0
# Create the expected values. Really, you should hand edit
# the manifest, but this is just for demonstration.
find "$Workingdir" -maxdepth 1 -type d > "$TMPDIR"/manifest
while true; do
sleep 60 # Check every 60 seconds. Modify period as needed, or
# (recommended) use inotifywait
if ! find "$Workingdir" -maxdepth 1 -type d | cmp - $TMPDIR/manifest; then
: Unexpected directories exist or have been removed
fi
done
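For completeness, a minimal sketch of the inotifywait route mentioned above (assumes inotify-tools is installed; the echo is a placeholder action):
inotifywait -m -e create --format '%w%f' "$Workingdir" |
while read -r newpath; do
    [ -d "$newpath" ] && echo "New directory: $newpath"
done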
The shell script below will show whether each directory is present or not.
#!/bin/bash
Workingdir=/root/working/
knowndir1=/root/working/temp1
knowndir2=/root/working/temp2
knowndir3=/root/working/temp3
my=/home/learning/perl
arr=($Workingdir $knowndir1 $knowndir2 $knowndir3 $my) #creating an array
for i in "${arr[@]}" # check each element of the array
do
if [ -d $i ]
then
echo "directory $i present"
else
echo "directory $i not present"
fi
done
output:
directory /root/working/ not present
directory /root/working/temp1 not present
directory /root/working/temp2 not present
directory /root/working/temp3 not present
directory /home/learning/perl present
This saves the list of existing directories to a file. When you run the script a second time, it reports directories that have been deleted or added.
#!/bin/sh
dirlist="$HOME/dirlist" # dir list file for saving state between runs
topdir='/some/path' # the directory you want to keep track of
tmpfile=$(mktemp)
find "$topdir" -type d -print | sort -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
echo 'Directories added:'
comm -1 -3 "$dirlist" "$tmpfile"
echo 'Directories removed:'
comm -2 -3 "$dirlist" "$tmpfile"
else
echo 'No changes'
fi
mv "$tmpfile" "$dirlist"
The script will have problems with directories that have very exotic names (containing newlines).
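If newline-proofing matters, a possible variant using the GNU --zero-terminated options (an untested sketch; assumes a reasonably recent GNU coreutils):
find "$topdir" -type d -print0 | sort -z -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
    echo 'Directories added:'
    comm -z -1 -3 "$dirlist" "$tmpfile" | tr '\0' '\n'
    echo 'Directories removed:'
    comm -z -2 -3 "$dirlist" "$tmpfile" | tr '\0' '\n'
fi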

shell backup script renaming

I was able to script the backup process, but now I want to make another script for my storage server to do a basic file rotation.
What I want to make:
I want to store my files in my /home/user/backup folder and only keep the 10 most recent backup files, named like this:
site_foo_date_1.tar site_foo_date_2.tar ... site_foo_date_10.tar
with site_foo_date_1.tar being the most recent backup file.
Anything past number 10 gets deleted.
My incoming files from the other server are simply named like this: site_foo_date.tar
How can I do this?
I tried:
DATE=`date "+%Y%m%d"`
cd /home/user/backup/com
if [ -f site_com_*_10.tar ]
then
rm site_com_*_10.tar
fi
FILES=$(ls)
for file in $FILES
do
echo "$file"
if [ "$file" != "site_com_${DATE}.tar" ]
then
str_new=${file:18:1}
new_str=$((str_new + 1))
to_rename=${file::18}
mv "${file}" "$to_rename$new_str.tar"
fi
done
file=$(ls | grep site_com_${DATE}.tar)
filename=`echo "$file" | cut -d'.' -f1`
mv "${file}" "${filename}_1.tar"
The main issue with your code is that looping through all files in the directory with ls * without some sort of filter is a dangerous thing to do.
Instead, I've used for i in $(seq 9 -1 1) to loop through files from *_9 to *_1 to move them. This ensures we only move backup files, and nothing else that may have accidentally got into the backup directory.
Additionally, relying on the sequence number to sit at a fixed offset in the filename (character index 18) is also destined to break. What happens if you want more than 10 backups in the future? With this design, you can change the 9 to any number you like, even one with more than 2 digits.
Finally, I added a check before moving site_com_${DATE}.tar in case it doesn't exist.
#!/bin/bash
DATE=`date "+%Y%m%d"`
cd "/home/user/backup/com"
if [ -f "site_com_*_10.tar" ]
then
rm "site_com_*_10.tar"
fi
# Instead of wildcarding all files in the directory
# this method picks out only the expected files so non-backup
# files are not changed. The renumbering is also made easier
# this way.
# Loop through from 9 to 1 in descending order otherwise
# the same file will be moved on each iteration
for i in $(seq 9 -1 1)
do
# Find and expand the requested file
file=$(find . -maxdepth 1 -name "site_com_*_${i}.tar")
if [ -f "$file" ]
then
echo "$file"
# Create new file name
new_str=$((i + 1))
to_rename=${file%_${i}.tar}
mv "${file}" "${to_rename}_${new_str}.tar"
fi
done
# Check for latest backup file
# and only move it if it exists.
file=site_com_${DATE}.tar
if [ -f "$file" ]
then
filename=${file%.tar}
mv "${file}" "${filename}_1.tar"
fi

shell script for checking new files [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You can try the entr tool to run arbitrary commands when files change. Examples for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories, use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
inotifywait --recursive \
--event modify,move,create,delete \
$DIRECTORY_TO_OBSERVE
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
bash $BUILD_SCRIPT
}
build
while block_for_change; do
build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked Github page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
username@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the 'stat' command to get the file's status-change time (%Z is the ctime rather than the access time) and runs a command whenever that time changes, i.e. whenever the file's contents or metadata change.
#!/bin/bash
while true
do
ATIME=`stat -c %Z /path/to/the/file.txt`
if [[ "$ATIME" != "$LTIME" ]]
then
echo "RUN COMMNAD"
LTIME=$ATIME
fi
sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try and earn hacker XPs by judicious application of tail -f.
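For instance, a playful sketch of the tail -f idea (assumes the watched file only ever grows, like a log; each appended line triggers the command):
tail -f /path/to/file.log | while read -r line; do
    echo "changed: $line"   # replace with your command
done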
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1" # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time $file | awk '{ print $7 }')" # Get latest save time
while true
do
t2="$(ls --full-time $file | awk '{ print $7 }')" # Compare to new save time
if [ "$t1" != "$t2" ];then t1="$t2"; $command; fi # If different, run command
sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
if [ -z "$1" -o -z "$2" ]; then
echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
elif ! [ -r "$1" ]; then
echo "Can't react to $1, permission denied"
else
TARGET="$1"; shift
ACTION="$#"
while sleep 1; do
ATIME=$(stat -c %Z "$TARGET")
if [[ "$ATIME" != "${LTIME:-}" ]]; then
LTIME=$ATIME
$ACTION
fi
done
fi
}
Quick solution for fish shell users who want to track a single file:
while true
set old_hash $hash
set hash (md5sum file_to_watch)
if [ "$hash" != "$old_hash" ]
command_to_execute
end
sleep 1
end
Replace md5sum with md5 on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
if [[ "$file" =~ .*rst$ ]]; then # if suffix is '.rst'
echo ${path}${file} ': '${action} # execute your command
echo 'make html'
make html
fi
done
Suppose you want to run rake test every time you modify any ruby file ("*.rb") in app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n 1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done
Benefits
You can run any command or script when the file changes.
It works between any filesystem and virtual machines (shared folders on VirtualBox using Vagrant); so you can use a text editor on your Macbook and run the tests on Ubuntu (virtual box), for example.
Warning
The -printf option works well on Ubuntu but does not work on macOS.
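A possible macOS-compatible replacement for the find pipeline (an untested sketch; assumes BSD stat, where %m is the modification epoch and %N the file name):
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m %N" {} + | sort -rn | head -n 1)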

Bash script for inotifywait - How to write deletes to log file, and cp close_writes?

I have this bash script:
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
#adding --format %f does not work for some reason
while read dir ev file; do
cp ./"$file" zinot/"$file"
done
Now, how would I have it do the same thing but also handle deletes by writing the filenames to a log file?
Something like?
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
#adding --format %f does not work for some reason
while read dir ev file; do
# if DELETE, append $file to /inotify.log
# else
cp ./"$file" zinot/"$file"
done
EDIT:
By looking at the messages generated, I found that inotifywait generates CLOSE_WRITE,CLOSE whenever a file is closed. So that is what I'm now checking in my code.
I also tried checking for DELETE, but at first that section of the code was not working. Check it out:
#!/bin/bash
fromdir=/path/to/directory/
inotifywait -m -e close_write,delete --exclude '\*.sw??$' "$fromdir" |
while read dir ev file; do
if [ "$ev" == 'CLOSE_WRITE,CLOSE' ]
then
# copy entire file to /root/zinot/ - WORKS!
cp "$fromdir""$file" /root/zinot/"$file"
elif [ "$ev" == 'DELETE' ]
then
# trying this without echo does not work, but with echo it does!
echo "$file" >> /root/zinot.txt
else
# never saw this error message pop up, which makes sense.
echo Could not perform action on "$ev"
fi
done
In the dir, I do touch zzzhey.txt. File is copied. I do vim zzzhey.txt and file changes are copied. I do rm zzzhey.txt and the filename is added to my log file zinot.txt. Awesome!
You need to add -e delete to your monitor, otherwise DELETE events won't be passed to the loop. Then add a conditional to the loop that handles the events. Something like this should do:
#!/bin/bash
inotifywait -m -e delete -e close_write --exclude '\*.sw??$' . |
while read dir ev file; do
if [ "$ev" = "DELETE" ]; then
echo "$file" >> /inotify.log
else
cp ./"$file" zinot/"$file"
fi
done
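If you prefer not to parse the leading directory column, a variant using --format is possible (a sketch; %e and %w%f are documented inotifywait format specifiers, though filenames containing spaces will still trip up read):
inotifywait -m -e delete -e close_write --format '%e %w%f' --exclude '\*.sw??$' . |
while read -r ev path; do
    if [ "$ev" = "DELETE" ]; then
        echo "$path" >> /inotify.log
    else
        cp "$path" zinot/"${path##*/}"
    fi
done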
