BASH user drive selection - macos

I am creating a simple script for Mac OS X that presents the user with a list of available drives to back up from, based on the contents of /Volumes, but I am running into an issue with handling the output of the 'find' command when a drive name contains a space. The find command outputs each drive on a separate line, but the for loop breaks each name into parts. Example:
Script:
#!/bin/bash
find /Volumes -maxdepth 1 -type d
echo ""
i=1
for Output in $(find /Volumes -maxdepth 1 -type d)
do
DriveChoice[$i]=$Output
echo $i"="${DriveChoice[$i]}
i=$(( i+1 ))
done
Output:
/Volumes
/Volumes/backup
/Volumes/EZBACKUP DRIVE
/Volumes/Tech
1=/Volumes
2=/Volumes/backup
3=/Volumes/EZBACKUP
4=DRIVE
5=/Volumes/Tech
logout
[Process completed]
This seems like it should be fairly straight-forward. Is there a better way for me to accomplish this?
Update: Thank you chepner, that works perfectly. It is a simple script to generate a ditto command, but I will post it here anyway in case someone finds any part of it useful:
#!/bin/bash
#Get admin rights
sudo -l -U administrator bash
#Set the path to the backup drive
BackupPath="/Volumes/backup/"
#Generate a list of source drives, limiting out invalid options
i=1
while read -r Output; do
if [ "$Output" != "/Volumes" ] && [ "$Output" != "/Volumes/backup" ] && [ "$Output" != "/Volumes/Tech" ] ; then
DriveChoice[$i]=$Output
echo "$i=${DriveChoice[$i]}"
i=$(( i+1 ))
fi
done < <( find /Volumes -maxdepth 1 -type d)
#Have the user select from valid drives
echo "Source Drive Number?"
read -r DriveNumber
#Ensure the user input is in range
if [ "$DriveNumber" -lt "$i" ] && [ "$DriveNumber" -gt 0 ]; then
Source="${DriveChoice[$DriveNumber]}/"
#Get the user's NetID for generating the folder structure
echo "User's NetID?"
read -r NetID
#Grab today's date for generating the folder structure
Today=$(date +"%m_%d_%Y")
#Destination for the logfile
Destination="$BackupPath${NetID}_$Today/"
#Full path for the logfile
LogFile="$Destination${NetID}_log.txt"
mkdir -p "$Destination"
touch "$LogFile"
#Destination for the backup
Destination="${Destination}ditto/"
#Execute the command
echo "Processing..."
sudo ditto "$Source" "$Destination" 2>&1 | tee "$LogFile"
else
#Fail if the drive selection was out of range
echo "Drive selection error!"
fi

You cannot safely iterate over the output of find using a for loop, because of the space issue you are seeing. Use a while loop with the read built-in instead:
#!/bin/bash
find /Volumes -maxdepth 1 -type d
echo ""
i=1
while read -r output; do
DriveChoice[$i]=$output
echo "$i=${DriveChoice[$i]}"
i=$(( i+1 ))
done < <( find /Volumes -maxdepth 1 -type d)
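If a volume name could also contain a newline (which HFS+ permits), the same loop can be made null-delimited with find's -print0 and read -d ''. This is a sketch of that variant; the `dir=${1:-/Volumes}` parameter is my own addition so the script can be pointed at any directory:

```shell
#!/bin/bash
# Null-delimited variant: -print0 terminates each path with a NUL byte,
# and read -d '' consumes up to that NUL, so not even an embedded
# newline can split an entry.
dir=${1:-/Volumes}   # directory to scan; defaults to /Volumes
i=1
while IFS= read -r -d '' output; do
DriveChoice[$i]=$output
echo "$i=${DriveChoice[$i]}"
i=$(( i + 1 ))
done < <(find "$dir" -maxdepth 1 -type d -print0)
```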

Related

Move directory from queue directory Bash script

I'm trying to implement directory Queue.
I have the following directories:
Q_Dir
folder1
subfolder1
...
subfolderN
files1....filesN+X
....
Target_Dir
folder1
subfolder1
....
subfolderN
files1...filesN
....
I want to move maximum X files from Q_Dir to Target_Dir.
Pseudo code:
While True:
totalFiles = Count of total files in Target_Dir
If totalFiles < X then:
Move X-totalFiles files From Q_Dir to Target_Dir
Else
Sleep 5 seconds
I'm looking for the best way to do this in a Linux bash script.
Any suggestions?
Consider the following implementation of the pseudo-code. It is a one-to-one translation; it could be improved if more details were available.
S=Q_Dir
T=Target_Dir
X=6
mkdir -p "$T"
while true ; do
t_count=$(find "$T" -type f | wc -l)
if [[ "$t_count" -lt "$X" ]] ; then
readarray -t -n "$((X-t_count))" files <<< "$(cd "$S" && find . -type f)"
echo "F=${#files[@]}"
for f in "${files[@]}" ; do
d=${f%/*}
mkdir -p "$T/$d"
mv "$S/$f" "$T/$d/"
done
sleep 3
else
sleep 5
fi
done
Note that the code does not provide atomic updates if multiple instances of the script execute against the same source or target folders.
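If concurrent instances are a real possibility, the move step can be serialized with flock(1). A minimal sketch, assuming util-linux flock is available and using /tmp/q_dir.lock as an arbitrary lock-file path:

```shell
#!/bin/bash
# Serialize the critical section across script instances: flock blocks
# until file descriptor 9 (the lock file) can be locked exclusively.
S=Q_Dir
T=Target_Dir
X=6
(
flock -x 9 || exit 1
t_count=$(find "$T" -type f | wc -l)
if [ "$t_count" -lt "$X" ]; then
find "$S" -type f | head -n "$((X - t_count))" |
while IFS= read -r f; do
rel=${f#"$S"/}          # path relative to the queue directory
d=$(dirname "$rel")
mkdir -p "$T/$d"
mv "$f" "$T/$d/"
done
fi
) 9>/tmp/q_dir.lock
```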

Bash: Check if a directory contains only files with a specific suffix

I am trying to write a script that will check if a directory contains only
a specific kind of file (and/or folder) and will return 1 for false, 0 for true.
IE: I want to check if /my/dir/ contains only *.gz files and nothing else.
This is what I have so far, but it doesn't seem to be working as intended:
# Basic vars
readonly THIS_JOB=${0##*/}
readonly ARGS_NBR=1
declare dir_in=$1
dir_in=$1"/*.gz"
#echo $dir_in
files=$(shopt -s nullglob dotglob; echo ! $dir_in)
echo $files
if (( ${#files} ))
then
echo "Success: Directory contains files."
exit 0
else
echo "Failure: Directory is empty (or does not exist or is a file)"
exit 1
fi
I want to check if /my/dir/ contains only *.gz files and nothing else.
Use find instead of glob expansion. It's really easier to use find and to parse find's output. Globs are fine for simple scripts, but once you want to examine "all files in a directory", apply some filtering, and so on, it's way easier (and safer) to use find:
find "$1" -mindepth 1 -maxdepth 1 \! -name '*.gz' -o \! -type f | wc -l | xargs test 0 -eq
This finds all "things" inside the directory that are not named *.gz or are not regular files (so mkdir a.gz is accounted for), counts them, and then tests whether their count is equal to 0. If the count is 0, xargs test 0 -eq will return 0; otherwise it will return a status between 1 and 125. You can handle a nonzero return status with a simple || return 1 if you wish.
You can remove xargs with a simple bash substitution and use the method from this thread for a little speedup, getting the test return value (0 or 1) directly:
[ 0 -eq "$(find "$1" -mindepth 1 -maxdepth 1 \! -name '*.gz' -o \! -type f -printf '.' | wc -c)" ]
Remember that the exit status of a script is the exit status of the last command executed. So you don't need anything else in your script if you wish, only a shebang and this oneliner will suffice.
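Wrapped in a function (the name only_gz is my own), the first one-liner becomes reusable and its exit status is easy to branch on:

```shell
#!/bin/bash
# Return 0 if the directory contains only regular *.gz files at its
# top level, nonzero otherwise.
only_gz() {
[ 0 -eq "$(find "$1" -mindepth 1 -maxdepth 1 \! -name '*.gz' -o \! -type f | wc -l)" ]
}

if only_gz "${1:-.}"; then
echo "Success: only .gz files."
else
echo "Failure: extra entries present."
fi
```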
Using Bash's extglob, !(*.gz) and grep:
$ if grep -qs . path/!(*.gz) ; then echo yes ; else echo nope ; fi
man grep:
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit
immediately with zero status if any match is found, even if an
error was detected. Also see the -s or --no-messages option.
-s, --no-messages
Suppress error messages about nonexistent or unreadable files.
Since you are using bash, there is another setting you can use: GLOBIGNORE
#!/bin/bash
containsonly(){
dir="$1"
glob="$2"
if [ ! -d "$dir" ]; then
echo 1>&2 "Failure: directory does not exist"
return 2
fi
local res=$(
cd "$dir"
GLOBIGNORE=$glob
shopt -s nullglob dotglob
echo *
)
if [ ${#res} = 0 ]; then
echo 1>&2 "Success: directory contains no extra files"
return 0
else
echo 1>&2 "Failure: directory contains extra files"
return 1
fi
}
# ...
containsonly myfolder '*.gz'
Some have suggested counting all files which do not match the globbing pattern *.gz. This might be quite inefficient depending on the number of files. For your job it is sufficient to find just one file which does not match your globbing pattern. Use the -quit action of find to exit after the first match:
if [ -z "$(find /usr/share/man/man1/* -not -name '*.gz' -print -quit)" ]
then echo only gz
fi

bash shell code confusion

The following code recursively finds the subdirectories of arg1 (pwd by default), labeling each directory with a number. Then it prompts the user to enter a number and cds to the directory labeled with that number (if it is a directory).
But I do not understand where that number comes from...
and how I can control the depth of subdirectories it reaches...
usage
source gd.sh
gd
#!/bin/bash
function gd ()
{
local dirname dirs dir
if [ $# -gt 0 ]
then
dirname=$1
else
dirname=$(pwd)
fi
dirs=$(find $dirname -type d)
PS3=`echo -e "\nPlease Select Directory Number: "`
select dir in $dirs
do
if [ $dir ]
then
cd $dir
break
else
echo 'Invalid Selection!'
fi
done
}
Thanks for help :)
The number comes from the select ... in ... construct: it prints a numbered menu with one entry per element of the list. Look at the bash man page.
For your second question, use the option -maxdepth of find:
dirs=$(find $dirname -maxdepth 2 -type d)
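A minimal example (fed a fixed list here instead of find output) shows where the numbers come from; select prints the numbered menu and PS3 prompt on stderr, puts the chosen word in the loop variable, and stores the raw typed number in REPLY:

```shell
#!/bin/bash
# select numbers the words after "in" by itself; an out-of-range reply
# leaves the loop variable empty.
PS3="Please Select Directory Number: "
select dir in /tmp /var /usr; do
if [ "$dir" ]; then
echo "You chose $dir (option $REPLY)"
break
else
echo 'Invalid Selection!'
fi
done
```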

How to split the file path to extract the various subfolders into variables? (Ubuntu Bash)

I need help with an Ubuntu Precise bash script.
I have several tiff files in various folders:
masterFOlder --masterSub1 --masterSub1-1 --file1.tif
             |--masterSub1-2 --masterSub1-2-1 --file2.tif
             |
             |--masterSub2 --masterSub1-2 .....
I need to run an ImageMagick command on them and save the results to a new folder "converted" while retaining the subfolder tree, i.e. the new tree will be
converted --masterSub1 --masterSub1-1 --file1.png
          |--masterSub1-2 --masterSub1-2-1 --file2.png
          |
          |--masterSub2 --masterSub1-2 .....
How do I split the file path into folders, replace the first folder (masterFOlder with converted), and recreate the new file path?
Thanks to everyone reading this.
This script should work.
#!/bin/bash
shopt -s extglob && [[ $# -eq 2 && -n $1 && -n $2 ]] || exit
MASTERFOLDER=${1%%+(/)}/
CONVERTFOLDER=$2
OFFSET=${#MASTERFOLDER}
while read -r FILE; do
CPATH=${FILE:OFFSET}
CPATH=${CONVERTFOLDER}/${CPATH%.???}.png
CDIR=${CPATH%/*}
echo "Converting $FILE to $CPATH."
[[ -d $CDIR ]] || mkdir -p "$CDIR" && echo convert "$FILE" "$CPATH" || echo "Conversion failed."
done < <(exec find "${MASTERFOLDER}" -mindepth 1 -type f -iname '*.tif')
Just replace echo convert "$FILE" "$CPATH" with the actual command you use and run bash script.sh masterfolder convertedfolder

help with shell script for finding and moving files based on condition

Looking for some help with my bash script. I am trying to write this shell script to do the following:
find files in a dir named:
server1-date.done
server2-date.done
server3-date.done
...
server10-date.done
print them to listA
find files in a dir (*.gz) and print them to listB
if listA has a count of 10 (basically, all 10 .done files were found), then
proceed with moving the files in listB to their new directory
after moving the files from listB, remove the old, similarly named directories (server1-date, server2-date, ...) and the .done files.
So far, I have this in the works. I can't get the condition for the if section working. I don't think I coded that correctly. Any code suggestions, improvements, etc would be appreciated. Thanks.
#Directories
GZDIR=/mydumps/mytest
FINALDIR=/mydumps/mytest/final
FLGDIR=/backup/mytest/flags
export GZDIR FINALDIR FLGDIR
#lists
FLGLIST="/mydumps/mytest/lists/userflgs.lst"
GZLIST="/mydumps/mytest/lists/gzfiles.lst"
export FLGLIST GZLIST
#Find files
find $FLGDIR -name \*.done -print > $FLGLIST
find $GZDIR -name \*.gz -print > $GZLIST
#Get need all (10) flags found before we do the move
FLG_VAL =`cat $FLGLIST | wc -l`
export $FLG_VAL
if [ "$FLG_VAL" = "10" ]; then
for FILE in $GZLIST
do
echo "mv $GZLIST $FINALDIR" 2>&1
for FLAG in $FLGLIST
do
echo "rmdir -f $FLAG" 2>&1
done
done
else
echo "Cannot move file" 2>&1
exit 0
fi
I do not know if this will work, but it will fix all the obvious problems:
#!/bin/sh
#Directories
GZDIR=/mydumps/mytest
FINALDIR=/mydumps/mytest/final
FLGDIR=/backup/mytest/flags
export GZDIR FINALDIR FLGDIR
#lists
FLGLIST="/mydumps/mytest/lists/userflgs.lst"
GZLIST="/mydumps/mytest/lists/gzfiles.lst"
#Find files
find "$FLGDIR" -name '*.done' -print > "$FLGLIST"
find "$GZDIR" -name '*.gz' -print > "$GZLIST"
#Get need all (10) flags found before we do the move
FLG_VAL=$(wc -l <"$FLGLIST") # Always prefer $( ... ) to backticks.
if [ "$FLG_VAL" -ge 10 ]; then
for FILE in $(cat "$GZLIST")
do
echo "mv $FILE $FINALDIR" 2>&1
done
for FLAG in $(cat "$FLGLIST")
do
echo "rmdir -f $FLAG" 2>&1
done
else
echo "Cannot move file" 2>&1
exit 0
fi
First of all, I really recommend that, as your default approach, you always test for exceptions first, and do not nest the "normal" case inside a test unless that is necessary.
...
FLG_VAL=`wc -l < $FLGLIST` # no need for cat, and no space before '='
export FLG_VAL
if [ "$FLG_VAL" != "10" ]; then
echo "Cannot move file" 2>&1
exit 0
fi
for FILE in $(cat "$GZLIST")
do
echo "mv $FILE $FINALDIR" 2>&1
done
for FLAG in $(cat "$FLGLIST")
do
echo "rmdir -f $FLAG" 2>&1
done
See how much easier the code is to read now that the error check is extracted and stands by itself?
FLG_VAL =`cat $FLGLIST | wc -l`
Should be:
FLG_VAL=`cat $FLGLIST | wc -l`
