How to make one loop out of five in bash?

I have a small script for Mac where I'm adding printers. It works fine, but I think I could make it simpler, or at least it would be interesting to see a different solution.
while IFS= read -r line; do
    if [[ $line == *"Printer_E1"* ]]; then
        if [[ "$FIND_PRINTERS" =~ "$PRINTER_E1_IP" ]]; then
            echo "found printer e1"
        else
            echo "adding printer e1"
            "$LPADMIN" -p "$PRINTER_E1_IP" -v "lpd://$PRINTER_E1_IP" -L "$PRINTER_E1_LOCATION" -P "$PRINTER_E1_PPD" -E -o printer-is-shared=false -D "$PRINTER_E1_NAME"
            echo "adding printer e1 done"
        fi
    fi
done <<< "$AD_GROUPS"
The content of $AD_GROUPS is:
Printer_E0
Printer_E1
Printer_E2
Printer_E3
Printer_E4
Printer_Strasse
Printer_Wien
I have such a loop for each of the 5 printers, i.e. that block repeated 5 times with different variables.
How could I do that with one loop (or how else could I make it simpler)?

Something like this:
while IFS= read -r printer; do
    # Build the variable names (PRINTER_E1_IP, ...) from the group name.
    # tr is used for uppercasing because macOS ships bash 3.2, which lacks ${var^^}.
    prefix=$(printf '%s' "$printer" | tr '[:lower:]' '[:upper:]')
    ip_var="${prefix}_IP"
    loc_var="${prefix}_LOCATION"
    ppd_var="${prefix}_PPD"
    name_var="${prefix}_NAME"
    # ${!var} is indirect expansion: the value of the variable whose name is in var.
    [[ "$FIND_PRINTERS" =~ "${!ip_var}" ]] && \
        echo "Found ${printer}" && continue
    echo "Adding ${printer}..."
    "$LPADMIN" -p "${!ip_var}" \
        -v "lpd://${!ip_var}" \
        -L "${!loc_var}" \
        -P "${!ppd_var}" -E -o printer-is-shared=false \
        -D "${!name_var}" \
        && echo "Done"
done <<< "$AD_GROUPS"
I assume your variable FIND_PRINTERS lists the printers that are already installed, and that you have already set the parameters (IP, LOCATION, etc.) for each printer.
Note that a plain "${printer}_IP" would only expand to the literal string Printer_E1_IP, so the loop first builds each variable name and then uses indirect expansion, ${!var}, to fetch its value for the various commands. I have also simplified "if condition; then command; fi" to "condition && command", and continue moves on to the next iteration.
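For reference, here is a minimal sketch of the indirect expansion used above, with a made-up value:
PRINTER_E1_IP="10.0.0.21"   # hypothetical value for the demo
printer="Printer_E1"
prefix=$(printf '%s' "$printer" | tr '[:lower:]' '[:upper:]')
ip_var="${prefix}_IP"       # this holds the *name* PRINTER_E1_IP, not its value
echo "${!ip_var}"           # indirect expansion prints 10.0.0.21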

Related

Bash Script variable assigned "-r" when using rm -r $VAR

Here's the code:
function rm {
    cd ~/
    if [[ -d ./Jam/projects/"$1" ]]; then
        echo Removing $1 from projects...
        rm -r ./Jam/projects/"$1"
    elif [[ -d ./Jam/archive/"$1" ]]; then
        echo Removing $1 from archives...
        rm -r ./Jam/archive/"$1"
    else
        echo $1 does not exist \in ./Jam/projects/ or ./Jam/archive
        exit
    fi
    echo Finished\!
}
When this is run, $1 is "Hello World" (a directory in ./Jam/archive/).
I get this output:
Removing HelloWorld from archives...
-r does not exist in ./Jam/projects/ or ./Jam/archive
Somehow, $1 is assigned to "-r".
I don't know how on earth this would happen. Any help is much appreciated.
Your function is called "rm", and inside your function "rm" you call rm -r thinking it's the normal rm, but it isn't: it's your function calling itself. In that recursive call, "$1" is -r (and the path becomes "$2"), which is exactly why the else branch reports that "-r" does not exist. This perfectly demonstrates the danger of giving your function a name that already has a well-known meaning.
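A minimal way out (my sketch, not part of the original answer) is to rename the function, or at least bypass the function lookup with the command builtin whenever you mean the real rm:
function rm {
    cd ~/ || return 1
    if [[ -d ./Jam/projects/"$1" ]]; then
        echo "Removing $1 from projects..."
        command rm -r ./Jam/projects/"$1"   # `command` skips functions and aliases
    elif [[ -d ./Jam/archive/"$1" ]]; then
        echo "Removing $1 from archives..."
        command rm -r ./Jam/archive/"$1"
    else
        echo "$1 does not exist in ./Jam/projects/ or ./Jam/archive"
        return 1
    fi
    echo "Finished!"
}
(I also use return 1 instead of exit so the function can't kill the calling shell.)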

BASH dirname basename problems with spaces

Writing a script to optimize my images for the web. Having issues with filenames and directories with spaces in the names.
Here's what I have:
read -p "Enter full path from root (/) to your site... example /var/www/public_html: " path1
echo ""
#read -p "Enter in ImageMagick quality (default is 80) if unsure enter 80: " optjpg
#echo ""
#id="$(id -u optiimage)"
cmd="id -u optiimage"
eval $cmd
id=$(eval $cmd)
tmp1="${path1}/shell/optiimage/imagemagick"
tmp2="${path1}/shell/optiimage/imagemagick/jpg"
restore1="${path1}/shell/optiimage/restore"
restore2="${path1}/shell/optiimage/restore/imagemagick/jpg"
backup1="${path1}/shell/optiimage/backup"
backup2="${path1}/shell/optiimage/backup/imagemagick/jpg"
log1="${path1}/shell/optiimage/log/imagemagick/"
DATE="$(date +%a-%b-%y-%T)"
# Need user input for www path from root
##
## Make directories
##
############################################################################################################
mkdir -p ${tmp1}
mkdir -p ${tmp2}
mkdir -p ${restore1}
mkdir -p ${restore2}
mkdir -p ${backup1}
mkdir -p ${backup2}
mkdir -p ${log1}
mkdir -p ${path1}/build
echo "Processing JPG Files"
find $path1 -iname "*jpg" | \
#write out script to put on cron for image optimization
while read file;
do
    # If not equal to optimage uid
    # to check username id -u optimage
    if [ -u "${id}" ]; then
        filebase=`basename "$file" .jpg`
        dirbase=`dirname "$file"`
        echo "${dirbase}/${filebase}.jpg already optimized" >> ${log1}_optimized_$DATE.log
    else
        #simple log for size of image before optimization
        ls -s $file >> ${log1}_before_$DATE.log
        #Do the following if *.jpg found
        filebase=`basename $file .jpg`
        dirbase=`dirname $file`
        echo "cp -p ${dirbase}/${filebase}.jpg ${tmp2}" >> ${path1}/build/backup_jpg.txt
        echo "chown optiimage:www-data ${filebase}.jpg" >> ${path1}/build/restore_jpg.txt #${restore1}/imagemagick.sh
        echo "cp -p ${filebase}.jpg ${dirbase}/${filebase}.jpg" >> ${path1}/build/restore_jpg.txt #${restore1}/imagemagick.sh
        ##
        ## ImageMagick
        ## Original Command:
        ## convert $file -quality 80 ${filebase}.new.jpg
        ##########################
        echo "convert ${dirbase}/${filebase}.jpg -quality 80 ${tmp2}/${filebase}.jpg" >> ${path1}/build/imagemagick.txt
        echo "mogrify -strip ${tmp2}/${filebase}.jpg" >> ${path1}/build/imagemagick.txt
        echo "chown optiimage:www-data ${tmp2}/${filebase}.jpg" >> ${path1}/build/owner_jpg.txt
        echo "rm ${dirbase}/${filebase}.jpg" >> ${path1}/build/remove_jpg.txt
        echo "cp -p ${tmp2}/${filebase}.jpg ${dirbase}/" >> ${path1}/build/migrate_jpg.txt
        #simple log for size of image after optimization
        ls -s $file >> ${log1}_after_$DATE.log
    fi
done
I have edited this with the suggestions some have given me, but it didn't seem to work.
It works fine if I remove the directories with spaces in their names; otherwise the name gets cut off at the space and I get "directory doesn't exist" errors.
You need to double-quote variable substitutions. This applies inside command substitutions as well as in the top-level lexical context. The only exception to this is assignment of a string variable from another string variable, e.g. str2=$str1;, although other types of variable assignments generally need quoting, such as assigning a string variable from an array slice, even if it only slices one element, e.g. str="${arr[@]:1:1}";.
Although unlikely to be a problem here, the read builtin strips leading and trailing whitespace if you provide one or more NAMEs; you can solve that by not providing any NAMEs at all, and just letting it store the whole line in the $REPLY variable by default.
You should always use the -r option of the read builtin, as that prevents its ill-advised default behavior of doing backslash interpolation/removal on the input data.
If you don't need any kind of interpolation in a string literal, prefer the '...' syntax to "...", as the former does not do any interpolation.
Prefer the [[ ... ]] expression evaluation form to the old-style [ ... ] form, as the former syntax is slightly more powerful.
Prefer the $(...) command substitution form to the old-style `...` form, as the former syntax has more favorable nesting properties (namely, no need to escape the nested command substitution delimiters).
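To make those quoting rules concrete, a tiny illustration (the variable names are mine):
str1='hello world'
str2=$str1               # string-from-string assignment: quotes optional
arr=(one two three)
str3="${arr[@]:1:1}"     # array slice, even of a single element: quotes needed
echo "$str2" "$str3"     # expansions used as command arguments: always quote
Applying all of this to your loop: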
find "$path1" -iname '*jpeg'| \
# write out script to put on cron for image optimization
while read -r; do
file=$REPLY;
# If not equal to optimage uid
# to check username id -u optimage
if [[ -u "$id" ]]; then
filebase=$(basename "$file" .jpeg);
dirbase=$(dirname "$file");
#MYBASENAME=$(basename "$1")
echo "${dirbase}/${filebase}.jpeg already optimized" >>"${log1}_optimized_$DATE.log";
fi;
done;
;
Quote your $file variable everywhere it is used:
find $path1 -iname "*jpeg" | \
while read file;
do
if [ -u "${id}" ]; then
filebase=`basename "$file" .jpeg`
dirbase=`dirname "$file"`
fi
done
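For completeness, a pattern that survives any filename, including spaces and even embedded newlines, is to NUL-delimit the find output. A minimal sketch, assuming a find that supports -print0 (GNU and BSD both do):
find "$path1" -iname '*.jpg' -print0 |
while IFS= read -r -d '' file; do
    filebase=$(basename "$file" .jpg)
    dirbase=$(dirname "$file")
    printf '%s/%s.jpg\n' "$dirbase" "$filebase"
done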

Curl not downloading files correctly

So I have been struggling with this task for an eternity and still don't get what went wrong. This program doesn't seem to download ANY pdfs. I checked the file that stores the final links; everything is stored correctly. $PDFURL also checks out; it stores the correct values. Any bash fans ready to help?
#!/bin/sh
#create a temporary directory where all the work will be conducted
TMPDIR=`mktemp -d /tmp/chiheisen.XXXXXXXXXX`
echo $TMPDIR
#no arguments given - error
if [ "$#" == "0" ]; then
    exit 1
fi
# argument given, but wrong format
URL="$1"
#URL regex
URL_REG='(https?|ftp|file)://[-A-Za-z0-9\+&@#/%?=~_|!:,.;]*[-A-Za-z0-9\+&@#/%=~_|]'
if [[ ! $URL =~ $URL_REG ]]; then
    exit 1
fi
# go to directory created
cd $TMPDIR
#download the html page
curl -s "$1" > htmlfile.html
#grep only links into temp.txt
cat htmlfile.html | grep -o -E 'href="([^"#]+)\.pdf"' | cut -d'"' -f2 > temp.txt
# iterate through lines in the file and try to download
# the pdf files that are there
cat temp.txt | while read PDFURL; do
    #if this is an absolute URL, download the file directly
    if [[ $PDFURL == *http* ]]; then
        curl -s -f -O $PDFURL
        err="$?"
        if [ "$err" -ne 0 ]; then
            echo ERROR "$(basename $PDFURL)" >&2
        else
            echo "$(basename $PDFURL)"
        fi
    else
        #update url - it is always relative to the first parameter in script
        PDFURLU="$1""/""$(basename $PDFURL)"
        curl -s -f -O $PDFURLU
        err="$?"
        if [ "$err" -ne 0 ]; then
            echo ERROR "$(basename $PDFURLU)" >&2
        else
            echo "$(basename $PDFURLU)"
        fi
    fi
done
#delete the files
rm htmlfile.html
rm temp.txt
P.S. Another minor problem I have just spotted. Maybe the problem is with the regex in the if? I would pretty much like to have something like this there:
if [[ $PDFURL =~ (https?|ftp|file):// ]]
but this doesn't work, and I don't have any unwanted parentheses there, so why?
P.P.S. I also ran this script on URLs beginning with http, and the program gave the desired output. However, it still doesn't pass the test.
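One observation (mine, not a confirmed diagnosis): [[ ... =~ ... ]] is a bash feature, and the script's shebang is #!/bin/sh, so under a non-bash sh that line would fail outright. In bash, the robust idiom is to keep the pattern in a variable and leave it unquoted on the right-hand side of =~:
#!/bin/bash
url_re='^(https?|ftp|file)://'   # keeping the pattern in a variable avoids quoting pitfalls
for PDFURL in 'http://example.com/a.pdf' 'docs/b.pdf'; do
    if [[ $PDFURL =~ $url_re ]]; then
        echo "absolute: $PDFURL"
    else
        echo "relative: $PDFURL"
    fi
done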

Check if bash command has specified modificator

During the configuration of a Symfony 2 project it is required to set appropriate privileges on the cache and log directories.
The documentation says to do it in one of two ways. One of them is calling the setfacl command with the -m option. However, not every version supports this option. Is it possible to check whether this command, or any other command, supports a given option?
For example, with the following pseudocode:
if [ checkmods --command=setfacl --modificator=-m ]
    setfacl -m ....
else
    chmod ...
You can parse the usage information by running setfacl --help and checking whether it contains the option. For example (the -- tells grep that the following -m, is a search pattern, not one of grep's own options):
if setfacl --help | grep -q -- -m,
then
    echo "setfacl -m supported"
else
    echo "setfacl -m not supported"
fi
If you want to do it for any command which has the --help option, take a look at the _parse_help function available in your bash-completion file.
http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=blob;f=bash_completion
# Parse GNU style help output of the given command.
# @param $1 command; if "-", read from stdin and ignore rest of args
# @param $2 command options (default: --help)
#
_parse_help()
{
    eval local cmd=$( quote "$1" )
    local line
    { case $cmd in
        -) cat ;;
        *) LC_ALL=C "$( dequote "$cmd" )" ${2:---help} 2>&1 ;;
      esac } \
    | while read -r line; do
        [[ $line == *([ $'\t'])-* ]] || continue
        # transform "-f FOO, --foo=FOO" to "-f , --foo=FOO" etc
        while [[ $line =~ \
            ((^|[^-])-[A-Za-z0-9?][[:space:]]+)\[?[A-Z0-9]+\]? ]]; do
            line=${line/"${BASH_REMATCH[0]}"/"${BASH_REMATCH[1]}"}
        done
        __parse_options "${line// or /, }"
    done
}
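A hypothetical usage sketch, assuming the rest of the bash-completion machinery (quote, dequote, __parse_options) has been sourced so that _parse_help actually runs:
# _parse_help prints the options it discovers; test for -m among them.
if _parse_help setfacl | grep -qw -- '-m'; then
    echo "setfacl -m supported"
else
    echo "setfacl -m not supported"
fi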

bash commands in parallel

I want to have two youtube-dl processes (or as many as possible) running in parallel. Please show me how. Thanks in advance.
#!/bin/bash
#package: youtube-dl axel
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"
#number of lines in FILE
COUNTER=`wc -l $FILE | cut -f1 -d' '`
#download destination
cd /srv/backup/transmission/completed
if [[ -s $FILE ]]; then
    while [ $COUNTER -gt 0 ]; do
        #get video link
        URL=`head -n 1 $FILE`
        #get video name
        NAME=`youtube-dl --get-filename -o "%(title)s.%(ext)s" "$URL" --restrict-filenames`
        #real video url
        vURL=`youtube-dl --get-url $URL`
        #remove first link
        sed -i 1d $FILE
        #download file
        axel -n 10 -o "$NAME" $vURL &
        #update number of lines
        COUNTER=`wc -l $FILE | cut -f1 -d' '`
    done
else
    break
fi
This ought to work with GNU Parallel:
cd /srv/backup/transmission/completed
parallel -j0 'axel -n 10 -o $(youtube-dl --get-filename -o "%(title)s.%(ext)s" "{}" --restrict-filenames) $(youtube-dl --get-url {})' :::: /srv/backup/temp/youtube.txt
Learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
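For reference, :::: tells parallel to read one argument per line from the named file, and -j0 means run as many jobs simultaneously as possible. A toy equivalent you can try safely:
# Each line of youtube.txt becomes one {} substitution; all lines run concurrently.
parallel -j0 'echo "would download {}"' :::: /srv/backup/temp/youtube.txt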
Solution
You need to run your command in a subshell, i.e. put your command into ( cmd ) &.
Definition
A shell script can itself launch subprocesses. These subshells let the
script do parallel processing, in effect executing multiple subtasks
simultaneously.
Code
For you it will look like this, I guess (I added quotes around $vURL):
( axel -n 10 -o "$NAME" "$vURL" ) &
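A minimal demo of the subshell-in-background pattern (the sleeps and messages are made up):
( sleep 2; echo "job 1 done" ) &
( sleep 1; echo "job 2 done" ) &
wait    # block until all background jobs have finished
echo "all jobs done"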
I don't know if it is the best way, but you can define a function and then call it in the background, something like this:
#!/bin/bash
#package: youtube-dl axel
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"
# define a function
download_video() {
    sleep 3
    echo "$1"
}
while read -r line; do
    # call it in background, with &
    download_video "$line" &
done < "$FILE"
The script ends quickly, but the functions still run in the background; after 3 seconds the echoes will show up.
I also used a while read loop to simplify the file reading.
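If you literally want at most two downloads running at a time, one way (my sketch, assuming bash 4.3+ for wait -n) is to throttle before launching each job:
max_jobs=2
while read -r line; do
    # If max_jobs jobs are already running, wait for any one of them to finish.
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n
    done
    download_video "$line" &
done < "$FILE"
wait    # wait for the last jobs too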
Here's my take on it. By avoiding several commands you should see some minor improvement in speed, though it might not be noticeable. I did add error checking, which can save you time on broken URLs.
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"
while read URL ; do
    [ -z "$URL" ] && continue
    #get video name
    if NAME=$(youtube-dl --get-filename -o "%(title)s.%(ext)s" "$URL" --restrict-filenames) ; then
        #real video url
        if vURL=$(youtube-dl --get-url "$URL") ; then
            #download file
            axel -n 10 -o "$NAME" "$vURL" &
        else
            echo "Could not get vURL from $URL"
        fi
    else
        echo "Could not get NAME from $URL"
    fi
done < "$FILE"
By request, here's my proposal for parallelizing the vURL and NAME fetching as well as the download. Note: since the download depends on both vURL and NAME, there is no point in creating three processes; two gives you about the best return. Below I've put the NAME fetch in its own process, but if it turned out that vURL was consistently faster, there might be a small payoff in swapping it with the NAME fetch. (That way the while loop in the download process won't waste even a second sleeping.) Note 2: this is fairly crude and untested; it's just off the cuff and probably needs work. And there's probably a much cooler way in any case. Be afraid...
#!/bin/bash
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"

GetName () { # URL, filename
    if NAME=$(youtube-dl --get-filename -o "%(title)s.%(ext)s" "$1" --restrict-filenames) ; then
        # Create a sourceable file with NAME value
        echo "NAME='$NAME'" > "$2"
    else
        echo "Could not get NAME from $1"
    fi
}

Download () { # URL, filename
    if vURL=$(youtube-dl --get-url "$1") ; then
        # Wait to see if GetName's file appears
        timeout=300 # Wait up to 5 minutes, adjust this if needed
        while (( timeout-- )) ; do
            if [ -f "$2" ] ; then
                source "$2"
                rm "$2"
                #download file
                if axel -n 10 -o "$NAME" "$vURL" ; then
                    echo "Download of $NAME from $1 finished"
                    return 0
                else
                    echo "Download of $NAME from $1 failed"
                fi
            fi
            sleep 1
        done
        echo "Download timed out waiting for file $2"
    else
        echo "Could not get vURL from $1"
    fi
    return 1
}

filebase="tempfile${$}_"
filecount=0
while read URL ; do
    [ -z "$URL" ] && continue
    filename="$filebase$filecount"
    [ -f "$filename" ] && rm "$filename" # Just in case
    (( filecount++ ))
    ( GetName "$URL" "$filename" ) &
    ( Download "$URL" "$filename" ) &
done < "$FILE"
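One small addition of mine, not part of the answer above: if the script should block until every GetName/Download pair has finished, end it with wait:
wait    # block until all backgrounded subshells exit
echo "all downloads attempted"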
