Bash script trouble interpreting input - bash

I wrote a bash script that uploads a file to my home server. It gets activated from a folder action script using AppleScript. The setup: the folder on my desktop is called place_on_server. It's supposed to have an internal file structure exactly like the folder I want to write to: /var/www/media/
usage goes something like this:
if directory etc added to place_on_server: ./upload DIR etc
if directory of directory etc/movies: ./upload DIR etc movies //and so on
if file added to place_on_server: ./upload F file.txt
if file in a folder in place_on_server: ./upload F etc file.txt //and so on
For creating a directory it's supposed to execute a command like:
ssh root@192.168.1.1 <<EOF
cd /var/www/media/wherever
mkdir newdirectory
EOF
and for file placement:
rsync -rsh='ssh -p22' file root@192.168.1.1:/var/www/media/wherever
script:
#!/bin/bash
addr=$(ifconfig -a | ./test)
if ($# -le "1")
then
    exit
elif ($1 -eq "DIR")
then
    f1="ssh -b root@$addr<<EOF"
    list = "cd /var/www/media\n"
    if($# -eq "2")
    then
        list=list+"mkdir $2\nEOF\n"
    else
        num=2
        i=$(($num))
        while($num < $#)
        do
            i=$(($num))
            list=list+"mkdir $i\n"
            list=list+"cd $i\n"
            $num=$num+1
        done
    fi
    echo $list
elif ($1 -eq "F")
then
    #list = "cd /var/www/media\n"
    f2="rsync -rsh=\'ssh -p22\' "
    f3 = "root@$addr:/var/www/media"
    if($# -eq "2")
    then
        f2=f2+$2+" "+f3
    else
        num=3
        i=$(($num))
        while($num < $#)
        do
            i=$(($num))
            f2=f2+"/"+$i
            $num=$num+1
        done
        i=$(($num))
        f2=f2+$i+" "+$f3
    fi
    echo $f2
fi
exit
output:
(prompt)$ ./upload2 F SO test.txt
./upload2: line 3: 3: command not found
./upload2: line 6: F: command not found
./upload2: line 25: F: command not found
So as you can see I'm having issues handling input. It's been a while since I've done bash, and it was never extensive to begin with. I'm looking for a solution to my problem but also for suggestions. Thanks in advance.

For comparisons, use [[ .. ]]; ( .. ) is for running commands in subshells.
Don't use -eq for string comparisons, use =.
Don't use < for numerical comparisons, use -lt.
To append values, f2="$f2$i $f3"
To add line feeds, use $'\n' outside of double quotes, or a literal linefeed inside of them.
You always need "$" on variables in strings to reference them, otherwise you get the literal string.
You can't use spaces around the = in assignments
You can't use $ before the variable name in assignments
To do arithmetics, use $((..)): result=$((var1+var2))
For indirect reference, such as getting $4 for n=4, use ${!n}
To prevent word splitting removing your line feeds, double quote variables such as in echo "$line"
Consider writing smaller programs and checking that they work before building out.
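To make a couple of these rules concrete, here is a tiny illustrative snippet (the values are made up):
#!/bin/bash
line="first"$'\n'"second"   # $'\n' embeds a literal newline
echo "$line"                # double quotes preserve the newline
set -- a b c                # fake positional parameters
n=2
echo "${!n}"                # indirect reference: prints $2, i.e. "b"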
Here is how I would have written your script (slightly lacking in parameter checking):
#!/bin/bash
addr=$(ifconfig -a | ./test)
if [[ $1 = "DIR" ]]
then
shift
( IFS=/; echo ssh "root#$addr" mkdir -p "/var/www/media/$*"; )
elif [[ $1 = "F" ]]
then
shift
last=$#
file=${!last}
( IFS=/; echo rsync "$file" "root#$addr:/var/www/media/$*" )
else
echo "Unknown command '$1'"
fi
$* gives you all parameters separated by the first character in $IFS, and I used that to build the paths. Here's the output:
$ ./scriptname DIR a b c d
ssh root@somehost mkdir -p /var/www/media/a/b/c/d
$ ./scriptname F a b c d somefile.txt
rsync somefile.txt root@somehost:/var/www/media/a/b/c/d/somefile.txt
Remove the echos to actually execute.

The main problem with your script is the conditional statements, such as
if ($# -le "1")
Despite what this would do in other languages, in Bash this essentially says: execute the command line $# -le "1" in a subshell, and use its exit status as the condition.
In your case, that expands to 3 -le "1", but the command 3 does not exist, which causes the error message
./upload2: line 3: 3: command not found
The closest valid syntax would be
if [ $# -le 1 ]
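A quick way to see the difference (an illustrative snippet):
set -- F SO test.txt                  # simulate the script's three arguments
if ( $# -le 1 ); then echo "yes"; fi  # subshell: tries to run the command `3`, hence "command not found"
if [ $# -le 1 ]; then echo "yes"; fi  # a real test: 3 -le 1 is false, so nothing prints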
That is the main problem; there are other problems detailed and addressed in the other answer.
One last thing, when you're assigning value to a variable, e.g.
f3 = "root#$addr:/var/www/media"
don't leave spaces around the =. The statement above would be interpreted as: run the command f3 with = and root@$addr:/var/www/media as arguments.

Related

Script is not glob-expanding, but works fine when running the culprit as a minimalistic example

I've been trying for hours on this problem, and cannot set it straight.
This minimal script works as it should:
#!/bin/bash
wipe_thumbs=1
if (( wipe_thumbs )); then
    src_dir=$1
    thumbs="$src_dir"/*/t1*.jpg
    echo $thumbs
fi
Invoke with ./script workdir and a lot of filenames starting with t1* in all the sub-dirs of workdir are shown.
When putting the above if-case in the bigger script, the globbing is not executed:
SRC: -- workdir/ --
THUMBS: -- workdir//*/t1*.jpg --
ls: cannot access workdir//*/t1*.jpg: No such file or directory
The only difference with the big script and the minimal script is that the big script has a path-validator and getopts-extractor. This code is immediately above the if-case:
#!/bin/bash
OPTIONS=":ts:d:"
src_dir=""
dest_dir=""
wipe_thumbs=0
while getopts $OPTIONS opt ; do
    case "$opt" in
        t) wipe_thumbs=1
           ;;
    esac
done
shift $((OPTIND - 1))
src_dir="$1"
dest_dir="${2:-${src_dir%/*}.WORK}"
# Validate source
echo -n "Validating source..."
if [[ -z "$src_dir" ]]; then
    echo "Can't do anything without a source-dir."
    exit
else
    if [[ ! -d "$src_dir" ]]; then
        echo "\"$src_dir\" is really not a directory."
        exit
    fi
fi
echo "done"
# Validate dest
echo -n "Validating destination..."
if [[ ! -d "$dest_dir" ]]; then
    mkdir "$dest_dir"
    (( $? > 0 )) && exit
else
    if [[ ! -w "$dest_dir" ]]; then
        echo "Can't write into the specified destination-dir."
        exit
    fi
fi
echo "done"
# Move out the files into extension-named directories
echo -n "Moving files..."
if (( wipe_thumbs )); then
    thumbs="$src_dir"/*/t1*.jpg # not expanded
    echo DEBUG THUMBS: -- "$thumbs" --
    n_thumbs=$(ls "$thumbs" | wc -l)
    rm "$thumbs"
fi
...rest of script, never reached due to error...
Can anyone shed some lights on this? Why is the glob not expanded in the big script, but working fine in the minimalistic test script?
EDIT: Added the complete if-case.
The problem is that wildcards aren't expanded in assignment statements (e.g. thumbs="$src_dir"/*/t1*.jpg), but are expanded when variables are used without double-quotes. Here's an interactive example:
$ src_dir=workdir
$ thumbs="$src_dir"/*/t1*.jpg
$ echo $thumbs # No double-quotes, wildcards will be expanded
workdir/sub1/t1-1.jpg workdir/sub1/t1-2.jpg workdir/sub2/t1-1.jpg workdir/sub2/t1-2.jpg
$ echo "$thumbs" # Double-quotes, wildcards printed literally
workdir/*/t1*.jpg
$ ls $thumbs # No double-quotes, wildcards will be expanded
workdir/sub1/t1-1.jpg workdir/sub2/t1-1.jpg
workdir/sub1/t1-2.jpg workdir/sub2/t1-2.jpg
$ ls "$thumbs" # Double-quotes, wildcards treated as literal parts of filename
ls: workdir/*/t1*.jpg: No such file or directory
...so the quick-n-easy fix is to remove the double-quotes from the ls and rm commands. But this isn't safe, as it'll also cause parsing problems if $src_dir contains any whitespace or wildcard characters (this may not be an issue for you, but I'm used to OS X where spaces in filenames are everywhere, and I've learned to be careful about these things). The best way to do this is to store the list of thumb files as an array:
$ src="work dir"
$ thumbs=("$src_dir"/*/t1*.jpg) # No double-quotes protect $src_dir, but not the wildcard portions
$ echo "${thumbs[#]}" # The "${array[#]}" idiom expands each array element as a separate word
work dir/sub1/t1-1.jpg work dir/sub1/t1-2.jpg work dir/sub2/t1-1.jpg work dir/sub2/t1-2.jpg
$ ls "${thumbs[#]}"
work dir/sub1/t1-1.jpg work dir/sub2/t1-1.jpg
work dir/sub1/t1-2.jpg work dir/sub2/t1-2.jpg
You might also want to set nullglob in case there aren't any matches (so it'll expand to a zero-length array).
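For instance (an illustrative interactive session), with nullglob set, a pattern that matches nothing expands to nothing, so the array ends up empty:
$ shopt -s nullglob
$ arr=( no_such_dir/*/t1*.jpg )
$ echo "${#arr[@]}"
0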
In your script, this'd come out something like this:
if (( wipe_thumbs )); then
    shopt -s nullglob
    thumbs=("$src_dir"/*/t1*.jpg) # expanded as array elements
    shopt -u nullglob # back to "normal" to avoid unexpected behavior later
    printf 'DEBUG THUMBS: --'
    printf ' "%s"' "${thumbs[@]}"
    printf ' --\n'
    # n_thumbs=$(ls "${thumbs[@]}" | wc -l) # wrong way to do this...
    n_thumbs=${#thumbs[@]} # better...
    if (( n_thumbs == 0 )); then
        echo "No thumb files found" >&2
        exit
    fi
    rm "${thumbs[@]}"
fi

Add command arguments using inline if-statement in bash

I'd like to add an argument to a command in bash only if a variable evaluates to a certain value. For example this works:
test=1
if [ "${test}" == 1 ]; then
ls -la -R
else
ls -R
fi
The problem with this approach is that I have to duplicate ls -R, both when test is 1 and when it's something else. I'd prefer to write this in one line instead, such as this (pseudo code that doesn't work):
ls (if ${test} == 1 then -la) -R
I've tried the following but it doesn't work:
test=1
ls `if [ $test -eq 1 ]; then -la; fi` -R
This gives me the following error:
./test.sh: line 3: -la: command not found
A more idiomatic version of svlasov's answer:
ls $( (( test == 1 )) && printf %s '-la' ) -R
Since echo understands a few options itself, it's safer to use printf %s to make sure that the text to print is not mistaken for an option.
Note that the command substitution must not be quoted here - which is fine in the case at hand, but calls for a more robust approach in general - see below.
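To see why echo is risky here (an illustrative interactive session):
$ echo -n            # echo parses -n as one of its own options and prints nothing
$ printf '%s\n' -n   # printf emits the literal text
-n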
However, in general, the more robust approach is to build up arguments in an array and pass it as a whole:
# Build up array of arguments...
args=()
(( test == 1 )) && args+=( '-la' )
args+=( '-R' )
# ... and pass it to `ls`.
ls "${args[#]}"
Update: The OP asks how to conditionally add an additional, variable-based argument to yield ls -R -la "$PWD".
In that case, the array approach is a must: each argument must become its own array element, which is crucial for supporting arguments that may have embedded whitespace:
(( test == 1 )) && args+= ( '-la' "$PWD" ) # Add each argument as its own array element.
As for why your command,
ls `if [ $test -eq 1 ]; then -la; fi` -R
didn't work:
A command between backticks (or its modern, nestable equivalent, $(...)) - a so-called command substitution - is executed just like any other shell command (albeit in a sub-shell) and the whole construct is replaced with the command's stdout output.
Thus, your command tries to execute the string -la, which fails. To send it to stdout, as is needed here, you must use a command such as echo or printf.
Print the argument with echo:
test=1
ls `if [ $test -eq 1 ]; then echo "-la"; fi` -R
I can't say how acceptable this is, but:
test=1
ls ${test:+'-la'} -R
See https://stackoverflow.com/revisions/16753536/1 for a conditional truth table.
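A quick illustration of how ${var:+word} behaves (an interactive sketch):
$ test=1
$ echo ls ${test:+-la} -R   # set and non-empty: the word is substituted
ls -la -R
$ test=
$ echo ls ${test:+-la} -R   # empty: nothing is substituted
ls -R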
Another answer without using eval and using BASH arrays:
myls() { local arr=(ls); [[ $1 -eq 1 ]] && arr+=(-la); arr+=(-R); "${arr[@]}"; }
Use it as:
myls
myls "$test"
This script builds the whole command in an array arr and preserves the original order of the command options.

Creating a which command in bash script

For an assignment, I'm supposed to create a script called my_which.sh that will "do the same thing as the Unix command, but do it using a for loop over an if." I am also not allowed to call which in my script.
I'm brand new to this, and have been reading tutorials, but I'm pretty confused on how to start. Doesn't which just list the path name of a command?
If so, how would I go about displaying the correct path name without calling which, and while using a for loop and an if statement?
For example, if I run my script, it will echo % and wait for input. But then how do I translate that to finding the directory? So it would look like this?
#!/bin/bash
path=(`echo $PATH`)
echo -n "% "
read ans
for i in $path
do
    if [ -d $i ]; then
        echo $i
    fi
done
I would appreciate any help, or even any starting tutorials that can help me get started on this. I'm honestly very confused on how I should implement this.
Split your PATH variable safely. This is a general method to split a string at delimiters, that is 100% safe regarding any possible characters (including newlines):
IFS=: read -r -d '' -a paths < <(printf '%s:\0' "$PATH")
We artificially added : because if PATH ends with a trailing :, then it is understood that the current directory should be in PATH. While this is dangerous and not recommended, we must also take it into account if we want to mimic which. Without this trailing colon, a PATH like /bin:/usr/bin: would be split into
declare -a paths='( [0]="/bin" [1]="/usr/bin" )'
whereas with this trailing colon the resulting array is:
declare -a paths='( [0]="/bin" [1]="/usr/bin" [2]="" )'
This is one detail that other answers miss. Of course, we'll do this only if PATH is set and non-empty.
With this split PATH, we'll use a for-loop to check whether the argument can be found in the given directory. Note that this should be done only if the argument doesn't contain a / character! This is also something the other answers missed.
My version of which handles a unique option -a that prints all matching pathnames of each argument. Otherwise, only the first match is printed. We'll have to take this into account too.
My version of which handles the following exit status:
0 if all specified commands are found and executable
1 if one or more specified commands is nonexistent or not executable
2 if an invalid option is specified
We'll handle that too.
I guess the following mimics rather faithfully the behavior of my which (and it's pure Bash):
#!/bin/bash
show_usage() {
    printf 'Usage: %s [-a] args\n' "$0"
}
illegal_option() {
    printf >&2 'Illegal option -%s\n' "$1"
    show_usage
    exit 2
}
check_arg() {
    if [[ -f $1 && -x $1 ]]; then
        printf '%s\n' "$1"
        return 0
    else
        return 1
    fi
}
# manage options
show_only_one=true
while (($#)); do
    [[ $1 = -- ]] && { shift; break; }
    [[ $1 = -?* ]] || break
    opt=${1#-}
    while [[ $opt ]]; do
        case $opt in
            (a*) show_only_one=false; opt=${opt#?} ;;
            (*) illegal_option "${opt:0:1}" ;;
        esac
    done
    shift
done
# If no arguments left or empty PATH, exit with return code 1
(($#)) || exit 1
[[ $PATH ]] || exit 1
# split path
IFS=: read -r -d '' -a paths < <(printf '%s:\0' "$PATH")
ret=0
# loop on arguments
for arg; do
    # Check whether arg contains a slash
    if [[ $arg = */* ]]; then
        check_arg "$arg" || ret=1
    else
        this_ret=1
        for p in "${paths[@]}"; do
            if check_arg "${p:-.}/$arg"; then
                this_ret=0
                "$show_only_one" && break
            fi
        done
        ((this_ret==1)) && ret=1
    fi
done
exit "$ret"
To test whether an argument is executable or not, I'm checking whether it's a regular file[1] which is executable, with:
[[ -f $arg && -x $arg ]]
I guess that's close to my which's behavior.
[1] As @mklement0 points out (thanks!), the -f test, when applied to a symbolic link, tests the type of the symlink's target.
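A small interactive illustration of that footnote (assuming /bin/ls exists on your system):
$ ln -s /bin/ls mylink
$ [[ -f mylink && -x mylink ]] && echo "accepted"   # -f and -x follow the symlink to its target
accepted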
#!/bin/bash
#Get the user's first argument to this script
exe_name=$1
#Set the field separator to ":" (this is what the PATH variable
# uses as its delimiter), then read the contents of the PATH
# into the array variable "paths" -- at the same time splitting
# the PATH by ":"
IFS=':' read -a paths <<< $PATH
#Iterate over each of the paths in the "paths" array
for e in ${paths[*]}
do
    #Check for the $exe_name in this path
    find $e -name $exe_name -maxdepth 1
done
This is similar to the accepted answer with the difference that it does not set the IFS and checks if the execute bits are set.
#!/bin/bash
for i in $(echo "$PATH" | tr ":" "\n")
do
find "$i" -name "$1" -perm +111 -maxdepth 1
done
Save this as my_which.sh (or some other name) and run it as ./my_which java etc.
However, if an "if" is required:
#!/bin/bash
for i in $(echo "$PATH" | tr ":" "\n")
do
# this is a one liner that works. However the user requires an if statment
# find "$i" -name "$1" -perm +111 -maxdepth 1
cmd=$i/$1
if [[ ( -f "$cmd" || -L "$cmd" ) && -x "$cmd" ]]
then
echo "$cmd"
break
fi
done
You might want to take a look at this link to figure out the tests in the "if".
For a complete, rock-solid implementation, see gniourf_gniourf's answer.
Here's a more concise alternative that makes do with a single invocation of find [per name to investigate].
The OP later clarified that an if statement should be used in a loop, but the question is general enough to warrant considering other approaches.
A naïve implementation would even work as a one-liner, IF you're willing to make a few assumptions (the example uses 'ls' as the executable to locate):
find -L ${PATH//:/ } -maxdepth 1 -type f -perm -u=x -name 'ls' 2>/dev/null
The assumptions - which will hold in many, but not all situations - are:
$PATH must not contain entries that when used unquoted result in shell expansions (e.g., no embedded spaces that would result in word splitting, no characters such as * that would result in pathname expansion)
$PATH must not contain an empty entry (which must be interpreted as the current dir).
Explanation:
-L tells find to investigate the targets of symlinks rather than the symlinks themselves - this ensures that symlinks to executable files are also recognized by -type f
${PATH//:/ } replaces all : chars. in $PATH with a space each, causing the result - due to being unquoted - to be passed as individual arguments split by spaces.
-maxdepth 1 instructs find to only look directly in each specified directory, not also in subdirectories
-type f matches only files, not directories.
-perm -u=x matches only files and directories that the current user (u) can execute (x).
2>/dev/null suppresses error messages that may stem from non-existent directories in the $PATH or failed attempts to access files due to lack of permission.
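As an illustration (a hypothetical run on a typical Linux layout; your PATH and results will differ):
$ find -L /usr/local/bin /usr/bin /bin -maxdepth 1 -type f -perm -u=x -name 'ls' 2>/dev/null
/bin/ls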
Here's a more robust script version:
Note:
For brevity, only handles a single argument (and no options).
Does NOT handle the case where entries or result paths may contain embedded \n chars - however, this is extremely rare in practice and likely leads to bigger problems overall.
#!/bin/bash
# Assign argument to variable; error out, if none given.
name=${1:?Please specify an executable filename.}
# Robustly read individual $PATH entries into a bash array, splitting by ':'
# - The additional trailing ':' ensures that a trailing ':' in $PATH is
# properly recognized as an empty entry - see gniourf_gniourf's answer.
IFS=: read -r -a paths <<<"${PATH}:"
# Replace empty entries with '.' for use with `find`.
# (Empty entries imply '.' - this is legacy behavior mandated by POSIX).
for (( i = 0; i < "${#paths[@]}"; i++ )); do
    [[ "${paths[i]}" == '' ]] && paths[i]='.'
done
# Invoke `find` with *all* directories and capture the 1st match, if any, in a variable.
# Simply remove `| head -n 1` to print *all* matches.
match=$(find -L "${paths[#]}" -maxdepth 1 -type f -perm -u=x -name "$name" 2>/dev/null |
head -n 1)
# Print result, if found, and exit with appropriate exit code.
if [[ -n $match ]]; then
    printf '%s\n' "$match"
    exit 0
else
    exit 1
fi

Why does my script report ls: not found

I have the following korn script:
#!/bin/ksh
TAPPDATADIR=/hp/qa02/App/IPHSLDI/Data
echo $TAPPDATADIR
if [[ls $TAPPDATADIR/zip_file_MD5_checksum*.txt | wc -l > 1]]
then
exit "asdf"
fi
When I attempt to run it I get:
/hp/qa02/App/IPHSLDI/Data
./iftest.ksh: line 7: [[ls: not found
Why isn't my if statement working?
I'm trying to see if there are multiple checksum files in the Data directory. If there are I want to exit the script.
There are several problems:
There shouldn't be any spaces around = in the assignment.
You need spaces around [[ and ]] in the if statement.
To substitute the result of a command into the test expression, you need to use backticks or $(...).
The parameter to exit should be a number, I think you just want to echo the string.
> performs string comparison, you have to use -gt to perform numeric comparison.
So the full script should be:
#!/bin/ksh
TAPPDATADIR=/hp/qa02/App/IPHSLDI/Data
echo $TAPPDATADIR
if [[ $(ls $TAPPDATADIR/zip_file_MD5_checksum*.txt | wc -l) -gt 1 ]]
then
    echo "asdf"
fi
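For what it's worth, here is a sketch that avoids parsing ls output altogether (assuming ksh93, where glob matches can be collected into an array):
files=( $TAPPDATADIR/zip_file_MD5_checksum*.txt )
# Note: if nothing matches, the unexpanded pattern is kept as a single element.
if (( ${#files[@]} > 1 )); then
    echo "asdf"
fi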

How to handle "--" in the shell script arguments?

This question has 3 parts, and each alone is easy, but combined together is not trivial (at least for me) :)
I need to write a script that should take as its arguments:
one name of another command
several arguments for the command
list of files
Examples:
./my_script head -100 a.txt b.txt ./xxx/*.txt
./my_script sed -n 's/xxx/aaa/' *.txt
and so on.
Inside my script, for some reason, I need to distinguish
what is the command
what are the arguments for the command
what are the files
so probably the most standard way to write the above examples is:
./my_script head -100 -- a.txt b.txt ./xxx/*.txt
./my_script sed -n 's/xxx/aaa/' -- *.txt
Question 1: Is there any better solution?
Processing in ./my_script (first attempt):
command="$1";shift
args=`echo $* | sed 's/--.*//'`
filenames=`echo $* | sed 's/.*--//'`
#... some additional processing ...
"$command" "$args" $filenames #execute the command with args and files
This solution will fail when the filenames contain spaces and/or '--', e.g.
/some--path/to/more/idiotic file name.txt
Question 2: How do I properly get $command, its $args, and $filenames for the later execution?
Question 3: How to achieve the following style of execution?
echo $filenames | $command $args #but want one filename = one line (like ls -1)
Is there a nice shell solution, or do I need to use, for example, Perl?
First of all, it sounds like you're trying to write a script that takes a command and a list of filenames and runs the command on each filename in turn. This can be done in one line in bash:
$ for file in a.txt b.txt ./xxx/*.txt;do head -100 "$file";done
$ for file in *.txt; do sed -n 's/xxx/aaa/' "$file";done
However, maybe I've misinterpreted your intent so let me answer your questions individually.
Instead of using "--" (which already has a different meaning), the following syntax feels more natural to me:
./my_script -c "head -100" a.txt b.txt ./xxx/*.txt
./my_script -c "sed -n 's/xxx/aaa/'" *.txt
To extract the arguments in bash, use getopts:
SCRIPT=$0
while getopts "c:" opt; do
    case $opt in
        c)
            command=$OPTARG
            ;;
    esac
done
shift $((OPTIND-1))
if [ -z "$command" ] || [ -z "$*" ]; then
    echo "Usage: $SCRIPT -c <command> file [file..]"
    exit
fi
If you want to run a command for each of the remaining arguments, it would look like this:
for target in "$#";do
eval $command \"$target\"
done
If you want to read the filenames from STDIN, it would look more like this:
while read target; do
    eval $command \"$target\"
done
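If eval makes you nervous, a rough eval-free variant (a sketch, assuming the -c value contains no shell quoting of its own) splits the command into an array once and reuses it:
read -r -a cmd <<< "$command"   # split e.g. "head -100" into words
while read target; do
    "${cmd[@]}" "$target"       # run the command on each filename read from STDIN
done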
The $@ variable, when quoted, will be able to group parameters as they should be:
for parameter in "$#"
do
echo "The parameter is '$parameter'"
done
If given:
head -100 test this "File name" out
Will print:
The parameter is 'head'
The parameter is '-100'
The parameter is 'test'
The parameter is 'this'
The parameter is 'File name'
The parameter is 'out'
Now, all you have to do is parse the loop out. You can use some very simple rules:
The first parameter is always the command name
The parameters that follow that start with a dash are parameters
After the "--" or once one doesn't start with a "-", the rest are all file names.
You can check to see if the first character in the parameter is a dash by using this:
if [[ "x${parameter}" == "x${parameter#-}" ]]
If you haven't seen this syntax before, it's a left filter. The # divides the two parts of the expression: the first part is the name of the variable, and the second is the glob pattern (not a regular expression) to cut off the front. In this case, it's a single dash. As long as this statement isn't true, you know you have a parameter. BTW, the x may or may not be needed in this case: when you run a test on a string with a leading dash, the test might mistake it for one of its own options rather than a value.
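A quick interactive illustration of the left filter (hypothetical values):
$ parameter=-100
$ echo "${parameter#-}"   # leading dash stripped, so it differs from $parameter
100
$ parameter="File name"
$ echo "${parameter#-}"   # no leading dash, so nothing changes
File name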
Putting it together would give something like this:
parameterFlag=""
for parameter in "$#" #Quotes are important!
do
if [[ "x${parameter}" == "x${parameter#-}" ]]
then
parameterFlag="Tripped!"
fi
if [[ "x${parameter}" == "x--" ]]
then
print "Parameter \"$parameter\" ends the parameter list"
parameterFlag="TRIPPED!"
fi
if [ -n $parameterFlag ]
then
print "\"$parameter\" is a file"
else
echo "The parameter \"$parameter\" is a parameter"
fi
done
Question 1
I don't think so, at least not if you need to do this for arbitrary commands.
Question 3
command=$1
shift
while [ "$1" != '--' ]; do
    args="$args $1"
    shift
done
shift
while [ -n "$1" ]; do
    echo "$1"
    shift
done | $command $args
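For example (a hypothetical run of the sketch above), three filenames become three lines on the command's stdin:
$ ./my_script wc -l -- a.txt b.txt c.txt
3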
Question 2
How does that differ from question 3?
