I'm trying to tidy up one of my bash scripts by using a function for something that happens 6 times. The script sets a number of variables from a config.ini file and then lists them and asks for confirmation that the user wishes to proceed with these predefined values. If not, it steps through each variable and asks for a new one to be entered (or to leave it blank and press enter to use the predefined value). This bit of code accomplishes that:
echo Current output folder: $OUTPUT_FOLDER
echo -n "Enter new output folder: "
read C_OUTPUT_FOLDER
if [ -n "$C_OUTPUT_FOLDER" ]; then OUTPUT_FOLDER=$C_OUTPUT_FOLDER; fi
The idea is to set $OUTPUT_FOLDER to the value of $C_OUTPUT_FOLDER but only if $C_OUTPUT_FOLDER is not null. If $C_OUTPUT_FOLDER IS null, it will not do anything and leave $OUTPUT_FOLDER as it was for use later in the script.
There are 6 variables that are set from the config.ini so this block is currently repeated 6 times. I've made a function new_config () which is as follows:
new_config () {
echo Current $1: ${!2}
echo -n "Enter new $1: "
read $3
if [ -n "${!3}" ]; then $2=${!3}; fi
}
I'm calling it with (in this instance):
new_config "output folder" OUTPUT_FOLDER C_OUTPUT_FOLDER
When I run the script, it has an error on the if line:
./test.sh: line 9: OUTPUT_FOLDER=blah: command not found
So, what gives? The block of code in the script works fine and (in my quite-new-to-bash eyes), the function should be doing exactly the same thing.
Thanks in advance for any pointers.
The problem is that bash splits the command into tokens before variable substitution; see http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_01_04.html#sect_01_04_01_01. Specifically, there are rules for POSIX shells that make assignments a special case during tokenization: "If all the characters preceding '=' form a valid name (see XBD Name), the token ASSIGNMENT_WORD shall be returned." - it's the ASSIGNMENT_WORD token that triggers the assignment path. The shell doesn't repeat the tokenization after variable substitution, which is why your code doesn't work.
You can get your code to work like so:
new_config () {
echo Current $1: ${!2}
echo -n "Enter new $1: "
read $3
if [[ -n "${!3}" ]]; then echo setting "$2='${!3}'"; eval "$2='${!3}'"; fi
}
new_config "output folder" OUTPUT_FOLDER C_OUTPUT_FOLDER
echo $OUTPUT_FOLDER
As @chepner points out, you can use declare -g $2="${!3}" instead of eval here, and on newer bash versions that's a better answer. Unfortunately declare -g requires bash 4.2, and even though that's 3 years old it's still not everywhere - for example, OS X Mavericks is stuck on 3.2.51.
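For completeness, here is a sketch of the function rewritten with declare -g. This is an untested variant of the code above, and it assumes bash 4.2 or newer:
new_config () {
    echo "Current $1: ${!2}"
    echo -n "Enter new $1: "
    read -r "$3"
    # declare -g assigns to the named variable at global scope, avoiding eval
    if [[ -n "${!3}" ]]; then declare -g "$2"="${!3}"; fi
}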
Here's my problem, from console if I type the below,
var=`history 1`
echo $var
I get the desired output. But when I do the same inside a shell script, it is not showing any output. Also, for other commands like pwd, ls etc, the script shows the desired output without any issue.
As the value of the variable contains a space, add quotes around it.
E.g.:
var='history 1'
echo $var
I believe all you need is the following:
1- Ask the user for the number of history lines they want to print.
2- Run the script, take the input from the user, and get the output as follows:
cat get_history.ksh
echo "Enter the line number of history which you want to get.."
read number
if [[ $# -eq 0 ]]
then
    echo "Usage of script: get_history.ksh number_of_lines"
    exit
else
    history "$number"
fi
Logic has been added to check the arguments: if the number of arguments passed is 0, the script prints a usage message and exits.
By default, history is turned off in a script, so you need to turn it on:
set -o history
var=$(history 1)
echo "$var"
Note the preferred use of $( ) rather than the deprecated backticks.
However, this will only look at the history of the current process, that is this shell script, so it is fairly useless.
One of the routines I frequently use is a check for valid arguments passed when invoking scripts. Ideally, I'd like to make these, and other, similar, routines external functions that I could call from any script, for handling these more trivial processes. But, I'm having trouble retrieving the values I need from said function(s), without making the process more complicated.
I have tried using command substitution (e.g., echoing the output of the external function into a variable local to the calling script), which seems to at least work with simpler functions. However, this file-checking function requires the read command in a loop, and thus user interactivity, which causes the script to hang when trying to resolve the variable the function call is stored in:
#!/bin/bash
# This is a simple function I want to call from other scripts.
exist(){
    # If the first parameter passed is not a directory, then the input is
    #+ invalid.
    if [ ! -d "$1" ]; then
        # Rename $1, so we can manipulate its value.
        userDir="$1"
        # Ask the user for new input while his input is invalid.
        while [ ! -d "$userDir" ]; do
            echo "\"$userDir\" does not exist."
            echo "Enter the path to the directory: "
            read userDir
            # Convert any tildes in the variable b/c the shell didn't get to
            #+ perform expansion.
            userDir=`echo "$userDir" | sed "s|~|$HOME|"`
        done
    fi
}
exist "$1"
How can I retrieve the value of userDir in the calling script without adding (much) complexity?
You can have the exist function interact with the user over stderr and still capture the variable with command substitution. Let's take a simplified example:
exist() { read -u2 -p "Enter dir: " dir; echo "$dir"; }
The option -u2 tells read to use file descriptor 2 (stderr) for interacting with the user. This will continue to work even if stdout has been redirected via command substitution. The option -p "Enter dir: " allows read to set the prompt and capture the user input in one command.
As an example of how it works:
$ d=$(exist)
Enter dir: SomeDirectory
$ echo "$d"
SomeDirectory
Complete example
exist() {
    local dir="$1"
    while [ ! -d "$dir" ]; do
        echo "'$dir' is not a directory." >&2
        read -u2 -p "Enter the path to the directory: " dir
        dir="${dir/\~/$HOME}"
    done
    echo "$dir"
}
As an example of this in use:
$ d=$(exist /asdf)
'/asdf' is not a directory.
Enter the path to the directory: /tmp
$ echo "new directory=$d"
new directory=/tmp
Notes:
There is no need for an if statement and a while loop. The while is sufficient on its own.
Single quotes can be put in double-quoted strings without escapes. So, if we write the error message as "'$dir' is not a directory.", escapes are not needed.
All shell variables should be double-quoted unless one wants them to be subject to word splitting and pathname expansion.
Right off the bat I'd say you can 'echo' to the user on stderr and echo your intended answer on stdout.
I had to rearrange a bit to get it working, but this is tested:
exist(){
    # If the first parameter passed is not a directory, then the input is
    #+ invalid.
    userDir="$1"
    if [ ! -d "$userDir" ]; then
        # Ask the user for new input while his input is invalid.
        while [ ! -d "$userDir" ]; do
            >&2 echo "\"$userDir\" does not exist."
            >&2 echo "Enter the path to the directory: "
            read userDir
        done
    else
        >&2 echo "'$1' is indeed a directory"
    fi
    echo "$userDir"
}
When I tested, I saved that to a file called exist.inc.func
Then I wrote another script that uses it like this:
#!/bin/sh
source ./exist.inc.func
#Should work with no input:
varInCallingProg=$(exist /root)
echo "Got back $varInCallingProg"
#Should work after you correct it interactively:
varInCallingProg2=$(exist /probablyNotAdirOnYourSystem )
echo "Got back $varInCallingProg2"
I am trying to check if all the non POSIX commands that my script depends on are present before my script proceeds with its main job. This will help me to ensure that my script does not generate errors later due to missing commands.
I want to keep the list of all such non POSIX commands in a variable called DEPS so that as the script evolves and depends on more commands, I can edit this variable.
I want the script to support commands with spaces in them, e.g. my program.
This is my script.
#!/bin/sh
DEPS='ssh scp "my program" sftp'
for i in $DEPS
do
    echo "Checking $i ..."
    if ! command -v "$i"
    then
        echo "Error: $i not found"
    else
        echo "Success: $i found"
    fi
    echo
done
However, this doesn't work, because "my program" is split into two words, "my and program", while the for loop iterates, as you can see in the output below.
# sh foo.sh
Checking ssh ...
/usr/bin/ssh
Success: ssh found
Checking scp ...
/usr/bin/scp
Success: scp found
Checking "my ...
Error: "my not found
Checking program" ...
Error: program" not found
Checking sftp ...
/usr/bin/sftp
Success: sftp found
The output I expected is:
# sh foo.sh
Checking ssh ...
/usr/bin/ssh
Success: ssh found
Checking scp ...
/usr/bin/scp
Success: scp found
Checking my program ...
Error: my program not found
Checking sftp ...
/usr/bin/sftp
Success: sftp found
How can I solve this problem while keeping the script POSIX compliant?
I'll repeat the answer I gave to your previous question: use a while loop with a here document rather than a for loop. You can embed newlines in a string, which is all you need to separate command names in a string if those command names might contain whitespace. (If your command names contain newlines, strongly consider renaming them.)
For maximum POSIX compatibility, use printf, since the POSIX specification of echo is remarkably lax due to differences in how echo was implemented in various shells prior to the definition of the standard.
deps="ssh
scp
my program
sftp
"
while read -r cmd; do
    # skip the blank line produced by the trailing newline in $deps
    [ -n "$cmd" ] || continue
    printf 'Checking %s ...\n' "$cmd"
    if ! command -v "$cmd"; then
        printf 'Error: %s not found\n' "$cmd"
    else
        printf 'Success: %s found\n' "$cmd"
    fi
    printf '\n'
done <<EOF
$deps
EOF
This happens because the steps after parameter expansion are string-splitting and glob-expansion -- not syntax-level parsing (such as handling quoting). To go all the way back to the beginning of the parsing process, you need to use eval.
Frankly, the best approaches are to either:
Target a shell that supports arrays (ksh, bash, zsh, etc) rather than trying to support POSIX
Don't try to retrieve the value from a variable.
...there's a reason proper array support is ubiquitous in modern shells; writing unambiguously correct code, particularly when handling untrusted data, is much harder without it.
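For the first option, here is a minimal sketch of what the array version of your check could look like, assuming bash rather than POSIX sh (it mirrors the loop from the question):
#!/bin/bash
# the quotes keep "my program" as a single array element
deps=(ssh scp "my program" sftp)
for cmd in "${deps[@]}"; do
    echo "Checking $cmd ..."
    if ! command -v "$cmd"; then
        echo "Error: $cmd not found"
    else
        echo "Success: $cmd found"
    fi
    echo
done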
That said, you have the option of using "$@" to store your contents, which can be set, albeit dangerously, using eval:
deps='goodbye "cruel world"'
eval "set -- $deps"
for program; do
echo "processing $program"
done
If you do this inside of a function, you'll override only the function's argument list, leaving the global list unmodified.
Alternately, eval "yourfunction $deps" will have the same effect, setting the argument list within the function to the results of running all the usual parsing and expansion phases on the contents of $deps.
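A rough sketch of that function-based variant, reusing the question's check (check_deps is just a name picked for illustration):
check_deps() {
    # "for cmd" with no "in" list iterates over the function's own "$@"
    for cmd
    do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "Success: $cmd found"
        else
            echo "Error: $cmd not found"
        fi
    done
}

deps='ssh scp "my program" sftp'
# eval re-parses $deps, so the embedded quotes keep "my program" together
eval "check_deps $deps"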
Because the script is under your control, you can use eval with reasonable safety, so @Charles Duffy's answer is a simple and good solution. Use it. :)
Also, consider using autoconf to generate the usual configure script, which does a good job of what you need - e.g. checking commands and much more... At the very least, check some existing configure scripts for ideas on how to solve common problems...
If you want to play with your own implementation:
divide the dependencies into two groups
core_deps - unix tools that are commonly needed for the script itself, like sed, cat, cp and such. These programs don't contain spaces in their names, nor in their $PATH locations.
runtime_deps - programs that are needed for your application, but not for the script itself.
do the checks in two steps (or more, for example if you also need to check libraries)
never use a for loop for space-delimited elements unless you get them as function arguments - then you can use "$@"
A starting script could be something like the following:
_check_core_deps() {
    for _cmd
    do
        _cpath=$(command -v "$_cmd")
        case "$_cpath" in
            /*) continue;;
            *) echo "Missing install dependency [$_cmd] - can't continue" ; exit 1 ;;
        esac
    done
    return 0
}
core_deps="grep sed hooloovoo cp" #list of "core" commands - they don't contain spaces
_check_core_deps $core_deps || exit 1
The above will blow up on the non-existent "hooloovoo" command. :)
Now you can safely continue, all core commands needed for the install script are available. In the next step, you can check other strange dependencies.
Some ideas:
# function that returns your dependencies as lines from a HEREDOC
# (e.g. they could contain any character except "\n")
# you can decorate the dependencies with comments...
# because we have sed (checked in the 1st step), we can use it
# if you want, you can add "fields" too, for some extended functionality with a specified delimiter
list_deps() {
    _sptab=$(printf " \t") # the $' \t' form is approved by POSIX only for the next version
    #the "sed" removes comments and empty lines
    #the UUOC (useless use of cat) is intentional here
    #for example if you want to add "tr" before the "sed"
    #of course, you can remove it...
    cat - <<DEPS |sed "s/[$_sptab]*#.*//;/^[$_sptab]*$/d"
########## DEPENDENCIES ############
#some comment
ssh
scp
sftp
#comment
#bla bla
my program #some comment
/Applications/Some Long And Spaced OSX Application.app
DEPS
########## END of DEPENDENCIES #####
}
_check_deps() {
    #in the "while" loop you can use IFS=: or such and add another variable to read
    #to get more fields for some extended functionality
    list_deps | while read -r line
    do
        #do any checks with the line
        #implement additional functionalities as functions
        #etc...
        #remember - you're in a subshell here
        printf "command:%s\n" "$line"
    done
}

_check_deps
One more thing :) (or two)
if you're in doubt about the content of some variables, don't use echo. POSIX doesn't define how echo should act when the content contains escaped characters (e.g. echo "some\nwed"). Use:
printf '%s' "$variable"
never use uppercase-only variable names like "DEPS"... they should be reserved for environment variables...
Is it possible to pass command line arguments to shell script as name value pairs, something like
myscript action=build module=core
and then in my script, get the variable like
$action and process it?
I know that $1... and so on can be used to get the values, but then they won't be name-value pairs. Even if they were, the developer using the script would have to take care of supplying the values in the same order. I do not want that.
This worked for me:
for ARGUMENT in "$@"
do
    KEY=$(echo "$ARGUMENT" | cut -f1 -d=)
    KEY_LENGTH=${#KEY}
    VALUE="${ARGUMENT:$KEY_LENGTH+1}"
    export "$KEY"="$VALUE"
done

# from this line, you could use your variables as you need
cd "$FOLDER"
mkdir "$REPOSITORY_NAME"
Usage
bash my_scripts.sh FOLDER="/tmp/foo" REPOSITORY_NAME="stackexchange"
FOLDER and REPOSITORY_NAME are ready to use in the script.
It does not matter what order the arguments are in.
In the Bourne shell, there is a seldom-used option '-k' which automatically places any values specified as name=value on the command line into the environment. Of course, the Bourne/Korn/POSIX shell family (including bash) also do that for name=value items before the command name:
name1=value1 name2=value2 command name3=value3 -x name4=value4 abc
Under normal POSIX-shell behaviour, the command is invoked with name1 and name2 in the environment, and with four arguments. Under the Bourne (and Korn and bash, but not POSIX) shell -k option, it is invoked with name1, name2, name3, and name4 in the environment and just two arguments. The bash manual page (as in man bash) doesn't mention the equivalent of -k but it works like the Bourne and Korn shells do.
I don't think I've ever used it (the -k option) seriously.
There is no way to tell from within the script (command) that the environment variables were specified solely for this command; they are simply environment variables in the environment of that script.
This is the closest approach I know of to what you are asking for. I do not think anything equivalent exists for the C shell family. I don't know of any other argument parser that sets variables from name=value pairs on the command line.
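For illustration only, here is a small sketch of the -k behaviour in bash (the sh -c command line is just an arbitrary way to inspect the child's environment, not anything from the question):
set -k                                         # same effect as starting the shell with -k
sh -c 'echo "action=$action"' _ action=build   # action=build is moved into sh's environment
# should print: action=build
set +k                                         # restore normal behaviour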
With some fairly major caveats (it is relatively easy to do for simple values, but hard to deal with values containing shell meta-characters), you can do:
case $1 in
(*=*) eval $1;;
esac
This is not the C shell family. The eval effectively does the shell assignment.
arg=name1=value1
echo $name1
eval $arg
echo $name1
env action=build module=core myscript
You said you're using tcsh. For Bourne-based shells, you can drop the "env", though it's harmless to leave it there. Note that this applies to the shell from which you run the command, not to the shell used to implement myscript.
If you specifically want the name=value pairs to follow the command name, you'll need to do some work inside myscript.
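That work could be as small as a loop over "$@". Here is a POSIX-ish sketch, assuming the script only cares about a known set of keys (action and module, taken from the question; unknown arguments are simply reported):
#!/bin/sh
# myscript: accept name=value pairs, e.g.  myscript action=build module=core
for arg in "$@"; do
    case $arg in
        action=*) action=${arg#action=} ;;
        module=*) module=${arg#module=} ;;
        *) echo "ignoring unknown argument: $arg" >&2 ;;
    esac
done
echo "action=$action module=$module"
Whitelisting the keys this way avoids eval entirely, at the cost of having to list each accepted name.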
It's quite an old question, but still valid.
I have not found a cookie-cutter solution, so I combined the above answers. For my needs I created this solution; it works even with white space in an argument's value.
Save this as argparse.sh
#!/bin/bash
: ${1?
'Usage:
$0 --<key1>="<val1a> <val1b>" [ --<key2>="<val2a> <val2b>" | --<key3>="<val3>" ]'
}
declare -A args
while [[ "$#" > "0" ]]; do
    case "$1" in
        (*=*)
            _key="${1%%=*}" && _key="${_key/--/}" && _val="${1#*=}"
            args[${_key}]="${_val}"
            (>&2 echo -e "key:val => ${_key}:${_val}")
            ;;
    esac
    shift
done

(>&2 echo -e "Total args: ${#args[@]}; Options: ${args[@]}")
## This additional check can test for a specific key
[[ -n "${args['path']+1}" ]] && (>&2 echo -e "key: 'path' exists") || (>&2 echo -e "key: 'path' does NOT exists");
#Example: Note, arguments to the script can have optional prefix --
./argparse.sh --x="blah"
./argparse.sh --x="blah" --yy="qwert bye"
./argparse.sh x="blah" yy="qwert bye"
Some interesting use cases for this script:
./argparse.sh --path="$(ls -1)"
./argparse.sh --path="$(ls -d -1 "$PWD"/**)"
The above script was created as a gist; refer to argparse.sh.
Extending Jonathan's answer, this worked nicely for me:
#!/bin/bash
if [ "$#" -eq "0" ]; then
    echo "Error! Usage: Remind me how this works again ..."
    exit 1
fi
while [[ "$#" > "0" ]]
do
    case $1 in
        (*=*) eval $1;;
    esac
    shift
done
I am trying to learn shell scripting and I am kind of confused by the idea of := or default values.
#!/bin/sh
echo "Please enter a number \c"
read input
input=$((input % 2))
if [ $input -eq 0 ]
then
    echo "The number is even"
else
    echo "The number is odd"
fi
echo "Beginning of second part"
a="BLA"
a="Dennis"
echo $a
unset a
echo "a after unsetting"
echo $a
${a:=HI}
echo "unsetting a again"
unset a
echo $a
And I get this
Dennis
a after unsetting
./ifstatement.sh: line 21: HI: command not found
unsetting a again
When you write
${a:=HI}
the shell splits the result of the expansion into words, and interprets the first word as a command, as it would for any command line.
Instead, write
: "${a:=HI}"
: is a no-op command. The quotes prevent the shell from trying to do globbing, which in rare circumstances could cause a slowdown or an error.
There isn't a way to set a value that a variable will always "fall back" to when you un-set it. When you use the unset command, you are removing the variable (not just clearing the value associated with it) so it can't have any value, default or otherwise.
Instead, try a combination of two things. First, make sure the variable gets initialized. Second, create a function that sets the variable to the desired default value. Call this function instead of unset. With this combination, you can simulate a variable having a "default" value.
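A sketch of that idea (the function name reset_a and the default HI are just illustrative):
a_default="HI"
reset_a() { a="$a_default"; }   # call this instead of unset to "fall back"

reset_a
echo "$a"        # HI
a="Dennis"
echo "$a"        # Dennis
reset_a
echo "$a"        # HI again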
${a:=HI} expands to HI, which your shell then tries to run as a command. If you're just trying to set the value of a variable if it is not set, you may want to do something like [ -z "$b" ] && b=BYE
Instead of calling unset a, you can run : "${a:=HI}" again.
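For example, using the : "${a:=HI}" form from the answer above:
a="Dennis"
unset a          # a is now gone entirely
: "${a:=HI}"     # re-applying the default expansion restores it
echo "$a"        # HI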