What is the purpose of 'set -- $args' after getopt? - bash

The usual example for using getopt in bash is as follows:
args=`getopt abo: $*`
errcode=$?
set -- $args
What does that last line achieve?

This explains it very well. Essentially, it is there to break a single argument carrying multiple flags into multiple arguments, each with a single flag:
Whether you call your script as
script -ab
or as
script -a -b
after the set -- $args, $1 will be -a and $2 will be -b. It makes processing easier.
BTW, getopts is much better
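For comparison, here is a minimal getopts sketch handling the same -a, -b and -o options (the sample invocation baked into the snippet is illustrative, not from the question):

```shell
#!/bin/sh
# Simulate calling the script as: script -ab -o out.txt file1
set -- -ab -o out.txt file1

a_flag=0 b_flag=0 o_arg=
while getopts 'abo:' opt; do
    case $opt in
        a) a_flag=1 ;;
        b) b_flag=1 ;;
        o) o_arg=$OPTARG ;;
        *) echo "usage: $0 [-a] [-b] [-o arg] [file...]" >&2; exit 2 ;;
    esac
done
shift $((OPTIND - 1))   # drop the parsed options; operands remain in "$@"

echo "a=$a_flag b=$b_flag o=$o_arg operands=$*"
# prints: a=1 b=1 o=out.txt operands=file1
```

Because getopts is a shell builtin that returns its results in variables, no `set -- $args` step (and none of its word-splitting pitfalls) is needed.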

set updates the positional parameters of the script.
#! /bin/bash
echo "$*"
set -- $1 baz
echo "$*"
If this script is invoked with /path/to/script foo bar, the output is:
foo bar
foo baz


How to extend command stored in sh variable with parameters?

I tried to google the problem and played with different approaches, but I failed to actually execute the command. I want to construct a command in a set of conditional checks. Here is what I want to achieve:
run_script=`command`
if type command &>/dev/null; then
run_script=`command`
else
run_script=`something command`
fi
# while ... do
$params=`-a -b -c` # calculated
$anotherparam = "./test.file" # calculated
# run $run_script+$params+$anotherparam ???
# like we run "command -a -b -c ./test.file" command
# done
Note: it's just an example
How can I do this type of combination? I can't use arrays because I need it to be compatible with sh.
Correct POSIX sh way of dynamically constructing a command call with parameters:
#!/bin/sh
run_script='command'
if command -v "$run_script" >/dev/null 2>&1; then
set -- "$run_script"
else
set -- "something" "$run_script"
fi
# while ... do
# $params=`-a -b -c` # calculated
set -- "$@" -a -b -c # calculated
# $anotherparam = "./test.file" # calculated
set -- "$@" "./test.file" # calculated
# run $run_script+$params+$anotherparam ???
"$@" # run command with its parameters
# like we run "command -a -b -c ./test.file" command
# done
Explanations:
if command -v "$run_script" >/dev/null 2>&1: Tests if the "$run_script" command exists.
set -- "$run_script": Sets the arguments array to the "$run_script" command.
set -- "something" "$run_script": Sets the arguments array to the something command with first argument "$run_script".
set -- "$@" -a -b -c: Sets the arguments array to "$@", the current contents of the arguments array, followed by -a, -b and -c as additional arguments.
Stand-alone "$@": Runs the command and arguments contained in the arguments array.
You could use:
if type -p command; then
run_script="command"
else
run_script="something command"
fi
params="-a -b -c"
anotherparam="./test.file"
$run_script $params $anotherparam  # unquoted on purpose, so "something command" splits into two words
Changes to OP code:
use type -p cmd instead of type cmd &>/dev/null
use quotes instead of backticks for run_script="command"
remove spaces around the equal sign =
remove $ from left side of the equal sign
This will raise strong opinions.
If you are completely sure that you will only receive trusted, non-malicious input, eval is part of the standard shell and a perfect tool for this task.
See: https://unix.stackexchange.com/questions/278427/why-and-when-should-eval-use-be-avoided-in-shell-scripts
The arguments are concatenated together into a single command, which
is then read and executed, and its exit status returned as the exit
status of eval. If there are no arguments or only empty arguments, the
return status is zero.
if type command &>/dev/null; then
run_script="command"
else
run_script="something command"
fi
params="-a -b -c" # calculated
anotherparam="./test.file" # calculated
eval "$run_script $params $anotherparam"

Empty Positional Argument when using /bin/bash -c <script>

I'm trying to launch a script with /bin/bash -c with positional arguments but can't figure out the following issue:
Suppose I have test.sh as follows:
#!/bin/bash
echo $0
echo $1
> ./test.sh a
./test.sh
a
> /bin/bash -c ./test.sh a
./test.sh
Why does the second one return an empty positional argument for $1? Based on the man page:
-c If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional
parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
It seems like "a" should be assigned to $0 at least, which is not what I saw. /bin/bash -c 'echo $0' a works as expected. Thanks!
The string after -c acts like a miniature script, and the arguments after that are passed to it as $0, $1, $2, etc. For example:
$ bash -c 'echo "\$0=$0, \$1=$1, \$2=$2"' zero one two
$0=zero, $1=one, $2=two
(Note: it's important that the mini-script is in single-quotes; without them the references to $0 would be expanded by your interactive shell before they even get passed to the bash -c command.)
In your case, the mini-script runs another script (./test.sh), but doesn't pass on the arguments. If you wanted to pass them on, you'd do something like this:
$ bash -c './test.sh "$1" "$2"' zero one two
./test.sh
one
If the script had bothered to print its $2 here, it would've gotten "two". It doesn't help to pass on $0, because for a real script that's automatically set to the actual command used to run the script.
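To forward however many arguments the caller supplied, rather than a fixed $1 and $2, the usual idiom is to quote "$@" inside the mini-script and pass a placeholder for $0. A self-contained sketch (using echo in place of the question's test.sh):

```shell
#!/bin/sh
# The first word after the -c string becomes $0 (here the conventional
# placeholder "sh"); the remaining words become $1, $2, ...
out=$(sh -c 'echo "count=$# first=$1"' sh one two three)
echo "$out"
# prints: count=3 first=one
```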
bash [long-opt] [-abefhkmnptuvxdBCDHP] [-o option] [-O shopt_option]
-c string [argument ...]
-c is supposed to be followed by a string, so you may quote ./test.sh a like:
$ /bin/bash -c "./test.sh a"
./test.sh
a
The -c option does not collect all following arguments of the bash command, but just uses the first non-option argument, which in your case is the one immediately following it. I don't see why you want to use -c here. I would write your command as
/bin/bash test.sh a
Since in this case, no PATH search is involved, you can also omit the ./ part. In fact, test.sh doesn't even need to be executable here.

How to do named command line arguments in Bash Scripting better way?

This is my sample Bash Script example.sh:
#!/bin/bash
# Reading arguments and mapping to respective variables
while [ $# -gt 0 ]; do
if [[ $1 == *"--"* ]]; then
v="${1/--/}"
declare $v
fi
shift
done
# Printing command line arguments through the mapped variables
echo ${arg1}
echo ${arg2}
Now if in terminal I run the bash script as follows:
$ bash ./example.sh "--arg1=value1" "--arg2=value2"
I get the correct output like:
value1
value2
Perfect! Meaning I was able to use the values passed to the arguments --arg1 and --arg2 using the variables ${arg1} and ${arg2} inside the bash script.
I am happy with this solution for now as it serves my purpose, but can anyone suggest a better way to use named command line arguments in bash scripts?
You can just use environment variables:
#!/bin/bash
echo "$arg1"
echo "$arg2"
No parsing needed. From the command line:
$ arg1=foo arg2=bar ./example.sh
foo
bar
There's even a shell option to let you put the assignments anywhere, not just before the command:
$ set -k
$ ./example.sh arg1=hello arg2=world
hello
world
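If you would rather keep the --arg1=value1 calling convention but avoid declare creating whatever variable name the caller happens to pass, an explicit case-based parser is a common alternative. A sketch (the option names and the baked-in sample invocation are illustrative):

```shell
#!/bin/sh
# Simulate: ./example.sh --arg1=value1 --arg2=value2
set -- --arg1=value1 --arg2=value2

arg1= arg2=
while [ $# -gt 0 ]; do
    case $1 in
        --arg1=*) arg1=${1#--arg1=} ;;   # strip the "--arg1=" prefix
        --arg2=*) arg2=${1#--arg2=} ;;
        *) echo "unknown option: $1" >&2; exit 2 ;;
    esac
    shift
done

echo "$arg1"   # value1
echo "$arg2"   # value2
```

Only names listed in the case branches are accepted; anything else is rejected instead of silently becoming a variable.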

ksh set-o pipefail is not inherited by scripts

I would like to have set -o pipefail set "always" (.kshrc) but consider the following trivial example:
#!/bin/ksh
function fun {
return 99
}
if [[ $1 == 'pipe' ]] ; then
set -o pipefail
fi
if fun | tee /tmp/bork.txt ; then
print "fun returned 0"
else
print "fun nonzero"
fi
This results in:
/home/khb>./b pipe
fun nonzero GOOD what we want
/home/khb>./b
fun returned 0 What we expect without pipefail!
/home/khb>set -o pipefail
/home/khb>./b
fun returned 0 BAD: expected the set to impact inferior shells
No doubt this should be obvious, but other than creating an environment variable and having every script reference it ... or sourcing a common set of definitions ... what other options are there to arrange for this option to be "found" in each script?
Thanks in advance.
A slightly awkward solution would be to have set -o pipefail in your ~/.profile file and then write scripts that always invoke ksh as a login shell, i.e. by using #!/bin/ksh -l as the hash-bang line.
A less (?) awkward solution would be to put set -o pipefail in the file pointed to by $ENV and then invoke ksh with -E (instead of -l above). However, shells parsing $ENV are usually interactive shells...
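The "sourcing a common set of definitions" route the question already mentions is probably the most portable: put the option in one file and dot it at the top of every script. A runnable sketch (the file name /tmp/common.ksh is hypothetical, and bash stands in for ksh here since both handle pipefail the same way):

```shell
#!/bin/sh
# Shared settings file that every script sources first:
cat > /tmp/common.ksh <<'EOF'
set -o pipefail
EOF

# Inside a script that sources it, a pipeline now reports the status
# of the failing command instead of the last one:
status=$(bash -c '. /tmp/common.ksh; false | cat; echo $?')
echo "$status"
# prints: 1
```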

how to silently disable xtrace in a shell script?

I'm writing a shell script that loops over some values and runs a long command line for each value. I'd like to print out these commands along the way, just like make does when running a makefile. I know I could just "echo" all commands before running them, but it feels inelegant. So I'm looking at set -x and similar mechanisms instead:
#!/bin/sh
for value in a long list of values
do
set -x
touch $value # imagine a complicated invocation here
set +x
done
My problem is: at each iteration, not only is the interesting line printed out, but the set +x line as well. Is it somehow possible to prevent that? If not, what workaround do you recommend?
PS: the MWE above uses sh, but I also have bash and zsh installed in case that helps.
Sandbox it in a subshell:
(set -x; do_thing_you_want_traced)
Of course, changes to variables or the environment made in that subshell will be lost.
If you REALLY care about this, you could also use a DEBUG trap (using set -T to cause it to be inherited by functions) to implement your own set -x equivalent.
For instance, if using bash:
trap_fn() {
[[ $DEBUG && $BASH_COMMAND != "unset DEBUG" ]] && \
printf "[%s:%s] %s\n" "$BASH_SOURCE" "$LINENO" "$BASH_COMMAND"
return 0 # do not block execution in extdebug mode
}
trap trap_fn DEBUG
DEBUG=1
# ...do something you want traced...
unset DEBUG
That said, emitting BASH_COMMAND (as a DEBUG trap can do) is not fully equivalent of set -x; for instance, it does not show post-expansion values.
You want to try using a single-line xtrace:
function xtrace() {
# Print the line as if xtrace was turned on, using perl to filter out
# the extra colon character and the following "set +x" line.
(
set -x
# Colon is a no-op in bash, so nothing will execute.
: "$@"
set +x
) 2>&1 | perl -ne 's/^[+] :/+/ and print' 1>&2
# Execute the original line unmolested
"$@"
}
The original command executes in the same shell under an identity transformation. Just prior to running, you get a non-recursive xtrace of the arguments. This allows you to xtrace the commands you care about without spamming stderr with duplicate copies of every "echo" command.
# Example
for value in $long_list; do
computed_value=$(echo "$value" | sed 's/.../...')
xtrace some_command -x -y -z $value $computed_value ...
done
The following command disables the 'xtrace' option:
$ set +o xtrace
I thought of
set -x >/dev/null 2>&1; echo 1; echo 2; set +x >/dev/null 2>&1
but got
+ echo 1
1
+ echo 2
2
+ 1> /dev/null 2>& 1
I'm surprised by these results. .... But
set -x ; echo 1; echo 2; set +x
+ echo 1
1
+ echo 2
2
looks to meet your requirement.
I saw similar results when I put each statement on its own line (except the set +x).
IHTH.
