I have this command in bash:
cmd -c -s junk text.txt
If I change the command to
cmd -c junk -s text.txt
how do I keep track of which parameter ($2 or $3) is set to junk?
I tried using a for loop, but I don't know how to find junk from $#.
You need to use getopts inside your script. Something like this should work:
while getopts "c:s:" optionName; do
    case "$optionName" in
        s) arg="$OPTARG"; echo "-s is present with [$arg]";;
        c) arg="$OPTARG"; echo "-c is present with [$arg]";;
    esac
done
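For example, assuming the loop above is saved in a script called parse.sh (a name chosen here for illustration), the second form of your command is parsed regardless of option order:

$ ./parse.sh -c junk -s text.txt
-c is present with [junk]
-s is present with [text.txt]

Note that with the optstring "c:s:" both options require an argument, so the first form (-c -s junk text.txt) would hand -s to -c as its OPTARG.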
From the example you show, it seems that the -s option takes a single argument, which is junk in the first example. However, the semantics seem to change in the second example: there, -c apparently takes a single argument (again junk), and -s takes text.txt as its argument.
In general, arguments to bash commands do not have fixed positions, but if an option takes an argument, that argument should directly follow the option (in the first case, -s).
As pointed out by anubhava, you may use getopts to parse the arguments for your script. Still, this will not work for a case where you change the whole semantics, as you seem to do.
I came across a script that is supposed to set up postgis in a docker container, but it references this "${psql[@]}" command in several places:
#!/bin/sh
# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"
# Create the 'template_postgis' template db
"${psql[#]}" <<- 'EOSQL'
CREATE DATABASE template_postgis;
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';
EOSQL
I'm guessing it's supposed to use the psql command, but the command is always empty so it gives an error. Replacing it with psql makes the script run as expected. Is my guess correct?
Edit: In case it's important, the command is being run in a container based on postgres:11-alpine.
$psql is supposed to be an array containing the psql command and its arguments.
The script is apparently expected to be run from here, which does
psql=( psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --no-password )
and later sources the script in this loop:
for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)
            # https://github.com/docker-library/postgres/issues/450#issuecomment-393167936
            # https://github.com/docker-library/postgres/pull/452
            if [ -x "$f" ]; then
                echo "$0: running $f"
                "$f"
            else
                echo "$0: sourcing $f"
                . "$f"
            fi
            ;;
        *.sql)    echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done
See Setting an argument with bash for the reason to use an array rather than a string.
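For a quick demonstration of the difference, printf '<%s>\n' shows how each form splits into words (the option values here are made up for the example):

args_string='--username my user'
args_array=(--username 'my user')

printf '<%s>\n' $args_string        # <--username> <my> <user>  -- three words
printf '<%s>\n' "${args_array[@]}"  # <--username> <my user>    -- two words, space preserved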
The #!/bin/sh and the [@] are incongruous. This is a bash-ism, where the psql variable is an array. This literal quote dollarsign psql bracket at bracket quote is expanded into "psql" "array" "values" "each" "listed" "and" "quoted" "separately." It's the safer way, e.g., to accumulate arguments to a command where any of them might have spaces in them.
psql=(/foo/psql arg arg arg) is the best way to define the array you need there.
It might look obscure, but it would work like so...
Let's say we have a bash array wc, which contains a command wc, and an argument -w, and we feed that a here document with some words:
wc=(wc -w)
"${wc[#]}" <<- words
one
two three
four
words
Since there are four words in the here document, the output is:
4
In the quoted code, there needs to be some prior point, (perhaps a calling script), that does something like:
psql=(psql -option1 -option2 arg1 arg2 ... )
As to why the programmer chose to invoke a command with an array, rather than just invoking the command, I can only guess... Maybe it's a crude sort of operator overloading to compensate for different *nix distros (i.e. BSD vs. Linux), where the local variants of some necessary command might have different names for the same option, or even use different commands. So one might check for BSD or Linux or a given version, and reset psql accordingly.
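A hedged sketch of that idea (the stat options shown are real, but whether this is what the author had in mind is a guess):

case "$(uname -s)" in
    Linux)        size_cmd=(stat --format '%s') ;;  # GNU stat
    Darwin|*BSD)  size_cmd=(stat -f '%z') ;;        # BSD stat spells the same thing differently
esac

"${size_cmd[@]}" /etc/hosts   # prints the file size in bytes either way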
The answer from @Barmar is correct.
The script was intended to be sourced, not executed.
I faced the same problem and came to the same conclusion after reading that it had been reported and fixed with chmod here:
https://github.com/postgis/docker-postgis/issues/119
Therefore, the fix is to remove the execute permission.
This can be done either in your git repository:
chmod -x initdb-postgis.sh
or by adding a line to your Dockerfile:
RUN chmod -x /docker-entrypoint-initdb.d/10_postgis.sh
I like to do both so that it is clear to others.
Note: if you are using git on Windows, the permission bits can be lost, so the chmod in the Dockerfile is needed.
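A minimal Dockerfile sketch of the second option (the base image tag is an assumption; use whatever image your setup builds from, and the script path matches the issue above):

FROM postgis/postgis:11-3.0
# Remove the execute bit so the entrypoint sources the script instead of running it
RUN chmod -x /docker-entrypoint-initdb.d/10_postgis.sh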
I am calling a command from a script function. I would like it not to print to out, and only to a variable.
This prints to out:
MYVAR=$(docker inspect -f {{.State.Status}} $1)
I tried adding &>/dev/null, but sure enough the variable is then not set. Is there a middle step, something like $(command > MYVAR &> /dev/null)?
Update
Just as I was about to accept @anubhava's very good answer, I found this explanation of the difference between 2>&-, 2>/dev/null, |&, &>/dev/null and >/dev/null 2>&1.
What you actually need is to suppress only stderr, since you're storing the command's stdout in a variable. So use 2>/dev/null:
myvar=$(docker inspect -f '{{.State.Status}}' "$1" 2>/dev/null)
Also, I suggest avoiding all-caps variable names, to reduce the chance of clobbering an environment variable.
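To see why &>/dev/null left the variable empty, compare the two redirections inside a command substitution (ls is just a stand-in command here):

v=$(ls /etc/hostname 2>/dev/null)   # stderr discarded, stdout still captured
echo "v='$v'"                       # v='/etc/hostname'

v=$(ls /etc/hostname &>/dev/null)   # BOTH streams go to /dev/null...
echo "v='$v'"                       # ...so v='' -- nothing was captured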
I am working on an option driven bash script that will use getopts. The script has cases where it can accept multiple options and specific cases where only one option is accepted. While testing a few cases out I ran into this issue which I have reduced down to pseudo-code for this question.
for arg in "$@"; do
    echo ${arg}
done
echo "end"
Running the script returns:
$ ./test.sh -a -b
-a

end
I am running bash 4.1.2. Why isn't -b printed on the empty line? I assume this has to do with the '-'.
I cannot reproduce your exact error, but this is the risk of using echo: if $arg looks like a valid option, it will be treated as one, not as a string to print. Use printf instead:
printf '%s\n' "$arg"
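For example, bash's echo silently consumes an argument it recognizes as an option, while printf prints it:

arg='-n'
echo $arg            # prints nothing: -n is taken as the "no trailing newline" option
printf '%s\n' "$arg" # prints: -n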
Also check whether you have applied any shift commands that might remove the arguments before you test them (typical in an argument-collection block that might include a case statement).
I want to inject a transparent wrapper command around each shell command in a makefile. Something like the time shell command. (However, not the time command. This is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
# make invokes this as: myshell -c 'command'
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
	$(Q)$(TIME) foo_compiler
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
    shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$@" | awk '{ if (match($0, /Process PID=[0-9]+ runs in (64|32) bit/) == 0) print $0 }'
# EOF
I don't think there is a way to do what you want within GNU Make itself.
I have done things like modify the PATH environment variable in the Makefile, so that a directory containing my script, symlinked under the names of all the binaries I wanted wrapped, was searched before the real binaries. The script would then look at how it was called and exec the actual binary with the wrapping command, i.e. exec time "$0" "$@". A sketch of that layout follows.
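A hedged sketch of the approach (directory names invented for illustration; note the wrapper must drop its own directory from PATH, or it would find and call itself again). The Makefile would prepend the wrapper directory with something like export PATH := /path/to/wrappers:$(PATH), and each symlink in that directory would point at:

#!/bin/sh
# Invoked via a symlink named after the real binary (e.g. "gcc").
# Drop the wrapper directory from the front of PATH so the lookup below
# finds the real binary instead of this script.
PATH=${PATH#/path/to/wrappers:}
exec time "$(basename "$0")" "$@"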
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles' answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to omit the -c, since that is required by most shells. -c is passed as the first argument ($1), and <command> is passed as a single string as the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
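Putting it together, a minimal sketch: assuming the two-line wrapper above is saved as /usr/local/bin/timeshell (a path chosen for illustration) and marked executable, the makefile only needs:

SHELL = /usr/local/bin/timeshell

and every recipe line will then be reported by time individually.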
I am trying to write a wrapper shell script that caches information every time a command is called. It only needs to store the first non-option argument. For example, in
$ mycommand -o option1 -f another --spec more arg1 arg2
I want to retrieve "arg1."
How can this be done in bash?
Using getopt is probably the way to go.
If you wanted to see argument scanning code in bash, the non-getopt way is:
realargs="$@"
while [ $# -gt 0 ]; do
    case "$1" in
        -x | -y | -z)
            echo recognized one argument option $1 with arg $2
            shift
            ;;
        -a | -b | -c)
            echo recognized zero argument option $1, no extra shift
            ;;
        *)
            saveme=$1
            break
            ;;
    esac
    shift
done
set -- $realargs
echo saved word: $saveme
echo run real command: "$@"
There's no way to pick off any particular argument without examining the entire command line. The reason for this is bash's underlying assumption that any option can appear in any order, without regard to its relative position on the command line. The other premise is that any option specified in the man pages in either short or long format (i.e., "-f" or "--file") will have valid, recognized use in the execution of the command.
Your best bet is to use the example provided by DigitalRoss and either code a case-statement branch for every valid option of the command, or code only for the one(s) you want to deal with in your script, capture everything else with the "*)" construct, and disregard whatever falls into that test. The trick is that if a particular option has more than one valid argument, you need to know in advance whether the distinction between the arguments is positional or based on pattern matching the argument's content. You'll also need to use shift to move from one argument to the next for options with multiple arguments.
You probably want to do something with getopt (look here or here for how to use it).
Maybe save the whole command line (so you can hand it to the real tool intact), then process the arguments with getopt, grab the info you need, and launch the underlying tool, as in the sketch below.
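A minimal sketch of that plan, under stated assumptions: realcommand and the cache file are placeholders, and the options that take values are taken from the question's example:

#!/bin/bash
# Hypothetical wrapper script; adjust option letters and paths to the real tool.
args=("$@")                      # intact copy to hand to the real tool

first_arg=
while [ $# -gt 0 ]; do
    case "$1" in
        -o|-f|--spec) shift ;;   # option that takes a value: the extra shift skips the value
        -*) ;;                   # option without a value
        *) first_arg=$1; break ;;  # first non-option word, e.g. "arg1"
    esac
    shift
done

[ -n "$first_arg" ] && printf '%s\n' "$first_arg" >> /tmp/mycommand.cache
exec realcommand "${args[@]}"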