I have a makefile like so:
.SILENT: #don't echo commands as we run them.
###
# here, set $1 (in the latter bash script) to the first make arg.
# $2 to the second one.
# et cetera.
# ($0 being set properly is optional)
###
foo:
echo "you passed to make: $1 $2 $3 $4 $5 $6"
so i can do:
make foo
You passed to make: foo
(don't tell anyone, but i'm creating a 'make me a sandwich' thing)
No, you cannot "see" the full list of arguments of the make command in a Makefile. You can see the list of "targets" in the order they were entered on the command line, e.g.
make a c b d
will produce $(MAKECMDGOALS) with a c b d.
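For instance, a minimal sketch of a Makefile that just reports its goals (target names are illustrative):

```make
# Any goal list works; 'show' prints them, the extra goals become no-ops.
.PHONY: show a b c d
show:
	@echo "goals: $(MAKECMDGOALS)"
a b c d: ;
```

Running make show a c b d should print goals: show a c b d once, with the extra goals treated as empty targets.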
You can specifically check that something was set on the make command line, e.g.
make A=B C=D
will produce $(A) with B, and $(C) with D. But you will never know if it was make A=B C=D or make C=D A=B or make A=F -B C=D A=B.
If it is really important, you can wrap your make in a shell script, and have the full power of it:
make.sh a A=B c
calls
#!/bin/bash
make MAKECMDLINE="$*" "$@"
Now inside the Makefile, you can parse $(MAKECMDLINE) as you wish.
Note that some command-line options may disable your parser, e.g.
make A=B C=D -h
will never read your Makefile, and
make A=B C=D -f /dev/null
will also ignore your Makefile.
Edit 2019: Wow, past-me was so stupid.
Alright, I figured out my own question.
You can create a shell script to have total control:
$ cat make.sh
#!/bin/bash
if [ "$1" = 'me' ] && [ "$2" = 'a' ] && [ "$3" = 'sandwich' ]; then
    if [ "$(whoami)" = 'root' ]; then
        echo "Okay."
        #todo: echo an actual sandwich in ascii.
    else
        echo "What? Make it yourself!"
    fi
    exit 0
fi
make "$@"
$ echo 'alias make="/home/connor/make.sh"' >> ~/.bashrc
$ source ~/.bashrc
...tada! Now when you call make, it'll actually call the shell script, check whether the arguments you passed to make are 'me a sandwich', and if they aren't, call make with all of its args.
if you want to tell make to run a rule for certain args:
if [ "$1" = 'foo' ]; then
make somerule
fi
Related
I have a program that I wrap with a bash script, and I want to know if a certain flag was given (-b), and if so, to update a variable I have in the script (called "x"). Unfortunately, I need this to be done in the bash script, and the syntax here drives me nuts. A small example of what I did (without the call to the program I wrap):
#!/bin/bash
x=0
while getopts :b opt; do
case $opt in
b) x=1;;
?) echo "";;
esac
done
echo "x is ${x}"
My problem is with having more than one flag I want to pass to the command line.
I tried to use:
while getopts :b:f:c opt; do..., but it looks like it fails to work when I do not give the flags in the -b -f ... -c ... order (I get x is 0 when I expect it to be x is 1).
Moreover, this does not work when I want to give flags with the -- prefix (--frame instead of -f).
What should the code be for this purpose? I tried a few options but I cannot figure out what exactly the syntax should be.
Thank you in advance.
TL;DR
Options that take an argument have a colon after their character in the optstring. So getopts :b:f:c means:
you have a -b option that takes an argument (b:)
you have an -f option that takes an argument (f:)
you have a -c option that takes no argument (c)
use silent error reporting (the leading colon)
If you use while getopts :b:f:c opt; do... in your script and you type:
./script.sh -c -b -f
-f is considered as the argument of option -b, not as an option itself.
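To make the swallowing behavior concrete, here is a tiny self-contained parser using the question's optstring :b:f:c (the function name and summary format are just for the demo):

```shell
#!/bin/bash
# With optstring ":b:f:c", -b and -f each take an argument and -c does not.
# An option that takes an argument consumes the next word even when that
# word looks like another option.
parse() {
  local opt OPTIND=1 OPTARG summary=""
  while getopts :b:f:c opt; do
    case "$opt" in
      b)  summary+="b=$OPTARG " ;;
      f)  summary+="f=$OPTARG " ;;
      c)  summary+="c " ;;
      :)  summary+="missing-arg-for-$OPTARG " ;;  # silent mode: ':' means missing argument
      \?) summary+="unknown " ;;
    esac
  done
  printf '%s\n' "$summary"
}
parse -c -b -f       # -f is swallowed as -b's argument
parse -b x -f y -c   # the "expected" order
```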
Details
Considering this part of your question: "it looks like it fails to work when I do not give the flags in the -b -f ... -c ... order", let's assume you want the b option to have no argument, and the f and c options to have one.
If your option has no argument don't put a colon after it in the optstring parameter; else put the colon after the option character. With your simple example you could try:
#!/bin/bash
x=0
fopt=""
copt=""
while getopts bf:c: opt; do
case "$opt" in
b) x=1;;
f) echo "f option found"; fopt="$OPTARG";;
c) echo "c option found"; copt="$OPTARG";;
?) echo "";;
esac
done
echo "x is ${x}"
echo "fopt is ${fopt}"
echo "copt is ${copt}"
And then:
$ ./script.sh -b -c foo -f bar
c option found
f option found
x is 1
fopt is bar
copt is foo
$ ./script.sh -c foo -f bar
c option found
f option found
x is 0
fopt is bar
copt is foo
$ ./script.sh -c foo -b
c option found
x is 1
fopt is
copt is foo
But be careful: with the last example, if you omit the foo argument of the -c option:
./script.sh -c -b
c option found
x is 0
fopt is
copt is -b
See? -b became the missing argument and has not been considered as an option. Is it the cause of your problem?
Note: If you want to also use long options don't use the bash getopts builtin, use the getopt utility.
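If pulling in the getopt utility is not an option, a hand-rolled while/case loop can handle long options in plain bash. A minimal sketch, with option names (-b/--bare, -f/--frame) chosen loosely to match the question's flags:

```shell
#!/bin/bash
# Hand-rolled parsing loop that accepts long options (--bare, --frame VALUE,
# --frame=VALUE) alongside their short forms.
parse_long() {
  local x=0 frame=""
  while [ $# -gt 0 ]; do
    case "$1" in
      -b|--bare)  x=1 ;;
      -f|--frame) frame="$2"; shift ;;      # value in the next word
      --frame=*)  frame="${1#--frame=}" ;;  # value glued on with '='
      --)         shift; break ;;           # explicit end of options
      *)          break ;;                  # first non-option: stop parsing
    esac
    shift
  done
  echo "x=$x frame=$frame"
}
parse_long --frame 10 -b
```

Flags can now appear in any order, in either long or short form.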
I'm trying to execute a simple if else statement in a Makefile:
check:
if [ -z "$(APP_NAME)" ]; then \
echo "Empty" \
else \
echo "Not empty" \
fi
When I execute make check I get the following error:
if [ -z "" ]; then
/bin/bash: -c: line 1: syntax error: unexpected end of file
make: *** [check] Error 2
Any idea what I'm doing wrong?
I know I can use the following, but I have a lot of logic after the echos so I need to spread it out across multiple lines:
check:
[ -z "$(APP_NAME)" ] && echo "Empty" || echo "Not empty"
Change your make target to this (adding semicolons):
check:
if [ -z "$(APP_NAME)" ]; then \
echo "Empty"; \
else \
echo "Not empty"; \
fi
When the newlines are eaten by the trailing backslashes, each shell statement must be explicitly terminated with a semicolon. You cannot use real newlines in a Makefile recipe for conditional shell-script code (see Make-specific background).
[ -z "$(APP_NAME)" ],
echo "Empty", and
echo "Not empty" are all statements that need to be terminated (similar to pressing enter in a terminal after you typed in a command).
Make-specific background
make spawns a new shell for each line of a recipe, so you cannot use true multi-line shell code as you would, e.g., in a script file.
Taking it to an extreme, the following is possible in a shell script file, because there a newline terminates and evaluates each command (just as hitting enter in a terminal does):
if
[ 0 ]
then
echo "Foo"
fi
Listing 1
If you wrote this in a Makefile, though, each line of the recipe would be handed to its own shell: the lone if would be an incomplete statement in the first shell, and the condition [ 0 ] would then run in a fresh shell with no connection to the previous if.
In fact, make will not even get past the first line: the shell that receives the bare if exits with a syntax error instead of a success code, so make aborts the recipe.
In other words, if two commands in a make target are completely independent of each other (no conditions whatsoever), it is perfectly fine to separate them with a normal newline and let each execute in its own shell.
So, in order to have make evaluate multi-line conditional shell code correctly, you need to put the whole shell snippet on one logical line (so it is all evaluated in the same shell).
Hence, for evaluating the code in Listing 1 inside a Makefile, it needs to be translated to:
if \
[ 0 ]; \
then \
echo "Foo"; \
fi
The last command, fi, does not need a backslash because the logical line ends there; the spawned shell no longer needs to be kept open.
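A quick sanity check that the backslash-joined one-liner is the same shell code as the multi-line script ([ 0 ] tests the non-empty string "0" and is therefore true):

```shell
#!/bin/bash
# Listing 1, written once across real newlines and once joined onto a single
# logical line the way make would hand it to a shell; both print the same thing.
multiline() {
  if
    [ 0 ]
  then
    echo "Foo"
  fi
}
oneline() {
  if [ 0 ]; then echo "Foo"; fi
}
multiline
oneline
```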
This is shell syntax, not makefiles. You need to familiarize yourself with the rules surrounding using backslashes to enter long commands into a single line of shell.
In your example, after the backslash newline pairs are removed, it looks like this:
if [ -z "$(APP_NAME)" ]; then echo "Empty" else echo "Not empty" fi
Maybe you can now see what the issue is. The shell interprets that as:
if [ -z "$(APP_NAME)" ]; then
followed by a single long command:
echo "Empty" else echo "Not empty" fi
which would echo the text Empty else echo Not empty fi, except that since the if is never closed by a fi token of its own, the shell reports a syntax error instead.
In shell syntax you need to add a semicolon after every individual command, so the shell knows how to split it up:
check:
if [ -z "$(APP_NAME)" ]; then \
echo "Empty"; \
else \
echo "Not empty"; \
fi
Note the semicolons after the echo commands telling the shell that the command arguments end there.
Other answers have already pointed out that the problem is a combination of Makefile design and shell syntax. The design of Makefiles makes it really cumbersome to write complex recipes. Often it is better to rethink the process and either rewrite parts of the Makefile or move the complexity into a shell script.
Here is example of your recipe put in a shell script:
check:
sh check.sh "$(APP_NAME)"
and the script:
if [ -z "$1" ]; then
echo "Empty"
else
echo "Not empty"
fi
advantage: you have all the power and flexibility of a shell script without any of the makefile awkwardness. You just need to pass the right arguments.
disadvantage: you have additional files in your build process, and your Makefile's recipe logic is spread across multiple files.
If the condition is "simple" you might use the conditional construct from make itself. In your case I would argue that it is just barely simple enough to tolerate, but any more complexity and it will go in a shell script.
Here is how to write conditional recipes using makefile features:
check:
ifdef APP_NAME
echo "Not empty"
else
echo "Empty"
endif
again with annotation
check: # target name
ifdef APP_NAME # makefile conditional syntax
echo "Not empty" # recipe if condition true
else # makefile conditional syntax
echo "Empty" # recipe if condition false
endif # makefile conditional syntax
For example, if APP_NAME is defined the rule will effectively look like this during execution:
check:
echo "Not empty"
This specific example is probably semantically equivalent to your makefile. I cannot say for sure because I did not test thoroughly.
It is important to know that this conditional is evaluated before the recipe is executed. That means the value of variables that get computed values might be different.
advantage: all build commands in one place.
disadvantage: headaches trying to figure out when makefile does variable assignment and evaluation if the conditional does not behave like you expected.
read here for more info:
https://www.gnu.org/software/make/manual/html_node/Conditional-Example.html
https://www.gnu.org/software/make/manual/html_node/Conditional-Syntax.html
https://www.gnu.org/software/make/manual/html_node/Reading-Makefiles.html
see also
Passing arguments to "make run"
I have a common run-class.sh file defined as follows:
#!/bin/bash
if [ -z "$MAIN_CLASS" ] ; then
echo "Do not run this script on its own. It's intended to be included in other commands."
exit 1
fi
JAVA_ARGS=-client -Xmx16M
export JAVA_ARGS
DIR=`dirname "$0"`
# set jars
JARS=
for JAR in $DIR/../lib/*.jar; do JARS=$JAR:$JARS; done
# set java classpath and export
CLASSPATH=$DIR/../conf/:$DIR/../conf/*:$JARS
export CLASSPATH
java $JAVA_ARGS $MAIN_CLASS "$@"
and another test-class.sh script as follows to invoke a java class:
#!/bin/bash
MAIN_CLASS="com.my.package.TestClass"
. run-class.sh
When I run the test-class.sh file as follows:
>./test-class.sh
I get a console message saying:
run-class.sh: line 8: -Xmx16M: command not found
I'm not sure why this is incorrect when I'm already exporting the JAVA_ARGS.
Use quotes in the JAVA_ARGS assignment:
JAVA_ARGS="-client -Xmx16M"
I find using bash arrays tends to make things more robust:
#!/bin/bash
if [ -z "$MAIN_CLASS" ] ; then
echo "Do not run this script on its own. It's intended to be included in other commands."
exit 1
fi
# use an array
java_args=(-client -Xmx16M)
dir=$(dirname "$0")
# set java classpath and export
cp=(
"$dir"/../conf/
"$dir"/../conf/"*" # I assume you want a literal star here
"$dir"/../lib/*.jar
)
export CLASSPATH=$( IFS=":"; echo "${cp[*]}" )
java "${java_args[@]}" "$MAIN_CLASS" "$@"
Other notes:
don't use ALL_CAPS variable names, except for environment variables.
read http://mywiki.wooledge.org/BashFAQ/050
You can set variables that are localized to a command.
Most everyone knows simple environment vars, set like this -
$: x=foo
$: echo $x
foo
But you can set a local override.
$: x=bar eval 'echo $x' # <<--- uses echo's local x
bar
$: echo $x
foo
(Just don't be fooled by a false test...
$: x=bar echo $x # $x parsed BEFORE passing to echo
foo
...which will confuse you if you don't realize that $x was expanded when the line was parsed, so echo never saw the change.)
So, by saying
JAVA_ARGS=-client -Xmx16M
without quotes, the command interpreter is assuming this is what you are doing, and failing because -Xmx16M isn't found. By putting quotes around it you make the entire value part of the assignment.
JAVA_ARGS='-client -Xmx16M'
This will do what you wanted.
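The per-command override behavior is easy to check with a quick script; a child process sees the temporary value while the parent's variable keeps its own:

```shell
#!/bin/bash
# Per-command environment override: the assignment before the command is
# visible only to that command's process.
x=foo
child=$(x=bar bash -c 'echo "$x"')  # single quotes: expanded by the child, not here
echo "child saw: $child"
echo "parent still has: $x"
```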
I'm trying to write a "phone home" script, which will log the exact command line (including any single or double quotes used) into a MySQL database. As a backend, I have a cgi script which wraps the database. The scripts themselves call curl on the cgi script and include as parameters various arguments, including the verbatim command line.
Obviously I have quite a variety of quote escaping to do here and I'm already stuck at the bash stage. At the moment, I can't even get bash to print verbatim the arguments provided:
Desired output:
$ ./caller.sh -f -hello -q "blah"
-f -hello -q "blah"
Using echo:
caller.sh:
echo "$@"
gives:
$ ./caller.sh -f -hello -q "blah"
-f -hello -q blah
(I also tried echo $@ and echo $*)
Using printf %q:
caller.sh:
printf %q $@
printf "\n"
gives:
$ ./caller.sh -f -hello -q "blah"
-f-hello-qblah
(I also tried printf %q "$@")
I would welcome not only help to fix my bash problem, but any more general advice on implementing this "phone home" in a tidier way!
There is no possible way you can write caller.sh to distinguish between these two commands invoked on the shell:
./caller.sh -f -hello -q "blah"
./caller.sh -f -hello -q blah
They are exactly equivalent.
If you want to make sure the command receives special characters, surround the argument with single quotes:
./caller.sh -f -hello -q '"blah"'
Or if you want to pass just one argument to caller.sh:
./caller.sh '-f -hello -q "blah"'
You can get this info from the shell history:
function myhack {
line=$(history 1)
line=${line#* }
echo "You wrote: $line"
}
alias myhack='myhack #'
Which works as you describe:
$ myhack --args="stuff" * {1..10} $PATH
You wrote: myhack --args="stuff" * {1..10} $PATH
However, quoting is just the user's way of telling the shell how to construct the program's argument array. Asking to log how the user quotes their arguments is like asking to log how hard the user punched the keys and what they were wearing at the time.
To log a shell command line which unambiguously captures all of the arguments provided, you don't need any interactive shell hacks:
#!/bin/bash
line=$(printf "%q " "$@")
echo "What you wrote would have been indistinguishable from: $line"
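To see the round trip in action (the replay uses eval, which is safe here only because the line came from printf %q):

```shell
#!/bin/bash
# printf %q escapes each argument so the logged line can be replayed later
# and yield the exact same argument array.
logline=$(printf '%q ' -f -hello -q 'two words' '$PATH')
echo "logged: $logline"
eval "set -- $logline"   # rebuild the positional parameters from the log
echo "argc after replay: $#"
```

Spaces and $ survive the trip intact, which plain echo "$@" cannot guarantee.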
I understand you want to capture the arguments given by the caller.
Firstly, the quotes used by the caller protect parts of the command line during interpretation; they do not survive as part of the arguments themselves.
An example: if someone calls your script with the single argument "Hello  World!" containing two spaces between Hello and World, then you must ALWAYS quote $1 in your script so as not to lose that information.
If you want to log all arguments correctly escaped (in case they contain, for example, consecutive spaces...) you HAVE to use "$@" with double quotes. "$@" is equivalent to "$1" "$2" "$3" "$4" etc.
So, to log arguments, I suggest the following at the start of the caller:
i=1
for arg in "$@"; do
echo "arg$i=$arg"
let ++i
done
## Example of calls to the previous script
#caller.sh '1' "2" 3 "4 4" "5 5"
#arg1=1
#arg2=2
#arg3=3
#arg4=4 4
#arg5=5 5
@Flimm is correct: there is no way to distinguish between arguments "foo" and foo, simply because the quotes are removed by the shell before the program receives them. What you need is "$@" (with the quotes).
I'm trying to wrap a bash script b with a script a.
However I want to pass the options passed to a also to b as they are.
#!/bin/bash
# script a
./b ${@:$OPTIND}
This will also print $1 (if any). What's the simplest way not to?
So calling:
./a -c -d 5 first-arg
I want b to execute:
./b -c -d 5 # WITHOUT first-arg
In bash, you can build an array containing the options, and use that array to call the auxiliary program.
call_b () {
typeset -i i=0
typeset -a a; a=()
while ((++i < OPTIND)); do # for i=1..OPTIND-1
a+=("${!i}") # append parameter $i to $a
done
./b "${a[@]}"
}
call_b "$@"
In any POSIX shell (ash, bash, ksh, zsh under sh or ksh emulation, …), you can build a list with "$1" "$2" … and use eval to set different positional parameters.
call_b () {
i=1
while [ $i -lt $OPTIND ]; do
a="$a \"\$$i\""
i=$(($i+1))
done
eval set -- $a
./b "$#"
}
call_b "$@"
As often, this is rather easier in zsh.
./b "${(@)@[1,$OPTIND-1]}"
Why are you using ${@:$OPTIND} and not just $@ or $*?
The ${parameter:index} syntax says to use index to parse $parameter. If you're using $@, it'll use index as an index into the parameters.
$ set one two three four #Sets "$@"
$ echo $@
one two three four
$ echo ${@:0}
one two three four
$ echo ${@:1}
one two three four
$ echo ${@:2}
two three four
$OPTIND is really only used if you're using getopts: it holds the index of the next parameter in $@ that getopts will process. According to the bash manpage:
OPTIND is initialized to 1 each time the shell or a shell script is invoked.
Which may explain why you're constantly getting the value of 1.
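The slicing behavior is easy to check inside a script; the function names and values below are just for illustration:

```shell
#!/bin/bash
# ${@:offset} and ${@:offset:length} slice the positional parameters.
slice_from_2() {
  set -- one two three four   # replace this function's own parameters
  echo "${@:2}"
}
first_two() {
  set -- one two three four
  echo "${@:1:2}"
}
slice_from_2
first_two
```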
EDITED IN RESPONSE TO EDITED QUESTION
@David - "./b $@" still prints the arguments passed to a (see Q edit). I want to pass only the options of a and not the args
So, if I executed:
$ a -a foo -b bar -c fubar barfu barbar
You want to pass to b:
$ b -a foo -b bar -c fubar
but not
$ b -a foo -b bar -c fubar barfu barbar
That's going to be tricky...
Is there a reason why you can't pass the whole line to b and just ignore it?
I believe it might be possible to use regular expressions:
$ echo "-a bar -b foo -c barfoo foobar" | sed 's/\(-[a-z] [^- ][^- ]*\) *\([^-][^-]*\)$/\1/'
-a bar -b foo -c barfoo
I can't vouch that this regular expression will work in all situations (i.e. what if there are no parameters?). Basically, I'm anchoring it to the end of the line, and then matching for the last parameter and argument and the rest of the line. I do a replace with just the last parameter and argument.
I've tested it in a few situations, but you might simply be better off using getopts to capture the arguments and then passing those to b yourself, or simply have b ignore those extra arguments if possible.
In order to separate the command options from the regular arguments, you need to know which options take arguments, and which stand alone.
In the example command:
./a -c -d 5 first-arg
-c and -d might be standalone options and 5 first-arg the regular arguments
5 might be an argument to the -d option (this seems to be what you mean)
-d might be an argument to the -c option and (as in the first case) 5 first-arg the regular arguments.
Here's how I'd handle it, assuming -a, -b, -c and -d are the only options, and that -b and -d are the only ones that take an option argument. Note that it is necessary to parse all of the options in order to figure out where they end.
#!/bin/bash
while getopts ab:cd: OPT; do
case "$OPT" in
a|b|c|d) : ;; # Don't do anything, we're just here for the parsing
?) echo "Usage: $0 [-ac] [-b something] [-d something] [args...]" >&2
exit 1 ;;
esac
done
./b "${@:1:$((OPTIND-1))}"
The entire while loop is there just to compute OPTIND. The ab:cd: in the getopts command defines what options are allowed and which ones take arguments (indicated by colons). The cryptic final expression means "elements 1 through OPTIND-1 of the argument array, passed as separate words".
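A self-contained version of this approach, echoing the option slice instead of calling ./b (the option letters match the answer's example):

```shell
#!/bin/bash
# Parse with getopts purely to locate where the options end, then echo the
# slice that would be forwarded to the wrapped program.
forward_opts() {
  local OPT OPTIND=1 OPTARG
  while getopts ab:cd: OPT; do
    case "$OPT" in
      a|b|c|d) : ;;                              # parsing only; nothing to do
      *) echo "usage error" >&2; return 1 ;;
    esac
  done
  echo "${@:1:$((OPTIND-1))}"                    # options, without the regular args
}
forward_opts -c -d 5 first-arg
```

After the loop, OPTIND points at the first regular argument, so the slice 1..OPTIND-1 is exactly the options.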