How to use a variable inside the eval function from an input variable in a Makefile

I have a Makefile where I have to define a variable using a generic function where a parameter is another variable. Here is my code:
testX :
@read -p "Enter Size Stack : " REP; \
$(eval ARG=$(shell shuf -i 0-50 -n $$REP))
echo $(ARG)
The problem is that shuf does not recognize $$REP.
Thanks for your answers.

You can't do this. Make recipe commands are invoked in shells, and shells have their own variables which exist only in those shells. Once the shell exits, those variables are gone. You can't use them in subsequent shells.
Also, make will expand all make variables and functions first before it runs any shells. So by the time make runs the shell script that performs the read, the eval is already run, and the make variable ARG is already expanded.
In any event, it's very unusual and not really a good idea to ask for input inside a make recipe. That's just not how make is designed to work. Better is to use a command line variable:
$ cat Makefile
testX :
ARG=$$(shuf -i 0-50 -n $(SIZE)) ; \
echo $$ARG
$ make SIZE=50
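To see the expansion order concretely, here is a plain-shell sketch of what that recipe body becomes once make has expanded $(SIZE) before invoking the shell (SIZE=5 is just an example value; shuf is from GNU coreutils):

```shell
# Sketch of the recipe body as the shell receives it: make already
# replaced $(SIZE) with a literal number before the shell started.
SIZE=5
ARG=$(shuf -i 0-50 -n "$SIZE")   # ARG is a shell variable, local to this shell
echo "$ARG"
```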

Related

How to evaluate bash function arguments as command with possible environment overrides?

How do I write a function in bash (I can rely on it being v4+) that, given words constituting a command with possible environment overrides, executes this command in the current shell?
For example, given
f cd src
f CXX="ccache gcc" make -k XOPTIONS="--test1 --test2"
the function f would do approximately the same thing as simply having these lines in the shell script without the f up front?
A few unsuccessful attempts.
This tries to evaluate the environment override CXX="ccache gcc" as a command:
f() { "$@" ; }
This loses word-quoting on all arguments, breaking single argument words on spaces:
f() { eval "$@" ; }
This handles the environment overrides, but runs the command in a subshell, as env(1) is not a bash builtin:
f() { env -- "$@" ; }
This question came up multiple times on SO and Unix SE, but I have never seen it asked about supporting all three important parts, namely: environment overrides; execution in the current shell; and correct handling of arguments containing spaces (and other characters that are lexically special to bash).
One thing I could potentially use is that environment overrides are rarely used with builtins (but cf. IFS= read...), so I could select between the "$@" ; and eval patterns based on whether $1 is syntactically a variable assignment. But that is, again, not as simple as spotting a = in it, as the equal sign may be a quoted part of a command word, albeit that is not likely sane. Still, I usually prefer correct code to mostly correct code, and this approach takes two consecutive leaps of faith.
Addressing a possible question of why I need a function replicating the default behavior of the shell ("just drop the f"): in reality, f() is more complex than just running a command, implementing a pattern repeated in a few dozen locations in the script; this is only the part I cannot get right.
If you can make eval see your arguments properly quoted, it should work. To this end, you can use the %q format specification of printf, which works as follows:
$ printf '%q ' CXX="ccache gcc" make -k XOPTIONS="--test1 --test2"
CXX=ccache\ gcc make -k XOPTIONS=--test1\ --test2
This would result in a function like
f () {
eval "$(printf '%q ' "$@")"
}
Notice that this appends an extra space at the end of the command, but this shouldn't hurt.
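As a sanity check, here is a small self-contained sketch of that function (the variable names are illustrative; %q is a bash printf extension) showing that both spaced arguments and environment overrides survive the round-trip:

```shell
# Minimal sketch of the printf '%q' technique from the answer above
# (bash's printf supports %q).
f() { eval "$(printf '%q ' "$@")"; }

# An argument containing a space stays one word through the round-trip:
words=$(f printf '%s\n' "two words" single)

# A leading VAR=value override applies to the command (a child sh here)
# without the quoting being lost:
override=$(f FOO="a b" sh -c 'echo "$FOO"')
```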
Tricky. You could do this, but it's going to pollute the environment of the shell:
f() {
# process any leading "var=value" assignments
while [[ $1 == ?*=* ]]; do
declare -x "$1"
shift
done
"$@"
}
Just did a quick test: the env vars declared in the function are still local to the scope of the function and will not actually pollute the script's environment.
$ f() {
declare -x FOO=bar
sh -c 'echo subshell FOO=$FOO'
echo function FOO=$FOO
}
$ unset FOO
$ f
subshell FOO=bar
function FOO=bar
$ echo main shell FOO=$FOO
main shell FOO=
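For reference, a POSIX-leaning sketch of the leading-assignment loop, with the caveat the thread raises: plain export, unlike bash's declare -x inside a function (shown above to be function-local), does persist in the calling shell:

```shell
# Hypothetical sketch: peel off leading VAR=value words, export them,
# then run the rest as the command. Note that export persists in the
# calling shell, unlike declare -x inside a bash function.
f() {
    while true; do
        case $1 in
            [A-Za-z_]*=*) export "$1"; shift ;;
            *) break ;;
        esac
    done
    "$@"
}

# The override is visible to the child process:
out=$(f GREETING="hi there" sh -c 'echo "$GREETING"')
```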

How can my Bash script see a loop variable inside a command line argument?

I can't seem to "access" the value of my loop variable when executing a command line argument in a Bash script. I'd like to be able to write something like
#!/bin/bash
for myvar in 1 2 3
do
"$@"
done
and run the script as ./myscript echo "${myvar}".
When I do this, the lines are echoed as empty. I probably don't have a firm grasp on exactly what's being evaluated where.
Is what I want even possible?
$myvar is evaluated before the child script is even run, so it can't be evaluated within.
That is, when you invoke your script as:
./myscript echo "${myvar}"
what is actually being called is:
./myscript echo ''
presuming that $myvar is empty in the enclosing environment.
If you wanted to be evil (and this is evil, and will create bugs, and you should not do it), you could use eval:
#!/bin/bash
for myvar in 1 2 3; do
eval "$1"
done
...and then call as:
./myscript 'echo "${myvar}"'
Something less awful would be to export a variable:
#!/bin/bash
export myvar
for myvar in 1 2 3; do
"$@"
done
...but even then, the way you call the script will need to avoid premature evaluation. For instance, this will work in conjunction with the above:
./myscript bash -c 'echo "$myvar"'
...but that basically has you back to eval. On the other hand:
./myscript ./otherscript
...will do what you want, presuming that otherscript refers to $myvar somewhere within. (This need not be a shell script, or even a script; it can be an executable or other command, and it will still be able to find myvar in the environment).
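Putting the pieces together, here is a small runnable sketch of that pattern (the function stands in for the script, and a child sh plays the role of otherscript):

```shell
# Sketch of the export-plus-child-process pattern described above.
# myscript_body stands in for the script; the child sh stands in for
# otherscript, reading myvar from its environment on each iteration.
myscript_body() {
    export myvar
    for myvar in 1 2 3; do
        "$@"
    done
}

out=$(myscript_body sh -c 'echo "myvar is $myvar"')
```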
What's the real goal (ie. business purpose) you're trying to achieve? There's probably a way to accomplish it better aligned with best practices.

Can you wrapper each command in GNU's make?

I want to inject a transparent wrapper command around each shell command in a makefile, something like the time shell command. (However, it is not actually the time command; it is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
$(Q)$(TIME) foo_compiler
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$@" | awk '{if (match("Process PID=\d+ runs in (64|32) bit",$0) == 0) {print $0}}'
# EOF
I don't think there is a way to do what you want within GNU Make itself.
I have done things like modify the PATH env variable in the Makefile so that a directory containing my script, symlinked under the names of all the bins I wanted wrapped, was searched before the actual bins. The script would then look at how it was called and exec the actual bin under the wrapping command,
i.e. exec time "$0" "$@"
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to not put the -c in, since that is required for most shells. -c is passed as the first argument ($1) and <command> is passed as a single argument string as the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
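To make the calling convention concrete, here is a toy version of such a wrapper, with a visible echo marker standing in for the time prefix:

```shell
# Toy SHELL wrapper: make would invoke it as
#   wrapper -c '<command string>'
# so $1 is "-c" and $2 is the entire recipe line. An echo marker
# stands in for the `time` (or strace, etc.) prefix here.
wrapper() {
    shift                # drop the "-c"
    echo "WRAP: $1"      # the prefix action
    eval "$1"            # run the command string itself
}

out=$(wrapper -c 'echo hello; echo world')
```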

Unknown error sourcing a script containing 'typeset -r' wrapped in command substitution

I wish to source a script, print the value of a variable this script defines, and then have this value be assigned to a variable on the command line with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93 and I am wondering why.
$ cat typeset_err.ksh
#!/bin/ksh
unset _typeset_var
typeset -i -r _typeset_var=1
DIR=init # this is the variable I want to print
When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows:
$ A=$(. ./typeset_err.ksh; print $DIR)
$ echo $A
init
When run on ksh93 (in this case, a Linux machine), the output is as follows:
$ A=$(. ./typeset_err.ksh; print $DIR)
-ksh: _typeset_var: is read only
$ print $A
($A is undefined)
The above is just an example script. The actual thing I wish to accomplish is to source a script that sets values to many variables, so that I can print just one of its values, e.g. $DIR, and have $A equal that value. I do not know in advance the value of $DIR, but I need to copy files to $DIR during execution of a different batch script. Therefore the idea I had was to source the script in order to define its variables, print the one I wanted, then have that print's output be assigned to another variable via $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables.
The typeset -r line at the beginning is what triggers the error. The script I'm sourcing contains it to provide a semaphore of sorts, preventing the script from being sourced more than once in the environment. (There is an if statement in the real script that checks for _typeset_var = 1 and exits if it is already set.) So I know I can take this out and get $DIR to print fine, but the constraints of the problem include keeping the typeset -i -r.
In the example script I put an unset in first, to ensure _typeset_var isn't already defined. By the way I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page for ksh.
There are ways to code around this error. The favorite now is to not use typeset, but just set the semaphore without typeset (e.g. _typeset_var=1), but the error with the code as-is remains as a curiosity to me, and I want to see if anyone can explain why this is happening.
By the way, another idea I abandoned was to grep the variable I need out of its containing script, then print that one variable for $A to be set to; however, the variable ($DIR in the example above) might be set to another variable's value (e.g. DIR=$dom/init), and that other variable might be defined earlier in the script; therefore, I need to source the entire script to make sure all variables are defined, so that $DIR is correctly defined when sourcing.
It works fine for me in ksh93 (Version JM 93t+ 2009-05-01). If I do this, though:
$ . ./typeset_err.ksh
$ A=$(. ./typeset_err.ksh; print $DIR)
-ksh: _typeset_var: is read only
So it may be that you're getting that variable typeset -r in the current environment somehow.
Try this
A=$(ksh -c ". ./typeset_err.ksh && print \$DIR")
or
A=$(env -i ksh -c ". ./typeset_err.ksh && print \$DIR")
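The suggestion works because readonly attributes are not inherited by a freshly executed shell. In miniature (POSIX sh, illustrative variable names):

```shell
# Why a fresh shell helps: a readonly variable cannot be re-assigned
# in the shell (or a subshell) that inherited it, but a separately
# executed shell starts with a clean slate.
same=$(sh -c 'readonly SEM=1; ( SEM=2 ) 2>/dev/null || echo blocked')
fresh=$(sh -c 'readonly SEM=1; sh -c "SEM=2; echo ok"')
```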

bash: function + source + declare = boom

Here is a problem:
In my bash scripts I want to source several file with some checks, so I have:
if [ -r foo ] ; then
source foo
else
logger -t $0 -p crit "unable to source foo"
exit 1
fi
if [ -r bar ] ; then
source bar
else
logger -t $0 -p crit "unable to source bar"
exit 1
fi
# ... etc ...
Naively I tried to create a function that do:
function safe_source() {
if [ -r "$1" ] ; then
source "$1"
else
logger -t $0 -p crit "unable to source $1"
exit 1
fi
}
safe_source foo
safe_source bar
# ... etc ...
But there is a snag there.
If one of the files foo, bar, etc. have a global such as --
declare GLOBAL_VAR=42
-- it will effectively become:
function safe_source() {
# ...
declare GLOBAL_VAR=42
# ...
}
thus a global variable becomes local.
The question:
An alias in bash seems too weak for this, so must I unroll the above function, and repeat myself, or is there a more elegant approach?
... and yes, I agree that Python, Perl, Ruby would make my life easier, but when working with legacy system, one doesn't always have the privilege of choosing the best tool.
A bit of a late answer, but declare now supports a -g option, which makes a variable global when used inside a function. The same works in a sourced file.
If you need a global (read exported) variable, use:
declare -g DATA="Hello World, meow!"
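A quick check of that behavior (declare -g requires bash 4.2 or later; the demo runs in an explicit bash child):

```shell
# declare -g sketch: inside a function, declare normally makes the
# variable local; with -g it lands in the global scope instead.
out=$(bash -c '
  f() { declare -g GLOBAL_VAR=42; }
  f
  echo "GLOBAL_VAR is ${GLOBAL_VAR:-unset}"
')
```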
Yes, Bash's 'eval' command can make this work. 'eval' isn't very elegant, and it sometimes can be difficult to understand and debug code that uses it. I usually try to avoid it, but Bash often leaves you with no other choice (like the situation that prompted your question). You'll have to weigh the pros and cons of using 'eval' for yourself.
Some background on 'eval'
If you're not familiar with 'eval', it's a Bash built-in command that expects you to pass it a string as its parameter. 'eval' dynamically interprets and executes your string as a command in its own right, in the current shell context and scope. Here's a basic example of a common use (dynamic variable assignment):
$> a_var_name="color"
$> eval ${a_var_name}="blue"
$> echo -e "The color is ${color}."
The color is blue.
See the Advanced Bash Scripting Guide for more info and examples: http://tldp.org/LDP/abs/html/internal.html#EVALREF
Solving your 'source' problem
To make 'eval' handle your sourcing issue, you'd start by rewriting your function, 'safe_source()'. Instead of actually executing the command, 'safe_source()' should just PRINT the command as a string on STDOUT:
function safe_source() { echo eval " \
if [ -r $1 ] ; then \
source $1 ; \
else \
logger -t $0 -p crit \"unable to source $1\" ; \
exit 1 ; \
fi \
"; }
Also, you'll need to change your function invocations, slightly, to actually execute the 'eval' command:
`safe_source foo`
`safe_source bar`
(Those are backticks/backquotes, BTW.)
How it works
In short:
We converted the function into a command-string emitter.
Our new function emits an 'eval' command invocation string.
Our new backticks call the new function in a subshell context, returning the 'eval' command string output by the function back up to the main script.
The main script executes the 'eval' command string, captured by the backticks, in the main script context.
The 'eval' command re-parses and executes the command string in the main script context, running the whole if-then-else block, including (if the file exists) executing the 'source' command.
It's kind of complex. Like I said, 'eval' is not exactly elegant. In particular, there are a couple of special things you should notice about the changes we made:
The entire IF-THEN-ELSE block has become one whole double-quoted string, with backslashes at the end of each line "hiding" the newlines.
Some of the shell special characters (like '"') have been backslash-escaped, while others (like '$') have been left un-escaped.
'echo eval' has been prepended to the whole command string.
Extra semicolons have been appended to all of the lines where a command gets executed to terminate them, a role that the (now-hidden) newlines originally performed.
The function invocation has been wrapped in backticks.
Most of these changes are motivated by the fact that 'eval' won't handle newlines. It can only deal with multiple commands if we combine them into a single line delimited by semicolons, instead. The new function's line breaks are purely a formatting convenience for the human eye.
If any of this is unclear, run your script with Bash's '-x' (debug execution) flag turned on, and that should give you a better picture of exactly what's happening. For instance, in the function context, the function actually produces the 'eval' command string by executing this command:
echo eval ' if [ -r <INCL_FILE> ] ; then source <INCL_FILE> ; else logger -t <SCRIPT_NAME> -p crit "unable to source <INCL_FILE>" ; exit 1 ; fi '
Then, in the main context, the main script executes this:
eval if '[' -r <INCL_FILE> ']' ';' then source <INCL_FILE> ';' else logger -t <SCRIPT_NAME> -p crit '"unable' to source '<INCL_FILE>"' ';' exit 1 ';' fi
Finally, again in the main context, the eval command executes these two commands (if the file exists):
'[' -r <INCL_FILE> ']'
source <INCL_FILE>
Good luck.
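The emit-then-eval mechanism, reduced to its smallest form (the names here are illustrative):

```shell
# The function prints a command string instead of executing it; the
# backticks substitute that string into the main shell's command line,
# where eval runs it in the *current* shell context.
emit() { echo eval "VAR_FROM_EMIT=$1"; }
`emit hello`
```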
declare inside a function makes the variable local to that function. export affects the environment of child processes, not the current or parent environments.
You can set the values of your variables inside the functions and do the declare -r, declare -i or declare -ri after the fact.
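A runnable sketch of that "attributes after the fact" suggestion (bash; a function stands in for the sourced file):

```shell
# The sourced code (simulated by a function here) assigns plainly, so
# the variable stays global; the caller then adds the readonly and
# integer attributes afterwards at top level with declare -ri.
out=$(bash -c '
  sourced_file() { GLOBAL_VAR=42; }   # plain assignment stays global
  sourced_file
  declare -ri GLOBAL_VAR              # now readonly and integer
  echo "$GLOBAL_VAR"
')
```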
