I want to inject a transparent wrapper command around each shell command in a makefile, something like the time shell command. (However, it is not time itself; it is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
	$(Q)$(TIME) foo_compiler
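Because the prefix is just a make variable, it can be overridden per invocation (foo and foo_compiler come from the example above; strace -f is only an illustrative alternative wrapper):
# Swap the wrapper for this one build
make TIME='strace -f' foo
# Drop the wrapper and silence command echoing
make TIME= Q=@ foo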
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$@" | awk '{ if (match($0, /Process PID=[0-9]+ runs in (64|32) bit/) == 0) print $0 }'
# EOF
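To plug the wrapper in, point make's SHELL at it, either in the Makefile or on the command line (the path below is only illustrative):
# Run the whole build with every recipe line going through the strace wrapper
make SHELL=/home/rbroger1/bin/strace_shell.sh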
I don't think there is a way to do what you want within GNU Make itself.
I have done things like modifying the PATH environment variable in the Makefile so that a directory containing my script, symlinked under the names of all the binaries I wanted wrapped, was searched before the actual binaries. The script would then look at how it was called and exec the actual binary under the wrapper command,
i.e. exec time "$0" "$@"
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
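A minimal sketch of that PATH trick (the ~/wrapbin directory, the wrap.sh name, and the use of time are all assumptions for illustration):
#!/bin/bash
# wrap.sh - symlinked as gcc, ld, etc. inside ~/wrapbin, which the Makefile
# prepends to PATH. Strip ~/wrapbin back off PATH, locate the real binary,
# then exec it under the wrapper command.
name=$(basename "$0")
real=$(PATH=${PATH#"$HOME/wrapbin:"} command -v "$name")
exec time "$real" "$@"
The Makefile side only needs something like export PATH := $(HOME)/wrapbin:$(PATH) for the symlinks to take effect.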
Update
I defer to Gilles's answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to not put the -c in, since that is required for most shells. -c is passed as the first argument ($1) and <command> is passed as a single argument string as the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
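A minimal Makefile hookup for that wrapper (a sketch; the ./timeshell.sh filename is an assumption, and the script must be made executable):
# Each recipe line below is invoked as ./timeshell.sh -c '<command>',
# so the wrapper's eval time "$2" times every command.
SHELL = ./timeshell.sh

all:
	sleep 1
	echo done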
Related
I have a Makefile where I have to define a variable using a generic function whose parameter is another variable. Here is my code:
testX :
	@read -p "Enter Size Stack : " REP; \
	$(eval ARG=$(shell shuf -i 0-50 -n $$REP))
	echo $(ARG)
The problem is that shuf does not recognize $$REP.
Thanks for your answers.
You can't do this. Make recipe commands are invoked in shells, and shells have their own variables which exist only in those shells. Once the shell exits, those variables are gone. You can't use them in subsequent shells.
Also, make will expand all make variables and functions first before it runs any shells. So by the time make runs the shell script that performs the read, the eval is already run, and the make variable ARG is already expanded.
In any event, it's very unusual and not really a good idea to ask for input inside a make recipe. That's just not how make is designed to work. Better is to use a command line variable:
$ cat Makefile
testX :
	ARG=$$(shuf -i 0-50 -n $(SIZE)) ; \
	echo $$ARG
$ make SIZE=50
I am writing a shell script practice.sh. I want to pass my first command-line argument $1 to the ls command inside the script. E.g. if I run the script in a terminal as
$ bash practice.sh *.mp3
I want the argument *.mp3 to be used by the ls command.
#!/bin/bash
output=$ls $1
It doesn't work. Any help?
The obvious answer for what you say you want is just
#!/bin/bash
ls "$1"
which will run ls, passing it (just) the first argument to the script.
However, you also say you want to run this like practice.sh *.mp3, which runs the script with many arguments (not just one): the *.mp3 will be expanded to all of the .mp3 files in the current directory. For that, you likely want something more like
#!/bin/bash
ls "$#"
which will pass all of the arguments to your script (however many there are) to the ls command.
These scripts will just run ls with its stdout connected to whatever your script has its stdout connected to, so the output will (likely) just appear on your terminal. If you instead want to capture the output of the ls command (so you can do something else with it), you need something like
#!/bin/bash
output=$(ls "$#")
which will run ls with all the arguments, and capture the output in the variable $output. You can then do things with that variable.
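For example (an illustrative follow-up, not part of the original answer), you could report how many entries were captured:
#!/bin/bash
output=$(ls "$@")
# Count the captured entries, one per line; empty output counts as zero
count=$(printf '%s\n' "$output" | grep -c .)
echo "ls reported $count entries"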
Use command substitution to record the output of the command in the variable output:
output=$(ls $1)
This will record the output of the command ls $1 in the variable output.
You can then use echo $output to print out your output.
You can read more about shell expansion in the GNU Bash reference manual.
I want to make a bash script which invokes some simple commands in the following way
./myscript magicword
would invoke
#!/bin/bash
cat * | grep "$#"
However, I want to add one more level of complexity, for example toggling case sensitivity. Thus I want to include the parsing of arguments such as
./myscript --case-insensitive magicword
which should be interpreted by the script as grep -i "$@".
I have seen many tutorials about bash scripting with arguments and case structures, but those mostly deal with executing entirely different commands per option, or with assigning variables (not with actually using the arguments as command options).
I couldn't find a syntax such as grep $1 "$@" or similar, where $1 would expand either to nothing or to the chosen option.
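One way to get that effect (a sketch under my own assumptions, not from an accepted answer) is to collect grep options in a bash array and shift them off before the remaining arguments:
#!/bin/bash
# myscript: optional --case-insensitive flag, then the word(s) to search for
opts=()
if [ "$1" = "--case-insensitive" ]; then
    opts+=(-i)
    shift
fi
cat * | grep "${opts[@]}" "$@"
With no flag, "${opts[@]}" expands to nothing and the pipeline is plain cat * | grep "$@"; with the flag, it becomes cat * | grep -i "$@".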
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not in the second, so I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
echo "$#" >> /tmp/cmd.log
"$#"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
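For compound commands that && would break up, one hedged workaround (an assumption, not part of the original answer) is to hand the whole thing to a sub-shell so it is logged and run as a single unit:
log bash -c "cd $NEW_PLACE && command.py --flag $FOO"
The trade-off is that the variables are expanded before the string reaches the inner bash, so quoting needs extra care.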
Are you looking for set -x (or bash -x)? This writes every command to standard error before it is executed.
1. Use script and you will get everything archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args....
3. Use set -x in your current bash (your commands will be echoed with globs and variables substituted).
4. Combine 2 and 3 with 1 (a one-line example follows).
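For instance, items 1 and 2 can be combined in one line (assuming the util-linux script(1) utility; the script name and arguments below are illustrative):
# Record a full transcript of the traced run into mysession.typescript
script -c 'bash -x ./myscript.sh arg1 arg2' mysession.typescript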
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt
fun123
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
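A small illustration of the idea (the file pattern is hypothetical): let xargs build the argument list instead of assembling it by hand:
# Pass every .log file under the current directory to rm in batches,
# using NUL separators so names with spaces survive intact
find . -name '*.log' -print0 | xargs -0 rm --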
I wish to source a script, print the value of a variable this script defines, and then have this value be assigned to a variable on the command line with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93 and I am wondering why.
$ cat typeset_err.ksh
#!/bin/ksh
unset _typeset_var
typeset -i -r _typeset_var=1
DIR=init # this is the variable I want to print
When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows:
$ A=$(. ./typeset_err.ksh; print $DIR)
$ echo $A
init
When run on ksh93 (in this case, a Linux machine), the output is as follows:
$ A=$(. ./typeset_err.ksh; print $DIR)
-ksh: _typeset_var: is read only
$ print $A
($A is undefined)
The above is just an example script. The actual thing I wish to accomplish is to source a script that sets values to many variables, so that I can print just one of its values, e.g. $DIR, and have $A equal that value. I do not know in advance the value of $DIR, but I need to copy files to $DIR during execution of a different batch script. Therefore the idea I had was to source the script in order to define its variables, print the one I wanted, then have that print's output be assigned to another variable via $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables.
The typeset -r at the beginning is what triggers the error. The script I'm sourcing contains it to provide a semaphore of sorts, to prevent the script from being sourced more than once in the environment. (There is an if statement in the real script that checks for _typeset_var = 1 and exits if it is already set.) So I know I can take this out and get $DIR to print fine, but the constraints of the problem include keeping the typeset -i -r.
In the example script I put an unset in first, to ensure _typeset_var isn't already defined. By the way I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page for ksh.
There are ways to code around this error. The favorite now is to not use typeset at all and just set the semaphore plainly (e.g. _typeset_var=1), but the error with the code as-is remains a curiosity to me, and I want to see if anyone can explain why it is happening.
By the way, another idea I abandoned was to grep the variable I need out of its containing script and then print that one variable for $A to be set to; however, the variable ($DIR in the example above) might be set to another variable's value (e.g. DIR=$dom/init), and that other variable might be defined earlier in the script; therefore, I need to source the entire script to make sure all variables are defined so that $DIR is correctly set.
It works fine for me in ksh93 (Version JM 93t+ 2009-05-01). If I do this, though:
$ . ./typeset_err.ksh
$ A=$(. ./typeset_err.ksh; print $DIR)
-ksh: _typeset_var: is read only
So it may be that you're getting that variable typeset -r in the current environment somehow.
Try this
A=$(ksh -c ". ./typeset_err.ksh && print \$DIR")
or
A=$(env -i ksh -c ". ./typeset_err.ksh && print \$DIR")
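If you want to confirm that the read-only flag has already leaked into the calling shell (a diagnostic sketch, not part of the original answers), ksh93 can print a variable's attributes:
typeset -p _typeset_var   # a -r in the output means it is already read-only here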