Makefile: pass arguments to a C program

I have googled, but it seems not to be possible: is there a simple way to have a run target in a makefile pass any given arguments down to the program?

argset1:
	./a.out arg1 arg2

argset2:
	./a.out arg3 arg4

arg1='default1'
arg2='default2'

custom:
	./a.out $(arg1) $(arg2)
You can do:
make argset1
make argset2
make custom arg1=1234 arg2=3321
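If you want a target that forwards a whole argument string in one go, a common variant of the same idea (sketched here with `echo` standing in for ./a.out, and ARGS as an assumed variable name) looks like this:

```shell
# Write a throwaway makefile whose `run` target forwards ARGS verbatim.
# `echo` stands in for ./a.out; the variable name ARGS is an assumption.
printf 'run:\n\t@echo $(ARGS)\n' > Makefile.demo
make -f Makefile.demo run ARGS="arg1 arg2"
```

make expands $(ARGS) from the command line, so the whole string reaches the program's command line unchanged.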


Does shell (sh or csh) behave differently for handling arguments when commands get run locally(/bin/sh -c CMD) vs remotely (ssh node1 /bin/sh -c CMD)?

I am getting the following different behaviors for the commands when run locally vs. when run remotely via ssh.
$ /bin/csh -c \'/bin/echo a b c d\'
Unmatched '''.
$ ssh node1 /bin/csh -c \'/bin/echo a b c d\'
a b c d
Is ssh handling the command arguments differently and removing the escape characters somehow? (Or is the local shell itself doing that?)
TLDR: Your example is failing because of local shell processing.
On Unix, when process x (such as a shell) runs a different program y (usually as a child process, but not necessarily), it passes y's own name plus any arguments as a variable-length array of strings. For example, on an old system, echo a b c d runs the echo program with the following array of arguments (shown one per line for clarity):
echo
a
b
c
d
(In most shells on recent systems, echo is a shell builtin and not a separate program, but other programs still work the traditional way.)
Because C arrays start at subscript 0, and the argument array is conventionally named argv (argument vector -- C derived from BCPL, where arrays were called vectors), C (and C++ etc.) programmers often call the program name "argv-zero". Note that the same program can appear under multiple names in the filesystem, so its name isn't known at compile time, only when it is run; this trick used to be quite popular for things like gzip/gunzip, but in recent decades it seems to have become unfashionable.
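The multi-name trick is easy to observe from a shell, since sh -c lets you supply $0 (argv-zero as seen by the script) explicitly; the path below is made up purely for illustration:

```shell
# The word after the -c script becomes $0, i.e. the name the
# script believes it was invoked under.
sh -c 'echo "running as: $0"' /usr/local/bin/gunzip
```

This prints `running as: /usr/local/bin/gunzip`, even though no such file needs to exist.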
In shell input, a backslashed single quote means to treat that single quote as ordinary data, not to use it to quote other characters such as a space that delimits words and thus arguments, so
/bin/csh -c \'/bin/echo a b c d\'
runs /bin/csh with the following argv:
/bin/csh
-c
'/bin/echo
a
b
c
d'
csh (or a Bourne-type shell) takes the one argument after -c as an entire command line (or script) to be parsed and (possibly) executed, but '/bin/echo is not a valid command line. Any remaining arguments are not treated as commands, but as possible arguments to the command being run, if it needs them.
The ssh client program, on the other hand, treats any and all 'data' arguments (not options, and not the required [user@]remotehost argument) as forming the command to be run remotely, and sends them over the SSH protocol concatenated into one line, so your ssh command gets
ssh
node1
/bin/csh
-c
'/bin/echo
a
b
c
d'
but sends
/bin/csh -c '/bin/echo a b c d'
and sshd on the remote system passes that entire line as one argument, preceded by a separate -c argument, to your login shell, which then runs csh with
/bin/csh
-c
/bin/echo a b c d
so it takes the one argument after -c as a command line, which it can parse successfully and run.
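You can watch the local word-splitting happen without ssh at all: printf '%s\n' prints each argument it receives on its own line, mirroring the argv listings above:

```shell
# The backslashed quotes become literal data, so the spaces between
# them still split the line into seven separate arguments.
printf '%s\n' /bin/csh -c \'/bin/echo a b c d\'
```

The output is the seven-element argv shown earlier, starting with `/bin/csh` and ending with `d'`.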
And this combination is why we have dozens of Qs on several Stacks for many variations of "how can I get ssh to remotely run this command using special characters and/or interpolations correctly". While there is usually a possible answer, it is often easiest to punt either by running ssh with no remote command argument(s), so it remotely runs an interactive shell, and sending the desired command(s) as input to that shell; or, almost the same, putting the command(s) in a file, transferring that file to the remote system (or making it accessible via NFS or similar) and then telling the remote system to run the file as a script.

Time exec in a subshell

$ time (exec -a foo echo hello)
hello
It seems as though stderr (where time writes its output) leaks somewhere; obviously this is not what I intended.
My question could be phrased in generic terms as "why isn't the standard error stream written on the terminal when a subshell executes another program?".
A few notes:
I need to use exec for its -a switch, which changes the zeroth argument of the process. I would appreciate an alternative to exec to do just this, but I don't know of any, and now this behavior got me curious.
Of course, I need a subshell because I want my script to continue. Again, any alternative would be welcome. Is exec in a subshell even a good thing to do?
time'ing a subshell in general works fine, so it really has to do with exec.
Could somebody point me in the right direction? I'm not sure where to begin in any of the reference materials, exec descriptions are pretty terse.
Update: Actually, I was just "lucky" with time here being the bash builtin. It doesn't parse at all with /usr/bin/time or with any other process:
$ env (exec -a foo echo hello)
bash: syntax error near unexpected token `exec'
Actually this makes sense, we can't pass a subshell as an argument. Any idea how to do this any other way?
Update: To summarize, we have four good answers here, all different, and potentially something lacking:
Use actual filesystem links (hard or symbolic) that bash will use by default and time normally. Credits to hek2mgl.
ln $(which echo) foo && time ./foo hello && rm foo
fork for time using bash and exec using a bash subshell without special syntax.
time bash -c 'exec -a foo echo hello'
fork for time using bash but exec using a tiny wrapper.
time launch -a foo echo hello
fork and exec for time using bash with special syntax. Credits to sjnarv.
time { (exec -a foo echo hello); }
I think that solution 1 has the least impact on time, as the timer doesn't have to count the exec in the "proxy" program, but it isn't very practical (many filesystem links) nor technically ideal. In all the other cases we actually exec twice: once to load the proxy program (the subshell for 2 and 4, the wrapper for 3), and once to load the actual program. This means that time will count the second exec. While that can be extremely cheap, exec does perform filesystem lookups, which can be pretty slow (especially if it searches through PATH, either itself with exec*p or via the proxy process).
So, the only clean way (as far as what the answers of this question covered) would be to patch bash to modify its time keyword so that it can exec while setting the zeroth argument to a non-zero value. It would probably look like time -a foo echo hello.
I don't think that the timer's output disappears. I think it (the timer) was running in the sub-shell overlaid by the exec.
Here's a different invocation. Perhaps this produces what you expected initially:
$ time { (exec -a foo echo hello); }
Which for me emits:
hello
real 0m0.002s
user 0m0.000s
sys 0m0.001s
time is based on the wait system call. From the time man page:
Most information shown by time is derived from the wait3(2) system call.
This only works if time's process is the parent of the command being measured. But exec replaces the calling process image with a new program, so there is no separate child left for the timer to wait on.
As time requires fork() and wait(), I would not pay too much attention to that zeroth argument of exec (which is useful, of course). Just create a symbolic link and then call it like:
time link_name > your.file 2>&1 &
So, I ended up writing that tiny C wrapper, which I call launch:
#include <stdlib.h>
#include <unistd.h>

int main(const int argc, char *argv[])
{
    int opt;
    char *zeroth = NULL;
    while ((opt = getopt(argc, argv, "a:")) != -1)
        if (opt == 'a')
            zeroth = optarg;
        else
            abort();
    if (optind >= argc) abort();
    argv += optind;
    const char *const program = *argv;
    if (zeroth) *argv = zeroth;
    return execvp(program, argv);
}
I obviously simplified it to emphasize only what's essential. It essentially works just like exec -a, except that since it is not a builtin, the shell will fork normally to run the launch program as a separate process. There is thus no issue with time.
The test program in the following sample output is a simple program that only outputs its argument vector, one argument per line.
$ ./launch ./test hello world
./test
hello
world
$ ./launch -a foo ./test hello world
foo
hello
world
$ time ./launch -a foo ./test hello world
foo
hello
world
real 0m0.004s
user 0m0.001s
sys 0m0.002s
$ ./launch -a foo -- ./test -g hello -t world
foo
-g
hello
-t
world
The overhead should be minimal: just what's necessary to load the program, parse its single and optional argument, and manipulate the argument vector (which can be mostly reused for the next execvp call).
The only issue is that I don't know of a good way to signal that the wrapper failed (as opposed to the wrapped program) to the caller, which may happen if it was invoked with erroneous arguments. Since the caller probably expects the status code from the wrapped program and since there is no way to reliably reserve a few codes for the wrapper, I use abort which is a bit more rare, but it doesn't feel appropriate (nor does it make it all OK, the wrapped program may still abort itself, making it harder for the caller to diagnose what went wrong). But I digress, that's probably not interesting for the scope of this question.
Edit: just in case, the C compiler flags and feature test macros (gcc/glibc):
CFLAGS=-std=c11 -pedantic -Wall -D_XOPEN_SOURCE=700

Using a pipe in Hadoop

I am using ProcessBuilder to run an executable. It works fine.
Now I am in a scenario where I have to feed the output of the first executable to the second one, e.g.:
exe1 arg1 arg2 | exe2 arg3
and get its InputStream and print it to stdout.
So I am writing a small script.sh for that, which contains:
exe1 arg1 arg2 | exe2 arg3
The following works fine in Java:
ProcessBuilder pb = new ProcessBuilder();
pb.command("/bin/sh", "/home/biadmin/Desktop/script.sh");
Process p = pb.start();
InputStream in = p.getInputStream();
// output successfully printed to stdout
But when I do the same thing in the Hadoop environment, I don't get anything in the input stream. I need this same thing to work in Hadoop. Any suggestions/advice appreciated.
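If shipping the extra script file around is part of the problem, the pipeline can also be handed to the shell as a single -c string (i.e. `pb.command("/bin/sh", "-c", "exe1 arg1 arg2 | exe2 arg3")`); the shell then builds the pipe exactly as it would for the script. A minimal stand-alone sketch of that form, with printf and tr standing in for the two executables:

```shell
# The whole pipeline is one -c argument; sh sets up the pipe itself.
/bin/sh -c 'printf "hello\n" | tr a-z A-Z'
```

This removes the dependency on a script file being present on whichever node the task runs on.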
Thanks.

How to execute Tcl scripts in tclsh with arguments

Normally I invoke my Tcl script from the shell like this:
> tclsh8.5 mytest.tcl -opt1 foo -opt2 bar
In case I need to launch gdb to debug, due to some modules implemented in C++, I have to launch tclsh via gdb. So the question is how to execute my script in tclsh with arguments.
I need something like:
tclsh> run mytest.tcl -opt1 foo -opt2 bar
Using exec is not ideal, as it forks another process and loses my breakpoint settings.
tclsh> exec mytest.tcl -opt1 foo -opt2 bar
I would think something like the following should work for you:
set argv [list -opt1 foo -opt2 bar]
set argc 4
source mytest.tcl
So set argv and argc to get the arguments correct, and then just source your Tcl code to execute it.
Alternatively, the gdb run command allows you to pass command-line arguments to the executable being debugged. So if you're debugging tclsh, what is the problem with using run as follows?
run mytest.tcl -opt1 foo -opt2 bar
For example under cygwin I'm able to do the following:
$ tclsh test.tcl
This is a test
$ gdb -q tclsh.exe
(no debugging symbols found)
(gdb) run test.tcl
Starting program: /usr/bin/tclsh.exe test.tcl
If you are running tclsh within a gdb session and want to set the arguments, you do something like this ($ is a shell prompt, (gdb) is a gdb prompt, and I've left out all the messages gdb prints):
$ gdb tclsh
(gdb) set args mytest.tcl -opt1 foo -opt2 bar
(gdb) ... set some breakpoints ...
(gdb) run
You might also need set env FOO=bar to set up the environment, depending on what is going on in your script. Tcl's own build files use techniques like this to pass in arguments when debugging the running of the test suite.
Why not just run
gdb --args tclsh8.5 mytest.tcl -opt1 foo -opt2 bar
when you need to debug your application?

bash command arguments

I have a bash script that, for reasons I won't discuss, cannot be made executable. However, I need to pass arguments to that script.
I have tried this:
bash MyBashScript.sh MyArgumentOne
But the argument MyArgumentOne isn't passed to the script. I know there must be a way to do this, can anyone help?
Your given command should work. Try to debug by tracing the call with
strace -o all_system_calls.txt -f -ff bash MyBashScript.sh MyArgumentOne
One of the all_system_calls.txt.<pid> files created should contain something like
execve("/bin/bash", ["bash", "MyBashScript.sh", "MyArgumentOne"], [/* 71 vars */]) = 0
If so, you know for sure that the argument is passed to your script.
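As a quick sanity check that `bash script args...` really does forward arguments, here is a throwaway one-line script (the file name is made up for the demo):

```shell
# $1 is the first positional argument passed after the script name.
printf 'echo "got: $1"\n' > demo_args.sh
bash demo_args.sh MyArgumentOne
```

This prints `got: MyArgumentOne`, confirming the argument reaches the script; if your real script sees nothing, the problem is inside the script (e.g. it shadows or shifts $1), not in how bash is invoked.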
