Code:
#include <boost/process.hpp>
int main() {
    boost::process::system("echo foo; echo bar");
}
Output:
foo; echo bar
Desired output:
foo
bar
I am receiving a string containing one or more commands to run, separated by semicolons, as one might run them in a shell terminal. Is there a way to tell boost::process to interpret the command string in that way?
Yes, I'd use an explicit bash:
Live On Coliru
#include <boost/process.hpp>
namespace bp = boost::process;
int main() {
    bp::child c(
        bp::search_path("bash"),
        std::vector<std::string>{
            "-c", "echo foo; for a in {1..10}; do echo \"bar $a\"; done"});
    c.wait();
}
Prints
foo
bar 1
bar 2
bar 3
bar 4
bar 5
bar 6
bar 7
bar 8
bar 9
bar 10
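If you prefer the one-liner style from the question, the same trick works with bp::system. A minimal sketch, assuming Boost.Process v1 (the same <boost/process.hpp> header used above) and its overload taking an executable path plus separate arguments:
#include <boost/process.hpp>
namespace bp = boost::process;
int main() {
    // Hand the whole string to bash so it is bash that splits on ';', not Boost.Process.
    return bp::system(bp::search_path("bash"), "-c", "echo foo; echo bar");
}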
Is it possible to prevent flag.Parse() from "swallowing" (removing) the -- from the flag.Args()?
Example
package main

import (
    "flag"
    "fmt"
)

func main() {
    flag.Parse()
    fmt.Println(flag.Args())
}
I can't differentiate these 2 invocations:
$ go run . hello
[hello]
$ go run . -- hello
[hello]
Why would I like to differentiate these 2 invocations?
I'm writing a Go program that wraps another subprogram.
My program has some optional positional args:
myprog [options] [ARG1 ARG2 ...] [-- SUBARG1 SUBARG2...]
Invocation examples:
$ myprog -flag1 val1
$ myprog -flag1 val1 foo
# foo is for myprog
$ myprog -flag1 val1 foo -- bar
# foo is for myprog
# bar is for the subprogram
$ myprog -flag1 val1 -- bar
# bar is normally for the subprogram, BUT flag.Args() = ["bar"] so I have no way to know that it was after "--"
I understand that I can use --- as separator or any other combination, but I was just curious to know for the -- argument.
Edit after accepted answer
Source:
package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    var initFlag = flag.Bool("init", false, "init")
    var subArgs []string
    for i := len(os.Args) - 1; i > 0; i-- {
        if os.Args[i] == "--" {
            subArgs = os.Args[i+1:]
            os.Args = os.Args[:i]
            break
        }
    }
    flag.Parse()
    fmt.Println(*initFlag)
    fmt.Println(flag.Args())
    fmt.Println(subArgs)
}
All tests succeeded! 👍
$ go run .
false
[]
[]
$ go run . foo
false
[foo]
[]
$ go run . -- bar
false
[]
[bar]
$ go run . foo -- bar
false
[foo]
[bar]
$ go run . -init foo -- bar
true
[foo]
[bar]
$ go run . -init -- bar
true
[]
[bar]
The flag package does not have an option to disable the flag terminator --.
Split the subprogram arguments from the main program arguments before calling flag.Parse():
var subArgs []string
for i := len(os.Args) - 1; i > 0; i-- {
    if os.Args[i] == "--" {
        subArgs = os.Args[i+1:]
        os.Args = os.Args[:i]
        break
    }
}
flag.Parse()
Is it possible to prevent flag.Parse() from "swallowing" (removing) the -- from the flag.Args()?
No.
Suppose I have this simple C program (test.c):
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    exit (1);
}
Obviously, the exit code of this program is 1:
$ gcc test.c
$ ./a.out
$ echo $?
1
But when I run test ./a.out, the result of the test doesn't match the exit code:
$ test ./a.out
$ echo $?
0
So what is actually being tested? Why is the result of the test 0?
test is a Bash built-in, often invoked by the alternative name [.
The last command (test ./a.out) exits with status 0, indicating success, because test ./a.out only checks whether the string ./a.out has one or more characters in it (is not an empty string); since it isn't empty, the test returns success (0). The test ./a.out command line does not execute your a.out program at all, as you could verify by printing something from within your program.
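For example, test here is only looking at its string argument, not running anything:
$ test ./a.out; echo $?   # one non-empty string argument: true
0
$ test ""; echo $?        # empty string: false
1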
As written, your program doesn't need the <stdio.h> header or the arguments to main() — it should be int main(void). You could lose <stdlib.h> too if you use return 1; instead of exit(1);:
int main(void)
{
    return 1;
}
To use the exit status in an if condition in the shell, just use it directly:
if ./a.out ; then
    echo Success
else
    echo Failure
fi
Rule of Thumb: Don't call C programs test because you will be confused sooner or later — usually sooner rather than later.
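For instance, with an executable named test sitting in the current directory, the bare name still resolves to the shell built-in; Bash's type built-in shows the difference (illustrative session):
$ gcc -o test test.c
$ type test
test is a shell builtin
$ type ./test
./test is ./test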
Your C program returns "1" to the shell (I'd prefer "return()" over "exit()", but...).
If you wanted to actually run "a.out" in conjunction with the "*nix" test command, you'd use syntax like:
`./a.out` # classic *nix
or
$(./a.out) # Bash
If you did that, however, "test" would read the value printed to "stdout", and NOT the value returned by your program on exit.
You can read more about test here:
test(1) - Linux man page
The classic test command: Bash hackers wiki
Understanding exit codes and how to use them in Bash scripts
Here is an example:
C program:
#include <stdio.h>
int main (int argc, char *argv[]) {
printf("%d\n", argc);
return 2;
}
Shell script:
echo "Assign RETVAL the return value of a.out:"
./a.out
RETVAL=$?
echo " " RETVAL=$RETVAL
echo "Assign RETVAL the value printed to stdout by a.out:"
RETVAL=$(./a.out)
echo " " RETVAL=$RETVAL
echo "Turn on 'trace' and run a.out with 'test':"
set -x
if [ $(./a.out) -eq 1 ]; then
    echo "One"
else
    echo "Not One"
fi
Example output:
paulsm#vps2:~$ ./tmp.sh
Assign RETVAL the return value of a.out:
1
RETVAL=2
Assign RETVAL the value printed to stdout by a.out:
RETVAL=1
Turn on 'trace' and run a.out with 'test':
+++ ./a.out
++ '[' 1 -eq 1 ']'
++ echo One
One
ALSO:
A couple of points that have already been mentioned:
a. return 1 is generally a better choice than exit (1).
b. "test" is probably a poor name for your executable - because it collides with the built-in "test" command. Something like "test_return" might be a better choice.
As "is known", a script my-script-file which starts with
#!/path/to/interpreter -arg1 val1 -arg2 val2
is executed by exec calling /path/to/interpreter with 2(!) arguments:
-arg1 val1 -arg2 val2
my-script-file
(and not, as one might naively expect, with 5 arguments
-arg1
val1
-arg2
val2
my-script-file
as has been explained in many previous questions, e.g.,
https://stackoverflow.com/a/4304187/850781).
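(For illustration, a tiny argv-dumping "interpreter", here a hypothetical demo.c, makes the packing visible; installed as /path/to/interpreter, it would print the whole option string as a single argv[1], followed by the script name.)
#include <stdio.h>
int main(int argc, char *argv[]) {
    /* Print each argument on its own line to show how the kernel packed them. */
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}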
My problem is from the POV of an interpreter developer, not script writer.
How do I detect from inside the interpreter executable that I was called from shebang as opposed to the command line?
Then I will be able to decide whether I need to split my first argument
by space to go from "-arg1 val1 -arg2 val2" to ["-arg1", "val1", "-arg2", "val2"] or not.
The main issue here is script files named with spaces in them.
If I always split the 1st argument, I will fail like this:
$ my-interpreter "weird file name with spaces"
my-interpreter: "weird": No such file or directory
On Linux, with GNU libc or musl libc, you can use the aux-vector to distinguish the two cases.
Here is some sample code:
#define _GNU_SOURCE 1
#include <stdio.h>
#include <errno.h>
#include <sys/auxv.h>
#include <sys/stat.h>
int
main (int argc, char* argv[])
{
printf ("argv[0] = %s\n", argv[0]);
/* https://www.gnu.org/software/libc/manual/html_node/Error-Messages.html */
printf ("program_invocation_name = %s\n", program_invocation_name);
/* http://man7.org/linux/man-pages/man3/getauxval.3.html */
printf ("auxv[AT_EXECFN] = %s\n", (const char *) getauxval (AT_EXECFN));
/* Determine whether the last two are the same. */
struct stat statbuf1, statbuf2;
if (stat (program_invocation_name, &statbuf1) >= 0
&& stat ((const char *) getauxval (AT_EXECFN), &statbuf2) >= 0)
printf ("same? %d\n", statbuf1.st_dev == statbuf2.st_dev && statbuf1.st_ino == statbuf2.st_ino);
}
Result for a direct invocation:
$ ./a.out
argv[0] = ./a.out
program_invocation_name = ./a.out
auxv[AT_EXECFN] = ./a.out
same? 1
Result for an invocation through a script that starts with #!/home/bruno/a.out:
$ ./a.script
argv[0] = /home/bruno/a.out
program_invocation_name = /home/bruno/a.out
auxv[AT_EXECFN] = ./a.script
same? 0
This approach is, of course, highly unportable: only Linux has the getauxval function. And there are surely cases where it does not work well.
In Bash and KornShell (ksh), I see the following script works fine.
if [ -n "foo" ]
then
    foo()
    {
        echo function foo is defined
    }
else
    bar()
    {
        echo function bar is defined
    }
fi
foo
bar
It also generates the expected output when executed.
$ bash scr.sh
function foo is defined
scr.sh: line 15: bar: command not found
$ ksh scr.sh
function foo is defined
scr.sh: line 15: bar: not found
I want to know if this script would run and generate this output on any POSIX conformant shell.
I agree with your reading of the grammar: a function definition is itself a command, so it may occur in the body of an if statement, making its execution conditional.
If I define a function in a file, say test1.sh:
#!/bin/bash
foo() {
echo "foo"
}
And in a second, test2.sh, I try to redefine foo:
#!/bin/bash
source /path/to/test1.sh
...
foo() {
    ...
    echo "bar"
}
foo
Is there a way to change test2.sh to produce:
foo
bar
I know it is possible to do this for Bash built-ins using command, but I want to know if it is possible to extend a user-defined function.
I don't know of a nice way of doing it (but I'd love to be proven wrong).
Here's an ugly way:
# test2.sh
# ..
eval 'foo() {
'"$(declare -f foo | tail -n+2)"'
echo bar
}'
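The effect, sketched out assuming test1.sh defines foo as above, is a new foo whose body is the old body wrapped in a group followed by the added code:
foo() {
    {
        echo "foo"
    }
    echo bar
}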
I'm not sure I see the need to do something like this. You can call functions from inside other functions, so why reuse the name when you can just call the original sourced function from a newly created function, like this:
AirBoxOmega:stack d$ cat source.file
#!/usr/local/bin/bash
foo() {
echo "foo"
}
AirBoxOmega:stack d$ cat subfoo.sh
#!/usr/local/bin/bash
source /Users/d/stack/source.file
sub_foo() {
    foo
    echo "bar"
}
sub_foo
AirBoxOmega:stack d$ ./subfoo.sh
foo
bar
Of course, if you REALLY have your heart set on modifying it, you could source your function inside the new function, call it, and then do something else after, like this:
AirBoxOmega:stack d$ cat source.file
#!/usr/local/bin/bash
foo() {
echo "foo"
}
AirBoxOmega:stack d$ cat modify.sh
#!/usr/local/bin/bash
foo() {
    source /Users/d/stack/source.file
    foo
    echo "bar"
}
foo
AirBoxOmega:stack d$ ./modify.sh
foo
bar
No, it's not possible: a new declaration simply overrides the previous instance of a function. Even without that capability, redefinition is still helpful, for example when you want to disable a function without having to unset it:
foo() {
    : ## Do nothing.
}
It's also helpful with lazy initializations:
foo() {
    # Do initializations.
    ...
    # New declaration.
    if <something>; then
        foo() {
            ....
        }
    else
        foo() {
            ....
        }
    fi
    # Call normally.
    foo "$@"
}
And if you're brave and capable enough to use eval, you can even optimize your function so that it acts on a condition without evaluating additional ifs on every call.
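A minimal sketch of that idea (the condition and names are made up for illustration): evaluate the condition once and let eval install the matching body, so later calls pay no if cost:
# Pick the implementation once, up front.
if [ -n "$USE_FAST_PATH" ]; then
    body='echo "fast foo"'
else
    body='echo "slow foo"'
fi
eval "foo() { $body; }"
foo   # runs whichever body was selected, with no if inside foo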
Yes you can; see this page: https://mharrison.org/post/bashfunctionoverride/
save_function() {
    local ORIG_FUNC=$(declare -f $1)
    local NEWNAME_FUNC="$2${ORIG_FUNC#$1}"
    eval "$NEWNAME_FUNC"
}
save_function foo old_foo
foo() {
    initialization_code
    old_foo
    cleanup_code
}
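A quick usage sketch (assuming foo was originally defined to just echo foo, with the initialization and cleanup replaced by plain echos for illustration):
foo() { echo "foo"; }        # original definition
save_function foo old_foo    # keep the original under a new name
foo() {                      # redefine, delegating to the saved copy
    echo "before"
    old_foo
    echo "after"
}
foo                          # prints: before, foo, after (one per line)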