std::process::Command not returning Err() result when using 'bash -c'

I'm trying to understand why I'm not getting an Err() result when using a bash -c expression to run a command.
Here is an example with the output below. I expected out2 and out3 to be Err(), but out3 is Ok() with the failed status.
I'm using bash -c to execute a command from a given string, which is easy.
Is it possible to get an Err() result using the bash -c syntax?
#![allow(unused)]
use std::process::{Command, Output};

fn main() {
    let out1 = Command::new("bash").arg("-c").arg("ls").output();
    println!("out1: {:?}", out1);

    let out2 = Command::new("wrongcommand").arg("-c").arg("ls").output();
    println!("out2: {:?}", out2);

    let out3 = Command::new("bash").arg("-c").arg("wrongcommand").output();
    println!("out3: {:?}", out3);
}
Output:
out1: Ok(Output { status: ExitStatus(ExitStatus(0)), stdout: "Cargo.lock\nCargo.toml\ncrate-information.json\nsrc\ntarget\n", stderr: "" })
out2: Err(Os { code: 2, kind: NotFound, message: "No such file or directory" })
out3: Ok(Output { status: ExitStatus(ExitStatus(32512)), stdout: "", stderr: "bash: wrongcommand: command not found\n" })
I tried from the command line:
$ bash -c wrongcommand
and
$ wrongcommand
Both return the same exit code (127). That's why I expected Command to fail in the same way.

That can be explained pretty easily; follow along with the list of OS error codes:
out1 is Ok for obvious reasons
out2 is an Err because Command itself looked for the executable, could not find it, and returned ENOENT (code 2). This error happened within Rust; nothing was actually executed.
out3 is Ok because the process that Command ran is bash, and bash itself started and exited, returning its own status. In this case bash couldn't find the command, so by bash's conventions it exited with 127. However, it's not that easy, because there is an additional layer of information contained in ExitStatus. On Unix, when a process terminates, the wait status the parent receives is a 16-bit/32-bit integer (based on platform/libc/etc), separated in two:
The lowest bits record the signal, if any, that terminated the process
The highest 8 bits are the exit status of the process
And, to no surprise, if we shift 32512 right by 8 bits (it was a u16), we get... 127!
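For what it's worth, you don't need to shift bits yourself: ExitStatus::code() already extracts that high byte. A minimal sketch, assuming bash is on the PATH:

use std::process::Command;

fn main() {
    let out = Command::new("bash").arg("-c").arg("wrongcommand").output().unwrap();
    // The raw wait status is 32512 (127 << 8); code() decodes the exit byte.
    println!("exit code: {:?}", out.status.code()); // exit code: Some(127)
}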
The takeaway is simple:
Err definitely means the process didn't run
Ok means the main process ran, but you'll need to check ExitStatus::success() to confirm that it actually succeeded (i.e. exited with status 0)
You could recover it like so:
Command::new("bash").arg("-c").arg("wrongcommand").output().and_then(|r| match r.status.success() {
true => Ok(r),
false => Err(io::Error::new(io::ErrorKind::InvalidData, "Process error"))
});
You can play with this on the playground. success() is a decently reliable indicator of child process success; all it does is check that the process exited with a status of 0.
Obviously, this does not help if a process returns non-zero on success, but that's a different problem.

Related

Bash - exit does not exit script (only subshell) [duplicate]

How would you exit a function if a condition is true, without killing the whole script, just returning to the point right after the function call?
Example:
# Start script
Do scripty stuff here
Ok now lets call FUNCT
FUNCT
Here is A to come back to

function FUNCT {
    if [ blah is false ]; then
        exit the function and go up to A
    else
        keep running the function
    fi
}
Use:
return [n]
From help return:
return: return [n]
    Return from a shell function.

    Causes a function or sourced script to exit with the return value
    specified by N. If N is omitted, the return status is that of the
    last command executed within the function or script.

    Exit Status:
    Returns N, or failure if the shell is not executing a function or script.
Use the return operator:
function FUNCT {
    if [ blah is false ]; then
        return 1 # or return 0; you can even omit the argument.
    else
        keep running the function
    fi
}
If you want to return from an outer function with an error without exiting you can use this trick:
do-something-complex() {
    # Using `return` here would only return from `fail`, not from `do-something-complex`.
    # Using `exit` would close the entire shell.
    # So we (ab)use a different feature. :)
    fail() { : "${__fail_fast:?$1}"; }

    nested-func() {
        try-this || fail "This didn't work"
        try-that || fail "That didn't work"
    }

    nested-func
}
Trying it out:
$ do-something-complex
try-this: command not found
bash: __fail_fast: This didn't work
This has the added benefit/drawback that you can optionally turn off this feature: __fail_fast=x do-something-complex.
Note that this causes the outermost function to return 1.
My use case is to run the function unless it's already running. I'm doing
mkdir /tmp/nice_exit || return 0
And then at the end of the function
rm -rf /tmp/nice_exit
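For completeness, a minimal sketch of that lock-directory pattern (the function name and lock path here are illustrative):

nice_exit_demo() {
    # mkdir is atomic: it fails if the directory already exists, so a
    # second invocation bails out instead of running concurrently.
    mkdir /tmp/nice_exit 2>/dev/null || return 0
    # ... do the actual work here ...
    rm -rf /tmp/nice_exit
}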

Error exit code 1 showing in Jenkins console output (I do not want to see it)

I have a job running in the Jenkins pipeline, and the output shows "error exit code 1" because I am using an if statement to set NOT_BUILT in the stage. Is there any way to avoid seeing "exit code 1"? I do not want to use a when statement; if possible I'd like to keep using an if statement and have a blank stage, but without this "script returned exit code 1" error in the console output.
This is my script below :
if (route53 == 'false') {
    catchError(buildResult: 'SUCCESS', stageResult: 'NOT_BUILT') {
        sh "exit 1"
    }
}
else if (route53 == 'true' && all == "Yes") {
    catchError(buildResult: 'SUCCESS', stageResult: 'NOT_BUILT') {
        sh "exit 1"
    }
}
The pipeline console output shows the result below. The stage graphic is fine, as it shows a blank stage, but the console output error is what I really want to suppress.
Output:
+ exit 1
[Pipeline] }
[Pipeline] }
ERROR: script returned exit code 1
[Pipeline] }
ERROR: script returned exit code 1
[Pipeline] }
ERROR: script returned exit code 1
[Pipeline] }
When using declarative pipelines, the NOT_BUILT state is reserved for a stage that was not executed because its when directive evaluated to false, and there is no direct way to set it except with the catchError workaround. (By the way, you can control the error message by using error('Your Message') instead of exit 1.)
Therefore, this is easiest to achieve with the when directive, which also makes your pipeline more readable. If you insist on using if statements, you can still use them inside a when directive via the generic expression option, which allows you to run any Groovy code and compute the relevant Boolean value according to your needs.
So you can still use your original code and just update it to return a Boolean:
stage('Conditional stage') {
    when {
        expression {
            if (route53 == 'false') {
                return false
            }
            else if (route53 == 'true' && all == "Yes") {
                return false
            }
            return true
        }
    }
    steps {
        ...
    }
}
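If you do keep the catchError workaround instead, replacing exit 1 with the error step (as mentioned above) at least lets you control what shows up in the log. A hypothetical sketch:

catchError(buildResult: 'SUCCESS', stageResult: 'NOT_BUILT') {
    // Raises a pipeline error with a custom message instead of the
    // generic "ERROR: script returned exit code 1"
    error('Stage skipped: route53 condition not met')
}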

ruby Process.spawn stdout => pipe buffer size limit

In Ruby, I'm using Process.spawn to run a command in a new process. I've opened a pipe to capture stdout and stderr from the spawned process. This works great until the bytes written to the pipe (stdout from the command) exceed 64KB, at which point the command never finishes. I'm thinking the pipe buffer size has been hit, and writes to the pipe are now blocked, causing the process never to finish. In my actual application, I'm running a long command that produces lots of stdout, which I need to capture and save when the process has finished. Is there a way to raise the buffer size, or better yet to have the buffer drained so the limit is never hit?
cmd = "for i in {1..6600}; do echo '123456789'; done" #works fine at 6500 iterations.
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
Process.wait(cmd_pid)
pipe_cmd_out.close
out = pipe_cmd_in.read
puts "child: cmd out length = #{out.length}"
UPDATE
Open3::capture2e does seem to work for the simple example I showed. Unfortunately, for my actual application I also need the pid of the spawned process, and I need control over when I block execution. The general idea is: I fork a non-blocking process, and in that fork I spawn the command. I send the command pid back to the parent process, then wait on the command to finish to get its exit status; when the command is finished, the exit status is sent back to the parent. In the parent, a loop iterates every second, checking the DB for control actions such as pause and resume. If it gets a control action, it sends the appropriate signal to the command pid to stop or continue it. When the command eventually finishes, the parent hits the rescue block, reads the exit status pipe, and saves the status to the DB. Here's what my actual flow looks like:
# pipes for communicating the command pid and exit status from child to parent
pipe_parent_in, pipe_child_out = IO.pipe
pipe_exitstatus_read, pipe_exitstatus_write = IO.pipe

child_pid = fork do
  pipe_cmd_in, pipe_cmd_out = IO.pipe
  cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
  pipe_child_out.write cmd_pid # send command pid to parent
  pipe_child_out.close
  Process.wait(cmd_pid)
  exitstatus = $?.exitstatus
  pipe_exitstatus_write.write exitstatus # send exitstatus to parent
  pipe_exitstatus_write.close
  pipe_cmd_out.close
  out = pipe_cmd_in.read
  # save out to DB
end

# blocking read to get the command pid from the child
pipe_child_out.close
cmd_pid = pipe_parent_in.read.to_i

loop do
  begin
    Process.getpgid(cmd_pid) # when the command is done, this raises
    @job.reload # refresh from DB
    # based on status in the DB, pause / resume the command
    if @job.status == 'pausing'
      Process.kill('SIGSTOP', cmd_pid)
    elsif @job.status == 'resuming'
      Process.kill('SIGCONT', cmd_pid)
    end
  rescue
    # command is no longer running
    pipe_exitstatus_write.close
    exitstatus = pipe_exitstatus_read.read
    # save exit status to DB
    break
  end
  sleep 1
end
NOTE: I cannot have the parent poll the command output pipe because the parent would then be blocked waiting for the pipe to close. It would not be able to pause and resume the command via the control loop.
This code seems to do what you want, and may be illustrative.
cmd = "for i in {1..6600}; do echo '123456789'; done"
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
#exitstatus = :not_done
Thread.new do
Process.wait(cmd_pid);
#exitstatus = $?.exitstatus
end
pipe_cmd_out.close
out = pipe_cmd_in.read;
sleep(0.1) while #exitstatus == :not_done
puts "child: cmd out length = #{out.length}; Exit status: #{#exitstatus}"
In general, sharing data between threads (@exitstatus) requires more care, but it works here because it's only written once, by the thread, after initialization. (It turns out $?.exitstatus can return nil, which is why I initialized it to something else.) The call to sleep() is unlikely to execute even once, since the read() just above it won't complete until the spawned process has closed its stdout.
Indeed, your diagnosis is likely correct. You could implement a select and read loop on the pipe while waiting for the process to end, but likely you can get what you want more simply with stdlib Open3::capture2e.
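For reference, a minimal sketch of that simpler approach, reusing the test command from the question:

require 'open3'

cmd = "for i in {1..6600}; do echo '123456789'; done"
# capture2e merges stdout and stderr and keeps draining the pipe while
# the child runs, so the ~64KB pipe buffer never fills up and blocks it.
out, status = Open3.capture2e(cmd)
puts "out length = #{out.length}, exit status = #{status.exitstatus}"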

C - passing an unknown command into execvp()

I'm writing a fake shell, where I create a child process and then call execvp(). In a normal shell, when I enter an unknown command such as 'hello', it prints 'hello: Command not found.' However, when I pass hello into execvp(), it doesn't return any error by default and just continues running the rest of my program as if nothing happened. What's the easiest way to find out whether anything was actually run? Here's my code:
if (fork() == 0)
{
    execvp(cmd, args);
}
else
{
    int status = 0;
    int corpse = wait(&status);
    printf("Child %d exited with a status of %d\n", corpse, status);
}
I know that if corpse < 0, then it's an unknown command, but there are other conditions in my code not listed where I don't want to wait (such as if & is entered at the end of a command). Any suggestions?
All of the exec methods can return -1 if there was an error (errno is set appropriately). You aren't checking the result of execvp so if it fails, the rest of your program will continue executing. You could have something like this to prevent the rest of your program from executing:
if (execvp(cmd, args) == -1)
    exit(EXIT_FAILURE);
You also want to check the result of fork() for <0.
You have two independent concerns.
1) The first is the return value of execvp. It shouldn't return at all; if it does, there is a problem. Here's what I get when execvp'ing a bad command. You don't want to wait if execvp fails; always check the return values.
int res = execvp(argv[1], argv);
printf ("res is %i %s\n", res, strerror(errno));
// => res is -1 No such file or directory
2) The other concern is background processes and such. That's the job of a shell and you're going to need to figure out when your program should wait immediately and when you want to save the pid from fork and wait on it later.
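Putting both points together, a minimal sketch of the fork/exec/wait sequence, using a deliberately unknown command:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *args[] = { "hello", NULL };   /* deliberately unknown command */
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        execvp(args[0], args);
        /* Only reached if execvp failed. */
        fprintf(stderr, "%s: %s\n", args[0], strerror(errno));
        _exit(127);                     /* shell convention for "command not found" */
    }
    int status = 0;
    pid_t corpse = waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("Child %d exited with status %d\n", (int)corpse, WEXITSTATUS(status));
    return 0;
}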

How to trace a program from its very beginning without running it as root

I'm writing a tool that calls through to DTrace to trace the program that the user specifies.
If my tool uses dtrace -c to run the program as a subprocess of DTrace, not only can I not pass any arguments to the program, but the program runs with all the privileges of DTrace—that is, as root (I'm on Mac OS X). This makes certain things that should work break, and obviously makes a great many things that shouldn't work possible.
The other solution I know of is to start the program myself, pause it by sending it SIGSTOP, pass its PID to dtrace -p, then continue it by sending it SIGCONT. The problem is that either the program runs for a few seconds without being traced while DTrace gathers the symbol information or, if I sleep for a few seconds before continuing the process, DTrace complains that objc<pid>:<class>:<method>:entry matches no probes.
Is there a way that I can run the program under the user's account, not as root, but still have DTrace able to trace it from the beginning?
Something like sudo dtruss -f sudo -u <original username> <command> has worked for me, but I felt bad about it afterwards.
I filed a Radar bug about it and had it closed as a duplicate of #5108629.
Well, this is a bit old, but why not :-)..
I don't think there is a way to do this simply from command line, but as suggested, a simple launcher application, such as the following, would do it. The manual attaching could of course also be replaced with a few calls to libdtrace.
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    pid_t pid = fork();
    if (pid == 0) {
        setuid(123);
        seteuid(123);
        ptrace(PT_TRACE_ME, 0, NULL, 0);
        execl("/bin/ls", "/bin/ls", NULL);
    } else if (pid > 0) {
        int status;
        wait(&status);
        printf("Process %d started. Attach now, and press enter.\n", pid);
        getchar();
        ptrace(PT_CONTINUE, pid, (caddr_t) 1, 0);
    }
    return 0;
}
This script takes as a parameter the name of the executable you want DTrace to monitor (for an app bundle this is the Info.plist's CFBundleExecutable); you can then launch the target app after this script is running:
string gTarget; /* the name of the target executable */

dtrace:::BEGIN
{
    gTarget = $$1; /* get the target execname from 1st DTrace parameter */

    /*
     * Note: DTrace's execname is limited to 15 characters, so if $$1 has more
     * than 15 characters the simple string comparison "($$1 == execname)"
     * will fail. We work around this by copying the parameter passed in $$1
     * to gTarget and truncating that to 15 characters.
     */
    gTarget[15] = 0; /* truncate to 15 bytes */

    gTargetPID = -1; /* invalidate target pid */
}

/*
 * capture target launch (success)
 */
proc:::exec-success
/gTarget == execname/
{
    gTargetPID = pid;
}

/*
 * detect when our target exits
 */
syscall::*exit:entry
/pid == gTargetPID/
{
    gTargetPID = -1; /* invalidate target pid */
}

/*
 * capture open arguments
 */
syscall::open*:entry
/(pid == gTargetPID) || progenyof(gTargetPID)/
{
    self->arg0 = arg0;
    self->arg1 = arg1;
}

/*
 * track opens
 */
syscall::open*:return
/(pid == gTargetPID) || progenyof(gTargetPID)/
{
    this->op_kind = ((self->arg1 & O_ACCMODE) == O_RDONLY) ? "READ" : "WRITE";
    this->path0 = self->arg0 ? copyinstr(self->arg0) : "<nil>";
    printf("open for %s: <%s> #%d",
        this->op_kind,
        this->path0,
        arg0);
}
If the other answer doesn't work for you, can you run the program in gdb, break in main (or even earlier), get the pid, and start the script? I've tried that in the past and it seemed to work.
Create a launcher program that will wait for a signal of some sort (not necessarily a literal signal, just an indication that it's ready), then exec() your target. Now dtrace -p the launcher program, and once dtrace is up, let the launcher go.
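A minimal sketch of such a launcher (here the "go" indication is simply a line on stdin, and the target command is taken from the arguments):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc < 2)
        return 2;
    /* Print our pid so the caller can attach with: dtrace -p <pid> */
    printf("%d\n", (int)getpid());
    fflush(stdout);
    getchar();                 /* block until the caller says go */
    execvp(argv[1], &argv[1]); /* replace ourselves with the target */
    perror("execvp");
    return 127;
}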
dtruss has the -n option, with which you can specify the name of the process you want to trace without starting it (credit to the latter part of @kenorb's answer at https://stackoverflow.com/a/11706251/970301). So something like the following should do it:
sudo dtruss -n "$program"
$program
There exists a tool darwin-debug that ships in Apple's CLT LLDB.framework which will spawn your program and pause it before it does anything. You then read the pid out of the unix socket you pass as an argument, and after attaching the debugger/dtrace you continue the process.
darwin-debug will exec itself into a child process <PROGRAM> that is
halted for debugging. It does this by using posix_spawn() along with
darwin specific posix_spawn flags that allows exec only (no fork), and
stop at the program entry point. Any program arguments <PROGRAM-ARG> are
passed on to the exec as the arguments for the new process. The current
environment will be passed to the new process unless the "--no-env"
option is used. A unix socket must be supplied using the
--unix-socket=<SOCKET> option so the calling program can handshake with
this process and get its process id.
See my answer on related question "How can get dtrace to run the traced command with non-root priviledges?" [sic].
Essentially, you can start a (non-root) background process that waits one second for DTrace to start up (sorry for the race condition), and have dtrace snoop on that process's PID.
sudo true && \
(sleep 1; cat /etc/hosts) &; \
sudo dtrace -n 'syscall:::entry /pid == $1/ {#[probefunc] = count();}' $! \
&& kill $!
Full explanation in linked answer.
