I have the following Ruby code:
system "clang test.c -o test"
system "./test"
When I execute the above Ruby code, the stdout is printed, but the stderr is not.
When I run clang test.c -o test && ./test from the terminal, I get a segmentation fault, but the Ruby script does not print this.
How can I get the Ruby script to print all output from the system command?
The "segmentation fault" message is not printed by your test program itself. If you see it in your shell, it is printed by the shell as a result of your program segfaulting.
Ruby does not print this message. However, after the system call returns, you can check your program's exit status using the $? variable, which holds a Process::Status object.
On this object you can, for example, call signaled? to check whether the process was terminated by a signal, and then obtain the signal with the termsig method. The signal number for a segmentation fault is 11 (SIGSEGV).
With your example, it could look like this:
system "./test"
# => false
$?.success?
# => nil
$?.signaled?
# => true
$?.termsig
# => 11
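Putting these methods together, a small helper (the segfaulted? name is my own, not part of Ruby) can reproduce the shell's diagnostic from inside your script:

```ruby
# Sketch: detect a segfault after system(). Signal.list["SEGV"] is 11 on Linux.
def segfaulted?(status)
  !status.nil? && status.signaled? && status.termsig == Signal.list["SEGV"]
end

system("./test")  # the compiled C program from the question
warn "Segmentation fault" if segfaulted?($?)
```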
I have a simple shell script written in Ruby that runs some predefined commands and saves the output strings.
The script works well, but I need a way to branch conditionally if a command fails. I've tried using the $? object, but the script exits before it gets there.
#!/usr/bin/ruby
def run_command(cmd)
`#{cmd}`
if $?.success?
# continue as normal
else
# ignore this command and move on
end
end
run_command('ls')
run_command('not_a_command')
# Output:
# No such file or directory - not_a_command (Errno::ENOENT)...
I've tried $?.exitstatus or even just puts $? but it always exits before it gets there because the script is obviously running the command before hitting that line.
Is there a way to check if the command will run before actually running it in the script?
Hope that's clear, thanks for your time!
Use system (which returns true, false, or nil depending on how the command exits) instead of backticks (which return the output string and raise Errno::ENOENT if the command cannot be found):
if system(cmd)
...
else
...
end
If you want it to run quietly without polluting your logs / output:
system(cmd, out: File::NULL, err: File::NULL)
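Applied to the run_command helper from the question, a minimal sketch might look like this (the "ok"/"skipped" return values are just for illustration):

```ruby
# Sketch: system returns true on exit status 0, false on a non-zero exit,
# and nil when the command cannot be run at all -- no exception is raised,
# unlike backticks.
def run_command(cmd)
  if system(cmd, out: File::NULL, err: File::NULL)
    "ok"       # continue as normal
  else
    "skipped"  # non-zero exit or command not found
  end
end

run_command("ls")             # => "ok"
run_command("not_a_command")  # => "skipped"
```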
I'm using the sass-lint NPM package to style-check .scss files from within a Rake task, thus:
sass_lint_cmd = "sass-lint --config #{ui_library_path}/scss/.sass-lint.yml '#{ui_library_path}/scss/*.scss' -v -q --max-warnings=0"
output, status = Open3.capture2e(sass_lint_cmd)
raise IOError, output unless status == 0
This basically works, insofar as in the event of any linter warnings or errors the Rake task aborts and the sass-lint output, including errors, is dumped to the console.
However, when run directly, sass-lint produces nice colorized output. When captured by capture2e, the colors are lost.
I assume the issue is that sass-lint (or Node) detects it's not running in a TTY, and so outputs plain text. Is there some Process.spawn() option I can pass to Open3.capture2e(), or some other method, by which I can make it think it's running in a TTY?
(Note: I did look at Trick an application into thinking its stdout is a terminal, not a pipe, but the BSD version of script that ships with macOS doesn't seem to support either the --return or the -c options, and I'm running on macOS.)
Update: I tried script -q /dev/null and PTY.spawn() as per Piccolo's answer, but no luck.
script -q /dev/null … works from the command line, but doesn't work in Open3.capture2e() (it runs, but produces monochrome output and a spurious Bundler::GemNotFound stack trace).
As for PTY.spawn(), replacing the code above with the following:
r, _w, pid = PTY.spawn(scss_lint_command)
_, proc_status = Process.wait2(pid)
output, status = [r, proc_status.exitstatus]
(warn(output); raise) unless status == 0
the subprocess never seems to complete; if I ps in another terminal it shows as in interruptible sleep status. Killing the subprocess doesn't free up the parent process.
The same happens with the block form.
output, status = nil
PTY.spawn(scss_lint_command) do |r, _w, pid|
_, proc_status = Process.wait2(pid)
output, status = [r, proc_status.exitstatus]
end
(warn(output); raise) unless status == 0
Have you considered using Ruby's excellent pty library instead of Open3?
Pseudo terminals, per the thread you linked, seem to emulate an actual TTY, so the script wouldn't know it wasn't in a terminal unless it checked for things like $TERM, but that can also be spoofed with relative ease.
According to this flowchart, the downside of using pty instead of Open3 is that STDERR does not get its own stream.
Alternatively, per this answer, also from the thread you linked, script -q /dev/null $COMMAND appears to do the trick on Mac OS X.
On Macs, ls -G colorizes the output of ls, and as a brief test, I piped ls -G into cat as follows:
script -q /dev/null ls -G | cat
and it displayed with colors, whereas simply running
ls -G | cat
did not.
This method also worked in irb, again using ls -G:
$ touch regular_file
$ touch executable_file
$ mkdir directory
$ chmod +x executable_file
$ irb
2.4.1 :001 > require 'Open3'
=> true
2.4.1 :002 > output, status = Open3.capture2e("ls -G")
=> ["directory\nexecutable_file\nregular_file\n", #<Process::Status: pid 39299 exit 0>]
2.4.1 :003 > output, status = Open3.capture2e("script -q /dev/null ls -G")
=> ["^D\b\b\e[1m\e[36mdirectory\e[39;49m\e[0m \e[31mexecutable_file\e[39;49m\e[0m regular_file\r\n", #<Process::Status: pid 39301 exit 0>]
2.4.1 :004 >
I am running Ruby as a wrapper around an EDA tool on RH5. The tool segfaulted, but the wrapper's command line showed no indication of this. Only when running the command that Ruby launched by hand did we learn that the segfault had happened. How can I get the segfault message within the wrapper?
Thanks.
From Kernel#system documentation:
system returns true if the command gives zero exit status, false for non zero exit status. Returns nil if command execution fails. An error status is available in $?.
So if you just want to make sure that everything went OK, check whether the return value of system was true. If you want to specifically detect a segmentation fault, the return value will be false and $? will look like this:
puts $?
#=> pid 3658 SIGSEGV (signal 11)
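To surface that message from the wrapper itself, here is a hedged sketch (describe_exit is an illustrative name, and your EDA command would replace the placeholder):

```ruby
# Sketch: run a command and report how it ended. Signal.signame turns a
# signal number into its name (11 => "SEGV", 9 => "KILL").
def describe_exit(*cmd)
  system(*cmd, out: File::NULL, err: File::NULL)
  if $?.signaled?
    "killed by SIG#{Signal.signame($?.termsig)} (signal #{$?.termsig})"
  else
    "exited with status #{$?.exitstatus}"
  end
end

describe_exit("true")  # => "exited with status 0"
```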
I'm trying to execute a program using the system function in Ruby.
I need to capture the stdout and stderr of the program, so I'm using
a shell command that redirects stdout and stderr to files.
One important requirement is that I need to determine whether
the program exited normally or was killed by a signal.
The weird behavior I'm seeing is that when I redirect stdout and
stderr to files, $?.exited? is true even if the program was
killed by a signal! Here is a program that demonstrates the
problem:
#! /usr/bin/ruby
File.open("bad.c", "w") do |out|
out.print <<'EOF'
#include <stdio.h>
int main(void) {
int *p = 0;
*p = 42;
printf("%d\n", *p);
return 0;
}
EOF
end
raise "Couldn't compile bad.c" unless system("gcc -o bad bad.c")
system("./bad");
puts $?.exited?
system("./bad > out 2> err");
puts $?.exited?
The output of this program is
false
true
However, I would expect
false
false
since the program is killed by a segfault in both cases.
The command ruby -v produces the output
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]
Any explanations and/or workarounds would be greatly appreciated!
Since you are performing shell redirection on the second system() call, ruby needs to invoke your shell to do the work. Even though your program is being killed, the shell ends up executing just fine.
You can, instead, do the redirection directly in ruby:
system("./bad", out: 'out', err: 'err');
puts $?.exited? # => false
For more options, check out the documentation for spawn() - the options on system() are processed the same way.
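To see the difference directly, here is a sketch that stands in for ./bad with a Ruby one-liner that kills itself (so it runs without a C compiler), once through the shell and once with Ruby-level redirection:

```ruby
# Through the shell: /bin/sh is the immediate child and it exits normally,
# so $?.exited? is true even though the grandchild died from a signal.
system("ruby -e 'Process.kill(:KILL, Process.pid)' > /dev/null 2>&1")
puts $?.exited?    # => true

# With Ruby doing the redirection there is no shell in between, so $?
# describes the killed process itself.
system("ruby", "-e", "Process.kill(:KILL, Process.pid)",
       out: File::NULL, err: File::NULL)
puts $?.exited?    # => false
puts $?.signaled?  # => true
```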
I am trying to build an application that enables the user to interact with a command-line interactive shell, like IRB or Python. This means that I need to pipe user input into the shell and the shell's output back to the user.
I was hoping this was going to be as easy as piping STDIN, STDOUT, and STDERR, but most shells seem to respond differently to STDIN input as opposed to direct keyboard input.
For example, here is what happens when I pipe STDIN into python:
$ python 1> py.out 2> py.err <<EOI
> print 'hello'
> hello
> print 'goodbye'
> EOI
$ cat py.out
hello
$ cat py.err
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
NameError: name 'hello' is not defined
It seems that Python is interpreting the STDIN as a script file, and it does not print any of the interactive interface, like the ">>>" prompt at the beginning of each line. It also stops at the first error, since we do not see "goodbye" in the out file.
Here is what happens with irb (Interactive Ruby):
$ irb 1> irb.out 2> irb.err <<EOI
> puts 'hello'
> hello
> puts 'goodbye'
> EOI
$ cat irb.out
Switch to inspect mode.
puts 'hello'
hello
nil
hello
NameError: undefined local variable or method `hello' for main:Object
from (irb):2
from /path/to/irb:16:in `<main>'
puts 'goodbye'
goodbye
nil
$ cat irb.err
IRB responds differently from Python: namely, it keeps executing commands even after an error. However, it still lacks the interactive shell interface.
How can an application interact with an interactive shell environment?
Technically, your first sample is not piping the input to Python; it is coming from a file (a here-document), and yes, file input is treated differently.
The way to persuade a program that its input is coming from a terminal is using a pseudo-tty. There's a master side and a slave side; you'll hook the shell (Python, Ruby) to the slave side, and have your controlling program write to and read from the master side.
That's fairly tricky to set up. You may do better using expect or one of its clones to manage the pseudo-tty. Amongst other related questions, see How to perform automated Unix input?
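In Ruby, that machinery is the standard pty library mentioned in an earlier answer. Here is a minimal sketch of the idea; the child just reports whether it believes stdout is a terminal, but an interactive shell would print its prompts for exactly the same reason:

```ruby
require 'pty'

# PTY.spawn runs the command on the slave side of a pseudo-tty and hands us
# the master side, so the child believes it is attached to a real terminal.
PTY.spawn("ruby", "-e", "puts STDOUT.tty?") do |reader, writer, pid|
  puts reader.gets  # "true" -- under an ordinary pipe this would be "false"
  Process.wait(pid)
end
```

To drive a real REPL, you would then write commands to writer and use the expect standard library (reader.expect(/pattern/)) to wait for each prompt, which is essentially what the expect tool automates.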