Merge stdout and stderr in Popen - ruby

In Ruby's popen/spawn, how do I merge both STDOUT and STDERR into a single stream without resorting to 2>&1?
In Python, this would be:
>>> import subprocess
>>> subprocess.check_output('my_prog args', stderr=subprocess.STDOUT, shell=True)
Note the stderr argument.
I use Open3 - as I don't want just stdout - but it already separates them into two streams.

Using the code from your other question, here you go:
require 'open3'
cmd = 'a_prog --arg ... --arg2 ...'
Open3.popen2({"MYVAR" => "a_value"}, cmd, {:err => [:child, :out]}) { |i, o|
  # This output includes stderr as well
  output = o.read
  repr = "$ #{cmd}\n#{output}"
}
A couple of changes:
The third parameter to popen2 redirects stderr to stdout. Note that it needs to be the spawned process's stdout, not the system-wide stdout, so you need to specify :child's :out.
You need to use .popen2 instead of .popen3: the redirection seems to be ignored if you let popen3 open its separate third stream for stderr.
Because you're using .popen2, you only pass |i,o| to the block. (A one-shot alternative is sketched just below.)
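If you only need the merged output as a single string (the closest analogue to the Python check_output call above), Open3.capture2e wraps this whole pattern; a minimal sketch, reusing the command and environment from this answer:
require 'open3'
cmd = 'a_prog --arg ... --arg2 ...'
# capture2e merges the child's stderr into its stdout and returns the
# combined string together with a Process::Status
output, status = Open3.capture2e({"MYVAR" => "a_value"}, cmd)
puts output
raise "command failed" unless status.success?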

A bit late, but take a look at Open3.popen2e - docs.
This behaves exactly like popen3, but merges stderr into stdout, which arrives as the second argument to the block.
So you can simply do:
cmd = 'a_prog --arg ... --arg2 ...'
Open3.popen2e(cmd) { |input, output|
  # Process as desired, with output containing both stdout and stderr
}
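If you also need the exit status, popen2e yields a wait thread as a third block argument; a small sketch along the same lines:
require 'open3'
cmd = 'a_prog --arg ... --arg2 ...'
Open3.popen2e(cmd) do |input, output, wait_thr|
  input.close                       # nothing to send on stdin
  output.each_line { |l| puts l }   # stdout and stderr, interleaved
  puts "exit status: #{wait_thr.value.exitstatus}"
end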

Related

Is it possible for a Bash function to output a value to a file descriptor and assign only that to a variable?

I wanted to simulate a return value for a Bash function, and I was wondering if it's possible to use an ad hoc file descriptor to pass the value.
In other words:
function myfunction {
  # print `stdout_value` to stdout
  # print `stderr_value` to stderr
  # print `return_value` to FD3 (or other)
}
# the values printed to stderr/stdout should be printed, but only
# `return_value` should be assigned to `myvalue`
myvalue=$(myfunction <FDs manipulation>)
Yes, it is. For that to work, you first need to save stdout to another descriptor for the whole call; then, for the command substitution, redirect file descriptor 3 to its stdout (so that what's written to it can be captured) and its stdout to the saved stdout of the whole call. E.g.:
{ myvalue=$(myfunction 3>&1 1>&4); } 4>&1
Doing this for each call to that function sounds like a lot of work, though. You'd be better off following the convention:
use stderr for reporting errors, warnings, and debug info (including logs and prompts),
use stdout for showing results,
and use a return statement to denote overall success/failure.
It's probably easiest to make a global copy of stdout first. For example:
#!/bin/sh
exec 4>&1
myfunction() {
  echo stdout
  echo stderr >&2
  echo fd3 >&3
} 3>&1 1>&4
v=$(myfunction) # assigns the string "fd3"
echo v="$v"

Set redirection from inside function in bash

I’d like to do something of this form:
one() {
  redirect_stderr_to '/tmp/file_one'
  # function commands
}
two() {
  redirect_stderr_to '/tmp/file_two'
  # function commands
}
one
two
This would run one and two in succession, redirecting stderr to the respective files. The working equivalent would be:
one() {
  # function commands
}
two() {
  # function commands
}
one 2> '/tmp/file_one'
two 2> '/tmp/file_two'
But that is a bit ugly. I'd rather just have all the redirection instructions inside the functions themselves; it'd be easier to manage. I have a feeling this might not be possible, but I want to be sure.
The simplest and most robust approach is to use function-level redirection: note how, below, a redirection is applied to each whole function, after its closing }, and is scoped to that function (no need to reset anything afterwards):
# Define functions with redirected stderr streams.
one() {
  # Write something to stderr:
  echo one >&2
} 2> '/tmp/file_one'
two() {
  # Write something to stderr:
  echo two >&2
} 2> '/tmp/file_two'
one
two
# Since the function-level redirections are localized to each function,
# this will again print to the terminal.
echo "done" >&2
Documentation links (thanks, @gniourf_gniourf):
Shell Functions in the Bash reference manual
Function Definition Command in the POSIX spec
Note that this implies that the feature is POSIX-compliant, and you can use it in sh (POSIX-features-only) scripts, too.
You can use the exec builtin (note that the effect of exec is not canceled once the function returns):
one() {
  exec 2> '/tmp/file_one'
  # function commands
}
two() {
  exec 2> '/tmp/file_two'
  # function commands
}
one                     # stderr redirected to /tmp/file_one
echo "hello world" >&2  # this is also redirected to /tmp/file_one
exec 2> "$(tty)"        # here you are restoring the default again (your terminal)
echo "hello world" >&2  # this is written to your terminal
two                     # stderr redirected to /tmp/file_two
Now, if you want to apply the redirection only to the function, the best approach is in mklement0's answer.
You can also use a subshell whose stderr is redirected:
#!/bin/bash
one() {
  (
    # function commands
  ) 2> /tmp/file_one
}
two() {
  (
    # function commands
  ) 2> /tmp/file_two
}
one
two

Read STDOUT and STDERR from subprocess continuously

I'm using IO.popen to start a subprocess, but I only get the result of everything that happened in the time it took for the subprocess to run (sometimes 5 minutes or whatever) when the subprocess exits. I really need to be able to see everything the subprocess writes to stderr and stdout as and when it happens.
So far I haven't found anything that works like this, but I'm sure it's possible.
If you need to get output in real time, I would recommend using the stdlib PTY instead of popen.
Something like this:
require 'pty'
cmd = 'echo a; sleep 1; cat /some/file; sleep 1; echo b'
PTY.spawn cmd do |r, w, pid|
  begin
    r.each_line { |l| puts "#{Time.now.strftime('%M:%S')} - #{l.strip}" }
  rescue Errno::EIO
    # the process closed the pty when it finished; simply ignore this
  ensure
    ::Process.wait pid
  end
end
abort "#{cmd} failed" unless $? && $?.exitstatus == 0
> 33:36 - a
> 33:37 - cat: /some/file: No such file or directory
> 33:38 - b
This way you get the output instantly, just as in a terminal.
You might want to use Open3.popen3 from the standard library; it gives access to stdin, stdout, and stderr as streams.
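For example, a minimal sketch that prints both streams as they arrive (one reader thread per stream, so neither pipe buffer fills up and blocks the child; my_prog is a placeholder):
require 'open3'
Open3.popen3('my_prog args') do |stdin, stdout, stderr, wait_thr|
  stdin.close
  readers = { 'out' => stdout, 'err' => stderr }.map do |name, stream|
    Thread.new do
      stream.each_line { |line| puts "#{name}: #{line}" }
    end
  end
  readers.each(&:join)
  puts "exit status: #{wait_thr.value.exitstatus}"
end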

Get error output of a system call?

When I do something like the following:
output = `identify some_file`
output == "Output of identify"
But when...
output = `identify non_existant_file`
output != "Error output of identify"
How can I get the error output of system calls?
I found out the answer. The output is being sent to stderr, so I can just add the following at the end of the command to redirect stderr to stdout:
output = `identify any_file 2>&1`
output == "Error or output of identify"
You may use Open3.popen3.
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/open3/rdoc/Open3.html#method-c-popen3
popen3(*cmd, &block)
Open stdin, stdout, and stderr streams and start external executable.
Open3.popen3([env,] cmd... [, opts]) {|stdin, stdout, stderr, wait_thr|
  pid = wait_thr.pid # pid of the started process.
  ...
  exit_status = wait_thr.value # Process::Status object returned.
}
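Applied to the identify example, Open3.capture3 from the same module hands back the two streams as separate strings, with no shell redirection needed; a minimal sketch:
require 'open3'
stdout, stderr, status = Open3.capture3('identify', 'non_existant_file')
puts "stdout: #{stdout}"
puts "stderr: #{stderr}"       # the error output of identify ends up here
puts "success? #{status.success?}"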

How can I log STDOUT and STDERR to a single file and show only STDERR in the console using Ruby?

I can do something like this in bash:
myCommand arg1 arg2 2>&1 >> myLogFolder/myLogFile.log | tee -a myLogFolder/myLogFile.log
I would like to be able to say this instead:
log.rb myLogFolder/myLogFile.log myCommand arg1 arg2
Using the log.rb script would accomplish two things:
Result in a simpler statement with fewer redirection tricks and only a single specification of the log file.
Create the log folder, if necessary.
I was looking at Ruby's popen and spawn options, but I don't see a way to split the STDERR stream to two destinations.
This Ruby script satisfies my needs, but maybe there is a better way.
logPath = ARGV[0]
logFolder = File.dirname(logPath)
command = ARGV.slice(1..-1).join(" ")
`mkdir -p #{logFolder}`
exec "#{command} 2>&1 >> #{logPath} | tee -a #{logPath}"
Try this article about implementing functionality similar to tee in Ruby. You should be able to use it as a starting point for a pure-Ruby (or at least exec-free) implementation of your shell code.
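For instance, here is a rough pure-Ruby sketch of such a log.rb built on Open3.popen3 (no exec, no shell redirection): stdout goes only to the log, stderr goes to both the log and the console. The interleaving of the two streams in the log is not exactly preserved, and the names and structure here are illustrative rather than definitive:
#!/usr/bin/env ruby
require 'open3'
require 'fileutils'
log_path = ARGV[0]
FileUtils.mkdir_p(File.dirname(log_path))  # create the log folder, if necessary
File.open(log_path, 'a') do |log|
  log.sync = true
  Open3.popen3(*ARGV[1..-1]) do |stdin, stdout, stderr, wait_thr|
    stdin.close
    threads = [
      # stdout goes to the log only
      Thread.new { stdout.each_line { |line| log.write(line) } },
      # stderr goes to the log and to the console
      Thread.new { stderr.each_line { |line| log.write(line); $stderr.write(line) } },
    ]
    threads.each(&:join)
    exit wait_thr.value.exitstatus
  end
end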
You can use the Open3 module (see the manual). It gives you three streams: stdin, stdout, and stderr.
However, you are not able to preserve the relative order of stdout and stderr this way.
Example:
#include <stdio.h>
int main(int argc, char *argv[]) {
    fprintf(stdout, "Normal output\n");
    fprintf(stderr, "Error output\n");
    fprintf(stdout, "Normal output\n");
    return 0;
}
It is captured as:
Normal output
Normal output
Error output
The only way to preserve the order is to run the program twice. :-(
Sample Ruby code:
#!/usr/bin/env ruby
require 'open3'
require 'pathname'

logfile = ARGV.first
cmd = ARGV.drop(1).join(" ")

dir = Pathname(logfile).dirname
Dir.mkdir(dir) unless File.directory?(dir)

file = File.open(logfile, "w")
stdin, stdout, stderr = Open3.popen3(cmd)
stdin.close

# Drain stdout first, then stderr; the log gets both, the console only stderr.
file.puts(stdout.read)
error = stderr.read
file.puts(error)
puts error
file.close
