I want to test that echo 1 outputs 1, but
expect { `echo 1` }.to output("1").to_stdout
does not work. It says it outputs nothing to stdout, while
expect { print 1 }.to output("1").to_stdout
works just fine. Why doesn't the first one work?
expect { `echo 1` }.to output("1").to_stdout
doesn't work for two reasons:
1. echo runs in a subprocess, and RSpec's output matcher doesn't handle output from subprocesses by default. You can, however, use to_stdout_from_any_process instead of to_stdout to handle subprocesses, although it's a bit slower.
2. output only works for output sent to the same standard output stream as the Ruby process. Backticks open a new standard output stream, send the command's standard output to it, and return the contents of the stream when the command completes. I don't think you care whether you run your subprocess with backticks or some other way, so just use system (which sends the command's standard output to the Ruby process's standard output stream) instead; a quick sketch of the difference follows.
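To illustrate the difference between the two (the results shown in the comments are what you'd see in irb):

`echo 1`          # captures and returns "1\n"; nothing reaches Ruby's own stdout
system("echo 1")  # forwards the subprocess's stdout, so "1" is printed; returns true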
Addressing those two points gives us this expectation, which passes:
expect { system("echo 1") }.to output("1\n").to_stdout_from_any_process
(I had to change the expected value for it to pass, since echo adds a newline.)
As MilesStanfield pointed out, in the case you gave it's equivalent and easier to just test the return value of the backticks rather than use output (note that eq is a value matcher, so expect takes an argument rather than a block):
expect(`echo 1`).to eq "1\n"
That might or might not work in the more complicated case that you presumably have in mind.
The following code is a simplification of my current situation. I have a JSON log source which I continuously fetch and write to stdout with puts.
#!/usr/bin/env ruby
require "json"

loop do
  puts({ value: "foobar" }.to_json)
  sleep 1
end
I want to be able to pipe the output of this script into jq for further processing, but in a 'stream'-friendly way, using unix pipes. Running the above code like so:
./my_script | jq
Results in empty output. However, if I place an exit statement after the sleep call, the output is sent through the pipe to jq as expected. I was able to solve this problem by calling $stdout.flush after the puts call. While it works now, I'm not sure why: $stdout.sync is set to true by default (see IO#sync), and if sync were enabled, Ruby should be doing no output buffering, so calling $stdout.flush should not be required - yet it is.
My follow-up question is about using tail instead of jq. It seems to me that I should be able to pipe a text stream into tail the same way I pipe it into jq, but neither method (with the $stdout.flush call or without it) works - the output is just empty.
As #Ry points out in the comments, $stdout.sync is true by default in IRB, but this is not necessarily the same for scripts.
So you should set $stdout.sync = true to be sure to prevent buffering.
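Applied to the script from the question, that looks like this (same loop, with buffering disabled up front):

#!/usr/bin/env ruby
require "json"

$stdout.sync = true # flush after every write so each line reaches the pipe immediately

loop do
  puts({ value: "foobar" }.to_json)
  sleep 1
end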
In my shell script, I have a function that prints a message on the console. It can be called from any other function.
function print_message
{
    echo "message content"
}
The problem is that in shell, when a function is called with command substitution, commands like echo or printf that usually print data on standard output have their output redirected to the calling function instead, as a return value.
return_value=$(print_message) # this line prints nothing.
echo $return_value # This line prints the message. I don't want to have to do it.
I would like to avoid this behavior and print the message directly on standard - or error - output. Is there a way to do it?
Or am I just wrong to want to use functions in shell, and should I instead write one huge script to handle everything?
The $(...) calling syntax captures standard output. That is its job. That's what it does.
If you want static messages that don't get caught by that, then you can use standard error (though please don't do this for things that aren't error messages, debugging messages, etc.).
You can't have a function that writes to standard output without being captured by the $(...) context it runs in, because there's only one standard output stream. The best you could do is detect when you have a controlling terminal and write directly to that instead (but I'd advise against doing that most of the time as well).
To redirect to standard error for the function entirely you can do either of these.
print_message() {
    echo "message content" >&2
}

or

print_message() {
    echo "message content"
} >&2
The difference is immaterial when there is only one line of output, but with multiple lines of output the latter is likely to be slightly more efficient, since the redirection is set up once for the whole function rather than once per command (especially when the output stream happens to be a file).
Also, avoid the function keyword: it isn't POSIX and isn't as broadly portable.
You are explicitly saying "don't print the output directly! Put it in a variable so I can print it myself!".
You can simply stop doing that, and the message will be printed automatically:
$ cat yourscript
#!/bin/bash
function print_message
{
    echo "message content"
}
print_message
$ ./yourscript
message content
Invoking print_message inside $(...) redirects the output. If you don't want the output redirected then invoke the command without the $(...). E.g.
return_value=print_message # this line prints nothing.
echo $return_value # this line prints "print_message" - not the message.
Note: without the $(...), the function is never invoked by that assignment, so the "return value" is now just the name of the function.
In a shell script I wrote to test how functions return values, I came across odd, unexpected behavior. The code below assumes that on entering the function fntmpfile the first echo statement would print to the console and then the second echo statement would actually return the string to the calling main. Well, that's what I assumed, but I was wrong!
#!/bin/sh
fntmpfile() {
    TMPFILE=/tmp/$1.$$
    echo "This is my temp file dude!"
    echo "$TMPFILE"
}
mainname=main
retval=$(fntmpfile "$mainname")
echo "main retval=$retval"
What actually happens is the reverse. The first echo goes to the calling function and the second echo goes to STDOUT. Why is this, and is there a better way?
main retval=This is my temp file dude!
/tmp/main.19121
The whole reason for this test is that I am writing a shell script to do some database backups, and I decided to use small functions to do specific things - you know, to keep it clean instead of spaghetti code. One of the functions I was using was this:
log_to_console() {
    # arg1 = calling function name
    # arg2 = message to log
    printf "$1 - $2\n"
}
The whole problem with this is that a function that is supposed to return a string value ends up capturing the log_to_console output as well, depending on the order of things. I guess this is one of those gotchas of shell scripting that I wasn't aware of.
No, what's happening is that you are running your function, and it outputs two lines to stdout:
This is my temp file dude!
/tmp/main.4059
When you run it with $(...), bash intercepts the output and stores it in the variable. The string that is stored in the variable contains the first line break (the trailing one is removed). So what is really in your retval variable is the following C-style string:
"This is my temp file dude!\n/tmp/main.4059"
This is not really returning a string (you can't do that in a shell script); it's just capturing whatever output your function produces, which is why it doesn't work as you expected. Call your function normally if you want to log to the console.
Whenever you want to execute something on the command line, you can use the following syntax:
%x(command to run)
However, I want to catch an error or at least get the response so I can parse it correctly. I tried setting:
result = %x(command to run)
and using a try-catch
begin
  %x(command to run)
rescue
  "didn't work"
end
to no avail. How can I capture the results instead of having them printed out?
So this doesn't directly answer your question (won't capture the command's output). But instead of trying begin/rescue, you can just check the exit code ($?) of the command:
%x(command to run)
unless $? == 0
  puts "ack! error occurred"
end
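As an aside, $? is a Process::Status object rather than a plain integer, so you can also query it directly instead of comparing it with 0; a minimal sketch:

%x(command to run)  # placeholder command, as above
$?.success?         # true if the child exited with status 0
$?.exitstatus       # the integer exit code itself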
Edit: Just remembered this new project. I think it does exactly what you want:
https://github.com/envato/safe_shell
You might want to redirect stderr to stdout:
result = %x(command to run 2>&1)
Or if you want to separate the error messages from the actual output, you can use popen3:
require 'open3'
stdin, stdout, stderr = Open3.popen3("find /proc")
Then you can read the actual output from stdout and error messages from stderr.
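If you just want the two streams plus the exit status without managing pipes yourself, Open3.capture3 is a convenient alternative; a minimal sketch:

require 'open3'

# capture3 runs the command and returns its stdout, its stderr,
# and a Process::Status, with all the pipe handling done for you.
stdout_str, stderr_str, status = Open3.capture3("find", "/proc")
puts "succeeded" if status.success?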
Here's how to use Ruby's open3:
require 'open3'
include Open3
stdin, stdout, stderr = popen3('date')
stdin.close
puts
puts "Reading STDOUT"
print stdout.read
stdout.close
puts
puts "Reading STDERR"
print stderr.read
stderr.close
# >>
# >> Reading STDOUT
# >> Sat Jan 22 20:03:13 MST 2011
# >>
# >> Reading STDERR
popen3 returns IO streams for STDIN, STDOUT and STDERR (plus a thread that waits on the child process), allowing you to do I/O to the opened app.
Many command-line apps require their STDIN to be closed before they'll process their input.
You have to read from the returned STDOUT and STDERR pipes. They don't automatically shove content into a mystical variable.
In general, I like using a block with popen3 because it handles cleaning up behind itself.
Look through the examples in the Open3 doc. There's lots of nice functionality.
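For example, a sketch of the block form: the streams and a wait thread are yielded, and everything is closed automatically when the block returns.

require 'open3'

Open3.popen3('date') do |stdin, stdout, stderr, wait_thr|
  stdin.close                                      # signal EOF to the child
  puts "STDOUT: #{stdout.read}"
  puts "STDERR: #{stderr.read}"
  puts "Exit status: #{wait_thr.value.exitstatus}" # wait_thr.value is a Process::Status
end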
You need a mix of #Cam's answer and #tonttu's answer.
There used to be a decent explanation of $? and other special variables at http://blog.purifyapp.com. Edit: that domain is now in the hands of a domain squatter and scammer.
result = %x(command to run 2>&1)
unless $? == 0 # check whether the child process exited cleanly
  puts "got error #{result}"
end
I have a Broken pipe (Errno::EPIPE) error popping up, and I don't understand what it is or how to fix it. The full error is:
example.rb:19:in `write': Broken pipe (Errno::EPIPE)
from example.rb:19:in `print'
from example.rb:19
Line 19 of my code is:
vari.print("x=" + my_val + "&y=1&z=Add+Num\r\n")
It means that whatever connection print is writing to is no longer connected. Presumably the program's output was being piped into some other program:
% ruby_program | another_program
What's happened is that another_program has exited sometime before the print in question.
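You can reproduce this with any pipeline whose reader exits early; for example, head exits after ten lines, and a later puts then raises Errno::EPIPE with a stack trace:

% ruby -e '(1..1e5).each { |i| puts i }' | head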
Note:
The 1st section applies to Ruby scripts designed to act as terminal-based command-line utilities, assuming they require no custom handling or cleanup on receiving SIGPIPE, and assuming that you want them to exhibit the behavior of standard Unix utilities such as cat, which terminate quietly with a specific exit code when receiving SIGPIPE.
The 2nd section is for scripts that require custom handling of SIGPIPE, such as explicit cleanup and (conditional) output of error messages.
Opting into the system's default handling of SIGPIPE:
To complement wallyk's helpful answer and tokland's helpful answer:
If you want your script to exhibit the system's default behavior, as most Unix utilities (e.g., cat) do, use
Signal.trap("SIGPIPE", "SYSTEM_DEFAULT")
at the beginning of your script.
Now, when your script receives the SIGPIPE signal (on Unix-like systems), the system's default behavior will:
quietly terminate your script
report exit code 141 (which is calculated as 128 (indicating termination by signal) + 13 (SIGPIPE's number))
(By contrast, Signal.trap("PIPE", "EXIT") would report exit code 0, on receiving the signal, which indicates success.)
Note that in a shell context the exit code is often not apparent with a command such as ruby example.rb | head, because the shell (by default) only reports the last command's exit code.
In bash, you can examine ${PIPESTATUS[@]} to see the exit codes of all commands in the pipeline.
Minimal example (run from bash):
ruby -e "Signal.trap('PIPE','SYSTEM_DEFAULT');(1..1e5).each do|i| puts i end" | head
The Ruby code tries to output 100,000 lines, but head only outputs the first 10 lines and then exits, which closes the read end of the pipe that connects the two commands.
The next time the Ruby code tries to write to the write end of that now broken pipe (after filling up the pipe buffer), it triggers signal SIGPIPE, which terminates the Ruby process quietly, with exit code 141, which you can verify with echo ${PIPESTATUS[0]} afterwards.
By contrast, if you removed Signal.trap('PIPE','SYSTEM_DEFAULT'), i.e. with Ruby's default behavior, the command would break noisily (several lines of stderr output), and the exit code would be the nondescript 1.
Custom handling of SIGPIPE:
The following builds on donovan.lampa's helpful answer and adds an improvement suggested by
Kimmo Lehto, who points out that, depending on your script's purpose, receiving SIGPIPE shouldn't always terminate quietly, because it may indicate a legitimate error condition, notably in network code such as code for downloading a file from the internet.
He recommends the following idiom for that scenario:
begin
  # ... The code that could trigger SIGPIPE
rescue Errno::EPIPE
  # ... perform any cleanup, logging, ... here
  # Raise an exception - which translates into stderr output -
  # but only when outputting directly to a terminal.
  # That way, failure is quiet inside a pipeline, such as when
  # piping to standard utility `head`, where SIGPIPE is an expected
  # condition.
  raise if $stdout.tty?
  # If the stack trace that the `raise` call results in is too noisy,
  # use something like the following instead, which outputs just the
  # error message itself to stderr:
  #   $stderr.puts $! if $stdout.tty?
  # Or, even simpler:
  #   warn $! if $stdout.tty?
  # Exit with the usual exit code that indicates termination by SIGPIPE.
  exit 141
end
As a one-liner (a modifier rescue cannot name an exception class, so even on one line a begin/end is needed):
begin; ...; rescue Errno::EPIPE; raise if $stdout.tty?; exit 141; end
Note: Rescuing Errno::EPIPE works because Ruby ignores the SIGPIPE signal by default, so the system call writing to the pipe returns to the caller (instead of the calling process getting terminated), namely with standard error code EPIPE, which Ruby surfaces as the exception Errno::EPIPE.
Although signal traps do work, as tokland said, they are defined application-wide and can cause unexpected behavior if you want to handle a broken pipe differently somewhere else in your app.
I'd suggest just using a standard rescue since the error still inherits from StandardError. More about this module of errors: http://ruby-doc.org/core-2.0.0/Errno.html
Example:
begin
  vari.print("x=" + my_val + "&y=1&z=Add+Num\r\n")
rescue Errno::EPIPE
  puts "Connection broke!"
end
Edit: It's important to note (as #mklement0 does in the comments) that if you were originally piping your output using puts to something expecting output on STDOUT, the final puts in the code above will raise another Errno::EPIPE exception. It's probably better practice to use STDERR.puts anyway.
begin
  vari.print("x=" + my_val + "&y=1&z=Add+Num\r\n")
rescue Errno::EPIPE
  STDERR.puts "Connection broke!"
end
#wallyk is right about the problem. One solution is to capture the signal with Signal.trap:
Signal.trap("PIPE", "EXIT")
If you are aware of some problem with this approach, please add a comment below.
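For instance, a minimal sketch (the trap must be installed before any write that might hit a broken pipe):

Signal.trap("PIPE", "EXIT") # exit quietly (status 0) when the pipe closes

(1..100_000).each { |i| puts i } # now safe to pipe into head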