Is there a way to create a single IO object whose read stream is the current process's STDOUT and whose write stream is the current process's STDIN?
This is similar to IO.popen, which runs a command as a subprocess and returns an IO object connected to the subprocess's standard streams. However, I don't want to run a subprocess; I want to use the current Ruby process.
Is there a way to create a single IO object
No. STDIN and STDOUT are two different file descriptors. An IO represents a single FD.
You can, however, make something that acts like an IO object.
This probably contains a bunch of bugs as duplicating FDs is often bad.
require "forwardable"
class IOTee < IO
extend Forwardable
def_delegators :#in,
:close_read,
:read,
:read_nonblock,
:readchar,
:readlines,
:readpartial,
:sysread
def_delegators :#out,
:close_write,
:syswrite,
:write,
:write_nonblock
def initialize(input,output)
#in = input
#out = output
end
end
io = IOTee.new(STDIN,STDOUT) # You would swap these
io.puts("hi")
hi
=> nil
Depending on what you're doing, IO.pipe and IO#reopen could also be helpful:
http://ruby-doc.org/core-2.1.0/IO.html#method-i-reopen
http://ruby-doc.org/core-2.1.0/IO.html#method-c-pipe
I suspect the above isn't really the problem you want to solve, but the problem you hit with your solution to it. Making a pipe and reopening STDOUT and STDIN to either end is probably what you're really after; combining them in a single IO object doesn't make much sense.
Also, if you were talking to yourself via STDIN and STDOUT, it would be very easy to reach a deadlock while you wait for yourself to read or write data.
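As a rough sketch of that pipe-and-reopen idea (only STDIN is rewired here, to keep the example deadlock-free):

reader, writer = IO.pipe
STDIN.reopen(reader)     # this process's STDIN now reads from the pipe
writer.puts 'hello'      # write into the pipe...
puts STDIN.gets          # ...and read it back via STDIN; prints "hello"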
I wrote a script that operates on my Mac just fine. It has this line of code in it:
filename = "2011"
File.open(filename, File::WRONLY|File::CREAT|File::EXCL) do |logfile|
  logfile.puts "MemberID,FirstName,LastName,BadEmail,gender,dateofbirth,ActiveStatus,Phone"
  # ... more logging ...
end
On Windows the script runs and creates the logfile 2011, but it doesn't actually write anything to it: the file is created, the script runs, but the logging doesn't happen.
Does anyone know why? I can't think of what would have changed in the actual functionality of the script that would cause the logging to cease.
First, for clarity I wouldn't use the flags to specify how to open/create the file. I'd use:
File.open(filename, 'a')
That's the standard mode for log files: you want to create the file if it doesn't exist, and you want to append if it does.
Logging typically requires writing to the same file multiple times through the running time of an application. People like to open the log and leave it open, but there's potential for problems if the code crashes before the file is closed or it gets flushed by Ruby or the OS. Also, the built-in buffering by Ruby and the OS can cause the file to buffer, then flush, which, when you're tailing the file, will make it jump in big chunks, which isn't much good if you're watching for something.
You can tell Ruby to force flushing immediately when you write to the file by setting sync = true:
logfile = File.open(filename, 'a')
logfile.sync = true
logfile.puts 'foo'
logfile.close
You could use fsync, which also forces the OS to flush its buffer.
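For example, the same pattern as above with fsync in place of sync:

logfile = File.open(filename, 'a')
logfile.puts 'foo'
logfile.fsync    # flushes Ruby's buffer and asks the OS to commit to disk
logfile.close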
The downside to forcing sync either way is that you negate the advantage of buffering your I/O. For normal file writing, like to a text file, don't use sync because you'll slow your application down. Instead, let normal I/O happen as Ruby and the OS want. But for logging it's acceptable, because logging should periodically send a line, not a big blob of text.
You could immediately flush the output, but that gets redundant and violates the DRY principle:
logfile = File.open(filename, 'a')
logfile.puts 'foo'
logfile.flush
logfile.puts 'bar'
logfile.flush
logfile.close
close flushes before actually closing the file I/O.
You can wrap your logging output in a method:
def log(text)
  # `log_file` is assumed to hold the path to your log file
  File.open(log_file, 'a') do |logout|
    logout.puts(text)
  end
end
That'll open, then close, the log file, and automatically flush the buffer, and negate the need to use sync.
Or you could take advantage of Ruby's Logger class and let it do all the work for you.
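For example, a minimal sketch using the standard library ('script.log' is a hypothetical filename):

require 'logger'

logger = Logger.new('script.log')    # opens the file and appends to it
logger.info 'MemberID,FirstName,LastName'
logger.close

Logger timestamps each entry and handles opening and flushing for you.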
I understand that there are various ways to spawn new processes in Ruby (e.g. backticks, system(), exec(), etc...)
However, is it possible to spawn a new process directly with code passed as a block?
Just like fork (fork { ... block ... }).

My problem is that I don't want to use fork, as I don't want to copy all the memory (problematic in my case because of writing); I want to spawn a "fresh" process without calling an external Ruby file.
fork is the only way to do this. However, on Linux at least, and I think on OSX too, fork is implemented as copy on write, meaning that until an area of memory is written to in the child process, it points directly to the area of the old parent process. So, no problem.
Edit: Never mind, the above is wrong. Here's what I would do:

code = "puts 'hi'"
result = nil
IO.popen('ruby', 'r+') do |pipe|
  pipe.puts code
  pipe.close_write    # send EOF so the child interpreter runs the code
  result = pipe.read
end
It's common knowledge in most programming languages that the flow for working with files is open-use-close. Yet I've often seen unmatched File.open calls in Ruby code, and moreover I found this gem of knowledge in the Ruby docs:
I/O streams are automatically closed when they are claimed by the garbage collector.
A friendly IRC take on the issue, from darkredandyellow:

[17:12] Yes, and also, the number of file descriptors is usually limited by the OS.
[17:29] I assume you can easily run out of available file descriptors before the garbage collector cleans up; in this case, you might want to close them yourself. "Claimed by the garbage collector" means that the GC acts at some point in the future, and it's expensive. There are a lot of reasons for explicitly closing files.
1. Do we need to explicitly close?
2. If yes, then why does the GC autoclose?
3. If not, then why the option?
I've often seen unmatched File.open calls in Ruby code
Can you give an example? I only ever see that in code written by newbies who lack the "common knowledge in most programming languages that the flow for working with files is open-use-close".
Experienced Rubyists either explicitly close their files, or, more idiomatically, use the block form of File.open, which automatically closes the file for you. Its implementation basically looks something like this:
def File.open(*args, &block)
  return open_with_block(*args, &block) if block_given?
  open_without_block(*args)
end

def File.open_without_block(*args)
  # do whatever ...
end

def File.open_with_block(*args)
  yield f = open_without_block(*args)
ensure
  f.close if f
end
Scripts are a special case. Scripts generally run for such a short time, and use so few file descriptors, that it simply doesn't make sense to close them, since the operating system will close them anyway when the script exits.
Do we need to explicitly close?
Yes.
If yes then why does the GC autoclose?
Because after it has collected the object, there is no way for you to close the file anymore, and thus you would leak file descriptors.
Note that it's not the garbage collector that closes the files. The garbage collector simply executes any finalizers for an object before it collects it. It just so happens that the File class defines a finalizer which closes the file.
If not then why the option?
Because wasted memory is cheap, but wasted file descriptors aren't. Therefore, it doesn't make sense to tie the lifetime of a file descriptor to the lifetime of some chunk of memory.
You simply cannot predict when the garbage collector will run. You cannot even predict if it will run at all: if you never run out of memory, the garbage collector will never run, therefore the finalizer will never run, therefore the file will never be closed.
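A minimal sketch of that failure mode, assuming a Unix-like system where /etc/hosts exists:

GC.disable                   # make sure the collector never runs
100_000.times do
  File.open('/etc/hosts')    # opened, never closed, immediately unreachable
end
# eventually raises Errno::EMFILE (too many open files)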
You should always close file descriptors after use; closing also flushes them. Often people use File.open or an equivalent method with a block to handle the file descriptor's lifetime. For example:
File.open('foo', 'w') do |f|
  f.write "bar"
end
In that example the file is closed automatically.
According to http://ruby-doc.org/core-2.1.4/File.html#method-c-open:

With no associated block, File.open is a synonym for ::new. If the optional code block is given, it will be passed the opened file as an argument and the File object will automatically be closed when the block terminates. The value of the block will be returned from File.open.

Therefore, the file will automatically be closed when the block terminates. :D
1. Yes.
2. In case you don't, or if there is some other failure.
3. See 2.
We can use File.read to read a file in Ruby:

file_variable = File.read("filename.txt")

Here file_variable holds the entire contents of the file as a string.
I am trying to write to a single file from multiple threads. The problem I'm running into is that I don't see anything being written to the file until the program exits.
You need to call file.flush to write it out. You can also set file.sync = true to have it flush automatically.
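For example, a minimal sketch with several threads appending to one file ('shared.log' is a hypothetical path):

file = File.open('shared.log', 'a')
file.sync = true              # flush after every write
threads = 4.times.map do |i|
  Thread.new do
    5.times { |n| file.write("thread #{i}, line #{n}\n") }   # one write call per line
  end
end
threads.each(&:join)
file.close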
What is the value of the sync method on your IO object? It is possible that either Ruby or the underlying OS is buffering the file output. Check out the references on buffering and syncing in the IO documentation.
I've written a Windows application using the native win32 API. My app will launch other processes and capture the output and highlight stderr output in red.
In order to accomplish this I create a separate pipe for stdout and stderr and use them in the STARTUPINFO structure when calling CreateProcess. I then launch a separate thread for each stdout/stderr handle that reads from the pipe and logs the output to a window.
This works fine in most cases. The problem I am having is that if the child process logs to stderr and stdout in quick succession, my app will sometimes display the output in the incorrect order. I'm assuming this is due to using two threads to read from each handle.
Is it possible to capture stdout and stderr in the original order they were written to, while being able to distinguish between the two?
I'm pretty sure it can't be done, short of having the spawned program write in packets and add a time-stamp to each. Without that, you can normally expect buffering in the standard library of the child process, so by the time the writes are even transmitted through the pipe to the parent, there's a good chance that they're already out of order.
In most implementations of stdout and stderr that I've seen, stdout is buffered and stderr is not. Basically what this means is that you aren't guaranteed they're going to be in order even when running the program on straight command line.
http://en.wikipedia.org/wiki/Stderr#Standard_error_.28stderr.29
The short answer: You cannot ensure that you read the lines in the same order that they appear on cmd.exe because the order they appear on cmd.exe is not guaranteed.
Not really. You would think so, but stdout is under the control of the system designers: exactly how and when stdout gets written is subject to the system scheduler, which by my testing is subordinated to issues that are not well documented.

I was writing some stuff one day and did some work on one of the devices on the system while I had the code open in the editor, and discovered that the system was giving real-time priority to the driver, leaving my carefully crafted C code somewhere around one tenth as important as the proprietary code.

Re-inverting that so that you get sequential ordering of the writes is going to be challenging, to say the least.
You can redirect stderr to stdout:
command_name 2>&1
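The same merge can be done from Ruby with a spawn option, at the cost of losing the ability to tell the two streams apart (a sketch, assuming a Unix-like ls):

merged = IO.popen(['ls', '/nonexistent', '/', err: [:child, :out]], &:read)
puts merged    # stdout and stderr arrive as a single interleaved stream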
This is possible in C using pipes, as I recall.
UPDATE: Oh, sorry -- missed the part about being able to distinguish between the two. I know TextMate did it somehow using kind of user-visible code... Haven't looked for a while, but I'll give it a peek. But after some further thought, could you use something like Open3 in Ruby? You'd have to watch both STDOUT and STDERR at the same time, but really no one should expect a certain ordering of output between these two.
UPDATE 2: Example of what I meant in Ruby:
require 'open3'

Open3.popen3('ruby print3.rb') do |stdin, stdout, stderr|
  loop do
    puts stdout.gets
    puts stderr.gets
  end
end
...where print3.rb is just:
loop do
  $stdout.puts 'hello from stdout'
  $stderr.puts 'hello from stderr'
end
Instead of throwing the output straight to puts, you could send a message to an observer which would print it out in your program. Sorry, I don't have Windows on this machine (or any immediately available), but I hope this illustrates the concept.
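For instance, the observer could be fed from a queue so that a single consumer controls the printing (a sketch, reusing the print3.rb script from above):

require 'open3'

queue = Queue.new
Open3.popen3('ruby print3.rb') do |stdin, stdout, stderr, wait_thr|
  # One reader thread per stream; each tags its lines with the source.
  [[:out, stdout], [:err, stderr]].each do |tag, stream|
    Thread.new { stream.each_line { |line| queue << [tag, line] } }
  end
  # A single consumer prints the lines in the order the parent received them.
  10.times do
    tag, line = queue.pop
    print "[#{tag}] #{line}"
  end
end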
I'm pretty sure that even if you don't separate them at all, you're still not guaranteed that they'll interleave in the correct order.
Since the intent is to annotate the output of an existing program, any possible interleaving of the two streams must be correct. The original developer will have placed appropriate flush() calls to ensure any mandatory ordering is honoured.
As previously explained, record each fragment that is written with a time stamp, and use this to recover the sequence actually seen by the output devices.