Is there a way to have IO.select return a single input character without receiving an EOF? I would like to be able to read user input from the keyboard the same way I read from any other stream, like an open TCP socket connection. It would allow me to make an event loop like this:
loop {
  rd, _, _ = IO.select([long_lived_tcp_connection, stdin])
  case rd[0]
  when long_lived_tcp_connection
    handle_server_sent_event(rd[0].read)
  when stdin
    handle_keypress(rd[0].read)
  end
}
I've been looking into io/console but it doesn't quite give me this capability (although IO#getch comes pretty close).
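For reference, the closest I've gotten with io/console is something like this minimal sketch (reusing my hypothetical handle_keypress from above):
require 'io/console'
# getch puts the terminal into raw mode for a single read and returns one
# character immediately, without waiting for Enter or an EOF.
char = $stdin.getch
handle_keypress(char)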
You can set stdin to raw mode (taken from this answer):
begin
  # Save the current terminal settings so they can be restored on exit.
  state = `stty -g`
  `stty raw -echo -icanon isig`
  loop do
    rd, _, _ = IO.select([$stdin])
    handle_keypress(rd[0].getc)
  end
ensure
  # Restore the terminal to its original state.
  `stty #{state}`
end
IO#getc returns a single character from stdin. Another option is IO#read_nonblock to read all available data.
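A minimal sketch of the read_nonblock variant, reusing the asker's hypothetical handle_keypress (read_nonblock raises instead of blocking when no data is ready):
loop do
  rd, _, _ = IO.select([$stdin])
  begin
    # Drain whatever is currently available instead of blocking until EOF.
    handle_keypress(rd[0].read_nonblock(4096))
  rescue IO::WaitReadable
    next  # nothing to read after all; go back to select
  rescue EOFError
    break
  end
end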
To read one character at a time I would use:
STDIN.each_char {|char| p char}
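Note that in the terminal's default canonical mode, each_char won't yield anything until the user presses Enter. Combining it with io/console's raw mode delivers keypresses immediately - a minimal sketch:
require 'io/console'
# In raw mode the terminal stops line-buffering, so each_char yields each
# keypress as it happens; the block form restores the terminal afterwards.
$stdin.raw do
  $stdin.each_char do |char|
    p char
    break if char == "q"  # need an explicit exit, since raw mode disables Ctrl-C
  end
end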
I have the following class that is used to run a third-party command-line tool which I have no control over.
I run this in a QThread in a PyQt GUI.
I turn the GUI into an EXE using PyInstaller.
Problems are more prevalent when it is run as an EXE.
# imports assumed from the standard library's subprocess module
from subprocess import (Popen, PIPE, STDOUT, STARTUPINFO,
                        STARTF_USESHOWWINDOW, SW_HIDE, TimeoutExpired)

class CLI_Interface:

    def process_f(self, command, bsize=4096):
        self.kill_process(CLI_TOOL)  # CLI_TOOL is defined elsewhere in the application
        startupinfo = STARTUPINFO()
        startupinfo.dwFlags |= STARTF_USESHOWWINDOW
        startupinfo.wShowWindow = SW_HIDE
        p = Popen(command, stdout=PIPE, stderr=PIPE,
                  startupinfo=startupinfo, bufsize=bsize, universal_newlines=True)
        try:
            out, err = p.communicate(timeout=120)
        except TimeoutExpired:
            p.kill()
            out, err = p.communicate()
        return out.split(), err.split()

    def kill_process(self, proc):
        # Check process is running, kill it if it is,
        # return False if not.
        # Uses its own Popen for stderr >> stdout.
        # If we used the self.process_f method, it would create an infinite loop.
        startupinfo = STARTUPINFO()
        startupinfo.dwFlags |= STARTF_USESHOWWINDOW
        startupinfo.wShowWindow = SW_HIDE
        try:
            kill_proc = Popen("TaskKill /IM {} /T /F".format(proc), stdout=PIPE, stderr=STDOUT,
                              startupinfo=startupinfo, universal_newlines=True).communicate()[0]
            if 'ERROR' not in kill_proc.split():
                return True  # process killed
            else:
                self.kill_process(proc)
        except Exception as e:
            return False

    def download_data(self, code):
        """ download data from the device based on a 5 digit code """
        command = '"{}" -l {},{} {}'.format(CLI_TOOL_PATH,
                                            code[0], code[2], code[1])
        try:
            p = self.process_f(command)
            proc, err = p[0], p[1]
            try:
                if err[-2] == '-p':
                    return False
                return True
            except IndexError:
                if not proc:
                    return False  # this means there is no data but the file is still saved!!
                pass
            return True
        except Exception as e:
            return False

    def ....
    def ....
    def ....
Thread:
class GetDataThread(QThread):
    taskFinished = pyqtSignal()
    notConnected = pyqtSignal()

    def __init__(self, f, parent=None):
        super(GetDataThread, self).__init__(parent)
        self.f = f

    def run(self):
        is_dongle_connected()
        DD = cli.download_data(self.f)
        if not DD:
            self.notConnected.emit()
        else:
            self.taskFinished.emit()
I either get a done! or an error - this is normal when running from the command line.
Sometimes I get an empty list returned, and I put this back into a recursive loop after killing the program.
However, it does not seem to restart properly and the problem continues - it gets stuck in a loop of nothing.
Meanwhile, the CSV files the CLI tool produces are created as normal, yet I get no data from stdout/stderr.
Looking at the processes, conhost and the CLI tool are destroyed with no problem.
The GUI will continue to fail (until I unplug and plug in the dongle and/or restart the program/computer).
When I open the CLI and run the same command, it works fine or throws an error (which I catch in the program no problem).
I have tried setting a buffer, as some of the generated files can reach 2.4 MB.
I have tried setting a higher timeout to allow it to finish.
There does not seem to be a correlation with file size, though, and it can get stuck at any size.
The flow is like so:
GUI >> CLI >> Dongle >> Sensor
Running on Windows 10.
How can I make the connection more solid, or debug which processes might still be lingering around and stopping this?
Is it blocking?
Is it a pipe buffer overflow? If so, how do I determine the correct bufsize?
Is it something to do with PyQt and Python subprocess, or with PyInstaller?
Would it be better to use QProcess instead of subprocess?
Thanks in advance!
I want to store a binary file using Ruby's Net::FTP class. The content for the file is put into a pipe by a previous process. I need to take the bytes from the pipe (an IO object) and store them (on the fly, with no temporary file) on the FTP server.
If I do it this way
ftp = Net::FTP.new(@@host, @@user, @@password)
ftp.debug_mode = true
ftp.passive = true
ftp.binary = true
ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
ftp.close
the size of the stored file is about 500 KB smaller than it should be (the correct size is about 6.8 MB). The file contains gpg-encrypted data; if I try to decrypt it, I get an error.
Storing directly from the pipe to a local file results in a file with the right size and working decryption.
I'm relatively new to Ruby; can someone give me a hint? Some idea for debugging? Can I provide additional information?
Thanks for your help
Debug output from Net::FTP:
put: PASV
get: 227 Entering Passive Mode (80,237,136,162,233,60).
put: STOR test-ftp.tar.gz.gpg
get: 150 Opening BINARY mode data connection for test-ftp.tar.gz.gpg
get: 226 Transfer complete
Ruby version: ruby 2.1.5p273
OS: debian linux
Environment: ruby script is executed directly from bash
Some more code:
p_out, p_in = IO.pipe
@@thread = Thread.new {
  cmd = "gpg --no-tty --cipher-algo AES256 --compress-level 0 --passphrase-file #{@@cmd.results[:gpg_passphrase_file]} --symmetric"
  # Execute gpg
  Open3.popen3(cmd) { |stdin, stdout, stderr, wait_thr|
    Thread.new {
      cnt = IO::copy_stream pipe, stdin
      @@output.debug "GPG_Encryption::execute copied #{(Float(cnt)/1024/1024).round(2)} MiB bytes to gpg"
      pipe.close
      stdin.close
    }
    Thread.new {
      cnt = IO::copy_stream stdout, p_in
      @@output.debug "GPG_Encryption::execute copied #{(Float(cnt)/1024/1024).round(2)} MiB bytes from gpg"
    }
    # wait for gpg to finish
    wait_thr.join
    # Close pipe (sends eof)
    p_in.close
    # check result
    if 0 == wait_thr.value
      @@output.info "gpg finished..."
    else
      @@output.error "gpg returned an error"
      @@output.raw stderr.readlines.join
      exit 1
    end
  }
}
ftp = Net::FTP.new(@@host, @@user, @@password)
ftp.debug_mode = true
ftp.passive = true
ftp.binary = true
ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
ftp.close
You might be closing the FTP connection before the file has been completely sent. Try removing the explicit ftp.close. You probably don't need it anyway as the connection will be closed automatically when ftp is garbage collected.
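Alternatively, the block form of Net::FTP.open closes the connection for you once the block (and thus the upload) has finished - a minimal sketch reusing the variables from the question:
require 'net/ftp'
# The block form guarantees the connection is closed after the block returns,
# so there is no explicit close to get wrong.
Net::FTP.open(@@host, @@user, @@password) do |ftp|
  ftp.passive = true
  ftp.binary = true
  ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
end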
I am a beginner in Tcl. I am trying to learn Tcl without getting involved in Tk. I came across commands like vwait and after, but I was confused, as most explanations involve the notion of the event loop and mostly demonstrate the concept using Tk. I would like to understand the notion of events and the event loop and how the commands I mentioned relate to them; please refer me to some reference for this. The explanation should not use Tk in its examples, should use no Tcl extensions, and should assume no prior knowledge of events. Some (minimal) toy examples and real-world applications of the Tcl event loop other than GUI programming/Tk would be appreciated.
I came across this tutorial on the Tcler's wiki. I am looking for OTHER references or explanations like it.
If you're not using Tk, the main reasons for using the event loop are for waiting in the background while you do some other I/O-bound task, and for handling server sockets.
Let's look at server sockets.
When you open a server socket with:
socket -server myCallbackProcedure 12345
You're arranging for an event handler to be set on the server socket so that when there is an incoming connection, that connection is converted to a normal socket and your supplied callback procedure (myCallbackProcedure) is called to handle the interaction with the socket. That's often done in turn by setting a fileevent handler so that incoming data is processed when it arrives instead of blocking the process waiting for it, but it doesn't have to be.
The event loop? That's a piece of code that calls into the OS (via select(), poll(), WaitForMultipleObjects(), etc., depending on OS and build options) to wait until something happens on any nominated channel or a timeout occurs. It's very efficient, since the thread making the call can be suspended while waiting. If something happens, the OS call returns and Tcl arranges for the relevant callbacks to be called. (There's a queue internally.) It's a loop because once the events are processed, it's normal to go back and wait for some more. (That's what Tk does until there are no more windows for it to control, and what vwait does until the variable it is waiting for is set by some event handler.)
Asynchronous waits are managed using a time-ordered queue, and translate into setting the timeout on the call into the OS.
An example:
socket -server myCallbackProcedure 12345

proc myCallbackProcedure {channel clientHost clientPort} {
    puts "Connection from $clientHost"
    puts $channel "Hi there!"
    flush $channel
    close $channel
}

vwait forever
# The “forever” is an idiom; it's just a variable that isn't used elsewhere
# and so is never set, and it indicates that we're going to run the process
# until we kill it manually.
A somewhat more complicated example with asynchronous connection handling so we can serve multiple connections at once (CPU needed: minimal):
socket -server myCallbackProcedure 12345

proc myCallbackProcedure {channel clientHost clientPort} {
    puts "Connection from $clientHost"
    fileevent $channel readable [list incoming $channel $clientHost]
    fconfigure $channel -blocking 0 -buffering line
    puts $channel "Hi there!"
}

proc incoming {channel host} {
    if {[gets $channel line] >= 0} {
        puts $channel "You said '$line'"
    } elseif {[eof $channel]} {
        puts "$host has gone"
        close $channel
    }
}

vwait forever
An even more complicated example that will close connections 10 seconds (= 10000ms) after the last message on them:
socket -server myCallbackProcedure 12345

proc myCallbackProcedure {channel clientHost clientPort} {
    global timeouts
    puts "Connection from $clientHost"
    set timeouts($channel) [after 10000 [list timeout $channel $clientHost]]
    fileevent $channel readable [list incoming $channel $clientHost]
    fconfigure $channel -blocking 0 -buffering line
    puts $channel "Hi there!"
}

proc incoming {channel host} {
    global timeouts
    after cancel $timeouts($channel)
    if {[gets $channel line] >= 0} {
        puts $channel "You said '$line'"
    } elseif {[eof $channel]} {
        puts "$host has gone"
        close $channel
        unset timeouts($channel)
        return
    }
    # Reset the timeout
    set timeouts($channel) [after 10000 [list timeout $channel $host]]
}

proc timeout {channel host} {
    global timeouts
    puts "Timeout for $host, closing anyway..."
    close $channel
    unset -nocomplain timeouts($channel)
}

vwait forever
I wrote a simple TCP client for a device which consumes and produces 8-byte packets (the code of the send-command-receive-result function is below).
When I run it on Linux, it works perfectly as part of the loop (send-recv-send-recv-...), but on Windows it receives only the first msg from the device (send-recv-send-send-...). The packets are still arriving - I can see them clearly in Wireshark - but something under my client just ignores them (or truncates them to zero?). It doesn't even print "Data was read!" - it looks like the read gets stuck and is killed by the timeout.
Before that, I used the sockets directly; changing to HandleStream yielded no difference at all. Wrapping main in withSocketsDo did nothing, too.
transmit :: Int -> HandleStream ByteString -> ByteString -> IO [ByteString]
transmit delay sock packet = do
    let input = timeout delay $ sock `readBlock` 8 <* putStrLn "\nData was read!"
    sock `writeBlock` packet
    strings <- whileJust input
    return [str | Right str <- strings]

whileJust action = do
    result <- action
    case result of
        Just a  -> (:) <$> return a <*> whileJust action
        Nothing -> return []
What am I doing wrong?
In Ruby, I'm using Process.spawn to run a command in a new process. I've opened a bidirectional pipe to capture stdout and stderr from the spawned process. This works great until the bytes written to the pipe (stdout from the command) exceed 64 KB, at which point the command never finishes. I'm thinking the pipe buffer size has been hit, writes to the pipe are now blocked, and that causes the process to never finish. In my actual application, I'm running a long command that produces lots of stdout that I need to capture and save when the process has finished. Is there a way to raise the buffer size, or better yet have the buffer flushed, so the limit is not hit?
cmd = "for i in {1..6600}; do echo '123456789'; done" #works fine at 6500 iterations.
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
Process.wait(cmd_pid)
pipe_cmd_out.close
out = pipe_cmd_in.read
puts "child: cmd out length = #{out.length}"
UPDATE
Open3::capture2e does seem to work for the simple example I showed. Unfortunately, for my actual application, I need to be able to get the pid of the spawned process as well, and to have control over when I block execution. The general idea is that I fork a non-blocking process. In this fork, I spawn a command. I send the command pid back to the parent process, then wait on the command to finish to get the exit status. When the command is finished, the exit status is sent back to the parent. In the parent, a loop iterates every second, checking the DB for control actions such as pause and resume. If it gets a control action, it sends the appropriate signal to the command pid to stop or continue it. When the command eventually finishes, the parent hits the rescue block, reads the exit status pipe, and saves the status to the DB. Here's what my actual flow looks like:
# pipes for communicating the command pid and exit status from child to parent
pipe_parent_in, pipe_child_out = IO.pipe
pipe_exitstatus_read, pipe_exitstatus_write = IO.pipe
child_pid = fork do
  pipe_cmd_in, pipe_cmd_out = IO.pipe
  cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
  pipe_child_out.write cmd_pid # send command pid to parent
  pipe_child_out.close
  Process.wait(cmd_pid)
  exitstatus = $?.exitstatus
  pipe_exitstatus_write.write exitstatus # send exitstatus to parent
  pipe_exitstatus_write.close
  pipe_cmd_out.close
  out = pipe_cmd_in.read
  # save out to DB
end

# blocking read to get the command pid from the child
pipe_child_out.close
cmd_pid = pipe_parent_in.read.to_i
loop do
  begin
    Process.getpgid(cmd_pid) # when the command is done, this will raise
    @job.reload # refresh from DB
    # based on status in the DB, pause / resume command
    if @job.status == 'pausing'
      Process.kill('SIGSTOP', cmd_pid)
    elsif @job.status == 'resuming'
      Process.kill('SIGCONT', cmd_pid)
    end
  rescue
    # command is no longer running
    pipe_exitstatus_write.close
    exitstatus = pipe_exitstatus_read.read
    # save exit status to DB
    break
  end
  sleep 1
end
NOTE: I cannot have the parent poll the command output pipe, because the parent would then be blocked waiting for the pipe to close. It would not be able to pause and resume the command via the control loop.
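One way to satisfy both constraints (access to the command's pid and a parent that never blocks on the output pipe) is to drain the output on a background thread. A hedged sketch using the stdlib's Open3.popen2e, assuming the same cmd as above rather than the full fork-based flow:
require 'open3'
Open3.popen2e(cmd) do |stdin, out_err, wait_thr|
  cmd_pid = wait_thr.pid               # pid of the spawned command, for signalling
  drain = Thread.new { out_err.read }  # keep reading so the pipe buffer never fills
  until wait_thr.join(1)               # join(1) returns nil until the command exits
    # poll the DB here and send SIGSTOP / SIGCONT to cmd_pid via Process.kill
  end
  exitstatus = wait_thr.value.exitstatus
  out = drain.value                    # the command's full merged output
end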
This code seems to do what you want, and may be illustrative.
cmd = "for i in {1..6600}; do echo '123456789'; done"
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
#exitstatus = :not_done
Thread.new do
Process.wait(cmd_pid);
#exitstatus = $?.exitstatus
end
pipe_cmd_out.close
out = pipe_cmd_in.read;
sleep(0.1) while #exitstatus == :not_done
puts "child: cmd out length = #{out.length}; Exit status: #{#exitstatus}"
In general, sharing data between threads (@exitstatus) requires more care, but it works here because it's only written once, by the thread, after initialization. (It turns out $?.exitstatus can return nil, which is why I initialized it to something else.) The call to sleep() is unlikely to execute even once, since the read() just above it won't complete until the spawned process has closed its stdout.
Indeed, your diagnosis is likely correct. You could implement a select and read loop on the pipe while waiting for the process to end, but likely you can get what you want more simply with stdlib Open3::capture2e.
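A minimal sketch of the Open3.capture2e route, assuming the same cmd as in the question (capture2e merges stderr into stdout and drains the pipe for you, so the 64 KB buffer can never fill up and block the command):
require 'open3'
cmd = "for i in {1..6600}; do echo '123456789'; done"
out, status = Open3.capture2e(cmd)
puts "cmd out length = #{out.length}; exit status: #{status.exitstatus}"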