Ruby Net::FTP storbinary writes damaged file from pipe input

I want to store a binary file on an FTP server using Ruby's Net::FTP class. The content for the file is written to a pipe by a previous process. I need to take the bytes from the pipe (an IO object) and store them on the fly (no temporary file) on the FTP server.
If I do it this way:
ftp = Net::FTP.new(@host, @user, @password)
ftp.debug_mode = true
ftp.passive = true
ftp.binary = true
ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
ftp.close
the size of the stored file is about 500 KB smaller than it should be (the correct size is about 6.8 MB). The file contains gpg-encrypted data; if I try to decrypt it, I get an error.
Storing directly from the pipe to a local file results in a file with the right size that decrypts correctly.
I'm relatively new to Ruby. Can someone give me a hint? Any ideas for debugging? Can I provide additional information?
Thanks for your help
Debug output from Net::FTP:
put: PASV
get: 227 Entering Passive Mode (80,237,136,162,233,60).
put: STOR test-ftp.tar.gz.gpg
get: 150 Opening BINARY mode data connection for test-ftp.tar.gz.gpg
get: 226 Transfer complete
Ruby version: ruby 2.1.5p273
OS: Debian Linux
Environment: the Ruby script is executed directly from bash
Some more code:
require 'open3'
require 'net/ftp'

p_out, p_in = IO.pipe
@thread = Thread.new {
  cmd = "gpg --no-tty --cipher-algo AES256 --compress-level 0 --passphrase-file #{@cmd.results[:gpg_passphrase_file]} --symmetric"
  # Execute gpg
  Open3.popen3(cmd) { |stdin, stdout, stderr, wait_thr|
    Thread.new {
      cnt = IO.copy_stream pipe, stdin
      @output.debug "GPG_Encryption::execute copied #{(Float(cnt) / 1024 / 1024).round(2)} MiB to gpg"
      pipe.close
      stdin.close
    }
    Thread.new {
      cnt = IO.copy_stream stdout, p_in
      @output.debug "GPG_Encryption::execute copied #{(Float(cnt) / 1024 / 1024).round(2)} MiB from gpg"
    }
    # Wait for gpg to finish
    wait_thr.join
    # Close the pipe (sends EOF)
    p_in.close
    # Check the result
    if 0 == wait_thr.value
      @output.info "gpg finished..."
    else
      @output.error "gpg returned an error"
      @output.raw stderr.readlines.join
      exit 1
    end
  }
}
ftp = Net::FTP.new(@host, @user, @password)
ftp.debug_mode = true
ftp.passive = true
ftp.binary = true
ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
ftp.close
ftp = Net::FTP.new(@host, @user, @password)
ftp.debug_mode = true
ftp.passive = true
ftp.binary = true
ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
ftp.close

You might be closing the FTP connection before the file has been completely sent. Try removing the explicit ftp.close; you probably don't need it anyway, as the connection will be closed automatically when ftp is garbage-collected.
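If you want deterministic cleanup anyway, here is a minimal sketch using the block form of Net::FTP.open (same variables as in the question), which only closes the connection after the block, and therefore the transfer, has finished:

require 'net/ftp'

# Block form: the connection is closed automatically when the block returns,
# i.e. only after storbinary has finished streaming the pipe's contents.
Net::FTP.open(@host, @user, @password) do |ftp|
  ftp.passive = true
  ftp.binary = true
  ftp.storbinary("STOR #{name}", pipe, Net::FTP::DEFAULT_BLOCKSIZE)
end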

Related

Issue with pexpect logfile_read

I use pexpect SSH connections to run commands on a remote server. The commands are executed, but the results displayed on the terminal are not as expected. The code looks like this (at first there was no time.sleep; it was added for debugging):
import logging
import time
from pexpectUtility import Session

logger = logging.getLogger(__name__)

def test_create_and_show():
    cliPrompt = 'dev-r0'
    hostPrompt = 'admin#dev-r0'
    aa = Session()
    aa.connect("admin", "password", "10.10.0.10")
    time.sleep(2)
    aa.child.sendline("sonic-cli")
    aa.child.expect(cliPrompt, 3)
    tTime = 0
    time.sleep(tTime)
    aa.child.sendline("configure terminal")
    aa.child.expect(cliPrompt, 3)
    time.sleep(tTime)
    aa.child.sendline("end")
    aa.child.expect(cliPrompt, 3)
    time.sleep(tTime)
    aa.child.sendline("exit")
    aa.child.expect(hostPrompt, 3)
    aa.disconnect()
The pexpectUtility.py:
import sys
import logging as log

if sys.platform == 'win32':
    import WExpect as pexpect
    spawn_class = pexpect.spawn_windows
else:
    import pexpect
    spawn_class = pexpect.spawn

class MutliIO:
    def __init__(self, *fds):
        self.fds = fds

    def write(self, data):
        for fd in self.fds:
            fd.write(data)

    def flush(self):
        for fd in self.fds:
            fd.flush()

class Session(spawn_class):
    def __init__(self):
        self.child = None

    def connect(self, username, password, serverIp, protocol='ssh'):
        self.protocol = protocol
        self.username = username
        self.password = password
        self.serverIp = serverIp
        if protocol == 'ssh':
            cmd = "ssh -x -o StrictHostKeyChecking=no -l %s " % self.username
        else:
            cmd = "telnet "
        cmd = cmd + serverIp
        log.info('Connecting to Dut: %s\n' % (cmd))
        expect_list = ['ogin: $', '[P|p]assword:', '\[confirm\] $',
                       '\[confirm yes/no\]:', '\[yes/no\]:', '\(yes/no\)\?',
                       '\[y/n\]:', '--More--', 'ONIE:/ #',
                       pexpect.TIMEOUT, pexpect.EOF]
        self.child = spawn_class(cmd)
        logfile = open('pexpect.log', 'w')
        self.child.logfile_read = MutliIO(sys.stdout)
        # self.child.logfile_read = MutliIO(sys.stdout, logfile)
        # self.child.logfile_read = MutliIO(logfile)
        try:
            re = self.child.expect(expect_list, 10)
            log.debug("expect pwd: {}".format(re))
        except Exception as err:
            log.error('%s' % err)
            raise
        # login
        try:
            self.child.sendline(self.password)
        except Exception as err:
            raise RuntimeError("login failed!", err)

    def disconnect(self):
        self.child.sendline("exit")
        self.child.expect(pexpect.EOF)
        self.child.close()
        if self.child.logfile_read != None:
            self.child.logfile_read = None
Executed commands are displayed repeatedly, as if they were batch input. The log is as follows:
admin#dev-r0:~$ sonic-cli
configure terminal
configure terminal
end
exit
dev-r0# configure terminal
dev-r0(config)# end
dev-r0# exit
admin#dev-r0:~$ exit
logout
Connection to 10.10.0.10 closed.
When I set tTime to 5 (so each command is spaced 5 seconds apart), the log is as expected. I don't think this is a good solution, though, and I'd like to know the root cause:
admin#dev-r0:~$ sonic-cli
dev-r0# configure terminal
dev-r0(config)# end
dev-r0# exit
admin#dev-r0:~$ exit
logout
Connection to 10.10.0.10 closed.
When I use plain expect to implement the same operation, there is no need to wait 5 seconds between commands, and the terminal output is normal.
Why does pexpect have this issue? How can I solve it? Thanks in advance.
This is not the whole answer, but a first point to fix. After the sendline("sonic-cli"), the first expect() is going to return immediately, as it will match the prompt admin#dev-r0:~$ which is already there waiting, before the sonic-cli command arrives. This means the next command, configure terminal, is sent immediately after sonic-cli. You should enhance the connect() routine to expect the admin#dev-r0:~$ prompt before returning, or use this expect instead of the sleep(2), which should not be necessary.
Referring to pexpect sample code on the Internet, I found that the root cause is a code problem: a missing expect() after sendline().
The changes are as follows:
# login
try:
    self.child.sendline(self.password)
    HOST_PROMPT = '\$'  # remote server prompt
    re = self.child.expect(HOST_PROMPT)
except Exception as err:
    raise RuntimeError("login failed!", err)

subprocess sometimes returns empty output

I have the following class that is used to run a third-party command line tool which I have no control over.
I run this in a QThread in a PyQt GUI.
I turn the GUI into an EXE using PyInstaller.
Problems are more prevalent when it is an EXE.
from subprocess import (Popen, PIPE, STDOUT, TimeoutExpired,
                        STARTUPINFO, STARTF_USESHOWWINDOW, SW_HIDE)

class CLI_Interface:
    def process_f(self, command, bsize=4096):
        self.kill_process(CLI_TOOL)
        startupinfo = STARTUPINFO()
        startupinfo.dwFlags |= STARTF_USESHOWWINDOW
        startupinfo.wShowWindow = SW_HIDE
        p = Popen(command, stdout=PIPE, stderr=PIPE,
                  startupinfo=startupinfo, bufsize=bsize, universal_newlines=True)
        try:
            out, err = p.communicate(timeout=120)
        except TimeoutExpired:
            p.kill()
            out, err = p.communicate()
        return out.split(), err.split()

    def kill_process(self, proc):
        # Check whether the process is running; kill it if it is,
        # return False if not.
        # Uses its own Popen for stderr >> stdout.
        # If we used the self.process_f method, it would create an infinite loop.
        startupinfo = STARTUPINFO()
        startupinfo.dwFlags |= STARTF_USESHOWWINDOW
        startupinfo.wShowWindow = SW_HIDE
        try:
            kill_proc = Popen("TaskKill /IM {} /T /F".format(proc), stdout=PIPE, stderr=STDOUT,
                              startupinfo=startupinfo, universal_newlines=True).communicate()[0]
            if 'ERROR' not in kill_proc.split():
                return True  # process killed
            else:
                self.kill_process(proc)
        except Exception as e:
            return False

    def download_data(self, code):
        """Download data from the device based on a 5-digit code."""
        command = '"{}" -l {},{} {}'.format(CLI_TOOL_PATH,
                                            code[0], code[2], code[1])
        try:
            p = self.process_f(command)
            proc, err = p[0], p[1]
            try:
                if err[-2] == '-p':
                    return False
                return True
            except IndexError:
                if not proc:
                    return False  # this means there is no data, but the file is still saved!!
                pass
            return True
        except Exception as e:
            return False

    def ....
    def ....
    def ....
Thread:
class GetDataThread(QThread):
    taskFinished = pyqtSignal()
    notConnected = pyqtSignal()

    def __init__(self, f, parent=None):
        super(GetDataThread, self).__init__(parent)
        self.f = f

    def run(self):
        is_dongle_connected()
        DD = cli.download_data(self.f)
        if not DD:
            self.notConnected.emit()
        else:
            self.taskFinished.emit()
I either get a done! or an error; this is normal when running from the command line.
Sometimes I get an empty list returned, and I put this back into a recursive loop after killing the program.
However, it does not seem to restart properly and the problem continues: it gets stuck in a loop of nothing.
Meanwhile, the CSV files the CLI tool produces are created as normal, yet I have no data from stdout/stderr.
Looking at processes, conhost and the CLI tool are destroyed without problems.
The GUI will continue to fail until I unplug and replug the dongle and/or restart the program or computer.
When I open the CLI and run the same command, it works fine or throws an error (which I catch in the program without problems).
I have tried setting a buffer, as some generated files can reach 2.4 MB.
I tried setting a higher timeout to allow it to finish.
There does not seem to be a correlation with file size, though; it can get stuck at any size.
The flow is like so:
Gui >> CLI >> Dongle >> Sensor
Running on Windows 10.
How can I make the connection more solid, or debug which processes might still be lingering around and stopping this?
Is it blocking?
Is it a pipe buffer overflow? If so, how do I determine the correct bufsize?
Is it something to do with PyQt and Python subprocess, or with PyInstaller?
Would it be better to use QProcess instead of subprocess?
Thanks in advance!

Ruby - Character-wise IO.select

Is there a way to have IO.select return a single input character without receiving an EOF? I would like to be able to read user input from the keyboard the same way I read from any other stream, like an open TCP socket connection. It would allow me to make an event loop like this:
loop {
  rd, _, _ = IO.select([long_lived_tcp_connection, stdin])
  case rd[0]
  when long_lived_tcp_connection
    handle_server_sent_event(rd[0].read)
  when stdin
    handle_keypress(rd[0].read)
  end
}
I've been looking into io/console but it doesn't quite give me this capability (although IO#getch comes pretty close).
You can set stdin to raw mode (taken from this answer):
begin
  state = `stty -g`
  `stty raw -echo -icanon isig`
  loop do
    rd, _, _ = IO.select([$stdin])
    handle_keypress(rd[0].getc)
  end
ensure
  `stty #{state}`
end
IO#getc returns a single character from stdin. Another option is IO#read_nonblock to read all available data.
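For completeness, a rough sketch of the read_nonblock variant, reusing the stty raw setup from above (handle_keypress again stands in for your own handler):

loop do
  rd, _, _ = IO.select([$stdin])
  begin
    data = rd[0].read_nonblock(1024)  # grab everything currently buffered, not one char per call
    data.each_char { |ch| handle_keypress(ch) }
  rescue IO::WaitReadable
    next  # nothing readable after all; just select again
  end
end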
To read one character at a time I would use:
STDIN.each_char {|char| p char}

ruby Process.spawn stdout => pipe buffer size limit

In Ruby, I'm using Process.spawn to run a command in a new process. I've opened a pipe to capture stdout and stderr from the spawned process. This works great until the bytes written to the pipe (stdout from the command) exceed 64 KB, at which point the command never finishes. I'm thinking the pipe buffer size has been hit, and writes to the pipe are now blocked, causing the process to never finish. In my actual application, I'm running a long command that produces lots of stdout, which I need to capture and save when the process has finished. Is there a way to raise the buffer size, or better yet have the buffer drained so the limit is never hit?
cmd = "for i in {1..6600}; do echo '123456789'; done" #works fine at 6500 iterations.
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
Process.wait(cmd_pid)
pipe_cmd_out.close
out = pipe_cmd_in.read
puts "child: cmd out length = #{out.length}"
UPDATE
Open3.capture2e does seem to work for the simple example I showed. Unfortunately, for my actual application I need to be able to get the pid of the spawned process as well, and to control when I block execution. The general idea is that I fork a non-blocking process. In this fork, I spawn a command. I send the command pid back to the parent process, then I wait on the command to finish to get the exit status. When the command is finished, the exit status is sent back to the parent. In the parent, a loop iterates every second, checking the DB for control actions such as pause and resume. If it gets a control action, it sends the appropriate signal to the command pid to stop or continue. When the command eventually finishes, the parent hits the rescue block, reads the exit status pipe, and saves the status to the DB. Here's what my actual flow looks like:
# Pipes for communicating the command pid and exit status from child to parent
pipe_parent_in, pipe_child_out = IO.pipe
pipe_exitstatus_read, pipe_exitstatus_write = IO.pipe

child_pid = fork do
  pipe_cmd_in, pipe_cmd_out = IO.pipe
  cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
  pipe_child_out.write cmd_pid # send command pid to parent
  pipe_child_out.close
  Process.wait(cmd_pid)
  exitstatus = $?.exitstatus
  pipe_exitstatus_write.write exitstatus # send exit status to parent
  pipe_exitstatus_write.close
  pipe_cmd_out.close
  out = pipe_cmd_in.read
  # save out to DB
end

# Blocking read to get the command pid from the child
pipe_child_out.close
cmd_pid = pipe_parent_in.read.to_i

loop do
  begin
    Process.getpgid(cmd_pid) # when the command is done, this raises
    @job.reload # refresh from DB
    # based on the status in the DB, pause / resume the command
    if @job.status == 'pausing'
      Process.kill('SIGSTOP', cmd_pid)
    elsif @job.status == 'resuming'
      Process.kill('SIGCONT', cmd_pid)
    end
  rescue
    # command is no longer running
    pipe_exitstatus_write.close
    exitstatus = pipe_exitstatus_read.read
    # save exit status to DB
    break
  end
  sleep 1
end
NOTE: I cannot have the parent poll the command output pipe, because the parent would then be blocked waiting for the pipe to close; it would not be able to pause and resume the command via the control loop.
This code seems to do what you want, and may be illustrative.
cmd = "for i in {1..6600}; do echo '123456789'; done"
pipe_cmd_in, pipe_cmd_out = IO.pipe
cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
#exitstatus = :not_done
Thread.new do
Process.wait(cmd_pid);
#exitstatus = $?.exitstatus
end
pipe_cmd_out.close
out = pipe_cmd_in.read;
sleep(0.1) while #exitstatus == :not_done
puts "child: cmd out length = #{out.length}; Exit status: #{#exitstatus}"
In general, sharing data between threads (@exitstatus) requires more care, but it works here because it's only written once, by the thread, after initialization. (It turns out $?.exitstatus can return nil, which is why I initialized it to something else.) The call to sleep() is unlikely to execute even once, since the read() just above it won't complete until the spawned process has closed its stdout.
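Applied to the fork-based flow from your update, the same idea would look roughly like this inside the child (a sketch using the question's own pipe names):

cmd_pid = Process.spawn(cmd, :out => pipe_cmd_out, :err => pipe_cmd_out)
pipe_cmd_out.close                        # the spawned process holds its own copy of the write end
reader = Thread.new { pipe_cmd_in.read }  # drain concurrently so the 64 KB pipe buffer never fills
Process.wait(cmd_pid)                     # can no longer deadlock on a full pipe
out = reader.value                        # complete output, available once the command closes stdout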
Indeed, your diagnosis is likely correct. You could implement a select-and-read loop on the pipe while waiting for the process to end, but you can likely get what you want more simply with the stdlib's Open3.capture2e.
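For the simple case, a minimal sketch of that approach; capture2e merges stdout and stderr and drains the pipe internally, so the 64 KB buffer can never fill up and block the child:

require 'open3'

cmd = "for i in {1..6600}; do echo '123456789'; done"
out, status = Open3.capture2e(cmd)
puts "cmd out length = #{out.length}; exit status: #{status.exitstatus}"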

Ruby TFTP server

I have the following code that I put together for a simple Ruby TFTP server. It works fine in that it listens on port 69, my TFTP client connects to it, and I am able to write the packets to test.txt, but instead of just dumping packets I want to be able to TFTP a file from my client into the /temp directory.
Thanks in advance for your help!
require 'socket'

class TFTPServer
  def initialize(port)
    @port = port
  end

  def start
    @socket = UDPSocket.new
    @socket.bind('', @port)
    while true
      packet = @socket.recvfrom(1024)
      puts packet
      File.open('/temp/test.txt', 'w') do |p|
        p.puts packet
      end
    end
  end
end

server = TFTPServer.new(69)
server.start
Instead of writing to /temp/test.txt, you can use Ruby's Tempfile class.
So in your example:
require 'socket'
require 'tempfile'

class TFTPServer
  def initialize(port)
    @port = port
  end

  def start
    @socket = UDPSocket.new
    @socket.bind('', @port)
    while true
      packet = @socket.recvfrom(1024)
      puts packet
      # Tempfile.open (rather than .new) yields the file to the block
      # and closes it when the block returns
      Tempfile.open('tftpserver') do |p|
        p.puts process_packet(packet)
      end
    end
  end
end

server = TFTPServer.new(69)
server.start
This will create a guaranteed-unique temporary file in your /tmp directory, with a name based on 'tftpserver'.
EDIT: I noticed you wanted to write to /temp (not /tmp); to do this you can use Tempfile.open('tftpserver', '/temp') to specify a specific temporary directory.
Edit 2: For anyone interested, there is a library that will do this: https://github.com/spiceworks/net-tftp
You'll not get it that easily: the TFTP protocol is relatively simple, but put/get is not stateless, at least not when the file doesn't fit in a single packet, which is something like 512 bytes (some extensions allow a bigger packet size).
The file on the wire is split up, and you'll receive a sequence of packets.
Each packet has a sequence number, so the other end can send an error for a specific packet.
You should take a look at the Wikipedia page:
http://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
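To make the stateful part concrete in Ruby, here is a rough sketch of the receiving side of a write request (WRQ), using the opcodes from RFC 1350. Retransmits, timeouts, and error packets are omitted, and a real server would answer from a fresh ephemeral port (the transfer ID) rather than port 69:

require 'socket'

# TFTP opcodes (RFC 1350)
RRQ, WRQ, DATA, ACK, ERROR = 1, 2, 3, 4, 5

socket = UDPSocket.new
socket.bind('', 69)

msg, addr = socket.recvfrom(1024)
opcode = msg.unpack1('n')                      # 16-bit big-endian opcode
if opcode == WRQ
  filename = msg[2..-1].unpack1('Z*')          # NUL-terminated filename
  peer = Socket.sockaddr_in(addr[1], addr[3])
  socket.send([ACK, 0].pack('nn'), 0, peer)    # ACK of block 0 accepts the transfer
  File.open(File.join('/temp', File.basename(filename)), 'wb') do |f|
    loop do
      pkt, = socket.recvfrom(4 + 512)
      op, block = pkt.unpack('nn')
      break unless op == DATA
      f.write(pkt[4..-1])
      socket.send([ACK, block].pack('nn'), 0, peer)
      break if pkt.bytesize - 4 < 512          # a short data block ends the transfer
    end
  end
end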
Here is some sample code I wrote in 2005. It does the mirror-image thing (it sends a file); it's Python, but reasonably similar to Ruby :D
def send_file(self, addr, filesend, filesize, blocksize):
    print '[send_file] Sending %s (size: %d - blksize: %d) to %s:%d' % (filesend, filesize, blocksize, addr[0], addr[1])
    fd = open(filesend, 'rb')
    for c in range(1, (filesize / blocksize) + 2):
        hdr = pack('!H', DATA) + pack('!H', c)
        indata = fd.read(blocksize)
        if debug > 5: print '[send_file] [%s] Sending block %d size %d' % (filesend, c, len(indata))
        self.s.sendto(hdr + indata, addr)
        data, addr = self.s.recvfrom(1024)
        res = unpack('!H', data[:2])[0]
        data = data[2:]
        if res != ACK:
            print '[send_file] Transfer aborted: %s' % errors[res]
            break
        if debug > 5:
            print '[send_file] [%s] Received ack for block %d' % (filesend, unpack('>H', data[:2])[0] + 1)
    fd.close()
    ## End Transfer
    pkt = pack('!H', DATA) + pack('>H', c) + NULL
    self.s.sendto(pkt, addr)
    if debug: print '[send_file] File send Done (%d)' % c
You can find the constants in arpa/tftp.h (you need a Unix system, or search online).
The sequence number is a big-endian (network order) short; !H is the corresponding format for struct's pack.
Ruby offers the equivalent of Python's struct via Array#pack and String#unpack.
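For example, the pack('!H', ...) calls above map to the 'n' directive (16-bit unsigned, network byte order) in Ruby, which is roughly the most direct translation:

[3].pack('n')             # => "\x00\x03"   (Python: pack('!H', 3))
"\x00\x03".unpack1('n')   # => 3            (Python: unpack('!H', data)[0])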
