I have a script that logs into devices and runs a show command.
I then set this output to a variable:
set output $expect_out(buffer)
and then print the variable to a file:
puts $fileId $output
When the script is run, I can see the whole output being generated; in the file, however, only the bottom half of the output is saved.
This is probably because the buffer is reaching its limit. This show command runs right after another lengthy show command.
I tried unset expect_out(buffer), but it makes no difference.
I also tried the solution at http://wiki.tcl.tk/2958, but that did not work either (it returns an error).
How can I get the script to store all of the output?
I see in the expect man page that the pattern full_buffer will match when the size of the buffer reaches match_max bytes, so you can do something like:
match_max 16000
# ...
expect {
    full_buffer {
        puts $fileid $expect_out(buffer)
        exp_continue
    }
    "whatever you are currently expecting"
}
puts $fileid $expect_out(buffer)
You can also make use of the log_file command to keep this simple: it lets you control when to start saving output and when to stop logging. See the expect man page for the details of log_file.
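For example, a minimal sketch (the file path, command, and prompt here are placeholders; note that by default log_file appends to an existing file):

log_file -noappend /tmp/show-output.log ;# start recording everything expect sees
send "show running-config\r"
expect "CMD> "
log_file                                ;# with no argument, stop logging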
The following code is a simplification of my current situation. I have a JSON log source which I continuously fetch and write to stdout with puts.
#!/usr/bin/env ruby
require "json"

loop do
  puts({ value: "foobar" }.to_json)
  sleep 1
end
I want to be able to pipe the output of this script into jq for further processing, but in a 'stream'-friendly way, using unix pipes. Running the above code like so:
./my_script | jq
Results in an empty output. However, if I place an exit statement after the sleep call, the output is sent through the pipe to jq as expected.
I was able to solve this problem by calling $stdout.flush following the puts call. While it's working now, I'm not sure why: $stdout.sync is set to true by default (see IO#sync), and it seems to me that if sync is enabled, Ruby should be doing no output buffering, so calling $stdout.flush should not be required. Yet it is.
My follow-up question is about using tail instead of jq. It seems to me that I should be able to pipe a text stream into tail the same way I pipe it into jq, but neither method (with the $stdout.flush call or without it) works - the output is just empty.
As @Ry points out in the comments, $stdout.sync is true by default in IRB, but this is not necessarily the case for scripts.
So you should set $stdout.sync = true to be sure to prevent buffering.
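Applied to the script in the question, that looks like this:

#!/usr/bin/env ruby
require "json"

$stdout.sync = true # write each line through to the pipe immediately instead of buffering

loop do
  puts({ value: "foobar" }.to_json)
  sleep 1
end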
I am trying to run an external command in Ruby and parse its output.
IO.popen(command, :err => [:child, :out]) { |ls_io|
  ls_io.each do |line|
    print line
  end
}
This way of doing it works wonders… except when I parse the progress output of a C program that shows its progress on stdout with \r.
As long as the C program has not output a \n (that is, as long as it has not finished some long operation), Ruby waits and sees nothing. Then, once a \n is output, Ruby sees it all at once:
1%\r2%\r3%\r…100%
task finished
I tried all of the many ways to call external commands (e.g. Calling shell commands from Ruby), but none seem to capture the progress. I also tried every option I could find, such as STDOUT.sync = true, and the C program does call fflush(stdout).
I finally found a workaround:
IO.popen(command, :err => [:child, :out]) { |ls_io|
  while true
    byte = ls_io.read(1)
    break if byte.nil?
    print byte
  end
}
It's stupid… but it works.
Is there a more elegant, much more efficient way to do this? Performance is terrible, as if the "refresh rate" were slow.
Set the input record separator to "\r" right before your block (provided you know it in advance):
$/ = "\r"
Reference for Ruby's pre-defined global variables: http://www.zenspider.com/Languages/Ruby/QuickRef.html#pre-defined-variables
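Based on the code in the question, that would look like this (a sketch; command stands for whatever external program you run):

$/ = "\r" # each iteration of the block below now yields a \r-terminated chunk

IO.popen(command, :err => [:child, :out]) { |ls_io|
  ls_io.each do |line|
    print line
  end
}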
I'm a bit new to expect programming, so I need help.
Here's a sample session:
CMD> disable inactive
Accounts to be disabled are:
albert_a - UUID abcd-11-2222
brian_b - UUID bcde-22-3333
charley_c - UUID cdef-33-4444
Starting processing
...hundreds of lines of processing...
CMD> quit
Done.
I need to grab the usernames and UUIDs there (the UUIDs are not available through other means) and save them into a file. How do I do that in expect?
Edit: the - UUID (space dash space "UUID") part of the list is static, and not found anywhere in the "hundreds of lines of processing", so I think I can match against that pattern... but how?
Assuming the answer to my question in the comments is 'yes', here's my suggestion.
First, you need to spawn whatever program will connect you to the server (ssh, telnet, or whatever) and log in (expect the user prompt, send the password, expect the prompt). You'll find plenty of samples of that, so I'll skip that part.
Once you have done that, and have a command prompt, here's how I would send the command and expect & match output:
set file [open /tmp/inactive-users w]       ;# open for writing and set file identifier
send "disable inactive\r"
expect {
    -re "(\[a-z0-9_\]+) - UUID" {           ;# match a username followed by " - UUID"
        puts $file "$expect_out(1,string)"  ;# write the username captured by the parentheses to the file
        exp_continue                        ;# stay in the same expect loop in case another UUID line comes
    }
    -re "CMD>" {                            ;# if we hit the prompt, command execution has finished -> close file
        close $file
    }
    timeout {                               ;# reasonably elegant exit in case something goes bad
        puts $file "Error: expect block timed out"
        close $file
    }
}
Please note that in the regexp I'm assuming that usernames are composed of lowercase letters, numbers, and underscores only.
If you need help with the login piece let me know, but you should be ok. There are plenty of samples of that out there.
Hope that helps!
On the page that describes Expect, it is written:
For example, the following example waits for "connected" from the current process, or "busy", "failed" or "invalid password" from the spawn_id named by $proc2.
expect {
    -i $proc2 busy {puts busy\n ; exp_continue}
    -re "failed|invalid password" abort
    timeout abort
    connected
}
As far as I understand, everything in that expect is relevant only to the spawn_id named by $proc2, while the current spawn_id isn't relevant.
That's because the -i flag (as the page says just before the example):
... declares the output from the named spawn_id list be matched against any following patterns (up to the next -i).
Perhaps the code is not written as intended?
Dor, I checked THE Expect book (Don Libes's "Exploring Expect") and you are correct.
If the -i flag is used in an expect block, then everything within that block will attempt to match the output from the spawned process with the id indicated after -i.
So, according to Don Libes, what that page says is wrong. And I would go with Don Libes on this one. :-)
Maybe you can report it to them so they can fix it?
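If the behavior described in the prose is what was intended, the example would presumably need a second -i to switch back to the current process before the final pattern; something like this (my own sketch, not taken from the book or the man page):

expect {
    -i $proc2 busy {puts busy\n ; exp_continue}
    -re "failed|invalid password" abort
    timeout abort
    -i $spawn_id connected
}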
I have a Ruby script that outputs progress messages on the same line, using the carriage return character, like this:
print "\r#{file_name} processed."
As an example, the output changes from 'file001.html processed.' to 'file002.html processed.' and so on until the script completes.
I'd like to replace the last progress message with Done., but I can't just write print "\rDone." because that piece of code outputs something like this:
Done.99.html processed.
I guess I have to empty the line after the last progress message and then print Done.. How do I do that?
You need to send the sequence of bytes that corresponds to the terminfo variable clr_eol (capability name el) after using \r. There are several ways you could get that.
Simplest: assume that there's a constant value. On the terminals I've checked it is \e[K, though I've only checked a couple. On both of those, the following works:
clear = "\e[K"
print "foo 123"
print "\r#{clear}bar\n"
You could also get the value using:
clear = `tput el`
Or you could use the terminfo gem:
require 'terminfo'
clear = TermInfo.control_string 'el'
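Putting it together with the progress loop from the question (a sketch; files here is a hypothetical list of file names):

clear = `tput el` # or "\e[K", or TermInfo.control_string 'el'

files.each do |file_name|
  # ... process the file ...
  print "\r#{clear}#{file_name} processed."
end
print "\r#{clear}Done.\n"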