On the page that describes Expect, it is written:
For example, the following example waits for "connected" from the
current process, or "busy", "failed" or "invalid password" from the
spawn_id named by $proc2.
expect {
    -i $proc2 busy {puts busy\n ; exp_continue}
    -re "failed|invalid password" abort
    timeout abort
    connected
}
As far as I understand, every pattern in that expect applies only to the spawn_id named by $proc2, and the current spawn_id is never consulted at all.
That's because the -i flag (as described just before the quoted example):
... declares the output from the named spawn_id list be matched
against any following patterns (up to the next -i).
Perhaps the code is not written as intended?
Dor, I checked THE Expect book (Don Libes's "Exploring Expect") and you are correct.
If the -i flag is used in an expect block, then every pattern that follows it (up to the next -i) is matched against the output of the process whose spawn id is given after -i.
So, according to Don Libes, what that page says is wrong. And I would go with Don Libes on this one. :-)
Maybe you can report it to them so they can fix it?
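For reference, to get the behaviour the documentation describes, the patterns aimed at $proc2 and the pattern aimed at the current process would each need their own -i. A sketch of what the example probably intended (untested; -i $spawn_id just names the current process explicitly):
expect {
    -i $proc2 busy {puts busy\n ; exp_continue}
    -re "failed|invalid password" abort
    -i $spawn_id connected
    timeout abort
}
The first -i covers every pattern up to the next -i, so busy and the failed/invalid-password regexp are matched against $proc2's output; -i $spawn_id switches matching back to the current process for connected, and the timeout applies to the expect command as a whole.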
Is it possible to do the following in an expect script:
expect "phrase"
if timeout reached:
do this
else:
do that
See expect's man page:
expect [[-opts] pat1 body1] ... [-opts] patn [bodyn]
... ...
If the arguments to the entire expect statement require more than
one line, all the arguments may be "braced" into one so as to
avoid terminating each line with a backslash. In this one case,
the usual Tcl substitutions will occur despite the braces.
... ...
For example, the following fragment looks for a successful login.
(Note that abort is presumed to be a procedure defined elsewhere
in the script.)
expect {
    busy {puts busy\n ; exp_continue}
    failed abort
    "invalid password" abort
    timeout abort
    connected
}
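Mapped onto your pseudo-code, a minimal sketch would be (treating "do this" and "do that" as placeholders for your own code):
expect {
    "phrase" {
        # "phrase" arrived before the timeout: do that
    }
    timeout {
        # nothing matched in time: do this
    }
}
The timeout branch runs when no pattern matches within the period set by the timeout variable (10 seconds by default; set timeout 30 changes it, and set timeout -1 waits forever).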
I want to test that echo 1 outputs 1, but
expect { `echo 1` }.to output("1").to_stdout
does not work. It says it outputs nothing to stdout, while
expect { print 1 }.to output("1").to_stdout
works just fine. Why doesn't the first one work?
expect { `echo 1` }.to output("1").to_stdout
doesn't work for two reasons:
echo runs in a subprocess. RSpec's output matcher doesn't handle output from subprocesses by default. But you can use to_stdout_from_any_process instead of to_stdout to handle subprocesses, although it's a bit slower.
output only works for output sent to the same standard output stream as the Ruby process. Backticks open a new standard output stream, send the command's standard output to it and return the contents of the stream when the command completes. I don't think you care whether you run your subprocess with backticks or some other way, so just use system (which sends the command's standard output to the Ruby process's standard output stream) instead.
Addressing those two points gives us this expectation, which passes:
expect { system("echo 1") }.to output("1\n").to_stdout_from_any_process
(I had to change the expected value for it to pass, since echo adds a newline.)
As MilesStanfield pointed out, in the case you gave it's equivalent and easier to just test the output of backticks rather than use output:
expect { `echo 1` }.to eq "1\n"
That might or might not work in the more complicated case that you presumably have in mind.
I have a script that logs into devices and runs a show command.
I then set this output to a variable:
set output $expect_out(buffer)
and then print the variable to a file:
puts $fileId $output
When the script is run, I can see the whole output being generated; however, only the bottom half of the output is saved in the file.
This is probably because the buffer is reaching its limit. This show command is running right after another lengthy show command.
I tried using unset expect_out(buffer), but it does not make a difference.
I also tried the solution at http://wiki.tcl.tk/2958, and it did not work either (it returns an error).
How can I get the script to store all of the output?
I see in the expect man page that the pattern full_buffer will match when the size of the buffer reaches match_max bytes, so you can do something like:
match_max 16000
# ...
expect {
    full_buffer {
        puts $fileId $expect_out(buffer)
        exp_continue
    }
    "whatever you are currently expecting"
}
puts $fileId $expect_out(buffer)
You can also make use of the log_file command to keep this simple: you control when to start saving output and when to stop logging.
Have a look at the description of log_file in the Expect man page to learn more.
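For example, a minimal sketch (the log file name, the show command and the prompt are placeholders for whatever your script actually uses):
log_file /tmp/device-output.log   ;# start appending everything the session produces to this file
send "show running-config\r"
expect "device# "                 ;# your device prompt
log_file                          ;# calling log_file with no argument stops the recording
Unlike saving expect_out(buffer), log_file writes the output as it arrives, so the match_max limit no longer restricts what gets saved.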
I have been using sylfilter for over a year now (it is available from http://sylpheed.sraoss.jp/sylfilter/) and it works great as a filtering tool (no complaints). However, I have been trying to use procmail with sylfilter, but have been having a lot of trouble.
The web page for the filter shows:
sylfilter ~/Mail/inbox/1234
as the example to classify a message.
The return values are as follows:
0 junk (spam)
1 clean (non-spam)
2 uncertain
127 other errors
I have been trying to incorporate sylfilter into procmail, but without much success. The big issue compared with other spam tools such as bogofilter is that sylfilter does not make any changes to the e-mail message itself (unlike bogofilter, for which examples abound on the web, and which puts an X-Bogosity field in the message header). I want everything that is classified as junk to go to $HOME/Mail/Junk, and everything else to be further classified into folders by my other procmail rules. Perhaps the messages for which sylfilter returns 2 can go to $HOME/Mail/uncertain.
Here is my latest attempt based on suggestions made in the Fedora mailing list.
:0 Wc
| /usr/bin/sylfilter /dev/stdin
:0 a
$HOME/Mail/Junk/.
However, this does not process the e-mail message using sylfilter (and
the logfile says "No input file." before going on to process the other
rules). So I was wondering if anyone here has run into a similar case and knows the answer.
I am not familiar with sylfilter, and the (somewhat vague) problem description makes me think there is something wrong with feeding it a message on standard input. But if you can make that work, the following is how you examine a program's exit code in Procmail.
:0
* ? sylfilter /dev/stdin
$HOME/Mail/Junk/.
# You should now have the exit code in $? if you want it for further processing
SYLSTATUS=$?
:0
* SYLSTATUS ?? ^^1^^
$HOME/Mail/INBOX/.
# ... etc
The condition succeeds if sylfilter returns a success (zero) exit code; if it fails, we fall through to subsequent recipes. We save $? to a named variable so that we can examine its value even if a subsequent recipe resets the system global $? by invoking some other external program.
By the by, you should not need to hard-code the path to sylfilter. If it's in a nonstandard location, amend the PATH at the beginning of your .procmailrc rather than littering your code with explicit paths to executables. So if it's in /usr/local/really/sf/sylfilter, you'd put
PATH=/usr/local/really/sf:$PATH
If you need the message in a temporary file, try something like this:
TMP=`mktemp -t sylf.XXXXXXXX`
TRAP='rm -f $TMP'
:0c
$TMP
:0
* ? sylfilter $TMP
$HOME/Mail/Junk/.
# etc as above
The mktemp command creates a unique temporary file. The TRAP assignment sets up a command sequence to run when Procmail terminates; this takes care of cleaning out the temporary file when we are done. Because we will be the only writer to this file, we don't care about locking while writing a copy of the message to this file.
For more nitty-gritty syntax details, see also http://www.iki.fi/era/procmail/quickref.html
I'm a bit new to expect programming, so I need help.
Here's a sample session:
CMD> disable inactive
Accounts to be disabled are:
albert_a - UUID abcd-11-2222
brian_b - UUID bcde-22-3333
charley_c - UUID cdef-33-4444
Starting processing
...hundreds of lines of processing...
CMD> quit
Done.
I need to grab the usernames and UUIDs there (the UUIDs are not available through other means) and save them into a file. How do I do that in expect?
Edit: the - UUID (space dash space "UUID") part of the list is static, and not found anywhere in the "hundreds of lines of processing", so I think I can match against that pattern... but how?
Assuming the answer to my question in the comments is 'yes', here's my suggestion.
First, you need to spawn whatever program will connect you to the server (ssh, telnet, or whatever), and log in (expect user prompt, send password, expect prompt). You'll find plenty of samples of that, so I'll skip the details beyond the rough skeleton below.
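As a bare-bones sketch only, with the host, user name, prompts and password source all being placeholders you would adapt:
spawn ssh admin@device.example.com
expect "assword:"                 ;# matches "Password:" or "password:"
send "$env(DEVICE_PASSWORD)\r"    ;# assumes the password is exported in the environment
expect "CMD> "                    ;# the prompt shown in your sample session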
Once you have done that, and have a command prompt, here's how I would send the command and expect & match output:
set file [open /tmp/inactive-users w] ;# open for writing and set file identifier
send "disable inactive\r"
expect {
    -re "(\[a-z0-9_\]+) - UUID (\[^\r\n\]+)\[\r\n\]" { ;# match a username, " - UUID", then the UUID as the rest of the line
        puts $file "$expect_out(1,string) $expect_out(2,string)" ;# write the captured username and UUID to the file
        exp_continue                         ;# stay in the same expect loop in case another UUID line comes
    }
    -re "CMD>" {                             ;# if we hit the prompt, command execution has finished -> close file
        close $file
    }
    timeout {                                ;# reasonably elegant exit in case something goes bad
        puts $file "Error: expect block timed out"
        close $file
    }
}
Please note that in the regexp I'm assuming that usernames can be composed of lowercase letters, numbers and underscores only, and that the UUID is whatever follows up to the end of the line.
If you need help with the login piece let me know, but you should be ok. There are plenty of samples of that out there.
Hope that helps!