Is it possible to spawn an application, send commands, and expect results, but also 'respawn' the application in case it crashes and continue from the last point? I tried creating procedures for spawning, but I am not able to catch the user's shell prompt once the application gets closed.
So it sounds like you are telnetting or sshing into a shell and then running an application. The application dies or hangs, so the prompt does not return, so expect does not return, and you are stuck. You will need to use timeout to detect a hang and restart your entire process (including the spawn). Below is the boilerplate I start with when writing expect scripts. The critical thing for resolving your problem is to realize that spawn not only sets spawn_id but also returns the pid of the spawned process. You can use that pid to kill the spawned process if you get an eof and/or a timeout. My boilerplate kills the spawned process on timeout; the timeout handler does not bail out of the expect loop but waits for the eof before exiting. It also accumulates the output of the expect command, so on timeout you may be able to see where it died.
The boilerplate is in a proc called run. Spawn the process and pass in the pid and the spawn id. Reuse the boilerplate to define other procs that will be the steps, then put the procs in a script with a counter between them as shown, and repeat. The other answerer is correct that unless the app restarts where you left off you will need to start from scratch. If it does pick up where you left off, make the steps granular enough that you know which command to repeat and which step to start from.
Boilerplate:
proc run { pid spawn_id buf } {
    upvar $buf buffer ;# buffer to accumulate the output of expect
    set bad 0
    set done 0
    exp_internal 0 ;# set to 1 for extensive debugging
    log_user 0     ;# set to 1 to watch the action
    expect {
        -i $spawn_id
        -re {.+} {
            # match any output; adjust this pattern to whatever you are waiting for
            append buffer $expect_out(buffer) ;# accumulate expect output
            exp_continue
        }
        timeout {
            send_user "timeout\n"
            append buffer $expect_out(buffer) ;# accumulate expect output
            exec kill -9 $pid
            set bad 1
            exp_continue
        }
        full_buffer {
            send_user "buffer is full\n"
            append buffer $expect_out(buffer) ;# accumulate expect output
            exp_continue
        }
        eof {
            send_user "eof detected\n"
            append buffer $expect_out(buffer) ;# accumulate expect output
            set done 1
        }
    }
    set exitstatus [exp_wait -i $spawn_id]
    catch { exp_close -i $spawn_id }
    if { $bad } {
        if { $done } {
            throw EXP_TIMEOUT "Application timeout"
        }
        throw BAD_ERROR "unexpected failure"
    }
    return $exitstatus
}
set count 0
set attempts 0 ;# try 4 times
while { $count == 0 && $attempts < 4 } {
    set buff ""
    set pid [spawn -noecho ssh user@host]
    try {
        run $pid $::spawn_id buff
        incr count
        run2 $pid $::spawn_id buff
        incr count
        run3 $pid $::spawn_id buff
        incr count
        run4 $pid $::spawn_id buff
        incr count
    } trap EXP_TIMEOUT { a b } {
        puts "$a $b"
        puts "program failed at step $count"
    } on error { a b } {
        puts "$a $b"
        puts "program failed at step $count"
    } finally {
        if { $count == 4 } {
            puts "success"
        } else {
            set count 0
            incr attempts
            puts "$buff"
            puts "restarting\n"
        }
    }
}
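The step procs (run2, run3, run4) are not shown above; they follow the same shape as run. A minimal sketch of what one might look like: the "uptime" command and the prompt regex below are placeholders you would replace with your own.
proc run2 { pid spawn_id buf } {
    upvar $buf buffer
    # Hypothetical step: send one command, then wait for the shell prompt.
    exp_send -i $spawn_id "uptime\r"
    expect {
        -i $spawn_id
        -re {\$ $} {
            append buffer $expect_out(buffer) ;# prompt is back, step done
        }
        timeout {
            exec kill -9 $pid                 ;# kill the hung session, as in run
            catch { exp_close -i $spawn_id }
            catch { exp_wait -i $spawn_id }
            throw EXP_TIMEOUT "step 2 timed out"
        }
        eof {
            throw BAD_ERROR "unexpected eof in step 2"
        }
    }
}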
Related
I am currently encountering a problem.
When I download a file on a MikroTik running 6.48.6 using mtlogin and the fetch tool, it works perfectly: the script waits until the router has finished downloading before sending "quit".
However, when trying the same thing on a router running 7.1.5, the "quit" is sent immediately, which stops the download because of the letter "q" and then leaves "uit" sitting at the prompt.
The prompts are similar on 6.48.6 and 7.1.5, and even when I add extra expects to the script, the result is the same.
I think the problem is in this part of the code, but I don't know how to fix it.
# Run commands given on the command line.
proc run_commands { prompt command } {
    global do_interact in_proc
    set in_proc 1
    # escape any parens in the prompt, such as "(enable)"
    regsub -all "\[)(]" $prompt {\\&} reprompt
    # handle escaped ;s in commands, and ;; and ^;
    regsub -all {([^\\]);} $command "\\1\u0002;" esccommand
    regsub -all {([^\\]);;} $esccommand "\\1;\u0002;" command
    regsub {^;} $command "\u0002;" esccommand
    regsub -all {[\\];} $esccommand ";" command
    regsub -all {\u0002;} $command "\u0002" esccommand
    set sep "\u0002"
    set commands [split $esccommand $sep]
    set num_commands [llength $commands]
    for {set i 0} {$i < $num_commands} {incr i} {
        send -- "[subst -nocommands [lindex $commands $i]]\r"
        if { [lindex $commands $i] == "/system/reboot"} {
            send "y\r"
        }
        expect {
            -re "^\[^\n\r]*$reprompt" {}
            -re "^\[^\n\r ]*>>.*$reprompt" { exp_continue }
            -re "\[\n\r]+" { exp_continue }
        }
    }
    if { $do_interact == 1 } {
        interact
        return 0
    }
    send "quit\r"
    expect {
        -re "^WARNING: There are unsaved configuration changes." {
            send "y\r"
            exp_continue
        }
        "\n" { exp_continue }
        "\[^\n\r *]*Session terminated" { return 0 }
        timeout {
            catch {close}
            catch {wait}
            return 0
        }
        eof { return 0 }
    }
    set in_proc 0
}
That's what it looks like.
Does anyone have a solution?
I just found the solution, at line 625 of mtlogin!
foreach router [lrange $argv $i end] {
    set router [string tolower $router]
    send_user "$router\n"
    # Figure out prompt.
    set prompt "] > "  ;# Just added a second whitespace after >
    # alteon only "enables" based on the password used at login time
    set autoenable 1
    set enable 0
Hope it helps you.
I've got an expect script that looks a bit like this:
set timeout 15
spawn someprocess
expect "a line"
expect "another line"
expect "some other line"
Essentially, it's waiting until these lines appear. There are no actions to be taken.
I don't want to write the following for every line that I'm looking for:
expect {
    "a line" {}
    timeout { exit 1 }
}
I want expect to return a non-zero status code (i.e. in $?) if it times out at any point. How do I do this?
You can set up an expect_before pattern that is checked "in parallel" with any later expect commands to test for a timeout. Just add this after your spawn command:
expect_before timeout { exit 1 }
If you want to error out if the spawned process exits, you can combine them as follows:
expect_before {
    timeout { exit 1 }
    eof { exit 1 }
}
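Putting it together with the script from the question, a sketch would look like this (someprocess and the three expected lines are just the placeholders from the question):
set timeout 15
spawn someprocess

# Checked "in parallel" with every expect that follows in this script:
# any timeout or eof makes the script exit with a non-zero status.
expect_before {
    timeout { exit 1 }
    eof { exit 1 }
}

expect "a line"
expect "another line"
expect "some other line"

exit 0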
I have an expect script that performs an exec that can take some time (around 5 mins).
I have copied the script below and also the output from running the script.
If the script were timing out, wouldn't "timeout" have been printed to stdout?
Any pointers will be appreciated!
expect <<EOF
cd /home/vagrant/cloudstack
# 20 mins timeout for jetty to start and devcloud to be provisioned
set timeout 1200
match_max 1000000
set success_string "*Started Jetty Server*"
spawn "/home/vagrant/cloudstack_dev.sh" "-r"
expect {
-re "(\[^\r]*\)\r\n"
{
set current_line \$expect_out(buffer)
if { [ string match "\$success_string" "\$current_line" ] } {
flush stdout
puts "Started provisioning cloudstack."
# expect crashes executing the following line:
set exec_out [exec /home/vagrant/cloudstack_dev.sh -p]
puts "Finished provisioning cloudstack. Stopping Jetty."
# CTRL-C
send \003
expect eof
} else {
exp_continue
}
}
eof { puts "eof"; exit 1; }
timeout { puts "timeout"; exit 1; }
}
EOF
The output:
...
2014-03-14 06:44:08 (1.86 MB/s) - `/home/vagrant/devcloud.cfg' saved [3765/3765]
+ python /home/vagrant/cloudstack/tools/marvin/marvin/deployDataCenter.py -i /home/vagrant/devcloud.cfg
+ popd
+ exit 0
while executing
"exec /home/vagrant/cloudstack_dev.sh -p"
invoked from within
"expect {
-re "(\[^\r]*\)\r\n"
{
set current_line $expect_out(buffer)
if { [ string match "$success_string" "$current_line" ]..."
The function that gets run inside the cloudstack-dev.sh:
function provision_cloudstack () {
echo -e "\e[32mProvisioning Cloudstack.\e[39m"
pushd $PWD
if [ ! -e $progdir/devcloud.cfg ]
then
wget -P $progdir https://github.com/imduffy15/devcloud/raw/v0.2/devcloud.cfg
fi
python /home/vagrant/cloudstack/tools/marvin/marvin/deployDataCenter.py -i $progdir/devcloud.cfg
popd
}
From the Expect output, it seems as though the function is being run ok.
See http://wiki.tcl.tk/exec
By default, the exec call returns an error status when the exec'ed command either returns a non-zero exit status or emits any output to stderr.
This second condition can be irksome. If you don't care about stderr, then use exec -ignorestderr.
You should always catch an exec call. More details in the referenced wiki page, but at a minimum:
set status [catch {exec command} output]
if {$status > 0} {
    # handle an error condition ...
} else {
    # success
}
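Applied to the line that crashes in the question, a sketch might look like the following (assuming you do not care about the provisioning script's stderr output; if you keep the shell heredoc, remember to escape the dollar signs as the original script does):
# -ignorestderr: output on stderr alone no longer counts as an error.
if {[catch {exec -ignorestderr /home/vagrant/cloudstack_dev.sh -p} exec_out]} {
    puts "provisioning failed: $exec_out"
    exit 1
}
puts "Finished provisioning cloudstack. Stopping Jetty."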
I found that an expect command placed right after a send command matches the data from the send command itself.
Let's see. my.sh:
#!/bin/sh
read line
echo ok
my.exp (some of the code is redundant; I am emulating the DejaGnu testing framework...):
set passn 0
proc pass {msg} { global passn; incr passn; send_user "PASS: $msg\n" }
set failn 0
proc fail {msg} { global failn; incr failn; send_user "FAIL: $msg\n" }
proc check {} {
    global passn failn
    send_user "TOTAL: [expr $passn + $failn], PASSED: $passn, FAIL: $failn\n"
    if {$failn == 0} { exit 0 } { exit 1 }
}
set timeout 1
spawn ./my.sh
send hello\n
expect {
    -ex hello {
        send_user "~$expect_out(0,string)~\n"
        pass hello
    }
    default { fail hello }
}
expect {
    -ex ok { pass ok }
    default { fail ok }
}
check
When I run expect my.exp I get:
spawn ./my.sh
hello
~hello~
PASS: hello
ok
PASS: ok
TOTAL: 2, PASSED: 2, FAIL: 0
I don't understand why hello matched!! Please tell me. I already reread:
http://expect.sourceforge.net/FAQ.html
expect works with pseudoterminal devices. They are much like normal terminals: unless you disable echo (using expect's stty command), whatever you send is also visible as your "terminal output" (which is expect's input).
The stty_init variable is used by expect to set up newly created pseudoterminals. If you add set stty_init -echo to the beginning of my.exp, the test will begin to fail.
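For example, a minimal sketch of that change to my.exp (pass and fail are the procs defined above):
# Tell expect to disable echo on the pseudoterminals it creates.
set stty_init -echo

set timeout 1
spawn ./my.sh
send hello\n
# With echo off, "hello" is no longer reflected back to expect,
# so the first test below now falls through to its default branch.
expect {
    -ex hello { pass hello }
    default { fail hello }
}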
I need to test how my remote server handles ping requests. I need to ping the remote server from my Windows machine with a payload of, say, 50 KB, and my Tcl script should generate 20 such ping requests with the 50 KB payload in parallel, so the server receives 1 MB of traffic at a given instant. Here is the code for the ping test:
proc ping-igp {} {
    foreach i {
        172.35.122.18
    } {
        if {[catch {exec ping $i -n 1 -l 10000} result]} {
            set result 0
        }
        if {[regexp "Reply from $i" $result]} {
            puts "$i pinged"
        } else {
            puts "$i Failed"
        }
    }
}
If you want to ping in parallel then you can use open instead of exec and use fileevents to read from the ping process.
An example of using open to ping a server with two parallel processes:
set server 172.35.122.18

proc pingResult {chan serv i} {
    set reply [read $chan]
    if {[eof $chan]} {
        close $chan
    }
    if {[regexp "Reply from $serv" $reply]} {
        puts "$serv number $i pinged"
    } else {
        puts "$serv number $i Failed"
    }
}

for {set x 0} {$x < 2} {incr x} {
    set chan [open "|ping $server -n 1 -l 10000"]
    fileevent $chan readable "pingResult $chan {$server} $x"
}
See this page for more info: http://www.tcl.tk/man/tcl/tutorial/Tcl26.html
Here's a very simple piece of code to do the pinging in the background by opening a pipeline. To do that, make the first character of the "filename" you open be a |; the rest of the "filename" is then interpreted as a Tcl list of command-line arguments, just as in exec:
proc doPing {host} {
    # These are the right sort of arguments to ping for OSX.
    set f [open "|ping -c 1 -t 5 $host"]
    fconfigure $f -blocking 0
    fileevent $f readable "doRead $host $f"
}

proc doRead {host f} {
    global replies
    if {[gets $f line] >= 0} {
        # Note: "Reply from" matches Windows-style ping output;
        # adjust the pattern for your platform's ping.
        if {[regexp "Reply from $host" $line]} {
            # Switch to drain mode so this pipe will get cleaned up
            fileevent $f readable "doDrain $f"
            lappend replies($host) 1
            incr replies(*)
        }
    } elseif {[eof $f]} {
        # Pipe closed, search term not present
        lappend replies($host) 0
        incr replies(*)
        close $f
    }
}

# Just reads and forgets lines until EOF
proc doDrain {f} {
    gets $f
    if {[eof $f]} {close $f}
}
You'll also need to run the event loop; this might be trivial (if you're using Tk) or might need to be explicit (vwait), but it can't be folded into the code above. You can, however, use a clever trick to run the event loop for just long enough to collect all the results:
set hosts 172.35.122.18
set replies(*) 0
foreach host $hosts {
    for {set i 0} {$i < 20} {incr i} {
        doPing $host
        incr expectedCount
    }
}
while {$replies(*) < $expectedCount} {
    vwait replies(*)
}
Then, just look at the contents of the replies array to get a summary of what happened.
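For instance, a minimal sketch of printing that summary (each replies($host) entry is the list of 1/0 results built up by doRead above):
foreach host $hosts {
    set results $replies($host)
    set ok 0
    foreach r $results { incr ok $r }
    puts "$host: $ok of [llength $results] pings got a reply"
}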