Expect exits too soon

I have the following bash script (script.sh):
#!/bin/bash
read -p "Remove? (y|n): " answer
echo "You answered '$answer'."
and I would like to drive it using expect. I have the following script (expect.exp, in the same directory):
#!/usr/bin/expect -f
set timeout -1
spawn -noecho ./script.sh
expect "^Remove"
send "y\r"
but it doesn't work as expected (pun intended). The result is:
~/Playground$ ./expect.exp
Remove? (y|n): ~/Playground$
So, the expect script somehow fails on the first 'expect "^Remove"' line and exits immediately, and the rest of script.sh does not execute. What am I doing wrong here?
I have been following the basic tutorials found online (the ones with the ftp examples). I am using expect 5.45 on Kubuntu 12.10.
Edit
So the behavior changes if I add either 'interact' or 'expect eof' at the very end, but I have no idea what happens there and why. Any help?

Two things I see:
"^Remove" is a regular expression, but by default expect uses glob patterns. Try
expect -re "^Remove"
while developing your program, add exp_internal 1 to the top of the script. Then expect will show you what's happening.
Ah, I see that expect adds special meaning to ^ beyond Tcl's glob patterns.
However, because expect is not line oriented, these characters (^ and $) match the beginning and end of the data (as opposed to lines) currently in the expect matching buffer
So what you see is that you send y\r and then your expect script exits, as it has nothing more to do. When your script exits, the spawned child process will be killed. Hence the need to wait for the spawned child to end first: expect eof
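Putting both fixes together, a minimal sketch of the corrected script (the exp_internal line is the optional debugging aid mentioned above):
#!/usr/bin/expect -f
set timeout -1
exp_internal 1              ;# optional: show pattern matching while debugging
spawn -noecho ./script.sh
expect -re "^Remove"        ;# regular-expression match, per the first point above
send "y\r"
expect eof                  ;# wait for script.sh to finish before exiting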

Problem
You are not matching any text after the shell script's prompt, so the buffer for your spawned process never gets printed. The script finishes, the spawned process closes, and that's the end of the story.
Solution
Make sure you expect a specific response or your shell script's EOF, and then print your buffer. For example:
#!/usr/bin/expect -f
spawn -noecho "./script.sh"
expect "Remove" { send "y\r" }
expect eof { send_user $expect_out(buffer) }

Related

Write ActiveState Tcl Program for Windows that expects a password and enters it

I want to write a shell program on Windows that runs another shell script, expects a password prompt from the Git Bash terminal, and inputs the password.
This is what I have so far:
#!/bin/sh
# \
exec tclsh "$0" ${1+"$@"}
package require Expect
spawn sampleScript.sh
expect "Password:"
send "pass123"
sampleScript.sh code:
echo 'Hello, world.' >foo.txt
my program outputs the following:
'The operation completed successfully. while executing "spawn sampleScript.sh"
(file "compare.tcl" line 6)'
However, there is no foo.txt that is created in my local file folder where the scripts are. Can you help?
The key with expect programs is to let the spawned program exit gracefully. As it currently stands, after your expect script sends the password, it immediately exits, and that kills the spawned program too early.
If you don't need to interact with the sampleScript (i.e. just let it run to completion), the last line in the expect script should be
expect eof
Otherwise, use
interact
Read How to create a Minimal, Reproducible Example -- your updated code does not reproduce the error you're seeing
Regarding the Tcl code: when you send something, you usually need to "hit Enter": send "password\r". Did you add expect eof to the Tcl script? If not, you might be killing sampleScript.sh before it has a chance to create the output file.
Regarding sampleScript.sh: is that really your sample script? Where's the password prompt?
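Putting those comments together, a minimal sketch of a corrected Tcl script might look like this (assuming sampleScript.sh really does print a Password: prompt; if it doesn't, drop the expect/send pair):
#!/bin/sh
# \
exec tclsh "$0" ${1+"$@"}
package require Expect

spawn sampleScript.sh
expect "Password:"
send "pass123\r"   ;# \r simulates pressing Enter
expect eof         ;# let sampleScript.sh run to completion before exiting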

Using expect script inside started telnet session

I am using an expect script to interact with a telnet session started earlier from bash. The script doesn't wait for the expected string on the command line; it just sends commands after the timeout runs out. I am sure that I entered the right string to expect (shown below), because the script works when I run it standalone.
I have a list of commands (one command per line) to send to a MikroTik device. My expect script puts every command on the same line and then sends them. I separated the commands with ; and that worked, but only for a short command. So I decided to split the command file into smaller files. Then I open a telnet session (in bash) and start the expect script, which opens the smaller files and sends the commands over and over, so I don't need to restart the same session each time. The problem is that the expect script doesn't send a command when it gets the awaited string; it only sends commands after the timeout runs out.
This is part of the bash script. It might be easier to do the whole thing in Expect, but I haven't added the loop and if conditions yet; when I tried to create some while or if structure, I failed at it.
(
echo open "12.12.13.44"
sleep 2
echo "admin+t"
sleep 2
echo "admin"
sleep 2
/usr/bin/expect /home/toor/expect2.sh
) | telnet
Here is the expected string:
expect "> "
and here is the part of the expect script which puts all the commands on the same line (without the sleep 1 command, the script sends quit in the middle of reading from the file):
set timeout 1
set fid [open /home/toor/file.txt]
set content [read $fid]
close $fid
set records [split $content "\r"]
foreach record $records {
    lassign $records \
        commands
    expect "> "
    send "$commands\r"
}
sleep 1
expect "> "
send "quit\r"
I would be grateful for any advice regarding my problem with waiting for the expected string, as well as with the commands being written to the same line.
You are piping the output of expect into telnet; there is no way for expect to receive information back from the telnet process in this scenario.
I would approach this as a chain of expect scripts instead - write one which starts the session, then hands over control to your existing script.
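A rough sketch of what a single expect script that owns the telnet session could look like (the login prompt strings below are assumptions; the address, credentials and command file are taken from the snippets above):
#!/usr/bin/expect -f
set timeout -1
spawn telnet 12.12.13.44
expect "Login:"        ;# assumed prompt text; adjust to what the device actually prints
send "admin+t\r"
expect "Password:"     ;# assumed prompt text
send "admin\r"

# read the command file and send one command per prompt
set fid [open /home/toor/file.txt]
set content [read $fid]
close $fid
foreach command [split $content "\n"] {
    if {$command eq ""} { continue }
    expect "> "
    send "$command\r"
}
expect "> "
send "quit\r"
expect eof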

How to suppress expect send output?

I have an expect script which I'd like to behave as a fancy ssh program that hops several machines and sets up the environment on target machine before running the commands.
I can use log_user 0/1 to turn off / on output from expect and that helps with password prompts and login banners, and commands to setup environment.
But, like ssh, once my script starts to issue commands, I don't want to see the issued command. That is, I don't want to see "command" after send "command\n". All I want to see is the results of the command.
How do I suppress the send output, but not the results?
Here's a snippet of the expect script:
log_user 1
foreach daline [lrange \$argv 0 end] {
    send "\$daline\r"
    set buffer1
}
So prior to this loop, I send the password and set up the environment. Then in this loop, I run each bash command that was fed to expect as an argument.
Thanks.
Many programs echo their input. For example, if you send the date command to the shell, you will see the string date followed by a date. More precisely, you will see everything that you would ordinarily see at a terminal. This includes formatting, too.
send "date\r"
expect -re $prompt
The command above ends with expect_out(buffer) set to date\r\nFri Nov 7 20:47:32 IST 2014\r\n. More importantly, the string date has been echoed. Also, each line ends with a \r\n, including the one you sent with a \r. The echoing of date has nothing to do with the send command.
To put this another way, there is no way to send the string and have send not echo it because send is not echoing it in the first place. The spawned process is.
In many cases, the spawned process actually delegates the task of echoing to the terminal driver, but the result is the same: you see your input to the process as output from the process.
Often, echoed input can be handled by using log_user alone (which you have used in a different place). As an example, suppose a connection to a remote host has been spawned and you want to get the remote date, but without seeing the date command itself echoed. A common error is to write:
log_user 0 ;# WRONG
send "date\r" ;# WRONG
log_user 1 ;# WRONG
expect -re .*\n ;# WRONG
When run, the log_user command has no effect because expect does not read the echoed "date" until the expect command. The correct way to solve this problem is as follows:
send "date\r"
log_user 0
expect -re "\n(\[^\r]*)\r" ;# match actual date
log_user 1
puts "$expect_out(1,string)" ;# print actual date only
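Applied to the loop from the question (shown here as plain expect code without the bash-escaping layer; $prompt is an assumed variable holding a regexp-safe prompt pattern):
log_user 0                        ;# keep the session quiet by default
foreach daline [lrange $argv 0 end] {
    send "$daline\r"
    expect -re "\n(.*)$prompt"    ;# skip the echoed command, capture its output up to the prompt
    send_user -- $expect_out(1,string)
}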
If you are sending a lot of commands to a remote shell, it may be more convenient to just disable all echoing in the first place. You can spawn a shell and then send the command stty -echo, after which your commands will no longer be echoed. stty echo re-enables echoing.
spawn ssh <host>
expect something              ;# wait for the remote prompt first
send "stty -echo\r"           ;# disable echoing on the remote shell
# your further code here
send "stty echo\r"            ;# re-enable echoing
# close the connection
Reference: Exploring Expect

How do I tell expect that I have finished the interactive mode?

I am writing some expect commands in bash.
Script:
#!/bin/bash
set timeout -1
expect -c "
spawn telnet $IP $PORT1
sleep 1
send \"\r\"
send \"\r\"
expect Prompt1>
interact timeout 20 {
sleep 1
}
expect {
Prompt2> {send \"dir\r\" }
}
"
My intention with the script is: first, let it telnet into a machine; when it sees Prompt1, it gives control to me, and I execute a command to load a specific image. Then it waits until Prompt2 shows up (which indicates the image has been loaded), and then it executes the further set of commands.
After running the script, I could get into the interactive mode and load my image. The problem is getting out of the interactive mode on the remote machine and giving control back to the script.
The error I got:
expect: spawn id exp4 not open
while executing
"expect -nobrace Prompt2 {send "dir\r" }"
invoked from within
"expect {
Prompt2 {send "dir\r" }
}"
How can I do this?
Your problem is two-fold...
You should interact with an explicit return, and give it some way to know you've released control... in this case, I use three plus signs and hit enter.
After you return control, the script will need to get the prompt again, which means the first thing you do after returning control to expect is send another \r. I edited for what I think you're trying to do...
Example follows...
#!/bin/bash
set timeout -1
expect -c "
spawn telnet $IP $PORT1
sleep 1
send \"\r\"
send \"\r\"
expect Prompt1>
interact +++ return
send \"\r\"
expect {
Prompt2> {send \"dir\r\" }
}
"
return = fail
return didn't work for me because in my case it was not a shell that could simply prompt me again with the same question, and I couldn't figure out how to get it to match on what was printed before I did return.
expect_out (to fix above solution) = fail
The manual says:
Upon matching a pattern (or eof or full_buffer), any matching and previously unmatched output is saved in the variable expect_out(buffer).
But I couldn't get that to work (except where I used it below, combined with -indices, which makes it work there; I have no idea how to get previous output fed into a new expect { ... } block).
expect_user
And the solution here using expect_user didn't work for me either, because it had no explanation and wasn't used the way I wanted, so I didn't know how to apply that limited example to my actual expect file.
my solution
So what I did instead was avoid the interactive mode and just have a way to provide input, one line at a time. It even works for arrow keys and alt+... (in dpkg Dialog questions), but sometimes not for a plain <enter> (hit alt+y for <Yes> or alt+o for <Ok> for those in dpkg Dialog). (Anyone know how to send an enter? Not '\n', but the Enter key like dpkg Dialog wants?)
The -i $user_spawn_id part means that instead of only looking at your spawned process, it also looks at what the user types. This affects everything after it, so you use expect_after or put it below the rest, not expect_before. -indices makes it possible to read the captured part of the regular expression that matches. expect_out(1,string) is the part I wanted (all except the colon).
expect_after {
    -i $user_spawn_id
    # single line custom input; prefix with : and the rest is sent to the application
    -indices -re ":(.*)" {
        send "$expect_out(1,string)"
    }
}
Using expect_after means it will apply to all following expect blocks until the next expect_after. So you can put that anywhere above your usual expect lines in the file.
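For placement, a minimal sketch (the spawned command is just a placeholder to show where the expect_after block and the normal expect lines go; I've added an exp_continue so the surrounding expect keeps waiting after a line is forwarded):
spawn do-release-upgrade    ;# placeholder: any interactive program works

expect_after {
    -i $user_spawn_id
    # single line custom input; prefix with : and the rest is sent to the application
    -indices -re ":(.*)" {
        send "$expect_out(1,string)"
        exp_continue        ;# keep waiting instead of returning from the current expect
    }
}

# the usual expect lines go below the expect_after block
expect eof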
and my case/purpose
I wanted to automate do-release-upgrade which does not properly support the usual Debian non-interactive flags (see here)...it just hangs and ignores input instead of proceeding after a question. But the questions are unpredictable... and an aborted upgrade means you could mess up your system, so some fallback to interaction is required.
Thanks Mike for that suggestion.
I tweaked it a bit and adapted it to my problem.
Changed code:
expect Prompt1>
interact timeout 10 return
expect {
timeout {exp_continue}
Prompt2 {send \"dir\r\" }
}
The timeout 10 value is not related to the set timeout -1 we set initially. Hence I can execute whatever commands I want at Prompt1, and once the keyboard has been idle for 10 seconds the script gains control back.
Even after this I faced one more problem: after Prompt1, I wanted to execute a command to load a particular image. The image loading takes around 2 minutes. Even with set timeout -1 the script was timing out waiting for Prompt2. It's not even the telnet timeout, which I verified. The solution for this is adding exp_continue for the timeout case within the expect statement.
For your set timeout -1 to take effect, it should be placed before the spawn telnet command, inside the expect -c script.
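Putting the pieces together, a sketch of the question's script with that change applied (assuming $IP and $PORT1 are set as in the question; the prompt names and the interact/exp_continue handling are taken from the snippets above):
#!/bin/bash
expect -c "
    set timeout -1
    spawn telnet $IP $PORT1
    sleep 1
    send \"\r\"
    send \"\r\"
    expect Prompt1>
    interact timeout 10 return
    expect {
        timeout {exp_continue}
        Prompt2> {send \"dir\r\" }
    }
"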

Conditional statament inside expect command called from a bash script

I've been trying to automate some configuration backups on my Cisco devices. I've already managed to write a script that accomplishes the task, but I'm trying to improve it to handle errors too.
I think it's necessary to catch errors at two points: first, just after the send \"$pass\r\", to catch login errors (access denied messages), and second at the expect \": end\" line, to be sure that the commands issued were able to pull the configuration from the device.
I've seen some ways to do this if you work in an expect script, but I want to use a bash script so I can supply a list of devices from a .txt file.
#!/bin/bash
data=$(date +%d-%m-%Y)
dataOntem=$(date +%d-%m-%Y -d "-1 day")
hora=$(date +%d-%m-%Y-%H:%M:%S)
log=/firewall/log/bkpCisco.$data.log
user=MYUSER
pass=MYPASS
for firewall in `cat /firewall/script/firewall.cisco`
do
VAR=$(expect -c "
spawn ssh $user@$firewall
expect \"assword:\"
send \"$pass\r\"
expect \">\"
send \"ena\r\"
expect \"assword:\"
send \"$pass\r\"
expect \"#\"
send \"conf t\r\"
expect \"conf\"
send \"no pager\r\"
send \"sh run\r\"
log_file -noappend /firewall/backup/$firewall.$data.cfg.tmp
expect \": end\"
log_file
send \"pager 24\r\"
send \"exit\r\"
send \"exit\r\"
")
echo "$VAR"
done
You need alternative patterns in the expect statements where you want to catch errors. If you're looking for a specific error message you can specify that, alternatively just specify a timeout handler which will eventually trigger when the normal output fails to appear.
E.g. after send \"$pass\r\", instead of expect \">\" try:
expect \">\" {} timeout {puts stderr {Could not log in}; exit}
I.e., if the expected output arrives before the timeout (default 10 seconds), do nothing and continue; otherwise complain and exit from expect. You might also need an eof pattern to match the case where your ssh session ends.
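Applied to both checkpoints the question mentions, the two expect lines inside the expect -c block could become something like this (a sketch, keeping the backslash-escaped quoting of the original script; the error messages are just placeholders):
expect \">\" {} timeout {puts stderr {Could not log in to $firewall}; exit 1} eof {puts stderr {Connection to $firewall closed}; exit 1}
and
expect \": end\" {} timeout {puts stderr {Did not receive the full configuration from $firewall}; exit 1}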
Note that since you don't do any variable substitution in expect, you don't need \"\" around your strings; you can use {} or even nothing when it's one word, e.g. expect conf and send {no pager}.
BTW I agree with bstpierre that this would be cleaner if you dropped bash and did the whole thing in expect, but if bash does the job that's ok.
If you don't use single quotes (expect -c '...'), then all the $variables will be substituted by bash, not expect. It may be easier to put the expect code in a separate file, or maybe use a heredoc.
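For example, a heredoc keeps bash's variable substitution but removes the need to backslash-escape every double quote. A minimal sketch of that shape, shortened to the login part only (the error message is a placeholder):
VAR=$(expect <<EOF
spawn ssh $user@$firewall
expect "assword:"
send "$pass\r"
expect ">" {} timeout {puts stderr {Could not log in}; exit 1}
send "exit\r"
expect eof
EOF
)
echo "$VAR"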

Resources