Spawn New Process and Return to Expect Script - bash

I am using an Expect script to automate the installation of a program. Because of an issue with one of the install dependencies, I need to pause the installation process at a specific point to edit the db.properties file. Once that file is changed, I can resume the installation process. I can spawn a new process in the middle of the installation to do this, but I get the "spawn id exp5 not open" error after closing that process.
db_edit.sh edits the appropriate file:
#!/bin/sh
filename=db.properties
sed -i "s/<some_regex>/<new_db_info>/g" "$filename"
My automated installation script spawns the above script in the middle of its execution:
#!/usr/bin/expect
# Run the installer and log the output
spawn ./install.bin
log_file install_output.log
# Answer installer questions
# for simplicity, let's pretend there is only one
expect "PRESS <ENTER> TO CONTINUE:"
send "\r"
# Now I need to pause the installation and edit that file
spawn ./db_edit.sh
set db_edit_ID $spawn_id
close -i $db_edit_ID
send_log "DONE"
# Database Connection - the following must happen AFTER the db_edit script runs
expect "Hostname (Default: ):"
send "my_host.com\r"
# more connection info ...
expect eof
The output log install_output.log shows the following error:
PRESS <ENTER> TO CONTINUE: spawn ./db_edit.sh^M
DONEexpect: spawn id exp5 not open
while executing
"expect "Hostname (Default: ):""^M
The database info has been modified correctly, so I know the script works and it is indeed spawned. However, when I close that process, the spawn id of the installation process is also apparently closed, which causes the spawn id exp5 not open error.
Also curious is that the spawn appears to happen before it should. The response to "PRESS <ENTER>" should be "\r" or ^M to indicate that ENTER was sent.
How can I fix this to resume the installation script after closing db_edit.sh?

There is no need to automate any interactivity with that script, so don't use spawn; run it with exec instead:
exec ./db_edit.sh
This way, you're not interfering with the spawn_id of the currently spawned installer process.
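For instance, the middle of the installer script might become the following (a sketch based on the script above; exec runs db_edit.sh to completion and returns, unlike the spawn/close pair):
expect "PRESS <ENTER> TO CONTINUE:"
send "\r"
# exec blocks until db_edit.sh exits and leaves $spawn_id
# still pointing at install.bin.
exec ./db_edit.sh
send_log "DONE"
# ... then continue answering the installer
expect "Hostname (Default: ):"
send "my_host.com\r"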

Execute command on second terminal in bash script

I am writing a bash script that executes two commands at a time in two different terminals, with the original terminal waiting for both to finish before continuing with the rest of the script.
I am able to open a different terminal with the required command, however the original terminal does not seem to wait for the second one to complete and proceeds with the rest of the script right away.
#!/bin/bash
read -p "Hello"
read -p "Press enter to start sql installation"
for i in 1
do
xterm -hold -e mysql_secure_installation &
done
echo "completed installation"
Use the Bash wait builtin to make the calling script wait for a background process to complete. Your for loop implies that you may be launching multiple background processes in parallel, even though in your question there is only one; without any arguments, wait waits for all of them, as in the sketch below.
I wonder why you're launching the processes in xterm instead of directly.
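A minimal sketch of the script with wait added. Note that -hold keeps xterm open after mysql_secure_installation finishes, which would make wait block forever, so it is dropped here:
#!/bin/bash
read -p "Press enter to start sql installation"
# Launch the installer in its own terminal, in the background.
xterm -e mysql_secure_installation &
# With no arguments, wait blocks until every background job exits.
wait
echo "completed installation"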

Simulate the interaction with a CLI

I have a CLI that can launch processes, especially interactive shells, and waits for them; they can be closed using the same CLI. I need to create some end-to-end tests for it using bash, but I cannot see how to simulate the execution in a terminal; the output should be sent to the process in the "foreground".
Suppose that executing my-cli start launches a Python script that starts a subprocess (running an interactive shell) and waits for it.
In the testing script, exec my-cli start would replace the current process with the process running the Python script, not the interactive shell, so I cannot interact with the interactive shell afterwards.
I thought about using pipes, but I think something that can simulate a terminal would be better. Any ideas?
Example:
Suppose the code of my CLI (cli.py) is:
import subprocess
process = subprocess.Popen(['/bin/bash', '-i'], shell=False)
process.communicate()
Using expect, I don't know whether it's possible to communicate with the interactive shell (/bin/bash -i):
#!/usr/bin/expect -f
spawn python3 cli.py
#expect eof
send -- "echo $$\r"
As mentioned in a comment by Benjamin, I and many others have used expect in this scenario. As long as you can tell what text will be presented in the terminal, you can use the following web pages as a guide to creating an expect script.
https://www.poftut.com/expect-scripting-tutorial-examples/
https://www.shellscript.sh/expect.html
https://www.journaldev.com/1405/expect-script-ssh-example-tutorial
Update based on the example provided. For this, I have a file foo.py:
import subprocess
process = subprocess.Popen(['/bin/bash', '-i'], shell=False)
process.communicate()
An expect file (expect-example.exp):
#!/usr/bin/expect
spawn python3 foo.py
expect "*bash*"
send "date\r"
send "exit\r"
interact
When I run this with expect expect-example.exp I get the following:
$ expect expect-example.exp
spawn python3 foo.py
bash-3.2$ date
Mon 3 Jun 2019 14:03:37 BST
bash-3.2$ exit
exit
$
It is worth mentioning that, since I run only a single command (date) and want to see its output, I must include the interact command at the end of the script. Otherwise, the expect script exits as soon as it has sent the date command and does not wait for the response.
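If no interaction is needed once the commands have run, an alternative (a sketch, under the same prompt assumptions as the excerpt above) is to wait for the next prompt and then for the process to exit, instead of using interact:
#!/usr/bin/expect
spawn python3 foo.py
expect "*bash*"
send "date\r"
# Wait for the next prompt so the output of date is consumed first.
expect "*bash*"
send "exit\r"
# Block until the spawned python process itself exits.
expect eof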

Expect script sending command prematurely

I have an expect script that logs into several computers through ssh and starts programs. It has been working fine for a while, but now a problem has suddenly appeared.
It happens at the same time every run: after logging out of a certain computer, the script attempts to log into the next one before the prompt is ready. That is, the lines
#!/usr/bin/expect
set multiPrompt {[#>$] }
(...)
expect -re $multiPrompt
send "exit\r"
expect -re $multiPrompt
spawn ssh name@computer4.place.com
which should (and normally does) give the result
name@computer3:~$ exit
logout
Connection to computer3.place.com closed.
name@computer1:~$ ssh name@computer4.place.com
instead gives
name@computer3:~$ exit
logout
ssh name@computer4.place.com
Connection to computer3.place.com closed.
name@computer1:~$
and then it goes bananas. In other words, the ssh command is sent without waiting for the prompt to appear.

Terminating spawn sessions in expect

I'm trying to address an issue with an Expect script that logs into a very large number of devices (thousands). The script is about 1500 lines and fairly involved; its job is to audit managed equipment on a network with many thousands of nodes. As a result, it logs into the devices via telnet, runs commands to check on the health of the equipment, logs this information to a file, and then logs out to proceed to the next device.
This is where I'm running into my problem; every expect in my script includes a timeout and an eof like so:
timeout {
    lappend logmsg "$rtrname timed out while <description of expect statement>"
    logmessage
    close
    wait
    set session 0
    continue
}
eof {
    lappend logmsg "$rtrname disconnected while <description of expect statement>"
    logmessage
    set session 0
    continue
}
My final expect closes each spawn session manually:
-re "OK.*#" {
close
send_user "Closing session... "
wait
set session 0
send_user "closed.\n\n"
continue
}
The continue commands bring the script back to the while loop that initiates the next spawn session, assuming session is 0.
The set session 0 tracks when a spawn session has been closed, whether manually, by the timeout, or via EOF, before a new spawn session is opened. Everything seems to indicate that the spawn sessions are being closed, yet after a thousand or so spawned sessions I get the following error:
spawn telnet <IP removed>
too many programs spawned? could not create pipe: too many open files
Now, I'm a network engineer, not a UNIX admin or professional programmer, so can someone help steer me towards my mistake? Am I closing telnet spawn sessions but not properly closing a channel? I wrote a second test script that literally just connects to devices one by one and disconnects immediately after a connection is formed. It doesn't log in or run any commands as my main script does, and it works flawlessly through thousands of connections. That script is below:
#!/usr/bin/expect -f
#SPAWN TELNET LIMIT TEST
set ifile [open iad.list]
set rtrname ""
set sessions 0
while {[gets $ifile rtrname] != -1} {
    set timeout 2
    spawn telnet $rtrname
    incr sessions
    send_user "Session# $sessions\n"
    expect {
        "Connected" {
            close
            wait
            continue
        }
        timeout {
            close
            wait
            continue
        }
        eof {
            continue
        }
    }
}
In my main script I'm logging every single connection and why it may EOF or time out (via the logmessage procedure, which writes a specific reason to a file), and even when I see nothing but successful spawned connections and closed connections, I get the same problem with my main script but not with the test script.
I've been doing some reading on killing process IDs, but as I understand it, close should kill the process of the current spawn session, and wait should halt the script until that process is dead. I've also tried sending a simple exit command to the devices to close the telnet connection, but this doesn't produce any better results.
I may simply need a suggestion on how to better track the opening and closing of my sessions and ensure that, between devices, no spawn sessions remain open. Any help that can be offered will be much appreciated.
Thank you!
The Error?
spawn telnet <IP removed>
too many programs spawned? could not create pipe: too many open files
This error is likely due to your system running out of file handles (or at least exhausting the count available to you).
I suspect the reason for this is abandoned telnet sessions which are left open.
Now let's talk about why they may still be hanging around.
Not Even, Close?
close may not actually terminate the telnet connection, especially if telnet doesn't recognize that the session has been closed; it only ends expect's session with telnet (see: The close Command). In this case telnet is most likely being kept alive waiting for more input from the network side and by a TCP keepalive.
Not all applications recognize close, which is presented as an EOF to the receiving application. Because of this, they may remain open even after their input has been closed.
Tell "Telnet", It's Over.
In that case, you will need to interrupt telnet. If your intent is to complete some work and exit, then that is exactly what we'll need to do.
For telnet you can exit cleanly by sending "\035\r" (octal \035 is Ctrl-], which is what you would type at the keyboard yourself), followed by "quit" and a carriage return. This tells telnet to exit gracefully.
Expect script: start telnet, run commands, close telnet
Excerpt:
#!/usr/bin/expect
set timeout 1
set ip [lindex $argv 0]
set port [lindex $argv 1]
set username [lindex $argv 2]
set password [lindex $argv 3]
spawn telnet $ip $port
expect "'^]'."
send -- "\r"
expect "username:" {
    send -- "$username\r"
    expect "password:"
    send -- "$password\r"
}
expect "$"
send -- "ls\r"
expect "$"
sleep 2
# Send the special ^] escape (octal \035) so we can tell telnet to quit.
send "\035\r"
expect "telnet>"
# Tell telnet to quit.
send -- "quit\r"
expect eof
# Also call "wait" (block until the process exits) or "wait -nowait" (don't block).
wait
Wait, For The Finish.
Expect - The wait Command
Without "wait", expect may sever the connection to the process prematurely, this can cause the creation zombies in some rare cases. If the application did not get our signal earlier (the EOF from close), or if the process doesn't interpret EOF as an exit status then it may also continue running and your script would be none the wiser. With wait, we ensure we don't forget about the process until it cleans up and exits.
Otherwise, we may not close any of these processes until expect exits. This could cause us to run out of file handles if none of them close for a long running expect script (or one which connects to a lot of servers). Once we run out of file handles, expect and everything it started just dies, and you won't see those file handles exhausted any longer.
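As an illustrative sketch (the patterns and actions are placeholders, not the asker's exact script), every exit path from a spawned session can pair close with wait:
expect {
    -re "OK.*#" {
        # Done with this device: send EOF to telnet ...
        close
        # ... then reap the process so its pipe and pty are freed.
        wait
    }
    timeout {
        close
        wait
    }
    eof {
        # telnet already exited, so there is nothing to close,
        # but wait must still run to release the file handles.
        wait
    }
}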
Timeout? Catch All? Why?
You may also want to use a timeout so that we can exit early when the server doesn't respond as expected. This is ideal for severely lagged servers, which should instead get some admin attention.
A catch-all can help your script deal with any unexpected responses that don't necessarily prevent us from continuing. We can choose to simply continue processing, or we can exit early.
Expect Examples Excerpt:
expect {
    "password:" {
        send "password\r"
    }
    "yes/no)?" {
        send "yes\r"
        set timeout -1
    }
    timeout {
        exit
    }
    -re . {
        # Below is our catch-all: match any output and keep waiting.
        exp_continue
    }
    eof {
        exit
    }
}

Script: SSH command execute and leave shell open, pipe output to file

I would like to execute an ssh command and pipe the output to a file.
In general I would do:
ssh user@ip "command" >> /myfile
The problem is that ssh closes the connection once the command is executed; however, my command sends its output to the ssh channel via another program in the background, therefore I am not receiving the output.
How can I get ssh to leave my shell open?
cheers
sven
My understanding is that command starts some background process that will perhaps write some output to the terminal later. If command terminates before that, the ssh session is terminated and there is no terminal left for the background program to write to.
One simple and naive solution is to just sleep long enough:
ssh user@ip "command; sleep 30m" >> /myfile
A better solution than sleep would be to wait for the background process(es) to finish in some more intelligent way, but that is impossible to spell out without further details; a sketch of one option follows.
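For example, if the name of the background program is known, the remote shell could poll for it instead of sleeping a fixed time. A hedged sketch, where background_prog is a placeholder for whatever command launches:
# Keep the ssh session (and its terminal) alive until the
# background program has exited, then let ssh close normally.
ssh user@ip 'command; while pgrep -x background_prog >/dev/null; do sleep 1; done' >> /myfile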
Something more powerful than bash would be Python with Paramiko or Pexpect.
