I use the following expect script to connect to the PBX using telnet:
expect.sh:
#!/usr/bin/expect
spawn telnet [ip] 2300
expect -exact "-"
send "SMDR\r";
expect "Enter Password:"
send "PASSWORD\r";
interact
and I created another script to redirect the output to a file:
#!/bin/bash
./expect.sh | tee pbx.log
I'm trying to run expect.sh at boot time, so I added it to systemd. When I add it as a service in /etc/systemd/system it runs, but I don't get the results in the log file the way I do when I run both scripts manually.
Any idea how I can run it at boot time?
TIA
If you just want to permanently output everything received after providing your password, simply replace your interact with expect eof, i.e. wait for end-of-file, which will happen when the connection is closed by the other end. You will probably also want to change the default timeout of 10 seconds without data, which would otherwise stop the command:
set timeout -1
expect eof
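A minimal sketch of the revised expect.sh with those changes applied, otherwise kept identical to the question's script:
#!/usr/bin/expect
set timeout -1                    ;# never stop while the connection stays open
spawn telnet [ip] 2300            ;# [ip] is the question's placeholder for the PBX address
expect -exact "-"
send "SMDR\r"
expect "Enter Password:"
send "PASSWORD\r"
expect eof                        ;# keep printing output until the PBX closes the connection
With interact gone the script no longer needs a terminal, so the wrapper script that pipes it through tee can serve as the service's ExecStart, or the unit itself can capture the output, e.g. with StandardOutput=append:/var/log/pbx.log (the path is only an example, and append: assumes systemd 240 or newer).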
I'm looking for some help with a script of mine. I'm new to bash scripting, and I'm trying to start a service on a remote host with ssh and then capture all the output of that service to a file on my local host. The problem is that I also want to execute other commands after this one:
ssh $remotehost "./server $port" > logFile &
ssh $remotehost "nc -q 2 localhost $port < $payload"
Now, the first command starts an HTTP server that simply prints out any request that it receives, while the second command sends a request to such server.
Normally, if I were to execute the two commands in two separate shells I would get the first response on the terminal, but now I need it in the file.
I would like to have the server output all the requests on the log file, keeping a sort of open ssh connection to receive any new output of the server process.
I hope I made myself clear.
Thank you for your help!
EDIT: Here's the output of the first command:
(Output is empty in the terminal... it waits for requests).
As you can see, the command doesn't return anything yet; it just waits.
When I execute the second command on a new terminal (the request), the output of the first terminal is the following:
The request is displayed.
Now I would like to execute both commands in sequence in a bash script, sending the output of the first terminal (which is empty until the second command is run) to a file, so that ANY output triggered by later requests is captured there.
EDIT2: As of now, with the commands above, the server answers any requests but the output is not registered in the log file.
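One common reason for an empty log file in this kind of setup is stdio buffering: when ./server writes to a pipe instead of a terminal, its stdout is block-buffered and nothing reaches logFile until the buffer fills. A minimal sketch under that assumption, also assuming stdbuf is available on the remote host:
#!/bin/bash
# Hedged sketch: $remotehost, $port and $payload are assumed to be set earlier.
# stdbuf -oL forces line-buffered stdout so each request shows up in logFile as it arrives.
ssh "$remotehost" "stdbuf -oL ./server $port" > logFile 2>&1 &
server_pid=$!                   # remember the background ssh so we can wait on it
sleep 2                         # crude pause to let the server start listening
ssh "$remotehost" "nc -q 2 localhost $port < $payload"
wait "$server_pid"              # keep collecting output until the server exits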
I guess there must be a simple reason why I can't start redis like this
---- update -----
After @larsks answered my question, I realized it was this part that caused my confusion: "You end it with an interact statement, which connects your console to the stdin/stdout of the process you spawned. The redis-server program is not interactive: it doesn't accept any console input."
I checked the code again and found it was this code that made me think the process was stuck:
#!/usr/bin/expect -f
spawn redis-server
expect "The server is now ready to accept connections"
interact
spawn redis-cli
expect ">"
...
I never saw redis-cli run.
But if I change it to
#!/usr/bin/expect -f
spawn redis-server
expect "The server is now ready to accept connections"
spawn redis-cli
expect ">"
...
interact    ;# put it at the end
It works as I expected.
BTW, the reason I use expect here is to first make sure the redis server has started and then delete some keys.
What do you expect the first example to do? You end it with an interact statement, which connects your console to the stdin/stdout of the process you spawned. The redis-server program is not interactive: it doesn't accept any console input. When you run redis-server, it will get as far as...
1135:M 18 Nov 13:59:51.634 * Ready to accept connections
...and then it stops, waiting for redis clients to connect and operate on it. Also, note that the Redis version I'm using ends with Ready to accept connections rather than The server is now ready to accept connections, so I'll be using that in the following examples.
We can add a puts command to the expect script to see that it isn't
actually getting stuck anywhere. If I run the following:
#!/usr/bin/expect -f
spawn redis-server
expect "Ready to accept connections"
puts "redis is running"
interact
I get as output:
spawn redis-server
1282:C 18 Nov 14:03:33.123 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1282:C 18 Nov 14:03:33.123 # Redis version=4.0.10, bits=64, commit=00000000, modified=0, pid=1282, just started
[...]
1282:M 18 Nov 14:03:33.124 * Ready to accept connections
redis is running
So we can see that it's not getting stuck at the spawn statement,
nor even at the expect statement.
What's not clear from your question is why you're even using expect
in this situation, since redis-server is not an interactive program
and does not produce any prompts that require automation.
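If the only goal is to wait until the server is up and then delete a few keys, plain shell can do the same job without expect. A minimal sketch, with key1 and key2 as placeholder key names:
#!/bin/bash
# Start redis-server in the background and poll until it answers PING.
redis-server &
until redis-cli ping > /dev/null 2>&1; do
    sleep 0.2                   # wait until the server accepts connections
done
redis-cli DEL key1 key2         # placeholder keys; replace with the real ones
wait                            # stay attached to the background redis-server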
Expect has been installed, and it_is_a_test is the vps password.
scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
This command transfers the file /home/wpdatabase_backup.sql to vps_ip:/tmp.
Now I have rewritten the process as the following code:
#!/usr/bin/expect -f
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send it_is_a_test\r
Why can't I transfer my file to the remote vps_ip with expect?
Basically, expect works with two main commands, send and expect. If send is used, it usually needs to be followed by an expect (while the reverse is not required).
This is because without it we miss what is happening in the spawned process: expect assumes you simply need to send one string and are not expecting anything else from the session, so the script exits, causing the failure.
So, you just have to add one more expect to wait for the scp command to finish, which can be done by waiting for eof (End Of File).
#!/usr/bin/expect -f
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send "it_is_a_test\r"
expect eof; # Will wait till the 'scp' completes.
Note:
The default timeout in expect is 10 seconds. If the scp completes within 10 seconds, there is no problem. But if the operation takes longer, expect will time out and quit, which makes the scp transfer fail. So you can increase the timeout if you want, which can be set as
set timeout 60; # Timeout is 1 min
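Putting both points together, a sketch of the full script with the longer timeout and the final expect eof (paths and password taken from the question):
#!/usr/bin/expect -f
set timeout 60;  # allow up to 1 min for the copy
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send "it_is_a_test\r"
expect eof;  # wait till the 'scp' completes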
I am working on remotely executing a command line on Windows from Debian. For that, I tried to use the expect script below. Using the expect tool, it connects to the remote server via telnet, enters the username and password, and sends the desired command line.
#!/usr/bin/expect
set timeout 20
set name 192.168.1.46
set user Administrateur
set password MSapp/*2013
set cmd "TASKKILL /F /IM Tomcat6.exe"
spawn telnet 192.168.1.46
expect "login:"
send "$user\r"
expect "password:"
send "$password\r"
expect "C:\Users\Administrateur>"
send "$cmd\r"
The telnet connection is established successfully, but the command line is not executed.
Could someone tell me what is wrong with my script?
Just add one more expect statement at the end, as follows:
send "$cmd\r"
expect "C:\Users\Administrateur>"
Basically, expect works with two main commands, send and expect. If send is used, it usually needs to be followed by an expect (while the reverse is not mandatory).
This is because without it we miss what is happening in the spawned process: expect assumes you simply need to send one string and are not expecting anything else from the session.
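For reference, a sketch of the question's script with the answer's extra expect added. Matching on just the tail of the prompt ("Administrateur>") is an assumption made here to sidestep backslash escaping in the Windows path:
#!/usr/bin/expect
set timeout 20
set user Administrateur
set password MSapp/*2013
set cmd "TASKKILL /F /IM Tomcat6.exe"
spawn telnet 192.168.1.46
expect "login:"
send "$user\r"
expect "password:"
send "$password\r"
expect "Administrateur>"        ;# shortened prompt pattern avoids backslash escapes
send "$cmd\r"
expect "Administrateur>"        ;# wait for the prompt to return before the script exits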
I have an expect script that logs into several computers through ssh and starts programs. It has been working fine for a while, but now suddenly a problem has appeared.
It happens at the same point in every run: after logging out of a certain computer, it attempts to log into the next one before the prompt is ready. That is, the lines
#!/usr/bin/expect
set multiPrompt {[#>$] }
(...)
expect -re $multiPrompt
send "exit\r"
expect -re $multiPrompt
spawn ssh name@computer4.place.com
which should (and normally does) give the result
name@computer3:~$ exit
logout
Connection to computer3.place.com closed.
name@computer1:~$ ssh name@computer4.place.com
instead gives
name@computer3:~$ exit
logout
ssh name@computer4.place.com
Connection to computer3.thphys.nuim.ie closed.
name@computer1:~$
and then it goes bananas. In other words, the ssh ... doesn't wait for the prompt to appear.
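When expect stops waiting where it should, a useful first step is to turn on its internal diagnostics and see exactly which pattern matched which buffered text. A minimal sketch (the hostname is the one from the question):
#!/usr/bin/expect
# Debugging sketch: exp_internal 1 makes expect print every pattern it tries
# against its buffer, so you can see what the prompt regexp actually matched.
exp_internal 1
set multiPrompt {[#>$] }
spawn ssh name@computer4.place.com
expect -re $multiPrompt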