Why can't I use an expect script to start redis? - bash

I guess there must be a simple reason why I can't start redis like this.
---- update -----
After @larsks answered my question I realized it was this sentence that caused my confusion: "You end it with an interact statement, which connects your console to the stdin/stdout of the process you spawned. The redis-server program is not interactive: it doesn't accept any console input."
I checked the code again and found that it was this code that made me think the process was stuck:
#!/usr/bin/expect -f
spawn redis-server
expect "The server is now ready to accept connections"
interact
spawn redis-cli
expect ">"
...
I never saw redis-cli run.
But if I change it to
#!/usr/bin/expect -f
spawn redis-server
expect "The server is now ready to accept connections"
spawn redis-cli
expect ">"
...
interact ;# moved to the end
It works as I expected.
BTW, the reason I use expect here is to first make sure the redis server starts and then delete some keys.
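For completeness, a minimal sketch of the working order (the DEL key names here are hypothetical, and the exact ready message varies by Redis version):
#!/usr/bin/expect -f
# Start the server and wait until it reports readiness.
spawn redis-server
expect "Ready to accept connections"
# Then drive the client; the keys to delete are hypothetical examples.
spawn redis-cli
expect ">"
send "DEL somekey otherkey\r"
expect ">"
send "quit\r"
expect eof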

What do you expect the first example to do? You end it with an interact statement, which connects your console to the stdin/stdout of the process you spawned. The redis-server program is not interactive: it doesn't accept any console input. When you run redis-server, it will get as far as...
1135:M 18 Nov 13:59:51.634 * Ready to accept connections
...and then it stops, waiting for redis clients to connect and operate on it. Also, note that the Redis version I'm using ends with Ready to accept connections rather than The server is now ready to accept connections, so I'll be using that in the following examples.
We can add a puts command to the expect script to see that it isn't
actually getting stuck anywhere. If I run the following:
#!/usr/bin/expect -f
spawn redis-server
expect "Ready to accept connections"
puts "redis is running"
interact
I get as output:
spawn redis-server
1282:C 18 Nov 14:03:33.123 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1282:C 18 Nov 14:03:33.123 # Redis version=4.0.10, bits=64, commit=00000000, modified=0, pid=1282, just started
[...]
1282:M 18 Nov 14:03:33.124 * Ready to accept connections
redis is running
So we can see that it's not getting stuck at the spawn statement,
nor even at the expect statement.
What's not clear from your question is why you're even using expect
in this situation, since redis-server is not an interactive program
and does not produce any prompts that require automation.
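For example, a plain-bash sketch of "wait for the server, then delete some keys" needs no expect at all (key names again hypothetical):
redis-server &
# Poll until the server answers PING, i.e. it is ready for clients.
until redis-cli ping >/dev/null 2>&1; do
    sleep 0.1
done
redis-cli del somekey otherkey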

Related

How to run 2 commands on bash concurrently

I want to test my server program (let's call it A) that I just made. When A is executed by this command
$VALGRIND ./test/server_tests 2 >>./test/test.log
it blocks to listen for a connection. After that, I want to connect to the server in A using
nc 127.0.0.1 1234 < ./test/server_file.txt
so A can be unblocked and continue. The problem is that I have to manually type these commands in two different terminals, since both of them block. I have not figured out a way to automate this in a single shell script. Any help would be appreciated.
You can use & to run the process in the background and continue using the same shell.
$VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
If you want the server to continue running even after you close the terminal, you can use nohup:
nohup $VALGRIND ./test/server_tests 2 >>./test/test.log &
nc 127.0.0.1 1234 < ./test/server_file.txt
For further reference: https://www.computerhope.com/unix/unohup.htm
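Note that there is a small race in both variants: nc may run before the server has started listening. One hedged workaround, assuming your nc supports the -z port-check flag, is to poll the port first:
$VALGRIND ./test/server_tests 2 >>./test/test.log &
# Wait until the port accepts connections before running the real client.
while ! nc -z 127.0.0.1 1234; do sleep 0.1; done
nc 127.0.0.1 1234 < ./test/server_file.txt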
From the question, it looks as if the goal is to build a test script for the server that will also capture a memory check.
For the specific case of building a test script, it makes sense to extend the referenced question in the comment and add some commands to make it unlikely for the test script to hang. The script caps the time for executing the client and the server, and if the tests complete ahead of time, it attempts to shut down the server.
# Put the server in the background
timeout 15 $VALGRIND ./test/server_tests 2 >>./test/test.log &
svc_pid=$!
# Run the test client
timeout 5 nc 127.0.0.1 1234 < ./test/server_file.txt
# ... additional tests here
# Terminate the server, if still running. May use other commands/signals, based on server.
kill -0 $svc_pid && kill $svc_pid
wait $svc_pid
# Check log file for errors
...
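As a hypothetical example of that last check, one could grep valgrind's summary line in the log:
if grep -q "ERROR SUMMARY: 0 errors" ./test/test.log; then
    echo "memcheck passed"
else
    echo "memcheck reported errors"
    exit 1
fi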

linux expect in background

I use the following expect script to connect to a PBX using telnet:
expect.sh:
#!/usr/bin/expect
spawn telnet [ip] 2300
expect -exact "-"
send "SMDR\r";
expect "Enter Password:"
send "PASSWORD\r";
interact
and created another script to redirect the result to a file:
#!/bin/bash
./expect.sh | tee pbx.log
I'm trying to run expect.sh at boot time, so I added it to systemd. When I add it as a service in /etc/systemd/system, it runs, but I don't get the results in the log file the way I do when I run both scripts manually.
Any idea how I can run it at boot time?
TIA
If you just want to permanently output everything received after providing your password, simply replace your interact with expect eof, i.e. wait for end-of-file, which will happen when the connection is closed by the other end. You will probably also want to change the default timeout of 10 seconds with no data, which would otherwise stop the command:
set timeout -1
expect eof
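Putting it together, expect.sh would become something like this (the [ip] placeholder and credentials are kept from the original):
#!/usr/bin/expect
spawn telnet [ip] 2300
expect -exact "-"
send "SMDR\r"
expect "Enter Password:"
send "PASSWORD\r"
# Never time out; log everything until the PBX closes the connection.
set timeout -1
expect eof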

Deploy a TCP Server written in Ruby

I've written a TCP server in Ruby running on port 2000 with EventMachine.
Right now, what I do is ssh to my server and run the command ruby lib/tcp_server.rb to turn on the server, but it shuts down when I log out.
I've tried nohup and using &, but nothing seems to keep the server running for long.
So my question is: how do I deploy this server on port 2000 and keep it running, like how we deploy Rails to nginx?
It's not a webserver, but a TCP server for a connected device, if that helps.
Thanks!
Solution 1: tmux or screen
This is the simplest approach: create a tmux or screen session, then start your server in that session.
Solution 2: nohup
nohup ruby lib/tcp_server.rb > stdout.log 2> stderr.log &
You've tried nohup and &, so I suppose you already know how this works.
Solution 3: daemonize
You can detach from the shell and daemonize the process by forking
it twice, setting the session ID and changing the current working directory.
def daemonize
  exit if fork          # parent exits, child carries on
  Process.setsid        # start a new session, detaching from the controlling terminal
  exit if fork          # session leader exits so the daemon can never reacquire a TTY
  Dir.chdir '/'         # avoid holding the original working directory open
end
With this approach, you will have to redirect stdout and stderr to keep logs.
Another way to daemonize is to use gems like daemons.
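As a sketch of the gem-based approach (assuming the daemons gem is installed and the server entry point is lib/tcp_server.rb):
require 'daemons'

# 'tcp_server' is only the name used for the pidfile; adjust paths as needed.
Daemons.run_proc('tcp_server') do
  require_relative 'lib/tcp_server'
end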
update:
To restart the process automatically after being killed, you need a process manager like god or pm2.
To start the process automatically after booting, you need to compose an init script, but what it looks like depends on your service management system and operating system. One of the most well-known is System V. If you are using Ubuntu, you might want to take a look at Upstart or systemd.
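For example, on a systemd-based distribution a minimal unit might look like this (all paths hypothetical):
# /etc/systemd/system/tcp_server.service
[Unit]
Description=Ruby TCP server on port 2000
After=network.target

[Service]
ExecStart=/usr/bin/ruby /srv/app/lib/tcp_server.rb
Restart=always

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now tcp_server; Restart=always also restarts it after a crash.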

Expect script sending command prematurely

I have an expect script that logs into several computers through ssh and starts programs. It has been working fine for a while, but now suddenly a problem has appeared.
It happens at the same time every run; after logging out of a certain computer it attempts to log into the next one before the prompt is ready. That is, the lines
#!/usr/bin/expect
set multiPrompt {[#>$] }
(...)
expect -re $multiPrompt
send "exit\r"
expect -re $multiPrompt
spawn ssh name@computer4.place.com
which should (and normally does) give the result
name@computer3:~$ exit
logout
Connection to computer3.place.com closed.
name@computer1:~$ ssh name@computer4.place.com
instead gives
name@computer3:~$ exit
logout
ssh name@computer4.place.com
Connection to computer3.place.com closed.
name@computer1:~$
and then it goes bananas. In other words, the ssh command doesn't wait for the prompt to appear.
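One common workaround (a sketch, not a confirmed fix for this setup) is to anchor on ssh's "Connection ... closed." message before matching the next prompt, so a stale [#>$] left in the buffer cannot match early:
expect -re $multiPrompt
send "exit\r"
# Wait for ssh to report the connection closed before trusting the next prompt match.
expect "closed."
expect -re $multiPrompt
spawn ssh name@computer4.place.com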

Terminating spawn sessions in expect

I'm trying to address an issue with an Expect script that logs into a very large number of devices (thousands). The script is about 1500 lines and fairly involved; its job is to audit managed equipment on a network with many thousands of nodes. As a result, it logs into the devices via telnet, runs commands to check on the health of the equipment, logs this information to a file, and then logs out to proceed to the next device.
This is where I'm running into my problem; every expect in my script includes a timeout and an eof like so:
timeout {
    lappend logmsg "$rtrname timed out while <description of expect statement>"
    logmessage
    close
    wait
    set session 0
    continue
}
eof {
    lappend logmsg "$rtrname disconnected while <description of expect statement>"
    logmessage
    set session 0
    continue
}
My final expect closes each spawn session manually:
-re "OK.*#" {
close
send_user "Closing session... "
wait
set session 0
send_user "closed.\n\n"
continue
}
The continue statements bring the script back to the while loop that initiates the next spawn session, assuming session = 0.
The set session 0 tracks when a spawn session closes, either manually, by the timeout, or via EOF, before a new spawn session is opened. Everything seems to indicate that the spawn sessions are being closed, yet after a thousand or so spawned sessions I get the following error:
spawn telnet <IP removed>
too many programs spawned? could not create pipe: too many open files
Now, I'm a network engineer, not a UNIX admin or professional programmer, so can someone help steer me towards my mistake? Am I closing telnet spawn sessions but not properly closing a channel? I wrote a second test script that literally just connects to devices one by one and disconnects immediately after a connection is formed. It doesn't log in or run any commands as my main script does, and it works flawlessly through thousands of connections. That script is below:
#!/usr/bin/expect -f
# SPAWN TELNET LIMIT TEST
set ifile [open iad.list]
set rtrname ""
set sessions 0
while {[gets $ifile rtrname] != -1} {
    set timeout 2
    spawn telnet $rtrname
    incr sessions
    send_user "Session# $sessions\n"
    expect {
        "Connected" {
            close
            wait
            continue
        }
        timeout {
            close
            wait
            continue
        }
        eof {
            continue
        }
    }
}
In my main script I'm logging every single connection and why it may EOF or time out (via the logmessage procedure, which writes a specific reason to a file), and even when I see nothing but successful spawned connections and closed connections, I get the same problem with my main script but not the test script.
I've been doing some reading on killing process IDs, but as I understand it, close should be killing the process ID of the current spawn session, and wait should be halting the script until the process is dead. I've also tried using a simple "exit" command from the devices to close the telnet connection, but this doesn't produce any better results.
I may simply need a suggestion on how to better track the opening and closing of my sessions and ensure that, between devices, no spawn sessions remain open. Any help that can be offered will be much appreciated.
Thank you!
The Error?
spawn telnet
too many programs spawned? could not create pipe: too many open files
This error is likely due to your system running out of file handles (or at least exhausting the count available to you).
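If you want to see the ceiling you're hitting, on Linux the per-process limit can be checked from the shell:
ulimit -n    # maximum number of open file descriptors for this shell and its children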
I suspect the reason for this is abandoned telnet sessions which are left open.
Now let's talk about why they may still be hanging around.
Not Even, Close?
close may not actually close the telnet connection: it only closes expect's side of the session with telnet (see: The close Command). In this case telnet is most likely being kept alive waiting for more input from the network side and by a TCP keepalive.
Not all applications recognize close, which is presented as an EOF to the receiving application. Because of this, they may remain open even when their input has been closed.
Tell "Telnet", It's Over.
In that case, you will need to interrupt telnet. If your intent is to complete some work and exit, then that is exactly what we'll need to do.
For "telnet" you can cleanly exit by issuing a "send “35\r”" (which would be "ctrl+]" on the keyboard if you had to type it yourself) followed by "quit" and then a carriage return. This will tell telnet to exit gracefully.
Expect script: start telnet, run commands, close telnet
Excerpt:
#!/usr/bin/expect

set timeout 1
set ip [lindex $argv 0]
set port [lindex $argv 1]
set username [lindex $argv 2]
set password [lindex $argv 3]

spawn telnet $ip $port
expect "'^]'."
send -- "\r"
expect "username:" {
    send -- "$username\r"
    expect "password:"
    send -- "$password\r"
}
expect "$"
send -- "ls\r"
expect "$"
sleep 2
# Send the special ^] (octal \035) so we can tell telnet to quit.
send "\035\r"
expect "telnet>"
# Tell telnet to quit.
send -- "quit\r"
expect eof
# Also call "wait" (block until the process exits) or "wait -nowait" (don't block).
wait
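A hypothetical invocation, assuming the excerpt is saved as telnet_quit.exp and made executable (the argument order matches the lindex calls above):
./telnet_quit.exp 192.0.2.10 23 myuser mypass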
Wait, For The Finish.
Expect - The wait Command
Without "wait", expect may sever the connection to the process prematurely, this can cause the creation zombies in some rare cases. If the application did not get our signal earlier (the EOF from close), or if the process doesn't interpret EOF as an exit status then it may also continue running and your script would be none the wiser. With wait, we ensure we don't forget about the process until it cleans up and exits.
Otherwise, we may not close any of these processes until expect exits. This could cause us to run out of file handles if none of them close for a long running expect script (or one which connects to a lot of servers). Once we run out of file handles, expect and everything it started just dies, and you won't see those file handles exhausted any longer.
Timeout?, Catch all?, Why?
You may also want to consider using a timeout in case the server doesn't respond when expected, so we can exit early. This is ideal for severely lagged servers, which should instead get some admin attention.
A catch-all can help your script deal with any unexpected responses that don't necessarily prevent us from continuing. We can choose to just continue processing, or we could choose to exit early.
Expect Examples Excerpt:
expect {
    "password:" {
        send "password\r"
    }
    "yes/no)?" {
        send "yes\r"
        set timeout -1
    }
    timeout {
        exit
    }
    -re . {
        # Our catch-all: consume any other output and keep waiting.
        exp_continue
    }
    eof {
        exit
    }
}
