ncurses - expect: sleep executes at wrong time

I have some ncurses apps that I need to automate to test repeatedly.
I am placing the "sleep" command between "send" commands. However, what I see is that all the sleeps are executed at the beginning, before the screen loads. expect concatenates the sends (I can see them at the bottom of the screen during the sleeps), then issues them all together.
I have tried sending all keys with "send -s" or "send -h". That helps only marginally. I've also replaced "-f" on line 1 with "-b" - again, only a tiny difference.
Why isn't "sleep" pausing at the right time?
Incidentally, my programs read input in a getc() loop, so I can't use the "expect" command. I tried that too.
#!/usr/bin/expect -f
spawn ruby testsplit.rb
#expect
set send_human {3 3 5 5 7}
set send_slow {10 1}
exp_send -s -- "--"
exec sleep 3
send -s "+"
send -s "="
sleep 1
send -h -- "-"
send -h -- "-"
sleep 1
send -h -- "v"
interact

I would guess that you need to wait for your ruby program to start up before you continue with the sends and sleeps. Does the ruby program output any string when it has started (e.g. "ready")? If so, at the point where you have expect commented out, I would try expect "ready" so that Expect waits until the ruby program has started before continuing.
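For example (a sketch: "ready" is a hypothetical marker, to be replaced with whatever the program actually prints once its screen has loaded):

```
#!/usr/bin/expect -f
spawn ruby testsplit.rb
# Block here until the ncurses screen is up; "ready" is a placeholder
# for output the program is known to print at startup.
expect "ready"
exp_send -s -- "--"
sleep 3
send -s "+"
send -s "="
```

With the startup wait in place, the later sleeps run between the sends as intended, instead of all elapsing while the screen is still loading.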

Related

How to send a SIGTSTP signal to a process spawned by an expect script

I wrote an expect script like this:
#!/usr/bin/expect -f
spawn sql "user=xx dbname=xx"
interact
After I enter the sql client, I can't send the SIGTSTP signal with Ctrl+Z to suspend the current process and put it in the background.
The terminal will only show:
=> ^Z
What should I do to make Ctrl+Z achieve the above purpose?
The manual of expect gives the recipe:
During interact, raw mode is used so that all characters may be passed to the current process. If the current process does not catch job control signals, it will stop if sent a stop signal (by default ^Z). To restart it, send a continue signal (such as by "kill -CONT "). If you really want to send a SIGSTOP to such a process (by ^Z), consider spawning csh first and then running your program. On the other hand, if you want to send a SIGSTOP to Expect itself, first call interpreter (perhaps by using an escape character), and then press ^Z.
So, you may be able to do something like:
#!/usr/bin/expect -f
spawn /bin/sh
exp_send "psql hostaddr=xxxx port=xxxx user=xx dbname=xx\r"
interact
For example, let's consider the following interactive shell script named interact.sh:
#!/bin/sh
read -p "First name: " fname
read -p "Last name: " lname
echo "you entered: $fname $lname"
And the following expect script named script.exp to automate the previous one:
#!/usr/bin/expect -f
spawn /bin/sh
exp_send "./interact.sh\r"
interact
We launch the latter:
$ ./script.exp
spawn /bin/sh
./interact.sh
$ ./interact.sh
First name: Stack
Last name: ^Z (we entered CTRL-Z here)
[1]+ Stopped(SIGTSTP) ./interact.sh
sh-4.4$ jobs
[1]+ Stopped(SIGTSTP) ./interact.sh
sh-4.4$ fg
./interact.sh
Overflow
you entered: Stack Overflow
$ exit
exit
$

Minicom script called from Jenkins failing on exit of '! killall -9 minicom'

I managed to make a script which sends a few commands via minicom and stores the output in output.txt. The script that calls minicom is called dut.sh:
#!/bin/bash
echo "Setting up DUT"
stm_armv7 -print "DUT"
stm_armv7 -dut
echo "wait 30s"
sleep 30s
stty -F /dev/ttyACM0 115200 cs8 -cstopb -parenb
rm /home/fsnk/scripts/serial-com/output.txt
export TERM=linux-c-nc
minicom -b 115200 -D /dev/ttyACM0 -C /home/fsnk/scripts/serial-com/output.txt -S /home/fsnk/scripts/serial-com/serial -o
echo "wait another 5s"
sleep 5s
stm_armv7 -ts
To the minicom command, I pass another file, called just serial, which contains some runscript code.
# UNIX login script.
# Can be used to automatically login to almost every UNIX box.
#
# Some variables.
set a 0
set b a
print Trying to Login..
# Skip initial 'send ""', it seems to matter sometimes..
send ""
goto login
login:
if a > 3 goto failed1
expect {
"ogin:" send "root"
"assword:" send ""
timeout 5 goto loop1
}
goto loop1
loop1:
send "systemctl is-system-running --wait"
sleep 3
# Send command not more than three times.
inc b
if b > 3 goto failed1
expect {
"\nrunning" goto success1
break
"degrading" goto success2
break
timeout 5 goto failed2
}
success1:
print \nSuccessfully received running!
! killall -9 minicom
exit
success2:
print \nSuccessfully received degrading!
! killall -9 minicom
exit
failed1:
print \nConnection Failed (wrong password?)
! killall -9 minicom
exit
failed2:
print \nMessage sending failed. Didn't receive anything!
! killall -9 minicom
exit
The command ! killall -9 minicom kills the minicom terminal, as described in its manual. As I mentioned earlier, when I run this locally, or when I call the script via ssh from my local machine, it runs okay. The problem occurs when I run it from Jenkins.
The output.txt file gets created but remains empty, while in Jenkins I receive a minicom message like this:
Setting up DUT
wait 30s
Welcome to minicom 2.7
OPTIONS: I18n
Compiled on Apr 22 2017, 09:14:19.
Port /dev/ttyACM0, 16:30:57
Press CTRL-A Z for help on special keys
/home/fsnk/scripts/serial-com/dut.sh: line 12: 5639 Killed minicom -b 115200 -D /dev/ttyACM0 -C /home/fsnk/scripts/serial-com/output.txt -S /home/fsnk/scripts/serial-com/serial -o
wait another 5s
Finished: SUCCESS
After the message Press CTRL-A Z for help on special keys I would expect it to log in to the board (no password, only the root user) and run systemctl is-system-running --wait. All the output should end up in output.txt.
Again, this works just as expected when run manually or triggered from my machine via SSH, but when triggered from Jenkins (I added an "execute shell" build step which SSHes in and launches the script) it doesn't work.
At this point I feel like it's a minicom issue; in that case, I'd welcome any solution using screen.
I believe it is because the killall causes minicom to return a non-zero exit code to the operating system, which Jenkins evaluates and so considers the build a failure. If that is the cause, you could add a try/catch block in the Jenkins pipeline to mark the build unstable or successful instead.
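If the non-zero exit status is indeed the cause, a simpler workaround on the shell side (a sketch, not verified against the actual Jenkins job) is to mask minicom's exit status in dut.sh so the build step's commands all succeed:

```shell
#!/bin/sh
# Sketch: Jenkins fails the step because the killed minicom returns a
# non-zero exit status. Appending '|| true' masks that status. Here
# 'sh -c "exit 137"' stands in for minicom being killed by 'killall -9'.
sh -c 'exit 137' || true
echo "exit status masked: $?"
```

In the real script, the line would become minicom -b 115200 -D /dev/ttyACM0 -C /home/fsnk/scripts/serial-com/output.txt -S /home/fsnk/scripts/serial-com/serial -o || true.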

How do I redirect stdout from a program continuously into a spawned process using expect?

I need to open telnet, send a few commands, and then send the stdout from pocketsphinx.
Currently expect waits until the program is finished and then outputs everything to the telnet process.
I need pocketsphinx to continuously feed the spawned telnet process.
This is what I have so far:
#!/usr/bin/expect -d
set send_human {.1 .3 1 .05 2}
spawn telnet 192.168.1.104 23
expect "*"
send "\x01"; send "2\r"
expect ":"
send -h "hello world\r"
send -h "goodbye world\r"
send -h "Test Test Test\r"
send -- [exec pocketsphinx_continuous -infile speech.wav 2> /dev/null ]\n
You can use the expect command interact to connect two spawned processes together. From the manual:
By default, interact expects the user to be writing stdin and reading stdout of the Expect process itself. The -u flag (for "user") makes interact look for the user as the process named by its argument (which must be a spawned id). This allows two unrelated processes to be joined together without using an explicit loop. To aid in debugging, Expect diagnostics always go to stderr (or stdout for certain logging and debugging information). For the same reason, the interpreter command will read interactively from stdin.
For example
set send_human {.1 .3 1 .05 2}
spawn telnet 192.168.1.104 23
expect "*"
send "\x01"; send "2\r"
expect ":"
send -h "hello world\r"
send -h "goodbye world\r"
send -h "Test Test Test\r"
set sid_telnet $spawn_id
# Note: spawn does not perform shell redirection itself, so run via sh
spawn sh -c "pocketsphinx_continuous -infile speech.wav 2>/dev/null"
interact -u $sid_telnet

Terminate program with Ctrl-C without terminating parent script

I have a bash script that starts an external program (evtest) twice.
#!/bin/bash
echo "Test buttons on keyboard 1"
evtest /dev/input/event1
echo "Test buttons on keyboard 2"
evtest /dev/input/event2
As far as I know, evtest can only be terminated via Ctrl-C. The problem is that this terminates the parent script too. That way, the second call to evtest will never happen.
How can I close the first evtest without closing the script, so that the second evtest will actually run?
Thanks!
P.S.: for the one that want to ask "why not running evtest manually instead of using a script?", the answer is that this script contains further semi-automated hardware debug test, so it is more convenient to launch the script and do everything without the need to run further commands.
You can use the trap command to "trap" signals; this is the shell equivalent of the signal() or sigaction() call in C and most other programming languages to catch signals.
The trap is reset for subshells, so the evtest will still act on the SIGINT signal sent by ^C (usually by quitting), but the parent process (i.e. the shell script) won't.
Simple example:
#!/bin/sh
# Run a command on signal 2 (SIGINT, which is what ^C sends)
sigint() {
echo "Killed subshell!"
}
trap sigint 2
# Or use the no-op command for no output
#trap : 2
echo "Test buttons on keyboard 1"
sleep 500
echo "Test buttons on keyboard 2"
sleep 500
And a variant which still allows you to quit the main program by pressing ^C twice within one second:
last=0
allow_quit() {
[ $(date +%s) -lt $(( $last + 1 )) ] && exit
last=$(date +%s)
}
trap allow_quit 2
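Put together with the earlier example, the double-press variant might look like this (a sketch; sleep again stands in for evtest, and the feedback message is an addition):

```shell
#!/bin/sh
# Exit only when ^C is pressed twice within the same second; a single
# ^C just kills the current foreground command (here, sleep).
last=0
allow_quit() {
    [ "$(date +%s)" -lt $(( last + 1 )) ] && exit
    last=$(date +%s)
    echo "Press ^C again to quit"
}
trap allow_quit 2
echo "Test buttons on keyboard 1"
sleep 500
echo "Test buttons on keyboard 2"
sleep 500
```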

Bash script that will survive disconnection, but not user break

I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I can solve the first part of it like this:
#!/bin/bash
cmd='
#commands here, avoiding single quotes...
'
nohup bash -c "$cmd" &
tail -f nohup.out
But pressing Ctrl+C obviously just kills the tail process, not the main body. Can I have both? Maybe by using screen?
I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I think this is exactly the answer to the question you formulated; here is one without screen:
#!/bin/bash
cmd=`cat <<EOF
# commands here
EOF
`
nohup bash -c "$cmd" &
# store the process id of the nohup process in a variable
CHPID=$!
# whenever ctrl-c is pressed, kill the nohup process before exiting
trap "kill -9 $CHPID" INT
tail -f nohup.out
Note however that nohup is not reliable: when the invoking user logs out, chances are that the nohup-started process also quits immediately. In that case disown works better.
bash -c "$cmd" &
CHPID=$!
disown
This is probably the simplest form using screen:
screen -S SOMENAME script.sh
Then, if you get disconnected, on reconnection simply run:
screen -r SOMENAME
Ctrl+C should continue to work as expected.
Fact 1: When a terminal (xterm for example) gets closed, the shell is supposed to send a SIGHUP ("hangup") to any processes running in it. This harkens back to the days of analog modems, when a program needed to clean up after itself if mom happened to pick up the phone while you were online. The signal could be trapped, so that a special function could do the cleanup (close files, remove temporary junk, etc). The concept of "losing your connection" still exists even though we use sockets and SSH tunnels instead of analog modems. (Concepts don't change; all that changes is the technology we use to implement them.)
Fact 2: The effect of Ctrl-C depends on your terminal settings. Normally, it will send a SIGINT, but you can check by running stty -a in your shell and looking for "intr".
You can use these facts to your advantage with bash's trap command. For example, try running this in a window, then press Ctrl-C and check the contents of /tmp/trapped. Then run it again, close the window, and again check the contents of /tmp/trapped:
#!/bin/bash
trap "echo 'one' > /tmp/trapped" 1
trap "echo 'two' > /tmp/trapped" 2
echo "Waiting..."
sleep 300000
For information on signals, you should be able to man signal (FreeBSD or OSX) or man 7 signal (Linux).
(For bonus points: See how I numbered my facts? Do you understand why?)
So ... to your question. To "survive" disconnection, you want to specify behaviour that will be run when your script traps SIGHUP.
(Bonus question #2: Now do you understand where nohup gets its name?)
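As a rough sketch of that (assumption: the desired behaviour is "survive hangup, keep Ctrl-C"), the script can ignore SIGHUP while leaving SIGINT fatal:

```shell
#!/bin/sh
# Survive disconnection but still allow the user to abort:
trap '' HUP          # ignore SIGHUP, so a lost connection won't kill us
trap 'exit 130' INT  # ^C still terminates the script
echo "Waiting..."
while :; do sleep 1; done
```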
