expect: how to get rid of duplicated lines? - bash

Say I have the following program:
$ cat test1.c
#include <stdio.h>
int main() {
    int i;
    for (i = 0; i < 10; ++i) {
        int n, res;
        res = scanf("%d", &n);
        if (res != 1) break;
        printf("%d\n", n * K);
    }
}
Compilation is simple:
$ gcc -DK=2 test1.c -o t2
$ gcc -DK=3 test1.c -o t3
I am then using expect to automate the process: one program sends output to the other.
$ cat connect.tcl
#!/usr/bin/env expect
spawn ./t2
set idt2 $spawn_id
spawn ./t3
set idt3 $spawn_id
send -i $idt2 "1\r"
expect {
    -i $idt2 -re {(\d+)\s*(\d+)} {
        set q $expect_out(2,string)
        send -i $idt3 "$q\n"
        expect -i $idt3 -re {(\d+)\s*(\d+)}
        set a $expect_out(2,string)
        send -i $idt2 "$a\n"
        exp_continue
    }
    -i $idt2 eof {
        exp_continue
    }
    -i $idt3 eof {
    }
    timeout {
        puts "$expect_out(buffer)"
    }
}
Run it like this:
$ expect connect.tcl
1
2
2
6
6
12
12
36
36
72
72
...... etc .....
Everything works, but I am seeing every value twice.
What I want is:
$ expect connect.tcl
1
2
6
12
36
72
...... etc .....
I.e., I want the first program to take input from the second and vice versa, and to print only what needs to be printed.
I tried different variations of spawn -noecho, etc., but nothing seems to work. Also please note that I am matching a pair of numbers in my expect script; of course, in the desired solution the regular expressions will differ slightly.
I will appreciate any ideas.

By default expect logs interactions with the spawned processes to stdout, so my guess is that what you're seeing is one process outputting "12" and then you writing that "12" to the other. Try disabling that with log_user 0 and explicitly puts what you want to see instead.
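For reference, a minimal sketch of that change applied to the connect.tcl above. The initial puts 1 echoes the seed value ourselves, since log_user 0 also hides the echoed input:

#!/usr/bin/env expect
log_user 0              ;# stop expect from copying spawned-process I/O to stdout
spawn ./t2
set idt2 $spawn_id
spawn ./t3
set idt3 $spawn_id
puts 1                  ;# print the seed value once ourselves
send -i $idt2 "1\r"
expect {
    -i $idt2 -re {(\d+)\s*(\d+)} {
        set q $expect_out(2,string)
        puts $q         ;# explicitly print each value exactly once
        send -i $idt3 "$q\n"
        expect -i $idt3 -re {(\d+)\s*(\d+)}
        set a $expect_out(2,string)
        puts $a
        send -i $idt2 "$a\n"
        exp_continue
    }
    -i $idt2 eof {}
    -i $idt3 eof {}
}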

Related

How to handle Ctrl + c in shell script?

I am trying to handle Ctrl + C in a shell script. I have code running in a while loop, but I am calling the binary from the script and running it in the background, so when I want to stop it, the binary should stop. The code of hello.c is below.
vim hello.c
#include <stdio.h>
int main()
{
    while(1)
    {
        int n1,n2;
        printf("Enter the first number\n");
        scanf("%d",&n1);
        printf("Enter the second number\n");
        scanf("%d",&n2);
        printf("Entered number are n1 = %d , n2 =%d\n",n1,n2);
    }
}
Below is the Bash script which I used.
#!/bin/sh
echo run the hello binary
./hello < in.txt &
trap_ctrlc()
{
    ps -eaf | grep hello | grep -v grep | awk '{print $2}' | xargs kill -9
    echo trap_ctrlc
    exit
}
trap trap_ctrlc SIGHUP SIGINT SIGTERM
After starting the script, the hello binary runs continuously. I have killed this binary from another terminal using the kill -9 pid command.
I have tried this trap_ctrlc function, but it does not work. How do I handle Ctrl + C in a shell script?
In in.txt I have added the input so I can pass this file directly to the binary:
vim in.txt
1
2
Output:
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
And it goes on continuously.
Change your C program so it checks whether reading data actually succeeded:
#include <stdio.h>
int main()
{
    int n1,n2;
    while(1) {
        printf("Enter the first number\n");
        if(scanf("%d",&n1) != 1) return 0; /* check here */
        printf("Enter the second number\n");
        if(scanf("%d",&n2) != 1) return 0; /* check here */
        printf("Entered number are n1 = %d , n2 =%d\n",n1,n2);
    }
}
It will now terminate when the input from in.txt is depleted.
To make something that reads from in.txt many times, you could create a loop in your bash script that feeds ./hello forever (or until it's killed).
Example:
#!/bin/bash
# a function to repeatedly print the content in "in.txt"
function print_forever() {
    while [ 1 ];
    do
        cat "$1"
        sleep 1
    done
}
echo run the hello binary
print_forever in.txt | ./hello &
pid=$!
echo "background process $pid started"
trap_ctrlc() {
    kill $pid
    echo -e "\nkill=$? (0 = success)\n"
    wait $pid
    echo "wait=$? (the exit status from the background process)"
    echo -e "\n\ntrap_ctrlc\n\n"
}
trap trap_ctrlc INT
# wait for all background processes to terminate
wait
Possible output:
$ ./hello.sh
run the hello binary
background process 262717 started
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
Enter the second number
Entered number are n1 = 1 , n2 =2
Enter the first number
^C
kill=0 (0 = success)
wait=143 (the exit status from the background process)
trap_ctrlc
Another option is to kill the child after the wait is interrupted:
#!/bin/bash
function print_forever() {
    while [ 1 ];
    do
        cat "$1"
        sleep 1
    done
}
echo run the hello binary
print_forever in.txt | ./hello &
pid=$!
echo "background process $pid started"
trap_ctrlc() {
    echo -e "\n\ntrap_ctrlc\n\n"
}
trap trap_ctrlc INT
# wait for all background processes to terminate
wait
echo first wait=$?
kill $pid
echo -e "\nkill=$? (0 = success)\n"
wait $pid
echo "wait=$? (the exit status from the background process)"

expect script - how to split the output of a command into several variables

I am trying to set up an expect script that logs in to a remote server and
fetches the 3 last created logfiles. The output (1 line) looks like below:
root@server1:/cluster/storage/var/log/alarms$
Last 3 created files are: FmAlarmLog_20180515_1.log FmAlarmLog_20180516_2.log FmAlarmLog_20180517_3.log
How can I split this output and create 3 different variables (one for each logfile)?
The name of the logfiles always starts with "FmAlarmLog_".
I need to add the part that handles fetching those files later.
#!/usr/bin/expect -f
set passwd "xxx"
set cmd1 "ls -ltr | tail -3 | awk '{print \$NF}'"
set dir "/cluster/storage/var/log/alarms"
set timeout 1
set prompt1 "*\$ "
log_user 0
spawn ssh admin@10.30.35.36
expect {
    -re ".*Are.*.*yes.*no.*" {
        send "yes\n"
        exp_continue
    }
    "*?assword:*" {
        send $passwd
        send "\n"
    }
}
expect $prompt1 { send "cd $dir\r" }
expect $prompt1 { send "$cmd1\r" }
set Last3LogFiles {}
expect \n
expect {
    -re {^([^\r\n]*)\r\n} {
        lappend Last3LogFiles $expect_out(1,string)
        exp_continue
    }
    -ex $prompt1
}
send_user "Last 3 created files are: $Last3LogFiles\n"
send "exit\n"
exit 0
Try this:
expect $prompt1
send "$cmd1\r"
# express the prompt as a regular expression.
# best practice is to match the prompt at the end of the current input,
# I don't know if you have a space at the end of your prompt
expect -re {(.*)\r\n\$ ?$}
# command output is in $expect_out(1,string)
set Last3LogFiles [regexp -inline -all {FmAlarmLog_\S+} $expect_out(1,string)]
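If you then want the three names in their own variables rather than in one list, lassign will split it (the variable names here are just for illustration):

lassign $Last3LogFiles log1 log2 log3
send_user "first: $log1, second: $log2, third: $log3\n"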

handling tcl expect application crash

Is it possible to spawn an application, send commands, expect results, but also 'respawn' the application in case it crashes and continue from the last point? I tried creating procedures for spawning, but I am not able to catch the user shell prompt once the application gets closed.
So it sounds like you are telneting or sshing into a shell and then running an application. The application dies or hangs, so the prompt does not return, so expect does not return, and so you are stuck. You will need to use timeout to detect a hang and restart your entire process (including the spawn). Below is the boilerplate I start with when writing expect scripts. The critical thing for resolving your problem is to realize that spawn not only sets spawn_id but also returns the pid of the spawned process. You can use that pid to kill the spawned process if you get an eof and/or a timeout. My boilerplate kills the spawned process on timeout. The timeout handler does not bail from the expect loop but waits for the eof before exiting. It also accumulates the output of the expect command, so on timeout you may be able to see where it died. The boilerplate is in a proc called run. Spawn the process and pass in the pid and the spawn id. Reuse the boilerplate to define other procs that will be the steps, put the procs in a script with a counter between them as shown, and repeat. The other answerer is correct that unless the app restarts where you left off, you need to start from scratch. If it does restart where you left off, make the steps granular enough that you know which command to repeat and which step to start from.
BoilerPlate
proc run { pid spawn_id buf } {
    upvar $buf buffer;  # buffer to accumulate output of expect
    set bad 0;
    set done 0;
    exp_internal 0;     # set to one for extensive debug
    log_user 0;         # set to one to watch action
    expect {
        -i $spawn_id
        -re {.+} {      ;# match any output; replace with the pattern for this step
            append buffer $expect_out(buffer);  # accumulate expect output
            exp_continue;
        }
        timeout {
            send_user "timeout\n"
            append buffer $expect_out(buffer);  # accumulate expect output
            exec kill -9 $pid
            set bad 1
            exp_continue;
        }
        full_buffer {
            send_user " buffer is full\n"
            append buffer $expect_out(buffer);  # accumulate expect output
            exp_continue;
        }
        eof {
            send_user "Eof detected\n"
            append buffer $expect_out(buffer);  # accumulate expect output
            set done 1 ;
        }
    }
    set exitstatus [ exp_wait -i $spawn_id ];
    catch { exp_close -i $spawn_id };
    if { $bad } {
        if { $done } {
            throw EXP_TIMEOUT "Application timeout"
        }
        throw BAD_ERROR "unexpected failure "
    }
    return $exitstatus
}
set count 0
set attempts 0 ; # try 4 times
while { $count == 0 && $attempts < 4 } {
    set buff ""
    set pid [spawn -noecho ssh user@host ]
    try {
        run $pid $::spawn_id buff
        incr count
        run2 $pid $::spawn_id buff
        incr count
        run3 $pid $::spawn_id buff
        incr count
        run4 $pid $::spawn_id buff
        incr count
    } trap EXP_TIMEOUT { a b } {
        puts "$a $b"
        puts " program failed at step $count"
    } on error { a b } {
        puts "$a $b"
        puts " program failed at step $count"
    } finally {
        if { $count == 4 } {
            puts "success"
        } else {
            set count 0
            incr attempts
            puts "$buff"
            puts "restarting\n"
        }
    }
}
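run2 through run4 are not defined above; the idea is that each step proc sends the command for its step and then runs the same boilerplate expect loop. A hypothetical sketch (the command sent is a placeholder):

proc run2 { pid spawn_id buf } {
    upvar $buf buffer
    # hypothetical: send the command this step should execute
    exp_send -i $spawn_id "run_my_application\r"
    # reuse the boilerplate loop to collect output and handle timeout/eof;
    # the upvar chain means output still lands in the caller's buffer
    run $pid $spawn_id buffer
}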

Expect in Alias

I am using a Bash alias that allows me to shorten the SSH command for logging into my routers. Quite trivial, but a time saver! What I would now like to do is take this a step further and fully automate logging in to the routers.
For example in my ~/.bashrc file I have the following entry:
sshFuncB()
{
    ssh -o StrictHostKeyChecking=no superuser@$1 - | /usr/bin/expect<<EOF
set timeout 5
set send_human {.1 .3 1 .05 2}
expect {
    "password: " { send -h "MYPASSWORD\r" }
    "No route to host" { exit 1 }
    timeout { exit 1 }
}
set timeout 2
sleep 1
expect {
    "N]?" { send "y\r"; exp_continue }
    timeout { exit 1 }
}
expect eof
EOF
}
alias z=sshFunc
However, when I type z myrouterhostname, this does not give the desired output. I must find a way to start the SSH connection and have expect automate logging in before returning control to the user.
Any ideas?
This can be done as follows:
sshFuncB()
{
    expect -c "
    spawn ssh -o StrictHostKeyChecking=no superuser@$1
    set timeout 5
    set send_human {.1 .3 1 .05 2}
    expect {
        \"password: \" { send -h \"MYPASSWORD\r\" }
        \"No route to host\" { exit 1 }
        timeout { exit 1 }
    }
    set timeout 2
    sleep 1
    expect {
        \"N]?\" { send \"y\r\"; exp_continue }
        timeout { exit 1 }
    }
    expect eof
    "
}
alias z=sshFuncB
Note the use of the -c flag in expect, which you can refer to here if you have any doubts.
If we use double quotes for the expect code with the -c flag, it will allow the bash substitutions. If you use single quotes for the same, then bash substitutions won't work. (You have used $1 inside expect, which is why I used double quotes.) Since I have used double quotes for the whole expect code, we have to escape each double quote inside the expect statements with a backslash, as follows:
expect {
    # Escaping the double quote with backslash
    \"password: \" {some_action_here}
}
One more update: since this is about connecting to the router and doing some of your manual operations, it is better to have interact at the end.
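A minimal sketch against the answer's code: replace the final expect eof inside the -c string with interact, which connects your terminal to the router session once the automated login is done:

    expect {
        \"N]?\" { send \"y\r\"; exp_continue }
        timeout { exit 1 }
    }
    # hand control of the session back to the user
    interact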

Expect crashing running exec command

I have an expect script that performs an exec that can take some time (around 5 minutes).
I have copied the script below, along with the output from running it.
If the script were timing out, I would have thought "timeout" would be printed to stdout?
Any pointers will be appreciated!
expect <<EOF
cd /home/vagrant/cloudstack
# 20 mins timeout for jetty to start and devcloud to be provisioned
set timeout 1200
match_max 1000000
set success_string "*Started Jetty Server*"
spawn "/home/vagrant/cloudstack_dev.sh" "-r"
expect {
    -re "(\[^\r]*\)\r\n"
    {
        set current_line \$expect_out(buffer)
        if { [ string match "\$success_string" "\$current_line" ] } {
            flush stdout
            puts "Started provisioning cloudstack."
            # expect crashes executing the following line:
            set exec_out [exec /home/vagrant/cloudstack_dev.sh -p]
            puts "Finished provisioning cloudstack. Stopping Jetty."
            # CTRL-C
            send \003
            expect eof
        } else {
            exp_continue
        }
    }
    eof { puts "eof"; exit 1; }
    timeout { puts "timeout"; exit 1; }
}
EOF
The output:
...
2014-03-14 06:44:08 (1.86 MB/s) - `/home/vagrant/devcloud.cfg' saved [3765/3765]
+ python /home/vagrant/cloudstack/tools/marvin/marvin/deployDataCenter.py -i /home/vagrant/devcloud.cfg
+ popd
+ exit 0
while executing
"exec /home/vagrant/cloudstack_dev.sh -p"
invoked from within
"expect {
-re "(\[^\r]*\)\r\n"
{
set current_line $expect_out(buffer)
if { [ string match "$success_string" "$current_line" ]..."
The function that gets run inside cloudstack_dev.sh:
function provision_cloudstack () {
    echo -e "\e[32mProvisioning Cloudstack.\e[39m"
    pushd $PWD
    if [ ! -e $progdir/devcloud.cfg ]
    then
        wget -P $progdir https://github.com/imduffy15/devcloud/raw/v0.2/devcloud.cfg
    fi
    python /home/vagrant/cloudstack/tools/marvin/marvin/deployDataCenter.py -i $progdir/devcloud.cfg
    popd
}
From the Expect output, it seems as though the function is being run ok.
See http://wiki.tcl.tk/exec
The exec call by default returns an error status when the exec'ed command:
returns a non-zero exit status, or
emits any output to stderr
This second condition can be irksome. If you don't care about stderr, then use exec -ignorestderr
You should always catch an exec call. More details in the referenced wiki page, but at a minimum:
set status [catch {exec command} output]
if {$status > 0} {
    # handle an error condition ...
} else {
    # success
}
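Applied to the failing line in the script above, that might look like the following sketch (note that inside the asker's shell heredoc the dollar signs would need to be escaped as \$):

set status [catch {exec /home/vagrant/cloudstack_dev.sh -p} exec_out]
if {$status > 0} {
    # the script exited 0, so this is likely the stderr condition; decide
    # whether that is actually fatal for your run
    puts "provisioning reported: $exec_out"
} else {
    puts "Finished provisioning cloudstack. Stopping Jetty."
}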
