Minicom script called from Jenkins failing on exit of '! killall -9 minicom' - bash

I managed to make a script which sends a few commands via minicom and stores the output in output.txt. The script which calls minicom is called dut.sh:
#!/bin/bash
echo "Setting up DUT"
stm_armv7 -print "DUT"
stm_armv7 -dut
echo "wait 30s"
sleep 30s
stty -F /dev/ttyACM0 115200 cs8 -cstopb -parenb
rm /home/fsnk/scripts/serial-com/output.txt
export TERM=linux-c-nc
minicom -b 115200 -D /dev/ttyACM0 -C /home/fsnk/scripts/serial-com/output.txt -S /home/fsnk/scripts/serial-com/serial -o
echo "wait another 5s"
sleep 5s
stm_armv7 -ts
To the minicom command I pass another file, called just serial, which contains some runscript code:
# UNIX login script.
# Can be used to automatically login to almost every UNIX box.
#
# Some variables.
set a 0
set b a
print Trying to Login..
# Skip initial 'send ""', it seems to matter sometimes..
send ""
goto login
login:
if a > 3 goto failed1
expect {
"ogin:" send "root"
"assword:" send ""
timeout 5 goto loop1
}
goto loop1
loop1:
send "systemctl is-system-running --wait"
sleep 3
# Send command not more than three times.
inc b
if b > 3 goto failed1
expect {
"\nrunning" goto success1
break
"degrading" goto success2
break
timeout 5 goto failed2
}
success1:
print \nSuccessfully received running!
! killall -9 minicom
exit
success2:
print \nSuccessfully received degrading!
! killall -9 minicom
exit
failed1:
print \nConnection Failed (wrong password?)
! killall -9 minicom
exit
failed2:
print \nMessage sending failed. Didn't receive anything!
! killall -9 minicom
exit
The command ! killall -9 minicom kills the minicom terminal, according to its manual. As I mentioned earlier, when I run this locally, or when I call the script via ssh from my local machine, it runs okay. The problem occurs when I run it from Jenkins.
When run from Jenkins, the output.txt file gets created but remains empty, and I receive a minicom message like this:
Setting up DUT
wait 30s
Welcome to minicom 2.7
OPTIONS: I18n
Compiled on Apr 22 2017, 09:14:19.
Port /dev/ttyACM0, 16:30:57
Press CTRL-A Z for help on special keys
/home/fsnk/scripts/serial-com/dut.sh: line 12: 5639 Killed minicom -b 115200 -D /dev/ttyACM0 -C /home/fsnk/scripts/serial-com/output.txt -S /home/fsnk/scripts/serial-com/serial -o
wait another 5s
Finished: SUCCESS
After the message Press CTRL-A Z for help on special keys I would expect it to log in to the board (no password, only the root user) and run systemctl is-system-running --wait. All the output should end up in output.txt.
Again, this works just as expected when run manually or triggered from my machine via SSH, but when triggered from Jenkins (I added an "Execute shell" build step which SSHes in and launches the script) it doesn't work.
At this point I feel like it's a minicom issue; in that case, I welcome any solution using screen instead.

I believe it is because the killall causes minicom to return a non-zero exit code to the shell, which Jenkins evaluates and therefore considers the step a failure. If that is the cause, you could catch the error (for example with a try/catch block in a Jenkins pipeline) and mark the build unstable or successful instead.
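If that is what is happening, a minimal workaround (a sketch, assuming the non-zero exit status is indeed what fails the "Execute shell" step) is to swallow minicom's exit status in dut.sh:
# minicom is killed with SIGKILL by the runscript, so it exits non-zero;
# '|| true' keeps that status from failing the Jenkins shell step.
minicom -b 115200 -D /dev/ttyACM0 \
-C /home/fsnk/scripts/serial-com/output.txt \
-S /home/fsnk/scripts/serial-com/serial -o || true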

Related

Using expect on stderr (sshuttle as an example)

There is this program called sshuttle that can connect to a server and create a tunnel.
I wish to create a bash function that sequentially:
opens a tunnel to a remote server (sshuttle -r myhost 0/0),
performs 1 arbitrary commandline,
kill -s TERM <pidOfTheAboveTunnel>.
A basic idea (which works, but the 5-second delay is a problem) is something like sshuttle -r myhost 0/0 & sleep 5; mycommand; kill -s TERM $(pgrep sshuttle)
Could expect be used to expect the string "c : Connected to server." that is received from stderr here? My attempts as a newbie were met with nothing but failure, and the man page is quite impressive.
When you use expect to control another program, it connects to that program through a pseudo-terminal (pty), so expect sees the same output from the program as you would on a terminal, in particular there is no distinction between stdout and stderr. Assuming that your mycommand is to be executed on the local machine, you could use something like this as an expect (not bash) script:
#!/usr/bin/expect
# Start sshuttle under expect's control (via a pty).
spawn sshuttle -r myhost 0/0
# Wait for the connection message; stdout and stderr both arrive here.
expect "Connected to server."
# Run the arbitrary command, then tear the tunnel down.
exec mycommand
exec kill [exp_pid]
close
The exec kill may not be needed if sshuttle exits when its stdin is closed, which will happen on the next line.

Bash script: how to give an alert when current program is killed

I'm trying to write a program as a bash script. I'd like it to give an alert when the program is killed.
The desired action is like this:
#!/bin/bash
... # The original program
if killed ; do
echo "trying to kill the demo program ... "
sleep 5s
echo "demo program killed"
fi
If you expect the signal to be delivered only to the running program and not to the shell running your script, then the basic synopsis might be:
#!/bin/bash
set -euo pipefail
sleep 1 & # The original program
pid="$!"
kill -9 "$pid" # Pick your lethal signal
wait -n "$pid" && status=0 || status="$?"
((status > 128)) && echo "${pid} got signal $((status - 128))" 1>&2 || :
In the snippet above we run the program in the background so that we can send it the kill signal from the same script. In practice you would probably run it in the foreground and then check its $? return status instead of the status from wait -n.
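For instance, a foreground variant of the same check might look like this (a sketch; /path/to/program is a placeholder):
#!/bin/bash
/path/to/program   # run the program in the foreground
status=$?
# Exit statuses above 128 conventionally mean "killed by signal (status - 128)".
if ((status > 128)); then
echo "program died from signal $((status - 128))" 1>&2
fi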
If the killing signal is delivered to your entire process group, including the shell running your script, that is a different story. For the signal KILL (9) in particular, there is no way to mask it or report it. When the shell gets it, it dies. For other signals you could set up a trap command (see man bash for its syntax) to handle the signal gracefully in the script while still being able to detect and report the child process’ death from the signal.
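For instance, a minimal sketch of the trap approach (the messages and the sleep stand-in are illustrative):
#!/bin/bash
# KILL (9) cannot be trapped; TERM and INT can.
trap 'echo "trying to kill the demo program ..." 1>&2; exit 143' TERM
trap 'echo "demo program interrupted" 1>&2; exit 130' INT
sleep 60 &  # stand-in for the original program
wait "$!"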

Run / Close Programs over and over again

Is there a way I can write a simple script to run a program, close that program about 5 seconds later, and then repeat?
I just want to be able to run a program that I wrote over and over again, but to do so I'd have to close it about 5 seconds after running it.
Thanks!
If your command is non-interactive (requires no user interaction):
Launch your program in the background with control operator &, which gives you access to its PID (process ID) via $!, by which you can kill the running program instance after sleeping for 5 seconds:
#!/bin/bash
# Start an infinite loop.
# Use ^C to abort.
while :; do
# Launch the program in the background.
/path/to/your/program &
# Wait 5 seconds, then kill the program (if still alive).
sleep 5 && { kill $! && wait $!; } 2>/dev/null
done
If your command is interactive:
More work is needed if your command must run in the foreground to allow user interaction: then it is the command to kill the program after 5 seconds that must run in the background:
#!/bin/bash
# Turn on job control, so we can bring a background job back to the
# foreground with `fg`.
set -m
# Start an infinite loop.
# CAVEAT: The only way to exit this loop is to kill the current shell.
# Setting up an INT (^C) trap doesn't help.
while :; do
# Launch program in background *initially*, so we can reliably
# determine its PID.
# Note: The message about the command line being sent to the background is
# invariably printed to stderr. I don't know how to suppress it (the usual
# tricks involving subshells and group commands do not work).
/path/to/your/program &
pid=$! # Save the PID of the background job.
# Launch the kill-after-5-seconds command in the background.
# Note: A status message is invariably printed to stderr when the
# command is killed. I don't know how to suppress it (the usual tricks
# involving subshells and group commands do not work).
{ (sleep 5 && kill $pid &) } 2>/dev/null
# Bring the program back to the foreground, where you can interact with it.
# Execution blocks until the program terminates - whether by itself or
# by the background kill command.
fg
done
Check out the watch command. It will let you run a program repeatedly, monitoring the output. You might have to get a little fancy if you need to kill that program manually after 5 seconds.
https://linux.die.net/man/1/watch
A simple example:
watch -n 5 foo.sh
To literally answer your question:
Run 10 times with sleep 5:
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do
# your script
sleep 5
let COUNTER=COUNTER+1
done
Run continuously:
#!/bin/bash
while true; do
# your script
sleep 5
done
If the program takes no input, you can simply do:
#!/bin/bash
while true
do
./exec_name
if [ $? -eq 0 ]
then
sleep 5
fi
done

Send command to a background process

I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# Keep at least one process with the fifo opened for writing, so the
# server never receives an EOF.
cat > /tmp/srv-input &
# The PID of this `cat` command is saved in the /tmp/srv-input-cat-pid
# file for a later kill.
#
# To send an EOF to your server, kill the `cat > /tmp/srv-input` process
# whose PID has been saved in the `/tmp/srv-input-cat-pid` file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace the `./hlds_run -console -game czero +port 27015` by your application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat $half_life_folder/$half_life_pid_tail_file_name)"
if ps -p $half_life_pid_tail > /dev/null
then
echo "$half_life_pid_tail is running"
else
echo "Starting the tailing..."
tail -2f $half_life_folder/my_logs.txt &
echo $! > $half_life_folder/$half_life_pid_tail_file_name
fi
echo "$#" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do on the terminal:
./send.sh mp_timelimit 30
This script lets me keep tailing the server output on my current terminal: every time I send a command, it checks whether a tail process is already running in the background. If not, it starts one, and whenever the server produces output I can see it on the terminal I used to send the command, just as for applications run with the & operator.
You could also keep another terminal open just to listen to the server console. To do that, use the tail command with the -f flag to follow the server console output:
tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the Inter Process Communication methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
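As a rough illustration (a sketch assuming socat is installed; the socket path and handler script are made up), the shell side of such a setup could look like this:
# Server side: listen on a Unix socket and hand each connection
# to a handler script (hypothetical path).
socat UNIX-LISTEN:/tmp/prog.sock,fork EXEC:/path/to/handler.sh &
# Client side: send a command line to the running server.
echo "command option1 option2" | socat - UNIX-CONNECT:/tmp/prog.sock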
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
You can use bash's coproc command (available only in bash 4.0+); it's like ksh's |&.
Check this for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
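For example, a minimal sketch using bc as the coprocess (any line-oriented program would do):
#!/bin/bash
# Start bc as a coprocess; bash exposes its stdout/stdin as ${BC[0]}/${BC[1]}.
coproc BC { bc -l; }
# Send a command to the coprocess's stdin...
echo "2 + 3" >&"${BC[1]}"
# ...and read the reply from its stdout.
read -r result <&"${BC[0]}"
echo "result: $result"
# Close the write end so bc sees EOF and exits, then reap it.
eval "exec ${BC[1]}>&-"
wait "$BC_PID"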
You can't send new arguments to a running process.
But if you are implementing this process yourself, or it is a process that can take commands from a pipe, then the other answers here will help.

Kill process in bash that runs more than specified time?

I have a shutdown script for Oracle in the /etc/init.d directory.
On the "stop" command it does:
su oracle -c "lsnrctl stop >/dev/null"
su oracle -c "sqlplus sys/passwd as sysdba #/usr/local/PLATEX/scripts/orastop.sql >/dev/null"
..
The problem is when lsnrctl or sqlplus is unresponsive - in this case the "stop" script just never ends and the server can't shut down. The only way out is to kill -9 it.
I'd like to rewrite the script so that if a command has not finished after 5 minutes (for example), it is terminated.
How I can achieve this? Could you give me an example?
I'm under Linux RHEL 5.1 + bash.
If you can use third-party tools, I'd leverage one of the pre-written helpers you can call from your script (doalarm and timeout are both mentioned by the BashFAQ entry on the subject).
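For reference, on systems that ship coreutils' timeout (RHEL 5.1 likely does not, hence the hand-rolled version below), the five-minute limit is a one-liner:
# Terminate the command if it is still running after 5 minutes (300 s);
# add '-s KILL' to be as forceful as kill -9.
timeout 300 su oracle -c "lsnrctl stop >/dev/null"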
If writing such a thing myself without using such tools, I'd probably do something like the following:
function try_proper_shutdown() {
su oracle -c "lsnrctl stop >/dev/null"
su oracle -c "sqlplus sys/passwd as sysdba #/usr/local/PLATEX/scripts/orastop.sql >/dev/null"
}
function resort_to_harsh_shutdown() {
for progname in ora_this ora_that ; do
killall -9 $progname
done
# also need to do a bunch of cleanup with ipcs/ipcrm here
}
# here's where we start the proper shutdown approach in the background
try_proper_shutdown &
child_pid=$!
# rather than keeping a counter, we check against the actual clock each cycle
# this prevents the script from running too long if it gets delayed somewhere
# other than sleep (or if the sleep commands don't actually sleep only the
# requested time -- they don't guarantee that they will).
end_time=$(( $(date '+%s') + (60 * 5) ))
while (( $(date '+%s') < end_time )); do
if ! kill -0 $child_pid 2>/dev/null; then
exit 0
fi
sleep 1
done
# okay, we timed out; stop the background process that's trying to shut down nicely
# (note that alone, this won't necessarily kill its children, just the subshell we
# forked off) and then make things happen.
kill $child_pid
resort_to_harsh_shutdown
Wow, that's a complex solution. Here's something easier: you can track the PID and kill it later.
my_command &  # my_command is the command you want to run; the & sign backgrounds it
PID=$!  # $! is the PID of the last backgrounded command
sleep 120 && doProperShutdown || kill $PID  # sleep for 120 seconds and shut the process down properly; if that fails, kill it manually (this can be backgrounded too)
