I would like to write (using bash) something like
while no_user_key_pressed
{
do_something....
}
There are a few options using C++, Java, ncurses, and other OS-specific approaches. I want a simple, portable bash function.
A Ctrl-C interrupt should be used to kill the remaining code. Imagine something like: 'Press any key to stop test'
You can use a small timeout with read -t.
The drawback is that the user must press RETURN, not "any key".
For example:
while ! read -t 0.01
do
    echo -en "$(date)\r"
done
echo "User pressed: $REPLY"
Tested on bash 3.2 (OS X)
The ! is because read returns a failure (false) if the timeout expires.
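If you want to react to literally any single key instead of RETURN, read's -n option can be combined with the timeout; a minimal sketch (assuming a bash with -n support, and using a whole-second timeout for wider portability):
while ! read -t 1 -n 1 key
do
    echo -en "$(date)\r"
done
echo
echo "User pressed: $key"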
You can trap Ctrl-c in a way that does not kill the remaining code:
$ cat test.sh
#!/usr/bin/env bash
trap 'break' INT
while true
do
    date
    sleep 1
done
echo done
$ ./test.sh
Tue 28 Jun 12:01:22 UTC 2016
Tue 28 Jun 12:01:23 UTC 2016
Tue 28 Jun 12:01:24 UTC 2016
^Cdone
I have two commands which I am executing over the shell: command1 and command2. command1 takes a long time to complete (~2 minutes), so I can put it in the background using &, but after that I want command2 to execute automatically. Can I do this on the shell command line?
Try:
( command1; command2 ) &
Edit: Larger PoC:
demo.sh:
#!/bin/bash
echo Starting at $(date)
( echo Starting background process at $(date); sleep 5; echo Ending background process at $(date) ) &
echo Last command in script at $(date)
Running demo.sh:
$ ./demo.sh
Starting at Thu Mar 1 09:11:04 MST 2018
Starting background process at Thu Mar 1 09:11:04 MST 2018
Last command in script at Thu Mar 1 09:11:04 MST 2018
$ Ending background process at Thu Mar 1 09:11:09 MST 2018
Note that the script ended after "Last command in script", but the background process did its "Ending background process" echo 5 seconds later. All of the commands in the (...)& structure are run serially, but are all collectively forked to the background.
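If you later need to wait for (or kill) that backgrounded group, you can capture its PID with $!; a small sketch with the same placeholder commands:
( command1; command2 ) &
bg_pid=$!       # PID of the backgrounded subshell
# ... other foreground work here ...
wait "$bg_pid"  # blocks until both commands have finished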
You can do so by putting both commands in a shell script, like below:
command1 &
wait
command2 & #if you want to run second command also in background
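If you go this route and still want your prompt back immediately, you can background the wrapper itself; a sketch assuming the snippet above is saved as run_both.sh (a hypothetical name):
chmod +x run_both.sh
./run_both.sh &   # command1 runs, then command2, without blocking this shell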
I am using /bin/script to capture the output of a command and preserve the colors and formatting. Using a subshell and assigning to a variable does not always work well. E.g.:
foo="$( ls --color 2>&1 )"
I can use /bin/script to capture stdout:
$ script -qc "echo foo" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:40:40 PM PST
foo
I can use /bin/script to capture stderr (ls of non-existent directory goes to stderr):
$ script -qc "ls vxzcvcxvc" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:38:17 PM PST
ls: cannot access vxzcvcxvc: No such file or directory
My problem arises when the script run inside /bin/script mucks with file descriptors.
I am not able to use /bin/script to capture redirected stderr:
$ script -qc "ls vxzcvcxvc 2>&1" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:47:13 PM PST
I have tried other ways as well:
$ script -qc "echo foo1 && >&2 echo foo2 && echo foo3" >& /dev/null && cat typescript
Script started on Mon 06 Nov 2017 10:46:09 PM PST
foo1
foo3
I assume /bin/script is doing its own file descriptor magic (redirecting output to file), so I am left wondering what to do if the script I am calling does its own redirection.
Tangential question: The primary culprit is a logging line that does
printf "${1}" 1>&2
in order to print logging to stderr. Is there a way to output to stderr without mucking with file descriptors (assuming this is the reason /bin/script fails to pick it up)?
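Not an authoritative fix, but two things may be worth trying: giving printf an explicit format string (safer if the message contains % or backslashes), and writing to /dev/tty, which goes to the controlling terminal that script's pseudo-terminal provides; both are untested assumptions against your exact setup:
printf '%s\n' "${1}" >&2         # safer printf: the message is data, not the format string
printf '%s\n' "${1}" > /dev/tty  # assumption: script records writes to its pty, even with fd 2 redirected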
This is my first question (in this forum), so please be patient ... ;-)
To the problem:
I'm trying to run a binary on a Raspberry Pi that crashes randomly every few hours. As the binary usually writes its output to stdout, I'm running it under screen and piping its output to a file. To handle the crashes, I've written a small wrapper script which is called by cron every five minutes. My idea was that, if the output file doesn't change over a certain period, the process is killed and restarted.
Here's my /etc/crontab:
*/5 * * * * pi bash /home/pi/myscript.sh >/dev/null 2>/dev/null
Here's myscript.sh:
#!/bin/bash
# Input file
FILE=/home/pi/output.txt
# How many seconds before file is deemed "older"
OLDTIME=300
# Get current and file times
CURTIME=$(date +%s)
FILETIME=$(stat $FILE -c %Y)
TIMEDIFF=$(expr $CURTIME - $FILETIME)
# Check if file older
if [ $TIMEDIFF -gt $OLDTIME ]; then
    #echo "File is older, do stuff here"
    bash /home/pi/check_myscript_is_running.sh
fi
Here's the script that checks:
#!/bin/bash
case "$(pidof processname | wc -w)" in
0) echo "Restarting process: $(date)" >> ~/output.txt
screen -dm /home/pi/binary -l output.txt &
;;
1) # all ok
;;
*) echo "Removed double process: $(date)" >> ~output.txt
kill $(pidof process | awk '{print $1}')
;;
esac
But obviously the last script doesn't restart the process, and I'm getting mails from cron:
From pi@raspberrypi Fri Jul 01 16:42:25 2016
Return-path: <pi@raspberrypi>
Envelope-to: pi@raspberrypi
Delivery-date: Fri, 01 Jul 2016 16:42:25 +0200
Received: from pi by raspberrypi with local (Exim 4.84_2)
        (envelope-from <pi@raspberrypi>)
        id 1bIzeD-00007c-0A
        for pi@raspberrypi; Fri, 01 Jul 2016 16:42:25 +0200
From: root@raspberrypi (Cron Daemon)
To: pi@raspberrypi
Subject: Cron <pi@raspberrypi> pi /home/pi/startprocess.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/pi>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=pi>
Message-Id: <E1bIzeD-00007c-0A@raspberrypi>
Date: Fri, 01 Jul 2016 16:42:25 +0200
/bin/sh: 1: pi: not found
I don't have a script called startprocess.sh, and I thought that with the output redirection the mails would be suppressed ...
But the main question is: why is the script that should restart the process not running if the output file hasn't changed for five minutes?
Cheers and regards,
JD.
If I understand correctly: in some environments the cron PATH is not set, so calling pi in your crontab leads to the error mail you get.
Try to fully qualify your script call with an absolute path, e.g. like this:
*/5 * * * * /path/to/script/pi /usr/bin/bash /home/pi/myscript.sh >/dev/null 2>/dev/null
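Alternatively, since crontab lines can set environment variables, you can define PATH at the top of /etc/crontab so plain command names resolve; a sketch based on the crontab shown above:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * pi /bin/bash /home/pi/myscript.sh >/dev/null 2>/dev/null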
I wrote a bash script to send an email using telnet. I'm installing it on a TS-7260 running BusyBox (which has an ash shell).
Something is different between Bash and Ash and I can't figure out why the following won't work. It's got to be something with the way I'm piping the echos to telnet. Here's the script:
#!/bin/ash
# Snag all the error messages from a given date, open a telnet connection to an outgoing mail server, stick the logs in an email, and send it.
# Tue Jul 2 14:06:12 EDT 2013
# TMB
# Tue Jul 9 17:12:29 EDT 2013
# Grepping the whole error file for WARNING and then piping it to a grep for the date took about four minutes to complete on the gateway. This will only get longer and the file will only get bigger as time goes by.
# Using tail to get the last 5000 lines, I get about three days of errors (2000 of them are from one day, though)
# Getting 5000 lines, then searching them by WARNING and then DATE took 15 seconds on the gateway.
yesterdayDate=$(./getYesterday)
warningLogs=$(tail -5000 /mnt/sd/blah.txt | grep WARNING | grep "$yesterdayDate")
sleep 30
{
sleep 5
echo "ehlo blah.com"
sleep 5
echo "auth plain blah"
sleep 5
echo "mail from: blah#blah.com"
sleep 5
echo "rcpt to: me#blah.com"
sleep 5
echo "data"
sleep 5
echo "Hi!"
sleep 1
echo "Here are all the warnings and faults from yesterday:"
sleep 1
echo "$yesterdayDate"
sleep 1
echo "NOTE: All times are UTC."
sleep 1
echo ""
sleep 1
echo "$warningLogs"
sleep 10
echo ""
sleep 1
echo "Good luck,"
sleep 1
echo "The Robot"
sleep 5
echo "."
sleep 20
echo "quit"
sleep 5
} | telnet blah.com port
exit
I've also tried using normal parentheses before the pipe. I've read the man page for ash and am still doing something stupid. I suspect it's some kind of child process business going on.
This works fine from bash, btw.
Thanks in advance!
Note -- I simplified the script to be just:
echo "quit" | telnet blah.com port
It does exactly what you'd expect in bash, but I see nothing happen in ash.
Replacing the echo with "sleep 10" shows sleep running as a process, but not telnet.
After some more experimentation, the problem was not with the shell at all, but with the implementation of telnet on BusyBox. On my version of BusyBox (1.00rc2), piping anything to telnet didn't work.
echo blah | telnet -yrDumb
Should have at least made telnet complain about usage. It didn't.
I grabbed the most recent version of inetutils (1.9.1) and compiled its telnet for the TS-7260. It works like a dream (read: it works) now, and is consistent with the behavior I see using telnet and bash on my normal linux box.
Thanks for the help!
I have a long-running bash script that I am running under Cygwin on Windows.
I would like to limit the script to run for 30 seconds, and automatically terminate if it exceeds this limit. Ideally, I'd like to be able to do this to any command.
For example:
sh-3.2$ limittime -t 30 'myscript.sh'
or
sh-3.2$ limittime -t 30 'grep func *.c'
Under Cygwin, the ulimit command doesn't seem to work.
I am open to any ideas.
See the http://www.pixelbeat.org/scripts/timeout script, the functionality of which has been integrated into newer coreutils:
#!/bin/sh
# Execute a command with a timeout
# License: LGPLv2
# Author:
# http://www.pixelbeat.org/
# Notes:
# Note there is a timeout command packaged with coreutils since v7.0
# If the timeout occurs the exit status is 124.
# There is an asynchronous (and buggy) equivalent of this
# script packaged with bash (under /usr/share/doc/ in my distro),
# which I only noticed after writing this.
# I noticed later again that there is a C equivalent of this packaged
# with satan by Wietse Venema, and copied to forensics by Dan Farmer.
# Changes:
# V1.0, Nov 3 2006, Initial release
# V1.1, Nov 20 2007, Brad Greenlee <brad@footle.org>
# Make more portable by using the 'CHLD'
# signal spec rather than 17.
# V1.3, Oct 29 2009, Ján Sáreník <jasan@x31.com>
# Even though this runs under dash,ksh etc.
# it doesn't actually timeout. So enforce bash for now.
# Also change exit on timeout from 128 to 124
# to match coreutils.
# V2.0, Oct 30 2009, Ján Sáreník <jasan@x31.com>
# Rewritten to cover compatibility with other
# Bourne shell implementations (pdksh, dash)
if [ "$#" -lt "2" ]; then
echo "Usage: `basename $0` timeout_in_seconds command" >&2
echo "Example: `basename $0` 2 sleep 3 || echo timeout" >&2
exit 1
fi
cleanup()
{
trap - ALRM #reset handler to default
kill -ALRM $a 2>/dev/null #stop timer subshell if running
kill $! 2>/dev/null && #kill last job
exit 124 #exit with 124 if it was running
}
watchit()
{
trap "cleanup" ALRM
sleep $1& wait
kill -ALRM $$
}
watchit $1& a=$! #start the timeout
shift #first param was timeout for sleep
trap "cleanup" ALRM INT #cleanup after timeout
"$#"& wait $!; RET=$? #start the job wait for it and save its return value
kill -ALRM $a #send ALRM signal to watchit
wait $a #wait for watchit to finish cleanup
exit $RET #return the value
The following script shows how to do this using background tasks. The first section kills a 60-second process after the 10-second limit. The second attempts to kill a process that's already exited. Keep in mind that, if you set your timeout really high, the process IDs may roll over and you'll kill the wrong process, but this is more of a theoretical issue: the timeout would have to be very large and you would have to be starting a lot of processes.
#!/usr/bin/bash
sleep 60 &
pid=$!
sleep 10
kill -9 $pid
sleep 3 &
pid=$!
sleep 10
kill -9 $pid
Here's the output on my Cygwin box:
$ ./limit10
./limit10: line 9: 4492 Killed sleep 60
./limit10: line 11: kill: (4560) - No such process
If you want to wait only until the process has finished, you need to enter a loop and check. This is slightly less accurate since sleep 1 and the other commands will actually take more than one second (but not much more). Use this script to replace the second section above (the "echo $proc" and "date" commands are for debugging; I wouldn't expect to have them in the final solution).
#!/usr/bin/bash
date
sleep 3 &
pid=$!
((lim = 10))
while [[ $lim -gt 0 ]] ; do
    sleep 1
    proc=$(ps -ef | awk -v pid=$pid '$2==pid{print}{}')
    echo $proc
    ((lim = lim - 1))
    if [[ -z "$proc" ]] ; then
        ((lim = -9))
    fi
done
date
if [[ $lim -gt -9 ]] ; then
    kill -9 $pid
fi
date
It basically loops, checking every second whether the process is still running. If not, it exits the loop with a special value so it doesn't try to kill the child. Otherwise it times out and does kill the child.
Here's the output for a sleep 3:
Mon Feb 9 11:10:37 WADT 2009
pax 4268 2476 con 11:10:37 /usr/bin/sleep
pax 4268 2476 con 11:10:37 /usr/bin/sleep
Mon Feb 9 11:10:41 WADT 2009
Mon Feb 9 11:10:41 WADT 2009
and a sleep 60:
Mon Feb 9 11:11:51 WADT 2009
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
pax 4176 2600 con 11:11:51 /usr/bin/sleep
Mon Feb 9 11:12:03 WADT 2009
Mon Feb 9 11:12:03 WADT 2009
./limit10: line 20: 4176 Killed sleep 60
Check out this link. The idea is just that you would run myscript.sh as a subprocess of your script and record its PID, then kill it if it runs too long.
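A minimal sketch of that idea (the script name and the 30-second limit are placeholders):
#!/bin/bash
# run the command in the background and remember its PID
./myscript.sh &
pid=$!

# poll once a second, for up to 30 seconds
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || exit 0   # process already finished
    sleep 1
done

# still running after the limit, so kill it
kill "$pid" 2>/dev/null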
timeout 30s YOUR_COMMAND COMMAND_ARGUMENTS
Below are all the options for "timeout" under coreutils:
$ timeout --help
Usage: timeout [OPTION] DURATION COMMAND [ARG]...
or: timeout [OPTION]
Start COMMAND, and kill it if still running after DURATION.
Mandatory arguments to long options are mandatory for short options too.
--preserve-status
exit with the same status as COMMAND, even when the
command times out
--foreground
when not running timeout directly from a shell prompt,
allow COMMAND to read from the TTY and get TTY signals;
in this mode, children of COMMAND will not be timed out
-k, --kill-after=DURATION
also send a KILL signal if COMMAND is still running
this long after the initial signal was sent
-s, --signal=SIGNAL
specify the signal to be sent on timeout;
SIGNAL may be a name like 'HUP' or a number;
see 'kill -l' for a list of signals
--help display this help and exit
--version output version information and exit
DURATION is a floating point number with an optional suffix:
's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.
If the command times out, and --preserve-status is not set, then exit with
status 124. Otherwise, exit with the status of COMMAND. If no signal
is specified, send the TERM signal upon timeout. The TERM signal kills
any process that does not block or catch that signal. It may be necessary
to use the KILL (9) signal, since this signal cannot be caught, in which
case the exit status is 128+9 rather than 124.
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Full documentation at: <http://www.gnu.org/software/coreutils/timeout>
or available locally via: info '(coreutils) timeout invocation'
You could run the command as a background job (i.e. with "&"), use the bash variable for the PID of the last command run ($!), sleep for the requisite amount of time, then run kill with that PID.
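For example, a rough sketch of that approach (myscript.sh and the 30-second limit are placeholders):
./myscript.sh &           # run the command in the background
pid=$!                    # $! holds the PID of the last background command
sleep 30                  # let it run for up to 30 seconds
kill "$pid" 2>/dev/null   # then terminate it if it is still running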