How to use the timeout command with your own function? - bash

I would like to use the timeout command with a function of my own, e.g.:
#!/bin/bash
function test { sleep 10; echo "done"; }
timeout 5 test
But when calling this script, it seems to do nothing. The shell returns right after I start it.
Is there a way to fix this, or can timeout not be used on shell functions?

One way is to do
timeout 5 bash -c 'sleep 10; echo "done"'
instead. Though you can also hack up something like this:
f() { sleep 10; echo done; }
f & pid=$!
{ sleep 5; kill $pid; } &
wait $pid
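On bash specifically, there is a third option: export the function so the child shell that timeout starts can see it. A minimal sketch (the function name f and the short durations are just for the demo):

```shell
#!/bin/bash
# Sketch: run a shell function under timeout by exporting it (bash-specific).
f() { sleep 3; echo done; }
export -f f               # child bash processes inherit the function
timeout 1 bash -c f       # f would run 3s, but is killed after 1s
echo "exit status: $?"    # 124 indicates the command timed out
```

This avoids moving the body into a separate file or quoting it inline, at the cost of being non-portable to other shells.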

timeout is not a built-in command of bash, which means it can't access shell functions. You will have to move the function body into a separate script file and pass that to timeout as a parameter.

timeout requires a command and can't work on shell functions.
Unfortunately your function above has a name clash with the /usr/bin/test executable, and that's causing some confusion, since /usr/bin/test exits immediately. If you rename your function to (say) t, you'll see:
brian@machine:~$ timeout t
Try `timeout --help' for more information.
which isn't hugely helpful, but serves to illustrate what's going on.

Found this question when trying to achieve this myself, and working from geirha's answer, I got the following to work:
#!/usr/bin/env bash
# "thisfile" contains the full path to this script
thisfile=$(readlink -ne "${BASH_SOURCE[0]}")
# the function to timeout
func1()
{
    echo "this is func1"
    sleep 60
}
### MAIN ###
# only execute 'main' if this file is not being sourced
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # timeout func1 after 2 sec, even though it would sleep for 60 sec
    timeout 2 bash -c "source $thisfile && func1"
fi
Since timeout executes the command it's given in a new shell, the trick is getting that subshell environment to source the script so it inherits the function you want to run. The second trick is making it somewhat readable, which led to the thisfile variable.

Provided you isolate your function in a separate script, you can do it this way:
(sleep 1m && killall myfunction.sh) & # schedule the 1-minute timeout here
myfunction.sh
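Once the function lives in its own script, though, timeout can also supervise it directly, with no scheduled killall. A self-contained sketch (the script is created under /tmp here purely so the example runs; the 1-second limit stands in for the 1 minute above):

```shell
#!/bin/bash
# Stand-in for myfunction.sh, created only to make the example runnable
cat > /tmp/myfunction.sh <<'EOF'
#!/bin/sh
sleep 5
echo "finished"
EOF
chmod +x /tmp/myfunction.sh

timeout 1 /tmp/myfunction.sh
echo "status: $?"   # 124 means timeout killed the script
```

This is also more precise than killall, which matches every process with that name, not just the one you launched.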

Related

How to use functions within crontab

My crontab has several similar calls where a script is called with a flock file, timeout, and output / error logs. I'd like to put this logic into a shared function that I just pass the script path and timeout length to. Is there any way to define a function within the crontab that can be used by all entries?
The best workaround I've found so far is defining my function in my .bashrc file, and wrapping every cron command with bash -ic "..." to make them run in an interactive shell, but this seems overkill, and means my crontab's functionality is linked to my .bashrc file. Is there no better way to use functions within the crontab?
Testing on Ubuntu Server 20.04 LTS
---- Edit ----
Per the comments, here's an example entry in my crontab:
0 */4 * * * IFS=; output=$(flock -n /home/me/my_script.lock timeout 3600 python3 /home/me/my_script.py 2>&1 || if [ $? -eq 124 ]; then echo "`date '+\%s'`: Killed due to timeout"; fi); if [ "$output" ]; then echo $output; fi >> /home/me/logs/my_script.log 2>&1
...and the corresponding test function I put in my .bashrc file
test() {
IFS=;
output=$(flock -n test.lock timeout 300 python3 test.py 2>&1 || if [ $? -eq 124 ]; then echo "`date '+\%s'`: Killed due to timeout"; fi);
if [ "$output" ]
then
echo $output
fi
}
...the function of course being hard-coded just for testing, if used I would add parameters for the script path and timeout length per OP.
Hopefully this better explains the desire to use a function, as the whole IFS, flock, timeout, and 'killed by' pieces are all reused for each line (of which there are multiple dozen). If there's any better solution catered to this need, all suggestions welcome, otherwise the suggestion to just call my function as a separate bash script sounds appropriate.
Put the function in a file and you can do
0 * * * * . /path/to/file; func_name arg1 arg2
Be aware that the default shell for the crontab is /bin/sh. If your function relies on bash features, set SHELL=/bin/bash in the crontab. See man 5 crontab.
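As a sketch of how the crontab entry above could shrink (run_logged and the paths are hypothetical names, and bash stands in for python3 so the example is self-contained; note that outside the crontab, % in the date format no longer needs the backslash escape):

```shell
#!/bin/bash
# funcs.sh -- shared helper sourced by each cron entry (hypothetical name)
run_logged() {
    # $1 = script path, $2 = timeout in seconds
    local script=$1 limit=$2 output
    output=$(flock -n "${script}.lock" timeout "$limit" bash "$script" 2>&1 \
        || { [ $? -eq 124 ] && echo "$(date '+%s'): Killed due to timeout"; })
    [ -n "$output" ] && echo "$output"
}

# Example: a script that overruns its 1-second budget,
# so run_logged prints "<epoch>: Killed due to timeout"
printf 'sleep 3\necho never-reached\n' > /tmp/slow.sh
run_logged /tmp/slow.sh 1
```

With SHELL=/bin/bash set at the top of the crontab (and bash swapped back to python3), each entry then reduces to something like: 0 */4 * * * . /home/me/funcs.sh; run_logged /home/me/my_script.py 3600 >> /home/me/logs/my_script.log 2>&1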

Why is the second bash script not printing its iteration?

I have two bash scripts:
a.sh:
echo "running"
doit=true
if [ $doit = true ];then
./b.sh &
fi
some-long-operation-binary
echo "done"
b.sh:
for i in {0..50}; do
echo "counting";
sleep 1;
done
I get this output:
> ./a.sh
running
counting
Why do I only see the first "counting" from b.sh and then nothing more? (For this example, some-long-operation-binary is just sleep 5.) I first thought that because b.sh was put in the background its STDOUT is lost, but then why do I see the first output? More importantly: is b.sh still running and doing its iteration?
For context:
b.sh is going to poll a service provided by some-long-operation-binary, which is only available after some time the latter has run, and when ready, would write its content to a file.
Apologies if this is just rubbish, it's a bit late...
You should add #!/bin/bash (or similar) to b.sh, since it uses a bash-only expansion ({0..50}); that makes sure bash is actually running the script. Otherwise it is run by /bin/sh, the brace expansion is not performed, and there is (indeed) only one loop iteration, with the loop variable holding the literal string {0..50}.
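You can see the difference directly; the range {0..3} below is just a short stand-in for {0..50} (on systems where sh is dash, as on Debian/Ubuntu, the second loop runs once on the literal string; where sh is bash, both loops expand):

```shell
#!/bin/bash
# bash expands the brace range; plain sh (when it is dash) does not.
bash -c 'for i in {0..3}; do echo "bash: $i"; done'
sh -c 'for i in {0..3}; do echo "sh: $i"; done'
```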
When you start a background process, it is usually a good practice to kill it and wait for it, no matter which way the script exits.
#!/bin/bash
set -e -o pipefail
declare -i show_counter=1
counter() {
local -i i
for ((i = 0;; ++i)); do
echo "counting $((i))"
sleep 1
done
}
echo starting
if ((show_counter)); then
counter &
declare -i counter_pid="${!}"
trap 'kill "${counter_pid}"
wait "${counter_pid}" || :
echo terminating' EXIT
fi
sleep 10 # long-running process

Control operator as optional parameter in bash script

I am trying to run a program optionally in the background. Is there a way to pass a control operator optionally. Something like:
if some_condition
bg=&
fi
myprog $bg
However, as I can see, bash is (rightly) treating $bg as an argument to myprog. I am trying to get myprog running in the background.
Is there a way to do this?
How about:
if some_condition
then
myprog &
else
myprog
fi
I'm in favour of James Brown's answer, however, if you're only looking to specify myprog once, then maybe you can use a function to determine if it should run in the background or not. However, this does mean you'll be writing more lines initially, but you could benefit from this if you're potentially running a lot in the background...
run_in_background=false
run_cmd () {
    echo "Running $*"
    if $run_in_background; then
        "$@" &
    else
        "$@"
    fi
}
if [[ "$1" == '--bg' ]]; then
    run_in_background=true
    shift
fi
run_cmd "$@"
Then you can run it like so:
./script.bash --bg sleep 1
Or if you could run it within the script itself:
... (Continuing from inside the script above)
run_cmd sleep 1
run_cmd echo hello
wait # Waits for background processes to finish
And then you can determine if the commands being passed to run_cmd will be in the foreground or background by either omitting or using the --bg flag.

Close pipe even if subprocesses of first command is still running in background

Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If a background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
cat somewhere
exit
fi
while true; do
echo "something" >> somewhere
sleep 1
done &
echo $! > pidfile
and want to call it like ./test.sh | otherprogram, e. g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming you are not interested in any stderr from the commands in the while loop; adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
cat somewhere
exit
fi
while true; do
echo "something" >> somewhere
sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
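The redirection matters because the pipe reader only sees EOF once every writer has closed its end, and a background job that inherits stdout keeps the pipe open. A small demonstration (timings approximate):

```shell
#!/bin/bash
# A background job inheriting stdout keeps the pipe open until it exits:
t0=$SECONDS
{ sleep 2 & echo hello; } | cat
echo "inherited stdout: cat waited $((SECONDS - t0))s"   # ~2s

# Redirecting the background job's output lets cat see EOF immediately:
t0=$SECONDS
{ sleep 2 >/dev/null 2>&1 & echo hello; } | cat
echo "redirected: cat waited $((SECONDS - t0))s"         # ~0s
```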
If you want to explicitly close a file descriptor, for example 1 (standard output), you can do it with:
exec 1<&-
This is valid for POSIX shells.
When you put the while loop in an explicit subshell, redirect the subshell's output away from the pipe, and run it in the background, it will give the desired behaviour:
(while true; do
    echo "something" >> somewhere
    sleep 1
done) >/dev/null 2>&1 &

Shell Script (bash/ksh): 20 seconds to read a variable

I need to wait for an input for 20 seconds; after that, my script should continue execution.
I've tried using read -t20 var, however this works only in bash. I'm using ksh on Solaris 10.
Can someone help me please?
EDIT: 20 seconds is only an example; let's pretend it needs to wait for 1 hour. The user may or may not be in front of the PC to type the input. He shouldn't need to wait the full hour to enter an input, but if he's not at the PC, the shell should continue execution after waiting for some time.
Thanks!
From man ksh:
TMOUT
If set to a value greater than zero, the shell terminates if a command is not entered within the prescribed number of seconds after issuing the PS1 prompt. The shell can be compiled with a maximum bound for this value which cannot be exceeded.
I'm not sure whether this works with read in ksh on Solaris. It does work with ksh93, but that version also has read -t.
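A runnable sketch of the TMOUT approach, shown here in bash, which like ksh93 treats TMOUT as the default timeout for the read builtin (ksh88 builds on Solaris may behave differently; the fifo is only there to simulate input that never arrives):

```shell
#!/bin/bash
# Demonstrate TMOUT as a default read timeout (bash and ksh93).
fifo=$(mktemp -u) && mkfifo "$fifo"
sleep 2 > "$fifo" &          # hold the write end open but send no data
TMOUT=1
if read -r var < "$fifo"; then
    echo "got: $var"
else
    var="default"            # fall back and continue the script
    echo "timed out, continuing with var=$var"
fi
rm -f "$fifo"
```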
This script includes this approach:
# Start the (potentially blocking) read process in the background
(read -p && print "$REPLY" > "$Tmp") & readpid=$!
# Now start a "watchdog" process that will kill the reader after
# some time:
(
sleep 2; kill $readpid >/dev/null 2>&1 ||
{ sleep 1; kill -1 $readpid >/dev/null 2>&1; } ||
{ sleep 1; kill -9 $readpid; }
) & watchdogpid=$!
# Now wait for the reading process to terminate. It will terminate
# reliably, either because the read terminated, or because the
# "watchdog" process made it terminate.
wait $readpid
# Now stop the watchdog:
kill -9 $watchdogpid >/dev/null 2>&1
REPLY=TERMINATED # Assume the worst
[[ -s $Tmp ]] && read < "$Tmp"
Look at this forum thread; it has the answer in the third post.
