Why does `timeout` not work with pipes? - bash

The following command-line invocation of timeout (which makes no sense by itself; it is just for testing) does not work as expected. It waits 10 seconds instead of stopping the command after 3 seconds. Why?
timeout 3 ls | sleep 10

What your command is doing is running timeout 3 ls and piping its output to sleep 10. The sleep command is therefore not under the control of timeout and will always sleep for 10s.
Something like this would give the desired effect.
timeout 3 bash -c "ls | sleep 10"

The ls command shouldn't take anywhere near 3 seconds to run. What is happening is: (1) timeout is applied to ls, which finishes long before the 3-second limit, so the timeout never fires; then (2) the results are piped into sleep 10, which ignores its standard input and needs no arguments beyond the number you give it. Thus ls happens, timeout doesn't matter, and bash sleeps for 10 seconds.

The only way I know to get the effect you're after is to put the piped commands into a separate file:
cat > script
ls | sleep 10
^D
timeout 3 sh script

It is enough to set the timeout on the last command of the pipeline:
# Exits after 3 seconds with code 124
ls | timeout 3 sleep 10
# Exits after 1 second with code 0
ls | timeout 3 sleep 1
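To confirm those exit codes interactively, a quick sketch using $?, which holds the pipeline's exit status (that of the last command):

ls | timeout 3 sleep 10
echo $?    # 124: sleep was killed after 3 seconds
ls | timeout 3 sleep 1
echo $?    # 0: sleep finished within the limit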

Related

What does `flock -u` actually do?

I'm playing around with the command flock, which obtains and releases locks on files. For example, if I run
flock /tmp/mylock true
then it immediately exits, presumably obtaining and then releasing the lock. If I run
flock /tmp/mylock sleep 100
then it delays 100 seconds, again obtaining and releasing the lock. And, if I run the following in two separate shells:
flock /tmp/mylock sleep 100
and
flock /tmp/mylock true
then the second command is blocked, because it can't obtain the lock while the first command runs. Once the sleep 100 completes and the lock is released, the second command runs and exits. All good.
Here's the problem. If, during that 100 second delay, I run the following in a third shell:
flock -u /tmp/mylock true
then what happens? The man page for flock says:
-u, --unlock
       Drop a lock. This is usually not required, since a lock is
       automatically dropped when the file is closed. However, it
       may be required in special cases, for example if the
       enclosed command group may have forked a background process
       which should not be holding the lock.
So, this should drop the lock, which should allow flock /tmp/mylock true to run, right? (I would also guess that the flock /tmp/mylock sleep 100 would immediately exit, but that's speculation.)
What happens? Nothing. flock -u /tmp/mylock true immediately exits, but flock /tmp/mylock true continues to be blocked, and flock /tmp/mylock sleep 100 continues to run.
What does flock -u /tmp/mylock <command> actually do?
(All examples tested on Ubuntu 18.04.)
In short, flock -u /tmp/mylock true opens /tmp/mylock on a fresh file descriptor and releases any lock held on that descriptor. Since the newly opened descriptor holds no lock, this is effectively a no-op: it cannot release a lock held through another process's descriptor. -u is useful when you already hold the lock on a known descriptor. Here's an example of -u working with file descriptor 9 open on a file mylock, successfully unlocking fd 9 so that a backgrounded flock mylock can proceed.
Note that flock 9 cannot also take a command; in that form the "9" would be parsed as a filename, not a file descriptor.
bash -s <<\! 9>mylock 2>&1 |
flock 9; echo gotlock1
flock 9; echo gotlock2
9>&- flock mylock bash -c 'echo start_sleep;sleep 8; echo end_sleep' &
sleep 2
flock -u 9; echo unlock; sleep .1
flock 9; echo gotlock3
!
awk '{t2=systime(); if(t1==0)t1=t2; printf "%2d %s\n",t2-t1,$0; t1=t2}'
The first line makes bash run the following lines after opening fd 9, but also pipes stdout and stderr through the awk script seen at the end. This is just to annotate the output with the timing of the lines. The result is:
0 gotlock1
0 gotlock2
2 unlock
0 start_sleep
8 end_sleep
0 gotlock3
This shows the first 2 flock 9 commands run immediately. Then a flock mylock command is run in the background, after closing fd 9 just for this line. This command could have been run from a second window, for example. The output shows that it hangs, as we do not see start_sleep. This means that the preceding flock 9 did actually get an exclusive lock.
The output then shows that after sleep 2 and flock -u 9 we get the unlock echo, and only then does the background command get the lock and starts its sleep 8.
The main script immediately does a flock 9, but the output shows that this does not proceed until the background script ends with end_sleep 8 seconds later, and the main script outputs gotlock3.
The lslocks command sometimes shows 2 processes interested in the lock. The * means a wait:
COMMAND PID TYPE SIZE MODE M START END PATH
flock 23671 FLOCK 0B WRITE* 0 0 0 /tmp/mylock
flock 23655 FLOCK 0B WRITE 0 0 0 /tmp/mylock
But it does not show the result of the first flock 9 on its own, presumably because there is no process with the lock, even though the file truly is locked, as we see when the background job cannot proceed.
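As the man page excerpt above suggests, -u is meant for use inside a command group that already holds the lock on a known descriptor. A minimal sketch of that pattern (long_running_job is a hypothetical placeholder):

(
    flock -x 9                # take an exclusive lock on fd 9
    echo "critical section"
    long_running_job &        # background child inherits fd 9
    flock -u 9                # release the lock now, so the child does not keep holding it
) 9>/tmp/mylock

Because flock locks belong to the open file description, which the child shares, unlocking fd 9 in the parent releases the lock for the child as well.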

How to grep from background process?

I have a script that runs an app.
The app produces some output.
I need to grep the output in order to verify it.
How can I accomplish this?
For instance;
script1.sh
#!/bin/sh
app1 &
app2
Output:
app1 -> "App1"
app2 -> "APp2"
You can use nohup to capture the script's output and then grep the nohup output file. Run the script like this:
nohup ./script.sh &
It will create a nohup.out file in current directory which can be used for your purpose.
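For example, a minimal sketch that waits for the background script to finish and then checks its output:

nohup ./script1.sh &
wait                        # wait for the background script to finish
grep "App1" nohup.out       # verify the expected output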
I suggest using a named pipe over a file produced by nohup. The reason is subtle, but important. Let's say that your background application takes 10 seconds to execute and produces a decent amount of data. For the nohup approach to work, you have to wait for the background application to finish before you can process the data in that file, so you lose the main benefit of using a background process in the first place: parallelism. This is true not just for nohup output, but for any regular file.
Here is an example of not waiting for the background process to finish:
$ { for i in {0..100}; do echo $i; sleep 0.1; done } > outfile &
[1] 2069
$ grep 1 outfile
1
As you can see, the grep process immediately processes the file and exits before the background application is finished writing data.
When using a named pipe, the foreground process, grep in this case, will know that it needs to wait for the pipe to close. Notice the difference:
$ mkfifo outpipe
$ { for i in {0..100}; do echo $i; sleep 0.1; done } > outpipe &
[1] 2173
$ grep 1 outpipe
1
10
11
12
13
14
15
16
17
18
19
21
31
41
51
61
71
81
91
100
With a named pipe, we can use the output of the background process just as if it were coming from a pipe.
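Applied to the original script1.sh, a minimal sketch (grep -q exits at the first match, so the background script may receive SIGPIPE on a later write):

mkfifo outpipe
./script1.sh > outpipe &
grep -q "App1" outpipe && echo "App1 verified"
rm outpipe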

Check if timeout command was successful

I am trying to run a command in a bash script and put the output of the command in a variable. But the command must NOT take longer than 2 seconds. I use the command:
timeout -k 2 2 ls /var/log/;
And there is no problem. The command either lists the contents of the log directory or gets killed if it takes more than two seconds. But when I try to put the output in a variable, the command hangs and neither returns nor gets killed! I use it like this:
result=$(timeout -k 2 2 ls /var/log/);
Where is my mistake?
The timeout command exits with status 124 if it had to kill the process (see the timeout man page). So you may try something like:
timeout -k 2 2 ls /var/log/ >directory.txt
if [ $? -eq 124 ]
then
    echo "Timeout exceeded!"
else
    cat directory.txt
fi
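To capture the output in a variable, as in the question, the same check works with command substitution, since $? then reflects timeout's exit status:

result=$(timeout -k 2 2 ls /var/log/)
if [ $? -eq 124 ]
then
    echo "Timeout exceeded!"
else
    printf '%s\n' "$result"
fi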

How can you make sure that exactly n processes are running in bash?

I have a program that processes files in a very disk-heavy way. I want to run this program on many files, and experience shows that performance is best when no more than 3 processes are started at the same time (otherwise they compete too much for disk I/O and slow each other down). Is there an easy way to take commands from a list and start the next one whenever fewer than n (3) of the processes started from the list are running?
You could use xargs. From the manpage:
--max-procs=max-procs
-P max-procs
Run up to max-procs processes at a time; the default is 1. If
max-procs is 0, xargs will run as many processes as possible at
a time. Use the -n option with -P; otherwise chances are that
only one exec will be done.
For example, assuming your commands are one per line:
printf 'sleep %dm\n' 1 2 3 4 5 6 | xargs -L1 -P3 -I {} sh -c {}
Then, in a terminal:
$ pgrep sleep -fa
11987 sleep 1m
11988 sleep 2m
11989 sleep 3m
$ # a little while later
$ pgrep sleep -fa
11988 sleep 2m
11989 sleep 3m
12045 sleep 4m
The -L1 option uses one line at a time as the argument, and -I {} indicates that {} will be replaced with that line. To actually run the command, we pass it to sh as an argument to -c.
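If your commands live in a file instead, one per line (commands.txt is a hypothetical name), the same invocation can read from it:

xargs -L1 -P3 -I {} sh -c {} < commands.txt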

How do I pause my shell script for a second before continuing?

I have only found how to wait for user input. However, I only want to pause so that my while true doesn't crash my computer.
I tried pause(1), but it says -bash: syntax error near unexpected token '1'. How can it be done?
Use the sleep command.
Example:
sleep .5 # Waits 0.5 second.
sleep 5 # Waits 5 seconds.
sleep 5s # Waits 5 seconds.
sleep 5m # Waits 5 minutes.
sleep 5h # Waits 5 hours.
sleep 5d # Waits 5 days.
One can also employ decimals when specifying a time unit; e.g. sleep 1.5s
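Applied to the loop from the question, a minimal sketch (do_work is a hypothetical placeholder):

while true; do
    do_work      # whatever the loop is polling or doing
    sleep 1      # pause so the loop doesn't peg the CPU
done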
And what about:
read -p "Press enter to continue"
In Python (question was originally tagged Python) you need to import the time module
import time
time.sleep(1)
or
from time import sleep
sleep(1)
For a shell script it is just
sleep 1
which executes the sleep command, e.g. /bin/sleep.
Run multiple sleeps and commands
sleep 5 && cd /var/www/html && git pull && sleep 3 && cd ..
This waits 5 seconds before changing directory and pulling, then sleeps another 3 seconds before changing back up a directory.
I realize that I'm a bit late with this, but you can also call sleep by its full path and pass the desired time in. For example, if I wanted to wait for 3 seconds I can do:
/bin/sleep 3
4 seconds would look like this:
/bin/sleep 4
On Mac OS X, sleep does not accept unit suffixes like m or h, only a number of seconds. So for two minutes,
sleep 120
Within the script you can add the following between the actions you would like to pause. This pauses the routine for 5 seconds.
read -p "Pause Time 5 seconds" -t 5
read -p "Continuing in 5 Seconds...." -t 5
echo "Continuing ...."
read -r -p "Wait 5 seconds or press any key to continue immediately" -t 5 -n 1 -s
This continues as soon as you press any single key. For more information, check the read man page.
You can make the wait time random using $RANDOM, bash's built-in random number generator. Below, the wait is up to 240 seconds:
WAIT_FOR_SECONDS=`/usr/bin/expr $RANDOM % 240`
/bin/sleep $WAIT_FOR_SECONDS
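With bash arithmetic expansion, the same thing works without expr:

/bin/sleep $((RANDOM % 240))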
Use trap with bash's DEBUG trap to pause and display each command line (in color, using tput) before it runs:
trap 'tput setaf 1;tput bold;echo $BASH_COMMAND;read;tput init' DEBUG
Press Enter to continue to the next command. This can be combined with set -x when debugging command lines.
