Check if timeout command was successful - bash

I am trying to run a command in a bash script and put its output in a variable, but the command must NOT take longer than 2 seconds. I use the command:
timeout -k 2 2 ls /var/log/;
And there is no problem: the command either lists the contents of the log directory or is killed if it takes more than two seconds. But when I try to put the output in a variable, the command hangs and neither replies nor gets killed! I use it like this:
result=$(timeout -k 2 2 ls /var/log/);
Where is my mistake?

The timeout command will exit with status 124 if it had to kill the process; see here. So you may try something like:
timeout -k 2 2 ls /var/log/ >directory.txt
if [ $? -eq 124 ]
then
    echo "Timeout exceeded!"
else
    cat directory.txt
fi
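If the goal is to end up with the listing in a variable, as in the question, one option is to keep the file-based check and read the file back afterwards. A minimal sketch building on the snippet above:
result=""
timeout -k 2 2 ls /var/log/ >directory.txt
if [ $? -eq 124 ]
then
    echo "Timeout exceeded!"
else
    result=$(cat directory.txt)
    echo "$result"
fi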

Related

change command exit code so it always returns 0

I would like to return exit code "0" from a failed command. Is there any easier way of doing this, rather than:
function a() {
    ls aaaaa 2>&1;
}

if ! $(a); then
    return 0
else
    return 5
fi
Simply append return 0 to the function to force it to always exit successfully.
function a() {
    ls aaaaa 2>&1
    return 0
}
a
echo $? # prints 0
If you wish to do it inline for any reason you can append || true to the command:
ls aaaaa 2>&1 || true
echo $? # prints 0
If you wish to invert the exit status, simply prepend the command with !:
! ls aaaaa 2>&1
echo $? # prints 0
! ls /etc/resolv.conf 2>&1
echo $? # prints 1
Also, if you state what you are trying to achieve overall, we might be able to guide you to better answers.
It may also be helpful to use the timeout command for commands that would otherwise keep running until interrupted (e.g. by SIGINT, a keyboard interrupt), like:
timeout 10 kubectl proxy &
This will run kubectl proxy for 10 seconds (so you can perform the actions you need through the proxy) and then gracefully terminate kubectl proxy. Example:
timeout 3 kubectl proxy &
[1] 759
Starting to serve on 127.0.0.1:8001
echo $?
0
The timeout help text also covers specific cases:
timeout --help
Usage: timeout [OPTION] DURATION COMMAND [ARG]...
  or:  timeout [OPTION]
Start COMMAND, and kill it if still running after DURATION.

Mandatory arguments to long options are mandatory for short options too.
      --preserve-status
                 exit with the same status as COMMAND, even when the
                 command times out
      --foreground
                 when not running timeout directly from a shell prompt,
                 allow COMMAND to read from the TTY and get TTY signals;
                 in this mode, children of COMMAND will not be timed out
  -k, --kill-after=DURATION
                 also send a KILL signal if COMMAND is still running
                 this long after the initial signal was sent
  -s, --signal=SIGNAL
                 specify the signal to be sent on timeout;
                 SIGNAL may be a name like 'HUP' or a number;
                 see 'kill -l' for a list of signals
      --help     display this help and exit
      --version  output version information and exit
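As a small illustration of --preserve-status (a sketch; with GNU coreutils timeout, sleep here is just a stand-in for a long-running command, and 143 is 128 + SIGTERM, the status of a process killed by the default signal):
timeout 1 sleep 10
echo $?    # prints 124, timeout's own "timed out" status
timeout --preserve-status 1 sleep 10
echo $?    # prints 143 (128 + SIGTERM), the status of sleep itself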

Checking for status of qsub jobs running within shell script

I have been given a C shell script that launches 800 individual qsubs for a sample. I need to run this script on more than 500 samples (listed in samples.txt). To automate the process, I thought about running the script (named SrchDriver) using the following bash shell script:
#!/bin/sh
for item in $(cat samples.txt)
do
    (cd dir_"$item"/MAPGAPS && SrchDriver "$item"_Out 3)
done
This script would launch the SrchDriver script for all samples one right after another, which would result in too many jobs on the server at one time. I would like to run only one sample at a time, waiting for all qsubs for a particular sample to finish.
What is the best way to check for running/waiting jobs for a sample and hold off launching the SrchDriver script for additional samples until all jobs for the current sample have finished?
I was thinking of first waiting 30 seconds and then checking the status of the qsubs (the jobs are named mapgaps). Next, I wanted to use a while loop to check the status every 30 seconds; once the status is no longer 0, proceed to the next sample. Would this be correct?
sleep 30
qstat | grep mapgaps &> /dev/null
while [ $? -eq 0 ]
do
    sleep 30
    qstat | grep mapgaps &> /dev/null
done
If so, how would I combine it with my for loop? Would the code below be correct?
#!/bin/sh
for item in $(cat samples.txt)
do
    (cd dir_"$item"/MAPGAPS && SrchDriver "$item"_Out 3)
    sleep 30
    qstat | grep mapgaps &> /dev/null
    status=$?
    while [ $status = 0 ]
    do
        sleep 30
        qstat | grep mapgaps &> /dev/null
        status=$?
    done
done
Thanks in advance for help. Please let me know if more information is needed.
Your script should work as is, indeed. The logic is sound and the syntax is correct.
A small improvement: the while statement can take the return status of a command directly, without using $?, so you could write your script like this:
#!/bin/sh
for item in $(cat samples.txt)
do
    (cd dir_"$item"/MAPGAPS && SrchDriver "$item"_Out 3)
    sleep 30
    while qstat | grep mapgaps &> /dev/null
    do
        sleep 30
    done
done
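Two small hedged notes on portability: &> is a bash extension (under a strict /bin/sh you would write > /dev/null 2>&1), and $(cat samples.txt) splits on any whitespace, so sample names containing spaces would break. A while read variant of the same logic avoids both (a sketch; the mapgaps job name and directory layout are taken from the question):
#!/bin/sh
while IFS= read -r item
do
    (cd "dir_${item}/MAPGAPS" && SrchDriver "${item}_Out" 3)
    sleep 30
    # keep polling until no mapgaps jobs show up in qstat
    while qstat | grep mapgaps > /dev/null 2>&1
    do
        sleep 30
    done
done < samples.txt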

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I am using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
sleep 5
kill -TERM $PID
cat output.txt
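An alternative to the manual background/kill dance is to let timeout handle the 5-second limit (a sketch, still assuming the output goes to STDERR as discussed above; timeout sends SIGTERM by default):
timeout 5 airodump-ng mon0 2> output.txt
cat output.txt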
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a CSV file whose name is prefixed by [OUTPUT-PREFIX]. The file will be updated every 30 seconds. If you give a prefix like /var/log/test, the file will go in /var/log/ and will be named something like test-XX.csv.
You should then be able to access the output file(s) by any other tool while airodump is running.
As of airodump-ng 1.2 rc4, you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. So the following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better if you want to parse the output with other programs/applications.

Not able to force exit on Jenkins Build

I've been having a lot of trouble with this so here goes.
I have a Jenkins build that executes the following shell script:
#!/bin/sh -x
if [ 'grep -c "It misses" log' -gt 0 ];
then exit 1;
fi
I know that the script returns 1 when grep finds something, and technically Jenkins should mark the build as failed on a non-zero exit, but Jenkins still marks it as a success.
The console output for the jenkins build when running the script is:
Started by user bla
[project_name] $ /bin/sh -x /var/tmp/hudson41276.sh
+ [ grep -c "It misses" log -gt 0 ]
Finished: SUCCESS
Could anybody give me a hand and point out what I'm missing here?
Thanks,
CJ
If I understand right, you want the job to fail if "It misses" is not found in file "log". You can do this by not using the -c option of grep and just redirecting the output, like this:
grep "It misses" log > /dev/null
Grep will return 0 if it finds the phrase, and the job will succeed. If it does not find the phrase, grep will return 1, and the job will fail. If you want it the other way around (fail if it does find the phrase), invert the status with ! grep "It misses" log (note that grep -v only inverts which lines match, not the exit status). $? is your friend when you want to be sure of the exit status of a shell command.
Try this:
#!/bin/sh
set -e
grep -c "It misses" log
set -e: exit at the first error.
grep -c "It misses" log: exits with status 1 if nothing matched.
The problem is with your script, not Jenkins. The part of your script where you attempt to compare the result of grep looks like this:
if [ 'grep -c "It misses" log' -gt 0 ] ...
This will not even execute grep. In fact, it is simply comparing a string to a number.
You were probably attempting to do:
if [ `grep -c "It misses" log` -gt 0 ] ...
Note the use of backticks (`). Shell will execute grep and replace the backticks with the output of grep.
Bonus item: the condition in the if statement is actually a command that gets executed, and its exit code determines where the execution will continue. So... why not use the grep command and its useful exit code as the condition? (grep will exit with code 0 when it finds matches.)
if grep "It misses" log; then
    exit 1
fi
It's shorter, much more readable, and even performs better because it does not need to execute so many commands.
Such a short if statement could even be replaced with a one-liner:
grep "It misses" log && exit 1
By default, Jenkins starts the shell with -e, so it exits at the first error.
You can turn that off with:
set +e
do failing task..
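If you go that route, you then need to check the status yourself, because non-zero exits no longer abort the script. A minimal sketch (the log file name comes from the question above):
set +e
grep -q "It misses" log
status=$?
set -e
if [ "$status" -eq 0 ]; then
    # the phrase was found, so fail the build explicitly
    exit 1
fi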

Why does `timeout` not work with pipes?

The following command-line call of timeout (which makes no sense, it is just for testing) does not work as expected. It waits 10 seconds and does not stop the command after 3 seconds. Why?
timeout 3 ls | sleep 10
What your command is doing is running timeout 3 ls and piping its output to sleep 10. The sleep command is therefore not under the control of timeout and will always sleep for 10s.
Something like this would give the desired effect.
timeout 3 bash -c "ls | sleep 10"
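You can confirm that the timeout actually fired by checking the exit status afterwards; 124 is timeout's documented "command timed out" code (a small usage sketch):
timeout 3 bash -c "ls | sleep 10"
echo $?    # prints 124 if the pipeline was killed after 3 seconds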
The ls command shouldn't take 3 seconds to run. What I think is happening is that you are saying (1) time out ls after 3 seconds (which never triggers, since ls takes nowhere near 3 seconds to run), then (2) pipe the result into sleep 10, which ignores its input and needs no argument other than the number you are giving it. Thus ls runs, the timeout doesn't matter, and the shell sleeps for 10 seconds.
The only way I know to get the effect you're after is to put the piped commands into a separate file:
cat > script
ls | sleep 10
^D
timeout 3 sh script
It is enough to set the timeout on the last command of the pipeline:
# Exits after 3 seconds with code 124
ls | timeout 3 sleep 10
# Exits after 1 second with code 0
ls | timeout 3 sleep 1
