Timeout counter continues, even if flow is paused in Jenkins - jenkins-pipeline

I set a timeout period for a specific stage, as shown below:
stage('Stage 1') {
    options { timeout(time: 3, unit: 'MINUTES') }
    steps {
        echo "Stage 1"
        sleep 20
    }
}
When I pause the Jenkins build during the execution of "Stage 1", it pauses the execution, but the timeout counter keeps running. So once the timeout is exceeded while in the paused state,
it shows me the ERROR below after 3 minutes, and when I resume the build, the build status is ABORTED.
Pausing
Cancelling nested steps due to timeout
Body did not finish within grace period; terminating with extreme prejudice
Question:
When I pause the Jenkins build, I want it to pause the timeout counter as well as the execution until I resume the build.
Can you help me solve this problem?
Thank you in advance

Related

Resource utilisation of sleep

The problem I want to tackle is as follows. I have a long-running (1 to 2 hours) task that has to be run every day, so the go-to option was cron. But the catch is that I have to leave a 24-hour gap between successive runs, so using cron would involve rewriting the cron job file after every run. This might become clear with this example:
The long-running job 'LR' starts at 6 PM on Monday and finishes at 7:30 PM the same day.
On Tuesday it is supposed to start at 7:30 PM, not 6 PM (like it did on Monday), because there has to be a 24-hour gap between successive runs.
The obvious option here was to have a process running an infinite loop: start the LR job, then sleep for 24 hours and continue with the loop. This works perfectly too. In my setup there is a bash script running this loop:
while [ 1 == 1 ]; do
    /bin/jobs/long_run.py
    /bin/jobs/cleanup.sh
    sleep 86400
done
So my question is: what is the total amount of CPU resource spent, and what is the RAM usage?
Not sure if this affects the answer in any way: I'm running this on Termux on an Android phone.
Also, please recommend other lightweight options.
There is nothing to worry about regarding resources: while a script executes sleep, it really sleeps. What you should worry about is anything happening between two executions, like a restart, downtime, etc. This structure:
while true; do
    sh script.sh
    sleep 86400
done
does not resume, and you don't save the time of the next execution anywhere. Similar to this structure is to have a wrapper; suppose f() is your job:
f() {
    echo working
}

wrapper() {
    f               # run the job
    echo sleeping
    sleep 86400     # wait 24 hours
    wrapper         # start the next cycle (nests one call level per day)
}

wrapper
So now you call the wrapper, which works, sleeps, and calls itself. You could use just this if you are OK with what could go wrong; at the very least, print the datetime somewhere.
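For instance, a variant of the wrapper above that records each run's timestamp (a minimal sketch; the log file path is just an example):
wrapper() {
    f
    # record when this run finished, so the schedule can be
    # reconstructed after a restart (log path is an assumption)
    date '+%F %T' >> "$HOME/lr_last_run.log"
    echo sleeping
    sleep 86400
    wrapper
}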
You can replace the internal sleep and wrapper call with job scheduling via cron or at. at is probably not a standard package for all distributions (at least not for mine), while cron is; you could install it. With at, the wrapper would be like this:
wrapper() {
    f
    # at reads the command to run from stdin and executes it in a fresh
    # shell, so it cannot call a shell function directly; re-schedule
    # this script instead
    echo "$0" | at now + 1 day
}
With cron, you could edit the crontab directly, but it is better to use a crontab file. What you have to do is parse the output of the date command, create the date prefix, and update the crontab.
Note: there may be other cron jobs for the user, existing or added afterwards; this is considered in the last link.
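A minimal sketch of that crontab update (assuming GNU date, and a hypothetical marker comment # LR-JOB to identify the managed line; $HOME/run_lr.sh stands in for this script's own path):
#!/bin/bash
# Run the job, then re-schedule this script for 24 hours after the
# moment the job finished.
/bin/jobs/long_run.py
/bin/jobs/cleanup.sh

# minute hour day-of-month month, 24 hours from now
spec=$(date -d '+1 day' '+%M %H %d %m')

# Replace only the managed line, keeping any other cron jobs intact.
{ crontab -l 2>/dev/null | grep -v '# LR-JOB'
  echo "$spec * $HOME/run_lr.sh # LR-JOB"; } | crontab -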

Obtain the exit code for a known process id

I have a list of processes triggered one after the other, in parallel. And, I need to know the exit code of all of these processes when they complete execution, without waiting for all of the processes to finish.
While status=$?; echo $status would provide the exit code for the last command executed, how do I know the exit code of any completed process, knowing the process id?
You can do that with GNU Parallel like this:
parallel --halt=now,done=1 ::: ./job1 ./job2 ./job3
The --halt=now,done=1 means: halt immediately, as soon as any one job is done, killing all outstanding jobs and exiting with the exit status of the completed job.
There are options to halt on success or on failure as well as on completion, and the number of successful, failing, or completed jobs can be given as a percentage too. See the GNU Parallel documentation on --halt for details.
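For example (the --halt values follow GNU Parallel's documented syntax; ./job1 etc. are placeholders):
# Kill everything as soon as one job fails, and exit with its status:
parallel --halt now,fail=1 ::: ./job1 ./job2 ./job3

# Stop starting new jobs once half of them have succeeded:
parallel --halt soon,success=50% ::: ./job1 ./job2 ./job3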
Save the background job id using a wrapper shell function. After that the exit status of each job can be queried:
#!/bin/bash
jobs=()

function run_child() {
    "$@" &          # run the given command in the background
    jobs+=($!)      # remember its process id
}

run_child sleep 1
run_child sleep 2
run_child false

for job in "${jobs[@]}"; do
    wait "$job"     # wait returns the exit status of that pid
    echo Exit Code $?
done
Output:
Exit Code 0
Exit Code 0
Exit Code 1

How to fail the primary process if one of the background processes fails in bash?

So basically I need to:
run stubs, server, and wait simultaneously
when wait completes, run tests
if stubs or server fails, the main process with wait and tests should fail too (because otherwise it hangs in there until the 1-hour timeout)
The rest of the gibberish is my original question:
I'm running e2e tests in GitLab CI. The simplified script is this:
stubs & server & wait-on && test
wait-on checks when the server starts responding on a specific URL, and after that the tests start running. The question is: how do I fail the whole CI job if stubs or server fails?
In a sunny-day scenario they run in the background till the tests finish running and the container gets killed, but how do I kill the container in a rainy-day scenario, when at least one of them can't compile?
Update: it seems I didn't make myself clear, I am sorry. There are 2 scenarios:
sunny: stubs and server are compiled and run in the background forever (till the container gets killed after test is completed) - this works as expected
rainy: stubs or server couldn't compile. In this case wait-on will wait forever till the container is killed by the timeout (1 hour in my case) - this is what I want to improve: I don't want to wait for an hour, but to finish everything with an error as soon as stubs or server has failed
Thanks for helping, I'm really bad with bash, sorry
Just check the exit status of the background processes.
stubs &
stubspid=$!
server &
serverpid=$!    # $! holds the pid of the last background job

if wait-on && wait "$stubspid" && wait "$serverpid"; then
    echo "SUCCESS"
else
    echo "FAILURE"
fi
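Note that wait-on itself can still block forever when the server never comes up. A fail-fast sketch, assuming bash >= 4.3 for wait -n (stubs, server, wait-on, and test are the commands from the question):
#!/bin/bash
stubs &
server &
wait-on &

# wait -n blocks until the FIRST background job exits and returns its
# status. Sunny day: wait-on succeeds first. Rainy day: stubs or
# server dies first with a non-zero status.
if wait -n; then
    test
else
    echo "a background process failed before the server came up" >&2
    kill $(jobs -p) 2>/dev/null
    exit 1
fi
This assumes stubs and server only ever exit on failure; if one of them could exit 0 early, wait -n would mistake that for readiness.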
You can kill the main test process if stubs or server fails, using pkill:
{ stubs || pkill -f test; } & { server || pkill -f test; } & wait-on && test
thanks to chepner

Repeat unterminated command every x interval of time

I'm trying to re-run a command at a fixed interval of time. I found these solutions:
watch -n60 command
while true; do command; sleep 60; done
They work well if the command terminates (for example: echo "message").
The code which I'm running doesn't terminate, which is why those solutions don't work for me. I want to run it, terminate it after 60 seconds, and run it again. How can I do that?
Use the timeout command:
while true; do timeout 60 command; done
Note that if the command exits before the 60 seconds are up, it will re-execute immediately rather than waiting for the minute to be up.
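If you want each start to be exactly 60 seconds apart even when the command exits early, a sketch combining both ideas:
while true; do
    timeout 60 command &   # kill the command after at most 60 seconds
    sleep 60               # the cadence is set by sleep, not by the command
    wait                   # reap the background job before looping
done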
This starts the command every 60 seconds and kills the process if it has not terminated by then (note that & itself separates commands, so &; would be a syntax error):
while true; do command & LAST_PID=$!; sleep 60; kill -9 "$LAST_PID"; done

Windows 10 scheduled task not running

I have a very simple bat file which does a MySQL dump.
When I manually execute the bat file, it works. When I click "execute" on my scheduled task, it works too. But the scheduled task itself doesn't run when it should (when the timer expires).
I've set up the scheduled task to run every 5 minutes. In the scheduled task manager I can see "next time to execute": 16:55. But when it is 16:55, the text updates to "next time to execute": 17:00, while nothing changes under "previous time executed"; it still shows the time I manually executed the task.
So the weird thing is: when I click "execute" for the task, it runs. But when the time expires and it should run by itself, nothing happens.
I've enabled the history for the task, but even there nothing shows up.
Can you help?
