Running a shell script from Jenkins

When I execute the job from a terminal it runs for one hour without any issue, but when I execute the shell script from Jenkins it runs for just one minute and then stops. The Jenkins console output is as follows:
Creating folder path in /jenkins/workspace/load_test/scripts/loadtest/loadtest1
PWD is : /jenkins/workspace/load_test/scripts/loadtest
Running /jenkins/workspace/load_test/scripts/loadtest/loadtest1/testRestApi.sh
1495126268
1495129868
3600
Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
Finished: SUCCESS
Any ideas/suggestions on how to make the script run for one hour from the Jenkins job?

Have you tried with BUILD_ID=dontKillMe? It is commonly used for daemons: https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller. This should let your script keep running.
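A minimal sketch of what that might look like in the job's "Execute shell" build step, assuming the test script should survive the ProcessTreeKiller's post-build cleanup (the log file name is illustrative):
# Tell the Jenkins ProcessTreeKiller not to reap this process tree
export BUILD_ID=dontKillMe
nohup /jenkins/workspace/load_test/scripts/loadtest/loadtest1/testRestApi.sh > loadtest.log 2>&1 &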

Related

LSF - BSUB Running a script if the job is killed

I'm working with LSF, running bsub commands.
I'm implementing the -Ep switch to run a post-exec script. This works great until the job is killed or hits a memory limit, run limit, etc.
Is there any way for the job to detect that it's running out of resources and then run the script? Or to force it to run the script even if it's been killed?
I guess my other option is running a second job with a dependency on the first, which will run the "post exec" script when it finishes.
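Something like this, where the job ID and script paths are hypothetical:
# Main job with a run limit and a post-exec attached via -Ep
bsub -W 60 -Ep "/path/to/post_exec.sh" /path/to/job.sh
# Fallback: a second job that runs once job 12345 ends,
# whether it finished normally, was killed, or hit a limit
bsub -w "ended(12345)" /path/to/post_exec.sh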
Any thoughts?
Kind Regards,
TheBigPeeler
From the documentation, you should be seeing the behaviour that you want:
A post-execution command runs after the job finishes, regardless of the exit state of the job. Once a post-execution command is associated with a job, that command runs even if the job fails. You cannot configure the post-execution command to run only under certain conditions.
I thought that maybe the interaction with JOB_INCLUDE_POSTEXEC (lsb.params) could account for the difference, but from my test the post-exec still runs in both cases. I used runlimit (bsub -W) to trigger the job kill.
Is it possible that the post-exec is running but exits early?
What version of LSF are you using? (What's the output of mbatchd -V and sbatchd -V?)

Running a shell script with Windows Task Scheduler

I currently have a simple shell script that I created to be run on a Linux machine using cron, but now I want to run it using the Windows Task Scheduler. I tried to get it to work using cron under Cygwin, but even after running cron-config successfully and ensuring that the shell script itself executes successfully, for some reason the cron task simply wasn't executing. So I decided to give in and use the Windows Task Scheduler. In order to do this, I looked at the following posts about the issue:
Cygwin .sh file run as Windows Task Scheduler
http://www.davidjnice.com/cygwin_scheduled_tasks.html
In my case, the entry in the "Actions" tab of the new task looks like this:
program/script: c:\cygwin64\bin\bash.exe
arguments: -l -c "/cygdrive/c/users/paul/bitcoinbot/download_all_data.sh >> cygdrive/c/users/paul/bitcoinbot/logfile.log 2>&1"
start in: c:\cygwin64\bin
Notice that I redirected the output of the shell script to a log file, so that I should be able to see there whether the program ran. Other than that, I simply edited the "Triggers" tab to run the task daily, and set the time to a couple of minutes in the future to see whether it ran successfully.
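As a sanity check, the same invocation can be tested by hand from a cmd.exe prompt; note that the log path is given with a leading slash here so that it resolves to an absolute Cygwin path:
c:\cygwin64\bin\bash.exe -l -c "/cygdrive/c/users/paul/bitcoinbot/download_all_data.sh >> /cygdrive/c/users/paul/bitcoinbot/logfile.log 2>&1"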
Alas, when I look at the detailed event history for the task, nothing changes when the trigger time passes. And when I manually run the task, the event history adds a few different events, but the task completes within seconds, whereas it should take over an hour (and it does when the shell script is executed directly from the terminal). And when I look for the log file that should have been created, there is nothing.
Does anyone have any idea what might be the issue here? How can I get my task to run properly at the trigger time, and how can I make sure it does so?
Best,
Paul
EDIT:
Here are the pictures showing the event history, as per Ken White's request.
Please ignore the fact that it says there are 24 events. These are from multiple separate runs of the task. The events shown here are a complete list of the events triggered by a single run.
EDIT 2:
Regarding my attempts to get cron to work, I have run into the following problem when I try to start the cron service using cygrunsrv. First of all, I installed the cron service by typing
cygrunsrv -I cron -p /usr/sbin/cron.exe -a -D
Now when I type
$ cygrunsrv -Q cron
Service: cron
Current State: stopped
Command: /usr/bin/cron.exe
Now, I tried to start the cron service by typing
cygrunsrv -S cron
Cygrunsrv: Error starting a service: QueryServiceStatus: Win32 error 1062:
The service has not been started.
Does anyone have any idea what this error means? I tried googling it, but couldn't find any answers.

Why does scheduling Spark jobs through cron fail (while the same command works when executed in a terminal)?

I am trying to schedule a Spark job using cron.
I have made a shell script and it executes well in a terminal.
However, when I execute the script using cron, it gives me an "insufficient memory to start JVM thread" error.
Every time I start the script from a terminal there is no issue. The issue only appears when the script starts with cron.
Any suggestions would be appreciated.
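One pattern worth trying, since cron provides a much smaller environment than an interactive shell (see the PATH discussion in the next question), is a wrapper script that sets the environment explicitly before calling spark-submit. All paths and memory sizes below are assumptions:
#!/bin/bash
# Hypothetical install locations; adjust to the actual machine
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export SPARK_HOME=/opt/spark
export PATH="$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH"
# Cap driver memory explicitly so the JVM's allocation is predictable under cron
spark-submit --master local[2] --driver-memory 2g /opt/jobs/my_job.py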

Simple script run via cronjob doesn't work but works from shell

I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script for execution, but even though (according to my host's support) the cron job runs, the script doesn't seem to do anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh is:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
when via SSH I execute:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. From the cron job it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and the mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node, and node cannot be found. How can I fix this?
EDIT2:
The accepted answer is at superuser.com; thank you all for the help:
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For cron job lines in a crontab it's not required to specify the kind of shell (or, e.g., of perl). It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause your script to behave differently between an interactive start and a start by the cron daemon is a different environment, first of all the PATH variable. Therefore check whether your script can be executed in the very restricted environment that the cron daemon provides. You can determine your cron job's environment experimentally by starting a temporary cron job that executes the env command and writes its output to a file.
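For example (the output path is just an illustration):
* * * * * env > /tmp/cron_env.log 2>&1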
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its content to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH, you are using bash, because this line is the first of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
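Putting both suggestions together, a sketch of a corrected crontab entry might look like this (the PATH value and the schedule are assumptions; PATH just needs to include the directory that contains the node binary):
# PATH here is an assumption: include wherever node is installed
PATH=/usr/local/bin:/usr/bin:/bin
# /bin/sh dropped, so the script's own #!/bin/bash shebang applies
*/15 * * * * /home1/myusername/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1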

How to kill a Hudson job from a bash script when the log file doesn't change?

How can I kill a Hudson job from a bash script when the log file doesn't change (i.e., Hudson is frozen)?
Context: I have a bash script that checks whether a log file has changed after X seconds. I want to modify it so that if the timeout is reached and there's no error in the console, that means the Hudson job is frozen, and I want to be notified about it.
It might be easier to use the Build Timeout plugin.
Finally the solution was to use the following command:
#!/bin/bash
# Placeholders: the log path and the timeout depend on the actual job
LOG=/path/to/build.log
TIMEOUT=300
# If the log has not been modified for $TIMEOUT seconds, assume the
# Hudson job is frozen and kill the Eclipse process it spawned
if [ $(( $(date +%s) - $(stat -c %Y "$LOG") )) -gt "$TIMEOUT" ]; then
    kill -9 "$(pidof eclipse)"
fi
This kills the Eclipse instance (which Hudson invokes) and continues with the build of the other elements, which is OK for my task.
