Is there any? I could only find how to run tasks in the Windows Task Scheduler; I found no utility to run a task as a job, i.e. via CreateJobObject() / AssignProcessToJobObject().
I need my application killed if it consumes more than 1.5 GB of RAM, and a job object would be perfect for that...
I'm setting up a job that runs at midnight, while we're not at work, so that our job is done by the next morning. Unfortunately, the job is not working.
Use one of the included scripts to run jobs from the system scheduler.
Windows: use kitchen.bat and run it from the Task Scheduler
Linux: use kitchen.sh from crond
Here is the syntax to use:
https://help.pentaho.com/Documentation/8.0/Products/Data_Integration/Command_Line_Tools
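For example, a crontab entry along these lines would launch a job every night at midnight (the installation path, job file, and log location below are assumptions to adapt to your setup):

# Sketch of a crontab entry that runs a Kettle job at midnight:
0 0 * * * /opt/pentaho/data-integration/kitchen.sh -file=/opt/etl/jobs/nightly.kjb -level=Basic >> /var/log/kitchen-nightly.log 2>&1

Kitchen returns a non-zero exit code on failure, so the scheduler can detect whether the run succeeded.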
I have to run multiple Spark jobs one by one in sequence, so I am writing a shell script. One way is to check for a success file in the output folder to determine the job status, but I want to know whether there is any other way to check the status of a spark-submit job from the Unix script that runs my jobs.
You can use the command
yarn application -status <APPLICATION ID>
where <APPLICATION ID> is your application ID, and check for a line like:
State : RUNNING
This will give you the status of your application.
To list the applications run via YARN, you can use the command
yarn application -list
You can also add -appTypes to limit the listing based on the application type.
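For instance, a small polling loop along these lines will block until the application reaches a terminal state; the application ID and polling interval below are assumptions:

# Sketch: wait for a YARN application to finish, then print its final status.
APP_ID="application_1502749797010_0001"  # assumed application ID
while true; do
  STATE=$(yarn application -status "$APP_ID" | awk -F' : ' '/^[[:space:]]*State/ {print $2}')
  echo "Current state: $STATE"
  case "$STATE" in
    FINISHED|FAILED|KILLED) break ;;
  esac
  sleep 30
done
yarn application -status "$APP_ID" | awk -F' : ' '/^[[:space:]]*Final-State/ {print $2}'

Once the state is FINISHED, the Final-State field (SUCCEEDED or FAILED) tells you whether the job actually succeeded.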
I have been working with Marathon, Mesos, and Docker quite successfully, but I recently discovered a problem: when a mesos-slave encounters an exception, the state of the task on Marathon changes to TASK_LOST, and the task cannot be killed until about 15 minutes have passed.
I did a test by manually rebooting the operating system that runs the mesos-slave service, Docker, and the task; the task state shown in the Marathon UI then became "Unscheduled (100%)", and the task could not be killed either automatically or manually until about 15 minutes had passed.
My question is: how can I reduce this time?
I tried adding the following Marathon startup command-line arguments:
task_launch_confirm_timeout=30000
scale_apps_interval = 30000
task_lost_expunge_initial_delay = 30000
task_launch_timeout = 30000
and adding this mesos-slave startup command-line argument:
recovery_timeout=1mins
but it doesn't work for me.
To change the time after which executors commit suicide when the Mesos agent process fails, you should configure --recovery_timeout:
Amount of time allotted for the agent to recover. If the agent takes longer than recovery_timeout to recover, any executors that are waiting to reconnect to the agent will self-terminate. (default: 15mins)
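A minimal sketch of an agent startup with a shorter timeout, assuming a five-minute limit is acceptable (the master URL and work directory below are placeholders):

# Start the Mesos agent so waiting executors self-terminate after 5 minutes
# instead of the 15-minute default:
mesos-slave --master=zk://master.example.com:2181/mesos \
            --work_dir=/var/lib/mesos \
            --recovery_timeout=5mins

Note that this flag belongs to the agent (mesos-slave), not to Marathon, and it only takes effect when the agent is restarted with it.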
I was executing a few MapReduce programs on the Hadoop cluster. The programs executed successfully and gave the required output.
Using the jps command, I noticed that RunJar was still running as a process. I stopped my cluster, but the process was still up.
I know that hadoop jar invokes the base RunJar class to execute the jar, but is it normal that the process stays up even after job completion?
If yes, then multiple RunJar instances will keep running. How can I make sure that RunJar also stops after job completion? (I don't wish to kill the process.)
The RunJar process is normally the result of someone or something running "hadoop jar".
You can kill the process with:
kill 13082
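If several jobs have left RunJar processes behind, you can find and kill them all in one go; double-check the jps output first so you don't kill a job that is still running:

# List lingering RunJar processes, then kill them by PID:
jps | grep RunJar
jps | grep RunJar | awk '{print $1}' | xargs -r kill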
Scenarios
All batch applications (Spring Batch based) have to be deployed to JBoss EAP.
All batch jobs have to be launched & monitored by using the existing enterprise workload/scheduling system, e.g. ASG-Zena via shell scripts.
All batch jobs will have HTTP endpoints to start a job, get the state of a job, and stop a job. The shell scripts will use these endpoints to control the batch jobs.
All batch jobs will be launched asynchronously.
The shell script will return an exit code to indicate the execution result of the batch job, so the enterprise scheduler system can track the success or failure of the batch jobs.
[Enterprise Workload/Scheduling][Shell Scripts] <--> [HTTP][[Batch Applications] JBoss EAP]
Questions
As the batch jobs are launched asynchronously via an HTTP endpoint, how can the shell script get the execution result of the batch job?
Your shell script will need to poll for the result: it kicks off the job, then repeatedly checks the job's state until it reaches a terminal status, as in the sketch below.
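Here is a minimal sketch, assuming hypothetical endpoint paths and Spring Batch's standard terminal statuses (COMPLETED, FAILED, STOPPED, ABANDONED); the base URL, job name, and response format are assumptions to adapt to your actual API:

#!/bin/sh
# Kick off the job asynchronously; assume the start endpoint returns an execution id.
BASE_URL="http://batch-host:8080/batch-app"
EXEC_ID=$(curl -s -X POST "$BASE_URL/jobs/nightlyJob/start")

# Poll the status endpoint until the job reaches a terminal state, then exit
# with a code the enterprise scheduler can interpret.
while true; do
  STATUS=$(curl -s "$BASE_URL/jobs/executions/$EXEC_ID/status")
  case "$STATUS" in
    COMPLETED) exit 0 ;;                 # success
    FAILED|STOPPED|ABANDONED) exit 1 ;;  # failure
  esac
  sleep 30
done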