How to update nomad job from one namespace to another? - nomad

I would like to change the namespace of a job in my Nomad cluster from namespaceA to namespaceB.
When I change the namespace of the job in the Nomad UI, I notice that it creates 2 jobs, one in namespaceA and one in namespaceB.
How can I keep just one?

Updating the namespace in the Nomad UI creates another copy of the job in the new namespace, resulting in 2 jobs (this is the normal behavior of Nomad).
To move the job to another namespace and keep only one copy:
Create the new namespace if it does not exist: nomad namespace apply -description "MyNewNamespace description" <my-new-namespace>
Stop the job (this step is necessary, otherwise Nomad duplicates the job when changing the namespace): nomad job stop <myJob>
Change the namespace attribute in the job.nomad file:
job "myJob" {
## Change namespace here
namespace = "MyNewNamespace"
}
Launch the job: nomad job run <path/to>/myjob.nomad
Verify: nomad job status
Permanently delete the old job from the old namespace: nomad job stop -namespace=<oldNamespace> --purge <myJob>
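Putting it together, the whole move looks roughly like this (the namespace and job names below are placeholders):
nomad namespace apply -description "MyNewNamespace description" MyNewNamespace
nomad job stop -namespace=namespaceA myJob
# edit job.nomad so that it contains: namespace = "MyNewNamespace"
nomad job run path/to/myjob.nomad
nomad job status -namespace=MyNewNamespace myJob
nomad job stop -namespace=namespaceA --purge myJob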

Related

How to start/stop specific horizon supervisor?

I have a Horizon question. Does Horizon have a command that lets me stop/pause a specific supervisor?
For example: I have 5 supervisors running, such as supervisor-1, supervisor-2 ... supervisor-5, in horizon.php.
What I want to achieve is to pause/stop supervisor-1 temporarily and enable it again later.
Short answer: you can't, because the master supervisor will restart any failed supervisor defined in the Horizon config file. Laravel does, however, support (but not recommend) managing every single supervisor manually. Mohamed Said wrote a deep dive about how Horizon works.

Spring Scheduler code within an App with multiple instances with multiple JVMs

I have a Spring scheduler task configured with either fixedDelay or cron, and multiple instances of this app running on multiple JVMs.
The default behavior is that all instances execute the scheduler task.
Is there a way to control this behavior so that only one instance executes the scheduler task and the others don't?
Please let me know if you know of any approaches.
Thank you
We had a similar problem. We fixed it like this:
Removed all @Scheduled beans from our Spring Boot services.
Created AWS Lambda function scheduled with desired schedule.
Lambda function hits our top level domain with scheduling request.
Load balancer forwards this request to one of the service instances.
This way we are sure that the scheduled task is executed only once across the cluster of our services.
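For illustration, the receiving side can be as simple as a plain HTTP endpoint that replaces the old @Scheduled method (the path and ReportService below are made-up names):
@RestController
class ScheduledTaskController {

    private final ReportService reportService;  // hypothetical service that used to hold the @Scheduled method

    ScheduledTaskController(ReportService reportService) {
        this.reportService = reportService;
    }

    // The AWS Lambda cron POSTs to this URL; the load balancer routes it to exactly one instance.
    @PostMapping("/internal/scheduled/daily-report")
    ResponseEntity<Void> runDailyReport() {
        reportService.generateDailyReport();
        return ResponseEntity.ok().build();
    }
}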
I faced a similar problem where the same scheduled batch job was running on two servers even though it was intended to run on one node at a time. I later found a solution: do not execute the job if it is already running on the other server.
Job someJob = ...
// Ask the shared job repository whether this job is already running on any node
Set<JobExecution> jobs = jobExplorer.findRunningJobExecutions("someJobName");
if (jobs == null || jobs.isEmpty()) {
    jobLauncher.run(someJob, jobParametersBuilder.toJobParameters());
}
So before launching the job, check whether it is already executing on another node.
Please note that this approach only works with a DB-based job repository.
We had the same problem: our three instances were running the same job and doing the work three times every day. We solved it with Spring Batch. Spring Batch requires unique identifying job parameters per job instance, so if you start the job with an identifier such as the date, it rejects duplicate launches with the same identifier. In our case we used a date like '2020-1-1' (since the job runs only once a day). All three instances try to start the job with id '2020-1-1', but Spring rejects two of them as duplicates, stating that job '2020-1-1' is already running.
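A rough sketch of that launch (bean names are placeholders; with a shared DB-backed job repository, the duplicate launches fail because the identifying parameters already define an existing JobInstance):
// Every instance builds the same identifying parameter for the day,
// so only the first launch creates a new JobInstance.
JobParameters params = new JobParametersBuilder()
        .addString("runDate", LocalDate.now().toString())
        .toJobParameters();
try {
    jobLauncher.run(dailyJob, params);
} catch (JobExecutionException e) {
    // JobExecutionAlreadyRunningException / JobInstanceAlreadyCompleteException:
    // another instance already started or finished today's run, so this one skips it.
}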
If my understanding of your question is correct, and you want to run this scheduled job on a single instance, then I think you should look at ShedLock.
ShedLock makes sure that your scheduled tasks are executed at most once at the same time. If a task is being executed on one node, it acquires a lock which prevents execution of the same task from another node (or thread). Please note that if one task is already being executed on one node, execution on other nodes does not wait; it is simply skipped.
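A minimal sketch of wiring ShedLock into a Spring Boot scheduler (the durations, names, and the JDBC lock provider are example choices; ShedLock also needs its lock table created up front):
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT10M")
class SchedulingConfig {

    @Bean
    LockProvider lockProvider(DataSource dataSource) {
        // All instances share the same "shedlock" table, so only one can hold a given lock
        return new JdbcTemplateLockProvider(dataSource);
    }
}

@Component
class NightlyTask {

    @Scheduled(cron = "0 0 2 * * *")
    @SchedulerLock(name = "nightlyTask", lockAtMostFor = "PT30M", lockAtLeastFor = "PT1M")
    void run() {
        // executed on at most one instance per trigger; the others simply skip this run
    }
}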

Launching a Task at a time from a Stream

I am using the File Source stream component to read files from a directory and send a File instance to a custom processor that reads the file and launches a specific task using a TaskLauncher sink. If I drop 5 files in the directory, 5 tasks launch at the same time. What I am trying to achieve is to have each Task executed one after the other, so I need to monitor the state of the Tasks to ensure the prior task has completed before launching another task. What are my options for implementing this? As a side note, I am running this on a Yarn cluster.
Thanks,
-Frank
I think asynchronous task launching by the YARN TaskLauncher could be the reason it looks like all the tasks are launched at the same time. One possible approach is a custom task launcher sink that launches a task and waits for its status to become completed before it processes the next trigger request.
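A rough sketch of that idea, assuming a Spring Cloud Stream sink; launchTask() and isRunning() are hypothetical placeholders for whatever your YARN TaskLauncher and task repository expose:
@EnableBinding(Sink.class)
class SequentialTaskLauncherSink {

    @ServiceActivator(inputChannel = Sink.INPUT)
    public void handle(TaskLaunchRequest request) throws InterruptedException {
        String taskId = launchTask(request);   // hypothetical: delegate to the YARN TaskLauncher
        while (isRunning(taskId)) {            // hypothetical: poll the task's status
            Thread.sleep(5_000);               // block here so the next file/trigger waits its turn
        }
    }

    private String launchTask(TaskLaunchRequest request) {
        throw new UnsupportedOperationException("wire this to your TaskLauncher");
    }

    private boolean isRunning(String taskId) {
        throw new UnsupportedOperationException("wire this to your task status lookup");
    }
}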

How does Spring-XD handle job execution

I can't get this information out of the documentation. Can anyone tell me how Spring XD executes jobs? Does it assign a job to a certain container so that the job is only executed on the container it is deployed to, or is each job execution assigned to another container? Can I somehow control that a certain job may be executed in parallel (with different arguments) while others may not?
Thanks!
Peter
I am sure you would have seen some of the documentation here:
https://github.com/spring-projects/spring-xd/wiki/Batch-Jobs
To answer your questions:
Can anyone tell me how Spring-XD executes jobs? Does it assign a job to a certain container and is this job only executed on the container it is deployed to, or is each job execution assigned to another container?
After you create a new job definition using this:
xd>job create dailyfeedjob --definition "myfeedjobmodule" --deploy
the batch job module myfeedjobmodule gets deployed into an XD container. Once deployed, a job launching queue is set up in the message broker (Redis, Rabbit, or local). The name of the queue is job:dailyfeedjob. Since this queue is bound to the job module deployed in the XD container, a request message sent to this queue is picked up by the job module deployed inside that specific container.
Now, you can send the job launching request message (with job parameters) to the job:dailyfeedjob queue simply by setting up a stream that sends a message into this queue. For example, a trigger (fixed-delay, cron, or date trigger) could do that. There is also a job launch command in the shell, which launches the job only once.
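For example, a cron-driven launch could be wired up in the XD shell roughly like this (the stream name and cron expression are just examples), or launched once by hand:
xd>stream create --name dailyfeedtrigger --definition "trigger --cron='0 0 2 * * *' > queue:job:dailyfeedjob" --deploy
xd>job launch dailyfeedjob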
This section would explain it more: https://github.com/spring-projects/spring-xd/wiki/Batch-Jobs#launching-a-job
Hence, the job is launched (every time it receives a job launching request) only inside the container where the job module is deployed, and you can expect the original Spring Batch flow when the job is executed. (Refer to the shell documentation for all the job-related commands.)
Can I somehow control that a certain job may be executed in parallel (with different arguments) and others may not ?
If it is just different job parameters for the same job definition, then the launch request goes to the same container where the job module is deployed.
But you can still create a new job definition from the same batch job module:
xd>job create myotherdailyfeedjob --definition "myfeedjobmodule" --deploy
The only difference is that it runs under the new job name, and the job launching queue would be job:myotherdailyfeedjob. It all depends on how you want to organize running your batch jobs.
Also, for parallel processing batch jobs you can use:
http://docs.spring.io/spring-batch/reference/html/scalability.html
and, XD provides single step partitioning support for running batch jobs:
Include this in your job module:
<import resource="classpath:/META-INF/spring-xd/batch/singlestep-partition-support.xml"/>
with partitioner and tasklet beans defined.
You can try out some of the XD batch samples from here:
https://github.com/spring-projects/spring-xd-samples

How to use custom pool assignment for FairScheduler in Hadoop?

I am trying to take advantage of multiple pools in the FairScheduler, but all my jobs are submitted by a single agent process and therefore all belong to the same user.
I have set mapred.fairscheduler.poolnameproperty to scheduler.pool.name, and then in each job I set "scheduler.pool.name" to a specific pool from pools.xml that I want to use for that job.
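Concretely, my setup looks roughly like this (simplified; I am on the old mapred API and the pool name is just an example):
<!-- mapred-site.xml on the JobTracker -->
<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>scheduler.pool.name</value>
</property>
// in the submitting agent, per job
JobConf conf = new JobConf(MyAnalysisJob.class);
conf.set("scheduler.pool.name", "agentPoolA");   // a pool defined in pools.xml
JobClient.runJob(conf);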
I can see on the job configuration web page that both properties have the values I expect, and the scheduler web page shows all the pools I am trying to use. However, all jobs still run in the pool %username%, where username is the name of the user that was used to submit all jobs.
I am running hadoop version 0.20.1 from Cloudera distribution.
Any ideas how to make my jobs run in a pool that does not depend on the name of the user who submitted the job?
It turned out that restarting the JobTracker was not sufficient to put the new configuration into effect. After restarting all the TaskTrackers as well as the JobTracker, pool assignment works as expected.

Resources