I have an app running on Apache + Passenger, and I have an initializer that initializes rufus-scheduler and then schedules jobs.
It seems like the initializer is getting executed multiple times after the app has been started, which schedules duplicate jobs within rufus-scheduler.
I am not sure why the initializers are getting executed multiple times without a restart.
Initializers are not the right place to do this. Each initializer is executed once for every process your web server runs; i.e. if Apache/Passenger starts 4 processes to accept connections to your Rails application, your initializer is executed 4 times.
A simple solution would be to use a rake task, run as a single dedicated process, as part of your deployment strategy.
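If it helps, here is a minimal sketch of that approach; the task name, the 10-minute schedule and the job body are placeholders, not taken from your setup:

# lib/tasks/scheduler.rake -- run once, e.g. `bundle exec rake scheduler:run`
# Runs rufus-scheduler in one dedicated process instead of inside every
# Passenger worker, so each job is only scheduled once.
namespace :scheduler do
  desc 'Run rufus-scheduler in a single dedicated process'
  task run: :environment do
    require 'rufus-scheduler'

    scheduler = Rufus::Scheduler.new

    scheduler.every '10m' do
      # replace with the job you currently schedule in the initializer
      Rails.logger.info 'running scheduled job'
    end

    scheduler.join # block so the process keeps running
  end
end

You would start this task once (from your deployment scripts or a process supervisor) and remove the scheduling code from the initializer.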
Related
I have a job that is somehow getting kicked off multiple times. I want the job to kick off once and only once. If anything else attempts to run the job while it's already on the queue, I want those runs to ABORT.
I've read the Laravel 8 documentation and can't figure out if I should use:
Queue\ShouldBeUnique (documented here: https://laravel.com/docs/8.x/queues#unique-jobs)
OR
Queue\Middleware\WithoutOverlapping
mentioned here: https://laravel.com/docs/8.x/queues#preventing-job-overlaps
I believe the first one aborts subsequent attempts to run the job whereas the second keeps it queued, just makes sure it doesn't run until the first job is finished. Can anyone confirm?
Confirmed locally by attempting to run multiple instances of the same job in a console window.
Implementing the Queue\ShouldBeUnique interface in the class of my job means that subsequent attempts are ABORTED.
Whereas adding ->withoutOverlapping() to the end of my job reference in the app/Console/Kernel.php file simply prevents it from running simultaneously. It does NOT abort the job if one is already running.
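As a rough sketch of the two queue-side options from the linked docs (the class names and the lock key here are made up for illustration):

<?php

use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

// Option 1: ShouldBeUnique -- while an identical job is already queued or
// running, further dispatches of it are simply dropped.
class GenerateReport implements ShouldQueue, ShouldBeUnique
{
    public function handle()
    {
        // do the work
    }
}

// Option 2: WithoutOverlapping middleware -- overlapping jobs stay on the
// queue and are held back until the running one finishes.
class ProcessOrder implements ShouldQueue
{
    public function middleware()
    {
        return [new WithoutOverlapping('process-order')];
    }

    public function handle()
    {
        // do the work
    }
}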
My Spring Batch application runs on the PCF platform and is connected to a MySQL database (single instance). It runs fine when only one application instance is up and running, but with more than one instance I get org.springframework.dao.DuplicateKeyException. This might be happening because the same batch job fires at the same time on each instance and tries to update the batch instance table with the same job ID. Is there any way to prevent this kind of failure, or alternatively, is there a solution where only one batch job runs at a time even when multiple instances are running?
To me, it is a good sign that DuplicateKeyException is thrown, because it achieves exactly what you want: Spring Batch already makes sure that the same job execution is not executed in parallel (i.e. only one server instance executes the job successfully while the others fail to execute it).
So I see no harm in your case. If you don't like this exception, you can catch it and re-throw it as an application-level exception saying something like "The job is being executed by another server instance, so skipping it here."
If you really want only one server instance to try to trigger the job, with the other servers not trying to trigger it at all in the meantime, that is not a Spring Batch problem but a question of how you ensure that only one server node fires the request in a distributed environment. If the batch job is fired as a scheduled task using @Scheduled, you can consider using a distributed lock such as ShedLock to make sure it is executed at most once at the same time, on one node only.
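For illustration, a rough sketch of the ShedLock approach (the bean names, cron expression and lock durations are placeholders, and it assumes you have already configured a LockProvider and @EnableSchedulerLock as described in the ShedLock documentation):

import java.time.Instant;

import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightlyJobTrigger {

    private final JobLauncher jobLauncher;
    private final Job nightlyJob;

    public NightlyJobTrigger(JobLauncher jobLauncher, Job nightlyJob) {
        this.jobLauncher = jobLauncher;
        this.nightlyJob = nightlyJob;
    }

    // ShedLock lets at most one node acquire the lock, so only one
    // application instance actually launches the batch job.
    @Scheduled(cron = "0 0 1 * * *")
    @SchedulerLock(name = "nightlyJob", lockAtMostFor = "PT30M", lockAtLeastFor = "PT5M")
    public void runNightlyJob() throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("runAt", Instant.now().toString())
                .toJobParameters();
        jobLauncher.run(nightlyJob, params);
    }
}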
We have our application deployed on WebSphere Application Server. The application runs in a clustered environment with 6 nodes. The EJB timer service is configured using the custom scheduler with a datasource pointing to an Oracle database. So when the application is deployed on the cluster, it triggers the EJB timer service on Node1, which is the node recorded in the Oracle database.
Sometimes the value in the Oracle database changes automatically to some other node, like node2 or node3, because of which the EJB timer gets stopped. Any suggestions or advice on why it changes automatically?
EJB timer configuration
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).datasourceJNDIName = jdbc/cdb_db
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).nonPersistentTimerRetryCount = -1
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).nonPersistentTimerRetryInterval = 300
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).numAlarmThreads = 1
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).numNPTimerThreads = 1
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).pollInterval = 300
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).tablePrefix = EJBTIMER_
Server(0).components.ApplicationServer(1).components.EJBContainer(1).timerSettings.EJBTimer(0).uniqueTimerManagerForNP = false
As the first comment added to this question points out, it is the designed behavior of EJB Persistent Timers/Scheduler to have any one member run all of the tasks until that member isn't available or cannot respond quickly enough, in which case another member takes over.
If you don't like this behavior and want to change it so that your timer tasks can only run on a single member, you can accomplish that by stopping the scheduler poll daemon on all members except for the one that you want to run the tasks. Here is a knowledge center document which describes how to do that:
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/scheduler/xmp/xsch_stopstart.html
Just be aware that if you do this, you will be losing out on the ability for the scheduler to automatically start running tasks on a different member should the member that you have designated to run them go down. In this case, tasks will not run at all until either of
1) the member that is allowed to run them comes back up, or
2) you manually use the aforementioned WASScheduler MBean to start the scheduler poll daemon on a different member, thus allowing tasks to run there
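For illustration only, stopping the poll daemon from wsadmin looks roughly like this (a sketch, assuming the WASScheduler MBean exposes the stopDaemon/startDaemon operations described in the linked document; verify the exact names and signatures there):

# wsadmin (Jython) sketch -- run against each member that should NOT run tasks.
schedulers = AdminControl.queryNames('WebSphere:type=WASScheduler,*').splitlines()
for sched in schedulers:
    print 'Stopping scheduler poll daemon for: ' + sched
    AdminControl.invoke(sched, 'stopDaemon')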
I am using Hangfire hosted by IIS with an app pool set to "AlwaysRunning". I am using the Autofac extension for DI. Currently, when running background jobs with Hangfire, they execute sequentially. Both jobs are similar in nature and involve file I/O. The first job executes and starts generating the requisite file. The second job starts executing, then stops until the first job is complete, at which point the second job resumes. I am not sure if this is an issue related to DI and the lifetime scope; I tend to think not, as I create everything with instance-per-dependency scope. I am using OWIN to bootstrap Hangfire, and I am not passing any BackgroundJobServer options, nor am I applying any hints via attributes. What would be causing the jobs to execute sequentially? I am using the default configuration for workers. I am sending a POST request to the Web API and adding jobs to the queue with the following:
BackgroundJob.Enqueue<ExecutionWrapperContext>(c => c.ExecuteJob(job.SearchId, $"{request.User} : {request.SearchName}"));
Thanks In Advance
I was looking for that exact behavior recently, and I managed to get it by using this attribute:
[DisableConcurrentExecution(<timeout>)]
Could it be that you have this attribute applied, either on the job or globally?
Is this what you were looking for?
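For reference, this is roughly what that looks like; ExecutionWrapperContext/ExecuteJob are taken from your Enqueue call, and the timeout value is arbitrary. If something like this is present, Hangfire serializes executions of that method:

using Hangfire;
using Hangfire.Common;

public class ExecutionWrapperContext
{
    // With this attribute, a second call to ExecuteJob waits (up to the
    // timeout, in seconds) for a distributed lock held by the first call,
    // which produces exactly the one-after-the-other behaviour described.
    [DisableConcurrentExecution(timeoutInSeconds: 3600)]
    public void ExecuteJob(int searchId, string description)
    {
        // file I/O work...
    }
}

public static class HangfireFilterConfig
{
    public static void Register()
    {
        // The same filter registered globally affects every job type.
        GlobalJobFilters.Filters.Add(new DisableConcurrentExecutionAttribute(3600));
    }
}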
var hangfireJobId = BackgroundJob.Enqueue<ExecutionWrapperContext>(x => x.ExecuteJob1(arguments));
hangfireJobId = BackgroundJob.ContinueWith<ExecutionWrapperContext>(hangfireJobId, x => x.ExecuteJob2(arguments));
This will basically execute the first job and, when that is finished, start the second one.
I am involved in a project which requires me to create a job scheduler using “Quartz Scheduler” to schedule various jobs, which in turn trigger Pentaho Kettle transformation(s). The Kettle transformations are essentially ETL scripts performing some mundane activities in our case. I am facing a critical issue while running the scheduler:
We have around 10 jobs scheduled using the job scheduler. For some 3 to 4 specific jobs it throws the following exception:
Unable to load the job from XML file [/home /transformations/jobs/TestJob.kjb] Unable to read file [file:///home /transformations/jobs/ TestJob.kjb] Could not read from "file:///home /transformations/jobs/TestJob.kjb" because it is a not a file.
org.pentaho.di.job.JobMeta.(JobMeta.java:715)
org.pentaho.di.job.JobMeta.(JobMeta.java:679)
com. XYZ.transformation.jobs.impl.JobBootstrapImpl.executeJob(JobBootstrapImpl.java:115)
com. XYZ.transformation.jobs.impl.JobBootstrapImpl.startJobsExecution(JobBootstrapImpl.java:100)
com. XYZ.transformation.jobs.impl.QuartzJobsScheduler.executeInternal(QuartzJobsScheduler.java:25)
org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
org.quartz.core.JobRunShell.run(JobRunShell.java:223)
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
The weird thing is that, upon verifying the specified path, i.e. “/home /transformations/jobs/TestJob.kjb”, the file is present and I am able to read it. Moreover, the job runs successfully and does everything it is supposed to, yet throws the exception detailed above.
After observing closely, I strongly feel that Quartz is internally caching jobs and/or their parameters. We do load certain parameters required for the job to execute after it is triggered. Would it be possible to delete/purge the cache used by Quartz? I also tried killing all the Java processes running on the box (thinking that it might kill Quartz itself, as Quartz runs within a Java process) and restarting Quartz and its jobs afresh, but couldn’t make it work as expected. It still stores the old parameters somewhere, perhaps in some cache.
Versions used –
Spring Framework (spring-core & spring-beans) - 3.0.6.RELEASE
Quartz Scheduler - 1.8.6
Platform – Redhat Linux - 2.6.18-308.el5
Pentaho kettle – Spoon Stable Release – 4.3.0
I would do it this way:
Ensure that the Pentaho job can run standalone first, with a shell script, Java service wrapper or whatever
Then, in the Quartz job, use Quartz's NativeJob to call that same standalone script
Just my two cents
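For illustration, a rough sketch against the Quartz 1.8 API (the group/job names, cron expression and script path are placeholders; the wrapper script is assumed to invoke kitchen.sh with your .kjb file):

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.jobs.NativeJob;

public class KettleNativeJobScheduler {

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.start();

        // NativeJob just executes an external command, so the Kettle job runs
        // in its own process exactly as it does when launched by hand.
        JobDetail jobDetail = new JobDetail("testJob", "kettle", NativeJob.class);
        jobDetail.getJobDataMap().put(NativeJob.PROP_COMMAND,
                "/opt/scripts/run_testjob.sh");

        CronTrigger trigger = new CronTrigger("testJobTrigger", "kettle",
                "0 0/30 * * * ?");

        scheduler.scheduleJob(jobDetail, trigger);
    }
}

Since the Kettle job then runs outside the scheduler's JVM, any stale parameters or path problems show up in the script's own environment rather than inside Quartz.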
Looks to me like you have an extra space in the path.
/home /transformations/jobs/TestJob.kjb
Between the e of home and the /
Remove that space; I can't possibly believe you actually have a home directory called "home "!