What's the difference between Laravel's Queue\ShouldBeUnique and Queue\Middleware\WithoutOverlapping?

I have a job that is somehow getting kicked off multiple times. I want the job to kick off once and only once. If anything attempts to run the job while it's already on the queue, I want those attempts to ABORT.
I've read the Laravel 8 documentation and can't figure out if I should use:
Queue\ShouldBeUnique (documented here: https://laravel.com/docs/8.x/queues#unique-jobs)
OR
Queue\Middleware\WithoutOverlapping
mentioned here: https://laravel.com/docs/8.x/queues#preventing-job-overlaps
I believe the first one aborts subsequent attempts to run the job, whereas the second keeps them queued and just makes sure they don't run until the first job is finished. Can anyone confirm?

Confirmed locally by attempting to run multiple instances of the same job in a console window.
Implementing the Queue\ShouldBeUnique interface in the class of my job means that subsequent attempts are ABORTED.
Whereas adding ->withoutOverlapping() to the end of my job reference in the app\Console\Kernel.php file simply prevents it from running simultaneously. It does NOT abort the job if one is already running.
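For reference, a minimal sketch of both approaches (the class, command, and key names here are illustrative, not from the original question):

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Queue\Middleware\WithoutOverlapping;

// Option 1: abort duplicates. While an instance of this job is
// queued or running, a second dispatch is simply discarded.
class ProcessReport implements ShouldQueue, ShouldBeUnique
{
    public $uniqueFor = 3600; // optional: unique lock expires after an hour
}

// Option 2: serialize, don't abort. The middleware releases the
// overlapping job back onto the queue to be retried later.
class SyncAccounts implements ShouldQueue
{
    public function middleware()
    {
        return [new WithoutOverlapping('sync-accounts')];
    }
}

// Scheduler variant (app\Console\Kernel.php): skip this run while
// the previous run of the same command is still in progress.
$schedule->command('reports:process')->withoutOverlapping();

Note that in Laravel 8 the WithoutOverlapping middleware also has a dontRelease() option that discards the overlapping job instead of re-queueing it, which brings it closer to ShouldBeUnique for that one run.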

Related

What is the correct use case for SchedulerLock lockAtMostFor?

I am using SchedulerLock in Spring Boot, and I am using 2 servers.
What I'm curious about is why the "lockAtMostFor" option exists.
Take an example: on one of my 2 servers, the scheduled task runs first and takes the lock.
But something goes wrong while it is running, and that server goes down.
At this moment, my scheduled task ends incompletely.
Every guide I read is full of vague answers about "lock time in case a node dies".
When a node dies, it can no longer execute schedules.
But why keep holding a LOCK for a dead node?
Even if I urgently try to run the schedule manually on the 2nd server, the lock above makes that impossible.
What does this option exist for?

Spring batch limit job execution

My Spring Batch application is running on the PCF platform, connected to a single MySQL database instance. It runs fine when only one application instance is up and running, but with more than one instance I get the exception org.springframework.dao.DuplicateKeyException. This might be happening because the same batch job fires at the same time on each instance and tries to insert a row into the batch job instance table with the same job ID. Is there any way to prevent this kind of failure? Put another way, I want a solution where only one batch job runs at a time even when there are multiple instances running.
For me, it is a good sign that DuplicateKeyException is thrown, because it achieves exactly what you want: spring-batch already makes sure that the same job execution is not executed in parallel (i.e. only one server instance executes the job successfully while the others fail to execute it).
So I see no harm in your case. If you don't like this exception, you can catch it and re-throw it as an application-level exception saying something like "The job is being executed by another server instance, so skip executing it."
If you really want only one server instance to try to trigger the job, with the other servers not trying to trigger it in the meantime, that is not a spring-batch problem but a question of how you ensure that only one server node fires the request in a distributed environment. If the batch job is fired as a scheduled task using @Scheduled, you can consider using a distributed lock such as ShedLock to make sure it is executed at most once at the same time, on one node only.
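ShedLock itself is a Java library, but the underlying idea, a store-backed lock with an expiry so that a dead node cannot hold it forever, is available in other stacks too. As a sketch, here is the equivalent using Laravel's atomic cache locks, to match the main question on this page (the lock name and timeout are illustrative):

use Illuminate\Support\Facades\Cache;

// Acquire a cross-process lock for at most 10 minutes. Only one
// node gets it, and the TTL lets it expire on its own if that node
// dies mid-run (the same role lockAtMostFor plays in ShedLock).
$lock = Cache::lock('nightly-batch', 600);

if ($lock->get()) {
    try {
        // run the job
    } finally {
        $lock->release();
    }
} else {
    // another node is already running the job, so skip this run
}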

Laravel: How to detect if code is being executed from within a queued job, as opposed to manually run from the CLI

I found this similar question How to check If the current app process is running within a queue environment in Laravel
But actually this is the opposite of what I want. I want to be able to distinguish between code being executed manually from an artisan command launched on the CLI, and a job being run as the result of a POST trigger via a controller, or of a scheduled run.
Basically, I want to distinguish between a job being run via the SYNC driver, manually triggered by the developer with eyes on the CLI output, and everything else.
app()->runningInConsole() returns true in both cases, so it is not useful to me.
Is there another way to detect this? For example, is there a way to detect the currently used queue connection? Keep in mind that it's possible to change the queue connection at runtime, so just checking the value in the env file is not enough.
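One possible approach (a sketch, not an answer from the original thread): inside the job itself, the underlying queue job object set by the InteractsWithQueue trait exposes the connection it was dispatched on, so the sync driver can be detected at runtime:

use Illuminate\Bus\Queueable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;

class ExampleJob implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    public function handle()
    {
        // $this->job is populated for every driver, including sync,
        // so the connection actually in use can be inspected here.
        if ($this->job && $this->job->getConnectionName() === 'sync') {
            // running on the sync driver: most likely a developer
            // triggered this manually and is watching the CLI output
        }
    }
}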

Hangfire is running jobs sequentially

I am using Hangfire hosted by IIS with an app pool set to "AlwaysRunning". I am using the Autofac extension for DI. Currently, when running background jobs with Hangfire, they are executing sequentially. Both jobs are similar in nature and involve file I/O. The first job executes and starts generating the requisite file. The second job starts executing, then stops until the first job is complete, at which point the second job resumes.
I am not sure if this is an issue related to DI and the lifetime scope. I tend to think not, as I create everything with instance-per-dependency scope. I am using OWIN to bootstrap Hangfire, and I am not passing any BackgroundJobServer options, nor am I applying any hints via attributes. I am using the default configuration for workers.
What would be causing the jobs to execute sequentially? I am sending a POST request to Web API and adding jobs to the queue with the following:
BackgroundJob.Enqueue<ExecutionWrapperContext>(c => c.ExecuteJob(job.SearchId, $"{request.User} : {request.SearchName}"));
Thanks in advance.
I was looking for that exact behavior recently, and I managed to get it by using this attribute:
[DisableConcurrentExecution(<timeout>)]
Could it be that you have this attribute applied, either on the job or globally?
Is this what you were looking for?
var hangfireJobId = BackgroundJob.Enqueue<ExecutionWrapperContext>(x => x.ExecuteJob1(arguments));
hangfireJobId = BackgroundJob.ContinueWith<ExecutionWrapperContext>(hangfireJobId, x => x.ExecuteJob2(arguments));
This will basically execute the first part, and when that is finished it will start the second part.

whenever gem minutely job: what happens if the minutely job takes more than one minute?

I am planning to use the whenever gem, which among other things will also run a minutely rake task. If my rake task takes more than a minute, then based on the output from the whenever gem it seems like a second instance of the rake task will kick in even though the first one is not quite finished.
Will the whenever gem wait for the minutely task to finish before starting the second one?
If not, then what are the workarounds? I believe this question is better served on Server Fault, but still I am putting it here.
whenever just writes cronjobs, and makes no effort to stop them overrunning themselves. This is the job of the task that is being run.
Use PID files, or file system locks to prevent the task running over the top of itself.
In my scheduled application, I scan the process list looking for other instances of my application running with the same configuration file on the command line, then exit with a logged note if such a process is already running.
That keeps the program from stepping on itself ...
PID files or some type of "locking" file are prone to problems when the process exits but the lock file still exists.
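One way around the stale-lock-file problem is an OS-level flock, which the kernel releases automatically when the process dies. A minimal sketch (in PHP, to match the main question on this page; the same pattern applies to a rake task):

// Open (or create) the lock file without truncating it.
$fp = fopen('/tmp/minutely-task.lock', 'c');

// Take an exclusive, non-blocking lock. If another instance already
// holds it, abort immediately instead of queueing up behind it.
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    fwrite(STDERR, "Another instance is already running; aborting.\n");
    exit(1);
}

// ... do the actual work ...

// The lock disappears with the process, even on a crash, so a stale
// lock file can never block future runs.
flock($fp, LOCK_UN);
fclose($fp);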
