Laravel multiple workers running job twice

I am using Laravel 5.6, dispatching jobs to a queue, and then using Supervisor to run 8 workers on that queue. I expected Laravel to know not to run the same job twice, but I was surprised to discover that it did.
The same job was picked up by more than one worker, and weird things started to happen.
The thing is that one year ago I wrote the same mechanism for another Laravel project (but on Laravel version 5.1) and the whole thing worked out of the box. I didn't have to configure anything.
Can anyone help?
Thank you.

I was having the exact same problem and it drove me crazy until I managed to solve it!
For some reason Laravel 5.6 creates the "jobs" table with ENGINE=MyISAM, which does not support transactions, and transactions are necessary for the locking mechanism that prevents a job from running twice. I believe it was different in Laravel 5.1; I also once wrote an app with Laravel 5.4 and it worked perfectly with 8 workers, but when I did the same thing with Laravel 5.6 it didn't work. Same as you describe.
Once I changed the engine to InnoDB, which supports transactions, everything worked as expected and the locking mechanism started to work.
So basically all you need to do is:
ALTER TABLE jobs ENGINE = InnoDB;
Hope that will solve your misery...
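To make the fix stick across fresh deploys, you can also force the engine in the migration that creates the table. This is a sketch based on the stock Laravel 5.6 jobs migration (generated by php artisan queue:table); the only addition is the $table->engine line:

```php
<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;

Schema::create('jobs', function (Blueprint $table) {
    $table->engine = 'InnoDB'; // force InnoDB so row locking and transactions work
    $table->bigIncrements('id');
    $table->string('queue')->index();
    $table->longText('payload');
    $table->unsignedTinyInteger('attempts');
    $table->unsignedInteger('reserved_at')->nullable();
    $table->unsignedInteger('available_at');
    $table->unsignedInteger('created_at');
});
```

The ALTER TABLE above fixes an existing table; this prevents the problem from coming back when the table is recreated.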

If the duplication is coming from the scheduler rather than the queue workers, note that scheduled commands can overlap by default; withoutOverlapping() prevents that:
$schedule->command('emails:send')->withoutOverlapping();

Related

Laravel 8 - Queue jobs timeout, Fixed by clearing cache & restarting horizon

My queue jobs all run fairly seamlessly on our production server, but about every 2 - 3 months I start getting a lot of timeout exceeded/too many attempts exceptions.
Our app is running with event sourcing and many events are queued, so needless to say we have a lot of jobs passing through the system (100 - 200k per day, generally).
I have not found the root cause of the issues yet, but a simple re-deploy through Laravel Envoyer fixes the issue. This is most likely due to the cache:clear command being run.
Currently, the cache is handled by Redis and is on the same server as the app. I was considering moving the cache to its own server/instance but this still does not help me with the root cause.
Does anyone have any ideas what might be going on here and how I can diagnose/fix it? I am guessing the cache is just getting overloaded/running out of space/leaking etc. over time but not really sure where to go from here.
Check:
- your Redis version, and update the predis package
- your Laravel version
- your server
I hope I gave you some leads.
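Until the root cause is found, the effect of the re-deploy can be reproduced manually without a full Envoyer run. A sketch, assuming Horizon is running under Supervisor so it comes back after terminating:

```shell
php artisan cache:clear        # flush the application cache (likely what fixes it on deploy)
php artisan queue:restart      # signal plain queue workers to restart after their current job
php artisan horizon:terminate  # gracefully stop Horizon; Supervisor restarts it
```

Running these on a schedule is only a workaround, but it can confirm whether the cache is really what is going stale.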

Laravel 5.2 - Creating a Job API

So in my API there are a few places where it is running a process / report that is either hitting a timeout or simply just taking WAY too long. I'd like to defer these jobs off to a queue and instead return a key in my response. The front end would then ping a service using that key to determine the status of its particular job in the queue. This way we don't have hanging ajax calls for 2 - 3 minutes. Maybe I could even create a queue viewer that would allow you to review the jobs in it and even cancel some etc.
Does Laravel have something built in or is there a package for this already? Are there other better options for dealing with this kind of issue?
This is what you are looking for: Laravel's queues.
I don't believe this existed when I first posted this question. However, Laravel now has this built for it: https://laravel.com/docs/5.6/horizon which is everything I was looking for.
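The dispatch-and-poll pattern the asker describes can be sketched in a few lines. This is a sketch against a recent Laravel, not 5.2 syntax; ReportJob and the route names are hypothetical, and job status is kept in the cache under a generated key that the job itself is expected to update as it runs:

```php
<?php

use Illuminate\Support\Str;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Route;

// Controller action: queue the work and return a key immediately
// instead of letting the AJAX call hang for minutes.
Route::post('/reports', function () {
    $key = (string) Str::uuid();
    Cache::put("report-status:{$key}", 'queued', now()->addHour());
    ReportJob::dispatch($key); // hypothetical job; it writes 'running'/'done' to the same key
    return response()->json(['key' => $key], 202);
});

// Polling endpoint the front end pings with the key.
Route::get('/reports/{key}/status', function (string $key) {
    return response()->json([
        'status' => Cache::get("report-status:{$key}", 'unknown'),
    ]);
});
```

Horizon then gives you the queue-viewer part (inspecting and managing jobs) on top of this.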

How do I check to see if a job is in the Laravel queue?

Here's the situation:
I have a Laravel 4.2 application that retrieves (from a third party API) an asset. This is a long-lived asset (it only changes once every 12-24 hours) and is kind of time consuming (a large image file). I do cache the asset, so the impact has been more or less minimized, but there's still the case where the first person who logs in to my application in the morning has to wait while the application loads the asset for the first time.
I have set up a job which will be queued up and will run every eight hours. This ought to ensure that the asset in the cache is always fresh. It works by re-enqueueing the job for eight hours later after it runs.
The problem is this: I'm about to deploy this job system to production & I'm not sure how to start this thing running for the first time.
Ideally, I'd like to have an administration option where I have a button which says "Click here to submit the job", but I'd like to make it as foolproof as possible & prevent people (I'm not the only administrator) from submitting the job multiple times. To do this, however, the application would need to check & see if the job is already in the queue. I can't find a way to do that in an implementation-independent way (I'm using redis, but that may change in the future).
Another option would be to add an artisan command to run the initial process. That way I could deploy the application, run an artisan command, and forget about it.
So, to recap, I have two questions:
Is there a way to check a queue to see what jobs are in there?
Is there a better way to do this?
Thanks
When a job is in the Laravel queue (with the database driver), it is saved in the jobs table, so you can check for it with a database query.
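For example, a sketch assuming the database queue driver and a hypothetical RefreshAssetJob class: the payload column stores the serialized job, so a LIKE match on the class name works as a rough existence check.

```php
<?php

use Illuminate\Support\Facades\DB;

// Rough check: is our job already waiting in the queue?
$alreadyQueued = DB::table('jobs')
    ->where('payload', 'like', '%RefreshAssetJob%') // hypothetical job class name
    ->exists();

if (! $alreadyQueued) {
    // safe to dispatch the job from the admin button
}
```

Note this only works with the database driver; with Redis (as the asker uses) the pending jobs live in Redis lists, which is exactly the implementation-dependence they wanted to avoid.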
If it's guaranteed to be the only thing ever in the queue, you could use something like:
if (Queue::size() === 0) {
Queue::push(...);
}
You would need to run php artisan queue:listen in the terminal.
Here is the complete documentation if you want to learn more about:
https://laravel.com/docs/5.2/queues#running-the-queue-listener
You can use the Laravel Telescope package.
Laravel Telescope is an elegant debug assistant for the Laravel framework. Telescope provides insight into the requests coming into your application, exceptions, log entries, database queries, queued jobs, mail, notifications, cache operations, scheduled tasks, variable dumps and more. Telescope makes a wonderful companion to your local Laravel development environment.
(Source: https://laravel.com/docs/7.x/telescope)

Laravel 4.1 - queue:listen performance

I am trying to use the Queue system in Laravel (4.1). Everything works as expected, with both Redis (with the native driver) and RabbitMQ.
The only "issue" I am experiencing is the poor performance. It looks like only 4 jobs per seconds can be processed (I push 1000 jobs in the queue to test it). Have you got any tip to improve the performance?
This is an old question but I thought I would post anyway. The problem is that Laravel's default listener is not really a true queue consumer; it polls the queue at regular intervals unless it is already busy with a job. Using true AMQP requires some additional libraries to be installed from PECL. You can find that plugin here. I would also suggest using this composer package for your PHP AMQP library. You would then need to write your own Laravel command.
Currently I'm writing a RabbitMQ handler for Laravel that will resolve this issue.
Old question, but for anyone coming here, the queue:work command has a daemon mode that runs just like queue:listen except that it doesn't have to restart/reload Laravel each time, making it much more performant. See the docs:
http://laravel.com/docs/4.2/queues
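Concretely, the daemon mode mentioned above looks like this (flags as documented for Laravel 4.2; sleep and tries values are just reasonable defaults to tune):

```shell
php artisan queue:work --daemon --sleep=3 --tries=3
```

Because the framework stays booted between jobs, remember to restart the daemon (php artisan queue:restart) after each deploy so it picks up new code.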

Drupal sessions table getting huge

Over the last few months, my drupal sessions table has ballooned to several GB. It seems to have started when I upgraded to drupal 5.20 (previously I thought drupal automatically cleaned out old sessions). So I created a cron job to delete sessions older than two weeks, but this takes far too long to execute (the sessions table grows by about a million rows per week). Should drupal actually be handling this, or do I just need to cut down the maximum session age until the execution time is acceptable?
Also, I thought drupal was not supposed to create a session on the first request, thus eliminating many garbage entries for crawlers. But at least a quarter of the session entries are bots.
Came upon this when I researched the issue again.
This is probably caused by the stock PHP configuration in some Linux distros, under which PHP session garbage collection never runs. As a result, the Drupal session-cleaning function that is supposed to remove old sessions from the DB never runs either.
See all about it here: http://www.rymland.org/en/blogs/boaz/2_jan_09/making-php-session-expire-drupal-and-general
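If that is the cause, re-enabling PHP's probabilistic session garbage collection should let Drupal's cleanup run. A sketch for settings.php (or the equivalent lines in php.ini); the specific values are common suggestions, not Drupal defaults:

```php
<?php

// settings.php — re-enable probabilistic session garbage collection.
ini_set('session.gc_probability', 1);       // with gc_divisor, ~1% of requests trigger GC
ini_set('session.gc_divisor', 100);
ini_set('session.gc_maxlifetime', 1209600); // two weeks, matching the cron cutoff above
```

On Debian/Ubuntu, note that the distro ships gc_probability = 0 and handles cleanup with its own cron script, which does not know about Drupal's database-backed sessions.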
It sounds like a bug in your code somewhere. Drupal shouldn't create a session on first request for that exact reason.
Drupal updates are only bugfixes/security fixes for Drupal 6 and lower. So I don't see why upgrading could have caused the problem.
Have you altered Drupal core in any way?
