For various reasons I need to read the PIDs of the queue workers belonging to a supervisor in Laravel Horizon. The problem is that I can't find this information anywhere in the library.
I'm able to retrieve all supervisor like this:
public function index(SupervisorRepository $supervisors)
{
    $supervisors = collect($supervisors->all())->sortBy('name');
    return $supervisors;
}
or all workload (queue) like this:
public function index(WorkloadRepository $workloadRepository)
{
    $workload = collect($workloadRepository->get())->sortBy('name')->values()->toArray();
    return $workload;
}
but I don't see how to get the PID of a queue worker from either repository. Last but not least, I can't run shell commands on my server, so I must get it either by accessing Redis directly or through Laravel Horizon.
It turns out you can't get it from Horizon; you need to get it by querying your OS.
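If your environment does allow shell access, a rough sketch of querying the OS from PHP might look like this (the 'horizon' process pattern is an assumption and varies by Horizon version; inspect ps aux on your server to find the real process names):

// List the PIDs of processes whose command line mentions 'horizon'.
$output = [];
exec("pgrep -f 'horizon'", $output);
foreach ($output as $pid) {
    echo $pid, PHP_EOL;
}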
Related
This is a follow-up to
Laravel - Running Jobs in Sequence
I decided to go with the Redis rate limiter. The code is below:
class SendMailJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $subscription;

    public function __construct(Subscription $subscription)
    {
        $this->subscription = $subscription;
    }

    public function handle()
    {
        Redis::funnel('mailingJob')->limit(1)->then(function () {
            // Job logic...
            (new Mailer($this->subscription))->send();
        }, function () {
            // Could not obtain lock...
            return $this->release(10);
        });
    }
}
And the controller code looks like this:
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Jobs\SendMailJob;
use App\Models\Subscriptions;

class MailController extends Controller
{
    public function sendEmail()
    {
        Subscriptions::all()
            ->each(function ($subscription) {
                SendMailJob::dispatch($subscription);
            });
    }
}
Now, when I run the queue, some of the jobs work but the rest (around 90%) fail with the below error in Horizon:
SendMailJob has been attempted too many times or run too long. The job may have
previously timed out.
What am I missing? Please point me in the right direction.
My goal is to run only one job of a type at one time.
[...] has been attempted too many times or run too long is an error that doesn't tell you why the job failed. It means some other exception has caused your job to fail every time it was attempted by the worker, and the worker has tried it the maximum number of times it was allowed to by your configuration. To understand why it's failing, check your laravel.log file for the exception that caused the job to fail.
In your case, since Mailer is contacting an external system it could be that the system you're connecting to is rate limiting you, or they could be having temporary connection problems or other service downtime. Again, there should be more detail in your log files.
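As a hedged aid for finding that detail, you can also add a failed() hook to the job; Laravel passes it the exception that caused the final failure:

// On SendMailJob: called once the job has exhausted its attempts.
public function failed(\Exception $exception)
{
    \Log::error('SendMailJob failed: '.$exception->getMessage());
}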
The Laravel documentation has a hint about this:
When using rate limiting, the number of attempts your job will need to run successfully can be hard to determine. Therefore, it is useful to combine rate limiting with time based attempts.
The core of the issue is, the job keeps failing until it can achieve a lock and run.
So I imagine that where you are running your queue worker, you are not setting the --tries flag high enough.
Although you could just set a very high --tries, it is not really scalable.
The best solution, as suggested in the documentation, would be to increase the number of tries as well as using time-based attempts.
You can also increase the release time in return $this->release(10);. That makes the job wait longer before trying to reacquire the lock, so it will use up fewer tries!
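A minimal sketch of time-based attempts on the job class (the 10-minute window is an illustrative value; note that when retryUntil() is defined it takes precedence over --tries):

// On the job class: keep retrying until a point in time rather than
// a fixed number of attempts.
public function retryUntil()
{
    return now()->addMinutes(10);
}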
End Goal
The aim is for my application to fire off potentially a lot of emails to the Redis queue (this bit is working) and then have Redis throttle the processing of these to only a set number of emails every selected number of minutes.
For this example, I have a test job that appends the time to a file and I am attempting to throttle it to once every 60 seconds.
The story so far....
So far, I have the application successfully pushing a test batch of 50 jobs to the Redis queue. I can log in to Horizon and see these 50 jobs in the "processjob" queue. I can also log in to redis-cli and see 50 entries under the list key "queues:processjob".
My issue is that as soon as I attempt to put the throttle on, only 1 job runs and the rest fail with the following error:
Predis\Response\ServerException: ERR Error running script (call to f_29cc07bd431ccbf64637e5dcb60484560fdfa2da): #user_script:10: WRONGTYPE Operation against a key holding the wrong kind of value in /var/www/html/smhub/vendor/predis/predis/src/Client.php:370
If I remove the throttle, all works fine and 5 jobs are instantly run.
I thought maybe it was an incorrect key name, but if I change the following:
public function handle()
{
    Redis::throttle('queues:processjob')->allow(1)->every(60)->then(function () {
        Storage::disk('local')->append('testFile.txt', date('Y-m-d H:i:s'));
    }, function () {
        return $this->release(10);
    });
}
to this:
public function handle()
{
    Redis::funnel('queues:processjob')->limit(1)->then(function () {
        Storage::disk('local')->append('testFile.txt', date('Y-m-d H:i:s'));
    }, function () {
        return $this->release(10);
    });
}
then it all works fine.
My thoughts...
Something tells me that the issue is that the Redis key is of type "list" and that the jobs are all under a single list. That being said, if it didn't work this way, how would we throttle a queue, since the throttle requires a unique key?
For anybody else that is having issues attempting to get this to work and is getting the same issue as I was, this is what resolved my issues:
The Fault
I assumed that Redis::throttle('queues:processjob') was meant to refer to the queue that you wanted throttled. However, after some re-reading of the documentation and testing of the code, I realized that this was not the case.
The Fix
Redis::throttle('queues:processjob') is meant to point to its own 'holding' queue and so must be a unique Redis key name. Therefore, changing it to Redis::throttle('throttle:queues:processjob') worked fine for me.
The workings
When I first looked into this, I assumed that Redis::throttle('this') throttled the queue that you specified. To some degree this is correct, but it will not work if the job was created via another means.
Redis::throttle('this') actually creates a new 'holding' queue where the jobs go until the condition(s) you specify are met. So jobs will go to the queue 'this' in this example and when the throttle trigger is released, they will be passed to the queue specified in their execution code. In this case, 'queues:processjob'.
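So the original handle() only needs a throttle key that is distinct from the queue's own Redis list key:

public function handle()
{
    // 'throttle:queues:processjob' is a dedicated key for the throttle's
    // own bookkeeping, so it no longer collides with the queue's list key.
    Redis::throttle('throttle:queues:processjob')->allow(1)->every(60)->then(function () {
        Storage::disk('local')->append('testFile.txt', date('Y-m-d H:i:s'));
    }, function () {
        return $this->release(10);
    });
}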
I hope this helps!
I need to run another function without blocking the request.
How do I use a thread in Laravel 5.6?
For example:
public function index()
{
    $id = "123456";
    $this->run_bot($id);
    return view("index");
}
The run_bot function takes about 10 minutes!
I need to run run_bot in a thread.
How do I create a thread in Laravel 5.6?
Look into Symfony's Process component.
As an example, you can start the process and then later wait for it to complete:
use Symfony\Component\Process\Process;

$process = new Process('ls -lsa');
$process->start();

// ... do other things

// this is optional; you don't need to wait if it's not necessary
$process->wait();
The solution you are looking for is how to run asynchronous jobs. This can be done with a queue service (like AWS SQS) and the Laravel queue worker.
It lets you dispatch the job (a very lightweight operation, so it's fast) and then, asynchronously, retrieve and execute it.
Everything you need to know is here:
https://laravel.com/docs/5.6/queues
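For example, a minimal sketch of moving run_bot into a queued job (RunBotJob is a hypothetical name):

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class RunBotJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    private $id;

    public function __construct($id)
    {
        $this->id = $id;
    }

    public function handle()
    {
        // The ~10 minute bot logic runs here, on the queue worker,
        // not during the web request.
    }
}

// In the controller: dispatch and return immediately.
public function index()
{
    RunBotJob::dispatch("123456");
    return view("index");
}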
Let me know if it helped you :)
I am working with scheduling in Laravel 5.3. Previously, I was using one server to host the laravel application. Now that I am using two servers to run the Laravel App, how do I ensure that both servers are not running the same jobs at the same time?
Recently, I saw an Event method called "withoutOverlapping()". See https://laravel.com/docs/5.3/scheduling#preventing-task-overlaps
In my case, withoutOverlapping() cannot help me as I am working in a clustered environment.
Are there any workarounds or suggestions regarding this?
First of all, decide whether or not it is critical to avoid running a task multiple times.
For example, if your app uses a task to do some sort of cleanup, there is almost no drawback to running it on every server (who cares if you try to delete 10-minute-old messages twice?).
If it is absolutely critical to run every task only once, you'll need to define a "main server" that executes the tasks, and secondary servers that just answer requests but don't perform any tasks. This is quite trivial, as you just have to give each environment a different name in your .env and test against that when you define the scheduler tasks.
This is the easiest way; seriously, don't bother building a database locking mechanism or whatever to synchronise tasks across servers. Even OSs struggle to properly synchronise threads on the same machine, so why would you implement the same across different machines?
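A minimal sketch of that check in app/Console/Kernel.php, assuming an illustrative APP_ROLE entry in each server's .env (emails:send stands in for your own command):

protected function schedule(Schedule $schedule)
{
    // Only the machine whose .env sets APP_ROLE=scheduler runs the tasks;
    // APP_ROLE is an illustrative name, not a Laravel convention.
    if (env('APP_ROLE') === 'scheduler') {
        $schedule->command('emails:send')->daily();
    }
}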
Here's what I've done when I ran into the same problems with load balancing:
use Exception;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Redis;

abstract class MutexCommand extends Command
{
    private $hash = null;

    public function cleanup()
    {
        if (is_string($this->hash)) {
            Redis::del($this->hash);
            $this->hash = null;
        }
    }

    abstract protected function generateHash();
    abstract protected function handleInternal();

    final public function handle()
    {
        register_shutdown_function([$this, "cleanup"]);
        try {
            $this->hash = $this->generateHash();
            // Set a value only if the key does not exist, atomically; fails if it does.
            // Essentially SETNX is the mechanism used to acquire the lock.
            if (!Redis::setnx($this->hash, true)) {
                $this->hash = null; // Prevent another instance's lock from being cleaned up
                throw new Exception("Already running");
            }
            $this->handleInternal();
        } finally {
            $this->cleanup();
        }
    }
}
Then you can write your commands:
class ThisShouldNotOverlap extends MutexCommand
{
    protected function generateHash()
    {
        // A unique key for the mutex; you can just use the class name:
        return static::class;
    }

    protected function handleInternal() { /* do stuff */ }
}
Then whenever you try to run the same command on multiple instances one would successfully acquire the "lock" and the others should fail.
Of course this assumes that you are using a non-clustered redis cache.
If you are not using Redis, there are probably similar locking mechanisms you can implement in other caches. If you are using a clustered Redis, you may need to use the Redlock locking algorithm.
Essentially no, there's no natural way in Laravel to know whether another Laravel app has the same job on the job dispatcher.
We have some options for finding a solution:
Create an intermediate app that manages the jobs from the other apps.
Allow only one app to dispatch jobs.
Use worker queues; you have some packages for this. I would recommend using Laravel 5 with WebSockets and queues processed asynchronously.
First of all, the Laravel scheduler isn't designed to work in a clustered environment; it was never intended to be used that way.
I would suggest having a dedicated cron instance that manages your Laravel scheduler jobs.
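In other words, only that dedicated instance gets the standard scheduler crontab entry:

* * * * * php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1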
Is it possible to use dispatchShell from a Controller?
My mission is to start a shell job when the user has signed up.
I'm using CakePHP 2.0
If you can't avoid the need to do this, as dogmatic suggests, then read on.
So you have a (potentially) long-running job you want to perform and you don't want the user to wait.
As the PHP code your user is executing happens during a request started by Apache, any code that is executed will stall that request until its completion (unless you hit Apache's request timeout).
If the above isn't acceptable for your application, then you will need to trigger PHP outside the Apache request (i.e. from the command line).
Usability-wise, at this point it would make sense to notify your user that you are processing data in the background. Anything from a message telling them they can check back later to a spinning progress bar that polls your application over ajax to detect job completion.
The simplest approach is to have a cronjob that executes a PHP script (i.e. a CakePHP shell) on some interval (at minimum, once per minute). Here you can perform such tasks in the background.
Some issues arise with background jobs, however. How do you know when they failed? How do you know when you need to retry? What if the job doesn't complete within the cron interval? Will a race condition occur?
The proper, but more complicated, setup would be to use a work/message queue system. They allow you to handle the above issues more gracefully, but generally require you to run a background daemon on a server to catch and handle incoming jobs.
The way this works is, in your code (when a user registers) you insert a job into the queue. The queue daemon picks up the job instantly (it doesn't run on an interval so it's always waiting) and hands it to a worker process (a CakePHP shell for example). It's instant and - if you tell it - it knows if it worked, it knows if it failed, it can retry if you want and it doesn't accidentally handle the same job twice.
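For illustration, a rough sketch of that flow with the Pheanstalk client (the tube name and payload are made up; the API shown matches the older Pheanstalk 2.x/3.x releases of that era):

// Producer side, e.g. in your signup code:
$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');
$pheanstalk->useTube('signups')->put(json_encode(array('user_id' => $userId)));

// Worker side, e.g. a CakePHP shell kept alive by supervisord:
$job = $pheanstalk->watch('signups')->reserve();
// ... process the job, then acknowledge it:
$pheanstalk->delete($job);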
There are a number of these available, such as Beanstalkd, dropr, Gearman, RabbitMQ, etc. There are also a number of CakePHP plugins (of varying age) that can help:
cakephp-queue (MySQL)
CakePHP-Queue-Plugin (MySQL)
CakeResque (Redis)
cakephp-gearman (Gearman)
and others.
I have had experience using CakePHP with both Beanstalkd (+ the PHP Pheanstalk library) and the CakePHP Queue plugin (first one above). I have to credit Beanstalkd (written in C) for being very lightweight, simple and fast. However, with regards to CakePHP development, I found the plugin faster to get up and running because:
The plugin comes with all the PHP code you need to get started. With Beanstalkd, you need to write more code (such as a PHP daemon that polls the queue looking for jobs)
The Beanstalkd server infrastructure becomes more complex. I had to install multiple instances of beanstalkd for dev/test/prod, and install supervisord to look after the processes).
Developing/testing is a bit easier since it's a self-contained CakePHP + MySQL solution. You simply need to type cake queue add user signup and cake queue runworker.
I was able to run a console shell from a controller action; see the example below.
App::uses('ShellDispatcher', 'Console');

...

public function aco_sync() {
    $command = '-app '.APP.' AclExtras.AclExtras aco_sync -r adminControllers -p UserAdmin';
    $args = explode(' ', $command);
    $dispatcher = new ShellDispatcher($args, false);
    if ($dispatcher->dispatch()) {
        $this->Session->setFlash('OK');
    } else {
        $this->Session->setFlash('Error');
    }
    return $this->redirect(array('action' => 'index'));
}
In CakePHP 3 you can dispatch shells from the controller and do it almost the same way as in CakePHP 2. The documentation does not mention this.
// In your controller:
$shell = new \Cake\Console\Shell;
$shell->dispatchShell('shell_class param1 param2');

// Or, as the docs suggest:
$shell->dispatchShell('shell_class', 'param1', 'param2');
Beware of stdout & stderr in unit tests.
Dispatching a shell turns on stdout and stderr logging with ConsoleLogger, and will dump all of that logging into your console if you have something like the snippet above in code that you are testing from PHPUnit.
function getEbayOrder() {
    $this->autoRender = false;
    App::import('Console/Command', 'AppShell');
    App::import('Console/Command', 'EbayShell');
    $job = new EbayShell();
    $job->dispatchMethod('get_orders');
    echo "RESPONSE";
}
Anything is possible, but why would you want to? If you find you need to do something both in a shell and in the actual application, look at using libs.
You stick the code in the lib and then call the lib from both your app and the shell, as in the sketch below.
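A minimal sketch of that pattern in CakePHP 2 (EbayOrders is a hypothetical class placed in app/Lib):

// app/Lib/EbayOrders.php
class EbayOrders {
    public function getOrders() {
        // shared logic lives here
    }
}

// From either a controller or a shell:
App::uses('EbayOrders', 'Lib');
$orders = new EbayOrders();
$orders->getOrders();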
If this is to initialize AclExtras, the best way is:
App::import('Console/Command', 'AppShell');
App::import('Plugin/AclExtras/Console/Command', 'AclExtrasShell');
$job = new AclExtrasShell();
$job->startup();
$job->dispatchMethod('aco_sync');
But avoid this unless you have no way to run the console script.