I use the command pattern in one of my applications, and I have the following problem:
Some commands need other commands to be created before they are executed.
The need to create new commands depends on the state of the application, so I cannot decide whether to create new commands when adding commands to the queue; I need to resolve it just before they are executed.
Specifically, I use commands to control a strategy game. I have a command to upgrade a building, and the upgrade costs resources.
When the resource price is higher than my storage capacity, the program should detect this and insert commands to upgrade the resource storages before the actual building upgrade. This is why I cannot resolve the need to upgrade storages when adding the command to the queue: the command could be executed many days later, and the storage levels will change over time.
The only option that came to my mind is to insert the new commands before the command that needs more resources than the storages can hold, and then restart the execution of the command queue from the beginning, but that is a really ugly solution.
Is there a design pattern that resolves a command's dependencies only when the command is first in the queue, and inserts those dependencies before it?
I need to add the storage-upgrade commands to the queue as well, so they can be persisted for later execution when I currently lack the resources to upgrade the storages.
My QueueConsumer, which contains the queue processing logic, looks like this:
public function processQueue()
{
    $failedCommands = [];
    $success = false;
    $queue = $this->queueManager->getQueue();
    foreach ($queue as $key => $command) {
        foreach ($this->processors as $processor) {
            if ($processor->canProcessCommand($command)) {
                $success = $processor->processCommand($command);
                // in the processCommand method I am able to resolve whether I need new commands (need to upgrade storages) or not
                break;
            }
        }
        if ($success) {
            $this->queueManager->removeFromQueue($command->getUuid());
        } else {
            $failedCommands[] = $command;
            break;
        }
    }
    if (count($failedCommands) > 0) {
        // determine when the failed commands could be processed successfully (enough resources and so on)
    }
}
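To illustrate, the insert-and-retry idea from above could look roughly like this inside the loop (a sketch only; getPrerequisites() and insertBefore() are hypothetical methods, not part of my current code):

foreach ($this->processors as $processor) {
    if ($processor->canProcessCommand($command)) {
        // Resolve dependencies lazily, only when the command is about to run.
        $prerequisites = $processor->getPrerequisites($command);
        if (count($prerequisites) > 0) {
            // Persist the new commands ahead of the blocked one, then restart the queue.
            $this->queueManager->insertBefore($command->getUuid(), $prerequisites);
            return $this->processQueue();
        }
        $success = $processor->processCommand($command);
        break;
    }
}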
Could you use an IoC container? It would resolve all dependencies for you.
I have an API function that stores a PDF file to my S3 bucket and then sends an email with the PDF file as an attachment.
Since this is my first time using jobs, I am confused: from what I understood, jobs execute in the background, so they should not affect the execution time of the function.
But instead, having these jobs makes the execution time almost 8 seconds.
Here's my function:
$is_exist = CoachingApplication::where('user_id', $userID)->first();
if ($is_exist == null) {
    $application = new CoachingApplication();
    $application->user_id = $userID;
    $application->applicant_name = $applicantName;
    $application->attachment = $filename;
    $application->instrument_rate = $instrumentRate;
    if ($application->save()) {
        if ($filename !== 'none') {
            StoreBucketJob::dispatch($userID, $filename, $attachment_fileArray)->delay(Carbon::now()->addSeconds(3));
        }
        SendEmailJob::dispatch($userID, $userName, $userSlug, $userEmail, $filename)->delay(Carbon::now()->addSeconds(3));
    }
}
If I remove these jobs, the function's execution time is 469ms.
Any idea why these jobs affect the API's execution time?
By default, the queue driver is set to sync and you are probably using it.
This queue driver means your jobs will be executed within the current process and will not be dispatched in an actual queue (which is pretty useful during development).
A good way to be 100% sure that your jobs are indeed executed synchronously would be to just put a dd("ok"); at the first line of the handle method inside your job; handle is only executed when the job runs, not when it is dispatched.
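For example (a minimal sketch; the real job logic is elided):

public function handle()
{
    dd("ok"); // if this halts your API request, the job ran synchronously in-process

    // ... actual job logic
}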
The queue driver can be updated by editing your .env file (look for QUEUE_CONNECTION).
There are many queue drivers available and some require additional dependencies, so you should check out the documentation available at https://laravel.com/docs/8.x/queues.
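For example, switching to the database driver (this assumes you have created the jobs table with php artisan queue:table and php artisan migrate):

# .env
QUEUE_CONNECTION=database

Then run a worker in a separate process, so that dispatched jobs no longer block the request:

php artisan queue:work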
I am trying to test that some data gets populated on a page, which is done by a job. In my testing environment, the queue isn't running.
Is there any way to manually run the jobs from a function in a controller? I have retrieved all Jobs from my database by doing the following:
$allJobs = Jobs::all();
foreach ($allJobs as $job) {
    // $job->handle(); ????
}
What I would like is to iterate over each job and process them myself. My test suite can wait for these jobs to be processed. I can't seem to find any documentation about this. Thanks!
If the goal is to be able to write tests for your jobs, it is fairly simple:
public function testJobsEvents()
{
    $job = new \App\Jobs\YourJob;
    $job->handle();

    // Assert the side effect of your job...
}
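If you instead need to drain jobs that were actually pushed onto the database queue, one option (assuming a Laravel version where queue:work supports the --once flag) is to run the worker once per pending job from your test:

use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\DB;

// Process queued jobs one at a time until the jobs table is empty.
while (DB::table('jobs')->count() > 0) {
    Artisan::call('queue:work', ['--once' => true]);
}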
I am working with scheduling in Laravel 5.3. Previously, I was using one server to host the laravel application. Now that I am using two servers to run the Laravel App, how do I ensure that both servers are not running the same jobs at the same time?
Recently, I saw an Event method called "withoutOverlapping()". See https://laravel.com/docs/5.3/scheduling#preventing-task-overlaps
In my case, withoutOverlapping() cannot help me as I am working in a clustered environment.
Are there any workarounds or suggestions regarding this?
First of all, define whether it is critical to avoid running a task multiple times.
For example, if your app is using a task to do some sort of cleanup, there is almost no drawback to running it on every server (who cares if you try to delete messages older than 10 minutes twice?).
If it is absolutely critical to run every task only one time, you'll need to define a "main server" that will execute tasks, and slave servers that will just answer requests but not perform any tasks. This is quite trivial, as you just have to give every env a different name in your .env and test against that when you define the scheduler tasks.
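A minimal sketch of that approach (APP_ROLE is a hypothetical variable you would add to each server's .env):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Only the designated main server registers scheduled tasks.
    if (env('APP_ROLE') === 'main') {
        $schedule->command('emails:send')->daily();
    }
}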
This is the easiest way; seriously, don't bother building a database locking mechanism or whatever so you can synchronise tasks across servers. Even OSs struggle to properly manage synchronisation between threads on the same machine; why would you want to implement the same across different machines?
Here's what I've done when I ran into the same problems with load balancing:
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Redis;

abstract class MutexCommand extends Command {
    private $hash = null;

    public function cleanup() {
        if (is_string($this->hash)) {
            Redis::del($this->hash);
            $this->hash = null;
        }
    }

    abstract protected function generateHash();
    abstract protected function handleInternal();

    final public function handle() {
        register_shutdown_function([$this, "cleanup"]);
        try {
            $this->hash = $this->generateHash();
            // Set a value only if it does not exist, atomically; this fails if the key exists.
            // Essentially setnx is the mechanism to acquire the lock.
            if (!Redis::setnx($this->hash, true)) {
                $this->hash = null; // Prevent the other instance's lock from being cleaned up
                throw new \Exception("Already running");
            }
            $this->handleInternal();
        } finally {
            $this->cleanup();
        }
    }
}
Then you can write your commands:
class ThisShouldNotOverlap extends MutexCommand {
    public function generateHash() {
        return "Unique key for mutex; you can just use the class name if you want by doing return static::class";
    }

    public function handleInternal() { /* do stuff */ }
}
Then whenever you try to run the same command on multiple instances one would successfully acquire the "lock" and the others should fail.
Of course this assumes that you are using a non-clustered redis cache.
If you are not using Redis then there are probably similar locking mechanisms you can implement in other caches; if you are using a clustered Redis then you may need to use the RedLock locking mechanism.
Essentially no, there's no natural way in Laravel to know whether another Laravel app has the same job on the job dispatcher.
There are some options to work around this:
Create an intermediate app that manages the jobs from the other apps.
Allow only one app to dispatch jobs.
Use worker queues; there are some packages for this. I would recommend Laravel 5 with WebSockets and queueing asynchronously.
First of all, the Laravel scheduler isn't designed to work in a clustered environment; it was never intended to be that way.
I would suggest you should have a dedicated cron instance which manages your Laravel scheduler jobs.
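On that single dedicated instance, the crontab entry would be the standard Laravel one:

* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1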
Very new to queues, so be gentle. To my understanding, $job->release() is supposed to put the job back on the queue. I currently have the code below, but it only runs the job through the queue once. I need to be able to run it through up to 5 times and, if it still fails, delete it or something.
Worker:
public function fire($job, $data)
{
    if ($job->attempts() < 5) {
        \Log::error($job->attempts());
        $job->release();
    }
}
PUSH!:
Queue::push(
    'ClassName',
    [
        'path' => $path,
    ]
);
Trying to do this locally with sync. Tried running queue:listen and queue:work, then running the push code. Only logged 1 entry. Let me know if you need more info.
Turns out $job->release() doesn't work when using the sync driver.
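If you want release() to actually re-queue the job locally, one option is to switch to a real driver such as database (depending on your Laravel version this lives in app/config/queue.php or in .env as QUEUE_DRIVER) and run a listener in a separate process:

php artisan queue:listen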
Is it possible to use dispatchShell from a Controller?
My mission is to start a shell job when the user has signed up.
I'm using CakePHP 2.0
If you can't mitigate the need to do this as dogmatic suggests, then read on.
So you have a (potentially) long-running job you want to perform and you don't want the user to wait.
As the PHP code your user is executing happens during a request that has been started by Apache, any code that is executed will stall that request until its completion (unless you hit Apache's request timeout).
If the above isn't acceptable for your application, then you will need to trigger PHP outside the Apache request (i.e. from the command line).
Usability-wise, at this point it would make sense to notify your user that you are processing data in the background. Anything from a message telling them they can check back later to a spinning progress bar that polls your application over ajax to detect job completion.
The simplest approach is to have a cronjob that executes a PHP script (i.e. a CakePHP shell) at some interval (at minimum, once per minute). Here you can perform such tasks in the background.
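For example, a crontab entry running a CakePHP 2 shell every minute could look like this (JobRunner is a made-up shell name):

# run the JobRunner shell every minute
* * * * * /path/to/app/Console/cake job_runner >> /dev/null 2>&1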
Some issues arise with background jobs, however. How do you know when they failed? How do you know when you need to retry? What if a job doesn't complete within the cron interval? Will a race condition occur?
The proper, but more complicated, setup would be to use a work/message queue system. They allow you to handle the above issues more gracefully, but generally require you to run a background daemon on a server to catch and handle any incoming jobs.
The way this works is, in your code (when a user registers) you insert a job into the queue. The queue daemon picks up the job instantly (it doesn't run on an interval so it's always waiting) and hands it to a worker process (a CakePHP shell for example). It's instant and - if you tell it - it knows if it worked, it knows if it failed, it can retry if you want and it doesn't accidentally handle the same job twice.
There are a number of these available, such as Beanstalkd, dropr, Gearman, RabbitMQ, etc. There are also a number of CakePHP plugins (of varying age) that can help:
cakephp-queue (MySQL)
CakePHP-Queue-Plugin (MySQL)
CakeResque (Redis)
cakephp-gearman (Gearman)
and others.
I have had experience using CakePHP with both Beanstalkd (+ the PHP Pheanstalk library) and the CakePHP Queue plugin (first one above). I have to credit Beanstalkd (written in C) for being very lightweight, simple and fast. However, with regards to CakePHP development, I found the plugin faster to get up and running because:
The plugin comes with all the PHP code you need to get started. With Beanstalkd, you need to write more code (such as a PHP daemon that polls the queue looking for jobs).
The Beanstalkd server infrastructure becomes more complex (I had to install multiple instances of beanstalkd for dev/test/prod, and install supervisord to look after the processes).
Developing/testing is a bit easier since it's a self-contained CakePHP + MySQL solution. You simply need to type cake queue add user signup and cake queue runworker.
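For reference, the kind of polling daemon you would have to write yourself for Beanstalkd is short but non-trivial. A minimal sketch using the Pheanstalk library (the 'signup' tube name and JSON payload format are made up):

$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');

// Block until a job arrives on the 'signup' tube, process it, then delete it.
while (true) {
    $job = $pheanstalk->watch('signup')->ignore('default')->reserve();
    $data = json_decode($job->getData(), true);
    // ... process $data, e.g. hand it to a CakePHP shell task ...
    $pheanstalk->delete($job);
}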
I was able to run a console shell from a controller action; see the example below.
App::uses('ShellDispatcher', 'Console');

...

public function aco_sync() {
    $command = '-app '.APP.' AclExtras.AclExtras aco_sync -r adminControllers -p UserAdmin';
    $args = explode(' ', $command);
    $dispatcher = new ShellDispatcher($args, false);

    if ($dispatcher->dispatch()) {
        $this->Session->flash('OK');
    } else {
        $this->Session->flash('Error');
    }

    return $this->redirect(array('action' => 'index'));
}
In CakePHP 3 you can dispatch shells from the controller almost the same way as in CakePHP 2. The documentation does not mention this.
// in your controller:
$shell = new \Cake\Console\Shell;
$shell->dispatchShell('shell_class param1 param2');
// or how the docs suggest
$shell->dispatchShell('shell_class', 'param1', 'param2');
Beware of stdout & stderr in unit tests.
Dispatching a shell turns on stdout and stderr logging with ConsoleLogger, and it will give you all the logging in your console if you have something like the code snippet above in code that you are testing with PHPUnit.
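If that output pollutes your PHPUnit runs, you can drop those loggers again after dispatching (this assumes the shell registered them under the default 'stdout' and 'stderr' keys):

use Cake\Log\Log;

Log::drop('stdout');
Log::drop('stderr');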
function getEbayOrder() {
    $this->autoRender = false;

    App::import('Console/Command', 'AppShell');
    App::import('Console/Command', 'EbayShell');

    $job = new EbayShell();
    $job->dispatchMethod('get_orders');

    echo "RESPONSE";
}
Anything is possible, but why would you want to? If you find you need to do something in both a shell and the actual application, look at using libs.
You stick the code in the lib and then call the lib from both your app and the shell.
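A rough sketch of that layout in CakePHP 2 (OrderLib is a made-up class name):

// app/Lib/OrderLib.php
class OrderLib {
    public function fetchOrders() {
        // shared logic used by both the controller and the shell
    }
}

// in a controller action or a shell:
App::uses('OrderLib', 'Lib');
$lib = new OrderLib();
$orders = $lib->fetchOrders();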
If this is to initialize AclExtras, the best way is:
App::import('Console/Command', 'AppShell');
App::import('Plugin/AclExtras/Console/Command', 'AclExtrasShell');
$job = new AclExtrasShell();
$job->startup();
$job->dispatchMethod('aco_sync');
But avoid this unless you have no possibility of running the console script.