Configure time of Laravel scheduler from the database

I have a Laravel 5.2 application and I'm currently using the scheduler to run a script every 30 minutes. I'm wondering if that interval can be retrieved from the database: I want the admin user to configure it from the webpage, and I already have the field in the database. I'm not sure whether the scheduler can read that time from the database, so, is it possible?
I also set:
->sendOutputTo("/var/www/html/laravelProject/public/output")
to save the output to a file named output, but it doesn't seem to be working even though the cron job is executed. I checked the cron log files and they only show that there is no MTA installed, but for the moment I don't want to send an e-mail, I just want to save the output to a file. So, what am I missing there?

You can use when.
The when method may be used to limit the execution of a task based on the result of a given truth test.
$schedule->command('yourcommand:execute')->everyMinute()->when(function () use ($timefromdb) {
    // $timefromdb is whatever truth test you build from the value stored in the database
    return (bool) $timefromdb;
});
https://laravel.com/docs/master/scheduling#schedule-frequency-options
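For the original goal (an interval the admin stores in the database), a rough sketch in app/Console/Kernel.php could look like the following. The settings table and run_interval column are assumptions; adjust them to your own schema:
use Carbon\Carbon;
use Illuminate\Support\Facades\DB;

protected function schedule(Schedule $schedule)
{
    $schedule->command('yourcommand:execute')
        ->everyMinute()
        ->when(function () {
            // Hypothetical: the admin-configured interval, in minutes, read from a settings table.
            $interval = (int) DB::table('settings')->value('run_interval');

            // Run only when the current minute is a multiple of the configured interval
            // (works for intervals that divide evenly into an hour, e.g. 15 or 30 minutes).
            return $interval > 0 && Carbon::now()->minute % $interval === 0;
        });
}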

Related

Laravel Queue affecting the execution time of API function

I have an API function that stores pdf file to my s3 bucket and then sends email with the pdf file as an attachment.
Since this is my first time using jobs, I got confused: from what I understood, jobs execute in the background, so they should not affect the execution time of the function.
But instead, having these jobs makes the execution time almost 8 seconds.
Here's my function
$is_exist = CoachingApplication::where('user_id', $userID)->first();

if ($is_exist == null) {
    $application = new CoachingApplication();
    $application->user_id = $userID;
    $application->applicant_name = $applicantName;
    $application->attachment = $filename;
    $application->instrument_rate = $instrumentRate;

    if ($application->save()) {
        if ($filename !== 'none') {
            StoreBucketJob::dispatch($userID, $filename, $attachment_fileArray)
                ->delay(Carbon::now()->addSeconds(3));
        }
        SendEmailJob::dispatch($userID, $userName, $userSlug, $userEmail, $filename)
            ->delay(Carbon::now()->addSeconds(3));
    }
}
If I remove these jobs, the function's execution time is 469ms.
Any idea why these jobs affect the API's execution time?
By default, the queue driver is set to sync, and you are probably using it.
This queue driver means your jobs will be executed within the current process and will not be dispatched to an actual queue (which is pretty useful during development).
A good way to be 100% sure that your jobs are indeed executed synchronously is to put a dd("ok"); on the first line of the handle method inside your job. handle is only executed when the job runs, not when it is dispatched.
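For example, a quick sketch of that check inside the job class:
public function handle()
{
    dd("ok"); // if this dumps during the HTTP request itself, the job ran synchronously

    // ... rest of the job
}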
The queue driver can be changed by editing your .env file (look for QUEUE_CONNECTION).
There are many queue drivers available and some require additional dependencies, so you should check out the documentation at https://laravel.com/docs/8.x/queues.
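As a sketch, switching to the database driver (one option among several) looks like this:
# .env
QUEUE_CONNECTION=database

# create the jobs table, migrate, and run a worker that processes jobs asynchronously
php artisan queue:table
php artisan migrate
php artisan queue:work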

Laravel Jobs fail on Redis when attempting to use throttle

End Goal
The aim is for my application to fire off a potentially large number of emails to the Redis queue (this bit is working) and then have Redis throttle the processing of these to only a set number of emails every selected number of minutes.
For this example, I have a test job that appends the time to a file and I am attempting to throttle it to once every 60 seconds.
The story so far....
So far, I have the application successfully pushing a test amount of 50 jobs to the Redis queue. I can log in to Horizon and see these 50 jobs in the "processjob" queue. I can also log in to redis-cli and see 50 sets under the list key "queues:processjob".
My issue is that as soon as I attempt to put the throttle on, only 1 job runs and the rest fail with the following error:
Predis\Response\ServerException: ERR Error running script (call to f_29cc07bd431ccbf64637e5dcb60484560fdfa2da): #user_script:10: WRONGTYPE Operation against a key holding the wrong kind of value in /var/www/html/smhub/vendor/predis/predis/src/Client.php:370
If I remove the throttle, all works fine and the jobs are instantly run.
I thought maybe it was the incorrect key name but if I change the following:
public function handle()
{
    Redis::throttle('queues:processjob')->allow(1)->every(60)->then(function () {
        Storage::disk('local')->append('testFile.txt', date("Y-m-d H:i:s"));
    }, function () {
        return $this->release(10);
    });
}
to this:
public function handle()
{
    Redis::funnel('queues:processjob')->limit(1)->then(function () {
        Storage::disk('local')->append('testFile.txt', date("Y-m-d H:i:s"));
    }, function () {
        return $this->release(10);
    });
}
then it all works fine.
My thoughts...
Something tells me that the issue is that the Redis key is of type "list" and that the jobs are all under a single list. That being said, if it didn't work this way, how would we throttle a queue, since the throttle requires a unique key?
For anybody else that is having issues attempting to get this to work and is getting the same issue as I was, this is what resolved my issues:
The Fault
I assumed that Redis::throttle('queues:processjob') was meant to be referring to the queue that you wanted to be throttled. However, after some re-reading of the documentation and testing of the code, I realized that this was not the case.
The Fix
Redis::throttle('queues:processjob') is meant to point to its own 'holding' queue and so must be a unique Redis key name. Therefore, changing it to Redis::throttle('throttle:queues:processjob') worked fine for me.
The workings
When I first looked into this, I assumed that Redis::throttle('this') throttled the queue that you specified. To some degree this is correct, but it will not work if the job was created via another means.
Redis::throttle('this') actually creates a new 'holding' queue where the jobs go until the condition(s) you specify are met. So jobs will go to the queue 'this' in this example, and when the throttle trigger is released they will be passed to the queue specified in their execution code, in this case 'queues:processjob'.
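Putting it together, the handle method from the question only needs the throttle key changed:
public function handle()
{
    // Use a dedicated throttle key, distinct from the queue's own Redis list key.
    Redis::throttle('throttle:queues:processjob')->allow(1)->every(60)->then(function () {
        Storage::disk('local')->append('testFile.txt', date("Y-m-d H:i:s"));
    }, function () {
        // Lock not obtained: release the job back onto the queue to retry in 10 seconds.
        return $this->release(10);
    });
}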
I hope this helps!

How can I troubleshoot silently failing queued jobs?

I have a job that is dispatched with two arguments - path and filename to a file. The job parses the file using simplexml, then makes a notice of it in the database and moves the file to the appropriate folder for safekeeping. If anything goes wrong, it moves the file to another folder for failed files, as well as creates an event to give me a notification.
My problem is that sometimes the job will fail silently. The job is removed from the queue, but the file has not been parsed and it remains in the same directory. The failed_jobs table is empty (I'm using the database queue driver for development) and the failed() method has not been triggered. The Queue::failing() method I put in the app service provider has not been triggered either - I know, since both of those contain only a single log call to check whether they were hit. The Laravel log is empty (it's readable and Laravel does write to it for other errors - I double-checked) and so are relevant system log files such as PHP's.
At first I thought it was a timeout issue, but the queue listener has not failed or stopped, nor been restarted. I increased the timeout to 300 seconds anyway, and verified that all of the "[datetime] Processed: [job]" lines the listener generates were well within that timespan. PHP execution time limits etc. are also far longer than required for this job.
So how on earth can I troubleshoot this when the logs are empty, nothing appears to fail, and I get no notification of what's wrong? If I queue up 200 files then maybe 180 will be processed and the remaining 20 fail silently. If I refresh the database + migrations and queue up the same 200 again, then maybe 182 will be processed and 18 will fail silently - but they won't necessarily be the same.
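For reference, the hooks mentioned above contain nothing but log calls, roughly like this (a simplified sketch; the messages are placeholders):
// In the job class
public function failed(Exception $exception)
{
    Log::error('failed() was called: '.$exception->getMessage());
}

// In AppServiceProvider::boot() (with use Illuminate\Queue\Events\JobFailed;)
Queue::failing(function (JobFailed $event) {
    Log::error('Queue::failing fired: '.$event->exception->getMessage());
});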
My handle method, simplified to show relevant bits, looks as follows:
public function handle()
{
    try {
        $xml = simplexml_load_file($this->path.$this->filename);
        $this->parse($xml);
        $parsedFilename = config('feeds.parsed path').$this->filename;
        File::move($this->path.$this->filename, $parsedFilename);
    } catch (Exception $e) {
        // if I put deliberate errors in the files, this works fine
        $errorFilename = config('feeds.error path').$this->filename;
        File::move($this->path.$this->filename, $errorFilename);
        event(new ParserThrewAnError($this->filename));
    }
}
Okay, so I still have absolutely no idea why, but... after restarting the VM I have tested eight times with various different files and options and had zero problems. If anyone can guess the reason, feel free to reply and I'll accept your answer if it sounds reasonable. For now, I'll mark my own answer as correct once I can, in case somebody else stumbles across this later.

Programmatically change database for heroku dataclips

We just upgraded our Heroku postgres database using the follower changeover method. We have over 50 dataclips attached to the old database, and now we need to move them over to the new database. However, doing them one by one will take a lot of time.
Is there a programmatic way to update the database a dataclip is attached to, perhaps with the CLI tools?
At least once the old database has been deprovisioned, you can now (as of March 2016) reattach them to another database:
Go to https://dataclips.heroku.com/clips/recoverable. It will display your old database and a set of 'orphaned' dataclips and you can choose to transfer them to another database (in my case the promoted follower from the changeover).
Note that this only affects the dataclips that you created, it does not affect the dataclips one of your team members created and that you only had access to. So they will have to go through this process as well.
Official devcenter article: https://devcenter.heroku.com/articles/dataclips#dataclip-recovery
Thanks to Heroku CSRF measures, programmatically updating data clips is much more difficult than you might expect. You'll need to suck it up and start clicking buttons by hand, or beg their support team to do it for you, which is just as difficult.
There is no official support for programmatically moving the dataclips. That being said, you can script it out against their HTTP API.
The base URL is https://dataclips.heroku.com/api/v1/. There are three relevant endpoints:
clips /clips
resources (databases) /heroku_resources
move clip /clips/:slug/move
Find the slug of the clip you want to move, find the resource id of the new database, and make a post to the move clip endpoint:
POST /api/v1/clips/fjhwieufysdufnjqqueyuiewsr/move
Content-Type: application/json

{"heroku_resource_id":"resource123456789@heroku.com"}
I had over 300 dataclips to move. I used the following technique to update them all (essentially reverse engineering the dataclips API).
Open Chrome with Web Developer tools, Network tab.
Log into Heroku Dataclips
Observe the network call which returns all the dataclips, in JSON (https://dataclips.heroku.com/api/v1/clips). Take this response and extract out all dataclip slugs.
Update the database for one dataclip. Observe the network call which does this (https://dataclips.heroku.com/api/v1/clips/:slug/move). Right click, Copy as cURL. This is the easiest way to get all the correct parameters, since the API uses cookies for authentication.
Write a script that loops through each dataclip slug, and shells out to curl. In Ruby, this looks like:
slugs = <paste ids here>.split("\n")

slugs.each do |slug|
  command = %Q(curl -v 'https://dataclips.heroku.com/api/v1/clips/#{slug}/move' -H 'Cookie: ...' --data '{"heroku_resource_id":"resource1234567@heroku.com"}')
  puts command
  system(command)
end
You can contact Heroku support, and they will bulk transfer the dataclips to your new database for you.
Batch working on dataclips
I've finally found a solution to work on my dataclips as a batch, using the JavaScript console and some scraping techniques. I needed it to retrieve every dataclip, but I guess it can be adapted as such:
// Go to the dataclip listing (https://data.heroku.com/dataclips).
// Then execute this script in your console.
// Be careful, this will focus a new window every 4 seconds, preventing
// you from working 4 seconds times the number of dataclips you have.

// Retrieve urls and titles
let dataclips = Array
  .from(document.querySelectorAll('.rt-td:first-child a'))
  .map(el => ({ url: el.href, title: el.innerText }))

/**
 * Allows waiting for a given timeout before execution.
 * @param {number} seconds
 */
const timeout = function (seconds) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve()
    }, seconds)
  })
}

/**
 * Here are all the changes you want to apply to every single
 * dataclip.
 * @param {object} window
 */
const applyChanges = function (window) {
}

// With a fast connection, 4 seconds is OK. Dial it down if you
// have errors.
const expectedLoadTime = 4000 // ms

// This is the main loop, windows are opened one by one to ensure focus and a
// correct loading time.
for (const dataclip of dataclips) {
  // This opens another window from the script, having access to its DOM.
  // See https://github.com/buonomo/kazoo for a funnier example usage!
  // And don't be shy to star and share :D
  const externWindow = window.open(dataclip.url)
  // A hack to wait for loading, this could be improved for sure.
  await timeout(expectedLoadTime)
  applyChanges(externWindow)
  externWindow.close()
}
You'd still have to implement applyChanges yourself, which I concede is a bit tedious, and I don't have time to do it now (if someone does, please share!). But at least it can be done on all of your dataclips in a single function.
For an example usage of this script, you can take a look at the gist I made to scrape every dataclip and related errors.

CakePHP: Run shell job from controller

Is it possible to use dispatchShell from a Controller?
My mission is to start a shell job when the user has signed up.
I'm using CakePHP 2.0
If you can't mitigate the need to do this, as dogmatic suggests, then read on.
So you have a (potentially) long-running job you want to perform and you don't want the user to wait.
As the PHP code your user is executing happens during a request that has been started by Apache, any code that is executed will stall that request until its completion (unless you hit Apache's request timeout).
If the above isn't acceptable for your application then you will need to trigger PHP outside of the Apache request (i.e. from the command line).
Usability-wise, at this point it would make sense to notify your user that you are processing data in the background. Anything from a message telling them they can check back later to a spinning progress bar that polls your application over ajax to detect job completion.
The simplest approach is to have a cronjob that executes a PHP script (ie. CakePHP shell) on some interval (at minimum, this is once per minute). Here you can perform such tasks in the background.
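For example, a crontab entry along these lines (the path and shell name are hypothetical) runs a CakePHP 2 shell every minute:
* * * * * cd /var/www/myapp/app && Console/cake process_signups >> /var/log/myapp_cron.log 2>&1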
Some issues arise with background jobs however. How do you know when they failed? How do you know when you need to retry? What if it doesn't complete within the cron interval.. will a race-condition occur?
The proper, but more complicated setup, would be to use a work/message queue system. They allow you to handle the above issues more gracefully, but generally require you to run a background daemon on a server to catch and handle any incoming jobs.
The way this works is, in your code (when a user registers) you insert a job into the queue. The queue daemon picks up the job instantly (it doesn't run on an interval so it's always waiting) and hands it to a worker process (a CakePHP shell for example). It's instant and - if you tell it - it knows if it worked, it knows if it failed, it can retry if you want and it doesn't accidentally handle the same job twice.
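As a rough sketch of the producer side with Beanstalkd and the Pheanstalk library (mentioned below), where the tube name and payload are made up for illustration:
// e.g. in the controller action that handles the signup (Pheanstalk ~3.x style)
$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');
$pheanstalk->useTube('signup')->put(json_encode(array('user_id' => $userId)));

// A separate worker process (e.g. a CakePHP shell kept alive by supervisord)
// reserves jobs from the same tube and does the actual work.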
There are a number of these available, such as Beanstalkd, dropr, Gearman, RabbitMQ, etc. There are also a number of CakePHP plugins (of varying age) that can help:
cakephp-queue (MySQL)
CakePHP-Queue-Plugin (MySQL)
CakeResque (Redis)
cakephp-gearman (Gearman)
and others.
I have had experience using CakePHP with both Beanstalkd (+ the PHP Pheanstalk library) and the CakePHP Queue plugin (first one above). I have to credit Beanstalkd (written in C) for being very lightweight, simple and fast. However, with regards to CakePHP development, I found the plugin faster to get up and running because:
The plugin comes with all the PHP code you need to get started. With Beanstalkd, you need to write more code (such as a PHP daemon that polls the queue looking for jobs)
The Beanstalkd server infrastructure becomes more complex: I had to install multiple instances of beanstalkd for dev/test/prod, and install supervisord to look after the processes.
Developing/testing is a bit easier since it's a self-contained CakePHP + MySQL solution. You simply need to type cake queue add user signup and cake queue runworker.
I was able to run a console shell from a controller action; see the example below.
App::uses('ShellDispatcher', 'Console');

// ...

public function aco_sync() {
    $command = '-app '.APP.' AclExtras.AclExtras aco_sync -r adminControllers -p UserAdmin';
    $args = explode(' ', $command);
    $dispatcher = new ShellDispatcher($args, false);

    if ($dispatcher->dispatch()) {
        $this->Session->flash('OK');
    } else {
        $this->Session->flash('Error');
    }

    return $this->redirect(array('action' => 'index'));
}
In CakePHP-3 you can dispatch shells from the controller & do it almost the same as in CakePHP-2. The documentation does not mention this.
// in your controller:
$shell = new \Cake\Console\Shell;
$shell->dispatchShell('shell_class param1 param2');
// or how the docs suggest
$shell->dispatchShell('shell_class', 'param1', 'param2');
Beware of stdout & stderr in unit tests.
Dispatching a shell turns on stdout and stderr logging with ConsoleLogger, and will give you all the logging in your console if you have something like the code snippet above in code that you are testing from phpunit.
function getEbayOrder() {
    $this->autoRender = false;

    App::import('Console/Command', 'AppShell');
    App::import('Console/Command', 'EbayShell');

    $job = new EbayShell();
    $job->dispatchMethod('get_orders');

    echo "REPONSE";
}
Anything is possible, but why would you want to? If you find you need to do something in both a shell and the actual application, look at using libs.
You put the code in the lib and then call the lib from both your app and the shell.
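A minimal sketch of that idea in CakePHP 2 (the lib name and method are made up):
// app/Lib/SignupProcessor.php
class SignupProcessor {
    public function process($userId) {
        // the shared logic lives here
    }
}

// From a controller or a shell:
App::uses('SignupProcessor', 'Lib');
$processor = new SignupProcessor();
$processor->process($userId);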
If this is to initialize AclExtras, the best way is:
App::import('Console/Command', 'AppShell');
App::import('Plugin/AclExtras/Console/Command', 'AclExtrasShell');
$job = new AclExtrasShell();
$job->startup();
$job->dispatchMethod('aco_sync');
But avoid this unless you have no possibility to run the console script.
