So, I'm trying to use the Commands class for the first time. I want to make the queue messages more readable than
[2018-09-01 17:57:47][276] Processing: Illuminate\Foundation\Console\QueuedCommand
What I have done is the following:
I registered the command ConvertRecording with protected $signature = 'recording:convert {recording_id}'; and protected $description = 'Convert a recording from mkv to mp4 using a recording id and making use of ffmpeg';. It has an empty constructor, since I don't need to have an object passed to it, and the handle method just has some working code and some $this->log() calls.
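So the class looks roughly like this (the handle() body is omitted; it just contains the working code and the log calls):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class ConvertRecording extends Command
{
    protected $signature = 'recording:convert {recording_id}';

    protected $description = 'Convert a recording from mkv to mp4 using a recording id and making use of ffmpeg';

    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        // ... working code and some log calls ...
    }
}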
Now, when I call the artisan command, I use the following code:
$exitCode = Artisan::queue('recording:convert', [
    'recording_id' => $recording_id
]);
And it appends to the queue, but I only get messages like these:
[2018-09-01 17:57:47][276] Processing: Illuminate\Foundation\Console\QueuedCommand
[2018-09-01 17:58:16][276] Processed: Illuminate\Foundation\Console\QueuedCommand
How could I change it to something like [2018-09-01 17:58:16] Processing: Video with ID [video ID here]?
It may be that you are looking to do something with Queued Commands that they are not really meant to do. What you are seeing in your logs is exactly what a Job is meant to do - that is, report when it is starting and when it has completed (or failed). It's the Command where the useful work is being done, and so all your output and logging should be done there.
Commands, naturally, have some console logging tools such as error, info and comment that can assist you in debugging:
$this->error('This is an error and will appear highlighted in the console');
$this->info('This is information');
$this->comment('This is a comment');
However, using these in a production environment may not work, as your queue workers won't have a console to log to (I may be mistaken, as I have never tried to check).
I recommend simply setting up a dedicated log file for your commands using a ServiceProvider.
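As a minimal sketch of that idea, assuming a recent Laravel version (5.6+) where log channels are configured in config/logging.php rather than in a ServiceProvider (the channel name 'recordings' and the messages are made up for illustration):

// config/logging.php -- add a dedicated channel for this command's output
'channels' => [
    // ... existing channels ...
    'recordings' => [
        'driver' => 'single',
        'path' => storage_path('logs/recordings.log'),
    ],
],

Then write to that channel from the command's handle method:

use Illuminate\Support\Facades\Log;

// inside ConvertRecording::handle()
$id = $this->argument('recording_id');
Log::channel('recordings')->info("Processing: video with ID {$id}");
// ... conversion work ...
Log::channel('recordings')->info("Processed: video with ID {$id}");

This way the queue worker still emits its generic Processing/Processed lines, but the detail you care about ends up in logs/recordings.log.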
I would like to save the data from the left-hand side of the TestRunner to a text file (JSON, plain text, or any kind of text).
I feel like this should be very easy and that I'm simply missing something; however, I cannot find anything to explain this. I have checked this other S.O. question, Cypress pipe console.log and command log to output, which references this currently open issue -- but that appears to be focused on collecting the browser's console log.
I even tried one of the workarounds suggested in the discussion of that open issue, cypress-log-to-output, but that put a ton of output in the terminal from which I launched the test. I did try to correlate the extra output to the relatively few entries from the TestRunner's left-hand side, but did not see anything to match them up.
I'm just hoping to get a text file that looks like this (with perhaps a bit of detail for each entry):
1 visit /
(xhr) GET 200 /todos
2 wait #todos
(req) GET /todos Received todos
...
Or perhaps JSON.
My motivation comes from having to write Cypress tests for our CI that will be testing a very old AjaxSwing-based application that makes heavy use of XHR requests, and it can be a different number of XHR requests for each test run (sometimes 8, sometimes 12 just to load the first page).
The AjaxSwing app is not changing, so I have to figure this out as best as possible. I want to see a whole text file with all the information from the TestRunner's left-hand side. Perhaps I could even compare separate runs to see if I can spot some "header" or "body" value I could use to distinguish the right XHR request to wait for.
Any help would be appreciated.
One approach is to use the log:added event:
// top of spec
const logs = []

Cypress.on('log:added', (log) => {
  const message = `${log.consoleProps.Command}: ${log.message}`
  logs.push(message)
})

it('writes to logs', () => {
  // ... some commands that log
  cy.writeFile('logs.txt', logs)
})
I have a job that is dispatched with two arguments: the path and filename of a file. The job parses the file using SimpleXML, then makes a note of it in the database and moves the file to the appropriate folder for safekeeping. If anything goes wrong, it moves the file to another folder for failed files, as well as creating an event to give me a notification.
My problem is that sometimes the job will fail silently. The job is removed from the queue, but the file has not been parsed and it remains in the same directory. The failed_jobs table is empty (I'm using the database queue driver for development) and the failed() method has not been triggered. The Queue::failing() method I put in the app service provider has not been triggered either -- I know, since both of those contain only a single log call to check whether they were hit. The Laravel log is empty (it's readable, and Laravel does write to it for other errors -- I double-checked) and so are relevant system log files, such as PHP's.
At first I thought it was a timeout issue, but the queue listener has not failed or stopped, nor been restarted. I increased the timeout to 300 seconds anyway, and verified that all of the "[datetime] Processed: [job]" lines the listener generates were well within that timespan. PHP execution time limits etc. are also far longer than required for this job.
So how on earth can I troubleshoot this when the logs are empty, nothing appears to fail, and I get no notification of what's wrong? If I queue up 200 files then maybe 180 will be processed and the remaining 20 fail silently. If I refresh the database + migrations and queue up the same 200 again, then maybe 182 will be processed and 18 will fail silently - but they won't necessarily be the same.
My handle method, simplified to show relevant bits, looks as follows:
public function handle()
{
    try {
        $xml = simplexml_load_file($this->path.$this->filename);
        $this->parse($xml);

        $parsedFilename = config('feeds.parsed path').$this->filename;
        File::move($this->path.$this->filename, $parsedFilename);
    } catch (Exception $e) {
        // if I put deliberate errors in the files, this works fine
        $errorFilename = config('feeds.error path').$this->filename;
        File::move($this->path.$this->filename, $errorFilename);
        event(new ParserThrewAnError($this->filename));
    }
}
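One detail I'm noting for reference (an assumption on my part, not a confirmed cause): simplexml_load_file() signals failure by returning false rather than throwing, so a parse failure would sail straight past the catch block. A guarded variant of the load step would be:

$xml = simplexml_load_file($this->path.$this->filename);

if ($xml === false) {
    // simplexml_load_file() returns false on failure instead of throwing,
    // so surface the failure as an exception the catch block can handle
    throw new Exception("Could not parse {$this->filename}");
}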
Okay, so I still have absolutely no idea why, but after restarting the VM I have tested eight times with various files and options and had zero problems. If anyone can guess the reason, feel free to reply and I'll accept your answer if it sounds reasonable. For now, I'll mark my own answer as correct once I can, in case somebody else stumbles across this later.
I'm trying to capture the output written by each task as it executes. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks run in parallel it picks up output written by other tasks running simultaneously.
The API documentation states the following about the getLogging() method on Task; from what it says, I judge that it should support capturing output from single tasks regardless of any other tasks running at the same time:
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task.
(https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html)
graph.allTasks.forEach { Task task ->
    task.ext.capturedOutput = []
    def listener = { task.capturedOutput << it } as StandardOutputListener

    task.logging.addStandardErrorListener(listener)
    task.logging.addStandardOutputListener(listener)

    task.doLast {
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This made me wonder why you weren't getting duplicate messages, since you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map keyed by the listener object itself, so you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth filing an issue to get some clarity on whether Gradle should support listeners per task. Alternatively, raise it on the dev mailing list.
I'm using Sidekiq to perform some heavy processing in the background. I looked online but couldn't find the answers to the following questions. I am using:
Class.delay.use_method(listing_id)
And then, inside the class, I have:
def self.use_method(listing_id)
  listing = Listing.find_by_id listing_id
  UserMailer.send_mail(listing)
  Class.call_example_function()
end
Two questions:
How do I make this function idempotent for the UserMailer send_mail? In other words, in case the delayed method runs twice, how do I make sure that it only sends the mail once? Would wrapping it in something like this work?
mail_sent = false
if !mail_sent
  UserMailer.send_mail(listing)
  mail_sent = true
end
I'm guessing not, since the function is tried again and mail_sent is reset to false for the second run-through. So how do I make it so that UserMailer is only run once?
Are functions called within the delayed async method also asynchronous? In other words, is Class.call_example_function() executed asynchronously (not part of the response/request cycle)? If not, should I use Class.delay.call_example_function()?
Overall, just getting familiar with Sidekiq so any thoughts would be appreciated.
Thanks
I'm coming into this late, but having been around the loop and found this Stack Overflow entry appearing prominently via Google, I think it needs clarification.
The issue of idempotency and the issue of unique jobs are not the same thing. The 'unique' gems look at the parameters of a job at the point it is about to be processed. If they find that there was another job with the same parameters which had been submitted within some expiry time window, then the job is not actually processed.
The gems are literally what they say they are: they consider whether an enqueued job is unique or not within a certain time window. They do not interfere with the retry mechanism. In the case of the O.P.'s question, the e-mail would still get sent twice if Class.call_example_function() threw an error, causing a job retry, when the previous line of code had already successfully sent the e-mail.
Aside: The sidekiq-unique-jobs gem mentioned in another answer has not been updated for Sidekiq 3 at the time of writing. An alternative is sidekiq-middleware which does much the same thing, but has been updated.
https://github.com/krasnoukhov/sidekiq-middleware
https://github.com/mhenrixon/sidekiq-unique-jobs (as previously mentioned)
There are numerous possible solutions to the O.P.'s email problem and the correct one is something that only the O.P. can assess in the context of their application and execution environment. One would be: If the e-mail is only going to be sent once ("Congratulations, you've signed up!") then a simple flag on the User model wrapped in a transaction should do the trick. Assuming a class User accessible as an association through the Listing via listing.user, and adding in a boolean flag mail_sent to the User model (with migration), then:
listing = Listing.find_by_id(listing_id)

unless listing.user.mail_sent?
  User.transaction do
    listing.user.mail_sent = true
    listing.user.save!
    UserMailer.send_mail(listing)
  end
end

Class.call_example_function()
...so that if the user mailer throws an exception, the transaction is rolled back and the change to the user's flag setting is undone. If the "call_example_function" code throws an exception, then the job fails and will be retried later, but the user's "e-mail sent" flag was successfully saved on the first try so the e-mail won't be resent.
Regarding idempotency, you can use https://github.com/mhenrixon/sidekiq-unique-jobs gem:
All that is required is that you specifically set the sidekiq option for unique to true, like below:
sidekiq_options unique: true
For jobs scheduled in the future it is possible to set for how long the job should be unique. The job will be unique for the number of seconds configured or until the job has been completed.
If you want the unique job to stick around even after it has been successfully processed, then just set the unique_unlock_order to anything except :before_yield or :after_yield (unique_unlock_order = :never).
I'm not sure I understand the second part of the question. When you delay a method call, the whole method call is deferred to the Sidekiq process. If by 'response / request cycle' you mean that you are running a web server and you call delay from there, then yes: all the calls within use_method are made from the Sidekiq process, and hence outside of that cycle. They are called synchronously relative to each other, though.
Is it possible to use dispatchShell from a Controller?
My mission is to start a shell job when the user has signed up.
I'm using CakePHP 2.0
If you can't mitigate the need to do this, as dogmatic suggests, then read on.
So you have a (potentially) long-running job you want to perform and you don't want the user to wait.
As the PHP code your user is executing happens during a request that has been started by Apache, any code that is executed will stall that request until its completion (unless you hit Apache's request timeout).
If the above isn't acceptable for your application, then you will need to trigger PHP outside the Apache request (i.e. from the command line).
Usability-wise, at this point it would make sense to notify your user that you are processing data in the background. Anything from a message telling them they can check back later to a spinning progress bar that polls your application over ajax to detect job completion.
The simplest approach is to have a cronjob that executes a PHP script (i.e. a CakePHP shell) on some interval (at minimum, once per minute). Here you can perform such tasks in the background.
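For example (the shell name and paths are illustrative, not from the question), a crontab entry for a CakePHP 2 shell might look like:

# Run a (hypothetical) process_signups shell once per minute
* * * * * cd /var/www/app && Console/cake process_signups >> /var/log/process_signups.log 2>&1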
Some issues arise with background jobs, however. How do you know when they failed? How do you know when you need to retry? What if the job doesn't complete within the cron interval -- will a race condition occur?
The proper, but more complicated, setup would be to use a work/message queue system. They allow you to handle the above issues more gracefully, but generally require you to run a background daemon on a server to catch and handle any incoming jobs.
The way this works is: in your code (when a user registers), you insert a job into the queue. The queue daemon picks up the job instantly (it doesn't run on an interval, so it's always waiting) and hands it to a worker process (a CakePHP shell, for example). It's instant and, if you tell it, it knows if it worked, it knows if it failed, it can retry if you want, and it doesn't accidentally handle the same job twice.
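As a rough sketch of that flow using Beanstalkd and the Pheanstalk PHP library mentioned further down (the tube name and payload are made up, and the exact API varies between Pheanstalk versions):

<?php
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = new Pheanstalk('127.0.0.1');

// Producer side: enqueue a job when the user registers
$pheanstalk->useTube('signups')->put(json_encode(array(
    'user_id' => 42,
)));

// Worker side (e.g. inside a long-running CakePHP shell):
$job = $pheanstalk->watch('signups')->reserve(); // blocks until a job arrives
$data = json_decode($job->getData(), true);
// ... process the signup for $data['user_id'] ...
$pheanstalk->delete($job); // remove the job once handled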
There are a number of these available, such as Beanstalkd, dropr, Gearman, RabbitMQ, etc. There are also a number of CakePHP plugins (of varying age) that can help:
cakephp-queue (MySQL)
CakePHP-Queue-Plugin (MySQL)
CakeResque (Redis)
cakephp-gearman (Gearman)
and others.
I have had experience using CakePHP with both Beanstalkd (+ the PHP Pheanstalk library) and the CakePHP Queue plugin (first one above). I have to credit Beanstalkd (written in C) for being very lightweight, simple and fast. However, with regards to CakePHP development, I found the plugin faster to get up and running because:
The plugin comes with all the PHP code you need to get started. With Beanstalkd, you need to write more code (such as a PHP daemon that polls the queue looking for jobs)
The Beanstalkd server infrastructure becomes more complex. I had to install multiple instances of beanstalkd for dev/test/prod, and install supervisord to look after the processes.
Developing/testing is a bit easier since it's a self-contained CakePHP + MySQL solution. You simply need to type cake queue add user signup and cake queue runworker.
I was able to run a console shell from a controller action; see the example below.
App::uses('ShellDispatcher', 'Console');
...
public function aco_sync() {
    $command = '-app '.APP.' AclExtras.AclExtras aco_sync -r adminControllers -p UserAdmin';
    $args = explode(' ', $command);

    $dispatcher = new ShellDispatcher($args, false);

    if ($dispatcher->dispatch()) {
        $this->Session->flash('OK');
    } else {
        $this->Session->flash('Error');
    }

    return $this->redirect(array('action' => 'index'));
}
In CakePHP 3 you can dispatch shells from the controller and do it almost the same way as in CakePHP 2. The documentation does not mention this.
// in your controller:
$shell = new \Cake\Console\Shell;
$shell->dispatchShell('shell_class param1 param2');
// or how the docs suggest
$shell->dispatchShell('shell_class', 'param1', 'param2');
Beware of stdout & stderr in unit tests.
Dispatching a shell turns on stdout and stderr logging with ConsoleLogger, and will give you all the logging in your console if you have something like the code snippet above in code that you are testing from phpunit.
function getEbayOrder() {
    $this->autoRender = false;

    App::import('Console/Command', 'AppShell');
    App::import('Console/Command', 'EbayShell');

    $job = new EbayShell();
    $job->dispatchMethod('get_orders');

    echo "RESPONSE";
}
Anything is possible, but why would you want to? If you find you need to do something in both a shell and the actual application, look at using libs: you stick the code in a lib and then call the lib from both your app and the shell.
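A minimal sketch of that pattern in CakePHP 2 (the class and method names are made up for illustration):

// app/Lib/OrderFetcher.php -- the shared logic lives in a plain lib class
class OrderFetcher {

    public function fetchOrders() {
        // ... the actual work, e.g. calling an external API ...
        return array();
    }
}

// From a controller action *or* a shell, load and call the same class:
App::uses('OrderFetcher', 'Lib');
$fetcher = new OrderFetcher();
$orders = $fetcher->fetchOrders();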
If this is to initialize AclExtras, the best way is:
App::import('Console/Command', 'AppShell');
App::import('Plugin/AclExtras/Console/Command', 'AclExtrasShell');
$job = new AclExtrasShell();
$job->startup();
$job->dispatchMethod('aco_sync');
But avoid this unless you have no other way to run the console script.