I have read the documentation, but I don't understand how to capture the output when the task fails. I deliberately introduced an error into my task and ran it through a controller so that you can see the execution result.
When run that way, the error is displayed. But when the task is executed through Task Scheduling, the email I receive has empty output.
How can I get the error output included in the email?
My kernel.php:
$schedule->call(new Load())->everyTenMinutes()->emailOutputOnFailure('myemail');
Your schedule uses call(), and the output methods don't apply to closure or invokable tasks. From the documentation:
The emailOutputTo, emailOutputOnFailure, sendOutputTo, and appendOutputTo methods are exclusive to the command and exec methods.
https://laravel.com/docs/8.x/scheduling#task-output
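One workaround, then, is to wrap the Load logic in an Artisan command and schedule that via command() instead of call(). This is only a sketch: the class name LoadData and the signature app:load are made up, and it assumes Load is an invokable class as in your Kernel.php.

```php
<?php
// app/Console/Commands/LoadData.php (hypothetical name)
namespace App\Console\Commands;

use App\Load;
use Illuminate\Console\Command;

class LoadData extends Command
{
    // Made-up signature; pick whatever fits your app.
    protected $signature = 'app:load';
    protected $description = 'Run the Load task so the scheduler can capture its output';

    public function handle()
    {
        (new Load())(); // reuse the existing invokable; let exceptions bubble up
        return 0;       // a non-zero exit code marks the scheduled task as failed
    }
}

// app/Console/Kernel.php -- schedule the command instead of the closure:
// $schedule->command('app:load')
//     ->everyTenMinutes()
//     ->emailOutputOnFailure('myemail');
```

Because the task now goes through command(), emailOutputOnFailure() applies, and anything the command writes to the console (or an uncaught exception's message) can end up in the email.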
I'm trying to use the Commands class for the first time. I want to make the queue messages more readable than [2018-09-01 17:57:47][276] Processing: Illuminate\Foundation\Console\QueuedCommand
So what I have done is the following:
I registered the command ConvertRecording with protected $signature = 'recording:convert {recording_id}'; and protected $description = 'Convert a recording from mkv to mp4 using an recording id and making use of ffmpeg';. It has an empty constructor, since I don't need an object passed to it, and the handle method just has some working code and some $this->log() calls...
Now, when I call the artisan command, I use the following code:
$exitCode = Artisan::queue('recording:convert', [
'recording_id' => $recording_id
]);
And it appends to the queue, but I only get messages like these:
[2018-09-01 17:57:47][276] Processing: Illuminate\Foundation\Console\QueuedCommand
[2018-09-01 17:58:16][276] Processed: Illuminate\Foundation\Console\QueuedCommand
How could I change it to something like [2018-09-01 17:58:16] Processing: Video with ID [video ID here]?
It may be that you are looking to do something with Queued Commands that they are not really meant to do. What you are seeing in your logs is exactly what a Job is meant to do - that is, report when it is starting and when it has completed (or failed). It's the Command where the useful work is being done, and so all your output and logging should be done there.
Commands, naturally, have some console logging tools such as error, info and comment that can assist you in debugging:
$this->error('This is an error and will appear highlighted in the console');
$this->info('This is information');
$this->comment('This is a comment');
However, using these in a production environment may not work, as your queue workers won't have a console to log to (I may be mistaken, as I have never tried it).
I recommend simply setting up a dedicated log file for your commands using a ServiceProvider.
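For example, here is a minimal sketch of a dedicated channel; the channel name commands and the file name are arbitrary, and it assumes Laravel's standard logging configuration:

```php
<?php
// config/logging.php -- add to the 'channels' array:
'commands' => [
    'driver' => 'single',
    'path'   => storage_path('logs/commands.log'),
],

// Inside ConvertRecording::handle(), with
// `use Illuminate\Support\Facades\Log;` at the top of the file:
Log::channel('commands')->info(
    "Processing: Video with ID {$this->argument('recording_id')}"
);
```

This gives you the readable, per-video log line you asked for, written to storage/logs/commands.log regardless of whether the command runs in a console or from a queue worker.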
I am writing an Ansible 2.x callback plugin, and I would like to be able to fail the current playbook with a non-zero exit code based on some conditions in the v2_playbook_on_stats function.
I have tried to raise AnsibleError(), but this is caught somewhere up the chain and treated as a warning, which allows Ansible to finish with a zero exit code.
I have also tried using self._display.error(), but it seems to do nothing but display an error message, and again Ansible finishes with a zero exit code.
Is there any way to do what I require? Or is a callback plugin never meant to allow the developer to change the status of a playbook to a failure?
Thank you for your time.
I also faced the same problem, and I found out that I could use Python's sys.exit(x) (x being the desired exit code) to stop or fail the playbook.
You can't do this with a callback plugin; a strategy plugin is your choice.
Subclass the required plugin (e.g. linear) and extend its run method to return a non-zero value based on your criteria; that value will be translated by PlaybookExecutor and the CLI into the program exit code.
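A minimal sketch of that approach. This is untested and leans on Ansible internals that change between versions; the plugin file name fail_linear and the helper my_failure_condition() are made up placeholders:

```python
# strategy_plugins/fail_linear.py
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.strategy.linear import StrategyModule as LinearStrategyModule


class StrategyModule(LinearStrategyModule):
    """Linear strategy that can force a failing exit code."""

    def run(self, iterator, play_context):
        # Let the stock linear strategy do the actual work first.
        result = super(StrategyModule, self).run(iterator, play_context)
        # my_failure_condition() stands in for your own check.
        if result == TaskQueueManager.RUN_OK and my_failure_condition():
            # Returning a failure constant makes the CLI exit non-zero.
            return TaskQueueManager.RUN_FAILED_HOSTS
        return result
```

Point Ansible at it with strategy_plugins = ./strategy_plugins in ansible.cfg and strategy: fail_linear on the play (or strategy = fail_linear in the defaults section).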
Is there any way to ensure that certain code runs after the delayed job fails or succeeds, just like an ensure block in exception handling?
What's wrong with the following approach?
def delayed_job_method
do_the_job
ensure
something
end
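That is exactly what ensure does. A standalone sketch (plain Ruby, no Delayed Job required) showing that the ensure block runs on both success and failure, while a raised exception still propagates to the caller:

```ruby
# Counts how many times the ensure block has run.
$ensure_count = 0

def do_the_job(fail_job)
  raise "job failed" if fail_job
  "done"
end

def delayed_job_method(fail_job: false)
  do_the_job(fail_job)
ensure
  # Runs whether do_the_job returned normally or raised.
  $ensure_count += 1
end

delayed_job_method                    # success: ensure runs
begin
  delayed_job_method(fail_job: true)  # failure: ensure runs, then re-raises
rescue RuntimeError
end
```

One caveat with Delayed Job specifically: a failed job may be retried, so the ensure block can run once per attempt, not once per job.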
I'm trying to capture output written from each task as it is executed. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks are running in parallel this code below picks up output written from other tasks running simultaneously.
The API documentation states the following about the getLogging method on Task. From what it says, I judge that it should support capturing output from a single task regardless of any other tasks running at the same time.
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task.
https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html
graph.allTasks.forEach { Task task ->
task.ext.capturedOutput = [ ]
def listener = { task.capturedOutput << it } as StandardOutputListener
task.logging.addStandardErrorListener(listener)
task.logging.addStandardOutputListener(listener)
task.doLast {
task.logging.removeStandardOutputListener(listener)
task.logging.removeStandardErrorListener(listener)
}
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This did make me wonder why you weren't getting duplicate messages because you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map, keyed by the listener object itself. So you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth raising an issue to get some clarity on whether Gradle should support listeners per task. Alternatively, raise it on the dev mailing list.
What's the difference between the following two code snippets?
First:
task copyFiles(type: Copy) << {
from "folder/from"
into "dest/folder"
}
Second:
task copyFiles(type: Copy) {
from "folder/from"
into "dest/folder"
}
In short, the first snippet is getting it wrong, and the second one is getting it right.
A Gradle build proceeds in three phases: initialization, configuration, and execution. Methods like from and into configure a task, hence they need to be invoked in the configuration phase. However, << (which is a shortcut for doLast) adds a task action - it instructs the task what to do if and when it gets executed. In other words, the first snippet configures the task in the execution phase, and even worse, after its main (copy) action has been executed. Hence the configuration won't have any effect.
Typically, a task has either a type (which already brings along a task action) or a << (for an ad-hoc task). There are legitimate use cases for having both (doing a bit of custom work after the task's "main" work), but more often than not, it's a mistake where the task gets configured too late.
I generally recommend to use doLast instead of <<, because it's less cryptic and makes it easier to spot such mistakes. (Once you understand the concepts, it's kind of obvious that task copyFiles(type: Copy) { doLast { from ... } } is wrong.)
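A sketch of the legitimate combined form mentioned above: the configuration stays in the configuration block, and only the extra ad-hoc work goes into doLast:

```groovy
task copyFiles(type: Copy) {
    // Configuration phase: tells the Copy task what to copy, before it runs.
    from "folder/from"
    into "dest/folder"
    // Execution phase: runs after the task's main copy action has finished.
    doLast {
        println "Copied files into dest/folder"
    }
}
```

Written this way, it is visually obvious that from/into belong to configuration and doLast to execution, which is exactly the distinction the << form obscures.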
The first block of code creates a task and appends an action to it. A task is composed of actions, which are instruction blocks run sequentially when the task is called.
The second block creates a task and configures it. These instructions run in Gradle's "configuration" lifecycle phase.
Here you can find a clear explanation of the differences;
here you can find an in-depth explanation of Gradle tasks;
here is the Gradle guide about the lifecycle.