What happens to the QueueWorker when the TTR runs out? - laravel

This relates to Laravel 5.3, Beanstalkd, TTR, and timeout when working with queues and queue workers. TTR: https://github.com/kr/beanstalkd/wiki/faq
If I understand correctly, a job from the queue gets the state reserved when a QueueWorker picks it up. The job's state is changed back to ready when the TTR runs out. But what happens to the QueueWorker?
Let's say the QueueWorker has a timeout set to 600 by the following command:
php artisan queue:work --tries=1 --timeout=600 --sleep=0
The TTR is, by default, set to 60 seconds.
During the job, a request is made to another site and it takes 120 seconds until the response arrives. After 60 seconds the job is set back to the ready state because of the TTR. Will the QueueWorker keep working on the job until the response has been received, up to the maximum of 600 seconds? Or will the QueueWorker stop working on the job when the TTR has been reached?

Actually, the QueueWorker will keep running until the job is completed. When you run the queue worker without the daemon flag, it runs the code below:
return $this->worker->pop(
    $connection, $queue, $delay,
    $this->option('sleep'), $this->option('tries')
);
Reference:
https://github.com/laravel/framework/blob/5.2/src/Illuminate/Queue/Console/WorkCommand.php#L123
What this code does is pop a job from the queue and fire it:
public function process($connection, Job $job, $maxTries = 0, $delay = 0)
{
    if ($maxTries > 0 && $job->attempts() > $maxTries) {
        return $this->logFailedJob($connection, $job);
    }

    try {
        $job->fire();

        $this->raiseAfterJobEvent($connection, $job);

        return ['job' => $job, 'failed' => false];
    } catch (Exception $e) {
        if (! $job->isDeleted()) {
            $job->release($delay);
        }

        throw $e;
    } catch (Throwable $e) {
        if (! $job->isDeleted()) {
            $job->release($delay);
        }

        throw $e;
    }
}
Reference:
https://github.com/laravel/framework/blob/5.2/src/Illuminate/Queue/Worker.php#L213
Digging in the source for more information:
https://github.com/laravel/framework/tree/5.2/src/Illuminate/Queue
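As a practical note: if a job can legitimately outlive the default TTR, one common remedy (a sketch, not taken from the question) is to raise the beanstalkd ttr in config/queue.php so it stays above the worker's --timeout. In the 5.x connectors the 'ttr' key is read by Laravel's Beanstalkd connector and falls back to Pheanstalk::DEFAULT_TTR (60 seconds); the values below are illustrative:

```php
// config/queue.php (sketch; keep the TTR above the worker --timeout
// so beanstalkd does not move the job back to "ready" while the
// worker is still processing it)
'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host'   => 'localhost',
    'queue'  => 'default',
    'ttr'    => 660, // a bit more than --timeout=600
],
```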

Related

Detect shutting-down-controller in post action in declarative pipeline

I have success, failure and aborted under post in a declarative pipeline. Is there a way to detect the controller shutting down and perform different actions?
e.g.:
post {
    success {
        // do some actions for successful completion
    }
    failure {
        // do some actions for failure completion
    }
    aborted {
        // do some actions when job gets aborted
    }
    controller-shutting-down {
        echo "Jenkins is shutting down."
    }
}
OR:
...
aborted {
    if (reason == controller-shutting-down) {
        echo "Jenkins is shutting down."
    } else {
        // do some actions when job gets aborted
    }
}
...
Is there a way to achieve this?

Laravel Queue generating `illuminate:queue:restart` continuously

I have a Laravel queue running on the database connection; here is the config:
'database' => [
    'driver' => 'database',
    'connection' => 'mysql',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 190,
    'block_for' => 0,
]
This is how I run it:
php artisan queue:work --queue=xyz_queue > storage/logs/queue.log
On the Redis CLI, a GET of the illuminate:queue:restart key shows up every second.
This is normal and expected behavior. According to the documentation:
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. You may gracefully restart all of the workers by issuing the queue:restart command: php artisan queue:restart
This command will instruct all queue workers to gracefully "die" after they finish processing their current job so that no existing jobs are lost.
What queue:restart does is set the current timestamp as the value of the illuminate:queue:restart key.
When a worker process (php artisan queue:work) starts consuming the queue, it reads that timestamp from the illuminate:queue:restart key, and once a job is about to complete it reads the value again from the same key.
It then compares the value read before the job was processed with the value read afterwards.
If they differ, the long-lived process stops itself.
It is an efficient way (Redis is very fast for this kind of lookup) to detect that the code has changed and the workers should be restarted to pick up the change.
The value is saved into Redis because, most probably, your cache driver is Redis. If you change the driver to file, the timestamp will be saved in a file and read back from there instead.
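The compare-and-stop cycle can be sketched without the framework. The function names below are illustrative (only queueShouldRestart mirrors an actual Worker method), and a plain array stands in for the cache store:

```php
<?php
// Illustrative, framework-free sketch of the restart signal.
$cache = [];

// `php artisan queue:restart` effectively does this: store "now"
// under the restart key so every worker sees a fresh timestamp.
function signalRestart(array &$cache): void
{
    $cache['illuminate:queue:restart'] = time();
}

function getLastRestart(array $cache): ?int
{
    return $cache['illuminate:queue:restart'] ?? null;
}

// After each job the worker re-reads the key and compares it with
// the value it saw at boot; a difference means "stop gracefully".
function queueShouldRestart(array $cache, ?int $lastRestart): bool
{
    return getLastRestart($cache) != $lastRestart;
}

$lastRestart = getLastRestart($cache); // worker boots, remembers null
signalRestart($cache);                 // a deploy runs queue:restart
var_dump(queueShouldRestart($cache, $lastRestart)); // bool(true)
```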
Here are the related methods:
protected function stopIfNecessary(WorkerOptions $options, $lastRestart, $job = null)
{
    if ($this->shouldQuit) {
        $this->stop();
    } elseif ($this->memoryExceeded($options->memory)) {
        $this->stop(12);
    } elseif ($this->queueShouldRestart($lastRestart)) {
        $this->stop();
    } elseif ($options->stopWhenEmpty && is_null($job)) {
        $this->stop();
    }
}

protected function queueShouldRestart($lastRestart)
{
    return $this->getTimestampOfLastQueueRestart() != $lastRestart;
}

protected function getTimestampOfLastQueueRestart()
{
    if ($this->cache) {
        return $this->cache->get('illuminate:queue:restart');
    }
}

How to fix laravel no command 'Redis::throttle'?

I just used the example from the docs, but I get this error:
exception 'Predis\ClientException' with message 'Command 'THROTTLE' is not a registered Redis command.'
I have searched a lot about Redis commands, but found nothing about throttle.
public function handle()
{
    // Allow only 2 emails every 1 second
    Redis::throttle('my-mailtrap')->allow(2)->every(1)->then(function () {
        $recipient = 'steven@example.com';

        Mail::to($recipient)->send(new OrderShipped($this->order));

        Log::info('Emailed order ' . $this->order->id);
    }, function () {
        // Could not obtain lock; this job will be re-queued
        return $this->release(2);
    });
}
What should I do? Any help appreciated, thanks!
The throttle method is defined in Illuminate\Redis\Connections\PredisConnection.
The Redis facade allows you to get the connection using:
Redis::connection()
    ->throttle('my-mailtrap')
    // ...
http://laravel.com/docs/5.8/redis

Spring Batch restart uncompleted jobs from the same execution and step

I use the following logic to restart Spring Batch jobs left uncompleted (for example, after abnormal application termination):
public void restartUncompletedJobs() {
    LOGGER.info("Restarting uncompleted jobs");
    try {
        jobRegistry.register(new ReferenceJobFactory(documetPipelineJob));
        List<String> jobs = jobExplorer.getJobNames();
        for (String job : jobs) {
            Set<JobExecution> runningJobs = jobExplorer.findRunningJobExecutions(job);
            for (JobExecution runningJob : runningJobs) {
                runningJob.setStatus(BatchStatus.FAILED);
                runningJob.setEndTime(new Date());
                jobRepository.update(runningJob);
                jobOperator.restart(runningJob.getId());
                LOGGER.info("Job restarted: " + runningJob);
            }
        }
    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e);
    }
}
This works fine, but with one side effect: it doesn't restart the failed job execution, it creates a new execution instance. How can I change this logic so that it restarts the failed execution from the failed step and does not create a new execution?
UPDATED
When I try the following code:
public void restartUncompletedJobs() {
    try {
        jobRegistry.register(new ReferenceJobFactory(documetPipelineJob));
        List<String> jobs = jobExplorer.getJobNames();
        for (String job : jobs) {
            Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(job);
            for (JobExecution jobExecution : jobExecutions) {
                jobOperator.restart(jobExecution.getId());
            }
        }
    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e);
    }
}
it fails with the following exception:
2018-07-30 06:50:47.090 ERROR 1588 --- [ main] c.v.p.d.service.batch.BatchServiceImpl : Illegal state (only happens on a race condition): job execution already running with name=documetPipelineJob and parameters={ID=826407fa-d3bc-481a-8acb-b9643b849035, inputDir=/home/public/images, STORAGE_TYPE=LOCAL}
org.springframework.batch.core.UnexpectedJobExecutionException: Illegal state (only happens on a race condition): job execution already running with name=documetPipelineJob and parameters={ID=826407fa-d3bc-481a-8acb-b9643b849035, inputDir=/home/public/images, STORAGE_TYPE=LOCAL}
at org.springframework.batch.core.launch.support.SimpleJobOperator.restart(SimpleJobOperator.java:283) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.batch.core.launch.support.SimpleJobOperator$$FastClassBySpringCGLIB$$44ee6049.invoke(<generated>) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) [spring-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:684) [spring-aop-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.batch.core.launch.support.SimpleJobOperator$$EnhancerBySpringCGLIB$$7659d4c.restart(<generated>) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at com.example.pipeline.domain.service.batch.BatchServiceImpl.restartUncompletedJobs(BatchServiceImpl.java:143) ~[domain-0.0.1.jar!/:0.0.1]
The following code creates new executions in jobstore database:
public void restartUncompletedJobs() {
    try {
        jobRegistry.register(new ReferenceJobFactory(documetPipelineJob));
        List<String> jobs = jobExplorer.getJobNames();
        for (String job : jobs) {
            Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(job);
            for (JobExecution jobExecution : jobExecutions) {
                jobExecution.setStatus(BatchStatus.STOPPED);
                jobExecution.setEndTime(new Date());
                jobRepository.update(jobExecution);
                Long jobExecutionId = jobExecution.getId();
                jobOperator.restart(jobExecutionId);
            }
        }
    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e);
    }
}
The question is: how can I continue running the old uncompleted executions without creating new ones after an application restart?
TL;DR: Spring Batch will always create a new Job Execution and will not reuse a previously failed job execution to continue its work.
Longer answer: First you need to understand three similar but different concepts in Spring Batch: Job, Job Instance, Job Execution.
I always use this example:
Job : End-Of-Day Batch
Job Instance : End-Of-Day Batch for 2018-01-01
Job Execution: End-Of-Day Batch for 2018-01-01, execution #1
At a high level, this is how Spring Batch's recovery works:
Assuming your first execution failed in step 3, you can submit the same Job (End-of-Day Batch) with the same parameters (2018-01-01). Spring Batch will look up the last Job Execution (End-Of-Day Batch for 2018-01-01, execution #1) of the submitted Job Instance (End-of-Day Batch for 2018-01-01) and find that it previously failed in step 3. Spring Batch will then create a NEW execution, [End-Of-Day Batch for 2018-01-01, execution #2], and start the execution from step 3.
So by design, what Spring Batch tries to recover is a previously failed Job Instance (rather than a Job Execution). Spring Batch will not reuse an execution when you re-run a previously failed one.

Change Laravel log level for queued job attempted too many times exception

I would like to change the way queue exceptions of type "A queued job has been attempted too many times. The job may have previously timed out." are logged.
I tried adding the following to the \Exception\Handler::report() method but I still see these showing up as ERROR in my logs.
public function report(Exception $exception)
{
    if ($exception instanceof \Illuminate\Queue\MaxAttemptsExceededException) {
        try {
            $logger = $this->container->make(LoggerInterface::class);
        } catch (Exception $ex) {
            throw $exception; // throw the original exception
        }

        $logger->warning($exception);

        return;
    }

    parent::report($exception);
}
I also get alerts to a Slack channel (the most annoying part) for these when the minimum level is ERROR. I want to log them as WARNING instead so I can prevent that from happening.
$slackHandler = new SlackHandler(
    config('slack.api_token'),
    config('slack.channel'),
    'Laravel Error',
    true,
    null,
    Logger::ERROR,
    true,
    false,
    true
);
Admittedly this is similar to another question I asked, but I appear to have implemented it wrong.