Laravel Job Batching unable to cancel

I have a simple Laravel job batch. My problem is that when one of the jobs inside the batch fails and throws an exception, the batch doesn't stop or cancel the rest of the execution even though I added the cancel method; it still processes the next job.
These are my handle and failed methods:
public function handle()
{
    if ($this->batch()->cancelled()) {
        return;
    }

    $csv_data = array_map('str_getcsv', file($this->chunk_directory));

    foreach ($csv_data as $key => $row) {
        if (count($this->header) !== count($row)) {
            // Row doesn't match the header: cancel the batch and fail this job.
            $this->batch()->cancel();

            throw new Exception("Your file doesn't match the number of headers like your product header");
        }

        $data = array_combine($this->header, $row);
    }
}
public function failed(\Exception $e = null)
{
    broadcast(new QueueProcessing("failed", BatchHelpers::getBatch($this->batch()->id)));
}
Here is my command-line output:
[2021-01-11 01:17:57][637] Processing: App\Jobs\ImportItemFile
[2021-01-11 01:17:57][637] Failed: App\Jobs\ImportItemFile
[2021-01-11 01:17:58][638] Processing: App\Jobs\ImportItemFile
[2021-01-11 01:17:58][638] Processed: App\Jobs\ImportItemFile

From the Laravel 8 Queue documentation:
When a job within a batch fails, Laravel will automatically mark the batch as "cancelled"
So the default behavior is that the whole batch is marked as "cancelled" and stops executing (note that currently executing jobs will not be stopped).
In your case, if the batch keeps executing, maybe you turned on the allowFailures() option when creating the batch?
By the way, you don't need to call the cancel() method: when an exception is thrown, the job is already "failed" and the whole batch is cancelled.
Either remove the cancel() line, or return after the cancellation call without throwing an exception (see Cancelling Batches).
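For reference, here is a minimal sketch of the dispatching side (the ImportItemFile constructor arguments are assumptions based on the question's $chunk_directory and $header properties). By default a failed job cancels the batch; chaining allowFailures() is what changes that:
use App\Jobs\ImportItemFile;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$batch = Bus::batch([
    // Constructor arguments are hypothetical, based on the question's properties.
    new ImportItemFile($chunkPathOne, $header),
    new ImportItemFile($chunkPathTwo, $header),
])->catch(function (Batch $batch, Throwable $e) {
    // First detected failure in the batch (only fires when failures are not allowed).
})->dispatch();

// Chaining ->allowFailures() before dispatch() would stop failed jobs from
// cancelling the batch, so the remaining jobs would keep running.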

Related

How to know if a bus chain has failed or aborted?

I have my sample code below. The bus chain works just fine, but I was wondering if it's possible to check whether a bus chain has aborted or failed.
I tried assigning a variable, but the assignment inside the catch callback doesn't seem to work. Is there a predefined bus function or variable to determine this? I tried $bus->failedJobs but that didn't work either.
$hasFailed = false;

$bus = Bus::chain($batches)->catch(function (Throwable $e) use ($hasFailed) {
    $hasFailed = true;
})->dispatch();

return $hasFailed ? "failed" : "not";
Using $hasFailed inside the catch closure won't work because use ($hasFailed) captures a copy of the variable, so assigning to it doesn't affect the original value outside the closure.
You can capture the variable by reference instead with &$hasFailed, which lets you update its value in the parent scope:
$hasFailed = false;

$bus = Bus::chain($batches)
    ->catch(function (Throwable $e) use (&$hasFailed) {
        $hasFailed = true;
    })
    ->dispatch();

return $hasFailed ? "failed" : "not";
You can also use the onQueue method to dispatch the jobs in the chain to a specific queue and use Laravel Horizon to monitor the queue for failed jobs:
$bus = Bus::chain($batches)
    ->onConnection('redis')
    ->onQueue('my-bus-chain')
    ->dispatch();
I just realized that the chain runs in the background on the queue worker, not synchronously with the request, so the catch callback hasn't run yet when the return statement executes. Therefore there is no way to determine, outside the Bus::chain callbacks, whether the chain has failed while its jobs are still executing.
Short answer: it's not possible at dispatch time.
Let me know if my understanding is incorrect :)
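If you do need the outcome later, one workaround is to persist the status from inside the catch callback, since that closure runs on the queue worker long after dispatch() has returned. Here is a rough sketch, assuming a hypothetical import-status cache key:
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Cache;
use Throwable;

// Record a pending status before dispatching; the catch closure runs later on the worker.
Cache::put('import-status', 'pending');

Bus::chain($batches)
    ->catch(function (Throwable $e) {
        // Executed on the queue worker when a job in the chain fails.
        Cache::put('import-status', 'failed');
    })
    ->dispatch();

// A later request (or a polling endpoint) can then read the persisted status:
$status = Cache::get('import-status'); // still 'pending', or 'failed'
If you also want to detect success, the last job in the chain would have to set the status to something like "finished".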

How to run code when a Laravel job try is killed by timeout (Horizon)

I created a Laravel job with 3 tries and a timeout of 10 minutes. I am using Horizon.
I can handle the failure after the 3 tries using the failed method, but how can I handle the timeout event on each of the 3 tries of this job?
This is for logging and feedback: I want my user to be notified when the first or second try fails and the job will be retried later.
class MyJob implements ShouldQueue
{
    public $tries = 3;
    public $timeout = 600;

    // [...]

    public function failed(Throwable $exception)
    {
        // The final failure, after the 3 tries.
    }

    // Any method for catching each timeout?
}
You may define the $failOnTimeout property on the job class:
/**
 * Indicate if the job should be marked as failed on timeout.
 *
 * @var bool
 */
public $failOnTimeout = true;
https://laravel.com/docs/9.x/queues#failing-on-timeout
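For context, a minimal sketch of where that property sits in the job class from the question, assuming the Laravel 9.x behaviour described in the linked docs (with $failOnTimeout left at its default of false, a timed-out attempt is retried until $tries is exhausted rather than failing immediately):
class MyJob implements ShouldQueue
{
    public $tries = 3;
    public $timeout = 600;

    // true: a timed-out attempt marks the job as failed right away.
    // false (the default): the attempt is retried until $tries is exhausted.
    public $failOnTimeout = true;

    public function failed(Throwable $exception)
    {
        // Final failure, whether from exhausted retries or an immediate timeout failure.
    }
}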
I don't think there is a built-in method for that, but you can catch the error thrown when the job fails and check whether it is timeout related, which I believe would be the Symfony\Component\Process\Exception\ProcessTimedOutException exception.
Something like:
public function handle()
{
    try {
        // run job
    } catch (\Throwable $exception) {
        // Manually fail it if this was already the third attempt.
        if ($this->attempts() > 2) {
            $this->fail($exception);

            return;
        }

        // Check if the error is timeout related.
        if ($exception instanceof \Symfony\Component\Process\Exception\ProcessTimedOutException) {
            // Whatever you want to do when it fails due to timeout.
        }

        // Release the job back onto the queue after 5 seconds.
        $this->release(5);

        return;
    }
}
Just try running a job and make sure it fails because of a timeout, to verify the actual exception class thrown on timeout.
OK, I found the solution.
TL;DR:
Register a pcntl_signal handler at the beginning of your job's handle() method, and then you can, for example, call an onTimeout() method:
public function handle()
{
    pcntl_signal(SIGALRM, function () {
        $this->onTimeout();

        exit;
    });

    // [...]
}

public function onTimeout()
{
    // This method will be called on each timeout.
}
The story behind it:
The queue documentation says: "The pcntl PHP extension must be installed in order to specify job timeouts."
So, digging into the pcntl PHP documentation, I found some interesting pcntl_* functions, and a call to pcntl_signal in Illuminate/Queue/Worker.php (around line 221).
It looks like registering a handler with pcntl_signal replaces the previous handler. I tried to load the Laravel one using pcntl_signal_get_handler, but I couldn't manage to call it. So the workaround is to call exit;, after which Laravel considers the process lost and marks the job as timed out (?). The 3 tries happen, retry_after is respected, and on the last try the job fails... It might be cleaner to keep the original handler, but since this works well in my case I will stop investigating.

How to call the KafkaConsumer API from a partition assignor's implementation

I have implemented my own partition assignment strategy by extending RangeAssignor in my Spring Boot application.
I have overridden its subscriptionUserData method and am adding some user data. Whenever this data changes, I want to trigger a partition rebalance by invoking the KafkaConsumer enforceRebalance API.
I am not sure how I can get hold of the KafkaConsumer object and invoke this API.
Please suggest.
You can call the consumer.wakeup() function.
consumer.wakeup() is the only consumer method that is safe to call from a different thread. Calling wakeup will cause poll() to exit with WakeupException, or, if consumer.wakeup() was called while the thread was not waiting on poll, the exception will be thrown on the next iteration when poll() is called. The WakeupException doesn't need to be handled, but before exiting the thread, you must call consumer.close(). Closing the consumer will commit offsets if needed and will send the group coordinator a message that the consumer is leaving the group. The consumer coordinator will trigger rebalancing immediately.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        System.out.println("Starting exit...");
        consumer.wakeup(); // 1
        try {
            mainThread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});

...

Duration timeout = Duration.ofMillis(100);

try {
    // looping until ctrl-c, the shutdown hook will cleanup on exit
    while (true) {
        ConsumerRecords<String, String> records = movingAvg.consumer.poll(timeout);
        System.out.println(System.currentTimeMillis() + "-- waiting for data...");

        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s\n",
                record.offset(), record.key(), record.value());
        }

        for (TopicPartition tp : consumer.assignment()) {
            System.out.println("Committing offset at position:" + consumer.position(tp));
        }

        movingAvg.consumer.commitSync();
    }
} catch (WakeupException e) {
    // ignore for shutdown. // 2
} finally {
    consumer.close(); // 3
    System.out.println("Closed consumer and we are done");
}
The shutdown hook runs in a separate thread, so the only safe action we can take is to call wakeup to break out of the poll loop.
Another thread calling wakeup will cause poll to throw a WakeupException. You'll want to catch the exception to make sure your application doesn't exit unexpectedly, but there is no need to do anything with it.
Before exiting the consumer, make sure you close it cleanly.
full example at:
https://github.com/gwenshap/kafka-examples/blob/master/SimpleMovingAvg/src/main/java/com/shapira/examples/newconsumer/simplemovingavg/SimpleMovingAvgNewConsumer.java

Laravel 4 beanstalkd exception catching issue in job processing

I am using beanstalkd as the job queue for processing email validations.
The job processor implementation looks like this:
public function fire($job, $data)
{
    // processing

    try {
        // some processing, including some SMTP simulation checks
    } catch (Exception $e) {
        // saving some status
    }

    // further processing

    $job->delete();
}
As in the example above, at some point an exception is thrown, which is intended to happen as part of the process. The exception is caught properly and some actions run in the catch block. The problem is that after catching the exception, the job is still released back onto the queue.
Is there any way to continue processing the current attempt of the job even after the exception is caught?

Stop queue on error

I am trying to delete my queued job on error.
I have tried the following:
File: global.php
Queue::failing(function ($connection, $job, $data) {
    // Delete the job
    $job->delete();
});
But when my job fails, like this one does:
public function fire($job, $data)
{
    undefined_function(); // this function is not defined and will throw an error
}
Then the job is not deleted for some reason.
Any ideas?
From the user manual, section "Checking The Number Of Run Attempts":
If an exception occurs while the job is being processed, it will automatically be released back onto the queue.
I think you need to set the maximum number of tries to 1, so the job fails on the first error:
php artisan queue:listen --tries=1
You can also prevent your job from throwing the exception at all:
public function handle()
{
    try {
        // your code here
    } catch (\Exception $e) {
        return true;
    }
}
