Laravel job stays in "Processing" state although finished

I have a problem with the Laravel queue system. For performance reasons we use Laravel's queue system with Amazon SQS for heavier calculations. This works fine for most of our jobs, but some of them, where the raw calculation time is about 25 seconds, keep blocking the queue in the "Processing" state for 6 minutes.
We logged the complete handle function of the job and the output was correct at every point. In fact, the last log statement (at the end of the function) was printed 20 seconds after entering the function. The data was calculated as expected and the database was up to date, but the job was still "Processing".
After we intentionally crashed the job at the end of the handle function, the calculated data was stored perfectly, but obviously the queue crashed as well. So I guess it has to be something happening after the handle function. Maybe something with allocated memory?
The config of the queue is the default sqs driver configuration:
'sqs' => array(
    'driver' => 'sqs',
    'key' => env('AWS_KEY', 'secret'),
    'secret' => env('AWS_SECRET', 'secret'),
    'queue' => env('AWS_SQS_QUEUE', 'secret'),
    'region' => env('AWS_SQS_REGION', 'secret'),
),
Edit:
I found out it is not only the queue: when I execute the job as a console command, the same behavior appears.
I print "Done." as the last statement in the command, and after it gets printed the console hangs for a few seconds before returning to the prompt.
When I comment out the part with the most queries, the issue is gone; the more queries I run, the longer I have to wait for the console to return.
I hope some of you know what causes this behavior and how we can fix it.
Thanks in advance.

OK, I found the issue.
The problem was that Telescope was enabled, so after the code was executed Telescope was busy logging all requests and cache hits.
After disabling Telescope there was no delay anymore.
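For reference, a minimal sketch of keeping Telescope away from queue workers and console commands in production, assuming a stock Telescope install (the TELESCOPE_ENABLED switch and the environment-gated registration below follow the package's standard setup; verify the names against your version):

// config/telescope.php ships with an on/off switch that can be driven
// per environment, e.g. TELESCOPE_ENABLED=false in the production .env:
'enabled' => env('TELESCOPE_ENABLED', true),

// Alternatively, in app/Providers/AppServiceProvider.php, register the
// Telescope providers only for the local environment (and remove them
// from the providers array in config/app.php so they are not loaded twice):
public function register()
{
    if ($this->app->environment('local')) {
        $this->app->register(\Laravel\Telescope\TelescopeServiceProvider::class);
        $this->app->register(\App\Providers\TelescopeServiceProvider::class);
    }
}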

Related

Redis memory filled up to 2 GB with Laravel Horizon

redis info memory:
used_memory_human:1.95G
used_memory_rss_human:1.93G
used_memory_peak_human:2.04G
used_memory_peak_perc:95.63%
I use Laravel Horizon and it is the only thing that uses Redis.
Now it has reached the 2 GB limit and stays like this.
First question:
Why is there a 2 GB limit and how do I increase it? maxmemory is set to 0.
Second question:
I don't think there are enough jobs pending in Laravel Horizon to fill up 2 GB; it looks like the trimmer or something is not working. The jobs are small and don't store much information. There are about 1-2k jobs per hour and maybe around 3-4k pending.
My trim settings from horizon.php:
'trim' => [
    'recent' => 60,            // values are in minutes
    'pending' => 43200,
    'completed' => 60,
    'recent_failed' => 10080,
    'failed' => 10080,
    'monitored' => 10080,
],
Where should I look next? Everything is working so far, but I don't like the situation. Once, with a big batch of jobs, we hit a memory allocation error (Allowed memory size of 536870912 bytes exhausted, i.e. PHP's 512 MB memory_limit).

Beanstalkd Queue either fails or runs infinitely

I am using a Beanstalkd queue to dispatch jobs in my Laravel 5.3 application. I use Laravel Forge to administer the server.
One of two scenarios occurs:
1) If I set a max number of attempts, every job pushed to the queue is placed on the failed jobs table - even if its task completes successfully - resulting in this exception on the jobs table:
Illuminate\Queue\MaxAttemptsExceededException: A queued job has been attempted too many times. The job may have previously timed out
And this in my error log:
Pheanstalk\Exception\ServerException: Server reported NOT_FOUND
2) If I remove the max attempts, the jobs run successfully but in an infinite loop.
I am assuming that I am not removing these jobs from the queue properly, and so in scenario #1 the job is marked as failed because it just wants to keep running.
My controller pushes my job to the queue like this:
Queue::push('App\Jobs\UpdateOutlookContact#handle', ['userId' => $cs->user_id, 'memberId' => $member->id, 'connection' => $connection]);
Here is the handle function of my job:
public function handle($job, $data)
{
    Log::info('Outlook syncMember Job dispatched');
    $outlook = new Outlook();
    $outlook->syncMember($data['userId'], $data['memberId'], $data['connection']);
    $job->delete();
}
Here is a picture of my queue configuration from the Laravel Forge admin panel. I am currently using the default queue. If "Tries" is changed to ANY, the jobs succeed but run in an infinite loop.
How do I properly remove these jobs from the queue?
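No resolution is recorded for this one, but for comparison, here is a minimal sketch of the other common pattern in Laravel 5.3: a dedicated job class implementing ShouldQueue. With this style the worker deletes the job from Beanstalkd itself once handle() returns without throwing, so no manual $job->delete() call is needed. The class layout and the Outlook namespace below are assumptions for illustration, not taken from the original post:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class UpdateOutlookContact implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    private $userId;
    private $memberId;
    private $connection;

    public function __construct($userId, $memberId, $connection)
    {
        $this->userId = $userId;
        $this->memberId = $memberId;
        $this->connection = $connection;
    }

    public function handle()
    {
        // Same work as the original handler; the worker acknowledges and
        // removes the job from the queue after this method completes.
        $outlook = new \App\Outlook(); // namespace assumed
        $outlook->syncMember($this->userId, $this->memberId, $this->connection);
    }
}

It would then be dispatched from the controller with something like dispatch(new UpdateOutlookContact($cs->user_id, $member->id, $connection)); instead of Queue::push with the 'Class#method' string.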

Many Logstash instances reading from Redis

I have one Logstash process running inside one node consuming from a Redis list, but I'm afraid that just one process cannot handle the data throughput without a great delay.
I was wondering whether running one more Logstash process on this same machine would perform a little better, but I'm not certain about that. I know that my ES index is not the bottleneck.
Would Logstash duplicate my data if I consume the same list? Does this approach seem like the right thing to do?
Thanks!
Here is my input configuration:
input {
  redis {
    data_type => "list"
    batch_count => 300
    key => "flight_pricing_stats"
    host => "my-redis-host"
  }
}
You could try adjusting the Logstash input threads if you are going to run another Logstash process on the same machine. The default is 1.
input {
  redis {
    data_type => "list"
    batch_count => 300
    key => "flight_pricing_stats"
    host => "my-redis-host"
    threads => 2
  }
}
You could run more than one Logstash against the same Redis; events should not get duplicated. But I'm not sure that would help.
If you're not certain what's going on, I recommend the Logstash monitoring API. It can help you narrow down your real bottleneck.
There is also an interesting post from Elastic on the subject: Logstash Lines: Introducing a benchmarking tool for Logstash.

CPU is utilizing 100% resource and therefore Queue failed

My code is like below.
for ($i = 0; $i <= 100; $i++) {
    $objUser = [
        "UserName" => $request["UserName"] . $i,
        "EmailAddress" => $request["EmailAddress"] . $i,
        "RoleID" => RoleEnum::ProjectManager,
        "Password" => $request["Password"],
    ];
    $RegisterResponse = $this->Register->Register($objUser);
    $Data = $RegisterResponse["Data"];
    $job = (new AccountActivationJob($Data));
    dispatch($job);
}
The above code creates 100 users, and each time a job is queued to send an email notification. I am using the default database queue.
I have a shared hosting account on GoDaddy. For some reason the CPU usage reaches 100%. Here is the screenshot.
Eventually the loop stops partway through. Below is the screenshot after 5 minutes.
My problem is that it is not able to finish creating the 100 users. I am doing this to test a sample queue implementation where multiple users send registration requests. Am I doing anything wrong?
As stated above, GoDaddy has a lot of resource limitations. From what I have heard, you can only send 100 emails an hour,
and not all at once: if it detects you are sending a lot of emails, your process gets blocked.
Instead, you can queue up the messages to be sent one every 20 or 30 seconds. That keeps resource usage within the limits, and your emails still reach the customers without any problem.
You can use the sleep function for this, or stagger the jobs with a delay, as sketched below.
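A minimal sketch of that idea, assuming AccountActivationJob uses Laravel's standard Queueable trait (which provides delay()); the 20-second spacing is just an example value:

// Inside the registration loop: give each queued mail job an increasing
// delay so the worker sends roughly one email every 20 seconds.
$job = (new AccountActivationJob($Data))->delay($i * 20);
dispatch($job);

// Or, as suggested above, simply pause the loop between dispatches:
// sleep(20);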
GoDaddy does have a limit on the resources you can use. If you go over it, it will kill your processes on SSH.
The limits are available here.
Try running the PHP process with a different nice parameter.
That's what I do when I need to run an artisan command that uses a lot of resources.
After looking into it, I found that I should move to a VPS instead of shared hosting. Here are the nice and cheap plans by GoDaddy: https://in.godaddy.com/hosting/vps-hosting

BackgroundJobs stopping job after completion

I've used delayed_job in the past. I have an old project that runs on a server where I can't upgrade from Ruby 1.8.6 to 1.8.7, and therefore can't use delayed_job, so I'm trying BackgroundJobs (Bj): http://codeforpeople.rubyforge.org/svn/bj/trunk/README
I have it working so that my job runs, but something doesn't seem right. For example, if I run the job like this:
jobs = Bj.submit "echo hi", :is_restartable => false, :limit => 1, :forever => false
Then I see the job in the bj_job table, I can see that it completed, and 'hi' appears in stdout. I also see only one job in the table, and it doesn't keep re-running.
For some reason if I do this:
jobs = Bj.submit "./script/runner ./jobs/calculate_mean_values.rb #{self.id}", :is_restartable => false, :limit => 1, :forever => false
The job still completes as expected; however, it keeps inserting new rows into the bj_job table, and the method gets run over and over until I stop my dev server. Is that how it is supposed to work?
I'm using Ruby 1.8.6 and Rails 2.1.2 and I don't have the option of upgrading. I'm using the plugin flavor of Bj.
Because I just need to run the process once after the model is saved, I have it working by using script/runner directly like this:
system " RAILS_ENV=#{RAILS_ENV} ruby #{RAILS_ROOT}/script/runner 'CompositeGrid.calculate_values(#{self.id})' & "
But I would like to know if I'm doing something wrong with BackgroundJobs.
OK, this was stupid user error. As it turns out, I had a callback that was restarting the process and creating an endless loop. After fixing the callback it is working exactly as expected.
