Redis Throttling / Jobs MaxAttemptsExceededException - Laravel

I need to make 2 calls to a 3rd-party API for about 350K products (EANs). The limit for this API is 1,200 requests per hour. I currently use the following code to throttle the jobs according to the 3rd-party API limit:
Redis::throttle('bol_import_product_offers')
    ->allow(1)
    ->every(3)
    ->then(function () {
        $this->process();
    }, function () {
        $this->release(10);
    });
This works fine until, after some time, jobs start to fail with the exception:
MaxAttemptsExceededException: Job has been attempted too many times or
run too long. The job may have previously timed out.
I assume this has something to do with the timeout/wait/retry settings in the Horizon config file, but I can't seem to find the issue. This is my Horizon config file: https://gist.github.com/liamseys/40538aab2cf0425d83ca4e5feac4d2ff.
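No answer is preserved here, but the arithmetic alone suggests the failure mode: allow(1)->every(3) is exactly 1,200 requests per hour, and 350K products × 2 calls is 700,000 requests, i.e. roughly 24 days of continuously throttled work. In Laravel, every $this->release(10) counts as an attempt, so a job can exhaust its allowed tries long before its turn ever comes. A minimal sketch of one common workaround, assuming a Laravel version where retryUntil() is available; the class name and the 24-hour deadline are illustrative, not from the question:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Redis;

class ImportProductOffers implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // When retryUntil() is defined it takes precedence over a fixed
    // attempt count, so the repeated release() calls made while the
    // job waits for the throttle no longer trip MaxAttemptsExceeded.
    public function retryUntil()
    {
        return now()->addHours(24); // illustrative deadline
    }

    public function handle()
    {
        Redis::throttle('bol_import_product_offers')
            ->allow(1)
            ->every(3)
            ->then(function () {
                $this->process();
            }, function () {
                $this->release(10);
            });
    }

    private function process()
    {
        // ... the two 3rd-party API calls from the question ...
    }
}

Separately, check that the queue connection's retry_after stays longer than the worker/Horizon timeout; the docs warn that the same exception appears when those two overlap.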

Related

Discord.js SendPing to Host

I'm developing a bot in Discord.js. Because I use Lavalink, I hosted the Lavalink server on a free host, and to keep it online I need to ping it constantly. Is there any way to make my bot (which runs on my VPS) send a ping at a regular interval to the URL/host where my Lavalink server lives? If you have any solution I will be grateful!
You have two options:
Using UptimeRobot (fastest way)
UptimeRobot is an online service that can make HTTP requests every 5 minutes.
Very simple and fast to use; see more here.
Making the request from your bot's VPS
Installing node-fetch
Type this in your terminal:
npm i node-fetch
Making the request
Insert this wherever you want in the bot code.
const fetch = require('node-fetch');

// Interval between requests in milliseconds; 300000 = 5 minutes.
const intervalTime = 300000;
const lavalinkURL = 'insert here the lavalink process url';

setInterval(() => {
    // Swallow network errors so a temporary outage doesn't crash the bot.
    fetch(lavalinkURL).catch(() => {});
}, intervalTime);

Beanstalkd Queue either fails or runs infinitely

I am using a Beanstalkd queue to deploy jobs in my Laravel 5.3 application. I use Laravel Forge to administer the server.
I have one of two scenarios that occur:
1) I set a max number of attempts, which causes every job pushed to the queue to be placed on the failed jobs table - even if its task completes successfully - resulting in this exception on the jobs table:
Illuminate\Queue\MaxAttemptsExceededException: A queued job has been attempted too many times. The job may have previously timed out
And this in my error log:
Pheanstalk\Exception\ServerException: Server reported NOT_FOUND
2) If I remove the max attempts, the jobs run successfully but in an infinite loop.
I am assuming that I am not removing these jobs from the queue properly, and so in scenario #1 the job is failing because it just wants to keep running.
My controller pushes my job to the queue like this:
Queue::push('App\Jobs\UpdateOutlookContact#handle', ['userId' => $cs->user_id, 'memberId' => $member->id, 'connection' => $connection]);
Here is the handle function of my job:
public function handle($job, $data)
{
    Log::info('Outlook syncMember Job dispatched');

    $outlook = new Outlook();
    $outlook->syncMember($data['userId'], $data['memberId'], $data['connection']);

    $job->delete();
}
Here is a picture of my queue configuration from the Laravel Forge admin panel. I am currently using the default queue. If "Tries" is changed to ANY, the jobs succeed but run in an infinite loop.
How do I properly remove these jobs from the queue?
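No answer is preserved here, but one hedged observation: with the 'Class#method' push style, Laravel never deletes the job for you, and Beanstalkd re-reserves any job that is not acknowledged within its ttr, which would fit both the NOT_FOUND error (deleting a job Beanstalkd already gave up on) and the infinite loop. A sketch of the self-deleting job style Laravel 5.3 also supports, where the worker removes the job automatically once handle() returns without throwing; the App\Outlook import is an assumption:

<?php

namespace App\Jobs;

use App\Outlook; // assumption: wherever the Outlook helper actually lives
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Log;

class UpdateOutlookContact implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    private $userId;
    private $memberId;
    private $connection;

    public function __construct($userId, $memberId, $connection)
    {
        $this->userId = $userId;
        $this->memberId = $memberId;
        $this->connection = $connection;
    }

    public function handle()
    {
        Log::info('Outlook syncMember Job dispatched');

        $outlook = new Outlook();
        $outlook->syncMember($this->userId, $this->memberId, $this->connection);

        // No $job->delete() here: the worker deletes the job itself
        // when handle() finishes cleanly, so "Tries" can stay at 1.
    }
}

The controller side then becomes dispatch(new UpdateOutlookContact($cs->user_id, $member->id, $connection)); and the manual delete/retry bookkeeping disappears.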

How to ping a server using Ajax in Laravel every 5 minutes?

I have an HTML table full of server IP addresses, and I want to ping them every 5 minutes to check if each server is alive (and eventually highlight table rows depending on whether the server is dead or alive).
Currently I'm using Ajax with a 5 minute interval which calls a method in my controller:
var checkSims = function() {
    $.ajax({
        type: "GET",
        url: '/checkSimStatus',
        success: function(msg) {
            onlineSims = msg['online'];
            offlineSims = msg['offline'];
            console.log(onlineSims);
            console.log(offlineSims);
        },
        error: function() {
            console.log('false');
        }
    });
}
var interval = 1000 * 60 * 5; // every 5 minutes, per the requirement above
setInterval(checkSims, interval);
However, this is not asynchronous, and while this controller method is pinging the IPs the webserver cannot serve other requests.
I've read about Laravel's queue system but I'm not sure this would suit me as I need one specific page to trigger the job, and would need to use JS to highlight table rows.
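One hedged sketch of the queue idea mentioned above, assuming a hypothetical PingServers job that does the slow pinging and writes an ['online' => [...], 'offline' => [...]] array to the cache: the controller then never blocks, because it only triggers the job and reads the last cached result.

<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Cache;

class SimStatusController extends Controller
{
    // Illustrative controller behind the GET /checkSimStatus route.
    public function checkSimStatus()
    {
        // Kick off a background refresh; the HTTP request never waits on pings.
        dispatch(new \App\Jobs\PingServers());

        // Serve whatever the most recent completed job cached.
        $status = Cache::get('sim_status', ['online' => [], 'offline' => []]);

        return response()->json($status);
    }
}

The existing Ajax code keeps working unchanged, since the JSON carries the same online/offline keys the success handler already reads.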
@f7n, if you have done it with Ajax, how will it work when the page with the HTML table of IP addresses is not open in a browser?
I think you must use a cron job on the server. Also, if you use a VPS (Linux) or something else, you can write a simple bash shell script and run it as a daemon. For example, create a simple PHP script that parses (grabs) the page with the HTML table of IP addresses and pings each server, then loop it like below:
#!/bin/bash
echo "Press [CTRL+C] to stop.."
while true
do
    php parse_and_ping.php
    sleep 300
done
sleep 300 means it will run every 5 minutes. Just save it as a .sh file (run_shell.sh) and run it in a terminal or as a daemon on the Linux server.
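The answer never shows parse_and_ping.php; a minimal sketch of what it could contain, with the IP list hard-coded for illustration (in practice it would be parsed from the page or read from the database, as the answer suggests):

<?php
// parse_and_ping.php - sketch; the IP list and output file are illustrative.

$ips = ['192.0.2.10', '192.0.2.11'];

$results = [];
foreach ($ips as $ip) {
    // One ICMP echo with a 2-second timeout; exit code 0 means the host answered.
    exec(sprintf('ping -c 1 -W 2 %s > /dev/null 2>&1', escapeshellarg($ip)), $out, $code);
    $results[$ip] = ($code === 0) ? 'online' : 'offline';
}

// Persist the results wherever the page reads them from (DB, cache, a JSON file...).
file_put_contents(__DIR__ . '/sim_status.json', json_encode($results));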

Laravel liebig/cron executes the cronjob twice for same time

I am using Laravel 4.2.
I have a project requirement to send an analysis report email to all the users every Monday at 6 am.
Obviously it's a scheduled task, hence I've decided to use a cron job.
For this I've installed the liebig/cron package. The package is installed successfully. To test the email, I've added the following code in app/start/global.php:
Event::listen('cron.collectJobs', function() {
    Cron::setEnablePreventOverlapping();
    // to test the email, I am setting the day of week to today i.e. Tuesday
    Cron::add('send analytical data', '* * * * 2', function() {
        $maildata = array('email' => 'somedomain@some.com');
        Mail::send('emails.analytics', $maildata, function($message) {
            $message->to('some_email@gmail.com', 'name of user')->subject('somedomain.com analytic report');
        });
        return null;
    }, true);
    Cron::run();
});
Also in app\config\packages\liebig\cron\config.php the key preventOverlapping is set to true.
Now, if I run it like php artisan cron:run, it sends the same email twice at the same time.
I've deployed the same code on my DigitalOcean development server (Ubuntu) and set its crontab to execute this command every minute, but it is still sending the same email twice.
Also, it is not generating a lock file in the app/storage directory; from some search results I've come to know that it creates a lock file to prevent overlapping. The directory has full permissions granted.
Does anybody know how to solve it?
Remove Cron::run().
Here's what's happening:
1) Your Cron route or cron:run command is invoked.
2) Cron fires off the cron.collectJobs event to get the list of jobs.
3) You call Cron::run() and run all the jobs.
4) Cron then calls Cron::run() itself and runs all the jobs again.
In the cron.collectJobs event you should only be making a list of jobs using Cron::add().
The reason you're not seeing a lock file is either that preventOverlapping is set to false (it's true by default), or that the jobs are running so fast you don't see it being created and deleted. The lock file only exists for the time the jobs run, which may only be milliseconds.
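For clarity, here is the listener from the question with only that change applied (the mail code itself is unchanged):

Event::listen('cron.collectJobs', function() {
    Cron::setEnablePreventOverlapping();

    Cron::add('send analytical data', '* * * * 2', function() {
        $maildata = array('email' => 'somedomain@some.com');
        Mail::send('emails.analytics', $maildata, function($message) {
            $message->to('some_email@gmail.com', 'name of user')->subject('somedomain.com analytic report');
        });
        return null;
    }, true);
    // No Cron::run() here - cron:run fires it after collecting the jobs.
});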

Node.JS Response Time

Threw Node.JS on an AWS instance and was testing the request times, got some interesting results.
I used the following for the server:
var http = require('http');

http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('Hello World');
    res.end();
}).listen(8080);
I have an average 90ms delay to this server, but the total request takes ~350+ms. Obviously a lot of time is wasted on the box. I made sure the DNS was cached prior to the test.
I did an Apache bench on the server with a concurrency of 1000 - it finished 10,000 requests in 4.3 seconds, which works out to about 0.43 milliseconds of server time per request.
UPDATE: Just for grins, I installed Apache + PHP on the same machine and did a simple "Hello World" echo and got a 92ms response time on average (two over ping).
Is there a setting somewhere that I am missing?
While Chrome Developer Tools is a good way to investigate front-end performance, it gives you a very rough estimate of actual server timings / CPU load. If you have ~350 ms total request time in dev tools, subtract from that number DNS lookup + connecting + sending + receiving, then subtract the round-trip time (90 ms?), and after that you have a first estimate. In your case I expect the actual request time to be sub-millisecond. Try to run this code on the server:
var http = require('http');

// Difference between two process.hrtime() samples, in nanoseconds.
function hrdiff(t1, t2) {
    var s = t2[0] - t1[0];
    var ns = t2[1] - t1[1];
    return s * 1e9 + ns;
}

http.createServer(function(req, res) {
    var t1 = process.hrtime();
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('Hello World');
    res.end();
    var t2 = process.hrtime();
    console.log(hrdiff(t1, t2));
}).listen(8080);
Based on the ab result you should estimate the average send+request+receive time to be at most 0.42 ms (4200 ms / 10000 req). (Did you run it on the server itself? At what concurrency?)
I absolutely hate answering my own questions, but I want to pass along what I have discovered with future readers.
tl;dr: There is something wrong with res.write(). Use express.js or res.end()
I just got through conducting a bunch of tests. I setup multiple types of Node server and mixed in things like PHP and Nginx. Here are my findings.
As stated previously, with the snippet I included above, I was losing around 250ms/request, but the Apache benchmarks did not replicate that issue. I then proceeded to do a PHP test and got results ranging from 2ms to 20ms over ping... a big difference.
This prompted some more research. I started an Nginx server and proxied Node through it, and somehow that magically changed the response from 250ms to 15ms over ping. I was on par with that PHP script, but that is a really confusing result. Usually additional hops would slow things down.
Intrigued, I made an express.js server as well - and something even more interesting happened: its response was just 2ms over ping on its own. I dug around in the source for quite a while and noticed that it lacked a res.write() call; rather, it went straight to res.end(). I started another server, removed the "Hello World" from res.write and added it to res.end, and amazingly the response was 0ms over ping.
I did some searching on this, wanted to see if it was a well-known issue, and came across this SO question, which had the exact same problem: nodejs response speed and nginx
Overall, interesting stuff. Make sure you optimize your responses and send everything at once.
Best of luck to everyone!
