Laravel application stuck using Octane after a certain number of requests

I'm currently upgrading an application to use Octane. At a glance I thought this was working great until I used K6 to do some load testing.
It might be worth mentioning at this point that I'm using Docker.
When running a 60-second test with 20 VUs I can see all the requests hitting (via CLI logs); then at about 99% the logs stop and the test finishes with request timeout errors (99% success rate).
If I try to visit the application locally, the requests stay in a constant pending state (they never terminate).
If I enter the container and try to interact with the application, it simply does nothing (like the requests, constantly hanging). I can't even exit the container at this point, and killing the container fails too; I have to restart Docker.
If I cut the VUs down to 10, the test finishes successfully. If I run the test again, it dies at about the halfway point.
It's as if the application dies once it hits a certain number of requests.
Also note that I have run these tests many times before installing Octane and they ran fine.
Has anyone else with Octane experience run into the above?
Regards

Related

Heroku Webhook Fails Sometimes

I'm creating a chatbot with Dialogflow, and I have a webhook hosted on Heroku (so that I can use Python scripts). The webhook works fine most of the time. However, when I haven't used it in a while, it will always fail on the first use with a request timeout. Has anyone else come across this issue? Is there a way to wake up the web server before running the script I have written?
Heroku's free dynos will sleep after 30 minutes of inactivity.
Preventing them from sleeping is easy. You need to use any of their paid plans.
See https://www.heroku.com/pricing
Once you use a Hobby dyno, your app will no longer sleep and you shouldn't get request timeouts.
Alternatively, you can benchmark what's taking so long to boot your app. With a faster boot time, the first request would still be slow but wouldn't time out.
Heroku times out requests after 30 seconds.
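If you stay on the free tier, a common workaround is to send a throwaway warm-up request before the real call, so the sleeping dyno has time to boot. A minimal sketch in Ruby (the webhook URL below is a placeholder, and the generous timeouts are an assumption sized to Heroku's 30-second window):

```ruby
require "net/http"
require "uri"

# Warm-up sketch: send one throwaway request before the real call,
# giving a sleeping free dyno time to boot. Returns the HTTP status
# code as a string, or nil if the request timed out.
def warm_up(url, open_timeout: 5, read_timeout: 30)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port,
                  use_ssl: uri.scheme == "https",
                  open_timeout: open_timeout,
                  read_timeout: read_timeout) do |http|
    http.get(uri.request_uri).code
  end
rescue Net::OpenTimeout, Net::ReadTimeout
  nil # the dyno may still be booting; the caller can retry the real request
end

# warm_up("https://your-webhook.example.herokuapp.com/") # placeholder URL
```

Even if the warm-up itself times out, the boot it triggered usually means the real request that follows lands on an awake dyno.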

JMeter response time decreases after the first run of test plan

I have a test plan set up which I am using on my web application. It is pretty simple: a user logs in and then navigates through some of the pages. Everything's working fine except that whenever I run the test plan for the first time (say, the first time after restarting the web application server), the average response times recorded are around 18000 ms, but in subsequent runs they are always around 3000 ms until I restart the server. I just want to know why this is happening. Pardon me, I am a newbie to this; thanks in advance.
You can start by excluding parts of the test plan and trying again. If the response time does not decrease, then focus on your web application server's thread pool size. If it is very small and your JMeter test plan needs more threads than that, the application server has to create new ones. If the response time is still high after you increase the minimum thread pool size on the app server, then you need to look at what your test plan actually does. By the way, I'd like to have a look at your test plan if you share it.

Delayed_job going into a busy loop when lodging second task

I am running delayed_job for a few background services, all of which, until recently, ran in isolation, e.g. send an email, write a report, etc.
I now have a need for one delayed_job, as its last step, to lodge another delayed_job.
1. delay.deploy() - when delayed_job runs this, it triggers a deploy action, the last step of which is to call ...
2. delay.update_status() - when delayed_job runs this job, it checks the status of the deploy we started. If the deploy is still progressing, we call delay.update_status() again; if the deploy has stopped, we write the final deploy status to a db record.
Step 1 works fine - after 5 seconds, delayed_job fires up the deploy, which starts the deployment, and then calls delay.update_status().
But here,
instead of update_status() starting in 5 seconds, delayed_job goes into a busy loop, firing off a bunch of update_status calls and looping really hard without pause.
I can see the logs filling up with all these calls, and the server slows down until the end condition for update_status is reached (the deploy has eventually succeeded or failed), and things go quiet again.
Am I using Delayed_Job::delay() incorrectly? Am I missing a basic tenet of this use case?
OK, it turns out this is "expected behaviour": if you are already in code running for a delayed_job and you call .delay() again without specifying a delay, it will run immediately. You need to add the run_at parameter:
delay(queue: :deploy, run_at: 10.seconds.from_now).check_status
See the discussion on Google Groups.
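The busy loop is easier to see if you model the queue: a job whose run_at is not in the future is immediately eligible, so a job that re-enqueues itself without run_at gives the worker something runnable on every pass. A self-contained toy sketch (this is not delayed_job itself, just an illustration of the scheduling rule):

```ruby
# Toy model of a delayed_job-style queue: a job whose run_at is not in
# the future is immediately runnable, which is what produces the busy
# loop when a running job re-enqueues itself without run_at.
Job = Struct.new(:name, :run_at)

class ToyQueue
  def initialize
    @jobs = []
  end

  # Mirrors delayed_job's default: no run_at means "now".
  def enqueue(name, run_at: Time.now)
    @jobs << Job.new(name, run_at)
  end

  # Jobs whose run_at has passed are eligible for the worker.
  def runnable(now = Time.now)
    @jobs.select { |job| job.run_at <= now }.map(&:name)
  end
end

queue = ToyQueue.new
queue.enqueue("update_status")                        # no run_at: runnable at once
queue.enqueue("update_status", run_at: Time.now + 10) # deferred 10 seconds

queue.runnable # => ["update_status"]  (only the un-deferred copy is due)
```

With run_at: 10.seconds.from_now (ActiveSupport) on every re-enqueue, each update_status check only becomes runnable after the pause, so the worker sleeps between polls instead of spinning.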

Upgrade process on Heroku?

If I update an application running on Heroku using git push and this application is running on multiple dynos - how is the upgrade process run by Heroku?
All dynos at the same time?
One after another?
...?
In other words: Will there be a down-time of my "cluster", or will there be a small time-frame where different versions of my app are running in parallel, or ...?
Well, I can't speak to the internal state, but what I have experienced is:
1. Code push completes.
2. Code is compiled (the slug is compiled).
3. After that, all dynos get the latest code and are restarted (a restart takes up to 30 seconds or so, and during this time all requests are queued).
So there will be no downtime, as all requests are queued during the restart process, and I don't think multiple versions of your code will be running after the deployment.
Everyone says there's 'no downtime' when updating a Heroku app, but for your app this may not be true.
I've recently worked on a reasonably sized Rails app that takes at least 25 seconds to start, and often fails to start within the 30 seconds Heroku allows before returning errors to your clients.
During this entire time, your users are waiting for something to happen. 30 seconds is a long time, and they may not be patient enough to wait.
Someone once told me that if you have more than one dyno, they are restarted individually to give you zero downtime. This is not true: Heroku stops all dynos and then starts all dynos.
At no time will there be two versions of your app running on Heroku.

How can I run something in a thread with passenger?

I'm using Phusion Passenger with my nginx to deploy rails/sinatra applications, and I'm currently having a problem.
I want to run a class that checks for new submissions to reddit.com every 30 seconds. But since Passenger shuts down the application after a period of idle time, it won't keep checking.
Yes, I've tried to set passenger_pool_idle_time to 0, but it still shuts it down.
If you want more details, see the application at github
Thanks in advance.
You could use cron to call into your server every so often to make sure it's still running. What may be happening is that Passenger starts an initial process, then forks it for each worker process it needs later. After a while, it kills the initial process (thinking it has spawned all the children it will ever need), so setting it not to do that might fix it.
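A sketch of the cron approach: request the app once a minute so Passenger never sees it as idle. The /keepalive route and the use of curl are assumptions; any cheap endpoint in your app will do.

```
# hypothetical crontab entry: hit the app every minute so the
# Passenger application process is never idle long enough to be shut down
* * * * * curl -fsS http://localhost/keepalive > /dev/null 2>&1
```

Passenger also provides the passenger_min_instances directive for nginx, which keeps a minimum number of application processes alive and avoids the external ping entirely.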
