How to deal with excessive requests on Heroku

I am experiencing a traffic spike once every 60-90 minutes that slows my Heroku app to a crawl for the duration of the spike. New Relic reports response times of 20-50 seconds per request, with 99% of that time spent in the Heroku router queuing requests. The request count jumps from an average of around 50-100 rpm up to 400-500 rpm.
Looking at the logs, it appears to be a scraping bot or spider trying to access a lot of content pages on the site. However, the traffic is not all coming from a single IP.
What can I do about it?
My sysadmin / devops skills are pretty minimal.
Guy

Have your host-based firewall throttle those requests. Depending on your setup, you can also add Nginx into the mix, which can throttle requests too.
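If you do put Nginx in front, a per-IP rate limit is one way to do the throttling. A minimal sketch (the zone name, rate, and backend address are illustrative, not tuned for your traffic):

```nginx
# Goes in the http block of nginx.conf.
# Throttle each client IP to roughly 1 request/second, allowing short bursts.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=60r/m;

server {
    listen 80;

    location / {
        limit_req zone=per_ip burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;  # placeholder: wherever your app actually listens
    }
}
```

Requests over the limit are rejected (503 by default), so the spike never reaches the dynos.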

Related

GKE and RPS - 'Usage is at capacity' - and performance issues

We have a GKE cluster with Ingress (kubernetes.io/ingress.class: "gce") where one backend is serving our production site.
The cluster is a regional one with 3 zones (autoscaling enabled).
The backend serving the production site is a Varnish server running as a Deployment with a single replica. Behind Varnish there are multiple Nginx/PHP pods running under a HorizontalPodAutoscaler.
The performance of the site is slow. Using the GCP console, we have noticed that all traffic is routed to only one backend, and there is only 1/1 healthy endpoint, in one zone.
We are getting an exclamation mark next to the serving backend with the messages 'Usage is at capacity, max = 1' and 'Backend utilization: 0%'. The other backend, in the second zone, has no endpoint configured, and there is no third backend in the third zone.
Initially we were getting a lot of 5xx responses from the backend at around 80 RPS, so we turned on CDN via BackendConfig (sketched below).
This reduced the 5xx responses and brought the backend down to around 9 RPS, with around 83% of requests now served from the CDN.
We are trying to figure out whether it is possible to improve our backend utilization, as serving 80 RPS from one Varnish server with many pods behind it should clearly be achievable. We cannot find any underperforming pod (Varnish itself or Nginx/PHP) in this scenario.
Is GKE/GCP throttling the backend/endpoint to only support 1 RPS?
Is there any way to increase the RPS per endpoint and increase the number of endpoints, at least one per zone?
Is there any documentation available that explains how to scale such an architecture on GKE/GCP?
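For reference, the BackendConfig CDN setup described above typically looks something like the sketch below. The resource names are placeholders, and the NEG (container-native load balancing) annotation is an assumption about how the Service is exposed, not something stated in the question:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: varnish-backendconfig     # placeholder name
spec:
  cdn:
    enabled: true                 # the CDN switch mentioned above
---
apiVersion: v1
kind: Service
metadata:
  name: varnish                   # placeholder name
  annotations:
    cloud.google.com/backend-config: '{"default": "varnish-backendconfig"}'
    # Container-native load balancing (NEGs) gives the load balancer one
    # endpoint per pod instead of one per node/zone - an assumption here.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: varnish
  ports:
    - port: 80
      targetPort: 80
```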

Facing an issue with Magento 2.2.9 version

My site goes down every 2-3 days. It doesn't show any error up front; the browser keeps loading for a very long time, but no data appears. When I check the Apache error logs, I find that the MaxRequestWorkers limit has been exhausted. Over the last 10 days I have been increasing that limit, which stretched the interval to about 5 days, but the site still goes down. The site was launched 45 days ago and ran perfectly for 30 days, and we have not observed any spike in traffic. The site is hosted on AWS on a t2.2xlarge instance.
Do you use many filters for layered navigation? When bots hit it while you're using SQL search, it can exceed the maximum number of database connections and lock things up, over and over. That's one possible area to look at. I had this issue and had to block all bad bots in robots.txt. Check mostly for Chinese bots and block them by IP in .htaccess or the firewall, and tune robots.txt to set a crawl delay of 10 for bots. Connect your site to Cloudflare and tune things to disallow huge hits. In general, it's mostly Chinese bots that don't respect the rules and robots.txt, so personally I blocked all of China.
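A rough sketch of the blocking described above. The IP range is a documentation placeholder, not a real bot network, and Crawl-delay only affects bots that choose to respect robots.txt:

```
# robots.txt - ask well-behaved crawlers to slow down
User-agent: *
Crawl-delay: 10
```

```apacheconf
# .htaccess (Apache 2.4) - deny a misbehaving IP range outright
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
</RequireAll>
```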

Notification System - Socket.io or Ajax?

I'm using Laravel 5 and I want to create a notification system for my (web) project. What I want to do is notify the user of new notifications such as:
another user starts following him,
another user writes on his wall,
another user sends him a message, etc,
(possibly by highlighting an icon in the header with a drop-down menu, like the one on Stack Overflow).
I found the new tutorial on Laracasts, Real-time Laravel with Socket.io, where something similar is achieved using Node, Redis and Socket.io.
If I choose Socket.io and I have 5000 users online, I assume I will have to maintain 5000 connections and 5000 broadcasts plus the notifications, so it will generate a large number of requests. And I need to start a connection for every user on login, in the master blade - is that true?
Is that a bad way of doing it? I also think the same thing could be achieved with Ajax requests. Should I avoid making too many continuous Ajax requests?
I want to ask whether Socket.io is a good approach for such a system, or whether it would be better to use Ajax requests every 5 seconds instead. Or is there a better alternative way of doing it? Pusher could be an alternative, but I think a free option is a better fit in my case.
A few thoughts:
Websockets and Socket.io are two different things.
Socket.io might use Websockets and it might fall back to AJAX (among other options).
Websockets are more web-friendly and resource-efficient, but they require work as far as coding and setup are concerned.
Also, using SSL with Websockets in production is quite important for many reasons, and some browsers require that the SSL certificate be valid... So there could be a price to pay.
Websockets sometimes fail to connect even when supported by the browser (that's one reason using SSL is recommended)... So writing an AJAX fallback for legacy or connectivity issues means that coding Websockets usually doesn't replace the AJAX code.
5000 users polling every 5 seconds means 1000 new connections and requests per second. Some apps can't handle 1000 requests per second. It isn't always the case, but it is a common enough issue.
The more users you have, the closer your AJAX polling gets to acting like a DoS attack.
Websockets, on the other hand, are persistent: there are no new connections to set up, which is a big resource saving, especially considering TCP/IP's slow start feature (yes, it's a feature, not a bug).
Existing clients shouldn't experience a DoS even when new clients are refused (though server design might affect this).
A Heroku dyno should be able to handle 5000 Websocket connections and still have room for more, while still answering regular HTTP requests.
On the other hand, I think Heroku imposes a limit on active requests per second and/or a backlog limit per dyno (~50 requests each), meaning that if more than a certain number of requests are waiting for a first response or for your application to accept the connection, new requests will be refused automatically... So you have to make sure you have no more than about 100 new requests at a time. For 1000 requests per second, you need your concurrency to allow for 100 simultaneous requests at 10 ms per request as a minimal performance state... This might be easy on your local machine, but when network latency kicks in it's quite hard to achieve.
This means that an application that runs comfortably on one Heroku dyno with Websockets would quite likely require a number of dynos when using AJAX.
These are just thoughts of things you might consider when choosing your approach, no matter what gem or framework you use to achieve your approach.
Outsourcing parts of your application, such as push notifications, would require other considerations, such as scalability management (what resources are you saving on?) vs. price, etc.
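As a rough sketch of the Websocket approach (Socket.io on Node; the event names, room scheme and "register" handshake are made up for illustration, not part of any particular tutorial):

```typescript
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // One persistent connection per logged-in user; the client sends its user id
  // after authenticating (a hypothetical handshake - replace with real auth).
  socket.on("register", (userId: string) => {
    socket.join(`user:${userId}`);
  });
});

// Called from the application when something notifiable happens,
// e.g. another user starts following, writes on a wall, sends a message.
function notifyUser(userId: string, payload: { type: string; message: string }) {
  io.to(`user:${userId}`).emit("notification", payload);
}

httpServer.listen(3000);
```

The client keeps a single connection open and listens for "notification" events instead of polling every few seconds, which is the resource difference described above.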

Performance with several Web processes running

I'm in development on a Rails app and I am confused by what I'm seeing. I am new to this, so I may be misinterpreting the information. When I run one web process I get good results, but when I increase the number of web processes I do not get the results I expect. I am trying to calculate how many I will need to run in production so I can determine my costs.
Based on New Relic, I have response times of 40-60 ms per request at 3,000 requests per minute (about 50 requests per second) on one Heroku dyno. Things are working just fine and the web processes are not even being pushed. Some responses are timing out at 1 second on Blitz, but I expect that's because I'm pushing as much as I can through one dyno.
Now I try cranking up the dynos, first to 10, then to 50. I rush with Blitz again and get the same results as above. With 50 dynos running, I blitz the website with 250 concurrent users and get response times of 2 or 3 seconds. New Relic reports the same traffic as with one dyno, 3,000 requests per minute with 60 ms request times, while Blitz is seeing 3 second response times and a maximum of 50 to 60 rps.
In the logs I can see the activity split nicely between the different web processes. I am only testing against the home page, with no external service or database calls, and I'm not using caching.
I don't understand why a single web process can easily handle up to 60 requests per second (per Blitz), but increasing to 10 or even 50 web processes gives no additional performance. I ran the Blitz rushes several times, ramping up concurrent users.
Anyone have any ideas about what's happening?
I ended up removing the app server, Unicorn, from my app and adding it back in again. Now I'm getting 225 hits per second with 250 concurrent users on one dyno, and running multiple dynos seems to work just fine as well. It must have been an error in setting up the app server, though I never did track down the exact mistake.
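For anyone hitting something similar: on Heroku the usual lever is the app server's worker count per dyno. A typical Unicorn setup looks roughly like this (a sketch, not the poster's actual configuration; WEB_CONCURRENCY is the conventional environment variable, not something from the question):

```ruby
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)  # several workers per dyno
timeout 30
preload_app true
```

```
# Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
```

With several workers per dyno, each dyno can serve requests in parallel instead of serializing them.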

How many dynos are required to host a static website on Heroku?

I want to host a static website on Heroku, but I'm not sure how many dynos to start off with.
This page, https://devcenter.heroku.com/articles/dyno-requests, says that the number of requests a dyno can serve depends on the language and framework used. But I have also read somewhere that 1 dyno only handles one request at a time.
I'm a little confused here: should 1 web dyno be enough to host a static website with very small traffic (<1000 views/month, <10/hour)? And how would you go about estimating the need for additional dynos as traffic starts to increase?
Hope I worded my question correctly. Would really appreciate your input, thanks in advance!
A little miffed since I had a perfectly valid answer deleted but here's another attempt.
Heroku dynos are single threaded, so they can deal with a single request at a time. If you had a dynamic page (PHP, Ruby, etc.), you would look at how long a page takes to respond at the server; say it took 250 ms to respond, then a single dyno could deal with 4 requests a second. Adding more dynos increases concurrency, NOT performance. So with 2 dynos, in this scenario, you would be able to deal with 8 requests per second.
Since you're only talking about static pages, their response time should be much faster than this. The best way to identify whether you need more is to look at your Heroku log output and see whether you have sustained levels of the 'queue' value; that means the dynos are unable to keep up and requests are being queued for processing.
Since most HTTP 1.1 clients will create two TCP connections to the web server when requesting resources, I have a hunch you'll see better performance for a single client if you start two dynos, so the client's pipelined resource requests can be served in parallel as well.
You'll have to decide if it is worth the extra money for the (potentially slight) improved performance of a single client.
If you ever anticipate having multiple clients requesting information at once, then you'll probably want more than two dynos, just to make sure at least one is readily available for additional clients.
In this situation, I would stay with one dyno: the first one is free, while the second puts you over the monthly minimum and starts to trigger costs.
But you should also realize that with one dyno on Heroku, the app will go to sleep if it hasn't been accessed recently (I think this is around 30 minutes). In that case, it can take 5-10 seconds to wake up again, which can give your users a very slow initial experience.
There are web services that will ping your site, testing for its response and keeping it awake - http://www.wekkars.com/ for example.
