GKE and RPS - 'Usage is at capacity' - and performance issues

We have a GKE cluster with Ingress (kubernetes.io/ingress.class: "gce") where one backend is serving our production site.
The cluster is a regional one with 3 zones (autoscaling enabled).
The backend serving the production site is a Varnish server running as a Deployment with a single replica. Behind Varnish there are multiple Nginx/PHP pods running under a HorizontalPodAutoscaler.
The performance of the site is slow. We have noticed in the GCP console that all traffic is routed to only one backend, and there is only 1/1 healthy endpoint in one zone.
We are getting an exclamation mark next to the serving backend with the messages 'Usage is at capacity, max = 1' and 'Backend utilization: 0%'. The other backend in the second zone has no endpoint configured, and there is no third backend in the third zone at all.
Initially we were getting a lot of 5xx responses from the backend at around 80 RPS, so we turned on CDN via a BackendConfig.
This reduced the 5xx responses and brought the backend down to around 9 RPS, with around 83% of requests now served from the CDN.
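For reference, enabling Cloud CDN through a BackendConfig attached to the Service looks roughly like the sketch below (a minimal sketch, not our exact manifest; the names, labels and ports are placeholders):

```yaml
# Minimal sketch of enabling Cloud CDN for a GKE Service via BackendConfig.
# Resource names, labels and ports are placeholders.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: varnish-backendconfig
spec:
  cdn:
    enabled: true
    cachePolicy:
      includeHost: true
      includeProtocol: true
      includeQueryString: false
---
apiVersion: v1
kind: Service
metadata:
  name: varnish
  annotations:
    # Attach the BackendConfig to the port exposed by this Service.
    cloud.google.com/backend-config: '{"default": "varnish-backendconfig"}'
spec:
  selector:
    app: varnish
  ports:
  - port: 80
    targetPort: 8080
```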
We are trying to figure out whether it is possible to improve our backend utilization, as serving 80 RPS from one Varnish server with many pods behind it should clearly be easily achievable. We cannot find any underperforming pod (Varnish itself or Nginx/PHP) in this scenario.
Is GKE/GCP throttling the backend/endpoint to support only 1 RPS?
Is there any way to increase the RPS per endpoint and to increase the number of endpoints, to at least one per zone?
Is there any documentation available that explains how to scale such an architecture on GKE/GCP?
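One thing we are considering (described in the GKE docs as container-native load balancing) is annotating the Service so the Ingress uses network endpoint groups (NEGs), with one endpoint per pod spread across zones, instead of a single instance-group backend. A rough sketch, with placeholder names and ports:

```yaml
# Rough sketch: enable container-native load balancing (NEGs) for the Service
# behind the GCE Ingress, so every pod becomes its own load balancer endpoint
# across the zones of the regional cluster. Names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: varnish
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: varnish
  ports:
  - port: 80
    targetPort: 8080
```

Our understanding is that endpoints only appear in a zone where a pod is actually running, so we would presumably also need to scale the Varnish Deployment above a single replica to get an endpoint in every zone.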

Related

Kubernetes | Multiple pods - performance problem

I'm using a Kubernetes cluster for a web app, but I'm running into a problem when the pods start to scale.
More pods -> slower app (every click takes longer).
From my point of view, there is a problem with caches. I'm trying to solve it with a volume or persistent volume that all pods share, but the outcome is still the same: it seems like every pod wants to create its own cache.
Is there any solution other than redesigning the code?
For cache issues, have you considered:
Ingress controllers like nginx that can cache static content and deliver it straight from the cache (see the sketch after this list)? https://medium.com/@vdboor/using-nginx-ingress-as-a-static-cache-91bc27be04a1
A CDN, maybe, if the cached content is not private or dynamic in nature?
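If you go the ingress-nginx route, the setup can look roughly like the sketch below (the ConfigMap and Ingress names, namespace, host, cache zone name, sizes and durations are made-up examples; http-snippet and configuration-snippet are ingress-nginx controller settings):

```yaml
# Rough sketch of proxy caching with the ingress-nginx controller.
# Cache zone name, sizes and durations are arbitrary example values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Define a shared cache zone at the nginx http level.
  http-snippet: |
    proxy_cache_path /tmp/static-cache levels=1:2 keys_zone=static-cache:10m
                     max_size=1g inactive=60m use_temp_path=off;
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Use the cache zone for responses served through this Ingress.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 200 10m;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```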
With increasing pods, increasing times doesn't sound to me like a cache issue, or at least not the cache alone. The web server is playing a big part, or the load balancer and/or the firewall sitting in front is capping the bandwidth. The round trip from browser to pod and back to the browser should be the same whether you have 1 or 100 pods, provided there is no network latency. In your case, increased traffic is slowing the connection speed. I have had similar issues with the network capping bandwidth in front of the pods.

How to deal with excessive requests on heroku

I am experiencing a once-per-60-90-minute spike in traffic that's causing my Heroku app to slow to a crawl for the duration of the spike. New Relic is reporting response times of 20-50 seconds per request, with 99% of that down to the Heroku router queueing up requests. The request count goes from an average of around 50-100 rpm up to 400-500 rpm.
Looking at the logs, it looks to me like a scraping bot or spider trying to access a lot of content pages on the site. However it's not all coming from a single IP.
What can I do about it?
My sysadmin / devops skills are pretty minimal.
Guy
Have your host-based firewall throttle those requests. Depending on your setup, you can also add Nginx into the mix, which can throttle requests too.

Why is the latency of my GAE app serving static files so high?

I was checking the performance of my Go application on GAE, and I thought that the response time for a static file was quite high (183ms). Is it? Why is it? What can I do about it?
64.103.25.105 - - [07/Feb/2013:04:10:03 -0800] "GET /css/bootstrap-responsive.css
HTTP/1.1" 200 21752 - "Go http package" "example.com" ms=183 cpu_ms=0
"Regular" 200 ms seems on the high side of things for static files. I serve a static version of the same "bootstrap-responsive.css" from my application and I can see two types of answer times:
50-100ms (most of the time)
150-500ms (sometimes)
Since I have a ping roundtrip of more or less 50ms to google app engine, it seems the file is usually served within 50ms or so.
I would guess the 150-300ms response times are related to the Google App Engine frontend server being "cold cached". I presume that retrieving the file from some persistent storage involves higher latency than serving it from the frontend server cache.
I also assume that you can hit various frontend servers and get sporadic higher latencies.
Lastly, the overall perceived latency from a browser should be closely approximated by:
(TCP) ping round trip + TCP/HTTP queuing/buffering at the frontend server + file-serving application time (as seen in your Google App Engine logs) + time to transfer the file.
If the frontend server is not overloaded and the file is small, the latency should be close to ping + serving time.
In my case, 50ms (ping) + 35ms (serving) = 85ms, which is quite close to the 95ms I see in my browser.
Finally, if your app is serving a lot of requests, they may get queued, introducing a delay that is not "visible" in the application logs.
For a comparison I tested a site using tools.pingdom.com
Pingdom reported a Load time of 218ms
Here was the result from the logs:
2013-02-11 22:28:26.773 /stylesheets/bootstrap.min.css 200 35ms 45kb
Another test resulted in 238ms from Pingdom and 2ms in the logs.
Therefore, I would say that your 183ms seems relatively good. There are so many factors at play:
Your location relative to the server
Is the server that is serving the resource overloaded?
You could try serving the files using a Go instance instead of App Engine's static file server. I tested this some time ago; the results were occasionally faster, but the speeds were less consistent. Response time also increased under load, due to an App Engine instance being limited to 10 concurrent requests. Not to mention you will be billed for the instance time.
Edit:
For a comparison to other cloud/CDN providers, see Cedexis's Free Country Reports.
You should try setting cache expiration on your static files.
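For the (old) Go runtime, that is done in app.yaml; a minimal sketch, where the paths and expiration values are just examples:

```yaml
# Minimal app.yaml sketch: serve /css as static files with an explicit cache
# expiration so repeat requests can be answered from caches instead of the app.
# Paths and durations are examples only.
runtime: go
api_version: go1

default_expiration: "1d"

handlers:
- url: /css
  static_dir: css
  expiration: "30d"
- url: /.*
  script: _go_app
```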

How can I increase SSL performance with Elastic Beanstalk

I really like Elastic Beanstalk and managed to get my webapp (Spring MVC, Hibernate, ...) up and running using SSL on a Tomcat7 64-bit container.
A major concern to me is performance (I thought using the Amazon cloud would help here).
To benchmark my server performance I am using blitz.io (which uses the amazon cloud to have multiple clients access my webservice simultaneously).
My very first simple performance test already got me wondering:
I benchmarked a health check url (which basically just prints "I'm ok").
Without SSL: Looks fine.
13 Hits/s with a response time of 9ms
230 Hits/s with a response time of 8ms
With SSL: Not so fine.
13 Hits/s with a response time of 44ms (Ok, this should be a bit larger due to encryption overhead)
30 Hits/s with a response time of 3.6s!
Going higher left me with connection timeouts (timeout = 10s).
I tried using a larger EC2 instance in the background with essentially the same result.
If I am not mistaken, the Load Balancer before the EC2 Instances serves as an endpoint for SSL encryption. How do I increase this performance?
Can this be done with Elastic Beanstalk? Or do I need to set up my own load balancer, etc.?
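For context, the HTTPS listener on the Beanstalk-managed ELB is configured through an .ebextensions option-settings file roughly like this (a sketch, not my exact file; the certificate ARN is a placeholder):

```yaml
# Rough .ebextensions sketch (e.g. .ebextensions/https.config): terminate SSL
# on the Beanstalk-managed classic ELB and talk plain HTTP to Tomcat behind it.
# The certificate ARN is a placeholder.
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    SSLCertificateId: arn:aws:iam::123456789012:server-certificate/my-cert
    InstancePort: 80
    InstanceProtocol: HTTP
```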
I also did some tests using Heroku (albeit with a slightly different technology stack, Play! vs. Spring MVC). Here I also saw the increased response time, but it stayed mostly constant. I am assuming they are using quite performant SSL endpoints. How do I get that for Elastic Beanstalk?
It seems my testing method was flawed.
Amazon's Elastic Load Balancers seem to go up to 10k SSL requests per second.
See this great writeup:
http://blog.mattheworiordan.com/post/24620577877/part-2-how-elastic-are-amazon-elastic-load-balancers
SSL requires a handshake before a secure transmission channel is opened. Once the handshake, which involves several round trips, is done, the data is transmitted.
When you are just hitting a page using a load tester, it is doing the handshake for each and every hit. It is not reusing an already established session.
That's not how browsers behave. A browser will do the handshake once and then reuse the open encrypted session for all subsequent requests for a certain duration.
So, I would not be very worried about the results. I suggest you try a tool like www.browsermob.com to see how long a full page with many images, JS, CSS, etc. takes to load over SSL vs non-SSL. That will be a fairer comparison.
Does that help?

What is your Health Check Settings for Elastic Load Balancer

What are your health check settings for Elastic Load Balancer? I am not really well versed in this; my goal is to find settings that make the ELB fail traffic over to the 2nd EC2 instance immediately when my 1st EC2 instance goes down. Would anyone mind sharing their configuration and knowledge?
Thanks.
James
Health check settings in ELB are important, but usually not that important.
1) ELB doesn't support active/passive application instances - only active/active.
2) If an application stops accepting connections or slows dramatically, load will automatically shift to the available / faster instances. This happens without the help of health checks.
3) Health checks prevent ELB from having to try sending a request to an instance in order to find out that it is not well. This is good because a request sent to an unhealthy back end is sacrificed (an error will be sent to the client).
4) If your health check settings are too sensitive (such as using a 1 second timeout when some percent of your requests take longer than that) then it can pull instances out of service too easily. Too much of this and your site will appear to be down from time to time.
If you are trying a scenario with multiple availability zones and only one back-end in each zone, then the health checks are more important. If there are NO healthy back-ends in a zone, ELB will try to forward requests to another zone that has at least one healthy instance. In this case, the frequency of health checks determines the failover time, so you'll want faster checks.
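As a concrete starting point, here is what a classic ELB health check might look like in CloudFormation (a sketch only; the target path, zones and numbers are illustrative and should be tuned to your app):

```yaml
# Sketch of a classic ELB health check (AWS::ElasticLoadBalancing::LoadBalancer).
# Values are illustrative: with a 10-second interval and 2 consecutive failures
# required, an unhealthy instance is pulled out in roughly 20-30 seconds.
Resources:
  WebLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones:
        - us-east-1a
        - us-east-1b
      Listeners:
        - LoadBalancerPort: "80"
          InstancePort: "80"
          Protocol: HTTP
      HealthCheck:
        Target: HTTP:80/healthcheck   # a cheap URL that still exercises the app
        Interval: "10"                # seconds between checks
        Timeout: "5"                  # must be shorter than the interval
        HealthyThreshold: "2"         # successes before marking healthy again
        UnhealthyThreshold: "2"       # failures before taking the instance out
```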

Resources