Traefik adds huge overhead to requests in Kubernetes - performance

I am using Traefik as the ingress controller for my Kubernetes setup. I decided to run some performance tests for my application, but I saw a huge difference when I sent the requests through Traefik.
The test consists of sending 10K requests in parallel; the application returns the compiled result, and based on its logs it needs around 5 milliseconds to process one request. The results of the performance test are as follows:
Native application:
Execution time in milliseconds: 61062
Application on Kubernetes (without going through Traefik and just using its IP):
Execution time in milliseconds: 62337
Application on Kubernetes and using Traefik:
Execution time in milliseconds: 159499
My question is: why does this huge difference exist, and is there a way to reduce it (other than adding more replicas)?
I am using these YAML files to set up Traefik:
https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-rbac.yaml
https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-ds.yaml
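One thing I have been wondering about, as an untested sketch only (this is not taken from the linked manifests, and the exact flag names are my assumption based on the Traefik v1.7 docs), is whether trimming Traefik's own per-request work, such as verbose logging, would reduce the overhead, e.g. by overriding the DaemonSet container args:

```yaml
# Hypothetical excerpt of traefik-ds.yaml, tuned for a load test.
# The flag names below are assumptions based on the Traefik v1.7 documentation.
containers:
  - name: traefik
    image: traefik:v1.7
    args:
      - --kubernetes                # keep the Kubernetes provider enabled
      - --logLevel=ERROR            # skip per-request info/debug logging
      - --maxidleconnsperhost=250   # reuse more keep-alive connections to backends
      # the access log is intentionally left disabled (no --accesslog flag)
```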

I tried Ambassador as my API gateway in Kubernetes, and its result was much better than Traefik's and very close to using the IP of the container directly (63394 milliseconds). Obviously, Traefik is not as good as people think.

Related

Configure timing of opening ports in Spring-Boot application

Question:
Is there an option within Spring or its embedded servlet container to open ports only when Spring is ready to handle traffic?
Situation:
In the current setup I use a Spring Boot application running in Google Cloud Run.
Circumstances:
Cloud Run does not support liveness/readiness probes; it considers an open port as "application ready".
Cloud Run sends requests to the container although Spring is not ready to handle them.
Spring starts its servlet container and opens its ports while still spinning up its beans.
Problem:
Traffic to an unready application results in a lot of HTTP 429 status codes.
This affects:
new deployments
scaling capabilities of Cloud Run
My desire:
Configure the Spring/servlet container to delay opening ports until the application is actually ready.
Delaying the opening of the ports until the application is ready would ease much pain without interfering too much with the existing code base.
Any alternatives not causing too much pain?
Things I found and considered not viable:
Using native-image is not an option, as it is considered experimental and consumes more RAM at compile time than our deployment pipeline agents are allowed to allocate (max 8 GB vs. the needed 13 GB).
Another answer I found, "readiness check for google cloud run - how?", does not satisfy my needs as far as I can see, since Spring Boot startup time is still slow; that's why my initial idea was to delay opening the ports.
I did not have time to test the following, but one thing I stumbled upon is a blog post about using multiple processes within a container. Though it goes against recommended container principles, it seems viable until Cloud Run supports probes of any kind.
As you are well aware, "Cloud Run currently does not have a readiness/liveness check to avoid sending requests to unready applications", so I would say there is not much that can be done on Cloud Run's side except:
Try to optimise the Spring Boot app as per the docs.
Make a heavier entrypoint in the Cloud Run service that takes care of more setup tasks. This Stack Overflow thread mentions how "a 'heavier' entrypoint will help post-deploy responsiveness, at the cost of slower cold-starts" (this is the most relevant solution from a Cloud Run perspective and outlines the issue correctly).
Run multiple processes in a container in Cloud Run, as you mentioned.
This question seems directed more at Spring Boot specifically, and I found an article with a similar requirement.
However, if you absolutely need the app to be ready to serve when requests come in, there is an alternative to Cloud Run: Google Kubernetes Engine (GKE), which does support readiness/liveness probes.
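For illustration, a minimal sketch of what such a readiness probe could look like on GKE, assuming the Spring Boot app exposes the Actuator health endpoint on port 8080 (both the path and the port are assumptions, not details from your question):

```yaml
# Hypothetical Deployment excerpt: Kubernetes only routes traffic to the pod
# once the readiness probe succeeds, i.e. once Spring Boot has fully started.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app                        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
        - name: app
          image: example/spring-app:latest    # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness   # requires Spring Boot Actuator
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
```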

503 error on server load tests on Wildfly server on Jelastic

I have an app deployed on a WildFly server on the Jelastic PaaS. The app functions normally with a few users. I'm trying to do some load tests using JMeter, in this case calling a REST API 300 times in 1 second.
This leads to an error rate of around 60%, all of the errors being 503 (Service Temporarily Unavailable). I don't know what I have to tweak in the environment to get rid of those errors. I'm pretty sure it's not my app's fault, since it is not heavy and I get the same results even when load testing the index page.
The topology of the environment is simply one WildFly node (with 20 cloudlets) and a Postgres database with 20 cloudlets. I had fancier topologies, but in trying to narrow the problem down I cut out the load balancer (NGINX) and the additional WildFly nodes.
Requests via the shared load balancer (i.e. when your internet-facing node does not have a public IP) face strict QoS limits to protect platform stability. The whole point of the shared load balancer is that it's shared by many users, so you can't take 100% of its resources for yourself.
With a public IP, your traffic goes straight from the internet to your node and therefore those QoS limits are not needed or applicable.
As stated in the documentation, you need a public IP for production workloads (a load test should be considered 'production' in this context).
I don't know what things I have to tweak in the environment to get rid of those errors
We don't know either, and as your question doesn't provide a sufficient level of detail we can only come up with generic suggestions, such as:
Check the WildFly log for any suspicious entries. HTTP 503 is a server-side error, so it should be logged along with a stack trace that will lead you to the root cause.
Check whether the WildFly instance(s) have enough headroom to operate in terms of CPU, RAM, etc.; this can be done using e.g. the JMeter PerfMon Plugin.
Check JVM- and WildFly-specific JMX metrics using JVisualVM or the aforementioned JMeter PerfMon Plugin.
Double-check the Undertow subsystem configuration for any connection/request/rate-limiting entries.
Use a profiler such as JProfiler or YourKit to see which functions are slowest, which objects are largest, and so on.

Difference in load test results

What can be the reason for a difference in the results of a load test run at different times with the SAME bandwidth?
If I run the load test at midnight the response times are much better, and during the day they are really bad. Thanks for your help.
Maybe during the day the application is being used by real users and your artificial load is being added to the natural load?
Another option is that the network is busier during the day, so the channel bandwidth is fully utilized.
The load testing tool's own metrics don't tell the full story; you can only make assumptions by looking at the TCP connect time metric.
If you have an APM system in place, you can assess what's going on with the system during the day and at night and detect the factors which are impacting the response time. If you don't, you can set up your own using e.g. the JMeter PerfMon Plugin.
Adding to Dmitri's note, there could be multiple reasons/causes for the difference in results:
As Dmitri pointed out, check your APM tool to see the server health while the test is executing.
Do you integrate with any downstream applications? Do these applications reside in a stable, dedicated performance-testing environment, or are they live production environments? If it is the latter, then you should expect response latency during the daytime.
Authentication/token validation: gateways are usually configured to validate incoming bearer tokens. When you execute during the morning, there is a possibility that your gateway is busy serving other real users' requests (assuming these are production AD/Okta/PingID servers).

GKE and RPS - 'Usage is at capacity' - and performance issues

We have a GKE cluster with Ingress (kubernetes.io/ingress.class: "gce") where one backend is serving our production site.
The cluster is a regional one with 3 zones (autoscaling enabled).
The backend serving the production site is a Varnish server running as a Deployment with a single replica. Behind Varnish there are multiple Nginx/PHP pods running under a HorizontalPodAutoscaler.
The performance of the site is slow. We have noticed in the GCP console that all traffic is routed to only one backend, and there is only 1/1 healthy endpoint in one zone.
We are getting an exclamation mark next to the serving backend with the messages 'Usage is at capacity, max = 1' and 'Backend utilization: 0%'. The other backend in the second zone has no endpoint configured, and there is no third backend in the third zone.
Initially we were getting a lot of 5xx responses from the backend at around 80 RPS, so we turned on the CDN via a BackendConfig.
This reduced the 5xx responses and brought the backend traffic down to around 9 RPS, with around 83% of requests being served from the CDN.
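For reference, a minimal sketch of the kind of BackendConfig we mean (the object names, ports, and Service wiring below are placeholders rather than our actual manifests):

```yaml
# Sketch: BackendConfig enabling Cloud CDN for the Ingress backend,
# referenced from the Service via the backend-config annotation.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: varnish-backendconfig       # placeholder name
spec:
  cdn:
    enabled: true
---
apiVersion: v1
kind: Service
metadata:
  name: varnish
  annotations:
    cloud.google.com/backend-config: '{"default": "varnish-backendconfig"}'
spec:
  selector:
    app: varnish
  ports:
    - port: 80
      targetPort: 8080              # assumed Varnish listen port
```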
We are trying to figure out whether it is possible to improve our backend utilization, as serving 80 RPS from one Varnish server with many pods behind it should clearly be easily achievable. We cannot find any underperforming pod (Varnish itself or Nginx/PHP) in this scenario.
Is GKE/GCP throttling the backend/endpoint to support only 1 RPS?
Is there any way to increase the RPS per endpoint and to increase the number of endpoints to at least one per zone (one idea we have is sketched below)?
Is there any documentation available that explains how to scale such an architecture on GKE/GCP?
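For example, would running one Varnish replica per zone, roughly as sketched below, give us at least one endpoint per zone? This is only an idea we have not tried; the topologySpreadConstraints stanza and all names are assumptions.

```yaml
# Idea only: spread Varnish replicas across zones so that each zonal backend
# gets at least one endpoint. Names, labels, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  replicas: 3                       # one per zone in a 3-zone regional cluster
  selector:
    matchLabels:
      app: varnish
  template:
    metadata:
      labels:
        app: varnish
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: varnish
      containers:
        - name: varnish
          image: varnish:6          # placeholder image/tag
          ports:
            - containerPort: 8080   # assumed listen port
```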

How can I increase SSL performance with Elastic Beanstalk

I really like Elastic Beanstalk and managed to get my webapp (Spring MVC, Hibernate, ...) up and running using SSL on a Tomcat7 64-bit container.
A major concern to me is performance (I thought using the Amazon cloud would help here).
To benchmark my server performance I am using blitz.io (which uses the Amazon cloud to have multiple clients access my web service simultaneously).
My very first simple performance test already got me wondering:
I benchmarked a health-check URL (which basically just prints "I'm ok").
Without SSL: Looks fine.
13 Hits/s with a response time of 9ms
230 Hits/s with a response time of 8ms
With SSL: Not so fine.
13 Hits/s with a response time of 44ms (Ok, this should be a bit larger due to encryption overhead)
30 Hits/s with a response time of 3.6s!
Going higher left me with connection timeouts (timeout = 10s).
I tried using a larger EC2 instance in the background with essentially the same result.
If I am not mistaken, the load balancer in front of the EC2 instances serves as the endpoint for SSL encryption. How do I increase its performance?
Can this be done with Elastic Beanstalk, or do I need to set up my own load balancer, etc.?
I also did some tests using Heroku (albeit with a slightly different technology stack, Play! vs. Spring MVC). There I also saw the increased response time, but it stayed mostly constant. I am assuming they are using quite performant SSL endpoints. How do I get that for Elastic Beanstalk?
It seems my testing method was flawed.
Amazon's Elastic Load Balancers seem to go up to 10k SSL requests per second.
See this great writeup:
http://blog.mattheworiordan.com/post/24620577877/part-2-how-elastic-are-amazon-elastic-load-balancers
SSL requires a handshake before a secure transmission channel is opened. Once the handshake, which involves several round trips, is done, the data is transmitted.
When you are just hitting a page using a load tester, it does the handshake for each and every hit; it does not reuse an already established session.
That's not how browsers behave. A browser will do the handshake once and then reuse the established encrypted session for all subsequent requests for a certain duration.
So I would not be too worried about the results. I suggest you try a tool like www.browsermob.com to see how long a full page with many images, JS, CSS, etc. takes to load over SSL vs. non-SSL. That will be a fair comparison.
Does that help?
