Low multicore CPU utilization of Spring Boot and Jakarta MicroProfile - spring-boot

I did a simple performance comparison of Spring Boot versus Jakarta MicroProfile servers. The projects are available at:
https://github.com/HotswapProjects/pingperf
https://github.com/HotswapProjects/pingperf-spring-boot
The test uses Docker for simple server embedding, and JMeter as the client. Both the Spring Boot and MicroProfile servers are based on Tomcat 9 and use Tomcat's default thread pool settings (maxThreads=200). There is one problem I can't resolve: when the server is under heavy load from JMeter (50 threads; hardware: Ryzen 1600; OS: Linux), CPU utilization is only 60% (checked in htop). I also ran the tests outside Docker and there was no improvement; CPU utilization is still 60%. Is it possible to tune Tomcat 9 settings to reach 100% CPU utilization in this test, or is it a platform problem?
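For context, a minimal sketch of how the embedded Tomcat thread pool could be raised above the default in Spring Boot; the class name and the values (400/50) are illustrative assumptions, not settings verified to change the utilization described above:

```java
import org.apache.coyote.http11.Http11NioProtocol;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTuningConfig {

    // Raise the worker pool above Tomcat's default of 200; the numbers are
    // illustrative, not values verified to lift CPU utilization in this benchmark.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatCustomizer() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            Http11NioProtocol protocol = (Http11NioProtocol) connector.getProtocolHandler();
            protocol.setMaxThreads(400);
            protocol.setMinSpareThreads(50);
        });
    }
}
```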

Related

Spring Webflux cpu in a container environment

In our Kubernetes environment, our pods usually have less than 1 CPU core reserved.
Knowing that Spring WebFlux works with the concept of an event loop plus workers,
how would that work? Is it recommended that we reserve at least 1 CPU core for this pod?
If I still use WebFlux with less than 1 CPU requested in Kubernetes, will my event loop underperform?
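For illustration, a small sketch (assumptions noted in the comments) of how the JVM's visible core count feeds the default event-loop sizing; with a sub-1-core CPU request, availableProcessors() often reports 1:

```java
// Sketch: Reactor Netty typically derives its default event-loop size from the CPUs
// the JVM can see, roughly max(availableProcessors, 4) — verify for your Reactor Netty version.
public class EventLoopSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int defaultIoWorkers = Math.max(cores, 4); // mirrors the reactor.netty.ioWorkerCount default
        System.out.println("JVM sees " + cores + " core(s); default io worker count ~ " + defaultIoWorkers);
    }
}
```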

Spring boot API request limit issue

I have developed a microservice using Spring Boot and deployed it as a Docker container. When performance testing the service, I see that the maximum number of threads created for the service is 20 at any point in time, even though the number of calls made is much higher. I have even set max threads to 4000 and max connections to 10000, along with all the DB configuration, and the server has 24 cores and 64 GB RAM, but there is still no improvement. Are there any limitations on the number of calls that can be made to a microservice developed using Spring Boot, or is the issue with the Docker container? Or is this normal?
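One hypothetical diagnostic (not from the original post) is to log what the JVM actually sees inside the container, since Docker CPU limits can shrink the default sizes of thread pools:

```java
import java.util.concurrent.ForkJoinPool;

// Quick runtime check inside the container: how many CPUs does the JVM see,
// and how large is the common pool? Purely illustrative; it does not by itself
// explain the 20-thread ceiling described above.
public class RuntimeCheck {
    public static void main(String[] args) {
        System.out.println("availableProcessors = " + Runtime.getRuntime().availableProcessors());
        System.out.println("commonPool parallelism = " + ForkJoinPool.commonPool().getParallelism());
    }
}
```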

From an Application Server to Spring Boot - How Tos for Performance Tuning

Currently we have Java applications deployed on an application server (WebSphere, to be exact). To fix the common performance and memory related problems we encounter, we apply tweaks like:
Adjust the thread pool setting - to prevent waiting threads.
Adjust the application server's garbage collection behavior.
Now there is a plan to move them to containers (via Docker, using Spring Boot). So essentially they would be converted to Spring Boot apps running in Docker containers. My question is: what is the equivalent of doing #1 and #2 in this kind of setup? Is there still a way to adjust the thread pool and garbage collection, or is it done differently now? Or should this not be an issue because Docker Swarm can manage all this and scale?
Edit: for the time being, Docker Swarm will be used for managing the containers. Kubernetes is not yet in the picture.
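On the GC side, the same JVM flags still apply in a container; they are just passed through the image entrypoint or an environment variable such as JAVA_TOOL_OPTIONS. A minimal sketch, assuming you want to confirm at startup which collector and heap size the containerized JVM actually picked up:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Hypothetical startup probe: prints the collectors and max heap in effect,
// e.g. after setting JAVA_TOOL_OPTIONS="-XX:+UseG1GC -Xmx512m" on the container.
public class JvmSettingsProbe {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName());
        }
        System.out.println("Max heap (MB): " + Runtime.getRuntime().maxMemory() / (1024 * 1024));
    }
}
```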

How to achieve high TPS with Spring Boot

I am working on an application (banking) which has a TPS requirement of 100 and multiple concurrent users.
Will Spring Boot 1.x.x allow me to achieve this?
Note: I would have used Spring Boot 2.x.x which supports Reactive paradigm but there is some legacy code which I have to use and it does not work on 2.x.x.
You can hit these numbers running a Java application on any reasonable hardware. LMAX claims that the Disruptor can do over 100k TPS with 1 ms latency. Spring Boot, or Java in general, won't be the limiting factor.
What will be the problem is the business requirements. If your application has to produce complex reports from an over-utilised database located in another data centre, then just the packet round trip from California to the Netherlands is 150 ms. If your SQL queries take 30+ seconds, you are toast.
You can take a look at Tuning Tomcat For A High Throughput, Fail Fast System. It gives a good insight into what can be tuned in a standard Tomcat deployment (assuming you will use Tomcat in Spring Boot). However, it's unlikely that HTTP connections (assuming you will expose an HTTP API) will be the initial bottleneck.
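To put 100 TPS in perspective, here is a back-of-the-envelope estimate based on Little's law; the 200 ms average latency is an assumed figure, not from the question:

```java
// Little's law: threads busy on average ~ throughput (req/s) x average latency (s).
public class CapacityEstimate {
    public static void main(String[] args) {
        double targetTps = 100.0;    // requirement from the question
        double avgLatencySec = 0.2;  // assumed 200 ms per request
        double busyThreads = targetTps * avgLatencySec;
        // ~20 threads busy on average, far below Tomcat's default maxThreads of 200
        System.out.println("Threads busy on average ~ " + busyThreads);
    }
}
```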

Karaf Jetty performance issue

I'm using Apache Karaf 2.3.3 with the Jetty 9.0.7 bundles installed. Jetty is hosting a Spring MVC based application which is a RESTful web service.
When running a benchmark with Karaf + Jetty on a specific hardware setup, I get about 40k HTTP operations per second. If I use standalone Jetty (the same version), it is over 60k HTTP operations per second.
Both Jetty servers use the default configuration, and I can't see any differences. In both cases I set -Xmx to 6G, but I tried various settings. The JRE is 1.7.0_45.
CPU and memory consumption reported by the operating system (Ubuntu) is similar in both cases.
Karaf is configured to use Equinox under the covers.
Any ideas?
