reactor.netty.ioWorkerCount different default count - spring-boot

According to the Netty documentation, the default reactor.netty.ioWorkerCount is max(4, number of cores), which seems to hold true in a local environment. I have a 6-core laptop, and the number of reactor-http-nio threads was 6.
But after deploying the Docker image to Kubernetes, we found that the reactor-http-epoll (Linux) thread count was 36. Our CPU configuration is: request 4, limit 6.
This question was also raised by #ROCKY in one of the comments on Threading model of Spring WebFlux and Reactor.
It seems like it is still unanswered.
So is there something that explains this behaviour?

I think we have found the answer. Our machine had 36 cores, but our pod was configured with 4 cores. It seems Netty was picking up the machine configuration rather than the pod configuration, so this is either a bug in Netty or something else we are missing.
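For context: reactor-netty derives that default from Runtime.getRuntime().availableProcessors(), and on JVMs without container support (added in JDK 10 and backported to JDK 8u191) that call reports the host's core count rather than the pod's cgroup CPU limit, which would explain the 36. A minimal diagnostic sketch (CpuProbe is a hypothetical class name) to confirm what the JVM sees inside the pod:

    public class CpuProbe {
        public static void main(String[] args) {
            // On a non-container-aware JVM inside a Kubernetes pod this prints
            // the host's core count (36 here), not the pod's CPU limit.
            System.out.println("availableProcessors = "
                    + Runtime.getRuntime().availableProcessors());
        }
    }

On newer JVMs, the -XX:ActiveProcessorCount flag can also pin the count the JVM reports.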
We are using Spring Boot 2.2.0.RELEASE and reactor-netty 0.9.9.
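If changing the JVM isn't an option, the worker count can also be pinned explicitly via the reactor.netty.ioWorkerCount system property, either with -Dreactor.netty.ioWorkerCount=4 on the command line or at the very top of main, before any reactor-netty class is initialized. A sketch (App stands in for the actual application class):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class App {
        public static void main(String[] args) {
            // Must run before reactor-netty initializes its event loops,
            // otherwise the default of max(4, availableProcessors) has
            // already been captured.
            System.setProperty("reactor.netty.ioWorkerCount", "4");
            SpringApplication.run(App.class, args);
        }
    }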

Related

Benefits of fully immutable containers

What is the actual benefit of restarting a container when updating its configuration, instead of updating the configuration at runtime (e.g. Spring Boot supports listening to ConfigMap changes, and Spring Cloud Config Server has a similar feature)? I can see none, actually, and there are some drawbacks, such as the need to reset TCP connections.
Unlike Spring Boot, other stacks such as Node.js, Go or Rust don't have as big an overhead when booting up. The problem with Spring Boot is that it simply takes longer than other "modern" stacks to start, because it's booting up the JVM and Tomcat. Those two technologies were here well before Docker and Kubernetes were a thing, and honestly, that's the price to pay to run Spring Boot in containers.
And what's the benefit? If you're a single developer, probably none. If you work in a team and everybody tinkers with live ConfigMaps and environment variables, it can get hairy really quickly.
Assuming you're using, for example, Terraform to manage your configurations, everybody gets a nice overview of what is going on and which values are injected where.

Spring-boot 2.2.x increased CPU

We have a Spring Boot REST application running on 3 production machines. A recent update from Spring Boot 2.1.8 to 2.2.2 has shown an initial CPU increase of at least double. This load then increases over time, whereas the old version stays steady.
I have managed to narrow this down to 2.2.x, as building with 2.1.11 is fine, but 2.2.0 shows the problem.
To give an idea of scale, the old version stays at around 6% regardless of load, whereas the new version starts at around 15% and gradually increases to over 100% after about 10 hours.
I can see the initial rise with an identical build, only changing the Spring-boot version. The application uses spring-boot-starter-web and spring-boot-starter-actuator.
Any ideas? Should I raise this over at https://github.com/spring-projects/spring-boot/issues?
This is very likely linked to a bug in Spring Framework that was fixed in Spring Framework 5.2.6 (or Spring Boot 2.2.7). There was a memory leak in the case of concurrent requests/responses with the same media type.
See the dedicated issue as well as a report sent by a developer with lots of details. Note that this happens with both MVC and WebFlux.
We've seen this issue in some of our services, and upgrading to 2.2.7 appears to have resolved it for one (stable for two weeks).
We're starting to roll this out to more services in the hope that it can be rolled out everywhere, so it might be worth trying.
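If you want to verify which versions actually end up on the classpath after the upgrade (dependency management can pin Spring Framework somewhere unexpected), a quick sketch using the standard version accessors (VersionCheck is a hypothetical class name):

    import org.springframework.boot.SpringBootVersion;
    import org.springframework.core.SpringVersion;

    public class VersionCheck {
        public static void main(String[] args) {
            // The leak was fixed in Spring Framework 5.2.6 / Spring Boot 2.2.7,
            // so both values should be at or above those releases.
            System.out.println("Spring Boot:      " + SpringBootVersion.getVersion());
            System.out.println("Spring Framework: " + SpringVersion.getVersion());
        }
    }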

spring-cloud-config refresh causes thread leaks

Dependencies:
spring-boot: 1.5.2
spring-cloud: Dalston.SR3
Using the spring-cloud series:
config, eureka, zuul, bus, kafka
I use a git webhook and the bus to auto-refresh the zuul routing config. After a week, we found that over 3000 threads had been created.
thread dump report here: report from fastthread
We figured out that every call to the XXX/bus/refresh endpoint increases the thread count by 7.
The threads added per refresh:
DiscoveryClient-0
DiscoveryClient-1
DiscoveryClient-2
...
After some debugging and tracing, I found that on refresh,
EurekaClientConfiguration#eurekaClient is called first, and then RefreshableEurekaClientConfiguration#eurekaClient.
Since these are annotated with @ConditionalOnMissingRefreshScope and @ConditionalOnRefreshScope respectively, I would expect only one of them to be invoked.
I am not sure whether this is what causes the problem, but when I removed the config parts, everything worked fine. Can anyone help? Thx!
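One way to pin down a leak like this is to count live threads by name prefix around each refresh: if DiscoveryClient-* grows by roughly 7 per /bus/refresh call and never shrinks, the replaced eureka clients are not being shut down. A minimal diagnostic sketch (ThreadLeakProbe is a hypothetical class name), using the standard thread MXBean:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadLeakProbe {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long discoveryThreads = 0;
            for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                // Count threads belonging to the suspected leaking pool.
                if (info.getThreadName().startsWith("DiscoveryClient-")) {
                    discoveryThreads++;
                }
            }
            System.out.println("Live threads: " + threads.getThreadCount());
            System.out.println("DiscoveryClient-* threads: " + discoveryThreads);
        }
    }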

Different versions in transitive dependencies in Gradle

In my project I am forced to use these packages:
com.sparkjava:spark-core:2.3, which ends up using jetty-server:9.3.2.v20150730
org.apache.spark:spark-core_2.10:1.2.0, which ends up using jetty-server:8.1.14.v20131031
Note that com.sparkjava and org.apache.spark have nothing to do with each other; funnily enough, they are both called Spark.
The issue here is that the two Jetty versions are incompatible: if I force Jetty 8.x the system crashes, and if I force Jetty 9.x it crashes as well. I get java.lang.NoClassDefFoundError: org/eclipse/jetty/server/ServerConnector in one case and java.lang.NoClassDefFoundError: org/eclipse/jetty/server/bio/SocketConnector in the other.
What am I expected to do in such a situation?
Note: I've tried to shadow Jetty, but the dependency manager resolves just one version (9.x by default, or 8.x if I force it) and then shadows that one, so it really doesn't help.
It would be exceedingly difficult to resolve this situation.
Jetty 8.1 is about 4 major versions behind Jetty 9.3, which represents many hundreds of releases of difference.
Note: Jetty versioning is [servlet_support].[major_ver].[minor_ver].
Jetty 8.x is Servlet 3.0, while Jetty 9.x is Servlet 3.1.
The connector architecture evolved tremendously in that time frame: from old-school blocking sockets in Jetty 8 to no blocking connectors at all in Jetty 9. Jetty 9 had to evolve its connectors to support TLS/1.2 features and ALPN (required to properly support HTTP/2), and its internal I/O handling to support the new Servlet 3.1 async I/O feature set.
Solution #1:
You won't be able to have both versions running in the same VM without some sort of classloader isolation, and careful configuration to ensure they don't claim the same resources (listening ports, temp files, etc.).
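To illustrate what that isolation looks like, here is a minimal sketch (class name and jar paths are hypothetical; adjust to your layout) that loads one Jetty version through a URLClassLoader with a null parent, so the other version on the application classpath is never visible to it:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Paths;

    public class IsolatedJettyDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical location of the Jetty 9 jars.
            URL[] jetty9Jars = {
                Paths.get("libs/jetty9/jetty-server-9.3.2.v20150730.jar").toUri().toURL(),
                Paths.get("libs/jetty9/jetty-util-9.3.2.v20150730.jar").toUri().toURL()
            };
            // A null parent delegates only to the bootstrap loader, so any
            // Jetty 8 classes on the main classpath cannot leak in here.
            try (URLClassLoader jetty9 = new URLClassLoader(jetty9Jars, null)) {
                Class<?> server = jetty9.loadClass("org.eclipse.jetty.server.Server");
                System.out.println("Loaded " + server.getName()
                        + " from " + server.getClassLoader());
            }
        }
    }

All interaction with the isolated Jetty would then have to go through reflection or a shared interface loaded by a common parent, which is why this approach is workable but painful.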
Solution #2:
Upgrade (or downgrade) one or the other Spark dependency until you hit a common Jetty version. (Spark_2.11 / 2.0.0 seems to support Jetty 9.2.x.)
Solution #3:
Apache Spark is open source; go submit a patch that upgrades its use of Jetty to 9.3 (this might be difficult, as Apache Spark isn't ready to use Java 8 yet, which is a requirement for Jetty 9.3).

HawtIO + Camel plugin - Multiple contexts not showing up - Limited to max 3

Our application is an enterprise application which contains multiple web applications. Each web application contains one or more Camel contexts. Recently we have been exploring the option of using HawtIO for monitoring and administrative purposes.
We are using Camel (Fuse) version 2.12.0.redhat-610379 with WildFly 8.1 (dev environment; prod is WAS 8.5). I have tried HawtIO web app versions ranging from 1.4.10 to 1.4.14, and the no-slf4j variant as well, but HawtIO shows a maximum of 3 Camel contexts only. I have tried setting managementNamePattern as well, but still no positive results.
If I comment out some of the listed Camel contexts, then the others get listed. Please note that each Camel context contains around 10 to 15 routes, and there are around 30 endpoints (Spring beans).
However, I am able to find the unlisted Camel contexts in the JMX dashboard under org.apache.camel. Kindly let me know of any workaround, or whether I am missing something in the configuration. My Camel contexts refer to multiple route contexts.
Not sure if you still need to know this, but what you may need to do is go into the HawtIO preferences, under Jolokia, and increase the "Max Collection Size". HawtIO grabs everything and then appears to filter on the client side, so if you have a lot of MBeans you won't see everything (it only fetches the first 500 entries by default).
I had a similar issue - but while I was seeing all the camel contexts, I was not seeing all the routes, which was the big issue for me.
It defaults to 500. I increased it to 5000, which was enough for me. You may wish to try fiddling with that yourself, and see if it makes a difference.
