The embedded Tomcat server uses server.tomcat.background-processor-delay, which defaults to 10 seconds. The documentation only says: "Delay between the invocation of backgroundProcess methods." Can someone explain more clearly what this parameter means, and what an optimal value would be?
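For reference, the property is set like this in application.yml (the 10s shown is just the documented default; Spring Boot parses the value as a Duration):

server:
  tomcat:
    background-processor-delay: 10s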
Spring lists SO as the only place to ask questions on their community page, which is why I ask this rather generic question here. It may not be the best fit for SO, but, according to Spring's community overview page, there's no other adequate place to ask such questions.
I have a Spring Boot application built on Spring Cloud Gateway (version 2) which also uses an embedded Hazelcast cluster. It runs in multiple instances, which communicate via Hazelcast. Everything works fine, except under heavy load: if one instance fails, restarting it is no longer possible.
When the instance is restarted while the cluster of instances is under heavy load, it starts creating and wiring beans up to some point, after which it does nothing Spring-related anymore. Past that point, Hazelcast-generated messages are visible in the log (with root log level DEBUG), but nothing generated by Spring or the application itself.
In order to restart that one instance that failed, I need to stop the load generation, wait some 10-15 minutes, then restart the failed instance. Then the new/restarted instance starts up rather quickly, with no problems at all.
The load consists of HTTP requests that get proxied to another application, and it is such that it generates a lot of read accesses to Hazelcast's distributed storage, but very few writes.
My problem: I have no idea how to debug this. Since the HTTP endpoint never becomes available, there's no way for me to query metrics or other Actuator information.
So my question is: what tools or mechanisms can I employ to debug this problem? That is, how can I find out exactly how the boot sequence when the other instances of the Hazelcast cluster are under heavy load differs from the boot sequence when there is no load at all in the cluster? Once I have that information, the problem is narrowed down enough for me to investigate further on my own.
I didn't find a way to debug the problem, but I had an idea of what might cause it, tried it, and it turned out to be a fix.
My application was running as a Kubernetes deployment. A few beans inside the application were relying on a usable CP subsystem during their initialization. Spring's bean initialization process is by necessity sequential and blocking, to account for inter-bean dependencies.
I hypothesized that under heavy load, for whatever reason, the initialization of those beans was blocking forever. As a first experiment, to see whether that was at least the problem, I made that initialization code asynchronous, so that Spring could finish bean wiring even though the instance would be unable to perform useful work until the async part finished as well.
To my surprise, that fully fixed the problem. Spring finished bean wiring, the HZ-dependent initialization also finished rather quickly when executed asynchronously, even under high load, and the instance became usable soon after being started.
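For illustration, the change was roughly of this shape (a simplified sketch, not the actual code; CpSeedInitializer and the "startup-counter" name are made up, and a recent Hazelcast CP API is assumed):

import java.util.concurrent.CompletableFuture;

import javax.annotation.PostConstruct;

import org.springframework.stereotype.Component;

import com.hazelcast.core.HazelcastInstance;

@Component
class CpSeedInitializer {

    private final HazelcastInstance hazelcast;

    CpSeedInitializer(HazelcastInstance hazelcast) {
        this.hazelcast = hazelcast;
    }

    @PostConstruct
    void init() {
        // Kick the CP-dependent work off the calling thread so that Spring's
        // sequential, blocking bean wiring can complete even when the CP
        // subsystem is slow to answer under cluster load. The instance is not
        // fully usable until this finishes, but at least it starts.
        CompletableFuture.runAsync(this::initializeCpState);
    }

    private void initializeCpState() {
        // Example CP access; this call blocks until the CP subsystem responds.
        hazelcast.getCPSubsystem()
                 .getAtomicLong("startup-counter")
                 .incrementAndGet();
    }
}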
I didn't have the time to dig deeper to find out what the precise failure mechanism was. What I believe might have been the problem is the interaction between HZ and K8s. K8s-based discovery works via a K8s service, and a pod/instance isn't added to the service until it becomes healthy. If a bean inside the application blocks initialization, the instance is never added to the service, so discovery never finds the new/restarted instance. I don't know what effect this might have on the HZ cluster's inner workings.
I have this property in my Spring Boot application:
server:
  connection-timeout: 12000
I get this warning:
Deprecated: Each server behaves differently. Use server specific properties instead.
Gradle: org.springframework.boot:spring-boot-autoconfigure:2.6.8 (spring-boot-autoconfigure-2.6.8.jar)
Is there some better configuration property that I can use?
I'm not even sure why you receive a deprecation warning.
According to the documentation, from Spring Boot version 2.3 onwards this property has been removed, not merely deprecated.
As you can read here, there are some other properties you can use instead, depending on the server that runs your Spring Boot application.
server.tomcat.connection-timeout should be used if Tomcat is your running server (see the example below).
server.netty.connection-timeout should be used if Netty is used.
server.jetty.connection-idle-timeout should be used if Jetty is used.
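For example, with the default embedded Tomcat, the configuration from the question would become (carrying over the same 12000 ms value):

server:
  tomcat:
    connection-timeout: 12000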
Basically, each server has its own implementation, so you must read your server's documentation to see what it allows and how it behaves. There might be slight differences between how one server interprets connection-timeout and how another server interprets a similar configuration.
This, I think, is the reason Spring decided to move from the general connection-timeout property to server-specific configuration. Another important reason is that some servers may not even have this configuration available, in which case you'd have a general property configured in your Spring Boot application that the server running the application can't even respect.
So now you have specific properties for specific servers: you can be sure up front whether the configuration is available in your server, and you can read the server's documentation to understand exactly what the behavior will be.
Although this setting is deprecated, we can still get a timeout.
According to the official documentation, we can use @Transactional(timeout = 1) in the controller to do the trick:
https://www.baeldung.com/spring-rest-timeout
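A rough sketch of what that looks like (OrderController, Order, and OrderRepository are hypothetical placeholders; the timeout is in seconds and bounds the database transaction, not the HTTP connection):

import java.util.List;

import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class OrderController {

    // Hypothetical Spring Data repository for a hypothetical Order entity.
    private final OrderRepository repository;

    OrderController(OrderRepository repository) {
        this.repository = repository;
    }

    @Transactional(timeout = 1) // seconds; applies to the DB transaction
    @GetMapping("/orders")
    List<Order> orders() {
        return repository.findAll();
    }
}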
Our application is an enterprise application that contains multiple web applications. Each web application contains one or more Camel contexts. Recently we have been exploring the option of using HawtIO for monitoring and administrative purposes.
We are using Camel (Fuse) version 2.12.0.redhat-610379 with WildFly 8.1 (dev environment; prod is WAS 8.5). I have tried HawtIO web app versions ranging from 1.4.10 to 1.4.14, and the no-slf4j version as well, but HawtIO shows a maximum of 3 Camel contexts. I have tried setting managementNamePattern as well, but still with no positive results.
If I comment out some of the listed Camel contexts, then the other ones get listed. Please note that each Camel context contains around 10 to 15 routes, and there are around 30 endpoints (Spring beans).
However, I am able to find the unlisted Camel contexts in the JMX dashboard under org.apache.camel. Kindly let me know of any workaround, or whether I am missing something in the configuration. Each of my Camel contexts references multiple route contexts.
Not sure if you still need to know this, but what you may need to do is increase the "Max Collection Size" in the HawtIO preferences, under Jolokia. HawtIO grabs everything and then appears to filter on the client side, so if you have a lot of MBeans, you won't see everything (it only fetches the first 500 entries by default).
I had a similar issue - but while I was seeing all the camel contexts, I was not seeing all the routes, which was the big issue for me.
It defaults to 500. I increased it to 5000, which was enough for me. You may wish to try fiddling with that yourself, and see if it makes a difference.
I have 2 applications on the same JVM, each using a different Spring application context configuration. Every time I try to run both of them together, the last one's configuration always overrides the previous one, so the Spring context is loaded with the last configuration. Any advice on how to overcome this, so that each application runs with its own configuration without being affected by the other Spring context?
I have a simple JSP/Servlet maven application which allows a user to upload an archive file. The application will then unzip the archive which contains XML files, and parse them using basic SAX parsing. It will generate an in-memory representation of these files, and write them to a Neo4J Graph Database, currently in embedded mode.
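For reference, the pipeline is essentially of this shape (a stripped-down sketch using only JDK classes; the handler and the "node" element are invented for the sketch, not my actual schema):

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ArchiveImporter {

    // Walk the uploaded archive and SAX-parse every XML entry in it.
    public void importArchive(InputStream upload) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        try (ZipInputStream zip = new ZipInputStream(upload)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                if (!entry.isDirectory() && entry.getName().endsWith(".xml")) {
                    // Copy the entry out so the parser cannot close the zip stream.
                    byte[] xml = zip.readAllBytes();
                    SAXParser parser = factory.newSAXParser();
                    parser.parse(new ByteArrayInputStream(xml), new NodeHandler());
                }
            }
        }
    }

    // Minimal handler; the real one builds the in-memory representation.
    static class NodeHandler extends DefaultHandler {
        @Override
        public void startElement(String uri, String localName,
                                 String qName, Attributes attributes) {
            if ("node".equals(qName)) {
                System.out.println("node id=" + attributes.getValue("id"));
            }
        }
    }
}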
During development I used GlassFish v3, but with production in sight the request was made to move from GlassFish to Tomcat, and so I did. Apart from a few small issues with Tomcat forcing me to add JSF dependencies despite the fact that I'm not using any JSF, there is one big issue I have with Tomcat at the moment.
The largest test file I have takes about 8 seconds to upload and parse on GlassFish v3. On subsequent runs it takes about 2 seconds less, due to the fact that I don't clean up the uploaded file (yet).
The same file on Tomcat 7 takes about 90 seconds to upload and parse the first time. On subsequent runs it takes about 20 seconds less, presumably for the same reason.
In any case, there's a factor-of-10 difference in performance. I'm a little surprised, since I thought that using Tomcat would actually increase the speed because it is more lightweight than GlassFish, given that I'm not really using the advanced functionality GlassFish provides.
Has anyone encountered a similar issue, and what did you do to resolve this? Is this even resolvable, or is it due to the way that Tomcat works...
EDIT: The difference appears to be in the code section that is responsible for writing the in-memory representation of the files to the actual database... No idea why though...
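For context, a minimal sketch of what such an embedded-Neo4j write section typically looks like (this is not my actual code; the Neo4j 1.x embedded API is assumed, and ParsedNode is a made-up placeholder for the in-memory representation):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class GraphWriter {

    private final GraphDatabaseService db = new EmbeddedGraphDatabase("data/graph.db");

    public void write(Iterable<ParsedNode> model) {
        // One transaction for the whole batch; committing per node
        // is dramatically slower in embedded mode.
        Transaction tx = db.beginTx();
        try {
            for (ParsedNode parsed : model) {
                Node node = db.createNode();
                node.setProperty("name", parsed.getName());
            }
            tx.success();
        } finally {
            tx.finish();
        }
    }
}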
I could not find a comparison of Tomcat with GlassFish, but yes, the new GlassFish versions are very lightweight and perform very well; I have experienced the same. Running an application server instead of Tomcat is no longer a huge administration and hardware waste (and you can use lightweight EJB 3 and 3.1 if you like). GlassFish installations can be very small if you only select the necessary modules.
Check this page. It compares JBoss, GlassFish, and Resin:
http://hwellmann.blogspot.com/2011/06/java-ee-6-server-comparison.html
And this one compares GlassFish 3.1 and JBoss 6 & 7:
http://hwellmann.blogspot.com/2011/10/jboss-as-7-catching-up-with-java-ee-6.html