XNIO worker was not set on WebSocketDeploymentInfo, the default worker will be used - spring-boot

In my SpringBoot application logs I see the following WARNs:
UT026009: XNIO worker was not set on WebSocketDeploymentInfo, the default worker will be used
UT026010: Buffer pool was not set on WebSocketDeploymentInfo, the default pool will be used
From a Google search, it seems they relate to a configuration improvement that Undertow suggests but that appears impossible to implement.
Does anyone have any further clarifications on these, and maybe a suggestion on how to make the logs disappear since the application runs just fine?

These warnings are just a heads-up about the buffer pool configuration; they do not affect normal operation.
As suggested at https://blog.csdn.net/weixin_39841589/article/details/90582354, you can set the buffer pool explicitly:
import io.undertow.server.DefaultByteBufferPool;
import io.undertow.websockets.jsr.WebSocketDeploymentInfo;
import org.springframework.boot.web.embedded.undertow.UndertowServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.stereotype.Component;

@Component
public class CustomizationBean implements WebServerFactoryCustomizer<UndertowServletWebServerFactory> {

    @Override
    public void customize(UndertowServletWebServerFactory factory) {
        factory.addDeploymentInfoCustomizers(deploymentInfo -> {
            WebSocketDeploymentInfo webSocketDeploymentInfo = new WebSocketDeploymentInfo();
            // An explicit buffer pool (non-direct buffers, 1024-byte buffer size) silences UT026010
            webSocketDeploymentInfo.setBuffers(new DefaultByteBufferPool(false, 1024));
            deploymentInfo.addServletContextAttribute("io.undertow.websockets.jsr.WebSocketDeploymentInfo", webSocketDeploymentInfo);
        });
    }
}
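UT026009 can be silenced the same way by also supplying an explicit XNIO worker. A minimal sketch, assuming your Undertow version exposes setWorker(XnioWorker) and that the default XNIO options are acceptable (Xnio, XnioWorker and OptionMap come from org.xnio):
// Add inside the same customizer, before registering the servlet context attribute:
try {
    XnioWorker worker = Xnio.getInstance().createWorker(OptionMap.EMPTY); // XNIO defaults (assumption)
    webSocketDeploymentInfo.setWorker(worker);
} catch (IOException e) {
    throw new UncheckedIOException(e); // createWorker can throw IOException
}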

Alternatively, exclude undertow-websockets-jsr if you do not need WebSockets. In Maven:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.undertow</groupId>
            <artifactId>undertow-websockets-jsr</artifactId>
        </exclusion>
    </exclusions>
</dependency>

If you are not using WebSockets with Undertow in Spring Boot, the Gradle equivalent is:
implementation("org.springframework.boot:spring-boot-starter-undertow") {
    exclude group: "io.undertow", module: "undertow-websockets-jsr"
}

Related

Load balancing problems with Spring Cloud Kubernetes

We have Spring Boot services running in Kubernetes and are using the Spring Cloud Kubernetes load balancer functionality with RestTemplate to make calls to other Spring Boot services. One of the main reasons we have this in place is historical: previously we ran our services in EC2 using Eureka for service discovery, and after the migration we kept the Spring discovery client / client-side load balancing in place (updating dependencies etc. for it to work with the Spring Cloud Kubernetes project).
We have a problem that when one of the target pods goes down, we get multiple request failures for a period of time with java.net.NoRouteToHostException, i.e. the Spring load balancer is still trying to send to that pod.
So I have a few questions on this:
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that, the target pod list will be repaired?
Or, if not, is there some other configuration we need to add to handle this, e.g. retry / circuit breaker, etc.?
A more general question is what benefit does Spring's client-side load balancing bring with Kubernetes? Without it, our service would still be able to call other services using Kubernetes built-in service / load-balancing functionality and this should handle the issue of pods going down automatically. The Spring documentation also talks about being able to switch from POD mode to SERVICE mode (https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#loadbalancer-for-kubernetes). But isn't this service mode just what Kubernetes does automatically? I'm wondering if the simplest solution here isn't to remove the Spring Load Balancer altogether? What would we lose then?
An update on this: we had the spring-retry dependency in place, but the retry was not working as by default it only works for GETs and most of our calls are POST (but OK to call again). Adding the configuration spring.cloud.loadbalancer.retry.retryOnAllOperations: true fixed this, and hence most of these failures should be avoided by the retry using an alternative instance on the second attempt.
We have also added a RetryListener that clears the load balancer cache for the service on certain connection exceptions:
@Configuration
public class RetryConfig {

    private static final Logger logger = LoggerFactory.getLogger(RetryConfig.class);

    // Need to use the bean factory here, as LoadBalancerCacheManager can't be autowired -
    // it's set to 'autowireCandidate = false' in LoadBalancerCacheAutoConfiguration
    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public CacheClearingLoadBalancedRetryFactory cacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
        return new CacheClearingLoadBalancedRetryFactory(loadBalancerFactory);
    }

    // Extension of the default bean that defines a retry listener
    public class CacheClearingLoadBalancedRetryFactory extends BlockingLoadBalancedRetryFactory {

        public CacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
            super(loadBalancerFactory);
        }

        @Override
        public RetryListener[] createRetryListeners(String service) {
            RetryListener cacheClearingRetryListener = new RetryListener() {
                @Override
                public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) { return true; }

                @Override
                public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {}

                @Override
                public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                    logger.warn("Retry for service {} picked up exception: context {}, throwable class {}", service, context, throwable.getClass());
                    if (throwable instanceof ConnectTimeoutException || throwable instanceof NoRouteToHostException) {
                        try {
                            LoadBalancerCacheManager loadBalancerCacheManager = beanFactory.getBean(LoadBalancerCacheManager.class);
                            Cache loadBalancerCache = loadBalancerCacheManager.getCache(CachingServiceInstanceListSupplier.SERVICE_INSTANCE_CACHE_NAME);
                            if (loadBalancerCache != null) {
                                boolean result = loadBalancerCache.evictIfPresent(service);
                                logger.warn("Load Balancer Cache evictIfPresent result for service {} is {}", service, result);
                            }
                        } catch (Exception e) {
                            logger.error("Failed to clear load balancer cache", e);
                        }
                    }
                }
            };
            return new RetryListener[] { cacheClearingRetryListener };
        }
    }
}
Are there any issues with this approach? Could something like this be added to the built in functionality?
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that the target pod list will be repaired?
To resolve this issue you should use readiness and liveness probes in Kubernetes.
A readiness probe checks a health endpoint of your application at a configured interval. If the check fails, Kubernetes marks the pod as not ready to accept traffic, so no traffic goes to that pod (replica).
A liveness probe restarts the container when the application stops responding, so the pod comes back up; once the app returns a 200 response again, Kubernetes marks the pod as ready to accept traffic.
You can create a simple endpoint in the application that returns 200 or 204 as needed (see the sketch below).
Read more at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
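For instance, a trivial probe endpoint (the /healthz path is a placeholder; use whatever your probes are configured to call):
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {

    // Point the readiness/liveness probes at this path; 200 means healthy.
    @GetMapping("/healthz")
    public ResponseEntity<Void> healthz() {
        return ResponseEntity.ok().build();
    }
}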
Make sure your applications use the Kubernetes Service to talk to each other:
Application 1 > Kubernetes Service of App 2 > Application 2 pods
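Concretely, that means addressing the peer by its Service DNS name, for example (the service and path names here are placeholders):
// "app2-service" is App 2's Kubernetes Service in the "default" namespace.
RestTemplate restTemplate = new RestTemplate();
String response = restTemplate.getForObject(
        "http://app2-service.default.svc.cluster.local/api/hello", String.class);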
To enable load balancing based on the Kubernetes Service name, use the following property. The load balancer will then call the application using an address such as service-a.default.svc.cluster.local:
spring.cloud.kubernetes.loadbalancer.mode=SERVICE
The most typical way to use Spring Cloud LoadBalancer on Kubernetes is with service discovery. If you have any DiscoveryClient on your classpath, the default Spring Cloud LoadBalancer configuration uses it to check for service instances. As a result, it only chooses from instances that are up and running. All that is needed is to annotate your Spring Boot application with @EnableDiscoveryClient to enable K8s-native Service Discovery.
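For example, a minimal sketch:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Enables K8s-native service discovery so the load balancer only sees live instances.
@SpringBootApplication
@EnableDiscoveryClient
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}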
Reference: https://stackoverflow.com/a/68536834/5525824

Report metrics during shutdown of spring-boot app

I have a shutdown hook which is successfully executed, but the metrics are not reported. Any advice is appreciated! I guess the issues could be:
The StatsdMetricWriter might be disposed before the shutdown hook? How can I verify? Or is there a way to ensure the ordering of the configured singletons?
The time gap between metric generation and app shutdown is smaller than the configured delay. I tried spawning a new thread with Thread.sleep(20000), but it didn't work.
The code snippets are as follows:
public class ShutDownHook implements DisposableBean {

    @Autowired
    private MetricRegistry registry;

    @Override
    public void destroy() throws Exception {
        registry.counter("appName.deployments.count").dec();
        // Spawned a new thread here with a long sleep, with no effect
    }
}
My Metrics Configuration for dropwizard is as below:
@Bean
@ExportMetricReader
public MetricRegistryMetricReader metricsDWMetricReader() {
    return new MetricRegistryMetricReader(metricRegistry);
}

@Bean
@ExportMetricWriter
public MetricWriter metricWriter() {
    return new StatsdMetricWriter(app, host, port);
}
The reporting time delay is set as 1 sec:
spring.metrics.export.delay-millis=1000
EDIT:
The problem is as below:
DEBUG 10452 --- [pool-2-thread-1] o.s.b.a.m.statsd.StatsdMetricWriter : Failed to write metric. Exception: class java.util.concurrent.RejectedExecutionException, message: Task com.timgroup.statsd.NonBlockingUdpSender$2@1dd8867d rejected from java.util.concurrent.ThreadPoolExecutor -- it looks like the ThreadPoolExecutor is shut down before the beans are destroyed.
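One idea I have not yet tried (a sketch, untested; it assumes destruction ordering is the culprit and that the writer bean is named metricWriter, the @Bean method name above): since Spring destroys singletons in reverse dependency order, declaring an explicit dependency should keep the writer alive while the hook runs:
@Component
@DependsOn("metricWriter") // hypothetical bean name, taken from the @Bean method above
public class ShutDownHook implements DisposableBean {

    @Autowired
    private MetricRegistry registry;

    @Override
    public void destroy() {
        registry.counter("appName.deployments.count").dec();
    }
}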
Any Suggestions please?
EDIT
com.netflix.hystrix.contrib.metrics.eventstream.HystrixMetricsPoller.getCommandJson() has the following piece of code
json.writeNumberField("reportingHosts", 1); // this will get summed across all instances in a cluster
I'm not sure how/why the numbers will add up? Where can I find that logic?

Spring Boot Undertow add RequestLimitingHandler to DeploymentInfo

I am using Spring Boot with Undertow and trying to implement some limits on the number of requests Undertow will accept so as not to become overloaded under stress.
I've seen the answer to the question at Spring Boot Undertow add both blocking handler and NIO handler in the same application, and it appears promising, but I'm not clear what HttpHandler should be passed as the argument to the RequestLimitingHandler constructor.
Is there an easy way to add a RequestLimitingHandler to the UndertowEmbeddedServletContainerFactory bean, perhaps using the addDeploymentInfoCustomizers method?
Alternatively, if I look deeper into the Xnio code on which Undertow is based, there appears to be an option to set Options.WORKER_TASK_LIMIT, but upon further investigation the XnioWorker class ignores this setting after the 3.0.10.GA release and simply sets taskQueue to an unbounded LinkedBlockingQueue. Am I mistaken, or could this also be an option?
Answering my own question in case it helps others in the future. Solution is to create a new Undertow HandlerWrapper and instantiate the new RequestLimitingHandler object within the wrap() method, like so:
@Bean
public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory() {
    UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
    factory.addDeploymentInfoCustomizers(deploymentInfo -> deploymentInfo.addInitialHandlerChainWrapper(new HandlerWrapper() {
        @Override
        public HttpHandler wrap(HttpHandler handler) {
            // maxConcurrentRequests and queueSize are your configured limits;
            // requests beyond the limit queue up to queueSize, then are rejected.
            return new RequestLimitingHandler(maxConcurrentRequests, queueSize, handler);
        }
    }));
    return factory;
}

Set heartbeatintervalseconds using spring xml

I am using spring-data-cassandra v1.3.2 in my project.
Is it possible to set heartbeatIntervalSeconds using a Spring XML configuration file?
I am getting four lines of heartbeat DEBUG logs every 30 seconds in my application logs and I am not sure how to avoid them.
Unfortunately, no.
After reviewing the SD Cassandra CassandraCqlClusterParser class, it is apparent that you can specify both "local" and "remote" connection pooling options, however, neither handler handles all the Cassandra Java driver "pooling options" appropriately (such as heartbeatIntervalSeconds).
It appears several other options are missing as well: idleTimeoutSeconds, initializationExecutor, poolTimeoutMillis, and protocolVersion.
Equally unfortunate is it appears the SD Cassandra PoolOptionsFactoryBean does not support these "pooling options" either.
However, not all is lost.
While your SD Cassandra application may resolve its configuration primarily from XML, that does not preclude you from using a combination of Java config and XML.
For instance, you could use a Spring Java config class to configure your cluster and express your PoolingOptions in Java config...
@Configuration
@ImportResource("/class/path/to/cassandra/config.xml")
class CassandraConfig {

    @Bean
    PoolingOptions poolingOptions() {
        PoolingOptions poolingOptions = new PoolingOptions();
        poolingOptions.setHeartbeatIntervalSeconds(30);
        poolingOptions.setIdleTimeoutSeconds(300);
        // The driver's setter takes a HostDistance along with the limit
        poolingOptions.setMaxConnectionsPerHost(HostDistance.LOCAL, 50);
        poolingOptions.set...
        return poolingOptions;
    }

    @Bean
    CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints("..");
        cluster.setPort(1234);
        cluster.setPoolingOptions(poolingOptions());
        cluster.set...
        return cluster;
    }
}
Hope this helps.
As an FYI, you may want to upgrade to the "current" Spring Data Cassandra version, 1.4.1.RELEASE.
Sadly, the answer is no. It's not possible to configure the heartbeat interval using XML configuration. Only the following local/remote properties can be configured in PoolingOptions:
min-simultaneous-requests
max-simultaneous-requests
core-connections
max-connections
If you switch to Java-based configuration, then you're able to configure PoolingOptions by extending AbstractClusterConfiguration:
@Configuration
public class MyConfig extends AbstractClusterConfiguration {

    @Override
    protected PoolingOptions getPoolingOptions() {
        PoolingOptions poolingOptions = new PoolingOptions();
        poolingOptions.setHeartbeatIntervalSeconds(10);
        return poolingOptions;
    }
}
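If the immediate goal is just to hide the heartbeat DEBUG noise, you can also raise the log level for the driver in your logging configuration. A sketch for Logback, assuming the heartbeat messages come from the com.datastax.driver.core package:
<logger name="com.datastax.driver.core" level="INFO"/>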

how to modify tomcat8 acceptCount in spring boot

How do I modify the default Tomcat thread count using Spring Boot?
With Spring MVC I could find the Tomcat installation and modify maxProcessors and acceptCount in conf/server.xml, but in Spring Boot I can't do that.
I can't find the properties in org.apache.catalina.connector either.
First, check everything you can modify via properties: http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#common-application-properties
server.tomcat.max-threads = 0 # number of threads in protocol handler
Otherwise you will have to get your hands dirty with programmatic configuration (http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-tomcat) by providing your own TomcatEmbeddedServletContainerFactory.
acceptCount cannot be modified via the properties file; you can use the following code to modify it:
@Bean
public TomcatEmbeddedServletContainerFactory tomcatEmbeddedServletContainerFactory() {
    TomcatEmbeddedServletContainerFactory tomcatFactory = new TomcatEmbeddedServletContainerFactory();
    tomcatFactory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
        @Override
        public void customize(Connector connector) {
            // Tomcat's default NIO connector
            Http11NioProtocol handler = (Http11NioProtocol) connector.getProtocolHandler();
            // acceptCount is the backlog; the default value is 100, change it to whatever you need
            handler.setBacklog(100);
        }
    });
    return tomcatFactory;
}
In current Spring Boot versions it should be possible through the server.tomcat.accept-count application property; see: https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#server-properties
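For example, in application.properties:
server.tomcat.accept-count=100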
