How do I make use of I/O threads? - quarkus

According to the article https://quarkus.io/blog/resteasy-reactive-smart-dispatch/, I should be able to use I/O threads by "just" annotating methods with @NonBlocking.
When using the latest Quarkus quickstarts and modifying the getting-started example:
#Path("/hello")
public class GreetingResource {
#Inject
GreetingService service;
#GET
#Produces(MediaType.TEXT_PLAIN)
#Path("/greeting/{name}")
#Blocking
public String greeting(#PathParam String name) {
System.out.println( "greeting, isWorker: " + ( (io.vertx.core.impl.VertxThread)Thread.currentThread() ).isWorker() );
return service.greeting(name);
}
#GET
#Produces(MediaType.TEXT_PLAIN)
#NonBlocking
public String hello() {
System.out.println( "hello, isWorker: " + ( (io.vertx.core.impl.VertxThread)Thread.currentThread() ).isWorker() );
return "hello";
}
}
I would expect to get an I/O thread for the hello method. However, this is the result:
2021-12-28 14:07:17,990 INFO [io.quarkus] (Quarkus Main Thread) getting-started 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.6.1.Final) started in 0.543s. Listening on: http://localhost:8080
2021-12-28 14:07:17,990 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2021-12-28 14:07:17,991 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, resteasy, smallrye-context-propagation, vertx]
2021-12-28 14:07:17,991 INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (vert.x-worker-thread-0) Live reload total time: 0.676s
hello, isWorker: true
greeting, isWorker: true
So in both cases (judging by the vert.x-worker-thread name) it is a worker thread and not an I/O thread.
The Quarkus version is 2.6.1.Final.
What am I missing?

Checking whether the current thread is of type io.vertx.core.impl.VertxThread is not the way to check that you are on the event loop, as the same thread type can be used for the worker pool as well.
The proper way to do this check in Quarkus is:
var onEventLoop = !io.quarkus.runtime.BlockingOperationControl.isBlockingAllowed();
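Applied to the resource above, a minimal sketch of the check inside the hello() method could look like this (the variable name is mine, not part of the quickstart):

@GET
@Produces(MediaType.TEXT_PLAIN)
@NonBlocking
public String hello() {
    // On an I/O (event-loop) thread, blocking operations are not allowed,
    // so a false result from isBlockingAllowed() means we are on the event loop.
    boolean onEventLoop = !io.quarkus.runtime.BlockingOperationControl.isBlockingAllowed();
    System.out.println("hello, onEventLoop: " + onEventLoop);
    return "hello";
}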

Related

How to force a worker thread to process ServerEndpointConfig.Configurator's 'modifyHandshake' method?

I'm modifying the WebSocket handshake by implementing ServerEndpointConfig.Configurator and overriding modifyHandshake, but the code is blocking and runs on an I/O thread.
How can I force it to run on a worker thread?
quarkus.websocket.dispatch-to-worker=true works only for @ServerEndpoint @OnOpen.
I tried annotating modifyHandshake with @Blocking, but it still runs on an I/O thread.
Expected behavior:
modifyHandshake should be invoked on a worker thread.
Actual behavior:
modifyHandshake is invoked on an I/O thread.
How to Reproduce?
public class WebSocketEndpointConfigurator extends ServerEndpointConfig.Configurator {

    @Override
    public void modifyHandshake(ServerEndpointConfig config, HandshakeRequest request, HandshakeResponse response) {
        // executing blocking code
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
WARN [io.vertx.core.impl.BlockedThreadChecker] (vertx-blocked-thread-checker) Thread Thread[vert.x-eventloop-thread-7,5,main] has been blocked for 128597346 ms, time limit is 2000 ms: io.vertx.core.VertxException: Thread blocked
Quarkus 2.4.1.Final

XNIO worker was not set on WebSocketDeploymentInfo, the default worker will be used

In my Spring Boot application logs I see the following WARNs:
UT026009: XNIO worker was not set on WebSocketDeploymentInfo, the default worker will be used
UT026010: Buffer pool was not set on WebSocketDeploymentInfo, the default pool will be used
From a Google search they seem to be related to Undertow suggesting an improvement that apparently cannot be implemented.
Does anyone have any further clarifications on these, and maybe a suggestion on how to make the logs disappear, since the application runs just fine?
It is just a heads-up about the buffer pool configuration and does not affect how the application runs.
As suggested in https://blog.csdn.net/weixin_39841589/article/details/90582354:
@Component
public class CustomizationBean implements WebServerFactoryCustomizer<UndertowServletWebServerFactory> {

    @Override
    public void customize(UndertowServletWebServerFactory factory) {
        factory.addDeploymentInfoCustomizers(deploymentInfo -> {
            WebSocketDeploymentInfo webSocketDeploymentInfo = new WebSocketDeploymentInfo();
            webSocketDeploymentInfo.setBuffers(new DefaultByteBufferPool(false, 1024));
            deploymentInfo.addServletContextAttribute("io.undertow.websockets.jsr.WebSocketDeploymentInfo", webSocketDeploymentInfo);
        });
    }
}
Alternatively, if you are not using WebSocket with Undertow in Spring Boot, you can exclude undertow-websockets-jsr.
Maven:
<exclusion><artifactId>undertow-websockets-jsr</artifactId><groupId>io.undertow</groupId></exclusion>
Gradle:
implementation("org.springframework.boot:spring-boot-starter-undertow") {
    exclude group: "io.undertow", module: "undertow-websockets-jsr"
}
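For context, a sketch of how that Maven exclusion fits into the spring-boot-starter-undertow dependency (adjust to your own POM):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.undertow</groupId>
            <artifactId>undertow-websockets-jsr</artifactId>
        </exclusion>
    </exclusions>
</dependency>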

Is there any way to know the load time for a java application?

I am trying to compare the performance of Spring Boot and Micronaut.
I have applications implemented with both frameworks and can get some JVM information with Micrometer, but I am missing the time each framework needs to load from scratch and start serving requests.
Is there any way to get it?
Thanks.
Spring Boot logs the startup time in the format:
Started {applicationName} in {time} seconds (JVM running for {jvmTime})
e.g.
2019-05-18 20:50:07.099 INFO 6904 --- [ main] c.e.demo.DemoApplication : Started DemoApplication in 2.156 seconds (JVM running for 3.164)
If you want access to the startup time programmatically in your application, you can read the JVM running time on ApplicationStartedEvent:
@Component
public class StartupListener {

    @EventListener
    public void onStartup(ApplicationStartedEvent event) {
        double startupTime = ManagementFactory.getRuntimeMXBean().getUptime() / 1000.0;
        System.out.println("Application started in: " + startupTime);
    }
}
Just to complete the answer with the Micronaut part:
@Singleton
@Requires(notEnv = Environment.TEST)
@Slf4j
public class InitialEventListener implements ApplicationEventListener<ServiceStartedEvent> {

    @Getter
    private long currentTimeMillis;

    @Async
    @Override
    public void onApplicationEvent(ServiceStartedEvent event) {
        currentTimeMillis = System.currentTimeMillis();
        log.info("ServiceStartedEvent at " + currentTimeMillis + ": " + event);
    }
}
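If you want an elapsed time rather than a raw timestamp, the JVM-uptime approach from the Spring Boot answer should also work inside a Micronaut listener; a minimal sketch (the class name is mine, assuming JVM uptime is an acceptable approximation of startup time):

@Singleton
@Requires(notEnv = Environment.TEST)
@Slf4j
public class StartupTimeListener implements ApplicationEventListener<ServiceStartedEvent> {

    @Override
    public void onApplicationEvent(ServiceStartedEvent event) {
        // JVM uptime (in seconds) at the moment the service reports it has started
        double startupSeconds = ManagementFactory.getRuntimeMXBean().getUptime() / 1000.0;
        log.info("Service started after {} seconds", startupSeconds);
    }
}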

Spring-kafka and kafka 0.10

I'm currently trying to use Kafka and spring-kafka in order to consume messages.
But I have trouble running several consumers for the same topic and have several questions:
1 - My consumers tend to disconnect after some time and have trouble reconnecting
The following WARN is raised regularly on my consumers:
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] f.b.poc.crawler.kafka.KafkaListener : Consuming {"some-stuff": "yes"} from topic [job15]
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] f.b.p.c.w.services.impl.CrawlingService : Start of crawling
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] f.b.p.c.w.services.impl.CrawlingService : Url has already been treated ==> skipping
2017-09-06 15:32:35.054 WARN 5203 --- [nListener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Auto-commit of offsets {job15-3=OffsetAndMetadata{offset=11547, metadata=''}, job15-2=OffsetAndMetadata{offset=15550, metadata=''}} failed for group group-3: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [job15-3, job15-2] for group group-3
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] s.k.l.ConcurrentMessageListenerContainer : partitions revoked:[job15-3, job15-2]
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group group-3
This causes the consumer to stop and wait for several seconds.
As mentioned in the message, I increased the consumer's session.timeout.ms to something like 30000. I still get the message.
As you can see in the provided logs, the disconnection occurs right after a record has finished being processed, so well before 30 s of inactivity.
2 - Two consumer applications receive the same message REALLY often
While looking at my consumers' logs I saw that they tend to process the same message. I understand Kafka is at-least-once, but I never thought I would encounter this much duplication.
Fortunately I use Redis, but I have probably misunderstood some tuning/properties I need to set.
THE CODE
Note: I'm using ConcurrentMessageListenerContainer with auto-commit=true but run with 1 Thread. I just start several instances of the same application because the consumer uses services that aren't thread-safe.
KafkaContext.java
@Slf4j
@Configuration
@EnableConfigurationProperties(value = KafkaConfig.class)
class KafkaContext {

    @Bean(destroyMethod = "stop")
    public ConcurrentMessageListenerContainer kafkaInListener(IKafkaListener listener, KafkaConfig config) {
        final ContainerProperties containerProperties =
                new ContainerProperties(config.getIn().getTopic());
        containerProperties.setMessageListener(listener);
        final DefaultKafkaConsumerFactory<Integer, String> defaultKafkaConsumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerConfigs(config));
        final ConcurrentMessageListenerContainer messageListenerContainer =
                new ConcurrentMessageListenerContainer<>(defaultKafkaConsumerFactory, containerProperties);
        messageListenerContainer.setConcurrency(config.getConcurrency());
        messageListenerContainer.setAutoStartup(false);
        return messageListenerContainer;
    }

    private Map<String, Object> consumerConfigs(KafkaConfig config) {
        final String kafkaHost = config.getHost() + ":" + config.getPort();
        log.info("Crawler_Worker connecting to kafka at {} with consumerGroup {}", kafkaHost, config.getIn().getGroupId());
        final Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaHost);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, config.getIn().getGroupId());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JacksonNextSerializer.class);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
        return props;
    }
}
Listener
@Slf4j
@Component
class KafkaListener implements IKafkaListener {

    private final ICrawlingService crawlingService;

    @Autowired
    public KafkaListener(ICrawlingService crawlingService) {
        this.crawlingService = crawlingService;
    }

    @Override
    public void onMessage(ConsumerRecord<Integer, Next> consumerRecord) {
        log.info("Consuming {} from topic [{}]", JSONObject.wrap(consumerRecord.value()), consumerRecord.topic());
        crawlingService.apply(consumerRecord.value());
    }
}
The main issue here is that your consumer group is continuously being rebalanced. You are right about increasing session.timeout.ms, but I don't see that setting in your configuration. Try removing:
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
and setting:
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
You can increase MAX_POLL_RECORDS_CONFIG to get better throughput when communicating with the brokers, but if you process messages in a single thread it is safer to keep this value low.
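Put together, the consumerConfigs method from the question would then look roughly like this (a sketch; the values are starting points to tune rather than definitive settings):

private Map<String, Object> consumerConfigs(KafkaConfig config) {
    final String kafkaHost = config.getHost() + ":" + config.getPort();
    final Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaHost);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, config.getIn().getGroupId());
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JacksonNextSerializer.class);
    // Fewer records per poll keeps each poll/process cycle short, so the
    // consumer calls poll() again well within max.poll.interval.ms.
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
    // Give the group coordinator more time before declaring the consumer dead.
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
    // max.poll.interval.ms is left at its default instead of lowering it to 30000.
    return props;
}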

Report metrics during shutdown of spring-boot app

I have a shutdown hook which is executed successfully, but the metric is not reported. Any advice is appreciated! I guess the possible issues are:
1. StatsDMetricWriter might be disposed before the shutdown hook runs? How can I verify this? Or is there a way to ensure the ordering of the configured singletons?
2. The time gap between metric generation and app shutdown is smaller than the configured delay. I tried spawning a new thread with Thread.sleep(20000), but it didn't work.
The code snippets are as follows:
public class ShutDownHook implements DisposableBean {

    @Autowired
    private MetricRegistry registry;

    @Override
    public void destroy() throws Exception {
        registry.counter("appName.deployments.count").dec();
        // Spawned a new thread here with a long sleep, with no effect
    }
}
My metrics configuration for Dropwizard is as below:
@Bean
@ExportMetricReader
public MetricRegistryMetricReader metricsDWMetricReader() {
    return new MetricRegistryMetricReader(metricRegistry);
}

@Bean
@ExportMetricWriter
public MetricWriter metricWriter() {
    return new StatsdMetricWriter(app, host, port);
}
The reporting time delay is set as 1 sec:
spring.metrics.export.delay-millis=1000
EDIT:
The problem is as below:
DEBUG 10452 --- [pool-2-thread-1] o.s.b.a.m.statsd.StatsdMetricWriter : Failed to write metric. Exception: class java.util.concurrent.RejectedExecutionException, message: Task com.timgroup.statsd.NonBlockingUdpSender$2#1dd8867d rejected from java.util.concurrent.ThreadPoolExecutor -- looks like ThreadPoolExecutor is shutdown before the beans are shutdown.
Any suggestions, please?
EDIT
com.netflix.hystrix.contrib.metrics.eventstream.HystrixMetricsPoller.getCommandJson() has the following piece of code
json.writeNumberField("reportingHosts", 1); // this will get summed across all instances in a cluster
I'm not sure how or why the numbers would add up. Where can I find that logic?
