I have an API like this:
@GetMapping("/async-test")
public String api2() throws InterruptedException {
    log.info("Before async");
    myService.myAsync();
    log.info("after async");
    return "nothing important";
}
And the myService.myAsync implementation is like this:
@Async
public void myAsync() {
    CompletableFuture.supplyAsync(() -> {
        try {
            log.info("before thread ....");
            Thread.sleep(3000);
            log.info("after thread ....");
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        log.info("before returning result ....");
        return "Result of the asynchronous computation";
    }, SegmentContextExecutors.newSegmentContextExecutor());
}
The issue is that the API waits 3 seconds to reply when SegmentContextExecutors.newSegmentContextExecutor() is used. If I remove the segment executor, the function runs in the background as expected, but then I lose the segment trace id for logging.
So what should be done to fix this?
Tech stack:
Spring Boot v2.7.2
Java 11
AWS X-Ray v2.11.2
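One approach worth trying, as a minimal sketch (not verified against X-Ray v2.11.2): skip the segment-context executor, capture the current trace entity on the request thread, and restore it on the worker thread. The work then runs in the background, but log statements still carry the caller's segment trace id. Note also that @Async already runs myAsync() on another thread, so the CompletableFuture wrapper may be redundant.

@Async
public void myAsync() {
    // Capture the trace entity while still on the request thread.
    Entity traceEntity = AWSXRay.getGlobalRecorder().getTraceEntity();
    CompletableFuture.supplyAsync(() -> {
        // Restore the caller's segment context on the worker thread.
        AWSXRay.getGlobalRecorder().setTraceEntity(traceEntity);
        try {
            log.info("before thread ....");
            Thread.sleep(3000);
            log.info("after thread ....");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        log.info("before returning result ....");
        return "Result of the asynchronous computation";
    }); // common pool; any plain executor works since the context is set by hand
}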
Hi, I have played a lot with the following code and have read https://github.com/quarkusio/quarkus/issues/21111.
I think I am facing a very similar issue: it works the first 4 times, then it stops working, things get stuck, and eventually this shows up:
2022-09-15 23:21:21,029 ERROR [io.sma.rea.mes.provider] (vert.x-eventloop-thread-16) SRMSG00201: Error caught while processing a message: io.vertx.core.impl.NoStackTraceThrowable: Timeout
I have seen this exact behaviour in multiple bug reports and discussion threads.
I am using quarkus-hibernate-reactive-panache + quarkus-smallrye-reactive-messaging with Kafka (v2.12).
@Incoming("words-in")
@ReactiveTransactional
public Uni<Void> storeToDB(Message<String> message) {
    return storeMetamodels(message).onItemOrFailure().invoke((v, throwable) -> {
        if (throwable == null) {
            Log.info("Successfully stored");
            message.ack();
        } else {
            Log.error(throwable, throwable);
            message.nack(throwable);
        }
    });
}
private Uni<Void> storeMetamodels(Message<String> message) {
    List<EntityMetamodel> metamodels = Lists.newArrayList();
    for (String metamodelDsl : metamodelDsls.getMetamodelDsls()) {
        try {
            EntityMetamodel metamodel = new EntityMetamodel();
            metamodel.setJsonSchema("{}");
            metamodels.add(metamodel);
        } catch (IOException e) {
            Log.error(e, e);
        }
    }
    return Panache.getSession().chain(session -> session.setBatchSize(10)
            .persistAll(metamodels.toArray((Object[]) new EntityMetamodel[metamodels.size()])));
}
NOTE: This same code works when running on RESTEasy Reactive, but I need to move the actual processing and storing to the DB away from RESTEasy, since it will be a large process and I do not want it stuck on the REST API waiting for a few minutes.
Hope some Panache or SmallRye Reactive Messaging experts can shed some light.
Could you try this approach, please?
@Inject
Mutiny.SessionFactory sf;

@Incoming("words-in")
public Uni<Void> storeToDB(Message<String> message) {
    return storeMetamodels(message).onItemOrFailure().invoke((v, throwable) -> {
        if (throwable == null) {
            Log.info("Successfully stored");
            message.ack();
        } else {
            Log.error(throwable, throwable);
            message.nack(throwable);
        }
    });
}
private Uni<Void> storeMetamodels(Message<String> message) {
    List<EntityMetamodel> metamodels = Lists.newArrayList();
    for (String metamodelDsl : metamodelDsls.getMetamodelDsls()) {
        try {
            EntityMetamodel metamodel = new EntityMetamodel();
            metamodel.setJsonSchema("{}");
            metamodels.add(metamodel);
        } catch (IOException e) {
            Log.error(e, e);
        }
    }
    return sf
            .withTransaction(session -> session
                    .setBatchSize(10)
                    .persistAll(metamodels.toArray((Object[]) new EntityMetamodel[metamodels.size()])));
}
I suspect you've hit a bug where the session doesn't get closed at the end of storeToDB. Because the session isn't closed when obtained via Panache or dependency injection, the connection stays open and you eventually hit the limit of connections that can stay open.
At the moment, using the session factory makes it easier to figure out when the session gets closed.
I have a Spring Boot Kafka consumer & producer. The consumer is expected to read messages from topic 1 one by one, process each (time consuming), write the result to another topic, and then manually commit the offset.
To avoid rebalancing, I have tried calling pause() and resume() on the Kafka listener container, but the consumer keeps running and never responds to the pause() call; I even tried it with a while loop with no success (unable to pause the consumer). KafkaListenerEndpointRegistry is autowired.
Spring Boot version = 2.6.9, spring-kafka version = 2.8.7
@KafkaListener(id = "c1", topics = "${app.topics.topic1}", containerFactory = "listenerContainerFactory1")
public void poll(ConsumerRecord<String, String> record, Acknowledgment ack) {
    log.info("Received Message by consumer of topic1: " + record.value());
    String result = process(record.value());
    producer.sendMessage(result + " topic2");
    log.info("Message sent from " + topicIn + " to " + topicOut);
    ack.acknowledge();
    log.info("Offset committed by consumer 1");
}
private String process(String value) {
    try {
        pauseConsumer();
        // Perform time intensive network IO operations
        resumeConsumer();
    } catch (InterruptedException e) {
        log.error(e.getMessage());
    }
    return value;
}

private void pauseConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isRunning()) {
        log.info("Attempting to pause consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).pause();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}

private void resumeConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isContainerPaused() || registry.getListenerContainer("c1").isPauseRequested()) {
        log.info("Attempting to resume consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).resume();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}
Am I missing something? Could someone please point me to the right way of achieving the required behaviour?
You are running the process() method on the listener thread so pause/resume will not have any effect; the pause only takes place when the listener thread exits the listener method (and after it has processed all the records received by the previous poll).
The next version (2.9), due later this month, has a new property pauseImmediate, which causes the pause to take effect after the current record is processed.
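For illustration, a minimal sketch of wiring that property, assuming spring-kafka 2.9+; the factory name listenerContainerFactory1 comes from the question, the rest is illustrative:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> listenerContainerFactory1(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Pause takes effect after the current record instead of after the whole poll batch.
    factory.getContainerProperties().setPauseImmediate(true);
    return factory;
}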
You can try it like this; this works for me:
public class KafkaConsumerRunner {

    public void run(String topicName) {
        try {
            Consumer<String, String> consumer = new KafkaConsumer<>(config);
            consumer.subscribe(Collections.singleton(topicName));
            while (true) {
                try {
                    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(80000));
                    for (TopicPartition partition : consumerRecords.partitions()) {
                        List<ConsumerRecord<String, String>> partitionRecords = consumerRecords.records(partition);
                        for (ConsumerRecord<String, String> record : partitionRecords) {
                            String kafkaEvent = record.value();
                            consumer.pause(consumer.assignment());
                            // Implement your business logic here.
                            // Once your processing is done:
                            consumer.resume(consumer.assignment());
                            try {
                                consumer.commitSync();
                            } catch (CommitFailedException e) {
                                // commit failed (e.g. rebalance); records will be redelivered
                            }
                        }
                    }
                } catch (Exception e) {
                    // log and keep polling
                }
            }
        } catch (Exception e) {
            // log fatal consumer errors
        }
    }
}
I have a DefaultMessageListenerContainer which is processing a message from the queue.
While the message is being processed, the stop and shutdown methods are called on the DefaultMessageListenerContainer. Does this close database connections?
It looks like it is closing the database connections, and hence the message being processed gets interrupted before it can finish.
I see these errors:
o.s.jdbc.support.SQLErrorCodesFactory : Error while extracting database name
Closed Connection; nested exception is java.sql.SQLRecoverableException: Closed Connection
Could these be because the DefaultMessageListenerContainer was stopped and shut down?
My code is as follows; startStopContainer is where I stop and shut down the container. I want to shut down the container only after the listener has completely processed the current message, so I added logic to detect when the listener has finished.
Is the logic below the only way, or is there a better way to determine whether the listener has finished processing? Please suggest. Thank you.
public class MyMessageConsumerFacade {

    private ConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        connectionFactory.setBrokerURL(url);
        connectionFactory.setUserName(userName);
        connectionFactory.setPassword(password);
        return connectionFactory;
    }

    @Bean
    public MessageListenerContainer listenerContainer() {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setDestinationName(queueName);
        container.setMessageListener(new MyJmsListener());
        return container;
    }
}
public class MyJmsListener implements MessageListener {

    // volatile so the stopping thread sees updates made on the listener thread
    public volatile boolean onMessageCompleted;

    public void onMessage(Message message) {
        onMessageCompleted = false;
        processMessage(message);
        onMessageCompleted = true;
    }
}
private String startStopContainer(ExecutionContext etk) {
    String response = "success";
    AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(MyMessageConsumerFacade.class);
    DefaultMessageListenerContainer myNewContainer = context.getBean(DefaultMessageListenerContainer.class);
    MyJmsListener messageJmsListener = (MyJmsListener) myNewContainer.getMessageListener();
    if (!myNewContainer.isRunning()) { // container not running
        myNewContainer.start();
    }
    // due to some business logic we need to stop the listener every 5 minutes, so sleep for 5 minutes and then stop
    try {
        Thread.sleep(300000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    if (myNewContainer.isRunning()) {
        myNewContainer.stop();
    }
    // before shutting down the container, make sure the listener processed the current message completely
    if (!messageJmsListener.onMessageCompleted) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    if (messageJmsListener.onMessageCompleted) {
        myNewContainer.shutdown();
    }
    context.close();
    return response;
}
Is there a better way than this?
No; the container knows nothing about JDBC or any connections thereto.
Stopping the container only stops the consumer(s) from receiving messages; shutDown() on the container closes the consumer(s).
Something else is closing your JDBC connection.
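If the goal is just to know when the listener has finished the in-flight message before calling shutdown(), a minimal sketch (all names illustrative) that avoids the fixed one-second sleep is to track in-flight work and wait on it with a bounded deadline:

import java.util.concurrent.atomic.AtomicInteger;
import javax.jms.Message;
import javax.jms.MessageListener;

public class MyJmsListener implements MessageListener {

    private final AtomicInteger inFlight = new AtomicInteger();

    @Override
    public void onMessage(Message message) {
        inFlight.incrementAndGet();
        try {
            processMessage(message); // existing processing logic
        } finally {
            inFlight.decrementAndGet();
        }
    }

    // Poll until all in-flight messages finish or the deadline passes.
    public boolean awaitCompletion(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (inFlight.get() > 0 && System.currentTimeMillis() < deadline) {
            Thread.sleep(100);
        }
        return inFlight.get() == 0;
    }

    private void processMessage(Message message) {
        // ... existing processing ...
    }
}

startStopContainer would then call stop(), then awaitCompletion(30_000), then shutdown().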
I created a simple client and server. The client sends RPC requests via RabbitTemplate:
template.convertSendAndReceive(...);
The server receives them and answers back:
@RabbitListener(queues = "#{queue.getName()}")
public Object handler(@Payload String key) ...
Then I made the client send RPC requests asynchronously and simultaneously (which produces a lot of concurrent RPC requests), and unexpectedly received an error:
org.springframework.amqp.AmqpResourceNotAvailableException: The channelMax limit is reached. Try later.
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:59)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createBareChannel(CachingConnectionFactory.java:1208)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.access$200(CachingConnectionFactory.java:1196)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.doCreateBareChannel(CachingConnectionFactory.java:599)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createBareChannel(CachingConnectionFactory.java:582)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:552)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getChannel(CachingConnectionFactory.java:534)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.access$1400(CachingConnectionFactory.java:99)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createChannel
The RabbitMQ client seems to create too many channels. How can I fix this?
And why does my client create so many?
Channels are cached so there should only be as many channels as there are actual RPC calls in process.
You may need to increase the channel max setting on the broker.
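For illustration, a sketch of raising the client-side limit as well; note the negotiated maximum is the lower of this value and the broker's channel_max (set in rabbitmq.conf), so the broker setting may need raising too:

@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
    // The broker's channel_max must be at least this large for it to take effect.
    cf.getRabbitConnectionFactory().setRequestedChannelMax(4096);
    return cf;
}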
EDIT
If your RPC calls are long-lived, you can reduce the time the channel is used by using the AsyncRabbitTemplate with an explicit reply queue, and avoid using the direct reply-to feature.
See the documentation.
EDIT2
Here is an example using the AsyncRabbitTemplate; it sends 1000 messages on 100 threads (and the consumer has 100 threads).
The total number of channels used was 107 - 100 for the consumers and only 7 were used for sending.
@SpringBootApplication
public class So56126654Application {

    public static void main(String[] args) {
        SpringApplication.run(So56126654Application.class, args);
    }

    @RabbitListener(queues = "so56126654", concurrency = "100")
    public String slowService(String in) throws InterruptedException {
        Thread.sleep(5_000L);
        return in.toUpperCase();
    }

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        ExecutorService exec = Executors.newFixedThreadPool(100);
        return args -> {
            System.out.println(asyncTemplate.convertSendAndReceive("foo").get());
            for (int i = 0; i < 1000; i++) {
                int n = i;
                exec.execute(() -> {
                    RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo" + n);
                    try {
                        System.out.println(future.get(10, TimeUnit.SECONDS));
                    }
                    catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        e.printStackTrace();
                    }
                    catch (ExecutionException | TimeoutException e) {
                        e.printStackTrace();
                    }
                });
            }
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(ConnectionFactory connectionFactory) {
        return new AsyncRabbitTemplate(connectionFactory, "", "so56126654", "so56126654-replies");
    }

    @Bean
    public Queue queue() {
        return new Queue("so56126654");
    }

    @Bean
    public Queue replyQueue() {
        return new Queue("so56126654-replies");
    }
}
I am successfully connecting to a local websocket server with Tyrus, but the onMessage method never gets called. I set up Fiddler as a proxy in between, and I can see that the server responds with two messages; however, they are never printed out in my code. I more or less adapted the sample code; the onOpen message is printed out.
public static void createAndConnect(String channel) {
    CountDownLatch messageLatch;
    try {
        messageLatch = new CountDownLatch(1);
        final ClientEndpointConfig cec = ClientEndpointConfig.Builder.create().build();
        ClientManager client = ClientManager.createClient();
        client.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig config) {
                System.out.println("On Open and is Open " + session.isOpen());
                session.addMessageHandler((Whole<String>) message -> {
                    System.out.println("Received message: " + message);
                    messageLatch.countDown();
                });
            }
        }, cec, new URI("ws://192.168.1.248/socket.io/1/websocket/" + channel));
        messageLatch.await(5, TimeUnit.SECONDS); // I also tried increasing the timeout to 30 sec, doesn't help
    } catch (Exception e) {
        e.printStackTrace();
    }
}
That's a known issue: the generic type of a lambda message handler is erased, so Tyrus cannot infer the message type. It will work if you rewrite the lambda as an anonymous class, or use Session#addMessageHandler(Class, MessageHandler) (with that overload you can use lambdas).
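For example, a sketch of the anonymous-class rewrite, reusing the handler body from the question:

session.addMessageHandler(new MessageHandler.Whole<String>() {
    @Override
    public void onMessage(String message) {
        System.out.println("Received message: " + message);
        messageLatch.countDown();
    }
});

With an explicit class, the String type parameter survives erasure, so Tyrus can register the handler for text messages.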