Akka Camel - JMS messages lost - should I wait for Camel initialization?

My experimental application is quite simple; I'm trying out what can be done with Actors and Akka.
After JVM start, it creates an actor system with a couple of plain actors, a JMS consumer (akka.camel.Consumer) and a JMS producer (akka.camel.Producer). It sends a couple of messages between the actors and also JMS producer -> JMS server -> JMS consumer. It basically talks to itself via the JMS service.
From time to time I was experiencing weird behaviour: the first of the messages that were supposed to be sent to the JMS server was somehow lost. Looking at my application logs, I could see that the application was trying to send the message, but it was never received by the JMS server. (For each run I have to start the JVM and application again.)
The Akka Camel documentation mentions that it's possible that some components may not be fully initialized at the beginning: "Some Camel components can take a while to startup, and in some cases you might want to know when the endpoints are activated and ready to be used."
I tried the following to wait for Camel initialization:
import akka.actor.{ActorSystem, Props}
import akka.camel.CamelExtension
import scala.concurrent.Await
import scala.concurrent.duration._
val system = ActorSystem("actor-system")
val camel = CamelExtension(system)
val jmsConsumer = system.actorOf(Props[JMSConsumer])
val activationFuture = camel.activationFutureFor(jmsConsumer)(timeout = 10 seconds, executor = system.dispatcher)
val result = Await.result(activationFuture,10 seconds)
which seems to help with this issue. (Although, now when I remove this step, I'm not able to recreate the issue any more... :/)
My question is whether this is the correct way to ensure all components are fully initialized.
Should I use
val future = camel.activationFutureFor(actor)(timeout = 10 seconds, executor = system.dispatcher)
Await.result(future, 10 seconds)
for each akka.camel.Producer and akka.camel.Consumer actor to be sure that everything is initialized properly?
Is that all I should do, or should something else be done as well? The documentation is not clear on that, and it's not easy to test, as the issue was happening only occasionally...

You need to initialize the Camel JMS component and also the Producer before sending any messages.
import static java.util.concurrent.TimeUnit.SECONDS;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.dispatch.OnComplete;
import akka.util.Timeout;
final ActorRef producer = system.actorOf(new Props(SimpleProducer.class), "simpleproducer");
Timeout timeout = new Timeout(Duration.create(15, SECONDS));
Future<ActorRef> activationFuture = camel.activationFutureFor(producer, timeout, system.dispatcher());
activationFuture.onComplete(new OnComplete<ActorRef>() {
    @Override
    public void onComplete(Throwable failure, ActorRef activatedProducer) throws Throwable {
        // The producer endpoint is active, so it is now safe to send the first message.
        producer.tell("First!!");
    }
}, system.dispatcher());
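If you prefer to block at startup instead of reacting in a callback, the same pattern can be applied to the consumer as well. Here is a minimal sketch (assuming the camel extension, the actor system and the jmsConsumer ActorRef from the question are in scope):
import static java.util.concurrent.TimeUnit.SECONDS;
import scala.concurrent.Await;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.util.Timeout;
// Block until the Camel endpoint behind the consumer is activated.
Timeout timeout = new Timeout(Duration.create(10, SECONDS));
Future<ActorRef> consumerActivation = camel.activationFutureFor(jmsConsumer, timeout, system.dispatcher());
Await.result(consumerActivation, timeout.duration());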

Related

Restart listener and continue from latest message

Case
Clients are ReplyingKafkaTemplate instances.
Server is a ConcurrentMessageListenerContainer created using @KafkaListener and @SendTo annotations on a method.
ContainerFactory uses ContainerStoppingErrorHandler.
Request topic has only 1 partition.
Group ids are static, e.g. test-consumer-group.
Requests are sent with timeouts.
Due to an exception thrown, the server goes down, but the client keeps dispatching requests, which queue up on the request topic.
Current Behavior
When the server comes back up it continues processing old requests which would have timed out.
Desired Behavior
Instead, it would be better to continue from the latest message, thereby skipping even unprocessed messages, as the corresponding requests would time out and be retried.
Questions
What is the recommended approach to achieve this?
From the little that I understand, it looks like I'll have to manually set the initial offset. What's the simplest way to implement this?
Your @KafkaListener class must extend AbstractConsumerSeekAware and do something like this:
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
    super.onPartitionsAssigned(assignments, callback);
    callback.seekToEnd(assignments.keySet());
}
So, every time your consumer joins the group, it is going to seek all the assigned partitions to the end, skipping all the old records.
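For illustration, a minimal sketch of how that could look in context (the class name, topic and reply handling are made up; the group id matches the one in the question):
import java.util.Map;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.AbstractConsumerSeekAware;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Component;
@Component
public class RequestListener extends AbstractConsumerSeekAware {
    // Skip everything that accumulated on the request topic while this consumer was down.
    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        super.onPartitionsAssigned(assignments, callback);
        callback.seekToEnd(assignments.keySet());
    }
    @KafkaListener(topics = "requests", groupId = "test-consumer-group")
    @SendTo
    public String handle(String request) {
        return "processed: " + request;
    }
}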

JmsListener called again and again when an error happens in the method

In a Spring Boot application, I have a class with a JMS listener.
public class PaymentNotification {
    @JmsListener(destination = "payment")
    public void receive(String payload) throws Exception {
        // mapstring conversion
        ....
        paymentEvent = billingService.insert(paymentEvent); // transactional method
        // call rest...
        billingService.save(paymentEvent);
        // send info to jms
    }
}
I saw that when an error happens, data is inserted in the database; that's OK, but it's like the receive method is called again and again... yet the queue is empty when I check on the server.
If there is an error, I don't want the method to be called again. Is there something for that?
The JMS Message Headers might contain additional information to help with your processing. In particular JMSRedelivered could be of some value. The Oracle doc states that "If a client receives a message with the JMSRedelivered field set, it is likely, but not guaranteed, that this message was delivered earlier but that its receipt was not acknowledged at that time."
I ran the following code to explore what was available in my configuration (Spring Boot with IBM MQ).
#JmsListener(destination="DEV.QUEUE.1")
public void receive(Message message) throws Exception{
for (Enumeration<String> e = message.getPropertyNames(); e.hasMoreElements();)
System.out.println(e.nextElement());
}
From here I could find JMSXDeliveryCount is available in JMS 2.0. If that property is not available, then you may well find something similar for your own configuration.
One strategy would be to use JMSXDeliveryCount, a vendor-specific property, or maybe JMSRedelivered (if suitable for your needs) as a way to check before you process the message. Typically, the message would be sent to a specific backout queue when the redelivery count exceeds a set threshold.
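A minimal sketch of that check, assuming a JmsTemplate is available for forwarding and that the provider populates JMSXDeliveryCount (the backout destination and the threshold of 3 are made up):
import javax.jms.Message;
import javax.jms.TextMessage;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
@JmsListener(destination = "payment")
public void receive(Message message) throws Exception {
    int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
    if (deliveryCount > 3) {
        // Park the message on a backout queue instead of letting it be redelivered forever.
        jmsTemplate.convertAndSend("payment.backout", ((TextMessage) message).getText());
        return;
    }
    // normal processing...
}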
Depending on the messaging provider you are using, it might also be possible to configure backout queue processing as properties of the queue.

DefaultMessageListenerContainer stops processing messages

I'm hoping this is a simple configuration issue but I can't seem to figure out what it might be.
Set-up
Spring Boot 2.2.2.RELEASE
cloud-starter
cloud-starter-aws
spring-jms
spring-cloud-dependencies Hoxton.SR1
amazon-sqs-java-messaging-lib 1.0.8
Problem
My application starts up fine and begins to process messages from Amazon SQS. After some amount of time I see the following warning
2020-02-01 04:16:21.482 LogLevel=WARN 1 --- [ecutor-thread14] o.s.j.l.DefaultMessageListenerContainer : Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers.
The above warning gets printed multiple times and eventually I see the following two INFO messages
2020-02-01 04:17:51.552 LogLevel=INFO 1 --- [ecutor-thread40] c.a.s.javamessaging.SQSMessageConsumer : Shutting down ConsumerPrefetch executor
2020-02-01 04:18:06.640 LogLevel=INFO 1 --- [ecutor-thread40] com.amazon.sqs.javamessaging.SQSSession : Shutting down SessionCallBackScheduler executor
The above 2 messages will display several times, and at some point no more messages are consumed from SQS. I don't see any other messages in my log to indicate an issue, but I get no indication from my handlers that they are processing messages (I have 2~), and I can see the AWS SQS queue growing in the number of messages and their age.
~: This exact code was working fine when I had a single handler; this problem started when I added the second one.
Configuration/Code
The first "WARNing" I realize is caused by the currency of the ThreadPoolTaskExecutor, but I can not get a configuration which works properly. Here is my current configuration for the JMS stuff, I have tried various levels of max pool size with no real affect other than the warings start sooner or later based on the pool size
public ThreadPoolTaskExecutor asyncAppConsumerTaskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setThreadGroupName("asyncConsumerTaskExecutor");
    taskExecutor.setThreadNamePrefix("asyncConsumerTaskExecutor-thread");
    taskExecutor.setCorePoolSize(10);
    // Allow the thread pool to grow up to 4 times the core size, evidently not
    // having the pool be larger than the max concurrency causes the JMS queue
    // to barf on itself with messages like
    // "Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers"
    taskExecutor.setMaxPoolSize(10 * 4);
    taskExecutor.setQueueCapacity(0); // do not queue up messages
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    taskExecutor.setAwaitTerminationSeconds(60);
    return taskExecutor;
}
Here is the JMS Container Factory we create
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SQSConnectionFactory sqsConnectionFactory, ThreadPoolTaskExecutor asyncConsumerTaskExecutor) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    // The JMS processor will start 'concurrency' number of tasks
    // and supposedly will increase this to the max of '10 * 3'
    factory.setConcurrency(10 + "-" + (10 * 3));
    factory.setTaskExecutor(asyncConsumerTaskExecutor);
    // Let the task process 100 messages, default appears to be 10
    factory.setMaxMessagesPerTask(100);
    // Wait up to 5 seconds for a timeout, this keeps the task around a bit longer
    factory.setReceiveTimeout(5000L);
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}
I added the setMaxMessagesPerTask and setReceiveTimeout calls based on stuff found on the internet; the problem persists without these and at various settings (50, 2500L, 25, 1000L, etc.).
We create a default SQS connection factory
public SQSConnectionFactory sqsConnectionFactory(AmazonSQS amazonSQS) {
    return new SQSConnectionFactory(new ProviderConfiguration(), amazonSQS);
}
Finally the handlers look like this
#JmsListener(destination = "consumer-event-queue")
public void receiveEvents(String message) throws IOException {
MyEventDTO myEventDTO = jsonObj.readValue(message, MyEventDTO.class);
//messageTask.process(myEventDTO);
}
#JmsListener(destination = "myalert-sqs")
public void receiveAlerts(String message) throws IOException, InterruptedException {
final MyAlertDTO myAlert = jsonObj.readValue(message, MyAlertDTO.class);
myProcessor.addAlertToQueue(myAlert);
}
You can see in the first function (receiveEvents) we just take the message from the queue and exit; we have not implemented the processing code for that yet.
The second function (receiveAlerts) gets the message; the myProcessor.addAlertToQueue function creates a runnable object and submits it to a thread pool to be processed at some point in the future.
The problem (the warnings, the info messages and the failure to consume messages) only started when we added the receiveAlerts function; previously the other function was the only one present and we did not see this behavior.
More
This is part of a larger project and I am working on breaking this code out into a smaller test case to see if I can duplicate this issue. I will post a follow-up with the results.
In the Mean Time
I'm hoping this is just a config issue and someone more familiar with this can tell me what I'm doing wrong, or that someone can provide some thoughts and comments on how to correct this to work properly.
Thank you!
After fighting this one for a bit I think I finally resolved it.
The issue appears to be due to the "DefaultJmsListenerContainerFactory": this factory creates a new "DefaultJmsListenerContainer" for EACH method with a '@JmsListener' annotation. The person who originally wrote the code thought it was only called once for the application and that the created container would be re-used. So the issue was two-fold:
1. The 'ThreadPoolTaskExecutor' attached to the factory had 40 threads. When the application had 1 '@JmsListener' method this worked fine, but when we added a second method each method got 10 threads (total of 20) for listening. This is fine; however, since we stated that each listener could grow up to 30 listeners, we quickly ran out of threads in the pool mentioned in 1 above. This caused the "Number of scheduled consumers has dropped below concurrentConsumers limit" error.
2. This is probably obvious given the above, but I wanted to call it out explicitly. In the Listener Factory we set the concurrency to be "10-30"; however, all of the listeners have to share that pool. As such, the max concurrency has to be set up so that each listener's max value is small enough that, if each listener reaches its maximum, it doesn't exceed the maximum number of threads in the pool (e.g. if we have 2 '@JmsListener' annotated methods and a pool with 40 threads, then the max value can be no more than 20).
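To make the numbers concrete, here is a minimal sketch of a sizing that respects this rule (2 @JmsListener methods sharing a 40-thread pool, so each container is capped at 20; the values are illustrative, not a recommendation):
// Shared executor: 40 threads in total for all listener containers.
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(10);
taskExecutor.setMaxPoolSize(40);
taskExecutor.setQueueCapacity(0);
// With 2 @JmsListener methods, each container may scale from 10 up to at most 20 consumers,
// so the combined maximum (2 * 20 = 40) never exceeds the pool size.
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(sqsConnectionFactory);
factory.setTaskExecutor(taskExecutor);
factory.setConcurrency("10-20");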
Hopefully this might help someone else with a similar issue in the future....

How to send a message to WebSphere MQ using Apache Camel and receive a message from an MQ queue

I didn't see enough examples on the web of using Apache Camel with WebSphere MQ to send and receive messages. I had some example code but I got stuck in the middle of it. Could anyone help with this?
import org.apache.camel.CamelContext;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;
import org.apache.camel.Producer;
import org.apache.camel.util.IOHelper;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
/**
 * Client that uses the Message Endpoint
 * pattern to easily exchange messages with the Server.
 * <p/>
 * Notice this very same API can be used for all components in Camel, so if we were using TCP communication instead
 * of JMS messaging we could just use <code>camel.getEndpoint("mina:tcp://someserver:port")</code>.
 * <p/>
 * Requires that the JMS broker is running, as well as CamelServer
 */
public final class CamelClientEndpoint {

    private CamelClientEndpoint() {
        // Helper class
    }

    // START SNIPPET: e1
    public static void main(final String[] args) throws Exception {
        System.out.println("Notice this client requires that the CamelServer is already running!");
        AbstractApplicationContext context = new ClassPathXmlApplicationContext("camel-client.xml");
        CamelContext camel = context.getBean("camel-client", CamelContext.class);
        // get the endpoint from the camel context
        Endpoint endpoint = camel.getEndpoint("jms:queue:numbers");
        // create the exchange used for the communication
        // we use the in-out pattern for a synchronized exchange where we expect a response
        Exchange exchange = endpoint.createExchange(ExchangePattern.InOut);
        // set the input on the in body
        // must be the correct type to match the expected type of an Integer object
        exchange.getIn().setBody(11);
        // to send the exchange we need a producer to do it for us
        Producer producer = endpoint.createProducer();
        // start the producer so it can operate
        producer.start();
        // let the producer process the exchange where it does all the work in this one line of code
        System.out.println("Invoking the multiply with 11");
        producer.process(exchange);
        // get the response from the out body and cast it to an integer
        int response = exchange.getOut().getBody(Integer.class);
        System.out.println("... the result is: " + response);
        // stopping the JMS producer has the side effect of the "ReplyTo Queue" being properly
        // closed, so this client will not try any further reads for replies from the server
        producer.stop();
        // we're done so let's properly close the application context
        IOHelper.close(context);
    }
}
I got stuck at this point of the code:
exchange.getIn()
Do I have to use exchange.getOut() to send a message? And how do I construct a message using a string and add headers to it?
Welcome to Stack Overflow!
I am still not sure what exactly the problem you are stuck at is, and that prevents me (and possibly others as well) from helping you resolve your roadblock.
Perhaps you need to familiarize yourself a bit more with what Camel is and how it works. Camel in Action is a great book to help you with that.
If you are unable to get a copy at this point, a preview of the first few chapters of the book is available online, and it should give you much better leverage. The source code repository for chapter 2 should give you some more ideas around how to process JMS messages.
In addition, please don't expect full-blown solutions from Stack Overflow. You may read this page on how to ask a good question.
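That said, on the specific point of confusion: for sending a request you set the body and headers on the in-message; the out-message is where the reply ends up after the producer processes the exchange. A minimal sketch based on the code in the question (the header name and the values are made up):
// Put the request payload and any headers on the in-message.
Exchange exchange = endpoint.createExchange(ExchangePattern.InOut);
exchange.getIn().setBody("hello from the client");
exchange.getIn().setHeader("myCustomHeader", "some-value");
producer.process(exchange);
// The reply, if any, is available on the out-message.
String reply = exchange.getOut().getBody(String.class);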

Kafka producer threads: huge amount of threads even when no message is sent

I profiled my Kafka producer Spring Boot application and found many "kafka-producer-network-thread"s running (47 in total), which would never stop running, even when no data is being sent.
My application looks a bit like this:
var kafkaSender = KafkaSender(kafkaTemplate, applicationProperties)
kafkaSender.sendToKafka(json, rs.getString("KEY"))
with the KafkaSender:
@Service
class KafkaSender(val kafkaTemplate: KafkaTemplate<String, String>, val applicationProperties: ApplicationProperties) {

    @Transactional(transactionManager = "kafkaTransactionManager")
    fun sendToKafka(message: String, stringKey: String) {
        kafkaTemplate.executeInTransaction { kt ->
            kt.send(applicationProperties.kafka.topic, System.currentTimeMillis().mod(10).toInt(),
                    System.currentTimeMillis().rem(10).toString(), message)
        }
    }

    companion object {
        val log = LoggerFactory.getLogger(KafkaSender::class.java)!!
    }
}
Since each time I want to send a message to Kafka I instantiate a new KafkaSender, I thought a new thread would be created which then sends the message to the kafka queue.
Currently it looks like a pool of producers is generated, but never cleaned up, even when none of them has anything to do.
Is this behaviour intended?
In my opinion, the behaviour should be nearly the same as datasource pooling: keep the thread alive for some time, but when there is nothing to do, clean it up.
When using transactions, the producer cache grows on demand and is not reduced.
If you are producing messages on a listener container (consumer) thread; there is a producer for each topic/partition/consumer group. This is required to solve the zombie fencing problem, so that if a rebalance occurs and the partition moves to a different instance, the transaction id will remain the same so the broker can properly handle the situation.
If you don't care about the zombie fencing problem (and you can handle duplicate deliveries), set the producerPerConsumerPartition property to false on the DefaultKafkaProducerFactory and the number of producers will be much smaller.
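For reference, a minimal sketch of that setting, assuming you configure the DefaultKafkaProducerFactory yourself (the producerConfigs map is a placeholder for your usual producer properties):
import java.util.HashMap;
import java.util.Map;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
// Accept possible duplicate deliveries after a rebalance in exchange for far fewer cached producers.
Map<String, Object> producerConfigs = new HashMap<>(); // fill with your usual producer properties
DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(producerConfigs);
producerFactory.setProducerPerConsumerPartition(false);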
EDIT
Starting with version 2.8, the default EOSMode is now V2 (aka BETA), which means it is no longer necessary to have a producer per topic/partition/group, as long as the broker version is 2.5 or later.

Resources