Spring Boot Kafka - Execute piece of code after reconnecting service to Kafka broker

I'm testing Kafka broker down scenarios in my service. Is there any way to execute a piece of code once the service becomes connected to the broker again?
For example:
When the producer fails to send a message or throws an exception, the service stores the message in a temp table so it can be resent after reconnecting to the broker.
@Transactional
@EventListener
public void detailsObjectEventListener(EntityObjectEvent<DetailsEntity> event) {
    StreamPayload<Details> payload = new StreamPayload<>(event.getType(), new Details(event.getSource()));
    MessageChannel channel = BootstrapApplication.getMessageChannel(DetailsChannel.class);
    boolean sent;
    try {
        sent = channel.send(MessageBuilder.withPayload(payload).build());
    } catch (Exception ex) {
        sent = false;
    }
    if (!sent) {
        entityManager.persist(new FailedStreamEventEntity(event));
    }
}
To handle the problem, I made a cron job method that republishes the failed events when the broker is UP.
// Note: @Scheduled does not allow combining initialDelay with cron, so cron is used alone here.
@Scheduled(cron = "0 * * * * ?")
public void reSendFailedEvents() {
    if (kafkaIndicator.health().getStatus().equals(Status.UP)) {
        CriteriaBuilder builder = entityManager.getCriteriaBuilder();
        Optional.of(FailedStreamEventEntity.class)
                .map(builder::createQuery)
                .map(query -> query.select(query.from(FailedStreamEventEntity.class)))
                .map(entityManager::createQuery)
                .map(TypedQuery::getResultList)
                .map(List::stream)
                .orElse(Stream.empty())
                .peek(entityManager::remove)
                .map(FailedStreamEventEntity::getEvent)
                .forEach(BootstrapApplication.getEventPublisher()::publishEvent);
    }
}
But I believe this is a poor way to handle the problem, and that there should be a better way to execute this code as soon as the service reconnects to the broker. Any help?
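For what it's worth, one event-driven alternative (a sketch only, not part of the original code): poll the cluster with Kafka's AdminClient and publish a custom application event on the DOWN-to-UP transition, so the republish logic runs once per reconnect instead of on every cron tick. The BrokerReconnectedEvent type, the bootstrap address, and the timeouts below are assumptions.

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical marker event; an @EventListener for it could call reSendFailedEvents().
class BrokerReconnectedEvent {}

@Component
public class BrokerAvailabilityWatcher {

    private final AdminClient adminClient;
    private final ApplicationEventPublisher publisher;
    private volatile boolean brokerUp = true;

    public BrokerAvailabilityWatcher(ApplicationEventPublisher publisher) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 3000);
        this.adminClient = AdminClient.create(props);
        this.publisher = publisher;
    }

    @Scheduled(fixedDelay = 5000)
    public void checkBroker() {
        boolean upNow;
        try {
            // describeCluster() fails with a timeout when no broker is reachable
            adminClient.describeCluster().nodes().get(3, TimeUnit.SECONDS);
            upNow = true;
        } catch (Exception ex) {
            upNow = false;
        }
        if (upNow && !brokerUp) {
            // DOWN -> UP transition: fire exactly once per reconnect
            publisher.publishEvent(new BrokerReconnectedEvent());
        }
        brokerUp = upNow;
    }
}

This is still polling under the hood, but it decouples the republish logic from the cron schedule and fires it immediately on recovery.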

Related

Spring Boot WebSocket message broker shutdown before Redis stream

I have a Spring Boot application implementing WebSocket as well as a Redis stream.
The flow is: a subscriber (which subscribes to the Redis stream), upon receiving a message, sends that information to the WebSocket (using the STOMP protocol and AmazonMQ - ActiveMQ as an external message broker).
Example of one consumer group:
public Subscription fExecutionResponse(RedisTemplate<String, Object> redisTemplate) {
    try {
        String groupName = FStreamName.fExecutionConsumerGroup;
        String streamKey = FStreamName.fExecutionStream;
        createConsumerGroup(streamKey, groupName, redisTemplate);
        val listenerContainer = listenerContainer(redisTemplate, FTradeDataRedisEvent.class);
        val subscription = listenerContainer.receiveAutoAck(
                Consumer.from(groupName, FStreamName.fExecutionConsumer),
                StreamOffset.create(streamKey, ReadOffset.lastConsumed()),
                message -> {
                    log.info("[Subscription F_EXECUTION]: {}", message.getValue());
                    FTradeDataRedisEvent mTradeEvent = message.getValue();
                    try {
                        if (ExternalConfiguration.futureTradeDataSource.equals(ExecutionSource.ALL) || ExternalConfiguration.futureTradeDataSource.equals(mTradeEvent.getSource())) {
                            futureProductService.updateProductByDummy(mTradeEvent);
                            futureExecutionToTickHistoricalService.transform(mTradeEvent);
                        }
                    } catch (Exception ex) {
                        log.error("[fTradeEventResponse] error: {}", ex.getMessage());
                    }
                    redisTemplate.opsForStream().acknowledge(streamKey, groupName, message.getId());
                    redisTemplate.opsForStream().delete(streamKey, message.getId());
                });
        listenerContainer.start();
        log.info("A stream key `{}` had been successfully set up in the group `{}`", streamKey, groupName);
        return subscription;
    } catch (Exception ex) {
        log.error(ex.getMessage(), ex);
    }
    return null;
}
futureExecutionToTickHistoricalService.transform sends the data to the WebSocket using SimpMessageSendingOperations:
public void transform(FTradeDataRedisEvent tradeData) {
    if (lastUpdateTickHistoricalDataTime == 0L) {
        Calendar calendar = Calendar.getInstance();
        this.lastUpdateTickHistoricalDataTime = calendar.getTimeInMillis();
    }
    List<FTickHistorical> res = separateFTickHistorical(tradeData);
    res.forEach(tickHistorical -> {
        List<KlineResponse> klineResponses = new ArrayList<>();
        klineResponses.add(new KlineResponse(tickHistorical));
        messageTemplate.convertAndSend(
                PUBLIC_TOPIC_PREDIX + Constants.F_PRODUCTS_DESTINATION + "/" + tickHistorical.getProductId() + "/klines" + "_" + tickHistorical.getResolution().getCode(),
                new HistoryResponse(klineResponses)
        );
    });
}
There are two problems with this setup; I have resolved one of them.
The Redis stream subscriber is started before the connection to the external message broker is ready. Solved: listen for BrokerAvailabilityEvent and only start the Redis subscriptions once the broker is available.
When redeploying or shutting down the application in an IDE (like IntelliJ), the connection to the broker is again destroyed first (before the Redis stream subscribers); at the same time, there is still data being sent to the socket. This causes the error: Message broker not active
I don't know how to configure the Spring Boot application so that when it is stopped, it first stops consuming messages from the Redis stream, processes all pending messages, and only then closes the broker connection.
This is the error log when the application is destroyed.
14.756 INFO 41184 --- [extShutdownHook] c.i.c.foapi.SpotStreamSubscriber : Broker not available
2022-10-09 14:14:14.757 INFO 41184 --- [extShutdownHook] c.i.c.f.config.RedisSubConfiguration : Broker not available
2022-10-09 14:14:14.781 ERROR 41184 --- [cTaskExecutor-1] c.i.c.foapi.consumer.SpotStreamConsumer : [executionResponse] error: Message broker not active. Consider subscribing to receive BrokerAvailabilityEvent's from an ApplicationListener Spring bean.; value of message: ExecutionRedisEvent(id=426665086, eventTime=2022-10-09T14:13:45.056809, productId=2, price=277.16, quantity=0.08, buyerId=2, sellerId=3, createdDate=2022-10-09T14:13:45.056844, takerSide=SELL, orderBuyId=815776680, orderSellId=815776680, symbol=bnbusdt, source=BINANCE)
2022-10-09 14:14:14.785 ERROR 41184 --- [cTaskExecutor-1] c.i.c.foapi.consumer.SpotStreamConsumer : [dummyOrderBookResponse] error: Message broker not active. Consider subscribing to receive BrokerAvailabilityEvent's from an ApplicationListener Spring bean.; value of message: DummyOrderBookEvent(productId=1, bids=[DummyOrderBook(price=1941.6, qty=0), DummyOrderBook(price=18827.3, qty=0.013), DummyOrderBook(price=18938.8, qty=5.004), DummyOrderBook(price=18940.3, qty=22.196), DummyOrderBook(price=18982.5, qty=20.99), DummyOrderBook(price=19027.2, qty=0.33), DummyOrderBook(price=19045.8, qty=8.432)
This is the code of SpotStreamSubscriber
@Component
@RequiredArgsConstructor
@Slf4j
public class SpotStreamSubscriber implements ApplicationListener<BrokerAvailabilityEvent> {

    private final SpotStreamConsumer spotStreamConsumer;

    @Override
    public void onApplicationEvent(BrokerAvailabilityEvent event) {
        if (event.isBrokerAvailable()) {
            log.info("Broker ready");
            spotStreamConsumer.subscribe();
        } else {
            log.info("Broker not available");
        }
    }
}
As you can see, the message broker is destroyed before the pending Redis messages have a chance to be processed.
Current architecture: we use an external message broker so that we can scale the API horizontally.
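One hedged way to fix the shutdown ordering: wrap the Redis subscription in a SmartLifecycle bean with a high phase. Spring stops higher-phase beans first on shutdown, so the stream consumer can drain before lower-phase beans go down (verify the actual phase of the STOMP broker relay in your Spring version). The class and field names below are assumptions.

import org.springframework.context.SmartLifecycle;
import org.springframework.data.redis.stream.StreamMessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class RedisStreamLifecycle implements SmartLifecycle {

    private final StreamMessageListenerContainer<?, ?> container;
    private volatile boolean running;

    public RedisStreamLifecycle(StreamMessageListenerContainer<?, ?> container) {
        this.container = container;
    }

    @Override
    public void start() {
        container.start();
        running = true;
    }

    @Override
    public void stop() {
        // Runs before lower-phase beans stop, so in-flight Redis
        // messages can be processed while the broker is still up.
        container.stop();
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        // Higher phase: started later, stopped earlier on shutdown.
        return Integer.MAX_VALUE;
    }
}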

Spring Apache Kafka onFailure Callback of KafkaTemplate not fired on connection error

I'm experimenting a lot with Apache Kafka in a Spring Boot App at the moment.
My current goal is to write a REST endpoint that takes in some message payload, which will use a KafkaTemplate to send the data to my local Kafka running on port 9092.
This is my producer config:
@Bean
public Map<String, Object> producerConfig() {
    // config settings for creating producers
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.bootstrapServers);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
    configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
    configProps.put(ProducerConfig.RETRIES_CONFIG, 0);
    return configProps;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    // creates a Kafka producer factory
    return new DefaultKafkaProducerFactory<>(producerConfig());
}

@Bean("kafkaTemplate")
public KafkaTemplate<String, String> kafkaTemplate() {
    // template which abstracts sending data to Kafka
    return new KafkaTemplate<>(producerFactory());
}
My REST endpoint forwards to a service; the service looks like this:
@Service
public class KafkaSenderService {

    @Qualifier("kafkaTemplate")
    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaSenderService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessageWithCallback(String message, String topicName) {
        // possibility to add callbacks to define what shall happen in success/error case
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {
            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Your message was delivered with following offset: " + result.getRecordMetadata().offset());
            }
        });
    }
}
The thing now is: I'm expecting the onFailure() method to get called when the message could not be sent. But this does not seem to work. When I change the bootstrapServers variable in the producer config to localhost:9091 (the wrong port, so no connection should be possible), the producer tries to connect to the broker. It makes several connection attempts, and after 5 seconds a TimeoutException occurs, but the onFailure() method is not called. Is there a way to get the onFailure() method called even if the connection cannot be established?
And by the way, I set the retries count to zero, but the producer still makes a second connection attempt after the first one. This is the log output:
EDIT: it seems like the Kafka producer/KafkaTemplate goes into an infinite loop when the broker is not available. Is that really the intended behaviour?
The KafkaTemplate really does nothing fancy around connecting and publishing. Everything is delegated to the KafkaProducer. What you describe here would happen exactly the same way even if you used the plain Kafka client.
See KafkaProducer.send() JavaDocs:
 * @throws TimeoutException If the record could not be appended to the send buffer due to memory unavailable
 *         or missing metadata within {@code max.block.ms}.
This happens because of the blocking logic in that producer:
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param nowMs The current time in ms
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 * @throws TimeoutException if metadata could not be refreshed within {@code max.block.ms}
 * @throws KafkaException for all Kafka-related exceptions, including the case where this method is called after producer close
 */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long nowMs, long maxWaitMs) throws InterruptedException {
Unfortunately this is not explained in the send() JavaDocs, which claim the call is fully asynchronous; apparently it is not, at least for this metadata part, which has to be available before the record is enqueued for publishing.
That part is something we cannot control, and it is not reflected in the returned Future:
try {
    clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), nowMs, maxBlockTimeMs);
} catch (KafkaException e) {
    if (metadata.isClosed())
        throw new KafkaException("Producer closed while send in progress", e);
    throw e;
}
See the Apache Kafka docs for more info on how to adjust the KafkaProducer for this: https://kafka.apache.org/documentation/#theproducer
Question answered in the discussion at https://github.com/spring-projects/spring-kafka/discussions/2250# for anyone else stumbling across this thread. In short, kafkaTemplate.getProducerFactory().reset() does the trick.
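A related point worth showing in code: because the metadata wait happens before the Future is even returned, send() can throw synchronously, and no callback ever sees that failure. A hedged sketch of guarding the service method from the question (same names as above; the fallback behavior is an assumption):

public void sendMessageWithCallback(String message, String topicName) {
    ListenableFuture<SendResult<String, String>> future;
    try {
        // Blocks up to max.block.ms waiting for metadata; throws
        // synchronously when the broker is unreachable.
        future = kafkaTemplate.send(topicName, message);
    } catch (Exception ex) {
        logger.warn("Message could not be delivered. " + ex.getMessage());
        return;
    }
    future.addCallback(new KafkaSendCallback<String, String>() {
        @Override
        public void onFailure(KafkaProducerException ex) {
            logger.warn("Message could not be delivered. " + ex.getMessage());
        }

        @Override
        public void onSuccess(SendResult<String, String> result) {
            logger.info("Delivered with offset: " + result.getRecordMetadata().offset());
        }
    });
}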

IBM MQ provider for JMS : How to automatically roll back messages?

Working versions in the app:
IBM AllClient version: 'com.ibm.mq:com.ibm.mq.allclient:9.1.1.0'
org.springframework:spring-jms: 4.3.9.RELEASE
javax.jms:javax.jms-api: 2.0.1
My requirement is that, if message processing fails because a consumer is unavailable (e.g. the DB is down), the message remains in the queue or is put back on the queue (if that is even possible). This is because the order of the messages is important; messages have to be consumed in the same order that they are received. The Java app is single-threaded.
I have tried the following:
@Override
public void onMessage(Message message)
{
    try {
        if (message instanceof TextMessage)
        {
            // handle the text message
        }
        // ...
        throw new Exception("Test"); // Just to test the retry
    }
    catch (Exception ex)
    {
        try
        {
            int temp = message.getIntProperty("JMSXDeliveryCount");
            throw new RuntimeException("Redelivery attempted");
            // At this point, I am expecting JMS to put the message back into the queue.
            // But it is actually put into the backout queue.
        }
        catch (JMSException ef)
        {
            String temp = ef.getMessage();
        }
    }
}
I have set this in my spring.xml for the jmsContainer bean:
<property name="sessionTransacted" value="true" />
What is wrong with the code above?
And if putting the message back on the queue is not practical, how can one browse the message, process it and, if successful, pull the message (so it is consumed and no longer on the queue)? Is this scenario supported in the IBM provider for JMS?
The IBM MQ local queue has BOTHRESH(1).
To preserve message ordering, one approach might be to stop the message listener temporarily as part of your rollback strategy. Looking at the Spring documentation for DefaultMessageListenerContainer, there is a stop(Runnable callback) method. I've experimented with using this in a rollback as follows.
To ensure my listener is single-threaded, I set containerFactory.setConcurrency("1") on my DefaultJmsListenerContainerFactory.
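For completeness, a sketch of the factory wiring this implies (the bean name matches the @JmsListener reference below; the sessionTransacted setting and the rest are assumptions):

@Bean
public DefaultJmsListenerContainerFactory listenerTwoFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setSessionTransacted(true); // listener exceptions roll the message back
    factory.setConcurrency("1");        // single consumer preserves ordering
    return factory;
}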
In my listener, I set an id:
@JmsListener(destination = "DEV.QUEUE.2", containerFactory = "listenerTwoFactory", concurrency = "1", id = "listenerTwo")
And retrieve the DefaultMessageListenerContainer instance.
JmsListenerEndpointRegistry reg = context.getBean(JmsListenerEndpointRegistry.class);
DefaultMessageListenerContainer mlc = (DefaultMessageListenerContainer) reg.getListenerContainer("listenerTwo");
For testing, I check JMSXDeliveryCount and throw an exception to roll back.
retryCount = Integer.parseInt(msg.getStringProperty("JMSXDeliveryCount"));
if (retryCount < 5) {
    throw new Exception("Rollback test " + retryCount);
}
In the listener's catch processing, I call stop(Runnable callback) on the DefaultMessageListenerContainer instance and pass in a new class, ContainerTimedRestart, as defined below.
// catch processing here and decide to roll back
mlc.stop(new ContainerTimedRestart(mlc, delay));
System.out.println("#### " + getClass().getName() + " Unable to process message.");
throw new Exception();
ContainerTimedRestart implements Runnable, and DefaultMessageListenerContainer is responsible for invoking its run() method when the stop call completes.
public class ContainerTimedRestart implements Runnable {
    // Container instance to restart.
    private DefaultMessageListenerContainer theMlc;
    // Default delay before restart in millis.
    private long theDelay = 5000L;

    // Basic constructor for testing.
    public ContainerTimedRestart(DefaultMessageListenerContainer mlc, long delay) {
        theMlc = mlc;
        theDelay = delay;
    }

    public void run() {
        try {
            System.out.println("#### " + getClass().getName() + " Waiting for " + theDelay + " millis.");
            Thread.sleep(theDelay);
            System.out.println("#### " + getClass().getName() + " Restarting container.");
            theMlc.start();
            System.out.println("#### " + getClass().getName() + " Container started!");
        } catch (InterruptedException ie) {
            ie.printStackTrace();
            // Further checks to ensure the container is in the correct state.
            // Report errors.
        }
    }
}
I loaded my queue with three messages with payloads "a", "b", and "c" respectively, and started the listener.
Checking DEV.QUEUE.2 on my queue manager, I see IPPROCS(1), confirming that only one application handle has the queue open. The messages are processed in order after each is rolled back five times, with a 5-second delay between rollback attempts.
IBM MQ classes for JMS has poison message handling built in. This handling is based on the QLOCAL setting BOTHRESH, which stands for Backout Threshold. Each IBM MQ message has a header called the MQMD (MQ Message Descriptor). One of the fields in the MQMD is BackoutCount. The default value of BackoutCount on a new message is 0. Each time a message is rolled back to the queue, this count is incremented by 1. A rollback can come either from a specific call to rollback(), or from the application being disconnected from MQ before commit() is called (due to a network issue, for example, or the application crashing).
Poison message handling is disabled if you set BOTHRESH(0).
If BOTHRESH is >= 1, then poison message handling is enabled, and when IBM MQ classes for JMS reads a message from a queue it checks whether the BackoutCount is >= BOTHRESH. If the message is eligible for poison message handling, it is moved to the queue specified in the BOQNAME attribute. If that attribute is empty, or the application does not have access to PUT to that queue for some reason, it instead attempts to put the message to the queue specified in the queue manager's DEADQ attribute; if it can't put to either of these locations, the message is rolled back to the queue.
You can find more detailed information on IBM MQ classes for JMS poison message handling in the IBM MQ v9.1 Knowledge Center page Developing applications > Developing JMS and Java applications > Using IBM MQ classes for JMS > Writing IBM MQ classes for JMS applications > Handling poison messages in IBM MQ classes for JMS
In Spring JMS you can define your own container; one container is created for one JMS destination. We should run a single-threaded JMS listener to maintain the message ordering; to make this work, set the concurrency to 1.
We can design our container to return null once it encounters errors; after a failure, all receive calls return null so that no messages are polled from the destination until it is considered active again. We can track the active state using a timestamp, which can be simple milliseconds. A sample JMS config like the one below should be sufficient to add backoff. You could also add a small sleep instead of continuously returning null from the receiveMessage method, for example sleeping for 10 seconds before making the next call; this saves some CPU resources.
@Configuration
@EnableJms
public class JmsConfig {

    @Bean
    public JmsListenerContainerFactory<?> jmsContainerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {
            @Override
            protected DefaultMessageListenerContainer createContainerInstance() {
                return new DefaultMessageListenerContainer() {
                    private long deactivatedTill = 0;

                    @Override
                    protected Message receiveMessage(MessageConsumer consumer) throws JMSException {
                        if (deactivatedTill < System.currentTimeMillis()) {
                            return receiveFromConsumer(consumer, getReceiveTimeout());
                        }
                        logger.info("Disabled due to failure :(");
                        return null;
                    }

                    @Override
                    protected void doInvokeListener(MessageListener listener, Message message)
                            throws JMSException {
                        try {
                            super.doInvokeListener(listener, message);
                        } catch (Exception e) {
                            handleException(message);
                            throw e;
                        }
                    }

                    private long getDelay(int retryCount) {
                        if (retryCount <= 1) {
                            return 20;
                        }
                        return (long) (20 * Math.pow(2, retryCount));
                    }

                    private void handleException(Message msg) throws JMSException {
                        if (msg.propertyExists("JMSXDeliveryCount")) {
                            int retryCount = msg.getIntProperty("JMSXDeliveryCount");
                            deactivatedTill = System.currentTimeMillis() + getDelay(retryCount);
                        }
                    }

                    @Override
                    protected void doInvokeListener(SessionAwareMessageListener listener, Session session,
                            Message message) throws JMSException {
                        try {
                            super.doInvokeListener(listener, session, message);
                        } catch (Exception e) {
                            handleException(message);
                            throw e;
                        }
                    }
                };
            }
        };
        // This provides all of Boot's defaults to this factory, including the message converter
        configurer.configure(factory, connectionFactory);
        // You could still override some of Boot's defaults if necessary.
        return factory;
    }
}
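A hypothetical listener using this factory (the destination name and process(...) are placeholders): any exception it throws reaches doInvokeListener, which records the backoff via deactivatedTill before rethrowing for rollback.

@JmsListener(destination = "DEV.QUEUE.1", containerFactory = "jmsContainerFactory")
public void onOrderMessage(String payload) {
    process(payload); // placeholder for the real message processing
}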

Spring JMS Template Sync Receive on Non-Durable Subscriber Message Loss

1. Background
We are evaluating Spring JMS and testing out the JMSTemplate for various scenarios - queues, topics (durable, non-durable).
We experienced message loss for non-durable topic subscribers and would like to seek clarifications here.
2. Problem Statement
a) We wrote a standalone Java program that calls the JMSTemplate.receive method every n seconds to receive messages synchronously from a non-durable topic.
b) We noticed that there is always message loss after the first invocation of the JMSTemplate.receive method. This was due to the JMSTemplate.receive method stopping the connection when it reaches ConnectionFactoryUtils.releaseConnection(...).
JMSTemplate:
public <T> T execute(SessionCallback<T> action, boolean startConnection) throws JmsException {
    Assert.notNull(action, "Callback object must not be null");
    Connection conToClose = null;
    Session sessionToClose = null;
    try {
        Session sessionToUse = ConnectionFactoryUtils.doGetTransactionalSession(
                getConnectionFactory(), this.transactionalResourceFactory, startConnection);
        if (sessionToUse == null) {
            conToClose = createConnection();
            sessionToClose = createSession(conToClose);
            if (startConnection) {
                conToClose.start();
            }
            sessionToUse = sessionToClose;
        }
        if (logger.isDebugEnabled()) {
            logger.debug("Executing callback on JMS Session: " + sessionToUse);
        }
        return action.doInJms(sessionToUse);
    }
    catch (JMSException ex) {
        throw convertJmsAccessException(ex);
    }
    finally {
        JmsUtils.closeSession(sessionToClose);
        ConnectionFactoryUtils.releaseConnection(conToClose, getConnectionFactory(), startConnection); // the connection is stopped here
    }
}
ConnectionFactoryUtils.releaseConnection(...):
public static void releaseConnection(Connection con, ConnectionFactory cf, boolean started) {
    if (con == null) {
        return;
    }
    if (started && cf instanceof SmartConnectionFactory && ((SmartConnectionFactory) cf).shouldStop(con)) {
        try {
            con.stop(); // connection was stopped here
        }
        catch (Throwable ex) {
            logger.debug("Could not stop JMS Connection before closing it", ex);
        }
    }
    try {
        con.close();
    }
    catch (Throwable ex) {
        logger.debug("Could not close JMS Connection", ex);
    }
}
3. Validation with Spring Documentation
The Spring JMS documentation advises using pooled connections, so we made sure we did.
Our Java program obtains the JMS connection factories from WLS JMS and MQ JMS (LDAP) providers, decorated with SingleConnectionFactory and CachingConnectionFactory in the respective test cases.
This is what we observed during testing:
a) SingleConnectionFactory - the connection was stopped (the consumer/session were closed as well).
b) CachingConnectionFactory - the connection was also stopped (although the consumer/session were cached and not closed).
4. Questions:
a) Has anybody hit the same issue as us?
b) Would you consider this as a defect of Spring JMS for the use case of Non-Durable Subscriptions?
c) We are considering customizing a CachingConnectionFactory that won't stop the connection. Any downsides?
Note: We are aware that Async MessageListeners like DMLC/SMLC and Sync Durable Topic Subscribers using JMSTemplate would not have this issue. We just wish to clarify for Sync Non-Durable Topic Subscribers using JMSTemplate.
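Regarding question c), a minimal sketch of a connection factory that suppresses the stop (the class name is an assumption): releaseConnection(...) only calls con.stop() when shouldStop(...) returns true, so overriding it keeps the shared connection started between receive() calls.

import javax.jms.Connection;
import org.springframework.jms.connection.CachingConnectionFactory;

public class NonStoppingCachingConnectionFactory extends CachingConnectionFactory {

    @Override
    public boolean shouldStop(Connection con) {
        // Never stop the shared connection on release; it stays started
        // across JmsTemplate.receive(...) calls.
        return false;
    }
}

Note the caveat: a non-durable subscription only exists while its consumer exists, so messages published between receive() calls would still be missed; this only removes the extra loss window caused by stopping the connection.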
Would greatly appreciate any comments and thoughts.
Thanks!
Victor

Correct usage of JMS-Topic communication

I want to use JMS (topics) in my Java EE 6 project. I have one class which acts as both a publisher and a subscriber of a topic. The following code shows the most important parts of the class.
public class MessageHandler implements MessageListener {

    private static TopicConnectionFactory factory;
    private static Topic topic;
    private TopicSession session;
    private TopicSubscriber subscriber;
    private TopicPublisher publisher;

    public MessageHandler() throws NamingException, JMSException {
        if (factory == null) {
            Context context = new InitialContext();
            factory = (TopicConnectionFactory) context.lookup("jms/myfactory");
            topic = (Topic) context.lookup("jms/mytopic");
        }
        TopicConnection connection = factory.createTopicConnection();
        connection.start();
        session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE); // kept as a field; sendMessage() uses it
        subscriber = session.createSubscriber(topic);
        subscriber.setMessageListener(this); // needed so onMessage() is actually invoked
    }

    @Override
    public void onMessage(Message message) {
        try {
            ObjectMessage msg = (ObjectMessage) message;
            Object someO = msg.getObject();
            System.out.println(this + " receives " + someO);
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }

    public void sendMessage(Object someO) {
        try {
            ObjectMessage msg = session.createObjectMessage();
            msg.setObject(someO);
            publisher = session.createPublisher(topic);
            publisher.publish(msg);
            publisher.close();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
My question is whether this is a good way to design such a class. My idea was to share one connection and session for both subscribing and publishing, but I'm worried that this could lead to overhead or blocking, because I'm not closing the connection, session, subscriber, and publisher until the object is no longer needed. All the examples I found online close everything directly after a message is sent or received...
Thanks in advance!
Why do you want the class to be subscriber and publisher at once?
Whenever you use a messaging system, you may well act as both, but why would you do it for the same topic? You surely don't want to receive your own messages.
The purpose of a topic is to be used among several parts of an application, or among several applications: one places a message into the topic, and the others that subscribed receive it.
That also explains what you saw in the examples: the message processing is a one-time thing, so the connection can be closed afterwards.
By the way, since you ask this question within the "java-ee-6" area: can't you use a message-driven bean, annotate your topic configuration, and let the application server do the infrastructure part for you?
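For illustration, a hedged sketch of that message-driven bean route (the destination property matches the question's jms/mytopic lookup; activation property names can vary by application server):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/mytopic")
})
public class MyTopicListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // Same consumption logic as MessageHandler.onMessage, but the
            // container manages the connection, session, and subscription.
            Object someO = ((ObjectMessage) message).getObject();
            System.out.println(this + " receives " + someO);
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}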
