Spring Integration: difficulty with a transaction between 2 activators

I have this use case.
First chain:
<int:chain input-channel="inserimentoCanaleActivate" output-channel="inserimentoCanalePreRouting">
<int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
This is the corresponding code:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public EventMessage<ModificaOperativitaRapporto> activate(EventMessage<InserimentoCanale> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
Everything works great.
Then I have another chain:
<int:chain id="onlineCensimentoClienteChain" input-channel="ONLINE_CENSIMENTO_CLIENTE" output-channel="inserimentoCanaleActivate">
<int:service-activator ref="onlineCensimentoClienteActivator" method="activate" />
<int:splitter expression="payload.getPayload().getCanali()" />
</int:chain>
And the corresponding activator:
@Override
public EventMessage<CensimentoCliente> activate(EventMessage<CensimentoCliente> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
The CensimentoCliente payload, described below, carries a list of the first chain's payloads, so I use a splitter to iterate over the list and reuse the first chain's code.
public interface CensimentoCliente extends Serializable {
Collection<? extends InserimentoCanale> getCanali();
void setCanali(Collection<? extends InserimentoCanale> canali);
...
}
But since each activator gets its own transaction definition (the first one can run without the second), I end up with a use case where the transactions are separated.
The goal is to have the database changes of the two chains be part of the same transaction.
Any help?
Kind regards
Massimo

You can accomplish this by creating a custom channel (or other custom component, but this is the simplest approach) that wraps the message dispatch in a TransactionTemplate callback execution:
public class TransactionalChannel extends AbstractSubscribableChannel {

    private final MessageDispatcher dispatcher = new UnicastingDispatcher();

    private final TransactionTemplate transactionTemplate;

    TransactionalChannel(TransactionTemplate transactionTemplate) {
        this.transactionTemplate = transactionTemplate;
    }

    @Override
    protected boolean doSend(final Message<?> message, long timeout) {
        return transactionTemplate.execute(new TransactionCallback<Boolean>() {

            @Override
            public Boolean doInTransaction(TransactionStatus status) {
                return getDispatcher().dispatch(message);
            }
        });
    }

    @Override
    protected MessageDispatcher getDispatcher() {
        return dispatcher;
    }
}
In your XML, you can define your channel and transaction template and reference your custom channel just as you would any other channel:
<bean id="transactionalChannel" class="com.stackoverflow.TransactionalChannel">
<constructor-arg>
<bean class="org.springframework.transaction.support.TransactionTemplate">
<property name="transactionManager" ref="transactionManager"/>
<property name="propagationBehavior" value="#{T(org.springframework.transaction.TransactionDefinition).PROPAGATION_REQUIRES_NEW}"/>
</bean>
</constructor-arg>
</bean>
For your example, you could perhaps use a bridge to pass the message through the new channel:
<int:bridge input-channel="inserimentoCanaleActivate" output-channel="transactionalChannel" />
<int:chain input-channel="transactionalChannel" output-channel="inserimentoCanalePreRouting">
<int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>

When you have a <service-activator> with @Transactional on the service method, the transaction is bound only to that method invocation.
If you want a transaction for the entire message flow (or part of it), you should declare transaction advice somewhere earlier in the flow.
If your channels are all direct, every service invocation downstream will be wrapped in the same transaction.
The simplest way to accomplish this is to write a simple @Gateway interface annotated with @Transactional and call it at the start of your message flow.
To clarify a bit regarding transactions, see:
Understanding Transactions in Message flows
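For illustration, here is a minimal sketch of that gateway approach (the interface name is hypothetical, the channel comes from the question, and annotation-driven transaction management must be enabled so the gateway proxy gets wrapped in transactional advice):

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.transaction.annotation.Transactional;

@MessagingGateway
public interface CensimentoGateway {

    // One transaction is started here; every endpoint reached over direct
    // channels (both activators and the splitter) participates in it.
    @Transactional
    @Gateway(requestChannel = "ONLINE_CENSIMENTO_CLIENTE")
    void process(EventMessage<CensimentoCliente> eventMessage);
}

Calling censimentoGateway.process(...) at the start of the flow then commits or rolls back the database changes of both chains together.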

Are these modifying 2 separate relational databases? If so, you are looking at an XA transaction. If you are running this on a non-XA container like Tomcat, all of this must be done in a single thread that is watched by a transaction manager; you will have to piggyback on whatever actually triggers these events, which can be a JMS message or a poller against some data source. This processing must also happen in a single thread so that Spring can run the entire process in a single transaction.
As a final note, do not introduce thread pools or queues between the service activators. Doing so can cause the activators to run in separate threads.

Related

Spring Integration AOP for Logging outbound Http requests

I was looking at a post from 2014 about using Spring AOP for logging HTTP requests/replies:
Spring integration + logging response time for http adapters(or any endpoint)
To this end, I tried this AOP configuration:
<aop:config>
    <aop:aspect id="myAspect" ref="inboundOutboundHttpLogging">
        <aop:pointcut id="handleRequestMessageMethod"
            expression="execution(* org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleRequestMessage(*))
                and args(message)" />
        <aop:before method="requestMessageSent" pointcut-ref="handleRequestMessageMethod" arg-names="message"/>
    </aop:aspect>
</aop:config>
Is there perhaps a newer way of using AOP for logging HTTP requests? I want to avoid per-request logging (i.e., adding an outbound-gateway advice to each gateway).
Thanks for any pointers.
The handleRequestMessage() essentially deals with this gateway's input and output messages. So, if you don't like implementing an AbstractRequestHandlerAdvice and adding it into each of your gateways via their <request-handler-advice-chain>, then consider using a <wire-tap> for the input and output channels of those gateways.
You may, though, implement a BeanPostProcessor.postProcessBeforeInitialization() to add your custom AbstractRequestHandlerAdvice into those HTTP gateways you are interested in.
My point is that the <aop:aspect> you are presenting really might lead to some unexpected behavior, like that final-method concern you have edited out from your question...
Based upon the suggestions made by @artem-bilan, I was able to find a solution similar to AOP for injecting a logging AbstractRequestHandlerAdvice into HTTP outbound request processing. I'm contributing this as a possible solution for anyone else who comes across this question.
As @artem-bilan mentions, there is a mechanism for injecting an AbstractRequestHandlerAdvice into an AbstractReplyProducingMessageHandler such as an HttpRequestExecutingMessageHandler. In my case, I want to log the message contents (header and payload) prior to the HTTP call and also log the response message (header and payload). This works nicely.
@artem-bilan also suggests that the BeanPostProcessor mechanism allows injecting the advice without having to add that declaration to each HTTP outbound bean. The BeanPostProcessor looks like this:
public class AddHttpOutboundAdvicePostProcessor implements BeanPostProcessor {

    final List<Advice> adviceList;

    AddHttpOutboundAdvicePostProcessor(List<Advice> adviceList) {
        this.adviceList = adviceList;
    }

    @Override
    public Object postProcessAfterInitialization(@NonNull Object bean, @NonNull String beanName)
            throws BeansException {
        if (bean instanceof AbstractHttpRequestExecutingMessageHandler) {
            ((AbstractHttpRequestExecutingMessageHandler) bean).setAdviceChain(adviceList);
        }
        return bean;
    }
}
We need to set up this bean in our context. (I'm a die-hard declarative fan, hence this is in XML.)
<bean id = "addHttpLoggingPostProcessor"
class = "com.my.package.AddHttpOutboundAdvicePostProcessor" >
<constructor-arg name="adviceList>
<util:list>
<ref bean="outboundLogger" />
</util:list>
</constructor-arg>
</bean>
Here, outboundLogger is a bean that manages the request-handler advice. In my implementation, I send a copy of the outbound message to one channel for logging before the call, and a copy of the response message down another channel for logging the response. The XML declaration of the bean takes the two channel names as constructor arguments:
<bean id="outboundLogger" class="com.my.package.HttpRequestProcessorLogger" >
<constructor-arg name="requestLoggingChannelName" value="XXX" />
<constructor-arg name="responseLoggingChannelName" value="YYY" />
</bean>
where XXX and YYY are the names of channels to the components that perform the logging. I've set these channels to be ExecutorChannels so that the logging is performed asynchronously.
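As a hedged illustration, such an asynchronous channel could also be declared in Java like this (the bean name is assumed; in XML the equivalent is an <int:channel> with a nested <int:dispatcher task-executor="..."/>):

import java.util.concurrent.Executors;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.messaging.MessageChannel;

// Messages sent to this channel are handed off to the executor,
// so logging never blocks the HTTP request thread.
@Bean
public MessageChannel requestLoggingChannel() {
    return new ExecutorChannel(Executors.newSingleThreadExecutor());
}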
The HttpRequestProcessorLogger bean manages the call to handleRequestMessage():
public class HttpRequestProcessorLogger extends AbstractRequestHandlerAdvice {

    private MessageChannel requestLoggingChannel;
    private MessageChannel responseLoggingChannel;
    private String requestLoggingChannelName;
    private String responseLoggingChannelName;
    private BeanFactory beanFactory;

    public HttpRequestProcessorLogger(String requestLoggingChannelName, String responseLoggingChannelName) {
        this.requestLoggingChannelName = requestLoggingChannelName;
        this.responseLoggingChannelName = responseLoggingChannelName;
    }

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) {
        getChannels();
        // log a copy of the outbound request message
        requestLoggingChannel.send(message);
        final Object result = callback.execute();
        final Message<?> outputMessage =
                MessageBuilder.class.isInstance(result) ? ((MessageBuilder<?>) result).build()
                        : (Message<?>) result;
        // log a copy of the response message
        responseLoggingChannel.send(outputMessage);
        return outputMessage;
    }

    private synchronized void getChannels() {
        if (requestLoggingChannelName != null) {
            final DestinationResolver<MessageChannel> channelResolver =
                    ChannelResolverUtils.getChannelResolver(this.beanFactory);
            requestLoggingChannel = channelResolver.resolveDestination(requestLoggingChannelName);
            responseLoggingChannel = channelResolver.resolveDestination(responseLoggingChannelName);
            requestLoggingChannelName = null;
            responseLoggingChannelName = null;
        }
    }

    @Override
    public void setBeanFactory(@NonNull BeanFactory beanFactory) throws BeansException {
        this.beanFactory = beanFactory;
    }
}

Spring transaction closes connection on commit for propagation type REQUIRES_NEW

In my application I am processing messages from a queue using Camel, in multiple threads.
I tried to persist data to a table during processing with a PlatformTransactionManager and propagation REQUIRES_NEW, but on commit the transaction seems to be closed, and the connection is then not available for other processing.
The application context.xml looks like the snippet below.
<!-- other definitions -->
<context:property-placeholder location="classpath:app.properties"/>
<bean id="appDataSource" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
<property name="url" value="${dburl}"/>
<property name="username" value="${dbUserName}"/>
<property name="password" value="${dbPassword}"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="appDataSource" />
</bean>
<!-- Other bean reference. -->
<bean id="itemDao" class="app.item.dao.ItemDao">
<property name="dataSource" ref="appDataSource"/>
</bean>
<bean id="orderProcess" class="app.order.process.OrderProcess" scope="prototype">
<property name="itemDao" ref="itemDao"/>
</bean>
I have DAO classes like the one below; there are other DAOs as well.
public class ItemDao{
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;
private PlatformTransactionManager transactionManager;
private TransactionStatus transactionStatus;
//Setter injection of datasource
public void setDataSource(DataSource dataSource) {
this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
this.transactionManager = new DataSourceTransactionManager(dataSource);
}
//setterInjection
public void setTransactionManager(PlatformTransactionManager transactionManager) {
this.transactionManager = transactionManager;
}
public void createAndStartTransaction()
{
DefaultTransactionDefinition transDef = new DefaultTransactionDefinition();
transDef.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
if (transactionManager != null)
{
transactionStatus = transactionManager.getTransaction(transDef);
} // if transactionManager null log something went incorrect
}
public void commit() throws Exception
{
if (transactionManager != null && transactionStatus != null)
{
transactionManager.commit(transactionStatus);
}
}
public void rollBack() throws Exception
{
if (transactionManager != null && transactionStatus != null)
{
transactionManager.rollback(transactionStatus);
}
}
}
Finally, in the code flow, once the context is defined, those beans are used to process the message:
Parse the message from the queue.
Validate the message, check the metadata in the database, and insert the data into the database. (I am trying to persist the data immediately at this point.)
After that, the flow continues processing.
Below is what I did to persist the data to the database; refer to the code snippet. The challenge is that this works when I test with a single instance, but not when messages are processed concurrently.
//....
//.. fetch info from the database using other DAOs
//.. insert into another table
// Below is the code I added where I need to persist the data to the database
try {
    orderProcess.itemDao.createAndStartTransaction();
    orderProcess.itemDao.
} catch (Exception exe) {
    orderProcess.itemDao.rollBack();
} finally {
    // within try/catch
    orderProcess.commit();
}
//.. other DAOs used to fetch data from different database tables
//.. the process is still not completed at this point
When the process tries to fetch the next message from the queue, it cannot get a connection and throws a connection-null exception.
What I observed is that the process closes the connection abruptly, so when it picks up the next message no connection is available.
SQL state [null]; error code [0]; Connection is null.; nested exception is java.sql.SQLException: Connection is null.
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
Any idea how to persist the data in an independent transaction during the process?
The design is not maintainable, but I was able to modify the code for my requirement and didn't notice any side effects.
The DAO call was done from a different layer.
I extracted the insert/update/delete into a specific DAO class.
And I created a separate method in this DAO to call the insert(), etc.
public void checkAndValidate(Object input) {
    // check whether the data already exists in the DB
    boolean exists = readDao.checkForData(input);
    if (!exists) {
        // the method annotated with @Transactional
        insertDataToDB(input);
    }
    //.. other processing ..
}

@Transactional
public Object insertDataToDB(Object data) throws Exception {
    try {
        writeDao.insertData(data);
    } catch (Exception exe) {
        // handle the exception
    }
    return data;
}
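As an alternative sketch (bean and method names are assumptions): the same REQUIRES_NEW behavior can be obtained from a TransactionTemplate, which avoids keeping TransactionStatus as DAO state; that shared field is a likely culprit when several threads process messages concurrently.

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class ItemPersister {

    private final TransactionTemplate transactionTemplate;
    private final ItemDao itemDao;

    public ItemPersister(PlatformTransactionManager transactionManager, ItemDao itemDao) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        // every persist() call below runs in its own transaction, committed on return
        this.transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
        this.itemDao = itemDao;
    }

    public void persist(final Object data) {
        transactionTemplate.execute(status -> {
            itemDao.insertData(data); // hypothetical DAO method
            return null;
        });
    }
}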

Spring Integration Flow with Jdbc Message source which has dynamic query

I am trying to do change data capture (CDC) from an Oracle DB using Spring Cloud Data Flow with Kafka as the broker. I am using a polling mechanism: I poll the database with a basic select query at regular intervals to capture any updated data. To make the system more failure-proof, I persist my last poll time in the Oracle DB and use it to fetch only the data updated after the last poll.
public MessageSource<Object> jdbcMessageSource() {
JdbcPollingChannelAdapter jdbcPollingChannelAdapter =
new JdbcPollingChannelAdapter(this.dataSource, this.properties.getQuery());
jdbcPollingChannelAdapter.setUpdateSql(this.properties.getUpdate());
return jdbcPollingChannelAdapter;
}
@Bean
public IntegrationFlow pollingFlow() {
IntegrationFlowBuilder flowBuilder = IntegrationFlows.from(jdbcMessageSource(),spec -> spec.poller(Pollers.fixedDelay(3000)));
flowBuilder.channel(this.source.output());
flowBuilder.transform(trans,"transform");
return flowBuilder.get();
}
My queries in application properties are as below:
query: select * from kafka_test where LAST_UPDATE_TIME >(select LAST_POLL_TIME from poll_time)
update : UPDATE poll_time SET LAST_POLL_TIME = CURRENT_TIMESTAMP
This is working perfectly for me; I am able to get the CDC from the DB with this approach.
The problem I am looking at now is this:
Creating a table just to maintain the poll time is an extra burden. I would rather maintain the last poll time in a Kafka topic and retrieve it from that topic when making the next poll.
I have modified the jdbcMessageSource method as below to try that:
public MessageSource<Object> jdbcMessageSource() {
String query = "select * from kafka_test where LAST_UPDATE_TIME > '"+<Last poll time value read from kafka comes here>+"'";
JdbcPollingChannelAdapter jdbcPollingChannelAdapter =
new JdbcPollingChannelAdapter(this.dataSource, query);
return jdbcPollingChannelAdapter;
}
But Spring Cloud Data Flow instantiates the pollingFlow() bean (please see the code above) only once. Hence, whatever query is built first stays the same. I want to update the query with the new poll time on each poll.
Is there a way to write a custom IntegrationFlow so that the query is updated every time I poll?
I have tried out IntegrationFlowContext for that but wasn't successful.
Thanks in advance !!!
With the help of both answers above, I was able to figure out the approach.
Write a JDBC template, wrap it as a bean, and use it for the integration flow.
@EnableBinding(Source.class)
@AllArgsConstructor
public class StockSource {

    private DataSource dataSource;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // You can use the normal message channel available in Spring Cloud Data Flow as well.
    private MessageChannelFactory messageChannelFactory;

    private List<String> findAll() {
        jdbcTemplate = new JdbcTemplate(dataSource);
        String time = "10/24/60"; // this means 10 seconds for an Oracle DB
        String query = <<your query here, e.g. select * from test where (last_updated_time > time)>>;
        return jdbcTemplate.query(query, new RowMapper<String>() {

            @Override
            public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                // ...
                // any row-mapper operations that you want to do with your result after the poll
                // ...
                // Change the time here for the next poll to the DB.
                return result;
            }
        });
    }

    @Bean
    public IntegrationFlow supplyPollingFlow() {
        IntegrationFlowBuilder flowBuilder = IntegrationFlows
                .from(this::findAll, spec -> spec.poller(Pollers.fixedDelay(5000)));
        flowBuilder.channel(<<your message channel>>);
        return flowBuilder.get();
    }
}
In our use case, we persisted the last poll time in a Kafka topic to keep the application stateless. Every new poll to the DB now has a new time in the where condition.
P.S.: your messaging broker (Kafka/RabbitMQ) should be running locally, or you should connect to it if it is hosted on a different platform.
God Speed !!!
See Artem's answer for the mechanism for a dynamic query in the standard adapter; an alternative, however, would be to simply wrap a JdbcTemplate in a Bean and invoke it with
IntegrationFlows.from(myPojo(), "runQuery", e -> ...)
...
or even a simple lambda
.from(() -> jdbcTemplate...)
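A slightly fuller sketch of that lambda idea (the query and the accessor are assumptions); fromSupplier() runs the supplier on every poll, so each poll can embed the latest poll time:

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.jdbc.core.JdbcTemplate;

@Bean
public IntegrationFlow pollingFlow(JdbcTemplate jdbcTemplate) {
    return IntegrationFlows
            .fromSupplier(() -> jdbcTemplate.queryForList(
                            "select * from kafka_test where LAST_UPDATE_TIME > ?",
                            lastPollTimeFromKafka()), // hypothetical accessor for the persisted poll time
                    e -> e.poller(Pollers.fixedDelay(3000)))
            .channel("output")
            .get();
}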
We have this test configuration (sorry, it is an XML):
<inbound-channel-adapter query="select * from item where status=:status" channel="target"
data-source="dataSource" select-sql-parameter-source="parameterSource"
update="delete from item"/>
<beans:bean id="parameterSource" factory-bean="parameterSourceFactory"
factory-method="createParameterSourceNoCache">
<beans:constructor-arg value=""/>
</beans:bean>
<beans:bean id="parameterSourceFactory"
class="org.springframework.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
<beans:property name="parameterExpressions">
<beans:map>
<beans:entry key="status" value="#statusBean.which()"/>
</beans:map>
</beans:property>
<beans:property name="sqlParameterTypes">
<beans:map>
<beans:entry key="status" value="#{ T(java.sql.Types).INTEGER}"/>
</beans:map>
</beans:property>
</beans:bean>
<beans:bean id="statusBean"
class="org.springframework.integration.jdbc.config.JdbcPollingChannelAdapterParserTests$Status"/>
Pay attention to the ExpressionEvaluatingSqlParameterSourceFactory and its createParameterSourceNoCache() factory method. Its result can be used for the select-sql-parameter-source.
The JdbcPollingChannelAdapter has a setSelectSqlParameterSource setter on the matter.
So, you configure an ExpressionEvaluatingSqlParameterSourceFactory to resolve some query parameter as an expression for a bean method invocation that gets the desired value from Kafka. Then createParameterSourceNoCache() will help you obtain the expected SqlParameterSource.
There is some info in docs as well: https://docs.spring.io/spring-integration/docs/current/reference/html/#jdbc-inbound-channel-adapter
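A hedged Java-config sketch of the same arrangement (the kafkaOffsetBean bean and its lastPollTime() method are assumptions, standing in for however you read the offset back from Kafka):

import java.util.Collections;

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory;
import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;

@Bean
public ExpressionEvaluatingSqlParameterSourceFactory parameterSourceFactory() {
    ExpressionEvaluatingSqlParameterSourceFactory factory =
            new ExpressionEvaluatingSqlParameterSourceFactory();
    // '@kafkaOffsetBean.lastPollTime()' is a hypothetical bean method that
    // fetches the persisted poll time from the Kafka topic
    factory.setParameterExpressions(
            Collections.singletonMap("lastPollTime", "@kafkaOffsetBean.lastPollTime()"));
    return factory;
}

@Bean
public MessageSource<Object> jdbcMessageSource(DataSource dataSource,
        ExpressionEvaluatingSqlParameterSourceFactory factory) {
    JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(dataSource,
            "select * from kafka_test where LAST_UPDATE_TIME > :lastPollTime");
    // the parameter source is re-evaluated on every poll (no caching)
    adapter.setSelectSqlParameterSource(factory.createParameterSourceNoCache(""));
    return adapter;
}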

JMS - How to send message back to the client?

This is my client-side code:
public class ABCServlet extends HttpServlet {
protected void doGet(HttpServletRequest request,
HttpServletResponse response){
//do blah blah
String msg = null;
java.io.OutputStream os = response.getOutputStream();
java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os);
oos.writeObject(msg);
msg = null;
oos.flush();
oos.close();
}
I don't know how, using the above code, my listener gets kicked off:
public class ABCListener implements MessageListener {
    @Override
    public void onMessage(Message arg0) {
        AbstractJDBCFacade façade = null;
        try {
            façade = something;
            throw new UserException();
        } catch (UserException ex) {
            log.error("ABC Exception " + ex);
        }
Configuration:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">....
<bean id="jmsQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer102">
I have 3 questions:
1. Without putting it on the queue explicitly, how does a listener get invoked?
2. When the onMessage method throws UserException, instead of logging it I want to pass the message back to the client. How can I do that?
3. Why would someone use JndiObjectFactoryBean instead of ActiveMQ...
JMS by design is supposed to be asynchronous and one-way. Even "synchronous" JMS using the consumer's receive method internally turns into creating a new temporary queue. And here we come to the second point, its one-way nature: a JMS queue is supposed to be one-way, which is why it is called point-to-point (http://www.enterpriseintegrationpatterns.com/patterns/messaging/PointToPointChannel.html). Of course, technically, with some gymnastics you can achieve what you want, but it is bad practice and will also lead to performance degradation, because you will need filtering.
To make this work fast, the best way is to have exactly one logical receiver (you can use concurrent consumers for one receiver, but that should be one logical consumer without any need to filter messages).
Without putting it on the queue explicitly, how does a listener get invoked?
A listener gets invoked only when a message arrives on the queue. That's the only way it works, as designed.
In general, there are two types of message-consuming models: push (also known as event-driven consumption) and poll. In the push model, all listeners (following the canonical observer pattern) are registered with the broker, and when the broker receives a new message on a queue it invokes the listener's method. In the polling model, on the other hand, the consumer itself takes care of receiving messages: at some interval it contacts the broker and checks the queue for new messages.
Push model: http://www.enterpriseintegrationpatterns.com/patterns/messaging/EventDrivenConsumer.html
Poll model: http://www.enterpriseintegrationpatterns.com/patterns/messaging/PollingConsumer.html
When the onMessage method throws UserException, instead of logging it I want to pass the message back to the client. How can I do that?
That's a very bad practice. Of course, technically you can achieve it with dirty tricks, but that is not the right way to use JMS. When onMessage throws an exception, the message won't be taken from the queue (assuming you did not reconfigure acknowledge modes or apply other tricks). So the best way to solve your problem, in my view, is to use a redelivery limit on the message and a dead letter queue (http://www.enterpriseintegrationpatterns.com/patterns/messaging/DeadLetterChannel.html). If the system was not able to process the message after some number of attempts (which is exactly what the redelivery limit expresses), the broker removes the message from the queue and sends it to a so-called dead letter queue, where all failed (from the broker's point of view) messages are stored. A client can then read that queue and decide what to do with each message.
In amq: http://activemq.apache.org/message-redelivery-and-dlq-handling.html
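For instance, with ActiveMQ the redelivery limit is set on the connection factory (a sketch; the broker URL and the limit are assumptions); once it is exhausted, the broker moves the poisoned message to the ActiveMQ.DLQ dead letter queue:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

// after 5 failed deliveries the broker gives up and dead-letters the message
RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
redeliveryPolicy.setMaximumRedeliveries(5);

ActiveMQConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
connectionFactory.setRedeliveryPolicy(redeliveryPolicy);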
If you want to use the so-called "synchronous" features of JMS and there really is no way to use a dead letter queue or something similar, then you can use the consumer.receive method on the client. But in this case you should send a response for every message: one kind of message on success and error messages on failure, so the client can understand what is going on. I don't think you need such a huge overhead, though, because you actually only need the failure messages. Also, in this case you will have to take care of appropriate receive timeouts.
Why would someone use JndiObjectFactoryBean instead of ActiveMQ...
That's because you are using Spring, and there are additional features especially for Spring.
PS:
1. For consuming:
How can I send a message using just this piece of code? Don't I need to put it on a queue? java.io.OutputStream os = response.getOutputStream(); java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os); oos.writeObject(msg);
For receiving, something like this:
<bean id="connectionFactory" class="org.springframework.
jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="baseJNDITemplate"/>
<property name="jndiName"
value="weblogic.jms.ConnectionFactory"/>
</bean>
<bean id="queue" class="org.springframework.
jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="baseJNDITemplate"/>
<property name="jndiName" value="#{properties.queueName}"/>
</bean>
<bean id="messageListenerContainer"
class="org.springframework.jms.listener.
DefaultMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="destination" ref="queue"/>
<property name="messageListener" ref="messageListener"/>
<property name="sessionTransacted" value="true"/>
</bean>
<bean id="messageListener" class="com.example.ABCListener"/>
And then all of the logic for message processing is simply in the listener.
For sending, something like this in the config:
<bean id="jmsQueueTemplate"
class="org.springframework.
jms.core.JmsTemplate">
<property name="connectionFactory">
<ref bean="jmsConnectionFactory"/>
</property>
<property name="destinationResolver">
<ref bean="jmsDestResolver"/>
</property>
...
</bean>
<bean id="jmsDestResolver"
class=" org.springframework.jms.support.destination.
JndiDestinationResolver"/>
<bean id="jmsConnectionFactory"
class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:comp/env/jms/myCF"/>
<property name="lookupOnStartup" value="false"/>
<property name="cache" value="true"/>
<property name="proxyInterface" value="amq con fact here"/>
</bean>
and in code simply use the jmsTemplate.send(queue, messageCreator) method:
@Autowired
ConnectionFactory connectionFactory;

@Test(enabled = false)
public void testJmsSend(final String msg) throws Exception {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    template.send("test_queue", new MessageCreator() {

        @Override
        public Message createMessage(Session session) throws JMSException {
            return session.createTextMessage(msg);
        }
    });
}
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/cspr_data_access_jms.html
I believe the dead letter channel comes into the picture only when the message is not properly received by the receiver. In my case, the receiver received it and processed it; however, processing failed with some exception. I want to let the sender know that there was an exception and the message did not process successfully. I can do this using a response queue, but I don't want to do that. Can the receiver reply to the sender on the same queue? How?
The dead letter channel is also a kind of error handling for message processing: if message processing fails, then after the redelivery limit is reached the message gets transferred there. It is not only for transport issues but also for processing issues. If message processing fails with an exception, the message stays in the queue and won't be acked by default. So what should we do with such a message, for example if it failed due to a database error or something similar? We should initiate an error-handling process: notify assurance systems and stakeholders, collect all the necessary information, and preserve the message. Queues created exactly for that purpose make this much easier. The customer support team can then investigate the error queue to analyze what happened, and monitoring tools can collect notifications and statistics on such errors. Once it is understood what happened, the message is removed from the queue and archived.
After processing a message, the consumer is responsible for deleting
the message. If the consumer doesn't delete the message, for example
because it crashed while processing the message, the message
becomes visible again after the message's visibility timeout expires.
Each time this happens, the message's receive count is increased.
When this count reaches a configured limit, the message is placed in a
designated dead letter queue.
http://www.enterpriseintegrationpatterns.com/patterns/messaging/DeadLetterChannel.html
I can do this using a response queue, but I don't want to do that. Can the receiver reply to the sender on the same queue? How?
For you it will look like the same queue, but internally a new temporary queue will be created. To achieve that you should use the JMS request/reply message pattern. More here: http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
The only part that still confuses me is: if I expect my JMS listener (receiver) to listen to the queue, then my sender should also use JMS and connect to the same queue to send a message. But the ABCListener application that I am supporting does not have any configuration where the sender is configured for the queue. All the sender does is 3 lines of code: java.io.OutputStream os = response.getOutputStream(); java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os); oos.writeObject(msg); Literally, that is it. I don't know how it still works!
Of course those 3 lines of code with the output stream do nothing except write the msg string to the servlet response. To send any JMS message to a queue you will have to use the JMS API, or some library like Spring that wraps it and adds extra features.
I've written some simple samples to make this clearer.
A modified servlet for asynchronous processing with a dead letter queue (for the DLQ you should, of course, also create another listener):
public class AsynchronousJmsSenderServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String msg = null;
        try (java.io.OutputStream os = response.getOutputStream()) {
            try (java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os)) {
                oos.writeObject(msg);
            }
        }
        sendJmsMessage(msg);
    }

    private void sendJmsMessage(final String msg) {
        ConnectionFactory connectionFactory = null; // here, get it in some way from Spring
        JmsTemplate template = new JmsTemplate(connectionFactory);
        template.send("your_queue_name", new MessageCreator() {

            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(msg);
            }
        });
    }
}
And here is the code for "synchronous" processing with status reply messages:
public class SynchronousJmsSenderServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String msg = null;
        try (java.io.OutputStream os = response.getOutputStream()) {
            try (java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os)) {
                oos.writeObject(msg);
            }
        }
        sendJmsMessage(msg);
    }

    private void sendJmsMessage(final String msg) {
        ConnectionFactory connectionFactory = null; // here, get it in some way from Spring
        JmsTemplate template = new JmsTemplate(connectionFactory);
        Message reply = template.sendAndReceive("your_queue_name", new MessageCreator() {

            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(msg);
            }
        });
        if (reply instanceof TextMessage) {
            try {
                String status = ((TextMessage) reply).getText();
                // do error handling if the status is an error
            } catch (JMSException ex) {
                throw new RuntimeException("Unable to get status message", ex);
            }
        } else {
            throw new RuntimeException("Only text messages are supported");
        }
    }
}
public class SynchronousJmsMessageListener implements SessionAwareMessageListener {

    @Override
    public void onMessage(Message request, Session session) throws JMSException {
        try {
            // do some processing
            sendReply(request, session, "OK");
        } catch (Exception ex) {
            sendReply(request, session, "Error: " + ex.toString());
        }
    }

    private void sendReply(Message request, Session session, String status) {
        try {
            TextMessage reply = session.createTextMessage(); // or, for example, an ActiveMQTextMessage
            reply.setJMSCorrelationID(request.getJMSCorrelationID());
            reply.setText(status);
            // send the reply to the destination the request asked for
            MessageProducer producer = session.createProducer(request.getJMSReplyTo());
            producer.send(reply);
        } catch (JMSException exception) {
            throw new RuntimeException("Unable to send reply", exception);
        }
    }
}
You will need Spring 5 to have the sendAndReceive method on JmsTemplate; otherwise you will have to do all of that manually.
PS1: Please let me know if that will work

ItemReaderAdapter to Read Custom DAO

I have a requirement to use Spring Batch to read data via existing logic: an existing target-object method queries the database and returns a list of objects.
So my task is to read this in chunks. The list size from the existing code is around 15,000, but when implementing Spring Batch I wanted to read in chunks of 100, and this was not happening through the ItemReaderAdapter.
The code snippets below illustrate the issue. Would this be possible with Spring Batch? I looked at the Delegating Job Sample Spring example, but the service there returns an object on every chunk, not the whole list.
Please advise.
Job.xml
<step id="firststep">
<tasklet>
<chunk reader="myreader" writer="mywriter" commit-interval="100" />
</tasklet>
</step>
<job id="firstjob" incrementer="idIncrementer">
<step id="step1" parent="firststep" />
</job>
<beans:bean id="myreader" class="org.springframework.batch.item.adapter.ItemReaderAdapter">
<beans:property name="targetObject" ref="readerService" />
<beans:property name="targetMethod" value="getCustomer" />
</beans:bean>
<beans:bean id="readerService" class="com.sh.java.ReaderService">
</beans:bean>
ReaderService.java
public class ReaderService {
public List<CustomItem> getCustomer() throws Exception {
/*
* code to get database instances
*/
List<CustomItem> customList = dao.getCustomers(date);
System.out.println("Customer List Size: " + customList.size()); //Here it is 15K
return (List<CustomItem>) customList;
}
}
Before all: reading a 15K List<> of objects may hurt performance; check whether you can write a custom SQL query and use a JDBC/Hibernate cursor item reader instead (a sketch follows at the end of this section).
What you are trying to do is not possible using ItemReaderAdapter (it wasn't designed to read a chunk of objects), but you can achieve the same result by writing a custom ItemReader extending AbstractItemCountingItemStreamItemReader to inherit the ItemStream capabilities and overriding the abstract or no-op methods; especially:
in doOpen(), call your readerService.getCustomer() and save the List<> in a class variable,
in doRead(), read the next item - from the List<> read in doOpen() - using the built-in index stored in the ExecutionContext.
@Bellabax,
Doing it the way you suggested still reads all the database records in doOpen; however, the reader then reads the list retrieved by doOpen in chunks. Please advise.
CustomerReader.java
public class CustomerReader extends AbstractItemCountingItemStreamItemReader<Customer> {

    List<Customer> customerList;

    public CustomerReader() {
    }

    @Override
    protected void doClose() throws Exception {
        customerList.clear();
        setMaxItemCount(0);
        setCurrentItemCount(0);
    }

    @Override
    protected void doOpen() throws Exception {
        customerList = dao.getCustomers(date);
        System.out.println("Customer List Size: " + customerList.size()); // This still prints 15K
        setMaxItemCount(customerList.size());
    }

    @Override
    protected Customer doRead() throws Exception {
        // Here, reading the 15K list in chunks!
        Customer customer = customerList.get(getCurrentItemCount() - 1);
        return customer;
    }
}
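For reference, a minimal sketch of the cursor-based alternative mentioned at the top of this answer (the SQL, the date filter, and the Customer mapping are assumptions): the cursor streams rows one at a time, so the full 15K list is never materialized and the commit-interval of 100 behaves as expected.

import java.time.LocalDate;

import javax.sql.DataSource;

import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.core.BeanPropertyRowMapper;

@Bean
public JdbcCursorItemReader<Customer> customerCursorReader(DataSource dataSource) {
    JdbcCursorItemReader<Customer> reader = new JdbcCursorItemReader<>();
    reader.setDataSource(dataSource);
    // hypothetical query mirroring what dao.getCustomers(date) selects
    reader.setSql("select * from customer where created_date = ?");
    reader.setPreparedStatementSetter(ps -> ps.setDate(1, java.sql.Date.valueOf(LocalDate.now())));
    // maps result-set columns onto Customer properties by name
    reader.setRowMapper(new BeanPropertyRowMapper<>(Customer.class));
    return reader;
}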
