How to poll a directory for a file? - Spring

I need to be able to poll a directory for a specific file using SCP, and once the file has been processed, it needs to keep polling.
Is this possible with Spring Batch?

The normal way to handle this is with Spring Integration. The way I'd address it is with a Spring Integration flow that uses an SFTP Inbound Channel Adapter to retrieve the files, then passes the name of the transferred file to Spring Batch to launch a job. The flow would be similar to the SpringBatchIntegration sample in my Spring Batch webinar here: https://github.com/mminella/SpringBatchWebinar
In that example, I use Twitter to launch the job. The only thing you'd need to change is the Twitter piece for the SFTP adapter.

I had to solve the same problem (but accessing the local filesystem only) and I did not find any solution in the framework, so I ended up creating my own class, which polls for the file and creates a resource. I know this is just a workaround, but I haven't found a better way to do it so far.
I can't remember where (maybe in the "retry handling" part), but I read in the documentation something like "batch jobs should not try to solve issues like files not found, connections down and so on; these kinds of errors should make the job raise an error to be handled by operators", so I gave up...
On the other hand, Spring Retry was once part of Spring Batch and is now a separate library. Maybe you can just assume the file is there, let the step fail if the reader does not find it, and establish a "retry policy" for that step, but for me that's overkill.
This is what I did:
<bean id="resourceFactory" class="com.mycompany.batch.zip.ResourceFactory">
    <property name="retryAttemps" value="${attemps}" />
    <property name="timeBetweenAttemps" value="${timeBetweenAttemps}" />
</bean>

<bean id="myResource" factory-bean="resourceFactory" factory-method="create" scope="step">
    <constructor-arg value="${absolutepath}" type="java.lang.String" />
</bean>

<!-- step scope to avoid looking for the file at deployment time -->
<bean id="myReader" class="org.springframework.batch.item.xml.StaxEventItemReader" scope="step">
    <property name="fragmentRootElementName" value="retailer" />
    <property name="unmarshaller" ref="reportUnmarshaller" />
    <property name="resource" ref="myResource" />
</bean>
And this is my class:
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.InputStreamResource;
import org.springframework.core.io.Resource;

public class ResourceFactory {

    public static final Logger LOG = LoggerFactory.getLogger(ResourceFactory.class);

    private int retryAttemps;
    private long timeBetweenAttemps;

    public Resource create(String resource) throws IOException, InterruptedException {
        File f = new File(resource);
        int attemps = 1;
        // Poll until the file shows up or the retry limit is reached
        while (!f.exists()) {
            if (attemps < this.retryAttemps) {
                attemps++;
                LOG.warn("File " + resource + " not found, waiting " + timeBetweenAttemps
                        + " ms before retrying. Attempt: " + attemps + " of " + this.retryAttemps);
                Thread.sleep(this.timeBetweenAttemps);
            } else {
                throw new FileNotFoundException(resource);
            }
        }
        if (resource.endsWith(".zip")) {
            ZipFile zipFile = new ZipFile(resource);
            Enumeration<? extends ZipEntry> entries = zipFile.entries();
            if (!entries.hasMoreElements()) {
                throw new FileNotFoundException("The zip file has no entries inside");
            }
            ZipEntry entry = entries.nextElement();
            // TODO test whether a buffered stream is faster than the raw InputStream
            InputStream is = new BufferedInputStream(zipFile.getInputStream(entry));
            if (LOG.isInfoEnabled()) {
                LOG.info("Opening a compressed file of " + entry.getSize() + " bytes");
            }
            return new InputStreamResource(is);
        }
        LOG.info("Opening a regular file");
        return new FileSystemResource(f);
    }

    public void setRetryAttemps(int retryAttemps) {
        this.retryAttemps = retryAttemps;
    }

    public void setTimeBetweenAttemps(long timeBetweenAttemps) {
        this.timeBetweenAttemps = timeBetweenAttemps;
    }
}
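The waiting loop above can be reduced to a small pure-Java helper, which makes the retry behaviour easy to unit-test in isolation (class and method names here are illustrative, not part of the original code):

```java
import java.io.File;
import java.io.FileNotFoundException;

public class FilePoller {

    // Waits for a file to appear, sleeping between checks,
    // and fails after maxAttempts checks.
    static File awaitFile(String path, int maxAttempts, long waitMillis)
            throws FileNotFoundException, InterruptedException {
        File f = new File(path);
        int attempt = 1;
        while (!f.exists()) {
            if (attempt >= maxAttempts) {
                throw new FileNotFoundException(path);
            }
            attempt++;
            Thread.sleep(waitMillis);
        }
        return f;
    }
}
```

Keeping the waiting logic separate from the zip/resource handling also avoids the trap of returning an uninitialized resource when the file exists on the first check.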
If anyone knows a better way to do that, I'll gladly remove this answer (and implement the new solution)
PS: BTW, I found some faults in my code while reviewing it for this post, so writing it up has been helpful even with no other answers :)

Related

JMS - How to send message back to the client?

This is my client side code :
public class ABCServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        // do blah blah
        String msg = null;
        java.io.OutputStream os = response.getOutputStream();
        java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os);
        oos.writeObject(msg);
        msg = null;
        oos.flush();
        oos.close();
    }
}
I don't know how using the above code my listener gets kicked off -
public class ABCListener implements MessageListener {
    @Override
    public void onMessage(Message arg0) {
        AbstractJDBCFacade façade = null;
        try {
            façade = something;
            throw new UserException();
        } catch (UserException ex) {
            log.error("ABC Exception " + ex);
        }
    }
}
Configuration :
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">....
<bean id="jmsQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
I have 3 questions:
1. Without putting a message on the queue explicitly, how does a listener get invoked?
2. When the onMessage method throws UserException, instead of logging it I want to pass the message to the client. How can I do that?
3. Why would someone use JndiObjectFactoryBean instead of ActiveMQ...
JMS by design is supposed to be asynchronous and one-way. Even "synchronous" JMS using the consumer's receive method internally turns into creating a new temporary queue. And here we come to the second point, its one-way nature. A JMS queue is supposed to be one-way, which is why it is called point-to-point (http://www.enterpriseintegrationpatterns.com/patterns/messaging/PointToPointChannel.html). Of course, technically, with some dancing you can achieve what you want, but it is bad practice and will also degrade performance, because you will need filtering.
To make this work fast, the best way is to have exactly one logical receiver (of course you can use concurrent consumers for one receiver, but that should be one logical consumer without any need to filter messages).
Without putting a message on the queue explicitly, how does a listener get invoked?
A listener gets invoked only when a message arrives on the queue. That's the only way to make it work as it was designed to work.
In general there are two message-consuming models: push (also known as event-driven consuming) and poll. With the push model, all listeners (as in the canonical observer pattern) are registered with the broker, and when the broker receives a new message on a queue it invokes the listener's method. With the polling model, on the other hand, the consumer itself takes care of receiving messages: at some interval it goes to the broker and checks the queue for new messages.
Push model: http://www.enterpriseintegrationpatterns.com/patterns/messaging/EventDrivenConsumer.html
Poll model: http://www.enterpriseintegrationpatterns.com/patterns/messaging/PollingConsumer.html
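To make the two models concrete, here is a plain-Java sketch (not JMS code; `BlockingQueue` stands in for the broker's queue, and all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class ConsumingModels {

    // Poll model: the consumer itself checks the queue at its own pace.
    static String pollOnce(BlockingQueue<String> queue, long timeoutMillis)
            throws InterruptedException {
        return queue.poll(timeoutMillis, TimeUnit.MILLISECONDS); // null if empty
    }

    // Push model: the "broker" keeps registered listeners and invokes
    // them as each message arrives (event-driven consumer).
    static class MiniBroker {
        private final List<Consumer<String>> listeners = new ArrayList<>();

        void register(Consumer<String> listener) {
            listeners.add(listener);
        }

        void deliver(String message) {
            for (Consumer<String> l : listeners) {
                l.accept(message); // the broker calls the listener, not the other way round
            }
        }
    }
}
```

A JMS MessageListener corresponds to the push model: you register it and the container calls onMessage for you.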
When the onMessage method throws UserException, instead of logging it I want to pass the message to the client. How can I do that?
That's a very bad practice. Of course, technically you can achieve it with dirty tricks, but that's not the right way to use JMS. When onMessage throws an exception, the message won't be taken off the queue (provided you haven't reconfigured the acknowledge mode or used other tricks). So the best way to solve your problem, from my point of view, is to use a redelivery limit on the message plus a dead letter queue (http://www.enterpriseintegrationpatterns.com/patterns/messaging/DeadLetterChannel.html). If the system was unable to process the message after some number of attempts (which is exactly what the redelivery limit expresses), the broker removes the message from the queue and sends it to a so-called dead letter queue, where all failed (from the broker's point of view) messages are stored. A client can then read that queue and decide what to do with each message.
In amq: http://activemq.apache.org/message-redelivery-and-dlq-handling.html
If you really want to use so-called "synchronous" features in JMS and there is truly no way to use a dead letter queue or something like it, then you can use the consumer.receive method on the client. But in this case you should send a response for every message: one kind of message on success and error messages on failure, so the client can understand what is going on. I don't think you need such a huge overhead, though, because you really only need the failure messages. Also, in this case you will have to take care of appropriate receive timeouts.
Why would someone use JndiObjectFactoryBean instead of ActiveMQ...
That's because you are using Spring, and there are additional features built especially for Spring.
PS:
1. For consuming:
How can I send a message using just this piece of code? Don't I need
to put this on a queue? java.io.OutputStream os =
response.getOutputStream(); java.io.ObjectOutputStream oos = new
java.io.ObjectOutputStream(os); oos.writeObject(msg);
For receiving, something like this:
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="baseJNDITemplate"/>
    <property name="jndiName" value="weblogic.jms.ConnectionFactory"/>
</bean>
<bean id="queue" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="baseJNDITemplate"/>
    <property name="jndiName" value="#{properties.queueName}"/>
</bean>
<bean id="messageListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destination" ref="queue"/>
    <property name="messageListener" ref="messageListener"/>
    <property name="sessionTransacted" value="true"/>
</bean>
<bean id="messageListener" class="com.example.ABCListener"/>
And then simply all logic for message processing will be in the listener.
For sending, something like this in the config:
<bean id="jmsQueueTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory">
        <ref bean="jmsConnectionFactory"/>
    </property>
    <property name="destinationResolver">
        <ref bean="jmsDestResolver"/>
    </property>
    ...
</bean>
<bean id="jmsDestResolver" class="org.springframework.jms.support.destination.JndiDestinationResolver"/>
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jms/myCF"/>
    <property name="lookupOnStartup" value="false"/>
    <property name="cache" value="true"/>
    <property name="proxyInterface" value="amq con fact here"/>
</bean>
and in code simply use the jmsTemplate.send(queue, messageCreator) method:

@Autowired
ConnectionFactory connectionFactory;

@Test(enabled = false)
public void testJmsSend(final String msg) throws Exception {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    template.send("test_queue", new MessageCreator() {
        @Override
        public Message createMessage(Session session) throws JMSException {
            return session.createTextMessage(msg);
        }
    });
}
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/cspr_data_access_jms.html
I believe the dead letter channel only comes into the picture when the message is not properly received by the receiver. In my case the receiver received it and processed it, but it failed with an exception while processing. I want to let the sender know that there was an exception and the message did not process successfully. I can do this using a response queue, but I don't want to do that; can the receiver send a message back to the sender on the same queue? How?
The dead letter channel is also a kind of error handling for message processing. If message processing fails, then after the redelivery limit is exhausted the message gets transferred there. It is not only for transport issues but also for processing issues. If message processing fails with an exception, the message stays in the queue and won't be acknowledged by default. So what should we do with such a message, for example if it failed due to a database error or something like that? We should initiate an error-handling process: notify assurance systems and stakeholders, collect all necessary info and preserve the message. Queues of this kind, which were created exactly for that, make this much easier. The customer support team can then investigate the error queue to analyze what happened. We also have monitoring tools for notifications and statistics collection on such errors. After understanding what happened, the message is removed from the queue and archived.
After processing a message, the consumer is responsible for deleting the message. If the consumer doesn't delete the message, for example because it crashed while processing the message, the message becomes visible again after the message's Visibility Timeout expires. Each time this happens, the message's receive count is increased. When this count reaches a configured limit, the message is placed in a designated Dead Letter Queue.
http://www.enterpriseintegrationpatterns.com/patterns/messaging/DeadLetterChannel.html
I can do this using a response queue but I don't want to do that, can
the receiver receive a message from the sender on the same queue ?
How?
For you it will look like the same queue, but internally a new temporary queue will be created. To achieve that you should use the JMS request/reply message pattern. More here: http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
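The correlation mechanism behind request/reply can be sketched in plain Java (this is an illustration of the idea, not ActiveMQ's implementation): each request carries a correlation ID, and a reply arriving on the temporary queue is matched to the waiting requester by that ID.

```java
import java.util.HashMap;
import java.util.Map;

public class ReplyMatcher {

    // Replies received on the temporary reply queue, keyed by correlation ID.
    private final Map<String, String> replies = new HashMap<>();

    // Called when a reply message arrives.
    void onReply(String correlationId, String body) {
        replies.put(correlationId, body);
    }

    // Called by the requester to pick up the reply matching its request.
    String takeReply(String correlationId) {
        return replies.remove(correlationId);
    }
}
```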
The only part that still confuses me is: if I expect my JMS listener (receiver) to listen to the queue, then my sender should also use JMS, connect to the same queue and send a message. But the ABCListener application that I am supporting does not have any configuration where the sender is configured for the queue. All the sender does is 3 lines of code: java.io.OutputStream os = response.getOutputStream(); java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os); oos.writeObject(msg); Literally, that is it. I don't know how it still works!
Of course, those 3 lines of code with the output stream do nothing with JMS; they only write the msg object to the servlet response. To send any JMS message to a queue you will have to use the JMS API, or some library like Spring which wraps it and adds additional features.
I've written simple samples to make it clearer.
Here is the servlet modified for asynchronous processing with a dead letter queue (for the DLQ you should of course also create another listener):
public class AsynchronousJmsSenderServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String msg = null;
        try (java.io.OutputStream os = response.getOutputStream()) {
            try (java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os)) {
                oos.writeObject(msg);
            }
        }
        sendJmsMessage(msg);
    }

    private void sendJmsMessage(final String msg) {
        ConnectionFactory connectionFactory = null; // here get it in some way from Spring
        JmsTemplate template = new JmsTemplate(connectionFactory);
        template.send("your_queue_name", new MessageCreator() {
            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(msg);
            }
        });
    }
}
And here is the code for "synchronous" processing and status reply messages
public class SynchronousJmsSenderServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String msg = null;
        try (java.io.OutputStream os = response.getOutputStream()) {
            try (java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(os)) {
                oos.writeObject(msg);
            }
        }
        sendJmsMessage(msg);
    }

    private void sendJmsMessage(final String msg) {
        ConnectionFactory connectionFactory = null; // here get it in some way from Spring
        JmsTemplate template = new JmsTemplate(connectionFactory);
        Message reply = template.sendAndReceive("your_queue_name", new MessageCreator() {
            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(msg);
            }
        });
        if (reply instanceof TextMessage) {
            try {
                String status = ((TextMessage) reply).getText();
                // do error handling if the status is an error
            } catch (JMSException ex) {
                throw new RuntimeException("Unable to get status message", ex);
            }
        } else {
            throw new RuntimeException("Only text messages are supported");
        }
    }
}
public class SynchronousJmsMessageListener implements SessionAwareMessageListener<Message> {

    @Override
    public void onMessage(Message request, Session session) throws JMSException {
        try {
            // do some processing
            sendReply(request, session, "OK");
        } catch (Exception ex) {
            sendReply(request, session, "Error: " + ex.toString());
        }
    }

    private void sendReply(Message request, Session session, String status) {
        try {
            TextMessage reply = session.createTextMessage(status);
            reply.setJMSCorrelationID(request.getJMSCorrelationID());
            MessageProducer producer = session.createProducer(request.getJMSReplyTo());
            producer.send(reply);
        } catch (JMSException exception) {
            throw new RuntimeException("Unable to send reply", exception);
        }
    }
}
You will need Spring 4.1 or later to have the sendAndReceive method on JmsTemplate; otherwise you will have to do all of that manually.
PS1: Please let me know if that will work

Customised MultiResourceItemReader: assign different mappers/writers for each file inside an archive

I have a requirement to read/process an archive which contains several flat files; each file should have its own mapping and writer.
How do I go about assigning a different FieldSetMapper and writer to each file
using the bean configuration?
I have started with extending the MultiResourceItemReader and overriding the open method
as shown here:
@Override
public void open(ExecutionContext executionContext) throws ItemStreamException {
    ZipFile zipFile;
    List<Resource> resources = new ArrayList<Resource>();
    try {
        zipFile = new ZipFile(pathtozipfile);
        Enumeration<? extends ZipEntry> zippedFile = zipFile.entries();
        while (zippedFile.hasMoreElements()) {
            ZipEntry zipEntry = zippedFile.nextElement();
            resources.add(new InputStreamResource(
                    zipFile.getInputStream(zipEntry), zipEntry.getName()));
        }
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    this.setResources(resources.toArray(new Resource[resources.size()]));
    super.open(executionContext);
}
and the bean declaration as follows:
<bean id="itemReader" class="com.proc.spring.ZipMResourceItemReader" scope="step">
    <property name="pathtozipfile" value="#{jobParameters['input.pathtozipfile']}" />
    <property name="delegate" ref="delegateReader" />
</bean>
<bean id="delegateReader" class="org.springframework.batch.item.file.FlatFileItemReader">
    <property name="lineMapper">
        <bean class="org.springframework.batch.item.file.mapping.." />
    </property>
</bean>
I know that programmatically I could use getCurrentResource() during read, but I expect Spring must support the ability to assign different FieldSetMappers based on the current resource of the MultiResourceItemReader.
I guess extracting the files as a first step and then assigning each file/resource to its own step with a FlatFileItemReader would be a solution, but I would prefer the delegate approach if it is possible to differentiate the mappers/writers based on the resource name.
First of all: extracting all files in a previous step is the best practice.
A solution can involve a custom ResourceAwareItemReaderItemStream used to dispatch to the correct reader by looking at the current resource name; you can dispatch manually or use a Classifier. This custom reader is then used as the delegate of your MultiResourceItemReader.
class ReaderDispatcher implements ResourceAwareItemReaderItemStream<Object> {

    private ItemReader<Object> delegate1;
    private ItemReader<Object> delegate2;
    private ItemReader<Object> delegate3;
    private ItemReader<Object> currentDelegate;
    private Resource resource;

    public void setResource(org.springframework.core.io.Resource resource) {
        this.resource = resource;
        currentDelegate = getDelegateFromResource();
    }

    public Object read() throws Exception {
        return currentDelegate.read();
    }

    // Other interface methods (or extend one of the abstract reader/stream implementations)

    private ItemReader<Object> getDelegateFromResource() {
        // here goes the code to pick the right delegateN based on the resource
        return delegate1; // placeholder
    }
}
(Sorry for untested/incomplete code; I am unable to check it, but I hope you can see the idea behind it.)
You can pre-configure your readers with its own fieldSetMapper or any other customization you want.
Check ResourceAwareItemWriterItemStream for writer counterpart of reader interface.
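The dispatch-by-resource-name idea boils down to a tiny classifier; everything below (the file-name patterns and the reader bean names) is illustrative and would map to your own pre-configured readers:

```java
public class ReaderClassifier {

    // Picks the name of the delegate reader from the file name of the
    // current resource inside the archive.
    static String classify(String filename) {
        if (filename.endsWith(".csv")) {
            return "csvReader";
        }
        if (filename.startsWith("retailer")) {
            return "retailerReader";
        }
        return "defaultReader";
    }
}
```

In the custom reader above, getDelegateFromResource() would apply exactly this kind of mapping to resource.getFilename().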

Connect LDAP from Spring

I have to build a web application based on Spring that allows the user to manage LDAP data. The connection to the LDAP server must be done only with the JNDI framework (no Spring LDAP allowed).
For this, I wrote a utility class to do the basic operations (add, update, delete, list, ...).
Here is a short block of code of this class :
public class LdapUtility {

    private static LdapUtility instance;

    private DirContext dirContext;

    public static LdapUtility getInstance() {
        if (LdapUtility.instance == null)
            LdapUtility.instance = new LdapUtility();
        return LdapUtility.instance;
    }

    /**
     * Connect to the LDAP
     */
    private LdapUtility() {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Manager,dc=my-domain,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        try {
            dirContext = new InitialDirContext(env);
        } catch (Exception ex) {
            dirContext = null;
        }
    }

    public void addUser(User u) {
        dirContext.createSubcontext(....); // add user in the LDAP
    }
}
With this code, I can access all my methods by calling LdapUtility.getInstance()..., but the connection to the LDAP will never be released.
Another way would be to connect to the LDAP before each operation, but in this case there would be too much connections to the LDAP...
So, here is my question : what is the most elegant/smartest way to access these methods ?
Thank you in advance :-)
Since you're already using Spring, I would recommend Spring LDAP:
Spring LDAP is a Java library for simplifying LDAP operations, based on the pattern of Spring's JdbcTemplate. The framework relieves the user of common chores, such as looking up and closing contexts, looping through results, encoding/decoding values and filters, and more.
Especially if you're not familiar with LDAP and its potential performance problems, it can help to start off using a utility library like this that does all the heavy lifting for you.
You configure the LDAP connection settings in the spring config:
<bean id="contextSource" class="org.springframework.ldap.core.support.LdapContextSource">
<property name="url" value="ldap://localhost:389" />
<property name="base" value="dc=example,dc=com" />
<property name="userDn" value="cn=Manager" />
<property name="password" value="secret" />
</bean>
<bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
<constructor-arg ref="contextSource" />
</bean>
You can then just use the LdapTemplate wherever you need to perform an LDAP action:
return ldapTemplate.search(
"", "(objectclass=person)",
new AttributesMapper() {
public Object mapFromAttributes(Attributes attrs)
throws NamingException {
return attrs.get("cn").get();
}
});
Without Spring (it being forbidden), I would quickly implement something similar:
(when being lazy) Create a simple callback interface, such as you can find in Spring (JpaCallback.execute(EntityManager em)), but for LDAP: MyLdapCallback.execute(LdapConnection connection). Instead of LdapConnection you can use whatever you require: objects from OpenLDAP or an SDK Context. Something like (just for illustration):
...
interface LdapCallback<T> {
    T execute(DirContext ctx) throws NamingException, IOException;
}
...
private <T> T execute(LdapCallback<T> callback) throws NamingException, IOException {
    T result = null;
    LdapContext ctx = new InitialLdapContext();
    try {
        result = callback.execute(ctx);
    } finally {
        ctx.close();
    }
    return result;
}
...
Once done, you create an anonymous class for each LDAP call and invoke it via execute(callback).
(having more time) Implement the above, plus create an AOP aspect that wraps methods marked with an annotation so that they execute inside the wrapper, without doing so explicitly in the code.
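The callback approach is the classic execute-around pattern: the template owns open/close, the callback owns the work. A self-contained sketch with a dummy "connection" (all names here are illustrative) shows the guarantee that the resource is released even when the callback throws:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ExecuteAround {

    interface Callback<R> {
        R doWork(AtomicBoolean connection) throws Exception;
    }

    // Opens the "connection", runs the callback, and always closes,
    // much like Spring's JdbcTemplate does for JDBC connections.
    static <R> R execute(AtomicBoolean connection, Callback<R> callback) throws Exception {
        connection.set(true); // "connect"
        try {
            return callback.doWork(connection);
        } finally {
            connection.set(false); // "close", even if doWork threw
        }
    }
}
```

The caller never touches open/close directly, which is exactly the chore the pattern removes.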
There are several ways to connect to LDAP. Using javax.naming.* is one of them. In the javadoc you may find that the classes in your SPI provider manage their own connections, so you don't have to care about it; that may be the answer to your question. See the JDK docs on how Context manages connections and networking: http://docs.oracle.com/javase/6/docs/api/javax/naming/ldap/LdapContext.html
If you are accustomed to more JDBC-like access, you may find http://www.openldap.org/jldap/ more to your liking. There you have connections completely under your control and you treat them much the same way as in JDBC. You may use any pooling library you like.
Not knowing the exact requirements I interpret the core question as being "when to open/close the connection".
My crystal ball tells me you may want to use a connection pool. True, you don't close the connection explicitly as this is handled by the pool but this may be ok for your assignment. It's fairly easy:
// Enable connection pooling
env.put("com.sun.jndi.ldap.connect.pool", "true");
The complete source code is referenced in Oracle's basic LDAP tutorial.

Spring integration: difficulty with transaction between 2 activators

I have this use case.
First chain:
<int:chain input-channel="inserimentoCanaleActivate" output-channel="inserimentoCanalePreRouting">
    <int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
This is the relative code:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public EventMessage<ModificaOperativitaRapporto> activate(EventMessage<InserimentoCanale> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
All is working great.
Then I have another chain:
<int:chain id="onlineCensimentoClienteChain" input-channel="ONLINE_CENSIMENTO_CLIENTE" output-channel="inserimentoCanaleActivate">
    <int:service-activator ref="onlineCensimentoClienteActivator" method="activate" />
    <int:splitter expression="payload.getPayload().getCanali()" />
</int:chain>
And the relative activator:
@Override
public EventMessage<CensimentoCliente> activate(EventMessage<CensimentoCliente> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
The CensimentoCliente payload, as described below, has a list of payloads of the first chain's type, so with a splitter I split on the list and reuse the code of the first chain.
public interface CensimentoCliente extends Serializable {
    Collection<? extends InserimentoCanale> getCanali();
    void setCanali(Collection<? extends InserimentoCanale> canali);
    ...
}
But since every activator gets its own transaction definition (the first one can live without the second one), I have a use case where the transactions are separated.
The goal is to have the database modifications of the two chains be part of the same transaction.
Any help?
Kind regards
Massimo
You can accomplish this by creating a custom channel (or other custom component, but this is the simplest approach) that wraps the message dispatch in a TransactionTemplate callback execution:
public class TransactionalChannel extends AbstractSubscribableChannel {

    private final MessageDispatcher dispatcher = new UnicastingDispatcher();
    private final TransactionTemplate transactionTemplate;

    TransactionalChannel(TransactionTemplate transactionTemplate) {
        this.transactionTemplate = transactionTemplate;
    }

    @Override
    protected boolean doSend(final Message<?> message, long timeout) {
        return transactionTemplate.execute(new TransactionCallback<Boolean>() {
            @Override
            public Boolean doInTransaction(TransactionStatus status) {
                return getDispatcher().dispatch(message);
            }
        });
    }

    @Override
    protected MessageDispatcher getDispatcher() {
        return dispatcher;
    }
}
In your XML, you can define your channel and transaction template and reference your custom channel just as you would any other channel:
<bean id="transactionalChannel" class="com.stackoverflow.TransactionalChannel">
<constructor-arg>
<bean class="org.springframework.transaction.support.TransactionTemplate">
<property name="transactionManager" ref="transactionManager"/>
<property name="propagationBehavior" value="#{T(org.springframework.transaction.TransactionDefinition).PROPAGATION_REQUIRES_NEW}"/>
</bean>
</constructor-arg>
</bean>
For your example, you could perhaps use a bridge to pass the message through the new channel:
<int:bridge input-channel="inserimentoCanaleActivate" output-channel="transactionalChannel" />
<int:chain input-channel="transactionalChannel" output-channel="inserimentoCanalePreRouting">
<int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
When you have a <service-activator> and @Transactional on the service method, the transaction is bound only to that method invocation.
If you want a transaction for the entire message flow (or part of it), you should declare a TX advice somewhere earlier.
If your channels are all direct channels, every service invocation will be wrapped in the same transaction.
The simplest way to accomplish what you want: write a simple @Gateway interface annotated with @Transactional and call it at the start of your message flow.
To clarify a bit regarding transactions, see "Understanding Transactions in Message flows".
Are these modifying 2 separate relational databases? If so, you are looking at an XA transaction. If you are running this on a non-XA container like Tomcat, all of this must be done in a single thread that is watched by a transaction manager; you will have to piggyback on the transaction manager that actually triggers these events. The transaction manager can be driven by a JMS message or a poller against some data source. This processing must also happen in a single thread so that Spring can run the entire process in a single transaction.
As a final note: do not introduce thread pools / queues between service activators. That can cause the activators to run in separate threads.
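The reason the whole flow must stay on one thread is that Spring binds the active transaction to the current thread via ThreadLocals (TransactionSynchronizationManager does this internally). A minimal model of why a thread handoff loses the transaction:

```java
public class TxBinding {

    // Stands in for Spring's thread-bound transaction resources:
    // TransactionSynchronizationManager keeps them in ThreadLocals.
    static final ThreadLocal<String> CURRENT_TX = new ThreadLocal<>();

    // Returns what a freshly spawned thread sees: no transaction binding.
    static String seenByNewThread() throws InterruptedException {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> seen[0] = CURRENT_TX.get());
        t.start();
        t.join();
        return seen[0];
    }
}
```

Any executor or queue between activators hands the message to a thread that has no such binding, so its database work runs outside the original transaction.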

How to disable freemarker caching in Spring MVC

I'm using Spring MVC v3 with FreeMarker views and cannot disable caching.
I tried setting cache to false on the viewResolver element in spring-servlet.xml, but it didn't work.
Basically, I'd like to make some changes to a FreeMarker template and see them in the browser with just a refresh (without restarting the application).
Any hints on how to do that?
In my XML the following was successful:
<bean id="freemarkerMailConfiguration" class="org.springframework.ui.freemarker.FreeMarkerConfigurationFactoryBean">
<property name="templateLoaderPaths" value="classpath:emailtemplates/task,classpath:emailtemplates/user"/>
<!-- Activate the following to disable template caching -->
<property name="freemarkerSettings" value="cache_storage=freemarker.cache.NullCacheStorage" />
</bean>
This is my mail configuration, but the freemarkerSettings property should be interesting for you, too.
I don't configure FreeMarker with XML but with @Configuration-annotated classes, since I prefer the Spring Boot style. You can disable FreeMarker's cache like this:
@Bean
public FreeMarkerConfigurer freeMarkerConfigurer() throws IOException, TemplateException {
    FreeMarkerConfigurer configurer = new FreeMarkerConfigurer() {
        @Override
        protected void postProcessConfiguration(freemarker.template.Configuration config)
                throws IOException, TemplateException {
            ClassTemplateLoader classTplLoader =
                    new ClassTemplateLoader(context.getClassLoader(), "/templates");
            ClassTemplateLoader baseMvcTplLoader =
                    new ClassTemplateLoader(FreeMarkerConfigurer.class, ""); // TODO: try to access spring.ftl directly
            MultiTemplateLoader mtl = new MultiTemplateLoader(new TemplateLoader[] {
                classTplLoader,
                baseMvcTplLoader
            });
            config.setTemplateLoader(mtl);
            config.setCacheStorage(new NullCacheStorage());
        }
    };
    configurer.setDefaultEncoding("UTF-8");
    configurer.setPreferFileSystemAccess(false);
    return configurer;
}
The key is in:
config.setCacheStorage(new NullCacheStorage());
But you can also use this instruction instead:
config.setTemplateUpdateDelayMilliseconds(0);
It should work for you.
In application.properties:
spring.freemarker.cache=false
As defined by the manual :
If you change the template file, then FreeMarker will re-load and
re-parse the template automatically when you get the template next
time. However, since checking if the file has been changed can be time
consuming, there is a Configuration level setting called ``update
delay''. This is the time that must elapse since the last checking for
a newer version of a certain template before FreeMarker will check
that again. This is set to 5 seconds by default. If you want to see
the changes of templates immediately, set it to 0.
After searching around, the configuration key was in the freemarker.template.Configuration javadocs, at the setSetting(key, value) method.
So, in short, just set the config template_update_delay to 0 for immediate change detection.
<bean id="freemarkerConfig" class="org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer">
    <property name="templateLoaderPath" value="/WEB-INF/ftl/"/>
    <property name="freemarkerSettings">
        <props>
            <prop key="template_update_delay">0</prop>
            <prop key="default_encoding">UTF-8</prop>
        </props>
    </property>
</bean>
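The update-delay behaviour amounts to this check (a simplified model for illustration, not FreeMarker's actual implementation): only re-examine the template file when the delay has elapsed since the last check, so a delay of 0 means "check on every request".

```java
public class UpdateDelayPolicy {

    private final long delayMillis;
    private Long lastCheck; // null until the first check

    UpdateDelayPolicy(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    // True when the template file should be re-examined for changes.
    boolean shouldRecheck(long nowMillis) {
        if (lastCheck == null || nowMillis - lastCheck >= delayMillis) {
            lastCheck = nowMillis;
            return true;
        }
        return false;
    }
}
```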
Did you check the FreeMarker documentation? It contains some hints on how to influence template caching at the FreeMarker Configuration level. I'm not sure if you have access to the FreeMarker Configuration object from inside Spring MVC, but if you do, the documentation page mentioned above could point you towards a possible solution.
I wasted the last two days (not entirely on this project) trying to disable the cache. It turned out I had the two options antiJARLocking and antiResourceLocking set in my context.xml; with those, the templates will ALWAYS be cached.
I had the same problem which I could solve only by implementing a custom template loader. Here is the working code:
protected void init() throws Exception {
    freemarkerConfig = new Configuration();
    freemarkerConfig.setObjectWrapper(ObjectWrapper.DEFAULT_WRAPPER);
    freemarkerConfig.setTemplateLoader(
            new CacheAgnosticTemplateLoader(new DefaultResourceLoader(), pdfTemplatePath));
}

protected static class CacheAgnosticTemplateLoader extends SpringTemplateLoader {

    public CacheAgnosticTemplateLoader(ResourceLoader resourceLoader, String templateLoaderPath) {
        super(resourceLoader, templateLoaderPath);
    }

    @Override
    public long getLastModified(Object templateSource) {
        // disabling template caching: pretend the template has always just changed
        return new Date().getTime();
    }
}
It seems that in the recently released FreeMarker version 2.3.17, a legal and simpler way to do it has appeared: freemarker.cache.NullCacheStorage.
