Spring Integration flow with a JDBC message source that has a dynamic query - spring-boot

I am trying to do change data capture (CDC) from an Oracle DB using Spring Cloud Data Flow with Kafka as the broker. I am using a polling mechanism for this: I poll the database with a basic select query at regular intervals to capture any updated data. To make the system more failure-proof, I have persisted my last poll time in the Oracle DB and use it to fetch only the data updated after the last poll.
@Bean
public MessageSource<Object> jdbcMessageSource() {
    JdbcPollingChannelAdapter jdbcPollingChannelAdapter =
            new JdbcPollingChannelAdapter(this.dataSource, this.properties.getQuery());
    jdbcPollingChannelAdapter.setUpdateSql(this.properties.getUpdate());
    return jdbcPollingChannelAdapter;
}
@Bean
public IntegrationFlow pollingFlow() {
    IntegrationFlowBuilder flowBuilder = IntegrationFlows.from(jdbcMessageSource(),
            spec -> spec.poller(Pollers.fixedDelay(3000)));
    flowBuilder.channel(this.source.output());
    flowBuilder.transform(trans, "transform");
    return flowBuilder.get();
}
My queries in the application properties are as follows:
query: select * from kafka_test where LAST_UPDATE_TIME >(select LAST_POLL_TIME from poll_time)
update : UPDATE poll_time SET LAST_POLL_TIME = CURRENT_TIMESTAMP
This is working perfectly for me; I am able to get the CDC from the DB with this approach.
The problem I am looking at now is this:
Creating a table just to maintain the poll time is an unnecessary burden. I would like to maintain this last poll time in a Kafka topic instead and retrieve it from that topic when I make the next poll.
I have modified the jdbcMessageSource() method as below to try that:
public MessageSource<Object> jdbcMessageSource() {
    String query = "select * from kafka_test where LAST_UPDATE_TIME > '"
            + <Last poll time value read from kafka comes here> + "'";
    JdbcPollingChannelAdapter jdbcPollingChannelAdapter =
            new JdbcPollingChannelAdapter(this.dataSource, query);
    return jdbcPollingChannelAdapter;
}
But Spring Cloud Data Flow instantiates the pollingFlow() bean (see the code above) only once, hence whatever query is built first remains in effect. I want to update the query with the new poll time on each poll.
Is there a way to write a custom IntegrationFlow so that this query is updated every time I make a poll?
I have tried out IntegrationFlowContext for that but wasn't successful.
Thanks in advance !!!

With the help of both of the other answers here, I was able to figure out the approach:
Write a JdbcTemplate query, wrap it in a bean, and use that bean for the integration flow.
@EnableBinding(Source.class)
@AllArgsConstructor
public class StockSource {

    private DataSource dataSource;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // You can use the normal message channel available in Spring Cloud Data Flow as well.
    private MessageChannelFactory messageChannelFactory;

    private List<String> findAll() {
        jdbcTemplate = new JdbcTemplate(dataSource);
        String time = "10/24/60"; // (this means 10 seconds for Oracle DB)
        String query = <<your query here, e.g. select * from test where (last_updated_time > time)>>;
        return jdbcTemplate.query(query, new RowMapper<String>() {
            @Override
            public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                // ...
                // any row mapper operations that you want to do with your result after the poll
                // ...
                // Change the time here for the next poll to the DB.
                return result;
            }
        });
    }

    @Bean
    public IntegrationFlow supplyPollingFlow() {
        IntegrationFlowBuilder flowBuilder = IntegrationFlows
                .from(this::findAll, spec -> spec.poller(Pollers.fixedDelay(5000)));
        flowBuilder.channel(<<Your message channel>>);
        return flowBuilder.get();
    }
}
In our use case, we persisted the last poll time in a Kafka topic in order to keep the application stateless. Every new poll to the DB now has a new time in the where condition.
P.S.: your messaging broker (Kafka/RabbitMQ) should be running locally, or connect to it if it is hosted on a different platform.
God Speed !!!

See Artem's answer for the mechanism for a dynamic query in the standard adapter; an alternative, however, would be to simply wrap a JdbcTemplate in a bean and invoke it with
IntegrationFlows.from(myPojo(), "runQuery", e -> ...)
...
or even a simple lambda
.from(() -> jdbcTemplate...)
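For illustration, a minimal sketch of the lambda variant (the table, column, and the lastPollTime() helper are hypothetical; lastPollTime() stands in for however you read the last poll time from the Kafka topic):

@Bean
public IntegrationFlow pollingFlow(JdbcTemplate jdbcTemplate) {
    return IntegrationFlows
            .from(() -> jdbcTemplate.queryForList(
                            "select * from kafka_test where LAST_UPDATE_TIME > ?",
                            lastPollTime()), // the lambda is re-evaluated on every poll
                    e -> e.poller(Pollers.fixedDelay(3000)))
            .channel("output")
            .get();
}

Because the supplier runs on each poll, the query always sees the current poll time, unlike a query string baked into the bean once at startup.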

We have this test configuration (sorry, it is XML):
<inbound-channel-adapter query="select * from item where status=:status" channel="target"
        data-source="dataSource" select-sql-parameter-source="parameterSource"
        update="delete from item"/>

<beans:bean id="parameterSource" factory-bean="parameterSourceFactory"
        factory-method="createParameterSourceNoCache">
    <beans:constructor-arg value=""/>
</beans:bean>

<beans:bean id="parameterSourceFactory"
        class="org.springframework.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
    <beans:property name="parameterExpressions">
        <beans:map>
            <beans:entry key="status" value="@statusBean.which()"/>
        </beans:map>
    </beans:property>
    <beans:property name="sqlParameterTypes">
        <beans:map>
            <beans:entry key="status" value="#{T(java.sql.Types).INTEGER}"/>
        </beans:map>
    </beans:property>
</beans:bean>

<beans:bean id="statusBean"
        class="org.springframework.integration.jdbc.config.JdbcPollingChannelAdapterParserTests$Status"/>
Pay attention to the ExpressionEvaluatingSqlParameterSourceFactory and its createParameterSourceNoCache() factory method. Its result can be used for the select-sql-parameter-source.
The JdbcPollingChannelAdapter has a setSelectSqlParameterSource for the same purpose.
So, you configure an ExpressionEvaluatingSqlParameterSourceFactory to resolve a query parameter as an expression that invokes a bean method to get the desired value from Kafka. Then createParameterSourceNoCache() will help you obtain the expected SqlParameterSource.
There is some info in docs as well: https://docs.spring.io/spring-integration/docs/current/reference/html/#jdbc-inbound-channel-adapter
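For illustration, a minimal Java-config sketch of the same idea (the kafkaPollTimeBean and its lastPollTime() method are hypothetical stand-ins for however you read the last poll time from Kafka):

@Bean
public ExpressionEvaluatingSqlParameterSourceFactory parameterSourceFactory() {
    ExpressionEvaluatingSqlParameterSourceFactory factory =
            new ExpressionEvaluatingSqlParameterSourceFactory();
    // '@kafkaPollTimeBean.lastPollTime()' is a hypothetical bean method returning
    // the last poll time read from the Kafka topic
    factory.setParameterExpressions(
            Map.of("lastPollTime", "@kafkaPollTimeBean.lastPollTime()"));
    return factory;
}

@Bean
public MessageSource<Object> jdbcMessageSource(DataSource dataSource,
        ExpressionEvaluatingSqlParameterSourceFactory factory) {
    JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(dataSource,
            "select * from kafka_test where LAST_UPDATE_TIME > :lastPollTime");
    // a no-cache parameter source re-evaluates the expression on every poll
    adapter.setSelectSqlParameterSource(factory.createParameterSourceNoCache(""));
    return adapter;
}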

Related

JdbcPollingChannelAdapter and IntegrationFlow: no rollback of the update when an exception occurs in the integration flow

My use case: I have a Spring Boot application with a JdbcPollingChannelAdapter that fetches data from a PostgreSQL database, updates the fetched rows, and moves forward with the message flow (using IntegrationFlowBuilder) to apply some transforms to the ResultSet and publish the results to RabbitMQ.
The JdbcPollingChannelAdapter is configured to fetch data every 60 seconds with a select-for-update query followed by an update query that flags the status from NEW to PUBLISH:
The select query: select * from table where status = 'NEW' order by tms_creation limit 100 for update;
The update query: update table set cod_etat = 'PUBLISH', tms_modification = now() where id in (:id)
Also, there is no max-rows-per-poll limit, which means the JDBC poller will execute the SQL request as many times as there is data (with status NEW) present.
First issue: I stop my RabbitMQ and leave my microservice running; the JdbcPollingChannelAdapter fetches the first ResultSet, passes it through the message flow, and processes the update. The message flow sends the ResultSet through a channel to RabbitMQ (using Spring Cloud Stream). The send fails, yet no rollback occurs, which means the ResultSet has been flagged as published.
I have been looking through the documentation to figure out what I have missed, so any help would be appreciated.
Second issue: I run 3 instances of my application on PCF and need to handle concurrent access to the rows in the table. The transaction and the select-for-update query in the JdbcPollingChannelAdapter are supposed to take row-level locks for the current transaction. But what is happening is that more than one instance can get the same rows, which the lock is supposed to prevent. This leads to multiple instances handling the same data and publishing it multiple times.
My code is as follows:
@EnableConfigurationProperties(ProprietesSourceJdbc.class)
@Component
public class KafkaGuy {

    private static final Logger LOG = LoggerFactory.getLogger(KafkaGuy.class);

    private ProprietesSourceJdbc proprietesSourceJdbc;
    private DataSource sourceDeDonnees;
    private DemandeSource demandeSource;
    private ObjectMapper objectMapper;
    private JdbcTemplate jdbcTemplate;

    public KafkaGuy(ProprietesSourceJdbc proprietesSourceJdbc, DemandeSource demandeSource, DataSource dataSource, JdbcTemplate jdbcTemplate, ObjectMapper objectMapper) {
        this.proprietesSourceJdbc = proprietesSourceJdbc;
        this.demandeSource = demandeSource;
        this.sourceDeDonnees = dataSource;
        this.objectMapper = objectMapper;
        this.jdbcTemplate = jdbcTemplate;
    }

    @Bean
    public MessageSource<Object> jdbcSourceMessage() {
        JdbcPollingChannelAdapter jdbcSource = new JdbcPollingChannelAdapter(this.sourceDeDonnees, this.proprietesSourceJdbc.getQuery());
        jdbcSource.setUpdateSql(this.proprietesSourceJdbc.getUpdate());
        return jdbcSource;
    }

    @Bean
    public IntegrationFlow fluxDeDonnees() {
        IntegrationFlowBuilder flowBuilder = IntegrationFlows.from(jdbcSourceMessage());
        flowBuilder
                .split()
                .log(LoggingHandler.Level.INFO, message ->
                        message.getHeaders().get("sequenceNumber")
                        + " événements publiés sur le bus de message sur "
                        + message.getHeaders().get("sequenceSize")
                        + " événements lus (lot)")
                .transform(Transformers.toJson())
                .enrichHeaders(h -> h.headerExpression("type", "payload.typ_evenement"))
                .publishSubscribeChannel(publishSubscribeSpec -> publishSubscribeSpec
                        .subscribe(flow -> flow
                                .transform(Transformers.toJson())
                                .transform(kafkaGuyTransformer())
                                .channel(this.demandeSource.demandePreinscriptionOuput()))
                );
        return flowBuilder.get();
    }

    @Bean
    public KafkaGuyTransformer kafkaGuyTransformer() {
        return new KafkaGuyTransformer();
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata pollerMetadata = new PollerMetadata();
        PeriodicTrigger trigger = new PeriodicTrigger(this.proprietesSourceJdbc.getTriggerDelay(), TimeUnit.SECONDS);
        pollerMetadata.setTrigger(trigger);
        pollerMetadata.setMaxMessagesPerPoll(proprietesSourceJdbc.getMaxRowsPerPoll());
        return pollerMetadata;
    }

    public class KafkaGuyTransformer implements GenericTransformer<Message, Message> {

        @Override
        public Message transform(Message message) {
            Message<String> msg = null;
            try {
                DemandeRecueDTO dto = objectMapper.readValue(message.getPayload().toString(), DemandeRecueDTO.class);
                msg = MessageBuilder.withPayload(dto.getTxtDonnee())
                        .copyHeaders(message.getHeaders())
                        .build();
            } catch (Exception e) {
                LOG.error(e.getMessage(), e);
            }
            return msg;
        }
    }
}
I am new to Spring Integration, so sorry if this is not well explained. Any help is appreciated.
Everything looks good and should work as you have described. The only problem I see is that there is no transaction configured for the IntegrationFlows.from(jdbcSourceMessage()).
Consider using PollerMetadata.setAdviceChain() with a TransactionInterceptor.
Another way is to use a PollerSpec with its transactional() option.
This way you won't use local database transactions, which are committed immediately after the ResultSet processing returns. With a transaction at the application level, there is no commit until the flow's thread exits.
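As a sketch, the PollerSpec approach applied to the poller from the question (assuming a PlatformTransactionManager bean is available; the delay and max-rows values mirror the question):

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerSpec defaultPoller(PlatformTransactionManager transactionManager) {
    return Pollers.fixedDelay(60_000)
            .maxMessagesPerPoll(100)
            // wraps each poll and the downstream flow in one transaction, so a
            // failed publish rolls back the NEW -> PUBLISH update as well
            .transactional(transactionManager);
}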

Spring Jdbc inbound channel adapter

I'm trying to write a Spring program that polls the DB and selects records to read. I see examples in XML, but I would like to know how to do it in Java config. Can someone show me an example?
You need a JdbcPollingChannelAdapter @Bean definition, marked with @InboundChannelAdapter:
@Bean
@InboundChannelAdapter(value = "fooChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<?> storedProc(DataSource dataSource) {
    return new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM foo where status = 0");
}
http://docs.spring.io/spring-integration/docs/4.3.11.RELEASE/reference/html/overview.html#programming-tips
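For completeness, a hypothetical consumer of the adapter's output (by default the payload of each poll is the list of result rows, one Map per row):

@ServiceActivator(inputChannel = "fooChannel")
public void handleRows(List<Map<String, Object>> rows) {
    // process the rows fetched by this poll
    rows.forEach(System.out::println);
}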

Correct use of Hazelcast Transactional Map in a Spring Boot app

I am working on a proof of concept of the Hazelcast transactional map. To accomplish this I am writing a Spring Boot app and using Atomikos as my JTA/XA implementation.
This app must update a transactional map and also update a database table by inserting a new row, all within the same transaction.
I am using JPA / Spring Data / Hibernate to work with the database.
So the app has a component (a Java class annotated with @Component) with a method called agregar() ("add" in Spanish). This method is annotated with @Transactional (org.springframework.transaction.annotation.Transactional).
The method must perform two tasks as a unit: first, it must update a TransactionalMap retrieved from the Hazelcast instance and, second, it must update a database table using a repository extending JpaRepository (org.springframework.data.jpa.repository.JpaRepository).
This is the code I have written:
@Transactional
public void agregar() throws NotSupportedException, SystemException, IllegalStateException, RollbackException, SecurityException, HeuristicMixedException, HeuristicRollbackException, SQLException {
    logger.info("AGRENADO AL MAPA ...");
    HazelcastXAResource xaResource = hazelcastInstance.getXAResource();
    UserTransactionManager tm = new UserTransactionManager();
    tm.begin();
    Transaction transaction = tm.getTransaction();
    transaction.enlistResource(xaResource);
    TransactionContext context = xaResource.getTransactionContext();
    TransactionalMap<TaskKey, TaskQueue> mapTareasDiferidas = context.getMap("TAREAS-DIFERIDAS");
    TaskKey taskKey = new TaskKey(1L);
    TaskQueue taskQueue = mapTareasDiferidas.get(taskKey);
    Integer numero = 4;
    Task<Integer> taskFactorial = new TaskImplFactorial(numero);
    taskQueue = new TaskQueue();
    taskQueue.getQueue().add(taskFactorial);
    mapTareasDiferidas.put(taskKey, taskQueue);
    transaction.delistResource(xaResource, XAResource.TMSUCCESS);
    tm.commit();
    logger.info("AGRENADO A LA TABLA ...");
    PaisEntity paisEntity = new PaisEntity(100, "ARGENTINA", 10);
    paisRepository.save(paisEntity);
}
This code is working: if one of the tasks throws an exception, then both are rolled back.
My questions are:
Is this code actually correct?
Why is @Transactional not taking care of committing the changes in the map, so that I must explicitly do it on my own?
The complete code of the project is available on GitHub: https://github.com/diegocairone/hazelcast-maps-poc
Thanks in advance
Finally I realized that I must inject the UserTransactionManager object and take the transaction from it.
It is also necessary to use a JTA/XA implementation. I have chosen Atomikos, and XA transactions must be enabled in MS SQL Server.
The working example is available on GitHub at https://github.com/diegocairone/hazelcast-maps-poc on the branch atomikos-datasource-mssql.
Starting with Hazelcast 3.7, you can get rid of the boilerplate code to begin, commit, or roll back transactions by using HazelcastTransactionManager, which is a PlatformTransactionManager implementation to be used with the Spring Transaction API.
You can find an example here.
Also, Hazelcast can participate in an XA transaction with Atomikos. Here's the doc.
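A rough sketch of how the agregar() method from the question could then look (assuming the HazelcastTransactionManager is the transaction manager in effect for @Transactional; the ManagedTransactionalTaskContext bean appears in the config below):

@Service
public class TareaService {

    private final ManagedTransactionalTaskContext transactionalContext;

    public TareaService(ManagedTransactionalTaskContext transactionalContext) {
        this.transactionalContext = transactionalContext;
    }

    @Transactional // HazelcastTransactionManager begins/commits/rolls back for you
    public void agregar(TaskKey key, TaskQueue queue) {
        TransactionalMap<TaskKey, TaskQueue> map =
                transactionalContext.getMap("TAREAS-DIFERIDAS");
        map.put(key, queue); // enlisted in the surrounding Spring transaction
    }
}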
Thank you. I have updated to Hazelcast 3.7.5 and added the following code to the HazelcastConfig class:
@Configuration
public class HazelcastConfig {

    ...

    @Bean
    public HazelcastInstance getHazelcastInstance() {
        ....
    }

    @Bean
    public HazelcastTransactionManager getTransactionManager() {
        HazelcastTransactionManager transactionManager = new HazelcastTransactionManager(getHazelcastInstance());
        return transactionManager;
    }

    @Bean
    public ManagedTransactionalTaskContext getTransactionalContext() {
        ManagedTransactionalTaskContext transactionalContext = new ManagedTransactionalTaskContext(getTransactionManager());
        return transactionalContext;
    }
}
When I run the app I get this exception:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'transactionManager' available: No matching PlatformTransactionManager bean found for qualifier 'transactionManager' - neither qualifier match nor bean name match!
The code is available on GitHub on a new branch: atomikos-datasource-mssql-hz37
Thanks in advance

Connect LDAP from Spring

I have to build a Spring-based web application that allows the user to manage LDAP data. The connection to the LDAP server must be done only with the JNDI framework (no Spring LDAP allowed).
For this, I wrote a utility class to do the basic operations (add, update, delete, list, ...).
Here is a short block of code from this class:
public class LdapUtility {

    private static LdapUtility instance;
    private DirContext dirContext;

    public static LdapUtility getInstance() {
        if (LdapUtility.instance == null)
            LdapUtility.instance = new LdapUtility();
        return LdapUtility.instance;
    }

    /**
     * Connect to the LDAP
     */
    private LdapUtility() {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Manager,dc=my-domain,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        try {
            dirContext = new InitialDirContext(env);
        }
        catch (Exception ex) {
            dirContext = null;
        }
    }

    public void addUser(User u) {
        dirContext.createSubcontext(....); // add user in the LDAP
    }
}
With this code, I can access all my methods by calling LdapUtility.getInstance()..., but the connection to the LDAP will never be released.
Another way would be to connect to the LDAP server before each operation, but in that case there would be too many connections to the LDAP server...
So, here is my question: what is the most elegant/smartest way to access these methods?
Thank you in advance :-)
Since you're already using Spring, I would recommend using Spring LDAP:
Spring LDAP is a Java library for simplifying LDAP operations, based on the pattern of Spring's JdbcTemplate. The framework relieves the user of common chores, such as looking up and closing contexts, looping through results, encoding/decoding values and filters, and more.
Especially if you're not familiar with LDAP and potential performance problems, it can help to start of using a utility library like this that will do all the heavy lifting for you.
You configure the LDAP connection settings in the Spring config:
<bean id="contextSource" class="org.springframework.ldap.core.support.LdapContextSource">
    <property name="url" value="ldap://localhost:389" />
    <property name="base" value="dc=example,dc=com" />
    <property name="userDn" value="cn=Manager" />
    <property name="password" value="secret" />
</bean>

<bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
    <constructor-arg ref="contextSource" />
</bean>
You can then just use the LdapTemplate wherever you need to perform an LDAP action:
return ldapTemplate.search(
        "", "(objectclass=person)",
        new AttributesMapper() {
            public Object mapFromAttributes(Attributes attrs)
                    throws NamingException {
                return attrs.get("cn").get();
            }
        });
Without Spring (it being forbidden), I would quickly implement something similar:
(when being lazy) create a simple callback interface (such as you can find in Spring -- JpaCallback.execute(EntityManager em)) -- but for LDAP -- MyLdapCallback.execute(LdapConnection connection) -- instead of LdapConnection you can imagine anything you require -- objects from OpenLDAP or an SDK Context. Something like (just for presentation):
...
interface LdapCallback<T> {
    T execute(DirContext ctx) throws NamingException, IOException;
}
...
private <T> T execute(LdapCallback<T> callback) throws NamingException, IOException {
    T result = null;
    LdapContext ctx = new InitialLdapContext();
    try {
        result = callback.execute(ctx);
    } finally {
        ctx.close(); // always release the context, even when the callback fails
    }
    return result;
}
...
Once done, you create an anonymous class for each LDAP call and invoke it via execute(callback).
(having more time) implement the above, plus create an AOP aspect that wraps methods marked with an annotation so that they execute within the wrapper above (without doing so explicitly in my code).
There are several ways to connect to LDAP; using javax.naming.* is one of them. In the Javadoc you may find that the classes in your SPI provider manage their own connections, so you don't need to care about it -- that may be an answer to your question -- see the JDK doc on how Context manages connections and networking: http://docs.oracle.com/javase/6/docs/api/javax/naming/ldap/LdapContext.html
If you are accustomed to more JDBC-like access, you may find http://www.openldap.org/jldap/ more to your liking. There you have connections completely under your control and you treat them much the same way as in JDBC. You may use any pooling library you like.
Not knowing the exact requirements, I interpret the core question as being "when to open/close the connection".
My crystal ball tells me you may want to use a connection pool. True, you don't close the connection explicitly as this is handled by the pool, but this may be OK for your assignment. It's fairly easy:
// Enable connection pooling
env.put("com.sun.jndi.ldap.connect.pool", "true");
The complete source code is referenced in Oracle's basic LDAP tutorial.
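For context, a small sketch of how the pooled context would be used in the question's LdapUtility (same environment as in the question; only the pooling property is new):

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389");
env.put("com.sun.jndi.ldap.connect.pool", "true"); // enable the JDK's built-in LDAP pool

DirContext ctx = new InitialDirContext(env);
try {
    // ... perform the add/update/delete/list operation ...
} finally {
    ctx.close(); // hands the underlying connection back to the pool instead of closing it
}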

Spring integration: difficulty with transaction between 2 activators

I have this use case.
First chain:
<int:chain input-channel="inserimentoCanaleActivate" output-channel="inserimentoCanalePreRouting">
    <int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
This is the corresponding code:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public EventMessage<ModificaOperativitaRapporto> activate(EventMessage<InserimentoCanale> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
All is working great.
Then I have another chain:
<int:chain id="onlineCensimentoClienteChain" input-channel="ONLINE_CENSIMENTO_CLIENTE" output-channel="inserimentoCanaleActivate">
    <int:service-activator ref="onlineCensimentoClienteActivator" method="activate" />
    <int:splitter expression="payload.getPayload().getCanali()" />
</int:chain>
And the corresponding activator:
@Override
public EventMessage<CensimentoCliente> activate(EventMessage<CensimentoCliente> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
The CensimentoCliente payload, as described below, has a list of the first chain's payloads, so with a splitter I split on the list and reuse the code of the first chain.
public interface CensimentoCliente extends Serializable {
    Collection<? extends InserimentoCanale> getCanali();
    void setCanali(Collection<? extends InserimentoCanale> canali);
    ...
}
But since every activator gets its own transaction definition (the first one can live without the second one), I have a use case where the transactions are separate.
The goal is to have the DB modifications of the two chains be part of the same transaction.
Any help?
Kind regards
Massimo
You can accomplish this by creating a custom channel (or other custom component, but this is the simplest approach) that wraps the message dispatch in a TransactionTemplate callback execution:
public class TransactionalChannel extends AbstractSubscribableChannel {

    private final MessageDispatcher dispatcher = new UnicastingDispatcher();
    private final TransactionTemplate transactionTemplate;

    TransactionalChannel(TransactionTemplate transactionTemplate) {
        this.transactionTemplate = transactionTemplate;
    }

    @Override
    protected boolean doSend(final Message<?> message, long timeout) {
        return transactionTemplate.execute(new TransactionCallback<Boolean>() {
            @Override
            public Boolean doInTransaction(TransactionStatus status) {
                return getDispatcher().dispatch(message);
            }
        });
    }

    @Override
    protected MessageDispatcher getDispatcher() {
        return dispatcher;
    }
}
In your XML, you can define your channel and transaction template and reference your custom channel just as you would any other channel:
<bean id="transactionalChannel" class="com.stackoverflow.TransactionalChannel">
<constructor-arg>
<bean class="org.springframework.transaction.support.TransactionTemplate">
<property name="transactionManager" ref="transactionManager"/>
<property name="propagationBehavior" value="#{T(org.springframework.transaction.TransactionDefinition).PROPAGATION_REQUIRES_NEW}"/>
</bean>
</constructor-arg>
</bean>
For your example, you could perhaps use a bridge to pass the message through the new channel:
<int:bridge input-channel="inserimentoCanaleActivate" output-channel="transactionalChannel" />

<int:chain input-channel="transactionalChannel" output-channel="inserimentoCanalePreRouting">
    <int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
When you have a <service-activator> and @Transactional on the service method, the transaction is bound only to that method invocation.
If you want a transaction for the entire message flow (or part of it), you should declare the TX advice somewhere upstream.
If your channels are direct, all service invocations will be wrapped in the same transaction.
The simplest way to accomplish what you want: write a simple @Gateway interface with @Transactional and call it at the start of your message flow, as in the sketch below.
To clarify a bit regarding transactions, see: Understanding Transactions in Message flows
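A minimal sketch of that gateway idea in annotation style (the interface and method names are hypothetical; the request channel is the entry point of the combined flow from the question):

@MessagingGateway
public interface CensimentoGateway {

    @Transactional // both activators then run on the calling thread, inside this one transaction
    @Gateway(requestChannel = "ONLINE_CENSIMENTO_CLIENTE")
    void censisci(EventMessage<CensimentoCliente> eventMessage);
}

Note that the REQUIRES_NEW propagation on the first activator would have to be relaxed (for example to REQUIRED) so that it joins this outer transaction instead of suspending it.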
Are these modifying 2 separate relational databases? If so, you are looking at an XA transaction. Now, if you are running this on a non-XA container like Tomcat, all of this must be done in a single thread that is watched by a transaction manager (you will have to piggyback on the transaction manager that actually triggers these events). The trigger can be a JMS message or a poller against some data source. This processing must also be done in a single thread so that Spring can help you run the entire process in a single transaction.
As a final note, do not introduce thread pools / queues between service activators. This can cause the activators to run in separate threads.
