Two datasources in a Spring Integration inbound channel adapter - spring

I am using the int-jdbc:inbound-channel-adapter of Spring Integration.
My question is: how can I use two different datasources, i.e. datasource A for querying and datasource B for the update, in a single adapter?

You cannot; the same JdbcTemplate is used for both operations. You can, however, omit the update query and perform the update with an outbound channel adapter instead.

The Spring JDBC module provides AbstractRoutingDataSource, which you can implement based on a ThreadLocal variable.
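As an aside, the lookup-key pattern behind AbstractRoutingDataSource can be sketched in plain Java. The class below is illustrative only; in real code you would extend AbstractRoutingDataSource, override determineCurrentLookupKey(), and register actual DataSource instances as targets:

```java
import java.util.Map;

// Illustrative sketch of the ThreadLocal lookup-key routing pattern used by
// Spring's AbstractRoutingDataSource. All names here are hypothetical.
public class RoutingSketch {

    // Holds the current routing key (e.g. "SELECT" or "UPDATE") per thread.
    private static final ThreadLocal<String> CURRENT_KEY = new ThreadLocal<>();

    // key -> target; in real code the values would be DataSource instances.
    private final Map<String, String> targets;

    public RoutingSketch(Map<String, String> targets) {
        this.targets = targets;
    }

    public static void setKey(String key) {
        CURRENT_KEY.set(key);
    }

    // Mirrors AbstractRoutingDataSource.determineCurrentLookupKey().
    protected String determineCurrentLookupKey() {
        return CURRENT_KEY.get();
    }

    // Mirrors determineTargetDataSource(): picks the target for the current key.
    public String determineTarget() {
        return targets.get(determineCurrentLookupKey());
    }
}
```

With a router like this, switching the thread-local key before the SELECT and before the UPDATE is what routes each operation to its own datasource.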
On the other hand, the JdbcPollingChannelAdapter has code like this:
private Object poll() {
    List<?> payload = doPoll(this.sqlQueryParameterSource);
    ...
    executeUpdateQuery(payload);
    return payload;
}
So you would somehow have to hook in between doPoll() and executeUpdateQuery() and change the key in the ThreadLocal there, to be able to switch to another DataSource in the AbstractRoutingDataSource.
The only hack I see is a custom sqlParameterSourceFactory.createParameterSource() with the ThreadLocal modification in it, just because the code is like this:
private void executeUpdateQuery(Object obj) {
    SqlParameterSource updateParamaterSource = this.sqlParameterSourceFactory.createParameterSource(obj);
    this.jdbcOperations.update(this.updateSql, updateParamaterSource);
}
(Will commit the fix for updateParamaterSource typo soon :-)).
But! As Gary mentioned in his answer, it would be better to have several JDBC adapters for the different DataSources: one for the SELECT and another for the UPDATE. Both of them can work in the same XA transaction (<transactional/> on the <poller/>), and that way we clearly separate the business logic and the levels of responsibility.
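A sketch of that two-adapter arrangement in XML might look like the following; channel names, queries and datasource bean names are illustrative, and a true XA transaction across both DataSources would additionally require a JTA transaction manager:

```xml
<!-- SELECT runs against dataSourceA; the poller opens the transaction -->
<int-jdbc:inbound-channel-adapter channel="itemChannel"
        data-source="dataSourceA"
        query="SELECT * FROM item WHERE status = 0">
    <int:poller fixed-rate="5000">
        <int:transactional transaction-manager="transactionManager"/>
    </int:poller>
</int-jdbc:inbound-channel-adapter>

<!-- UPDATE runs against dataSourceB in the same transaction;
     the parameter binding is simplified here for illustration -->
<int-jdbc:outbound-channel-adapter channel="itemChannel"
        data-source="dataSourceB"
        query="UPDATE item SET status = 1 WHERE id = :payload.id"/>
```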

Related

guava eventbus post after transaction/commit

I am currently playing around with Guava's EventBus in Spring, and while the general functionality is working fine so far, I came across the following problem:
When a user wants to change data on a "Line" entity, this is handled as usual in a backend service. In this service the data is persisted via JPA first, and after that I create a "NotificationEvent" with a reference to the changed entity. Via the EventBus I send the reference of the line to all subscribers.
public void notifyUI(String lineId) {
    EventBus eventBus = getClientEventBus();
    eventBus.post(new LineNotificationEvent(lineId));
}
The EventBus itself is simply created with new EventBus() in the background.
Now, in this case my subscribers are on the frontend side, outside of the @Transactional realm. So when I change my data, post the event, and let the subscribers get all necessary updates from the database, the actual transaction is not committed yet, which makes the subscribers fetch the old data.
The only quick fix I can think of is handling it asynchronously and waiting for a second or two. But is there another way to post the events using Guava AFTER the transaction has been committed?
I don't think Guava is "aware" of Spring at all, and in particular not of its @Transactional machinery.
So you need a creative solution here. One solution I can think of is to move this code to a place where you're sure that the transaction has finished.
One way to achieve that is to use TransactionSynchronizationManager:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // do what you want to do after commit
        // in this case, call the notifyUI method
    }
});
Note that if the transaction fails (rolls back), afterCommit won't be called; in that case you'll probably need the afterCompletion method instead. See the documentation.
Another possible approach is refactoring your application to something like this:
@Service
public class NonTransactionalService {

    @Autowired
    private ExistingService existing;

    public void entryPoint() {
        String lineId = existing.invokeInTransaction(...);
        // now you know for sure that the transaction has been committed
        notifyUI(lineId);
    }
}

@Service
public class ExistingService {

    @Transactional
    public String invokeInTransaction(...) {
        // do your stuff that you've done before
    }
}
One last thing I'd like to mention: Spring itself provides an events mechanism that you might use instead of Guava's.
See this tutorial for an example.
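If you go the Spring-events route, Spring 4.2+ lets you bind a listener to the commit directly with @TransactionalEventListener, which removes the need for manual synchronization registration. A sketch, reusing LineNotificationEvent and notifyUI from the question (the surrounding class names are illustrative):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
public class LineChangeService {

    @Autowired
    private ApplicationEventPublisher publisher;

    @Transactional
    public void changeLine(String lineId) {
        // ... persist the Line entity via JPA as before ...
        publisher.publishEvent(new LineNotificationEvent(lineId));
    }

    // Invoked only after the surrounding transaction has committed,
    // so subscribers reading the database will see the new data.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onLineChanged(LineNotificationEvent event) {
        // forward to the Guava EventBus / UI here, e.g. notifyUI(event.getLineId())
    }
}
```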

Convert document objects to DTO spring reactive

I'm trying to convert a document object that is retrieved by the ReactiveCrudRepository as a Flux<Client> into a Flux<ClientDto>.
Now that I have figured out a way to do this, I'm not sure whether it is blocking:
public Mono<ServerResponse> findAll(final ServerRequest serverRequest) {
    final Flux<ClientDto> map = clientService.findAll()
            .map(client -> modelMapper.map(client, ClientDto.class)) /*.delayElements(Duration.ofSeconds(10))*/;
    return ServerResponse.ok()
            .contentType(MediaType.TEXT_EVENT_STREAM)
            .body(map, ClientDto.class);
}
I've tried enabling the commented-out delayElements call, and the elements are sent one by one, so it looks non-blocking.
I think this is more of a nested question, but at its core I want to know how to figure out whether I am doing something blocking.
Thanks in advance!
You are blocking if you explicitly call a block method, or if you are using a standard JDBC connector to connect to the database instead of a reactive one, such as the reactive MongoDB support provided by Spring Data.
In the snippet you posted there isn't anything blocking, but to be totally sure you should review the code of your clientService class and its nested calls (to a repository, for example).
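A more systematic way to verify this, rather than reviewing code by hand, is BlockHound from the Reactor ecosystem: it instruments the JVM and raises an error whenever a blocking call happens on a thread that is marked non-blocking. A sketch, assuming the io.projectreactor.tools:blockhound dependency is on the classpath:

```java
import reactor.blockhound.BlockHound;
import reactor.core.publisher.Mono;

import java.time.Duration;

public class BlockingDetectionSketch {

    public static void main(String[] args) {
        BlockHound.install(); // instruments the JVM to detect blocking calls

        // Mono.delay continues on Reactor's parallel scheduler, whose threads
        // are marked non-blocking, so the Thread.sleep below triggers a
        // BlockHound error and fails the pipeline.
        Mono.delay(Duration.ofMillis(1))
            .doOnNext(it -> {
                try {
                    Thread.sleep(10); // blocking call -> detected by BlockHound
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            })
            .block(); // blocking here is fine: main is not a reactive thread
    }
}
```

Running your handler under BlockHound in a test gives you a definitive answer instead of an educated guess.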

how can I have a datasource object that is not a bean in spring?

This might sound weird, but I want to know how (or whether) I can create datasource objects at runtime rather than at container start-up.
Here is the problem I am working on:
I have a MySQL database which stores the URL, user name and password of other SQL Servers that I need to connect to for overnight processing. This list of SQL Servers changes all the time, so it cannot be hard-coded in properties files. Furthermore, the number of SQL Servers is about 5000 or more.
The business logic involves reading the MySQL database (currently a datasource bean created at container start-up) and, for each entry in the SQL_SERVER_MAPPING table, connecting to that database and running reports.
I was thinking of doing something along these lines for each of the SQL Server instances:
public DataSource getDataSource(String url, String username, String password, String driverClassName) {
    return DataSourceBuilder
            .create()
            .username(username)
            .password(password)
            .url(url)
            .driverClassName(driverClassName)
            .build();
}

public JdbcTemplate jdbcTemplate(DataSource dataSource) {
    return new JdbcTemplate(dataSource);
}
This builder generates a datasource for a given URL and creates the necessary JdbcTemplate from it, so basically one of each per SQL Server configuration.
My concern is that I will be creating about 5000 datasources and 5000 JdbcTemplates, or perhaps even more. That doesn't sound right to me. What is the right way to handle this?
Is there a way to dispose of datasource objects as soon as I am done with them, or to recycle them?
Should I cache these DataSource objects in my Spring application, so I don't have to create one each time and discard it? But that implies caching 5000 of them (or probably more in the future).
The Spring docs say:
The DataSource should always be configured as a bean in the Spring IoC container. In the first case the bean is given to the service directly; in the second case it is given to the prepared template.
so that makes things harder for me.
Thanks
You can define a bean myBean with scope prototype and use the getBean(String name, Object... args) method of a BeanFactory; args would be the arguments passed to the constructor (in your case, the DB connection properties). The bean would return a JdbcTemplate constructed with a datasource defined from those connection properties. This template can then be used in other classes.
Since the scope of the bean is prototype, the created instances are garbage collected once they are no longer referenced. This helps if you have memory constraints; the really heavy lifting in terms of creating objects happens when obtaining the actual DB connections. Caching would be a good solution in case of heavy re-use of connections.
See an example of this bean and method usage here: spring bean with dynamic constructor value
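If you do decide to cache, a concurrent map keyed by connection URL is often enough. This is an illustrative pure-Java sketch; in real code the cached value would be a DataSource or JdbcTemplate built with DataSourceBuilder, and you would also close evicted connection pools (all names below are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative cache sketch: V stands in for a DataSource or JdbcTemplate.
public class TemplateCache<V> {

    private final Map<String, V> cache = new ConcurrentHashMap<>();
    private final Function<String, V> factory; // url -> datasource/template

    public TemplateCache(Function<String, V> factory) {
        this.factory = factory;
    }

    // Creates the entry at most once per URL, then re-uses it.
    public V get(String url) {
        return cache.computeIfAbsent(url, factory);
    }

    // Call when a server disappears from SQL_SERVER_MAPPING
    // (in real code, also close the evicted pool).
    public V evict(String url) {
        return cache.remove(url);
    }
}
```

The design choice here is lazy creation: with 5000 servers you only ever pay for the ones actually visited during a given run, and eviction keeps the map in sync with the mapping table.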

Best practices for Spring Transactions and generic DAOs & Services

I work on a Java EE application with Spring and JPA (EclipseLink). We developed a user-friendly interface for administrating the database tables. As I now know more about Spring and transactions, I decided to refactor my code to add better transaction management. The question is: how best to deal with generic DAOs, generic services and Spring transactions?
Our current solution was:
A generic BasicDao which deals with all common database actions (find, create, update, delete...)
A DaoFactory which contains a map of BasicDao implementations for all entity types (which only need basic database actions) and which gets the EntityManager injected by Spring to pass it to the DAOs
A generic BasicService which offers the common services (currently directly linked to the DAO methods)
A ServiceFactory which contains a map of BasicService implementations for all entity types, gets the DaoFactory injected and passes it to the services; it has a getService(Class<T>) method to provide the right service to the controllers
Controllers corresponding to the entity types, which delegate their requests to a generic controller; it handles the request parameters using reflection and retrieves the right service from the ServiceFactory's map to call the update/create/remove method
The problem is that when I add the @Transactional annotations on the generic service and my ServiceFactory creates the typed services in its map, these services don't seem to have active transactions running.
1) Is this normal, due to the genericity and the fact that only Spring-managed services can have transactions?
2) What is the best solution to solve my problem:
Create managed typed services that merely implement the generic service, and inject them directly into my ServiceFactory?
Remove the service layer for these basic services? (But maybe I'll get the same problem with transactions on my generic DAO layer...)
Other suggestions?
I read some questions related to these points on the web but couldn't find examples which went as far into genericity as here, so I hope somebody can advise me... Thanks in advance!
For basic gets you don't need a service layer.
A service layer is for dealing with multiple aggregate roots, i.e. complex logic involving multiple different entities.
My implementation of a generic repository looks like this :
public class DomainRepository<T> {

    @Resource(name = "sessionFactory")
    protected SessionFactory sessionFactory;

    private final Class<T> genericType;

    public DomainRepository(Class<T> genericType) {
        this.genericType = genericType;
    }

    @Transactional(readOnly = true)
    public T get(final long id) {
        return (T) sessionFactory.getCurrentSession().get(genericType, id);
    }

    @Transactional(readOnly = true)
    public List<T> getFieldEquals(String fieldName, Object value) {
        final Session session = sessionFactory.getCurrentSession();
        final Criteria crit = session.createCriteria(genericType)
                .add(Restrictions.eq(fieldName, value));
        return crit.list();
    }

    // and so on ...
}
with different types instantiated by Spring:
<bean id="tagRepository" class="com.yourcompany.data.DomainRepository">
    <constructor-arg value="com.yourcompany.domain.Tag"/>
</bean>
and can be referenced like so:
@Resource(name = "tagRepository")
private DomainRepository<Tag> tagRepository;
It can also be extended manually for complex entities.
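For completeness, under Java config the same per-type wiring is a one-line bean method per entity. A sketch mirroring the XML above (the configuration class name is illustrative):

```java
@Configuration
public class RepositoryConfig {

    @Bean
    public DomainRepository<Tag> tagRepository() {
        return new DomainRepository<>(Tag.class);
    }
}
```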

How to use StatelessSession with Spring Data JPA and Hibernate?

I'm using Spring + Spring Data JPA with Hibernate, and I need to perform some large and expensive database operations.
How can I use a StatelessSession to perform these kinds of operations?
One solution is to implement a Spring factory bean that creates the StatelessSession and injects it into your custom repository implementations:
public class MyRepositoryImpl implements MyRepositoryCustom {

    @Autowired
    private StatelessSession statelessSession;

    @Override
    @Transactional
    public void myBatchStatements() {
        Criteria c = statelessSession.createCriteria(User.class);
        ScrollableResults itemCursor = c.scroll();
        while (itemCursor.next()) {
            myUpdate((User) itemCursor.get(0));
        }
        itemCursor.close();
    }
}
Check out the StatelessSessionFactoryBean and the full Gist here; it uses Spring 3.2.2, Spring Data JPA 1.2.0 and Hibernate 4.1.9.
Thanks go to this JIRA issue and the person who attached the StatelessSessionFactoryBean code. I hope this helps somebody; it worked like a charm for me.
To get even better performance you can enable JDBC batch statements on the SessionFactory / EntityManager by setting the hibernate.jdbc.batch_size property in the SessionFactory configuration (e.g. on the LocalEntityManagerFactoryBean).
To get the most out of JDBC batch inserts/updates, write as many entities of the same type in a row as possible; Hibernate detects when you write a different entity type and flushes the batch automatically, even when it has not reached the configured batch size.
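For example, the relevant Hibernate properties might look like this (the batch size of 50 is just an illustration; ordering inserts and updates also helps Hibernate keep statements of the same entity type together):

```properties
hibernate.jdbc.batch_size=50
hibernate.order_inserts=true
hibernate.order_updates=true
```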
Using the StatelessSession behaves basically the same as using something like Spring's JdbcTemplate; the benefit of the StatelessSession is that the mapping and translation to SQL are handled by Hibernate. With my StatelessSessionFactoryBean you can even mix the Session and the StatelessSession in one transaction. But be careful about modifying an entity loaded by the Session and persisting it with the StatelessSession, because that will result in locking problems.
