Spring Boot, multi-tenant, multi-module, @Transactional, parallelStream - spring

I am trying to insert some 50k records into the db. We have used AbstractRoutingDataSource, which resolves the DataSource using TenantContext, a utility class that holds a private static final ThreadLocal CURRENT_TENANT = new ThreadLocal<>();
When I use a parallel stream, or if I try to make the method @Async, I get the error below.
Code:
.parallelStream()
.forEach(row -> {
    TenantContext.setCurrentTenant(centerCd);
    someDao.insert(row);
});
Error:
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.lang.IllegalStateException: Cannot determine target DataSource for lookup key [null]
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:305)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:378)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:474)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:289)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Cannot determine target DataSource for lookup key [null]
at org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource.determineTargetDataSource(AbstractRoutingDataSource.java:207)
at org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource.getConnection(AbstractRoutingDataSource.java:169)
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:262)
... 10 common frames omitted

It works exactly as you described: your TenantContext is a ThreadLocal, so it only exists in the thread that set it, while parallelStream() and @Async hand your work off to other threads (in reality, the call inside the @Async or forEach body runs as a Runnable on a worker thread).
The data source is resolved at the start of that worker thread, because your transaction has to be opened before your Runnable even enters its run method. At that moment you have not yet specified your tenant; the call to TenantContext.setCurrentTenant(centerCd) happens later, inside the run method implementation, so the routing lookup key is still null.
I would suggest applying a structure like this to your code:
class TenantAwareThread extends Thread {
    public TenantAwareThread(Runnable target, TenantData tenantData) {
        super(target);
        TenantContext.setCurrentTenant(tenantData);
    }
}

@Autowired
TaskExecutor executor;

void startTask(TenantData tenantData, RowData row) {
    executor.execute(
        new TenantAwareThread(() -> {
            someDao.insert(row);
        },
        tenantData));
}
You create a new thread type which is aware of the tenant data from the very beginning, and simply wrap your executions in such a thread.
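Note that a plain ThreadLocal set in a constructor is written on the thread that constructs the object, not on the thread that eventually runs the task, so unless TenantContext uses an InheritableThreadLocal you may instead want to set the tenant inside run() itself. A minimal sketch of that variant, assuming TenantContext exposes setCurrentTenant and a clear method (names and the String tenant id are illustrative, not from the original post):

// Sketch only: wraps any Runnable so the tenant is set on the executing
// worker thread and cleared afterwards. The clear() method on TenantContext
// is an assumption based on the question's description of the class.
class TenantAwareRunnable implements Runnable {

    private final Runnable delegate;
    private final String tenantId;

    TenantAwareRunnable(Runnable delegate, String tenantId) {
        this.delegate = delegate;
        this.tenantId = tenantId;
    }

    @Override
    public void run() {
        TenantContext.setCurrentTenant(tenantId);   // runs on the worker thread
        try {
            delegate.run();
        } finally {
            TenantContext.clear();                   // avoid leaking the tenant to pooled threads
        }
    }
}

// Usage (illustrative):
// executor.execute(new TenantAwareRunnable(() -> someDao.insert(row), centerCd));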

Related

Unable to limit the parallelism using ThreadPoolExecutor with Spring Batch

Here's my configuration:
@StepScope
@Bean(name = "mySlaveStep")
public Step mySlaveStep(
        @Qualifier(value = "myReader") ItemReader reader,
        @Qualifier(value = "myWriter") ItemWriter writer,
        StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory.get("MySlaveStep")
            .<SomeObject, SomeObject>chunk(1000)
            .reader(reader)
            .writer(writer)
            .build();
}

@Bean(name = "myStep")
public Step myStep(
        @Qualifier(value = "myPartitioner") Partitioner partitioner, // with @StepScope
        @Qualifier(value = "myExecutor") TaskExecutor executor, // With/without @StepScope
        @Qualifier(value = "myStep") Step step, // With @StepScope
        StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory
            .get("MyStep")
            .partitioner("MyPartition", partitioner)
            .taskExecutor(executor)
            .step(step)
            .build();
}

@StepScope // With or without
@Bean(name = "taskExecutor")
public TaskExecutor taskExecutor() {
    final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setQueueCapacity(Integer.MAX_VALUE);
    executor.setCorePoolSize(2);
    executor.setMaxPoolSize(2);
    return executor;
}
The exception I'm getting is:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'scopedTarget.mySlaveStep': Scope 'step' is not active for the current thread; consider defining a scoped proxy for this bean if you intend to refer to it from a singleton; nested exception is java.lang.IllegalStateException: No context holder available for step scope
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:368) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35) ~[spring-aop-5.2.0.RELEASE.jar:5.2.0.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:192) ~[spring-aop-5.2.0.RELEASE.jar:5.2.0.RELEASE]
at com.sun.proxy.$Proxy144.execute(Unknown Source) ~[na:na]
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:138) ~[spring-batch-core-4.2.0.RELEASE.jar:4.2.0.RELEASE]
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:135) ~[spring-batch-core-4.2.0.RELEASE.jar:4.2.0.RELEASE]
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
Caused by: java.lang.IllegalStateException: No context holder available for step scope
at org.springframework.batch.core.scope.StepScope.getContext(StepScope.java:167) ~[spring-batch-core-4.2.0.RELEASE.jar:4.2.0.RELEASE]
at org.springframework.batch.core.scope.StepScope.get(StepScope.java:99) ~[spring-batch-core-4.2.0.RELEASE.jar:4.2.0.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:356) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
So I assume this is related to this. Removing/adding @StepScope on the TaskExecutor bean does not change the outcome; however, removing the TaskExecutor altogether resolves the issue. I'm only trying to limit the number of parallel partitions being handled, as per here. How do I go about it?
First, I will address the scope of the task executor. The task executor should not be "Spring Batch-scoped" (job-scoped or step-scoped) or even scoped at all (IMO the default singleton scope is the correct scope for such a component in most use cases). Spring Batch does not create or manage threads; it delegates that to task executors in different parts of the framework. Therefore, such a component should not be impacted by any scope of Spring Batch and should not impact the behaviour of a Spring Batch job by any means. If it did, that would be a bug in Spring Batch.
Now let me address the scope of a step. A step in Spring Batch cannot be step-scoped. That does not make sense. Marking the step as step-scoped means "do not create that step bean until the job enclosing it is running" (i.e. at runtime). But at that time, the step has not been configured yet. A batch artefact of a step (reader, writer, listener, tasklet, partitioner, etc.) can be step-scoped, but not the step itself. There is a note about that in the reference documentation here: Late Binding of Job and Step Attributes. Removing the step scope on mySlaveStep should fix your issue (see the sketch below).
While I see valid use cases for step components to be scoped (to use late-binding for instance), I do not see any valid use case to scope the step itself.
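For reference, a minimal sketch of the slave step with the step scope removed; everything else is unchanged from the question, and the reader and writer are assumed to stay step-scoped:

// Sketch only: same bean as in the question, minus @StepScope.
// The step itself is a plain singleton; only its reader/writer remain step-scoped.
@Bean(name = "mySlaveStep")
public Step mySlaveStep(
        @Qualifier(value = "myReader") ItemReader reader,
        @Qualifier(value = "myWriter") ItemWriter writer,
        StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory.get("MySlaveStep")
            .<SomeObject, SomeObject>chunk(1000)
            .reader(reader)
            .writer(writer)
            .build();
}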

Saving a record using JPA in a Spring Boot Scheduler

I'm using a Spring Boot Scheduler to run a query on the DB daily to find some records based on a condition and update the records returned. Fetching the records using JPA works fine, but when I loop through them, update them, and try to save each updated record I get the following error:
Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Error while committing the transaction
Caused by: javax.persistence.RollbackException: Error while committing the transaction
at org.hibernate.internal.ExceptionConverterImpl.convertCommitException(ExceptionConverterImpl.java:81)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:104)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:562)
... 30 more
Caused by: java.lang.NullPointerException
at com.xxx.yyy.config.JpaAuditingConfiguration.auditorProvider$lambda-0(JpaAuditingConfiguration.kt:15)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:208)
at com.sun.proxy.$Proxy168.getCurrentAuditor(Unknown Source)
at java.base/java.util.Optional.map(Optional.java:265)
at org.springframework.data.auditing.AuditingHandler.getAuditor(AuditingHandler.java:109)
at org.springframework.data.auditing.AuditingHandler.markModified(AuditingHandler.java:104)
at org.springframework.data.jpa.domain.support.AuditingEntityListener.touchForUpdate(AuditingEntityListener.java:112)
Here is the scheduler code I have. If I run the same code inside my service and call it using an endpoint everything works fine:
@Component
class Scheduler(
    private val repository: Repository
) {
    @Scheduled(cron = "0 0 2 * * *")
    fun expire() {
        val records = repository.findRecords()
        for (record in records) {
            try {
                // Call some external API using record.id but this part is commented out for now until the saving works
                record.active = false
                repository.save(record)
            } catch (ex: Exception) {
                logger.error("Error expiring record " + record.id)
                logger.error("Exception: ${ex.printStackTrace()}")
                continue
            }
        }
    }
}
The null pointer exception happens in the JpaAuditingConfiguration class I use for storing the created_at and last_modified_at dates. Here is the code I have for that class:
@Configuration
@EnableJpaAuditing(auditorAwareRef = "auditorProvider")
class JpaAuditingConfiguration {
    @Bean
    fun auditorProvider(): AuditorAware<String> {
        return AuditorAware { Optional.of(SecurityContextHolder.getContext().authentication.name) }
    }
}
Your JpaAuditingConfiguration requires the security context to be non-null when you make modifications. When your task runs from the scheduler there is no active request, hence no active session, and therefore the authentication is null.
Usually this is solved by creating a dedicated application user and manually authenticating it in your scheduled task.
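A minimal sketch of that idea, written in Java for brevity: populate the SecurityContext at the start of the scheduled method so the auditor lookup has an authentication to read, and clear it when the task finishes. The principal name and authority used here are placeholders, not something from the original post.

// Sketch only: authenticate a dedicated "scheduler" user before doing JPA work,
// so SecurityContextHolder.getContext().getAuthentication() is not null.
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Component;

@Component
class ExpireScheduler {

    @Scheduled(cron = "0 0 2 * * *")
    public void expire() {
        SecurityContextHolder.getContext().setAuthentication(
                new UsernamePasswordAuthenticationToken(
                        "scheduler",                                            // placeholder principal
                        null,
                        AuthorityUtils.createAuthorityList("ROLE_SYSTEM")));     // placeholder authority
        try {
            // ... load, update and save the records as in the original expire() method ...
        } finally {
            SecurityContextHolder.clearContext(); // do not leak the auth to other tasks on this thread
        }
    }
}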

Why doesn't @Qualifier work

I'm using Spring Boot + JdbcTemplate and I have to use multiple data sources, e.g.
@Configuration
public class MultiDBConfig {

    @Bean(name = "fooDb")
    @ConfigurationProperties(prefix = "foo.datasource")
    public DataSource fooDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "fooJdbcTemplate")
    public JdbcTemplate fooJdbcTemplate(@Qualifier("fooDb") DataSource ds) {
        return new JdbcTemplate(ds);
    }

    @Bean(name = "barDb")
    @ConfigurationProperties(prefix = "bar.datasource")
    public DataSource barDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "barJdbcTemplate")
    public JdbcTemplate barJdbcTemplate(@Qualifier("barDb") DataSource ds) {
        return new JdbcTemplate(ds);
    }
}
When I start my application, it fails with the error info below:
Parameter 0 of method fooJdbcTemplate in com.example.multidatasourcedemo.MultiDBConfig required a single bean, but 3 were found:
- fooDb: defined by method 'fooDataSource' in class path resource [com/example/multidatasourcedemo/MultiDBConfig.class]
- barDb: defined by method 'barDataSource' in class path resource [com/example/multidatasourcedemo/MultiDBConfig.class]
- testDb: defined by method 'testDataSource' in class path resource [com/example/multidatasourcedemo/MultiDBConfig.class]
Action:
Consider marking one of the beans as @Primary, updating the consumer to accept multiple beans, or using @Qualifier to identify the bean that should be consumed
But I have obviously used @Qualifier to identify the bean, e.g.
@Bean(name = "fooJdbcTemplate")
public JdbcTemplate fooJdbcTemplate(@Qualifier("fooDb") DataSource ds)
Why doesn't @Qualifier work here?
So I've done some debugging and found something which might explain what's happening. At this point I'm not sure if it's a bug (could be this one), but I have not been able to find any other documentation to clarify this either.
For reference this is spring-boot 1.5.4.
I started from the log; you can find an excerpt below, specifically the line regarding DataSourceInitializer.init (marked with ==> at the beginning):
org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'javax.sql.DataSource' available: expected single matching bean but found 3: fooDb,barDb,testDb
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1041) ~[spring-beans-4.3.9.RELEASE.jar:4.3.9.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:345) ~[spring-beans-4.3.9.RELEASE.jar:4.3.9.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:340) ~[spring-beans-4.3.9.RELEASE.jar:4.3.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1090) ~[spring-context-4.3.9.RELEASE.jar:4.3.9.RELEASE]
==> at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.init(DataSourceInitializer.java:77) ~[spring-boot-autoconfigure-1.5.4.RELEASE.jar:1.5.4.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_45]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_45]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:366) ~[spring-beans-4.3.9.RELEASE.jar:4.3.9.RELEASE]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:311) ~[spring-beans-4.3.9.RELEASE
...
What happens is that, when initialising the data sources, Spring Boot tries to initialise the DB as well, a feature which is enabled by default according to the docs:
Spring JDBC has a DataSource initializer feature. Spring Boot enables it by default and loads SQL from the standard locations schema.sql and data.sql (in the root of the classpath).
This takes place in the @PostConstruct section of org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer:
@PostConstruct
public void init() {
    if (!this.properties.isInitialize()) {
        logger.debug("Initialization disabled (not running DDL scripts)");
        return;
    }
    if (this.applicationContext.getBeanNamesForType(DataSource.class, false, false).length > 0) {
==>     this.dataSource = this.applicationContext.getBean(DataSource.class);
    }
    if (this.dataSource == null) {
        logger.debug("No DataSource found so not initializing");
        return;
    }
    runSchemaScripts();
}
As you can see, it tries to get the DataSource for the DB initialisation via this.dataSource = this.applicationContext.getBean(DataSource.class); and since there are three DataSource instances and no primary one, it fails, as per the documented behaviour of getBean(Class):
<T> T getBean(Class<T> requiredType) throws BeansException
Return the bean instance that uniquely matches the given object type, if any.
This method goes into ListableBeanFactory by-type lookup territory but may also be translated into a conventional by-name lookup based on the name of the given type. For more extensive retrieval operations across sets of beans, use ListableBeanFactory and/or BeanFactoryUtils.
Parameters:
requiredType - type the bean must match; can be an interface or superclass. null is disallowed.
Returns:
an instance of the single bean matching the required type
Throws:
NoSuchBeanDefinitionException - if no bean of the given type was found
==> NoUniqueBeanDefinitionException - if more than one bean of the given type was found
BeansException - if the bean could not be created
So, bottom line, this happens before Spring even tries to autowire your @Qualifier("fooDb") bean into the method, and I believe you have at least these two choices; in both cases your @Qualifier will be taken into account when your JdbcTemplate is created:
if you need to execute some scripts to initialise your DB, then use @Primary to indicate which DataSource should be used for the task (a sketch follows below)
otherwise, you can disable this implicit feature by adding spring.datasource.initialize=false in your application.properties (see here for a list of common properties that can be configured)
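A minimal sketch of the first option, assuming it is fooDb that should back the schema/data scripts (the choice of bean is illustrative): mark that DataSource bean as @Primary so the by-type getBean(DataSource.class) lookup in DataSourceInitializer can resolve it unambiguously.

// Sketch only: same fooDataSource() bean as above, with @Primary added
// so the implicit DB initializer has an unambiguous DataSource to use.
@Primary
@Bean(name = "fooDb")
@ConfigurationProperties(prefix = "foo.datasource")
public DataSource fooDataSource() {
    return DataSourceBuilder.create().build();
}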
This can be caused by a few different things. In my case, I had the following situation:
Two DataSource beans being configured in two Java classes, but both given specific bean IDs
One place where a DataSource was being injected, correctly annotated with a Qualifier
A SpringBootApplication that was correctly excluding DataSourceAutoConfiguration
However, the bug turned out to be that a second class had also been annotated as a SpringBootApplication and that one was starting up... lost among the logs.
So, if everything else looks correct: check if some other, unexpected, SpringBootApplication is starting up.

Strategy to refresh/update SessionFactory in spring integration

Hi, I am using Spring Integration extensively in my project, and in the current case I am dynamically creating my FTP/SFTP adapters using Spring's dynamic flow registration. To provide session factories, I also create them dynamically, based on persisted configuration, for each unique connection.
This works great, but sometimes there are situations when I need to modify an existing session config dynamically, and in that case I need the session factory to refresh with the new session config. This can happen when credentials change dynamically.
To do this I am looking at two approaches:
Remove the dynamic flow via flowContext.remove(flowId). But this does not kill the flow; I still see the old session factory and flow running.
If there is a way to associate a running adapter with a new SessionFactory dynamically, that would also work, but I have not found a way to accomplish this.
Please help.
UPDATE
My dynamic registration code is below:
CachingSessionFactory<FTPFile> csf = cache.get(feed.getConnectionId());
IntegrationFlow flow = IntegrationFlows
        .from(inboundAdapter(csf).preserveTimestamp(true)//
                .remoteDirectory(feed.getRemoteDirectory())//
                .regexFilter(feed.getRegexFilter())//
                .deleteRemoteFiles(feed.getDeleteRemoteFiles())
                .autoCreateLocalDirectory(feed.getAutoCreateLocalDirectory())
                .localFilenameExpression(feed.getLocalFilenameExpression())//
                .localFilter(localFileFilter)//
                .localDirectory(new File(feed.getLocalDirectory())),
            e -> e.id(inboundAdapter.get(feed.getId())).autoStartup(false)
                .poller(Pollers//
                        .cron(feed.getPollingFreq())//
                        .maxMessagesPerPoll(1)//
                        .advice(retryAdvice)))
        .enrichHeaders(s -> s.header(HEADER.feed.name(), feed))//
        .filter(selector)//
        .handle(fcHandler)//
        .handle(fileValidationHandler)//
        .channel(ftbSubscriber)//
        .get();

this.flowContext.registration(flow).addBean(csf).//
        id(inboundFlow.get(feed.getId())).//
        autoStartup(false).register();
I am trying to remove it via
flowContext.remove(flowId);
After removing it, the poller and adapter still look like they are active:
java.lang.IllegalStateException: failed to create FTPClient
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:275)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:200)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:62)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:134)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:224)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:245)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.integration.handler.advice.AbstractRequestHandlerAdvice.invoke(AbstractRequestHandlerAdvice.java:65)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy188.call(Unknown Source)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:344)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
*POST Gary's comments* I changed the order of the chain to match the one from his example and removed autoStartup from the flowContext chain, and now the polling adapter does look like it is getting removed. It looks like the bug is still there if autoStartup is true, though.
this.flowContext.registration(flow).//
        id(inboundFlow.get(feed.getId()))//
        .addBean(sessionFactory.get(feed.getId()), csf)//
        .register();
*Researching more*
StandardIntegrationFlow.start() does start all the components inside the flow irrespective of their autoStartup status. I guess we also need to check isAutoStartup() for these and only start them if autoStartup is true when starting the IntegrationFlow. The existing code of StandardIntegrationFlow is below. Is there a way to override this, or does this need a PR or fix?
if (!this.running) {
    ListIterator<Object> iterator = this.integrationComponents.listIterator(this.integrationComponents.size());
    this.lifecycles.clear();
    while (iterator.hasPrevious()) {
        Object component = iterator.previous();
        if (component instanceof SmartLifecycle) {
            this.lifecycles.add((SmartLifecycle) component);
            ((SmartLifecycle) component).start();
        }
    }
    this.running = true;
}
remove() should shut everything down. If you are using CachingSessionFactory we need to destroy() it, so it closes the cached sessions.
The flow will automatically destroy() the bean if you add it to the registration (using addBean()).
If you can edit your question to show your dynamic registration code, I can take a look.
EDIT
Everything works fine for me...
@SpringBootApplication
public class So43916317Application implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(So43916317Application.class, args).close();
    }

    @Autowired
    private IntegrationFlowContext context;

    @Override
    public void run(String... args) throws Exception {
        CSF csf = new CSF(sf());
        IntegrationFlow flow = IntegrationFlows.from(Ftp.inboundAdapter(csf)
                    .localDirectory(new File("/tmp/foo"))
                    .remoteDirectory("bar"), e -> e.poller(Pollers.fixedDelay(1_000)))
                .handle(System.out::println)
                .get();
        this.context.registration(flow)
                .id("foo")
                .addBean(csf)
                .register();
        Thread.sleep(10_000);
        System.out.println("removing flow");
        this.context.remove("foo");
        System.out.println("destroying csf");
        csf.destroy();
        Thread.sleep(10_000);
        System.out.println("exiting");
        Assert.state(csf.destroyCalled, "destroy not called");
    }

    @Bean
    public DefaultFtpSessionFactory sf() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("10.0.0.3");
        sf.setUsername("ftptest");
        sf.setPassword("ftptest");
        return sf;
    }

    public static class CSF extends CachingSessionFactory<FTPFile> {

        private boolean destroyCalled;

        public CSF(SessionFactory<FTPFile> sessionFactory) {
            super(sessionFactory);
        }

        @Override
        public void destroy() {
            this.destroyCalled = true;
            super.destroy();
        }
    }
}
log...
16:15:38.898 [task-scheduler-5] DEBUG o.s.i.f.i.FtpInboundFileSynchronizer - 0 files transferred
16:15:38.898 [task-scheduler-5] DEBUG o.s.i.e.SourcePollingChannelAdapter - Received no Message during the poll, returning 'false'
16:15:39.900 [task-scheduler-3] DEBUG o.s.integration.util.SimplePool - Obtained org.springframework.integration.ftp.session.FtpSession#149a806 from pool.
16:15:39.903 [task-scheduler-3] DEBUG o.s.i.f.r.s.CachingSessionFactory - Releasing Session org.springframework.integration.ftp.session.FtpSession#149a806 back to the pool.
16:15:39.903 [task-scheduler-3] DEBUG o.s.integration.util.SimplePool - Releasing org.springframework.integration.ftp.session.FtpSession#149a806 back to the pool
16:15:39.903 [task-scheduler-3] DEBUG o.s.i.f.i.FtpInboundFileSynchronizer - 0 files transferred
16:15:39.903 [task-scheduler-3] DEBUG o.s.i.e.SourcePollingChannelAdapter - Received no Message during the poll, returning 'false'
removing flow
16:15:40.756 [main] INFO o.s.i.e.SourcePollingChannelAdapter - stopped org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean#0
16:15:40.757 [main] INFO o.s.i.channel.DirectChannel - Channel 'application.foo.channel#0' has 0 subscriber(s).
16:15:40.757 [main] INFO o.s.i.endpoint.EventDrivenConsumer - stopped org.springframework.integration.config.ConsumerEndpointFactoryBean#0
16:15:40.757 [main] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Retrieved dependent beans for bean 'foo': [org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer#0, org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean#0, org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean#0.source, foo.channel#0, com.example.So43916317Application$$Lambda$12/962287291#0, org.springframework.integration.config.ConsumerEndpointFactoryBean#0, foocom.example.So43916317Application$CSF#0]
destroying csf
16:15:40.757 [main] DEBUG o.s.integration.util.SimplePool - Removing org.springframework.integration.ftp.session.FtpSession#149a806 from the pool
exiting
16:15:50.761 [main] TRACE o.s.c.a.AnnotationConfigApplicationContext - Publishing event in org.springframework.context.annotation.AnnotationConfigApplicationContext#27c86f2d: org.springframework.boot.context.event.ApplicationReadyEvent[source=org.springframework.boot.SpringApplication#5c18016b]
As you can see, the polling stops after the remove() and the session is closed by the destroy().
EDIT2
If you have auto-startup turned off, you have to start the flow via the registration...
IntegrationFlowRegistration registration = this.context.registration(flow)
        .id("foo")
        .addBean(csf)
        .autoStartup(false)
        .register();
...
registration.start();

Spring & Hibernate SessionFactory performance issue

I am facing a performance issue with the Hibernate SessionFactory.
It is a Spring Boot + Hibernate app with a SessionFactory configured like this:
@Bean
public SessionFactory sessionFactory(HibernateEntityManagerFactory hemf) {
    return hemf.getSessionFactory();
}
I have also tried all the different ways described in this question Spring Boot - Handle to Hibernate SessionFactory
My DAO looks like this
@Autowired
private SessionFactory sessionFactory;

@Transactional
public List<Type> findAll() {
    return sessionFactory.getCurrentSession().createQuery("from Type").list();
}
When the number of concurrent DB requests is bigger than the configured maximumPoolSize (10 in this example), the application becomes unresponsive.
@RequestMapping(value = "/stress-sessionfactory")
public void stressTest(@RequestParam int threadsCount) {
    List<Thread> threads = new ArrayList<>();
    for (int i = 0; i < threadsCount; i++) {
        final int k = i;
        Runnable runnable
                = () -> {
                    List<Type> all = typeDAOHibernate.findAll();
                    LOG.info("{}:sessionfactory:{} ", k, all.size());
                };
        Thread t = new Thread(runnable);
        threads.add(t);
    }
    threads.stream().forEach(t -> t.start());
}
You can find a standalone example in github.
The example is configured with maximumPoolSize=10.
So if you just hit
http://localhost:8080/stress-sessionfactory?threadsCount=11 you will get the error I am talking about.
On the other hand, a Spring Data repository can easily handle thousands of concurrent requests (e.g. http://localhost:8080/stress-jpa?threadsCount=2000).
I have tried the same scenario with different data sources (Hikari, Tomcat), different databases (Oracle, H2) and different Hibernate versions (5.011-Final, v4.3.11-Final), and I always get the same error.
Stacktrace
Exception in thread "Thread-51" Exception in thread "Thread-47" org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:431)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:447)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:277)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:656)
at com.example.dao.TypeDAOHibernate$$EnhancerBySpringCGLIB$$e6373e2e.findAll(<generated>)
at com.example.controller.StressController.lambda$stressTest$0(StressController.java:36)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1692)
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1602)
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:1700)
at org.hibernate.jpa.internal.TransactionImpl.begin(TransactionImpl.java:48)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:189)
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:380)
... 9 more
Caused by: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:48)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:90)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:112)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getConnectionForTransactionManagement(LogicalConnectionManagedImpl.java:230)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.begin(LogicalConnectionManagedImpl.java:237)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.begin(JdbcResourceLocalTransactionCoordinatorImpl.java:214)
at org.hibernate.engine.transaction.internal.TransactionImpl.begin(TransactionImpl.java:52)
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1512)
at org.hibernate.jpa.internal.TransactionImpl.begin(TransactionImpl.java:45)
... 11 more
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30001ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:591)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:194)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:146)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:386)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:87)
... 18 more
I noticed that you are injecting the SessionFactory but not using Spring's transaction management features.
If you want to use it like this, you should close the session you get from the SessionFactory in the DAO layer.
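A minimal sketch of what that suggestion would look like, assuming you switch from getCurrentSession() to managing the session yourself (this bypasses the Spring-managed transaction, so it is only meant to illustrate the point, not replace the @Transactional approach):

// Sketch only: open and close the Hibernate Session explicitly in the DAO,
// instead of relying on getCurrentSession() bound to a Spring transaction.
public List<Type> findAll() {
    Session session = sessionFactory.openSession();
    try {
        return session.createQuery("from Type").list();
    } finally {
        session.close(); // always return the underlying connection to the pool
    }
}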
