Too many connections are getting created - Spring

I have gone through the HikariCP connection pool and I have to say it is awesome; I have seen good performance. My concern, however, is that it seems to be creating too many connections.
Scenario:
I have a list object containing 10004 records, and when I run the insert, the operation takes 13 seconds to complete.
DB Properties:
final HikariDataSource dataSource = new HikariDataSource();
dataSource.setMaximumPoolSize(100);
dataSource.setDriverClassName("oracle.jdbc.driver.OracleDriver");
dataSource.setJdbcUrl("jdbc:oracle:thin:@g9u1769.houston.hpecorp.net:1525:ODSDBD");
dataSource.setUsername("Solid_batch1");
dataSource.setPassword("solid_batch123");
dataSource.setMaxLifetime(30000);
log:
2016-08-27 11:26:01.779 [] [] [] [Hikari connection adder (pool HikariPool-0)] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-0 - Added connection oracle.jdbc.driver.T4CConnection@221e8d6a
2016-08-27 11:26:04.204 [] [] [] [Hikari connection adder (pool HikariPool-0)] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-0 - Added connection oracle.jdbc.driver.T4CConnection@3c4ecbf
2016-08-27 11:26:06.620 [] [] [] [Hikari connection adder (pool HikariPool-0)] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-0 - Added connection oracle.jdbc.driver.T4CConnection@7592f187
2016-08-27 11:26:09.038 [] [] [] [Hikari connection adder (pool HikariPool-0)] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-0 - Added connection oracle.jdbc.driver.T4CConnection@22f125f
2016-08-27 11:26:11.455 [] [] [] [Hikari connection adder (pool HikariPool-0)] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-0 - Added connection oracle.jdbc.driver.T4CConnection@605f1c17
2016-08-27 11:26:13.869 [] [] [] [Hikari connection adder (pool HikariPool-0)] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-0 - Added connection oracle.jdbc.driver.T4CConnection@42d5b6f
2016-08-27 11:26:13.975 [] [] [] [main] WARN c.h.i.i.d.manager.dao.DaoService - detail query : 13
Can anyone help me reduce the number of connections being created?

You can reduce the maximum pool size so that the number of connections is restricted to that value; the pool will then not create more connections but will instead reuse an existing one once it becomes free. Set it with:
dataSource.setMaximumPoolSize(20);
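For reference, a minimal sketch of what a more conservative configuration could look like (the concrete values, host, SID and credentials below are placeholders, not taken from the question). Note that HikariCP's minimumIdle defaults to maximumPoolSize, so with a pool size of 100 the "connection adder" thread keeps opening connections until the pool holds 100, which is what the log above shows:
import com.zaxxer.hikari.HikariDataSource;

final HikariDataSource dataSource = new HikariDataSource();
dataSource.setDriverClassName("oracle.jdbc.driver.OracleDriver");
dataSource.setJdbcUrl("jdbc:oracle:thin:@host:1525:SID"); // placeholder host/SID
dataSource.setUsername("user");                           // placeholder credentials
dataSource.setPassword("password");
dataSource.setMaximumPoolSize(20);  // hard cap on the number of pooled connections
dataSource.setMinimumIdle(5);       // keep only a few idle connections instead of filling up to the cap
dataSource.setMaxLifetime(1800000); // 30 minutes (the default); 30000 ms retires connections every 30 seconds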

Related

TransientDataAccessResourceException - R2DBC pgdb connection remains in idle in transaction

I have a Spring Boot application using WebFlux and r2dbc-postgresql. I have discovered a strange issue when trying to do some DB operations in a flatMap().
Code example:
@Transactional
public Mono<Void> insertDummyFooBars() {
    return Flux.fromIterable(IntStream.rangeClosed(1, 260).boxed().collect(Collectors.toList()))
            .log()
            .flatMap(i -> this.repository.save(FooBar.builder().foo("test-" + i).build()))
            .log()
            .concatMap(i -> this.repository.findAll())
            .then();
}
It seems that flatMap can process at most 256 elements in a batch (the default value of Queues.SMALL_BUFFER_SIZE is 256). So when I ran the code above (with 260 elements), I got a TransientDataAccessResourceException with the following message:
Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException
There is no "Releasing R2DBC Connection" after this exception. The Postgres connection/session remains in the idle in transaction state, and the app can no longer run properly once the pool's max size is reached and all connections are stuck idle in transaction. I think the connection should be released whether or not an exception happened.
If I use concatMap instead of flatMap, it works as expected: no exception, and the connection is released. It is also fine with flatMap when there are 256 elements or fewer.
Is it possible to force the Postgres connection to close? What should I do if I have a lot of DB operations in flatMap like this? Should I replace all of them with concatMap? Is there a global solution for this?
Versions:
Postgres: 12.6, Spring-boot: 2.7.6
Demo project
LOG:
2022-12-08 16:32:13.092 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onNext(256)
2022-12-08 16:32:13.092 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [INSERT INTO foo_bar (foo) VALUES ($1)]
2022-12-08 16:32:13.114 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 : onNext(FooBar(id=258, foo=test-1))
2022-12-08 16:32:13.143 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [SELECT foo_bar.* FROM foo_bar]
2022-12-08 16:32:13.143 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | request(1)
2022-12-08 16:32:13.143 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onNext(257)
2022-12-08 16:32:13.144 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [INSERT INTO foo_bar (foo) VALUES ($1)]
2022-12-08 16:32:13.149 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onComplete()
2022-12-08 16:32:13.149 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | cancel()
2022-12-08 16:32:13.160 ERROR 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 : onError(org.springframework.dao.TransientDataAccessResourceException: executeMany; SQL [INSERT INTO foo_bar (foo) VALUES ($1)]; Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException: [08006] Cannot exchange messages because the request queue limit is exceeded)
2022-12-08 16:32:13.167 ERROR 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 :
org.springframework.dao.TransientDataAccessResourceException: executeMany; SQL [INSERT INTO foo_bar (foo) VALUES ($1)]; Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException: [08006] Cannot exchange messages because the request queue limit is exceeded
at org.springframework.r2dbc.connection.ConnectionFactoryUtils.convertR2dbcException(ConnectionFactoryUtils.java:215) ~[spring-r2dbc-5.3.24.jar:5.3.24]
at org.springframework.r2dbc.core.DefaultDatabaseClient.lambda$inConnectionMany$8(DefaultDatabaseClient.java:147) ~[spring-r2dbc-5.3.24.jar:5.3.24]
at reactor.core.publisher.Flux.lambda$onErrorMap$29(Flux.java:7105) ~[reactor-core-3.4.25.jar:3.4.25]
at reactor.core.publisher.Flux.lambda$onErrorResume$30(Flux.java:7158) ~[reactor-core-3.4.25.jar:3.4.25]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94) ~[reactor-core-3.4.25.jar:3.4.25]
I have tried to change Queues.SMALL_BUFFER_SIZE, and I have also tried to add a concurrency value to the flatMap. It works when I reduce the value to 255, but I don't think that is a good solution.
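For reference, the "concurrency value" mentioned above refers to flatMap's optional second argument; a rough sketch of that variant (the limit of 200 is an arbitrary illustrative value below the 256 queue size, and the repository/FooBar types are the ones from the question):
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import org.springframework.transaction.annotation.Transactional;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@Transactional
public Mono<Void> insertDummyFooBars() {
    return Flux.fromIterable(IntStream.rangeClosed(1, 260).boxed().collect(Collectors.toList()))
            // the second argument caps how many inner save publishers are subscribed at once,
            // keeping the number of in-flight requests below the driver's queue limit
            .flatMap(i -> this.repository.save(FooBar.builder().foo("test-" + i).build()), 200)
            .then();
}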

Create regex pattern for fluentd

I'm using a Fluentd configuration with the regexp parser to parse logs.
These are my original logs in Fluentd:
2022-09-22 18:15:09,633 [springHikariCP housekeeper ] DEBUG HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level.
2022-09-22 18:15:14,968 [ringHikariCP connection closer] DEBUG PoolBase - springHikariCP - Closing connection com.mysql.cj.jdbc.ConnectionImpl@7f535ea4: (connection has passed maxLifetime)
I want to produce JSON like this for the logs above, using a regexp:
{
  "logtime": "2022-09-22 18:15:09,633",
  "Logger Name": "[springHikariCP housekeeper ]",
  "Log level": "DEBUG",
  "message": "HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level"
}
{
  "logtime": "2022-09-22 18:15:14,968",
  "Logger Name": "[ringHikariCP connection closer]",
  "Log level": "DEBUG",
  "message": "PoolBase - springHikariCP - Closing connection com.mysql.cj.jdbc.ConnectionImpl@7f535ea4: (connection has passed maxLifetime)"
}
Can someone with Ruby experience help me create a Ruby regex for the logs above?
The regex should be straightforward, e.g.
(?'logtime'\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})\s(?'logname'\[[^\]]*\])\s(?'loglevel'\w+)\s(?'message'.*)
Online Test
Then reassemble the parts, either with a substitution (gsub, see the example below) or by building a new string from the captured parts (match):
re = /(?'logtime'\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})\s(?'logname'\[[^\]]*\])\s(?'loglevel'\w+)\s(?'message'.*?$)/m
str = '2022-09-22 18:15:09,633 [springHikariCP housekeeper ] DEBUG HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level.
2022-09-22 18:15:14,968 [ringHikariCP connection closer] DEBUG PoolBase - springHikariCP - Closing connection com.mysql.cj.jdbc.ConnectionImpl#7f535ea4: (connection has passed maxLifetime)'
str.gsub(re) do |match|
puts "{
\"logtime\": \"#{$~[:logtime]}\",
\"Logger Name\": \"#{$~[:logname]}\",
\"Log level\": \"#{$~[:loglevel]}\",
\"message\": \"#{$~[:message]}\"
}"
end

How can I turn off netty client DNS connect retry?

I'm using the Netty HttpClient for Spring WebClient.
I connect to www.naver.com and set the hosts file like this:
127.0.0.1 www.naver.com
125.209.222.142 www.naver.com
When I run httpClient.get(), the result looks like this:
DEBUG r.n.r.PooledConnectionProvider - [id:83c8069f] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG r.n.t.SslProvider - [id:83c8069f] SSL enabled using engine sun.security.ssl.SSLEngineImpl@5412b204 and SNI www.naver.com:443
DEBUG i.n.b.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
DEBUG i.n.b.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
DEBUG i.n.u.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@7ec1e7a5
DEBUG r.n.t.TransportConfig - [id:83c8069f] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = reactor.netty.transport.logging.ReactorNettyLoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
INFO reactor - [id:83c8069f] REGISTERED
DEBUG r.n.t.TransportConnector - [id:83c8069f] Connecting to [www.naver.com/127.0.0.1:443].
INFO reactor - [id:83c8069f] CONNECT: www.naver.com/127.0.0.1:443
INFO reactor - [id:83c8069f] CLOSE
DEBUG r.n.t.TransportConnector - [id:83c8069f] Connect attempt to [www.naver.com/127.0.0.1:443] failed.
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: www.naver.com/127.0.0.1:443
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
DEBUG r.n.r.PooledConnectionProvider - [id:0e3c272c] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG r.n.t.SslProvider - [id:0e3c272c] SSL enabled using engine sun.security.ssl.SSLEngineImpl@47b241b and SNI www.naver.com:443
DEBUG r.n.t.TransportConfig - [id:0e3c272c] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = reactor.netty.transport.logging.ReactorNettyLoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
INFO reactor - [id:0e3c272c] REGISTERED
INFO reactor - [id:83c8069f] UNREGISTERED
DEBUG r.n.t.TransportConnector - [id:0e3c272c] Connecting to [www.naver.com/125.209.222.142:443].
INFO reactor - [id:0e3c272c] CONNECT: www.naver.com/125.209.222.142:443
DEBUG r.n.r.DefaultPooledConnectionProvider - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] Registering pool release on close event for channel
DEBUG r.n.r.PooledConnectionProvider - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
DEBUG i.n.u.Recycler - -Dio.netty.recycler.linkCapacity: 16
DEBUG i.n.u.Recycler - -Dio.netty.recycler.ratio: 8
DEBUG i.n.u.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
INFO reactor - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] ACTIVE
As you can see, it connects twice. I don't want it to connect twice and spend more time on connecting; it should just fail if it cannot make the connection.
Can I avoid this situation?
This is being tracked under the following issue:
https://github.com/reactor/reactor-netty/issues/1822

Why so many connections are used by Spring reactive with Mongo

I got the exception MongoWaitQueueFullException, and it made me look at the number of connections my application is using. I use the default configuration of Spring Boot (2.2.7.RELEASE) with reactive MongoDB (4.2.8). Transactions are used.
Even when running an integration test that creates a bit more than 200 elements and then groups them (200 groups), 10 connections are used. When the same algorithm is executed over a real data set, the exception is thrown because the default limit of the waiting queue (500) is reached. This does not make the application scalable.
My question is: is there a way to design a reactive application that helps to reduce the number of connections?
Below is the code and the output of my test. Basically, it scans all translations of bundle files and then groups them per translation key. One element is persisted per translation key.
return Flux
    .fromIterable(bundleFile.getFiles())
    .map(ScannedBundleFileEntry::getLocale)
    .flatMap(locale ->
        handler
            .scanTranslations(bundleFileEntity.toLocation(), locale, context)
            .index()
            .map(indexedTranslation ->
                createTranslation(
                    workspaceEntity,
                    bundleFileEntity,
                    locale.getId(),
                    indexedTranslation.getT1(), // index
                    indexedTranslation.getT2().getKey(), // bundle key
                    indexedTranslation.getT2().getValue() // translation
                )
            )
            .flatMap(bundleKeyTemporaryRepository::save)
    )
    .thenMany(groupIntoBundleKeys(bundleFileEntity))
    .then(bundleKeyTemporaryRepository.deleteByBundleFile(bundleFileEntity.getId()))
    .then(Mono.just(bundleFileEntity));
The grouping function:
private Flux<BundleKeyEntity> groupIntoBundleKeys(BundleFileEntity bundleFile) {
    return this
        .findBundleKeys(bundleFile)
        .groupBy(BundleKeyGroupKey::new)
        .flatMap(bundleKeyGroup ->
            bundleKeyGroup
                .collectList()
                .map(bundleKeys -> {
                    final BundleKeyGroupKey key = bundleKeyGroup.key();
                    final BundleKeyEntity entity = new BundleKeyEntity(key.getWorkspace(), key.getBundleFile(), key.getKey());
                    bundleKeys.forEach(entity::mergeInto);
                    return entity;
                })
        )
        .flatMap(bundleKeyEntityRepository::save);
}
The test output:
560 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Neither @ContextConfiguration nor @ContextHierarchy found for test class [be.sgerard.i18n.controller.TranslationControllerTest], using SpringBootContextLoader
569 [main] INFO o.s.t.c.s.AbstractContextLoader - Could not detect default resource locations for test class [be.sgerard.i18n.controller.TranslationControllerTest]: no resource found for suffixes {-context.xml, Context.groovy}.
870 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener, org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener, org.springframework.test.context.event.EventPublishingTestExecutionListener, org.springframework.security.test.context.support.WithSecurityContextTestExecutionListener, org.springframework.security.test.context.support.ReactorContextTestExecutionListener]
897 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener#4372b9b6, org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener#232a7d73, org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener#4b41e4dd, org.springframework.test.context.support.DirtiesContextTestExecutionListener#22ffa91a, org.springframework.test.context.transaction.TransactionalTestExecutionListener#74960bfa, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener#42721fe, org.springframework.test.context.event.EventPublishingTestExecutionListener#40844aab, org.springframework.security.test.context.support.WithSecurityContextTestExecutionListener#1f6c9cd8, org.springframework.security.test.context.support.ReactorContextTestExecutionListener#5b619d14, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener#66746f57, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener#447a020, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener#7f36662c, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener#28e8dde3, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener#6d23017e]
1551 [background-preinit] INFO o.h.v.i.x.c.ValidationBootstrapParameters - HV000006: Using org.hibernate.validator.HibernateValidator as validation provider.
1677 [main] INFO b.s.i.c.TranslationControllerTest - Starting TranslationControllerTest on sgerard with PID 538 (started by sgerard in /home/sgerard/sandboxes/github-oauth/server)
1678 [main] INFO b.s.i.c.TranslationControllerTest - The following profiles are active: test
3250 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Bootstrapping Spring Data Reactive MongoDB repositories in DEFAULT mode.
3747 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 493ms. Found 9 Reactive MongoDB repository interfaces.
5143 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.ReactiveMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.ReactiveMethodSecurityConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
5719 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
5996 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d46', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:4337}] to localhost:27017
6010 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d46', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 8]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=12207332, setName='rs0', canonicalAddress=4802c4aff450:27017, hosts=[4802c4aff450:27017], passives=[], arbiters=[], primary='4802c4aff450:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000013, setVersion=1, lastWriteDate=Sun Aug 23 12:46:30 CEST 2020, lastUpdateTimeNanos=384505436362981}
6019 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
6040 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d47', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:4338}] to localhost:27017
6042 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d47', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 8]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1727974, setName='rs0', canonicalAddress=4802c4aff450:27017, hosts=[4802c4aff450:27017], passives=[], arbiters=[], primary='4802c4aff450:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000013, setVersion=1, lastWriteDate=Sun Aug 23 12:46:30 CEST 2020, lastUpdateTimeNanos=384505468960066}
7102 [nioEventLoopGroup-2-2] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:4339}] to localhost:27017
11078 [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 1 endpoint(s) beneath base path ''
11158 [main] INFO o.h.v.i.x.c.ValidationBootstrapParameters - HV000006: Using org.hibernate.validator.HibernateValidator as validation provider.
11720 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:4340}] to localhost:27017
12084 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService 'taskScheduler'
12161 [main] INFO b.s.i.c.TranslationControllerTest - Started TranslationControllerTest in 11.157 seconds (JVM running for 13.532)
20381 [nioEventLoopGroup-2-3] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:4341}] to localhost:27017
20408 [nioEventLoopGroup-2-2] INFO b.s.i.s.w.WorkspaceManagerImpl - Synchronize, there is no workspace for the branch [master], let's create it.
20416 [nioEventLoopGroup-2-3] INFO b.s.i.s.w.WorkspaceManagerImpl - The workspace [master] alias [e3cea374-0d37-4c57-bdbf-8bd14d279c12] has been created.
20421 [nioEventLoopGroup-2-3] INFO b.s.i.s.w.WorkspaceManagerImpl - Initializing workspace [master] alias [e3cea374-0d37-4c57-bdbf-8bd14d279c12].
20525 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/main/resources/i18n] named [exception] with 2 file(s).
20812 [nioEventLoopGroup-2-4] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:6, serverValue:4342}] to localhost:27017
21167 [nioEventLoopGroup-2-8] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:10, serverValue:4345}] to localhost:27017
21167 [nioEventLoopGroup-2-6] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:8, serverValue:4344}] to localhost:27017
21393 [nioEventLoopGroup-2-5] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:7, serverValue:4343}] to localhost:27017
21398 [nioEventLoopGroup-2-7] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:9, serverValue:4346}] to localhost:27017
21442 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/main/resources/i18n] named [validation] with 2 file(s).
21503 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/test/resources/be/sgerard/i18n/service/i18n/file] named [file] with 2 file(s).
21621 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [front/src/main/web/src/assets/i18n] named [i18n] with 2 file(s).
22745 [SpringContextShutdownHook] INFO o.s.s.c.ThreadPoolTaskScheduler - Shutting down ExecutorService 'taskScheduler'
22763 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:4, serverValue:4340}] to localhost:27017 because the pool has been closed.
22766 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:9, serverValue:4346}] to localhost:27017 because the pool has been closed.
22767 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:6, serverValue:4342}] to localhost:27017 because the pool has been closed.
22768 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:8, serverValue:4344}] to localhost:27017 because the pool has been closed.
22768 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:5, serverValue:4341}] to localhost:27017 because the pool has been closed.
22769 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:10, serverValue:4345}] to localhost:27017 because the pool has been closed.
22770 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:7, serverValue:4343}] to localhost:27017 because the pool has been closed.
22776 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:3, serverValue:4339}] to localhost:27017 because the pool has been closed.
Process finished with exit code 0
Spring Reactive is asynchronous. Imagine you have three items in your dataset. It opens a connection to save the first item, but it does not wait for that save to finish before starting the second one; instead, it opens a second connection as soon as possible. You thus end up using every available connection in the pool.
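For illustration (a sketch of one way to act on that observation, not part of the original answer): Reactor's flatMap accepts an explicit concurrency argument, so the pipeline from the question could bound how many saves, and therefore how many connection requests, are in flight at once, e.g. .flatMap(bundleKeyTemporaryRepository::save, 4) instead of the unbounded .flatMap(bundleKeyTemporaryRepository::save). A generic version of that idea:
import java.util.function.Function;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

final class BoundedSaves {

    // Runs at most `concurrency` save operations at a time, which also bounds how many
    // connections are requested from the pool concurrently. The class and method names
    // here are illustrative, not part of the question's code base.
    static <T> Flux<T> saveAll(Flux<T> items, Function<T, Mono<T>> save, int concurrency) {
        return items.flatMap(save, concurrency);
    }
}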

Spring-Batch - MultiFileResourcePartitioner - Caused by: java.net.SocketException: Connection reset

I am using FlatFileItemReader and have extended AbstractResource to return a stream from an Amazon S3 object:
S3Object amazonS3Object = s3client.getObject(new GetObjectRequest(bucket, file));
InputStream stream = amazonS3Object.getObjectContent();
return stream;
In my batch job I have also implemented MultiFileResourcePartitioner, to which I gave the bucket so that it partitions all the files. I am able to read only part of a few files, after which I get a socket reset error. See the relevant pieces of the error below:
.ResourcelessTransactionManager$ResourcelessTransaction@122ba881]
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=9
2015-08-24 23:24:03 DEBUG StepContextRepeatCallback:68 - Preparing chunk execution for StepContext: org.springframework.batch.core.scope.context.StepContext@252ce07a
2015-08-24 23:24:03 DEBUG StepContextRepeatCallback:76 - Chunk execution starting: queue size=0
2015-08-24 23:24:03 DEBUG ResourcelessTransactionManager:367 - Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2015-08-24 23:24:03 DEBUG RepeatTemplate:464 - Starting repeat context.
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=1
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=2
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=3
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=4
2015-08-24 23:24:03 DEBUG DefaultClientConnection:160 - Connection 0.0.0.0:58171<->10.37.135.39:8099 shut down
2015-08-24 23:24:03 DEBUG DefaultClientConnection:176 - Connection 0.0.0.0:58171<->10.37.135.39:8099 closed
Caused by: org.springframework.batch.item.file.NonTransientFlatFileException: Unable to read from resource: [null]
at org.springframework.batch.item.file.FlatFileItemReader.readLine(FlatFileItemReader.java:220)
at org.springframework.batch.item.file.FlatFileItemReader.doRead(FlatFileItemReader.java:173)
at org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader.read(AbstractItemCountingItemStreamItemReader.java:83)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:133)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:121)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy17.read(Unknown Source)
at org.springframework.batch.core.step.item.SimpleChunkProvider.doRead(SimpleChunkProvider.java:91)
at org.springframework.batch.core.step.item.FaultTolerantChunkProvider.read(FaultTolerantChunkProvider.java:87)
... 22 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
My requirement is to process files with millions of records from an S3 bucket, and the application runs on AWS. I have configured the S3 client with retries and more open connections, which didn't help much.
As @Michael Minella mentioned, one option is to use the Spring Cloud AWS project to get the resources:
@Autowired
private ResourcePatternResolver resourcePatternResolver;

public void resolveAndLoad() throws IOException {
    Resource[] allTxtFilesInFolder = this.resourcePatternResolver.getResources("s3://bucket/name/*.txt");
    Resource[] allTxtFilesInBucket = this.resourcePatternResolver.getResources("s3://bucket/**/*.txt");
    Resource[] allTxtFilesGlobally = this.resourcePatternResolver.getResources("s3://**/*.txt");
}
Then pass these resources to your MultiFileResourcePartitioner and check whether the exception goes away.
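For illustration, a rough sketch of wiring the resolved resources into Spring Batch's standard MultiResourcePartitioner (assuming the standard class is an acceptable stand-in for the custom MultiFileResourcePartitioner; the bucket path is the placeholder pattern from the snippet above):
import java.io.IOException;
import org.springframework.batch.core.partition.support.MultiResourcePartitioner;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.ResourcePatternResolver;

public class S3PartitionerConfig {

    // Builds a partitioner over all matching S3 objects resolved by Spring Cloud AWS;
    // each partition's execution context then refers to one of the resources.
    public MultiResourcePartitioner s3Partitioner(ResourcePatternResolver resolver) throws IOException {
        Resource[] resources = resolver.getResources("s3://bucket/name/*.txt");
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        partitioner.setResources(resources);
        return partitioner;
    }
}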
