We have a Spring Boot REST application running with Infinispan 13.0.12 caches, and we see periodic, seemingly random cases where the application becomes unresponsive. A thread dump shows over 200 threads in this state:
"http-nio-8080-exec-379" #11999 daemon prio=5 os_prio=0 tid=0x00007f28900f9800 nid=0x2c68
waiting on condition [0x00007f28485c2000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006c09af3e8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at org.jgroups.util.Credit.decrementIfEnoughCredits(Credit.java:65)
at org.jgroups.protocols.UFC.handleDownMessage(UFC.java:119)
at org.jgroups.protocols.FlowControl.down(FlowControl.java:323)
at org.jgroups.protocols.FlowControl.down(FlowControl.java:317)
at org.jgroups.protocols.FRAG3.down(FRAG3.java:139)
at org.jgroups.stack.ProtocolStack.down(ProtocolStack.java:927)
at org.jgroups.JChannel.down(JChannel.java:645)
at org.jgroups.JChannel.send(JChannel.java:484)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.send(JGroupsTransport.java:1161)
Our Java configuration looks like this:
@Autowired
@Bean
public SpringEmbeddedCacheManagerFactoryBean springEmbeddedCacheManagerFactoryBean(GlobalConfigurationBuilder gcb, ConfigurationBuilder configurationBuilder) {
SpringEmbeddedCacheManagerFactoryBean springEmbeddedCacheManagerFactoryBean = new SpringEmbeddedCacheManagerFactoryBean();
springEmbeddedCacheManagerFactoryBean.addCustomGlobalConfiguration(gcb);
springEmbeddedCacheManagerFactoryBean.addCustomCacheConfiguration(configurationBuilder);
return springEmbeddedCacheManagerFactoryBean;
}
@Autowired
@Bean
public EmbeddedCacheManager defaultCacheManager(SpringEmbeddedCacheManager springEmbeddedCacheManager) throws Exception {
return springEmbeddedCacheManager.getNativeCacheManager();
}
@Bean
public GlobalConfigurationBuilder globalConfigurationBuilder() {
GlobalConfigurationBuilder result = GlobalConfigurationBuilder.defaultClusteredBuilder();
result.transport().addProperty("configurationFile", jgroupsConfigFile);
result.cacheManagerName(IDENTITY_CACHE);
result.defaultCacheName(IDENTITY_CACHE + "-default");
result.serialization()
.marshaller(new JavaSerializationMarshaller())
.allowList()
.addClasses(
LinkedMultiValueMap.class,
String.class
);
result.globalState().enable().persistentLocation(DATA_DIR);
return result;
}
@Bean
public ConfigurationBuilder configurationBuilder() {
ConfigurationBuilder result = new ConfigurationBuilder();
result.clustering().cacheMode(CacheMode.REPL_SYNC);
return result;
}
@Bean
public org.infinispan.configuration.cache.Configuration cacheConfiguration() {
ConfigurationBuilder builder = new ConfigurationBuilder();
return builder
.clustering()
.cacheMode(CacheMode.REPL_SYNC)
.remoteTimeout(replicationTimeoutSeconds, TimeUnit.SECONDS)
.stateTransfer().timeout(stateTransferTimeoutMinutes, TimeUnit.MINUTES)
.persistence()
.addSoftIndexFileStore()
.shared(false)
.fetchPersistentState(true)
.expiration().lifespan(expirationHours, TimeUnit.HOURS)
.build();
}
@Autowired
@Bean
public Cache<String, MultiValueMap<String, String>> identityCache(EmbeddedCacheManager manager, org.infinispan.configuration.cache.Configuration cacheConfiguration) throws IOException {
Cache<String, MultiValueMap<String, String>> result = manager
.administration().withFlags(CacheContainerAdmin.AdminFlag.VOLATILE)
.getOrCreateCache(IDENTITY_CACHE, cacheConfiguration);
result.getAdvancedCache().getStats().setStatisticsEnabled(true);
return result;
}
We run a three-node cluster with the default-jgroups-udp.xml config shown below. Can anyone suggest a likely cause? Perhaps the config is sub-optimal?
TIA
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups
http://www.jgroups.org/schema/jgroups-4.0.xsd">
<UDP mcast_addr="${jgroups.udp.mcast_addr:228.6.7.9}"
mcast_port="${jgroups.udp.mcast_port:46655}"
ucast_send_buf_size="1m"
mcast_send_buf_size="1m"
ucast_recv_buf_size="20m"
mcast_recv_buf_size="25m"
ip_ttl="${jgroups.ip_ttl:2}"
thread_naming_pattern="pl"
enable_diagnostics="false"
bundler_type="no-bundler"
max_bundle_size="8500"
thread_pool.min_threads="${jgroups.thread_pool.min_threads:0}"
thread_pool.max_threads="${jgroups.thread_pool.max_threads:200}"
thread_pool.keep_alive_time="60000"
/>
<PING />
<MERGE3 min_interval="10000"
max_interval="30000"
/>
<FD_SOCK />
<FD_ALL timeout="60000"
interval="15000"
timeout_check_interval="5000"
/>
<VERIFY_SUSPECT timeout="5000"
/>
<pbcast.NAKACK2 xmit_interval="100"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
resend_last_seqno="true"
/>
<UNICAST3 xmit_interval="100"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
conn_expiry_timeout="0"
/>
<pbcast.STABLE stability_delay="500"
desired_avg_gossip="5000"
max_bytes="1M"
/>
<pbcast.GMS print_local_addr="false"
install_view_locally_first="true"
join_timeout="${jgroups.join_timeout:5000}"
/>
<UFC max_credits="2m"
min_threshold="0.40"
/>
<MFC max_credits="2m"
min_threshold="0.40"
/>
<FRAG3 frag_size="8000"/>
</config>
You have a replicated cache. This means that reads are always local, so the stack trace must be on a write (or rebalance).
The block means that the sender is waiting for credits from a receiver that never arrive, so the receiver must be stuck in something. Also, the stack trace is not complete; can you show the entire trace?
To know what's going on, it would be good to see thread dumps of all members. I suggest zipping them up and posting a link to the zip here...
Cheers
Could this be related to https://issues.redhat.com/browse/ISPN-14260 ?
Are you using ACL cache authorisations?
Our temporary fix was to disable cache authorisations, which solved all our locking issues. We are waiting on the patch for 13 and 14 to be finalised.
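For anyone needing that workaround: authorization lives in the global security configuration, so the temporary fix amounts to not enabling it. A rough sketch (builder method names per Infinispan 13; note this removes the security checks entirely until the ISPN-14260 fix is released):
// Sketch of the workaround: leave authorization disabled (the default).
// If your global configuration enables it, comment that part out for now:
GlobalConfigurationBuilder gcb = GlobalConfigurationBuilder.defaultClusteredBuilder();
// gcb.security().authorization().enable();  // <- disable this temporarily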
Related
My goal is to create my own Camel component for Spring Kafka.
I have managed to create it and start consuming. I also want to be able to stop the component and consumption (via JMX, another Camel route, ...) without losing any messages.
To do that, when stopping the Camel component, I need to stop the MessageListenerContainer and eventually the MessageListener registered in it.
My problem is that when the MessageListenerContainer is stopped, the MessageListener is still processing messages.
@Override
protected void doStart() throws Exception {
super.doStart();
if (kafkaMessageListenerContainer != null) {
return;
}
kafkaMessageListenerContainer = kafkaListenerContainerFactory.createContainer(endpoint.getTopicName());
kafkaMessageListenerContainer.setupMessageListener(messageListener());
kafkaMessageListenerContainer.start();
}
@Override
protected void doStop() throws Exception {
LOG.info("STOPPING kafkaMessageListenerContainer");
kafkaMessageListenerContainer.stop();
LOG.info("STOPPED kafkaMessageListenerContainer");
super.doStop();
}
private MessageListener<Object, Object> messageListener() {
return new MessageListener<Object, Object>() {
@Override public void onMessage(ConsumerRecord<Object, Object> data) {
LOG.info("Record received: {}", data.offset());
//...pass a message to Camel processing route
LOG.info("Record processed: {}", data.offset());
}
};
}
This is a snippet from the log:
{"time":"2020-11-27T14:01:57.047Z","message":"Record received: 2051","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"c5efc5db-5981-4477-925a-83ffece49572"}
{"time":"2020-11-27T14:01:57.153Z","message":"STOPPED kafkaMessageListenerContainer","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"Camel (camelContext) thread #2 - ShutdownTask","level":"INFO"}
{"time":"2020-11-27T14:01:57.153Z","message":"Route: testTopic.consumer shutdown complete, was consuming from: my-kafka://events.TestTopic","logger":"org.apache.camel.impl.DefaultShutdownStrategy","thread-id":"Camel (camelContext) thread #2 - ShutdownTask","level":"INFO"}
{"time":"2020-11-27T14:01:57.159Z","message":"Record processed: 2051","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"8c835691-ba8d-43c2-b3e0-90a2f768ed7f"}
{"time":"2020-11-27T14:01:57.165Z","message":"Record received: 2052","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"8c835691-ba8d-43c2-b3e0-90a2f768ed7f"}
{"time":"2020-11-27T14:01:57.275Z","message":"Record processed: 2052","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"f7bcebb4-9e5e-46a1-bc5b-569264914b05"}
...
I would expect that the MessageListener would not consume any more messages after the MessageListenerContainer is gracefully stopped. I must be missing something; any suggestions?
Many thanks!
I found the issue that caused my problem.
For some reason I was overriding the consumerFactory bean, which was not correct.
@Bean
public ConsumerFactory<Object, Object> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
After removing this and using the default one configured in application.yml, the problem was resolved.
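For reference, a rough sketch of the equivalent consumer settings in application.yml (standard Spring Boot spring.kafka.* properties; the group id is a placeholder):
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: my-group            # placeholder group id
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer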
I am a newbie in Spring Batch and I have a couple of questions.
Question 1: I am using a MultiResourceItemReader to read a bunch of CSV files and a JDBC item writer to update the DB in batches. The commit interval is set to 1000. If there is a file with 10k records and I encounter a DB error at the 7th batch, is there any way I can roll back all the previously committed chunks?
Question 2: If there are two files, each having 100 records, and the commit interval is set to 1000, then the MultiResourceItemReader reads both files and sends them to the writer. Is there any way we can write just one file at a time, ignoring the commit interval, essentially creating a loop in the writer alone?
Posting the solution that worked for me in case someone needs it for reference.
For Question 1, I was able to achieve it using StepListenerSupport in the writer and overriding beforeStep and afterStep. Sample snippet below:
public class JDBCWriter extends StepListenerSupport implements ItemWriter<MyDomain> {

    private boolean errorFlag;
    private Connection connection;
    private String sql = "{ CALL STORED_PROC(?, ?, ?, ?, ?) }";

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        try {
            connection = jdbcTemplate.getDataSource().getConnection();
            connection.setAutoCommit(false);
        }
        catch (SQLException ex) {
            setErrorFlag(Boolean.TRUE);
        }
    }

    @Override
    public void write(List<? extends MyDomain> items) throws Exception {
        if (!items.isEmpty()) {
            CallableStatement callableStatement = connection.prepareCall(sql);
            callableStatement.setString(1, "FirstName");
            callableStatement.setString(2, "LastName");
            callableStatement.setString(3, "Date of Birth");
            callableStatement.setInt(4, 1990); // placeholder; setInt needs an int, not the string "Year"
            callableStatement.registerOutParameter(5, Types.INTEGER); // 'errors' out parameter
            callableStatement.execute();
            int errors = callableStatement.getInt(5);
            if (errors != 0) {
                this.setErrorFlag(Boolean.TRUE);
            }
        }
        else {
            this.setErrorFlag(Boolean.TRUE);
        }
    }

    @Override
    public void afterChunk(ChunkContext context) {
        if (errorFlag) {
            context.getStepContext().getStepExecution().setExitStatus(ExitStatus.FAILED); // Fail the step
            context.getStepContext().getStepExecution().setStatus(BatchStatus.FAILED);    // Fail the batch
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        try {
            if (!errorFlag) {
                connection.commit();
            }
            else {
                connection.rollback();
                stepExecution.setExitStatus(ExitStatus.FAILED);
            }
        }
        catch (SQLException ex) {
            LOG.error("Commit failed!", ex);
        }
        return stepExecution.getExitStatus();
    }

    public void setErrorFlag(boolean errorFlag) {
        this.errorFlag = errorFlag;
    }
}
XML Config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
....
http://www.springframework.org/schema/batch/spring-batch-3.0.xsd">
<job id="fileLoadJob" xmlns="http://www.springframework.org/schema/batch">
<step id="batchFileUpload" >
<tasklet>
<chunk reader="fileReader"
commit-interval="1000"
writer="JDBCWriter"
/>
</tasklet>
</step>
</job>
<bean id="fileReader" class="...com.FileReader" />
<bean id="JDBCWriter" class="...com.JDBCWriter" />
</beans>
Question 1: The only way to accomplish this is via some form of compensating logic. You can do that via a listener (ChunkListener#afterChunkError, for example; see the sketch below), but the implementation is up to you. There is nothing within Spring Batch that knows what the overall state of the output is or how to roll it back beyond the current transaction.
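For example, a minimal sketch of such a compensating hook (the actual undo logic is up to you and only hinted at here; the job-execution-id tagging scheme is hypothetical):
import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.scope.context.ChunkContext;

public class CompensatingChunkListener implements ChunkListener {

    @Override
    public void beforeChunk(ChunkContext context) {
    }

    @Override
    public void afterChunk(ChunkContext context) {
    }

    @Override
    public void afterChunkError(ChunkContext context) {
        // Called after the failing chunk's transaction has rolled back.
        // Compensating logic for previously committed chunks goes here, e.g.
        // delete rows you tagged with this job execution's id while writing:
        long jobExecutionId = context.getStepContext().getStepExecution().getJobExecutionId();
        // jdbcTemplate.update("DELETE FROM my_table WHERE job_execution_id = ?", jobExecutionId);
    }
}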
Question 2: Assuming you're looking for one output file per input file: because most Resource implementations are non-transactional, the writers associated with them do special work to buffer up to the commit point and then flush. Because of that, there is no real opportunity to divide that buffer across multiple resources. To be clear, it can be done; you'll just need a custom ItemWriter to do it, along the lines of the sketch below.
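A rough sketch of that custom-writer idea, assuming each item can report which input file it came from (getSourceFileName() is a hypothetical method you would populate in the reader or processor). It lazily opens one delegate FlatFileItemWriter per source file and is not restart-safe as written:
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.core.io.FileSystemResource;

public class PerSourceFileWriter implements ItemWriter<Report>, ItemStream {

    private final Map<String, FlatFileItemWriter<Report>> delegates = new HashMap<>();

    @Override
    public void write(List<? extends Report> items) throws Exception {
        for (Report report : items) {
            delegateFor(report.getSourceFileName()).write(Collections.singletonList(report));
        }
    }

    // Lazily create and open one delegate writer per source file.
    private FlatFileItemWriter<Report> delegateFor(String sourceFile) {
        return delegates.computeIfAbsent(sourceFile, s -> {
            FlatFileItemWriter<Report> writer = new FlatFileItemWriter<>();
            writer.setResource(new FileSystemResource("output/" + s)); // hypothetical output location
            writer.setLineAggregator(new PassThroughLineAggregator<>());
            writer.open(new ExecutionContext()); // fresh context: no restart support
            return writer;
        });
    }

    @Override
    public void open(ExecutionContext executionContext) {
    }

    @Override
    public void update(ExecutionContext executionContext) {
    }

    @Override
    public void close() {
        delegates.values().forEach(FlatFileItemWriter::close);
    }
}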
In the manual for ExpressionEvaluatingRequestHandlerAdvice, it clearly says, A typical use case for this advice might be with an <ftp:outbound-channel-adapter/>, perhaps to move the file to one directory if the transfer was successful, or to another directory if it fails.
But I cannot figure out an expression to move the payload from the current directory to another one.
This example just deletes or renames the file:
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.delete()" />
<property name="successChannel" ref="afterSuccessDeleteChannel" />
<property name="onFailureExpression" value="payload.renameTo(new java.io.File(payload.absolutePath + '.failed.to.send'))" />
<property name="failureChannel" ref="afterFailRenameChannel" />
</bean>
How to achieve this?
Edit
As per Gary's suggestion, this is the new attempt:
I managed to change the expression to "T(java.nio.file.Files).move(payload.path, new java.io.File(new java.io.File('sent'), payload.name).path, T(java.nio.file.StandardCopyOption).REPLACE_EXISTING)",
but I still get the error: Method move(java.lang.String,java.lang.String,java.nio.file.StandardCopyOption) cannot be found on java.nio.file.Files type
The code is,
@Bean
@ServiceActivator(inputChannel = "toSftpChannel", adviceChain = "expressionAdvice")
public MessageHandler uploadHandler() {
SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
handler.setRemoteDirectoryExpression(new LiteralExpression(outRemoteDirectory));
handler.setFileNameGenerator(new FileNameGenerator() {
@Override
public String generateFileName(Message<?> message) {
if (message.getPayload() instanceof File) {
return ((File) message.getPayload()).getName();
} else {
throw new IllegalArgumentException("File expected as payload.");
}
}
});
return handler;
}
@MessagingGateway()
public interface UploadGateway {
@Gateway(requestChannel = "toSftpChannel")
void upload(File file);
}
@Bean
public String onUploadSuccessExpression() {
return "T(java.nio.file.Files).move(payload.path, new java.io.File(new java.io.File('sent'), payload.name).path, T(java.nio.file.StandardCopyOption).REPLACE_EXISTING)";
}
@Bean
public String onUploadFailedExpression() {
return "payload";
}
@Bean
public Advice expressionAdvice() {
ExpressionEvaluatingRequestHandlerAdvice expressionEvaluatingRequestHandlerAdvice = new ExpressionEvaluatingRequestHandlerAdvice();
expressionEvaluatingRequestHandlerAdvice.setOnSuccessExpressionString(onUploadSuccessExpression());
expressionEvaluatingRequestHandlerAdvice.setSuccessChannelName("uploadSuccessChannel");
expressionEvaluatingRequestHandlerAdvice.setOnFailureExpressionString(onUploadFailedExpression());
expressionEvaluatingRequestHandlerAdvice.setFailureChannelName("uploadFailedChannel");
expressionEvaluatingRequestHandlerAdvice.setTrapException(true);
expressionEvaluatingRequestHandlerAdvice.setPropagateEvaluationFailures(true);
return expressionEvaluatingRequestHandlerAdvice;
}
The upload method from UploadGateway is called.
The stack trace is,
"main#1" prio=5 tid=0x1 nid=NA runnable
java.lang.Thread.State: RUNNABLE
at org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice.evaluateSuccessExpression(ExpressionEvaluatingRequestHandlerAdvice.java:241)
at org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice.doInvoke(ExpressionEvaluatingRequestHandlerAdvice.java:214)
at org.springframework.integration.handler.advice.AbstractRequestHandlerAdvice.invoke(AbstractRequestHandlerAdvice.java:70)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy81.handleRequestMessage(Unknown Source:-1)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.doInvokeAdvisedRequestHandler(AbstractReplyProducingMessageHandler.java:127)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:112)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:89)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.convertAndSend(AbstractMessageSendingTemplate.java:143)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.convertAndSend(AbstractMessageSendingTemplate.java:135)
at org.springframework.integration.gateway.MessagingGatewaySupport.send(MessagingGatewaySupport.java:392)
at org.springframework.integration.gateway.GatewayProxyFactoryBean.invokeGatewayMethod(GatewayProxyFactoryBean.java:481)
at org.springframework.integration.gateway.GatewayProxyFactoryBean.doInvoke(GatewayProxyFactoryBean.java:433)
at org.springframework.integration.gateway.GatewayProxyFactoryBean.invoke(GatewayProxyFactoryBean.java:424)
at org.springframework.integration.gateway.GatewayCompletableFutureProxyFactoryBean.invoke(GatewayCompletableFutureProxyFactoryBean.java:65)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy87.upload(Unknown Source:-1)
If the new directory is on the same disk as the old one, in the 'onSuccessExpression', simply use payload.renameTo(...) similar to the way the sample does in the onFailureExpression.
`payload.renameTo(new java.io.File(new java.io.File('newDir'), payload.name))`
creates a file with the payload's name in the directory newDir (which must exist). Note that SpEL needs the fully qualified java.io.File in both places.
If you are on JDK 7 or above, use...
T(java.nio.file.Files).move(payload.toPath(), new java.io.File(new java.io.File('newDir'), payload.name).toPath())
...instead. Files.move() expects Path arguments, which is why the payload.path (String) version fails with the "Method move(java.lang.String,...) cannot be found" error shown in the edit above.
This will handle the situation of the new directory being on a different disk (which a simple File.renameTo() will not).
If you are still on JDK 6 and the new directory might be on a different disk, you will need to use onSuccessExpression=payload and subscribe a service activator to the successChannel to manipulate the file itself, perhaps using Spring's FileCopyUtils.
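Applied to the Java config in the question, the success-expression bean would then look something like this (a sketch; the 'sent' directory must already exist):
@Bean
public String onUploadSuccessExpression() {
    // Files.move(Path, Path, CopyOption...) - hence toPath() rather than path
    return "T(java.nio.file.Files).move(payload.toPath(), "
            + "new java.io.File(new java.io.File('sent'), payload.name).toPath(), "
            + "T(java.nio.file.StandardCopyOption).REPLACE_EXISTING)";
}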
I'm having a problem with slowness when many requests come to my website: it starts to generate "Wait" threads. I've set up the RestTemplate as a bean:
@Bean
public RestTemplate restTemplate(RestTemplateBuilder restTemplateBuilder) {
return restTemplateBuilder
.setConnectTimeout(Integer.parseInt(env.getProperty("service.configuration.http.http-request-timeout")))
.setReadTimeout(Integer.parseInt(env.getProperty("service.configuration.http.http-request-timeout")))
.requestFactory(clientHttpRequestFactory())
.build();
}
When I look for the process generating that problem, I find HttpClient in a wait state.
Does anybody know what I can do to solve this problem?
I'm using Java 8, Apache Tomcat, and Spring Boot.
In a past project I used this kind of configuration:
@Bean
public RestTemplate restTemplate()
{
HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory();
factory.setHttpClient(httpClient());
RestTemplate result = new RestTemplate(factory);
return result;
}
@Bean
public HttpClient httpClient()
{
CloseableHttpClient httpClient = null;
//Use a connection pool
PoolingHttpClientConnectionManager pcm = new PoolingHttpClientConnectionManager();
HttpClientBuilder hcb = HttpClientBuilder.create();
//Close Idle connection after 5 seconds
pcm.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
//Specify all the timeouts in milli-seconds
RequestConfig config = RequestConfig.custom().setConnectionRequestTimeout(5000).setSocketTimeout(5000).setConnectTimeout(5000).build();
hcb.setDefaultRequestConfig(config);
hcb.setConnectionManager(pcm).setConnectionManagerShared(true);
// Check if proxy is required to connect to the final resource
if (proxyEnable)
{
//If enabled.... configure it
BasicCredentialsProvider credentialProvider = new BasicCredentialsProvider();
AuthScope scope = new AuthScope(hostProxy, portProxy);
if( StringUtils.hasText(usernameProxy) && StringUtils.hasText(passwordProxy) )
{
UsernamePasswordCredentials credentials = new UsernamePasswordCredentials(usernameProxy, passwordProxy);
credentialProvider.setCredentials(scope, credentials);
}
hcb.setDefaultCredentialsProvider(credentialProvider).setRoutePlanner(proxyRoutePlanner);
}
//Use custom keepalive strategy
if (cas != null)
{
hcb.setKeepAliveStrategy(cas);
}
httpClient = hcb.build();
return httpClient;
}
Where cas is an instance of:
public class WsKeepAliveStrategy implements ConnectionKeepAliveStrategy
{
private Long timeout;
@Override
public long getKeepAliveDuration(HttpResponse response, HttpContext context)
{
return timeout;
}
public void setTimeout(Long timeout)
{
this.timeout = timeout;
}
}
In this way I could configure the HttpClient to use a connection pool, specify when to close idle connections, and set the socket timeout, connection timeout, and connection request timeout.
By using this configuration I had no more issues.
I hope it can be useful
Angelo
It must be a case of a missing timeout. You should try to pin down the exact problem happening in your case and change the setting causing it. Changing the RequestFactory to another library may or may not solve it, as it all depends on the problem, so my advice is to identify it first.
For example:
We faced a similar issue where our threads were getting stuck in RestTemplate, so we took a thread dump, which looked like:
"pool-12-thread-1" #41 prio=5 os_prio=0 tid=0x00007f17a624e000 nid=0x3d runnable [0x00007f1738f96000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
- locked <0x00000000ebc7d888> (a java.lang.Object)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
- locked <0x00000000ebc7d8a0> (a sun.security.ssl.AppInputStream)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
- locked <0x00000000e94b9608> (a java.io.BufferedInputStream)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
- locked <0x00000000d57e5a30> (a sun.net.www.protocol.https.DelegateHttpsURLConnection)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
- locked <0x00000000d57e5a30> (a sun.net.www.protocol.https.DelegateHttpsURLConnection)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:84)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:652)
It clearly shows that the cause is a missing read timeout, so we added:
final SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setReadTimeout(10_000); // 10 sec as needed by us
final RestTemplate restTemplate = new RestTemplate(requestFactory);
Similarly, once you have identified the cause, add the proper timeout.
HERE explains why it happens.
This configuration is in line with another Baeldung article about RestTemplateBuilder. It seems nice and clean, but it hides a default PoolingHttpClientConnectionManager with defaultMaxPerRoute set to 5.
What does this default max per route mean? It means that only 5 simultaneous HTTP connections to the same host are possible.
So you can configure RestTemplate to use a pooled implementation such as HttpComponentsClientHttpRequestFactory with an overridden defaultMaxPerRoute:
PoolingHttpClientConnectionManager poolingConnManager = new PoolingHttpClientConnectionManager();
poolingConnManager.setMaxTotal(50);
poolingConnManager.setDefaultMaxPerRoute(50);
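A minimal sketch of wiring that pooled connection manager into the RestTemplate (Apache HttpClient 4.x APIs):
@Bean
public RestTemplate pooledRestTemplate() {
    // Pool allowing up to 50 connections total and 50 per route (host)
    PoolingHttpClientConnectionManager poolingConnManager = new PoolingHttpClientConnectionManager();
    poolingConnManager.setMaxTotal(50);
    poolingConnManager.setDefaultMaxPerRoute(50);
    CloseableHttpClient httpClient = HttpClients.custom()
            .setConnectionManager(poolingConnManager)
            .build();
    return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
}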
In my Spring Batch application I have written a CustomItemWriter that internally writes items to DynamoDB using DynamoDBAsyncClient; this client returns Future objects. I have an input file with millions of records. Since the CustomItemWriter returns Future objects immediately, my batch job exits within 5 seconds with status COMPLETED, but it actually takes 3-4 minutes to write all the items to the DB. I want the batch job to finish only after all items have been written to the database. How can I do that?
The job is defined as below:
<bean id="report" class="com.solution.model.Report" scope="prototype" />
<batch:job id="job" restartable="true">
<batch:step id="step1">
<batch:tasklet>
<batch:chunk reader="cvsFileItemReader" processor="filterReportProcessor" writer="customItemWriter"
commit-interval="20">
</batch:chunk>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="customItemWriter" class="com.solution.writer.CustomeWriter"></bean>
The CustomeWriter class is defined as below:
public class CustomeWriter implements ItemWriter<Report> {

    public void write(List<? extends Report> items) throws Exception {
        List<Future<PutItemResult>> list = new LinkedList<>();
        AmazonDynamoDBAsyncClient client = new AmazonDynamoDBAsyncClient();
        for (Report report : items) {
            PutItemRequest req = new PutItemRequest();
            req.setTableName("MyTable");
            req.setReturnValues(ReturnValue.ALL_OLD);
            req.addItemEntry("customerId", new AttributeValue(report.getCustomeId()));
            Future<PutItemResult> res = client.putItemAsync(req);
            list.add(res);
        }
    }
}
The main class contains:
JobExecution execution = jobLauncher.run(job, new JobParameters());
System.out.println("Exit Status : " + execution.getStatus());
Since the ItemWriter returns Future objects, it doesn't wait for the write operation to complete. And since all items have been submitted for writing, the batch status shows COMPLETED and the job terminates.
I want this job to terminate only after the writes have actually been performed in DynamoDB.
Is there some other step we can use to wait on this, or is a listener available?
Here is one approach. Since ItemWriter::write doesn't return anything, you can make use of the listener feature.
@Component
@JobScope
public class YourWriteListener implements ItemWriteListener<WhatEverYourTypeIs> {

    @Value("#{jobExecution.executionContext}")
    private ExecutionContext executionContext;

    @Override
    public void afterWrite(final List<? extends WhatEverYourTypeIs> paramList) {
        Future<?> future = (Future<?>) this.executionContext.get("FutureKey");
        // wait till the write is done using the future object, e.g. future.get()
    }

    @Override
    public void beforeWrite(final List<? extends WhatEverYourTypeIs> paramList) {
    }

    @Override
    public void onWriteError(final Exception paramException, final List<? extends WhatEverYourTypeIs> paramList) {
    }
}
In your writer class, everything remains the same except adding the future object to the ExecutionContext.
public class YourItemWriter implements ItemWriter<WhatEverYourTypeIs> {

    @Value("#{jobExecution.executionContext}")
    private ExecutionContext executionContext;

    @Override
    public void write(final List<? extends WhatEverYourTypeIs> youritems) throws Exception {
        // write to DynamoDB and get the Future object
        executionContext.put("FutureKey", future);
    }
}
And you can register the listener in your configuration. Here is the Java code; you would do the same in your XML:
@Bean
public Step initStep() {
    return this.stepBuilders.get("someStepName").<YourTypeX, YourTypeY>chunk(10)
            .reader(yourReader).processor(yourProcessor)
            .writer(yourWriter).listener(yourWriteListener)
            .build();
}