I am using RestHighLevelClient version 7.2 to connect to an Elasticsearch cluster version 7.2. My cluster has 3 master nodes and 2 data nodes. Data node config: 2 cores and 8 GB of memory. I have used the below code in my Spring Boot project to create the RestHighLevelClient instance.
@Bean(destroyMethod = "close")
@Qualifier("readClient")
public RestHighLevelClient readClient() {
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials(elasticUser, elasticPass));
    RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
            .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                    .setDefaultCredentialsProvider(credentialsProvider)
                    .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
    builder.setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
            .setConnectTimeout(30000)
            .setSocketTimeout(60000));
    return new RestHighLevelClient(builder);
}
RestHighLevelClient is a singleton bean. Intermittently I am getting a SocketTimeoutException on both GET and PUT requests. The index size is around 50 MB. I have tried increasing the socket timeout value, but I still receive the same error. Am I missing some configuration? Any help would be appreciated.
I found the issue and just wanted to share it so that it can help others.
I was using a load balancer to connect to the Elasticsearch cluster.
As you can see from my RestClientBuilder code, I was using only the load balancer host and port. Although I have multiple master nodes, the RestClient was not retrying my request in case of a connection timeout.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                .setDefaultCredentialsProvider(credentialsProvider)
                .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
According to the RestClient code, if we use a single host it won't retry in case of any connection issue.
So I changed my code as below and it started working.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, 9200), new HttpHost(elasticHost, 9201))
        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider));
For the complete RestClient code, please refer to https://github.com/elastic/elasticsearch/blob/master/client/rest/src/main/java/org/elasticsearch/client/RestClient.java
Retry code block in RestClient:
private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple,
                                final InternalRequest request,
                                Exception previousException) throws IOException {
    RequestContext context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
    HttpResponse httpResponse;
    try {
        httpResponse = client.execute(context.requestProducer, context.asyncResponseConsumer, context.context, null).get();
    } catch (Exception e) {
        RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, e);
        onFailure(context.node);
        Exception cause = extractAndWrapCause(e);
        addSuppressedException(previousException, cause);
        if (nodeTuple.nodes.hasNext()) {
            return performRequest(nodeTuple, request, cause);
        }
        if (cause instanceof IOException) {
            throw (IOException) cause;
        }
        if (cause instanceof RuntimeException) {
            throw (RuntimeException) cause;
        }
        throw new IllegalStateException("unexpected exception type: must be either RuntimeException or IOException", cause);
    }
    ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
    if (responseOrResponseException.responseException == null) {
        return responseOrResponseException.response;
    }
    addSuppressedException(previousException, responseOrResponseException.responseException);
    if (nodeTuple.nodes.hasNext()) {
        return performRequest(nodeTuple, request, responseOrResponseException.responseException);
    }
    throw responseOrResponseException.responseException;
}
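So the takeaway is to give the builder every node you can reach, not only the load balancer. Below is a minimal sketch of what I mean, assuming hypothetical node host names and the same credentialsProvider as above; the failure listener is optional, but it makes the failover visible in the logs:
RestClientBuilder builder = RestClient.builder(
        new HttpHost("es-node-1", 9200),   // hypothetical node host names
        new HttpHost("es-node-2", 9200),
        new HttpHost("es-node-3", 9200))
    .setHttpClientConfigCallback(httpClientBuilder ->
            httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider))
    .setFailureListener(new RestClient.FailureListener() {
        @Override
        public void onFailure(Node node) {
            // called each time a node fails; the request is then retried
            // on the next node in the list (see performRequest above)
            System.out.println("Elasticsearch node failed: " + node);
        }
    });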
I'm facing the same issue, and seeing this I realized that the retry is happening on my side too, on each host (I have 3 hosts and the exception happens in 3 threads). I wanted to post this since you might face the same issue, or someone else might come to this post because of the same socket timeout exception.
Searching the official docs: the RestHighLevelClient uses the RestClient under the hood, and the RestClient uses CloseableHttpAsyncClient, which has a connection pool. Elasticsearch specifies that you should close the client once you are done (the definition of "done" sounds ambiguous for a long-running application), but in general what I have found on the internet is that you should close it when the application is shutting down, rather than when you have finished querying.
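To illustrate that with a minimal sketch (assuming a plain Spring @Bean and a single local node), the idea is to let the container close the client once at shutdown, instead of after every query:
// minimal sketch: the container calls close() once, at application shutdown
@Bean(destroyMethod = "close")
public RestHighLevelClient restHighLevelClient() {
    return new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));
}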
Now, the official Apache documentation has an example of handling the connection pool, which I'm trying to follow. I'll try to replicate the scenario and will post back if that fixes my issue. The code can be found here:
https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/examples/org/apache/http/examples/nio/client/AsyncClientEvictExpiredConnections.java
This is what I have so far:
#Bean(name = "RestHighLevelClientWithCredentials", destroyMethod = "close")
public RestHighLevelClient elasticsearchClient(ElasticSearchClientConfiguration elasticSearchClientConfiguration,
RestClientBuilder.HttpClientConfigCallback httpClientConfigCallback) {
return new RestHighLevelClient(
RestClient
.builder(getElasticSearchHosts(elasticSearchClientConfiguration))
.setHttpClientConfigCallback(httpClientConfigCallback)
);
}
#Bean
#RefreshScope
public RestClientBuilder.HttpClientConfigCallback getHttpClientConfigCallback(
PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager,
CredentialsProvider credentialsProvider
) {
return httpAsyncClientBuilder -> {
httpAsyncClientBuilder.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE);
httpAsyncClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
httpAsyncClientBuilder.setConnectionManager(poolingNHttpClientConnectionManager);
return httpAsyncClientBuilder;
};
}
public class ElasticSearchClientManager {

    private ElasticSearchClientManager.IdleConnectionEvictor idleConnectionEvictor;

    /**
     * Custom client connection manager to create a connection watcher
     *
     * @param elasticSearchClientConfiguration elasticSearchClientConfiguration
     * @return PoolingNHttpClientConnectionManager
     */
    @Bean
    @RefreshScope
    public PoolingNHttpClientConnectionManager getPoolingNHttpClientConnectionManager(
            ElasticSearchClientConfiguration elasticSearchClientConfiguration
    ) {
        try {
            SSLIOSessionStrategy sslSessionStrategy = new SSLIOSessionStrategy(getTrustAllSSLContext());
            Registry<SchemeIOSessionStrategy> sessionStrategyRegistry = RegistryBuilder.<SchemeIOSessionStrategy>create()
                    .register("http", NoopIOSessionStrategy.INSTANCE)
                    .register("https", sslSessionStrategy)
                    .build();
            ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
            PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager =
                    new PoolingNHttpClientConnectionManager(ioReactor, sessionStrategyRegistry);
            idleConnectionEvictor = new ElasticSearchClientManager.IdleConnectionEvictor(poolingNHttpClientConnectionManager,
                    elasticSearchClientConfiguration);
            idleConnectionEvictor.start();
            return poolingNHttpClientConnectionManager;
        } catch (IOReactorException e) {
            throw new RuntimeException("Failed to create a watcher for the connection pool", e);
        }
    }

    private SSLContext getTrustAllSSLContext() {
        try {
            return new SSLContextBuilder()
                    .loadTrustMaterial(null, (x509Certificates, string) -> true)
                    .build();
        } catch (Exception e) {
            throw new RuntimeException("Failed to create SSL Context with open certificate", e);
        }
    }

    public IdleConnectionEvictor.State state() {
        return idleConnectionEvictor.evictorState;
    }

    @PreDestroy
    private void finishManager() {
        idleConnectionEvictor.shutdown();
    }

    public static class IdleConnectionEvictor extends Thread {

        private final NHttpClientConnectionManager nhttpClientConnectionManager;
        private final ElasticSearchClientConfiguration elasticSearchClientConfiguration;

        @Getter
        private State evictorState;
        private volatile boolean shutdown;

        public IdleConnectionEvictor(NHttpClientConnectionManager nhttpClientConnectionManager,
                                     ElasticSearchClientConfiguration elasticSearchClientConfiguration) {
            super();
            this.nhttpClientConnectionManager = nhttpClientConnectionManager;
            this.elasticSearchClientConfiguration = elasticSearchClientConfiguration;
        }

        @Override
        public void run() {
            try {
                while (!shutdown) {
                    synchronized (this) {
                        wait(elasticSearchClientConfiguration.getExpiredConnectionsCheckTime());
                        // Close expired connections
                        nhttpClientConnectionManager.closeExpiredConnections();
                        // Optionally, close connections that have been idle too long
                        nhttpClientConnectionManager.closeIdleConnections(elasticSearchClientConfiguration.getMaxTimeIdleConnections(),
                                TimeUnit.SECONDS);
                        this.evictorState = State.RUNNING;
                    }
                }
            } catch (InterruptedException ex) {
                this.evictorState = State.NOT_RUNNING;
            }
        }

        private void shutdown() {
            shutdown = true;
            synchronized (this) {
                notifyAll();
            }
        }

        public enum State {
            RUNNING,
            NOT_RUNNING
        }
    }
}
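For reference, the ElasticSearchClientConfiguration used above is my own properties holder and is not shown in full; a hypothetical minimal version of it (the property names are assumptions) could look like this:
// hypothetical properties holder assumed by the evictor code above
@Component
@ConfigurationProperties(prefix = "elasticsearch.client")
public class ElasticSearchClientConfiguration {

    private long expiredConnectionsCheckTime = 5000; // ms between evictor runs
    private long maxTimeIdleConnections = 5;         // seconds before an idle connection is closed

    public long getExpiredConnectionsCheckTime() { return expiredConnectionsCheckTime; }
    public void setExpiredConnectionsCheckTime(long expiredConnectionsCheckTime) { this.expiredConnectionsCheckTime = expiredConnectionsCheckTime; }
    public long getMaxTimeIdleConnections() { return maxTimeIdleConnections; }
    public void setMaxTimeIdleConnections(long maxTimeIdleConnections) { this.maxTimeIdleConnections = maxTimeIdleConnections; }
}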
I'm migrating from the HLRC to the new client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {

    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(
                restClient, new JacksonJsonpMapper());
        return new ElasticsearchClient(transport);
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    public ThisDtoIndexClass() {
    }

    // client is declared in the class it's extending from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client is set up and/or the Kafka listener. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, Spring uses it instead of the extended (autowired) constructor, so my client was always null. The error message was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor means the correct constructor is used for initialization, and I was able to index again. I assume this was a Spring Boot bean-loading related "issue".
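A minimal sketch of how the class ends up, with the same names as above: with a single constructor, Spring unambiguously uses it for injection, so the client field is always populated.
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    // the only constructor: Spring uses it for injection,
    // so `client` (declared in the superclass) is always initialized
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }
}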
We have recently moved from Jedis to Lettuce in our production services. However, we have hit a roadblock while creating Redis distributed locks.
We are using a non-clustered setup of AWS ElastiCache with one master and 2 read replicas.
Configs:
Spring-boot : 2.2.5
spring-boot-starter-data-redis : 2.2.5
spring-data-redis : 2.2.5
spring-integration-redis : 5.2.4
redis : 5.0.6
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    GenericObjectPoolConfig poolingConfig = new GenericObjectPoolConfig();
    poolingConfig.setMaxIdle(Integer.valueOf(maxConnections));
    poolingConfig.setMaxTotal(Integer.valueOf(maxIdleConnections));
    poolingConfig.setMinIdle(Integer.valueOf(minIdleConnections));
    poolingConfig.setMaxWaitMillis(-1);
    final SocketOptions socketOptions = SocketOptions.builder().connectTimeout(Duration.ofSeconds(10)).build();
    final ClientOptions clientOptions = ClientOptions.builder().socketOptions(socketOptions).build();
    LettucePoolingClientConfiguration clientOption = LettucePoolingClientConfiguration.builder()
            .poolConfig(poolingConfig).readFrom(ReadFrom.REPLICA_PREFERRED)
            .commandTimeout(Duration.ofMillis(Long.valueOf(commandTimeout)))
            .clientOptions(clientOptions).useSsl().build();
    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration = new RedisStaticMasterReplicaConfiguration(
            primaryEndPoint, Integer.valueOf(port));
    redisStaticMasterReplicaConfiguration.addNode(readerEndPoint, Integer.valueOf(port));
    redisStaticMasterReplicaConfiguration.setPassword(password);
    /*
     * LettuceClientConfiguration clientConfig = LettuceClientConfiguration
     *         .builder().useSsl()
     *         .readFrom(new ReadFrom() {
     *             @Override
     *             public List<RedisNodeDescription> select(Nodes nodes) {
     *                 List<RedisNodeDescription> allNodes = nodes.getNodes();
     *                 int ind = Math.abs(index.incrementAndGet() % allNodes.size());
     *                 RedisNodeDescription selected = allNodes.get(ind);
     *                 // logger.info("Selected random node {} with uri {}", ind, selected.getUri());
     *                 List<RedisNodeDescription> remaining = IntStream.range(0, allNodes.size())
     *                         .filter(i -> i != ind)
     *                         .mapToObj(allNodes::get).collect(Collectors.toList());
     *                 return Stream.concat(Stream.of(selected), remaining.stream())
     *                         .collect(Collectors.toList());
     *             }
     *         }).build();
     */
    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientOption);
}

@Bean
public StringRedisTemplate stringRedisTemplate() {
    return new StringRedisTemplate(redisConnectionFactory());
}
LOCKING SERVICE
@Service
public class RedisLockService {

    @Autowired
    RedisConnectionFactory redisConnectionFactory;

    private static final Logger LOGGER = LoggerFactory.getLogger(RedisLockService.class);

    public Lock obtainLock(String registryKey, String redisKey, Long lockExpiry) {
        try {
            RedisLockRegistry registry = new RedisLockRegistry(redisConnectionFactory, registryKey, lockExpiry);
            Lock lock = registry.obtain(redisKey);
            if (!lock.tryLock()) {
                LOGGER.info("Lock already made");
                return null;
            } else {
                return lock;
            }
        } catch (Exception e) {
            LOGGER.warn("Unable to acquire lock: ", e);
            return null;
        }
    }

    public void unLock(Lock lock) {
        if (lock != null) {
            lock.unlock();
        }
    }
}
Error we are getting while trying to call the obtainLock function:
.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: ERR Error running script (call to f_8426c8df41c64d8177dce3ecbbe9146ef3759cd2): #user_script:6: #user_script: 6: -READONLY You can't write against a read only replica.
at org.springframework.integration.redis.util.RedisLockRegistry$RedisLock.rethrowAsLockException(RedisLockRegistry.java:224)
at org.springframework.integration.redis.util.RedisLockRegistry$RedisLock.tryLock(RedisLockRegistry.java:276)
at org.springframework.integration.redis.util.RedisLockRegistry$RedisLock.tryLock(Re
You need to connect to the master (read/write) node of Redis. The lock is acquired by running a Lua script, which is a write operation, so it cannot run against a read-only replica; with ReadFrom.REPLICA_PREFERRED the script can get routed to a replica, which is what produces the -READONLY error above.
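One way to do that, sketched under the assumption that the primaryEndPoint, port, and password fields from the question are available: give the lock registry its own connection factory that only knows the master.
// a minimal sketch: a dedicated connection factory for the lock registry
// that targets only the master, so the lock's Lua script can write
@Bean
public LettuceConnectionFactory lockConnectionFactory() {
    RedisStandaloneConfiguration masterOnly =
            new RedisStandaloneConfiguration(primaryEndPoint, Integer.valueOf(port));
    masterOnly.setPassword(password);
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .useSsl()
            .build();
    return new LettuceConnectionFactory(masterOnly, clientConfig);
}
Then pass lockConnectionFactory to the RedisLockRegistry instead of the replica-preferring factory, while the read-heavy cache traffic keeps using the original one.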
I am using the Java RestHighLevelClient to connect to my Elasticsearch instance hosted on AWS. I can make requests against the URL in Postman and from my browser just fine, but when I use the client library I receive:
java.net.ConnectException: Connection refused
(I don't currently need any authentication, as this is a small public test instance.) This is my code:
RestHighLevelClient restHighLevelClient = new RestHighLevelClient(restClientBuilder);
GetRequest getRequest = new GetRequest("some_index", "some_type", "some_id");
final String[] elasticGetResponse = new String[1];
restHighLevelClient.getAsync(getRequest, new ActionListener<GetResponse>() {
    @Override
    public void onResponse(GetResponse documentFields) {
        try {
            elasticGetResponse[0] = restHighLevelClient.get(getRequest).toString();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onFailure(Exception e) {
        e.printStackTrace();
    }
});
Please let me know how I can fix this... thanks!
Update: Here is my code for the restClientBuilder:
MySSLHelper sslHelper = new MySSLHelper(SSLConfig.builder()
        .withKeyStoreProvider(myKeyStoreProvider)
        .withTrustStoreProvider(InternalTrustStoreProvider.INSTANCE)
        .build());
RestClientBuilder restClientBuilder = RestClient.builder(new HttpHost("MY_ELASTICSEARCH_ENDPOINT"))
        .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
            @Override
            public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpAsyncClientBuilder) {
                return httpAsyncClientBuilder.setSSLContext(sslHelper.getContext());
            }
        });
I had the same problem and solved it by putting the port and protocol, as shown on this page:
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.0/java-rest-high-getting-started-initialization.html
My code ended up like this:
RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticsearchHost, 9200, "http")));
Please try to do something like this:
RestClientBuilder restClientBuilder = RestClient.builder(new HttpHost("MY_ELASTICSEARCH_ENDPOINT", MY_ELASTICSEARCH_PORT, "MY_ELASTICSEARCH_PROTOCOL"))...
Hope this helps. Goodbye.
Hi, I'm trying to use HttpComponents 5 (beta) to make a persistent connection. I have tried the example given on their site; the code is as follows:
final IOReactorConfig ioReactorConfig = IOReactorConfig.custom()
        .setSoTimeout(Timeout.ofSeconds(45))
        .setSelectInterval(10000)
        .setSoReuseAddress(true)
        .setSoKeepAlive(true)
        .build();
final SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(new TrustAllStrategy()).build();
final PoolingAsyncClientConnectionManager connectionManager = PoolingAsyncClientConnectionManagerBuilder.create()
        .setConnectionTimeToLive(TimeValue.of(1, TimeUnit.DAYS))
        .setTlsStrategy(new H2TlsStrategy(sslContext, NoopHostnameVerifier.INSTANCE))
        .build();
client = HttpAsyncClients.createMinimal(protocol, H2Config.DEFAULT, null, ioReactorConfig, connectionManager);
client.start();

final org.apache.hc.core5.http.HttpHost target = new org.apache.hc.core5.http.HttpHost("localhost", 8000, "https");
Future<AsyncClientEndpoint> leaseFuture = client.lease(target, null);
AsyncClientEndpoint asyncClientEndpoint = leaseFuture.get(60, TimeUnit.SECONDS);

final CountDownLatch latch = new CountDownLatch(1);
final AsyncRequestProducer requestProducer = AsyncRequestBuilder
        .post(target.getSchemeName() + "://" + target.getHostName() + ":" + target.getPort() + locationposturl)
        .addParameter(new BasicNameValuePair("info", requestData))
        .setEntity(new StringAsyncEntityProducer("json post data will go here", ContentType.APPLICATION_JSON))
        .setHeader("Pragma", "no-cache")
        .setHeader("from", "http5")
        .setHeader("Custom", customheaderName)
        .setHeader("Secure", secureHeader)
        .build();
asyncClientEndpoint.execute(requestProducer, SimpleResponseConsumer.create(), new FutureCallback<SimpleHttpResponse>() {
    @Override
    public void completed(final SimpleHttpResponse response) {
        if (response != null && response.getCode() > -1) {
            try {
                System.out.println("http5:: COMPLETED : RESPONSE " + response.getBodyText());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        latch.countDown();
    }

    @Override
    public void failed(final Exception ex) {
        System.out.println("http5:: FAILED : " + target + locationposturl);
        LoggerUtil.printStackTrace(ex);
        System.out.println("http5:: Exception Request failed " + LoggerUtil.getStackTrace(ex));
        latch.countDown();
    }

    @Override
    public void cancelled() {
        System.out.println("http5:: CANCELLED : " + target + locationposturl);
        System.out.println("http5:: Exception Request cancelled");
        latch.countDown();
    }
});
latch.await();
This code works without a problem the first time, but when I send a subsequent request it throws the following exception:
http5:: Exception occurred java.lang.IllegalStateException: Endpoint is not connected
    at org.apache.hc.core5.util.Asserts.check(Asserts.java:38)
    at org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager$InternalConnectionEndpoint.getValidatedPoolEntry(PoolingAsyncClientConnectionManager.java:497)
    at org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager$InternalConnectionEndpoint.execute(PoolingAsyncClientConnectionManager.java:552)
    at org.apache.hc.client5.http.impl.async.MinimalHttpAsyncClient$InternalAsyncClientEndpoint.execute(MinimalHttpAsyncClient.java:405)
    at org.apache.hc.core5.http.nio.AsyncClientEndpoint.execute(AsyncClientEndpoint.java:81)
    at org.apache.hc.core5.http.nio.AsyncClientEndpoint.execute(AsyncClientEndpoint.java:114)
What may be the problem with the endpoint? I'm forcing the endpoint to stay alive for a day; kindly shed some light on this.
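For reference, the minimal-client examples lease an endpoint per exchange and hand it back afterwards, rather than holding one endpoint open across requests; a rough sketch of that cycle, assuming the same client, target, and requestProducer as above:
// rough sketch: lease per exchange, release back to the pool afterwards,
// so the next exchange gets a validated (re)connected endpoint
AsyncClientEndpoint endpoint = client.lease(target, null).get(60, TimeUnit.SECONDS);
try {
    final CountDownLatch latch = new CountDownLatch(1);
    endpoint.execute(requestProducer, SimpleResponseConsumer.create(),
            new FutureCallback<SimpleHttpResponse>() {
                @Override public void completed(SimpleHttpResponse response) { latch.countDown(); }
                @Override public void failed(Exception ex) { latch.countDown(); }
                @Override public void cancelled() { latch.countDown(); }
            });
    latch.await();
} finally {
    endpoint.releaseAndReuse(); // keep-alive: the connection goes back to the pool
}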
Using Spring FTP Integration with annotation configuration, I download files from the FTP server. After the download, our application still keeps connecting to the server to look for newly added files, and downloads any that it finds. But I don't need to keep the FTP session alive; I want to disconnect from the server after the first connection, once the files have been downloaded.
Code :
public class FtpServices {

    @Bean(name = "ftpSessionFactory")
    public DefaultFtpSessionFactory ftpSessionFactory() {
        System.out.println("session");
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(21);
        sf.setUsername("user");
        sf.setPassword("password");
        return sf;
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        System.out.println("2");
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.afterPropertiesSet();
        fileSynchronizer.setRemoteDirectory("/test/");
        // fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.docx"));
        fileSynchronizer.setFilter(filter);
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(value = "ftpChannel", poller = @Poller(fixedDelay = "50", maxMessagesPerPoll = "1"))
    public FtpInboundFileSynchronizingMessageSource ftpMessageSource() {
        System.out.println(3);
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File("D:/Test-downloaded/"));
        // source.stop();
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel", requiresReply = "false")
    public MessageHandler handler() {
        System.out.println(4);
        MessageHandler handler = new MessageHandler() {
            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                System.out.println(message.getPayload() + " @ServiceActivator");
                System.out.println(" Message Header :" + message.getHeaders());
            }
        };
        return handler;
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata pollerMetadata = new PollerMetadata();
        pollerMetadata.setTrigger(triggerOnlyOnce());
        return pollerMetadata;
    }
}
I also overrode AbstractFtpSessionFactory to trace the FTP server connect/disconnect process:
protected void postProcessClientAfterConnect(T t) throws IOException {
    System.out.println("After connect");
}

protected void postProcessClientBeforeConnect(T client) throws IOException {
    System.out.println("Before connect");
}
Console :
INFO : org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase -2147483648
INFO : org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase 0
Before connect
After connect
D:\Test-downloaded\demo 1.txt @ServiceActivator
Message Header :{id=e4a1fd7f-0bbf-9692-f70f-b0ac68b4dec4, timestamp=1477317086272}
D:\Test-downloaded\demo.txt @ServiceActivator
Message Header :{id=9115ee92-12b4-bf1f-d592-9c13bf7a27fa, timestamp=1477317086324}
Before connect
After connect
Before connect
After connect
Before connect
After connect
Before connect
After connect
Before connect
After connect
Before connect
After connect
Thanks.
That is really the purpose of any @InboundChannelAdapter: to poll the target system for new data periodically.
To do it only once, we sometimes suggest OnlyOnceTrigger:
public class OnlyOnceTrigger implements Trigger {

    private final AtomicBoolean done = new AtomicBoolean();

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        return !this.done.getAndSet(true) ? new Date() : null;
    }
}
But this might not work for your case, because the desired files might not be in the source FTP directory yet.
Therefore you have to keep polling until you receive the required files, and .stop() the adapter when that condition is met.
For this purpose you can use any downstream logic to determine the state, or consider implementing an AbstractMessageSourceAdvice to be injected into the PollerMetadata of the @Poller, as in the sketch below: http://docs.spring.io/spring-integration/reference/html/messaging-channels-section.html#conditional-pollers
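A rough sketch of such an advice, assuming the stop condition is "a poll returned no new files" (adapt it to your own "all files received" check); the advice is then added to the adviceChain of the PollerMetadata:
// a rough sketch: advice that stops the adapter once a poll comes back empty,
// i.e. the synchronizer found no new remote files to download
public class StopWhenDrainedAdvice extends AbstractMessageSourceAdvice {

    private final Lifecycle adapter; // the inbound channel adapter to stop

    public StopWhenDrainedAdvice(Lifecycle adapter) {
        this.adapter = adapter;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true; // always proceed with the poll
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            this.adapter.stop(); // nothing new: stop polling and let the session close
        }
        return result;
    }
}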