I'm having a problem with slowness when many requests come to my website: it starts to generate "Wait" threads. I've set up the RestTemplate as a bean:
@Bean
public RestTemplate restTemplate(RestTemplateBuilder restTemplateBuilder) {
    return restTemplateBuilder
            .setConnectTimeout(Integer.parseInt(env.getProperty("service.configuration.http.http-request-timeout")))
            .setReadTimeout(Integer.parseInt(env.getProperty("service.configuration.http.http-request-timeout")))
            .requestFactory(clientHttpRequestFactory())
            .build();
}
When I look at the process that is generating the problem, I find HttpClient in a wait state.
Does anybody know what I can do to solve this problem?
I'm using Java 8, Apache Tomcat, and Spring Boot.
In my past project I used this kind of configuration:
@Bean
public RestTemplate restTemplate() {
    HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory();
    factory.setHttpClient(httpClient());
    return new RestTemplate(factory);
}
@Bean
public HttpClient httpClient() {
    // Use a connection pool
    PoolingHttpClientConnectionManager pcm = new PoolingHttpClientConnectionManager();
    HttpClientBuilder hcb = HttpClientBuilder.create();
    // Close idle connections after 5 seconds
    pcm.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
    // Specify all the timeouts in milliseconds
    RequestConfig config = RequestConfig.custom()
            .setConnectionRequestTimeout(5000)
            .setSocketTimeout(5000)
            .setConnectTimeout(5000)
            .build();
    hcb.setDefaultRequestConfig(config);
    hcb.setConnectionManager(pcm).setConnectionManagerShared(true);
    // Check if a proxy is required to reach the final resource
    if (proxyEnable) {
        // If enabled, configure it
        BasicCredentialsProvider credentialProvider = new BasicCredentialsProvider();
        AuthScope scope = new AuthScope(hostProxy, portProxy);
        if (StringUtils.hasText(usernameProxy) && StringUtils.hasText(passwordProxy)) {
            UsernamePasswordCredentials credentials = new UsernamePasswordCredentials(usernameProxy, passwordProxy);
            credentialProvider.setCredentials(scope, credentials);
        }
        hcb.setDefaultCredentialsProvider(credentialProvider).setRoutePlanner(proxyRoutePlanner);
    }
    // Use a custom keep-alive strategy
    if (cas != null) {
        hcb.setKeepAliveStrategy(cas);
    }
    return hcb.build();
}
Where cas is an instance of:
public class WsKeepAliveStrategy implements ConnectionKeepAliveStrategy {

    private Long timeout;

    @Override
    public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
        return timeout;
    }

    public void setTimeout(Long timeout) {
        this.timeout = timeout;
    }
}
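For completeness, a minimal sketch of how cas might be created; the 20-second value is purely illustrative and not part of the original configuration:

WsKeepAliveStrategy cas = new WsKeepAliveStrategy();
cas.setTimeout(20000L); // illustrative: keep pooled connections alive for 20 seconds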
In this way I could configure HttpClient to use a connection pool, specify when to close idle connections, and set the socket timeout, connection timeout, and connection request timeout.
With this configuration I had no more issues.
I hope it is useful.
Angelo
It must be a case of a missing timeout. You should try to pin down the exact problem happening in your case and change the setting causing it. Switching the RequestFactory to another library may or may not solve it, since it all depends on the problem, so my advice is to identify it first.
For example:
We faced a similar issue where our thread was getting stuck in RestTemplate, so we took a thread dump, which looked like this:
"pool-12-thread-1" #41 prio=5 os_prio=0 tid=0x00007f17a624e000 nid=0x3d runnable [0x00007f1738f96000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
- locked <0x00000000ebc7d888> (a java.lang.Object)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
- locked <0x00000000ebc7d8a0> (a sun.security.ssl.AppInputStream)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
- locked <0x00000000e94b9608> (a java.io.BufferedInputStream)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
- locked <0x00000000d57e5a30> (a sun.net.www.protocol.https.DelegateHttpsURLConnection)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
- locked <0x00000000d57e5a30> (a sun.net.www.protocol.https.DelegateHttpsURLConnection)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:84)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:652)
It clearly shows that the reason is a missing read timeout, so we added:
final SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setReadTimeout(10_000); // 10 sec as needed by us
final RestTemplate restTemplate = new RestTemplate(requestFactory);
Similarly, once you have found the reason in your case, add the proper timeout.
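If the dump had instead shown the thread stuck while opening the connection, the fix would be the connect timeout on the same factory; a one-line sketch with a purely illustrative value:

requestFactory.setConnectTimeout(5_000); // fail fast if the TCP connection cannot be established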
The post linked HERE explains why it happens.
This configuration is in line with another Baeldung article about RestTemplateBuilder. It looks nice and clean, but it hides a default PoolingHttpClientConnectionManager with defaultMaxPerRoute set to 5.
What does this default max per route mean? It means that only 5 simultaneous HTTP connections to the same host are possible.
So you can configure RestTemplate to use a pooled implementation such as HttpComponentsClientHttpRequestFactory with an overridden defaultMaxPerRoute:
PoolingHttpClientConnectionManager poolingConnManager = new PoolingHttpClientConnectionManager();
poolingConnManager.setMaxTotal(50);
poolingConnManager.setDefaultMaxPerRoute(50);
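To actually plug that pool into RestTemplate, one possible wiring (a sketch, assuming Apache HttpClient 4.x is on the classpath) is:

CloseableHttpClient pooledClient = HttpClients.custom()
        .setConnectionManager(poolingConnManager)
        .build();
RestTemplate restTemplate = new RestTemplate(new HttpComponentsClientHttpRequestFactory(pooledClient));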
I'm migrating from the HLRC to the new Elasticsearch Java client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {

    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        return new ElasticsearchClient(transport);
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    public ThisDtoIndexClass() {
    }

    // client is declared in the class it's extending from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with Elasticsearch 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one works just fine.
I suspect it has something to do with the way my client is set up and/or the Kafka listener. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it overrides the constructor that takes the client configuration (or Spring simply never calls that constructor), so my client was always null. The error message I got was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring initialize the class through the correct constructor, and I was able to index again. I assume this was a Spring Boot bean-loading related "issue".
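In other words, a minimal sketch of the fixed class keeps only the injected constructor, so Spring has no default constructor to fall back on:

@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    // no no-arg constructor: Spring must use this one, so the client is always initialized
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    // ... @KafkaListener method unchanged
}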
I am using Spring Integration SFTP to put files on a remote server; below is my configuration.
<spring-integration.version>5.2.5.RELEASE</spring-integration.version>
I have configured a @MessagingGateway:
@MessagingGateway
public interface SftpMessagingGateway {
    @Gateway(requestChannel = "sftpOutputChannel")
    void sendToFTP(Message<?> message);
}
I have configured the MessageHandler as below,
@Bean
@ServiceActivator(inputChannel = "sftpOutputChannel")
public MessageHandler genericOutboundhandler() {
    SftpMessageHandler handler = new SftpMessageHandler(outboundTemplate(), FileExistsMode.APPEND);
    handler.setRemoteDirectoryExpressionString("headers['remote_directory']");
    handler.setFileNameGenerator((Message<?> message) -> (String) message.getHeaders().get(Constant.FILE_NAME_KEY));
    handler.setUseTemporaryFileName(false);
    return handler;
}
I have configured SftpRemoteFileTemplate as below
private SftpRemoteFileTemplate outboundTemplate;

public SftpRemoteFileTemplate outboundTemplate() {
    if (outboundTemplate == null) {
        outboundTemplate = new SftpRemoteFileTemplate(sftpSessionFactory());
    }
    return outboundTemplate;
}
This is the configuration for SessionFactory
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory();
    factory.setHost(host);
    factory.setPort(port);
    factory.setUser(username);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    factory.setKnownHosts(host);
    factory.setSessionConfig(configPropertiesOutbound());
    CachingSessionFactory<LsEntry> csf = new CachingSessionFactory<>(factory);
    csf.setSessionWaitTimeout(1000);
    csf.setPoolSize(10);
    csf.setTestSession(true);
    return csf;
}
I have configured all of this in one of the services.
Now the problem is:
Sometimes the entire operation takes more than ~15 minutes, especially if the service has been idle for a few hours, and I am not sure what is causing this.
It looks like it is spending the time getting an active session from the CachingSessionFactory; after that, the operations are pretty fast. Below is a snapshot from a tool where I managed to capture the timing.
It usually takes a few milliseconds to transfer the files.
I have recently made the changes below, but I was getting the same issue before them as well:
I have set isShareSession to false; earlier it was DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
There was no pool size set; I have set it to 10.
I think I have configured something incorrectly and that's why I end up piling up connections? Or is it something else?
Observations:
The time taken to complete the operation is roughly the same every time the issue occurs, i.e. 938000+ milliseconds.
If I restart the application daily, it works perfectly fine.
I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);

    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);

    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);

    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);

    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);
    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If that is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
It turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution similar to the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and use it to access them from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
this::buildConsumerSupportServiceClient,
getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
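As a rough idea, getConsumerSupportServicePoolConfig() can just return a GenericObjectPoolConfig; a minimal sketch, with sizing values that are only illustrative:

private GenericObjectPoolConfig getConsumerSupportServicePoolConfig() {
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxTotal(10); // illustrative: cap the number of pooled proxies
    poolConfig.setMinIdle(2);   // illustrative: keep a couple of proxies warm
    return poolConfig;
}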
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
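A borrow/return cycle looks roughly like the following sketch (the searchConsumer call is only illustrative; use whatever operation your service contract defines):

ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
try {
    // use the borrowed proxy from the current thread only
    proxy.searchConsumer(request);
} finally {
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}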
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
I've created a Spring JMS application using version 4.1.2.RELEASE, which is connected to a broker that is running ActiveMQ 5.11.0. The problem that I'm seeing is as follows. In the logs, I notice that every second, I'm seeing a connection being created as such.
2017-06-21 13:10:21,046 | level=INFO | thread=ActiveMQ Task-1 | class=org.apache.activemq.transport.failover.FailoverTransport | Successfully connected to tcp://localhost:61616
I know that it is creating a new ActiveMQ connection each time, because it says successfully "connected" and not "reconnected" as shown in the code located here: http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.collector/1.3.3/org/apache/activemq/transport/failover/FailoverTransport.java#891
I don't have a caching connection factory set for my consumer, but I'm wondering if the following is the culprit when it comes to why I'm seeing constant connections being created.
factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
The following post states that consumers should not be cached, but I wonder if that applies to caching the connection + session. If the connection is cached, but the session is not, then I wonder if that creates a problem.
Why DefaultMessageListenerContainer should not use CachingConnectionFactory?
The following are the configurations that I'm using in my application. I am hoping that it is something that I've misconfigured, and would appreciate any insights that anyone has to offer.
Spring Configurations
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() throws Throwable {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
    factory.setMaxMessagesPerTask(-1);
    factory.setConcurrency("1");
    factory.setSessionTransacted(true);
    return factory;
}
@Bean
public CachingConnectionFactory cachingConnectionFactory() {
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(connectionFactory());
    cachingConnectionFactory.setCacheConsumers(false);
    cachingConnectionFactory.setSessionCacheSize(1);
    return cachingConnectionFactory;
}

@Bean
public ActiveMQConnectionFactory connectionFactory() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setInitialRedeliveryDelay(1000L);
    redeliveryPolicy.setRedeliveryDelay(1000L);
    redeliveryPolicy.setMaximumRedeliveries(6);
    redeliveryPolicy.setUseExponentialBackOff(true);
    redeliveryPolicy.setBackOffMultiplier(5);

    ActiveMQConnectionFactory activeMQ = new ActiveMQConnectionFactory("admin", "admin", "tcp://localhost:61616");
    activeMQ.setRedeliveryPolicy(redeliveryPolicy);
    activeMQ.setPrefetchPolicy(prefetchPolicy());
    return activeMQ;
}

@Bean
public JmsMessagingTemplate jmsMessagingTemplate() {
    ActiveMQTopic activeMQ = new ActiveMQTopic("topic.out");
    JmsMessagingTemplate jmsMessagingTemplate = new JmsMessagingTemplate(cachingConnectionFactory());
    jmsMessagingTemplate.setDefaultDestination(activeMQ);
    return jmsMessagingTemplate;
}

protected ActiveMQPrefetchPolicy prefetchPolicy() {
    ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
    int prefetchValue = 1000;
    prefetchPolicy.setQueuePrefetch(prefetchValue);
    return prefetchPolicy;
}
Thanks,
Juan
The issue was indeed the following code.
factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
The moment that I removed it, the rapid connection creation stopped.
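For reference, a minimal sketch of the container factory after the change; with no explicit cache level, the listener container falls back to its default caching (consumer caching when no external transaction manager is configured), which avoids opening a new broker connection for every receive attempt:

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    // no setCacheLevel(CACHE_NONE): connection, session and consumer are cached by the container
    factory.setMaxMessagesPerTask(-1);
    factory.setConcurrency("1");
    factory.setSessionTransacted(true);
    return factory;
}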
I would like to handle TCP connection factory exceptions.
With an abstract connection factory:
@Bean
public AbstractClientConnectionFactory clientFactory() {
    TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory(host, Integer.parseInt(port));
    factory.setSoKeepAlive(Boolean.parseBoolean(keepAlive));
    factory.setSoTimeout(timeout);
    factory.setSoReceiveBufferSize(Integer.parseInt(bufferSize));
    factory.setSoSendBufferSize(Integer.parseInt(bufferSize));
    return factory;
}
This connection factory is injected into separate TcpSendingMessageHandler and TcpReceivingChannelAdapter beans.
@Bean
public TcpReceivingChannelAdapter tcpIn() {
    ...
    receiver.setConnectionFactory(clientFactory());
    ...
}

@Bean
@ServiceActivator(...)
public TcpSendingMessageHandler tcpOut() {
    ...
    sender.setConnectionFactory(clientFactory());
    ...
}
I have some ApplicationListeners for events such as TcpConnectionExceptionEvent, TcpConnectionCloseEvent, and TcpConnectionOpenEvent.
@EventListener
public void handleTcpConnectionCloseEvent(TcpConnectionExceptionEvent event) {
    ...
}
However, I noticed that when an open connection was closed a TcpConnectionExceptionEvent was published, but not when the connection could not be opened in the first place. How is it possible to deal with "connection refused" problems or any other TCP errors?
Is it possible to start/stop the connection using the control bus? I am sending:
Message<String> operation = MessageBuilder.withPayload("@clientFactory.isRunning()").build();
boolean sent = operationChannel.send(operation);
This does not seem to be working: no response is received, and it only creates "continuous" calls looking for the reference, swamping the application. The clientFactory bean exists (checked with context.getBeanDefinitionNames()).
Additionally, could I set a maximum number of retries for attempting to reconnect?
Edit (Retry Advice): I added the tcpRetryAdvice to my TCP outbound channel, but I am still confused because the clientFactory() (on "connection refused") does not follow the policy defined in the retry advice. How can I control the current number of attempts and whether the message was finally delivered?
@Bean
@ServiceActivator(inputChannel = "tcpSender", adviceChain = "tcpRetryAdvice")
public TcpSendingMessageHandler tcpOut(AbstractClientConnectionFactory connectionFactory) { ... }
@Bean
public RequestHandlerRetryAdvice tcpRetryAdvice() {
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(2);

    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(3000);
    backOffPolicy.setMaxInterval(10000);
    backOffPolicy.setMultiplier(2);

    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(retryPolicy);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    RequestHandlerRetryAdvice tcpRetryAdvice = new RequestHandlerRetryAdvice();
    tcpRetryAdvice.setRetryTemplate(retryTemplate);
    // This allows fail-controlling
    tcpRetryAdvice.setRecoveryCallback(new ErrorMessageSendingRecoverer(failMessageChannel()));
    return tcpRetryAdvice;
}
Edit (ControlBus):
I am just trying to stop sending messages over TCP, because I need to know when the TCP connection factory is connected so that I do not drop JMS messages that have already been consumed.
Edit (logging errors for beans):
First of all, I have:
tcpRetryAdvice.setRecoveryCallback(new ErrorMessageSendingRecoverer(failMessageChannel()));
This is fine for tracing the exception, but how can I get hold of the message that was not sent?
The error is sent to the errorChannel, but I still see the whole stack trace when the clientFactory() bean raises a "connection refused" exception. I would like to avoid this:
[ERROR][TcpSendingMessageHandler] - [TcpSendingMessageHandler.java:80] - 26/08/2016
20:40:57.424 - Error creating connection
java.net.ConnectException: Connection refused: connect
at java.net.TwoStacksPlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
....
I just want this in my error channel: Message not delivered <Failed to obtain a connection; nested exception is java.net.ConnectException: Connection refused: connect>
I am sending the callback to failMessageChannel() and I do not want to see the stack trace. Ideally, I want to manage the callback: get the message that was not sent, save it while waiting for the socket to reconnect, and log the error through an errorChannel.
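What I am after is something like the following sketch on failMessageChannel (assuming the ErrorMessageSendingRecoverer publishes a MessagingException that carries the failed message; the handler name is made up):

@ServiceActivator(inputChannel = "failMessageChannel")
public void handleFailedTcpSend(ErrorMessage errorMessage) {
    Throwable cause = errorMessage.getPayload();
    if (cause instanceof MessagingException) {
        Message<?> failedMessage = ((MessagingException) cause).getFailedMessage();
        // illustrative: stash the message and resend it once the socket reconnects
    }
}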
A new TcpConnectionFailedEvent was recently added.
4.3.2 should be released in the next week or so; you can try it out first using the 4.3.2.BUILD-SNAPSHOT version.
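It can be handled in the same style as your existing listeners; a minimal sketch:

@EventListener
public void handleTcpConnectionFailedEvent(TcpConnectionFailedEvent event) {
    // react here to "connection refused" and other failures to open the connection
}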
You can add a retry advice to the outbound adapter.
I am not sure what you're saying about the control bus.