How do I stop Hibernate Search from sniffing the nodes of a non-existent local Elasticsearch server?

I've created an OpenSearch Service domain on AWS and set the property hibernate.search.backend.uris to the address of that domain. Everything works fine: I'm able to index my entity tables and can run search queries against the OpenSearch Service domain.
Yet for some reason Hibernate Search still routinely tries to connect to localhost:9200 in order to perform a node sniffing routine. This obviously doesn't work, and the exception [es_rest_client_sniffer[T#1]] Sniffer - error while sniffing nodes java.net.ConnectException: Connection refused: no further information is thrown.
How do I stop Hibernate Search from performing this futile action? It keeps trying to sniff the nodes every few minutes or so. I've tried setting the properties hibernate.search.backend.hosts and hibernate.search.backend.protocol instead of .uris, but even with those properties Hibernate Search still tries to interact with a non-existent Elasticsearch service on localhost. I am running Hibernate Search 6.1.5.Final. The Elasticsearch version is set to 7.16.3. Here are all the relevant properties that I set programmatically:
jpaProperties.put("hibernate.search.backend.aws.credentials.type", "static");
jpaProperties.put("hibernate.search.backend.aws.credentials.access_key_id", awsId);
jpaProperties.put("hibernate.search.backend.aws.credentials.secret_access_key", awsKey);
jpaProperties.put("hibernate.search.backend.aws.region", openSearchAwsInstanceRegion);
jpaProperties.put("hibernate.search.backend.aws.signing.enabled", true);
//--------------------------------------------------------------------------------------------
jpaProperties.put("hibernate.search.automatic_indexing.synchronization.strategy", indexSynchronizationStrategy);
jpaProperties.put("hibernate.search.backend.request_timeout", requestTimeout);
jpaProperties.put("hibernate.search.backend.connection_timeout", elasticSearchConnectionTimeout);
jpaProperties.put("hibernate.search.backend.read_timeout", readTimeout);
jpaProperties.put("hibernate.search.backend.max_connections", maximumElasticSearchConnections);
jpaProperties.put("hibernate.search.backend.max_connections_per_route", maximumElasticSearchConnectionsPerRout);
jpaProperties.put("hibernate.search.schema_management.strategy", schemaManagementStrategy);
jpaProperties.put("hibernate.search.backend.analysis.configurer", "class:config.EnhancedLuceneAnalysisConfig");
jpaProperties.put("hibernate.search.backend.uris", elasticSearchHostAddress);
jpaProperties.put("hibernate.search.backend.directory.type", "local-filesystem");
jpaProperties.put("hibernate.search.backend.type", "elasticsearch");
jpaProperties.put("hibernate.search.backend.directory.root", luceneAbsoluteFilePath);
jpaProperties.put("hibernate.search.backend.lucene_version", "LUCENE_CURRENT");
jpaProperties.put("hibernate.search.backend.io.writer.infostream", true);
EDIT:
These are all the Elasticsearch-related dependencies that my application uses:
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-mapper-orm</artifactId>
    <version>6.1.5.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-backend-elasticsearch-aws</artifactId>
    <version>6.1.5.Final</version>
</dependency>
Here's the stack trace:
[ERROR] 2022-07-21 14:33:58.402 [es_rest_client_sniffer[T#1]] Sniffer - error while sniffing nodes
java.net.ConnectException: Connection refused: no further information
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:918) ~[elasticsearch-rest-client-7.17.3.jar:7.17.3]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:299) ~[elasticsearch-rest-client-7.17.3.jar:7.17.3]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287) ~[elasticsearch-rest-client-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.ElasticsearchNodesSniffer.sniff(ElasticsearchNodesSniffer.java:106) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.Sniffer.sniff(Sniffer.java:209) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.Sniffer$Task.run(Sniffer.java:140) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.Sniffer$1.run(Sniffer.java:81) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
at sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) ~[?:?]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.5.jar:4.1.5]
... 1 more
This stack trace was copied after I had removed <elasticsearch.version>7.16.3</elasticsearch.version> to see whether that would solve the problem. If the version is missing, Hibernate Search seems to default to version 7.17.3.

It took a while, but I finally found the source of the problem.
Apparently Spring Boot has a built-in autoconfiguration class for Elasticsearch called ElasticsearchRestClientAutoConfiguration. The code in this class runs by default and initializes an org.elasticsearch.client.RestClient that has a node sniffer enabled by default. If no Elasticsearch server is running on localhost, this RestClient will keep throwing exceptions because there is nothing to connect to.
Because this class is not part of the Hibernate Search library, settings such as hibernate.search.backend.discovery.enabled = false do not influence this RestClient or its Sniffer.
You can prevent Spring from creating this RestClient by telling Spring not to run ElasticsearchRestClientAutoConfiguration. This can be done in two ways.
First, you can add the following property to your application.properties:
spring.autoconfigure.exclude = org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration
Second, you can exclude this autoconfiguration class by adding it to the exclude argument of the @SpringBootApplication annotation. For instance:
@SpringBootApplication(exclude = {ElasticsearchRestClientAutoConfiguration.class})
public class MyConfiguration {
    // ...
}

Hibernate Search will only enable node discovery (create a sniffer) if the configuration property hibernate.search.backend.discovery.enabled is set to true, and by default it's false.
If the properties you listed are the only ones you set, then I don't think Hibernate Search is creating this sniffer. The sniffer also doesn't use the URIs you passed to Hibernate Search, which also suggests the sniffer is not created by Hibernate Search.
If you don't believe me, see for yourself by starting your app in debug mode and putting a breakpoint in org.hibernate.search.backend.elasticsearch.client.impl.ElasticsearchClientFactoryImpl#createSniffer.
I think you probably have something else in your application creating an Elasticsearch client and a sniffer, and that something else is not completely configured. Try launching your application in debug mode and putting breakpoints in the constructors of org.elasticsearch.client.sniff.Sniffer.

Should Oracle's ucp.jar reside in Tomcat's lib or application's war? Missing ResultSetMetaData. Achieving clean redeploy of Tomcat app with Oracle?

Suppose it is 2016. I am building a very simple Java EE app with Spring for DI, JdbcTemplate and Spring MVC for the web layer, Oracle for persistence, and I deploy it to Tomcat. Sounds easy; it could hardly be more trivial.
The most recent stable versions at the time are:
Tomcat 8.5
Oracle JDBC drivers 12.x
and Spring 4.3.x
Tomcat recommends putting JDBC drivers in $CATALINA_BASE/lib, so I follow this recommendation. Oracle recommends using their UCP pool, and the tutorials at oracle.com also suggest putting ucp.jar together with ojdbc.jar (in Tomcat's lib folder). I use Spring to manage the lifecycle of the UCP pool and pass it as a DataSource to JdbcTemplate.
I use a single dedicated server in production, and for the best experience of my users I use Tomcat's parallel deployment feature. There is nothing very special about this feature: it allows deploying a new version with no downtime and automatically (and gracefully) undeploys an old version when there are no active sessions left for it.
The missing ResultSetMetaData problem
The unexpected problem I may hit after deploying a new version of the application with such a simple setup:
INFO [http-nio-8080-exec-6] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [java.sql.ResultSetMetaData]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.sql.ResultSetMetaData]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1427)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1415)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1254)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1215)
at com.sun.proxy.$Proxy31.getMetaData(Unknown Source)
at org.springframework.jdbc.core.SingleColumnRowMapper.mapRow(SingleColumnRowMapper.java:89)
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:93)
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:60)
at org.springframework.jdbc.core.JdbcTemplate$1QueryStatementCallback.doInStatement(JdbcTemplate.java:465)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:407)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:477)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:487)
at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:497)
at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:503)
at example.App.rsMetadataTest(App.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:204)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:854)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:765)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:655)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
And now the app is broken. Any subsequent attempt to make a call involving ResultSetMetaData (e.g. jdbcTemplate.queryForObject("select 'hello' from dual", String.class)) will fail with:
java.lang.NoClassDefFoundError: java/sql/ResultSetMetaData
com.sun.proxy.$Proxy31.getMetaData(Unknown Source)
org.springframework.jdbc.core.SingleColumnRowMapper.mapRow(SingleColumnRowMapper.java:89)
org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:93)
org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:60)
org.springframework.jdbc.core.JdbcTemplate$1QueryStatementCallback.doInStatement(JdbcTemplate.java:465)
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:407)
org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:477)
org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:487)
org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:497)
org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:503)
example.App.rsMetadataTest(App.java:82)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:204)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:854)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:765)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)
javax.servlet.http.HttpServlet.service(HttpServlet.java:655)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
How to reproduce
Unfortunately I do not understand the root cause of the exception. ResultSetMetaData is a JDK class; how can it not be found? Was it unloaded? At least after some experiments I know exactly the minimum steps required to reproduce it:
deploy the 1st version of the app and init the DB pool (i.e. with a simple query which DOES NOT involve ResultSetMetaData, e.g. jdbcTemplate.query()).
deploy the 2nd version of the app
wait for the 1st version to undeploy (as gracefully as possible)
and make a call which involves ResultSetMetaData.
Boom! ResultSetMetaData is not found again and the app is broken.
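For illustration, here is a minimal sketch of that reproduction as a Spring MVC controller. It is my own example, not code from the original project; the class name, endpoints and queries are made up, but the two calls mirror steps 1 and 4 above.
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ReproController {

    private final JdbcTemplate jdbcTemplate;

    public ReproController(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Step 1: warm up the pool without touching ResultSetMetaData.
    @GetMapping("/init")
    public String init() {
        jdbcTemplate.execute("select 1 from dual");
        return "pool initialized";
    }

    // Step 4: after the redeploy and the undeploy of the old version, this call
    // triggers the missing-ResultSetMetaData error when ucp.jar sits in Tomcat's lib.
    @GetMapping("/metadata")
    public String metadataTest() {
        return jdbcTemplate.queryForObject("select 'hello' from dual", String.class);
    }
}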
This bug does not depend on Tomcat's parallel deployment feature. You can use the most recent (9.x) Tomcat with stock configuration and 2 different webapps using the same Oracle JDBC driver, deploy them in the order and under the same conditions I described above, and get the same error.
Also I would like to add that the following statement from Tomcat is incorrect:
this web application instance has been stopped already
I know for sure that the 2nd (just deployed) app is the one being invoked (not the unloaded one); it is alive and has not been stopped. But it fails to reach ResultSetMetaData on its way.
With the help of docker-compose I did many experiments to isolate the problem and see what can fix it. One thing that fixes the problem is putting ucp.jar into the .war, not into Tomcat's lib.
That's the reason for the question in the title:
Should Oracle's ucp.jar reside in Tomcat's lib or be bundled to application's war?
ucp.jar itself is not a JDBC driver that gets registered with a global service provider. Do you put HikariCP in Tomcat's lib? I do not think so. And bundling UCP into the webapp fixes the ResultSetMetaData problem. Are there any other reasons for ucp.jar to be placed in Tomcat's lib?
Broken reflection
Unfortunately, moving ucp.jar into the war by setting compile or runtime scope for it in Maven can lead to another problem:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'oracleDataSource' defined in example.App: Initialization of bean failed; nested exception is java.lang.ArrayStoreException: sun.reflect.annotation.AnnotationTypeMismatchExceptionProxy
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:562)
....
... 64 more
Caused by: java.lang.ArrayStoreException: sun.reflect.annotation.AnnotationTypeMismatchExceptionProxy
at sun.reflect.annotation.AnnotationParser.parseEnumArray(AnnotationParser.java:744)
...
at java.lang.Class.getAnnotations(Class.java:3446)
at org.springframework.transaction.annotation.AnnotationTransactionAttributeSource.determineTransactionAttribute(AnnotationTransactionAttributeSource.java:152)
The context won't start as soon as you add @EnableTransactionManagement in your Spring Java config, or <tx:annotation-driven/> if you prefer XML. But I do want to use @Transactional annotations in my app, so I am stuck again. Here at least I was able to understand the problem. Spring 4 tries to read annotations on PoolDataSourceImpl to see if the bean needs to be proxied to support annotation-based transaction control. Class#getAnnotations() fails to read annotations on the PoolDataSourceImpl class, because oracle.jdbc.logging.annotations.Feature exists in both jars (ucp and jdbc), and there are 2 class loaders holding different instances of Class<oracle.jdbc.logging.annotations.Feature>. Part of the introspection capability on PoolDataSourceImpl is broken with a weird ArrayStoreException!
The presence of such an error is an argument for keeping both Oracle jars in the same classpath.
If you faced the above problems in 2016 (when there were no higher versions of the Oracle driver), what would you do? I am asking this because the project I work on is a bit stuck in the past. Earlier, upgrading the Oracle driver led to unexpected and non-obvious problems in production, so for the nearest release we are hesitant to update the JDBC driver. But since the project was recently upgraded from Tomcat 7 to Tomcat 8, there is now a risk of facing the missing ResultSetMetaData problem, which has to be solved.
I forgot to say: you might face the stack trace complaining about the missing ResultSetMetaData in a previous version of Tomcat, 7.x. But it did not spoil the observable behaviour. Unlike Tomcat 9.x and 8.x, Tomcat 7.x printed the exception once but somehow managed to execute the query and successfully handle the request. Tomcat 7.x did not break the app. Does that mean modern Tomcat has a regression which Tomcat 7.x did not have?
The potential memory leak Tomcat warnings
What I also do not like about redeployment are the following lines in the logs:
WARNING [Catalina-utility-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [app##1] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:502)
java.util.TimerThread.mainLoop(Timer.java:526)
java.util.TimerThread.run(Timer.java:505)
WARNING [Catalina-utility-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [app##1] appears to have started a thread named [oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
oracle.jdbc.driver.BlockSource$ThreadedCachingBlockSource$BlockReleaser.run(BlockSource.java:329)
WARNING [Catalina-utility-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [app##1] appears to have started a thread named [InterruptTimer] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:502)
java.util.TimerThread.mainLoop(Timer.java:526)
java.util.TimerThread.run(Timer.java:505)
Is it possible to fix them at all? From my tests they are not caused by UCP, but rather come from ojdbc.jar. I did not find any solution here. Neither the latest version of ojdbc8 (or ojdbc11), nor using other pools or the lifecycle methods of Oracle's UniversalConnectionPoolManager (as suggested here), has helped.
If you replace Oracle with a Postgres database and driver, you won't see similar warnings and your logs will be clean.
The source code
I did not provide any code in the post, as it is already pretty long, but I created a repo with a minimal application example and a parameterised docker-compose test. So you can easily play with it and reproduce all the problems I mentioned with a single command: docker-compose rm -fs && docker-compose up --build
I am aware that you mentioned "I use Spring to manage lifecycle of UCP pool and pass it as a datasource to JdbcTemplate", but my advice would be to create your datasource as a Tomcat resource (i.e., at the context level):
<Resource
    name="tomcat/UCPPool"
    auth="Container"
    <!-- Defines UCP or JDBC factory for connections -->
    factory="oracle.ucp.jdbc.PoolDataSourceImpl"
    <!-- Defines type of the datasource instance -->
    type="oracle.ucp.jdbc.PoolDataSource"
    description="UCP Pool in Tomcat"
    <!-- Defines the Connection Factory to get the physical connections -->
    connectionFactoryClassName="oracle.jdbc.pool.OracleDataSource"
    minPoolSize="2"
    maxPoolSize="60"
    initialPoolSize="15"
    autoCommit="false"
    user="scott"
    password="tiger"
    <!-- FCF is auto-enabled in 12.2. Use this property only if you are using pre-12.2 UCP
    fastConnectionFailoverEnabled="true" -->
    <!-- Database URL -->
    url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=proddbcluster-scan)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=proddb)))"
/>
The example is taken from the guide provided by Oracle describing how to configure Tomcat for UCP.
And try to acquire a reference to that datasource through JNDI:
@Bean
public DataSource dataSource() {
    final JndiDataSourceLookup dsLookup = new JndiDataSourceLookup();
    dsLookup.setResourceRef(false);
    DataSource dataSource = dsLookup.getDataSource("tomcat/UCPPool");
    return dataSource;
}
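A possible follow-up, shown here only as a sketch and not as part of the original answer, is to feed that JNDI-backed DataSource into the JdbcTemplate the question already uses; the class and bean names below are illustrative.
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class JdbcTemplateConfig {

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        // dataSource is the bean obtained from the JNDI lookup shown above
        return new JdbcTemplate(dataSource);
    }
}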
You are very likely facing a class-loading issue; putting ucp.jar together with ojdbc.jar in your $CATALINA_BASE/lib and configuring this JNDI lookup can solve the problem.
Regarding your warnings, please consider reading this related SO question, especially this answer: it seems that there is a bug in the Oracle JDBC driver and an update to driver version 12.2 should solve the problem.
P.S.: Great question, very well documented!!
The messages "appears to have started a thread named [...] but has failed to stop it" point directly at the heart of the problem, which is a very common issue when redeploying webapps within a webapp container, whether Tomcat or Jetty or any other. The issue is that some long-running threads were started by the app but not explicitly shut down, so they keep running, and hence they keep an instance of the WebappClassLoader for this webapp in memory, which references the classes previously loaded by it. When you then redeploy the same webapp, a new, distinct WebappClassLoader with the same resources is created, which however doesn't have access to the classes loaded by the prior incarnation that the JVM is still referencing, thus leading to the NoClassDefFoundError.
There are only three general means of dealing with this:
a) Always restart the webapp container when redeploying webapps.
b) Fix all code in the webapps so that all such long-running threads are shut down. This means implementing ServletContextListeners that perform explicit shutdown operations, stopping pool management threads etc. when the ServletContext is stopped (i.e. the webapp is undeployed); a sketch follows after this list.
c) Relocate the offending code so it is not loaded by the WebappClassLoader but by the SystemClassLoader, and thus never goes out of scope. In this case you would achieve that by moving the ojdbc.jar to the system classpath (tomcat/lib) and the datasource definition to the server configuration file (tomcat/conf/server.xml). It is anyway a bad practice to include database drivers within a webapp, such fundamental code should be centrally located so that only one instance of it runs within the JVM. Having these inside webapps can lead to conflicts.
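As a rough illustration of option b), here is a minimal sketch of such a listener, assuming the pool was created through UCP's UniversalConnectionPoolManager; the listener class name and the pool name "myUcpPool" are made up for the example, and the exact shutdown call depends on how your pool was created.
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import oracle.ucp.admin.UniversalConnectionPoolManager;
import oracle.ucp.admin.UniversalConnectionPoolManagerImpl;

@WebListener
public class UcpShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            UniversalConnectionPoolManager manager =
                    UniversalConnectionPoolManagerImpl.getUniversalConnectionPoolManager();
            // stops UCP's pool maintenance threads so the WebappClassLoader can be released
            manager.destroyConnectionPool("myUcpPool");
        } catch (Exception e) {
            sce.getServletContext().log("Failed to destroy UCP pool on undeploy", e);
        }
    }
}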

Spring Boot Micro Service Not Defined in Registry When JMS Server Not Reachable

I have a strange issue that took me several days to narrow down. Basically, I have a JHipster project based on Spring Boot version 2.1.10.RELEASE, which contains 4 microservices. We are interested here in 2 of them: Gateway and Corehub.
In the gateway, I have an Angular app that performs a POST to /services/corehub/api/someendpoint, which used to work and is now failing with different error messages. The one I get most regularly is:
{
    "type": "https://www.jhipster.tech/problem/problem-with-message",
    "title": "Method Not Allowed",
    "status": 405,
    "detail": "Request method 'POST' not supported",
    "path": "/services/ambientcorehub/api/trips",
    "message": "error.http.405"
}
I ended up looking at the traces of the Registry that keeps track of the microservices for internal communication, and I found out that when this error occurs, I cannot find the corehub in the traces anymore. So it looks like the corehub microservice is not registered.
Another Git branch of this service does not have this problem, so I performed a diff between the two branches and removed changes until I could narrow down the problem.
So, in the corehub, I have a JMS listener based on this mq-jms-spring implementation. The maven dependency is as follows:
<dependency>
    <groupId>com.ibm.mq</groupId>
    <artifactId>mq-jms-spring-boot-starter</artifactId>
    <version>2.2.7</version>
</dependency>
I commented out my JmsListener class and its associated JmsContext (to get access to a topic), and kept only the configuration properties defining access to the server, along with the port, channel, topic name, etc.
If I comment out the above Maven dependency, my service works again.
If I keep the Maven dependency, with the configuration only, my corehub microservice is not registered in the Registry and becomes inaccessible from the gateway and thus from the Angular UI.
What is important to note is that I currently have a network issue which prevents me from accessing the JMS server.
So I believe the exception raised by this IBM library when the JMS server is not reachable breaks the registration of the microservice with the Spring Boot registry.
Here are the traces that come over and over in the corehub console:
2020-07-22 08:21:45.316 WARN 7964 --- [nfoReplicator-0] o.s.boot.actuate.jms.JmsHealthIndicator : JMS health check failed
com.ibm.msg.client.jms.DetailedIllegalStateException: JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name '172.31.14.1(9010)'.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:489)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:448)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:8475)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7815)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl._createConnection(JmsConnectionFactoryImpl.java:303)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:236)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6005)
at com.ibm.mq.jms.MQConnectionFactory.createConnection(MQConnectionFactory.java:6030)
at org.springframework.jms.connection.SingleConnectionFactory.doCreateConnection(SingleConnectionFactory.java:409)
at org.springframework.jms.connection.SingleConnectionFactory.initConnection(SingleConnectionFactory.java:349)
at org.springframework.jms.connection.SingleConnectionFactory.getConnection(SingleConnectionFactory.java:327)
at org.springframework.jms.connection.SingleConnectionFactory.createConnection(SingleConnectionFactory.java:242)
at org.springframework.boot.actuate.jms.JmsHealthIndicator.doHealthCheck(JmsHealthIndicator.java:52)
at org.springframework.boot.actuate.health.AbstractHealthIndicator.health(AbstractHealthIndicator.java:82)
at org.springframework.boot.actuate.health.CompositeHealthIndicator.health(CompositeHealthIndicator.java:95)
at org.springframework.cloud.netflix.eureka.EurekaHealthCheckHandler.getHealthStatus(EurekaHealthCheckHandler.java:110)
at org.springframework.cloud.netflix.eureka.EurekaHealthCheckHandler.getStatus(EurekaHealthCheckHandler.java:106)
at com.netflix.discovery.DiscoveryClient.refreshInstanceInfo(DiscoveryClient.java:1406)
at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:117)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2538' ('MQRC_HOST_NOT_AVAILABLE').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203)
... 24 common frames omitted
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;AMQ9204: Connection to host '172.31.14.1(9010)' rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2538;AMQ9204: Connection to host '/172.31.14.1:9010' rejected. [1=java.net.ConnectException[Connection timed out: connect],3=/172.31.14.1:9010,4=TCP,5=Socket.connect]],3=172.31.14.1(9010),5=RemoteTCPConnection.bindAndConnectSocket]
at com.ibm.mq.jmqi.remote.api.RemoteFAP$Connector.jmqiConnect(RemoteFAP.java:13558)
at com.ibm.mq.jmqi.remote.api.RemoteFAP.jmqiConnect(RemoteFAP.java:1426)
at com.ibm.mq.jmqi.remote.api.RemoteFAP.jmqiConnect(RemoteFAP.java:1385)
at com.ibm.mq.ese.jmqi.InterceptedJmqiImpl.jmqiConnect(InterceptedJmqiImpl.java:377)
at com.ibm.mq.ese.jmqi.ESEJMQI.jmqiConnect(ESEJMQI.java:562)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:381)
... 23 common frames omitted
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;AMQ9204: Connection to host '/172.31.14.1:9010' rejected. [1=java.net.ConnectException[Connection timed out: connect],3=/172.31.14.1:9010,4=TCP,5=Socket.connect]
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.bindAndConnectSocket(RemoteTCPConnection.java:901)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.protocolConnect(RemoteTCPConnection.java:1381)
at com.ibm.mq.jmqi.remote.impl.RemoteConnection.connect(RemoteConnection.java:976)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getNewConnection(RemoteConnectionSpecification.java:553)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getSessionFromNewConnection(RemoteConnectionSpecification.java:233)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getSession(RemoteConnectionSpecification.java:141)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionPool.getSession(RemoteConnectionPool.java:127)
at com.ibm.mq.jmqi.remote.api.RemoteFAP$Connector.jmqiConnect(RemoteFAP.java:13302)
... 28 common frames omitted
Caused by: java.net.ConnectException: Connection timed out: connect
at java.base/java.net.PlainSocketImpl.connect0(Native Method)
at java.base/java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:101)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at java.base/java.net.Socket.connect(Socket.java:558)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection$4.run(RemoteTCPConnection.java:1022)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection$4.run(RemoteTCPConnection.java:1014)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.connectSocket(RemoteTCPConnection.java:1014)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.bindAndConnectSocket(RemoteTCPConnection.java:805)
... 35 common frames omitted
Here are some version numbers:
jhipster-dependencies.version: 3.0.7
Spring boot version: 2.1.10.RELEASE
ibmmq-jms-spring version(s) that are affected by this issue: Version 2.2.7
Java version (including vendor and platform): AdoptOpenJDK\jdk-11.0.6.10-hotspot
Here is a small code sample that demonstrates the issue, namely my configuration in application.yml:
spring:
  jms:
    # Used for JMS Message reception.
    isPubSubDomain: false
application:
  oag:
    # This can be a queue or a topic (if subdomain is defined)
    # In case of a topic, sub domain must be set to public.
    queueName: "BRIDGE.XXX.TO.YYY.TST"
    isTopic: false
ibm:
  mq:
    queueManager:
    channel: XXX_GWT11.BT1
    connName: 172.31.14.1(9010)
    user: xxxx
    password:
Would it be possible to catch this exception to avoid breaking the regular Spring Boot registration mechanism?
I cannot afford having my cluster down because I cannot access the JMS server.
Besides this, I opened a message on the IBM MQ side here and a person suggested that I stop the JMS health indicator. So I set the following property, to no avail:
management:
  endpoint:
    jms:
      # Prevent Unreachable JMS Server from unregistering corehub from the registry, leading to unreachable microservice from the Gateway
      enabled: false
The corresponding documentation is here.
Any help would be greatly appreciated.
Thank you
Christophe
If you do not want your application to be considered unhealthy when JMS is down, disabling the JMS health indicator is what I would recommend. It hasn't worked for you because you used management.endpoint.jms.enabled. The correct property to use is management.health.jms.enabled:
management:
  health:
    jms:
      enabled: false

java.lang.NumberFormatException: For input string: "443,80"

I am getting the following exception while hitting my backend Spring Boot application, which is deployed in a Kubernetes container:
java.lang.NumberFormatException: For input string: "443,80"
All my services are registered with eureka:
#Eureka
spring.application.name=app-name
eureka.client.registerWithEureka=true
eureka.client.fetchRegistry=true
eureka.client.serviceUrl.defaultZone=http://app-eureka-dev/eureka
eureka.instance.preferIpAddress=true
eureka.instance.non-secure-port-enabled=true
And all my requests are routed through ingress/zuul services.
spring.application.name=app-gateway
eureka.client.registerWithEureka=false
eureka.client.fetchRegistry=true
eureka.client.serviceUrl.defaultZone=http://app-eureka-dev/eureka
When we try hitting backend services from the Swagger API, I get the exception below:
java.lang.NumberFormatException: For input string: "443,80"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at springfox.documentation.swagger2.web.HostNameProvider.componentsFrom(HostNameProvider.java:72)
at springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(Swagger2Controller.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod
I am connecting to the Eureka service through the container name, yet I am still getting the above exception. Is there any other configuration required? We do SSL offloading at the ingress; the rest should be plain HTTP or insecure calls between the container services.
This should have been fixed with Springfox 2.7.0, as can be seen in this GitHub issue and the release notes of this version.
Before Springfox 2.7.0, the following code was used to determine the port number in HostNameProvider:
String port = request.getHeader("X-Forwarded-Port");
if (hasText(port)) {
    builder.port(Integer.parseInt(port));
}
So basically it used the X-Forwarded-Port header to determine the port number. However, in your case it seems like it passes both the HTTP and HTTPS ports (443,80), which is obviously not a valid integer.
Upgrading your springfox-swagger2 dependency to 2.7.0 (or higher) should do the trick.
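If upgrading is not immediately possible, one possible workaround (my own sketch, not something provided by Springfox) is to normalize the X-Forwarded-Port header to a single value before it reaches HostNameProvider; the filter name below is illustrative.
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class ForwardedPortNormalizingFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        filterChain.doFilter(new HttpServletRequestWrapper(request) {
            @Override
            public String getHeader(String name) {
                String value = super.getHeader(name);
                if ("X-Forwarded-Port".equalsIgnoreCase(name) && value != null && value.contains(",")) {
                    // keep only the first forwarded port, e.g. "443,80" -> "443"
                    return value.split(",")[0].trim();
                }
                return value;
            }
        }, response);
    }
}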
I experienced the same problem in Swagger when my DTO properties were annotated with @ApiModelProperty. Mainly, the problem occurred when Long-typed properties were present in the DTO object and the empty string ("") value could not be converted into 0.
It was solved when I added the example attribute, as in @ApiModelProperty(example = "1", required = false).
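For reference, a minimal DTO sketch of that annotation change might look like this; the class and field names are made up for illustration.
import io.swagger.annotations.ApiModelProperty;

public class SampleDto {

    // example = "1" gives Swagger a parseable sample value for the Long field
    @ApiModelProperty(example = "1", required = false)
    private Long id;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }
}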

jclouds connection refused during hazelcast clustering

Questions
Why does it use localhost?
What does keystone have to do with it?
I can't seem to configure a keystone endpoint
Context
App: Spring Boot (1.5.6) REST API
Hibernate 5.2
Hazelcast 3.9 - as 2nd-level cache only
hazelcast-jclouds 3.7.1
jclouds-compute and jclouds-allcompute 2.0.2
Openstack cloud for VMs running the app
The Setup
I have my hazelcast.xml configured as follows:
<discovery-strategies>
    <discovery-strategy class="com.hazelcast.jclouds.JCloudsDiscoveryStrategy" enabled="true">
        <properties>
            <property name="modules">org.jclouds.logging.slf4j.config.SLF4JLoggingModule</property>
            <property name="provider">openstack-nova</property>
            <property name="endpoint">http://dev.nova.cloud.youdontknow.net:8774/v2/</property>
            <property name="identity">redacted</property>
            <property name="credential">cens0red</property>
        </properties>
    </discovery-strategy>
</discovery-strategies>
The problem
App initialization fails. Here are some log tidbits:
[TRACE] o.j.r.internal.RestAnnotationProcessor : looking up default endpoint for org.jclouds.openstack.keystone.v2_0.AuthenticationApi.public abstract org.jclouds.openstack.keystone.v2_0.domain.Access org.jclouds.openstack.keystone.v2_0.AuthenticationApi.authenticateWithTenantNameAndCredentials(java.lang.String,org.jclouds.openstack.keystone.v2_0.domain.PasswordCredentials)[bnet-web, PasswordCredentials{username=redacted, password=*****}]
[TRACE] o.j.r.internal.RestAnnotationProcessor : using default endpoint Optional.of(http://localhost:5000/v2.0/) for org.jclouds.openstack.keystone.v2_0.AuthenticationApi.public abstract org.jclouds.openstack.keystone.v2_0.domain.Access org.jclouds.openstack.keystone.v2_0.AuthenticationApi.authenticateWithTenantNameAndCredentials(java.lang.String,org.jclouds.openstack.keystone.v2_0.domain.PasswordCredentials)[bnet-web, PasswordCredentials{username=redacted, password=*****}]
[TRACE] o.j.rest.internal.InvokeHttpMethod : << converted AuthenticationApi.authenticateWithTenantNameAndCredentials to POST http://localhost:5000/v2.0/tokens HTTP/1.1
And here are bits of the exception stack traces:
Caused by: com.hazelcast.core.HazelcastException: Failed to get registered addresses
at com.hazelcast.jclouds.JCloudsDiscoveryStrategy.discoverNodes(JCloudsDiscoveryStrategy.java:93)
at com.hazelcast.jclouds.JCloudsDiscoveryStrategy.discoverLocalMetadata(JCloudsDiscoveryStrategy.java:106)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.discoverLocalMetadata(DefaultDiscoveryService.java:91)
...
Caused by: org.jclouds.http.HttpResponseException: Connection refused: connect connecting to POST http://localhost:5000/v2.0/tokens HTTP/1.1
at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:122)
...
at com.sun.proxy.$Proxy147.authenticateWithTenantNameAndCredentials(Unknown Source)
at org.jclouds.openstack.keystone.v2_0.functions.AuthenticatePasswordCredentials.authenticateWithTenantName(AuthenticatePasswordCredentials.java:43)
Other Notes
It looks like it's using the default Keystone address configured in org.jclouds.openstack.keystone.v2_0.KeystoneApiMetadata, but I don't know how that's involved.
By looking at the code, I think hazelcast-jclouds is not prepared to manage generic APIs. When connecting to a provider, you don't need to specify the endpoint, as it is well known (the AWS endpoints, Google, Azure, etc.), but when using generic APIs such as OpenStack or CloudStack, you need to tell jclouds where to connect. Unfortunately, it looks like hazelcast-jclouds lacks support for configuring custom endpoints for generic APIs.
A quick look at the code suggests that it could be easy to add, though. The properties that are taken into account are defined in the JCloudsDiscoveryStrategyFactory, and then read in the ComputeServiceBuilder to create the jclouds context.
I'm not familiar with Hazelcast, but I'd say that adding a definition for the "endpoint" property and then, if present, configuring it by calling the jclouds contextBuilder.endpoint(endpoint) method should do the trick.
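For reference, this is roughly what such a change would have to do when building the jclouds context. It is a standalone sketch with placeholder endpoint and credential values, not code from hazelcast-jclouds itself.
import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;

public class NovaContextExample {

    public static void main(String[] args) throws Exception {
        ComputeServiceContext context = ContextBuilder.newBuilder("openstack-nova")
                // for openstack-nova this is typically the Keystone auth URL,
                // e.g. http://keystone-host:5000/v2.0/, not the Nova API URL
                .endpoint("http://keystone.cloud.example.net:5000/v2.0/")
                .credentials("tenant:user", "password")
                .buildView(ComputeServiceContext.class);
        ComputeService compute = context.getComputeService();
        System.out.println(compute.listNodes());
        context.close();
    }
}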

microservice not able to locate zipkin service using discovery-server

I have a microservice environment based on Spring Boot, where I am using a Zipkin server, a discovery server (Eureka) and a config server. Now I have a REST microservice which sends logs to the Zipkin server, and this microservice is required to resolve where the Zipkin server is using the discovery server.
The following is the Zipkin configuration I have in my REST microservice's application.properties (pulled from the config server):
spring.zipkin.baseUrl=http://MTD-ZIPKIN-SERVER/
spring.zipkin.locator.discovery.enabled=true
spring.zipkin.enabled=true
...
Here MTD-ZIPKIN-SERVER is the Zipkin server's name in the discovery server.
(Screenshot of the discovery-server dashboard.)
But it does not try to resolve Zipkin from the discovery server; instead it tries to connect directly using spring.zipkin.baseUrl, and I get the exception below:
Dropped 1 spans due to ResourceAccessException (I/O error on POST request for "http://MTD-ZIPKIN-SERVER/api/v1/spans": MTD-ZIPKIN-SERVER; nested exception is java.net.UnknownHostException: MTD-ZIPKIN-SERVER)
org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://MTD-ZIPKIN-SERVER/api/v1/spans": MTD-ZIPKIN-SERVER; nested exception is java.net.UnknownHostException: MTD-ZIPKIN-SERVER
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:666)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:628)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:590)
at org.springframework.cloud.sleuth.zipkin.RestTemplateSender.post(RestTemplateSender.java:73)
at org.springframework.cloud.sleuth.zipkin.RestTemplateSender.sendSpans(RestTemplateSender.java:46)
at zipkin.reporter.AsyncReporter$BoundedAsyncReporter.flush(AsyncReporter.java:245)
at zipkin.reporter.AsyncReporter$Builder.lambda$build$0(AsyncReporter.java:166)
at zipkin.reporter.AsyncReporter$Builder$$Lambda$1.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: MTD-ZIPKIN-SERVER
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
If I provide the exact Zipkin URL in the property spring.zipkin.baseUrl, like below:
spring.zipkin.baseUrl=http://localhost:5555/
then my REST microservice is able to connect to the Zipkin server.
My goal here is to read the Zipkin server location from the discovery server. What am I doing wrong? Do I need to add some Zipkin-enabling annotation to my Spring Boot REST microservice?
This feature is available in the Edgware release train, which corresponds to version 1.3.x of Sleuth.
