EhCache terracottaConfig url over HTTPS

Can I work with secured environments (e.g. HTTPS) using Terracotta Server Arrays?
I've tried to configure the ehcache.xml file like this:
<terracottaConfig rejoin="true" url="https://localhost:9510,https://localhost:9511"/>
But it fails with the following error:
Caused by: java.lang.IllegalArgumentException: URI can't be null.
at sun.net.spi.DefaultProxySelector.select(DefaultProxySelector.java:116) ~[na:1.6.0_23]
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911) ~[na:1.6.0_23]
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:841) ~[na:1.6.0_23]
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177) ~[na:1.6.0_23]
at com.tc.config.schema.setup.sources.ServerConfigurationSource.getInputStream(ServerConfigurationSource.java:42) ~[na:na]
at com.tc.config.schema.setup.StandardXMLFileConfigurationCreator.trySource(StandardXMLFileConfigurationCreator.java:343) ~[na:na]
at com.tc.config.schema.setup.StandardXMLFileConfigurationCreator.getConfigDataSourceStrean(StandardXMLFileConfigurationCreator.java:289) ~[na:na]
at com.tc.config.schema.setup.StandardXMLFileConfigurationCreator.loadConfigDataFromSources(StandardXMLFileConfigurationCreator.java:222) ~[na:na]
at com.tc.config.schema.setup.StandardXMLFileConfigurationCreator.loadConfigAndSetIntoRepositories(StandardXMLFileConfigurationCreator.java:120) ~[na:na]
at com.tc.config.schema.setup.StandardXMLFileConfigurationCreator.createConfigurationIntoRepositories(StandardXMLFileConfigurationCreator.java:102) ~[na:na]
at com.terracotta.express.StandaloneL1Boot.resolveEmbedded(StandaloneL1Boot.java:177) ~[terracotta-toolkit-1.5-runtime-4.2.0.jar:na]
at com.terracotta.express.StandaloneL1Boot.resolveConfig(StandaloneL1Boot.java:122) ~[terracotta-toolkit-1.5-runtime-4.2.0.jar:na]
... 106 common frames omitted
If this is possible, how do I configure it?

Simply put, Terracotta currently doesn't support cluster communication over SSL. If you are using the commercial edition, you have an additional layer of security, because a client won't be able to connect to the cluster if it doesn't have the correct license key. Apart from that, you can use firewall rules to restrict access.
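For completeness, the supported (non-TLS) form of the client configuration uses plain host:port pairs with no URL scheme at all; the hostnames and ports below are placeholders:

```xml
<terracottaConfig rejoin="true" url="localhost:9510,localhost:9511"/>
```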

Related

Fail to deploy service using thick client

I'm trying to deploy a simple service using a thick client. I use a Kubernetes Job to launch the thick client and then use the Ignite instance to deploy:
private void deployService() {
    ServiceConfiguration serviceCfg = new ServiceConfiguration();
    serviceCfg.setName("simpleService");
    serviceCfg.setMaxPerNodeCount(1);
    serviceCfg.setTotalCount(1);
    serviceCfg.setService(new SimpleServiceImpl());
    ignite.services().deploy(serviceCfg);
}
but I got the following error:
SEVERE: Failed to initialize service (service will not be deployed): simpleService
class org.apache.ignite.IgniteCheckedException: com.example.ignite_springcloud.model.ignite_service.SimpleServiceImpl
    at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:11026)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor.copyAndInject(GridServiceProcessor.java:1381)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor.redeploy(GridServiceProcessor.java:1302)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor.processAssignment(GridServiceProcessor.java:1931)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor.onSystemCacheUpdated(GridServiceProcessor.java:1555)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor.access$300(GridServiceProcessor.java:133)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor$ServiceEntriesListener$1.run0(GridServiceProcessor.java:1537)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:2007)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: com.example.ignite_springcloud.model.ignite_service.SimpleServiceImpl
    at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:697)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1765)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1724)
    at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:318)
    at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:303)
    at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
    at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
    at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:11020)
    ... 10 more
Caused by: java.lang.ClassNotFoundException: com.example.ignite_springcloud.model.ignite_service.SimpleServiceImpl
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
    at java.base/java.lang.Class.forName0(Native Method)
    at java.base/java.lang.Class.forName(Class.java:398)
    at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:9503)
    at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:9441)
    at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:325)
    at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:673)
    ... 170 more
Also, the service class and interface are defined together with the thick client. I didn't provide a jar or classpath on the server nodes, but I did enable peer class loading for both the client and the servers:
igniteConfig.setPeerClassLoadingEnabled(true);
igniteConfig.setDeploymentMode(DeploymentMode.CONTINUOUS);
I just wonder if this is the correct way to deploy a service to the servers. And by the way, if I deploy the service through a thick client, and that thick client then leaves the cluster and closes, will the service still be accessible and callable by other client nodes?
Peer class loading does not work for services: https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading
You have to deploy the service classes manually on the server nodes.
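Since the server nodes run in Kubernetes here, one way to get the classes onto them is to bake the service jar into the server image so it lands on Ignite's classpath at startup. This is only a sketch: the jar name is hypothetical, and the libs path assumes the official Apache Ignite Docker image layout.

```dockerfile
# Sketch: copy the jar containing SimpleServiceImpl (and its interface)
# into Ignite's libs directory, which is added to the classpath on start.
COPY target/my-ignite-services.jar /opt/ignite/apache-ignite/libs/
```

After rebuilding the image, restart the server nodes so they load the new jar before the service is deployed.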
if I deployed the service through a thick client, then that thick client left the cluster and closed, will the service be accessible and callable by other client nodes
Yes.

How do I stop Hibernate Search from sniffing the nodes of a non-existent local Elasticsearch server?

I've created an OpenSearch Service domain on AWS and set the property hibernate.search.backend.uris to the address of that domain. Everything works fine: I'm able to index my entity tables and can run search queries against the OpenSearch Service domain.
Yet for some reason Hibernate Search still routinely tries to connect to localhost:9200 to perform a node-sniffing routine. This obviously doesn't work, and the exception [es_rest_client_sniffer[T#1]] Sniffer - error while sniffing nodes java.net.ConnectException: Connection refused: no further information is thrown.
How do I stop Hibernate Search from performing this futile action? It keeps trying to sniff the nodes every few minutes or so. I've tried setting the properties hibernate.search.backend.hosts and hibernate.search.backend.protocol instead of .uris, but even with those properties set, Hibernate Search still tries to interact with a non-existent Elasticsearch service on localhost. I am running Hibernate Search 6.1.5.Final. The Elasticsearch version is set to 7.16.3. Here are all the relevant properties that I set programmatically.
jpaProperties.put("hibernate.search.backend.aws.credentials.type", "static");
jpaProperties.put("hibernate.search.backend.aws.credentials.access_key_id", awsId);
jpaProperties.put("hibernate.search.backend.aws.credentials.secret_access_key", awsKey);
jpaProperties.put("hibernate.search.backend.aws.region", openSearchAwsInstanceRegion);
jpaProperties.put("hibernate.search.backend.aws.signing.enabled", true);
//--------------------------------------------------------------------------------------------
jpaProperties.put("hibernate.search.automatic_indexing.synchronization.strategy", indexSynchronizationStrategy);
jpaProperties.put("hibernate.search.backend.request_timeout", requestTimeout);
jpaProperties.put("hibernate.search.backend.connection_timeout", elasticSearchConnectionTimeout);
jpaProperties.put("hibernate.search.backend.read_timeout", readTimeout);
jpaProperties.put("hibernate.search.backend.max_connections", maximumElasticSearchConnections);
jpaProperties.put("hibernate.search.backend.max_connections_per_route", maximumElasticSearchConnectionsPerRout);
jpaProperties.put("hibernate.search.schema_management.strategy", schemaManagementStrategy);
jpaProperties.put("hibernate.search.backend.analysis.configurer", "class:config.EnhancedLuceneAnalysisConfig");
jpaProperties.put("hibernate.search.backend.uris", elasticSearchHostAddress);
jpaProperties.put("hibernate.search.backend.directory.type", "local-filesystem");
jpaProperties.put("hibernate.search.backend.type", "elasticsearch");
jpaProperties.put("hibernate.search.backend.directory.root", luceneAbsoluteFilePath);
jpaProperties.put("hibernate.search.backend.lucene_version", "LUCENE_CURRENT");
jpaProperties.put("hibernate.search.backend.io.writer.infostream", true);
EDIT:
These are all the Elasticsearch-related dependencies that my application uses.
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-mapper-orm</artifactId>
    <version>6.1.5.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-backend-elasticsearch-aws</artifactId>
    <version>6.1.5.Final</version>
</dependency>
Here's the stack trace:
[ERROR] 2022-07-21 14:33:58.402 [es_rest_client_sniffer[T#1]] Sniffer - error while sniffing nodes
java.net.ConnectException: Connection refused: no further information
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:918) ~[elasticsearch-rest-client-7.17.3.jar:7.17.3]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:299) ~[elasticsearch-rest-client-7.17.3.jar:7.17.3]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287) ~[elasticsearch-rest-client-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.ElasticsearchNodesSniffer.sniff(ElasticsearchNodesSniffer.java:106) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.Sniffer.sniff(Sniffer.java:209) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.Sniffer$Task.run(Sniffer.java:140) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at org.elasticsearch.client.sniff.Sniffer$1.run(Sniffer.java:81) ~[elasticsearch-rest-client-sniffer-7.17.3.jar:7.17.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
at sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) ~[?:?]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.5.jar:4.1.5]
... 1 more
This stack trace was copied after I had removed <elasticsearch.version>7.16.3</elasticsearch.version> to see whether that would solve the problem. If the version is missing, Hibernate Search seems to default to version 7.17.3.
It took a while, but I finally found the source of the problem.
Spring Boot has a built-in autoconfiguration class for Elasticsearch called ElasticsearchRestClientAutoConfiguration. The code in this class runs by default and initializes an org.elasticsearch.client.RestClient that has a node sniffer enabled by default. If no Elasticsearch server is running on localhost, this RestClient will keep throwing exceptions because there is nothing to connect to.
Because this class is not part of the Hibernate Search library, settings such as hibernate.search.backend.discovery.enabled = false do not influence this RestClient or its Sniffer.
You can prevent Spring from creating this RestClient by telling it not to run ElasticsearchRestClientAutoConfiguration. This can be done in two ways.
First, you can add the following property to your application.properties:
spring.autoconfigure.exclude = org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration
Second, you can exclude the autoconfiguration class by adding it to the exclude argument of the @SpringBootApplication annotation. For instance:
@SpringBootApplication(exclude = {ElasticsearchRestClientAutoConfiguration.class})
public class MyConfiguration {
    // ...
}
Hibernate Search will only enable node discovery (create a sniffer) if the configuration property hibernate.search.backend.discovery.enabled is set to true, and by default it's false.
If the properties you listed are the only ones you set, then I don't think Hibernate Search is creating this sniffer. The sniffer also doesn't use the URIs you passed to Hibernate Search, which further suggests the sniffer is not created by Hibernate Search.
If you don't believe me, see for yourself by starting your app in debug mode and putting a breakpoint in org.hibernate.search.backend.elasticsearch.client.impl.ElasticsearchClientFactoryImpl#createSniffer.
I think something else in your application is probably creating an Elasticsearch client and a sniffer, and that something else is not completely configured. Try launching your application in debug mode and putting breakpoints in the constructors of org.elasticsearch.client.sniff.Sniffer.

Quarkus Hibernate ORM fails to pick the default data source when multiple data sources are defined

I have a Quarkus application with a default reactive PostgreSQL data source and a JDBC DB2 data source. I cannot use a reactive DB2 data source because of an existing open issue (https://github.com/eclipse-vertx/vertx-sql-client/issues/1131).
During application startup, Hibernate ORM is not able to pick up the default datasource from the application.properties file and throws this error:
Exception: Model classes are defined for the default persistence unit, but no default datasource was found. The default EntityManagerFactory will not be created. To solve this, configure the default datasource. Refer to https://quarkus.io/guides/datasource for guidance.
at io.quarkus.hibernate.orm.deployment.HibernateOrmProcessor.handleHibernateORMWithNoPersistenceXml(HibernateOrmProcessor.java:932)
at io.quarkus.hibernate.orm.deployment.HibernateOrmProcessor.configurationDescriptorBuilding(HibernateOrmProcessor.java:420)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at io.quarkus.deployment.ExtensionLoader$2.execute(ExtensionLoader.java:882)
at io.quarkus.builder.BuildContext.run(BuildContext.java:277)
at org.jboss.threads.ContextHandler$1.runWith(ContextHandler.java:18)
at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2449)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1478)
at java.base/java.lang.Thread.run(Thread.java:833)
at org.jboss.threads.JBossThread.run(JBossThread.java:501)
application.properties
# Postgres reactive datasource
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc=false
quarkus.datasource.reactive.url=***
quarkus.datasource.username=***
quarkus.datasource.password=***
# DB2 Agroal data source
quarkus.datasource.legacy.db-kind=db2
quarkus.datasource.legacy.reactive=false
quarkus.datasource.legacy.jdbc.url=***
quarkus.datasource.legacy.username=***
quarkus.datasource.legacy.password=***
Hibernate ORM picks the default data source and everything works fine when the application has only the default data source. Please let me know if I am missing any configuration here. Thanks.

Keycloak / SpringBoot - The Issuer <https://example.com> provided in the OpenID Configuration did not match the requested issuer <https://bar.com>

I have an issue with a project I just joined.
The technical stack:
JHipster with Angular and Spring Boot
Keycloak
I have replaced the real URLs with example.com and bar.com.
application.yaml
The endpoint https://bar.com/auth/realms/artemis/.well-known/openid-configuration returns this :
{
    "issuer": "https://example.com/auth/realms/artemis",
    "authorization_endpoint": "https://example.com/auth/realms/artemis/protocol/openid-connect/auth",
    "token_endpoint": "https://bar.com/auth/realms/artemis/protocol/openid-connect/token",
    "token_introspection_endpoint": "https://bar.com/auth/realms/artemis/protocol/openid-connect/token/introspect",
    "userinfo_endpoint": "https://bar.com/auth/realms/artemis/protocol/openid-connect/userinfo",
    "end_session_endpoint": "https://example.com/auth/realms/artemis/protocol/openid-connect/logout",
    "jwks_uri": "https://bar.com/auth/realms/artemis/protocol/openid-connect/certs",
    "check_session_iframe": "https://example.com/auth/realms/artemis/protocol/openid-connect/login-status-iframe.html"
}
When I run the app, I get this error:
Caused by: java.lang.IllegalStateException: The Issuer "https://example.com/auth/realms/artemis" provided in the OpenID Configuration did not match the requested issuer "https://bar.com:8443/auth/realms/artemis"
at org.springframework.security.oauth2.client.registration.ClientRegistrations.fromOidcIssuerLocation(ClientRegistrations.java:76)
at org.springframework.boot.autoconfigure.security.oauth2.client.OAuth2ClientPropertiesRegistrationAdapter.getBuilderFromIssuerIfPossible(OAuth2ClientPropertiesRegistrationAdapter.java:84)
at org.springframework.boot.autoconfigure.security.oauth2.client.OAuth2ClientPropertiesRegistrationAdapter.getClientRegistration(OAuth2ClientPropertiesRegistrationAdapter.java:60)
at org.springframework.boot.autoconfigure.security.oauth2.client.OAuth2ClientPropertiesRegistrationAdapter.lambda$getClientRegistrations$0(OAuth2ClientPropertiesRegistrationAdapter.java:53)
at java.util.HashMap.forEach(HashMap.java:1289)
at org.springframework.boot.autoconfigure.security.oauth2.client.OAuth2ClientPropertiesRegistrationAdapter.getClientRegistrations(OAuth2ClientPropertiesRegistrationAdapter.java:52)
at org.springframework.boot.autoconfigure.security.oauth2.client.servlet.OAuth2ClientRegistrationRepositoryConfiguration.clientRegistrationRepository(OAuth2ClientRegistrationRepositoryConfiguration.java:55)
at org.springframework.boot.autoconfigure.security.oauth2.client.servlet.OAuth2ClientRegistrationRepositoryConfiguration$$EnhancerBySpringCGLIB$$c9d328e3.CGLIB$clientRegistrationRepository$0(<generated>)
at org.springframework.boot.autoconfigure.security.oauth2.client.servlet.OAuth2ClientRegistrationRepositoryConfiguration$$EnhancerBySpringCGLIB$$c9d328e3$$FastClassBySpringCGLIB$$1d0ccf00.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:363)
at org.springframework.boot.autoconfigure.security.oauth2.client.servlet.OAuth2ClientRegistrationRepositoryConfiguration$$EnhancerBySpringCGLIB$$c9d328e3.clientRegistrationRepository(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
... 92 common frames omitted
I'm new to Spring Boot. I don't really understand what I have to do to be able to use two different URLs.
Thanks for the help! I can give you more information if you need.
Your application.yaml issuer-uri does not match the issuer of the Keycloak OIDC realm you are using. Set it to https://example.com/auth/realms/artemis and it should be fine.
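For reference, in a Spring Boot app this property lives under the OAuth2 client provider settings; the provider key oidc below is JHipster's conventional name and is an assumption here:

```yaml
spring:
  security:
    oauth2:
      client:
        provider:
          oidc:
            # Must match the "issuer" value returned by the
            # .well-known/openid-configuration endpoint exactly.
            issuer-uri: https://example.com/auth/realms/artemis
```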
-- This may not be relevant to the OP's case, but it may apply to others.
-- Just a heads-up.
-- I am not a professional on this and I could be wrong, but it helped in my case.
The problem could also come from the other side, the authorization server.
So, for example, you may not only need to look at the application.yml in the resource server:
spring.security.oauth2.resourceserver.jwt.issuer-uri: http://localhost:9999
you may also need to look at the authorization server:
@Bean
public ProviderSettings providerSettings() {
    return new ProviderSettings().issuer("http://localhost:9999");
}

Handling org.elasticsearch.client.transport.NoNodeAvailableException

Hi, in my Spring Boot project I have configured Elasticsearch using JPA. I am using ElasticsearchRepository for it. For the configuration, everything works fine when I use localhost, but when I put in an IP address I face this exception:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{lDnuVli1Rriy-9j1pdozZA}{27.101.12.99}{27.101.12.99:9300}]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54) ~[elasticsearch-5.6.11.jar:5.6.11]
    at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.index(ElasticsearchTemplate.java:571) ~[spring-data-elasticsearch-3.0.10.RELEASE.jar:3.0.10.RELEASE]
    at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.save(AbstractElasticsearchRepository.java:156) ~[spring-data-elasticsearch-3.0.10.RELEASE.jar:3.0.10.RELEASE]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_151]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_151]
    at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_151]
Code for initializing Elasticsearch:
@Bean
public Client client() throws Exception {
    Settings settings = Settings.builder()
            .put("cluster.name", getElasticCluster())
            .build();
    return new PreBuiltTransportClient(Settings.EMPTY)
            .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(getElasticHost()), getElasticPort()));
}
@Bean
public ElasticsearchOperations elasticsearchTemplate() throws Exception {
    return new ElasticsearchTemplate(client());
}
elasticsearch:
  jest:
    proxy:
      host: 27.101.12.99
      port: 9300
I have searched a lot, but nothing helped in my case. Please check and help.
The Elasticsearch client in your application is joining the cluster using the transport protocol. This approach is deprecated and has already been removed in recent releases.
This transport protocol is not HTTP, and your Jest proxy probably fails to analyse/mock the data sent. This is why localhost works but the Jest proxy fails.
To keep your application compatible with future releases of Elasticsearch, you should consider using the high-level REST client, without losing any functionality for the Spring app. As a quick win, you'll be able to use Jest again, because the REST client uses HTTP to communicate with Elasticsearch.
Please have a look at this for details about the client migration (I assumed the Elasticsearch version based on the stack trace; please double-check it): https://www.elastic.co/guide/en/elasticsearch/client/java-rest/5.6/java-rest-high-level-migration.html
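As a starting point for that migration, the high-level REST client ships as a separate Maven artifact; the version below simply mirrors the 5.6.11 seen in the stack trace:

```xml
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>5.6.11</version>
</dependency>
```

The REST client then talks HTTP on port 9200 instead of the transport protocol on port 9300, so the proxy configuration above would need its port updated as well.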
