Does camel-elasticsearch 2.11.x not work remotely? - elasticsearch

I am using the Camel Elasticsearch component: http://camel.apache.org/elasticsearch.html
My assumption, based on the docs, is that the elasticsearch server must be on the same network as the running camel route in order to work. Is this correct?
To clarify, the only connection property available is 'clustername'. I assume the cluster is discovered by searching the network via multicast.
My code needs to connect to a remote service. Is this just not possible?
I am fairly new to elasticsearch in general.

I had a similar problem with the autodiscovery of Elasticsearch: I had a Camel route that tried to index some exchanges, but the cluster was located in another subnet and thus was not discovered.
With the Java API of ES it is possible to connect to a remote cluster with a TransportClient, specifying an IP address. I don't have access to the code at the moment, but the Java API section of the ES documentation provides clean example code. You could make such a connection from within a bean in the route, for example.
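For reference, a minimal sketch of such a connection, assuming the ES 1.x-era Java API that Camel 2.11 was built against; the cluster name, host and port below are placeholders:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// cluster.name must match the name of the remote cluster
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "my-cluster")
        .build();
// connect via the transport port (9300 by default), not the 9200 REST port
TransportClient client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("10.0.0.5", 9300));
// ... index documents from a bean in the route, then close on shutdown
client.close();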
I also submitted a patch to Camel that adds an IP parameter to the route, which then connects to the remote cluster with such a TransportClient. The documentation states this should be available in Camel 2.12.
Hope this helps.

Related

Disable reactive Elasticsearch client

My Spring Boot (version 2.4.1) application was successfully connected to an Elasticsearch (v7.9.3) instance using the autowired org.elasticsearch.client.RestHighLevelClient (I just had to specify the application properties and it worked).
In a new phase of the project, a dependency on spring-boot-starter-webflux was added to use some reactive logic for calling an external web service (which has nothing to do with my Elasticsearch connection).
But now the Elasticsearch client suddenly also tries to connect using Reactor, and I get errors like:
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.data.elasticsearch.client.NoReachableHostException:
Host 'https://elastic-dev.company.intra:9200:9200' not reachable. Cluster state is offline.
Caused by: org.springframework.data.elasticsearch.client.NoReachableHostException:
Host 'https://elastic-dev.company.intra:9200:9200' not reachable. Cluster state is offline.
at org.springframework.data.elasticsearch.client.reactive.SingleNodeHostProvider.lambda$lookupActiveHost$4(SingleNodeHostProvider.java:108) ~[spring-data-elasticsearch-4.1.2.jar!/:4.1.2]
I know there is a configuration issue with :9200:9200, but I would like to just disable the use of Reactor for my Elasticsearch client so that it uses the old way (I still need my Elasticsearch client). Is this possible?
Thanks.
After searching further, I found a solution that was also suggested by P.J.Meish: disable the auto-configuration classes for reactive Elasticsearch.
I preferred the configuration in application.properties:
spring.autoconfigure.exclude=\
org.springframework.boot.actuate.autoconfigure.elasticsearch.ElasticSearchReactiveHealthContributorAutoConfiguration,\
org.springframework.boot.autoconfigure.data.elasticsearch.ReactiveElasticsearchRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.elasticsearch.ReactiveElasticsearchRestClientAutoConfiguration
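If you prefer doing this in code, the same exclusions can be set on the annotation instead. A sketch, with class names as in Spring Boot 2.4 (the actuator health contributor class can be added the same way when actuator is on the classpath):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.data.elasticsearch.ReactiveElasticsearchRepositoriesAutoConfiguration;
import org.springframework.boot.autoconfigure.data.elasticsearch.ReactiveElasticsearchRestClientAutoConfiguration;

// Exclude only the reactive Elasticsearch auto-configuration; the
// RestHighLevelClient auto-configuration stays active.
@SpringBootApplication(exclude = {
        ReactiveElasticsearchRepositoriesAutoConfiguration.class,
        ReactiveElasticsearchRestClientAutoConfiguration.class
})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}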

ECS Health Check Issue with Spring Boot Management Port

Setup 1: (Not Working)
I have a task running in an ECS cluster, but it goes down because of a failing health check immediately after it starts.
My service is Spring Boot based and has both a traffic port (for service calls) and a management port (for the health check). I have permitAll() permission for the "*/health" path.
I configured the same by selecting the override-port option in the target group's health check tab as well.
Setup 2: (Working Fine)
I have the same setup in my docker-compose file, and I can access the health check endpoint in my local container.
This is how I defined it in my compose file:
service:
  image: repo/a:name
  container_name: container-1
  ports:
    - "9904:9904"   # traffic port
    - "8084:8084"   # management port
So I tried configuring the management port in the container section of the task definition. I then updated the corresponding service to this latest revision of the task definition, but when I save the service, I get an error. Is this the right way of handling this?
Error in ECS console:
Failed updating Service: The task definition is configured to use a dynamic host port,
but the target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-2:{accountId}:targetgroup/ecs-container-tg/{someId} has a health check port specified.
Three possible resolutions:
Is there a way I can specify this port mapping in the Dockerfile?
Another way to configure the management port mappings in the container config of the task definition within ECS? (Preferred)
Get rid of Spring Boot's actuator endpoint and implement our own endpoint for health? (Bad: I would need to implement a lot to show all the details that Spring Boot returns.)
The task definition is configured to use a dynamic host port but target has a health check port specified.
Based on the error, it seems you have configured dynamic port mapping; you can verify this in the task definition. See:
understanding-dynamic-port-mapping-in-amazon-ecs
With dynamic port mapping, the ECS scheduler assigns and publishes a random port on the host, which will be different from your management port (8084 here), so change the health check setting to use the traffic port accordingly.
This should resolve the health check issue. Now, to your questions:
Is there a way I can specify this port mapping in the Dockerfile?
No. Port mapping happens at run time, not at build time; you can specify it only in the task definition.
Another way to configure the management port mappings in the container config of the task definition within ECS? (Preferred)
You can use static port mapping, which means the published host port and the exposed container port are the same (8084:8084 here); with this setup the health check will work against the management port.
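For illustration, a static mapping for the management port in CloudFormation-style YAML (a sketch; the container name and ports come from the question, the rest is assumed):

# With bridge networking, HostPort equal to ContainerPort makes the
# mapping static, so the target group can health-check 8084 directly.
ContainerDefinitions:
  - Name: container-1
    PortMappings:
      - ContainerPort: 9904   # traffic port
        HostPort: 9904
        Protocol: tcp
      - ContainerPort: 8084   # management port, used by the health check
        HostPort: 8084
        Protocol: tcp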
Get rid of Spring Boot's actuator endpoint and implement our own endpoint for health? (Bad: I would need to implement a lot to show all the details that Spring Boot returns.)
A health check is a simple HTTP GET for which the ALB expects a 200 HTTP status code in response, so you can create a simple endpoint that returns 200.
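If you do go that route, a minimal sketch in Spring (the /health path here is my own choice; keep it permitted in your security config):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {

    // Plain 200 response for the ALB health check, independent of actuator.
    @GetMapping("/health")
    public ResponseEntity<Void> health() {
        return ResponseEntity.ok().build();
    }
}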
So, after 2 days of trying different things:
In the task definition, the networking mode should be of type "Bridge".
In the task definition, leave the CPU and memory units empty; providing them at the container level should be enough.

Jaeger with ElasticSearch

I have created a microservice based architecture using Spring Boot and deployed the application on Kubernetes/Istio platform.
The different microservices communicate with each other using either JMS (ActiveMQ) or REST API.
I get tracing for the REST communication in Istio's Jaeger, but the JMS-based communication is missing from Jaeger.
I am using Elasticsearch to store my application logs.
Is it possible to use the same Elasticsearch as the backend (DB) for Jaeger?
If yes, then I will store tracing-specific logs in Elasticsearch and query them in the Jaeger UI.
I believe you can reuse Elasticsearch for multiple purposes; each use gets its own set of indices, so the separation is clean.
From https://www.jaegertracing.io/docs/1.11/deployment/:
Collectors require a persistent storage backend. Cassandra and Elasticsearch are the primary supported storage backends.
Tying the networking all together, there is a docker-compose example in: How to configure Jaeger with elasticsearch?
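As a rough sketch of what such a compose file can look like (the image tags, ports and service names here are my assumptions; SPAN_STORAGE_TYPE and ES_SERVER_URLS are the documented Jaeger settings):

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    environment:
      - discovery.type=single-node   # single-node dev setup
    ports:
      - "9200:9200"
  jaeger:
    image: jaegertracing/all-in-one:1.11
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch          # store traces in ES
      - ES_SERVER_URLS=http://elasticsearch:9200 # reuse the same ES
    ports:
      - "16686:16686"   # Jaeger UI
      - "14268:14268"   # collector HTTP endpoint
    depends_on:
      - elasticsearch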
While this isn't exactly what you asked, it sounds like what you're trying to achieve is seeing tracing for your JMS calls in Jaeger. If that is the case, you could use an OpenTracing instrumentation for JMS or ActiveMQ to report tracing data directly to Jaeger. Here's one potential solution I found with a quick Google search; there may be others:
https://github.com/opentracing-contrib/java-jms

Wildfly Swarm Consul

I am trying to register a Wildfly Swarm REST service with a running Consul agent, but it's not working correctly.
I am able to register a service (I can see it in the Consul UI), but somehow the health checks are not working.
The Swarm server keeps telling me that "sending the check" failed due to "HTTP 405 Method Not Allowed". I can see similar logs in the Consul console, saying that the GET method is not allowed.
I am at a dead end: my application is not working, and neither is the Wildfly Swarm example (same exception). I also configured a CORS filter on both sides just to be sure, but that's not working either.
I am using Wildfly Swarm 2017.10.1 and Consul 1.0.0.
I hope you guys can help.
Best regards
I figured it out myself. Obviously, it wasn't that hard ^^
I checked the version of the Consul Client API used by my Wildfly Swarm version: it's 0.9.16. I downloaded all the Consul versions and checked which ones are compatible. I can verify that all versions up to 0.9.3 work.
Consul 1.0.0 has some very critical breaking changes, and I really don't understand why they were not introduced in an HTTP API v2, but that's not the point here.
I highly recommend upgrading the Consul Client API used by the topology-consul fraction to a newer version like 0.16.5 or 0.17.0.
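For example, a dependency override in the consuming project's POM might look like this (a sketch; I'm assuming the underlying library is the Orbitz consul-client, so verify the coordinates against the topology-consul fraction's own POM):

<dependencyManagement>
  <dependencies>
    <!-- Assumed coordinates; check them against the fraction's POM -->
    <dependency>
      <groupId>com.orbitz.consul</groupId>
      <artifactId>consul-client</artifactId>
      <version>0.17.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>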
At the very least, please add a note to the README of the ribbon-consul example stating which Consul versions can be used.

How to configure DS Replicas links in Spring Cloud Eureka Server Dashboard

I set up a Eureka cluster composed of 3 replicas.
I get the nice dashboard, which is automatically populated with the instances currently registered with Eureka and the DS Replicas.
However, the DS Replicas links seem to point to the value I set as eureka.client.serviceUrl.defaultZone. In my case this value is something like http://node-01:8761/eureka/, which actually returns a 404. Is there a way to configure the dashboard to strip the /eureka/ part, so that following the links lands me on the other dashboards, or am I misunderstanding the use of those links?
There is no easy way to alter the link as you ask.
However, the UI part of the Eureka server is a standard Spring MVC @Controller. An instance of it is created by org.springframework.cloud.netflix.eureka.server.EurekaServerConfiguration. Have a look at the current implementation; it shouldn't be too hard to provide your own customized version instead.
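For illustration, a minimal sketch of just the URL rewriting (not a replacement for the dashboard controller; the /replica-links path and the regex are my own choices):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ReplicaLinksController {

    // The comma-separated list from the question, e.g.
    // http://node-01:8761/eureka/,http://node-02:8761/eureka/
    @Value("${eureka.client.serviceUrl.defaultZone}")
    private String defaultZone;

    // Expose the rewritten links on a separate path instead of
    // patching the stock dashboard.
    @GetMapping("/replica-links")
    @ResponseBody
    public List<String> replicaLinks() {
        return Arrays.stream(defaultZone.split(","))
                .map(String::trim)
                .map(url -> url.replaceAll("/eureka/?$", "/")) // strip the /eureka/ suffix
                .collect(Collectors.toList());
    }
}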
