My microservice with @EnableTurbine config:
turbine:
  clusterNameExpression: new String('default')
  appConfig: bestallning
bestallning is a @SpringCloudApplication with hystrix.stream enabled. It registers in Eureka and the Turbine app can find it, but it has management.port set to 8092 and server.port set to 8082, and hystrix.stream binds to management.port.
Turbine now tries to fetch hystrix.stream from bestallning's server.port, not the management.port that hystrix.stream is actually bound to:
Fetching instance list for apps: [bestallning]
Fetching instances for app: bestallning
Received instance list for app: bestallning, size=1
Retrieved hosts from InstanceDiscovery: 1
Found hosts that have been previously terminated: 0
Hosts up:1, hosts down: 0
Url for host: http://143.237.21.196:8082/hystrix.stream default
Could not initiate connection to host, giving up: [{"timestamp":1460035761979,"status":404,"error":"Not Found","message":"No message available","path":"/hystrix.stream"}]
Stopping InstanceMonitor for: 143.237.21.196 default
Is it possible to have Turbine look for hystrix.stream on the correct port?
I think you'd have to write your own InstanceDiscovery (and create a @Bean of that type). It might be a useful feature in the existing implementations, though, so please open an issue in Spring Cloud Netflix.
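For illustration, here is a minimal sketch of such a bean (the class name and wiring are made up, not an existing Spring Cloud API). It assumes each client publishes its management port in its Eureka metadata map, e.g. eureka.instance.metadata-map.management.port: ${management.port}, and that Turbine builds the stream URL from the instance's port attribute, as the Spring Cloud implementation does:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.EurekaClient;
import com.netflix.turbine.discovery.Instance;
import com.netflix.turbine.discovery.InstanceDiscovery;

public class ManagementPortInstanceDiscovery implements InstanceDiscovery {

    private final EurekaClient eurekaClient;
    private final List<String> appNames;

    public ManagementPortInstanceDiscovery(EurekaClient eurekaClient, List<String> appNames) {
        this.eurekaClient = eurekaClient;
        this.appNames = appNames;
    }

    @Override
    public Collection<Instance> getInstanceList() throws Exception {
        List<Instance> instances = new ArrayList<>();
        for (String appName : appNames) {
            // Look up every registered instance of the app in Eureka
            for (InstanceInfo info : eurekaClient.getApplication(appName).getInstances()) {
                boolean up = info.getStatus() == InstanceInfo.InstanceStatus.UP;
                Instance instance = new Instance(info.getHostName(), "default", up);
                // Prefer the management port advertised via the metadata map,
                // falling back to the regular service port
                String port = info.getMetadata().getOrDefault("management.port",
                        String.valueOf(info.getPort()));
                instance.getAttributes().put("port", port);
                instances.add(instance);
            }
        }
        return instances;
    }
}

Declaring an instance of this class as a @Bean in the Turbine application would then replace the default discovery.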
Related
I have a Spring Boot (Kotlin) app using Jedis to connect to Redis.
Spring documents a list of common connection properties for Redis. My understanding from reading blog posts and documentation is that JedisConnectionFactory is expected to automatically read and respect those values.
Unfortunately, that doesn't seem to be happening in my code.
I would expect a connection failure with a default Redis running on localhost and the following application.yml, but it isn't happening.
spring:
  redis:
    database: 0
    host: localhostBadhost
    port: 9999 # default is 6379
    password: badPassword
    timeout: 60000
What steps need to be taken to ensure the Jedis auto-configuration picks these values up?
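For context, here is a minimal sketch of a setup under which those properties should be honoured (the config class is made up; it assumes spring-boot-starter-data-redis is on the classpath). The spring.redis.* values are only applied when Spring Boot itself creates the connection factory, so declaring your own JedisConnectionFactory bean bypasses them:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

@Configuration
public class RedisConfig {

    // Inject the auto-configured factory (built from spring.redis.*) rather
    // than instantiating JedisConnectionFactory manually, which would ignore
    // the application.yml settings
    @Bean
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }
}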
I have a pod which consists of multiple containers, each with an application running in it. How do I enable the actuator to fetch metrics from all of these applications? I couldn't find a way to do this.
There are four microservices running in the pod on different ports, say 8082, 8080, 8081, and 8083, but the actuator is scraping metrics only from the microservice running on 8080 (the default port).
I tried adding the application properties shown below to all of the services, but it didn't work. Here is the application.properties content:
management.endpoint.metrics.enabled=true
management.endpoints.web.exposure.include=*
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
management.metrics.use-global-registry=true
management.server.port=8888
Expected output: I should be able to see the metrics from each application using the /metrics endpoint.
I was able to solve this problem. Here are the steps:
Configure a separate actuator port for each service in its application.properties (see the sketch after these steps).
Expose the configured ports in the deployment YAML (in the case of Kubernetes).
That's it.
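A rough sketch of the Kubernetes side (all names, images, and port numbers here are hypothetical): if one container sets management.server.port=8888 in its application.properties and another sets 8889, the deployment exposes each of those ports:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-service-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multi-service-pod
  template:
    metadata:
      labels:
        app: multi-service-pod
    spec:
      containers:
        - name: service-a
          image: example/service-a:latest
          ports:
            - containerPort: 8888   # matches service-a's management.server.port
        - name: service-b
          image: example/service-b:latest
          ports:
            - containerPort: 8889   # matches service-b's management.server.port

Each actuator is then reachable on its own port, e.g. http://<pod-ip>:8888/actuator/metrics and http://<pod-ip>:8889/actuator/metrics.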
For a microservice that I have been working on, I created a custom health check class extending AbstractHealthIndicator, and I am able to get its output at
http://localhost:8080/actuator/health
But when I register the service with Consul, the health check status is failing.
I tried configuring the actuator URL for the Consul health check as spring.cloud.consul.discovery.health-check-url=http://localhost:8080/actuator/health in bootstrap, but it still fails with the error Get http://localhost:8566/actuator/health: dial tcp 127.0.0.1:8566: connect: connection refused.
If I try health-check-path: /actuator/health instead, it does not pick up this path at all and defaults to http://QINDW062.it.local:8566/my-health-check.
Any suggestions?
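For reference, the indicator is along these lines (a simplified sketch, not the exact class from the service):

import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import org.springframework.stereotype.Component;

@Component
public class MyHealthIndicator extends AbstractHealthIndicator {

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        // Replace with a real dependency check (database ping, downstream call, ...)
        boolean dependencyOk = true;
        if (dependencyOk) {
            builder.up().withDetail("dependency", "reachable");
        } else {
            builder.down().withDetail("dependency", "unreachable");
        }
    }
}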
You need to set a property in your configuration file (application.yml, for example) pointing to your health check endpoint:
spring:
  cloud:
    consul:
      discovery:
        healthCheckPath: /actuator/health
You can also decide the time interval for the health check (how often Consul will call the health check) with the following property:
spring:
  cloud:
    consul:
      discovery:
        healthCheckInterval: 20s
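If you use application.properties instead, the equivalent relaxed-binding form should be:

spring.cloud.consul.discovery.health-check-path=/actuator/health
spring.cloud.consul.discovery.health-check-interval=20s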
I've encountered an issue with @RefreshScope and its behaviour. Two main queries:
When the refresh endpoint is invoked, the service effectively restarts, unregistering and then re-registering with Eureka. I thought refreshing the scope would, in the main, be non-service-affecting?
My service starts on a random port, i.e. I've set server.port to 0 in my properties. The restart mentioned above discards the port value initially assigned to the service and registers it with Eureka as port 0. This means the service is effectively uncontactable by any Ribbon/Eureka-aware load balancer.
See my sample project here:
https://github.com/KramKroc/refreshscope
Thanks to @DaveSyer I was able to get to the bottom of this issue. In my bootstrap.yml in sample-service (see https://github.com/KramKroc/refreshscope) I had the following lines:
eureka:
  instance:
    nonSecurePort: ${server.port:8082}
This was incorrect, because it caused the service to re-register with Eureka on server.port (which was set to zero), or on 8082 if not defined. Removing the nonSecurePort entry allowed the service to be refreshed and still be registered with the correct (random) port in Eureka.
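As an aside, when registering random-port instances it is common to also give each one a unique id so Eureka can tell them apart; a snippet along the lines of the Spring Cloud documentation (not taken from the sample repo):

eureka:
  instance:
    instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}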
I’m having a bit of trouble getting Turbine to work in Spring Cloud. In a nutshell, I can’t determine how to configure it to aggregate circuits from more than one application at a time.
I have six separate services, a Eureka server, and a Turbine server running in standalone mode. I can see from my Eureka server that all of the services are registered, including Turbine. My Turbine server is up and running, and I can see its /hystrix page without issue. But when I try to use it to examine turbine.stream, I only see the FIRST service listed in turbine.appConfig; the rest are ignored.
This is my Turbine server’s application.yml, or at least the relevant parts:
---
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8010/eureka/
server:
  port: 8030
info:
  component: Turbine
turbine:
  clusterNameExpression: new String("default")
  appConfig: sentence,subject,verb,article,adjective,noun
management:
  port: 8990
When I run this and access the Hystrix dashboard on my Turbine instance, asking for turbine.stream, the ONLY circuit breakers listed in the output are those of the first service listed in appConfig, the "sentence" service in this case. Curiously, if I rearrange the order of these services and put another one first (like "noun"), I see only the circuits for THAT service. Only the first service in the list is displayed.
I'll admit to being a little confused by some of the terminology, like streams, clusters, etc., so I could be missing some basic concept here, but my understanding is that Turbine can digest streams from more than one service and aggregate them in a single display. Suggestions would be appreciated.
I don't have enough reputation to comment, so I have to write this in an answer :)
I had exactly the same problem:
There are two services, "test-service" and "other-service", each with its own working Hystrix stream,
and there is one Turbine application, which is configured like this:
turbine:
  clusterNameExpression: new String("default")
  appConfig: test-service,other-service
All of my services are running on my local machine.
The result: my Hystrix dashboard shows only the metrics from "test-service".
Reason:
It seems that a Turbine client configured in the described way doesn't handle multiple services when they are running on the same host.
This is explained here:
https://github.com/Netflix/Hystrix/issues/117#issuecomment-14262713
Turbine maintains state of all these instances in order to maintain persistent connections to them and it does rely on the "hostname" and if the host name is the same then it won't instantiate a new connection to that same server (on a different port).
So the main point is that all of your services must be registered with different hostnames. How you can do this on your local machine is described below.
UPDATE 2015-06-12/2016-01-23: Workaround for local testing
Change your hosts file:
# ...
127.0.0.1 localhost
127.0.0.1 localdomain1
127.0.0.1 localdomain2
# ...
127.0.0.1 localdomainx
And then set the hostname for each of your clients to a different domain entry, like this:
application.yml:
eureka:
  instance:
    hostname: localdomainx
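Tying this back to the example above (hypothetical assignments), each service claims its own hosts-file entry, so Turbine sees two distinct hostnames and opens a connection to each stream:

# test-service's application.yml:
eureka:
  instance:
    hostname: localdomain1

# other-service's application.yml:
eureka:
  instance:
    hostname: localdomain2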