My Consul setup is as follows:
consul-01 server on 192.168.30.112
consul-02 server on 192.168.30.113
consul-03 server on 192.168.30.114
consul-client on 192.168.30.115
consul-client on 192.168.30.116
I have registered three .NET Core services: service1, service2, and service3. Two instances each of service1 and service2 are running on 192.168.30.115 and 192.168.30.116, and one instance of service3 is running on 192.168.30.116. My use case is that service1 talks to service2, and service2 in turn talks to service3. All of this works fine. Now, when I define an intention with consul intention create -deny service1 service2, I expect to get an error when I hit the URL of service1, but it still returns the proper output. I am not using any sidecar proxy in my setup. I just want to know whether intentions only work with a sidecar proxy, or whether they also work without one.
Thanks
Consul intentions are authorization policies that allow you to control access between applications within a service mesh. You must use a sidecar proxy, or natively integrate your application with the mesh, in order to use intentions. They are not applicable if you are only using Consul for service discovery.
Related
I am working on a scenario where we need a composite health actuator for our service. For example, we have three services: A, B, and C. The health of Service A depends on Service B, and that of Service B depends on Service C. So if any of these services is down or unhealthy, the health check for Service A should fail.
We can do that by making a simple WebClient (HTTP) call to the health endpoint and checking the status. But I wanted to check whether there is a more efficient approach, as the SLA for the health endpoint should be less than 1 second, or as low as possible.
We are using PCF as our PaaS. Any thoughts/suggestions on this?
Thanks,
Sagar
We did something similar over HTTP, using RestTemplate to do the health checks, and didn't face any delays; it responds fairly well.
If you can use internal routes (*.apps.internal, i.e. container-to-container), it could be more efficient, as the traffic will stay within PCF.
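For reference, a check along these lines can be wrapped in a custom Spring Boot HealthIndicator, so that Service A's own /actuator/health reports DOWN when Service B's health endpoint is unreachable or unhealthy. This is only a minimal sketch: the class name, the downstream.health-url property, and the default *.apps.internal URL are assumptions, and the timeouts are just examples chosen to keep the overall call well under a 1-second SLA.

import java.time.Duration;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Sketch of a composite health indicator: Service A reports DOWN when the
// downstream Service B health endpoint cannot be reached or is not healthy.
// The property name and its default URL are placeholders, not real values.
@Component
public class DownstreamHealthIndicator implements HealthIndicator {

    private final RestTemplate restTemplate;
    private final String downstreamHealthUrl;

    public DownstreamHealthIndicator(
            RestTemplateBuilder builder,
            @Value("${downstream.health-url:http://service-b.apps.internal:8080/actuator/health}") String downstreamHealthUrl) {
        // Tight timeouts so the composite health call stays fast.
        this.restTemplate = builder
                .setConnectTimeout(Duration.ofMillis(300))
                .setReadTimeout(Duration.ofMillis(500))
                .build();
        this.downstreamHealthUrl = downstreamHealthUrl;
    }

    @Override
    public Health health() {
        try {
            ResponseEntity<String> response =
                    restTemplate.getForEntity(downstreamHealthUrl, String.class);
            if (response.getStatusCode().is2xxSuccessful()) {
                return Health.up().withDetail("serviceB", "UP").build();
            }
            return Health.down()
                    .withDetail("serviceB", response.getStatusCode().toString())
                    .build();
        } catch (Exception ex) {
            // Connection errors and non-2xx responses (the actuator returns 503
            // when DOWN) end up here and mark Service A's aggregate health DOWN.
            return Health.down(ex).build();
        }
    }
}

Because Spring Boot aggregates all HealthIndicator beans, this indicator's status is rolled into Service A's own /actuator/health response automatically.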
Setup 1 (Not Working):
I have a task running in an ECS cluster, but it goes down because of a failing health check immediately after it starts.
My service is Spring Boot based and has both a traffic port (for service calls) and a management port (for the health check). I have permitAll() permission for the "*/health" path.
PFA: I also configured this by selecting the override-port option in the target group's health check tab.
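For what it's worth, the permitAll() rule mentioned above would typically look something like the sketch below (assuming the older WebSecurityConfigurerAdapter style of configuration; the class name and the /actuator/health matcher are assumptions, not the actual configuration):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Sketch only: leaves the health endpoint open so the load balancer health
// check can reach it without credentials, while other requests need auth.
@Configuration
@EnableWebSecurity
public class ActuatorSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/actuator/health", "/actuator/health/**").permitAll()
                .anyRequest().authenticated()
            .and()
                .httpBasic();
    }
}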
Setup 2 (Working Fine):
I have the same setup in my docker-compose file, and I can access the health check endpoint in my local container.
This is how I defined it in my compose file:
service:
  image: repo/a:name
  container_name: container-1
  ports:
    - "9904:9904"   # traffic port
    - "8084:8084"   # management port
So, I tried configuring the management port in the container section of the task definition. I then tried updating the corresponding service to this latest revision of the task definition, but when I save the service, I get an error. Is this the right way of handling this?
Error in ECS console:
Failed updating Service : The task definition is configured to use a dynamic host port,
but the target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-2:{accountId}:targetgroup/ecs-container-tg/{someId} has a health check port specified.
Service
Possible resolutions:
Is there a way I can specify this port mapping in the Dockerfile?
Is there another way to configure the management port mapping in the container configuration of the task definition within ECS? (Preferred)
Get rid of Spring Boot's actuator endpoint and implement our own health endpoint? (Bad: I would need to implement a lot of things to show all the details that Spring Boot returns.)
The task definition is configured to use a dynamic host port but target has a health check port specified.
Based on the error, it seems you have configured dynamic port mapping in the task definition; you can verify this in the task definition:
understanding-dynamic-port-mapping-in-amazon-ecs
With dynamic port mapping, the ECS scheduler will assign and publish a random port on the host, which will be different from 8082, so change the health check setting to use the traffic port.
This will resolve the health check issue. Now, to your questions:
Is there a way I can specify this port mapping in the Dockerfile?
No. Port mapping happens at run time, not at build time; you can only specify it in the task definition.
Is there another way to configure the management port mapping in the container configuration of the task definition within ECS? (Preferred)
You can assign a static port mapping, which means the published port and the exposed port will be the same (8082:8082); with static port mapping, the health check will work.
Get rid of Spring Boot's actuator endpoint and implement our own health endpoint? (Bad: I would need to implement a lot of things to show all the details that Spring Boot returns.)
The health check is a simple HTTP GET call for which the ALB expects a 200 HTTP status code in response, so you can create a simple endpoint that returns a 200 status code.
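If you ever did go that route, the endpoint can be as small as the sketch below, and the target group's health check path would simply point at it. The controller and path names are arbitrary examples:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal health endpoint: the ALB only checks for an HTTP 200, so no body
// or detail is needed. The /ping path is an arbitrary example.
@RestController
public class PingController {

    @GetMapping("/ping")
    public ResponseEntity<Void> ping() {
        return ResponseEntity.ok().build();
    }
}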
So, after 2 days of trying different things, this is what worked:
In the task definition, the networking mode should be "bridge".
In the task definition, leave the CPU and memory units empty; providing them at the container level should be enough.
I've started building a microservice application with the Netflix stack, and have been successful in registering clients with the Eureka discovery server.
I want to have two instances of each client service, and I'm wondering what happens if one instance of a client goes down. Does load balancing handle such situations? If yes, then isn't Eureka also acting as a failover system?
We have a collection of microservices built with Spring Boot, using Spring Cloud Netflix. Up until now, they've been packaged as RPMs and deployed to VMs. Using Eureka has allowed for service registration/discovery (obviously) and our cross-microservice interaction to be done using Spring's RestTemplate with a Virtual IP (VIP), like the following:
http://foo-service/<PATH_TO_RESOURCE>
Client-side load-balancing was another benefit.
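For context, that VIP-style call usually relies on a load-balanced RestTemplate bean along these lines (a sketch assuming Spring Cloud Netflix/Ribbon; the configuration class name is arbitrary, and foo-service stands for the Eureka application name):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // @LoadBalanced makes the client resolve "foo-service" against the Eureka
    // registry and spread requests across the registered instances.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

A call such as restTemplate.getForObject("http://foo-service/<PATH_TO_RESOURCE>", String.class) then picks one of the registered foo-service instances per request.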
Now, we are looking to use Docker and run within Rancher. I'm wondering whether using Eureka still makes sense in this environment.
Within Rancher, if the Service is named 'foo-service', that name is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
Plus, it seems Rancher will know about Containers coming and going sooner than Eureka would.
I'm struggling to find a solid reason to keep Eureka.
I'm not very familiar with Rancher; AFAIK, it enables users to deploy a choice of Cattle, Docker Swarm, Apache Mesos, or Kubernetes to manage their containers.
So, it finally comes down to whether your infrastructure platform provides service discovery functionality or not (I know Docker Swarm and Kubernetes provide service discovery; I'm not sure about the others). If you get service discovery out of the box from your platform and you don't need client-side load balancing, Eureka is overkill.
Here is an answer to the same question in the context of Kubernetes:
https://stackoverflow.com/a/40568412/6785908
Quoting the relevant parts
On the Kubernetes platform, using Eureka (or Consul/ZooKeeper or any other service registry) for service discovery is overkill; you can achieve the same (arguably) functionality with Kubernetes Services (plus the kube-dns add-on), which act as a referable IP address and a load balancer (not client-side) for the ephemeral Pods. Read this article by Christian Posta: http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/. If you want to refer to your service by its name instead of its IP address, add KubeDNS (a Kubernetes add-on) to your cluster.
Edit
Since you said,
Within Rancher, if the Service is named 'foo-service', it is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka. Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
So you are getting both service discovery and (server-side) load balancing from your platform for free. If you don't have a compelling reason to do client-side load balancing, forget about Eureka.
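In other words, if Rancher's internal DNS/VIP handles the discovery and balancing, the client side can drop back to a plain RestTemplate. A sketch, assuming 'foo-service' resolves on the Rancher internal network (the class name is arbitrary):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class PlainRestClientConfig {

    // No @LoadBalanced annotation: the platform's internal DNS/VIP resolves
    // "foo-service" and balances traffic across its containers server-side.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}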
We have three Spring Boot applications:
Eureka Service
Config Server
Simple Web Service making use of Eureka and Config Server
I've set up the services so that we use Eureka First discovery, i.e. the simple web application finds out about the config server from the Eureka service.
When they are started separately (either locally or as individual Docker images), everything is OK, i.e. the config server is started after the discovery service is running, and the simple web service is started once the config server is running.
When docker-compose is used to start the services, they obviously start at the same time and essentially race to get up and running. This isn't an issue, as we've added failFast: true and retry values to the simple web service and also have the Docker container restarting, so the simple web service will eventually restart at a time when the discovery service and config server are both running, but this doesn't feel optimal.
The unexpected behaviour we noticed was the following:
The simple web service retries a number of times to connect to the discovery service. This is sensible and expected.
At the same time, the simple web service attempts to contact the config server. Because it cannot contact the discovery service, it falls back to retrying a config server on localhost, e.g. the logs show retries going to http://localhost:8888. This wasn't expected.
The simple web service eventually connects to the discovery service successfully, but the logs show it still tries to reach the config server at http://localhost:8888. Again, this isn't ideal.
Three questions/observations:
Is it a sensible strategy for the config client to fall back to trying localhost:8888 when it has been configured to use discovery to find the config server?
When the Eureka connection is established, should the retry mechanism not then switch to trying the config server endpoint indicated by Eureka? Essentially, putting longer retry intervals and periods on the config server connection is pointless in this case, as it's never going to connect if it's looking at localhost, so we're better off just failing fast.
Are there any properties that can override this behaviour?
I've created a sample GitHub repo that demonstrates this behaviour:
https://github.com/KramKroc/eurekafirstdiscovery/tree/master