I'm using Spring Cloud Eureka for microservice registration in a Dockerised environment on AWS.
As I'm using Docker's ephemeral port mapping, the port exposed on the container host is not known in advance. To overcome that, I have a custom EurekaInstanceConfigBean that asks the Docker daemon on the host for the assigned port so I can use it to register with Eureka.
That all works fine until registration starts. The EurekaDiscoveryClientConfiguration contains an @EventListener(EmbeddedServletContainerInitializedEvent.class) that overrides the external port I've assigned in my custom EurekaInstanceConfigBean and sets it back to the local port inside the container.
I think the listener's purpose is to support automatic port assignment in the case of server.port=0, but in my setup it's breaking things.
The question is: can I somehow stop the EurekaDiscoveryClientConfiguration from overriding my manually set port? Can I somehow use my own EurekaDiscoveryClientConfiguration?
You could use host networking, so the Docker container uses the network stack of the host, which makes the service accessible on its IP addresses.
I used this approach with docker-compose. The services all get random ports, except the edge services that act as reverse proxies (Zuul based in my case); those edge services have stable ports.
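For reference, a minimal docker-compose sketch of that setup (the image names are placeholders, not from the original post):

version: "2"
services:
  foo-service:
    image: example/foo-service     # placeholder image
    network_mode: host             # share the host's network stack; no port mapping needed
    environment:
      - SERVER_PORT=0              # let Spring Boot pick a random free port on the host
  edge-service:
    image: example/zuul-edge       # placeholder Zuul-based reverse proxy
    network_mode: host
    environment:
      - SERVER_PORT=8080           # stable, well-known port for the edge service

Because each container shares the host's network, the port Spring Boot binds to is already the externally reachable one, so Eureka can register it without any port translation.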
I am trying to build a batch service in an existing application that has the server.port=8080 property configured in its application.properties file. When I run the batch process and Spring Batch tries to bring up the remote partitions (separate JVMs), Spring Cloud Deployer Local throws an error saying:
"\r\n\r\n***************************\r\nAPPLICATION FAILED TO START\r\n***************************\r\n\r\nDescription:\r\n\r\nThe Tomcat connector configured to listen on port 8080 failed to start. The port may already be in use or the connector may be misconfigured.\r\n\r\nAction:\r\n\r\nVerify the connector's configuration, identify and stop any process that's listening on port 8080, or configure this application to listen on another port.
Is there a way to make the framework assign random ports to the worker partitions while keeping the server.port property that is already configured in application.properties as is?
Thanks.
A Spring Batch remote partitioning setup requires a message broker for the communication between the manager and the workers, but it does not require any web capabilities. You seem to be deploying all your apps (manager and workers) locally as web applications, hence the port conflict when multiple workers are deployed.
You have at least two options:
Either set a random server port for each app (see how Spring Boot allows you to do that here; a minimal sketch follows this list)
Or, if the number of workers is fixed, set their ports statically to distinct values.
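A minimal sketch of option 1, assuming the worker apps use a standard application.yml (the equivalent of server.port=0 in application.properties):

server:
  port: 0   # Spring Boot binds each worker to a random free port at startup

With port 0, every worker JVM picks its own free port, so several workers can start on the same machine without the Tomcat connector clash shown above.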
I am trying to deploy Spring Boot microservices using Docker with App Mesh and EC2. I have deployed two sample microservices (https://github.com/amitgct/appmesh-hello), namely caller-service and called-service, using Docker on a single EC2 instance and configured App Mesh accordingly by following the guide https://docs.aws.amazon.com/app-mesh/latest/userguide/getting-started-ec2.html. Currently my applications are running on EC2, but they cannot communicate with each other; I get an "Unknown host" error when calling called-service from caller-service. Can anyone tell me how I can specify a hostname and register a service with that host on EC2 and App Mesh? (Note: I don't want to use Kubernetes, ECS, AWS Cloud Map or AWS Route 53.) If you can also provide an example, I would be very thankful. Please help.
https://www.appmeshworkshop.com/servicediscovery/
A step-by-step process is shown there, and it covers the HTTP protocol.
But if you change the listeners section in the virtual routers/routes to tcp, it should work for TCP traffic as well, i.e. for systems that communicate over plain TCP, for example Akka clusters.
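As an illustration only (the virtual node name below is made up, and the exact fields should be double-checked against the App Mesh documentation), the route spec would then use a tcpRoute instead of an httpRoute:

tcpRoute:
  action:
    weightedTargets:
      - virtualNode: called-service-node   # assumed name of the target virtual node
        weight: 1

The listeners on the corresponding virtual nodes and virtual router would use protocol tcp in their portMapping as well.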
Setup 1 (not working):
I have a task running in an ECS cluster, but it goes down because of a failed health check immediately after it starts.
My service is Spring Boot based and has both a traffic port (for service calls) and a management port (for the health check). I have permitAll() permission for the "*/health" path.
I also configured the same port override by selecting the override-port option in the target group's health check tab.
Setup 2 (working fine):
I have the same setup in my docker-compose file and I can access the health check endpoint in my local container.
This is how I defined it in my compose file:
service:
  image: repo/a:name
  container_name: container-1
  ports:
    - "9904:9904"   # traffic port
    - "8084:8084"   # management port
So I tried configuring the management port in the container section of the task definition and updating the corresponding service to this latest revision of the task definition, but when I save the service, I get an error. Is this the right way of handling this?
Error in ECS console:
Failed updating Service : The task definition is configured to use a dynamic host port,
but the target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-2:{accountId}:targetgroup/ecs-container-tg/{someId} has a health check port specified.
Possible resolutions:
Is there a way I can specify this port mapping in the Dockerfile?
Another way to configure the management port mapping in the container config of the task definition within ECS? (Preferred)
Get rid of Spring Boot's actuator endpoint and implement our own endpoint for health? (Bad: I would need to implement a lot of things to show all the details that Spring Boot returns.)
The task definition is configured to use a dynamic host port, but the target group has a health check port specified.
Based on the error, it seems you have configured dynamic port mapping; you can verify this in the task definition.
understanding-dynamic-port-mapping-in-amazon-ecs
So with dynamic port mapping, the ECS scheduler assigns and publishes a random port on the host, which will be different from 8082, so change the health check setting to use the traffic port instead.
This will resolve the health check issue. Now, coming to your queries:
Is there a way I can specify this port mapping in the Dockerfile?
No. Port mapping happens at run time, not at build time; you specify it in the task definition.
Another way to configure the management port mapping in the container config of the task definition within ECS? (Preferred)
You can assign a static port mapping, which means the published host port and the exposed container port are the same (8082:8082); with static port mapping the health check will work.
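As a sketch, assuming the management port in the task definition is the 8082 mentioned above, the static mapping in the container definition would look like this (container port 8082 published on the same host port over TCP; task definitions are JSON, so no inline comments):

"portMappings": [
  { "containerPort": 8082, "hostPort": 8082, "protocol": "tcp" }
]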
Get rid of Spring Boot's actuator endpoint and implement our own endpoint for health? (Bad: I would need to implement a lot of things to show all the details that Spring Boot returns.)
A health check is a simple HTTP GET call for which the ALB expects a 200 HTTP status code in response, so you could create a simple endpoint that returns a 200 status code.
So, after 2 days of trying different things:
In the task definition, the networking mode should be "Bridge".
In the task definition, leave the task-level CPU and memory units empty; providing them at the container level should be enough.
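For illustration, the relevant parts of such a task definition might look roughly like this (the family name and resource values are placeholders; the container name, image and ports are taken from the compose file above; with bridge mode and no hostPort, ECS assigns a dynamic host port):

{
  "family": "my-service",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "container-1",
      "image": "repo/a:name",
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        { "containerPort": 9904 },
        { "containerPort": 8084 }
      ]
    }
  ]
}

Note there is no task-level cpu or memory here; the values live on the container definition, matching the second point above.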
I have a development environment on Windows where I can access the Spring application via both the IP address and the hostname (PC name) for my Eureka config server / client. When I move it to the Red Hat environment, the URL is not recognized if it uses the hostname.
My main goal is to change the Eureka client's status page to point to the Hystrix monitor for the Eureka client's Hystrix stream. The value of ${spring.cloud.client.hostname} resolves to the hostname; I was wondering how to make it resolve to the current IP of the Eureka client instead.
To be exact, here is an example of what I am trying to do.
eureka:
  instance:
    preferIPAddress: true
    statusPageUrlPath: http://${spring.cloud.client.hostname}:${eureka.cloud.config.port}/hystrix/monitor?stream=http%3A%2F%2F${spring.cloud.client.hostname}%3A${server.port}%2Factuator%2Fhystrix.stream
It just so happens that the client and the server are both on the same machine, so I am content with using the client hostname for both the Eureka config server path and the Eureka client's Hystrix stream.
Note that I already set preferIPAddress to true, but the generated hostname is still the value of /etc/hostname. I have seen solutions that explicitly specify the IP address in the Eureka client instance, but I prefer to keep it dynamic so that the same code runs smoothly in both the development and the deployment environments.
What can I do so that the hostname is recognized the same way as the IP address?
The answer has just dawned on me. I simply changed the following:
${spring.cloud.client.hostname}
↓↓↓↓↓↓↓↓↓↓↓↓
${spring.cloud.client.ip-address}
I was led to a wrong conclusion because other sites told me to use the ${spring.cloud.client.ipAddress} configuration, which does not work.
Probably there was a change in the Finchley / Spring Boot 2.0 release. If anyone can give me a link to documentation or a discussion describing the configuration change, that would be helpful.
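Applied to the configuration above, the corrected snippet looks like this (the custom ${eureka.cloud.config.port} property is kept exactly as in the question):

eureka:
  instance:
    preferIPAddress: true
    statusPageUrlPath: http://${spring.cloud.client.ip-address}:${eureka.cloud.config.port}/hystrix/monitor?stream=http%3A%2F%2F${spring.cloud.client.ip-address}%3A${server.port}%2Factuator%2Fhystrix.stream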
We have a collection of microservices built with Spring Boot, using Spring Cloud Netflix. Up until now, they've been packaged as RPMs and deployed to VMs. Using Eureka has allowed for service registration/discovery (obviously) and our cross-microservice interaction to be done using Spring's RestTemplate with a Virtual IP (VIP), like the following:
http://foo-service/<PATH_TO_RESOURCE>
Client-side load-balancing was another benefit.
Now we are looking to use Docker and run within Rancher, and I'm wondering whether using Eureka still makes sense in this environment.
Within Rancher, if the Service is named 'foo-service', that name is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
Plus, it seems Rancher will know about Containers coming and going sooner than Eureka would.
I'm struggling to find a solid reason to keep Eureka.
I'm not very familiar with Rancher; AFAIK it enables users to deploy a choice of Cattle, Docker Swarm, Apache Mesos or Kubernetes to manage their containers.
So it finally comes down to whether your infrastructure platform provides service discovery or not (I know Docker Swarm and Kubernetes provide service discovery; I'm not sure about the others). If you get service discovery out of the box from your platform and you don't need client-side load balancing, Eureka is overkill.
Here is an answer to the question in the context of Kubernetes:
https://stackoverflow.com/a/40568412/6785908
Quoting the relevant parts
In the Kubernetes platform, using Eureka (or Consul/ZooKeeper or any other service registry) for service discovery is overkill; you can achieve the same (arguably) functionality with Kubernetes Services (+ the kube-dns add-on), which will act as a referable IP address and a load balancer (not client side) for the ephemeral Pods. If you want to refer to your service by its name instead of its IP address, add KubeDNS (a Kubernetes add-on) to your cluster. Read this article by Christian Posta:
http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/
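To make that concrete, here is a minimal Kubernetes Service manifest (names and ports are placeholders, not taken from the posts above) that gives the backing pods a stable, DNS-resolvable name:

apiVersion: v1
kind: Service
metadata:
  name: foo-service            # resolvable as http://foo-service inside the cluster via kube-dns
spec:
  selector:
    app: foo-service           # must match the labels on the Spring Boot pods
  ports:
    - port: 80                 # port clients call
      targetPort: 8080         # container port the Spring Boot app listens on

Traffic sent to foo-service is then load balanced across the matching pods on the server side, which is the role Eureka plus client-side load balancing would otherwise play.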
Edit
Since you said,
Within Rancher, if the Service is named 'foo-service', it is used as a VIP within the Rancher internal network, so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
You are getting both service discovery and (server-side) load balancing from your platform for free, so if you don't have a compelling reason to do client-side load balancing, forget about Eureka.