We are having the following problem: when deploying several microservices, one of them is sometimes, seemingly at random, registered in the registry with an incorrect IP. I understand that Eureka is incorrectly identifying the IP of the container ...
We tested several solutions, like this application.yml config, but we could not get it working properly. Any ideas?
eureka:
  instance:
    prefer-ip-address: true
    hostname: ${server.address}
    ip-address: ${server.address}
I saw this solution here: https://groups.google.com/d/msg/jhipster-dev/n7s7OTgT18E/RtZ3O4hlEwAJ
But this config throws "Could not resolve placeholder 'server.address' in string value". This makes sense when I read this: Reference a key in application.yml
#snowblind This is most probably a problem with Docker container networking. When doing the initial registration, Eureka only uses what is available from inside the application. It will determine its address with something like java.net.InetAddress and then propagate this value to the registry to advertise how the service can be reached. In your case, however, it seems to be reporting wrong information.
So first, make sure you always map the container port to the same host port. Alternatively, you can use Docker host networking (--net=host) so that your container shares the same network interface as your host. This is possible in docker-compose by adding net: "host" to your service description.
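For illustration only, here is a minimal docker-compose sketch of both options; the service name, image and port are hypothetical, and newer Compose files use network_mode where the old v1 format used net:

# Option 1: always publish the container port on the same host port
my-service:
  image: my-service:latest
  ports:
    - "8081:8081"

# Option 2: share the host's network interface (cannot be combined with ports)
my-service:
  image: my-service:latest
  network_mode: "host"    # net: "host" in old Compose v1 files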
Another idea would be to use hostname-based Eureka registration instead of the IP-based one that we configure by default (prefer-ip-address = true), but I cannot guarantee that it will work as I've never tried it myself; please refer to the Eureka docs if you want to do this.
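If you do try the explicit route, here is a rough sketch of the two variants, assuming you inject the value yourself through an environment variable you define (EUREKA_INSTANCE_HOSTNAME and EUREKA_INSTANCE_IP are hypothetical names, e.g. set via the environment section of docker-compose):

# hostname-based registration
eureka:
  instance:
    prefer-ip-address: false
    hostname: ${EUREKA_INSTANCE_HOSTNAME}

# or IP-based registration with an explicitly injected address
eureka:
  instance:
    prefer-ip-address: true
    ip-address: ${EUREKA_INSTANCE_IP}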
Related
I've looked over most of the documentation provided, but couldn't find a definitive answer about changing the jhipster-registry port. Its default is 8761, and when I try to change its port through the YAML config file it does indeed run on that port, but then both the gateway and the microservice can no longer be found by the registry. Am I doing something wrong? Is jhipster-registry meant to stay on its default port?
You must change the port in spring.cloud.config.uri in every application's bootstrap*.yml so they can retrieve their config from the registry, and also change it in eureka.client.service-url.defaultZone in the application.yml in jhipster-registry's central-config folder if you use the file system backend, or in the git repo if you use the git backend.
This is because the registry is both a Spring Cloud Config server and a Eureka server. In JHipster's setup, the applications first connect to the config server and retrieve their config, which indicates the URL of the Eureka server. As this config is common to all apps, it is set in application*.yml on the config server.
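As an illustration, assuming you move the registry to port 8762 (the port, credentials and URI shape below are only an example and depend on your JHipster version), the two places to update would look roughly like this:

# in each application's bootstrap*.yml
spring:
  cloud:
    config:
      uri: http://admin:admin@localhost:8762/config

# in the registry's central-config/application.yml (or the git repo)
eureka:
  client:
    service-url:
      defaultZone: http://admin:admin@localhost:8762/eureka/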
Please read also the jhipster-registry doc: https://www.jhipster.tech/jhipster-registry/
I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a Docker container. It accesses services that are deployed using docker-compose. This has been done by using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App 2 uses docker-compose, so myNetwork is defined in its compose file and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host, but it does not seem to work. If I go into my app1 container and do a curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: Docker version 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to use a port-forward with k9s. When creating it, it's important not to keep the default interface, which is set to localhost, but to use 0.0.0.0 instead so that it listens for traffic on all interfaces.
Then I changed my containers from the inside, making the services point to my host's IP when trying to resolve the service names. Use whatever method fits your case best for this: since it's not a production environment, I just hardcoded my host IP manually to check that connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host through different port-forwards. Name resolution is no longer needed.
With this configuration, your container's request reaches your host, where the port-forward routes it to the cluster. Connectivity works with this setup and the problem is solved.
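The same setup can be sketched with plain kubectl instead of k9s; the service name and ports below are hypothetical, and host.docker.internal is a Docker for Mac convenience name for reaching the host from inside a container:

# listen on all interfaces so traffic coming from the containers reaches the forward
kubectl port-forward --address 0.0.0.0 svc/my-cluster-service 9090:80

# from inside app1, target the host instead of the cluster DNS name
curl http://host.docker.internal:9090/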
I deployed a RabbitMQ server on my Kubernetes cluster and I am able to access the management UI from the browser. But my Spring Boot app cannot connect to port 5672 and I get a connection refused error. The same code works if I change my application.yml properties from the Kubernetes host to localhost and run a Docker image on my machine. I am not sure what I am doing wrong.
Has anyone tried this kind of setup?
Please help. Thanks!
Let's say the DNS name is rabbitmq. If you want to reach it, you have to make sure that rabbitmq's deployment has a Service attached that exposes the correct ports. You would then target rabbitmq:5672.
To make sure this or something similar exists, you can debug k8s Services. Run kubectl get services | grep rabbitmq to make sure the Service exists. If it does, get the Service YAML by running kubectl get service rabbitmq-service-name -o yaml. Finally, check spec.ports[] for the ports that allow you to connect to the pod, and look for 5672 in spec.ports[].port for AMQP. In some cases the port might have been changed: spec.ports[].port might be 3030, for instance, while spec.ports[].targetPort is 5672.
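For reference, a sketch of what such a Service could look like; the names and the remapped 3030 port are just an example:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq               # hypothetical service name
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      port: 3030               # what clients inside the cluster connect to
      targetPort: 5672         # what the RabbitMQ container actually listens on
    - name: management
      port: 15672
      targetPort: 15672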
Are you exposing the TCP port of RabbitMQ outside of the cluster?
Maybe only the management port has been exposed.
If you can connect to the management UI but not on port 5672, that may indicate that port 5672 is not exposed outside of the cluster.
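If the Spring Boot app really runs outside the cluster, the usual fix is to expose 5672 through a NodePort or LoadBalancer Service; as a quick check under that assumption you can also forward the port to your machine (the service name is hypothetical):

# temporary check from your machine, without changing the Service
kubectl port-forward svc/rabbitmq 5672:5672
# then point application.yml at localhost:5672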
Note: if I did not understand your question correctly, please let me know.
Good luck
I am migrating my Spring Cloud Eureka application to AWS ECS and am currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2 services were created:
Eureka-server
Eureka-client
Each service has a task running on it.
QUESTION:
How do I establish a "docker network" between these two services such that I can register my eureka-client in the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check your task definition, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your eureka-client apps and use it to register with your eureka-server. But of course your tasks get destroyed and recreated, so you easily lose that address.
I've done this before and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the ECS tasks acting as eureka-server or registry, assign Elastic IP addresses so you always know which host IP to connect to.
You also need to tag them properly so you can refer to them in the next step.
Then, switching back to ECS, when deploying your eureka-server tasks there is an argument in the task definition configuration called placement_constraint.
This allows you to add a constraint to your tasks so they are placed on the instances you assigned Elastic IP addresses to in the previous step.
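As an illustration, assuming you set a custom attribute such as purpose=eureka-server on those instances (the attribute name and value are hypothetical), the relevant part of the task definition could look like this:

"placementConstraints": [
  {
    "type": "memberOf",
    "expression": "attribute:purpose == eureka-server"
  }
]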
Now, if all of this is in place and everything is deployed, you should be able to point your eureka-client apps at that IP and have them registered.
I know this looks dirty and kind of complicated, but the Netflix OSS project for Eureka has missing parts which, I believe, belong to their proprietary implementation for internal use that they don't want to share.
Another, and probably cleaner, way of doing this is to use a Route53 domain or alias record for your instances, so that instead of an Elastic IP you can refer to them via DNS.
I am doing a POC on Consul for supporting service discovery and multiple microservice versions. Consul clients and server cluster(3 servers) are set up on Linux VMs. I followed the documentation at Consul and the set up is successful.
Here is my doubt: my setup is entirely on VMs. I've added a service definition using the HTTP API. The same service is running on two nodes.
The services are correctly registered:
curl http://localhost:8500/v1/catalog/service/my-service
gives me the two node details.
When I do a DNS query:
dig @127.0.0.1 -p 8600 my-service.service.consul
I am able to see the expected results with the node that hosts the service. But I cannot ping the service, since the service name is not resolved.
ping -c4 my-service or ping -c4 my-service.service.consul
ping: unknown host.
If I add a mapping for my-service to the /etc/hosts file, I can ping it, but only from the same VM. I won't be able to ping it from another VM on the same LAN or WAN.
The default port for DNS is 53, while the Consul DNS interface listens on 8600. I cannot use Docker for DNS forwarding. Is it possible I missed something here? Can a Consul DNS query work without Docker/dnsmasq or iptables updates?
To be clear, here is what I would like to have as the end result:
ping my-service
This should ping the nodes I have configured, in a round-robin fashion.
Please bear with me if this question is basic; I've gone through each of the Consul-related questions on SO.
I've also gone through this and this, and these too say I need to do extra setup.
Wait! Please don't do this!
DO. NOT. RUN. CONSUL. AS. ROOT.
Please. You can, but don't. Instead do the following:
Run a caching or forwarding DNS server on your VMs. I'm biased toward dnsmasq because of its simplicity and stability in the common case.
Configure dnsmasq to forward the TLD .consul to the consul agent listening on 127.0.0.1:8600 (the default).
Update your /etc/resolv.conf file to point to 127.0.0.1 as your nameserver.
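For example, a minimal dnsmasq snippet for the forwarding step (the file name is just a common convention):

# /etc/dnsmasq.d/10-consul
# forward every *.consul query to the local Consul agent's DNS port
server=/consul/127.0.0.1#8600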
There are a few ways of doing this, and the official docs have a write up that is worth looking into:
https://www.consul.io/docs/guides/forwarding.html
That should get you started.
This can be a pretty complicated topic, but the simplest way is to change Consul to bind to port 53 using the ports directive, and add some recursors to the Consul config so it can pass real DNS requests on to a host that has full DNS functionality. Something like these bits:
{
  "recursors": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "ports": {
    "dns": 53
  }
}
Then modify your system to use the Consul server for DNS with a nameserver entry in /etc/resolv.conf. Depending on your OS, you might be able to specify a port in the resolv.conf file and avoid Consul needing root to bind to port 53.
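In the simple case that entry is just the following, assuming the Consul server answering DNS runs locally (use the server's IP otherwise):

# /etc/resolv.conf
nameserver 127.0.0.1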
In a more complicated scenario, I know many people who use unbound or bind to do split DNS and essentially do the opposite: routing the .consul domain to the Consul cluster on a non-privileged port at the org level of their DNS infrastructure.