Suppose I've got services A and B. Both of them are deployed to a test server and connected to Consul.
When I start service A on my local machine, it reads its configuration from Consul and interacts with the service B deployed on the test server.
How can I make service A interact with a local instance of service B if one is also running?
I thought of running a local Consul instance and proxying missing requests (configuration and service discovery) to the test server's Consul, but I didn't find any info about that.
How can / should I configure my local environment with Consul?
Steps for configuring Consul in a local environment:
Install Consul locally:
https://www.consul.io/downloads.html
Run the agent in development mode:
consul agent -dev
You can use the git2consul tool to read the config from a local git repository, like
git2consul --config <path to git2consul file>
https://github.com/breser/git2consul
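As a rough sketch, a git2consul config file could look like the following; the repo name, path, and polling interval here are all hypothetical, so check the git2consul README for the exact schema:

# git2consul.json (all values hypothetical)
{
  "version": "1.0",
  "repos": [{
    "name": "service-config",
    "url": "file:///path/to/local/config-repo",
    "branches": ["master"],
    "hooks": [{
      "type": "polling",
      "interval": "1"
    }]
  }]
}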
If you want to avoid using the service B running in the test environment, you should register your local service B in the Consul server with a different name, like C, and also change your local service A to consume it.
This way you would have two instances of service A, one instance of service B, and one instance of service C registered in Consul.
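For illustration, a minimal sketch of registering the local service B under the name C; the file path, service name, and port here are hypothetical:

# Write a service definition that registers local service B as "service-c"
# (hypothetical name and port)
cat > ./consul.d/service-c.json <<'EOF'
{
  "service": {
    "name": "service-c",
    "port": 8081
  }
}
EOF

# Start the local dev agent with that config directory
consul agent -dev -config-dir=./consul.d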
Related
I have installed Docker Desktop locally (Windows 11 OS) and enabled the Kubernetes cluster in it. By default, Docker Desktop creates two entries in the hosts file: host.docker.internal and kubernetes.docker.internal. I am running a Spring Boot application which is dockerized, and I have put it inside the Kubernetes cluster. I am trying to connect to a local database (MSSQL) by passing kubernetes.docker.internal and host.docker.internal as the host in the application.yml. However, it complains that the host is not found. Is there any specific config I might be missing?
I have recently started exploring the microservice architecture using JHipster and was trying to install and run the jhipster-registry from Docker Hub. Docker shows that the registry is running, but I am unable to access it on port 8761.
Pulled the image with docker pull jhipster/jhipster-registry
Started the container with docker run --name jhipster-registry -d jhipster/jhipster-registry
Here's a snapshot of what docker container ls returns:
Am I missing something over here?
You are starting the JHipster Registry container, but you aren't exposing the port.
You can publish the port by passing the flag -p 8761:8761, which will enable you to connect to it via localhost:8761 or 127.0.0.1:8761 in a browser.
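For example, the run command from the question with the port mapping added:

docker run --name jhipster-registry -d -p 8761:8761 jhipster/jhipster-registry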
You may need to configure some environment variables for the JHipster Registry to start correctly. These may depend on your generated app's options, such as the authentication type. For convenience, JHipster apps come with a docker-compose file. You can start it with docker-compose -f src/main/docker/jhipster-registry.yml up, as documented.
According to the docker documentation here
https://docs.docker.com/network/host/
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
On Mac what alternatives do people use?
My scenario
I want to run a docker container that'll host a micro-service
The micro-service has dependencies upon databases that I'm also running via docker
I thought I'd be able to use --net=host on Mac when running the micro-service
But the micro-service port is not exposed
I can override the db addresses (they default to localhost) on the microservice.
But that involves a lot of --env flags
What's the simplest / most elegant solution?
The simplest and most elegant solution is to use a named Docker bridge network.
You can create a custom bridge network (the default one is named bridge) like this:
docker network create my-network
Every container deployed inside this network can communicate with the others by using the container name.
$ docker run --network=my-network --name my-app ...
$ docker run --network=my-network --name my-database...
In the example above you can connect to your database from inside your application by using my-database:port. If the container port is exposed in the Dockerfile, you don't need to map it on your host, and you can keep all communication internal to your custom Docker bridge network.
In most cases the application's port is mapped (example: -p 80:80), so localhost:80 is mapped to container:80 and you can access the app on your localhost. If the app needs to communicate with a db, you don't need to expose the db's port or map it to localhost, as explained above.
Just keep the communication between app and db internal in your custom bridge network.
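As a concrete sketch, assuming a Postgres database and an app image called my-app-image (both names are hypothetical):

# Create the custom bridge network
docker network create my-network

# Start the database on the network; no -p needed, it stays internal
docker run -d --network=my-network --name my-database -e POSTGRES_PASSWORD=secret postgres

# Start the app on the same network; only the app's port is published
docker run -d --network=my-network --name my-app -p 80:80 my-app-image

# Inside my-app, the database is reachable at my-database:5432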
I'm trying to use Hue as a file browser for HDFS. For that I have cloned the hue repository and built the app with the following commands, as given in the README.md of the hue repository.
git clone https://github.com/cloudera/hue.git
cd hue
make apps
build/env/bin/hue runserver
The Hue UI is accessible on the local machine on the default port, using the url http://localhost:8000, and everything works fine. But when I use my machine's ip address, http://x.x.x.x:8000, and try to access the Hue UI, it keeps on processing and waiting.
Other observations:
I can ping from remote machine to the host machine.
There is no firewall blocking the ports. (checked with nmap port scanner)
Machines are in same network.
I can access other ports for Hadoop NameNodes UI and DataNodes.
Changing the http_host in hue.ini doesn't affect the result
The ideal setup for Hue is to configure a reverse proxy in front of it (Nginx or Apache HTTP, for example).
However, you should refer to the Configuration documentation to run the server externally, outside of 127.0.0.1:
[desktop]
# Webserver listens on this address and port
http_host=0.0.0.0
http_port=8888
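And if you go the reverse proxy route, a minimal Nginx sketch could look like the following; the server_name is hypothetical, and it assumes Hue listens on 8888 as configured above:

server {
    listen 80;
    server_name hue.example.com;  # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}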
I was able to find a solution to the issue. Hue runs on a CherryPy web server, and starting it with the command build/env/bin/hue runserver starts the development server, where the hue.ini configuration is ignored.
So the correct command to start the production server, after setting up the correct configuration in the hue.ini file, is build/env/bin/hue runcpserver. Then I was able to access it from a remote host without any problem. You can also use supervisor to start the production server. More information about that can be found here.
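To summarize the two commands (paths assume the default layout of the hue checkout):

# Development server: ignores hue.ini, for local development only
build/env/bin/hue runserver

# Production (CherryPy) server: honors http_host/http_port from hue.ini
build/env/bin/hue runcpserver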
I have a K8s cluster, currently running on a single node (master+kubelet, 172.16.100.81). I have a config server image which I will run in a pod. The image talks to another pod named eureka-server. Both images are Spring Boot applications, and the eureka server's http address and port are defined by me. I need to pass the eureka server's http address and port to the config pod so that it can talk to the eureka server.
I start the eureka server (pseudo code):
kubectl run eureka-server --image=eureka-server-image --port=8761
kubectl expose deployment eureka-server --type NodePort:31000
Then I use the command "docker pull" to download the config server image and run it as below:
kubectl run config-server --image=config-server-image --port=8888
kubectl expose deployment config-server --type NodePort:31001
With these steps, I did not find a way to pass the eureka-server http address (master IP address 172.16.100.81:31000) to the config server. Are there methods I could use to pass the variable eureka-server=172.16.100.81:31000 to the config server pod? I know I could use an Ingress for K8s networking, but currently I use NodePort.
Generally, you don't need a NodePort when you want two pods to communicate with each other. A simpler ClusterIP service is enough.
Whenever you expose a deployment with a service, it becomes internally discoverable through DNS. Both of your exposed services can be reached from inside the cluster at:
http://config-server.default:8888 and http://eureka-server.default:8761, where default is the namespace and the port is the service port (the NodePorts 31001 and 31000 only apply on the node's IP).
172.16.100.81:31000 is what makes it accessible from outside the cluster.
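For example, you could pass the in-cluster address to the config server as an environment variable when starting it; the variable name below assumes a standard Spring Cloud Eureka client setup:

# Point the config server at the eureka-server service via cluster DNS
kubectl run config-server --image=config-server-image --port=8888 \
  --env="EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka-server.default:8761/eureka"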