This is very similar to the following question; however, the solution/answer to that previous question doesn't solve the problem.
In my case I'm not connecting to MySQL specifically, but trying to resolve www.google.com results in the same UnknownHostException, only within the container. When I run from just the JVM, not within a container, on my Mac, there are no issues resolving.
Same scenario where:
InetAddress ip = InetAddress.getByName("www.google.com");
I've tried the following suggested fix:
RUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
as well as:
RUN echo "hosts: files dns" >> /etc/nsswitch.conf
Neither seems to do the trick.
Are there any other suggestions out there, anything I'm missing in addition to the above suggestions?
Thanks in advance.
It turns out the solution was fairly simple, and there are a couple of options.
For you experts, this is probably funny, but at least it's one more idea for the next guy.
So what I've found is I can specify the DNS on each of the nodes in my swarm via:
/etc/docker/daemon.json
{
  "dns": ["10.0.0.2", "8.8.8.8"]
}
After setting this on each node, specifically 8.8.8.8 for Google's DNS, "google.com" resolved with no problem. Note that while it's a Google-specific server, it provides public DNS; Yahoo, Amazon, etc. all resolved as well. The 10.0.0.2 address stands in for any other DNS server you want to specify, and you can specify multiple.
This came from the following post: Fix Docker's networking DNS config
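Note that daemon.json is only read when the Docker daemon starts, so after editing it you need to restart the daemon on each node; a minimal sketch, assuming a systemd-based host:
# re-read /etc/docker/daemon.json
sudo systemctl restart docker
# quick check from a throwaway container that DNS now resolves
docker run --rm alpine nslookup google.com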
However, it's even easier if you want to specify the DNS via your compose/stack file.
Rather than go to each node in your swarm and update the daemon.json DNS entries, you can specify the DNS directly in your compose.
version: '3.3'
services:
  my-sample-service:
    image: my-repo/my-sample:1.0.0
    ports:
      - 8081:8080
    networks:
      - my-network
    dns:
      - 10.0.0.1 # whatever your internal DNS is; checked first
      - 10.0.0.2 # whatever other DNS you'd provide; checked second
      - 8.8.8.8  # Google's public DNS; even though Docker lists this as a
                 # default, it wasn't working until I set it explicitly.
                 # This is the last one checked if the first two don't resolve.
networks:
  my-network:
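If you deploy this as a stack, redeploying is enough for the dns change to take effect; a minimal sketch, assuming the file above is saved as docker-compose.yml and a hypothetical stack name of my-stack:
docker stack deploy -c docker-compose.yml my-stack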
Good day!
I have a microservice that runs in a container and a registry that stores the addresses of the microservices.
I also have a script that runs when the container starts. The script gets its local IP and sends it to another server using curl. After the script executes, exit code 0 is returned and the container exits. How can this problem be fixed?
# docker-compose realtime logs
nginx_1    | "code":"SUCCESSFUL_REQUEST"
nginx_1 exited with code 0
My bash script
#!/bin/bash
address=$(hostname -i)
curl -X POST http://registry/service/register -H 'Content-Type: application/json' -d '{"name":"'"$MICROSERVICE_NAME"'","address":"'"$address"'"}'
The script itself runs fine with no problems, but unfortunately it terminates the container's main process. Is it possible to somehow intercept this exit code so that it does not shut down the container?
I would be grateful for any help or comment!🙏
EDIT:
Dockerfile (here the script is called after the container starts):
FROM nginx:1.21.1-alpine
WORKDIR /var/www/
COPY ./script.sh /var/www/script.sh
RUN apk add --no-cache --upgrade bash && \
apk add nano
#launch script
CMD /var/www/script.sh
EDIT 2:
my docker-compose.yml
version: "3.9"
services:
#database
pgsql:
hostname: pgsql
build: ./pgsql
ports:
- 5432:5432/tcp
volumes:
- ./pgsql/data:/var/lib/postgresql/data
#registry
registry_fpm:
build: ./fpm/registry
depends_on:
- pgsql
volumes:
- ./microservices/registry:/var/www/registry
registry_nginx:
hostname: registry
build: ./nginx/registry
depends_on:
- registry_fpm
volumes:
- ./microservices/registry:/var/www/registry
- ./nginx/registry/nginx.conf:/etc/nginx/nginx.conf
#server
nginx:
build: ./nginx
environment:
MICROSERVICE_NAME: Microservice_1
depends_on:
- registry_nginx
ports:
- 80:80/tcp
The purpose of the registry is only to store the IPs of all microservices. If you are familiar with microservices, you probably know that the registry acts as the custodian of all microservice addresses; it is used by the other microservices to obtain addresses so that they can communicate over HTTP.
There is no need for these addresses as far as I can tell; the microservices can easily use each other's hostnames.
You already do this with your curl: the POST request goes to the host named registry, and so on.
Docker Compose may be all the orchestration you require for your microservices.
Regarding IPs and networking
If you prefer, for more isolation and consistency, you can configure the following in your compose.yaml (see the sketch after this list):
- custom networks: virtualised network adapters; think of them as VLANs where the nodes are selected containers only (for additional info on networking, refer to the Docker networking documentation)
- custom IP addresses for each container
- hostnames for each container
- links: deprecated; do not use; listed for information only
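A minimal sketch of those options in a compose.yaml, assuming a hypothetical network named backend with subnet 172.28.0.0/16:
services:
  registry_nginx:
    hostname: registry            # other containers can reach it as http://registry
    networks:
      backend:
        ipv4_address: 172.28.0.10 # optional static IP on the custom network
networks:
  backend:
    ipam:
      config:
        - subnet: 172.28.0.0/16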
Regarding heartbeat
Keeping track of a heartbeat shouldn't be necessary.
But if you really need one, doing it from within the container is a no-no: a container should be only one running process, and creating a new record is redundant, as the Docker daemon is already keeping track of all IPs and states (and loads of other things).
The function of the registry (keeping track of lifecycle) is instead played by the Docker daemon; try docker compose ps.
However, you can configure the container to restart automatically when it fails using the restart tag, as shown below.
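A minimal sketch of that restart policy applied to the nginx service from your compose file (unless-stopped is just one reasonable policy):
services:
  nginx:
    build: ./nginx
    restart: unless-stopped   # restart the container whenever its main process exits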
If you need a way to monitor these without the CLI, listening on the Docker socket is the way to go.
You could make your own dashboard that taps into the Docker API, whose endpoints are listed here. NB: the socket might need to be protected and, if possible, ought to be mounted read-only.
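For example, listing running containers over the socket looks roughly like this (assuming the default socket path):
# query the Docker Engine API over the local unix socket
curl --unix-socket /var/run/docker.sock http://localhost/containers/json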
But a better solution would be using an image that already does this. I cannot give you recommendations, unfortunately; I have not used any.
I'm writing because I've encountered an issue that doesn't seem to get resolved, and I would value the community's help.
I'm trying to push an image to a local registry I deployed on port 5000.
When I use this command docker push localhost:5000/explorecalifornia.com to push the image to my local registry, I get the following message
Get "http://localhost:5000/v2/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I've confirmed the registry is on port 5000 by using GET on Postman, and I get a valid, expected {} response (since there are currently no images in my local registry).
I've since tried to fix this by updating my /etc/hosts file to comment out "::1 localhost", per the advice of this post. These are the contents of my /etc/hosts file:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
# ::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
I also updated my /etc/resolv.conf file with the following nameservers, per the advice from this post:
nameserver 10.0.2.3
nameserver 8.8.8.8
nameserver 8.8.4.4
None of this worked. Did anyone else encounter this issue? Are there any recommendations to help fix it?
Here's the source code if it helps! Thank you in advance :)
You may have an HTTP proxy defined. Please try running these commands:
unset http_proxy
unset https_proxy
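You can also do a quick, non-authoritative check of whether a proxy is configured at the shell or daemon level:
# proxy variables visible to your shell
env | grep -i proxy
# proxy settings reported by the Docker daemon, if any
docker info | grep -i proxy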
I think something must have changed with the kind image. I use the script from this page:
https://kind.sigs.k8s.io/docs/user/local-registry/
I am able to create a new kind cluster and registry. After that, I have no problem following the video lecture to push the explorecalifornia.com image to the local registry.
I use a workaround for this error, as below:
First, tag the image using the localhost IP instead, i.e.
docker tag imagename 127.0.0.1:5000/imagename
and,
docker push 127.0.0.1:5000/imagename
I hope this works for you as well.
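If the push goes through, you can confirm the image landed with the registry's HTTP API (the repository list will vary):
# list repositories stored in the local registry
curl http://127.0.0.1:5000/v2/_catalog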
For testing purposes, I want to set up the kubernetes master to be only accessible from the local machine and not the outside. Ultimately I am going to run a proxy server docker container on the machine that is opened up to the outside. This is all inside a minikube VM.
I figure configuring kube-proxy is the way to go. I did the following
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
Upon running netstat -ln | grep 8443 I see tcp 0 0 :::8443 :::* LISTEN, which means it didn't take the IP.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1, but that only changes the advertised address to 10.x.x.x in kubeadm config view. I feel that is probably the wrong approach anyway; I don't want the API server to be inaccessible to the other Docker containers that need to access it.
I have also tried doing kubeadm config upload from-file --config ~/cluster.yaml and then attempting to manually restart the container running kube-proxy.
Also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out. When you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. Various online methods of restarting things don't seem to work either (I could just be doing this wrong).
Also tried editing the kube-proxy container's config file (bind-mounted to a local dir), but that gets overwritten when I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that allows me to edit the config file of kube-proxy either (since it's a DaemonSet).
Ultimately, I wish to use an authenticated proxy server sitting in front of the k8s master (the apiserver specifically). Direct access to the k8s master from outside the VM will not work.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible, at least via the local network where the other nodes reside, unless you want a single-node "cluster".
So, if you do not have a separate network interface that you could advertise or bind the address to, you need to limit it with the above-mentioned firewall or route rules, for example as sketched below.
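A minimal sketch of such a rule set, assuming the API server listens on 8443 as in the question:
# accept API server traffic arriving on the loopback interface only
iptables -A INPUT -p tcp --dport 8443 -i lo -j ACCEPT
# drop API server traffic from everywhere else
iptables -A INPUT -p tcp --dport 8443 -j DROP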
To your initial question topic, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586
I've been experimenting with an ICP instance (ICP 2.1.0.2): 1 master node and 2 worker nodes.
I noticed that the pods in my ICP Kubernetes cluster don't have outbound Internet connectivity (or are having DNS lookup issues)
For example, if I start up a busybox pod in my cluster and try to do "nslookup github.com" or "ping google.com", it fails:
kubectl run curl --image=radial/busyboxplus:curl -i --tty
[ root@curl-545bbf5f9c-gssbg:/ ]$ nslookup github.com
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'github.com'
I checked and saw that "kube-dns" (service, pod, daemonset.extensions, daemonset.apps) does appear to be running.
When I'm logged in (e.g. via SSH) to the ICP master and the worker node machines, I am able to ping these external sites successfully.
Any suggestions for how to troubleshoot this problem? Thanks!
We had kind of the reverse problem, where we could look up anything on the internet or in other domains, but not the domain in which the cluster was deployed.
That turned out to be the vague documentation around what cluster_domain and cluster_CA_domain mean in the config.yaml. But as a plus we got to learn a bit more about those and about configuring kube-dns.
Basically, cluster_domain should be a private virtual domain for the cluster, for which kube-dns will be authoritative. For anything else, it should use the host's resolv.conf nameservers as upstream servers. If you suspect that your DNS servers are not being utilised for public DNS, you can update the kube-dns ConfigMap to specify the upstream servers it should use, as in the example below.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
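A minimal sketch of that ConfigMap, using Google's public resolvers as example upstreams:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]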
This is assuming you have configured cluster_domain and cluster_CA_domain correctly, of course.
They should look something like
cluster_domain = mycluster.icp <----- could be "Mickey-mouse" for all it matters
cluster_CA_domain = icp.mycompany.com <----- the endpoint that portal/registry/api etc are accessible to users on
So I'm using dnsmasq for my local dev environment, and I need to set it up to use multiple domains, e.g. .dev, .test, .somethingelse. How can this be done?
Currently it's working with .dev only.
This is what my dnsmasq.conf looks like:
address=/dev/127.0.0.1
listen-address=127.0.0.1
For every (sub)domain you want to serve locally, add the following entry to your dnsmasq.conf:
address=/.domain/127.0.0.1
Now let your OS know that you want to redirect requests for this domain to your local dnsmasq nameserver. Do this by creating a file "domain" in "/etc/resolver".
/etc/resolver/domain has the following content:
nameserver 127.0.0.1
More info about the resolver mechanism can be found in the resolver(5) man page.
A more generic answer would be to have in /etc/dnsmasq.conf
local=/mylan/
and in /etc/hosts
192.168.1.3 dev dev.mylan
192.168.1.3 test test.mylan
192.168.1.4 build build.mylan
as per https://serverfault.com/questions/136332/setting-up-dnsmasq-for-a-local-network
(Note that this solution also helps with DHCP setups where you cannot have two hosts on the same IP, as the OP of that question wanted.)
For me, address=/.aaa.com/.bbb.com/127.0.0.1 does the trick.
Using .dev in development is not recommended, as Google actually owns that top-level domain.
You might want to use reserved TLDs, like .localhost, for development.
Good article about the same problem: https://web.archive.org/web/20180722223228/https://iyware.com/dont-use-dev-for-development/
In your /usr/local/etc/dnsmasq.conf add:
address=/dev/test/127.0.0.1
And then create files:
/etc/resolver/dev and /etc/resolver/test. Both with content:
nameserver 127.0.0.1
From now on, all xyz.dev and xyz.test domains will point to 127.0.0.1.
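To verify the setup, you can query dnsmasq directly (the hostnames below are just examples):
# both should return 127.0.0.1 once dnsmasq and the resolver files are in place
dig @127.0.0.1 myapp.dev +short
dig @127.0.0.1 myapp.test +short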