Using dnsmasq with Consul and the confusion around recursors - consul

I'm using dnsmasq but I'm a little confused as to what gets set where. Everything works as expected, but I wasn't sure if any of my config parameters are redundant or would cause issues down the road.
1 - Do I need to set the recursors option in Consul's config?
2 - Do I still need both nameserver entries in /etc/resolv.conf?
3 - Do I need dnsmasq on all Consul clients or just the servers?
# /etc/dnsmasq.d/dnsmasq.conf
server=/consul/127.0.0.1#8600
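That single line forwards just the consul domain to the local agent's DNS port (8600). A quick sanity check, assuming dig is available:
dig @127.0.0.1 -p 8600 consul.service.consul    # asks Consul's DNS interface directly
dig @127.0.0.1 consul.service.consul            # asks via dnsmasq, which should forward to 127.0.0.1#8600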
My Consul config looks like this:
{
  "server": false,
  "client_addr": "0.0.0.0",
  "bind_addr": "0.0.0.0",
  "datacenter": "us-east-1",
  "advertise_addr": "172.16.11.144",
  "data_dir": "/var/consul",
  "encrypt": "XXXXXXXXXXXXX",
  "retry_join_ec2": {
    "tag_key": "SOMEKEY",
    "tag_value": "SOMEVALUE"
  },
  "log_level": "INFO",
  "recursors": ["172.31.33.2"],
  "enable_syslog": true
}
My /etc/resolv.conf looks like this:
nameserver 127.0.0.1
nameserver 172.31.33.2

1) Read the documentation: https://www.consul.io/docs/agent/options.html#recursors. Having recursors set up is useful if you have external services registered in Consul; otherwise it's probably moot. You likely don't want ALL of your DNS traffic to hit Consul directly, just the Consul-specific DNS traffic.
2 & 3:
It's up to you. Some people run dnsmasq on every machine; some centralize dnsmasq on their internal DNS servers. Both are valid configurations. If you run it on every machine, you probably need just one nameserver entry, pointed at localhost; see the sketch below. If you run it centralized (i.e. only on your internal DNS servers), you point every machine at those servers instead.
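As a sketch of the two /etc/resolv.conf layouts (addresses reused from the question):
# dnsmasq on every machine: point only at localhost
nameserver 127.0.0.1
# dnsmasq centralized: point each machine at the internal DNS servers
nameserver 172.31.33.2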

Related

Running multiple, independent Docker daemons on Windows with Hyper-V isolation and LCOW

I am trying to run multiple Docker daemons configured to run containers with Hyper-V isolation and LCOW on the same Windows 10 machine.
I was able to configure the daemons to manage their own data files, but I am still struggling to get the network configuration clean.
When the first daemon starts, it binds to the local "nat" network for DNS resolution. When the second daemon starts, it tries to bind to the same "nat" network and fails, as port 53 is already in use by the first daemon:
ERRO[2019-02-15T15:50:58.194988300Z] Resolver Setup/Start failed for container nat, "error in opening name server socket listen udp 172.18.64.1:53: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted."
Containers started by the second daemon then cannot perform any name resolution; access by IP still works properly.
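(A diagnostic sketch, not from the original post: from an elevated prompt you can confirm which process already holds the resolver port on the NAT gateway address, then map the PID to a process name; <pid-from-netstat> is a placeholder.)
netstat -ano | findstr "172.18.64.1:53"
tasklist /fi "PID eq <pid-from-netstat>"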
Here is the dockerd configuration I am currently using:
{
  "registry-mirrors": [],
  "insecure-registries": [],
  "bridge": "mydaemon1",
  "data-root": "C:\\Users\\myuser\\Desktop\\Docker\\Docker",
  "deprecated-key-path": "C:\\Users\\myuser\\Desktop\\Docker\\Docker\\config\\key.json",
  "debug": true,
  "exec-root": "C:\\Users\\myuser\\Desktop\\Docker\\Docker\\exec-root",
  "exec-opts": [
    "isolation=hyperv"
  ],
  "experimental": true,
  "group": "mydaemon-docker",
  "hosts": [
    "npipe:////./pipe/mydaemon1_engine"
  ],
  "pidfile": "C:\\Users\\myuser\\Desktop\\Docker\\Docker\\docker.pid",
  "storage-opts": [
    "lcow.kirdpath=C:\\Users\\myuser\\Desktop\\Docker\\server\\resources",
    "lcow.kernel=lcow-kernel",
    "lcow.initrd=lcow-initrd.img"
  ]
}
I tried to tweak the bridge configuration, but it didn't change anything; the daemon always tries to connect to the "nat" network. It looks like the only other supported value is "none", which removes the default eth0 from the containers along with any DNS support.
Is it possible to configure which network is used for DNS resolution, i.e. "nat" here?
Ideally I want each daemon to have its own, dedicated NAT network.
I know this is not possible in Docker for Windows when using the MobyVM, as WinNAT, which is used in that case, does not support it.
With Hyper-V isolation and LCOW, it seems WinNAT is no longer used, as Get-NetNat does not return any NAT network configuration even though DNS works properly. I am not sure I am right about any of this, whether it is possible, or whether some other Windows limitation applies...
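(Not from the original post, but one way to compare what the WinNAT layer reports against what each daemon sees; the pipe name is the one from the posted config:)
Get-NetNat
docker -H npipe:////./pipe/mydaemon1_engine network ls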

ibm-cloud-private DNS or Internet issues from inside the pods

I've been experimenting with an ICP instance (ICP 2.1.0.2): 1 master node and 2 worker nodes.
I noticed that the pods in my ICP Kubernetes cluster don't have outbound Internet connectivity (or are having DNS lookup issues).
For example, if I start up a busybox pod in my cluster and try "nslookup github.com" or "ping google.com", it fails:
kubectl run curl --image=radial/busyboxplus:curl -i --tty
root@curl-545bbf5f9c-gssbg:/ ]$ nslookup github.com
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'github.com'
I checked and saw that "kube-dns" (service, pod, daemonset.extensions, daemonset.apps) does appear to be running.
When I'm logged in (e.g. via SSH) to the ICP master and worker node machines, I am able to ping these external sites successfully.
Any suggestions for how to troubleshoot this problem? Thanks!
We had kind of the reverse problem: we could look up anything on the internet or in other domains, but not the domain in which the cluster was deployed.
That turned out to be due to the vague documentation around what cluster_domain and cluster_CA_domain mean in config.yaml. As a plus, we got to learn a bit more about those settings and about configuring kube-dns.
Basically, cluster_domain should be a private virtual domain for the cluster, for which kube-dns will be authoritative. For anything else, kube-dns should use the host's resolv.conf nameservers as upstream servers. If you suspect that your DNS servers are not being used for public DNS, you can update the kube-dns ConfigMap to specify the upstream servers it should use:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
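For example, a minimal kube-dns ConfigMap along the lines of that page (the upstream address here is a placeholder for your own DNS server):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["172.16.0.1"]
Apply it with kubectl apply -f; kube-dns should pick up the change without a restart.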
This assumes you have configured cluster_domain and cluster_CA_domain correctly, of course.
They should look something like
cluster_domain = mycluster.icp <----- could be "Mickey-mouse" for all it matters
cluster_CA_domain = icp.mycompany.com <----- the endpoint that portal/registry/api etc are accessible to users on

consul-template does not work on remote machines

I have three machines:
[consul@cjportal]$ consul members
Node     Address              Status  Type    Build  Protocol  DC
portal1  192.168.11.155:8301  alive   client  0.7.0  2         dc1
portal0  192.168.14.100:8301  alive   client  0.7.0  2         dc1
portal2  192.168.11.182:8301  alive   server  0.7.0  2         dc1
All 3 machines have the same Consul config file:
{
  "service": {
    "name": "portal_confgen",
    "tags": [
      "portal"
    ],
    "address": "127.0.0.1",
    "port": 8823,
    "check": {
      "name": "ping",
      "script": "ping -c1 192.168.11.155",
      "interval": "10s"
    }
  }
}
All 3 machines run Consul, but only the server portal2 runs consul-template, with this command:
consul-template -config=/home/consul/consul-template/config/hosts.hcl -consul=localhost:8500
My consul-template config file, hosts.hcl:
template {
  source      = "/home/consul/consul-template/hosts.ctmpl"
  destination = "/home/consul/conf/conf.d/test.conf"
}
But when I change a key in Consul's k/v store, only the local machine portal2 writes the destination file correctly; the remote machines portal0 and portal1 do not. What am I missing?
You need consul-template to be running on all machines.
What do you expect? consul-template only renders files on the machine where it runs: if you run it only on portal2, then only portal2 will be updated; if you run it on portal0 and portal1 as well, they will be updated too.
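In other words (a sketch reusing the exact command from the question), each machine runs its own copy against its local agent:
# on portal0 and portal1, exactly as on portal2
consul-template -config=/home/consul/consul-template/config/hosts.hcl -consul=localhost:8500
Each instance renders the template to the local filesystem of the machine it runs on.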

EC2 instance not responding to ping

I have an EC2 instance running, linked to an Elastic IP.
When I ping it from my local machine it shows a request timeout, and I am also unable to connect to it via PuTTY or WinSCP.
I have been facing this issue for the last 2 days; it was working well for the previous 2 months.
Please help.
My instance is running and healthy.
If you want to ping an EC2 instance from your local machine you need to allow inbound Internet Control Message Protocol (ICMP) traffic. Please check your Security Groups to make sure this is allowed. Remember that all inbound traffic is disabled by default. You may need to add a rule similar to this one (CloudFormation JSON format):
"AllowIngressICMP": {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties": {
"GroupId": <Your Security Group here>,
"IpProtocol": "icmp",
"FromPort": "-I",
"ToPort": "-I",
"CidrIp": "0.0.0.0/0"
** The -I means "every port"
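(The same rule as a one-off AWS CLI call, not from the original answer; <your-sg-id> is a placeholder for your security group ID:)
aws ec2 authorize-security-group-ingress --group-id <your-sg-id> --protocol icmp --port -1 --cidr 0.0.0.0/0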

How to access the Consul UI externally

How can I access the Consul UI externally?
I want to access the Consul UI by going to
<ANY_MASTER_OR_SLAVE_NODE_IP>:8500
I have tried using an SSH tunnel to access it:
ssh -N -f -L 8500:localhost:8500 root@172.16.8.194
Then if I open http://localhost:8500
it works, but that is not what I want. I need to access it externally, without an SSH tunnel.
My config.json file is the following:
{
  "bind_addr": "172.16.8.216",
  "server": false,
  "datacenter": "nyc2",
  "data_dir": "/var/consul",
  "ui_dir": "/home/ikerlan/dist",
  "log_level": "INFO",
  "enable_syslog": true,
  "start_join": ["172.16.8.211", "172.16.8.212", "172.16.8.213"]
}
Any help?
Thanks
Add
{
  "client_addr": "0.0.0.0"
}
to your configuration or add the option -client 0.0.0.0 to the command line of consul to make your Web UI accessible from the outside (see the docs for more information).
Please note that this will also make your Consul REST API accessible from the outside. Depending on your environment you might want to activate Consul's ACLs to restrict access.
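Once client_addr is open, a quick check from another machine (a sketch using the bind address from the question; the UI is served on the same HTTP port as the REST API):
curl http://172.16.8.216:8500/v1/status/leader
If that returns the leader's address, http://172.16.8.216:8500/ui should load as well.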
You can use socat in this case.
socat -d -d TCP-L:8500,bind=172.16.93.128,fork TCP:localhost:8500 &
where 172.16.93.128 is my IP.
I run it as a Docker image. I ran:
docker pull consul
docker run -p 8500:8500 consul
and I am able to access the Consul UI at http://<hostname>:8500/ui
Finally I found the solution: add to the config file both bind_addr, which is the IP of the machine, and client_addr, which is the address the agent listens on for client interfaces. I use 0.0.0.0 to listen on all IPs.
"bind_addr":"<machine-ip>",
"client_addr":"0.0.0.0",
I don't have hands-on experience with Consul yet, but here are a few tips:
Run sudo netstat -peanut | grep :8500 and check whether Consul is bound to 0.0.0.0 or to an explicit IP. Check the docs to see whether this is configurable.
On each node, install Squid, Nginx, or any other software that can act as an HTTP proxy.
There is no way to get a user interface if no user interface is installed. :)
The classic UI is a desktop environment stack (x-term, ...), so before you can get it, you need to install one on the node.
