Linking DNS to a consul node - consul

I'm trying to set up a consul agent using an example in "Using Docker" (chapter 11). The example suggests running this to set up one of the consul nodes:
docker run -d --name consul -h consul-1 \
-p 8300:8300 -p 8301:8301 -p 8301:8301/udp \
-p 8302:8302/udp -p 8400:8400 -p 8500:8500 \
-p 172.17.42.1:53:8600/udp \
gliderlabs/consul agent -data-dir /data -server \
-client 0.0.0.0 \
-advertise $HOSTA -bootstrap-expect 2
I assume the line with -p 172.17.42.1:53:8600/udp is linking the container's DNS service to the consul node using an IP address that worked for the author. What IP address should I use here?

It looks like 172.17.42.1 was the default bridge address Docker 1.8 used when a container connects to the host. This changed in 1.9 and appears to be 172.17.0.1 for me, although I don't know whether that is guaranteed.
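Rather than guessing, you can ask Docker for the bridge gateway at runtime. A small sketch, assuming the default "bridge" network; the fallback value is an assumption for hosts where Docker isn't reachable:

```shell
# Ask Docker for the gateway of the default "bridge" network.
BRIDGE_IP=$(docker network inspect bridge \
  --format '{{ (index .IPAM.Config 0).Gateway }}' 2>/dev/null)
# Fall back to the common post-1.9 default if Docker isn't available here.
BRIDGE_IP=${BRIDGE_IP:-172.17.0.1}
echo "Docker bridge gateway: $BRIDGE_IP"
```

You can then substitute $BRIDGE_IP for the hard-coded 172.17.42.1 in the docker run command.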

You seem to be running an example setup, so it's better to expose the service on localhost (127.0.0.1) instead. It's a DNS service, so as long as you point dig at the correct port, it will just work. For example, the following works for port 8600:
dig @127.0.0.1 -p 8600 stackoverflow.service.consul
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.55.amzn1 <<>> @127.0.0.1 -p 8600 stackoverflow.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57167
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;stackoverflow.service.consul. IN A
;; ANSWER SECTION:
stackoverflow.service.consul. 0 IN A 10.X.X.X
;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Fri Jul 7 11:29:01 2017
;; MSG SIZE rcvd: 56
If you want it to work on the default DNS port (53) so that queries can be handled directly, you can use something like dnsmasq or any of the DNS forwarding methods listed at the following link:
https://www.consul.io/docs/guides/forwarding.html
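For reference, the dnsmasq approach from that guide boils down to a one-line forwarding rule (the file path below is illustrative):

```
# /etc/dnsmasq.d/10-consul -- forward all *.consul lookups to the
# local Consul agent's DNS interface on port 8600
server=/consul/127.0.0.1#8600
```

After restarting dnsmasq, ordinary port-53 queries for *.consul names are relayed to Consul while everything else resolves as before.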

Related

After a DNS A record change, dig and nslookup show different results for the same domain name on Mac OS X

After changing the DNS A record of a hostname, dig and nslookup show different results. While dig shows the correct IP, nslookup still shows the old one. I am on macOS 11.2.3.
The nslookup output for my domain {{domainname}} is (note that I replaced the resulting IP address with xxx.old.ip.xxx):
$ nslookup {{domainname}}
Server: 192.168.178.1
Address: 192.168.178.1#53
Non-authoritative answer:
Name: {{domainname}}
Address: xxx.old.ip.xxx
and the dig output (note that I replaced the resulting IP with yyy.new.ip.yyy to indicate that it is a different IP than in the nslookup case):
$ dig {{domainname}}
; <<>> DiG 9.10.6 <<>> {{domainname}}
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45116
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;{{domainname}}. IN A
;; ANSWER SECTION:
{{domainname}}. 1157 IN A yyy.new.ip.yyy
;; Query time: 70 msec
;; SERVER: 192.168.178.1#53(192.168.178.1)
;; WHEN: Fri Mar 12 18:29:03 CET 2021
;; MSG SIZE rcvd: 63
What is going wrong with nslookup? Is it DNS caching? What can I do to force nslookup (and other tools) to refresh the DNS cache, if that is the issue?
Update: After about 20 minutes, nslookup and dig show the same IP, and ssh connects using {{domainname}}.
Neither response is from an authoritative name server, so the DNS answer came from a resolver's cache. Before changing a DNS resource record, make a note of the TTL value: that is the number of seconds resolvers will wait before dropping the cached value. In the dig output above, 1157 seconds remain on the cached value before the next refresh.
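If you want to watch the countdown yourself, the TTL is the second field of a dig ANSWER-section line. A small sketch; the answer line below is a canned sample, not a live lookup:

```shell
# The second field of a dig ANSWER-section line is the remaining TTL
# in seconds; extract it with awk.
answer='example.com.  1157  IN  A  93.184.216.34'
ttl=$(echo "$answer" | awk '{print $2}')
echo "seconds until resolvers drop the cached record: $ttl"
```

Running the same extraction against repeated live `dig +noall +answer` calls shows the TTL ticking down toward the refresh.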

Dnsmasq not working domain .local

I bought a Mac Pro and I'm quite new to it. I'm configuring my development environment, and since I installed dnsmasq I can't access any "anyname.local" page; the browser says the site can't be reached. Everything is running (Apache, dnsmasq), but nothing works. I followed the tutorial linked below.
Link Tut
Here is some output; maybe someone has an idea of what's wrong:
dig foo.local
; <<>> DiG 9.8.3-P1 <<>> foo.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 3951
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;foo.local. IN A
;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2018030101 1800 900 604800 86400
;; Query time: 18 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Fri Mar 2 00:03:23 2018
;; MSG SIZE rcvd: 102
The .local suffix should be avoided on macOS because it is used by the macOS Bonjour service.
See Apple's explanation page, where they recommend using other suffixes such as .private, .intranet, .internal, or .lan.
I personally use .localhost, .test, or .example.
As a side note, it's also not recommended to use .dev, because Chrome 63 started requiring an SSL certificate for that suffix.
Hope this helps!
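If you switch to one of those suffixes, the dnsmasq side is a single directive (the suffix and file path shown are just one choice):

```
# /etc/dnsmasq.conf -- answer every *.test lookup with 127.0.0.1
address=/test/127.0.0.1
```

Restart dnsmasq afterwards and make sure the OS resolver actually sends *.test queries to it (e.g. via an /etc/resolver/test entry on macOS).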

How to allow external connection to elastic search on GCE instance

I'm setting up an Elasticsearch cluster on GCE. Eventually it will be used from within the application, which is on the same network, but for now, while developing, I want access from my dev environment. Even in the long term I would have to access Kibana from an external network, so I need to know how to allow that. For now I'm not taking care of any security considerations.
The cluster (currently one node) is on a GCE instance running CentOS 7.
It has an external host enabled (the ephemeral option).
I can access 9200 from within the instance:
es-instance-1 ~]$ curl -XGET 'localhost:9200/?pretty'
But not via the external IP, which shows port 9200 as closed when I test it:
es-instance-1 ~]$ nmap -v -A my-external-ip | grep 9200
9200/tcp closed wap-wsp
localhost as expected is good:
es-instance-1 ~]$ nmap -v -A localhost | grep 9200
Discovered open port 9200/tcp on 127.0.0.1
I saw a similar question here, and following it I went to create a firewall rule. First I added a tag 'elastic-cluster' to the instance, and then a rule:
$ gcloud compute instances describe es-instance-1
tags:
items:
- elastic-cluster
$ gcloud compute firewall-rules create allow-elasticsearch --allow TCP:9200 --target-tags elastic-cluster
Here it's listed as created:
gcloud compute firewall-rules list
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
allow-elasticsearch default 0.0.0.0/0 tcp:9200 elastic-cluster
So now there is a rule that is supposed to allow 9200, but it is still not reachable:
es-instance-1 ~]$ nmap -v -A my-external-ip | grep 9200
9200/tcp closed wap-wsp
What am i missing?
Thanks
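One possible lead, offered as an assumption since the question doesn't show elasticsearch.yml: nmap reports the port as "closed" rather than "filtered", which usually means the firewall is passing traffic but nothing is listening on the external interface. Elasticsearch binds only to localhost by default, so the fix may be in its config:

```
# elasticsearch.yml -- bind to all interfaces instead of just localhost.
# Verify the exact setting name against your Elasticsearch version's docs.
network.host: 0.0.0.0
```

After changing this, restart the service and re-run the nmap check against the external IP.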

How to cache "SERVFAIL" with bind?

I've searched on Google for the last hour and couldn't find anything relevant to my issue. I have bind installed and running flawlessly, serving multiple domains and local reverse lookups. Still, some remote nameservers are offline and do not return any result to my requests, and that is slowing down the applications that use bind.
For example:
# dig @127.0.0.1 -x 155.1.2.3
; <<>> DiG 9.9.5-9+deb8u8-Debian <<>> @127.0.0.1 -x 155.1.2.3
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 40057
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;3.2.1.155.in-addr.arpa. IN PTR
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Dec 27 14:06:14 EET 2016
;; MSG SIZE rcvd: 51
It times out after ~5 seconds, but if I retry the command the result (SERVFAIL) isn't cached, and my application is delayed for another 5 seconds over and over again. I know that I can implement caching inside the application, but I'm sure it would be a lot more efficient to cache this within the bind configuration.
How can I cache SERVFAIL for, let's say, 5 minutes?
Is that supported by bind?
Thank you!
By default, bind caches all responses. What is the TTL you are receiving in the SERVFAIL response? Also check whether you have max-ncache-ttl set to 0 in the client resolver configuration.
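For what it's worth, newer BIND releases (9.11 and later, if I recall correctly) added a dedicated servfail-ttl option, though it is capped well below the 5 minutes asked for. A sketch of the named.conf fragment:

```
options {
    // Cache SERVFAIL responses; BIND caps this option at 30 seconds
    // (the default is 1), so a full 5-minute cache isn't possible this way.
    servfail-ttl 30;
};
```

Even 30 seconds of caching would stop a tight retry loop from paying the 5-second timeout on every request.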

Bash ping server with addres and port and get short info

How can I connect to a server by IP and port, and get short info about it?
I tried to do it with netcat and curl, but the output is too long. I also tried telnet, but that is not a good approach for me.
I have a script that connects to some addresses on specified ports, and if the connection succeeds I want to show short info about the service.
Is this possible? Is there another method to solve this problem?
The IP addresses are varied; the services can be HTTP, MySQL, SSL, etc.
I attach the code with the connection function:
if nc -w 10 -z $1 $i; then
    printf "\n$1:$i - Port is open\n\nSERVER INFO:\n"
    printf "\n$(curl -IL $1)\n"
else
    printf "\n$1:$i - Port is closed\n"
fi
EDIT:
Example of response from server I would like to get
{IP number}: ssh - OpenSSH 6.0pl1, http - apache 1.3.67, https - httpd 2.0.57
You were pretty close. You can include the host just as you do in your script:
for port in $(seq 21 23); do
out=$(nc -w 1 -q 1 localhost $port)
echo port ${port}: $out
done
#port 21:
#port 22: SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.7
#port 23:
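Building on that, here is a sketch of the one-line-per-host summary the question asks for. The host and port list are illustrative, banner formats vary by service, and protocols where the client speaks first (like HTTP) won't volunteer a greeting, so this only captures services that send one:

```shell
host=localhost
summary=""
for port in 22 80 443; do
  # Grab the first line a service volunteers on connect (e.g. the SSH banner).
  banner=$(nc -w 1 -q 1 "$host" "$port" < /dev/null 2>/dev/null | head -n 1)
  [ -n "$banner" ] && summary="$summary $port: $banner;"
done
echo "$host:$summary"
```

For protocols that stay silent, you would need a protocol-specific probe instead (e.g. `curl -sI` for HTTP, as in the original script).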
