How to cache "SERVFAIL" with bind? - caching

I've searched Google for the last hour and couldn't find anything relevant to my issue. I have BIND installed and running flawlessly, serving multiple domains and local reverse lookups. However, some remote nameservers are offline and never answer my requests, and that is slowing down the applications that use BIND.
For example:
# dig @127.0.0.1 -x 155.1.2.3
; <<>> DiG 9.9.5-9+deb8u8-Debian <<>> @127.0.0.1 -x 155.1.2.3
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 40057
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;3.2.1.155.in-addr.arpa. IN PTR
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Dec 27 14:06:14 EET 2016
;; MSG SIZE rcvd: 51
The query times out after ~5 seconds, but if I retry the command, the result (SERVFAIL) isn't cached and my application is delayed for another 5 seconds, over and over again. I know that I can implement caching inside the application, but I'm sure it would be a lot more efficient to cache this within the BIND configuration.
How can I cache SERVFAIL for, let's say, 5 minutes?
Is this supported by BIND?
Thank you!

By default, BIND caches all responses. What TTL are you receiving in the SERVFAIL response? Also check whether you have max-ncache-ttl set to 0 in the resolver configuration.
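If the goal really is to cache SERVFAIL itself, newer BIND versions have a dedicated knob for it. A minimal named.conf sketch (option names per the BIND 9 ARM; servfail-ttl needs BIND 9.11 or later and, as I understand it, is capped at 30 seconds, so a full 5 minutes isn't possible that way):

```
options {
    // Cache SERVFAIL responses for up to 30 seconds (BIND 9.11+;
    // larger values are reduced to the 30-second cap).
    servfail-ttl 30;

    // Negative-answer (NXDOMAIN/NODATA) caching; setting this to 0
    // would disable it, so make sure it has a sensible value.
    max-ncache-ttl 300;
};
```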

Related

Starting PgBouncer in Windows gives "FATAL could not open /dev/null"

I have a practically vanilla install of PgBouncer on Windows Server 2019 Datacenter, downloaded using the "Application Stack Builder". I'm trying to connect to a Postgres 11 database local to the server.
The only config changes I've made are to specify the database and the locations of the log and PID files. Here's my config, based on pgbouncer-minimal.ini from GitHub:
;;; This is an almost minimal starter configuration file that only
;;; contains the settings that are either mandatory or almost always
;;; useful. All settings show their default value.
[databases]
;; add yours here
postgres = host=localhost port=7450 user=postgres password=postgres
;; fallback
;* =
[pgbouncer]
;; required in daemon mode unless syslog is used
logfile = log/pgbouncer.log
;; required in daemon mode
pidfile = log/pgbouncer.pidfile
syslog = 0
;; set to enable TCP/IP connections
;listen_addr =
;; PgBouncer port
;listen_port = 6432
;; some systems prefer /var/run/postgresql
;unix_socket_dir = /tmp
;; change to taste
auth_type = trust
;; probably need this
auth_file = etc/userlist.txt
;; pool settings are perhaps best done per pool
;pool_mode = session
;default_pool_size = 20
;; should probably be raised for production
;max_client_conn = 100
Why do I keep getting the following error in the logs and how can I fix this?
2022-07-01 12:27:11.711 Coordinated Universal Time [8724] FATAL could not open /dev/null: The system cannot find the file specified.

After DNS A record change, dig and nslookup showing different results for the same domain name on Mac OS X

After changing the DNS A record of a hostname, dig and nslookup show different results. While dig shows the correct IP, nslookup still shows the old IP. I am on macOS 11.2.3.
The nslookup output for my domain {{domainname}} is (note that I replaced the resulting IP address with xxx.old.ip.xxx):
$ nslookup {{domainname}}
Server: 192.168.178.1
Address: 192.168.178.1#53
Non-authoritative answer:
Name: {{domainname}}
Address: xxx.old.ip.xxx
and the dig output (note that I replaced the resulting IP with yyy.new.ip.yyy to indicate that it is a different IP than in the nslookup case):
$ dig {{domainname}}
; <<>> DiG 9.10.6 <<>> {{domainname}}
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45116
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;{{domainname}}. IN A
;; ANSWER SECTION:
{{domainname}}. 1157 IN A yyy.new.ip.yyy
;; Query time: 70 msec
;; SERVER: 192.168.178.1#53(192.168.178.1)
;; WHEN: Fri Mar 12 18:29:03 CET 2021
;; MSG SIZE rcvd: 63
What is going wrong with nslookup? Is it DNS caching? If so, what can I do to force nslookup (and other tools) to refresh the DNS cache?
Update: After about 20 minutes nslookup and dig are showing the same IP and ssh is connecting using the {{domainname}}.
Neither response came from an authoritative nameserver, so both answers were served from a resolver's cache. Before changing a DNS resource record, make a note of its TTL value: that is the number of seconds resolvers wait before dropping the cached value. In the dig output above, the cached record has 1157 seconds left before the next refresh.
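That remaining TTL also lines up with the ~20 minutes mentioned in the update; a trivial shell calculation (the 1157 figure is taken from the dig output above):

```shell
# Convert the remaining TTL from the dig answer (1157 seconds) to minutes:
# roughly 19 minutes, matching the ~20 minutes the old record kept showing.
ttl=1157
echo "$(( ttl / 60 )) minutes $(( ttl % 60 )) seconds remaining"
```

Once that many seconds have passed, the resolver refetches the record and both tools agree again.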

Dnsmasq not working domain .local

I bought a Mac Pro and I'm quite new to it. I'm configuring my development environment, and since installing dnsmasq I can't access any "anyname.local" page; the browser says the site can't be reached. Everything is running (Apache, dnsmasq), but nothing works. I followed the tutorial linked below.
Link Tut
Here is some output; maybe someone has an idea what's wrong.
dig foo.local
; <<>> DiG 9.8.3-P1 <<>> foo.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 3951
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;foo.local. IN A
;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2018030101 1800 900 604800 86400
;; Query time: 18 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Fri Mar 2 00:03:23 2018
;; MSG SIZE rcvd: 102
The .local suffix should be avoided on macOS because it is used by the macOS Bonjour service.
See Apple's explanation page, where they recommend other suffixes such as .private, .intranet, .internal, or .lan.
I personally use .localhost, .test, or .example.
As a side note, it's also not recommended to use .dev, because Chrome 63 started requiring an SSL certificate for that suffix.
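For completeness, a minimal dnsmasq setup for one of those suffixes might look like this (a sketch, not a tested config: .test and the loopback address are illustrative):

```
# dnsmasq.conf: answer every *.test query with the local machine
address=/test/127.0.0.1
```

On macOS you would also point the system resolver at dnsmasq for that suffix, e.g. an /etc/resolver/test file containing only the line nameserver 127.0.0.1.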
Hope this helps!

Linking DNS to a consul node

I'm trying to set up a consul agent using an example in "Using Docker" (chapter 11). The example suggests running this to set up one of the consul nodes:
docker run -d --name consul -h consul-1 \
-p 8300:8300 -p 8301:8301 -p 8301:8301/udp \
-p 8302:8302/udp -p 8400:8400 -p 8500:8500 \
-p 172.17.42.1:53:8600/udp \
gliderlabs/consul agent -data-dir /data -server \
-client 0.0.0.0 \
-advertise $HOSTA -bootstrap-expect 2
I assume the line with -p 172.17.42.1:53:8600/udp links the container's DNS service to the consul node using an IP address that worked for the author. What IP address should I use here?
It looks like 172.17.42.1 was the default bridge address Docker 1.8 used when a container connected to the host. This changed in 1.9 and appears to be 172.17.0.1 for me, although I don't know whether that is guaranteed.
You seem to be running an example setup, so it's better to expose it on localhost (127.0.0.1) instead. It is a DNS service, so as long as you point dig at the correct port, it will just work. For example, the following works for port 8600:
dig @127.0.0.1 -p 8600 stackoverflow.service.consul
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.55.amzn1 <<>> @127.0.0.1 -p 8600 stackoverflow.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57167
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;stackoverflow.service.consul. IN A
;; ANSWER SECTION:
stackoverflow.service.consul. 0 IN A 10.X.X.X
;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Jul 7 11:29:01 2017
;; MSG SIZE rcvd: 56
If you want it to work on the default DNS port, so that queries are handled directly, you can use dnsmasq or any of the methods for DNS forwarding listed at the following link:
https://www.consul.io/docs/guides/forwarding.html
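For instance, the dnsmasq variant described in that guide boils down to a single server rule (port 8600 matches the Consul DNS port used above; adjust the address to wherever your agent listens):

```
# dnsmasq: forward all *.consul queries to the local Consul agent
server=/consul/127.0.0.1#8600
```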

DiG transfer fails with axfr options

For testing purposes, I'm trying to get a list of all DNS records set for a domain, using this method.
This works:
root@cs:/# dig @nameserver domain
; <<>> DiG 9.9.2-P1 <<>> @nameserver domain
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32999
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;domain. IN A
;; ANSWER SECTION:
domain. 3600 IN A my-IP
;; Query time: 2 msec
;; SERVER: my-IPv6-IP-(I-think)
;; WHEN: Thu Jun 20 16:03:05 2013
;; MSG SIZE rcvd: 83
However, when I add axfr to the command as is suggested in that answer on Server Fault (and all over the net), it fails:
root@cs:/# dig @ns1.transip.nl changeyourschool.nl axfr
; <<>> DiG 9.9.2-P1 <<>> @ns1.transip.nl changeyourschool.nl axfr
; (2 servers found)
;; global options: +cmd
; Transfer failed.
Why is this, and, more importantly, how can I get the full list of DNS records if this fails?
Why this is, I don't know, but you can use this to get all the DNS records:
root@cs:/# dig google.com ANY +nostat +nocmd +nocomments
; <<>> DiG 9.9.2-P1 <<>> google.com ANY +nostat +nocmd +nocomments
;; global options: +cmd
;google.com. IN ANY
google.com. 56328 IN NS ns4.google.com.
google.com. 56328 IN NS ns2.google.com.
google.com. 56328 IN NS ns1.google.com.
google.com. 56328 IN NS ns3.google.com.
ns4.google.com. 85545 IN A 216.239.38.10
ns1.google.com. 85545 IN A 216.239.32.10
ns3.google.com. 57402 IN A 216.239.36.10
ns2.google.com. 85545 IN A 216.239.34.10
The +nostat, +nocmd and +nocomments options can be omitted, but they cut down on noise in the output.
Keelan's solution did not work for me.
What did work for me was a two-step process (on Linux and Windows).
Step one, type:
dig ns google.com
Where google.com is the domain of interest.
This returned a list of name servers such as:
ns1.google.com. 60 IN A 216.239.32.10
Step two, type:
dig @ns1.google.com google.com any
Where ns1.google.com is the name server for the domain (found in step 1) and google.com is the domain of interest.
This yielded results such as:
google.com. 31335 IN NS ns4.google.com.
google.com. 31335 IN NS ns2.google.com.
google.com. 31335 IN NS ns3.google.com.
google.com. 59 IN SOA ns1.google.com. dns-admin.google.com. 1579113 7200 1800 1209600 300
google.com. 60 IN A 216.58.220.142
google.com. 2251 IN TXT "v=spf1 include:_spf.google.com ip4:216.73.93.70/31 ip4:216.73.93.72/31 ~all"
google.com. 31335 IN NS ns1.google.com.
google.com. 185 IN AAAA 2404:6800:4006:800::200e
Hope this helps. If it does not, you can always try: http://www.whois.com.au/whois/dns.html.
As the answer you link to explains, the convention is to refuse AXFR (zone transfer) requests except from trusted peers.
If zone transfers are disabled, you can only get an approximate listing of hosts within a zone by guessing them, i.e. basically a dictionary attack. A well-maintained site will have measures in place to mitigate that approach as well.
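Sketched in shell, that guessing approach is just a loop over a wordlist (a hypothetical sketch: example.com and the wordlist are placeholders, and the actual dig probe is left commented out):

```shell
# Probe a small wordlist of common hostnames against the zone.
# example.com is a placeholder domain; extend the list as needed.
domain=example.com
for name in www mail ftp ns1 ns2 dev; do
  host="$name.$domain"
  echo "checking $host"
  # dig +short "$host" A    # uncomment to actually query each name
done
```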
