When is a secondary DNS server used? - macos

On our router we have the primary DNS set to a local IP, which is running Windows Server 2008 and its built-in DNS server. We use this to resolve domains to local servers; if a domain is not found locally, we have forwarders set up to query external name servers.
The secondary DNS on the router is set to our ISP's primary DNS, in case the local DNS server is down.
The Mac clients in our office pick up the DNS servers correctly from the router, but it seems very random which DNS server they use. For example, a local site would load but some of its images would not. If I hard-coded my DNS address to the local DNS server, everything worked fine.
So my question is: when would a Mac client use the secondary DNS server? I thought it would only use it if the primary DNS was unavailable.
Thanks!

The general idea of a secondary DNS server is that if the primary DNS server doesn't reply (e.g. it is offline, unreachable, restarting, etc.), the system can fall back to the secondary one, so it can still resolve DNS names during that time. "Doesn't reply" means no reply at all; the system will not ask the secondary when the primary has answered that a name is unknown, because answering that a name is unknown is a reply.
The problem here is that DNS uses UDP, and UDP is connectionless. So if a DNS server is offline, the system only notices by not receiving a reply from it. Since a UDP packet may simply get lost and the round-trip time (RTT) is unknown, the system has to resend the request a couple of times, waiting several seconds each time, before it finally concludes that the server is dead. This means it can take a minute or more to resolve a DNS name if the first DNS server dies.
As that seems unacceptable, different operating systems developed different strategies to handle this better. Since both DNS servers are supposed to deliver the same result for the same domain (if not, your setup is actually flawed, as the secondary should be a 1-to-1 replacement for the primary), it shouldn't matter which one is used. Some systems send a request to the primary, but if no reply comes back within a few seconds, they don't resend to it; they first try the secondary (then they resend to the primary, and so on). Some query both at once, let the faster one win, and then keep using that one for a while (until they run another race to see whether it is still the faster one). Some prefer the primary but do a kind of load balancing and switch to the secondary if more than a certain number of queries are currently pending on the primary. Some simply alternate between them as a poor man's load balancing. All of this is allowed.
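To make that timeout-and-fallback behaviour concrete, here is a minimal Java sketch using the JDK's built-in JNDI DNS provider. The two server addresses and the host name are placeholders for a local primary, an ISP secondary, and an internal name; the timeout properties are the provider's retransmission knobs.

    import java.util.Hashtable;

    import javax.naming.Context;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.InitialDirContext;

    public class DnsFallbackDemo {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
            // Placeholder addresses: local primary first, ISP secondary next.
            // The provider falls back to the next server only when one does not
            // answer at all; a "no such name" reply is final.
            env.put(Context.PROVIDER_URL, "dns://192.168.0.10 dns://203.0.113.53");
            env.put("com.sun.jndi.dns.timeout.initial", "2000"); // ms before the first retransmit
            env.put("com.sun.jndi.dns.timeout.retries", "2");    // retransmits per query

            Attributes a = new InitialDirContext(env)
                    .getAttributes("intranet.example.com", new String[] {"A"});
            System.out.println(a);
        }
    }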
In your case, though, I'm afraid something is wrong with your primary server, as by default macOS will only use the primary one. If it constantly falls back to the secondary, it may consider the primary too slow. Every time that happens, the secondary server becomes the primary one; see this older knowledge base article. This CNET article explained how this can be disabled, but I'm not sure that is still possible on current systems. I wasn't able to find any reference for this, but IIRC Apple once mentioned at a WWDC that they are now more aggressive about DNS querying and may even contact multiple DNS servers at once, with the fastest one winning in some cases, but I might be wrong on this (maybe it was iOS only).

I googled this article, which explains the newer macOS DNS search order, and this one, which explains how to tweak it to get the results you want.
The general idea, though, is that it was never intended (in any OS) that the first server is the one used and the second one is merely a backup. (Even on Windows, if the first server for some reason doesn't answer quickly enough, the second one will be queried.) It's wiser to regard the server query order as unspecified.
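If your own code cannot afford to care which server answers, one option (the "fastest one wins" strategy mentioned above) is to race both servers and take the first reply. A small Java sketch, again with placeholder server addresses and host name:

    import java.util.Hashtable;
    import java.util.concurrent.CompletableFuture;

    import javax.naming.Context;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.InitialDirContext;

    public class FastestResolverWins {
        static CompletableFuture<Attributes> query(String serverUrl, String name) {
            return CompletableFuture.supplyAsync(() -> {
                try {
                    Hashtable<String, String> env = new Hashtable<>();
                    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
                    env.put(Context.PROVIDER_URL, serverUrl);
                    return new InitialDirContext(env).getAttributes(name, new String[] {"A"});
                } catch (Exception e) {
                    throw new RuntimeException(e); // surfaces as a failed future
                }
            });
        }

        public static void main(String[] args) throws Exception {
            // Ask both servers at once and take whichever answers first.
            Object fastest = CompletableFuture.anyOf(
                    query("dns://192.168.0.10", "intranet.example.com"),
                    query("dns://203.0.113.53", "intranet.example.com")).get();
            System.out.println(fastest);
        }
    }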

Related

Are the ip addresses automatically cached in the recursive DNS when it is returned from the authoritative DNS?

It is said that a recursive DNS server checks its cache before performing a recursive lookup against the authoritative DNS servers. So I wanted to know how DNS caching is done. Is it automatic, and if not, what happens? How is the DNS record cached?
Unless you are running a DNS server, the caching is done by your client and/or your LDNS. Your system runs something called a resolver (a set of libraries on Linux, the DNS Client service on Windows) whose job it is to take names and turn them into IP addresses, hopefully honouring the TTL of the returned records. Additionally, browsers and other applications may add their own level of caching, often not adhering to the TTL returned for the record.
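As a concrete example of that last point, the JVM keeps its own resolver cache on top of the OS resolver, and by default it does not simply follow the record's TTL. A minimal Java sketch of tuning it (the property values here are just examples):

    import java.net.InetAddress;
    import java.security.Security;

    public class ResolverCacheDemo {
        public static void main(String[] args) throws Exception {
            // Cache successful lookups for 30 s and failures for 5 s,
            // regardless of the TTL the DNS server returned.
            Security.setProperty("networkaddress.cache.ttl", "30");
            Security.setProperty("networkaddress.cache.negative.ttl", "5");

            InetAddress first = InetAddress.getByName("example.com");  // real lookup
            InetAddress cached = InetAddress.getByName("example.com"); // served from the JVM cache
            System.out.println(first.getHostAddress() + " / " + cached.getHostAddress());
        }
    }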
Additionally, your machine points to another server, either local or your ISP's, which is your LDNS (local DNS). This configuration is required for proper functioning, and it must be specified by IP address, either entered manually or obtained via a mechanism like DHCP.

Socket.Bind and IP source routing, with multiple local network interfaces

I wrote a tool running on a system (Win7) with two network interfaces, each connected to a different subnet, each with its own gateway, which is in turn linked to a separate distant network (there are outgoing firewalls after each gateway). I'm initiating outgoing TCP connections via both NICs by using Socket.Bind (before calling Connect) with each relevant NIC's IP address. The first NIC works fine, but for the second NIC I get a SocketException: "A socket operation was attempted to an unreachable network".
My original understanding was that since the socket is bound to a concrete NIC's local endpoint, which has its own gateway defined, the connection should be routed via that gateway and therefore should work. However, it seems that the source IP address is ignored and routing follows the local routing table (i.e. the second NIC's connect request goes out via the first, default, network and is rejected because it has the wrong subnet).
Adjusting the local routing tables helps, but it makes me wonder about the whole reasoning behind a socket's ability to bind to a specific local IP.
Doing some extra reading, I found that there is indeed such a thing as "source IP routing", but it is disabled in Windows by default (via the DisableIPSourceRouting registry setting) for security reasons, as described, e.g., here:
http://msdn.microsoft.com/en-us/library/ff648853.aspx
http://www.bloggersbase.com/disableipsourcerouting/
Questions:
If my original understanding was correct (i.e. Socket.Bind should be enough), why is it not working without modifying the routing tables?
If my understanding was NOT correct (i.e. Socket.Bind is ignored and the routing table is used), what's the point of having Socket.Bind? Why do it at all?
Also, I'd like to understand better what the actual risk of having source IP routing enabled is (preferably with an example of a possible exploit).
Any ideas for solving the requirement without manually modifying the local routing table will be greatly appreciated.
Many thanks.
OK, after some reading, here are some high-level explanations of what's happening. I still need to verify the conclusions below on my system. Apparently, the local binding is typically ignored when selecting the network interface; instead, the routing table is used. However, in the strong host model (the default for Vista and newer, non-existent in XP), the source IP is used as a constraint in the routing table lookup.
A brief explanation of the strong host model vs. the weak host model:
http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx
An explanation of what differs between XP and newer Windows versions in this respect:
http://blogs.technet.com/b/networking/archive/2009/04/24/source-ip-address-selection-on-a-multi-homed-windows-computer.aspx
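For comparison, the same bind-before-connect pattern looks like this in Java (the question uses .NET's Socket.Bind, but the idea is identical). The 192.0.2.x and 198.51.100.x addresses are documentation placeholders for the second NIC and the remote host; note that binding only fixes the source address, and on a weak-host-model stack the outgoing interface is still chosen from the routing table, which is exactly the behaviour described above.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BindBeforeConnect {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket()) {
                // Bind to the second NIC's (placeholder) address with an ephemeral port...
                s.bind(new InetSocketAddress("192.0.2.10", 0));
                // ...then connect with a 5 s timeout. The source address is now fixed,
                // but the outgoing interface may still come from the routing table.
                s.connect(new InetSocketAddress("198.51.100.20", 443), 5000);
                System.out.println("connected from " + s.getLocalSocketAddress());
            }
        }
    }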

Alternative Host (by DNS?) for Web Server Failure Protection

I'm interested in having a second web host run a copy of my website, such that if my first host goes down, the traffic routes to the second host. Is this possible?
My guess would be to add additional nameservers beyond the first two.
I also suspect it's doable with no-ip.com, but I'm not clear on how that works, and if they would require me to leave my first host entirely?
See if your DNS provider will let you do round-robin DNS.
Basically, DNS queries will return more than one IP for your site. Try nslookup google.com to see how it might look.
There are loads of other ways to do geographical load balancing and failover (most are expensive though).
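If your provider does hand out several A records, a client can already get simple failover from them by trying each address in turn. A minimal Java sketch, with www.example.com standing in for your site:

    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class RoundRobinFailover {
        public static void main(String[] args) throws Exception {
            InetAddress[] addrs = InetAddress.getAllByName("www.example.com"); // several A records, if configured
            for (InetAddress addr : addrs) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(addr, 80), 3000);
                    System.out.println("reached " + addr.getHostAddress());
                    return; // first working host wins
                } catch (Exception e) {
                    System.out.println(addr.getHostAddress() + " down, trying next");
                }
            }
            System.out.println("all hosts unreachable");
        }
    }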
DNS Made Easy provides this service, which is called DNS Failover. For others looking:
http://www.dnsmadeeasy.com/s0306/price/dns.html

Redirect Traffic from NIC to Another NIC On Separate Networks While Using Remoting

The project I'm working on handles data capture from scan guns (Pocket PC 2003), processes this data on a host (Win XP), and then writes it into our inventory database on a separate server (Win 2000). This is all driven by the Remoting framework provided by MS and As Good As It Gets (http://gotcf.net). The application is complete enough for a general proof of concept, with both the client and server working properly in the emulator.
All was well until I began to test with actual scan guns. Due to security concerns, the scanners are on a separate network (for clarification, the 10 network) from the server (the 15 network). My development machine has dual NICs connected to both networks and can communicate with each independently. However, I am having issues with my application receiving information from the 10 network via .NET Remoting and then sending information to the server on the 15 network via a third-party app (a combination of ODBC, Btrieve, and OLE).
Is there any way to process information from one network and then update the server on another?
Any suggestions will be greatly appreciated!
Note: I'm not very familiar with networking, so I may be using the wrong terms, but the gun IPs start with 10...* and the server IPs start with 15...*
As long as the computer's routing table is properly configured, you shouldn't have to worry about this from your application. As long as you're using the proper IP addresses, the networking stack should take care of delivering things to the right place.
You might want to check the output of "route print" (at least I think that was available on Win XP; if not, someone else will likely post the correct command soon). Either way, you should see which network destinations are configured for which interfaces. You'll need to make sure that the server's IP on the 15 network routes via the interface you want (i.e. the lowest-cost matching destination/netmask lists your 15 interface).
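The project itself is .NET, but as a quick, language-neutral sanity check of which addresses sit on which interface (to complement route print), something like this Java sketch will list them:

    import java.net.InetAddress;
    import java.net.NetworkInterface;
    import java.util.Collections;

    public class ListInterfaces {
        public static void main(String[] args) throws Exception {
            // Print every interface and the addresses bound to it, so you can
            // confirm which NIC carries the 10.x and which the 15.x addresses.
            for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                    System.out.println(nif.getName() + " -> " + addr.getHostAddress());
                }
            }
        }
    }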
The issue seems to stem from both the NIC cards not being set up properly and a so-far unresolved issue with the frameworks I've chosen.
To solve the NIC problem, the easiest solution I found was to clear the default gateway on the 10 network.
The other issue deals with recreating the remoting objects after they've been destroyed. I currently have to warm-boot the scanner in order to reconnect to the host. To correct this, I'm going to contact As Good As It Gets to see what their input is. Damn firewall.

How do I detect hosts on my LAN?

To help users, I would like my code to discover Oracle databases on the LAN. I thought to do this by first detecting all hosts, then checking each host to see if it is listening on Oracle's default port.
Any ideas how to go about this? Preferably in Java, but any language or algorithm would do.
Are you using DHCP? If so, your DHCP server has a list of the leases it has handed out. That should give you a list of hosts on the LAN. Then try opening a connection to the Oracle port on each of those hosts and see if it accepts the connection.
It should be pretty simple to implement as a shell script of half a dozen lines or so; Java seems like overkill for something like this. Loop through the leases file, grab the IP from each lease, and telnet to the Oracle port; if it connects, disconnect and print the IP to standard out.
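Since the question does ask for Java, here is a minimal sketch of the same probe, assuming you have already collected candidate IPs (the addresses below are placeholders):

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.List;

    public class OracleProbe {
        static final int ORACLE_PORT = 1521; // default listener port; sites may change it

        public static void main(String[] args) {
            List<String> candidates = List.of("192.168.1.10", "192.168.1.11"); // e.g. parsed from the leases file
            for (String host : candidates) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, ORACLE_PORT), 500); // 500 ms timeout
                    System.out.println(host + " is listening on " + ORACLE_PORT);
                } catch (Exception ignored) {
                    // closed, filtered, or host not up
                }
            }
        }
    }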
If you want to stay platform-independent, and unless you have access to some kind of database that lists the hosts, the only way to get a list is to try each IP address in the local network; you might as well try to connect to the Oracle port on each of them.
There are lots of problems with this approach:
Will only search through the local network, which may only be a small part of the LAN (in case of large companies with lots of subnets)
Can take a long time (you definitely want to reduce the timeout for the connection attempts, but if someone has configured his LAN as a class A network, it will still take forever)
Can trigger all kinds of alerts, such as desktop users' personal firewalls, and intrusion detection systems - because you're doing exactly the same thing someone trying to exploit a security hole in Oracle servers would do
As brazzy points out, scanning for hosts is likely to cause problems, especially if there is a bug in your scanner.
A better approach may be to get the owners of the databases to register them somewhere, for example in a local DNS service (or does Oracle have zeroconf support?), or simply on some intranet webpage or wiki.
You'd better register the SID names/addresses with some server at a fixed address (perhaps via a simple web service), and then query the list from there. Another approach is the brute-force one (explained by brazzy) of scanning one or more subnets, but that isn't really a good thing to do.
In case you are looking for a tool, Loo#Lan can do this for you. Unfortunately, there's no source available...
All of these smart answers are the reasons why many companies do not use the default port. Using a different port for each database is entirely possible, you know.
