macOS "/etc/resolver/dev" isn't working – why not?

I expect to be able to resolve the DNS name www.foobar.dev using a DNS server that's running in a VM on my macOS (High Sierra) system, because I have created an /etc/resolver/dev file containing the following single line (specifying the VM's virtual address):
nameserver ww.xx.yy.zz
... but dig www.foobar.dev continues to consult the Internet nameserver,
while dig @ww.xx.yy.zz www.foobar.dev successfully retrieves the entry from the VM's DNS.
I've used the dscacheutil command to be sure that an errant entry is not in the DNS resolver cache.
So, why isn't the presence of an /etc/resolver/dev file with the specified contents sufficient to direct "anything.dev" to the specified DNS server?
Interestingly – sometimes it seems to work. Also, the command scutil --dns produces the following expected entry, which seems to indicate that the /etc/resolver/dev file is being detected!
resolver #8
domain : dev
nameserver[0] : ww.xx.yy.zz
flags : Request A records
reach : 0x00020002 (Reachable,Directly Reachable Address)

It's probably working fine, you're just testing it wrong. dig (and host and nslookup) don't use the system resolver, nor do they fully implement the system resolver's lookup policy. As a result, they're useful for testing the DNS system itself, but not for testing how the OS uses DNS. The official way to test the system resolver is dscacheutil (e.g. dscacheutil -q host -a name www.foobar.dev), but that's annoyingly verbose, so I tend to just use ping and look at the IP it reports.
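For example, a quick check with the hypothetical name from the question might look like this (both commands go through the system resolver, unlike dig):
dscacheutil -q host -a name www.foobar.dev
ping -c 1 www.foobar.dev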

As @GordonDavisson said in the other answer, the ping command is useful for testing the system resolver. My addition is that it may also fail because of the DNS cache. Don't forget to clear it:
sudo killall -HUP mDNSResponder
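On recent macOS releases the directory-services cache is usually flushed as well, so a fuller sequence (assuming High Sierra or later) would be:
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder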

Better to replace the /etc/resolver files with a true DNS configuration; just like /etc/resolv.conf, this is all legacy stuff kept only for backward compatibility (and maybe because POSIX requires it?).
Here's how you can do it from the command line using scutil; it's really simple.
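As a rough sketch of such a session (assumptions: <service-id> is a placeholder for the PrimaryService ID reported by show State:/Network/Global/IPv4, and ww.xx.yy.zz is the VM address from the question):
sudo scutil
> d.init
> d.add ServerAddresses * ww.xx.yy.zz
> d.add SupplementalMatchDomains * dev
> set State:/Network/Service/<service-id>/DNS
> quit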
Of course, there is also a programmatic interface to all this.
See Apple's SystemConfiguration Framework.

Related

MongoDB no suitable servers found

I'm having trouble connecting to a replica set.
[MongoDB\Driver\Exception\ConnectionTimeoutException]
No suitable servers found (`serverSelectionTryOnce` set):
[Server closed connection. calling ismaster on 'a.mongodb.net:27017']
[Server closed connection. calling ismaster on 'b.mongodb.net:27017']
[Server closed connection. calling ismaster on 'c.mongodb.net:27017']
I can, however, connect using MongoChef.
Switching any localhost references to 127.0.0.1 helped me. There is a difference between localhost and 127.0.0.1.
See: localhost vs. 127.0.0.1
MongoDB can be set to run on a UNIX socket or TCP/IP
If all else fails, what I've found that works most consistently across all situations is the following:
In your hosts file, make sure you have a name assigned to the IP address you want to use (other than 127.0.0.1).
192.168.0.101 coolname
or
192.168.0.101 coolname.somedomain.com
In mongodb.conf:
bind_ip = 192.168.0.101
Restart Mongo
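How you restart depends on how MongoDB was installed; as a sketch, on a systemd-based Linux box or with a Homebrew install on macOS (formula name may differ) it might be one of:
sudo systemctl restart mongod
brew services restart mongodb-community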
NOTE1: When accessing mongo from the command line, you now have to specify the host.
mongo --host=coolname
NOTE2: You'll also have to change any references to either localhost or 127.0.0.1 to your new name.
$client = new MongoDB\Client("mongodb://coolname:27017");
I had the same error in a docker based setup:
container1: nginx listening on port 80
container2: php-fpm listening on port 9000
container3: mongodb listening on port 27017
nginx forwarding php to php-fpm
Trying to access mongodb from php gave this error.
In the mongodb Dockerfile, the culprit was:
CMD ["mongod", "--bind_ip", "127.0.0.1"]
Needed to change it to:
CMD ["mongod", "--bind_ip", "0.0.0.0"]
And the error went away. Hope this helps somebody.
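Equivalently, if you start the container by hand rather than from a custom Dockerfile, the bind address can be passed on the command line (the image tag and container name here are just examples):
docker run -d --name mongodb -p 27017:27017 mongo mongod --bind_ip 0.0.0.0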
The IP address of your home network may have changed, which would lead to MongoDB locking you out.
I solved this problem for myself by going to MongoDB Atlas and changing which IP address is allowed to connect to my data. Originally, I'd set it up to only allow connections from my home network. But my home network IP address changed, and I started getting the same error message as you.
To check if this is the same issue with you, go to MongoDB Atlas, go into your project, and click "Network Access" on the left hand side of the screen. That's where you're able to update your IP address. It shows you what IP address(es) it's allowing in. To find out what your current IP address is, go to whatismyipaddress.com and update MongoDB if it's different.
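If you prefer the command line, a quick way to see your current public IP (assuming curl and the third-party ifconfig.me service are available) is:
curl ifconfig.me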
In my case, I am temporarily coding PHP from Windows 7 against MongoDB on my VPS running Debian 9 Linux. The PHP will eventually run on the same Linux box to provide an API to the MongoDB data.
BTW, it doesn't appear this local Composer install is doing me any good; it's pure ugliness. After the fix below, my PHP works without the require line require_once 'C:\Users\<Windows User Name>\vendor\autoload.php'.
My fix is different from the accepted answer, which to me did not make sense.
I did not have to touch any hosts file.
Just edit your /etc/mongod.conf so that bindIp includes your target machine's IP, then restart with sudo systemctl restart mongod. That's it.
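For reference, the relevant part of a YAML-style /etc/mongod.conf might look like this (the address is a placeholder; keep 127.0.0.1 in the list if you still want local connections to work):
net:
  port: 27017
  bindIp: 127.0.0.1,<your-server-ip>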
I don't know what to blame:
the PHP and MongoDB sites for the terrible documentation and the skimpy, incomplete PHP examples, or...
the MongoDB installation on Linux failing to mention this bindIp setting.
My startup experience with MongoDB has so far been very negative; given all the changes that have occurred, nothing matches what I expected from the videos I watched. I can't seem to find any that reflect what I am going through, like:
$DB_CONNECTION_STRING = "mongodb://user:password@164.152.09.84:27017";
$m = new MongoDB\Driver\Manager($DB_CONNECTION_STRING);
instead of
$m = new MongoClient();
Hope this helps someone
PS. Always say NO to semicolons, camelCAsE and anything case-sensitive... absurdity at its best.

setsockopt IPV6_TCLASS 16: Protocol not available, Cygwin64

I'm trying to install Hadoop 1.0.3 using Cygwin64 on Win8.1. After I completed the configuration and started the SSHD service, I ran ssh cyg_server@localhost and got this:
cyg_server@localhost's password:
setsockopt IPV6_TCLASS 16: Protocol not available:
I'm completely new to Cygwin64 and Hadoop; thanks in advance for any help.
From the client side, just add
-oAddressFamily=inet
to the parameters passed to ssh, or add
AddressFamily inet
to ~/.ssh/config, either globally or for a specific host.
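For example, a one-off invocation and a per-host config entry might look like this (cyg_server@localhost is from the question; the Host pattern is just an example):
ssh -o AddressFamily=inet cyg_server@localhost

Host localhost
    AddressFamily inet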
Basically you want to turn off IPV6 and use IPV4. To do this, stop your sshd service if you have it running:
net stop sshd
Then edit the file /etc/ssh_config by adding (or modifying) the AddressFamily setting:
AddressFamily inet
The default is set to all. Setting the value to inet forces IPV4 which fixed the problem for me. After you make the change, restart sshd and you should be good to go:
net start sshd
Good luck!
I had a similar issue with Cygwin logging in to IPv6-enabled servers.
Upgrading Cygwin (on the client side) to the latest version solved my problem.
I'm tired of all the "just disable IPv6" suggestions. It's 2014 and IPv6 is here; we should fix bugs and issues with this "new" protocol instead of negating it.
This error happens when OpenSSH attempts to set the "type of service" field for an IPv6 connection on a system that defines IPV6_TCLASS in <sys/socket.h>, but where the kernel doesn't support it (a 2.4-series kernel or older versions of Cygwin).
It may reduce performance in a situation where something is performing traffic shaping/QoS, but is otherwise harmless.

Can't access sinatra server from other computers

I am running a Sinatra server with Shotgun that returns a hello world on a GET request at the root (typical tutorial), and it works perfectly on my computer. At first I could only access it from localhost:9393; then I ran it with -o 0.0.0.0 and could access it as IP:9393, but still only from the computer where the server was running.
How can I access the server from other computers? I've already tried bind 0.0.0.0 and environment production.
Thanks in advance.
A bit more information is needed, like the OS you are running and whether you have made sure that any local firewalls are not blocking your traffic. I see that you marked this with the "shotgun" tag, which tells me you are running on a *nix system, since Shotgun uses forks and Windows doesn't support them.
Check your iptables rules and see if you've got anything in there. :)
iptables -nvL -t nat --line-numbers
iptables -nvL --line-numbers
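If a rule there is dropping the traffic, a quick (non-persistent) way to open the Sinatra port might be the following (9393 is the port from the question):
sudo iptables -I INPUT -p tcp --dport 9393 -j ACCEPT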

How to generate a resolv.conf from every DHCP lease on Mac?

I just want to use a resolv.conf file generated from the DHCP lease, rather than the system's /etc/resolv.conf. How can I make a script that regenerates a resolv.conf every time a DHCP lease is obtained?
Your question is very hard to understand, but I'll give it a shot...
/etc/resolv.conf is not canonical on OS X. If you want to change the system DNS settings then you need to use the System Configuration framework (from code), or networksetup or scutil (from the command line). There's an article about using scutil here.
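For example, with networksetup (the "Wi-Fi" service name and the 8.8.8.8 address are just examples; list your own services first):
networksetup -listallnetworkservices
sudo networksetup -setdnsservers "Wi-Fi" 8.8.8.8
networksetup -getdnsservers "Wi-Fi"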

SSH hangs on MacBook Pro; AFS and Network Preferences?

I am having an issue with SSH hanging on my MacBook Pro. This only happens once I get home from work after I have used SSH while at work. The three factors I have narrowed the issue down to are SSH, our work AFS network drive, and the method of network connectivity.
At work we use an AFS drive with Kerberos Authentication to do all of our software development work on. I authenticate with Kerberos in order to gain access to the AFS drive where all my source code lives, but I open a local editor (Eclipse) which references the files on the AFS drive. Whenever I need to compile my code, I SSH in to my development server (which is also authenticated to the AFS drive) and compile from there. (Sanity Note: I know that it is a super wacky setup, but I promise I had NOTHING to do with it. I'm just making do with what I've got.)
For my Network Preferences, I use the Automatic location all the time. For that configuration I have Built-in Ethernet en1 configured to use DHCP and our company's DNS server for when I'm at work (there is no wireless available). When I go home I connect to my home network via wireless, again using DHCP.
I have a hunch that the AFS connection/Ethernet configuration is somehow the culprit here. Restarting the SSH daemon doesn't correct the problem. The only way I have found to correct the issue is by restarting the computer each time I want to use SSH. Keep in mind that I have no other (known) networking issues while at home after I've had the laptop at work.
I have a co-worker who has reported to me the same issue on his MBP.
I'm truly stumped on this one. Please provide some guidance. Thanks!
Can you be more specific about "SSH hanging"?
It sounds like your ssh client hangs after losing the connection and you are unable to do anything in the terminal. To get around this, you can use the ssh escape character (default: ‘~’) to begin an escape sequence, and then type '.' to terminate the connection.
You can get a list of other ssh escape sequences using ~?, here's the one for OpenSSH SSH client:
Supported escape sequences:
~. - terminate connection
~B - send a BREAK to the remote system
~C - open a command line
~R - Request rekey (SSH protocol 2 only)
~^Z - suspend ssh
~# - list forwarded connections
~& - background ssh (when waiting for connections to terminate)
~? - this message
~~ - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)
If typing ~. does not work, it could be that you have the escape character disabled, in which case you can put
EscapeChar ~
inside ~/.ssh/config or /etc/ssh_config
Even when the escape character is disabled, you can simply pull up another Terminal window and type
killall ssh
to end all running ssh processes, allowing you to connect out again.
Restarting the SSH daemon would not correct this problem because sshd allows other clients to connect in to your machine, and does not affect your ssh clients connecting out to some other machine.
It appears that the fix for my issue is to delete my Kerberos tokens that are valid while at work, but not valid when at home. Hope this can help anyone having a similar issue.
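If you use the standard Kerberos command-line tools that ship with macOS, a minimal sketch for inspecting and deleting the cached tickets would be:
klist        # show the Kerberos tickets currently cached
kdestroy     # destroy the tickets in the default credential cache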
Just a shot in the dark:
I recently had problems using ssh after installing Rogue Amoeba Audio Hijack Pro.
I could only use ssh as super user (sudo).
An update to Audio Hijack Pro 2.8.1 resolved the issue...
Also see http://www.macobserver.com/article/2008/03/19.8.shtml for the issue.
