MongoDB no suitable servers found - php-mongodb

I'm having trouble connecting to a replica set.
[MongoDB\Driver\Exception\ConnectionTimeoutException]
No suitable servers found (`serverSelectionTryOnce` set):
[Server closed connection. calling ismaster on 'a.mongodb.net:27017']
[Server closed connection. calling ismaster on 'b.mongodb.net:27017']
[Server closed connection. calling ismaster on 'c.mongodb.net:27017']
I can, however, connect using MongoChef.

Switching any localhost references to 127.0.0.1 helped me. There is a difference between localhost and 127.0.0.1.
See: localhost vs. 127.0.0.1
MongoDB can be set to listen on a UNIX socket or over TCP/IP.
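In PHP that change is just the connection string; a minimal sketch, assuming the mongodb/mongodb library is installed via Composer and mongod is listening on the default TCP port:
<?php
// Use the loopback IP explicitly instead of "localhost",
// which can resolve differently (e.g. to ::1) depending on the system.
require 'vendor/autoload.php';

$client = new MongoDB\Client("mongodb://127.0.0.1:27017");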
If all else fails, what I've found that works most consistently across all situations is the following:
In your hosts file, make sure you have a name assigned to the IP address you want to use (other than 127.0.0.1).
192.168.0.101 coolname
or
192.168.0.101 coolname.somedomain.com
In mongodb.conf:
bind_ip = 192.168.0.101
Restart Mongo
NOTE1: When accessing mongo from the command line, you now have to specify the host.
mongo --host=coolname
NOTE2: You'll also have to change any references to either localhost or 127.0.0.1 to your new name.
$client = new MongoDB\Client("mongodb://coolname:27017");
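To confirm that the driver can actually reach the server under the new name, a small smoke-test sketch (same hostname as above; it assumes the mongodb/mongodb library is installed via Composer and no authentication is required):
<?php
require 'vendor/autoload.php';

try {
    $client = new MongoDB\Client("mongodb://coolname:27017");
    // Server selection is lazy, so run a cheap command to force a real connection.
    $client->selectDatabase('admin')->command(['ping' => 1]);
    echo "Connected\n";
} catch (MongoDB\Driver\Exception\ConnectionTimeoutException $e) {
    echo "Still no suitable servers: " . $e->getMessage() . "\n";
}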

I had the same error in a docker based setup:
container1: nginx listening on port 80
container2: php-fpm listening on port 9000
container3: mongodb listening on port 27017
nginx forwarding php to php-fpm
Trying to access mongodb from php gave this error.
In the mongodb Dockerfile, the culprit was:
CMD ["mongod", "--bind_ip", "127.0.0.1"]
Needed to change it to:
CMD ["mongod", "--bind_ip", "0.0.0.0"]
And the error went away. Hope this helps somebody.
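With mongod bound to 0.0.0.0, the PHP container then reaches it via the mongo container's name on the Docker network rather than via localhost. A sketch, assuming the container/service is resolvable as container3 and the mongodb/mongodb library is installed:
<?php
require 'vendor/autoload.php';

// From inside the php-fpm container, "localhost" is the PHP container itself,
// so address the mongo container by its DNS name on the shared Docker network.
$client = new MongoDB\Client("mongodb://container3:27017");

foreach ($client->listDatabases() as $db) {
    echo $db->getName(), "\n";
}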

The IP address of your home network may have changed, which would lead to MongoDB locking you out.
I solved this problem for myself by going to MongoDB Atlas and changing which IP address is allowed to connect to my data. Originally, I'd set it up to only allow connections from my home network. But my home network IP address changed, and I started getting the same error message as you.
To check if this is the same issue for you, go to MongoDB Atlas, open your project, and click "Network Access" on the left-hand side of the screen. That page shows which IP address(es) are allowed in, and it's where you can update them. To find your current IP address, go to whatismyipaddress.com and update MongoDB Atlas if it's different.

In my case, I am temporarily coding PHP from Windows 7 against MongoDB on my VPS running Linux Debian 9. The PHP will eventually be running on the same Linux box to provide an API to the MongoDB data.
By the way, this local Composer install does not appear to be doing me any good; it's pure ugliness. After the fix below, my PHP works without the require_once 'C:\Users\<Windows User Name>\vendor\autoload.php' line.
My fix is different from the accepted answer, which did not make sense to me.
I did not have to touch any hosts file.
Just edit your /etc/mongod.conf to bind to your target machine's IP and restart with sudo systemctl restart mongod. That's it.
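For reference, on a packaged install such as Debian 9, /etc/mongod.conf is YAML, so the relevant change looks roughly like this (the address is the example one from earlier in this thread; use your machine's actual IP):
# /etc/mongod.conf (YAML format)
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.0.101   # keep loopback, add the external interface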
I don't know what to blame:
the PHP and MongoDB sites for the terrible documentation and skimpy, incomplete PHP examples, or...
the MongoDB installation docs on Linux failing to mention this bindIp setting.
My startup experience with MongoDB has so far been very negative; given all the changes that have occurred, nothing matches what I expected from the videos I watched. I can't seem to find any that reflect what I am going through, like:
$DB_CONNECTION_STRING = "mongodb://user:password@164.152.09.84:27017";
$m = new MongoDB\Driver\Manager($DB_CONNECTION_STRING);
instead of
$m = new MongoClient();
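A slightly fuller sketch in that same style, using only the ext-mongodb driver classes (the URI credentials and IP are placeholders copied from above; the ping just forces a real connection attempt):
<?php
$manager = new MongoDB\Driver\Manager("mongodb://user:password@164.152.09.84:27017");

try {
    // Any cheap command works; ping forces server selection and a real handshake.
    $manager->executeCommand('admin', new MongoDB\Driver\Command(['ping' => 1]));
    echo "Connected\n";
} catch (MongoDB\Driver\Exception\ConnectionTimeoutException $e) {
    echo "No suitable servers found: " . $e->getMessage() . "\n";
}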
Hope this helps someone
PS. Always say NO to semicolons, camelCAsE and anything case-sensitive... absurdity at its best.

Related

Issues connecting Vagrant Xdebug with PhpStorm 2020.3

I've spent over a day trying to get PhpStorm to debug a Drupal site inside a Vagrant virtual machine running Xdebug, and I feel I'm close - but just not quite there yet.
Currently, in PhpStorm when debugging I have an error:
Waiting for incoming connection with ide key 'PHPSTORM'
In the Xdebug log in the VM (at /tmp/xdebug.log):
[2094] I: Checking remote connect back address.
[2094] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[2094] I: Checking header 'REMOTE_ADDR'.
[2094] I: Remote address found, connecting to 192.168.88.1:9000.
[2161] E: Time-out connecting to client (Waited: 200 ms). :-(
[2161] Log closed at 2020-12-15 04:25:32
I tried setting up the Zero-configuration debugging without any luck.
For the debugging configuration, I have a PHP Remote Debug that validates correctly although it always defaults to the 'Local Web Server or Shared Folder' rather than the 'Remote Web Server' option.
When I started the project I set it up as a local web server option and I'm worried that I haven't changed the correct settings to now make this a remote web server. The connection type for the deployment is now 'Local or mounted folder' but originally this was 'In-place'.
Under Languages & Frameworks --> PHP --> Servers, I set this up on port 80, using Xdebug and without path mappings. I tried changing the port etc but then it doesn't validate so I'm confident that the server settings are correct and that PhpStorm is talking to the virtual machine correctly.
I changed the /etc/php/7.4/cli/php.ini file but phpinfo() says the configuration is coming from /etc/php/7.4/fpm/php.ini. The changes I made in the php.ini file are active though. phpinfo() is showing:
xdebug.idekey: PHPSTORM
xdebug.remote_host: 10.0.2.2
xdebug.remote_port: 9000
xdebug.remote_autostart: Off
xdebug.remote_connect_back: On
Really at a loss now as to what to try next. It is incredibly frustrating so hope someone can shed some light.
EDIT --- As per the comment, here are the screenshots of the setup:
Deployment: Local or mounted folder
PHP settings
The server setup
Debug settings
Validate the debugger from the remote server
Run Chrome with the debug addon running
The error message
The Run debug configuration settings
Start of the phpinfo with the config file showing where the xdebug settings need to be edited.
The Xdebug settings of phpinfo()
Some of the tests as per the comments:
After logging in via vagrant ssh, the IP address to be used is shown (10.0.2.2). The local computer's IP is 10.1.1.150; a telnet test to both of these works.
A 'sudo nano' of the Xdebug ini file
NOTE: Changing the remote connect back to 0 fixed the connection:
xdebug.remote_connect_back=0
Then path mapping needed to be turned on in the Server settings, and everything was working correctly.
Big thanks to LazyOne for his helpful and thorough comments. :)
1) Check if there are separate php.ini files used by the web server -- you need to edit the right php.ini file.
Run phpinfo() and check to see if there is an ini file for Xdebug. In my case this was at /etc/php/7.4/fpm/conf.d/20-xdebug.ini
2) What's your Xdebug version?
Versions 2 and 3 of Xdebug have slightly different parameters. In my case I was running Xdebug 2.9.5.
3) What is the IP address of your Host OS as seen from inside the VM? That's where Xdebug should connect (it's Xdebug that connects to the IDE, NOT the other way around).
When you first log in over SSH, an IP address is shown. This is what the IP should be for Xdebug. In my example, this was 10.0.2.2.
4) xdebug.remote_connect_back: On -- try setting it off and ensure that you have the correct xdebug.remote_host (Xdebug v2 may not fall back to the remote_host value when the autodetected IP fails).
This fixed my connection!
sudo nano /etc/php/7.4/fpm/conf.d/20-xdebug.ini
Then change:
xdebug.remote_connect_back=1
To:
xdebug.remote_connect_back=0
Save, and then restart the server.
Test the connection again. After this step I had a connection between the host and the remote server, but I also had to turn on path mapping in PhpStorm to get the debugger working 100% (the resulting ini file is sketched just after this list).
5) Ensure that PhpStorm is the one listening on the Xdebug port when the "phone handle" icon is enabled (use netstat or similar to confirm) and that it is allowed through the firewall.
6) If you know the correct IP and are sure that PhpStorm is listening, you can just use telnet from inside the VM and try to connect to the IDE on the Xdebug port -- if it connects, then the IP, port, and firewall are most likely set up correctly.
Even with the error messages, the telnet check was working, so it pointed to the issue being with the Xdebug setup rather than the handshaking between the host and the remote server.
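Putting steps 1 to 4 together, the edited 20-xdebug.ini ended up along these lines (Xdebug 2 directive names; the zend_extension line and remote_enable=1 are assumptions about the rest of the file, while the host, port, and IDE key values are the ones from the question):
; /etc/php/7.4/fpm/conf.d/20-xdebug.ini  (Xdebug 2.x syntax)
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_connect_back=0
xdebug.remote_host=10.0.2.2
xdebug.remote_port=9000
xdebug.idekey=PHPSTORM
Restart PHP-FPM afterwards (e.g. sudo service php7.4-fpm restart) so the change is picked up.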
Thanks to LazyOne's comments for finding the answer and for presenting a great workflow to help identify the problem.
I was experiencing something very similar to what you described.
I was able to start a debug session from the xdebug cli tool (dbgpClient) which proved to me that it was an issue with phpStorm.
My project is using a legacy version of xdebug. (2.2.7)
Downgrading to phpStorm 2020.2.4 resolved my issue.
(It's one-click in the jetbrains toolbox to downgrade)
Thanks for the answers in this issue. It took me half a day to find out that Xdebug 2.2.7 with PHP 5.3.10 doesn't work on PhpStorm 2020.3. So I downgraded to 2020.2.4 and it works again.

Unable to connect to the remote server - No connection could be made because the target machine actively refused it 127.0.0.1

Running into the subject issue trying to update the proxies with nswag... funnily enough, the app this came with is preconfigured to use a specific port for that service, but I don't see anything on that port using netstat -ano on the command line. Does anyone have any thoughts?
Before running nswag/refresh.bat the host needs to be up and running.
To start the host, copy the lines below into a batch file (.bat) and run it.
CD "D:\Github\MySolution\src\MyProject.Web.Host"
SET ASPNETCORE_ENVIRONMENT=Development
SET ASPNETCORE_URLS=http://*:22742
dotnet run
It was some settings on the server (so a corporate guy told me), so nothing on my end, but thank you everyone for your help!

Unable to get Mesos to run from tutorial: Setting up a Single Node Mesosphere Cluster

I have been following this tutorial to try and setup a single node mesosphere cluster from their
official tutorial:
http://mesosphere.com/docs/getting-started/developer/single-node-install/
I followed all the commands without any issues, and I also added ports 5050 and 8080 to my security group. When I try to access the console for Mesos/Marathon, I get an "Internet Explorer cannot display the webpage" message.
They also recommend checking it the following way:
MASTER=$(mesos-resolve `cat /etc/mesos/zk`)
mesos-execute --master=$MASTER --name="cluster-test" --command="sleep 5"
But that comes up with an error:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0106 17:03:08.126703 20993 process.cpp:1561] Failed to initialize, gethostbyname2: Unknown host
*** Check failure stack trace: ***
I am not really sure how to troubleshoot this either, and I could not find many tutorials on how to install Mesos on Ubuntu.
I checked the contents of the zk file; it seems to be the default value.
$ cat /etc/mesos/zk
zk://localhost:2181/mesos
I would really appreciate any clues on how to go about this one.
Edit: The process is definitely running too - just an FYI:
root 31545 8.5 5.9 187464 35604 ? Ssl 17:28 0:00 /usr/local/sbin/mesos-slave --master=zk://localhost:2181/mesos --log_dir=/var/log/mesos
root 31563 28.5 2.1 116304 12856 ? Rs 17:28 0:00 /usr/local/sbin/mesos-master --zk=zk://localhost:2181/mesos --port=5050 --log_dir=/var/log/mesos --quorum=1 --wo
Mesos uses gethostbyname2 to resolve hostnames to IPs. The first thing I would recommend is to try "ping localhost" and "ping hostname", and verify that there are no strange settings in /etc/hosts. If you're doing a multi-node cluster, I'd recommend that the hostname map to the public IP address (not 127.0.x.1).
If that doesn't help, you can try setting the --ip and --hostname flags when starting mesos-master and mesos-slave, to bypass the gethostbyname2 resolution. These can also be set by writing to the file-based parameters, e.g. /etc/mesos/mesos-master/ip
For additional troubleshooting, try running wget http://localhost:5050 (or curl -L) from the mesos master, to verify that it is locally visible. Also try wget http://<public_ip>:5050 to verify that the web server is up and serving to the public IP. Depending on how your (EC2?) node is setup, you may need to expose/forward the port, or connect to a VPN.
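As a concrete sketch of those two options (the IP and hostname below are placeholders; the binary path, zk URL, and log directory are taken from the ps output above):
# Option 1: file-based parameter read at startup (path from the answer above)
echo 10.0.0.10 | sudo tee /etc/mesos/mesos-master/ip

# Option 2: pass the flags directly when launching the master
/usr/local/sbin/mesos-master --zk=zk://localhost:2181/mesos --port=5050 --quorum=1 \
    --ip=10.0.0.10 --hostname=mymaster --log_dir=/var/log/mesos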
Thanks Adam. I ran the wget and curl commands, and nothing was actually listening on port 8080 or 5050. I did open those ports in EC2. A simple reboot did the trick, however; once I SSHed into the EC2 instance after the reboot, both Mesos and Marathon were running, and both ports now show up after running
netstat -ntln.

Install Chef-Server 11 on EC2 Instance

I have been using hosted Chef for quite some time and wanted to explore the open-source Chef server, hence I am trying to set up Chef Server 11 on an EC2 instance.
I have the Chef server running and I can access its web GUI. I have the Chef workstation configured on another EC2 instance, which is also working fine.
Problem: I am not able to upload any cookbook.
I get below error when I try uploading the cookbook:
# knife cookbook upload getting-started
Uploading getting-started [0.4.0]
/opt/chef/embedded/lib/ruby/1.9.1/net/http.rb:763:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
However, other list commands of knife are working fine.
I did my homework and came across the links below:
http://www.opscode.com/blog/2013/03/11/chef-11-server-up-and-running/
http://www.curvve.com/blog/servers/2013/script-to-configure-and-set-your-hostname-and-fqdn-on-ec2-instances/
So,
It is mentioned that the Chef server needs a working FQDN. I set my public EC2 hostname as the hostname of the server and also added it to /etc/hosts, rebooted the instance, and ran chef-server-ctl reconfigure again, but I am still facing the same error.
QUESTION: How do I figure out the FQDN of the EC2 instance so the Chef server will work? If anyone has set up a Chef server successfully on EC2 and was able to upload cookbooks, please share your steps for working out the FQDN.
I was having a hard time with this but this solution worked!
Edit /etc/chef-server/chef-server.rb and add these lines (create the file if it doesn't exist):
server_name = "THE PUBLIC IP OF YOUR INSTANCE"
api_fqdn server_name
nginx['url'] = "https://#{server_name}"
nginx['server_name'] = server_name
lb['fqdn'] = server_name
bookshelf['vip'] = server_name
I found the solution here
http://sahebjade.blogspot.com/2013/05/check-your-knife-configuration-and.html
This is how I got it working: I updated the public DNS name of my EC2 instance (the chef-server) in /etc/sysconfig/network and ran service network restart. Now I am able to upload the cookbooks fine.
I need to think about an Elastic IP as a potential option for my chef-server.
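For anyone unfamiliar with that file, the change amounts to something like this (the hostname is a made-up EC2 public DNS name; use your instance's real one):
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ec2-203-0-113-25.compute-1.amazonaws.com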
Edit /etc/chef-server/chef-server.rb and add these lines (create the file if it doesn't exist):
bookshelf["vip"] = node["ipaddress"]
bookshelf["url"] = "https://#{node["ipaddress"]}"
erchef['s3_url_ttl'] = 3600
The first two lines will point your chef-server URL to the machine's IP, and the third will solve a timeout issue that apparently always exists when the Chef server is on EC2.
I wanted to expand a bit on the answers since they don't give a complete picture. This applies to Chef 11 (hopefully Chef 12 is smarter).
In my case I rolled a master up under VPC #1 which gave it an internal address like this
ip-10-0-0-10.ec2.internal
Because I was only playing with the VPC initially, I had misconfigured some things I needed, so I had to drop it and create a new scheme. Thankfully, I was able to snapshot the old Chef master and bring it up under the new VPC, but I found that I couldn't log into Chef anymore. It took some digging, but I found in my /var/log/chef-server/chef-server-webui/current log that the install had glommed onto the old hostname and set that as the internal URL for... everything. This caused problems after the internal hostname change:
2014-12-24_16:19:09.46680 SocketError: Error connecting to https://ip-10-0-0-10.ec2.internal/users/admin - getaddrinfo: Name or service not known
Now, to the OP's remark:
Need to think about an Elastic IP as a potential option for my chef-server
In my case, I just added a CNAME in CloudFlare and set that as my permanent address. Since I can set a low TTL on that one record, it's easy to move it around between IP changes (I don't need an Elastic IP while I'm just getting it configured). This way I could tell Chef to always look for the same URL and not worry about an EIP.
Once that was done, I had to update Chef. I don't know what changed (this is 11.16.4), but I found the configs live in /var/opt/chef-server/chef-server-webui/etc/chefserver.rb, as opposed to the chef-server.rb some of the other answers mention. Not sure if that's a YMMV thing or not.
I changed the following towards the bottom of that file
# Environment specific application configuration.
# These values override the ones set in 'RAILS_ROOT/config/application.rb'
#config.chef_server_url = "https://ip-10-0-0-10.ec2.internal"
config.chef_server_url = "https://chef.mydomain.com"
I also changed /var/opt/chef-server/nginx/etc/chef_https_lb.conf
server_name chef.mydomain.com;
Finally I restarted Chef
chef-server-ctl restart
That seems to have done the trick. Logins work again.

Installing Membase from source

I am trying to build and install membase from source tarball. The steps I followed are:
Un-archive the tar membase-server_src-1.7.1.1.tar.gz
Issue make (from within the untarred folder)
Once done, I enter the install/bin directory and invoke the membase-server script.
This starts up the server with a message:
The maximum number of open files for the membase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
I tried updating limits.conf as suggested, but no luck; it continues to pop up the same message and continues booting.
Given that the server is started, I tried accessing memcached over port 11211, but I get a connection refused message. I then figured out (via netstat) that memcached is listening on 11210 and tried telnetting to port 11210; unfortunately, the connection is closed as soon as I issue the following commands:
stats
set myvar 0 0 5
Note: I am not getting any output from the commands above. (Yes, stats did not show anything, but I still issued set.)
Could somebody help me build and install Membase from source? Also, why is memcached listening on 11210 instead of 11211?
It would be great if somebody could also give me a step-by-step guide that I can follow to build from source from the Git repository (I have not used autoconf before).
P.S.: I have tried installing from binaries (Debian package) on the same machine and am able to successfully install and telnet, hence I'm not sure why the build from source is not working.
You can increase the number of file descriptors on your machine by using the ulimit command. Try doing (you might need to use sudo as well):
ulimit -n 10240
I personally have this set in my .bashrc so that whenever I start my terminal it is always set for me.
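For persistence there are two common routes: the shell profile mentioned above, or the limits.conf entries the Membase warning points at. A sketch, assuming the service runs as a user named membase:
# ~/.bashrc -- raises the limit for your interactive shells
ulimit -n 10240

# /etc/security/limits.conf -- raises the login limit for the (assumed) membase user
membase   soft   nofile   10240
membase   hard   nofile   10240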
Also, memcached listens on port 11210 by default for Membase. This is done because Moxi, the memcached proxy server, listens on port 11211. I'm also pretty sure that the memcached version used for Membase only listens for the binary protocol, so you won't be able to successfully telnet to 11210 and have commands work correctly. Telnetting to 11211 (Moxi) should work, though.
