I am using Capybara with Selenium to run headless Chrome, which works great, except that I cannot figure out how to use remote debugging. When I add --remote-debugging-port=4444 (or 9222, or 9521), Selenium no longer connects to the browser to run the test.
How do I get remote debugging to work? Here is my code for reference:
Capybara.register_driver :selenium do |app|
  # from https://github.com/SeleniumHQ/selenium/issues/3738
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(loggingPrefs: { browser: 'ALL' })

  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument '--disable-infobars' # hide info bar about chrome automating test
  # if we don't use this flag, every selenium test will die with the error:
  #   "unknown error: Chrome failed to start: exited abnormally"
  options.add_argument '--no-sandbox'
  # BREAKS THINGS if uncommented
  # options.add_argument '--remote-debugging-port=4444'
  options.add_argument '--headless'
  options.add_argument '--window-size=1600,2400'

  options.add_preference('profile.default_content_settings.popups', 0)
  options.add_preference('download.default_directory', DownloadHelpers::PATH.to_s)

  Capybara::Selenium::Driver.new(
    app,
    clear_local_storage: true,
    clear_session_storage: true,
    browser: :chrome,
    options: options,
    desired_capabilities: capabilities,
  )
end
Since Chrome 67 and chromedriver 2.39, chromedriver correctly uses the port you specify with --remote-debugging-port. This removes quite a bit of complexity from my answer below. The steps I now take, which work for my use case of needing to configure download settings using chrome_remote, are as follows.
The approach makes use of a Node.js library, crmux, which allows multiple clients to connect to Chrome's remote debug port at the same time.
Get Node.js installed first: Node.js v9.7.0 works fine
Install crmux by running npm install crmux -g
Before you start chromedriver (Capybara::Selenium::Driver.new), you need to spawn a separate thread that will fire up crmux, which will let both you and chromedriver communicate with chrome itself via the port you specified in Capybara (4444):
crmux --port=4444 --listen=4444
You may want to add a sleep 3 after the spawn command in the main script/thread to give time for crmux to start before you continue with your test startup.
You can then use chrome_remote (for example) to access chrome using port 4444, while capybara is doing its thing.
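Putting those steps together, a minimal sketch might look like the following (the spawn/sleep wiring and the chrome_remote calls here are illustrative assumptions, not a verbatim snippet from my suite; adjust the port, paths, and DevTools commands to your setup):

require 'chrome_remote'

CHROME_DEBUG_PORT = 4444

# Start crmux in the background so that both chromedriver and our own tooling
# can talk to Chrome's remote debug port at the same time.
crmux_pid = spawn("crmux --port=#{CHROME_DEBUG_PORT} --listen=#{CHROME_DEBUG_PORT}")
Process.detach(crmux_pid)
sleep 3 # give crmux time to start before chromedriver tries to connect

# ...register the Capybara driver as usual, including
#   options.add_argument "--remote-debugging-port=#{CHROME_DEBUG_PORT}"

# Later (e.g. in a test helper), talk to Chrome directly over the DevTools
# protocol while Capybara keeps driving the browser:
chrome = ChromeRemote.client(host: 'localhost', port: CHROME_DEBUG_PORT)
chrome.send_cmd 'Page.setDownloadBehavior',
                behavior: 'allow',
                downloadPath: '/tmp/downloads' # example path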
Updating my ChromeDriver fixed it for me. I didn't have to do anything else. Before it would hang when attempting to start the test.
Specifically I was on ChromeDriver 2.36 and I upgraded to ChromeDriver 2.40. I don't think the Chrome version was a problem, since I was on Chrome 67 both before and after.
Here's how I'm registering the driver:
Capybara.register_driver :headless_chrome do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w[headless window-size=1280,960 remote-debugging-port=9222] }
  )
  Capybara::Selenium::Driver.new(app, browser: :chrome, desired_capabilities: capabilities)
end
After that I ran my test with a debugger (binding.pry) placed where I wanted to inspect. Then when I hit the debugger I navigated to http://localhost:9222/ in a normal instance of Chrome and was able to follow a link to view what was happening in the headless Chrome instance, including the browser console output I needed.
Update: if using versions since Chrome 67/chromedriver 2.39, my alternative answer above provides a simpler solution
The core issue here is that chromedriver also uses the remote debugging port to communicate with Chrome. This connection uses the WebSocket protocol, which only supports a single client at a time. Normally, when chromedriver starts the Chrome process, it chooses a random free TCP port number and uses that as the remote debug port. If you specify --remote-debugging-port=9222, however, Chrome will be opened with the debug port you requested, but chromedriver will silently continue trying to open a connection on its random port number.
The solution I ended up with was heavily inspired by comment 20 in this chromedriver issue. It required quite a bit of code to get working, but works solidly. It makes use of a Node.js library, crmux, which allows multiple clients to connect to Chrome's remote debug port at the same time.
Get Node.js installed first: Node.js v9.7.0 works fine
Install crmux by running npm install crmux -g
Before you start chromedriver (Capybara::Selenium::Driver.new), you need to spawn a separate thread that will do a few things: look for the remote debug port chromedriver is trying to use to connect to chrome, and then use this to fire up crmux. Once this has happened Capybara etc will work as normal.
My separate thread runs a ruby script that first executes a netstat command repeatedly until it finds the relevant entry for chromedriver (TCP status is SYN_SENT). This separate thread must continue to run in the background while chrome is up and running.
The code for this is:
$chrdrv_wait_timeout = 60
$chrdrv_exe = "chromedriver.exe"

def get_netstat_output
  stdout = `netstat -a -b -n`
  stat_lines = stdout.split("\n")
  stat_lines
end

def try_get_requested_port
  socket_state = "SYN_SENT" # i.e. sent with no reply
  statout = get_netstat_output
  n = statout.length
  i = 0
  loop do
    i += 1
    # find lines relating to chromedriver
    exe_match = /^ +\[#{$chrdrv_exe}\]$/.match statout[i]
    if exe_match != nil
      # check the preceding line, which should contain the port info
      port_match = /TCP.*:([0-9]+)\W+#{socket_state}/.match statout[i - 1]
      if port_match != nil
        return port_match[1].to_i
      end
    end
    break unless i < n
  end
  return nil
end

def get_tcp_port_requested_by_chromedriver
  i = 1
  loop do
    puts "Waiting for #{$chrdrv_exe}: #{i}"
    port = try_get_requested_port
    if port != nil
      return port
    end
    break unless i < $chrdrv_wait_timeout
    sleep 1
    i += 1
  end
  raise Exception, "Failed to get TCP port requested by #{$chrdrv_exe} (gave up after #{$chrdrv_wait_timeout} seconds)"
end
(I'm working on Windows: for Mac/Linux the netstat syntax/output is probably different, so the code will need adjustment; the key thing is that you need it to output the executable owner of each connection entry, and then parse the entry relating to chromedriver to get the port in question.)
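For what it's worth, an untested sketch of what the equivalent lookup might look like on Linux, using ss instead of netstat (the output format and column position assumed here may differ on your system, so verify against your ss version):

# Untested Linux variant: `ss` can filter on TCP state directly, and -p shows
# the owning process. The peer address column holds the port chromedriver is
# trying (and failing) to reach.
def try_get_requested_port_linux
  `ss -tnp state syn-sent`.split("\n").each do |line|
    next unless line.include?('"chromedriver"')
    peer = line.split[3] # e.g. "127.0.0.1:12225" (column position is an assumption)
    port = peer && peer[/:(\d+)\z/, 1]
    return port.to_i if port
  end
  nil
end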
Once this has found the random port (I'll use 12225 as an example), the background ruby script can then execute a crmux process, which will reunite chromedriver with chrome itself via the port you specified in Capybara (4444):
crmux --port=4444 --listen=12225
Finally, this separate script saves the discovered listen port to a text file. This allows the main script/thread that is running capybara to know the port number it can use to get access to chrome (via crmux's multiplexed connection) by reading in the port from that file. So you can then use chrome_remote to access chrome using port 12225, for example, while capybara is doing its thing.
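Condensed into one place, the background piece can be sketched roughly like this (here the separate script is folded into a thread for brevity, and PORT_FILE is just a hypothetical location for the saved port):

PORT_FILE = File.expand_path('tmp/chromedriver_debug_port.txt') # hypothetical path

# Started before Capybara::Selenium::Driver.new; keeps running in the background.
Thread.new do
  listen_port = get_tcp_port_requested_by_chromedriver
  File.write(PORT_FILE, listen_port.to_s) # let the main thread know the port
  # Multiplex: crmux talks to Chrome on 4444 and serves chromedriver (and any
  # other client, e.g. chrome_remote) on the port chromedriver already chose.
  system("crmux --port=4444 --listen=#{listen_port}")
end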
Related
While running automation tests I sometimes get a timeout error from Selenium WebDriver (I think that's where the issue is, at least). My team and I have all recently migrated to MacBooks (from a mix of Windows and Ubuntu machines) and are all seeing this behaviour.
While running a suite of tests I will (seemingly at random) get the following error output in the console:
Errno::ETIMEDOUT: Failed to open TCP connection to 127.0.0.1:9515 (Operation timed out - connect(2) for "127.0.0.1" port 9515)
This doesn't happen consistently, sometimes I'll run a pack and not have any such errors, sometimes I'll have multiple occurrences.
Here is the code which registers the driver (in case anything here points to what the issue could be):
Capybara.register_driver :selenium do |app|
  opts = Selenium::WebDriver::Chrome::Options.new
  opts.add_argument '--start-maximized'
  opts.add_argument 'disable-infobars'
  opts.add_argument '--disable-notifications'
  opts.add_preference(:safebrowsing, enabled: true)
  opts.add_preference(:browser, set_download_behavior: { behavior: 'allow' })
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: opts)
end
The gems I am using are Capybara (3.11.0), Cucumber (3.1.0) and selenium-webdriver (3.141.0). I have ChromeDriver (73.0.3683.68) installed via Homebrew.
Has anyone encountered this issue and worked out what the cause is?
Port 9515 is the default port chromedriver runs on. If you happen to be using Chrome/chromedriver v74, try rolling back to 73 or forward to 75; v74 has been reported to have issues where it will randomly hang.
Another potential solution is to upgrade to Capybara >= 3.16.0 which defaults to using a persistent connection to chromedriver. This would mean less opening/closing of connections and less chance for chromedriver to hang during connection establishment.
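For example, a one-line Gemfile change (assuming you use Bundler):

# Capybara 3.16.0+ keeps a persistent connection to chromedriver
gem 'capybara', '>= 3.16.0'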
I am using Firefox v48.0.2 and am trying to get my Selenium (selenium-server v2.53) RemoteDriver automated tests to run on Firefox. I have geckodriver 0.9.0 installed, and when I go through the documentation in the GitHub readme and run this command (on Mac OS X 10.11.3):
geckodriver -b /Applications/FirefoxNightly.app/Contents/MacOS/firefox-bin
i get this error message:
thread '< main >' panicked at 'called Result::unwrap() on an Err value: Io(Error { repr: Os { code: 48, message: "Address already in use" } })', ../src/libcore/result.rs:746
note: Run with RUST_BACKTRACE=1 for a backtrace.
I've tried ignoring this step, but when I run my tests Firefox does not launch. I have ensured that my WebDriver capabilities include marionette: true:
WebDriver:
  browser: 'firefox'
  clear_cookies: false
  restart: false
  window_size: 414x736
  marionette: true
Other than that I cannot find any documentation to lead me in the correct direction. Did I perhaps overlook something? Any help is greatly appreciated!
I am also using Codeception to handle my tests (PHP).
EDIT
I was able to get this command to work after killing the process that was listening on port 4444:
geckodriver -b /Applications/FirefoxNightly.app/Contents/MacOS/firefox-bin
But even with that running, Firefox is still not launching.
I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue. I was hoping somebody here can help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3, and I've also installed Nginx as a reverse proxy. But I'm already fairly sure Nginx isn't the problem here, because I've tried to verify whether the socket was working using this script, and the socket wasn't working: I got a timeout there as well, so I could rule Nginx out.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption; it still has several GBs of free space, so that couldn't be the issue either.
What triggered me to look at the puma socket was the error message I got in my Nginx error logging:
upstream timed out (110: Connection timed out) while reading response header from upstream
Also, I couldn't find anything in Puma's logs indicating what is going wrong. Here is my Puma setup:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this is the output in my Puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
  cli_options:
  conf:
  options:
    :min_threads: 0
    :max_threads: 16
    :quiet: false
    :debug: false
    :binds:
    - unix:///APP/sockets/puma.sock
    :workers: 1
    :daemon: true
    :mode: :http
    :before_fork: []
    :worker_timeout: 60
    :worker_boot_timeout: 60
    :worker_shutdown_timeout: 30
    :environment: staging
    :redirect_stdout: "/APP/log/puma.stdout.log"
    :redirect_stderr: "/APP/log/puma.stderr.log"
    :redirect_append: true
    :pidfile: "/APP/pids/puma.pid"
    :state: "/APP/pids/puma.state"
    :control_url: unix:///tmp/puma-status-1449260516541-37
    :config_file: config/puma.rb
    :control_url_temp: "/tmp/puma-status-1449260516541-37"
    :control_auth_token: cda8879717be7a645ea323d931b88d4b
    :tag: APP
The application itself is a Rails app on the latest version 4.2.5, it's deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me further.
EDIT
I replaced the Unix socket with a TCP connection to Puma, with the same result: it still hangs after some amount of time.
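(For reference, the TCP variant was just a different bind line in the Puma config; the address and port here are only an example.)

# config/puma.rb: instead of the Unix socket bind
# bind "unix://#{app_dir}/sockets/puma.sock"
bind "tcp://127.0.0.1:3000"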
I'd start with:
How many requests get processed successfully per instance of puma?
Make sure you log the beginning and end of each request with the thread id of the thread executing it; what do you see? (A minimal logging sketch is below.)
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts or spinning on some computation until the whole thread pool gets depleted.
We'll see.
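If it helps, here is a minimal sketch of that kind of per-request logging as a Rack middleware (the class name and log format are made up for illustration):

require 'logger'

# Hypothetical middleware: logs the start and end of every request together
# with the handling thread's object_id, so stuck threads stand out in the log.
class RequestThreadLogger
  def initialize(app, logger = Logger.new($stdout))
    @app = app
    @logger = logger
  end

  def call(env)
    tid = Thread.current.object_id
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @logger.info("[thread #{tid}] start #{env['REQUEST_METHOD']} #{env['PATH_INFO']}")
    @app.call(env)
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    @logger.info("[thread #{tid}] end   #{env['PATH_INFO']} (#{elapsed.round(3)}s)")
  end
end

# config.ru:
#   use RequestThreadLogger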
I finally found out why my application was behaving the way it was.
After trying to use a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought maybe my connection to Google Cloud SQL could be the problem. Once I read the Cloud SQL FAQ, I saw that they mention you have to tweak your Compute Engine instances to ensure they keep your DB connection open. So I performed the steps they recommend, and that solved the problem for me. I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
I want to use SSL with MongoDB. It's not enabled by default so one has to compile from source with the necessary options. I followed the official documentation and got the v2.6.4 binary built and running nicely on a freshly deployed server running Ubuntu 14.04. All good so far.
Next I set up mongod as described in the official docs. I followed their example of using a self-signed certificate for testing purposes. The relevant part of the config looks like:
...
net:
  bindIp: 127.0.0.1
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/security/mongodb.pem
...
If I then run the client and tell it to use SSL ($ mongo --ssl), I connect fine. FWIW, if I try without the --ssl argument it doesn't connect.
Ok, time to link up via Ruby. I'm on the same server and I try the following ruby script:
require 'rubygems'
require 'mongo'
client = Mongo::MongoClient.new('localhost', 27017, {:ssl => true})
Nope. It's not having it:
/home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:422:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:661:in `setup'
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:177:in `initialize'
from test_mongo_ssl.rb:8:in `new'
from test_mongo_ssl.rb:8:in `<main>'
So, best to make sure there's nothing wrong with the default connection without SSL. I disable SSL on mongod and restart, then try the Ruby script again, this time without the ssl option:
...
client = Mongo::MongoClient.new('localhost', 27017)
And it's fine. Therefore I feel I've narrowed it down to the ruby driver & ssl, but beyond that there's little else to go on.
EDIT I tried their Python driver on the same server and used their example program:
from pymongo import MongoClient
c = MongoClient(host="localhost", port=27017, ssl=True)
And that did connect OK. So at least I can feel fairly confident that the mongod is configured properly and the issue lies somewhere within the Mongo Ruby driver. Quite possibly a bug in their current driver (v1.11.1).
UPDATE I've also had success connecting via ssl using the node.js driver:
var mongo = require('mongodb');
var database = new mongo.Db("my_database", new mongo.Server("127.0.0.1", 27017, {ssl: true}), {w: 0});

database.open(function(err, db) {
  if (err) throw err;
  db.authenticate('user', 'password', function(err, result) {
    var collection = db.collection('foo');
    collection.findOne(function(err, item) {
      if (err) throw err;
      console.log(item);
      db.close();
    });
  });
});
So it seems increasingly likely that there's either a bug in the Ruby driver, or that the documentation is incomplete and doesn't accurately explain how to use SSL connections. I've therefore opened a new issue on MongoDB's issue tracker to hopefully get to the bottom of this.
Rather embarrassingly, the solution to this issue was that my /etc/hosts file had a typo in the localhost entry:
127.0.0.1 localhost.localdomain locahost
As you can see, it's missing the second letter L in "localhost". (I suspect it went missing during an accidental vim gesture.) Therefore to resolve I just had to reinstate the missing "l":
127.0.0.1 localhost.localdomain localhost
It's still a mystery why the Python sample worked correctly, and it's because of that that I didn't twig earlier that the problem was with the hosts file.
How can I use EventMachine.connect_unix_domain while running Thin as a service (using the init script (excerpt) and configuration below)? The code directly below is the problem (I get an eventmachine not initialized: evma_connect_to_unix_server error). The second code example works, but I don't think it allows me to daemonize Thin. Does Thin not already have a running instance of EventMachine?
UPDATE: Oddly enough: stopping the server (with service thin stop), seems to get into the config.ru file and run the app (so it works, until the stop command times out and kills the process). What happens when thin stops that could be causing this behavior?
Problematic Code
class Server < Sinatra::Base
  # Webserver code removed
end

module Handler
  def receive_data data
    $received_data_changed = 1
    $received_data = data
  end
end

$sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)
Working Code
EventMachine.run do
  class Server < Sinatra::Base
    # Webserver code removed
  end

  module Handler
    def receive_data data
      $received_data_changed = 1
      $received_data = data
    end
  end

  $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)
  Server.run!(:port => 4567)
end
Init Script excerpt
DAEMON=/usr/local/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
Thin Config
---
chdir: /var/www
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
socket: /tmp/thin.server.sock
daemonize: true
Thin is built on top of EventMachine. I think that you should use EventMachine for serving your app. Try to debug further why Thin won't daemonize (what version are you using?). Also, you can run Thin on another port, such as 4000, and then pass that as the upstream server to your proxy-forwarding server, if that is what you want to achieve.
What I ended up doing was removing the EventMachine.run do ... end and simply enclosing the socket connection in an EM.next_tick{ $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler) }.
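In other words, a condensed sketch of the change (based on the two snippets above):

class Server < Sinatra::Base
  # Webserver code removed
end

module Handler
  def receive_data data
    $received_data_changed = 1
    $received_data = data
  end
end

# Defer the socket connection until Thin's own EventMachine reactor is running,
# instead of wrapping everything in EventMachine.run ourselves.
EM.next_tick do
  $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)
end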
Could swear I tried this once before... but it works now.
EDIT: Idea for next_tick came from here.