I want to use Selenium with Ruby on Docker Compose, with Chrome running in a separate container, but I get an error.
The relevant code and the error are below.
DockerCompose.yml
version: "3"
services:
ruby:
build:
context: .
dockerfile: RubyDockerFile
ports:
- "3000:3000"
tty: true
chrome:
image: selenium/standalone-chrome:4.0.0-beta-3-20210426
container_name: chrome
volumes:
- /dev/shm:/dev/shm
ports:
- 4444:4444
- 5900:5900
- 7900:7900
Ruby code in the ruby container
driver = Selenium::WebDriver.for :remote, url: 'http://chrome:4444', desired_capabilities: :chrome
Error
Net::ReadTimeout: Net::ReadTimeout with #<TCPSocket:(closed)>
/usr/local/bundle/gems/rack-mini-profiler-2.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/usr/local/bundle/gems/rack-mini-profiler-2.3.1/lib/mini_profiler/profiling_methods.rb:46:in `step'
/usr/local/bundle/gems/rack-mini-profiler-2.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/default.rb:129:in `response_for'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/default.rb:82:in `request'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/common.rb:64:in `call'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/bridge.rb:167:in `execute'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/bridge.rb:102:in `create_session'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/bridge.rb:56:in `handshake'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/driver.rb:39:in `initialize'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/common/driver.rb:58:in `new'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/common/driver.rb:58:in `for'
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver.rb:88:in `for'
/hoge.rake:9:in `block (2 levels) in <main>'
If I extend the read timeout, the connection eventually succeeds:
require 'selenium-webdriver'

client = Selenium::WebDriver::Remote::Http::Default.new
client.read_timeout = 180 # seconds
driver = Selenium::WebDriver.for :remote, url: 'http://chrome:4444', desired_capabilities: :chrome, http_client: client
In our case, it turned out to be an issue with MTU under Docker-in-Docker (how our CI platform works).
The fix was to lower the MTU by adding the following to our docker-compose.yml used in CI:
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
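Note that 1450 itself isn't magic: the point is to set the inner network's MTU at or below the MTU of the outer network it runs on, so packets are not silently dropped for being too large.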
My coworker wrote this up here: https://chen.do/docker-in-docker-dind-mtu-fix-for-docker-compose/
The actual symptoms can be varied and surprising. We confirmed it by opening a shell in one of the inner Docker containers: we could ping and curl https://example.com fine, but certain requests would hang indefinitely (often at TLS negotiation).
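If you want to script that check, here is a minimal Ruby sketch along the same lines (the URL and timeout values are arbitrary examples, not from our setup):
require 'net/http'
require 'uri'

uri = URI('https://example.com/')
begin
  # open_timeout covers the TCP connect and TLS handshake; read_timeout
  # covers waiting for response bytes. Either one timing out while plain
  # ping succeeds is consistent with an MTU mismatch.
  Net::HTTP.start(uri.host, uri.port, use_ssl: true, open_timeout: 5, read_timeout: 10) do |http|
    puts "OK: #{http.get(uri.path).code}"
  end
rescue Net::OpenTimeout, Net::ReadTimeout => e
  puts "Request hung (#{e.class}); check the network MTU"
end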
I am trying to DRY up my GitHub Actions ci.yml file somewhat. I have two jobs: one runs RSpec tests, the other runs Cucumber tests. There were a number of steps they shared, which I've extracted to an external action.
However, they both depend on Postgres and Chrome Docker images, plus some environment variables, so currently both jobs include the code below. Is there any way to put this code in one place for both jobs to use? Note that I'm not attempting to share the image itself; I just don't want the repeated code.
services:
  postgres:
    image: postgres:13
    env:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - 5432:5432
    # Set health checks to wait until postgres has started
    # tmpfs for faster DB in RAM
    options: >-
      --mount type=tmpfs,destination=/var/lib/postgresql/data
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
  chrome:
    image: seleniarm/standalone-chromium:4.1.2-20220227
    ports:
      - 4444:4444
env:
  DB_HOST: localhost
  CHROMEDRIVER_HOST: localhost
  RAILS_ENV: test
Let me start off by stating that I know this question has been asked on many forums. I have read them all.
I have two Docker containers that are built with docker-compose and contain a Laravel project each. They are both attached to a network and can ping one another successfully; however, when I make a request from Postman to one backend, which then makes a curl request to the other, I get the connection refused error shown below.
This is the docker-compose file for each project respectively:
version: '3.8'
services:
  bumblebee:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    networks:
      - picknpack
    ports:
      - "8010:8000"
networks:
  picknpack:
    external: true
version: '3.8'
services:
  optimus:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - "8020:8000"
    networks:
      - picknpack
    depends_on:
      - optimus_db
  optimus_db:
    image: mysql:8.0.25
    environment:
      MYSQL_DATABASE: optimus
      MYSQL_USER: test
      MYSQL_PASSWORD: test1234
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./storage/dbdata:/var/lib/mysql
    ports:
      - "33020:3306"
networks:
  picknpack:
    external: true
Here you can see the successful ping (screenshot omitted).
I would love to keep messing with configuration files, but I have a deadline to meet and nothing is working. Any help would be appreciated.
EDIT
Please see the inspection of the network (screenshot omitted).
Within the Docker network that I created, both containers are exposed on port 8000, as per their Dockerfiles. The answer was staring me square in the face: 'Connection refused on port 80'. The HTTP client was defaulting to port 80 rather than 8000. I updated the curl request to hit port 8000 and it works now. Thanks to #user3532758 for your help. Note that the containers are mapped to ports 8010 and 8020 only on the external local network, not within the Docker network; inside it, both are served on port 8000 with different IPs.
I'm attempting to put a Ruby Cucumber test into Docker. I'm using a docker-compose.yml file to start a Selenium hub container along with a Chrome and a Firefox node. Then I'm building an Alpine Ruby based image with my tests.
I've gotten the process to work; however, it involves finding the IP of the hub container each time it is built, then hardcoding that IP into the env.rb file where I connect to the Selenium grid.
I've seen that linked containers can be addressed by name, but I haven't had much luck there. Is there any way I can easily pass the hub container's IP to my test container?
Here is my yml file:
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 3000
GRID_TIMEOUT: 3000
chrome:
image: selenium/node-chrome
container_name: web-automation_chrome
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
firefox:
image: selenium/node-firefox
container_name: web-automation_firefox
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
volumes:
- /dev/shm:/dev/shm
ports:
- "9002:5900"
links:
- hub
myapp:
build: .
image: justinpshields/myapp
depends_on:
- hub
environment:
URL: hub
links:
- hub
networks:
default:
links is useless: every container in a docker-compose.yml shares the same default network unless stated otherwise, so services can already reach each other by name.
You should also wait until the Selenium hub has started and its browser nodes have attached.
For instance with this:
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"ready\": true" >/dev/null; do
  echo 'Waiting for the Grid'
  sleep 1
done
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"browserName\": \"$BROWSER\"" >/dev/null; do
  echo "Waiting for the node $BROWSER"
  sleep 1
done
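If you would rather do the same wait from Ruby before creating the driver, a rough equivalent of the first loop might look like this (it assumes the hub is reachable as hub:4444, as in the compose file above, and the same JSON shape the grep relies on):
require 'json'
require 'net/http'
require 'uri'

status_uri = URI('http://hub:4444/status')
loop do
  begin
    # Selenium 4's /status endpoint reports readiness under value.ready
    status = JSON.parse(Net::HTTP.get(status_uri))
    break if status.dig('value', 'ready')
  rescue StandardError
    # hub not accepting connections yet; keep polling
  end
  puts 'Waiting for the Grid'
  sleep 1
end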
I am attempting to write a Capybara test suite in Ruby (without Rails).
I would like to run the Ruby code in a Docker container.
FROM ruby:2.7
RUN gem install bundler
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install
COPY . .
CMD ["bundle", "exec", "rspec", "spec"]
My tests require a headless version of Chrome which is not available in the Ruby image. I have attempted to create a docker-compose file to include the Ruby code and a headless Chrome image.
version: '3.7'
networks:
  mynet:
services:
  admin:
    container_name: mrt-integ-tests
    image: cdluc3/mrt-integ-tests
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - type: bind
        source: ./config/test_config.yml
        target: /config/test_config.yml
    stdin_open: true
    tty: true
    networks:
      mynet:
    depends_on:
      - chrome
  chrome:
    container_name: chrome
    image: selenium/standalone-chrome
    ports:
      - published: 4444
        target: 4444
    networks:
      mynet:
This is how I am attempting to create my Capybara session:
Capybara.register_driver :selenium_chrome do |app|
  args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => { "args" => args })
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    desired_capabilities: caps,
    url: "http://chrome:4444/wd/hub"
  )
end
#session = Capybara::Session.new(:selenium_chrome)
When I start my containers, I see the following error.
chrome | 05:39:44.350 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
chrome | 2020-06-04 05:39:44.420:INFO::main: Logging initialized #766ms to org.seleniumhq.jetty9.util.log.StdErrLog
chrome | 05:39:44.920 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
chrome | 05:39:45.141 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
chrome | 05:39:47.431 INFO [ActiveSessionFactory.apply] - Capabilities are: {
chrome | "browserName": "chrome",
chrome | "chromeOptions": {
chrome | "args": [
chrome | "--no-default-browser-check",
chrome | "--start-maximized",
chrome | "--headless",
chrome | "--disable-dev-shm-usage"
chrome | ]
chrome | },
chrome | "cssSelectorsEnabled": true,
chrome | "javascriptEnabled": true,
chrome | "nativeEvents": false,
chrome | "rotatable": false,
chrome | "takesScreenshot": false,
chrome | "version": ""
chrome | }
chrome | 05:39:47.435 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
chrome | Starting ChromeDriver 83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103#{#416}) on port 13554
chrome | Only local connections are allowed.
chrome | Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
chrome | ChromeDriver was started successfully.
chrome | [1591249187.473][SEVERE]: bind() failed: Cannot assign requested address (99)
chrome | 05:39:48.211 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
chrome | 05:39:48.247 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 1ed95304843a1d5ab904708d998710a0 (org.openqa.selenium.chrome.ChromeDriverService)
What suggestions do you have to resolve this error?
I have a partial solution. Adding --whitelisted-ips to my browser options makes my tests function. See https://github.com/RobCherry/docker-chromedriver/issues/15. I changed
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage']
to
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage', '--whitelisted-ips']
Interestingly, the following error is still generated in the chrome container.
[1591286416.104][SEVERE]: bind() failed: Cannot assign requested address (99)
You need to set up the whitelisted-ips argument for the chromedriver executable. You can achieve this by setting the JAVA_OPTS env variable for the Docker chrome-node image. In your case:
chrome:
  container_name: chrome
  image: selenium/standalone-chrome
  ports:
    - published: 4444
      target: 4444
  environment:
    - JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
  networks:
    mynet:
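Passing an empty value effectively clears chromedriver's IP whitelist, so it accepts sessions arriving over the Docker network rather than only from localhost.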
I am learning Docker. I want to add caching functionality to my application, and hence am using Memcached. Below is my docker-compose.yml file:
version: "3"
services:
app:
build: .
volumes:
- .:/project
command: 'rails s -b 0.0.0.0 -p 3000'
container_name: 'test_rails'
ports:
- 3000:3000
depends_on:
- database
links:
- memcached
database:
image: postgres:latest
volumes:
- ./data:/var/lib/postgresql/data
environment:
POSTGRES_USER: docker-user
POSTGRES_PASSWORD: docker-password
POSTGRES_DB: docker-db
memcached:
build:
context: .
dockerfile: Dockerfile.memcached
command: 'tail -f /dev/null'
When I try to connect from the app container to the memcached server inside the memcached container, using the code below:
require 'dalli'
options = { :namespace => "app_v1", :compress => true }
dc = Dalli::Client.new('localhost:11211', options)
I get the following error:
WARN -- : localhost:11211 failed (count: 0) Errno::EADDRNOTAVAIL: Cannot assign requested address - connect(2) for "localhost" port 11211
Dalli::RingError: No server available
from /usr/local/bundle/gems/dalli-2.7.10/lib/dalli/ring.rb:46:in `server_for_key'
from /usr/local/bundle/gems/dalli-2.7.10/lib/dalli/client.rb:367:in `perform'
from /usr/local/bundle/gems/dalli-2.7.10/lib/dalli/client.rb:130:in `set'
from (irb):4
from /usr/local/bin/irb:11:in `<main>'
Can someone help me understand and resolve this issue?
You cannot access one Docker service from another using localhost, since each service has its own IP address and behaves like a small VM of its own. Use the service name instead of localhost, and Docker will resolve it to the IP address of the target service:
dc = Dalli::Client.new('memcached:11211', options)
Change:
dc = Dalli::Client.new('localhost:11211', options)
to:
dc = Dalli::Client.new('memcached:11211', options)
When you set up containers with Compose, they are all connected to the default network created by Compose. In this case, memcached is the DNS name of the memcached container and resolves to the container's IP automatically.
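Putting it together, a minimal sketch (the key and value are just examples):
require 'dalli'

options = { namespace: 'app_v1', compress: true }
# 'memcached' is the compose service name; Docker's embedded DNS resolves it
dc = Dalli::Client.new('memcached:11211', options)
dc.set('greeting', 'hello from the app container')
puts dc.get('greeting') # => "hello from the app container"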