I am attempting to write a Capybara test suite in Ruby (without Rails).
I would like to run the Ruby code in a Docker container.
FROM ruby:2.7
RUN gem install bundler
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install
COPY . .
CMD ["bundle", "exec", "rspec", "spec"]
My tests require a headless version of Chrome, which is not available in the Ruby image, so I have attempted to create a docker-compose file that combines my Ruby code with a headless Chrome image.
version: '3.7'
networks:
mynet:
services:
admin:
container_name: mrt-integ-tests
image: cdluc3/mrt-integ-tests
build:
context: .
dockerfile: Dockerfile
volumes:
- type: bind
source: ./config/test_config.yml
target: /config/test_config.yml
stdin_open: true
tty: true
networks:
mynet:
depends_on:
- chrome
chrome:
container_name: chrome
image: selenium/standalone-chrome
ports:
- published: 4444
target: 4444
networks:
mynet:
This is how I am attempting to create my Capybara session:
Capybara.register_driver :selenium_chrome do |app|
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage']
caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => {"args" => args})
Capybara::Selenium::Driver.new(
app,
browser: :remote,
desired_capabilities: caps,
url: "http://chrome:4444/wd/hub"
)
end
@session = Capybara::Session.new(:selenium_chrome)
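For context, a spec driving this session might look roughly like the sketch below; the target URL and expected text are placeholders rather than anything from the actual suite, and it assumes the driver registration above is loaded (e.g. from spec_helper.rb).

# spec/smoke_spec.rb -- illustrative sketch only
require 'capybara'

RSpec.describe 'smoke test' do
  it 'loads a page through the remote Chrome node' do
    session = Capybara::Session.new(:selenium_chrome)   # same call as above
    session.visit 'https://example.com'                 # placeholder URL
    expect(session.has_content?('Example Domain')).to be true
  end
end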
When I start my containers, I see the following error.
chrome | 05:39:44.350 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
chrome | 2020-06-04 05:39:44.420:INFO::main: Logging initialized #766ms to org.seleniumhq.jetty9.util.log.StdErrLog
chrome | 05:39:44.920 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
chrome | 05:39:45.141 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
chrome | 05:39:47.431 INFO [ActiveSessionFactory.apply] - Capabilities are: {
chrome | "browserName": "chrome",
chrome | "chromeOptions": {
chrome | "args": [
chrome | "--no-default-browser-check",
chrome | "--start-maximized",
chrome | "--headless",
chrome | "--disable-dev-shm-usage"
chrome | ]
chrome | },
chrome | "cssSelectorsEnabled": true,
chrome | "javascriptEnabled": true,
chrome | "nativeEvents": false,
chrome | "rotatable": false,
chrome | "takesScreenshot": false,
chrome | "version": ""
chrome | }
chrome | 05:39:47.435 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
chrome | Starting ChromeDriver 83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103@{#416}) on port 13554
chrome | Only local connections are allowed.
chrome | Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
chrome | ChromeDriver was started successfully.
chrome | [1591249187.473][SEVERE]: bind() failed: Cannot assign requested address (99)
chrome | 05:39:48.211 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
chrome | 05:39:48.247 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 1ed95304843a1d5ab904708d998710a0 (org.openqa.selenium.chrome.ChromeDriverService)
What suggestions do you have to resolve this error?
I have a partial solution: adding --whitelisted-ips to my browser options makes my tests function (see https://github.com/RobCherry/docker-chromedriver/issues/15). I changed
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage']
to
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage', '--whitelisted-ips']
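Putting that change in context, the full registration with only the args line changed looks like this:

Capybara.register_driver :selenium_chrome do |app|
  # args as in the original registration, plus the --whitelisted-ips flag
  args = ['--no-default-browser-check', '--start-maximized', '--headless',
          '--disable-dev-shm-usage', '--whitelisted-ips']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => { "args" => args })

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    desired_capabilities: caps,
    url: "http://chrome:4444/wd/hub"
  )
end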
Interestingly, the following error is still generated in the chrome container.
[1591286416.104][SEVERE]: bind() failed: Cannot assign requested address (99)
You need to set up the whitelisted-ips argument for the chromedriver executable. You can achieve this by setting the JAVA_OPTS environment variable for the Docker chrome-node image; with an empty value, -Dwebdriver.chrome.whitelistedIps= tells ChromeDriver to accept connections from any host rather than just localhost. In your case:
chrome:
container_name: chrome
image: selenium/standalone-chrome
ports:
- published: 4444
target: 4444
environment:
- JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
networks:
mynet:
Related
I've been struggling recently to run some Selenium tests in Docker on macOS Monterey. There are no issues with these tests on a Windows machine, so I thought it might somehow be related to macOS. I'm using docker-compose.
Here are some relevant error messages from the logs:
test_email-1 | --- Failed steps:
test_email-1 |
test_email-1 | Scenario: Access to <edited> # features/login copy.feature:7
test_email-1 | When I navigate to "https://<edited>" # features/login copy.feature:9
test_email-1 | Error: org.openqa.selenium.NoSuchSessionException: Unable to execute request for an
existing session: Unable to find session with ID:
test_email-1 | Build info: version: '4.1.2', revision: '9a5a329c5a'
test_email-1 | System info: host: '9d1d3cff9e3e', ip: '172.19.0.2', os.name: 'Linux',
os.arch: 'amd64', os.version: '5.10.76-linuxkit', java.version: '11.0.13'
test_email-1 | Driver info: driver.version: unknown
test_email-1 |
test_email-1 | 1 scenarios (1 failed)
test_email-1 | 10 steps (1 failed, 9 skipped)
For some reason, the session ID is blank.
Some notable errors in the logs from the hub:
tests-hub-1 | 10:33:35.570 WARN [SeleniumSpanExporter$1.lambda$export$0] -
{"eventTime": 1644402815478448222,"eventName": "exception","attributes":
{"driver.url": "<localhost url>","exception.message": "Error while creating session with the
driver service. Stopping driver service: unknown error: DevToolsActivePort file doesn't exist\n
(Driver info: chromedriver=98.0.4758.80 (<...>),platform=Linux 5.10.76-linuxkit x86_64)
(WARNING: The server did not provide any stacktrace information)\n
Command duration or timeout: 60.24 seconds\n
Build info: version: '4.1.2', revision: '9a5a329c5a'\n
System info: host: '9d1d3cff9e3e', ip: '172.19.0.2', os.name: 'Linux', os.arch: 'amd64',
os.version: '5.10.76-linuxkit', java.version: '11.0.13'\nDriver info: driver.version: unknown",
"exception.stacktrace": "org.openqa.selenium.WebDriverException:
unknown error: DevToolsActivePort file doesn't exist\n
<...>
What has been tried so far:
Updating and reinstalling Selenium
Updating and reinstalling chromedriver
Restarting docker-compose multiple times
Reinstalling Java
Checking that the ports aren't blocked
Running docker-compose and the tests both with and without root
Using shm_size instead of a /dev/shm volume
Adding these flags to chromedriver: --disable-dev-shm-usage, --no-sandbox
Does anyone have ideas on how to fix this, or at least what I should look into while resolving it?
Edit: adding the docker-compose config:
version: '3'
services:
test:
image: <edited>:latest
networks:
- default
tty: true
volumes:
- .:/tests
depends_on:
- hub
entrypoint: "go run tests/cmd/runner"
test_email:
image: <edited>:latest
networks:
- default
tty: true
volumes:
- .:/tests
depends_on:
- hub
entrypoint: "go run tests/cmd/runner"
hub:
image: selenium/standalone-chrome:latest
networks:
- default
ports:
- "4449:4444"
- "5900:5900"
networks:
default:
driver: bridge
P.S. I'm still a novice with Docker and Selenium, so please forgive me if I missed something important. Thanks in advance!
I'm attempting to put a Ruby Cucumber test suite into Docker. I'm using a docker-compose.yml file to start a Selenium hub container along with Chrome and Firefox nodes. Then I'm building an Alpine Ruby-based image with my tests.
I've gotten the process to work; however, it involves finding the IP of the hub container each time it is built and then hardcoding that IP into my env.rb file, where I connect to the Selenium grid.
I've seen that linked containers can be reached by name, but I haven't had much luck with that. Is there any way I can easily pass the hub container's address to my test container?
Here is my yml file:
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 3000
GRID_TIMEOUT: 3000
chrome:
image: selenium/node-chrome
container_name: web-automation_chrome
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
firefox:
image: selenium/node-firefox
container_name: web-automation_firefox
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
volumes:
- /dev/shm:/dev/shm
ports:
- "9002:5900"
links:
- hub
myapp:
build: .
image: justinpshields/myapp
depends_on:
- hub
environment:
URL: hub
links:
- hub
networks:
default:
links is unnecessary: every container in a docker-compose.yml shares the same default network unless stated otherwise, so your tests can reach the hub simply by its service name (e.g. http://hub:4444/wd/hub).
You should also wait until the Selenium hub has started and its browser containers have attached.
For instance, with something like this:
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"ready\": true" >/dev/null; do
echo 'Waiting for the Grid'
sleep 1
done
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"browserName\": \"$BROWSER\"" >/dev/null; do
echo "Waiting for the node $BROWSER"
sleep 1
done
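To tie that back to env.rb: with the default compose network the hub is reachable by its service name, so no container IP ever needs to be hard-coded. Below is a minimal sketch, assuming the selenium-webdriver 3.x gem that matches these grid images, and reusing the URL environment variable already defined for the myapp service; adapt it to however env.rb actually builds its driver.

# env.rb -- minimal sketch; "hub" is the compose service name and URL is the
# environment variable set for the myapp service in the compose file above.
require 'selenium-webdriver'

hub_host = ENV.fetch('URL', 'hub')

driver = Selenium::WebDriver.for(
  :remote,
  url: "http://#{hub_host}:4444/wd/hub",
  desired_capabilities: Selenium::WebDriver::Remote::Capabilities.chrome
)

Once the wait loops above report the grid as ready, this connection succeeds without any IP lookups.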
Hi, this is somewhat similar to this question, but nothing there covers Docker containers.
I am using a Mac, so I started using Docker since it's more convenient on macOS.
Microsoft Visual Studio Code offers many features of a full development environment, and it has an extension from MS, Remote - Containers, which should make this easier.
This is my docker-compose.yml
version: '2'
services:
db:
image: 'postgres:11'
environment:
- POSTGRES_PASSWORD=odoo
- POSTGRES_USER=odoo
- POSTGRES_DB=postgres
restart: always
volumes:
- './postgresql:/var/lib/postgresql/data'
pgadmin:
image: dpage/pgadmin4
depends_on:
- db
ports:
- '5555:80'
environment:
PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- './pgadmin:/var/lib/pgadmin'
odoo13:
image: 'odoo:13'
depends_on:
- db
ports:
- '10013:8069'
tty: true
command: '-- --dev=reload'
volumes:
- './addons:/mnt/extra-addons'
- './enterprise:/mnt/enterprise-addons'
- './config:/etc/odoo'
restart: always
Can you please help, or share a better way to run Odoo with a debugger on a Mac?
Thank you in advance.
In order to understand how remote debugging works for Odoo (or any other Python service or app such as Flask, Django, or web2py), you have to understand three different pieces: the Docker container, the debugger, and the Python app server (in our case Odoo). The usual way of running Odoo from Docker and the setup you need for debugging differ in two ways:
Without debugging you have two ports, one internal and one external, which pass HTTP requests between the browser and Odoo. With debugging you have four ports: two for the HTTP requests and two for the debugging information (JSON-based in our case) exchanged between VS Code and debugpy (in the example below, 8869:8069 for HTTP and 8888:8888 for debugpy; you could also get by with two ports).
Without debugging your entrypoint is whatever is defined in the Dockerfile. With debugging you change the entrypoint to debugpy, which is then responsible for running Odoo.
So, to debug Odoo you will do the following:
Edit your docker.dev file and insert RUN pip3 install -U debugpy. This installs the Python package debugpy (instead of the deprecated ptvsd), because your local VS Code will communicate with the debugpy server running inside your Docker image through it.
Start your container so that it runs the debugpy package you just installed. It could be done with a command like the following from your shell:
docker-compose run --rm -p 8888:8888 -p 8869:8069 {DOCKER IMAGE[:TAG|#DIGEST]} /usr/bin/python3 -m debugpy --listen 0.0.0.0:8888 /usr/bin/odoo --db_user=odoo --db_host=db --db_password=odoo
Prepare your launch configuration as follows. Please note that port relates to the Odoo server, while debugServer is the port for the debug server.
{
"name": "Odoo: Attach",
"type": "python",
"request": "attach",
"port": 8869,
"debugServer": 8888,
"host": "localhost",
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/mnt/extra-addons",
}
],
"logToFile": true
}
Problem
I am launching Selenium Grid with BrowserMob Proxy in a Docker bridged network. The remote driver proxy settings are configured correctly, but I cannot connect to the proxy server from chromedriver. I have looked at all the related issues and cannot find a solution.
Below is the error output, which shows the driver's desired capabilities/options.
Stacktrace
selenium-hub | 15:40:24.222 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=ef95a1c8-8c30-472b-9e5e-323b8a8c9045, seleniumProtocol=WebDriver, browserName=chrome, maxInstances=1, platformName=LINUX, version=74.0.3729.169, applicationName=, platform=LINUX}
chrome_1 | 15:40:24.243 INFO [ActiveSessionFactory.apply] - Capabilities are: {
chrome_1 | "browserName": "chrome",
chrome_1 | "goog:chromeOptions": {
chrome_1 | "args": [
chrome_1 | "--disable-gpu",
chrome_1 | "--headless",
chrome_1 | "--no-sandbox",
chrome_1 | "--whitelisted-ips",
chrome_1 | "--disable-dev-shm-usage",
chrome_1 | "--allow-insecure-localhost",
chrome_1 | "--disable-web-security",
chrome_1 | "--ignore-certificate-errors",
chrome_1 | "--allow-running-insecure-content"
chrome_1 | ],
chrome_1 | "extensions": [
chrome_1 | ]
chrome_1 | },
chrome_1 | "proxy": {
chrome_1 | "httpProxy": "0.0.0.0:9099",
chrome_1 | "proxyType": "manual",
chrome_1 | "sslProxy": "0.0.0.0:9099"
chrome_1 | },
chrome_1 | "version": ""
chrome_1 | }
chrome_1 | 15:40:24.243 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
chrome_1 | Starting ChromeDriver 74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}) on port 28080
chrome_1 | Only local connections are allowed.
chrome_1 | Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
chrome_1 | 15:40:24.418 INFO [ProtocolHandshake.createSession] - Detected dialect: OSS
chrome_1 | 15:40:24.444 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 4c42082804845cf9571e6a843a63feca (org.openqa.selenium.chrome.ChromeDriverService)
chrome_1 | 15:40:34.798 INFO [ActiveSessions$1.onStop] - Removing session 4c42082804845cf9571e6a843a63feca (org.openqa.selenium.chrome.ChromeDriverService)
I believe the key point of this stacktrace is right here
chrome_1 | Only local connections are allowed.
chrome_1 | Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
To Reproduce
To reproduce this error: simply launch the containers, log in to the chrome node, and use the chromedriver CLI; you should see the same error in the terminal. Again, the target of this issue is connecting to the proxy container via the chromedriver proxy settings.
version: "3"
services:
selenium-hub:
image: selenium/hub:3.141.59-20200409
container_name: selenium-hub
ports:
- "4444:4444"
networks:
- caowebtests
chrome:
image: selenium/node-chrome:3.141.59-oxygen
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
networks:
- caowebtests
expose:
- 9515
firefox:
image: selenium/node-firefox:3.141.59-20200409
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
networks:
- caowebtests
opera:
image: selenium/node-opera:3.141.59-20200409
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
networks:
- caowebtests
proxy:
image: spothero/browsermob-proxy:1.0.0
depends_on:
- selenium-hub
networks:
- caowebtests
ports:
- "9090:9090"
links:
- selenium-hub
- firefox
- chrome
- opera
robottests:
container_name: robottests
command: /bin/sleep infinity
depends_on:
- selenium-hub
build: .
volumes:
- ./reports:/cao_ui_tests/reports
networks:
- caowebtests
networks:
caowebtests:
driver: bridge
As you can see in the trace above, the desired capabilities and options are all there. What I am trying to find out is why chromedriver produces this error with the proxy settings above.
My test fails when looking for the first element on the login page. I am not sure whether you need the test scripts (and the ability to run them) to reproduce this, or whether the problem can be spotted as-is. I can push this repo to GitHub if needed.
Test Script Below
*** Settings ***
Library BrowserMobProxy
Library SeleniumLibrary
Library RequestsLibrary
Library Collections
Resource resources${/}base.robot
Resource resources${/}common.robot
Resource resources${/}auth.robot
Suite Setup Setup Test Suite
Test Teardown Close All Browsers
Suite Teardown Test Suite Teardown
*** Variables ***
${BMP_HOST} 0.0.0.0
${BMP_PORT} 9090
${SELENIUM} http://0.0.0.0:4444/wd/hub
${SHOT_NUM} 0
#{TIMINGS}
*** Test Cases ***
Login User
Wait Until Page Loads
Wait Until Page Contains Element ${UI['login']} timeout=10
Submit Credentials %{TEST_USER} %{TEST_PASS}
*** Keywords ***
Setup Test Suite
Load UI Repository ${REPO_PATH}
Connect To Remote Server ${BMP_HOST} ${BMP_PORT}
Set Selenium Implicit Wait 0.2 seconds
Set Selenium Timeout 30 seconds
${prefs}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
&{caps}= Set Capabilities
Call Method ${prefs} add_argument --disable-gpu
Call Method ${prefs} add_argument --headless
Call Method ${prefs} add_argument --no-sandbox
Call Method ${prefs} add_argument --whitelisted-ips
Call Method ${prefs} add_argument --disable-dev-shm-usage
Call Method ${prefs} add_argument --allow-insecure-localhost
Call Method ${prefs} add_argument --disable-web-security
Call Method ${prefs} add_argument --ignore-certificate-errors
Call Method ${prefs} add_argument --allow-running-insecure-content
Create Webdriver Remote command_executor=${SELENIUM} desired_capabilities=${caps}
options=${prefs}
New Har LoginPage
Go To https://cardatonce.eftsource.com
Test Suite Teardown
Get Har file.har
Close Proxy
Set Capabilities
[Documentation] Set the options for the selenium Driver
${port}= Create Proxy
&{proxy}= Create Dictionary
... proxyType MANUAL
... sslProxy ${BMP_HOST}:${port}
... httpProxy ${BMP_HOST}:${port}
&{caps}= Create Dictionary browserName=chrome platform=ANY proxy=&{proxy}
Log Selenium capabilities: ${caps}
[return] ${caps}
Create Proxy
[Documentation] Get a BMP port for our test
Create Session bmp http://${BMP_HOST}:${BMP_PORT}
${resp}= Get Request bmp /proxy
Should Be Equal As Strings ${resp.status_code} 200
Log BMP Sessions: ${resp.text} [${resp.status_code}]
&{headers}= Create Dictionary Content-Type=application/x-www-form-urlencoded
&{data}= Create Dictionary trustAllServers=True
${resp}= Post Request bmp /proxy data=${data} headers=${headers}
Should Be Equal As Strings ${resp.status_code} 200
Log ${resp.text} [${resp.status_code}]
${port}= Get From Dictionary ${resp.json()} port
Log New BMP port: ${port} [${resp.status_code}]
Set Global Variable ${port}
[return] ${port}
Close Proxy
${resp}= Delete Request bmp /proxy/${port}
Should Be Equal As Strings ${resp.status_code} 200
Log Closed proxy at ${port} [${resp.status_code}]
New Har
[Documentation] Name and initialize a Har
[arguments] ${pagename}
&{data}= Create Dictionary initialPageRef=${pagename}
${resp}= Put Request bmp /proxy/${port}/har params=${data}
#Should Be Equal As Strings ${resp.status_code} 204
Log New Har (${pagename}) [${resp.status_code}]
Environment
This is on macOS Mojave.
Suggestions
I think the most important part of this script is the variables that connect to the hub and the BrowserMob Proxy containers:
${BMP_HOST} 0.0.0.0
${BMP_PORT} 9090
${SELENIUM} http://0.0.0.0:4444/wd/hub
This script just shows how I am setting the preferences that you already see in the stacktrace from the chrome node:
chrome_1 | Only local connections are allowed.
chrome_1 | Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
References
These are the issues I have already found online:
One
Two - at the bottom of this issue it was resolved by adding the flag below, which, as you can see above, I have already tried:
--disable-dev-shm-usage
Three - this person solved the problem by adding IPv6 to their Docker config, but I am using a bridged network in docker-compose, so I don't see why that option would matter.
Previous attempts
I am not sure if this is an issue with chromedriver or Docker. To go through a couple of things I have already tried:
1. I tried logging in to the chrome container and using the chromedriver CLI; I get the same error there.
comment on issue 2
2. I did try the image that includes chromedriver 74, as the next comment suggested.
next comment on issue 2
To eliminate
chrome_1 | Only local connections are allowed.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
you need to set the JAVA_OPTS environment variable for the Docker chrome-node image:
chrome:
image: selenium/node-chrome:3.141.59-oxygen
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
- JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
networks:
- caowebtests
expose:
- 9515
When entering ddev start in the terminal, I get the error:
Failed to start xxx: web container failed: log=, err=container exited, please use 'ddev logs -s web' to find out why it failed
The error log shows:
...
+ disable_xdebug
Disabled xdebug
+ ls /var/www/html
ls: cannot open directory '/var/www/html': Stale file handle
/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting
+ echo '/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting'
+ exit 101
and I don't know what to do here. The directory /var/www does not exist, and creating it does not help. Searching the web does not bring up anything valuable; the only thing I found is this:
ls /var/www/html >/dev/null || (echo "/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting" && exit 101)
but I have no clue what it means, nor does it explain what to do.
This is project related: I have docker/ddev running fine in other projects, but this one is haunted or something.
My config.yaml:
APIVersion: v1.12.2
name: xxx
type: php
docroot: public
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: true
provider: default
use_dns_when_possible: true
timezone: ""
My docker-compose.yaml:
web:
container_name: ddev-${DDEV_SITENAME}-web
build:
context: '/Users/jnz/Documents/xxx/.ddev/.webimageBuild'
args:
BASE_IMAGE: $DDEV_WEBIMAGE
username: 'jb'
uid: '504'
gid: '20'
image: ${DDEV_WEBIMAGE}-built
cap_add:
- SYS_PTRACE
volumes:
- type: volume
source: nfsmount
target: /var/www/html
volume:
nocopy: true
- ".:/mnt/ddev_config:ro"
- ddev-global-cache:/mnt/ddev-global-cache
- ddev-ssh-agent_socket_dir:/home/.ssh-agent
restart: "no"
user: "$DDEV_UID:$DDEV_GID"
hostname: xxx-web
links:
- db:db
# ports is list of exposed *container* ports
ports:
- "127.0.0.1:$DDEV_HOST_WEBSERVER_PORT:80"
- "127.0.0.1:$DDEV_HOST_HTTPS_PORT:443"
environment:
- DOCROOT=$DDEV_DOCROOT
- DDEV_PHP_VERSION=$DDEV_PHP_VERSION
- DDEV_WEBSERVER_TYPE=$DDEV_WEBSERVER_TYPE
- DDEV_PROJECT_TYPE=$DDEV_PROJECT_TYPE
- DDEV_ROUTER_HTTP_PORT=$DDEV_ROUTER_HTTP_PORT
- DDEV_ROUTER_HTTPS_PORT=$DDEV_ROUTER_HTTPS_PORT
- DDEV_XDEBUG_ENABLED=$DDEV_XDEBUG_ENABLED
- DOCKER_IP=127.0.0.1
- HOST_DOCKER_INTERNAL_IP=
- DEPLOY_NAME=local
- VIRTUAL_HOST=$DDEV_HOSTNAME
- COLUMNS=$COLUMNS
- LINES=$LINES
- TZ=
# HTTP_EXPOSE allows for ports accepting HTTP traffic to be accessible from <site>.ddev.site:<port>
# To expose a container port to a different host port, define the port as hostPort:containerPort
- HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025
# You can optionally expose an HTTPS port option for any ports defined in HTTP_EXPOSE.
# To expose an HTTPS port, define the port as securePort:containerPort.
- HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80
- SSH_AUTH_SOCK=/home/.ssh-agent/socket
- DDEV_PROJECT=xxx
labels:
com.ddev.site-name: ${DDEV_SITENAME}
com.ddev.platform: ddev
com.ddev.app-type: php
com.ddev.approot: $DDEV_APPROOT
external_links:
- "ddev-router:xxx.ddev.site"
healthcheck:
interval: 1s
retries: 10
start_period: 10s
timeout: 120s
So as @rfay pointed out in the comments, the problem was caused by macOS Catalina directory restrictions.
I had to go to System Settings > Security > Privacy > Files & Folders and add /sbin/nfsd; it now has full hard-disk access.
Besides that, I granted Docker access to Documents.
Now ddev is up and running, even in folders inside User/xxx/Documents.