Cannot bind Docker Chrome node in Selenium Grid to proxy container

Problem
I am launching Selenium Grid with BrowserMob Proxy in a Docker bridge network. The remote driver's proxy settings are configured correctly, but I cannot connect to the proxy server from the Chrome driver. I have looked at all the related issues and cannot find a solution.
Below is the error output, which shows the driver's desired capabilities/options.
Stacktrace
selenium-hub | 15:40:24.222 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=ef95a1c8-8c30-472b-9e5e-323b8a8c9045, seleniumProtocol=WebDriver, browserName=chrome, maxInstances=1, platformName=LINUX, version=74.0.3729.169, applicationName=, platform=LINUX}
chrome_1 | 15:40:24.243 INFO [ActiveSessionFactory.apply] - Capabilities are: {
chrome_1 | "browserName": "chrome",
chrome_1 | "goog:chromeOptions": {
chrome_1 | "args": [
chrome_1 | "--disable-gpu",
chrome_1 | "--headless",
chrome_1 | "--no-sandbox",
chrome_1 | "--whitelisted-ips",
chrome_1 | "--disable-dev-shm-usage",
chrome_1 | "--allow-insecure-localhost",
chrome_1 | "--disable-web-security",
chrome_1 | "--ignore-certificate-errors",
chrome_1 | "--allow-running-insecure-content"
chrome_1 | ],
chrome_1 | "extensions": [
chrome_1 | ]
chrome_1 | },
chrome_1 | "proxy": {
chrome_1 | "httpProxy": "0.0.0.0:9099",
chrome_1 | "proxyType": "manual",
chrome_1 | "sslProxy": "0.0.0.0:9099"
chrome_1 | },
chrome_1 | "version": ""
chrome_1 | }
chrome_1 | 15:40:24.243 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
chrome_1 | Starting ChromeDriver 74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729#{#29}) on port 28080
chrome_1 | Only local connections are allowed.
chrome_1 | Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
chrome_1 | 15:40:24.418 INFO [ProtocolHandshake.createSession] - Detected dialect: OSS
chrome_1 | 15:40:24.444 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 4c42082804845cf9571e6a843a63feca (org.openqa.selenium.chrome.ChromeDriverService)
chrome_1 | 15:40:34.798 INFO [ActiveSessions$1.onStop] - Removing session 4c42082804845cf9571e6a843a63feca (org.openqa.selenium.chrome.ChromeDriverService)
I believe the key part of this stack trace is right here:
chrome_1 | Only local connections are allowed.
chrome_1 | Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
To Reproduce
To reproduce this error, simply launch the containers, log in to the Chrome node, and run the chromedriver CLI; you should see the same error in the terminal. Again, the target of this issue is connecting to the proxy container via the ChromeDriver proxy settings.
version: "3"
services:
selenium-hub:
image: selenium/hub:3.141.59-20200409
container_name: selenium-hub
ports:
- "4444:4444"
networks:
- caowebtests
chrome:
image: selenium/node-chrome:3.141.59-oxygen
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
networks:
- caowebtests
expose:
- 9515
firefox:
image: selenium/node-firefox:3.141.59-20200409
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
networks:
- caowebtests
opera:
image: selenium/node-opera:3.141.59-20200409
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
networks:
- caowebtests
proxy:
image: spothero/browsermob-proxy:1.0.0
depends_on:
- selenium-hub
networks:
- caowebtests
ports:
- "9090:9090"
links:
- selenium-hub
- firefox
- chrome
- opera
robottests:
container_name: robottests
command: /bin/sleep infinity
depends_on:
- selenium-hub
build: .
volumes:
- ./reports:/cao_ui_tests/reports
networks:
- caowebtests
networks:
caowebtests:
driver: bridge
As you can see in the trace above, the desired capabilities and options are all there. What I am trying to find out is why ChromeDriver produces this error with the proxy settings above.
My test fails when looking for the first element on the login page. I am not sure whether you need the test scripts (and the ability to execute them) to reproduce this, or whether the cause can be pointed out directly. I can publish the repo on GitHub if needed.
Test Script Below
*** Settings ***
Library           BrowserMobProxy
Library           SeleniumLibrary
Library           RequestsLibrary
Library           Collections
Resource          resources${/}base.robot
Resource          resources${/}common.robot
Resource          resources${/}auth.robot
Suite Setup       Setup Test Suite
Test Teardown     Close All Browsers
Suite Teardown    Test Suite Teardown

*** Variables ***
${BMP_HOST}    0.0.0.0
${BMP_PORT}    9090
${SELENIUM}    http://0.0.0.0:4444/wd/hub
${SHOT_NUM}    0
#{TIMINGS}

*** Test Cases ***
Login User
    Wait Until Page Loads
    Wait Until Page Contains Element    ${UI['login']}    timeout=10
    Submit Credentials    %{TEST_USER}    %{TEST_PASS}

*** Keywords ***
Setup Test Suite
    Load UI Repository    ${REPO_PATH}
    Connect To Remote Server    ${BMP_HOST}    ${BMP_PORT}
    Set Selenium Implicit Wait    0.2 seconds
    Set Selenium Timeout    30 seconds
    ${prefs}=    Evaluate    sys.modules['selenium.webdriver'].ChromeOptions()    sys, selenium.webdriver
    &{caps}=    Set Capabilities
    Call Method    ${prefs}    add_argument    --disable-gpu
    Call Method    ${prefs}    add_argument    --headless
    Call Method    ${prefs}    add_argument    --no-sandbox
    Call Method    ${prefs}    add_argument    --whitelisted-ips
    Call Method    ${prefs}    add_argument    --disable-dev-shm-usage
    Call Method    ${prefs}    add_argument    --allow-insecure-localhost
    Call Method    ${prefs}    add_argument    --disable-web-security
    Call Method    ${prefs}    add_argument    --ignore-certificate-errors
    Call Method    ${prefs}    add_argument    --allow-running-insecure-content
    Create Webdriver    Remote    command_executor=${SELENIUM}    desired_capabilities=${caps}
    ...    options=${prefs}
    New Har    LoginPage
    Go To    https://cardatonce.eftsource.com

Test Suite Teardown
    Get Har    file.har
    Close Proxy

Set Capabilities
    [Documentation]    Set the options for the Selenium driver
    ${port}=    Create Proxy
    &{proxy}=    Create Dictionary
    ...    proxyType    MANUAL
    ...    sslProxy    ${BMP_HOST}:${port}
    ...    httpProxy    ${BMP_HOST}:${port}
    &{caps}=    Create Dictionary    browserName=chrome    platform=ANY    proxy=&{proxy}
    Log    Selenium capabilities: ${caps}
    [Return]    ${caps}

Create Proxy
    [Documentation]    Get a BMP port for our test
    Create Session    bmp    http://${BMP_HOST}:${BMP_PORT}
    ${resp}=    Get Request    bmp    /proxy
    Should Be Equal As Strings    ${resp.status_code}    200
    Log    BMP Sessions: ${resp.text} [${resp.status_code}]
    &{headers}=    Create Dictionary    Content-Type=application/x-www-form-urlencoded
    &{data}=    Create Dictionary    trustAllServers=True
    ${resp}=    Post Request    bmp    /proxy    data=${data}    headers=${headers}
    Should Be Equal As Strings    ${resp.status_code}    200
    Log    ${resp.text} [${resp.status_code}]
    ${port}=    Get From Dictionary    ${resp.json()}    port
    Log    New BMP port: ${port} [${resp.status_code}]
    Set Global Variable    ${port}
    [Return]    ${port}

Close Proxy
    ${resp}=    Delete Request    bmp    /proxy/${port}
    Should Be Equal As Strings    ${resp.status_code}    200
    Log    Closed proxy at ${port} [${resp.status_code}]

New Har
    [Documentation]    Name and initialize a HAR
    [Arguments]    ${pagename}
    &{data}=    Create Dictionary    initialPageRef=${pagename}
    ${resp}=    Put Request    bmp    /proxy/${port}/har    params=${data}
    #Should Be Equal As Strings    ${resp.status_code}    204
    Log    New Har (${pagename}) [${resp.status_code}]
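For reference, the proxy-management keywords above (Create Proxy, New Har, Close Proxy) boil down to a handful of BrowserMob Proxy REST calls. Here is a minimal plain-Python sketch of the same flow using the requests library; the host and port are the same values as in the Robot variables, so it shares whatever reachability issue the suite has:

import requests

BMP = "http://0.0.0.0:9090"  # same ${BMP_HOST}:${BMP_PORT} as the Robot variables

# List existing BMP sessions (GET /proxy), then create one (POST /proxy).
print(requests.get(f"{BMP}/proxy").json())
resp = requests.post(f"{BMP}/proxy", data={"trustAllServers": "true"})
resp.raise_for_status()
port = resp.json()["port"]  # per-session proxy port, e.g. 9099 in the trace above

# Name and initialize a HAR for the session (PUT /proxy/<port>/har).
requests.put(f"{BMP}/proxy/{port}/har", params={"initialPageRef": "LoginPage"})

# ... drive the browser through that host:port as httpProxy/sslProxy ...

# Tear the session down when finished (DELETE /proxy/<port>).
requests.delete(f"{BMP}/proxy/{port}")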
Environment
This is on macOS Mojave.
Suggestions
I think the most important part of this script is the variables that connect to the hub and the BrowserMob Proxy containers:
${BMP_HOST}    0.0.0.0
${BMP_PORT}    9090
${SELENIUM}    http://0.0.0.0:4444/wd/hub
This script just shows how I am setting the preferences that you already see in the stack trace on the Chrome node.
chrome_1 | Only local connections are allowed.
chrome_1 | Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
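One thing worth spelling out about the variables above: 0.0.0.0 is a bind address, not a routable destination, so from inside the chrome container it cannot point back at the proxy container. On a Compose bridge network, containers normally reach each other by service name. As a hypothetical sketch (plain Python rather than Robot, with the service names taken from the compose file above and 9099 standing in for the per-session port returned by POST /proxy), the capabilities would look like this:

from selenium import webdriver

BMP_HOST = "proxy"  # compose service name instead of 0.0.0.0 (assumption to verify)
BMP_PORT = 9099     # example: the session port returned by the BMP REST API

capabilities = {
    "browserName": "chrome",
    "platform": "ANY",
    "proxy": {
        "proxyType": "manual",
        "httpProxy": f"{BMP_HOST}:{BMP_PORT}",
        "sslProxy": f"{BMP_HOST}:{BMP_PORT}",
    },
}

# Selenium 3 style, matching Create Webdriver Remote in the Robot suite.
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444/wd/hub",
    desired_capabilities=capabilities,
)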
References
These are the issues I have already found online:
One
Two - at the bottom of this issue it was resolved by adding the flag below, and as you can see above, I have already tried this.
--disable-dev-shm-usage
Three - this one was solved by adding IPv6 to the Docker config. But I am using a bridged network in Docker Compose, so I don't see why this option would matter.
Previous attempts
I am not sure whether this is an issue with ChromeDriver or Docker. To go through a couple of things I have already tried:
1. I tried logging in to the Chrome container and using the chromedriver CLI; I get the same error there.
comment on issue 2
2. I did try the image that includes ChromeDriver 74, as the next comment suggested.
next comment on issue 2

To eliminate
chrome_1 | Only local connections are allowed.
chrome_1 | [1589038824.252][SEVERE]: bind() failed: Cannot assign requested address (99)
you need to set the env JAVA_OPTS for the Docker chrome-node image:
chrome:
  image: selenium/node-chrome:3.141.59-oxygen
  depends_on:
    - selenium-hub
  environment:
    - HUB_HOST=selenium-hub
    - HUB_PORT=4444
    - JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
  networks:
    - caowebtests
  expose:
    - 9515
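For context: webdriver.chrome.whitelistedIps is the Java system property behind ChromeDriver's --whitelisted-ips switch, and leaving it empty makes the node launch ChromeDriver so that it accepts connections from any IP instead of localhost only. This also explains why the --whitelisted-ips entry in the browser args above had no effect: there it was handed to Chrome as a browser flag, while it is really a ChromeDriver flag.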

Related

Docker: Ports are not available: exposing port TCP 0.0.0.0:61615 -> 0.0.0.0:0: listen tcp 0.0.0.0:61615

I am trying to use ActiveMQ in Docker, but I've started to get this error when starting the container.
Error invoking remote method 'docker-start-container': Error: (HTTP code 500) server error - Ports are not available: exposing port TCP 0.0.0.0:61613 -> 0.0.0.0:0: listen tcp 0.0.0.0:61613: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
I could not find any service that is using these ports (maybe I am looking in the wrong way).
I'm seeing that people generally suggest restarting winnat, but I am not sure if that's a good idea, and I'd like to know if there are any other solutions to this problem.
Also, changing the port ranges won't work in my case, since they are already set to the suggested values:
Protocol tcp Dynamic Port Range
---------------------------------
Start Port : 49152
Number of Ports : 16384
Here is a part of my docker-compose file:
version: '3.5'
networks:
  test-net:
    ipam:
      driver: default
      config:
        - subnet: 172.33.1.0/24
services:
  activemq:
    image: privat_artifactory
    container_name: test-activemq
    restart: always
    networks:
      - test-net
    ports:
      - "8161:8161"
      - "1883:1883"
      - "5672:5672"
      - "61613-61616:61613-61616"

gradle continuous build trick doesn't work in docker container

Hi, I'm trying to use the trick described here to allow continuous builds inside Docker containers. The trick works fine when I open two separate terminals on my host machine, but fails when used in Docker containers.
docker-compose.yml
build_server:
  image: gradle:6.3.0-jdk8
  working_dir: /home/gradle/server
  volumes:
    - ./server:/home/gradle/server
  command: ["gradle", "build", "--continuous", "-x", "test"]
server:
  image: gradle:6.3.0-jdk8
  working_dir: /home/gradle/server
  volumes:
    - ./server:/home/gradle/server
  ports:
    - 8080:8080
  depends_on:
    - build_server
  restart: on-failure
  command: ["gradle", "bootRun"]
The error message I got from server container:
server_1 | FAILURE: Build failed with an exception.
server_1 |
server_1 | * What went wrong:
server_1 | Gradle could not start your build.
server_1 | > Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
server_1 | > Could not create service of type ChecksumService using BuildSessionScopeServices.createChecksumService().
server_1 | > Timeout waiting to lock checksums cache (/home/gradle/server/.gradle/checksums). It is currently in use by another Gradle instance.
server_1 | Owner PID: unknown
server_1 | Our PID: 31
server_1 | Owner Operation: unknown
server_1 | Our operation:
server_1 | Lock file: /home/gradle/server/.gradle/checksums/checksums.lock
It looks like Gradle has taken locks on the local cache files, which prevents the bootRun task from running in the other container. However, the trick works fine when I run the tasks in two terminals on my host machine, or when I only run the build_server container and run bootRun in a host terminal. I wonder why it doesn't work inside Docker containers. Thanks in advance for the help!
Found a workaround by setting a different project cache dir for the server container, i.e. replacing the command with the following:
command: ["gradle", "bootRun", "--project-cache-dir=/tmp/cache"]
It might not be the best solution, but it does circumvent the problem caused by Gradle's lock.
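This works because both containers share the same ./server bind mount, so by default they also share /home/gradle/server/.gradle and its lock files (the checksums.lock in the error above); giving the server container its own --project-cache-dir keeps the two Gradle instances from contending for the same locks.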

HAProxy as the load balancer for a Spring Boot application

I have configured HAProxy as the load balancer for two containerized Spring Boot applications.
Below is the sample docker-compose configuration:
version: '3.3'
services:
  wechat-1:
    image: xxxxxx/wechat-social-connector:2.0.0
    container_name: wechat-1
    ports:
      - 81:8000
    networks:
      - web
    #depends_on:
    #  - wechat-2
  wechat-2:
    image: xxxxxxxxx/wechat-social-connector:2.0.0
    container_name: wechat-2
    ports:
      - 82:8000
    networks:
      - web
  haproxy:
    build: ./haproxy
    container_name: haproxy
    ports:
      - 80:80
    networks:
      - web
    #depends_on:
    #  - wechat-1
networks:
  web:
Dockerfile
FROM haproxy:2.1.4
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
HAProxy configuration file
global
  debug
  daemon
  maxconn 2000

defaults
  mode http
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms

frontend http-in
  bind *:80
  default_backend servers

backend servers
  mode http
  option httpchk
  balance roundrobin
  server wechat-1 wechat-1:81 check
  server wechat-2 wechat-2:82 check
When I try to access my endpoints using port 80, I always get Service Unavailable.
Debugging the HAProxy logs, I noticed the error below:
Creating haproxy ... done
Creating wechat-2 ... done
Creating wechat-1 ... done
Attaching to wechat-2, wechat-1, haproxy
haproxy | Available polling systems :
haproxy | epoll : pref=300, test result OK
haproxy | poll : pref=200, test result OK
haproxy | select : pref=150, test result FAILED
haproxy | Total: 3 (2 usable), will use epoll.
haproxy |
haproxy | Available filters :
haproxy | [SPOE] spoe
haproxy | [CACHE] cache
haproxy | [FCGI] fcgi-app
haproxy | [TRACE] trace
haproxy | [COMP] compression
haproxy | Using epoll() as the polling mechanism.
haproxy | [NOTICE] 144/185524 (1) : New worker #1 (8) forked
haproxy | [WARNING] 144/185524 (8) : Server servers/wechat-1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy | [WARNING] 144/185525 (8) : Server servers/wechat-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy | [ALERT] 144/185525 (8) : backend 'servers' has no server available!
From the logs I understand that HAProxy is not able to connect to the other two containers, which are running perfectly without any issues.
I tried using the depends_on attribute (commented out for the time being), but the issue is the same.
Can someone help me fix this issue?
Please try the configuration below; there are a few changes in haproxy.cfg.
docker-compose.yaml
version: '3.3'
services:
  wechat-1:
    image: nginx
    container_name: wechat-1
    ports:
      - 81:80
    networks:
      - web
    depends_on:
      - wechat-2
  wechat-2:
    image: nginx
    container_name: wechat-2
    ports:
      - 82:80
    networks:
      - web
  haproxy:
    build: ./haproxy
    container_name: haproxy
    ports:
      - 80:80
    networks:
      - web
    depends_on:
      - wechat-1
networks:
  web:
Dockerfile
FROM haproxy
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
haproxy.cfg
global
debug
daemon
maxconn 2000
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http-in
bind *:80
default_backend servers
backend servers
mode http
option forwardfor
balance roundrobin
server wechat-1 wechat-1:80 check
server wechat-2 wechat-2:80 check
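The substantive change is in the two server lines of the backend: on the shared web network, HAProxy must address the backends by their container ports (80 for the nginx images here, 8000 for the original Spring Boot images), not the host-published ports 81/82. The ports: mappings in docker-compose only apply when connecting from the host, which is why the health checks were refused before.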
HAProxy logs
Attaching to wechat-2, wechat-1, haproxy
haproxy | Using epoll() as the polling mechanism.
haproxy | Available polling systems :
haproxy | epoll : pref=300, test result OK
haproxy | poll : pref=200, test result OK
haproxy | select : pref=150, test result FAILED
haproxy | Total: 3 (2 usable), will use epoll.
haproxy |
haproxy | Available filters :
haproxy | [SPOE] spoe
haproxy | [CACHE] cache
haproxy | [FCGI] fcgi-app
haproxy | [TRACE] trace
haproxy | [COMP] compression
haproxy | [NOTICE] 144/204217 (1) : New worker #1 (6) forked

Using selenium/standalone-chrome docker image within a Ruby / Capybara test suite

I am attempting to write a Capybara test suite in Ruby (without Rails).
I would like to run the Ruby code in a Docker container.
FROM ruby:2.7
RUN gem install bundler
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install
COPY . .
CMD ["bundle", "exec", "rspec", "spec"]
My tests require a headless version of Chrome, which is not available in the Ruby image. I have attempted to create a docker-compose file to include the Ruby code and a headless Chrome image.
version: '3.7'
networks:
  mynet:
services:
  admin:
    container_name: mrt-integ-tests
    image: cdluc3/mrt-integ-tests
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - type: bind
        source: ./config/test_config.yml
        target: /config/test_config.yml
    stdin_open: true
    tty: true
    networks:
      mynet:
    depends_on:
      - chrome
  chrome:
    container_name: chrome
    image: selenium/standalone-chrome
    ports:
      - published: 4444
        target: 4444
    networks:
      mynet:
This is how I am attempting to create my Capybara session:
Capybara.register_driver :selenium_chrome do |app|
  args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => {"args" => args})
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    desired_capabilities: caps,
    url: "http://chrome:4444/wd/hub"
  )
end

#session = Capybara::Session.new(:selenium_chrome)
When I start my containers, I see the following error.
chrome | 05:39:44.350 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
chrome | 2020-06-04 05:39:44.420:INFO::main: Logging initialized #766ms to org.seleniumhq.jetty9.util.log.StdErrLog
chrome | 05:39:44.920 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
chrome | 05:39:45.141 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
chrome | 05:39:47.431 INFO [ActiveSessionFactory.apply] - Capabilities are: {
chrome | "browserName": "chrome",
chrome | "chromeOptions": {
chrome | "args": [
chrome | "--no-default-browser-check",
chrome | "--start-maximized",
chrome | "--headless",
chrome | "--disable-dev-shm-usage"
chrome | ]
chrome | },
chrome | "cssSelectorsEnabled": true,
chrome | "javascriptEnabled": true,
chrome | "nativeEvents": false,
chrome | "rotatable": false,
chrome | "takesScreenshot": false,
chrome | "version": ""
chrome | }
chrome | 05:39:47.435 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
chrome | Starting ChromeDriver 83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103#{#416}) on port 13554
chrome | Only local connections are allowed.
chrome | Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
chrome | ChromeDriver was started successfully.
chrome | [1591249187.473][SEVERE]: bind() failed: Cannot assign requested address (99)
chrome | 05:39:48.211 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
chrome | 05:39:48.247 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 1ed95304843a1d5ab904708d998710a0 (org.openqa.selenium.chrome.ChromeDriverService)
What suggestions do you have to resolve this error?
I have a partial solution here. Adding --whitelisted-ips to my browser options makes my tests function. See https://github.com/RobCherry/docker-chromedriver/issues/15
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage']
to
args = ['--no-default-browser-check', '--start-maximized', '--headless', '--disable-dev-shm-usage', '--whitelisted-ips']
Interestingly, the following error is still generated in the chrome container.
[1591286416.104][SEVERE]: bind() failed: Cannot assign requested address (99)
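A plausible reading of that leftover message: ChromeDriver logs the bind() failure when one of its listen addresses cannot be bound (commonly IPv6 ::1, which default Docker containers lack), and the "Started new session" line in the trace above shows the session comes up regardless, so once whitelisting is sorted out the message appears to be cosmetic.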
You need to set up the whitelisted-ips argument for the chromedriver executable. You can achieve this by setting the env JAVA_OPTS for the Docker Chrome image. In your case:
chrome:
  container_name: chrome
  image: selenium/standalone-chrome
  ports:
    - published: 4444
      target: 4444
  environment:
    - JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
  networks:
    mynet:

ddev start: web container failed (macOS Catalina using Documents folder for site)

When entering ddev start in the terminal, I get the error:
Failed to start xxx: web container failed: log=, err=container exited, please use 'ddev logs -s web` to find out why it failed
The error log goes:
...
+ disable_xdebug
Disabled xdebug
+ ls /var/www/html
ls: cannot open directory '/var/www/html': Stale file handle
/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting
+ echo '/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting'
+ exit 101
And I don't know what to do here. The directory /var/www does not exist, and creating it does not help. Searching the web does not bring up any valuable information; the only thing I found is this:
ls /var/www/html >/dev/null || (echo "/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting" && exit 101)
but I have no clue what it means, nor does it explain what to do. This is project related; I have docker/ddev running fine in other projects, but this one is haunted or something.
My config.yaml:
APIVersion: v1.12.2
name: xxx
type: php
docroot: public
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: true
provider: default
use_dns_when_possible: true
timezone: ""
docker-compose.yaml
web:
  container_name: ddev-${DDEV_SITENAME}-web
  build:
    context: '/Users/jnz/Documents/xxx/.ddev/.webimageBuild'
    args:
      BASE_IMAGE: $DDEV_WEBIMAGE
      username: 'jb'
      uid: '504'
      gid: '20'
  image: ${DDEV_WEBIMAGE}-built
  cap_add:
    - SYS_PTRACE
  volumes:
    - type: volume
      source: nfsmount
      target: /var/www/html
      volume:
        nocopy: true
    - ".:/mnt/ddev_config:ro"
    - ddev-global-cache:/mnt/ddev-global-cache
    - ddev-ssh-agent_socket_dir:/home/.ssh-agent
  restart: "no"
  user: "$DDEV_UID:$DDEV_GID"
  hostname: xxx-web
  links:
    - db:db
  # ports is list of exposed *container* ports
  ports:
    - "127.0.0.1:$DDEV_HOST_WEBSERVER_PORT:80"
    - "127.0.0.1:$DDEV_HOST_HTTPS_PORT:443"
  environment:
    - DOCROOT=$DDEV_DOCROOT
    - DDEV_PHP_VERSION=$DDEV_PHP_VERSION
    - DDEV_WEBSERVER_TYPE=$DDEV_WEBSERVER_TYPE
    - DDEV_PROJECT_TYPE=$DDEV_PROJECT_TYPE
    - DDEV_ROUTER_HTTP_PORT=$DDEV_ROUTER_HTTP_PORT
    - DDEV_ROUTER_HTTPS_PORT=$DDEV_ROUTER_HTTPS_PORT
    - DDEV_XDEBUG_ENABLED=$DDEV_XDEBUG_ENABLED
    - DOCKER_IP=127.0.0.1
    - HOST_DOCKER_INTERNAL_IP=
    - DEPLOY_NAME=local
    - VIRTUAL_HOST=$DDEV_HOSTNAME
    - COLUMNS=$COLUMNS
    - LINES=$LINES
    - TZ=
    # HTTP_EXPOSE allows for ports accepting HTTP traffic to be accessible from <site>.ddev.site:<port>
    # To expose a container port to a different host port, define the port as hostPort:containerPort
    - HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025
    # You can optionally expose an HTTPS port option for any ports defined in HTTP_EXPOSE.
    # To expose an HTTPS port, define the port as securePort:containerPort.
    - HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80
    - SSH_AUTH_SOCK=/home/.ssh-agent/socket
    - DDEV_PROJECT=xxx
  labels:
    com.ddev.site-name: ${DDEV_SITENAME}
    com.ddev.platform: ddev
    com.ddev.app-type: php
    com.ddev.approot: $DDEV_APPROOT
  external_links:
    - "ddev-router:xxx.ddev.site"
  healthcheck:
    interval: 1s
    retries: 10
    start_period: 10s
    timeout: 120s
So as #rfay pointed out in the comments, the problem was caused by macOS Catalina directory restrictions.
I had to go to System Settings > Security > Privacy > Files & Folders and add /sbin/nfsd; it now has full disk access.
Besides that, I granted Docker access to Documents.
Now ddev is up and running, even in folders inside /Users/xxx/Documents.
