ddev start: web container failed (macOS Catalina using Documents folder for site)

When I enter ddev start in the terminal, I get the error
Failed to start xxx: web container failed: log=, err=container exited, please use 'ddev logs -s web` to find out why it failed
The error log shows:
...
+ disable_xdebug
Disabled xdebug
+ ls /var/www/html
ls: cannot open directory '/var/www/html': Stale file handle
/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting
+ echo '/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting'
+ exit 101
I don't know what to do here. The directory /var/www does not exist, and creating it doesn't help. Searching the web doesn't turn up anything valuable; the only thing I found is this:
ls /var/www/html >/dev/null || (echo "/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting" && exit 101)
but I have no clue what it means, nor does it explain what to do.
This is project-related: I have docker/ddev running fine in other projects, but this one is haunted or something.
My config.yaml:
APIVersion: v1.12.2
name: xxx
type: php
docroot: public
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: true
provider: default
use_dns_when_possible: true
timezone: ""
My docker-compose.yaml:
web:
  container_name: ddev-${DDEV_SITENAME}-web
  build:
    context: '/Users/jnz/Documents/xxx/.ddev/.webimageBuild'
    args:
      BASE_IMAGE: $DDEV_WEBIMAGE
      username: 'jb'
      uid: '504'
      gid: '20'
  image: ${DDEV_WEBIMAGE}-built
  cap_add:
    - SYS_PTRACE
  volumes:
    - type: volume
      source: nfsmount
      target: /var/www/html
      volume:
        nocopy: true
    - ".:/mnt/ddev_config:ro"
    - ddev-global-cache:/mnt/ddev-global-cache
    - ddev-ssh-agent_socket_dir:/home/.ssh-agent
  restart: "no"
  user: "$DDEV_UID:$DDEV_GID"
  hostname: xxx-web
  links:
    - db:db
  # ports is list of exposed *container* ports
  ports:
    - "127.0.0.1:$DDEV_HOST_WEBSERVER_PORT:80"
    - "127.0.0.1:$DDEV_HOST_HTTPS_PORT:443"
  environment:
    - DOCROOT=$DDEV_DOCROOT
    - DDEV_PHP_VERSION=$DDEV_PHP_VERSION
    - DDEV_WEBSERVER_TYPE=$DDEV_WEBSERVER_TYPE
    - DDEV_PROJECT_TYPE=$DDEV_PROJECT_TYPE
    - DDEV_ROUTER_HTTP_PORT=$DDEV_ROUTER_HTTP_PORT
    - DDEV_ROUTER_HTTPS_PORT=$DDEV_ROUTER_HTTPS_PORT
    - DDEV_XDEBUG_ENABLED=$DDEV_XDEBUG_ENABLED
    - DOCKER_IP=127.0.0.1
    - HOST_DOCKER_INTERNAL_IP=
    - DEPLOY_NAME=local
    - VIRTUAL_HOST=$DDEV_HOSTNAME
    - COLUMNS=$COLUMNS
    - LINES=$LINES
    - TZ=
    # HTTP_EXPOSE allows for ports accepting HTTP traffic to be accessible from <site>.ddev.site:<port>
    # To expose a container port to a different host port, define the port as hostPort:containerPort
    - HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025
    # You can optionally expose an HTTPS port option for any ports defined in HTTP_EXPOSE.
    # To expose an HTTPS port, define the port as securePort:containerPort.
    - HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80
    - SSH_AUTH_SOCK=/home/.ssh-agent/socket
    - DDEV_PROJECT=xxx
  labels:
    com.ddev.site-name: ${DDEV_SITENAME}
    com.ddev.platform: ddev
    com.ddev.app-type: php
    com.ddev.approot: $DDEV_APPROOT
  external_links:
    - "ddev-router:xxx.ddev.site"
  healthcheck:
    interval: 1s
    retries: 10
    start_period: 10s
    timeout: 120s

So as @rfay pointed out in the comments, the problem was caused by macOS Catalina's directory restrictions.
I had to go to System Preferences > Security & Privacy > Privacy > Files and Folders and add /sbin/nfsd; it now has full disk access.
Besides that, I granted Docker access to Documents.
Now ddev is up and running, even in folders inside /Users/xxx/Documents.
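For context, the healthcheck one-liner quoted above just tries to list /var/www/html; if the ls fails (as it does with a stale NFS file handle), it prints the message and exits with code 101. A quick way to sanity-check the NFS side on macOS (a sketch assuming the standard ddev NFS setup, which exports your directory via /etc/exports):
# validate the /etc/exports syntax that ddev's NFS mounting relies on
sudo nfsd checkexports
# list what the local NFS server exports; your project path (or a parent
# such as /Users) should appear here
showmount -e 127.0.0.1
# if your ddev version includes it, this tests the NFS mount end to end
ddev debug nfsmount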

Related

Hosts aren't accessible by name in docker compose on Windows

I have two Windows-based images that I'm using with Docker Compose.
The docker-compose.yaml:
services:
  application:
    image: myapp-win:latest
    container_name: "my-app"
    # for diagnosis
    entrypoint: ["cmd"]
    stdin_open: true
    tty: true
    # /diagnosis
    env_file: .myapp/.env
    environment:
      - POSTGRES_URI=jdbc:postgresql://db0:5432/mydatabase
    depends_on:
      db0:
        condition: service_healthy
  db0:
    image: stellirin/postgres-windows:10.10
    container_name: "my-db"
    ports:
      - 10000:5432 # this doesn't seem to work in windows
    env_file:
      - .postgres/.env
    volumes:
      - .postgres\initdb\:c:\docker-entrypoint-initdb.d\
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "${POSTGRES_DATABASE}", "-U", "${POSTGRES_USER}" ]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: unless-stopped
With the two containers started, I accessed the terminal for the my-db container and got its IP address.
Next, I accessed the terminal for the my-app container. I was able to ping the my-db container by its IP address. However, it did not respond by its hostname:
c:\app> ping db0
Ping request could not find host db0.
This is symptomatic of why the application can't reach the database using the POSTGRES_URI variable.
Is there a different syntax for the hostname in a Windows container?
** edit **
I'm not able to ping outside the network from either container:
c:\app> ping 8.8.8.8
Request timed out.
Not sure if this is relevant.
Regardless of container OS, to my knowledge, referring to the other service by its name (db0) directly won't work inside the container; the name is simply exposed to the other Compose entries.
Instead, set an env var based on the name and read it in the container:
environment:
  - "ADDRESS_DB=db0"
Then, if you want to be able to ping db0 or similar, have a startup script register the env var's value as an available hostname.
Alternatively, you may have success with the extra_hosts field, but I haven't tested this, and you may need to give it a different name to prevent interpolation:
extra_hosts:
  - db_url:db0
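For reference, the documented extra_hosts form maps a hostname to an IP address rather than to another service name, so a sketch closer to what Compose expects might look like this (the address is a placeholder, not something from the question):
extra_hosts:
  # documented form is "hostname:IP"; the IP below is a placeholder for
  # whatever address db0 actually gets on your network
  - "db0:172.20.0.10"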

java.net.UnknownHostException: host.docker.internal: Name or service not known on AWS EC2

I ran into this "java.net.UnknownHostException: host.docker.internal: Name or service not known" problem when deploying a dockerized Spring Boot application on an AWS EC2 T2.micro instance. The Spring Boot application failed to start because of this error.
But the weird part is, I did not use the variable "host.docker.internal" anywhere in my application: not in the code, not in the yaml file, not in the .env file:
$ sudo grep -Rl "host.docker.internal" ~
/home/ec2-user/.bash_history
And when I run the following command, it shows nothing but the previous command searching for it:
$ cat /home/ec2-user/.bash_history | grep "host.docker.internal"
Locally I am using Windows 10 for development, and I can successfully bring up the stack with docker-compose.
Here is the EC2 instance OS version info:
$ cat /etc/*release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
Amazon Linux release 2 (Karoo)
And here is the docker-compose file that I used on the EC2 instance:
version: '2'
services:
  backend:
    container_name: backend
    image: 'dockerhubuser/backend:0.0.4'
    ports:
      - '8080:8080'
    volumes:
      - /var/log/backend/logs:/var/log/backend/logs
      - ./backend-ssl:/etc/ssh/backend
    env_file:
      - .env
    depends_on:
      - mysql
      - redis
  redis:
    container_name: redis
    image: 'redis:alpine'
    ports:
      - '6379:6379'
    volumes:
      - $PWD/redis/redis-data:/var/lib/redis
      - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
  mysql:
    container_name: mysql
    image: 'mysql:8.0.21'
    ports:
      - '3306:3306'
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass
      MYSQL_ROOT_PASSWORD: dbrootpass
    volumes:
      - ./my_volume/mysql:/var/lib/mysql
volumes:
  my_volume:
And here is my .env file on the EC2 instance:
SERVER_PORT=8080
KEY_STORE=/etc/ssh/backend/keystore.p12
KEY_STORE_PASSWORD=keystorepass
REDIS_HOST=redis
REDIS_PORT=6379
DB_HOST=mysql
DB_PORT=3306
DB_USERNAME=dbuser
DB_PASSWORD=dbpass
I am pretty sure that this .env file is being used when bringing up the stack with "docker-compose up" because I can see the SERVER_PORT in the log matches this file when I change it.
2021-01-02 20:55:44.870 [main] INFO o.s.b.w.e.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8080 (https)
But I keep getting the error complaining about "host.docker.internal".
Here are the things that I have tried that did not work:
Hard-code the db host in property spring.datasource.url in application.yml
Add the following entry to /etc/hosts file (see https://stackoverflow.com/a/48547074/1852496)
172.17.0.1 host.docker.internal
Add the following entry to /etc/hosts file, where "ip-172-31-33-56.us-east-2.compute.internal" is what I got when running command "echo $HOSTNAME"
ip-172-31-33-56.us-east-2.compute.internal host.docker.internal
Terminate the instance and create another T2.micro instance (same result).
Edit inbound rules to allow TCP:3306 from anywhere.
Can someone take a look? Any help appreciated.
It works on Ubuntu 20.04 after adding "172.17.0.1 host.docker.internal" to the /etc/hosts file.
Make sure the Docker engine version is 20.10-beta1 or newer.
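On Docker 20.10 or newer you can also have Compose add that hosts entry for you with the special host-gateway value, instead of hard-coding 172.17.0.1; a minimal sketch for the backend service above:
services:
  backend:
    extra_hosts:
      # host-gateway resolves to the host's gateway IP (Docker 20.10+)
      - "host.docker.internal:host-gateway"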

How can I use the port of a server running on localhost in kubernetes running spring boot app

I am new to Kubernetes and kubectl. I am basically running a gRPC server on my localhost, and I would like to use this endpoint in a Spring Boot app running on Kubernetes (managed with kubectl) on my Mac. If I set the following config in application.yml and run it in Kubernetes, it doesn't work; the same config works if I run it in the IDE.
grpc:
  client:
    local-server:
      address: static://localhost:6565
      negotiationType: PLAINTEXT
I see some people suggesting port-forward, but that's the other way around (port-forward works when I want to reach, from localhost, a port that is already in Kubernetes, just like reaching the Tomcat server running in Kubernetes from a browser on localhost).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testspringconfigvol
  labels:
    app: testspring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testspringconfigvol
  template:
    metadata:
      labels:
        app: testspringconfigvol
    spec:
      initContainers:
        # taken from https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841
        # This container clones the desired git repo to the EmptyDir volume.
        - name: git-config
          image: alpine/git # Any image with git will do
          args:
            - clone
            - --single-branch
            - --
            - https://github.com/username/fakeconfig
            - /repo # Put it in the volume
          securityContext:
            runAsUser: 1 # Any non-root user will do. Match to the workload.
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /repo
              name: git-config
      containers:
        - name: testspringconfigvol-cont
          image: username/testspring
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /usr/local/lib/config/
              name: git-config
      volumes:
        - name: git-config
          emptyDir: {}
What I need in simple terms:
I have servers running on some localhost ports (localhost:6565, localhost:6566) and I need to access these ports somehow from inside Kubernetes. What should I set in the application.yml config? Will it be the same localhost:6565, localhost:6566, or how-to-get-this-ip:6565, how-to-get-this-ip:6566?
We can get the VM host IP using minikube with this command: minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'". For me it's 10.0.2.2 on Mac. If using Kubernetes on Docker for Mac, it's host.docker.internal.
By using these addresses, I managed to connect to the services running on the host machine from Kubernetes.
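So, assuming Docker Desktop's Kubernetes (where host.docker.internal resolves to the host), the client config from the question might look like the sketch below; on minikube, substitute the gateway IP found above (e.g. 10.0.2.2):
grpc:
  client:
    local-server:
      # host.docker.internal on Docker Desktop; 10.0.2.2 (or similar) on minikube
      address: static://host.docker.internal:6565
      negotiationType: PLAINTEXT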
1) Inside your application.properties define
server.port=8000
2) Create Dockerfile
# Start with a base image containing Java runtime (mine java 8)
FROM openjdk:8u212-jdk-slim
# Add Maintainer Info
LABEL maintainer="vaquar.khan@gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file (when packaged)
ARG JAR_FILE=target/codestatebkend-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} codestatebkend.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/codestatebkend.jar"]
3) Make sure Docker runs the container fine (using the example tag from the build step above):
docker run --rm -p 8080:8080 codestatebkend
4)
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Use the following command to find the pod name:
kubectl get pods
then
kubectl port-forward <pod-name> 8080:8080
Useful links :
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://developer.okta.com/blog/2019/05/22/java-microservices-spring-boot-spring-cloud

Can't add persistent folder to bitnami/mongodb on Windows

I think this might be related to file system incompatibility (ntfs/ext*).
How can I compose my containers and persist the db without the container exiting?
I'm using the bitnami-mongodb image.
Error:
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Full Output:
Recreating mongodb_1 ... done
Starting node_1 ... done
Attaching to node_1, mongodb_1
mongodb_1 |
mongodb_1 | Welcome to the Bitnami mongodb container
mongodb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb_1 |
mongodb_1 | nami INFO Initializing mongodb
mongodb_1 | mongodb INFO ==> Deploying MongoDB from scratch...
mongodb_1 | Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Docker Version:
Docker version 18.06.0-ce, build 0ffa825
Windows Version:
Microsoft Windows 10 Pro
Version 10.0.17134 Build 17134
This is my docker-compose.yml so far:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- "./data/db:/bitnami"
- "./conf/mongo:/opt/bitnami/mongodb/conf"
I do not use Windows, but you can definitely try a named volume and see if the permission problem goes away:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- mongodata:/bitnami:rw
- "./conf/mongo:/opt/bitnami/mongodb/conf"
volumes:
mongodata:
I would like to stress that this is a named volume, as opposed to the host volumes you are using. It is the best option for production, and you need to be aware that Docker will manage and store the files for you, so you will not see the files in your project folder.
If you still want to use host volumes (volumes that write to the location you specify in your project subfolder on the host machine), you need to apply a permission fix. Here is an example for mariadb, but it will work for mongo too:
https://github.com/bitnami/bitnami-docker-mariadb/issues/136#issuecomment-354644226
In short, you need to know the user id of the filesystem owner on your host (in the example, 1001 is the user id of my logged-in user) and then chown that folder to this user, so the user is the same on the folder and on your host system.
A full example:
version: "2"
services:
fix-mongodb-permissions:
image: 'bitnami/mongodb:latest'
user: root
command: chown -R 1001:1001 /bitnami
volumes:
- "./data:/bitnami"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- ./data:/bitnami:rw
depends_on:
- fix-mongodb-permissions
I hope this helps
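As a variation on the helper container above, you can also run the same fix once on the host, assuming the host folder should belong to the container's default non-root user (1001 in the bitnami images):
# make the host-mounted data folder owned by the uid the container runs as
sudo chown -R 1001:1001 ./data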

Traefik - Can't connect via https

I am trying to run Traefik on a Raspberry Pi Docker Swarm (specifically following this guide https://github.com/openfaas/faas/blob/master/guide/traefik_integration.md from the OpenFaaS project) but have run into some trouble when actually trying to connect via https.
Specifically there are two issues:
1) When I connect to http://192.168.1.20/ui I am given the username/password prompt. However, the details (unhashed password) generated by htpasswd and used in the docker-compose.yml below are not accepted.
2) Visiting the https version (https://192.168.1.20/ui) does not connect at all. It is the same if I try to connect using the domain I have set in --acme.domains.
When I explore /etc/ I can see that no /etc/traefik/ directory exists, but it should presumably be created, so perhaps this is the root of my problem?
The relevant part of my docker-compose.yml looks like:
traefik:
  image: traefik:v1.3
  command: -c --docker=true
    --docker.swarmmode=true
    --docker.domain=traefik
    --docker.watch=true
    --web=true
    --debug=true
    --defaultEntryPoints=https,http
    --acme=true
    --acme.domains='<my domain>'
    --acme.email=myemail@gmail.com
    --acme.ondemand=true
    --acme.onhostrule=true
    --acme.storage=/etc/traefik/acme/acme.json
    --entryPoints=Name:https Address::443 TLS
    --entryPoints=Name:http Address::80 Redirect.EntryPoint:https
  ports:
    - 80:80
    - 8080:8080
    - 443:443
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    - "acme:/etc/traefik/acme"
  networks:
    - functions
  deploy:
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=PathPrefix:/ui,/system,/function
      - traefik.frontend.auth.basic=user:password <-- relevant credentials from htpasswd here
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 20
      window: 380s
    placement:
      constraints: [node.role == manager]
volumes:
  acme:
Any help very much appreciated.
Due to https://community.letsencrypt.org/t/2018-01-09-issue-with-tls-sni-01-and-shared-hosting-infrastructure/49996, the TLS-SNI challenge (the default) for Let's Encrypt doesn't work anymore.
You must use the DNS challenge instead: https://docs.traefik.io/configuration/acme/#dnsprovider.
Or wait for the merge of https://github.com/containous/traefik/pull/2701.
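For illustration only, the DNS challenge in Traefik 1.x is enabled with flags along these lines (a sketch: cloudflare is just an example provider, and the exact flag name depends on the 1.x release, with newer 1.x versions using --acme.dnsChallenge.provider instead):
--acme.dnsProvider=cloudflare
# plus credentials for the chosen provider in the service environment, e.g.
# CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY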
