Can't add persistent folder to bitnami/mongodb on Windows

I think this might be related to file-system incompatibility (NTFS vs. ext*).
How can I compose my containers and persist the DB without the container exiting?
I'm using the bitnami/mongodb image.
Error:
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Full Output:
Recreating mongodb_1 ... done
Starting node_1 ... done
Attaching to node_1, mongodb_1
mongodb_1 |
mongodb_1 | Welcome to the Bitnami mongodb container
mongodb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb_1 |
mongodb_1 | nami INFO Initializing mongodb
mongodb_1 | mongodb INFO ==> Deploying MongoDB from scratch...
mongodb_1 | Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Docker Version:
Docker version 18.06.0-ce, build 0ffa825
Windows Version:
Microsoft Windows 10 Pro
Version 10.0.17134 Build 17134
This is my docker-compose.yml so far:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- "./data/db:/bitnami"
- "./conf/mongo:/opt/bitnami/mongodb/conf"

I do not use Windows, but you can definitely try a named volume and see if the permission problem goes away:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- mongodata:/bitnami:rw
- "./conf/mongo:/opt/bitnami/mongodb/conf"
volumes:
mongodata:
I would like to stress that this is a named volume, as opposed to the host volumes you are using. It is the best option for production, but be aware that Docker manages and stores the files for you, so you will not see them in your project folder.
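If you ever need to find that data, the Docker CLI can show you where the named volume lives on disk. A quick sketch (note that Compose prefixes the volume name with the project name, so the exact name may differ):

# list the volumes Docker manages
docker volume ls
# print details, including the Mountpoint where the files are stored
docker volume inspect <project>_mongodata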
If you still want to use host volumes (volumes that write to a location you specify in your project subfolder on the host machine), you need to apply a permission fix. Here is an example for MariaDB, but it will work for MongoDB too:
https://github.com/bitnami/bitnami-docker-mariadb/issues/136#issuecomment-354644226
In short, you need to know the user id on your host filesystem (in the example, 1001 is the id of my logged-in user on the host machine) and then chown the data folder to that user, so the folder's owner matches on both sides.
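On a Linux or macOS host you can look up that id like this (assuming standard coreutils; on Windows with NTFS this mapping works differently, which is why the compose-based fix below is handier):

# numeric user id and group id of the current user
id -u
id -g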
A full example:
version: "2"
services:
fix-mongodb-permissions:
image: 'bitnami/mongodb:latest'
user: root
command: chown -R 1001:1001 /bitnami
volumes:
- "./data:/bitnami"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- ./data:/bitnami:rw
depends_on:
- fix-mongodb-permissions
I hope this helps.

Related

Docker-compose for production running Laravel with nginx on Azure

I have an app that works, but I am having problems getting it to run on Azure.
I have the following docker-compose file:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure if the volumes in nginx are OK; I think perhaps I should create a new Dockerfile to copy the files, but that would increase the size of the project a lot.
When using App Services on Azure, the deployment is assigned a random port; that's why I have the envsubst instruction in the command. I'd appreciate any other suggestions for making this project run on Azure.
I'm assuming you're trying to persist your app's storage to a volume; check out this doc on the issue. Now, I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and mount the path /var/www/simple/public/uploads to the file path of the storage container.
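For illustration, here is a hedged sketch of that mount using the Azure CLI; every resource name below is a placeholder, not taken from the question:

# mount an Azure Files share at the path the compose file expects
# (my-rg, my-app, mystorageacct, and uploads are hypothetical names)
az webapp config storage-account add \
  --resource-group my-rg \
  --name my-app \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name mystorageacct \
  --share-name uploads \
  --access-key "$STORAGE_KEY" \
  --mount-path /var/www/simple/public/uploads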

ddev start: web container failed (macOS Catalina using Documents folder for site)

When entering ddev start in the terminal, I get the error
Failed to start xxx: web container failed: log=, err=container exited, please use 'ddev logs -s web` to find out why it failed
The error log goes:
...
+ disable_xdebug
Disabled xdebug
+ ls /var/www/html
ls: cannot open directory '/var/www/html': Stale file handle
/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting
+ echo '/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting'
+ exit 101
I don't know what to do here. The directory /var/www does not exist, and it does not help to create it. Searching the web does not turn up any valuable information; the only thing I found is this:
ls /var/www/html >/dev/null || (echo "/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting" && exit 101)
but I have no clue what it means, nor does it explain what to do.
This is project related; I have docker/ddev running fine in other projects, but this one is haunted or something.
My config.yaml:
APIVersion: v1.12.2
name: xxx
type: php
docroot: public
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: true
provider: default
use_dns_when_possible: true
timezone: ""
My docker-compose.yaml:
web:
  container_name: ddev-${DDEV_SITENAME}-web
  build:
    context: '/Users/jnz/Documents/xxx/.ddev/.webimageBuild'
    args:
      BASE_IMAGE: $DDEV_WEBIMAGE
      username: 'jb'
      uid: '504'
      gid: '20'
  image: ${DDEV_WEBIMAGE}-built
  cap_add:
    - SYS_PTRACE
  volumes:
    - type: volume
      source: nfsmount
      target: /var/www/html
      volume:
        nocopy: true
    - ".:/mnt/ddev_config:ro"
    - ddev-global-cache:/mnt/ddev-global-cache
    - ddev-ssh-agent_socket_dir:/home/.ssh-agent
  restart: "no"
  user: "$DDEV_UID:$DDEV_GID"
  hostname: xxx-web
  links:
    - db:db
  # ports is list of exposed *container* ports
  ports:
    - "127.0.0.1:$DDEV_HOST_WEBSERVER_PORT:80"
    - "127.0.0.1:$DDEV_HOST_HTTPS_PORT:443"
  environment:
    - DOCROOT=$DDEV_DOCROOT
    - DDEV_PHP_VERSION=$DDEV_PHP_VERSION
    - DDEV_WEBSERVER_TYPE=$DDEV_WEBSERVER_TYPE
    - DDEV_PROJECT_TYPE=$DDEV_PROJECT_TYPE
    - DDEV_ROUTER_HTTP_PORT=$DDEV_ROUTER_HTTP_PORT
    - DDEV_ROUTER_HTTPS_PORT=$DDEV_ROUTER_HTTPS_PORT
    - DDEV_XDEBUG_ENABLED=$DDEV_XDEBUG_ENABLED
    - DOCKER_IP=127.0.0.1
    - HOST_DOCKER_INTERNAL_IP=
    - DEPLOY_NAME=local
    - VIRTUAL_HOST=$DDEV_HOSTNAME
    - COLUMNS=$COLUMNS
    - LINES=$LINES
    - TZ=
    # HTTP_EXPOSE allows for ports accepting HTTP traffic to be accessible from <site>.ddev.site:<port>
    # To expose a container port to a different host port, define the port as hostPort:containerPort
    - HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025
    # You can optionally expose an HTTPS port option for any ports defined in HTTP_EXPOSE.
    # To expose an HTTPS port, define the port as securePort:containerPort.
    - HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80
    - SSH_AUTH_SOCK=/home/.ssh-agent/socket
    - DDEV_PROJECT=xxx
  labels:
    com.ddev.site-name: ${DDEV_SITENAME}
    com.ddev.platform: ddev
    com.ddev.app-type: php
    com.ddev.approot: $DDEV_APPROOT
  external_links:
    - "ddev-router:xxx.ddev.site"
  healthcheck:
    interval: 1s
    retries: 10
    start_period: 10s
    timeout: 120s
So, as @rfay pointed out in the comments, the problem was caused by macOS Catalina's directory restrictions.
I had to go to System Settings > Security > Privacy > Files & Folders and add /sbin/nfsd; it now has full disk access.
Besides that, I granted Docker access to Documents.
Now ddev is up and running, even in folders inside /Users/xxx/Documents.
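If you run into this, a quick way to verify that NFS mounting works again is ddev's built-in check (assuming a reasonably current ddev; the debug subcommand may vary by version):

# test the NFS mount for the current project
ddev debug nfsmount
# then rebuild and start the project
ddev restart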

Docker shared volume not working on macOS

I have a docker-compose.yml file. It works fine on Windows 10, but whenever I try to run it on macOS, it doesn't work, especially the shared volumes.
Here is the content of my docker-compose.yml file and the directory structure:
version: '3'
services:
  database:
    image: mongo
    container_name: pcore-database
    ports:
      - '27017:27017'
  node-server:
    image: node
    container_name: pcore-node-server
    volumes:
      - ./node-services :/usr/app/node-services
    working_dir: /usr/app/node-services
    command: npm run dev
    ports:
      - '3000:3000'
    links:
      - database
      - nginx-server
    depends_on:
      - database
  apache-server:
    image: webdevops/php-apache
    container_name: pcore-apache-server
    working_dir: /app
    volumes:
      - ./php-services :/app
    ports:
      - '8000:80'
Check the node-server service and nginx-server.
Now when I run the command docker-compose up, it creates additional directories with the same names and throws an error.
Check the error and the additional directories it created.
I don't know what's going on. It works fine on Windows 10, but on macOS it creates additional directories and does not share the volumes. Can someone guide me?
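One thing that stands out, assuming the compose file above is pasted verbatim: both bind mounts have a stray space before the colon, so Docker sees "./node-services " (with a trailing space) as the host path, which would explain the duplicate-looking directories. A corrected sketch of the two mappings:

# no space before the colon in the short volume syntax
volumes:
  - ./node-services:/usr/app/node-services

# and likewise for the apache-server service
volumes:
  - ./php-services:/app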

How to run elasticsearch via docker compose or swarm mode and install plugin with command

Problem Statement
I have a docker-compose.yml file (v3) that looks like the following:
version: '3'
services:
  elastic:
    restart: always
    image: elasticsearch:2.3.1
    command: ["sh", "-c", "./bin/plugin install delete-by-query && ./bin/elasticsearch"]
    volumes:
      - /home/styfle/esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    restart: always
    image: kibana:4.5.4
    ports:
      - 5601:5601
    links:
      - elastic:elasticsearch
When I run docker-compose up elastic, it appears that the plugin installs correctly, but then I get the message "don't run elasticsearch as root".
Creating dev_elastic_1 ... done
Attaching to dev_elastic_1
elastic_1 | -> Installing delete-by-query...
elastic_1 | Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/delete-by-query/2.3.1/delete-by-query-2.3.1.zip ...
elastic_1 | Downloading ..DONE
elastic_1 | Verifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/delete-by-query/2.3.1/delete-by-query-2.3.1.zip checksums if available ...
elastic_1 | Downloading .DONE
elastic_1 | Installed delete-by-query into /usr/share/elasticsearch/plugins/delete-by-query
elastic_1 | Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
elastic_1 | at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)
elastic_1 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
elastic_1 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
elastic_1 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
elastic_1 | Refer to the log for complete error details.
dev_elastic_1 exited with code 74
Question
How can I install the plugin and run as the elasticsearch user instead of the root user?
Docker Compose will not switch users for you mid-command: the command you pass runs as the container's default user, which here is root (as the error output confirms). You can fix this with a couple of small changes to your current docker-compose.yml, as below:
version: '3'
services:
  elastic:
    restart: always
    image: elasticsearch:2.3.1
    user: ${MY_USER_ID}
    command: ["sh", "-c", "./bin/plugin install delete-by-query && ./bin/elasticsearch"]
    volumes:
      - /home/styfle/esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    restart: always
    user: ${MY_USER_ID}
    image: kibana:4.5.4
    ports:
      - 5601:5601
    links:
      - elastic:elasticsearch
I have added the line user: ${MY_USER_ID} in the docker-compose.yml above. After this, use the following command to spin up the containers and start Elasticsearch:
MY_USER_ID=$(id -u):$(id -g) docker-compose up elastic
Test it and let me know how it goes.
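If you don't want to type the variable on every invocation, docker-compose also reads a .env file placed next to the compose file. A minimal sketch (the 1000:1000 value is an assumption; use your own id -u and id -g output):

# .env (docker-compose loads this file automatically)
MY_USER_ID=1000:1000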

Docker compose containers fail and exit with code 127 missing /bin/env bash

I'm new to Docker, so bear with me if I get a term wrong.
I have Docker Tools installed on Windows 7, and I'm trying to run a Docker Compose file from a proprietary existing project stored in a git repository, which has probably only ever been run on Linux.
These are the commands I ran:
docker-machine start
docker-machine env
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
(this last command was output by step 2)
docker-compose -f <docker-file.yml> up
Most of the Docker work went fine (image download, extraction, etc.).
It fails at container start: some containers run fine (I recognize a working MongoDB instance, since its log doesn't report any errors), but other containers exit pretty soon with an error code, i.e.:
frontend_1 exited with code 127
Scrolling up a bit in the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:
It turned out to be a matter of file line endings, introduced by git clone, as pointed out by @mklement0 in his answer to the question "env: bash\r: No such file or directory".
Disabling core.autocrlf and then recloning the repo solved it.
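For reference, a minimal sketch of that fix (standard git options; <repo-url> is a placeholder):

# stop git from converting LF to CRLF on checkout (a common Windows default)
git config --global core.autocrlf false
# re-clone so the files come back with LF endings intact
git clone <repo-url>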
