Sometimes you want to use a custom Node.js version in your ddev setup. Here is an example configuration showing how this can be achieved.
Create a file in the .ddev folder named docker-compose.node.yaml with the following content:
version: '3.6'
services:
  node:
    container_name: ddev-${DDEV_SITENAME}-node
    image: node:10.6
    user: "node"
    restart: "no"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.platform: ddev
      com.ddev.app-type: php
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      - "../:/var/www/html:cached"
    working_dir: /var/www/html
    command: ["tail", "-f", "/dev/null"]
DDEV will start a separate node container that is not terminated after startup.
You can ssh into that container using the command ddev ssh -s node
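For one-off commands you can also use ddev exec with the same service flag instead of opening a shell; for example (illustrative commands, assuming the node service defined above):
ddev exec -s node node --version
ddev exec -s node npm ci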
You can also configure a post-start hook, for example in .ddev/config.yaml, like this:
hooks:
  post-start:
    - exec-host: ddev exec -s node npm ci --quiet
    - exec-host: ddev exec -s node npm start
Related
I am working on a Spring Boot project with Docker. I tried to mount a volume so I could access the files generated by the Spring Boot application from my local directory. The data is generated in the Docker container, but I cannot find it in the local directory.
I have read many topics, but none seems to be helpful.
I am still new to Docker and would appreciate any suggestions.
I have tried to mount the volume directly in the Dockerfile, since there is a docker-compose file that runs the service alongside others. Below is what I have in my Dockerfile and docker-compose file.
Dockerfile
FROM iron/java:1.8
EXPOSE 8080
ENV USER_NAME myprofile
ENV APP_HOME /home/$USER_NAME/app
#Test Script>>>>>>>>>>>>>>>>>>>>>>
#Modifiable
ENV SQL_SCRIPT $APP_HOME/SCRIPTS_TO_RUN
ENV SQL_OUTPUT_FILE $SQL_SCRIPT/data
ENV NO_OF_USERS 3
ENV RANGE_OF_SKILLS "1-4"
ENV HOST_PATH C:"/Users/user1/IdeaProjects/path/logs"
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
RUN adduser -S $USER_NAME
RUN mkdir $APP_HOME
RUN mkdir $SQL_SCRIPT
RUN chown $USER_NAME $SQL_SCRIPT
VOLUME $HOST_PATH: $SQL_SCRIPT
ADD myprofile-*.jar $APP_HOME/myprofile.jar
RUN chown $USER_NAME $APP_HOME/myprofile.jar
USER $USER_NAME
WORKDIR $APP_HOME
RUN sh -c 'touch myprofile.jar'
ENTRYPOINT ["sh", "-c","java -Djava.security.egd=file:/dev/./urandom -jar myprofile.jar -o $SQL_OUTPUT_FILE -n $NO_OF_USERS -r $RANGE_OF_SKILLS"]
Docker-compose
myprofile-backend:
  extra_hosts:
    - remotehost
  container_name: samplecontainer-name
  image: sampleimagename
  links:
    - rabbitmq
    - db:redis
  expose:
    - "8080"
  ports:
    - "8082:8080"
  volumes:
    - ./logs/:/tmp/logs
    - ./logs/:/app
The problem here is that you are mounting the same folder ./logs twice. The docker-compose volume mount syntax is <your-host-path>:<your-container-path>. Also, it's better to use relative paths when you are building the application. So change the docker-compose file to the following (assuming you want to see the files in ./target relative to the Dockerfile):
myprofile-backend:
  extra_hosts:
    - remotehost
  container_name: samplecontainer-name
  image: sampleimagename
  links:
    - rabbitmq
    - db:redis
  expose:
    - "8080"
  ports:
    - "8082:8080"
  volumes:
    - ./logs/:/tmp/logs
    - ./target/:/app
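With that change, files the container writes under /app should show up in ./target on the host. A quick way to check (a sketch, assuming the service name above and that the compose file sits next to ./target):
docker-compose up -d myprofile-backend
ls ./target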
I'm getting
app_1 | ./entrypoint.sh: line 2: docker: command not found
when running this line of code in entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
How would I properly execute this command?
entrypoint.sh
# entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
npm run seed # my attempt to run seed first before the server kicks in, but it doesn't work
npm run server
docker-compose.yml
# docker-compose.yml
version: "3"
services:
app:
build: ./server
depends_on:
- database
ports:
- 5000:5000
environment:
PSQL_HOST: database
PSQL_PORT: 5430
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_DB: ${POSTGRES_DB:-elitypescript}
entrypoint: ["/bin/bash", "./entrypoint.sh"]
client:
build: ./client
image: react_client
links:
- app
working_dir: /home/node/app/client
volumes:
- ./:/home/node/app
ports:
- 3001:3001
command: npm run start
env_file:
- ./client/.env
database:
image: postgres:9.6.8-alpine
volumes:
- database:/var/lib/postgresql/data
ports:
- 3030:5439
volumes:
database:
Try this Dockerfile:
FROM node:10.6.0
COPY . /home/app
WORKDIR /home/app
COPY package.json ./
RUN npm install
ENV DOCKERVERSION=18.03.1-ce
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz
EXPOSE 5000
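Note that installing the docker CLI in the image only gives you the client; it still needs to reach a Docker daemon. A common approach on a Linux host is to mount the host's Docker socket into the container, e.g. in docker-compose.yml (a sketch, not part of the original setup, and it gives the container full control over the host daemon):
services:
  app:
    build: ./server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock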
You are trying to run a Docker container inside of a Docker container. In most cases this is a very bad approach and you should avoid it. But if you really need it and really understand what you are doing, you have to apply Docker-in-Docker (dind).
As far as I understand, you need to run the script CREATE DATABASE elitypescript. The better option would be to apply the sidecar pattern: run another container with a PostgreSQL client that runs your script, as sketched below.
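A minimal sketch of such a sidecar (the db-init service name, the wait loop, and the PGPASSWORD wiring are illustrative, not part of the original setup):
# docker-compose.yml (excerpt) - hypothetical one-shot sidecar with a psql client
services:
  db-init:
    image: postgres:9.6.8-alpine   # reused only for its psql client
    depends_on:
      - database
    environment:
      PGPASSWORD: ${POSTGRES_PASSWORD:-password}
    entrypoint: >
      sh -c "until psql -h database -U postgres -c 'SELECT 1' >/dev/null 2>&1; do sleep 1; done;
      psql -h database -U postgres -c 'CREATE DATABASE elitypescript'"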
Link the containers together and connect using the hostname.
# docker-compose
services:
  app:
    links:
      - database
    ...
then just:
# entrypoint.sh
# the database container is available under the hostname database
psql -h database -p 3030 -U postgres -c "CREATE DATABASE elitypescript"
Links are a legacy option, but easier to use than networks.
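For reference, on Compose file format 3 the services in one file already share a default network, so the same command works by service name without links (a sketch; this assumes Postgres listens on its default port 5432 inside that network):
# entrypoint.sh - reach the db container by its service name on the default network
psql -h database -p 5432 -U postgres -c "CREATE DATABASE elitypescript"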
I am trying to run a Golang app using docker-compose; below is my compose configuration.
version: '2'
services:
  # Application container
  go:
    image: golang:1.8-alpine
    ports:
      - "80:8080"
    links:
      - mongodb
    environment:
      DEBUG: 'true'
      PORT: '8080'
    working_dir: /go/src/simple-golang-app
    command: go run main.go
    volumes:
      - ./simple-golang-app:/go/src/simple-golang-app
  mongodb:
    image: mvertes/alpine-mongo:3.2.3
    restart: unless-stopped
    ports:
      - "27017:27017"
On running the compose file with the command "docker-compose up" I get the error "stat main.go: no such file or directory", even though main.go is available in the working directory.
It works fine when your host dir layout is:
oxo#thor ~/Dropbox/Documents/code/docker/golang_working_dir $ find .
.
./docker-compose.yaml
./simple-golang-app
./simple-golang-app/main.go
So here we run:
cd ~/Dropbox/Documents/code/docker/golang_working_dir
docker-compose up
For a more complex build involving dependencies I use a Dockerfile:
FROM golang:1.8-alpine
RUN mkdir -p /go/src/simple-golang-app/
COPY simple-golang-app/main.go /go/src/simple-golang-app
WORKDIR /go/src/simple-golang-app
RUN apk add --no-cache git mercurial && go get -v -t ./... && apk del git mercurial
RUN go install ./...
RUN go build
ENV PORT 9000
Now update your docker-compose.yaml to use this new image:
old
image: golang:1.8-alpine
new
image: nirmal_golang_alpine:latest
So your commands are:
docker build --tag nirmal_golang_alpine:latest .
docker-compose up
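Putting it together, the go service in docker-compose.yaml would then look roughly like this (a sketch based on the original file, with only the image line swapped for the newly built one):
services:
  go:
    image: nirmal_golang_alpine:latest
    ports:
      - "80:8080"
    links:
      - mongodb
    environment:
      DEBUG: 'true'
      PORT: '8080'
    working_dir: /go/src/simple-golang-app
    command: go run main.go
    volumes:
      - ./simple-golang-app:/go/src/simple-golang-app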
I'm new to Docker so bear with me for any wrong term.
I have Docker Tools installed on Windows 7 and I'm trying to run a Docker Compose file for a proprietary existing project that is stored in a git repository and has probably only been run on Linux.
These are the commands I ran:
docker-machine start
docker-machine env
#FOR /f "tokens=*" %i IN ('docker-machine env') DO #%i
(this was output by step 2)
docker-compose -f <docker-file.yml> up
Most of the Docker work has gone fine (image download, extraction, etc).
It is failing at container start: some containers run fine (I recognize a working MongoDB instance since its log doesn't report any error), but other containers exit pretty soon with an error code, for example:
frontend_1 exited with code 127
Scrolling up a bit the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching Compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:
It turned out it was a matter of file line endings, caused by git clone, as pointed out by @mklement0 in his answer to the question env: bash\r: No such file or directory.
Disabling core.autocrlf and then recloning the repo solved it.
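A minimal sketch of the commands involved (whether false or input is the right core.autocrlf value depends on your workflow; the repository URL is a placeholder):
git config --global core.autocrlf false   # stop Git from converting LF to CRLF on checkout
git clone <repository-url>                # re-clone so the shell scripts get LF line endings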
I'm using Docker for Windows (Version 18.03.0-ce-win59 (16762)) on Windows 10 Pro. All the containers run fine after running the command docker-compose up -d. The problem appears when I restart the Docker service: once restarted, all the containers are stopped, and when I run the command docker-compose start the following error is shown:
Error response from daemon: network ccccccccccccc not found
I don't know what's happening. When I run the containers using docker run with the --restart=always option, everything works as expected; no error is shown on restart.
This is the docker-compose file:
version: '3'
services:
  service_1:
    image: image1
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_2:
    image: image2
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_3:
    image: image3
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
The Dockerfiles are like this:
FROM microsoft/dotnet-framework:3.5
ARG ENTRY
ENV my_env=$ENTRY
WORKDIR C:\\foo2
ENTRYPOINT C:/foo2/app.exe %my_env%
The network has changed. I used the docker network prune command and ran into the same problem. Recreating the containers fixes it: Docker sets up the network again for the new containers.
#remove all containers
docker rm $(docker ps -qa)
#or
docker system prune
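If the stack was started with Compose, recreating everything in one step also recreates the network (standard Compose commands):
docker-compose down    # removes the containers and the networks Compose created
docker-compose up -d   # recreates the network and starts fresh containers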
There might be some old container instances which were not removed. Check the instances with
docker container ls -a
You might get output like this if you have some instances which were not removed
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b4678e6666b b4a75a01d539 "/bin/sh -c 'eval `s…" 6 weeks ago Exited (1) 6 weeks ago zealous_allen
ee862a3418f2 1eaaf48e9b42 "/bin/sh -c 'eval `s…" 6 weeks ago Exited (1) 6 weeks ago jolly_torvalds
Remove the containers by the container id
docker container rm 8b4678e6666b
docker container rm ee862a3418f2
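If there are many stopped containers, they can also be removed in one step with the standard Docker CLI:
docker container prune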
Now start your containers with the docker-compose file.
This worked for me. Hope it helps!
I found a possible solution by editing the docker-compose.yml file as follows:
version: '3'
services:
  cm04:
    image: tnc530_cm04
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530/bin/x86/Release:C:/adontec
  cm06:
    image: tnc620_cm06
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
  cm08:
    image: tnc620_cm08
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
networks:
  test:
    external:
      name: nat
As you can see, I created a network called test linked to the external network nat. Now, when I restart the Docker service, the containers start with no errors.
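You can verify that the external nat network actually exists before bringing the stack up (standard Docker CLI; on Windows containers the nat network is created by Docker itself):
docker network ls
docker network inspect nat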
Alternatively, you can just open your Docker app and manually delete the containers. Then run docker-compose up in your terminal. Now it should be working. Go to port 9000 or 9001, or whichever port you are using, and see if minio is actually running.