Mount image multiple times in Docker - Windows

As far as I understand Docker, it should be very simple to create different environments like dev and prod from a single image, simply by running "docker run" more than once.
However, I've built an image extending neo4j, to get a custom-configured Neo4j image, with the following Dockerfile:
FROM neo4j:3.5
COPY neo4j.conf /var/lib/neo4j/conf/neo4j.conf
COPY apoc-3.5.0.1.jar /var/lib/neo4j/plugins/apoc.jar
I've built it with
docker build -t myneo .
Now I've started it twice using a script.bat, like so:
docker run -d --rm --name neo4j-prod -p 10074:7474 -p 10087:7687 myneo
docker run -d --rm --name neo4j-dev -p 7474:7474 -p 7687:7687 myneo
Now I have two instances reachable under :10074 and :7474. However, when I create some data in one of them, it appears in the other one as well. What am I doing wrong?
Sadly, I have to work on Windows.

It looks like both of your Neo4j instances are pointing to the same database on the file system.
You can change the database location in the neo4j.conf file.
By default, the database is stored in the data directory.
You can uncomment the following line and change it to suit your environment:
#dbms.directories.data=data
like
dbms.directories.data=prod_data
Another option is to keep the database location the same and use different databases for prod and dev.
You can uncomment and change the active database name on the following line:
#dbms.active_database=graph.db
like
dbms.active_database=prod_graph.db
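Alternatively, you can keep a single image and a single config, and give each container its own data directory with a bind mount. A minimal sketch for the script.bat (the host paths are hypothetical, and Docker Desktop must be allowed to share that drive):
REM each container gets its own /data directory on the host
docker run -d --rm --name neo4j-prod -v C:\neo4j\prod-data:/data -p 10074:7474 -p 10087:7687 myneo
docker run -d --rm --name neo4j-dev -v C:\neo4j\dev-data:/data -p 7474:7474 -p 7687:7687 myneo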
EDIT:
If the above is not the issue, then maybe you are connecting to the same host from the Neo4j browser (check the host in the bolt connection).
(Screenshot of the bolt connection in the Neo4j browser omitted.)

If your issue was due to copying the same config file, which might contain common data, then you might consider changing the way you modify it for separate environments.
According to the Configuration docs, there are multiple ways to customize the config file, and copying the file, as you are doing, is one of them. But since you intend to use the same image for multiple environments, it would be better to configure Neo4j through environment variables, so that you avoid baking the same configuration, like passwords or databases, into both. For example:
docker run \
--detach \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
--env=NEO4J_dbms_memory_pagecache_size=4G \
neo4j:3.5
And your Dockerfile would then look like this:
FROM neo4j:3.5
COPY apoc-3.5.0.1.jar /var/lib/neo4j/plugins/apoc.jar
So if, say, you want to enable database authentication in production but not in development, you would do the following (note that in the environment variable name, dots in the setting become underscores and existing underscores are doubled):
# For production
docker run -d --rm --name neo4j-prod -e NEO4J_dbms_security_auth__enabled=true -p 10074:7474 -p 10087:7687 myneo
# For development
docker run -d --rm --name neo4j-dev -e NEO4J_dbms_security_auth__enabled=false -p 7474:7474 -p 7687:7687 myneo
Going this way makes it easy to deploy, reconfigure, and keep the configuration separate; and when you move to something like docker-compose, things get easier still.
More details can be found here.
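To illustrate, a minimal docker-compose sketch of the same idea (the service names and the auth setting are illustrative, not from the original setup):
version: "3"
services:
  neo4j-prod:
    image: myneo
    ports:
      - "10074:7474"
      - "10087:7687"
    environment:
      NEO4J_dbms_security_auth__enabled: "true"
  neo4j-dev:
    image: myneo
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      NEO4J_dbms_security_auth__enabled: "false"
A single docker-compose up then starts both environments from the same image, each with its own settings.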

Related

Cannot access environment variables in Docker environment

I am trying to dockerize my Spring Boot project and use it on an EC2 instance.
In application.properties I have the following lines:
spring.datasource.url=${SPRING_DATASOURCE_URL}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
and I am reading SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME and SPRING_DATASOURCE_PASSWORD from my environment.
I have the following Dockerfile:
FROM openjdk:latest
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar", "-DSPRING_DATASOURCE_URL=${DATASOURCE_URL}", \
"-DSPRING_DATASOURCE_USERNAME=${DATASOURCE_USERNAME}", \
"-DSPRING_DATASOURCE_PASSWORD=${DATASOURCE_PASSWORD}", "/app.jar"]
EXPOSE 8080
When I try to run the following command,
sudo docker run -p 80:8080 <image-repo> --env DATASOURCE_URL=<url> --env DATASOURCE_USERNAME=<username> --env DATASOURCE_PASSWORD=<password>
my application crashes because of the non-existent environment variables.
I have also tried using docker-compose, as mentioned in the Spring boot app started in docker can not access environment variables link, but it became much more complicated for me to debug.
TL;DR I want to achieve information hiding in my Spring Boot application. I want to use environment variables to hide my database credentials. However, my dockerized program does not get the environment variables that I want it to have.
If you have any approaches other than this, I would also be happy to hear other ways I can achieve information hiding. Thanks.
You need to put options like docker run --env before the image name. If they're after the image name, Docker interprets them as the command to run; with the specific setup you have here, you'll see them as the arguments to your main() function.
sudo docker run \
--env SPRING_DATASOURCE_URL=... \
image-name
# additional arguments here visible to main() function
I've spelled out SPRING_DATASOURCE_URL here. Spring already knows how to map environment variables to system properties. This means you can delete the lines from your application.properties file and the java -D options, so long as you do spell out the full Spring property name in the environment variable name.
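Putting that together, a minimal sketch of the simplified setup (the jar path and placeholders are taken from the question; the property lines and -D flags are gone):
FROM openjdk:latest
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
and the corresponding run command, with all options placed before the image name:
sudo docker run -p 80:8080 \
--env SPRING_DATASOURCE_URL=<url> \
--env SPRING_DATASOURCE_USERNAME=<username> \
--env SPRING_DATASOURCE_PASSWORD=<password> \
<image-repo>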

Accessing persistent H2 DB in docker container

I'm deploying a Spring Boot application and I want to use a persistent DB. So, in the application.properties file, I have
spring.datasource.url=jdbc:h2:file:/home/ubuntu/db;AUTO_SERVER=TRUE;
Now this works as long as I start the application without using a container. But then I build a Docker image and try to run the application. The Dockerfile looks like:
FROM maven:3-jdk-11 AS maven
ARG BUILD=target/build.jar
COPY ${BUILD} build.jar
EXPOSE 8080
USER spring:spring
ENTRYPOINT ["java","-jar","/build.jar"]
Now this doesn't work when I try to start it, because it searches for /home/ubuntu/db inside the container, which does not exist. Is there a way to make the app inside the docker container access the host folder /home/ubuntu/db? Thanks for the response.
The missing part is to tell Docker, when running the container, to mount /home/ubuntu/db from the host into the container.
You do that like this:
docker run -v <folder_on_host>:<folder_in_container>
with your example:
docker run -v /home/ubuntu/db:/home/ubuntu/db
More info in the Docker docs: https://docs.docker.com/get-started/06_bind_mounts/
Just in case it is helpful to anyone else, the full command to be used is:
docker run -v /home/ubuntu/db:/home/ubuntu/db --privileged -p $HOST_PORT:$CONTAINER_PORT <image-name>
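If you would rather let Docker manage the storage location itself, a named volume is an alternative sketch (the volume name h2data is hypothetical):
docker volume create h2data
docker run -v h2data:/home/ubuntu/db -p $HOST_PORT:$CONTAINER_PORT <image-name>
The H2 files then live in Docker's own volume storage and survive container restarts, though they are no longer directly visible under /home/ubuntu/db on the host.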

Jenkins tutorial maven project (with Docker) fails at Build stage

I'm using the current Jenkins Maven Project tutorial using Docker:
https://jenkins.io/doc/tutorials/build-a-java-app-with-maven/
I keep getting this error at the Build stage:
[simple-java-maven-app] Running shell script
sh: can't create
/var/jenkins_home/workspace/simple-java-maven-app#tmp/durable-bae402a9/jenkins-log.txt:
nonexistent directory
sh: can't create
/var/jenkins_home/workspace/simple-java-maven-app#tmp/durable-bae402a9/jenkins-result.txt:
nonexistent directory
I've tried setting the least restrictive permissions with chmod -R 777, chown -R nobody and chown -R 1000 on the listed directories, but nothing seems to work.
This is happening with the jenkins image on Docker version 17.12.0-ce, build c97c6d6 on Windows 10 Professional.
As this is happening with the Maven project tutorial on the Jenkins site, I'm wondering how many others have run into this issue.
I also had the same problem, on macOS.
After a few hours of research, I finally found the solution.
To solve the problem, it's important to understand that Jenkins itself runs inside a container, and when the Docker agent inside that container talks to your Docker engine, it passes volume-mount paths as they appear inside the container. But your Docker engine runs outside. So for mounts to work correctly, a path inside the container must match the same path outside the container, on your host.
To get this working, you need to change two things:
the docker run arguments
the Jenkinsfile docker agent arguments
For my own usage, I used this:
docker run -d \
--env "JENKINS_HOME=$HOME/Library/Jenkins" \
--restart always \
--name jenkins \
-u root \
-p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $HOME/Library/Jenkins:$HOME/Library/Jenkins \
-v "$HOME":/home \
jenkinsci/blueocean
In the Jenkinsfile, replace the agent part:
agent {
    docker {
        image 'maven:3-alpine'
        args '-v /root/.m2:/root/.m2'
    }
}
with:
agent {
    docker {
        image 'maven:3-alpine'
        args '-v <host_home_path>/.m2:/root/.m2'
    }
}
It's quite likely that this issue resulted from a recent change in Docker behaviour, which was no longer being handled correctly by the Docker Pipeline plugin in Jenkins.
Without going into too much detail, the issue was causing Jenkins to no longer be able to identify the container it was running in, which results in the errors (above) that you encountered with these tutorials.
A new version (1.15) of the Docker Pipeline plugin was released yesterday (https://plugins.jenkins.io/docker-workflow).
If you upgrade this plugin on your Jenkins (in Docker) instance (via Manage Jenkins > Manage Plugins), you'll find that these tutorials should start working again (as documented).
The error message means that the directory durable-bae402a9 was not created.
Walk back through the tutorial to find the step that should have created that directory, and make whatever changes are needed to make sure it succeeds.

ERR_EMPTY_RESPONSE from docker container

I've been trying to figure this out for the last few hours, but I'm stuck.
I have a very simple Dockerfile which looks like this:
FROM alpine:3.6
COPY gempbotgo /
COPY configs /configs
CMD ["/gempbotgo"]
EXPOSE 8025
gempbotgo is just a Go binary that runs a webserver and some other stuff.
The webserver runs on 8025 and should answer with a hello world.
My issue is with exposing ports. I ran my container like this (after building it)
docker run --rm -it -p 8025:8025 asd
Everything seems fine, but when I try to open 127.0.0.1:8025 in the browser, or try a wget, I just get an empty response.
Chrome: ERR_EMPTY_RESPONSE
The port is used and not restricted by the firewall on my Windows 10 system.
Running the Go binary without a container, just in my "Bash on Ubuntu on Windows" terminal, and then browsing to 127.0.0.1:8025 works without a hitch.
Other addresses, like 127.0.0.1:8030, returned "ERR_CONNECTION_REFUSED", so there definitely is something active on the port.
I then went into the container with
docker exec -it e1cc6daae4cf /bin/sh
and checked with a wget what happens in there. No issues there either: the index.html file gets downloaded with a "Hello World".
Any ideas why Docker is not sending any data? I've also run my container with docker-compose, but there's no difference there.
I also ran the container on my externally hosted VPS (Debian). Same issue there...
My code: (note the Makefile)
https://github.com/gempir/gempbotgo/tree/docker
Edit:
After getting some comments, I changed my Dockerfile to a multi-stage build. This is my Dockerfile now:
FROM golang:latest
WORKDIR /go/src/github.com/gempir/gempbotgo
RUN go get github.com/gempir/go-twitch-irc \
&& go get github.com/stretchr/testify/assert \
&& go get github.com/labstack/echo \
&& go get github.com/op/go-logging
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY configs ./configs
COPY --from=0 /go/src/github.com/gempir/gempbotgo/app .
CMD ["./app"]
EXPOSE 8025
Sadly, this did not change anything; I kept everything as close as possible to the guide here: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
I have also tried the minimalist Dockerfile from golang.org, which looks like this:
FROM golang:onbuild
EXPOSE 8025
But no success either with that.
Your issue is that you are binding to 127.0.0.1:8025 inside your code. That makes the server reachable from inside the container, but not from outside.
You need to bind to 0.0.0.0:8025 to listen on all interfaces inside the container, so that traffic coming from outside the container is also accepted by your Go app.
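A quick way to confirm which address the server is bound to (using the container ID from the question; BusyBox in Alpine usually provides netstat):
# 0.0.0.0:8025 is reachable through -p 8025:8025; 127.0.0.1:8025 is not
docker exec -it e1cc6daae4cf netstat -tln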
Adding to the accepted answer: I had the same error message trying to run docker/getting-started.
The problem was that "getting-started" uses port 80, and this port was "occupied" (netsh http show urlacl) on my machine.
I had to use docker run -d -p 8888:80 docker/getting-started, where 8888 was an unused port, and then open "http://localhost:8888/tutorial/".
I had the same problem dockerizing GatsbyJS. As per Tarun Lalwani's comment above, I resolved the problem by binding to 0.0.0.0 as the hostname:
yarn develop -P 0.0.0.0 -p 8000
For me, this was a problem with the Docker swarm mode ingress network. I had to recreate it: https://docs.docker.com/network/overlay/#customize-the-default-ingress-network
Another possibility for getting this error is that the docker run command was run from a normal command prompt instead of an admin command prompt. Make sure you run it as an admin!

Use environment variables in docker

I have implemented a Docker project for automated setup. I use Docker 1.9 on Ubuntu Server and make use of the build-arg feature, which I use to set a dynamic subdomain in the Apache virtual hosts file.
docker build --no-cache --build-arg domain=demo1.myapp.com -t imagename .
docker run -d -p 8080:80 imagename
I take domain and substitute it into the virtual hosts file using a sed command in my script file:
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
My Dockerfile had this code:
ARG domain
RUN /bin/sh /script.sh $domain
Now I need to migrate the application to AWS, where I get the Amazon Linux AMI. But there the supported Docker version is 1.7, which does not support build-arg. I tried to upgrade, but a lot of dependencies block me.
Now I have decided to use ENV environment variables instead, like below:
docker run -d -p 8080:80 -e domain=demo1.myapp.com
I also changed the Dockerfile accordingly; it now has:
RUN /bin/sh /script.sh
But they do not seem to work in my scenario: at build time, the sed script substitutes an empty value in the Apache file and the build process fails.
Is this not possible without build-arg, or am I setting/using ENV the wrong way?
First, AWS can support Docker 1.9.
See for instance "Getting overlay networking to work in AWS with Docker 1.9":
use a Docker Machine version 0.5.2-dev, as explained here
use the right AMI (Amazon Machine Image), Ubuntu 15.10
set up the AWS environment variables
If you choose to remain with an old AMI and its Docker 1.7, then the -e options are for runtime only (creating/running containers), not build time (building the image).
That means that if your ENTRYPOINT or CMD was /script.sh, using $domain inside the script (and then launching your main process), it would work.
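As a minimal sketch of that approach (the Apache start command is an assumption; script.sh is your existing script, extended to launch the main process):
#!/bin/sh
# script.sh: runs at container start, when -e values are visible
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
exec apache2ctl -D FOREGROUND
and in the Dockerfile, instead of the RUN line:
COPY script.sh /script.sh
ENTRYPOINT ["/bin/sh", "/script.sh"]
Then docker run -d -p 8080:80 -e domain=demo1.myapp.com imagename performs the substitution at container start rather than at build time.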
