How to import dump file into neo4j running on docker? - windows

I'm using the following Dockerfile, which I found on the internet:
FROM neo4j
COPY --chown=neo4j db.dump db.dump
COPY --chown=neo4j mgt-entrypoint.sh mgt-entrypoint.sh
RUN chmod +x mgt-entrypoint.sh
ENV NEO4J_AUTH=neo4j/mgttgm
ENV NEO4J_dbms_read__only=true
ENTRYPOINT ["./mgt-entrypoint.sh"]
It doesn't seem to be working.
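For reference, the question never shows mgt-entrypoint.sh itself. A minimal sketch of what such a script typically does on Neo4j 4.x follows; the neo4j-admin load flags and the stock entrypoint path /startup/docker-entrypoint.sh are assumptions about the official image, so verify them with docker inspect neo4j before relying on this:

```shell
#!/bin/bash
# mgt-entrypoint.sh -- a sketch, not the asker's actual script.
# Load the dump into the default database once, then hand control
# back to the image's stock entrypoint so Neo4j starts normally.
set -e
if [ ! -d /data/databases/neo4j ]; then
    # --force overwrites any database created on a previous first boot
    neo4j-admin load --from=/db.dump --database=neo4j --force
fi
# Entrypoint path is an assumption; check it with `docker inspect neo4j`
exec /startup/docker-entrypoint.sh neo4j
```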

Related

Dockerizing legacy Spring war project using startup.sh instead of catalina.sh

I currently have to containerize a legacy Spring war project that uses an Apache Tomcat server.
On my local machine I don't run into any trouble running it with the following configuration.
On the actual server, however, startup.sh is used instead of catalina.sh to start the server, as can be seen in the shell script below:
export CATALINA_OPTS="-Denv=product -Denv.servername=instance3 -Dfile.encoding=UTF-8 -Dspring.profiles.active=prod"
cd $CATALINA_HOME/bin
./startup.sh
#./catalina.sh run
I have tried building a Dockerfile containing the information from the shell script, as below:
FROM tomcat:9.0.40
EXPOSE 8083
COPY ./build/libs/ROOT.war "$CATALINA_HOME"/webapps/ROOT.war
ENV JAVA_OPTS='-Dspring.profiles.active=local'
RUN chmod +x $CATALINA_HOME/bin/startup.sh
RUN cd $CATALINA_HOME/bin
RUN /startup.sh
However, I run into an error stating that /startup.sh was not found:
=> CACHED [2/5] COPY ./build/libs/ROOT.war /usr/local/tomcat/webapps/ROOT.war 0.0s
=> [3/5] RUN chmod +x /usr/local/tomcat/bin/startup.sh 0.3s
=> [4/5] RUN cd /usr/local/tomcat/bin 0.3s
=> ERROR [5/5] RUN /startup.sh 0.2s
------
> [5/5] RUN /startup.sh:
#9 0.162 /bin/sh: 1: /startup.sh: not found
This is my first time using a war file so I am a bit unfamiliar with what I am doing, so any type of feedback would be deeply appreciated.
Thank you in advance!
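For what it's worth, the build output above shows two separate problems: RUN cd does not persist into the next instruction (so /startup.sh is resolved from the filesystem root), and startup.sh would in any case only run at build time and background Tomcat. A container-friendly sketch instead keeps Tomcat in the foreground at run time; the CATALINA_OPTS value is copied from the server's shell script above, and whether those exact options fit your deployment is an assumption:

```dockerfile
FROM tomcat:9.0.40
EXPOSE 8083
COPY ./build/libs/ROOT.war "$CATALINA_HOME"/webapps/ROOT.war
# catalina.sh picks up CATALINA_OPTS; value taken from the server's script
ENV CATALINA_OPTS="-Denv=product -Denv.servername=instance3 -Dfile.encoding=UTF-8 -Dspring.profiles.active=prod"
# Start Tomcat in the foreground when the container runs,
# instead of trying to run startup.sh at build time.
CMD ["catalina.sh", "run"]
```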

Copy contents from host OS into Docker image without rebuilding image

I'm building a new image and copying contents from the host OS folder D:\Programs\scrapy into it like so: docker build . -t scrapy
Dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN mkdir root
RUN cd root
WORKDIR /root
RUN mkdir scrapy
COPY scrapy to /root/scrapy
Now when I add new contents to the host OS folder "D:\Programs\scrapy" I want to also add it to image folder "root/scrapy", but I DON'T want to build a completely new image (it takes quite a while).
So how can I keep the existing image and just overwrite the contents of the image folder "root/scrapy"?
Also: I don't want to copy the new contents EACH time I run the container (so NOT at run-time), I just want to have a SEPARATE command to add more files to an existing image and then run a new container based on that image at another time.
I checked here: How to update source code without rebuilding image (but I'm not sure whether that OP is trying to do the same as me)
UPDATE 1
Checking What is the purpose of VOLUME in Dockerfile and docker --volume format for Windows
I tried the commands below, all resulting in this error:
docker: Error response from daemon: invalid volume specification: ''. See 'docker run --help'.
where <path-I-used> is, for example, D:/Programs/scrapy:/root/scrapy
docker run -v //D/Programs/scrapy:/root/scrapy scrapy
docker run -v scrapy:/root/scrapy scrapy
docker run -it -v //D/Programs/scrapy:/root/scrapy scrapy
docker run -it -v scrapy:/root/scrapy scrapy
UPDATE with cp command, based on @Makariy's feedback
docker images -a gives:
REPOSITORY TAG IMAGE ID CREATED SIZE
scrapy latest e35e03c8cbbd 29 hours ago 5.71GB
<none> <none> 2089ad178feb 29 hours ago 5.71GB
<none> <none> 6162a0bec2fc 29 hours ago 5.7GB
<none> <none> 116a0c593544 29 hours ago 5.7GB
mcr.microsoft.com/windows/servercore ltsc2019 d1724c2d9a84 5 weeks ago 5.7GB
I run docker run -it scrapy and then docker container ls which gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fcda458a14c scrapy "c:\\windows\\system32…" About a minute ago Up About a minute thirsty_bassi
If I run docker cp D:\Programs\scrapy scrapy:/root/scrapy I get:
Error: No such container:path: scrapy:\root
So in a separate PowerShell instance I then run docker cp D:\Programs\scrapy thirsty_bassi:/root/scrapy, which shows no output in PowerShell whatsoever, so I think it should've done something.
But then, in my container instance, when I go to the /root/scrapy folder, I only see the files that were already added when the image was built, not the new ones I wanted to add.
Also, I think I'm adding files to the container here, but is there no way to add them to the image instead, without rebuilding the whole image?
UPDATE 2
My folder structure:
D:\Programs
Dockerfile
\image_addons
Dockerfile
\scrapy
PS D:\Programs> docker build . -t scrapybase
Successfully built 95676d084e28
Successfully tagged scrapybase:latest
PS D:\Programs\image_addons> docker build -t scrapy .
Step 2/2 : COPY scrapy to /root/scrapy
COPY failed: file not found in build context or excluded by .dockerignore: stat to: file does not exist
Dockerfile A
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
WORKDIR /root/scrapy
Dockerfile B
FROM scrapybase
COPY scrapy to /root/scrapy
You can also use docker cp to manually copy files from your host to a running container:
docker cp ./path/to/file containername:/another/path
Docs
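Note that docker cp only changes the running container, not the image it came from. If the goal is to persist the copied files into an image without a rebuild, one option (not mentioned in the thread) is to snapshot the container with docker commit; the container and tag names below are placeholders taken from the question's output:

```shell
# copy the new files into the running container...
docker cp D:\Programs\scrapy thirsty_bassi:/root/scrapy
# ...then save that container's filesystem as a new image tag
docker commit thirsty_bassi scrapy:updated
```

As with stacking extra COPY layers, committing repeatedly grows the image, so this is better for quick iteration than for production builds.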
Answer if you want it quick and dirty:
docker run -it -v c:/programs/test:/root/test ubuntu:latest cat /root/test/myTestFile.txt
to update one file quickly:
If you don't have to build your code (I don't know what language you are using), you can build a base image with the initial code, and when you want to change only one file (again, assuming you don't need to compile your project for that; if you do, this is not possible due to the nature of compiled languages):
FROM previous-version-image:latest
COPY myfile dest/to/file
Then, because your CMD and ENTRYPOINT are inherited from the previous stages, there is no need to declare them again. (If you don't remember them, use docker history <docker-image-name> to view a virtual Dockerfile for the image up to this stage.)
Be careful not to use this method repeatedly, though, or you'll end up with a very big image with many useless layers. Use it only for quick testing and debugging.
explanation
Usually people use this for frontend development in Docker containers, but the basic idea is the same: you create a basic working image with the dependencies installed and the directory layout set up, with the last Dockerfile command being the development server's start command.
example:
Dockerfile:
# pull the base image
FROM node:slim
# set the working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# copy dependencies files
COPY package.json ./
COPY package-lock.json ./
# install app dependencies
RUN npm install
# add app
COPY . ./
# start development server
CMD ["npm", "start"]
startup command:
docker run -it --rm \
-v ${PWD}:/app \ <mount current working directory in host to container in path /app>
-v /app/node_modules \ <or other dependency directory if exists>
-p 80:3000 \ <ports if needs exposing>
ps-container:dev
I'm not sure this use case will work 100% for you, because the code needs to be bind-mounted all the time, and when it needs to be exported it has to be exported as the image plus the source code directory. On the other hand, it lets you make quick changes without waiting for the image to be rebuilt each time you add something new, and in the end you build the final image that contains everything that's needed.
A more relatable example for the code provided in the question:
Say there is a file on the host machine that contains some text.
The command that uses a bind mount to get access to the file:
docker run -it -v c:/programs/test:/root/test ubuntu:latest cat /root/test/myTestFile.txt
hope you find something that works for you from what I've provided here.
Thanks to this tutorial and this example for the starting examples and information.
EDIT:
Let's say your original Dockerfile looks like this:
FROM python:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD python /app/app.py
This will build your initial image, on top of which we'll add layers and change the Python files.
The next Dockerfile we'd use (let's call it Dockerfile.fix) would copy the file we want to change, instead of the ones already in the image:
FROM previous-image-name
COPY app.py .
Now, after building with this Dockerfile, the final image's Dockerfile would look (sort of) like this:
FROM python:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD python /app/app.py
FROM previous-image-name
COPY app.py .
And each time we want to change the file, we'll use the second Dockerfile.
There's no way you can change a Docker image without (at least partially) rebuilding it. But you don't have to rebuild all of it, you can just rebuild the layer copying your scrapy content.
You can optimize your build by having two images:
The first image is your static image that you don't want to rebuild each time. Let's call it scrapy-base.
The second and final image is based on scrapy-base and exists only to copy your dynamic scrapy content.
scrapy-base's Dockerfile:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN mkdir root
RUN cd root
WORKDIR /root
RUN mkdir scrapy
And build it like:
docker build -t scrapy-base .
This command only needs to be run once. You won't have to rebuild this image if you only change the content of the local scrapy folder (as you can see, the build does not use it at all).
scrapy's Dockerfile:
FROM scrapy-base
COPY scrapy /root/scrapy
With build command:
docker build -t scrapy .
This second build command will re-use the previous static image and only copy content without having to rebuild the entire image. Even with lots of files it should be pretty quick. You don't need to have a running container.
For your scenario:
docker run -v D:/test:/root/test your-image
Lots of valuable details are available in this thread.

Error in running Spring-Boot Application on Docker

I tried to run my simple Spring Web application jar on Docker, but I always get the following error. The ubuntu and openjdk images exist and their state is Up. I could not run my jar file on Docker; how can I get rid of this error?
ubuntu@ip-172-31-16-5:~/jar$ docker run -d -p 8080:8080 spring-docker tail -f /dev/null
c8eb92e5315adbaccfd894ed9e74b8e0d0eed88a81eaa07037cf8ada133c81fd
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"java\": executable file not found in $PATH": unknown.
Related Dockerfile:
FROM ubuntu
FROM openjdk
VOLUME /tmp
ADD /spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar
RUN sh -c 'touch /myapp.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myapp.jar"]
Check the sequence below, which works for me.
Build the image using the command below:
docker build -t demo-img .
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY demo-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
Then run it like below:
docker run --name demo-container -d -p 8080:8080 demo-img
Make sure you run all these commands from the directory in which the Dockerfile and the jar are present.

Copying a Laravel's .env file into a Docker container

Setup
I'm running Docker on my Ubuntu server and I'm trying to create a Laravel container to run my website with artisan. The Laravel project is in a GitHub repository, and I clone the project into the Docker container with a Dockerfile.
Problem
Laravel projects depend on .env (environment) files, which are not included in the repo for security reasons. So when I clone the repo into the Docker container, it doesn't include the .env file, and therefore the website doesn't run properly. I have a .env file locally on my Ubuntu machine that I'm trying to COPY into the Docker container's Laravel project folder; of course it doesn't work. This is because it's looking for the directory in the Docker container's file structure.
Error
Step 6/11 : COPY /containers/.env .env
lstat containers/.env: no such file or directory
Question
How can I copy the .env file from the ubuntu server to the docker container with the COPY command?
file structure (Ubuntu) source from:
root/
containers/
- docker-compose
- .env
file structure (docker container) source to:
root/
var/www/
dockerfile
FROM hitalos/laravel
RUN git config --system http.sslverify false
RUN git clone repo /var/www
RUN git checkout test
COPY /containers/.env .env
# Run Compser Install
RUN composer install -d /var/www
RUN php /var/www/artisan key:generate
WORKDIR /var/www
CMD php /var/www/artisan serve --port=80 --host=0.0.0.0
EXPOSE 80
Simply copying the .env file is not going to work, since you also have to run a source command on the file and then an export command to add each environment variable to your path.
Since you are using docker-compose then you can use the env_file like so in your docker-compose.yml:
env_file:
- .env
This should automatically set the values required by Laravel from your .env file when you start your container.
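In context, env_file belongs under a service definition in docker-compose.yml; a minimal sketch, where the service name app and the image are placeholders:

```yaml
services:
  app:
    image: hitalos/laravel      # placeholder image
    env_file:
      - .env                    # path relative to the compose file
    ports:
      - "80:80"
```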
The path in COPY is relative to the Dockerfile
just change it to COPY containers/.env /var/www/.env
EDIT:
It seems like you don't have the .env file at build time (image), only at runtime (container). That means you have to mount the file when running the container.
Remove the COPY ... command from Dockerfile and instead run the container with
-v /containers/.env:/var/www/.env
so something like this:
docker run... -v /containers/.env:/var/www/.env ...
or change it in the compose yml file

How to override the CMD command in the docker run line

You can override the CMD, as described in the Docker documentation:
https://docs.docker.com/reference/builder/#cmd
Dockerfile:
RUN chmod +x /srv/www/bin/* & chmod -R 755 /srv/www/app
RUN pip3 install -r /srv/www/app/pip-requirements.txt
EXPOSE 80
CMD ["/srv/www/bin/gunicorn.sh"]
the docker run command is:
docker run --name test test/test-backend
I tried
docker run --name test test --cmd ["/srv/www/bin/gunicorn.sh"]
docker run --name test test cmd ["/srv/www/bin/gunicorn.sh"]
But the console says this error:
System error: exec: "cmd": executable file not found in $PATH
The right way to do it is to drop the cmd ["..."] part:
docker run --name test test/test-backend /srv/www/bin/gunicorn.sh
The Dockerfile uses CMD instruction which allows defaults for the container being executed.
The line below will execute the script /srv/www/bin/gunicorn.sh, since it's already provided as the default value in the CMD instruction of your Dockerfile; internally it gets executed as /bin/sh -c /srv/www/bin/gunicorn.sh at runtime.
docker run --name test test/test-backend
Now, if you want to run something else, just add it at the end of the docker run command. The line below will run bash instead:
docker run --name test test/test-backend /bin/bash
Ref: Dockerfile Best Practices
For those using docker-compose:
docker-compose run [your-service-name-here-from-docker-compose.yml] /srv/www/bin/gunicorn.sh
In my case I'm reusing the same docker service to run the development and production builds of my react application:
docker-compose run my-react-app npm run build
This will run my webpack.config.production.js and build the application into dist. But yes, to answer the original question: you can override the CMD from the CLI.
Try this in your Dockerfile
CMD ["sh" , "-c", "command && bash"]
