Windows Docker Dockerfile COPY file inside folder

I'm trying to build a Dockerfile to copy a file to container, I'm using Windows 10. This is my Dockerfile
FROM openjdk:8
COPY /target/myfile.java /
And I'm getting the error:
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/target/myfile.java" not found: not found
I already tried //target//myfile.java, \\target\\myfile.java, \target\myfile.java, target/myfile.java, target\myfile.java but none of them worked.
If I put myfile.java in the same directory as the Dockerfile and use COPY myfile.java /, it works without a problem. So the problem is copying a file that lives inside a folder. Any suggestions?

I tried your Dockerfile locally and it built fine with the following directory structure:
Project
│   Dockerfile
│
└───target
        myfile.java
I built it from the 'Project' directory with the following command:
docker build . -t java-test
I could only reproduce the error when the Docker server couldn't find the 'myfile.java', i.e. using the following directory structure:
Project
│   Dockerfile
│
└───target
    └───target
            myfile.java
So your Dockerfile looks fine; just make sure you build it from the right directory with the correct build context, and that the file is stored in the correct place locally.
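If you ever need to build from somewhere else, you can point docker build at the Dockerfile and the context explicitly; a quick sketch, with illustrative paths:
docker build -t java-test -f Project/Dockerfile Project
The final argument is the build context, and COPY /target/myfile.java / resolves against it, i.e. against Project/target/myfile.java.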

Related

Docker COPY index.html throws error in laravel project

My Dockerfile contents:
FROM nginx:1.18
COPY index.html /usr/share/nginx/html
Gitlab Runner log error message:
Step 2/2 : COPY index.html /usr/share/nginx/html
COPY failed: file not found in build context or excluded by .dockerignore: stat index.html: file does not exist
Setup: it is a Laravel app, so no index.html file exists…
I have no idea how to proceed…
Thx!
As suggested by sytech, I simply needed to delete the COPY line. In my case, I don't need to copy anything; the line was taken from a tutorial on setting up a GitLab runner.
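With the COPY line removed, the Dockerfile in this case reduces to just the base image:
FROM nginx:1.18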

Dockerfile COPY ".env" directory

How can I use COPY in Dockerfile to copy folder of which name starts with a dot?
FROM confluentinc/cp-kafka-connect:6.0.0
# does not work
COPY ./aws/ /home/appuser/.aws
EXPOSE 8083
Directory structure:
/MyFolder
├── .aws
│   └── credentials
└── Dockerfile
COPY ./aws [DESTINATION] means "COPY, from the current directory ('./'), the directory named 'aws' to [DESTINATION]".
COPY ./.aws [DESTINATION] will copy the hidden directory '.aws' from the current directory ('./') to [DESTINATION].
COPY ./.aws/ /home/appuser/.aws will result in /home/appuser/.aws/credentials existing in the built image.
Tip: [DESTINATION] is created by COPY if it doesn't already exist.
Note: if the directory is ignored in a .dockerignore then the COPY will not work.
Note: you should never COPY credentials into an image if you intend to share it; instead, bind-mount the credentials at container runtime, e.g. docker run --rm -it -v "$(pwd)/.aws/credentials:/home/appuser/.aws/credentials:ro" myimage (docker run -v requires an absolute host path, hence $(pwd)).
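Putting the answer together, the corrected Dockerfile from the question becomes:
FROM confluentinc/cp-kafka-connect:6.0.0
COPY ./.aws/ /home/appuser/.aws
EXPOSE 8083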

REST API with Hyperledger Fabric and Node.js on Heroku

I am trying to connect my Hyperledger Fabric network with my backend in Heroku.
I did all the connections as the examples suggest. This is what my code looks like:
[screenshot of my code]
When I deploy to Heroku I get the following error:
[NetworkConfig101.js]: NetworkConfig101 - problem reading the PEM file :: Error: ENOENT: no such file or directory
My .pem files are in the same folder as my configuration file. [screenshot of folders]
Use a path relative to the working directory. For example, given your directory tree:
.
├── app.js
└── artifacts
    ├── crypto-config
    │   ├── ca.crt
    │   └── key.pem
    └── network-config.yaml
In network-config.yaml, you should use path:
path: ./artifacts/crypto-config/ca.crt
Another way is to use an absolute path:
path: /data/app/artifacts/crypto-config/ca.crt
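For orientation, here is a sketch of where such a path typically sits in a fabric-client connection profile; the peer name, URL, and port below are illustrative, not taken from the question:
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: ./artifacts/crypto-config/ca.crt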

Restart Docker Container (Automatically) when Image changes in Portainer(?)

I have a Java WAR file that is part of a Docker image and is started inside a Tomcat (Docker) container. Since the code changes, the WAR will change too. I would like to do the following:
1. Change the Java code and push it to Git
2. Have a WAR file created (from the code just pushed to Git)
3. Create a NEW IMAGE (Docker) that uses the NEW WAR file
4. Stop all old containers (running the old image)
5. Restart the containers (which will be using the new image)
I am also using Portainer. Is there some series of commands that I can run so that Item #4 and Item #5 happen automatically (without requiring human intervention)? Is there some way this can be done at all?
TIA
docker-compose can be helpful for this. You can create a YAML file for your application and use the docker-compose CLI to spin up new containers as required. For example, I have a Tomcat/Mongo based application with the following YAML file:
version: '3'
services:
  mongodb:
    image: mongo
    network_mode: host
  tomcat:
    build:
      context: ./app
      dockerfile: DockerfileTomcat
    network_mode: host
    depends_on:
      - mongodb
With the folder layout:
├── docker-compose.yml
└── app
    ├── DockerfileTomcat
    └── app.war
Where DockerfileTomcat takes care of copying the WAR file into the Tomcat container:
FROM tomcat:8.5-jre8
RUN rm -rf /usr/local/tomcat/webapps/*
COPY app.war /usr/local/tomcat/webapps/app.war
In order to start your application, you need to run the following command in the directory containing docker-compose.yml:
docker-compose up --build
Just copy the new WAR file over app.war each time and run the command above. It will rebuild the image and launch the updated container.
If this isn't what you are looking for, you can write a Bash script to automate the process. Let me know if you want me to post it here.
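As a rough sketch of what such a script could look like, assuming the compose layout above (the WAR build path is a placeholder, not from the original answer):
#!/usr/bin/env bash
# Redeploy: pick up the freshly built WAR, rebuild the image, and replace
# the running container in one step.
set -euo pipefail
cp /path/to/build/output/app.war app/app.war  # placeholder: wherever your build drops the WAR
docker-compose up --build -d                  # rebuilds the image and recreates the changed service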

Docker - Creating multiple containers/environments with different versions

I'm starting with MongoDB and taking four courses. All of them use different versions of MongoDB, Python, Node.js, ASP.NET, the MEAN stack, etc. The structure of my desired workspace:
courses
├─ mongodb_basic
│  ├─ hello_world-2.7.py
│  └─ data
│     └─ db
├─ python-3.6_mongodb
│  ├─ getting_started.py
│  └─ data
│     └─ db
├─ dotnet_and_mongodb
│  ├─ (project files)
│  └─ data
│     └─ db
├─ mongodb_node
│  ├─ (project files)
│  └─ data
│     └─ db
└─ mean_intro
   └─ (project files)
I want to keep my Windows 10 system clean by using Docker, without installing all the stuff on the host, but I'm stuck at the first course and don't know how to:
link containers:
  python/pymongo <-> mongodb
  aspnet <-> mongodb
  ... <-> mongodb
map the data folders
start/stop linked containers with one command (desirable)
I'd like to keep a workspace on the host (external HDD) in order to work on different computers (three W10 PCs).
Google results include many tutorials (containerizing, docker-compose, etc.), and I don't know where to start.
I think it should be possible to do what you are trying to do using docker-compose and correctly defined Dockerfiles. So if you are wondering where to start, I would suggest getting acquainted with Dockerfiles and docker-compose.
To answer your question:
linking containers:
that can be done using docker-compose. Specify the container services you want to use in a compose file like the one specified here.
NOTE: the volumes: declaration is where you would specify your workspace folder structure for the containers to access.
map folder/data: Again, I would check out the link mentioned above. In their Dockerfile they use the ADD command to add the current host directory into the container at the /code directory, and that directory is also listed under volumes: in the compose file. What does that mean? Whatever you change in the host workspace shows up in the /code directory of the container.
start/stop with one command: you should be able to create, start, or stop all the services (or a specific service) with a single docker-compose up, docker-compose start, or docker-compose stop command.
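For example:
docker-compose up -d    # create and start all services in the background
docker-compose stop     # stop them without removing the containers
docker-compose start    # start them again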
For your application you might even be able to get away with defining your workspace as volumes in all of the dockerfiles and then building them with a script. Or you can use some kind of orchestration service like Kubernetes as well but that might be overkill.
Hope this is helpful.
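As a concrete starting point, here is a minimal sketch of what one course's docker-compose.yml could look like, assuming the python-3.6_mongodb course from the tree above; the image tags are illustrative, and a real setup would build a dedicated image rather than install pymongo on every start:
version: '3'
services:
  mongodb:
    image: mongo:3.6                # pin whichever version the course uses
    volumes:
      - ./data/db:/data/db          # map the course's data folder from the host workspace
  app:
    image: python:3.6
    working_dir: /code
    volumes:
      - .:/code                     # the workspace (e.g. on the external HDD) stays editable
    command: sh -c "pip install pymongo && python getting_started.py"
    depends_on:
      - mongodb
Run docker-compose up from the course folder to start both containers, and docker-compose down to stop and remove them.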
