I'm pulling a Windows servercore:ltsc2019 image as my base image, adding a folder to it, and creating my own image called "mygitlabpath/windows-2019". The contents of the Dockerfile are as follows:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ADD folder-z c:/windows/system32/config/systemprofile/folder-z
SHELL ["powershell"]
# at this step I see all the contents of folder-z
RUN ls c:/windows/system32/config/systemprofile/folder-z
Now I use this image I created in GitLab CI and try to access c:/windows/system32/config/systemprofile/folder-z, but there is no folder called folder-z:
image: mygitlabpath/windows-2019

stages:
  - build

build:
  stage: build
  script:
    # at this step I expect to see folder-z, but I don't
    - ls c:/windows/system32/config/systemprofile/
What am I missing? Any help is appreciated.
Thanks!
You should use COPY instead of ADD. It works for me.
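For reference, the fixed Dockerfile differs only in the second instruction (a minimal sketch, reusing the layout from the question):

FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY folder-z c:/windows/system32/config/systemprofile/folder-z
SHELL ["powershell"]
# folder-z should now survive into containers started from this image
RUN ls c:/windows/system32/config/systemprofile/folder-z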
I would like to build an image from a Dockerfile using Earthly.
You might be wondering why I want that, since I can describe images right inside an Earthfile, but I have two reasons for using an external Dockerfile:
1. The ADD command (which I need to download a file by URL) is not supported by Earthly yet.
2. I would like to use heredoc syntax to embed a file's content into the container right from the Dockerfile. This requires # syntax=docker/dockerfile:1.4, which again is not available in an Earthfile.
So, here is what I tried to do.
My approximate Dockerfile looks like:
# syntax=docker/dockerfile:1.4
FROM gcr.io/distroless/java17:nonroot
WORKDIR /opt/app
ADD --chown=nonroot https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.7/applicationinsights-agent-3.4.7.jar agent.jar
COPY <<EOF /opt/app/applicationinsights.json
{
"instrumentation": {}
}
EOF
And this is how I try to build it with Earthly:
base-image:
    FROM earthly/dind:alpine
    WORKDIR /build
    ENV DOCKER_BUILDKIT=1 # <---- required to support heredoc syntax
    COPY distroless-runtime-17.Dockerfile Dockerfile
    WITH DOCKER --allow-privileged
        RUN docker build . -t base-17-image
    END
While the WITH DOCKER RUN part executes successfully, I do not know how to use the result of the base-image target in other targets to package my app with the resulting base image. FROM base-17-image just fails as if the image does not exist (and that tag really does not exist: docker run base-17-image fails for the same reason).
It turned out to be very easy and natively supported.
The whole recipe is just two lines of code:
base-image:
    FROM DOCKERFILE -f distroless-runtime-17.Dockerfile .
and the result of the above step can be reused to package your application with: FROM +base-image
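For example, a downstream target can consume it directly (a sketch; the +build target, the app.jar artifact, and the image tag are hypothetical placeholders):

package:
    FROM +base-image
    # distroless java images use java as the entrypoint, so only the args are needed
    COPY +build/app.jar /opt/app/app.jar
    CMD ["-jar", "/opt/app/app.jar"]
    SAVE IMAGE my-app:latest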
There is no obvious way to override some php-fpm configuration in DDEV-Local's web container. Although it's easy to provide custom PHP configuration, it's not as obvious how one would configure the php-fpm process itself.
In my case I want to change the security.limit_extensions value in pool.d/www.conf.
There are two ways to do this. I'll create two separate answers to explain how.
The first technique is to create a custom Dockerfile (docs) which edits www.conf (or any other file). You can also use the Dockerfile ADD command to add a complete file and override it.
In the case of this specific problem, we'll create a .ddev/web-build/Dockerfile with these contents:
# You can copy this Dockerfile.example to Dockerfile to add configuration
# or packages or anything else to your webimage
ARG BASE_IMAGE
FROM $BASE_IMAGE
ENV PHP_VERSION=7.4
RUN echo "security.limit_extensions = .php .html" >> /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
After you ddev start you'll have the new configuration.
Instead of the RUN echo approach (shown here for simplicity), which just appends to the file, you could RUN a sed/awk/perl statement to change the file in place.
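For instance, a sed variant might look like this (a sketch; the exact pattern depends on how the directive currently appears in your www.conf):

RUN sed -i 's/^;*\s*security\.limit_extensions.*/security.limit_extensions = .php .html/' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf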
And alternatively you could put the version of the www.conf that you want into the .ddev/web-build directory and
COPY www.conf /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
The second way to approach this is to use a custom docker-compose.*.yaml file (docs).
Here you'll copy the desired www.conf (or any other file) into your project's .ddev directory and then mount it into the web container on top of the previously provided one. For this specific example, copy the www.conf into the .ddev folder with cd .ddev && docker cp ddev-<projectname>-web:/etc/php/7.4/fpm/pool.d/www.conf . and edit it as you need to (here, setting security.limit_extensions = .php .html).
Then a custom .ddev/docker-compose.*.yaml file like this can mount it into the proper directory (mine is called docker-compose.wwwconf.yaml):
version: "3.6"
services:
web:
volumes:
- "./www.conf:/etc/php/7.4/fpm/pool.d/www.conf"
If you are using docker-compose, mount a zz-docker.conf containing your customized configuration, as in this sample:
php:
  build: ./php
  image: ctc/php:latest
  container_name: ctc-php
  expose:
    - 9000
  volumes:
    - ./html:/var/www/html
    - ./php/log:/var/log/php-fpm
    - ./php/php-fpm.d/zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
  networks:
    - koogua
  restart: always
zz-docker.conf looks like this:
[global]
daemonize = no
[www]
listen = 9000
pm.max_children = 50
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30
pm.max_requests = 500
Note: mounting www.conf instead will cause an error.
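To confirm the pool configuration was picked up, you can ask php-fpm to parse and dump its configuration inside the container (a sketch; assumes the php service name from the compose file above):

docker-compose exec php php-fpm -tt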
I have a Bitbucket Pipeline that uses a custom Docker image from ECR as its base. I'm also using this image to build dockerized Go apps with make commands in the first step. I want to cache the Go modules that are downloaded during the make build process. But in the examples I've read, people use Go base images to make caching work. How can I activate caching while using a base image other than the Go image itself? The relevant parts of my pipeline are below; the Go cache doesn't seem to work.
image:
  name: <ECR Image>
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

definitions:
  caches:
    go: $GOPATH/pkg

pipelines:
  tags:
    '*-beta*':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - go
          script:
            - export ENVIRONMENT=beta
            - echo "Environment is ${ENVIRONMENT}"
            - export DOCKER_IMAGE_BUILDER="${BITBUCKET_REPO_SLUG}:builder"
            - make clean
            - make build BUILD_VER=${BITBUCKET_TAG}.${BITBUCKET_BUILD_NUMBER} APP_NAME=${BITBUCKET_REPO_SLUG} DOCKER_IMAGE_BUILDER=${DOCKER_IMAGE_BUILDER}
            - make test
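One hedged observation, not from the original thread: the go: $GOPATH/pkg definition only helps if the custom ECR image actually exports GOPATH; otherwise there is nothing for the variable to expand to and no directory gets cached. A sketch with an explicit path, assuming modules land under the default /root/go:

definitions:
  caches:
    go: /root/go/pkg/mod

Note also that if make build downloads the modules inside a docker build, they end up in the inner image's layers, not in the pipeline container, so the Bitbucket cache never sees them.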
Problem:
I am trying to deploy a function with this step in a second-level compilation (second-level-compilation.yaml):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions',
         'deploy', '${_FUNCTION_NAME}',
         '--source', 'path/to/function',
         '--runtime', 'go111',
         '--region', '${_GCP_CLOUD_FUNCTION_REGION}',
         '--entry-point', '${_ENTRYPOINT}',
         '--env-vars-file', '${_FUNCTION_PATH}/.env.${_DEPLOY_ENV}.yaml',
         '--trigger-topic', '${_TRIGGER_TOPIC_NAME}',
         '--timeout', '${_FUNCTION_TIMEOUT}',
         '--service-account', '${_SERVICE_ACCOUNT}']
I get this error from Cloud Build in the Console:
Step #1: Step #11: ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory path/to/function: ZIP does not support timestamps before 1980
Here is the global flow:
The following step is in a first-level compilation (first-level-compilation.yaml), which is triggered by Cloud Build from a GitHub repository (via the Cloud Build GitHub application):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'launch-second-level-compilation.sh ${_MY_VAR}']
The script "launch-second-level-compilation.sh" does specific operations based on ${_MY_VAR} and then launches a second-level compilation passing a lot of substitutions variables with "gcloud builds submit --config=second-level-compilation.yaml --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,....."
Then, the "second-level-compilation.yaml" described at the beginning of this question is executed, using the substitutions values generated and passed through the launch-second-level-compilation.sh script.
The main idea here is to have a generic first-level-compilation.yaml in charge of calling a second-level compilation with specific dynamically generated substitutions.
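For clarity, a minimal sketch of what launch-second-level-compilation.sh might look like (the script body is hypothetical; the submit command is the one quoted above, with its substitution list elided as in the question):

#!/usr/bin/env bash
set -euo pipefail
MY_VAR="$1"
# ... derive the substitution values from MY_VAR ...
gcloud builds submit . \
  --config=second-level-compilation.yaml \
  --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,...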
Attempts / Investigations
As described in the issue "Cloud Container Builder, ZIP does not support timestamps before 1980", I tried to ls the files in the /workspace directory, but none of the files at the /workspace root have a strange date.
I changed path/to/function from a relative path to /workspace/path/to/function, but with no success, unsurprisingly since it ends up being the same directory.
Please make sure you don't have folders without files. For example:
|--dir
|  |--subdir1
|  |  |--file1
|  |--subdir2
|     |--file2
In this example dir doesn't directly contain any files, only subdirectories. During local deployment, the gcloud SDK puts dir into the tarball without copying its last-modified field.
So it is set to 1 Jan 1970, which causes problems with ZIP.
As a possible workaround, just make sure every directory contains at least one file.
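For example, a build step could drop a placeholder file into every empty directory before deploying (a sketch; the .gitkeep name is arbitrary):

find path/to/function -type d -empty -exec touch {}/.gitkeep \;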
To begin, this is my project hierarchy:
myproj/
- commons1/
  - com1_file1.go
  - ...
- commons2/
  - com2_file1.go
  - ...
- module1/
  - mod1_file1.go
  - Dockerfile
  - ...
- module2/
  - mod2_file1.go
  - Dockerfile
  - ...
- docker-compose.yml
What I'd like is for the module1 and module2 containers, when they start up, to each have a copy of all the commonsN directories in their GOPATHs so that each can access the common libraries exposed by the commonsN directories.
For example, I would like to see something like this in the container for module1:
/go/
- src/
  - commons1/
    - com1_file1.go
    - ...
  - commons2/
    - com2_file1.go
    - ...
  - module1/
    - mod1_file1.go
    - ...
The reason is that this is basically how my local GOPATH looks (with the addition of the other modules, of course), so that I can do something like this in my source files:
package main

import (
    "fmt"

    "myproj/commons1"
)

func main() {
    fmt.Println("Something from common library:", commons1.SomethingFromCommons)
}
From my naive understanding of Docker, it appears I'm not allowed to have my Dockerfiles do something along the lines of COPY ../commons1 /go/src/commons1, so I'm wondering how I would go about accomplishing this.
I would strongly prefer to not go the Github route since the source code is all behind company proxies and whatnot and I'm imagining configuring all that is going to take way longer than simply copying some directories.
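(For context: COPY paths are resolved relative to the build context, so one common workaround, my assumption rather than something from this thread, is to build from the parent directory and point -f at the module's Dockerfile:)

# run from myproj/ so the context contains commons1, commons2, and module1
docker build -f module1/Dockerfile -t module1 .

# or equivalently in docker-compose.yml:
services:
  module1:
    build:
      context: .
      dockerfile: module1/Dockerfile

Then COPY commons1 /go/src/myproj/commons1 inside module1/Dockerfile works, because commons1 is inside the context.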
Edit
I have updated my docker-compose.yml file to look something like this, per the suggestion from barat:
version: '2'
services:
  module1:
    volumes:
      - ./commons1:/go/src/myproj/commons1
    build: module1/
The Dockerfile for module1 looks like this:
FROM golang:1.8.0
RUN mkdir -p /go/src/app
WORKDIR /go/src/app
COPY . /go/src/app
RUN go get -d -v
RUN go install -v
ENTRYPOINT /go/bin/app
EXPOSE 8080
docker-compose build fails on the go get -d -v step with this error:
package myproj/commons1: unrecognized import path "myproj/commons1" (import path does not begin with hostname)
If myproj/commons1 were copied into /go/src/, then this shouldn't be an issue, right? So I'm guessing it hasn't been copied over?
You could build an image including commons1 and commons2 that your other images are based on:
FROM golang:1.8.0
RUN mkdir -p /go/src/myproj/commons1 && mkdir -p /go/src/myproj/commons2
COPY commons1/ /go/src/myproj/commons1/
COPY commons2/ /go/src/myproj/commons2/
The downside is that this requires an external build step whenever you update one of the common projects:
docker build -t me/myproj:commons .
Then your compose apps can rely on the commons image instead of golang and build as normal, without the volumes:
FROM me/myproj:commons
...
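Concretely, the module1 Dockerfile from the edit could then be rebased onto the commons image (a sketch combining the answer's tag with the question's Dockerfile):

FROM me/myproj:commons
RUN mkdir -p /go/src/app
WORKDIR /go/src/app
COPY . /go/src/app
# myproj/commons1 and myproj/commons2 now resolve from /go/src in the base image
RUN go get -d -v
RUN go install -v
EXPOSE 8080
ENTRYPOINT /go/bin/app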
So the problem was the go get -d -v command: it was complaining that myproj/commons1 wasn't installed in $GOPATH/src. That's because volumes are only mounted at run time, so they aren't available while docker-compose build runs the go get. I made a workaround in my docker-compose.yml, but it is far from elegant:
version: '2'
services:
  module1:
    volumes:
      - ./commons1:/go/src/myproj/commons1
    build: module1/
    ports:
      - "8080:8080"
    command: bash -c "go get -d -v && go install -v && /go/bin/app"
This is obviously far from ideal because my Go binary is rebuilt every time I do a docker-compose up, regardless of whether I ran docker-compose build.
It is also problematic because I wanted to use dockerize to make certain containers wait until another container has started up completely, and that becomes quite messy now.