Ok, this seems easy enough for Linux containers, but I am trying to get this done using Windows Containers, and it's annoying that it's so difficult.
I have a Windows Dockerfile which builds a project. Part of the build process is to reversion the C# AssemblyInfo.cs files so that the built assemblies carry a build version from the CI environment (Azure DevOps).
I am using the PowerShell script https://github.com/microsoft/psi/blob/master/Build/ApplyVersionToAssemblies.ps1. It expects two environment variables: one I can hardcode, so it is not a problem, but the BUILD_BUILDNUMBER environment variable needs to be injected from the DevOps build system.
I have tried the following, none of which works:
ARG BUILD_BUILDNUMBER
ENV BUILD_BUILDNUMBER=$BUILD_BUILDNUMBER
RUN ApplyVersionToAssemblies.ps1
and running
docker build -f Dockerfile --build-arg BUILD_BUILDNUMBER=1.2.3.4 .
also
RUN SETX BUILD_BUILDNUMBER $BUILD_BUILDNUMBER
RUN SETX BUILD_BUILDNUMBER %BUILD_BUILDNUMBER%
and a few other combinations that I don't recall. What I ended up doing, which works but feels like a hack, is to pass the build number in a file via a COPY, then modify the PowerShell script to read that file into its local variable.
So for the moment it works, but I would really like to know how this is supposed to work via ARG and ENV for Windows Container builds.
Windows Containers definitely feel like Linux containers' poor cousin :)
Example for CMD in Docker Windows Containers:
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"https://URL.com/_packaging/Name/nuget/v3/index.json\", \"username\":\"PATForPackages\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
SHELL ["cmd", "/S", "/C"]
RUN echo %VSS_NUGET_EXTERNAL_FEED_ENDPOINTS%
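Applying the same pattern to the original BUILD_BUILDNUMBER question, a minimal sketch could look like the below (the C:\ApplyVersionToAssemblies.ps1 path is an assumption; adjust it to wherever you COPY the script):
ARG BUILD_BUILDNUMBER
ENV BUILD_BUILDNUMBER=$BUILD_BUILDNUMBER
SHELL ["powershell", "-Command"]
# Inside RUN the shell does the expansion, so use $env:VAR (PowerShell) or %VAR% (cmd), not $VAR
RUN Write-Host "Versioning with $env:BUILD_BUILDNUMBER"; C:\ApplyVersionToAssemblies.ps1
The usual trap is that $VAR substitution only happens in Dockerfile instructions such as ENV, ADD and COPY; the arguments of RUN are handed to the container's shell, and on Windows neither cmd nor PowerShell understands the $VAR syntax.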
Problem
I am trying to use a Windows Docker container to run GitHub Actions.
I want to run scripts before and after the job (e.g. to clean the directory).
I have successfully done this before on a computer not running Docker, so I figured the same should work in Docker.
What I have tried
I found here that you can do this using environment variables.
I used the following two commands in the command prompt to set the environment variables.
Pre-Job Script:
setx ACTIONS_RUNNER_HOOK_JOB_STARTED C:\actions-runner-resources\scripts\pre-post-build\pre-run-script.ps1
Post-Job Script:
setx ACTIONS_RUNNER_HOOK_JOB_COMPLETED C:\actions-runner-resources\scripts\pre-post-build\post-run-script.ps1
The scripts do not run.
I have tried restarting the docker container.
I have tried restarting the actions runner service.
I am new to Docker, so I am wondering if I am doing something with the environment variables that does not work in Docker.
How do I get the actions runner to run pre/post job scripts in docker?
You can add them safely using the recommended method:
Inside the actions-runner directory, locate the .env file and edit it, adding your environment variables. Save, then restart the runner service.
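For the hook variables above, the entries in .env are plain KEY=VALUE lines, so they would look something like:
ACTIONS_RUNNER_HOOK_JOB_STARTED=C:\actions-runner-resources\scripts\pre-post-build\pre-run-script.ps1
ACTIONS_RUNNER_HOOK_JOB_COMPLETED=C:\actions-runner-resources\scripts\pre-post-build\post-run-script.ps1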
My scenario is as follows:
I need to add the "project" folder to the Docker container for the production build, but for the development build I want to mount a local volume onto the project folder of the container,
e.g. ADD project /var/www/html/project in production,
and nothing in development (I can copy a dummy folder in development).
If I copy the whole project folder to the container in development, then any change in the project folder invalidates the Docker cache of the layers after the ADD command, and building the image in development takes time again.
I want to use the same Dockerfile for both environments.
To achieve that I used ADD $PROJECT_DIR /var/www/html/project in the Dockerfile, where $PROJECT_DIR is an environment variable.
Setting the environment variable in the Dockerfile, like ENV PROJECT_DIR project or ENV CONFIG_FILE_PATH dummy-folder, adds the correct folder to the container, but it requires me to change the Dockerfile each time.
I can also pass a "build-arg" parameter when building the Docker image, like
docker build -t myproject --build-arg "BUILD_TYPE=PROD" --build-arg "PROJECT_DIR=project" .
As the variables BUILD_TYPE and PROJECT_DIR are related, I want to set the PROJECT_DIR variable based on BUILD_TYPE. This will prevent the case where I forget to change one of the two parameters.
For setting the PROJECT_DIR variable, I wrote the following script, set_project_folder.sh:
if [ "$BUILD_TYPE" = "PROD" ]; then
  PROJECT_DIR="project";
else
  PROJECT_DIR="dummy-folder";
fi
I then run the script in the Dockerfile using
RUN . /root/set_project_folder.sh
Doing this, the set_project_folder.sh script can access the BUILD_TYPE variable, but PROJECT_DIR is not reflected back in the Dockerfile.
When running set_project_folder.sh in my local machine's terminal, the PROJECT_DIR variable is changed, but it does not work with the Dockerfile.
Is there any way we can change an environment variable from a subshell script, e.g. set_project_folder.sh in the question above?
If it is possible, it could be used in many use cases to make Docker builds dynamic.
Am I doing anything wrong here?
OR
Is there another good way to achieve this?
You can use something like the following:
FROM alpine
ARG BUILD_TYPE=prod
ARG CONFIG_FILE_PATH=config-$BUILD_TYPE.yml
RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
CMD echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
The output would look like:
Step 4/4 : RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
---> Running in b5de774d9ebe
BUILD_TYPE=prod CONFIG_FILE_PATH=config-prod.yml
But if you run the image
$ docker run 9df23a126bb1
BUILD_TYPE= CONFIG_FILE_PATH=
This is because build args are not persisted as environment variables. If you want to persist these variables in the image as well, you need to add the line below:
ENV BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH
And now docker run will also output
$ docker run c250a9d1d109
BUILD_TYPE=prod CONFIG_FILE_PATH=config-prod.yml
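Because the default value of CONFIG_FILE_PATH is computed from BUILD_TYPE, switching environments only needs a single override; for a hypothetical dev build:
docker build -t myproject --build-arg BUILD_TYPE=dev .
# CONFIG_FILE_PATH now defaults to config-dev.yml without being passed explicitly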
We would like to leverage the excellent catalogue of Dockerfiles on Docker Hub, but the team is not in a position to use Docker.
Is there any way to run a Dockerfile as if it were a shell script against a machine?
For example, if I chose to run the Docker image ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: GitHub docker-library/ruby:2.4.1-jessie
A Dockerfile assumes it is executed in an empty container or on an image it builds upon (using FROM). The knowledge about the environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker can have side effects, because files land in places where no files are expected.
I wouldn't recommend it.
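If you want to experiment anyway, here is a very rough sketch that extracts only ENV and single-line RUN instructions into a shell script; it ignores FROM, COPY/ADD, backslash line continuations, JSON-form instructions and everything else, so review the generated file before running it:
grep -E '^(ENV|RUN) ' Dockerfile \
  | sed -e 's/^ENV \([A-Za-z_][A-Za-z0-9_]*\) \(.*\)/export \1="\2"/' \
        -e 's/^ENV /export /' \
        -e 's/^RUN //' > from-dockerfile.sh
sh from-dockerfile.sh
The first sed expression rewrites the space-separated ENV KEY VALUE form, the second catches the ENV KEY=VALUE form, and the third strips the RUN prefix so the command runs as-is.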
I am running Docker on Windows and I have the following HEALTHCHECK directive:
HEALTHCHECK --interval=20s \
--timeout=5s \
CMD powershell C:\\healthcheck.ps1
In the healthcheck.ps1 script I would like to access the ${env:something} value, but it is empty there. I added Get-ChildItem Env: to the healthcheck.ps1 script to list the variables and see what the environment looks like, and none of the variables I pass to the container during startup are listed. What is interesting: when I enter the container with "docker exec" I can see the variable, and even more, launching the healthcheck script manually from inside the container works as expected and the variable is visible there. It just doesn't work when Docker tries to perform a healthcheck.
I have a similar Dockerfile on Linux and of course it works just fine.
So my question is: what is different on Windows? How can I achieve this? Is it even possible to access environment variables in a healthcheck script on Windows?
It turns out it's a bug:
https://github.com/moby/moby/issues/31366
This is resolved in version 17.04, which I can confirm.
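On a fixed version, a quick way to confirm that the healthcheck process actually sees container environment variables (a sketch; "something" stands in for your variable name):
HEALTHCHECK --interval=20s --timeout=5s \
    CMD powershell -Command "if ([string]::IsNullOrEmpty($env:something)) { exit 1 } else { exit 0 }"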
I'm new to Jenkins and I have been searching around, but I couldn't find what I was looking for.
I'd like to know how to run a docker command in Jenkins (Build - Execute Shell):
Example: docker run hello-world
I have set Docker Installation to "Install latest from docker.io" in Jenkins Configure System and have also installed several Docker plugins. However, it still doesn't work.
Can anyone help point out what else I should check or set?
John
One of the following plugins should work fine:
CloudBees Docker Custom Build Environment Plugin
CloudBees Docker Pipeline Plugin
I normally run my builds on slave nodes that have docker pre-installed.
I came across another generic solution. Since I'm not an expert at creating a Jenkins plugin out of this, here are the manual steps:
Create/change your Jenkins container (I use Portainer) with the environment variable DOCKER_HOST=tcp://192.168.1.50 (when using the unix protocol you also have to mount your Docker socket), and append :/var/jenkins_home/bin to the actual PATH variable.
On your Docker host, copy the docker binary into the Jenkins container: "docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/"
Restart the Jenkins container
Now you can use the docker command from any script or command line. The changes will persist across image updates.
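To check the result from a job, an Execute Shell build step along these lines should now succeed, assuming the daemon behind DOCKER_HOST is reachable:
docker version
docker run hello-world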