How to access environment variables in a Windows Docker healthcheck script

I am running Docker on Windows and I have the following healthcheck directive:
HEALTHCHECK --interval=20s \
--timeout=5s \
CMD powershell C:\\healthcheck.ps1
In the healthcheck.ps1 script I would like to access the ${env:something} value, but it is empty there. I added Get-ChildItem Env: to the healthcheck.ps1 script to list the environment variables, and none of the variables I pass to the container during startup are there. What is interesting: when I enter the container with "docker exec" I can see the variable, and launching the healthcheck script manually from inside the container works as expected; the variable is visible there. It just doesn't work when Docker performs the healthcheck.
I have a similar dockerfile on linux and of course it works just fine.
So my question is: what is different on Windows? How can I achieve this? Is it even possible to access environment variables in a healthcheck script on Windows?

It turns out it's a bug:
https://github.com/moby/moby/issues/31366
This is resolved in version 17.04, which I can confirm.
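For reference, a minimal sketch of the intended pattern once you are on a fixed version (the base image tag is illustrative; the variable name something is taken from the question):
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ENV something="from-dockerfile"
COPY healthcheck.ps1 C:/healthcheck.ps1
HEALTHCHECK --interval=20s --timeout=5s CMD powershell -File C:/healthcheck.ps1
And healthcheck.ps1 simply exits 0 for healthy, 1 for unhealthy:
# healthcheck.ps1: fail if the variable is not visible to the healthcheck process
if ([string]::IsNullOrEmpty($env:something)) { exit 1 }
exit 0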

Related

How to run pre/post job scripts on self-hosted GitHub Actions runner

Problem
I am trying to use a Windows Docker container to run GitHub Actions.
I want to run scripts before and after the job (e.g. to clean the directory).
I have successfully done this before on a computer not running docker, so I figured the same should work in docker.
What I have tried
I found here that you can do that using Environment Variables.
I used the following two commands in command prompt to set the environment variables.
Pre-Job Script:
setx ACTIONS_RUNNER_HOOK_JOB_STARTED C:\actions-runner-resources\scripts\pre-post-build\pre-run-script.ps1
Post-Job Script:
setx ACTIONS_RUNNER_HOOK_JOB_COMPLETED C:\actions-runner-resources\scripts\pre-post-build\post-run-script.ps1
The scripts do not run.
I have tried restarting the docker container.
I have tried restarting the actions runner service.
I am new to Docker, so I am wondering if I am setting the environment variables in a way that does not work with Docker.
How do I get the actions runner to run pre/post job scripts in docker?
You can add them safely using the recommended method: inside the actions-runner directory, locate the .env file and add your environment variables to it. Save the file and restart the runner service.
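For example, with the script paths from the question, the .env file would gain two lines like these (a sketch; the file holds plain NAME=value lines):
ACTIONS_RUNNER_HOOK_JOB_STARTED=C:\actions-runner-resources\scripts\pre-post-build\pre-run-script.ps1
ACTIONS_RUNNER_HOOK_JOB_COMPLETED=C:\actions-runner-resources\scripts\pre-post-build\post-run-script.ps1
The runner picks up the new values after the service restart mentioned above.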

How to prepare the shell environment in an image executed by GitLab Runner?

I'm running CI jobs on a self-hosted GitLab instance plus 10 GitLab Runners.
For this minimal example, two Runners are needed:
Admin-01
A shell runner with Docker installed.
It can execute e.g. docker build ... to create new images, which are then pushed to the private Docker registry (also self-hosted / part of the GitLab installation)
Docker-01
A docker runner, which executes the previously built image.
On a normal bare-metal, virtual-machine or shell runner, I would modify e.g. ~/.profile to execute commands before the before_script or script sections run. In my use case I need to set new environment variables and source some configuration files provided by the tools I want to run in an image. Yes, environment variables could be set differently, but there seems to be no way to source Bash scripts automatically before the before_script or script sections are executed.
When sourcing the Bash source file manually, it works. I also noticed that I have to source it again in the script block, so I assume the Bash session ends between the before_script block and the script block. Of course, it's not a nice solution to have every image user manually source the tool's Bash configuration script in every .gitlab-ci.yml file.
myjobn:
  # ...
  before_script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
  script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
The mentioned modifications for e.g. shell runners do not work in images executed by GitLab Runner. It feels like the Bash in the container is not started as a login shell.
The minimal example image is built as follows:
fetch debian:bullseye-slim from Docker Hub
use RUN commands in the Dockerfile to append some echo outputs to:
/etc/profile
/root/.bashrc
/root/.profile
# ...
RUN echo "echo GREETINGS FROM /ROOT/PROFILE" >> /root/.profile \
&& echo "echo GREETINGS FROM /ETC/PROFILE" >> /etc/profile \
&& echo "echo GREETINGS FROM /ROOT/BASH_RC" >> /root/.bashrc
When the job starts, none of the echoes prints a message, while a cat shows that the echo commands were put in the right places while building the image.
Next, I tried to modify the shell:
SHELL ["/bin/bash", "-l", "-c"]
But I assume this only has an effect on RUN commands in the Dockerfile, not on an executed container.
CMD ["/bin/bash", "-l"]
I see no behavior change with it either.
Question:
How do I start Bash in the Docker image managed by GitLab Runner as a login shell, so that it reads configuration scripts?
How do I modify the environment in a container before before_script or script runs? Modifying means setting environment variables and executing / sourcing a configuration script or a patched default script like ~/.profile.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
What I know so far: it jumps between Docker images specified by GitLab and user-defined images, and shares some directories/volumes.
Note:
Yes, the behavior can be achieved with some Docker arguments to docker run, but as I wrote, GitLab Runner manages the container. Alternatively, how can I configure how GitLab Runner launches the images? To my knowledge, there is no configuration option available / documented for this situation.
A shell runner with Docker installed. It can execute e.g. docker build ...
Use docker-in-docker or use kaniko. https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
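A minimal docker-in-docker job sketch in the spirit of that documentation (image tags and names are illustrative; registry login and TLS setup are omitted for brevity):
build-image:
  image: docker:24.0
  services:
    - docker:24.0-dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE/mytool:latest .
    - docker push $CI_REGISTRY_IMAGE/mytool:latest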
The shell executor is more of a last resort, for cases where you specifically want to make changes to the server, or where you are deploying your application onto that server itself.
How do I start Bash in the Docker image managed by GitLab Runner as a login shell, so that it reads configuration scripts?
Add ENTRYPOINT bash -l to your image, or set the entrypoint from gitlab-ci.yml. See the Docker documentation on ENTRYPOINT and the gitlab-ci.yml documentation on image: entrypoint: .
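A minimal sketch of both options (the image name is a placeholder):
# in the image's Dockerfile
ENTRYPOINT ["/bin/bash", "-l", "-c"]
# or in .gitlab-ci.yml
myjob:
  image:
    name: registry.example.com/mytool:latest
    entrypoint: ["/bin/bash", "-l", "-c"]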
How do I modify the environment in a container before before_script or script runs?
Build the image with modified environment. Consult Dockerfile documentation on ENV statements.
Or set the environment from gitlab-ci.yml file. Read documentation on variables: in gitlab-ci.
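Sketches of both, with an illustrative variable:
# in the Dockerfile
ENV MYTOOL_HOME=/opt/mytool
# or in .gitlab-ci.yml
variables:
  MYTOOL_HOME: "/opt/mytool"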
How to prepare the shell environment in an image executed by GitLab Runner?
Don't. The idea is that the environment is reproducible, ergo, there should be no changes beforehand. Add variables: in gitlab-ci file and use base images if possible.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
GitLab is open-source.
What I know so far: it jumps between Docker images specified by GitLab and user-defined images, and shares some directories/volumes.
Yes: first a gitlab-runner-helper container is executed; it has git and git-lfs and basically clones the repository and downloads/uploads the artifacts. Then the container specified with image: is run, the cloned repository is copied into it, and a specially prepared shell script is executed in it.

Docker for Windows Containers environment variables

OK, this seems easy enough for Linux containers, but I am trying to get this done using Windows containers, and it's annoying that it's so difficult.
I have a Windows Dockerfile which builds a project, and part of the build process is to re-version the C# AssemblyInfo.cs files so that the built assemblies carry a build version from the CI environment (DevOps).
I am using a PowerShell script, https://github.com/microsoft/psi/blob/master/Build/ApplyVersionToAssemblies.ps1; it expects two environment variables, one of which I can hardcode, so it is not a problem, but the BUILD_BUILDNUMBER environment variable needs to be injected from the DevOps build system.
I have tried the following, none of which work
ARG BUILD_BUILDNUMBER
ENV BUILD_BUILDNUMBER=$BUILD_BUILDNUMBER
RUN ApplyVersionToAssemblies.ps1
and running
docker build -f Dockerfile --build-arg BUILD_BUILDNUMBER=1.2.3.4 .
also
RUN SETX BUILD_BUILDNUMBER $BUILD_BUILDNUMBER
RUN SETX BUILD_BUILDNUMBER %BUILD_BUILDNUMBER%
and a few other combinations that I don't recall. What I ended up doing, which works but seems like a hack, is to pass the build number as a file via a COPY and then modify the PowerShell script to read that into its local variable.
So for the moment it works, but I would really like to know how this is supposed to work via ARG and ENV for Windows container builds.
Windows Containers definitely feel like Linux containers poor cousin :)
An example using the cmd shell in Docker Windows containers:
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"https://URL.com/_packaging/Name/nuget/v3/index.json\", \"username\":\"PATForPackages\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
SHELL ["cmd", "/S", "/C"]
RUN echo %VSS_NUGET_EXTERNAL_FEED_ENDPOINTS%
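A usage sketch for the snippet above (the token value is a placeholder): the Dockerfile builder substitutes ${FEED_ACCESSTOKEN} into the ENV line itself at build time, and the RUN line then reads the resulting variable through cmd's %VAR% syntax:
docker build --build-arg FEED_ACCESSTOKEN=xxxxxxxx -t myimage .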

Can I use a Dockerfile as a script?

We would like to leverage the excellent catalogue of Dockerfiles on Docker Hub, but the team is not in a position to use Docker.
Is there any way to run a DockerFile as if it were a shell script against a machine?
For example, if I chose to run the Docker container ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
A Dockerfile is assumed to be executed in an empty container or on the image it builds upon (using FROM). The knowledge about the environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker might have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.
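For illustration only, a hypothetical manual translation of a Dockerfile's ENV and RUN lines into shell would look like the sketch below (the values are made up, not taken from the actual ruby image); the point is that nothing guarantees your host matches the filesystem state these commands assume:
#!/bin/sh
# ENV KEY=value  becomes  export KEY=value
export EXAMPLE_VERSION=2.4.1
# RUN <command>  becomes  running the same command directly on the host
apt-get update && apt-get install -y --no-install-recommends ca-certificates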

Docker run script in host on docker-compose up

My question relates to best practices on how to run a script on a docker-compose up directive.
Currently I'm sharing a volume between host and container to allow the script's changes to be visible to both host and container.
It is similar to a watcher script polling for changes to a configuration file. The script has to act on the host when changes occur, according to predefined rules.
How could I start this script on a docker-compose up directive, or even from the Dockerfile of the service, so that whenever the container goes up, the "watcher" can pick up any changes being made and written?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the host, not inside the container. I need the host to change its network interface configuration to easily adapt to any environment; the HOST needs to change, I repeat. This should be seamless to the user, and easily editable via a web interface running inside a CONTAINER to adapt to new environments.
I currently do this with a script running on the host, based on crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy as having the installing operator just run docker-compose up.
I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy as having the installing operator just run docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed here: How to run shell script on host from docker container? is to use a client/server trick.
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
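A minimal sketch of the trick, assuming netcat on the host and curl in the container (on Docker Desktop the host is reachable as host.docker.internal; on Linux you may need extra_hosts: "host.docker.internal:host-gateway" in docker-compose.yml):
# on the host; netcat flag syntax varies between variants, this is netcat-traditional
while true; do
  nc -l -p 9999 -q 1 >/dev/null
  /usr/local/bin/reconfigure-network.sh   # hypothetical host-side script
done
# inside the container, whenever the watched file changes
curl -s -m 2 http://host.docker.internal:9999/ || true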
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT
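A sketch of that approach (the script name is a placeholder); the script must stay in the foreground, otherwise the container exits:
# Dockerfile
COPY watcher.sh /usr/local/bin/watcher.sh
CMD ["/usr/local/bin/watcher.sh"]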
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
Note that with a foreground docker-compose up the script only runs after Compose exits; use docker-compose up -d in the alias if the script should run right after the containers start.
I'm not sure if running scripts on the host from a container is possible, but if it is, it's a severe security flaw. Containers should be isolated; that's the point of using containers.
