How can I reference a GitLab secret variable in an application.yml? I assume it is only accessible within the gitlab-ci.yml context and has to be passed from there into the Docker image as a VM parameter somehow?
In case it matters, I am deploying in a Rancher environment.
Just export it or pass it as a command-line parameter to your CI script. Like:
gitlab-ci.yml
deploy-app:
  stage: deploy
  image: whatever
  script:
    - export MY_SECRET
    - ...
or
deploy-app:
  stage: deploy
  image: whatever
  script:
    - docker run -it -e PASSWORD=$MY_SECRET whatever ...
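If the goal is to read the value from a Spring Boot application.yml, you don't need to bake it into the image at all: pass the secret into the container as an environment variable (as above) and reference it with a property placeholder. A minimal sketch, assuming the container is started with -e PASSWORD=$MY_SECRET and that app.api-password is your own property name:
application.yml (sketch):
app:
  api-password: ${PASSWORD}            # resolved from the container environment at startup
  # optional default if the variable is absent:
  # api-password: ${PASSWORD:changeme}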
Related
I'm trying to deploy and run a Java Spring Boot application on an AWS EC2 instance using GitHub Actions. The application properties file of the Spring Boot application points to environment variables which are present on the AWS EC2 instance. However, these environment variables are not available when the GitHub Action runs, so the execution of the jar fails with a null pointer exception.
What is the correct way to deploy a Spring Boot (non-Docker) application to a self-hosted EC2 server? Can I do it without needing AWS CodePipeline or AWS Elastic Beanstalk?
How do we read EC2 instance environment variables while using GitHub Actions?
Thanks.
Sample Workflow file:
jobs:
  build:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: "11"
          distribution: "temurin"
          cache: maven
      - name: Build with Maven
        run: mvn clean -B package
  deploy:
    runs-on: [self-hosted]
    needs: build
    steps:
      - name: Run Script file
        working-directory: ./
        run: |
          chmod +x ./script.sh
          ./script.sh
        shell: bash
script.sh (tries to print the environment variables inside the EC2 instance):
#!/bin/bash
whoami
printenv
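A likely reason printenv shows nothing useful: the self-hosted runner runs as a service, not as a login shell, so variables exported only in ~/.bashrc or an interactive session are not inherited. One common workaround, assuming the variables are stored as plain KEY=VALUE lines in /etc/environment on the instance, is to source that file explicitly in the script (the jar path below is a placeholder):
#!/bin/bash
set -a                      # auto-export everything sourced below
source /etc/environment     # assumption: instance variables live here as KEY=VALUE lines
set +a
java -jar target/app.jar    # placeholder path to the built Spring Boot jar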
Example
Consider this example docker-compose file with a custom .env file:
version: '3'
services:
  service_example:
    build:
      dockerfile: Dockerfile
      context: .
      args:
        AAA: ${AAA}
    command: python3 src/service/run.py
    env_file:
      - custom_env.env
custom_env.env:
AAA=qqq
When I run docker-compose config I get the following output:
WARNING: The AAA variable is not set. Defaulting to a blank string.
services:
  service_example:
    build:
      args:
        AAA: ''            <----------------------------- ??????
      context: /Users/examples
      dockerfile: Dockerfile
    command: python3 src/service/run.py
    environment:
      AAA: qqq
version: '3'
Question
Why is AAA unset in the build section?
What should I do to set it properly (to the value provided in the custom file: AAA=qqq)?
I've also noticed that if I rename the env file to the default name (mv custom_env.env .env) and remove the env_file section from docker-compose.yml, everything works fine:
services:
  service_example:
    build:
      args:
        AAA: qqq
      context: /Users/examples
      dockerfile: Dockerfile
    command: python3 src/service/run.py
version: '3'
Quick Answer
docker-compose --env-file custom_env.env config
Answers Explanation
Question 1: Why is AAA unset in the build section?
Because the file specified in the env_file property (custom_env.env) applies to the container only, i.e. those variables are passed to the container at run time, not during the image build.
Question 2: What should I do to set it properly (to the value provided in the custom file, AAA=qqq)?
To provide environment variables to the build step from a custom env file, specify that file's path on the command line:
Syntax: docker-compose --env-file FILE_PATH config
Example: docker-compose --env-file custom_env.env config
Question 3: Why does the default .env file work?
Because .env is the file docker-compose looks for by default.
Summary
So, in docker-compose we can consider two stages at which environment variables apply:
Build stage (image)
Run stage (container)
For the build stage, we can rely on the default .env file or use the --env-file option to point at a custom env file.
For the run stage, we can specify environment variables with the environment: property or point at an env file with the env_file: property.
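Putting both stages together, a sketch based on the files from the question (assuming a Compose version new enough to support --env-file, i.e. 1.25+):
version: '3'
services:
  service_example:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        AAA: ${AAA}          # build stage: filled from the file passed via --env-file
    env_file:
      - custom_env.env       # run stage: injected into the container environment
and run it with:
docker-compose --env-file custom_env.env up --build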
References
https://docs.docker.com/compose/env-file/
https://docs.docker.com/compose/environment-variables/
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#environment
I use Laravel Vapor to deploy our microservices based on Laravel. This works very well as long as the app and its dependencies are not too large, but if they are, it gets a little tricky.
Vapor provides a Docker runtime for this case, which lets you deploy apps of up to 10 GB.
For local development we usually use Laradock.io because it's easy and flexible.
That means when we deploy from our local environment, it is easy to enter the workspace container and run the Vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into a GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But with the Docker runtime I am stuck on the CI deployment.
The Docker runtime needs a running Docker instance in which Vapor is invoked. That means in the gitlab-ci.yml I have to use an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the relevant part of my gitlab-ci.yml (the image is only available locally):
testing:
image:
name: lexitaldev/vapor-docker-deploy:latest
pull_policy: never
securityContext:
privileged: true
environment: testing
stage: deploy
only:
- test
script:
- composer install
- php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've also tried using the standard 'laravelphp/vapor:php80' image and installing Docker in the before_script section:
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. It seems there is a problem with the docker.sock.
Did anybody manage to add Vapor Docker runtime deployment to CI scripts?
Best,
Michael
You might think you only need to add the dind service, but after you do that it will throw an error related to the image GitLab creates for your pipelines. So you need to register a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine:
sudo gitlab-runner register -n \
  --url {{ your_url }} \
  --registration-token {{your_token}} \
  --executor docker \
  --description "{{ Describe your runner }}" \
  --docker-image "docker:20.10.12-alpine3.15" \
  --docker-privileged \
  --docker-volumes="/certs/client" \
  --docker-volumes="cache" \
  --docker-volumes="/var/run/docker.sock:/var/run/docker.sock" \
  --tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, you need to use a stable Docker version in your gitlab-ci.yml file. For some reason it didn't work when I tried version 20 or latest:
image: docker:stable
services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $CI_REGISTRY_IMAGE -f {{your Dockerfile}} .
    - docker push $CI_REGISTRY_IMAGE
All the variables are predefined by GitLab, so don't worry, you can copy and paste. I also added some of the advice GitLab gives in its documentation for pushing your Docker image to the GitLab Container Registry.
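If the job still cannot reach the daemon and you are relying on the dind service (rather than the host socket mounted above), a sketch of the variables GitLab's documentation suggests for TLS-enabled dind (this is an assumption added here, not part of the pipeline above):
variables:
  DOCKER_HOST: tcp://docker:2376                   # the dind service is reachable as host "docker"
  DOCKER_TLS_CERTDIR: "/certs"                     # matches the /certs/client volume on the runner
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"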
I am using a Terraform stage in GitLab CI/CD to deploy an EC2 instance. I know how to get the public IP of that instance once it's available for use throughout Terraform, but I'm not clear how to hop that over into an Ansible stage for configuration. Is there a way to output the public IP to an environment variable that can be made available to other stages?
The easiest way to pass Variables from one GitLab CI/CD job to another is to use the dotenv report artifact.
You simply put your variable in a file in the form VARIABLE_NAME=VALUE, and upload it as a specific type of artifact:
job1:
  stage: stage1
  script:
    - echo "IP_ADDRESS=127.0.0.1" >> .env
  artifacts:
    reports:
      dotenv: .env

job2:
  stage: stage2
  script:
    - echo $IP_ADDRESS # echoes 127.0.0.1
Unlike a normal artifact, which is simply downloaded in later jobs, the dotenv report type turns the variables in the file into environment variables for later jobs.
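Applied to the Terraform-to-Ansible case from the question, a sketch might look like this (the stage names, the Terraform output name instance_public_ip, and site.yml are assumptions; terraform output -raw needs Terraform 0.14+):
terraform-apply:
  stage: provision
  script:
    - terraform apply -auto-approve
    - echo "INSTANCE_IP=$(terraform output -raw instance_public_ip)" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env

configure:
  stage: configure
  script:
    - ansible-playbook -i "$INSTANCE_IP," site.yml   # $INSTANCE_IP comes from the dotenv report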
I have a Dockerised application which I would like to run in both proxy and non-proxy host environments. I'm trying to resolve this problem by copying the normal environment variables, such as http_proxy, into the containers if and only if they exist in the host.
I can get 90% of the way there by running
set | grep -i _proxy= > proxies.env
in a top-level script, and then having, in my docker-compose.yml:
myserver:
  build: ./myserver
  env_file:
    - proxies.env
This copies the host's environmental proxy variables, if any, into the server container, and it works in the sense that these variables are available at container run time, in other words by the stage that the Dockerfile CMD or ENTRYPOINT executes.
However, I have one container which needs to run npm as a build step, i.e. from a RUN command in the Dockerfile, and these variables appear not to be present at that stage, so npm can't find the proxy and hangs. In other words, if I have
RUN set
in my Dockerfile, I can't see any variables from proxies.env, but if I do
docker exec -it myserver /bin/bash
and then run set, I can see everything from proxies.env.
Can anyone recommend a way to make these variables visible at container build time, without having to hard-code them, so that my docker-compose.yml and Dockerfile will still work both for hosts with proxies and hosts without proxies?
(Running with centos 7, docker-compose 1.3.1 and docker 1.7.0)
Update 2016, docker-compose 1.6.2, docker 1.10+, with a docker-compose.yml version 2:
You now have the args: sub-section of the build: section, which includes that very interesting possibility:
Build arguments with only a key are resolved to their environment value on the machine Compose is running on.
See PR 2653 (January 2016)
As a result, a way to introduce the proxy variables without hard-coding them in the docker-compose.yml file itself is this exact syntax:
version: '2'
services:
  myservice:
    build:
      context: .
      args:
        - http_proxy
        - https_proxy
        - no_proxy
Before calling docker-compose up, you need to make sure your proxy environment variables are set:
export http_proxy=http://username:password@proxy.com:port
export https_proxy=http://username:password@proxy.com:port
export no_proxy=localhost,127.0.0.1,company.com
docker-compose up
Then the Dockerfile built by the docker-compose process will automatically pick up the proxy variable values, even though the docker-compose.yml does not hard-code any specific values.
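Worth noting (this is standard Docker behaviour rather than something specific to this answer): http_proxy, https_proxy and no_proxy are among Docker's predefined build args, so the Dockerfile needs no explicit ARG lines for them. A quick sketch to verify the values are visible during the build:
FROM alpine:3.19
# should print the proxy values exported on the host when built through docker-compose
RUN env | grep -i _proxy || true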
Maybe the environment option solves your problem. In your docker-compose file it would look like this:
myserver:
  build: ./myserver
  environment:
    - HTTP_PROXY=192.168.1.8
    - VARIABLE=value
    - ...
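A variation that avoids hard-coding the values (assuming the variables are exported in the shell that runs docker-compose); note that environment: only affects the running container, not the image build, so it does not solve the RUN/npm case from the question:
myserver:
  build: ./myserver
  environment:
    - HTTP_PROXY=${HTTP_PROXY}     # taken from the host shell at "docker-compose up" time
    - HTTPS_PROXY=${HTTPS_PROXY}
    - NO_PROXY=${NO_PROXY}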
Maybe you can try this:
Before you call RUN, ADD the .env file into the image
ADD proxies.env proxies.env
then prefix your RUN statement:
RUN export `cat proxies.env` && echo "FOO is $FOO and BAR is $BAR"
This produces the following output:
root#armenubuntudev:~/Dockers/set-env# docker build -t ashimoon/envtest .
Sending build context to Docker daemon 3.584 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> 91e54dfb1179
Step 1 : ADD proxies.env proxies.env
---> Using cache
---> 181d0e082e65
Step 2 : RUN export `cat proxies.env` && echo "FOO is $FOO and BAR is $BAR"
---> Running in 30426910a450
FOO is 1 and BAR is 2
---> 5d88fcac522c
Removing intermediate container 30426910a450
Successfully built 5d88fcac522c
docker-compose.yml
...
  server:
    build:
      context: .
      args:
        env: $ENV
...
Dockerfile
ARG env
ENV NODE_ENV $env
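A usage sketch for the two snippets above (assuming the image contains node, since NODE_ENV is being set): $ENV on the host becomes the build arg env, which the Dockerfile bakes in as NODE_ENV.
export ENV=production
docker-compose build server
docker-compose run --rm server node -e 'console.log(process.env.NODE_ENV)'   # prints "production"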
This example fixes YUM (so that yum can reach the network through the proxy during the build).
version: '2'
services:
  example-service:
    build:
      context: .
      args:
        http_proxy: proxy.example.com:80