How to set up env variables in gitlab-ci.yaml?

My GitLab CI/CD config file uses many environment variables. To set them, I use GitLab CI/CD secret variables.
For example, the dev deploy part:
- echo "====== Deploy to dev server ======"
# Add target server's secret key
- apk add git openssh bash
- mkdir ~/.ssh
- echo $DEV_SERVER_SECRET_KEY_BASE_64 | base64 -d > ~/.ssh/id_rsa
- chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
- echo "Test ssh connection for"
- echo "$DEV_SERVER_USER#$DEV_SERVER_HOST"
- ssh -o StrictHostKeyChecking=no -T "$DEV_SERVER_USER#$DEV_SERVER_HOST"
# Deploy
- echo "Setup target server directories"
- TARGET_SERVER_HOST=$DEV_SERVER_HOST TARGET_SERVER_USER=$DEV_SERVER_USER TARGET_SERVER_APP_FOLDER=$DEV_SERVER_APP_FOLDER pm2 deploy pm2.config.js dev setup 2>&1 || true
- echo "make deploy"
- TARGET_SERVER_HOST=$DEV_SERVER_HOST TARGET_SERVER_USER=$DEV_SERVER_USER TARGET_SERVER_APP_FOLDER=$DEV_SERVER_APP_FOLDER pm2 deploy pm2.config.js dev
I have 5 repositories in the project and 3 servers (dev, preprod, prod), so I must manage many variables. Managing all of them as GitLab CI/CD secret variables is very painful: I can't view or edit them, only delete and recreate them. I'm fine using them for secret SSH keys, but they aren't suitable for specifying folder names, hosts, etc.
Is there some other way to provide variables to a CI/CD script?

You can define custom variables in quite a few ways:
I. using GUI:
per project
per group of projects
per Gitlab instance (for all projects and groups)
II. using .gitlab-ci.yml file:
You can use the variables keyword in a job or at the top level of the .gitlab-ci.yml file. If the variable is at the top level, it’s globally available and all jobs can use it. If it’s defined in a job, only that job can use it.
variables:
  TEST_VAR: "All jobs can use this variable's value"

job1:
  variables:
    TEST_VAR_JOB: "Only job1 can use this variable's value"
  script:
    - echo "$TEST_VAR" and "$TEST_VAR_JOB"
Make sure to store only non-sensitive variables in your .gitlab-ci.yml file.
Check the official documentation for more information and examples.
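Applied to the question above, non-secret values like hosts, users, and folder names could live directly in .gitlab-ci.yml, keeping only the SSH key as a secret variable. A minimal sketch, with hypothetical values:

variables:
  DEV_SERVER_HOST: "dev.example.com"    # hypothetical host
  DEV_SERVER_USER: "deployer"           # hypothetical user
  DEV_SERVER_APP_FOLDER: "/srv/app"     # hypothetical folder

deploy_dev:
  stage: deploy
  script:
    # DEV_SERVER_SECRET_KEY_BASE_64 stays a secret variable set in the GUI
    - echo $DEV_SERVER_SECRET_KEY_BASE_64 | base64 -d > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh -o StrictHostKeyChecking=no "$DEV_SERVER_USER@$DEV_SERVER_HOST" "echo connected"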

Related

gitlab-ci predefined variable is defined in script step of deploy stage but undefined inside bash script run via 'bash -s'

I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll-out.sh script. But the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH

deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are sshing to another machine within your CI script, it won't have access to the variables you are echoing in the lines before, because the script runs in a new session on a different machine.
You could try a few different things, though, to achieve what you want:
Try sending the variables as arguments (not great practice to send information to a machine through ssh); see the sketch after this list.
Install a gitlab runner on the host you are trying to ssh to, tag this runner so it only runs the specific deployment job, and then you'll have the variables available on the host.
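A minimal sketch of the first option, with the remote script adapted to read positional arguments (the choices here are illustrative):

# .gitlab-ci.yml: the arguments are expanded on the runner before ssh runs
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "bash -s -- $CI_REGISTRY_IMAGE $CI_COMMIT_BRANCH" < ./script/roll_out.sh

# top of roll_out.sh: rebuild the variables from the arguments
CI_REGISTRY_IMAGE="$1"
CI_COMMIT_BRANCH="$2"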
The problem was solved by explicitly passing the environment variables to the script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote. But it seems those three variables are some sort of exception.

Self hosted environment variables not available to Github actions

When running Github actions on a self hosted runner machine, how do I access existing custom environment variables that have been set on the machine, in my Github action .yaml script?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.
If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner on the Github repository, before running the ./run.sh command:
Example (linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $TEST
If you want to set a variable permanently, you can add a file at /etc/profile.d/<filename>.sh, as suggested by @frennky above, but you will also have to update the shell for it to be aware of the new env variables, each time, before running the ./run.sh command:
Example (linux) with an HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create new profile http_proxy.sh file
$ sudo touch /etc/profile.d/http_proxy.sh
# Update the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Manually add the new line to the http_proxy.sh file
$ export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Update the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will also be able to access the variable by using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to update the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.
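Assuming a bash environment, a shorter equivalent of "updating the shell" would be to source the profile script in the same session that starts the runner:

$ source /etc/profile.d/http_proxy.sh && ./run.sh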
Reference
Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable
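For example, a minimal step that echoes the TEST_VAR entry from the .env file above:

job:
  runs-on: self-hosted
  steps:
    - run: echo $TEST_VAR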

Copying a war file from GitLab to EC2

I am trying to create a CI/CD pipeline to build a war file and deploy it to EC2 from GitLab.
Once the war file is created, I would like to copy it to some folder on the EC2 instance and from there copy it to the Tomcat server.
The following is the ".gitlab-ci.yml" file.
stages:
  - build
  - deploy

build:
  stage: build
  image: maven:3-jdk-8
  script:
    - mvn install
  artifacts:
    paths:
      - target/

deploy:
  stage: deploy
  before_script:
    # Generate SSH Key
    - mkdir -p ~/.ssh
    - echo -e "$EC2_SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/gitlabec2pipeline.war
    - bash .gitlab-deploy-ec2.sh
I have added the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY variables.
But when the above pipeline is run, the scp command in the deploy stage fails with a "permission denied" error.
Any idea on how to solve this?
Error Message:
Running with gitlab-runner 14.5.2 (e91107dd)
on blue-3.shared.runners-manager.gitlab.com/default zxwgkjAP
Resolving secrets
00:00
Preparing the "docker+machine" executor
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:27d049ce98db4e55ddfaec6cd98c7c9cfd195bc7e994493776959db33522383b for ruby:2.5 with digest ruby@sha256:ecc3e4f5da13d881a415c9692bb52d2b85b090f38f4ad99ae94f932b3598444b ...
Preparing environment
00:01
Running on runner-zxwgkjap-project-31676452-concurrent-0 via runner-zxwgkjap-shared-1639429231-955193ca...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/te2122/deploytoaws/.git/
Created fresh repository.
Checking out dc27fd6f as master...
Skipping Git submodules setup
Downloading artifacts
00:02
Downloading artifacts for build (1880203000)...
Downloading artifacts from coordinator... ok id=1880203000 responseStatus=200 OK token=9RSALYus
Executing "step_script" stage of the job script
00:02
Using docker image sha256:27d049ce98db4e55ddfaec6cd98c7c9cfd195bc7e994493776959db33522383b for ruby:2.5 with digest ruby@sha256:ecc3e4f5da13d881a415c9692bb52d2b85b090f38f4ad99ae94f932b3598444b ...
$ mkdir -p ~/.ssh
$ echo -e "$EC2_SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/gitlabec2pipeline.war
Warning: Permanently added '54.205.169.131' (ECDSA) to the list of known hosts.
scp: /gitlabec2pipeline.war: Permission denied
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
Thank you.
I have added the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY variables.
AWS keys don't matter here since you're using SSH key for authentication when you use ssh (or scp) to connect to the EC2 instance.
If you are sure you're using the correct key value, then your environment variable $EC2_SSH_PRIVATE_KEY is probably not well-formed. Common issues come from newlines in the variable and from CRLF line endings (often introduced when copy/pasting the key in the web UI) instead of LF.
To work around this more reliably, you could:
use a file type variable, then copy the file into ~/.ssh OR
base64 encode your key before putting it into the CI variable. This avoids any possible issues with newlines and CRLF. Then decode it in the job, placing the value into the file, as in the sketch below.
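A minimal sketch of the base64 approach (the variable name EC2_SSH_PRIVATE_KEY_B64 is hypothetical):

# locally, before creating the CI/CD variable:
base64 -w0 ~/.ssh/id_rsa   # paste the output into EC2_SSH_PRIVATE_KEY_B64

# in the job's before_script, decode it back into a key file:
- mkdir -p ~/.ssh
- echo "$EC2_SSH_PRIVATE_KEY_B64" | base64 -d > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa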
Update:
It looks like your user does not have permission to write to the destination directory on the server: the scp target /gitlabec2pipeline.war is the root of the filesystem, which ec2-user cannot write to. You need to either give appropriate permission to the ec2-user or choose a different destination location where the user does have permission to write files, as sketched below.
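For example, copying into the user's home directory instead (the path is illustrative):

- scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/home/ec2-user/gitlabec2pipeline.war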

I need to pass an env variable to docker through ssh from gitlab-ci

From my gitlab-ci I need to pass an environment variable with the Spring profiles to docker-compose. Such a variable is defined for each server environment where we deploy.
So, in my gitlab-ci I have this:
.deploy_template: &deploy_template
  script:
    - echo $ENV_SPRING_PROFILES
    # start containers
    - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES && DOCKER_HOST=tcp://localhost:2375 && docker-compose up -d"

deploy_811AC:
  <<: *deploy_template
  stage: deploy
  when: manual
  only:
    - /^feature.*$/
    - /^fix.*$/
  environment:
    name: ccvli-ecp626
    url: 10.135.XXX.XXX
  variables:
    ENV_SPRING_PROFILES: "mock"
When the runner executes the job, echo $ENV_SPRING_PROFILES prints the value of the variable. However, it does not seem to be picked up in the SSH command, as docker-compose reports that the variable SPRING_ACTIVE_PROFILES is empty.
It is becoming a kind of nightmare so any clue is welcome.
Thanks in advance
I don't have too much experience with GitLab CI, but I think the way variables are "declared" is not with "=" but like this:
variables:
  SPRING_ACTIVE_PROFILES: $SPRING_ACTIVE_PROFILES

script:
  ...
  - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && $ENV_SPRING_PROFILES && DOCKER_HOST=tcp://localhost:2375 && docker-compose up -d"
Once it is declared in the variables block, you can start using it in the code :)
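For what it's worth, plain assignments chained with && (SPRING_ACTIVE_PROFILES=... && docker-compose up -d) set shell variables in the remote session but do not export them to child processes such as docker-compose. A sketch of the pattern that worked in the "bash -s" question above, placing the assignments directly in front of the command (untested against this exact setup):

- $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES DOCKER_HOST=tcp://localhost:2375 docker-compose up -d"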

GitLab CI Script variables

I have GitLab deployment active and I want the deploy script to receive some custom information about the deployment process (like $CI_PIPELINE_ID).
However, the script doesn't get the variables; instead it gets the "raw text".
the call performed by the script is: $ python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
How can i get it to use the variables?
My .gitlab-ci.yml:
image: python:2.7
before_script:
- whoami
- sudo apt-get --quiet update --yes
- sudo chmod +x deploy/deploy.py
deploy_production:
stage: deploy
environment: Production
only:
- tags
- trigger
except:
# - develop
- /^feature\/.*$/
- /^hotfix\/.*$/
- /^release\/.*$/
script:
- python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
It may be that you are using a different variable syntax than the one your runner's shell expects:
bash/sh: $variable
Windows batch: %variable%
PowerShell: $env:variable
See using CI variables in your job script.
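For instance, if the runner's shell were PowerShell, the same call would need the $env: syntax (a hypothetical variant of the script line above):

- python deploy/deploy.py $env:CI_COMMIT_TAG $env:CI_ENVIRONMENT_URL $env:CI_PIPELINE_ID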
I don't get what you mean by "raw text", but you can declare the variables in your project settings. Also, have you configured your runner?
Go to Settings->CI/CD->Secret Variables and just put them right there.
You can also find valuable information in the documentation.
