I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll-out.sh script. But the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH
deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are sshing to another machine within your CI script, the remote script won't have access to the variables you are echoing in the lines before, because it runs in a new session on a different machine.
You could try a few different things to achieve what you want:
Pass the variables as arguments or environment assignments on the ssh command line (not great to send information to another machine through ssh).
Install a GitLab runner on the host you are trying to ssh to, tag this runner so it only runs the specific deployment job, and then you'll have the variables available on the host.
The problem was solved by explicitly passing the environment variables to the script
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
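Note that values containing spaces or shell metacharacters would break the unquoted assignments above, since the remote shell re-parses the whole string; a slightly more defensive sketch of the same command:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE='$CI_REGISTRY_IMAGE' CI_COMMIT_BRANCH='$CI_COMMIT_BRANCH' bash -s" < ./script/roll_out_tagged.sh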
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote, but it seems those three variables are some sort of exception.
I am writing a Bitbucket pipeline to deploy my Angular project to an EC2 instance. This is my pipeline, using rsync.
image: node:12.18.3

pipelines:
  branches:
    dev:
      - step:
          name: Build Test Environment
          caches:
            - node
          script:
            - npm install
            - npm run build-qa
          artifacts:
            - dist/qa/**
          deployment: test
      - step:
          name: Deploy
          trigger: manual
          script:
            - apt-get update && apt-get install -y rsync
            - ssh-keyscan -H $SERVER >> ~/.ssh/known_hosts
            - cd $BITBUCKET_CLONE_DIR/dist/qa
            - ls
            - rsync -v -e ssh . $SSH_USER@$SERVER:/var/www/html/myproject
            - echo "Deployment is done...!"
But this is giving me this error.
+ rsync -v -e ssh . $SSH_USER@$SERVER:/var/www/html/myproject
skipping directory .
rsync: link_stat "/opt/atlassian/pipelines/agent/build/dist/qa/$SSH_USER@myip" failed: No such file or directory (2)
rsync: link_stat "/opt/atlassian/pipelines/agent/build/dist/qa/ecdsa-sha2-nistp256" failed: No such file or directory (2)
rsync: change_dir#3 "/opt/atlassian/pipelines/agent/build/dist/qa//AAAAE2VjZHNhLXNtYTItbmlzdHAyNTwAAqAIbmlzdHAyNsYAAABBBGqKvzLI7IolhgM1ZEfol3VuJX4CX6jzqSyM6AzUgPbpyERywu/7U/SioMc/SLeJyfhYnWAJVApt8oOsqIjLqDg=:/var/www/html/myproject" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(713) [Receiver=3.1.2]
I tried a lot to find a solution to this; I even tried the rsync-deploy pipe, but it gives the same error as above. Can someone help me write this pipeline correctly to get my requirement done?
Seems similar to the issue here: https://serverfault.com/questions/363555/why-is-rsync-skipping-the-main-directory
You probably have folders inside that you want to include recursively in the transfer, so you need rsync's recursive flag (-r, or -a which implies it).
You also need to change the source to ./ so rsync understands it is a directory.
And as a bonus: you can (and should) just run ls with the full path, and the same with rsync, rather than cd-ing into the directory first. Remember the trailing /.
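Putting that together, a sketch of the corrected deploy line (paths as in the question):
- rsync -av -e ssh $BITBUCKET_CLONE_DIR/dist/qa/ $SSH_USER@$SERVER:/var/www/html/myproject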
My GitLab CI/CD config file uses many env variables. To set them, I use GitLab CI/CD secret variables.
For example, the dev deploy part:
- echo "====== Deploy to dev server ======"
# Add the target server's secret key
- apk add git openssh bash
- mkdir ~/.ssh
- echo $DEV_SERVER_SECRET_KEY_BASE_64 | base64 -d > ~/.ssh/id_rsa
- chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
- echo "Test ssh connection for"
- echo "$DEV_SERVER_USER@$DEV_SERVER_HOST"
- ssh -o StrictHostKeyChecking=no -T "$DEV_SERVER_USER@$DEV_SERVER_HOST"
# Deploy
- echo "Setup target server directories"
- TARGET_SERVER_HOST=$DEV_SERVER_HOST TARGET_SERVER_USER=$DEV_SERVER_USER TARGET_SERVER_APP_FOLDER=$DEV_SERVER_APP_FOLDER pm2 deploy pm2.config.js dev setup 2>&1 || true
- echo "make deploy"
- TARGET_SERVER_HOST=$DEV_SERVER_HOST TARGET_SERVER_USER=$DEV_SERVER_USER TARGET_SERVER_APP_FOLDER=$DEV_SERVER_APP_FOLDER pm2 deploy pm2.config.js dev
I have 5 repositories in the project and 3 servers (dev, preprod, prod), so I have to manage many variables. Managing them all as GitLab CI/CD secret variables is very painful: I can't view or edit them, only delete and recreate them. I'm happy to use secret variables for secret SSH keys, but they aren't suitable for specifying folder names, hosts, etc.
Is there some other way to provide variables to the CI/CD script?
You can define custom variables in quite a few ways:
I. using GUI:
per project
per group of projects
per Gitlab instance (for all projects and groups)
II. using .gitlab-ci.yml file:
You can use the variables keyword in a job or at the top level of the .gitlab-ci.yml file. If the variable is at the top level, it’s globally available and all jobs can use it. If it’s defined in a job, only that job can use it.
variables:
  TEST_VAR: "All jobs can use this variable's value"

job1:
  variables:
    TEST_VAR_JOB: "Only job1 can use this variable's value"
  script:
    - echo "$TEST_VAR" and "$TEST_VAR_JOB"
Make sure to store only non-sensitive variables in your .gitlab-ci.yml file.
Check the official documentation for more information and examples.
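For the asker's situation, the non-sensitive values (hosts, users, folder names) could live at the top level of .gitlab-ci.yml, keeping only the SSH keys as secret variables; a sketch with illustrative values:
variables:
  DEV_SERVER_HOST: "dev.example.com"
  DEV_SERVER_USER: "deploy"
  DEV_SERVER_APP_FOLDER: "/var/www/app"
That way the non-secret settings are visible, reviewable, and editable in version control.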
I am trying to deploy my Laravel app to an AWS EC2 instance. I'm using GitLab for code management and the pipeline process.
Here is my .gitlab-ci.yml file.
# Node docker image on which this would be run
image: node:8.9.0

# This command is run before the actual stages start running
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - npm install

stages:
  - test
  - deploy
  - production

# Production stage
production:
  stage: production
  before_script:
    # generate ssh key
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy.sh
  environment:
    name: production
    url: MY_HOST_IP_ADDRESS
  when: manual

# lint and test are two different jobs in the same stage.
# This allows us to run these two in parallel, making the build faster.
And here is my .gitlab-deploy.sh file
#!/bin/bash
# Get servers list
set -f
string=$DEPLOY_SERVER
array=(${string//,/ })
# Iterate over servers to deploy and pull the last commit
for i in "${!array[@]}"; do
  echo "Deploying information to EC2 and Gitlab"
  echo "Deploy project on server ${array[i]}"
  ssh ubuntu@${array[i]} "cd /var/www/html && git pull origin"
done
When I push my code, the pipeline processes fine.
But after a successful run, when I list the /var/www/html directory, it's still empty. I am using Apache2 on Ubuntu. I want to deploy my code directly to the AWS EC2 instance.
Thanks in advance!
I have copied this code from what seem to be various working Dockerfiles around; here is mine:
FROM ubuntu
MAINTAINER Luke Crooks "luke@pumalo.org"
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git python-virtualenv
# Make ssh dir
RUN mkdir /root/.ssh/
# Copy over private key, and set permissions
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN chown -R root:root /root/.ssh
# Create known_hosts
RUN touch /root/.ssh/known_hosts
# Remove host checking
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf
This gives me the error
Step 10 : RUN git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf
---> Running in 0d244d812a54
Cloning into '/home/docker-conf'...
Warning: Permanently added 'bitbucket.org,131.103.20.167' (RSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
2014/04/30 16:07:28 The command [/bin/sh -c git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf] returned a non-zero code: 128
This is my first time using dockerfiles, but from what I have read (and taken from working configs) I cannot see why this doesn't work.
My id_rsa is in the same folder as my dockerfile and is a copy of my local key which can clone this repo no problem.
Edit:
In my dockerfile I can add:
RUN cat /root/.ssh/id_rsa
And it prints out the correct key, so I know it's being copied correctly.
I have also tried to do as noah advised and ran:
RUN echo "Host bitbucket.org\n\tIdentityFile /root/.ssh/id_rsa\n\tStrictHostKeyChecking no" >> /etc/ssh/ssh_config
This sadly also doesn't work.
My key was password protected, which was causing the problem. A working file is now listed below (for the help of future googlers):
FROM ubuntu
MAINTAINER Luke Crooks "luke@pumalo.org"
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git
# Make ssh dir
RUN mkdir /root/.ssh/
# Copy over private key, and set permissions
# Warning! Anyone who gets their hands on this image will be able
# to retrieve this private key file from the corresponding image layer
ADD id_rsa /root/.ssh/id_rsa
# Create known_hosts
RUN touch /root/.ssh/known_hosts
# Add bitbuckets key
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
You should create a new SSH key set for that Docker image, as you probably don't want to embed your own private key there. To make it work, you'll have to add that key to the deployment keys in your git repository. Here's the complete recipe:
Generate ssh keys with ssh-keygen -q -t rsa -N '' -f repo-key which will give you repo-key and repo-key.pub files.
Add repo-key.pub to your repository deployment keys.
On GitHub, go to [your repository] -> Settings -> Deploy keys
Add something like this to your Dockerfile:
ADD repo-key /
RUN \
  chmod 600 /repo-key && \
  echo "IdentityFile /repo-key" >> /etc/ssh/ssh_config && \
  echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
# your git clone commands here...
Note that the above switches off StrictHostKeyChecking, so you don't need .ssh/known_hosts. Although I probably prefer the solution with ssh-keyscan in one of the answers above.
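For reference, a sketch of the ssh-keyscan variant, which pins the host key instead of disabling checking (host name per the question):
RUN mkdir -p /root/.ssh && \
    ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts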
There's no need to fiddle around with ssh configurations. Use a configuration file (not a Dockerfile) that contains environment variables, and have a shell script update your Dockerfile at runtime. You keep tokens out of your Dockerfiles, and you can clone over https (no need to generate or pass around ssh keys).
Go to Settings > Personal Access Tokens
Generate a personal access token with repo scope enabled.
Clone like this: git clone https://MY_TOKEN@github.com/user-or-org/repo
Some commenters have noted that if you use a shared Dockerfile, this could expose your access key to other people on your project. While this may or may not be a concern for your specific use case, here are some ways you can deal with that:
Use a shell script to accept arguments which could contain your key as a variable. Replace a variable in your Dockerfile with sed or similar, e.g. calling the script as sh rundocker.sh MYTOKEN=foo, which would substitute the token into https://{{MY_TOKEN}}@github.com/user-or-org/repo; see the sketch after this list. Note that you could also use a configuration file (in .yml or whatever format you want) to do the same thing, but with environment variables.
Create a GitHub user (and generate an access token) for that project only.
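A minimal sketch of the wrapper-script idea from the first bullet (the script name, the MYTOKEN= argument format, and the {{MY_TOKEN}} placeholder are all illustrative):
#!/bin/sh
# rundocker.sh: substitute the token into a Dockerfile template, then build.
TOKEN="${1#MYTOKEN=}"   # strip the MYTOKEN= prefix from the first argument
sed "s|{{MY_TOKEN}}|${TOKEN}|g" Dockerfile.template > Dockerfile
docker build -t my-image .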
You often do not want to perform a git clone of a private repo from within the docker build. Doing the clone there involves placing the private ssh credentials inside the image where they can be later extracted by anyone with access to your image.
Instead, the common practice is to clone the git repo from outside of docker in your CI tool of choice, and simply COPY the files into the image. This has a second benefit: docker caching. Docker caching looks at the command being run, environment variables it includes, input files, etc, and if they are identical to a previous build from the same parent step, it reuses that previous cache. With a git clone command, the command itself is identical, so docker will reuse the cache even if the external git repo is changed. However, a COPY command will look at the files in the build context and can see if they are identical or have been updated, and use the cache only when it's appropriate.
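A minimal sketch of that pattern (repo URL, paths, and image name are illustrative):
# In the CI job, before the image build:
git clone git@bitbucket.org:User/repo.git repo-src
docker build -t my-image .
# ...and in the Dockerfile, instead of a RUN git clone:
# COPY repo-src /repo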
BuildKit has a feature just for ssh which allows you to still use your password-protected ssh keys; the result looks like:
# syntax=docker/dockerfile:experimental
FROM ubuntu as clone
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN --mount=type=ssh \
git clone git@bitbucket.org:User/repo.git
And you can build that with:
$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa
(Input your passphrase here)
$ DOCKER_BUILDKIT=1 docker build -t your_image_name \
--ssh default=$SSH_AUTH_SOCK .
This way, the credential is made available to the build without ever being written to an image layer, removing the risk that it could accidentally leak out.
BuildKit also has a feature that allows you to pass an ssh key in as a mount that never gets written to the image:
# syntax=docker/dockerfile:experimental
FROM ubuntu as clone
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN --mount=type=secret,id=ssh_id,target=/root/.ssh/id_rsa \
git clone git@bitbucket.org:User/repo.git
And you can build that with:
$ DOCKER_BUILDKIT=1 docker build -t your_image_name \
--secret id=ssh_id,src=$(pwd)/id_rsa .
Note that this still requires your ssh key to not be password protected, but you can at least run the build in a single stage, removing a COPY command and avoiding the ssh credential ever being part of an image.
If you are going to add credentials into your build, consider doing so with a multi-stage build, and only placing those credentials in an early stage that is never tagged and pushed outside of your build host. The result looks like:
FROM ubuntu as clone
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Copy over private key, and set permissions
# Warning! Anyone who gets their hands on this image will be able
# to retrieve this private key file from the corresponding image layer
COPY id_rsa /root/.ssh/id_rsa
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
FROM ubuntu as release
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
COPY --from=clone /repo /repo
...
To force docker to run the git clone even when the lines before have been cached, you can inject a build ARG that changes with each build to break the cache. That looks like:
# inject a datestamp arg which is treated as an environment variable and
# will break the cache for the next RUN command
ARG DATE_STAMP
# Clone the conf files into the docker container
RUN git clone git#bitbucket.org:User/repo.git
Then you inject that changing arg in the docker build command:
date_stamp=$(date +%Y%m%d-%H%M%S)
docker build --build-arg DATE_STAMP=$date_stamp .
Another option is to use a multi-stage docker build to ensure that your SSH keys are not included in the final image.
As described in my post, you can prepare your intermediate image with the required dependencies to run git clone, and then COPY the required files into your final image.
Additionally if we LABEL our intermediate layers, we can even delete them from the machine when finished.
# Choose and name our temporary image.
FROM alpine as intermediate
# Add metadata identifying these images as our build containers (this will be useful later!)
LABEL stage=intermediate
# Take an SSH key as a build argument.
ARG SSH_KEY
# Install dependencies required to git clone.
RUN apk update && \
apk add --update git && \
apk add --update openssh
# 1. Create the SSH directory.
# 2. Populate the private key file.
# 3. Set the required permissions.
# 4. Add github to our list of known hosts for ssh.
RUN mkdir -p /root/.ssh/ && \
echo "$SSH_KEY" > /root/.ssh/id_rsa && \
chmod -R 600 /root/.ssh/ && \
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
# Clone a repository (my website in this case)
RUN git clone git@github.com:janakerman/janakerman.git
# Choose the base image for our final image
FROM alpine
# Copy across the files from our `intermediate` container
RUN mkdir files
COPY --from=intermediate /janakerman/README.md /files/README.md
We can then build:
MY_KEY=$(cat ~/.ssh/id_rsa)
docker build --build-arg SSH_KEY="$MY_KEY" --tag clone-example .
Prove our SSH keys are gone:
docker run -ti --rm clone-example cat /root/.ssh/id_rsa
Clean intermediate images from the build machine:
docker rmi -f $(docker images -q --filter label=stage=intermediate)
For a Bitbucket repository, generate an App Password (Bitbucket settings -> Access Management -> App Password) with read access to the repo and project.
Then the command that you should use is:
git clone https://username:generated_password@bitbucket.org/reponame/projectname.git
Nowadays you can use the BuildKit option --ssh default when you build your container; prior to the build, you need to add your SSH deploy key to your ssh-agent.
Here is the full process from the beginning:
Create a key pair on your deployment server. Just run ssh-keygen -t ecdsa and store the key pair in ~/.ssh.
Add the generated public key (.pub extension) to your git provider's website (GitLab, GitHub, ...).
Add your key to your ssh-agent (a program that basically makes managing your keys easier than handling every file):
eval $(ssh-agent)
ssh-add /path/to/your/private/key
Add this to your Dockerfile:
# these first lines add your provider's public key to known_hosts
# so git doesn't get an error from SSH.
RUN mkdir -m 700 /root/.ssh && \
    touch /root/.ssh/known_hosts && \
    chmod 600 /root/.ssh/known_hosts && \
    ssh-keyscan your-git-provider.com > /root/.ssh/known_hosts
# now you can clone with the --mount=type=ssh option,
# which forwards your host's ssh agent to Docker
RUN --mount=type=ssh \
    mkdir -p /wherever/you/want/to/clone && \
    cd /wherever/you/want/to/clone && \
    git clone git@gitlab.com:your-project.git
And now you can finally build your Dockerfile (with buildkit enabled)
DOCKER_BUILDKIT=1 docker build . --ssh default
As you cannot currently pass console parameters to build in docker-compose, this solution is not yet available for docker-compose, but it should be soon (it has been implemented on GitHub and proposed as a merge request).
p.s. this solution is quick & easy, but at the cost of reduced security (see the comments by @jrh).
Create an access token: https://github.com/settings/tokens
Pass it in as a build argument to docker (see the build command sketch after the Dockerfile lines below).
(p.s. if you are using CapRover, set it under App Configs)
In your Dockerfile:
ARG GITHUB_TOKEN
RUN git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
RUN pip install -r requirements.txt
p.s. this assumes that private repos are in the following format in requirements.txt:
git+https://github.com/<YOUR-USERNAME>/<YOUR-REPO>.git
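The token is then passed at build time, along these lines (image name illustrative):
docker build --build-arg GITHUB_TOKEN="$GITHUB_TOKEN" -t my-image .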
For other people searching: I had the same issue, and adding the --ssh default flag made it work.
In addition to @Calvin Froedge's approach of using a Personal Access Token (PAT),
you may need to add oauth or oauth2 as the username before your PAT to authenticate, like this:
https://oauth:<token>@git-url.com/user/repo.git
I recently had a similar issue with a private repository in a Rust project.
I suggest avoiding ssh permissions/config altogether.
Instead:
clone the repository within the build environment, e.g. the CI, where the permissions already exist (or can be easily configured)
copy the files into the image in the Dockerfile (this can also be cached natively within the CI)
Example
part 1) within CI
CARGO_HOME=tmp-home cargo fetch
part 2) within Dockerfile
COPY tmp-home $CARGO_HOME
The process is the same regardless of the language/package system.