I wrote the following pipeline:
image: maven:3-openjdk-11

variables:
  TARGET_LOCATION: "/tmp/uploads/"

stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - export MAVEN_ARTIFACT_VERSION=$(mvn --non-recursive help:evaluate -Dexpression=project.version | grep -v '\[.*' | tail -1)
    - export MAVEN_ARTIFACT=app-${MAVEN_ARTIFACT_VERSION:+$MAVEN_ARTIFACT_VERSION.jar}
  script:
    - eval $(ssh-agent -s)
    # (SSH STUFF HERE...)
    - scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}
I expected $MAVEN_ARTIFACT in the scp command to expand to something like app-BETA-0.1.jar and ${TARGET_LOCATION} to expand to its value, but neither is substituted and I see the variable names in both places. I tried with braces as well, but I can't achieve what I want.
Is there any way to pass variables generated during script execution as arguments to other programs executed in the same script section?
Below is a piece of the log from the pipeline execution:
$ scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}
You're using these correctly, and it is working.
The GitLab pipeline log shows each command exactly as you wrote it in your script; it does not replace variables with their values before printing them. The variables are still expanded when the command actually runs. If you need to verify a value or confirm it is set, use standard debugging techniques, like printing the value with something like echo $MAVEN_ARTIFACT.
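For example, a minimal debugging sketch (variable names taken from the pipeline above; the test line is just an illustrative extra check):

script:
  - echo "MAVEN_ARTIFACT=$MAVEN_ARTIFACT"              # the log prints this line verbatim, then the expanded value
  - echo "TARGET_LOCATION=${TARGET_LOCATION}"
  - test -f "/builds/xxxxx/app/target/$MAVEN_ARTIFACT" # fails fast if the artifact name did not expand as expected
  - scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}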
Related
I am trying to deploy a branch other than the default (master) branch. For some reason, the predefined variables are not visible inside the roll_out.sh script, but the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH
deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are SSHing to another machine within your CI script, I don't think it would have access to the variables you are echoing in the lines before, because it's a new session on a different machine.
You could try a few different things, though, to achieve what you want:
Send the variables as arguments (not great for sending information to a machine through SSH); see the sketch after this list.
Install a GitLab runner on the host you are trying to SSH to, tag this runner so it only runs the specific deployment job, and then you'll have the variables available on the host.
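A minimal sketch of the first option, assuming roll_out.sh reads its inputs as positional parameters ($1, $2):

- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "bash -s $CI_REGISTRY_IMAGE $CI_COMMIT_BRANCH" < ./script/roll_out.sh

With bash -s, the arguments after -s become the positional parameters of the script read from stdin.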
The problem was solved by explicitly passing the environment variables to the script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote machine. But it seems those three variables are some sort of exception.
I have the following script in my gitlab-ci.yml and would like to run the loop iterations at the same time. Does anyone know a good way to do this so that they both run concurrently?
NOTE: the job is a manual job, and I am looking for a single button click to loop through all the packages in the bash script as shown below.
when: manual
script:
  - |-
    for PACKAGE in name1 name2; do
      export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$PACKAGE:${BUILD_TAG}"
      docker build -t ${IMAGE} -f $PACKAGE/Dockerfile .
      docker push ${IMAGE}
    done
Currently it runs for name1 first, and only after that finishes does it run for name2. I would like to run both at the same time, since there is no dependency between them.
Here is what I tried, based on an answer on Unix & Linux Stack Exchange (https://unix.stackexchange.com/a/216475/138406):
when: manual
script:
  - |-
    task() {
      export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
      docker build -t ${IMAGE} -f $1/Dockerfile .
      docker push ${IMAGE}
    }
    for PACKAGE in name1 name2; do
      task "$PACKAGE" &
    done
This works in a regular bash script, but when I used it with gitlab-ci, it doesn't run as expected: it does not even run any of the commands and just succeeds the job instantly.
Can anyone help with where the issue is and how to solve it?
To achieve your use case, I'd suggest you "parallelize" the build using several dedicated GitLab CI jobs, rather than several background bash jobs in a single GitLab CI job.
Proof-of-concept:
stages:
  - push

.push-template:
  stage: push
  image: docker:latest
  services:
    - docker:dind
  variables:
    IMAGE: "${CI_REGISTRY}/${GITLAB_REPO}/${PACKAGE}:${BUILD_TAG}"
    # where ${PACKAGE} should be provided by the child jobs...
  before_script: |
    if [ -z "${PACKAGE}" ]; then
      echo 'Error: variable PACKAGE is undefined' >&2
      false
    fi
    # just for completeness, this may be required:
    echo "${CI_JOB_TOKEN}" | docker login -u "${CI_REGISTRY_USER}" --password-stdin "${CI_REGISTRY}"
  script:
    - docker build -t "${IMAGE}" -f "${PACKAGE}/Dockerfile" .
    - docker push "${IMAGE}"
    - docker logout "${CI_REGISTRY}" || true

push-name1:
  extends: .push-template
  variables:
    PACKAGE: name1

push-name2:
  extends: .push-template
  variables:
    PACKAGE: name2
See the .gitlab-ci.yml reference manual for details on the extends keyword.
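Note that recent GitLab versions also provide parallel:matrix, which generates one job per value without writing a block per package. A minimal sketch of the same idea, under the same assumptions about the registry variables as above:

push:
  stage: push
  image: docker:latest
  services:
    - docker:dind
  parallel:
    matrix:
      - PACKAGE: [name1, name2]
  script:
    - docker build -t "${CI_REGISTRY}/${GITLAB_REPO}/${PACKAGE}:${BUILD_TAG}" -f "${PACKAGE}/Dockerfile" .
    - docker push "${CI_REGISTRY}/${GITLAB_REPO}/${PACKAGE}:${BUILD_TAG}"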
just succeeds the job instantly
That is what running in the background means: the main process continues immediately. You have to wait for the background processes to finish.
- |-
  task() {
    export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
    docker build -t ${IMAGE} -f $1/Dockerfile .
    docker push ${IMAGE}
  }
  for PACKAGE in name1 name2; do
    task "$PACKAGE" &
  done
  wait
But that will not catch any errors, which will result in problems going undetected. You would have to collect the PIDs and wait on them individually:
...
childs=""
for package in name1 name2; do
  task "$package" &
  childs="$childs $!"
done
for pid in $childs; do
  if ! wait "$pid"; then
    echo "Process with pid=$pid failed"
    kill $childs
    wait
    exit 1
  fi
done
But anyway, this is cumbersome, and it's reinventing the wheel. Install GNU xargs (or, even better, parallel) and make sure your Docker container has a bash shell. Then just export the function and run it in subprocesses with xargs:
...
export -f task
printf "%s\n" name1 name2 | xargs -n1 -P0 -d '\n' bash -xeuo pipefail -c 'task "$@"' --
You may want to research https://man7.org/linux/man-pages/man1/xargs.1.html or even https://www.gnu.org/software/bash/manual/html_node/Job-Control-Basics.html .
Definitely, instead of writing long scripts in the .gitlab-ci.yml file, move it all to a dedicated script file, so that you can test the run locally; see the sketch below. Check your scripts with shellcheck. Using docker-compose might also be simpler.
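A minimal sketch of that split (the path ci/build_and_push.sh is hypothetical; the file would contain the task function and the xargs pipeline from above):

script:
  - ./ci/build_and_push.sh name1 name2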
Anyway, this is all odd, and I wouldn't do it that way. GitLab Runner is already the tool that gives you parallelization: it runs multiple jobs in the same stage in parallel. Just run two jobs.
.todo:
  script:
    - export IMAGE="$CI_REGISTRY/$GITLAB_REPO/${CI_JOB_NAME}:${BUILD_TAG}"
    - docker build -t ${IMAGE} -f ${CI_JOB_NAME}/Dockerfile .
    - docker push ${IMAGE}

name1:
  extends: .todo

name2:
  extends: .todo
Such an approach gives your pipeline a visible indication of which specific task failed, so you don't need to scroll through the mangled, unreadable logs of two processes running in parallel. Just one job with one task.
I am trying to run Maven through a Docker image using a shell script.
When running docker in the shell, I use sed to remove single quotes:
bash script:
docker run $(echo "-e CUCUMBER_FILTER_TAGS=$CUCUMBER_FILTER_TAGS $RUN_ARGUMENT $INPUT_MAVEN_COMMAND $MAVEN_ARGUMENTS $AUTHENTICATION" | sed "s/'//g")
is translated into
docker run -e 'CUCUMBER_FILTER_TAGS="@bidrag-person' and not '@ignored"' --rm -v /home/deployer/actions-bidrag-cucumber-backend/ws/bidrag-cucumber-backend/bidrag-cucumber-backend:/usr/src/mymaven -v /home/deployer/.m2:/root/.m2 -w /usr/src/mymaven maven:3.6.3-openjdk-15 mvn test -e -DUSERNAME=j104364 -DINTEGRATION_INPUT=json/integrationInput.json -DUSER_AUTH=*** -DTEST_AUTH=*** -DPIP_AUTH=***
How can I remove those extra single quotes around and within CUCUMBER_FILTER_TAGS that seem to pop up from nowhere?
I cannot solve this and am seeking a solution. This script (https://github.com/navikt/bidrag-maven/blob/feature/filter.tags/cucumber-backend/cucumber.sh) is being run from a scheduled job on GitHub (GitHub Actions, as part of a GitHub workflow).
The other variables (which are not inputs to this script) are set as environment variables from GitHub secrets, used here:
AUTHENTICATION="-DUSER_AUTH=$USER_AUTHENTICATION -DTEST_AUTH=$TEST_USER_AUTHENTICATION -DPIP_AUTH=$PIP_USER_AUTHENTICATION"
and set in a GitHub workflow YAML file like this:
- uses: navikt/bidrag-maven/cucumber-backend@v6
  with:
    maven_image: maven:3.6.3-openjdk-15
    username: j104364
  env:
    USER_AUTHENTICATION: ${{ secrets.USER_AUTHENTICATION }}
    TEST_USER_AUTHENTICATION: ${{ secrets.TEST_USER_AUTHENTICATION }}
    PIP_USER_AUTHENTICATION: ${{ secrets.PIP_USER_AUTHENTICATION }}
You should be using an array.
docker_options=(
  -e "CUCUMBER_FILTER_TAGS=$CUCUMBER_FILTER_TAGS"
  "$RUN_ARGUMENT"
  "$INPUT_MAVEN_COMMAND"
  "$MAVEN_ARGUMENTS"
  "$AUTHENTICATION"
)
docker run "${docker_options[@]}"
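A short, self-contained illustration of why the array form avoids the stray quotes (the tag value here is just an example):

# each array element becomes exactly one argument: spaces preserved, no quotes added
tags='@bidrag-person and not @ignored'
args=(-e "CUCUMBER_FILTER_TAGS=$tags")
printf '<%s>\n' "${args[@]}"
# output:
# <-e>
# <CUCUMBER_FILTER_TAGS=@bidrag-person and not @ignored>

Note that variables such as $MAVEN_ARGUMENTS that themselves hold several flags would also need to become arrays; quoting them as single elements collapses all the flags into one argument.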
While these answers work to some degree, they will not function in my use case... I have recently upgraded the server where these scripts are used, and I have moved on to other scripting languages...
Conclusion:
Bash scripting is hard and painstaking... Both of these suggestions function (sort of), but not as intended...
I have an Azure Pipeline that I am trying to build. The Dockerfile needs variables containing some sensitive information about our Azure Key Vault. My goal is to not have this information stored in any file, but rather to have the pipeline update the Dockerfile on each run. I have created a simple bash script inside the repo, which the pipeline is set to run; it takes the variable names and updates the Dockerfile. However, it says it cannot find the Dockerfile.
This is the script:
sed -i "s/<AZURE_CLIENT_ID>/${1}/g" Dockerfile
sed -i "s/<AZURE_CLIENT_SECRET>/${2}/g" Dockerfile
sed -i "s/<AZURE_TENANT_ID>/${3}/g" Dockerfile
sed -i "s/<KEY_VAULT_NAME>/${4}/g" Dockerfile
I have set the arguments in the Pipeline, and used an echo command to ensure they're returning the desired values.
This is the error I get:
sed: can't read Dockerfile: No such file or directory
/home/vsts/work/1/s/update-docker-var.sh: line 6: $'\r': command not found
I have tried using the ls and pwd commands, but they both return nothing... I am kind of stumped here; any help would be great. Thank you!
This is the bash section of the pipeline YAML:
- task: Bash@3
  inputs:
    filePath: 'update-docker-var.sh'
    arguments: '$(AZURE_CLIENT_ID) $(AZURE_CLIENT_SECRET) $(AZURE_TENANT_ID) $(KEY_VAULT_NAME)'
    workingDirectory: '$(Build.SourcesDirectory)'
I want to set an environment variable in my Dockerfile.
I've got a .env file that looks like this:
FOO=bar
Inside my Dockerfile, I've got a command that parses the contents of that file and assigns it to FOO.
RUN 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'
The problem I'm running into is that the script above doesn't return what I need it to. In fact, it doesn't return anything.
When I run docker-compose up --build, it fails with this error:
The command '/bin/sh -c 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'' returned a non-zero code: 127
I know that the command /bin/sh -c 'echo "$(cut -d'=' -f2 <<< $(grep FOO .env))"' will generate the correct output, but I can't figure out how to assign that output to an environment variable.
Any suggestions on what I'm doing wrong?
Environment Variables
If you want to set a number of environment variables in your Docker containers, you can simply use the env_file configuration option in your docker-compose.yml file. With that option, all the entries in the .env file will be set as environment variables in the containers.
More Info about env_file
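A minimal sketch of that option (the service name app is just a placeholder):

services:
  app:
    build: .
    env_file:
      - .env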
Build ARGS
If your requirement is to use some variables only within your Dockerfile, then you specify them as below:
ARG FOO
ARG FOO1
ARG FOO2
etc...
And you have to specify these arguments under the build key in your docker-compose.yml:
build:
  context: .
  args:
    FOO: BAR
    FOO1: BAR1
    FOO2: BAR2
More info about args
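To actually consume such an argument inside the Dockerfile, reference it after declaring it; a small sketch:

ARG FOO
# ARG values are visible to subsequent build instructions, but not in the running container:
RUN echo "building with FOO=${FOO}"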
Accessing .env values within the docker-compose.yml file
If you are looking to pass some values into your docker-compose file from the .env file, you can simply put the .env file in the same location as the docker-compose.yml file and set the configuration values as below:
ports:
  - "${HOST_PORT}:80"
So, as an example, you can set the host port for the service by setting it in your .env file.
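For instance, the .env file could contain (the port number is just an example):

HOST_PORT=8080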
Please check this
First, the error you're seeing: I suspect there's a "not found" error message not included in the question. If that's the case, the first issue is that you tried to run an executable that is the full string, since you enclosed it in quotes. Rather than running the shell command export, the shell is trying to find a binary whose name is the full string, spaces and all. To get past that error, you'd need to unquote your RUN string:
RUN export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")
However, that won't solve your underlying problem. The result of a RUN command is that Docker saves the changes to the filesystem as a new layer in the image. Only changes to the filesystem are saved. The shell command you are running changes the shell state, but then that shell exits, the RUN command returns, and the state of that shell, including environment variables, is gone.
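A two-line Dockerfile fragment demonstrating this (a sketch, independent of your build):

RUN export FOO=bar
RUN echo "FOO is '$FOO'"   # prints FOO is '': the shell from the previous RUN has already exited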
To solve this for your application, there are two options I can think of:
Option A: inject build args into your build for all the .env values, and write a script that calls build with the proper --build-arg flag for each variable. Inside the Dockerfile, you'll have two lines for each variable:
ARG FOO1="default value1"
ARG FOO2="default value2"
ENV FOO1=${FOO1} \
    FOO2=${FOO2}
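A sketch of the calling script for Option A, reusing the grep/cut extraction from the question (the variable names FOO1/FOO2 follow the example above):

docker build \
  --build-arg FOO1="$(grep '^FOO1=' .env | cut -d= -f2-)" \
  --build-arg FOO2="$(grep '^FOO2=' .env | cut -d= -f2-)" \
  .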
Option B: inject your .env file and process it with an entrypoint in your container. This entrypoint could run your export command before kicking off the actual application. You'll also need to do this for each RUN command during the build where you need these variables. One shorthand I use for pulling in the file contents to environment variables is:
set -a && . .env && set +a
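For example, a minimal entrypoint sketch (the file name entrypoint.sh and the assumption that .env is shipped inside the image are both hypothetical):

#!/bin/sh
# entrypoint.sh: export every variable defined in .env, then hand off to the real command
set -a && . ./.env && set +a
exec "$@"

wired up in the Dockerfile with something like:

COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-app"]   # hypothetical application command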