How to fix dynamic environment in GitLab?

I have added a "deploy review" stage and a deploy job to my YAML file with the following code:
deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
When I check the pipeline on my GitLab account, I see the commit on my review branch, but I do not see the "deploy review" job running. I only see the "test artifact" and "test website" jobs running.
The link to the gitlab project is https://gitlab.com/syed.r.abdullah/my-static-website/tree/review
I took the following steps:
Added "deploy review" to the yaml file
Created a new branch "review" locally
Added the change to the yaml file in review branch
Committed the change
Pushed the change to gitlab using git push -u origin review
Visited my pipelines and saw the review pipeline in a failed state
The jobs inside the review pipeline are "test artifact" and "test website", not "deploy review"
image: node

variables:
  STAGING_DOMAIN: crazymonk84-staging.surge.sh
  PRODUCTION_DOMAIN: crazymonk84.surge.sh

stages:
  - build
  - test
  - deploy review
  - deploy staging
  - deploy production
  - production tests
  - cache

cache:
  key: ${CI_COMMIT_REF_SLUG}
  policy: pull
  paths:
    - node_modules/

update cache:
  stage: cache
  script:
    - npm install
  only:
    - schedules
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    policy: push
    paths:
      - node_modules/

build website:
  stage: build
  only:
    - master
    - merge_requests
  except:
    - schedules
  script:
    - echo $CI_COMMIT_SHORT_SHA
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby build
    - sed -i "s/%%VERSION%%/$CI_COMMIT_SHORT_SHA/" ./public/index.html
  artifacts:
    paths:
      - ./public

test website:
  stage: test
  except:
    - schedules
  script:
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby serve &
    - sleep 3
    - curl "http://localhost:9000" | tac | tac | grep -q "Gatsby"

test artifact:
  image: alpine
  stage: test
  except:
    - schedules
  script:
    - grep -q "Gatsby" ./public/index.html
  cache: {}

deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh

deploy staging:
  stage: deploy staging
  environment:
    name: staging
    url: http://$STAGING_DOMAIN
  only:
    - master
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $STAGING_DOMAIN
  cache: {}

deploy production:
  stage: deploy production
  environment:
    name: production
    url: http://$PRODUCTION_DOMAIN
  only:
    - master
  when: manual
  allow_failure: false
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $PRODUCTION_DOMAIN
  cache: {}

production tests:
  image: alpine
  stage: production tests
  only:
    - master
  except:
    - schedules
  script:
    - apk add --no-cache curl
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "Hi people"
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "$CI_COMMIT_SHORT_SHA"
  cache: {}
I am expecting to see "deploy review" as the only job in the pipeline. However, I see "test artifact" and "test website." What can I do to fix the issue? Thanks.

I found one solution:
Add the following to the build website and deploy review jobs (a sketch of both jobs is shown below):
only:
  - master
  - merge_requests
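For illustration, a minimal sketch of how that fix might look when applied to both jobs; everything not shown stays exactly as in the pipeline above:
build website:
  stage: build
  only:
    - master
    - merge_requests
  # ...rest of the job unchanged

deploy review:
  stage: deploy review
  only:
    - master
    - merge_requests
  # ...rest of the job unchanged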

I expect you watched a course by Valentin Despa. I've just come across the same problem. I wonder if he published any solution to the issue.
Basically I'm not positive if I'm correct, but...
If we open the link below, we can find an explanation:
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html
only:
  - merge_requests
runs the deploy review job in a detached (merge request) pipeline. We don't have access to our environment variables since they are protected. What I did was go to Settings -> CI/CD -> Variables and untick the "Protect variable" option for both variables.
Then, if you run the pipeline again you'll notice it'll throw an error on ./public.
Play his video on dynamic environments and notice at 4:49 (in the timeline) that there are three green pipelines. You have only one (your pipeline is detached), meaning the build website stage hasn't been run. That's why you'll see the error relating to ./public: your pipeline knows nothing about Gatsby. We need to install Gatsby first and then build the site.
deploy review:
  stage: deploy review
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  only:
    - merge_requests
  script:
    - npm install --silent
    - npm install -g gatsby-cli
    - gatsby build
    - npm install --global surge
    - surge --project ./public --domain yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  artifacts:
    paths:
      - ./public

Related

The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline

I'm trying to get Bitbucket Pipelines to work with multiple steps that define the deployment area. When I do, I get the error
Configuration error: The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline. Please refer to our documentation for valid environments and their ordering.
From what I read, the deployment setting has to be applied on a step-by-step basis.
How would I set up this example pipelines file to not hit that error?
image: ubuntu:18.04

definitions:
  steps:
    - step: &build
        name: npm-build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:14.17.5
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run dev
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploychanges
        name: Deploy_Changes
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'
    - step: &deploycompiled
        name: Deploy_Compiled
        deployment: Staging
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring compiled assets'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/css/ -vv
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/js/ -vv
          - echo 'File transfer complete'

pipelines:
  branches:
    master:
      - step: *build
        <<: *deploychanges
        deployment: Production
      - step:
          <<: *deploycompiled
          deployment: Production
    dev:
      - step: *build
      - step: *deploychanges
      - step: *deploycompiled
The workaround I have found for reusing environment variables, without using the deployment clause on more than one step in a pipeline, is to dump the environment variables to a file and save it as an artifact that is sourced in the following steps.
The code snippet for it would look like:
steps:
  - step: &set-environment-variables
      name: 'Set environment variables'
      script:
        - printenv | xargs echo export > .envs
      artifacts:
        - .envs
  - step: &next-step
      name: "Next step in the pipeline"
      script:
        - source .envs
        - next_actions

pipelines:
  pull-requests:
    '**':
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
  branches:
    staging:
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
So far this solution works for me.
Update:
After having an issue with "%s" appearing in the .envs file, which caused the later source .envs statement to fail, here is a slightly different approach to the initial step. It gets around that particular issue and also exports only the variables you know you need in your pipeline. Note that there are a lot of Bitbucket environment variables available to the first script step that will be available naturally to later scripts anyway, so it makes more sense (to me, anyway) not to dump all environment variables into the .envs artifact, but to do it in a much more controlled manner.
- step: &set-environment-variables
    name: 'Set environment variables'
    script:
      - echo "export SSH_USER=$SSH_USER" > .envs
      - echo "export SERVER_IP=$SERVER_IP" >> .envs
      - echo "export ANOTHER_ENV_VAR=$ANOTHER_ENV_VAR" >> .envs
    artifacts:
      - .envs
In this example, .envs will now contain only those 3 environment variables, and not a whole heap of system + bitbucket variables (and of course, no pesky %s characters either!)
Normally you deploy to an environment from a single step, so there is one step that deploys, and you should attach your "deployment" group to that step specifically. This is how Bitbucket tracks whether the code was deployed or not. You can have multiple steps where one runs unit tests, another runs integration tests, another builds the binaries, and the last one deploys the artifact to the environment and carries the deployment group. See the example below.
definitions:
  steps:
    - step: &test-vizdom-services
        name: "Vizdom services unit Tests"
        image: mcr.microsoft.com/dotnet/core/sdk:3.1
        script:
          - cd ./vizdom/vizdom.services.Tests
          - dotnet test vizdom.services.Tests.csproj

pipelines:
  custom:
    DEV-AWS-api-deploy:
      - step: *test-vizdom-services
      - step:
          name: "Vizdom Webapi unit Tests"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - export ASPNETCORE_ENVIRONMENT=Dev
            - cd ./vizdom/vizdom.webapi.tests
            - dotnet test vizdom.webapi.tests.csproj
      - step:
          deployment: DEV-API
          name: "API: Build > Zip > Upload > Deploy"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - apt-get update
            - apt-get install zip -y
            - mkdir -p ~/deployment/release_dll
            - cd ./vizdom/vizdom.webapi
            - cp -r ../shared_settings ~/deployment
            - dotnet publish vizdom.webapi.csproj -c Release -o ~/deployment/release_dll
            - cp Dockerfile ~/deployment/
            - cp -r deployment_scripts ~/deployment
            - cp deployment_scripts/appspec_dev.yml ~/deployment/appspec.yml
            - cd ~/deployment
            - zip -r $BITBUCKET_CLONE_DIR/dev-webapi-$BITBUCKET_BUILD_NUMBER.zip .
            - cd $BITBUCKET_CLONE_DIR
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'upload'
                APPLICATION_NAME: 'ss-webapi'
                ZIP_FILE: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'deploy'
                APPLICATION_NAME: 'ss-webapi'
                DEPLOYMENT_GROUP: 'dev-single-instance'
                WAIT: 'false'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
So as you can see, I have multiple steps for running test cases, but I build the binaries and deploy the code in the final step. I could have broken it into separate steps, but I don't want to waste build minutes on another step, because cloning and copying the artifact takes some time. Right now there are three steps; it could have been broken into four, where the fourth one would have been the deployment step. I hope this brings some clarity.
Also you can modify the names of the deployment groups as per your needs and can have up to 50 deployment groups :)
Little did I know, it's intentional that the deploy happens in one step and that you can define the deployment environment on only one step. The following setup is what worked for us (plus the appropriate separate git-ftp files):
image: ubuntu:18.04

definitions:
  steps:
    - step: &build
        name: Build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:15.0.1
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run prod
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploy
        name: Deploy
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'

pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: Production
    dev:
      - step: *build
      - step: *deploy
I guess we cannot combine the deployment with either an artifact or a cache.
If I use a standalone deployment, I can use the same deployment for multiple steps (as in my screenshot).
If I add a cache/artifact, I get the same error as yours.
Just got the same issue today.
I don't think there's currently a solution for this other than rewriting the steps so that two steps don't run in one environment.
Waiting on https://jira.atlassian.com/browse/BCLOUD-18261, which is planned to be released in July.
Related https://community.atlassian.com/t5/Bitbucket-questions/The-deployment-environment-test-in-your-bitbucket-pipelines-yml/qaq-p/971584
This is currently not available. They do have a ticket and it says it's being worked on. The best workaround currently appears to be creating multiple developer variable environments for steps that use the same variables.
Ex:
- step:
    <<: *InitialSetup
    deployment: Beta-Setup
- step:
    <<: *Build
    deployment: Beta-Build
From the comments on the ticket:
Hey everyone,
I know this is a long-winded workaround, and someone has probably already mentioned it, but I got around the issue by setting up "sub environments", one for each step. E.g. instead of having a "Staging" environment, I set up a "Staging Build" and "Staging Deploy" environment, and just had to duplicate the variables if necessary. I did the same for production.
Having to setup and maintain all these environments and variables can be a pain, but one can automate this to prevent human error, through setting up an OAuth client tool that interfaces with the API (you just need the "pipelines" scope), if one can be bothered to go to the effort (as I have: https://blog.programster.org/bitbucket-create-oauth-client-credentials).
I can't wait for this feature to be completed as that is the "real" solution, and a lot less effort!
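For illustration, a rough sketch of that sub-environment approach in bitbucket-pipelines.yml terms (the step anchors here are placeholders, not taken from the ticket comment):
- step:
    <<: *build
    deployment: Staging Build
- step:
    <<: *deploy
    deployment: Staging Deploy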
In your case, you can solve the error with one of the following options:
Combine all steps into one big step
Or create different deployment variable groups, e.g. Staging DeployChanges and Staging DeployCompiled; this approach may lead to duplicated variables
ex:
- step: &deploychanges
    name: Deploy_Changes
    deployment: Staging DeployChanges
    script:
      - ....
- step: &deploycompiled
    name: Deploy_Compiled
    deployment: Staging DeployCompiled
    ....

Gitlab cache not uploading due to policy

I am getting the error Not uploading cache {name of branch} due to policy in my gitlab runner. My .yaml file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull

before_script:
  - yarn install --cache-folder .yarn

test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test

pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build

publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist

deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract the cache. Any tips or corrections welcome. Here is a screenshot of where it fails:
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global default saying that you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: piece, the default is to pull-push (which will get your cache uploading).
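For example (an illustrative sketch, not taken from the answer), the global cache block from the question with the policy line simply removed would read:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  # no policy key here, so the default pull-push applies and the cache gets uploaded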
I tend to have my pipelines structured a little different than yours, though. I typically have a "prep" step that I define, and then run the yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: Then you can leave your global policy of 'pull', since this one job will have an override to pull-push.
You can then remove the yarn install from all other jobs, as the cache will be restored for them (see the sketch below).
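As an illustration (a sketch assuming the global cache block from the question is kept with policy: pull; the job name and command are placeholders), a later job could then simply rely on the restored cache:
test:
  stage: test
  image: node:14
  # inherits the global cache (policy: pull), so .yarn and node_modules/ are restored
  script:
    - yarn test   # no 'yarn install' needed here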

Gitlab CD/CI: The user-provided path build/ does not exist

I have created a simple React app for practicing GitLab's CI/CD pipeline. I have three jobs in the pipeline: first test the app, then build it, then deploy it to an AWS S3 bucket. The test passes and the production build runs, but when the pipeline reaches the deploy stage I get this error: The user-provided path build does not exist. I don't know how to make the path available in GitLab's CI/CD pipeline.
This is my .gitlab-ci.yml setup:
image: 'node:12'

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script:
    - yarn install
    - yarn run test

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME

build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
If the build/ folder is created as part of the build stage, then it should be passed as an artifact to the deploy stage, and deploy should reference the build job using dependencies:
build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - build/

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  dependencies:
    - build
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"

Is there any way in GitLab CI/CD to make a stage with two stages inside?

I have a .gitlab-ci.yml file with two stages, and I would like to have those stages inside a general stage called release. In other words, I would like to have stages 1 and 2 inside release. My .gitlab-ci.yml file is this:
image: google/cloud-sdk:slim

stages:
  - deploy-website
  - deploy-cloud-function

before_script:
  - gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
  - gcloud config set project $GOOGLE_PROJECT_ID

deploy-website:
  stage: deploy-website
  script:
    - gsutil -m rm gs://ahinko.com/**
    - gsutil -m cp -R src/client-side/* gs://ahinko.com
  environment:
    name: production
    url: https://ahinko.com
  only:
    - ci-test

deploy-cloud-function:
  stage: deploy-cloud-function
  script:
    - gcloud functions deploy send_contact --entry-point=send_contact_form --ingress-settings=all --runtime=python37 --source=src/server-side/cf-send-email/ --trigger-http
  environment:
    name: production
    url: https://ahinko.com
  only:
    - ci-test
To achieve what you want, you just need to use the same stage name, like this:
image: google/cloud-sdk:slim

stages:
  - release

before_script:
  - gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
  - gcloud config set project $GOOGLE_PROJECT_ID

deploy-website:
  stage: release
  script:
    - gsutil -m rm gs://ahinko.com/**
    - gsutil -m cp -R src/client-side/* gs://ahinko.com
  environment:
    name: production
    url: https://ahinko.com
  only:
    - ci-test

deploy-cloud-function:
  stage: release
  script:
    - gcloud functions deploy send_contact --entry-point=send_contact_form --ingress-settings=all --runtime=python37 --source=src/server-side/cf-send-email/ --trigger-http
  environment:
    name: production
    url: https://ahinko.com
  only:
    - ci-test
But if you do this, your jobs will start at the same time.
To avoid this behaviour, you need to change the jobs to start manually (a sketch follows below):
when: manual
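For example, based on the jobs above, making the cloud-function job manual might look like this (a sketch; everything else stays as shown):
deploy-cloud-function:
  stage: release
  when: manual
  script:
    - gcloud functions deploy send_contact --entry-point=send_contact_form --ingress-settings=all --runtime=python37 --source=src/server-side/cf-send-email/ --trigger-http
  environment:
    name: production
    url: https://ahinko.com
  only:
    - ci-test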

Only run integration test when deploying from staging to production

I have 3 main branches in my GitLab project: dev, staging, production. I'm using postman newman for integration testing like this in .gitlab-ci.yml:
postman_tests:
  stage: postman_tests
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  only:
    - merge_requests
  script:
    - newman --version
    - newman run https://api.getpostman.com/collections/zzz?apikey=zzz --environment https://api.getpostman.com/environments/xxx?apikey=xxxx
This job only runs during the merge request approval process from dev to staging, or from staging to production. The problem is that I need to run this postman newman test only during the merge request approval process from staging to production. How can I achieve this?
This can be achieved using the 'advanced' only/except settings in combination with supplied environment variables:
postman_tests:
  stage: postman_tests
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "staging"
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"
  script:
    - newman --version
    - newman run https://api.getpostman.com/collections/zzz?apikey=zzz --environment https://api.getpostman.com/environments/xxx?apikey=xxxx
For a full list of predefined environment variables you can go here
