Travis + Matrix deploy to multiple AWS buckets - matrix

I wonder how to set up a matrix with dual deployment.
Here is the logic:
language: node_js
node_js:
  - 8
cache:
  yarn: true
matrix:
  include:
    - env: MHTD=USER1
    - env: MHTD=USER2
install:
  - if [ "$MHTD" = "USER1" ]; then yarn build-web:"$BUILD_NAME"; fi
  - if [ "$MHTD" = "USER2" ]; then yarn build-web:"$BUILD_NAME1"; fi
So now I have the logic, but I don't know how to set up the deployment step. I want to push two different builds to two different S3 buckets. How can I do that?
Any suggestions?

Matrix expansion per build stage is currently an open feature request for Travis CI. A workaround is to manually specify multiple stages with differing inputs, as in this example:
cache: bundler
jobs:
  include:
    - stage: prepare cache
      script: true
      rvm: 2.3
    - stage: test
      script: bundle show
      rvm: 2.3
    - stage: test
      script: bundle show
      rvm: 2.3
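For the original question (one S3 bucket per matrix job), a conditional deploy per bucket is one way to go. Below is a minimal sketch using the Travis S3 deploy provider with deploy conditions; the bucket names, local_dir, and the reuse of the $MHTD variable are assumptions for illustration:

deploy:
  - provider: s3
    access_key_id: $AWS_ACCESS_KEY_ID
    secret_access_key: $AWS_SECRET_ACCESS_KEY
    bucket: my-user1-bucket          # assumed bucket name
    local_dir: build                 # assumed build output directory
    skip_cleanup: true
    on:
      condition: $MHTD = USER1       # deploy only from the USER1 matrix job
  - provider: s3
    access_key_id: $AWS_ACCESS_KEY_ID
    secret_access_key: $AWS_SECRET_ACCESS_KEY
    bucket: my-user2-bucket          # assumed bucket name
    local_dir: build
    skip_cleanup: true
    on:
      condition: $MHTD = USER2       # deploy only from the USER2 matrix job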

Related

GitLab cache sharing between branches

I'm trying to run jobs A and B on a merge request. After a successful merge, job C should be executed.
Job A installs the Node.js dependencies and pushes them to the cache so that jobs B and C can use it. On the GitLab side I'm using shared runners. Please help me solve this task. My gitlab-ci.yml:
image: node:19-alpine
stages:
  - install_dependencies
  - test1
  - test2
cache:
  key: somekeyvalue1234
  paths:
    - node_modules/
  policy: pull
install_dependencies:
  stage: install_dependencies
  cache:
    key: somekeyvalue1234
    paths:
      - node_modules/
    policy: push
  script:
    - npm ci
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
test1:
  stage: test1
  script:
    - npm run test1
  rules:
    - if: '$CI_COMMIT_BRANCH == "merge-request-test"'
test2:
  stage: test2
  script:
    - npm run test2
  rules:
    - if: '$CI_COMMIT_BRANCH == "branch-foo"'
For some reason the cache can be successfully downloaded by job B while the MR pipeline is running, but after a successful merge, job C can't find that cache. How do I populate the cache while running the pipeline shown above?
I finally solved the issue. It was very simple, but I lost a few days googling.
In CI/CD Settings -> General pipelines I unchecked the setting "Use separate caches for protected branches", and voila.
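If you would rather keep this behaviour in the pipeline definition instead of the project settings, newer GitLab releases also document a cache:unprotect keyword for sharing the cache between protected and unprotected refs. A minimal sketch, assuming your GitLab version supports it:

cache:
  key: somekeyvalue1234
  paths:
    - node_modules/
  policy: pull
  unprotect: true # assumed available in your GitLab version; shares the cache across protected and unprotected refs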

Bitbucket Pipelines: create a step with parallel steps

I want to have one step that sets everything up and then runs other steps in parallel. Currently I have something like this:
image: python:3.9.16-alpine
pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - regressiontests
        name: First Step
        clone:
          enabled: false
        caches:
          - pip
        script:
          - apk add git
          - apk add openssh-client
          - git clone myrepository.git
          - pip install -r myrepository/requirements.txt
          - echo $ENV_FILE | base64 -d -i > myrepository/.env
        artifacts:
          - myrepository/**
    - step:
        runs-on:
          - self.hosted
          - regressiontests
        name: Second Step
        clone:
          enabled: false
        caches:
          - pip
        script:
          - cd myrepository
          - pip install -r requirements.txt
        parallel:
          - step:
              name: Step 2.1
              script:
                - python fancy command 1
          - step:
              name: Step 2.2
              script:
                - python fancy command 2
          - step:
              name: Step 2.3
              script:
                - python fancy command 3
          - step:
              name: Step 2.4
              script:
                - python fancy command 4
But the only steps that I see are First Step and Second Step; none of the parallel steps is executed in the pipeline.
That's not the correct syntax for parallel steps. Check out https://support.atlassian.com/bitbucket-cloud/docs/set-up-or-run-parallel-steps/
image: python:3.9.16-alpine
pipelines:
  default:
    - step:
        name: First Step
        script: []
    - step:
        name: Second Step
        script: []
    - parallel:
        - step:
            name: Step 3.1
            script: []
        - step:
            name: Step 3.2
            script: []
        # ...
This is what I think you are trying to achieve.
BUT
Each step's script runs in a new, pristine Docker container, so any setup must happen in the same script where the software will be used.
Therefore I am afraid your whole effort to speed up the step setup is futile.
Instead, you'll want to tune your caches. For Python you may want to cache both ~/.cache/pip and a virtualenv so that pip install -r ... instructions are sped up.
Plus, I have a feeling that Bitbucket artifacts are quite slow, so I'd expect disabling the repository clone in every step to actually be slower. I'd use a shallow clone instead with
clone:
  depth: 1
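To illustrate the cache-tuning suggestion, here is a minimal sketch using a custom cache definition for a virtualenv alongside the built-in pip cache; the venv cache name, the .venv path, and the single test step are assumptions:

definitions:
  caches:
    venv: .venv                # assumed virtualenv location

pipelines:
  default:
    - step:
        name: Tests
        image: python:3.9.16-alpine
        clone:
          depth: 1             # shallow clone instead of disabling the clone
        caches:
          - pip                # built-in cache for ~/.cache/pip
          - venv               # custom cache defined above
        script:
          - python -m venv .venv
          - . .venv/bin/activate
          - pip install -r requirements.txt
          - python fancy command 1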

Gitlab CI/CD pipeline run/extend other jobs in sequence

So I have a job which is triggered by specific rules: creating a new tag app-prod-1.0.0 or app-dev-1.0.0. Whenever a new tag is created I call the job, which in turn extends other jobs:
image: node:lts-alpine
stages:
  - install
  - build
  - deploy
.install-packages:
  stage: install
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/
.build-project:
  stage: build
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build
.deploy-project:
  stage: deploy
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build
build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'
build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
My thought was that the jobs would be called in the order I described inside the job: .install-packages, .build-project, .deploy-project. But that's not happening; it seems to just jump to the last job, .deploy-project, without installing and building, which breaks my pipeline.
How do I run/extend jobs in sequence?
This behaviour is exactly why I haven't used multiple extends so far in my work with GitLab.
GitLab attempts to merge the configuration from the parent jobs.
Now, all your parent jobs define the script keyword, and in your job, e.g. build_prod, the extends happens in the following order:
extends:
  - .install-packages
  - .build-project
  - .deploy-project
so the script from .deploy-project overwrites the script keyword of the other template jobs.
It works differently for variables: all the variables are merged, and a value is only overwritten when the same variable name is defined more than once.
See your own example updated with variables:
image: node:lts-alpine
stages:
  - install
  - build
  - deploy
.install-packages:
  stage: install
  variables:
    PACKAGE: 'install'
    INSTALL: 'install'
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/
.build-project:
  stage: build
  variables:
    PACKAGE: 'build'
    BUILD: 'build'
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build
.deploy-project:
  stage: deploy
  variables:
    PACKAGE: 'deploy'
    DEPLOY: 'from deploy'
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build
build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'
build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
Now notice how the PACKAGE variable is overwritten with the final value '/app/prod', which comes from the build_prod job itself. At the same time, the other variables from the individual parent jobs are merged to look like this:
variables:
  PACKAGE: "/app/prod"
  INSTALL: install
  BUILD: build
  DEPLOY: from deploy
  ENVIRONMENT: prod
I really found the View merged YAML feature to be the best way to understand how my .yml file will be evaluated.
It's available in CI/CD -> Editor.
It doesn't actually "jump to the last job"; it simply executes the single job you have provided, that is build_prod or build_dev, depending on the commit tag.
As per the docs, when you use extends you are basically just merging everything inside all the template jobs you specified, so the last stage keyword, which comes from the .deploy-project template job, wins.
You should specify each job separately for each stage, and maybe even put your rules into a separate template job, i.e.
.dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
install-dev:
  extends:
    - .dev
    - .install-packages
build-dev:
  extends:
    - .dev
    - .build-project
deploy-dev:
  extends:
    - .dev
    - .deploy-project
You should create similar jobs for the prod environment: define a template job .prod, and create install-prod, build-prod, and deploy-prod jobs, as sketched below.
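For example, the prod counterpart would mirror the .dev template, with the tag rule taken from the question:

.prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'
install-prod:
  extends:
    - .prod
    - .install-packages
build-prod:
  extends:
    - .prod
    - .build-project
deploy-prod:
  extends:
    - .prod
    - .deploy-project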

Gitlab cache not uploading due to policy

I am getting the error "Not uploading cache {name of branch} due to policy" in my GitLab runner. My .yaml file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy
# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
before_script:
  - yarn install --cache-folder .yarn
test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test
pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build
publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist
deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract, the cache. Any tips or corrections are welcome.
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
which sets a pipeline-global default saying you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: key, the default is pull-push (which will get your cache uploading).
I tend to structure my pipelines a little differently than yours, though. I typically define a "prep" step and run the yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: you can then keep your global policy of pull, since this one job overrides it with pull-push.
You can then remove the yarn install from all other jobs, as the cache will be restored for them; see the sketch below.
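For illustration, a downstream job under that layout might then look like this (a sketch; the job name and test command are assumptions):

test:
  image: node:lts
  stage: test
  # No yarn install here: the global cache (policy: pull) restores .yarn and node_modules/.
  script:
    - yarn test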

GitLab CI : merge or replace cache?

I'm using GitLab CI.
I have two jobs in the build stage that build my app differently. The two jobs upload a cache for the branch. I use the compiled sources to run some tests in the test stage.
build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"
build_with_different_conf:
  stage: build
  script:
    - ./gradlew buildDiff --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"
Test:
  stage: test
  script:
    - ./gradlew test --build-cache
In my example, the job build_with_different_conf takes more time to finish.
My question is: does the last build job to finish upload the cache and replace the cache from the first build job, or does it merge its files with those from the preceding job?
Thanks.
From what I understand, you are using the global cache for Gradle dependencies.
Then you want to have some kind of job-to-job cache.
I would do it this way, more or less:
stages:
  - build
  - test
cache:
  paths:
    - <your_gradle_cache>
build_classes:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1d
    paths:
      - <your_build_dir>
build_war:
  stage: build
  dependencies:
    - build_classes
  script:
    - ./gradlew buildDiff --build-cache --quiet
  artifacts:
    expire_in: 1w
    paths:
      - <path_to_your_war>
test_classes:
  stage: test
  dependencies:
    - build_classes
  script:
    - ./gradlew test --build-cache
test_war:
  stage: test
  dependencies:
    - build_war
  script:
    - test # some kind of test to assure your war is in good condition
In this configuration:
build_classes --[.classes]--> build_war
      |                          |
  [.classes]                  [.war]
      |                          |
      v                          v
test_classes                 test_war
PS: don't forget you can use the shell (or whatever your runner's OS provides) to debug and understand more of this. You can put ls -la all over the place.
build:
  stage: build
Jobs in the same stage run in parallel.
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
Cache files are managed by cache:key. This means that if you use the same cache:key for different jobs, they will share the same cache.zip file across jobs, even if you define different cache:paths. If you use the same key but different paths, your cache won't be effective, because every job will overwrite the cache.zip file with its own path contents.
In your case you're using the same cache:key across different jobs.
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
This means the last job to finish will overwrite the cache.zip file (not merge it), and that file will then be used by the next job and by subsequent pipeline jobs that define the same key.
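If the two build jobs genuinely need separate caches, one option is to make the key unique per job, for example by appending the predefined CI_JOB_NAME variable (a sketch, not from the original answer):

build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  cache:
    key: "${CI_COMMIT_REF_SLUG}-${CI_JOB_NAME}"  # one cache archive per job, so the jobs stop overwriting each other
    paths:
      - "*/build"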
Bonus:
Test:
  stage: test
  script:
    - ./gradlew test --build-cache
Also beware that if this job requires the */build directory contents to exist, you have to be careful, and it's better to use artifacts instead: a cache does not always exist, as it is provided on a best-effort basis.
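A minimal sketch of that artifacts approach, reusing the job names from the question:

build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    paths:
      - "*/build"   # hand the build output to later stages as an artifact, not via cache
Test:
  stage: test
  needs:
    - build
  script:
    - ./gradlew test --build-cache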
For example, I use GitLab CI's cache like this:
nodejs_test:
  stage: test
  image: node:12.13-alpine
  before_script:
    - npm install
  script:
    - yarn test
  cache:
    key:
      files:
        # A new cache key will be computed on each package.json change.
        - package.json
    paths:
      - node_modules/
nodejs_build:
  stage: build
  image: node:12.13-alpine
  before_script:
    # In case we miss the cache, we can simply install the packages again.
    # If the cache is there, npm install won't download them again.
    - npm install
  script:
    - yarn build
  cache:
    policy: pull # totally optional
    key:
      files:
        - package.json
      prefix: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
