I run e2e tests in a CI environment, but I cannot see the artifacts in Pipelines.
bitbucket-pipelines.yml:
image: cypress/base:10
options:
  max-time: 20

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run test
        artifacts:
          - /opt/atlassian/pipelines/agent/build/cypress/screenshots/*
          - screenshots/*.png
Maybe I typed the path the wrong way, but I am not sure.
Does anyone have any ideas what I am doing wrong?
I don't think it's documented anywhere, but artifacts only accepts paths relative to $BITBUCKET_CLONE_DIR. When I run my pipeline it says: Cloning into '/opt/atlassian/pipelines/agent/build'..., so I think artifacts paths are relative to that directory. My guess is that if you change it to something like this, it will work:
image: cypress/base:10
options:
  max-time: 20

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run test
        artifacts:
          - cypress/screenshots/*.png
Edit
From your comment I now understand what the real problem is: BitBucket pipelines are configured to stop at any non-zero exit code. That means that the pipeline execution is stopped when cypress fails the tests. Because artifacts are stored after the last step of the pipeline, you won't have any artifacts.
To work around this behavior you have to make sure the pipeline doesn't stop until the images are saved. One way to do this is to prefix the npm run test part with set +e (for more details about this solution, look at this answer: https://community.atlassian.com/t5/Bitbucket-questions/Pipeline-script-continue-even-if-a-script-fails/qaq-p/79469). This prevents the pipeline from stopping, but it also means your pipeline always succeeds, which of course is not what you want. Therefore I recommend that you run the Cypress tests separately and create a second step in your pipeline to check the output of Cypress. Something like this:
# package.json
...
"scripts": {
  "test": "<your test command>",
  "testcypress": "cypress run ..."
...

# bitbucket-pipelines.yml
image: cypress/base:10
options:
  max-time: 20

pipelines:
  default:
    - step:
        name: Run tests
        script:
          - npm install
          - npm run test
          # disable abort-on-error so a failing Cypress run doesn't stop the step
          - set +e
          - npm run testcypress
        artifacts:
          - cypress/screenshots/*.png
    - step:
        name: Evaluate Cypress
        script:
          - chmod +x check_cypress_output.sh
          - ./check_cypress_output.sh
# check_cypress_output.sh
# Check if the screenshots directory exists
if [ -d "./usertest" ]; then
  # If it does, check whether it's empty
  if [ -z "$(ls -A ./usertest)" ]; then
    # Directory is empty: return the "all good" signal to Bitbucket
    exit 0
  else
    # There are images in the directory: return a fault code to Bitbucket
    exit 1
  fi
else
  # Directory doesn't exist: return the "all good" signal to Bitbucket
  exit 0
fi
This script will check whether Cypress created any artifacts, and will fail the pipeline if it did. I'm not sure this is exactly what you need, but it's probably a step in the right direction.
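The same check can also be written as a reusable function. This is just a sketch with my own naming: the directory is a parameter because the ./usertest path above is specific to that project (Cypress itself writes failure screenshots to cypress/screenshots by default):

```shell
# Succeed (return 0) when the given directory is absent or empty,
# fail (return 1) when it contains any files, i.e. failure screenshots.
check_cypress_output() {
  dir="${1:-cypress/screenshots}"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir")" ]; then
    return 1
  fi
  return 0
}
```

Called as `check_cypress_output cypress/screenshots` at the end of the evaluation step, its exit status fails or passes the step directly.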
Recursive search (/**) worked for me with Cypress 3.1.0, due to the additional folders created inside videos and screenshots:
# [...]
pipelines:
  default:
    - step:
        name: Run tests
        # [...]
        artifacts:
          - cypress/videos/**  # Double star provides recursive search.
          - cypress/screenshots/**
This is my working solution. 'cypress:pipeline' is the custom pipeline in my Bitbucket config file that runs Cypress; please try cypress/screenshots/**/*.png in the artifacts section.
"cypress:pipeline": "cypress run --env user=${E2E_USER_EMAIL},pass=${E2E_USER_PASSWORD} --browser chrome --spec cypress/integration/src/**/*.spec.ts"
pipelines:
  custom:
    healthCheck:
      - step:
          name: Integration and E2E Test
          script:
            - npm install
            - npm run cypress:pipeline
          artifacts:
            # store any generated images as artifacts
            - cypress/screenshots/**/*.png
My goal is to enable sharding for Playwright on Bitbucket Pipelines, so I want to use parallel steps along with caching.
My bitbucket-pipelines.yml script looks like this:
image: mcr.microsoft.com/playwright:v1.25.0-focal

definitions:
  caches:
    npm: $HOME/.npm
    browsers: ~/.cache/ms-playwright  # tried $HOME/.cache/ms-playwright ; $HOME/ms-playwright ; ~/ms-playwright
  steps:
    - step: &base
        caches:
          - npm
          - node
          - browsers
    - step: &setup
        script:
          - npm ci
          - npx playwright install --with-deps
    - step: &landing1
        <<: *base
        script:
          - npm run landing1
    - step: &landing2
        <<: *base
        script:
          - npm run landing2
    - step: &landing3
        <<: *base
        script:
          - npm run landing3

pipelines:
  custom:
    landing:
      - step: *setup
      - parallel:
          - step: *landing1
          - step: *landing2
          - step: *landing3
Besides trying various locations for the caches definition, I also tried just setting the repo variable PLAYWRIGHT_BROWSERS_PATH to 0 and hoping that the browsers would appear within node_modules.
Caching browsers in the default location fails in all 4 cases mentioned in the comment in the file. Not caching browsers separately and instead using PLAYWRIGHT_BROWSERS_PATH=0 with the node cache does not work either: each parallel step throws an error saying the browser binaries weren't installed.
I also tried alternating between npm install and npm ci, exhausting all of the solutions listed here.
I hope somebody has been able to resolve this issue specifically for Bitbucket Pipelines, as that is the tool we are currently using in the company.
You can NOT perform the setup instructions in a different step from the ones that need that very setup. Each step runs on a different agent and should be able to complete regardless of the presence of any cache. If the caches are present, the step should only run faster, but with the same result!
If you try to couple steps through the cache, you lose control over what is installed: node_modules will quite often be an arbitrary past folder that does not honor the package-lock.json for the git ref where the step is run.
Also, your "setup" step does not use the caches from the "base" step definition, so you are not even populating the listed "base" caches correctly. Do not fix that; it would cause havoc in your life.
If in need to reuse some setup instructions for similar steps, use yet another YAML anchor.
image: mcr.microsoft.com/playwright:v1.25.0-focal

definitions:
  caches:
    npm: ~/.npm
    browsers: ~/.cache/ms-playwright
  yaml-anchors:  # nobody cares what you call this, but note anchors are not necessarily steps
    - &setup-script >-
        npm ci
        && npx playwright install --with-deps
    # also, the "step:" prefixes are dropped by the YAML anchors
    # and obviated by bitbucket-pipelines-the-application
    - &base-step
      caches:
        - npm
        # - node  # don't!
        - browsers
    - &landing1-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing1
    - &landing2-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing2
    - &landing3-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing3

pipelines:
  custom:
    landing:
      - parallel:
          - step: *landing1-step
          - step: *landing2-step
          - step: *landing3-step
See https://stackoverflow.com/a/72144721/11715259
Bonus: do not use the default node cache, you are wasting time and resources by uploading, storing and downloading node_modules folders that will be wiped by npm ci instructions.
Question: Is there a way to tell gitlab to only wait on an earlier pipeline step, if this step has not been skipped?
Context: We did some optimization steps for our continuous integration pipeline.
With the help of the cache keyword provided by gitlab, we only install our nodejs dependencies once per branch in a separate build step. The node_modules folder is then persisted in the cache and made available to all subsequent steps and pipeline runs for this specific branch. The pipeline only re-installs the dependencies and updates the cache, if the package-lock.json has changed in a new commit.
See: https://docs.gitlab.com/ee/ci/yaml/#cache
The caching does work well so far; the only issue is that I cannot configure subsequent jobs to wait for the install job. If I did so, for example with needs: ["install"], the pipeline would get blocked on all runs where the package-lock.json has not changed and thus the install step has been skipped.
Example:
install:
  stage: "install"
  rules:
    - changes:
        - package-lock.json
  needs: []
  cache:
    key: $CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR
    paths:
      - node_modules/
  script:
    - npm ci ...

build:
  stage: "build"
  needs: []
  cache:
    key: $CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR
    paths:
      - node_modules/
    policy: pull
  script:
    - ./build.sh
  artifacts:
    paths:
      - build/
I want my Gitlab CI job to not run when the commit message starts with a particular string: [maven-scm]
So, I have the below configuration in my .gitlab-ci.yaml file:
image: maven:3.6.3-jdk-11-slim
stages:
- test
test:
stage: test
cache:
key: all
paths:
- ./.m2/repository
script:
- mvn clean checkstyle:check test spotbugs:check
rules:
- if: '$CI_COMMIT_MESSAGE !~ /^\[maven-scm\] .*$/'
My commit message is: [maven-scm] I hope the test job does not run
But the test job still runs, to my frustration. I went over the GitLab documentation for rules but could not find the reason why the job still runs. I am not sure if I am missing something.
It would be great if someone could point me in the right direction.
Update:
I tried the only/except clause instead of the rules. I modified the yaml file to below:
image: maven:3.6.3-jdk-11-slim

stages:
  - test

test:
  stage: test
  cache:
    key: all
    paths:
      - ./.m2/repository
  script:
    - mvn clean checkstyle:check test spotbugs:check
  except:
    variables:
      - $CI_COMMIT_MESSAGE =~ /^\[maven-scm\] .*$/
The job still runs when the commit message starts with [maven-scm].
This was a tricky problem, because the issue was not with the rules section: the problem is actually the regex. You only need to match the desired pattern at the start of the commit message, i.e. you don't need the trailing wildcard (most likely because the commit message variable ends with a trailing newline, which .*$ fails to match). The following works and has been tested:
test-rules:
  stage: test
  rules:
    - if: '$CI_COMMIT_MESSAGE !~ /^\[maven-scm\] /'
  script:
    - echo "$CI_COMMIT_MESSAGE"
This has been tested with the following commit messages:
This commit message will run the job
This commit message [maven-scm] will run the job
[maven-scm] This commit message will NOT run the job
FYI, the GitLab documentation specifies that rules is preferred over only/except, so it is best to stick with rules: if. See onlyexcept-basic.
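The recommended anchored prefix can be sanity-checked locally with bash's =~ operator. This is only an approximation (bash uses POSIX ERE while GitLab evaluates rules with RE2), but for a simple anchored prefix like this one the two agree:

```shell
# The pattern the answer recommends: anchored at the start, no trailing wildcard.
pattern='^\[maven-scm\] '

for msg in \
  'This commit message will run the job' \
  'This commit message [maven-scm] will run the job' \
  '[maven-scm] This commit message will NOT run the job'
do
  if [[ $msg =~ $pattern ]]; then
    echo "SKIPPED: $msg"
  else
    echo "RUNS:    $msg"
  fi
done
```

Only the third message matches, mirroring the tested commit messages above.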
I have a monorepo in GitLab with an Angular frontend and a NestJS backend. I have a package.json for each of them and one in the root. My pipeline consists of multiple stages like these:
stages:
- build
- verify
- test
- deploy
And I have a job in the .pre stage which installs dependencies. I would like to cache those between jobs and also between branches, reinstalling if any package-lock.json changed, but also if there are currently no cached node_modules.
I have a job that looks like this:
prepare:
  stage: .pre
  script:
    - npm run ci-deps # runs npm ci in each folder
  cache:
    key: $CI_PROJECT_ID
    paths:
      - node_modules/
      - frontend/node_modules/
      - backend/node_modules/
  only:
    changes:
      - '**/package-lock.json'
Now the problem with this is that if the cache was somehow cleared, or if I didn't introduce changes to package-lock.json with the first push, this job won't run at all, and therefore everything else will fail because it requires node_modules. If I remove changes:, the job runs for every pipeline. I can still share the cache between jobs that way, but if I push another commit it takes almost 2 minutes to install all the dependencies even though nothing relevant changed. Am I missing something? How can I cache in a way that only reinstalls dependencies when the cache is outdated or doesn't exist?
rules:exists runs before the cache is pulled down, so this was not a workable solution for me.
In GitLab v12.5 we can now use cache:key:files
If we combine that with part of Blind Despair's conditional logic we get a nicely working solution
prepare:
  stage: .pre
  image: node:12
  script:
    - if [[ ! -d node_modules ]]; then npm ci; fi
  cache:
    key:
      files:
        - package-lock.json
      prefix: nm-$CI_PROJECT_NAME
    paths:
      - node_modules/
We can then use this in subsequent build jobs
# let's keep it DRY with templates
.use_cached_node_modules: &use_cached_node_modules
  cache:
    key:
      files:
        - package-lock.json
      prefix: nm-$CI_PROJECT_NAME
    paths:
      - node_modules/
    policy: pull # don't push unnecessarily

build:
  <<: *use_cached_node_modules
  stage: build
  image: node:12
  script:
    - npm run build
We use this successfully across multiple branches with a shared cache.
In the end I figured that I could do this without relying on gitlab ci features, but do my own checks like so:
prepare:
  stage: .pre
  image: node:12
  script:
    - if [[ ! -d node_modules ]] || [[ -n `git diff --name-only HEAD~1 HEAD | grep "\bpackage.json\b"` ]]; then npm ci; fi
    - if [[ ! -d frontend/node_modules ]] || [[ -n `git diff --name-only HEAD~1 HEAD | grep "\bfrontend/package.json\b"` ]]; then npm run ci-deps:frontend; fi
    - if [[ ! -d backend/node_modules ]] || [[ -n `git diff --name-only HEAD~1 HEAD | grep "\bbackend/package.json\b"` ]]; then npm run ci-deps:backend; fi
  cache:
    key: '$CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR'
    paths:
      - node_modules/
      - frontend/node_modules
      - backend/node_modules
The good thing about this is that it will only install dependencies for a specific part of the project when it either doesn't have node_modules yet or its package.json was changed. This will probably be wrong, however, if I push multiple commits and package.json changed in one other than the last. In that case I can still clear the cache and rerun the pipeline manually, but I will try to further improve my script and update my answer.
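The repeated per-folder condition can be factored into small helpers. This is only a sketch with hypothetical names (changed_in_last_commit and needs_install are my own, not part of the original pipeline), and it inherits the same limitation of only diffing the last commit:

```shell
# Return 0 when a file matching the regex changed between HEAD~1 and HEAD.
changed_in_last_commit() {
  git diff --name-only HEAD~1 HEAD | grep -q "$1"
}

# Reinstall when node_modules is missing, or when the manifest changed.
needs_install() {
  dir="$1"       # e.g. frontend
  manifest="$2"  # e.g. ^frontend/package\.json$
  [ ! -d "$dir/node_modules" ] || changed_in_last_commit "$manifest"
}
```

A script line then becomes: if needs_install frontend '^frontend/package\.json$'; then npm run ci-deps:frontend; fi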
I had the same problem, and I was able to solve it using the keyword rules instead of only|except. With it, you can declare more complex cases, using if, exists, changes, for example. Also, this :
Rules can't be used in combination with only/except because it is a replacement for that functionality. If you attempt to do this, the linter returns a key may not be used with rules error.
-- https://docs.gitlab.com/ee/ci/yaml/#rules
All the more reason to switch to rules. Here's my solution, which executes npm ci:
if the package-lock.json file was modified
OR
if the node_modules folder does not exist (in the case of new branches or after cache cleaning):
npm-ci:
  image: node:lts
  cache:
    key: $CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR
    paths:
      - node_modules/
  script:
    - npm ci
  rules:
    - changes:
        - package-lock.json
    - exists:
        - node_modules
      when: never
Hope it helps!
I have a GitLab CI/CD pipeline, and generally it works OK.
My problem is that my testing takes more than 10 minutes and is not stable (YET..), so occasionally it randomly fails on a minor test that I don't care about.
Generally it works after a retry, but if I need an urgent deploy I have to wait another 10 minutes.
When we have an urgent bug, another 10 minutes is waaaay too much time, so I am looking for a way to force a deploy even when the tests fail.
I have the following pseudo CI YAML scenario that I have failed to find a way to accomplish:
stages:
  - build
  - test
  - deploy

setup_and_build:
  stage: build
  script:
    - build.sh

test_branch:
  stage: test
  script:
    - test.sh

deploy:
  stage: deploy
  script:
    - deploy.sh
  only:
    - master
I'm looking for a way to deploy manually if the test phase failed, but if I add when: manual to the deploy job, then the deploy never happens automatically.
A flag like when: auto_or_manual_on_previous_fail would be great, but currently there is no such flag in GitLab CI.
Do you have any idea for a workaround or a way to implement it?
Another approach would be to skip the test in case of an emergency release.
For that, follow "Skipping Tests in GitLab CI" from Andi Scharfstein, and:
add "skip test" in the commit message triggering that emergency release
check a variable on the test stage
That is:
.test-template: &test-template
  stage: tests
  except:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[skip[ _-]tests?\]/i
      - $SKIP_TESTS
As you can see above, we also included the variable $SKIP_TESTS in the except block of the template.
This is helpful when triggering pipelines manually from GitLab’s web interface.
It's possible to control the when attribute of your deploy job by leveraging parent-child pipelines (GitLab 12.7 and above). This lets you decide whether the job in the child pipeline runs as manual or always.
Essentially, you will need to have a .gitlab-ci.yml with:
stages:
  - build
  - test
  - child-deploy

The child-deploy stage will be used to run the child pipeline, in which the deploy job will run with the desired attribute.
Your test job can generate the code for the deploy section as an artifact. For example, in the after_script section of your test, you can check the value of the CI_JOB_STATUS builtin variable to decide whether you want to write the child job to run as manual or always:
my_test:
  stage: test
  script:
    - echo "testing, exit 0 on success, exit 1 on failure"
  after_script:
    - if [ "$CI_JOB_STATUS" == "success" ]; then WHEN=always; else WHEN=manual; fi
    - |
      cat << EOF > deploy.yml
      stages:
        - deploy

      my_deploy:
        stage: deploy
        rules:
          - when: $WHEN
        script:
          - echo "deploying"
      EOF
  artifacts:
    when: always
    paths:
      - deploy.yml

Note that variable expansion is enabled in the heredoc because the EOF delimiter is unquoted, so $WHEN is replaced with the value computed in after_script. If the generated file needed a literal dollar sign, you would escape it (or quote the delimiter as 'EOF', which disables expansion entirely).
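The effect of quoting the heredoc delimiter can be checked locally. This small sketch (the file names are arbitrary) writes the same line twice, once with an unquoted delimiter and once with a quoted one:

```shell
WHEN=always
tmp=$(mktemp -d)

# Unquoted delimiter: $WHEN is expanded while the heredoc is written out
cat << EOF > "$tmp/expanded.yml"
when: $WHEN
EOF

# Quoted delimiter: the file keeps the literal text $WHEN
cat << 'EOF' > "$tmp/literal.yml"
when: $WHEN
EOF

cat "$tmp/expanded.yml"   # when: always
cat "$tmp/literal.yml"    # when: $WHEN
```

Only the first form produces a deploy.yml that GitLab can interpret, since the child pipeline has no WHEN variable of its own.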
Finally, you can trigger the child pipeline with deploy.yml:

gen_deploy:
  stage: child-deploy
  when: always
  trigger:
    include:
      - artifact: deploy.yml
        job: my_test
    strategy: depend