Caching Playwright browser binaries in Bitbucket Pipelines

My goal is to enable sharding for Playwright on Bitbucket Pipelines, so I want to use parallel steps along with caching.
My bitbucket-pipelines.yml script looks like this:
image: mcr.microsoft.com/playwright:v1.25.0-focal
definitions:
  caches:
    npm: $HOME/.npm
    browsers: ~/.cache/ms-playwright # tried $HOME/.cache/ms-playwright ; $HOME/ms-playwright ; ~/ms-playwright
  steps:
    - step: &base
        caches:
          - npm
          - node
          - browsers
    - step: &setup
        script:
          - npm ci
          - npx playwright install --with-deps
    - step: &landing1
        <<: *base
        script:
          - npm run landing1
    - step: &landing2
        <<: *base
        script:
          - npm run landing2
    - step: &landing3
        <<: *base
        script:
          - npm run landing3
pipelines:
  custom:
    landing:
      - step: *setup
      - parallel:
          - step: *landing1
          - step: *landing2
          - step: *landing3
Besides trying various locations for the caches definition, I also tried simply setting the repository variable PLAYWRIGHT_BROWSERS_PATH to 0 in the hope that the browsers would end up inside node_modules.
Caching the browsers in the default location fails in all four cases mentioned in the comment in the file.
Not caching the browsers separately and instead using PLAYWRIGHT_BROWSERS_PATH=0 together with the node cache does not work either: each parallel step throws an error saying the browser binaries weren't installed.
I also tried switching between npm install and npm ci, exhausting all of the solutions listed here.
I hope somebody has managed to resolve this issue specifically for Bitbucket Pipelines, as that is the tool we currently use at our company.

You can NOT perform the setup instructions in a different step from the ones that need that very setup. Each step runs on a different agent and should be able to complete regardless of the presence of any cache. If the caches are present, the step should merely run faster, with the same result!
If you try to couple steps through the cache, you lose control over what is installed: node_modules will quite often be an arbitrary stale folder that does not honor the package-lock.json of the git ref the step runs on.
Also, your "setup" step does not use the caches from the "base" step definition, so it is not even populating the listed "base" caches. Do not fix that; it would wreak havoc on your life.
If you need to reuse some setup instructions across similar steps, use yet another YAML anchor.
image: mcr.microsoft.com/playwright:v1.25.0-focal
definitions:
  caches:
    npm: ~/.npm
    browsers: ~/.cache/ms-playwright
  yaml-anchors: # nobody cares what you call this, but note anchors are not necessarily steps
    - &setup-script >-
      npm ci
      && npx playwright install --with-deps
    # also, the "step:" prefixes are dropped by the YAML anchors
    # and obviated by bitbucket-pipelines-the-application
    - &base-step
      caches:
        - npm
        # - node # don't!
        - browsers
    - &landing1-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing1
    - &landing2-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing2
    - &landing3-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing3
pipelines:
  custom:
    landing:
      - parallel:
          - step: *landing1-step
          - step: *landing2-step
          - step: *landing3-step
See https://stackoverflow.com/a/72144721/11715259
Bonus: do not use the default node cache; you are wasting time and resources by uploading, storing and downloading node_modules folders that the npm ci instruction will wipe anyway.
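As an aside on the stated goal of sharding: Playwright's test runner has a built-in --shard flag, so if the landing* npm scripts are plain playwright test invocations (an assumption; the question does not show them), each parallel step could run one shard instead of a separate script. A minimal sketch of one such step:
    - &landing1-step
      <<: *base-step
      script:
        - *setup-script
        - npx playwright test --shard=1/3 # siblings run --shard=2/3 and --shard=3/3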

Related

How to trigger GitLab CI only if the current commit contains changes to some file?

I have a GitLab stage that I only want to run if there are changes to some file. I have tried using only and if: changes..., however those do not work the way I want.
I want this stage to be triggered only if the current commit (no matter whether it is a merge request or a push to main) contains changes to some file.
My stage:
cache frontend:
  stage: cache
  cache: &frontend_cache
    key: frontend-packages
    paths:
      - frontend/node_modules/
      - frontend/.yarn/
  script:
    - cd frontend
    - yarn install --immutable
  only:
    changes:
      - frontend/yarn.lock
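No answer is recorded here, but one hedged possibility is GitLab's newer rules:changes syntax, which is evaluated for both merge request pipelines and branch pipelines. A sketch that replaces the only: block above, assuming the default branch is main:
cache frontend:
  stage: cache
  cache: *frontend_cache
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - frontend/yarn.lock
    - if: '$CI_COMMIT_BRANCH == "main"'
      changes:
        - frontend/yarn.lock
  script:
    - cd frontend
    - yarn install --immutable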

[gitlab ci]: How to wait for a job, only if this job has started, but not if it is skipped

Question: Is there a way to tell GitLab to only wait on an earlier pipeline step if that step has not been skipped?
Context: We did some optimization steps for our continuous integration pipeline.
With the help of the cache keyword provided by gitlab, we only install our nodejs dependencies once per branch in a separate build step. The node_modules folder is then persisted in the cache and made available to all subsequent steps and pipeline runs for this specific branch. The pipeline only re-installs the dependencies and updates the cache, if the package-lock.json has changed in a new commit.
See: https://docs.gitlab.com/ee/ci/yaml/#cache
The caching does work well so far; the only issue is that I cannot configure subsequent jobs to wait for the install job. If I did so - for example with needs: ["install"] - the pipeline would be blocked on every run in which package-lock.json has not changed and the install step has therefore been skipped.
Example:
install:
  stage: "install"
  rules:
    - changes:
        - package-lock.json
  needs: []
  cache:
    key: $CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR
    paths:
      - node_modules/
  script:
    - npm ci ...
build:
  stage: "build"
  needs: []
  cache:
    key: $CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR
    paths:
      - node_modules/
    policy: pull
  script:
    - ./build.sh
  artifacts:
    paths:
      - build/
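No answer is recorded here either. One hedged possibility, assuming GitLab 13.10 or later, is needs:optional, which makes the dependency apply only when the needed job is actually part of the pipeline:
build:
  stage: "build"
  needs:
    - job: install
      optional: true # wait for install only when its rules added it to the pipeline
  cache:
    key: $CI_COMMIT_REF_SLUG-$CI_PROJECT_DIR
    paths:
      - node_modules/
    policy: pull
  script:
    - ./build.sh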

Circleci + cypress: how to change config.yml without committing to github?

I'm looking for a way to conveniently run only a specific set of Cypress spec files on Circleci.
I can do this locally by specifying the spec files in the Cypress.json file, but I don't want to run these locally as it prevents me from using my computer while tests are running.
I can specify which files to run on circleci by listing them in the config.yml.
However, the problem with this approach is that I have to push a PR to github every time I want to run a different set of spec files (with no intention of merging this change to the repo).
I found this discussion on circle's forum that has a potential solution:
https://discuss.circleci.com/t/efficiently-testing-configuration-file-migrating-to-2-0/11620
I tried to implement it, but the build fails on circleci because it keeps reading my config.yml file incorrectly.
For example,
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
executors:
  latest-chrome:
    docker:
      - image: "cypress/browsers:node14.7.0-chrome84"
workflows:
  build:
    jobs:
      - cypress/run:
          executor: latest-chrome
          browser: chrome
          spec:
            "cypress/integration/test_lab.js,\
            cypress/integration/example/example.js"
is converted to this on circleci:
version: 2.1
orbs: {cypress: cypress-io/cypress@1}
executors:
  latest-chrome:
    docker:
      - {image: 'cypress/browsers:node14.7.0-chrome84'}
workflows:
  version: 2
  build:
    jobs:
      - build: {}
Note that the config.yml builds correctly when I push it to github - just not when I am using the method mentioned in the link I provided above.
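No answer is recorded here. One hedged alternative to editing config.yml per run: CircleCI 2.1 pipeline parameters, which can be set at trigger time through the v2 API, so the spec list never has to be committed. A sketch (the executor is the one from the question; ORG/REPO and the token are placeholders):
version: 2.1
parameters:
  spec:
    type: string
    default: "cypress/integration/**/*.js"
orbs:
  cypress: cypress-io/cypress@1
executors:
  latest-chrome:
    docker:
      - image: "cypress/browsers:node14.7.0-chrome84"
workflows:
  build:
    jobs:
      - cypress/run:
          executor: latest-chrome
          browser: chrome
          spec: << pipeline.parameters.spec >>
A run with a different spec list can then be triggered without any commit:
curl -X POST https://circleci.com/api/v2/project/gh/ORG/REPO/pipeline \
  -H "Circle-Token: $CIRCLE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"parameters": {"spec": "cypress/integration/test_lab.js"}}'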

Is there a way to have GitLab Cache be consumed without being written to?

I have a gitlab job that downloads a bunch of dependencies and stuffs them in a cache (if necessary), then I have a bunch of jobs that use that cache. I notice at the end of the consuming jobs, they spend a bunch of time creating a new cache, even though they made no changes to it.
Is it possible to have them act only as consumers? Read-only?
cache:
  paths:
    - assets/

configure:
  stage: .pre
  script:
    - conda env update --prefix ./assets/env/base -f ./environment.yml;
    - source activate ./assets/env/base
    - bash ./download.sh

parse1:
  stage: build
  script:
    - source activate ./assets/env/base;
    - ./build.sh -b test -s 2
  artifacts:
    paths:
      - build

parse2:
  stage: build
  script:
    - source activate ./assets/env/base;
    - ./build.sh -b test -s 2
  artifacts:
    paths:
      - build
In the very detailed .gitlab-ci.yml documentation is a reference to a cache setting called policy. GitLab caches have the concept of push (aka write) and pull (aka read). By default it is set to pull-push (read at the beginning and write at the end).
If you know the job does not alter the cached files, you can skip the upload step by setting policy: pull in the job specification. Typically, this would be twinned with an ordinary cache job at an earlier stage to ensure the cache is updated from time to time. (From the .gitlab-ci.yml reference, under cache:policy.)
Which pretty much describes this situation: the job configure updates the cache, and the parse jobs do not alter the cache.
In the consuming jobs, add:
cache:
  paths:
    - assets/
  policy: pull
For clarity, it probably wouldn't hurt to make that explicit in the global setting:
cache:
  paths:
    - assets/
  policy: pull-push
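Put together for the pipeline in the question, a sketch (the YAML anchor just avoids repeating the paths; parse2 would mirror parse1):
cache: &global_cache
  paths:
    - assets/
  policy: pull-push # configure inherits this and updates the cache

configure:
  stage: .pre
  script:
    - conda env update --prefix ./assets/env/base -f ./environment.yml
    - source activate ./assets/env/base
    - bash ./download.sh

parse1:
  stage: build
  cache:
    <<: *global_cache
    policy: pull # download only; skip the upload at the end of the job
  script:
    - source activate ./assets/env/base
    - ./build.sh -b test -s 2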
TL;DR: override the cache configuration in the consuming jobs with no paths element.
You probably also have to add a key element to your global cache configuration; I have actually never used it without a key element.
See the cache documentation here

Bitbucket / I cannot see the artifacts in pipelines

I run e2e tests in a CI environment, but I cannot see the artifacts in Pipelines.
bitbucket-pipelines.yml:
image: cypress/base:10
options:
  max-time: 20
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run test
        artifacts:
          - /opt/atlassian/pipelines/agent/build/cypress/screenshots/*
          - screenshots/*.png
Maybe I typed the path the wrong way, but I am not sure.
Does anyone have any ideas what I am doing wrong?
I don't think it's documented anywhere, but artifacts only accepts a directory relative to $BITBUCKET_CLONE_DIR. When I run my pipeline it says: Cloning into '/opt/atlassian/pipelines/agent/build'..., so artifact paths are resolved relative to that path. My guess is that if you change it to something like this, it will work:
image: cypress/base:10
options:
  max-time: 20
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run test
        artifacts:
          - cypress/screenshots/*.png
Edit
From your comment I now understand what the real problem is: Bitbucket Pipelines is configured to stop at any non-zero exit code. That means the pipeline execution stops when Cypress fails the tests. Because artifacts are stored after the last step of the pipeline, you won't get any artifacts.
To work around this behavior you have to make sure the pipeline doesn't stop until the images are saved. One way to do this is to run set +e before the Cypress command (for more details about this solution, see this answer: https://community.atlassian.com/t5/Bitbucket-questions/Pipeline-script-continue-even-if-a-script-fails/qaq-p/79469). This prevents the pipeline from stopping, but it also means your pipeline always finishes successfully, which of course is not what you want. Therefore I recommend that you run the Cypress tests separately and add a second step to your pipeline that checks the output of Cypress. Something like this:
# package.json
...
"scripts": {
  "test": "<your test command>",
  "testcypress": "cypress run ..."
...
# bitbucket-pipelines.yml
image: cypress/base:10
options:
  max-time: 20
pipelines:
  default:
    - step:
        name: Run tests
        script:
          - npm install
          - npm run test
          - set +e
          - npm run testcypress # set +e keeps the step going even if Cypress fails
        artifacts:
          - cypress/screenshots/*.png
    - step:
        name: Evaluate Cypress
        script:
          - chmod +x check_cypress_output.sh
          - ./check_cypress_output.sh
# check_cypress_output.sh
# Check if the directory exists
if [ -d "./usertest" ]; then
  # If it does, check if it's empty
  if [ -z "$(ls -A ./usertest)" ]; then
    # Return the "all good" signal to BitBucket if the directory is empty
    exit 0
  else
    # Return a fault code to BitBucket if there are any images in the directory
    exit 1
  fi
else
  # Return the "all good" signal to BitBucket
  exit 0
fi
This script will check whether Cypress created any artifacts and will fail the pipeline if it did. I'm not sure this is exactly what you need, but it's probably a step in the right direction.
Recursive search (/**) worked for me with Cypress 3.1.0, due to the additional folder created inside videos and screenshots:
# [...]
pipelines:
  default:
    - step:
        name: Run tests
        # [...]
        artifacts:
          - cypress/videos/** # Double star provides recursive search.
          - cypress/screenshots/**
This is my working solution. cypress:pipeline is the custom npm script in my package.json that runs Cypress, and the healthCheck custom pipeline in my Bitbucket config file calls it. Please try cypress/screenshots/**/*.png in the artifacts section.
"cypress:pipeline": "cypress run --env user=${E2E_USER_EMAIL},pass=${E2E_USER_PASSWORD} --browser chrome --spec cypress/integration/src/**/*.spec.ts"
pipelines:
  custom:
    healthCheck:
      - step:
          name: Integration and E2E Test
          script:
            - npm install
            - npm run cypress:pipeline
          artifacts:
            # store any generated images as artifacts
            - cypress/screenshots/**/*.png
