CircleCI + Cypress: how to change config.yml without committing to GitHub?

I'm looking for a way to conveniently run only a specific set of Cypress spec files on CircleCI.
I can do this locally by specifying the spec files in the cypress.json file, but I don't want to run them locally because it prevents me from using my computer while the tests are running.
I can specify which files to run on CircleCI by listing them in the config.yml.
However, the problem with this approach is that I have to push a PR to GitHub every time I want to run a different set of spec files (with no intention of merging that change into the repo).
I found a discussion on CircleCI's forum that has a potential solution:
https://discuss.circleci.com/t/efficiently-testing-configuration-file-migrating-to-2-0/11620
I tried to implement it, but the build fails on CircleCI because it keeps reading my config.yml file incorrectly.
For example,
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
executors:
  latest-chrome:
    docker:
      - image: "cypress/browsers:node14.7.0-chrome84"
workflows:
  build:
    jobs:
      - cypress/run:
          executor: latest-chrome
          browser: chrome
          spec: "cypress/integration/test_lab.js,\
            cypress/integration/example/example.js"
is converted to this on CircleCI:
version: 2.1
orbs: {cypress: cypress-io/cypress@1}
executors:
  latest-chrome:
    docker:
      - {image: 'cypress/browsers:node14.7.0-chrome84'}
workflows:
  version: 2
  build:
    jobs:
      - build: {}
Note that the config.yml builds correctly when I push it to GitHub - just not when I am using the method mentioned in the link above.
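One option worth noting, separate from the forum thread above: CircleCI 2.1 pipeline parameters let you keep the committed config unchanged and choose the spec files when triggering a pipeline through API v2. A minimal sketch, with <org>, <repo>, and CIRCLECI_TOKEN as placeholders for your own values:
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
parameters:
  spec:
    type: string
    default: "cypress/integration/**/*.js"
executors:
  latest-chrome:
    docker:
      - image: "cypress/browsers:node14.7.0-chrome84"
workflows:
  build:
    jobs:
      - cypress/run:
          executor: latest-chrome
          browser: chrome
          spec: << pipeline.parameters.spec >>
A pipeline with a different spec set can then be triggered without any commit:
curl -X POST "https://circleci.com/api/v2/project/gh/<org>/<repo>/pipeline" \
  -H "Circle-Token: $CIRCLECI_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"branch": "master", "parameters": {"spec": "cypress/integration/test_lab.js"}}'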

Related

How can I set gradle/test to work in the same Docker environment where other CircleCI jobs are running?

I have a CircleCI workflow that has 2 jobs. The second job (gradle/test) depends on the first one creating some files for it.
The problem is that the first job runs inside a Docker container while the second job (gradle/test) does not, so gradle/test fails because it cannot find the files the first job created. How can I make gradle/test work in the same space?
Here is the code of the workflow:
version: 2.1
orbs:
  gradle: circleci/gradle@2.2.0
executors:
  daml-executor:
    docker:
      - image: cimg/openjdk:11.0-node
...
workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
Can anyone help me configure gradle/test correctly?
CircleCI has a mechanism to share artifacts between jobs called "workspace" (well, they have multiple ones, but workspace is what you want here).
Concretely, you would add this at the end of your daml_test job definition, as an additional step:
- persist_to_workspace:
    root: /path/to/folder
    paths:
      - "*"
and that would add all the files from /path/to/folder to the workspace. On the other side, you can "mount" the workspace in your gradle/test job by adding something like this before the step where you need the files:
- attach_workspace:
    at: /whatever/mountpoint
I like to use /tmp/workspace for the path on both sides, but that's just personal preference.
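Put together with the workflow from the question, a rough sketch could look like this; /tmp/workspace and the build-output folder are illustrative assumptions, and pre-steps (available for orb jobs in 2.1 configs) is used to attach the workspace inside gradle/test:
jobs:
  daml_test:
    # ... existing executor and steps that create the files ...
    steps:
      # copy whatever the job produced into the folder to be persisted
      - run: mkdir -p /tmp/workspace && cp -r build-output/* /tmp/workspace/
      - persist_to_workspace:
          root: /tmp/workspace
          paths:
            - "*"

workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
          pre-steps:
            - attach_workspace:
                at: /tmp/workspace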

How to print/debug all jobs of includes in GitLab CI?

In GitLab CI it is possible to include one or many files in a .gitlab-ci.yml file.
It is even possible to nest these includes.
See https://docs.gitlab.com/ee/ci/yaml/includes.html#using-nested-includes.
How can I see the resulting CI file all at once?
Right now, when I debug a CI cycle, I open every single include file and
combine the resulting file structure myself. There has to be a better way.
Example
Content of https://company.com/autodevops-template.yml:
variables:
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  only:
    - master
Content of .gitlab-ci.yml:
include: 'https://company.com/autodevops-template.yml'

image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password

stages:
  - build
  - test
  - production

production:
  environment:
    url: https://example.com
This should result in the following file structure:
image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

stages:
  - build
  - test
  - production

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://example.com
  only:
    - master
→ How can I see this output somewhere?
Environment
Self-Hosted GitLab 13.9.1
As I continued to search for a solution, I read about the "Pipeline Editor" that was released in GitLab version 13.8 this year. It turns out that the feature I am looking for was added to this editor just some days ago:
Version 13.9.0 (2021-02-22), "View an expanded version of the CI/CD configuration" → see the Release Notes for a feature description and introduction video.
To view the fully expanded CI/CD configuration as one combined file,
go to the pipeline editor’s »View merged YAML« tab. This tab displays an
expanded configuration where:
- Configuration imported with include is copied into the view.
- Jobs that use extends display with the extended configuration merged into the job.
- YAML anchors are replaced with the linked configuration.
Usage: Open project → Module »CI / CD« → Submodule »Editor« → Tab »View merged YAML«
GitLab 15.1 offers an alternative:
Link to included CI/CD configuration from the pipeline editor
A typical CI/CD configuration uses the include keyword to import configuration stored in other files or CI/CD templates. When editing or troubleshooting your configuration, though, it can be difficult to understand how all the configuration works together, because the included configuration is not visible in your .gitlab-ci.yml; you only see the include entry.
In this release, we added links to all included configuration files and templates to the pipeline editor.
Now you can easily access and view all the CI/CD configuration your pipeline uses, making it much easier to manage large and complex pipelines.
See Documentation and Issue.
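If you need the merged configuration outside the UI, GitLab also has a CI lint API that can return it. A minimal sketch, assuming your instance still supports the include_merged_yaml parameter (verify against your version's API docs; gitlab.example.com and the token are placeholders):
curl --request POST "https://gitlab.example.com/api/v4/ci/lint" \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --header "Content-Type: application/json" \
  --data '{"content": "include: [\"https://company.com/autodevops-template.yml\"]", "include_merged_yaml": true}'
The response contains a merged_yaml field with the fully expanded configuration; pass your whole .gitlab-ci.yml as the content string.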

How to configure caching for custom base image for Bitbucket Pipelines

I have a Bitbucket Pipeline that uses a custom Docker image as its base, pulled from ECR. I'm also using this image to build dockerized Go apps in the first step with make commands. I want to cache the Go modules that are downloaded during the make build process. But all the examples I've read use Go base images to make caching work. How can I activate caching while using a base image other than the Go image itself? The related parts of my pipeline are below; the Go cache doesn't seem to work.
image:
  name: <ECR Image>
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

definitions:
  caches:
    go: $GOPATH/pkg

pipelines:
  tags:
    '*-beta*':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - go
          script:
            - export ENVIRONMENT=beta
            - echo "Environment is ${ENVIRONMENT}"
            - export DOCKER_IMAGE_BUILDER="${BITBUCKET_REPO_SLUG}:builder"
            - make clean
            - make build BUILD_VER=${BITBUCKET_TAG}.${BITBUCKET_BUILD_NUMBER} APP_NAME=${BITBUCKET_REPO_SLUG} DOCKER_IMAGE_BUILDER=${DOCKER_IMAGE_BUILDER}
            - make test
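One thing worth checking, as an assumption on my part rather than something from the question: cache paths under definitions are resolved outside your build scripts, so $GOPATH may not be expanded there. A common workaround is to pin an absolute path and point it at the module cache specifically; /go is only a guess at where the custom image sets GOPATH:
definitions:
  caches:
    # assumes the custom image uses GOPATH=/go - adjust to your image.
    # with Go modules, downloads land in $GOPATH/pkg/mod
    go-modules: /go/pkg/mod
The step would then list go-modules instead of go under caches.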

Using github actions to publish documentation

What I considered:
- GitHub offers GitHub Pages to host documentation, in either a folder on my master branch or a dedicated gh-pages branch, but that would mean committing build artifacts.
- I can also let readthedocs build and host the docs for me through webhooks, but that means learning how to configure Yet Another Tool at a point in time where I'm trying to consolidate everything related to my project in github-actions.
- I already have a docs-building process that works for me (using sphinx as the builder) and that I can also test locally, so I'd rather just leverage that instead. It has all the rules set up and drops some static html in an artifact - it just doesn't get served anywhere. Handling it in the workflow, where all the other deployment configuration of my project lives, feels better than scattering it over different tools or GitHub-specific options.
Is there already an action in the marketplace that allows me to do something like this?
name: CI
on: [push]
jobs:
  ... # do stuff like building my-project-v1.2.3.whl, testing, etc.
  release_docs:
    steps:
      - uses: actions/sphinx-to-pages@v1  # I wish this existed
        with:
          dependencies:
            - some-sphinx-extension
            - dist/my-project*.whl
          apidoc_args:
            - "--no-toc"
            - "--module-first"
            - "-o docs/autodoc"
            - "src/my-project"
          build-args:
            - "docs"
            - "public"  # the content of this folder will then be served at
                        # https://my_gh_name.github.io/my_project/
In other words, I'd still like to have control over how the build happens and where artifacts are dropped, but I don't want to handle the interaction with readthedocs or github-pages myself.
Actions that I tried
❌ deploy-to-github-pages: runs the docs build in an npm container - will be inconvenient to make it work with python and sphinx
❌ gh-pages-for-github-action: no documentation
❌ gh-pages-deploy: seems to target host envs like jekyll instead of static content, and correct usage with yml syntax not yet documented - I tried a little and couldn't get it to work
❌ github-pages-deploy: looks good, but correct usage with yml syntax not yet documented
✅ github-pages: needs a custom PAT in order to trigger rebuilds (which is inconvenient) and uploads broken html (which is bad, but might be my fault)
✅ deploy-action-for-github-pages: also works, and looks a little cleaner in the logs. Same limitations as the solution above though: it needs a PAT and the served html is still broken.
The eleven other results when searching for github+pages on the action marketplace all look like they want to use their own builder, which sadly never happens to be sphinx.
When managing sphinx with pip (requirements.txt), pipenv, or poetry, we can deploy our documentation to GitHub Pages as follows. The workflow works the same way for other Python-based static site generators like Pelican and MkDocs. Here is a simple example using MkDocs; we just add the workflow as .github/workflows/gh-pages.yml.
For more options, see the latest README: peaceiris/actions-gh-pages: GitHub Actions for GitHub Pages 🚀 Deploy static files and publish your site easily. Static-Site-Generators-friendly.
name: github pages
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Upgrade pip
        run: |
          # install pip>=20.1 to use "pip cache dir"
          python3 -m pip install --upgrade pip
      - name: Get pip cache dir
        id: pip-cache
        run: echo "::set-output name=dir::$(pip cache dir)"
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Install dependencies
        run: python3 -m pip install -r ./requirements.txt
      - run: mkdocs build
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
I got it to work, but there is no dedicated action to build and host sphinx docs on either github pages or readthedocs as of yet, so as far as I am concerned there is quite a bit left to be desired here.
This is my current release_sphinx job that uses the deploy-action-for-github-pages action and uploads to github-pages:
release_sphinx:
  needs: [build]
  runs-on: ubuntu-latest
  container:
    image: python:3.6
    volumes:
      - dist:dist
      - public:public
  steps:
    # check out sources that will be used for autodocs, plus readme
    - uses: actions/checkout@v1
    # download wheel that was built and uploaded in the build step
    - uses: actions/download-artifact@v1
      with:
        name: distributions
        path: dist
    # didn't need to change anything here, but had to add sphinx.ext.githubpages
    # to my conf.py extensions list. that fixes the broken uploads
    - name: Building documentation
      run: |
        pip install dist/*.whl
        pip install sphinx Pallets-Sphinx-Themes
        sphinx-apidoc --no-toc --module-first -o docs/autodoc src/stenotype
        sphinx-build docs public -b dirhtml
    # still need to build and set the PAT to get a rebuild on the pages job,
    # apart from that quite clean and nice
    - name: github pages deploy
      uses: peaceiris/actions-gh-pages@v2.3.1
      env:
        PERSONAL_TOKEN: ${{ secrets.PAT }}
        PUBLISH_BRANCH: gh-pages
        PUBLISH_DIR: public
    # since gh-pages has a history, this step might no longer be necessary.
    - uses: actions/upload-artifact@v1
      with:
        name: documentation
        path: public
Shoutout to the deploy action's maintainer, who resolved the upload problem within 8 minutes of me posting it as an issue.

CircleCI v2 config - how do we filter by owner in a workflow?

In the CircleCI version 1 config, there was the option to specify owner as an option in a deployment. An example from the CircleCI docs ( https://circleci.com/docs/1.0/configuration/ ) with owner: circleci being the key line:
deployment:
  master:
    branch: master
    owner: circleci
    commands:
      - ./deploy_master.sh
In version 2 of the config, there is the ability to use filters and tags to specify which branches are built, but I have yet to find (in the docs, or on the interwebs) anything that gives me the same capability.
What I'm trying to achieve is to run build and test steps on forks, but only run the deploy steps if the repository owner is the main repo. Quite often people fork using the same branch name - in this case master - so having a build fail due to an inability to deploy is counter-intuitive, especially as I would like to use a protected branch in git and only merge commits based on a successful build in a pull request.
I realise we could move to only running builds based on tags being present, but nothing is stopping somebody with a fork also creating a tag in their fork, which puts us back at square one.
Is anybody aware of how to specify the owner of a repo in the version 2 config?
An example from the version 2 config document ( https://circleci.com/docs/2.0/workflows/ ) in case it helps jog somebody's memory:
workflows:
  version: 2
  un-tagged-build:
    jobs:
      - build:
          filters:
            tags:
              ignore: /^v.*/
  tagged-build:
    jobs:
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
disclaimer: Developer Evangelist at CircleCI
That feature is not available on CircleCI 2.0. You can request it here.
As an alternative, you might be able to look at the branch name, say master, as well as the CIRCLE_PR_NUMBER environment variable. If that variable has any value, the build is from a fork and you shouldn't deploy.
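A minimal sketch of that check as a guard step in the deploy job; ./deploy_master.sh is borrowed from the v1 example above, and everything else is an assumption about your setup:
- run:
    name: Deploy (skipped on forks)
    command: |
      # CIRCLE_PR_NUMBER is only set for builds from forked pull requests
      if [ -n "$CIRCLE_PR_NUMBER" ]; then
        echo "Forked pull request detected - skipping deploy."
        exit 0
      fi
      ./deploy_master.sh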
