Run a CI job only if another manual job was triggered - continuous-integration

I have a .gitlab-ci.yml with 2 deploys and a job to check that the deployed APIs are reachable.
stages:
  - deploy-stage
  - deploy-prod
  - check

service:stage:
  stage: deploy-stage

service:production:
  stage: deploy-prod
  when: manual

check:stage:
  stage: check

check:production:
  stage: check
  dependencies:
    - service:production
At the moment, despite the specified dependencies, check:production runs even when the service:production job is skipped (because I did not manually trigger it).
I could add allow_failure: false to service:production, so that check:production is not run (indirectly, because the whole pipeline gets halted), but I would prefer a way to express the direct dependency check:production → service:production more explicitly.
How can I configure check:production to run automatically, but only when service:production was manually triggered?

You can use the needs keyword to state that one job needs another.
stages:
  - deploy-stage
  - deploy-prod
  - check

service:stage:
  stage: deploy-stage

service:production:
  stage: deploy-prod
  when: manual

check:stage:
  stage: check

check:production:
  stage: check
  dependencies:
    - service:production
  needs: ['service:production']
In this example, check:production won't run if service:production has failed, is a manual job and hasn't been run yet, was skipped, or was cancelled.
needs can also be used to let a job start before other, unrelated jobs from earlier stages finish. This means that check:production can start as soon as service:production finishes, even if service:stage is still running.
Here's the docs for more information on this and other keywords: https://docs.gitlab.com/ee/ci/yaml/#needs
You can use the dependencies keyword for similar results, but if the other job fails or is an untriggered manual job, the dependent job will still run and might fail depending on the outcome of the first job. needs is the newer and improved option.
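If you also want the artifact handling that dependencies gives you, needs has an expanded form that covers it. A minimal sketch (the script line is a hypothetical placeholder, not part of the original pipeline):

check:production:
  stage: check
  needs:
    - job: "service:production"  # run only after service:production; skipped if it was never triggered
      artifacts: true            # also download its artifacts, replacing the dependencies keyword
  script:
    - ./check_api.sh production  # hypothetical check script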

Related

Cypress binary is missing and Gitlab CI pipeline

I'm trying to integrate cypress testing into gitlab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the .gitlab-ci.yml file, as well as a screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/  # added this, also have tried src/ui/cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment. Therefore, you need to install your dependencies in each job. Add your yarn install command to the ui_test job.
The reason your cache: did not restore in the job from the previous stage is that caches are per-job by default (i.e. caches are restored from previous pipelines that ran the same job). If you want subsequent jobs in the same pipeline to use the cache, set cache:key: to something like $CI_COMMIT_SHA, or use cache:key:files: to key the cache on a file, like your lockfile(s).
Also, you can only cache paths inside the workspace, so you won't be able to cache/restore /root/.cache/...; instead, change the cache location to somewhere in the workspace.
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.
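Putting those points together, one possible shape for the pipeline looks like this. It assumes the lockfile lives at src/ui/yarn.lock and redirects the Cypress binary cache into the workspace via the CYPRESS_CACHE_FOLDER variable; adjust the paths to your layout:

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  # keep the Cypress binary inside the project workspace so it can be cached
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/.cache/Cypress"

cache:
  key:
    files:
      - src/ui/yarn.lock          # assumed lockfile location
  paths:
    - src/ui/node_modules/
    - .cache/Cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    # each job starts in a fresh environment, so install again (fast when the cache hits)
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
    - yarn run runCypressHeadless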

How to make github actions step fail if the previous step has failed?

I have written a CI process to build and deploy Docker images using GitHub Actions. There is strange behavior: when the Makefile fails for any reason, the steps in the GitHub Actions workflow continue instead of stopping at the failed step. I am using only one job in my CI and multiple steps within it.
Example error: make[1]: [../../Makefile:22: docker-build-generic] Error 1 (ignored)
GitHub Actions jobs do not actually run sequentially: they will run in parallel if there are enough resources. If one job relies on another, use the needs syntax.
Modified example from the documentation:
jobs:
  job1:
  job2:
    needs: job1
Here job2 will wait for job1 to succeed before running. If job1 fails, job2 will not run.
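Applied to the setup in the question, a sketch might look like the following; splitting the Makefile calls into two jobs is an assumption, and docker-deploy is a hypothetical target (only docker-build-generic appears in the error output):

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make docker-build-generic   # the step (and job) fails if make exits non-zero
  deploy:
    needs: build                         # skipped entirely when build fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make docker-deploy          # hypothetical deploy target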

How to write code which runs upon a Gitlab Job's timeout/fail?

Sometimes a job timeouts and gets killed which results in specific resources remaining locked.
Thus the next job fails instantly because it can't access some resources.
Is there a way to tell GitLab to run a script upon timeout/failure (which would unlock all "possibly locked" resources)?
As described in the documentation (https://docs.gitlab.com/ee/ci/yaml/#when), you could create a separate job in the next stage that uses when: on_failure to clean up resources.
cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build when failed
  when: on_failure
For this to work, your previous job would of course need to fail. You can define a timeout per job, for example:
build:
  script: build.sh
  timeout: 10 minutes
An on_failure job is executed when at least one job in an earlier stage fails, so if you have multiple jobs before it, you might need additional variables to determine whether the cleanup should run.
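Putting the two together, a sketch of a pipeline where a timed-out (or otherwise failed) build triggers an unlock job; unlock_resources.sh is a hypothetical script standing in for whatever releases your locks:

stages:
  - build
  - cleanup

build:
  stage: build
  script:
    - ./build.sh
  timeout: 10 minutes          # the job is killed and marked failed after 10 minutes

unlock_resources:
  stage: cleanup
  script:
    - ./unlock_resources.sh    # hypothetical script that releases the locked resources
  when: on_failure             # runs only if a job in an earlier stage failed, including a timeout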

Trigger action/job if pipeline job/stage fails

I have a GitLab pipeline one of whose jobs writes some content into a file, a simple .txt file, and pushes a Git tag. After this job, other jobs are executed.
I want to trigger an automatic action/job or something that undoes the .txt file writing and deletes the Git tag in case any of the jobs of the pipeline fails.
Is it possible to do such a thing? Is there some kind of GitLab pipeline job that triggers only in case another fails?
You're looking for the when: on_failure option. The docs for it are here: https://docs.gitlab.com/ee/ci/yaml/README.html#when
If you put this on a job, that job will only run if another job from an earlier stage in your pipeline fails. If you only need one at the end of your pipeline, this will work fine, but you can also use one after each stage that needs it. Here's an example from the docs:
stages:
  - build
  - cleanup_build
  - test
  - deploy
  - cleanup

build_job:
  stage: build
  script:
    - make build

cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build when failed
  when: on_failure

test_job:
  stage: test
  script:
    - make test

deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always
The job in the build stage runs whatever's necessary to build the project, but if anything fails, the job in the cleanup_build stage will run and remove any build artifacts, or whatever else is needed.
If the build job passes, the test stage job runs the tests. If these fail, there is no on_failure job for that stage, so the pipeline simply fails at that point. Otherwise, we continue to the deploy stage.
Our deploy stage job is marked with when: manual, which is another useful option for your pipelines. It means that this job will not run until a GitLab user or an API call triggers it.
Our last job in the cleanup stage has when: always. This means that no matter what happens in any previous stage, it will always run. This is great for a final cleanup, and if a deployment fails we could perform a rollback or anything else we might need.
The docs for the when keyword and all its options are here: https://docs.gitlab.com/ee/ci/yaml/README.html#when
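For the concrete case in the question, the on_failure cleanup job might look roughly like this. The tag name, the reverted commit, and the push credentials are all assumptions, since that part of the setup isn't shown:

undo_release:
  stage: cleanup
  when: on_failure
  script:
    # delete the tag pushed by the earlier job ($RELEASE_TAG is hypothetical)
    - git push origin ":refs/tags/${RELEASE_TAG}"
    # undo the .txt change, assuming it was the most recent commit on the branch
    - git revert --no-edit HEAD
    - git push origin "HEAD:${CI_COMMIT_REF_NAME}"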

Disable a given manual job for previous pipelines

I have this proof-of-concept (just displaying the relevant parts) in a GitLab CI pipeline:
deploy:development:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: development

deploy:test:
  stage: deploy
  environment:
    name: test
    url: https://env.url.tld
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      when: manual
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: test
I would like to disable/deprecate the previous deploy:test jobs when a new one is created. Basically, the deploy:test job should only be enabled for the current/latest pipeline, preventing an old build from taking over a recent one.
I'm not saying it should happen instantaneously; if it's running, it's fine to let it finish, but if it failed and a new one is created, the old (failed) one should be disabled as well. The same goes for the current one: if it ran successfully, it should be disabled; that would be the optimal state.
Is there a configuration setting that will let me do that? I have checked Auto-cancel redundant, pending pipelines and Skip outdated deployment jobs in Settings > CI/CD > General pipelines, but still the job doesn't get disabled on previous pipelines.
It should work better with GitLab 15.5 (October 2022):
Prevent outdated deployment jobs
Previously, some outdated jobs could be manually started or retried even when Skip outdated deployment jobs is enabled.
We have updated the logic for this setting to check the deployment status when a job starts.
The job does not execute if the deployment job is outdated due to a more recent deployment.
This check ensures that outdated deployment jobs are not accidentally started, overwriting more recent code changes in production.
See Documentation and Issue.
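One thing to note: the outdated-deployment check is tracked per environment, so it only covers jobs that declare an environment: block. In the snippet above, deploy:development doesn't have one; a sketch of adding it (the environment name is an assumption):

deploy:development:
  stage: deploy
  environment:
    name: development          # assumed name; the check compares deployments within this environment
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: development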
Did you try adding the interruptible keyword?
It seems you have to add interruptible: true to your YAML.
For example:
deploy:development:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  interruptible: true
  variables:
    ANSIBLE_INVENTORY: development
Ref: https://gitlab.com/gitlab-org/gitlab/-/issues/32022
