Disable a given manual job for previous pipelines - continuous-integration

I have this proof-of-concept (just displaying the relevant parts) in a GitLab CI pipeline:
deploy:development:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: development

deploy:test:
  stage: deploy
  environment:
    name: test
    url: https://env.url.tld
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      when: manual
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: test
I would like to disable/deprecate the previous deploy:test jobs when a new one is created. Basically, the deploy:test job should only be enabled for the current/latest pipeline, preventing an old build from taking over a more recent one.
I'm not saying that it should happen instantaneously; if it's running, it's fine to let it finish. But if it failed and a new one is created, the old (failed) one should be disabled as well. Same for the current one: if it ran successfully, it should be disabled — that would be the optimal state.
Is there a configuration setting that will let me do that? I have checked Auto-cancel redundant, pending pipelines and Skip outdated deployment jobs in Settings > CI/CD > General pipelines, but still the job doesn't get disabled on previous pipelines.

It should work better with GitLab 15.5 (October 2022):
Prevent outdated deployment jobs
Previously, some outdated jobs could be manually started or retried even when Skip outdated deployment jobs is enabled.
We have updated the logic for this setting to check the deployment status when a job starts.
The job does not execute if the deployment job is outdated due to a more recent deployment.
This check ensures that outdated deployment jobs are not accidentally started, overwriting more recent code changes in production.
See Documentation and Issue.
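Note that the "Skip outdated deployment jobs" setting only tracks deployment jobs, i.e. jobs that define an environment. deploy:test already does; if deploy:development should be covered as well, a minimal sketch (the environment name is an assumption) would be:
deploy:development:
  stage: deploy
  # without an environment, the "skip outdated deployment jobs" check does not apply
  environment:
    name: development
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: development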

Did you try adding the "interruptible" tag?
It seems like you have to add interruptible: true to your yaml.
For example:
deploy:development:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  interruptible: true
  variables:
    ANSIBLE_INVENTORY: development
Ref: https://gitlab.com/gitlab-org/gitlab/-/issues/32022

Related

Cypress binary is missing and Gitlab CI pipeline

I'm trying to integrate cypress testing into gitlab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the .gitlab-ci.yml file, as well as a screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/   # added this, also have tried src/ui/cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment. Therefore, you need to install your dependencies in each job. Add your yarn install command to the ui_test job.
The reason your cache: did not restore in the job from the previous stage is that caches are per job by default (i.e. a job restores caches saved by previous pipelines that ran that same job). If you want subsequent jobs in the same pipeline to use the cache, set cache:key: to something like $CI_COMMIT_SHA, or use cache:key:files: to key the cache on a file, like your lockfile(s).
Also, you can only cache paths inside the project workspace. So you won't be able to cache/restore /root/.cache/... -- instead, change the cache location to somewhere in the workspace.
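Putting these points together, a minimal sketch (the Cypress cache path and cache key are assumptions; the yarn flags are carried over from the question):
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  # keep the Cypress binary inside the workspace so it can be cached
  CYPRESS_CACHE_FOLDER: $CI_PROJECT_DIR/src/ui/.cypress-cache

cache:
  # share the cache across jobs that use the same lockfile
  key:
    files:
      - src/ui/yarn.lock
  paths:
    - src/ui/node_modules/
    - src/ui/.cypress-cache/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    # each job runs in its own environment, so install dependencies here too
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
    - yarn run runCypressHeadless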
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.

Run a CI job only if another manual job was triggered

I have a .gitlab-ci.yml with 2 deploys and a job to check that the deployed APIs are reachable.
stages:
  - deploy-stage
  - deploy-prod
  - check

service:stage:
  stage: deploy-stage

service:production:
  stage: deploy-prod
  when: manual

check:stage:
  stage: check

check:production:
  stage: check
  dependencies: service:production
At the moment, despite the specified dependencies, check:production runs even when the service:production job is skipped (I did not manually trigger it).
I could add allow_failure: false to service:production, so that check:production is not run (indirectly, because the whole pipeline gets halted), but I would prefer a way to express more explicitly the direct dependency of check:production → service:production.
How to configure check:production to run automatically, only when service:production was manually triggered?
You can use the needs keyword to state that one job needs another.
stages:
  - deploy-stage
  - deploy-prod
  - check

service:stage:
  stage: deploy-stage

service:production:
  stage: deploy-prod
  when: manual

check:stage:
  stage: check

check:production:
  stage: check
  needs: ['service:production']
In this example, check:production won't run if service:production has failed, is a manual job and hasn't been run yet, was skipped, or was cancelled.
Needs can also be used to tell jobs to run before other, unrelated jobs from previous stages finish. This means that check:production can start once service:production finishes, even if service:stage is still running.
Here's the docs for more information on this and other keywords: https://docs.gitlab.com/ee/ci/yaml/#needs
You can use the dependencies keyword for similar results, but if the other job fails or is an untriggered manual job, the dependent job will still run, and might fail depending on the outcome of the first job. needs is the newer and improved option.
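For contrast, a sketch of the two variants for check:production (only one would be used at a time):
# Variant 1: dependencies only controls which artifacts are downloaded;
# check:production still runs even if service:production was never triggered.
check:production:
  stage: check
  dependencies:
    - service:production

# Variant 2: needs makes check:production wait for service:production and
# skips it when that job failed, was skipped, cancelled, or not triggered.
check:production:
  stage: check
  needs: ['service:production']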

Trigger action/job if pipeline job/stage fails

I have a GitLab pipeline one of whose jobs writes some content into a file, a simple .txt file, and pushes a Git tag. After this job, other jobs are executed.
I want to trigger an automatic action/job or something that undoes the .txt file writing and deletes the Git tag in case any of the jobs of the pipeline fails.
Is it possible to do such a thing? Is there some kind of GitLab pipeline job that triggers only in case another fails?
You're looking for the when: on_failure option. The docs for it are here: https://docs.gitlab.com/ee/ci/yaml/README.html#when
If you put this on a job, that job will only run if another job in your pipeline from an earlier stage fails. If you only need one at the end of your pipeline, this will work fine, but it also lets you use multiple after each stage that needs it. Here's an example from the docs:
stages:
  - build
  - cleanup_build
  - test
  - deploy
  - cleanup

build_job:
  stage: build
  script:
    - make build

cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build when failed
  when: on_failure

test_job:
  stage: test
  script:
    - make test

deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always
The job in the build stage runs whatever's necessary to build the project, but if anything fails, the job in the cleanup_build stage will run and remove any build artifacts, or whatever else is needed.
If the build job passes, the test stage job runs the tests. If these fail, there is nothing to clean up from them, so the pipeline just fails at that point. Otherwise, we continue to the deploy stage.
Our deploy stage job is marked with when: manual, which is another useful option for your pipelines. It means that this job will not run until a GitLab user or an API call starts it.
Our last job in the cleanup stage has when: always. This means that no matter what happens in any previous stage, it will always run. This is great for a final cleanup, and if a deployment fails we could perform a rollback or anything else we might need.
The docs for the when keyword and all its options are here: https://docs.gitlab.com/ee/ci/yaml/README.html#when

GITLAB CI Pipeline not triggered

I have written this yml file for GitLab CI/CD. There is a shared runner configured and running.
I am doing this for the first time and am not sure where I am going wrong. The AngularJS project in the repo has a gulp build file and works perfectly on my local machine. This config just has to trigger that build on the VM where my runner is present. On commit, the pipeline does not show any job. Let me know what needs to be corrected!
image: docker:latest

cache:
  paths:
    - node_modules/

deploy_stage:
  stage: build
  only:
    - master
  environment: stage
  script:
    - rmdir -rf "build"
    - mkdir "build"
    - cd "build"
    - git init
    - git clone "my url"
    - cd "path of cloned repository"
    - gulp build
What branch are you committing to? Your pipeline is configured to run only for commits on the master branch:
...
only:
  - master
...
If you want jobs to be triggered for other branches as well, remove this restriction from the .gitlab-ci.yml file.
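For example, a minimal sketch (keeping the rest of the job as it is) that runs the job for commits on any branch:
deploy_stage:
  stage: build
  # run for any branch instead of just master; or drop 'only:' entirely
  only:
    - branches
  environment: stage
  script:
    - gulp build   # rest of the script as before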
Do not forget to Enable shared Runners (they may not be enabled by default), setting can be found on GitLab project page under Settings -> CI/CD -> Runners.
Update: Did your pipeline triggers ever work for your project?
If not then I would try configuring simple pipeline just to test if triggers work fine:
test_simple_job:
  script:
    - echo I should execute for any pipeline trigger.
I solved the problem by renaming the .gitlab-ci.yaml to .gitlab-ci.yml
I just wanted to add that I ran into a similar issue. I was committing my code and not seeing the pipeline trigger at all. There was also no error statement on GitLab nor in my VS Code. It had run perfectly before. My problem was that I had made some recent edits to my YAML that were invalid. I reverted the changes back to known valid YAML, and it worked again and passed.
I also had this issue. I thought I would document the cause, in the hopes it may help someone (although this is not strictly an answer for the original question because my deploy script is more complex).
So in my case, the reason was that I had multiple jobs with the same job ID in my .gitlab-ci.yml. The latter one basically rendered the earlier one invisible.
# This job will never run:
deploy_my_stuff:
  script:
    - do something for job one

# This job overwrites the above.
deploy_my_stuff:
  script:
    - do something for job two
Totally obvious... after I discovered the mistake.

Gitlab-CI multi-project-pipeline

Currently I'm trying to understand the GitLab CI multi-project pipeline.
I want a pipeline to run once another pipeline has finished.
Example:
I have one project nginx saved in namespace baseimages which contains some configuration like fast-cgi-params. The ci-file looks like this:
stages:
  - release
  - notify

variables:
  DOCKER_HOST: "tcp://localhost:2375"
  DOCKER_REGISTRY: "registry.mydomain.de"
  SERVICE_NAME: "nginx"
  DOCKER_DRIVER: "overlay2"

release:
  stage: release
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -t $SERVICE_NAME:latest .
    - docker tag $SERVICE_NAME:latest $DOCKER_REGISTRY/$SERVICE_NAME:latest
    - docker push $DOCKER_REGISTRY/$SERVICE_NAME:latest
  only:
    - master

notify:
  stage: notify
  image: appropriate/curl:latest
  script:
    - curl -X POST -F token=$CI_JOB_TOKEN -F ref=master https://gitlab.mydomain.de/api/v4/projects/1/trigger/pipeline
  only:
    - master
Now I want multiple projects to rely on this image and have them rebuild when my baseimage changes, e.g. for a new nginx version.
            baseimage
                |
   ---------------------------
   |            |            |
project1     project2     project3
If I add a trigger to the other project and insert the generated token at $GITLAB_CI_TOKEN, the foreign pipeline starts, but there is no combined graph as shown in the documentation (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html).
How is it possible to show the full pipeline graph?
Do I have to add every project which relies on my baseimage to the CI file of the baseimage, or is it possible to subscribe to the baseimage pipeline in each project?
Multi-project pipelines is a paid-for feature introduced in GitLab Premium 9.3, and can only be accessed using GitLab's Premium or Silver tiers.
A way to see this is the tier badge to the right of the documentation page title.
Well, after some more digging into the documentation, I found a little sentence which states that GitLab CE provides the features marked as Core.
We have 50+ Gitlab packages where this is needed. What we used to do was push a commit to a downstream package, wait for the CI to finish, then push another commit to the upstream package, wait for the CI to finish, etc. This was very time consuming.
The other thing you can do is manually trigger builds and you can manually determine the order.
If none of this works for you or you want a better way, I built a tool to help do this called Gitlab Pipes. I used it internally for many months and realized that people need something like this, so I did the work to make it public.
Basically it listens to GitLab notifications, and when it sees a commit to a package, it reads the .gitlab-pipes.yml file to determine that project's dependencies. It can then construct a dependency graph of your projects and build the consumer packages on downstream commits.
The documentation is here, it sort of tells you how it works. And then the primary app website is here.
If you click the version history ... on the multi_project_pipelines documentation page, it reveals:
Made available in all tiers in GitLab 12.8.
Multi-project pipeline visualization as of 13.10-pre is marked as Premium; however, in my EE version the visualizations for down/upstream links are functional.
So, see Triggering a downstream pipeline using a bridge job:
Before GitLab 11.8, it was necessary to implement a pipeline job that was responsible for making the API request to trigger a pipeline in a different project.
In GitLab 11.8, GitLab provides a new CI/CD configuration syntax to make this task easier, and avoid needing GitLab Runner for triggering cross-project pipelines. The following illustrates configuring a bridge job:
rspec:
  stage: test
  script: bundle exec rspec

staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger: my/deployment
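Applied to the baseimage example above, the curl-based notify job could be replaced with one bridge job per downstream project; a sketch, where the downstream project paths are assumptions:
# in the baseimage project's .gitlab-ci.yml
rebuild:project1:
  stage: notify
  trigger: baseimages/project1   # hypothetical path to the downstream project
  only:
    - master

rebuild:project2:
  stage: notify
  trigger: baseimages/project2
  only:
    - master
With this syntax the upstream (baseimage) project still lists its downstream projects, but the triggered pipelines appear as linked down/upstream pipelines in the graph.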
