GitLab CI pipeline not triggered - continuous-integration

I have written this YAML file for GitLab CI/CD. There is a shared runner configured and running.
I am doing this for the first time and I'm not sure where I am going wrong. The AngularJS project in the repo
has a gulp build file and works perfectly on my local machine. This configuration just has to trigger
that build on the VM where my runner is present. On commit, the pipeline does not show any job. Let me know what needs to be corrected!
image: docker:latest

cache:
  paths:
    - node_modules/

deploy_stage:
  stage: build
  only:
    - master
  environment: stage
  script:
    - rmdir -rf "build"
    - mkdir "build"
    - cd "build"
    - git init
    - git clone "my url"
    - cd "path of cloned repository"
    - gulp build

What branch are you committing to? Your pipeline is configured to run only for commits on the master branch.
...
only:
  - master
...
If you want jobs to be triggered for other branches as well, then remove this restriction from the .gitlab-ci.yml file.
Do not forget to enable shared runners (they may not be enabled by default); the setting can be found on the GitLab project page under Settings -> CI/CD -> Runners.
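For example, a minimal sketch of the same job with the branch restriction removed, so it runs for commits on any branch (the script line is just a placeholder for your original commands):

deploy_stage:
  stage: build
  environment: stage
  script:
    - gulp build   # placeholder for the original script steps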
Update: Have pipeline triggers ever worked for your project?
If not, then I would try configuring a simple pipeline just to test whether triggers work at all:
test_simple_job:
  script:
    - echo I should execute for any pipeline trigger.

I solved the problem by renaming the .gitlab-ci.yaml to .gitlab-ci.yml

I just wanted to add that I ran into a similar issue. I was committing my code and I was not seeing the pipeline trigger at all. There was also no error message in GitLab or in VS Code. It had run perfectly before. My problem was that I had made some recent edits to my YAML that were invalid. I reverted the changes to known-valid YAML, and it worked again and passed.

I also had this issue. I thought I would document the cause, in the hopes it may help someone (although this is not strictly an answer for the original question because my deploy script is more complex).
So in my case, the reason was that I had multiple jobs with the same job ID in my .gitlab-ci.yml. The latter one basically rendered the earlier one invisible.
# This job will never run:
deploy_my_stuff:
  script:
    - do something for job one

# This job overwrites the above.
deploy_my_stuff:
  script:
    - do something for job two
Totally obvious... after I discovered the mistake.
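For comparison, a minimal sketch of the corrected file with unique job names (the names and script lines are placeholders mirroring the snippet above); once the names differ, both jobs appear in the pipeline:

deploy_my_stuff_one:
  script:
    - do something for job one

deploy_my_stuff_two:
  script:
    - do something for job two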

Related

Share a file between two workflows in CircleCI

In our repo, build and deploy are two different workflows.
In build, we call lerna to check for changed packages and save the output to a file that is persisted to the current workspace.
check_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - attach_workspace:
        at: ~/project
    - run:
        command: npx lerna changed > changed.tmp
    - persist_to_workspace:
        root: ./
        paths:
          - changed.tmp
I'd like to pass the exact same file from the build workflow to the deploy workflow and access it in another job. How do I do that?
read_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - attach_workspace:
        at: ~/project
    - run:
        command: |
          echo 'Reading changed.tmp file'
          cat changed.tmp
According to this blog post
Unlike caching, workspaces are not shared between runs as they no
longer exists once a workflow is complete
it feels that caching would be the only option.
But according to the CircleCI documentation, my case doesn't fit their cache definitions:
Use the cache to store data that makes your job faster, but, in the
case of a cache miss or zero cache restore, the job still runs
successfully. For example, you might cache NPM package directories
(known as node_modules).
I think you can totally use caching here. Make sure you choose your key template(s) wisely.
The caveat to keep in mind is that, unlike at the job level (where you can use the requires key), there's no native way to execute workflows sequentially, although you could consider using an orb for that, for example the roopakv/swissknife orb.
So you'll need to make sure that the job (that needs the file) in the deploy workflow, doesn't move to the restore_cache step until the save_cache in the other job has happened.
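As a rough sketch of that idea (the key template and job layout are assumptions, not your exact config): save the file under a commit-scoped cache key in the build workflow, then restore it with the same key in the deploy workflow.

check_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - run:
        command: npx lerna changed > changed.tmp
    - save_cache:
        key: changed-{{ .Revision }}   # one cache entry per commit
        paths:
          - changed.tmp

read_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - restore_cache:
        keys:
          - changed-{{ .Revision }}    # same key, so this job sees the same file
    - run:
        command: cat changed.tmp

With {{ .Revision }} in the key, a deploy workflow running for the same commit should always hit the cache entry written by the build workflow.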

Cypress binary is missing and Gitlab CI pipeline

I'm trying to integrate Cypress testing into a GitLab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the .gitlab-ci.yml file, as well as a screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/   # added this, also have tried src/ui/cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment. Therefore, you need to install your dependencies in each job. Add your yarn install command to the ui_test job.
The reason why your cache: did not restore to the job from the previous stage is because caches are per job by default (e.g. caches are restored from previous pipelines that ran the same job). If you want subsequent jobs in the same pipeline to use the cache, set the cache:key: to something like $CI_COMMIT_SHA or use cache:key:files: to use a file key, like your lockfile(s).
Also, you can only cache paths in the workspace. So you won't be able to cache/restore /root/.cache/... -- instead you should change the cache location to somewhere in the workspace.
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.
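Putting those points together, a sketch of how the configuration might look (the CYPRESS_CACHE_FOLDER location and the cache key choice are assumptions, not a verified setup):

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  # keep the Cypress binary inside the project workspace so it can be cached
  CYPRESS_CACHE_FOLDER: $CI_PROJECT_DIR/cache/Cypress

cache:
  key: $CI_COMMIT_SHA          # share the cache between jobs of the same pipeline
  paths:
    - src/ui/node_modules/
    - cache/Cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    # install again: each job runs in a fresh environment, the cache only speeds this up
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
    - yarn run runCypressHeadless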

Disable a given manual job for previous pipelines

I have this proof-of-concept (just displaying the relevant parts) in a GitLab CI pipeline:
deploy:development:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: development

deploy:test:
  stage: deploy
  environment:
    name: test
    url: https://env.url.tld
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      when: manual
  script: do_deploy
  variables:
    ANSIBLE_INVENTORY: test
I would like to disable/deprecate the previous deploy:test jobs when a new one is created. Basically, the deploy:test job should only be enabled for the current/latest pipeline, preventing an old build from taking over a more recent one.
I'm not saying that it should happen instantaneously; if it's running, it's fine to let it finish, but if it failed and a new one is created, the old (failed) one should be disabled as well. Same for the current one: if it ran successfully, it should be disabled; that would be the optimal state.
Is there a configuration setting that will let me do that? I have checked Auto-cancel redundant, pending pipelines and Skip outdated deployment jobs in Settings > CI/CD > General pipelines, but still the job doesn't get disabled on previous pipelines.
I have checked Auto-cancel redundant, pending pipelines and Skip outdated deployment jobs in Settings > CI/CD > General pipelines, but still the job doesn't get disabled on previous pipelines.
It should work better with GitLab 15.5 (October 2022):
Prevent outdated deployment jobs
Previously, some outdated jobs could be manually started or retried even when Skip outdated deployment jobs is enabled.
We have updated the logic for this setting to check the deployment status when a job starts.
The job does not execute if the deployment job is outdated due to a more recent deployment.
This check ensures that outdated deployment jobs are not accidentally started, overwriting more recent code changes in production.
See Documentation and Issue.
Did you try adding the "interruptible" tag?
It seems like you have to add interruptible: true to your yaml.
For example:
deploy:development:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script: do_deploy
  interruptible: true
  variables:
    ANSIBLE_INVENTORY: development
Ref: https://gitlab.com/gitlab-org/gitlab/-/issues/32022

Gitlab Docker build: calling shell command in .gitlab-ci.yml

I'm trying to call shell commands in .gitlab-ci.yml, whose relevant parts are:
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - deploy

...

build:
  stage: build
  script:
    - apt-get update -y
    - GIT_TAG=$(git tag | tail -1)
    - GIT_TAG=$(/usr/bin/git tag | tail -1)
    - docker ...
However, all three of the shell commands above fail with a "command not found" error. The git command failing is particularly odd, because the runner has to fetch the git repo before the script section even starts. In other words, I can see that git works, but I can't use it myself.
Is there any way to make this work?
You see git working in separate steps because GitLab is probably doing it in another container. They keep your container clean, so you have to install dependencies yourself.
Since the image you're using is based on Alpine Linux, the command to install git is:
apk add --no-cache git
You can also skip the whole thing and use the predefined environment variables if all you need is git information. $CI_COMMIT_TAG will contain the tag and $CI_COMMIT_SHA will contain the commit hash.
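For example, a minimal sketch of a job that installs git itself before using it (assuming the Alpine-based docker:latest image from the question):

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - apk add --no-cache git          # docker:latest is Alpine-based, so apk instead of apt-get
  script:
    - GIT_TAG=$(git tag | tail -1)
    - echo "latest tag is $GIT_TAG"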
From the GitLab documentation, here is the definition of CI_COMMIT_TAG: "The commit tag name. Present only when building tags."
This means that when you push a commit to GitLab, it starts a pipeline without the CI_COMMIT_TAG variable. When you then create a tag on that commit and push the tag to GitLab, another pipeline (this time for the tag, not for the commit) is started, and in that case CI_COMMIT_TAG is present.
@xpt - thanks for the vote of confidence and for asking me to write this up as an answer; hope this helps the community!
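So if all you need is the tag name, a sketch like the following avoids calling git entirely (the image name is a placeholder); the job only runs in tag pipelines, where CI_COMMIT_TAG is guaranteed to be set:

release:
  image: docker:latest
  services:
    - docker:dind
  stage: deploy
  only:
    - tags
  script:
    - docker build -t myimage:$CI_COMMIT_TAG .   # myimage is a placeholder image name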

Gitlab-CI multi-project-pipeline

Currently I'm trying to understand GitLab CI multi-project pipelines.
I want to run a pipeline once another pipeline has finished.
Example:
I have one project nginx saved in namespace baseimages which contains some configuration like fast-cgi-params. The ci-file looks like this:
stages:
  - release
  - notify

variables:
  DOCKER_HOST: "tcp://localhost:2375"
  DOCKER_REGISTRY: "registry.mydomain.de"
  SERVICE_NAME: "nginx"
  DOCKER_DRIVER: "overlay2"

release:
  stage: release
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -t $SERVICE_NAME:latest .
    - docker tag $SERVICE_NAME:latest $DOCKER_REGISTRY/$SERVICE_NAME:latest
    - docker push $DOCKER_REGISTRY/$SERVICE_NAME:latest
  only:
    - master

notify:
  stage: notify
  image: appropriate/curl:latest
  script:
    - curl -X POST -F token=$CI_JOB_TOKEN -F ref=master https://gitlab.mydomain.de/api/v4/projects/1/trigger/pipeline
  only:
    - master
Now I want to have multiple projects to rely on this image and let them rebuild if my baseimage changes e.g. new nginx version.
             baseimage
                 |
   -----------------------------
   |             |             |
project1     project2     project3
If I add a trigger to the other project and insert the generated token at $GITLAB_CI_TOKEN, the foreign pipeline starts, but there is no combined graph as shown in the documentation (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html).
How is it possible to show the full pipeline graph?
Do I have to add every project that relies on my baseimage to the CI file of the baseimage, or is it possible to subscribe to the baseimage pipeline from each project?
Multi-project pipelines are a paid-for feature introduced in GitLab Premium 9.3, and can only be accessed on GitLab's Premium or Silver tiers.
A way to see this is the tier badge shown to the right of the documentation page title.
After some more digging into the documentation I found a little note stating that GitLab CE provides the features marked as Core.
We have 50+ Gitlab packages where this is needed. What we used to do was push a commit to a downstream package, wait for the CI to finish, then push another commit to the upstream package, wait for the CI to finish, etc. This was very time consuming.
The other thing you can do is manually trigger builds and you can manually determine the order.
If none of this works for you or you want a better way, I built a tool to help do this called Gitlab Pipes. I used it internally for many months and realized that people need something like this, so I did the work to make it public.
Basically it listens to GitLab notifications and when it sees a commit to a package, it reads the .gitlab-pipes.yml file to determine that project's dependencies. It will be able to construct a dependency graph of your projects and build the consumer packages on downstream commits.
The documentation is here, it sort of tells you how it works. And then the primary app website is here.
If you click the version history link on the multi_project_pipelines documentation page, it reveals:
Made available in all tiers in GitLab 12.8.
Multi-project pipeline visualization as of 13.10-pre is marked as Premium; however, in my EE version the visualizations for downstream/upstream links are functional.
So, referencing Triggering a downstream pipeline using a bridge job:
Before GitLab 11.8, it was necessary to implement a pipeline job that was responsible for making the API request to trigger a pipeline in a different project.
In GitLab 11.8, GitLab provides a new CI/CD configuration syntax to make this task easier, and avoid needing GitLab Runner for triggering cross-project pipelines. The following illustrates configuring a bridge job:
rspec:
  stage: test
  script: bundle exec rspec

staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger: my/deployment
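Applied to the baseimage scenario from the question, the bridge-job approach might look roughly like this (the group/project paths are placeholders, and the notify stage reuses the one already defined in the baseimage config); each downstream pipeline then shows up as a linked pipeline in the graph:

trigger_project1:
  stage: notify
  trigger: mygroup/project1
  only:
    - master

trigger_project2:
  stage: notify
  trigger: mygroup/project2
  only:
    - master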
