I have set up a Trigger in Google Cloud Build to start a new pipeline when it receives an HTTP POST request.
The last pipeline in Build History failed because there were problems with volumes in the YAML.
Now I cannot start new pipelines using this Trigger. The webhook request does receive HTTP 200 from Google, but no new pipeline is initiated.
How can I start a new pipeline from a webhook request, even when the last build failed?
I use an inline Cloud Build YAML to describe the pipeline.
This issue seems to be related to the YAML description of the pipeline, but the big problem is that no error message is shown - it just silently fails without initiating a new run.
Here is a simple inline-pipeline that works:
steps:
- name: 'ubuntu'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Hello, world!"
and here is one that does not work. It is taken from the Cloud Build documentation for integrating with GitLab, but shortened to only two steps:
steps:
- name: gcr.io/cloud-builders/git
  args:
  - '-c'
  - |
    echo "$$SSHKEY" > /root/.ssh/id_rsa
    chmod 400 /root/.ssh/id_rsa
    ssh-keyscan gitlab.com > /root/.ssh/known_hosts
  entrypoint: bash
  secretEnv:
  - SSHKEY
  volumes:
  - name: ssh
    path: /root/.ssh
- name: gcr.io/cloud-builders/git
  args:
  - clone
  - 'git@gitlab.com/<my-gitlab-repo>'
  - .
  volumes:
  - name: ssh
    path: /root/.ssh
availableSecrets:
  secretManager:
  - versionName: <my-path-to-secret-version>
    env: SSHKEY
And the big problem is that no build is initiated, so no error message is shown.
In both cases, the Webhook request receives HTTP 200.
I tried to replicate the issue from my end using curl, and I was able to trigger the build. Note that build invocations are independent, meaning the build history makes no difference to future builds. Try using the verbose -v flag with the curl command to display detailed processing information, as below.
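For example, a sketch of such a call (the exact URL, API key, and secret come from your trigger's webhook configuration page; all values below are placeholders):

# -v prints the full request and response, including the HTTP status line
curl -v -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "webhook test"}' \
  "https://cloudbuild.googleapis.com/v1/projects/<PROJECT_ID>/triggers/<TRIGGER_NAME>:webhook?key=<API_KEY>&secret=<SECRET_VALUE>"

The -v output lets you confirm exactly which endpoint answered with HTTP 200.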
As it is working for me, it seems to be working as intended. To resolve your issue, I suggest you contact Google support here, as it seems an inspection of your project is required.
Update
I tried with the YAML that you shared in the question. Still, I was able to trigger the build.
(Ignore the build error; it was due to a permissions error.)
If you think many people are facing the same issue, please report it on the public issue tracker, which is the best forum for reporting this kind of issue.
Related
I am trying to set up a Node app to deploy to Digital Ocean after pushing to a Github repo. I am using Github actions and have followed this tutorial but have hit a snag at step 5. I get the following error when I try to push to the repo.
! [remote rejected] master -> master (refusing to allow an OAuth App to create or update workflow `.github/workflows/main.yaml` without `workflow` scope)
error: failed to push some refs to 'https://github.com/IT-ACA/hello-node-do.git'
I have tried everything I can find, including this SO post, but nothing works. I have a .yaml file in my project, which I can't see anything immediately wrong with, that currently looks like this.
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Deploy NodeJS App
        uses: jjst/action-digitalocean-deploy-app@v2
        with:
          token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          host: ${{ secrets.SSH_HOST }}
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USER }}
          script: |
            cd hello-node-do
            git clone https://github.com/IT-ACA/hello-node-do.git
            echo 'Deploy successful to Digital Ocean..'
Note that I have a different value for uses in the YAML code above, which comes from this page and is what I began my DigitalOcean deployment journey with. But I have also tried the one from the tutorial linked above, without any luck.
I think the secrets are all correctly in place and that I have done everything necessary on the DigitalOcean side, but it still throws this error. This is the very first time I have tried implementing a CI/CD pipeline and I have spent hours troubleshooting it now. I'm running out of ideas and would appreciate any help getting over this frustrating hurdle. Thanks in advance!
I am looking for a way to clean up the runner after a job has been cancelled in GitLab. The reason is that we often have to cancel running jobs because the runner sometimes gets stuck in the test pipeline, and I can imagine a couple of other scenarios where you would want to cancel a job and have a clean-up script run afterwards. I am looking for something like after_script, but only for the case when a job was cancelled.
I checked the GitLab keyword reference but could not find what I need.
The following part of my gitlab-ci.yaml shows the test stage, which I would like to shut down gracefully by calling docker-compose down when the job is cancelled.
I am using a single gitlab-runner. Also, I don't use dind.
test_stage:
  stage: test
  only:
    - master
    - tags
    - merge_requests
  image: registry.gitlab.com/xxxxxxxxxxxxxxxxxxx
  variables:
    HEADLESS: "true"
  script:
    - docker login -u="xxxx" -p="${QUAY_IO_PASSWORD}" quay.io
    - npm install
    - npm run test
    - npm install wait-on
    - cd example
    - docker-compose up --force-recreate --abort-on-container-exit --build traefik frontend &
    - cd ..
    - apt install -y iproute2
    - export DOCKER_HOST_IP=$( /sbin/ip route|awk '/default/ { print $3 }' )
    - echo "Waiting for ${DOCKER_HOST_IP}/xxxt"
    - ./node_modules/.bin/wait-on "http://${DOCKER_HOST_IP}/xxx" && export BASE_URL=http://${DOCKER_HOST_IP} && npx codeceptjs run-workers 2
    - cd example
    - docker-compose down
  after_script:
    - cd example && docker-compose down
  artifacts:
    when: always
    paths:
      - /builds/project/tests/output/
  retry:
    max: 2
    when: always
  tags: [custom-runner]
Unfortunately this is not currently possible in GitLab. There have been several tickets opened in their repos, with this one being the most up-to-date.
As of the day that I'm posting this (September 27, 2022), there have been at least 14 missed deliverables for this. GitLab continues to say it's coming, but has never delivered it in the six years that this ticket has been open.
There are mitigations for automatic job cancellation, but unfortunately that will not help in your case.
Based on your use case, I can think of two different solutions:
Create a wrapper script that detects when parts of your test job are hanging
Set a timeout on the pipeline (in GitLab, go to Settings -> CI/CD -> General pipelines -> Timeout); a per-job timeout can also be set in the YAML, as sketched below
Neither of these solutions is as robust as if GitLab themselves implemented a solution, but they can at least prevent a job from hanging for an eternity and clogging up everything else in the pipeline.
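As a rough sketch of the timeout idea, a limit can also be set per job with the timeout keyword in .gitlab-ci.yml (the job name and value below are placeholders based on the question's config):

test_stage:
  stage: test
  # fail the job automatically if it runs longer than this,
  # so a stuck run cannot block the pipeline indefinitely
  timeout: 30 minutes
  script:
    - npm run test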
I'm learning gitlab-ci and I'm having a difficult time setting up the .yml file to run a specific job only when a certain trigger token is used or when a branch is merged into master.
I've read through gitlab-ci docs and reviewed several examples. Still, I'm not seeing what I'm looking for.
*Edit: Answering part of my own question. Using only: - master should run the job only for merges and pushes to the master branch.
.build_template: &base_defs
  stage: build_base
  <<: *tags_defs
  variables:
    FILE_VER: "3.4"
  script:
    - docker build -t "${DEV_BASE}:latest" "${VERSION}/devel/base"
      --build-arg FILE_VERSION=${FILE_VER}
  only:
    - master
    - ~ WHEN TRIGGER TOKEN MATCHES = K3K3K3K3 ~
Maybe you can use
only:
  variables:
    - $TOKEN == "..."
and make it work with one of the predefined GitLab variables?
Reference: GitLab Docs
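A minimal sketch of that idea, assuming the trigger request passes a custom variable named TRIGGER_TOKEN (the variable name and value are placeholders; CI_COMMIT_REF_NAME is a predefined GitLab variable). Expressions listed under only: variables are OR'ed, so the job runs for master or when the token matches:

build_base:
  stage: build_base
  script:
    - docker build -t "${DEV_BASE}:latest" "${VERSION}/devel/base"
  only:
    variables:
      # pushes and merges to master
      - $CI_COMMIT_REF_NAME == "master"
      # or a triggered pipeline that passed the expected token value
      - $TRIGGER_TOKEN == "K3K3K3K3"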
Currently I'm trying to understand the GitLab CI multi-project pipeline.
I want to run a pipeline once another pipeline has finished.
Example:
I have one project, nginx, saved in the namespace baseimages, which contains some configuration like fast-cgi-params. The CI file looks like this:
stages:
  - release
  - notify

variables:
  DOCKER_HOST: "tcp://localhost:2375"
  DOCKER_REGISTRY: "registry.mydomain.de"
  SERVICE_NAME: "nginx"
  DOCKER_DRIVER: "overlay2"

release:
  stage: release
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -t $SERVICE_NAME:latest .
    - docker tag $SERVICE_NAME:latest $DOCKER_REGISTRY/$SERVICE_NAME:latest
    - docker push $DOCKER_REGISTRY/$SERVICE_NAME:latest
  only:
    - master

notify:
  stage: notify
  image: appropriate/curl:latest
  script:
    - curl -X POST -F token=$CI_JOB_TOKEN -F ref=master https://gitlab.mydomain.de/api/v4/projects/1/trigger/pipeline
  only:
    - master
Now I want to have multiple projects rely on this image and let them rebuild when my base image changes, e.g. for a new nginx version.
           baseimage
               |
   -------------------------
   |           |           |
project1    project2    project3
If I add a trigger to the other project and insert the generated token at $GITLAB_CI_TOKEN, the foreign pipeline starts, but there is no combined graph as shown in the documentation (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html).
How is it possible to show the full pipeline graph?
Do I have to add every project that relies on my base image to the CI file of the base image, or is it possible to subscribe to the base-image pipeline in each project?
Multi-project pipelines are a paid-for feature introduced in GitLab Premium 9.3, and can only be accessed using GitLab's Premium or Silver tiers.
A way to see this is to the right of the document title:
Well, after some more digging into the documentation, I found a little sentence stating that GitLab CE provides the features marked as Core features.
We have 50+ Gitlab packages where this is needed. What we used to do was push a commit to a downstream package, wait for the CI to finish, then push another commit to the upstream package, wait for the CI to finish, etc. This was very time consuming.
The other thing you can do is manually trigger builds and you can manually determine the order.
If none of this works for you or you want a better way, I built a tool to help do this called Gitlab Pipes. I used it internally for many months and realized that people need something like this, so I did the work to make it public.
Basically it listens to Gitlab notifications and when it sees a commit to a package, it reads the .gitlab-pipes.yml file to determine that projects dependencies. It will be able to construct a dependency graph of your projects and build the consumer packages on downstream commits.
The documentation is here, it sort of tells you how it works. And then the primary app website is here.
If you click the version history ... from multi_project_pipelines, it reveals:
Made available in all tiers in GitLab 12.8.
Multi-project pipeline visualization as of 13.10-pre is marked as Premium; however, in my EE version the visualizations for down/upstream links are functional.
So see Triggering a downstream pipeline using a bridge job:
Before GitLab 11.8, it was necessary to implement a pipeline job that was responsible for making the API request to trigger a pipeline in a different project.
In GitLab 11.8, GitLab provides a new CI/CD configuration syntax to make this task easier, and avoid needing GitLab Runner for triggering cross-project pipelines. The following illustrates configuring a bridge job:
rspec:
  stage: test
  script: bundle exec rspec

staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger: my/deployment
I'm trying to set up a workflow in CircleCI for my React project.
What I want to achieve is to get a job to build the stuff and another one to deploy the master branch to Firebase hosting.
This is what I have so far after several configurations:
witmy: &witmy
  docker:
    - image: circleci/node:7.10

version: 2
jobs:
  build:
    <<: *witmy
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: yarn install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Build app in production mode
          command: |
            yarn build
      - persist_to_workspace:
          root: .
  deploy:
    <<: *witmy
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Deploy Master to Firebase
          command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
The build job always succeeds, but with the deploy job I get this error:
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token=MYTOKEN
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code 1
So, what I understand is that the deploy job is not running in the same place the build was, right?
I'm not sure how to fix that. I've read some examples they provide and tried several things, but it doesn't work. I've also read the documentation but I think it's not very clear how to configure everything... maybe I'm too dumb.
I hope you guys can help me out on this one.
Cheers!!
EDITED TO ADD MY CURRENT CONFIG USING WORKSPACES
I've added Workspaces... but I'm still not able to get it working. After a lot of tries, I'm getting this error:
Persisting to Workspace
The specified paths did not match any files in /home/circleci/project
It's also a real pain having to commit and push every single change to the config file when I want to test it... :/
Thanks!
disclaimer: I'm a CircleCI Developer Advocate
Each job is its own running Docker container (or VM). So the problem here is that nothing in node_modules exists in your deploy job. There are two ways to solve this:
Install Firebase and anything else you might need, on the fly, just like you do in the build job.
Utilize CircleCI Workspaces to carry over your node_modules directory from the build job to the deploy job (sketched after this list).
In my opinion, option 2 is likely your best bet because it's more efficient.
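A minimal sketch of option 2, adapted from the config in the question (the paths key is what the original persist_to_workspace step is missing; the build output directory name is an assumption and depends on your project):

jobs:
  build:
    <<: *witmy
    steps:
      - checkout
      - run: yarn install
      - run: yarn build
      # make the installed dependencies and build output available to later jobs
      - persist_to_workspace:
          root: .
          paths:
            - node_modules
            - build
  deploy:
    <<: *witmy
    steps:
      # restore everything persisted by the build job into the working directory
      - attach_workspace:
          at: .
      - run: ./node_modules/.bin/firebase deploy --token=MY_TOKEN

With node_modules attached, the firebase binary resolves inside the deploy job's container.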