We are using Cloud Build for continuous deployment on GCP. When commits are pushed too quickly (e.g. on development), the triggered builds run in parallel. Sometimes they interfere with one another, for example when two App Engine deployments are running at the same time.
Is there a way or best practice to force builds triggered by the same build trigger to run one after another?
Regards,
Carsten
I've done this by adding an initial step to my cloudbuild.yaml file. What it does is:
Gets all ongoing build IDs via gcloud builds list --ongoing --format='value(id)' --filter="substitutions.TRIGGER_NAME=$TRIGGER_NAME".
Loops through them but skips the first one; the list is sorted by creation time (newest first), so it won't cancel the latest build, which is at index 0.
Runs gcloud builds cancel ${on_going_build[i]} to cancel each older build.
Please see the cloudbuild.yaml below
steps:
  - id: "Stop Other Ongoing Build"
    name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - -c
      - |
        on_going_build=($(gcloud builds list --ongoing --format='value(id)' --filter="substitutions.TRIGGER_NAME=$TRIGGER_NAME" | xargs))
        for (( i=0; i<${#on_going_build[@]}; i++ )); do
          if [ "$i" -gt "0" ]; then # skip the current (latest) build
            echo "Cancelling build ${on_going_build[i]}"
            gcloud builds cancel "${on_going_build[i]}"
          fi
        done
There is no built-in setting for this. But you can define a custom builder: create one that checks whether a build is already running for your project and repository. If one is, return an error code and fail the build; otherwise, continue processing.
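As an illustration of that idea, a first build step along these lines could check for other ongoing builds and fail fast (a sketch only; it assumes the step runs in gcr.io/cloud-builders/gcloud with bash as the entrypoint, so $TRIGGER_NAME is filled in by Cloud Build's substitutions):
# Count ongoing builds started by the same trigger (this build included).
ongoing=$(gcloud builds list --ongoing --format='value(id)' \
  --filter="substitutions.TRIGGER_NAME=$TRIGGER_NAME" | wc -l)
# If anything besides the current build is running, fail this build.
if [ "$ongoing" -gt 1 ]; then
  echo "Another build from this trigger is still running; aborting this one."
  exit 1
fi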
I've achieved sequential builds by using Logging Router + Pub/Sub triggers.
My first build is triggered by a commit and builds an image.
I inspected the logs and found that we have a single message like textPayload="Pushing gcr.io/my-project/my-repo/my_image:latest" when the build is finished.
By using the above as a filter this can then be routed to a Pub/Sub topic which triggers my second build that is using the image from the first one.
Might not work for all use cases since you need to find a single event emitted that signifies a successful build, but it definitely works when building images with the above.
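For reference, the wiring could look roughly like this (a sketch; the project, topic, and sink names are placeholders, and the filter is the one quoted above):
# Create the Pub/Sub topic that will trigger the second build.
gcloud pubsub topics create build-chain
# Route the "image pushed" log entry to that topic.
gcloud logging sinks create build-chain-sink \
  pubsub.googleapis.com/projects/my-project/topics/build-chain \
  --log-filter='textPayload="Pushing gcr.io/my-project/my-repo/my_image:latest"'
Note that the sink's writer identity must be allowed to publish to the topic, and the second build is then attached to that topic via a Pub/Sub trigger, as described above.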
I have a monorepo-style GitHub project containing multiple different applications that I'd like to integrate into an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want to deploy the other. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things and decided that the only viable approach is to perform this deploy step manually within my own 03_build.sh script, because there's no way to branch this dynamically within CodeBuild's Deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so that if I make a change to API and push, it doesn't unnecessarily deploy REACTAPP to S3 as well (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update environment variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic and I'm trying to pass variables between scripts: install->prebuild->build
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up
# storing the version & previous version within the package.json):
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in the next stage, 02_prebuild.sh, I read these variables to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export environment variables. I'm wondering whether anyone knows how to update environment variables from within the build process itself, or whether anyone has a better approach to achieving my goal (a conditional/variable build process on CodeBuild).
EDIT:
So an approach that I've managed to get working: instead of using environment variables, I'm creating new files with specific names (using fs) and then reading the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
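A bash sketch of that idea (the flag file name is made up):
# 01_install.sh (sketch): persist the decision in a flag file instead of an env var
echo "true" > /tmp/test_api.flag
# 02_prebuild.sh (sketch): later scripts read the flag file
if [ -f /tmp/test_api.flag ] && [ "$(cat /tmp/test_api.flag)" = "true" ]; then
  echo "Run tests on API project..."
  rm /tmp/test_api.flag   # clean up once consumed
fi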
I won't edit the original question as it's still an issue and I'd like to know how, or if, other people have solved this. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
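For the CLI part, something along these lines might work inside 03_build.sh (a sketch only; the Elastic Beanstalk environment name is hypothetical, the project must already be eb init'ed, and the bucket comes from the buildspec variables above):
# The EB CLI is not preinstalled in the CodeBuild image, so install it first (assumes pip is available).
pip install --quiet awsebcli
# Deploy the API to Elastic Beanstalk (environment name is made up).
(cd ./api && eb deploy my-api-env)
# Publish the built React app as static files to S3.
aws s3 sync ./reactapp/dist "s3://$AWS_BUCKET"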
Source control platforms like GitHub can be configured to send a POST request to an API endpoint when you push to a branch. You can consume this request in a Lambda function through API Gateway. The event data includes which files were modified by the commit, so the Lambda function can process it to figure out what to deploy. If you're struggling to deploy to your servers from the CodeBuild container, you might want to try posting an artifact (an installable package) to S3 and then having your server grab it from there.
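To illustrate the payload-inspection step, here is a sketch using jq against a saved push-event payload (the file name is made up; in Lambda you would do the equivalent in your handler language):
# List the top-level directories touched by the push (event.json is a saved GitHub push event).
jq -r '.commits[] | (.added[], .modified[], .removed[])' event.json | cut -d/ -f1 | sort -u
If the output contains API, trigger the Elastic Beanstalk deployment; if it contains REACTAPP, trigger the S3 deployment.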
Currently I'm trying to understand GitLab CI multi-project pipelines.
I want to run a pipeline whenever another pipeline has finished.
Example:
I have one project, nginx, saved in the namespace baseimages, which contains some configuration like fast-cgi-params. The CI file looks like this:
stages:
  - release
  - notify

variables:
  DOCKER_HOST: "tcp://localhost:2375"
  DOCKER_REGISTRY: "registry.mydomain.de"
  SERVICE_NAME: "nginx"
  DOCKER_DRIVER: "overlay2"

release:
  stage: release
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -t $SERVICE_NAME:latest .
    - docker tag $SERVICE_NAME:latest $DOCKER_REGISTRY/$SERVICE_NAME:latest
    - docker push $DOCKER_REGISTRY/$SERVICE_NAME:latest
  only:
    - master

notify:
  stage: notify
  image: appropriate/curl:latest
  script:
    - curl -X POST -F token=$CI_JOB_TOKEN -F ref=master https://gitlab.mydomain.de/api/v4/projects/1/trigger/pipeline
  only:
    - master
Now I want multiple projects to rely on this image and have them rebuilt whenever my baseimage changes, e.g. for a new nginx version.
baseimage
|
---------------------------
| | |
project1 project2 project3
If I add a trigger to the other project and insert the generated token at $GITLAB_CI_TOKEN, the foreign pipeline starts, but there is no combined graph as shown in the documentation (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html).
How is it possible to show the full pipeline graph?
Do I have to add every project which relies on my baseimage to the CI file of the baseimage, or is it possible to subscribe to the baseimage pipeline in each project?
Multi-project pipelines are a paid feature introduced in GitLab Premium 9.3 and can only be accessed on GitLab's Premium or Silver tiers.
A way to see this is the tier badge to the right of the documentation page title.
Well, after some more digging into the documentation, I found a little sentence stating that GitLab CE provides the features marked as Core.
We have 50+ Gitlab packages where this is needed. What we used to do was push a commit to a downstream package, wait for the CI to finish, then push another commit to the upstream package, wait for the CI to finish, etc. This was very time consuming.
The other thing you can do is manually trigger builds and you can manually determine the order.
If none of this works for you or you want a better way, I built a tool to help do this called Gitlab Pipes. I used it internally for many months and realized that people need something like this, so I did the work to make it public.
Basically, it listens to GitLab notifications, and when it sees a commit to a package, it reads the .gitlab-pipes.yml file to determine that project's dependencies. It is able to construct a dependency graph of your projects and build the consumer packages on downstream commits.
The documentation is here, it sort of tells you how it works. And then the primary app website is here.
If you click the version history (...) of multi_project_pipelines, it reveals:
Made available in all tiers in GitLab 12.8.
Multi-project pipeline visualization as of 13.10-pre is marked as Premium; however, in my EE version the visualizations for downstream/upstream links are functional.
So, refer to Triggering a downstream pipeline using a bridge job.
Before GitLab 11.8, it was necessary to implement a pipeline job that was responsible for making the API request to trigger a pipeline in a different project.
In GitLab 11.8, GitLab provides a new CI/CD configuration syntax to make this task easier, and avoid needing GitLab Runner for triggering cross-project pipelines. The following illustrates configuring a bridge job:
rspec:
  stage: test
  script: bundle exec rspec

staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger: my/deployment
Due to build time restrictions on Docker Hub, I decided to split the Dockerfile of a time-consuming automated build into three files.
Each one of those "sub-builds" finishes within Docker Hub's time limits.
I have now the following setup within the same repository:
| branch | dockerfile | tag |
| ------ | ------------------ | ------ |
| master | /step-1.Dockerfile | step-1 |
| master | /step-2.Dockerfile | step-2 |
| master | /step-3.Dockerfile | step-3 |
The images build on each other in the following order:
step-1.Dockerfile : FROM ubuntu
step-2.Dockerfile : FROM me/complex-image:step-1
step-3.Dockerfile : FROM me/complex-image:step-2
A separate web application triggers the building of step-1 using the "build trigger" URL provided by Docker Hub (to which the {"docker_tag": "step-1"} payload is added). However, Docker Hub doesn't provide a way to automatically trigger step-2 and then step-3 afterwards.
How can I automatically trigger the following build steps in their respective order? (i.e., trigger step-2 after step-1 finishes, then trigger step-3 after step-2 finishes.)
NB: I don't want to set up separate repositories for each of step-i then link them using Docker Hub's "Repository Links." I just want to link tags in the same repository.
Note: Until now, my solution is to attach a Docker Hub Webhook to a web application that I've made. When step-n finishes, (i.e., calls my web application's URL with a JSON file containing the tag name of step-n) the web application uses the "build trigger" to trigger step-n+1. It works as expected, however, I'm wondering whether there's a "better" way of doing things.
As requested by Ken Cochrane, here are the initial Dockerfile as well as the "build script" that it uses. I was just trying to dockerize Cling (a C++ interpreter). It needs to compile LLVM, Clang and Cling. As you might expect, depending on the machine, it needs a few hours to do so, and Docker Hub allows "only" 2-hour builds at most :) The "sub build" images that I added later (still in the develop branch) build a part of the whole thing each. I'm not sure that there is any further optimization to be made here.
Also, in order to test various ideas (and avoid waiting h-hours for the result) I have setup another repository with a similar structure (the only difference is that its Dockerfiles don't do as much work).
UPDATE 1: On Option 5: as expected, the curl from step-1.Dockerfile has been ignored:
Settings → Build Triggers → Last 10 Trigger Logs
| Date/Time | IP Address | Status | Status Description | Request Body | Build Request |
| ------------------------- | --------------- | ------- | ------------------------ | -------------------------- | ------------- |
| April 30th, 2016, 1:18 am | <my.ip.v4.addr> | ignored | Ignored, build throttle. | {u'docker_tag': u'step-2'} | null |
Another problem with this approach is that it requires me to put the build trigger's (secret) token in the Dockerfile for everyone to see :) (hopefully, Docker Hub has an option to invalidate it and regenerate another one)
UPDATE 2: Here is my current attempt:
It is basically a Heroku-hosted application that has an APScheduler periodic "trigger" that starts the initial build step, and a Flask webhook handler that "propagates" the build (i.e., it has the ordered list of build tags. Each time it is called by the webhook, it triggers the next build step).
I recently had the same requirement to chain dependent builds, and achieved it this way using Docker Cloud automated builds:
Create a repository with build rules for each Dockerfile that needs to be built.
Disable the Autobuild option for all build rules in dependent repositories.
Add a shell script named hooks/post_push in each directory containing a Dockerfile that has dependents, with the following code:
for url in $(echo $BUILD_TRIGGERS | sed "s/,/ /g"); do
  curl -X POST -H "Content-Type: application/json" --data "{ \"build\": true, \"source_name\": \"$SOURCE_BRANCH\" }" $url
done
For each repository with dependents, add a Build Environment Variable named BUILD_TRIGGERS to the automated build, and set the Value to a comma-separated list of the build trigger URLs of each dependent automated build.
Using this setup, a push to the root source repository will trigger a build of the root image; once that build completes and the image is pushed, the post_push hook is executed. In the hook, a POST is made to each dependent repository's build trigger, containing the name of the branch or tag being built in the request body. This causes the appropriate build rule of the dependent repository to be triggered.
How long is the build taking? Can you post your Dockerfile?
Option 1: Find out what is taking so long in your automated build and why it isn't finishing in time. If you post it here, we can see if there is anything you can do to optimize.
Option 2: What you are already doing now, using a 3rd-party app to trigger the builds in the given order.
Option 3: I'm not sure if this will work for you, since you are using the same repo, but normally you would use repository links for this feature and chain them: when one build finishes, it triggers the next. But since you have one repo, it won't work.
Option 4: Break it up into multiple repos; then you can use repository links.
Option 5: Total hack, last resort (not sure if it will work). Add a curl command on the last line of your Dockerfile to post to the build trigger link of the repo with the tag for the next step. You might need to add a sleep in the next step to wait for the previous image to finish being pushed to the Hub, since the next step builds from that tag.
Honestly, the best one is Option 1: whatever you are doing should be able to finish in the allotted time, and you are probably doing some things we can optimize to make the whole thing faster. If you get it to come in under the time limit, then none of the rest is needed.
It's possible to do this by tweaking the Build Settings in the Docker Hub repositories.
First, create an Automated Build for /step-1.Dockerfile of your GitHub repository, with the tag step-1. This one doesn't require any special settings.
Next, create another Automated Build for /step-2.Dockerfile of your GitHub repository, with the tag step-2. In the Build Settings, uncheck When active, builds will happen automatically on pushes. Also add a Repository Link to me/step-1.
Do the same for step-3 (linking it to me/step-2).
Now, when you push to the GitHub repository, it will trigger step-1 to build; when that finishes, step-2 will build, and after that, step-3 will build.
Note that you need to wait for the previous stage to successfully build once before you can add a repository link to it.
I just tried the other answers and they are not working for me, so I came up with another way of chaining builds: using a separate branch for each build rule, e.g.:
master # This is for docker image tagged base
docker-build-stage1 # tag stage1
docker-build-latest # tag latest
docker-build-dev # tag dev
in which stage1 depends on base, latest depends on stage1, and dev is based on latest.
In each image's post_push hook, I call the script below, passing the branch of its direct dependent:
#!/bin/bash -x
git clone https://github.com/NobodyXu/llvm-toolchain.git
cd llvm-toolchain
git checkout ${1}
git merge --ff-only master
# Set up push.default for push
git config --local push.default simple
# Set up username and passwd
# About the credential, see my other answer:
# https://stackoverflow.com/a/57532225/8375400
git config --local credential.helper store
echo "https://${GITHUB_ROBOT_USER}:${GITHUB_ROBOT_ACCESS_TOKEN}#github.com" > ~/.git-credentials
exec git push origin HEAD
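The hooks/post_push file that calls this script could be as small as the sketch below (the script name, its location, and the dependent branch are illustrative):
#!/bin/bash
# Docker Hub runs hooks/post_push after the image has been pushed;
# here we push the direct dependent's branch so that its build rule fires next.
./push-dependent-branch.sh docker-build-stage1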
The variables GITHUB_ROBOT_USER and GITHUB_ROBOT_ACCESS_TOKEN are environment variables set in Docker Hub auto build configuration.
Personally, I prefer to register a new robot account on GitHub specifically for this, with two-factor authentication enabled, invite the robot account as a collaborator, and use an access token instead of a password. This is safer than using your own account, which has access to far more repositories than needed, and it is also easier to manage.
You need to disable the repository link, otherwise there will be a lot of unexpected build jobs in Docker Hub.
If you want to see a demo of this solution, check NobodyXu/llvm-toolchain.
I have 3 servers for PROD with the same deployment build configuration, and I choose which server to deploy to via a build parameter.
The issue is that, when reviewing the history, you can't tell which environment a build was deployed to.
I wonder if one of these solutions is possible:
- Show parameters in the history of a build
- Autotag a build with parameters
I hope I have explained this well enough.
Thanks in advance
You can accomplish this with a Command Line build step that echoes the relevant parameters to the build log. For a Windows-based agent, you could do something like:
Run: Executable with parameters
Command Executable: echo
Command Parameters: Deployed to server %your.server.host%
This would simply add a line to the build log that reads, Deployed to server FOO
Autotagging would be pretty cool, but I don't know of a way to do that.
Is it possible to get the raw build log from a TeamCity build? I've written a custom test runner that gets run as a commandline build step and reports test results back by printing ##teamcity... lines to stdout. The build log from TeamCity seems to be stripping these out when it recognises them. I'd like to see the raw output to help debug my test runner.
Update:
Apparently this simply isn't possible. neverov (I assume Dimitry Neverov of JetBrains?) has explained this and given a workaround so I've accepted his answer.
You can see the raw output from the build agent by looking in the agent's logs directory. This shows the unparsed data that is hidden in the build output shown in the TeamCity console.
For example c:\TeamCity-Agent\logs\teamcity-build.log.
You can download it by clicking "Download full build log" on build log page.
I couldn't quite tell if this is what you were talking about when you refer to ##teamcity... lines in your question, but this is what I'm currently doing for command-line build steps (which is currently all I do):
# The service messages must be written to stdout for TeamCity to pick them up:
echo "##teamcity[testStarted name='dummyTestName' captureStandardOutput='true']"
echo "Do your command-line build steps here."
echo "##teamcity[testFinished name='dummyTestName']"
It's sort of a hacky workaround, but it will result in stdout/stderr being displayed on the build log page in the TeamCity web UI.
I see that this question was asked a long time ago (almost 10 years ago), but nothing has changed in TeamCity.
I faced a similar issue with a test reporter and found a way to get the raw log without connecting to the build agent and fetching it from there (which may be difficult). My solution does not cover the whole build log, but it can help when a step is run via a custom script in Build Steps.
So the solution is to add | tee e2e_raw.log to the required build step script. For example, we run our tests in Docker via a docker-compose command.
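A minimal sketch of such a step (the compose service name and log file name are illustrative):
# Run the tests and duplicate everything they print into a file;
# the original output still goes to stdout, so TeamCity parses it as usual.
docker-compose run --rm e2e-tests | tee e2e_raw.log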
tee will duplicate all the output into the file. Original output will be the same and will be parsed by TeamCity as usual.
You should also add a line to the artifact paths field (in the build configuration's General Settings) so that the build collects the newly created file, e.g. e2e_raw.log.
After that you will see a new entry in the Artifacts tab with the raw log for this build step.
Great answers here before me. I would add that your TeamCity master holds log files for the builds, and you can get them on the command line.
Have a look in <TeamCity Data Directory>/system/artifacts/<project ID>/<build configuration name>/<internal_build_id>/.teamcity/logs.
This mattered to me because:
- The logs on the TeamCity agents were getting removed after a day or so, but the logs on the master were still available.
- I wanted to grep them on the machine itself without having to download multiple, sizeable log files or use my web browser to make multiple page views.
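For example, something like this (a sketch; $TEAMCITY_DATA stands for your <TeamCity Data Directory>, and the project and build configuration names are placeholders):
# Search all stored build logs of one configuration directly on the server.
grep -r "text emitted by my test runner" \
  "$TEAMCITY_DATA"/system/artifacts/MyProject/MyBuildConfig/*/.teamcity/logs/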
There's an option on the build log to see 'detailed / verbose' - it shows all the service messages. I've seen it since TC9.