Due to build time restrictions on Docker Hub, I decided to split the Dockerfile of a time-consuming automated build into three files.
Each one of those "sub-builds" finishes within Docker Hub's time limits.
I now have the following setup within the same repository:
| branch | dockerfile | tag |
| ------ | ------------------ | ------ |
| master | /step-1.Dockerfile | step-1 |
| master | /step-2.Dockerfile | step-2 |
| master | /step-3.Dockerfile | step-3 |
The images build on each other in the following order:
step-1.Dockerfile : FROM ubuntu
step-2.Dockerfile : FROM me/complex-image:step-1
step-3.Dockerfile : FROM me/complex-image:step-2
A separate web application triggers the building of step-1 using the "build trigger" URL provided by Docker Hub (to which the {"docker_tag": "step-1"} payload is added). However, Docker Hub doesn't provide a way to automatically trigger step-2 and then step-3 afterwards.
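For reference, a trigger call of that kind looks roughly like the following sketch (the URL format follows Docker Hub's trigger links of the time; the token is a placeholder):

```bash
# Hypothetical build-trigger call; <TRIGGER_TOKEN> is a placeholder
curl -X POST \
     -H "Content-Type: application/json" \
     --data '{"docker_tag": "step-1"}' \
     https://registry.hub.docker.com/u/me/complex-image/trigger/<TRIGGER_TOKEN>/
```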
**How can I automatically trigger the following build steps in their respective order?** (i.e., trigger step-2 after step-1 finishes; then trigger step-3 after step-2 finishes.)
NB: I don't want to set up a separate repository for each of the step-i images and then link them using Docker Hub's "Repository Links." I just want to link tags in the same repository.
Note: Until now, my solution has been to attach a Docker Hub webhook to a web application that I've made. When step-n finishes (i.e., Docker Hub calls my web application's URL with a JSON payload containing the tag name of step-n), the web application uses the "build trigger" to trigger step-n+1. It works as expected, but I'm wondering whether there's a "better" way of doing things.
As requested by Ken Cochrane, here are the initial Dockerfile as well as the "build script" that it uses. I was just trying to dockerize Cling (a C++ interpreter). It needs to compile LLVM, Clang and Cling. As you might expect, depending on the machine, it needs a few hours to do so, and Docker Hub allows "only" 2-hour builds at most :) The "sub-build" images that I added later (still in the develop branch) each build a part of the whole thing. I'm not sure that there is any further optimization to be made here.
Also, in order to test various ideas (and avoid waiting h-hours for the result) I have set up another repository with a similar structure (the only difference is that its Dockerfiles don't do as much work).
UPDATE 1: On Option 5: as expected, the curl from step-1.Dockerfile has been ignored:
Settings → Build Triggers → Last 10 Trigger Logs
| Date/Time | IP Address | Status | Status Description | Request Body | Build Request |
| ------------------------- | --------------- | ------- | ------------------------ | -------------------------- | ------------- |
| April 30th, 2016, 1:18 am | <my.ip.v4.addr> | ignored | Ignored, build throttle. | {u'docker_tag': u'step-2'} | null |
Another problem with this approach is that it requires me to put the build trigger's (secret) token in the Dockerfile for everyone to see :) (hopefully, Docker Hub has an option to invalidate it and regenerate another one)
UPDATE 2: Here is my current attempt:
It is basically a Heroku-hosted application with an APScheduler periodic job that triggers the initial build step, and a Flask webhook handler that "propagates" the build (i.e., it keeps the ordered list of build tags and, each time the webhook is called, triggers the next build step).
I recently had the same requirement to chain dependent builds, and achieved it this way using Docker Cloud automated builds:
Create a repository with build rules for each Dockerfile that needs to be built.
Disable the Autobuild option for all build rules in dependent repositories.
Add a shell script named hooks/post_push in each directory containing a Dockerfile that has dependents, with the following code:
#!/bin/bash
# POST to each dependent build trigger, passing along the branch/tag just built
for url in $(echo "$BUILD_TRIGGERS" | sed "s/,/ /g"); do
  curl -X POST -H "Content-Type: application/json" --data "{ \"build\": true, \"source_name\": \"$SOURCE_BRANCH\" }" "$url"
done
For each repository with dependents, add a Build Environment Variable named BUILD_TRIGGERS to the automated build, and set the Value to a comma-separated list of the build trigger URLs of each dependent automated build.
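For example, the variable might look like this (a hypothetical value; the trigger URLs are placeholders for the ones Docker Cloud generates):

```bash
# Hypothetical BUILD_TRIGGERS value: comma-separated trigger URLs of the dependents
BUILD_TRIGGERS=<trigger-url-of-step-2>,<trigger-url-of-step-3>
```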
Using this setup, a push to the root source repository triggers a build of the root image. Once that build completes and the image is pushed, the post_push hook is executed. The hook POSTs to each dependent repository's build trigger, passing the name of the branch or tag being built in the request body. This causes the appropriate build rule of the dependent repository to be triggered.
How long is the build taking? Can you post your Dockerfile?
Option 1: find out what is taking so long in your automated build and why it isn't finishing in time. If you post your Dockerfile here, we can see if there is anything you can do to optimize.
Option 2: is what you are already doing now, using a 3rd-party app to trigger the builds in the given order.
Option 3: I'm not sure if this will work for you, since you are using the same repo. Normally you would use repo links for this feature and chain them: when one finishes, it triggers the next. But since you have one repo, it won't work.
Option 4: Break it up into multiple repos, then you can use repo links.
Option 5: Total hack, last resort (not sure if it will work). You add a curl command on the last line of your Dockerfile that POSTs to the build trigger link of the repo with the tag for the next step. You might need to add a sleep in the next step to wait for the image to finish being pushed to the hub, if the next step needs to pull the previous tag.
Honestly, the best one is Option 1: whatever you are doing should be able to finish in the allotted time; you are probably doing some things we can optimize to make the whole thing faster. If you get it to come in under the time limit, then everything else isn't needed.
It's possible to do this by tweaking the Build Settings in the Docker Hub repositories.
First, create an Automated Build for /step-1.Dockerfile of your GitHub repository, with the tag step-1. This one doesn't require any special settings.
Next, create another Automated Build for /step-2.Dockerfile of your GitHub repository, with the tag step-2. In the Build Settings, uncheck When active, builds will happen automatically on pushes. Also add a Repository Link to me/step-1.
Do the same for step-3 (linking it to me/step-2).
Now, when you push to the GitHub repository, it will trigger step-1 to build; when that finishes, step-2 will build, and after that, step-3 will build.
Note that you need to wait for the previous stage to successfully build once before you can add a repository link to it.
I just tried the other answers and they are not working for me, so I invented another way of chaining builds by using a separate branch for each build rule, e.g.:
master # This is for docker image tagged base
docker-build-stage1 # tag stage1
docker-build-latest # tag latest
docker-build-dev # tag dev
in which stage1 depends on base, latest depends on stage1, and dev depends on latest.
In the post_push hook of each build, I call the script below, passing the branch(es) of that image's direct dependents (see the usage sketch after the script):
#!/bin/bash -x
# ${1} is the branch of the direct dependent to fast-forward (e.g. docker-build-stage1)
git clone https://github.com/NobodyXu/llvm-toolchain.git
cd llvm-toolchain
git checkout "${1}"
git merge --ff-only master
# Set up push.default for push
git config --local push.default simple
# Set up username and passwd
# About the credential, see my other answer:
# https://stackoverflow.com/a/57532225/8375400
git config --local credential.helper store
echo "https://${GITHUB_ROBOT_USER}:${GITHUB_ROBOT_ACCESS_TOKEN}@github.com" > ~/.git-credentials
exec git push origin HEAD
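For example, the post_push hook of the base (master) build rule could look like this sketch, assuming the script above is committed as trigger-dependents.sh (the name is an assumption):

```bash
#!/bin/bash
# hooks/post_push of the master build rule: fast-forward the dependent branch
# so its build rule fires next (script name is an assumption)
bash trigger-dependents.sh docker-build-stage1
```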
The variables GITHUB_ROBOT_USER and GITHUB_ROBOT_ACCESS_TOKEN are environment variables set in Docker Hub auto build configuration.
Personally, I prefer to register a new robot account on GitHub specifically for this, with two-factor authentication enabled, invite it to become a collaborator, and use an access token instead of a password. This is safer than using your own account, which has access to far more repositories than needed, and it is also easier to manage.
You need to disable the repository link, otherwise there will be a lot of unexpected build jobs in Docker Hub.
If you want to see a demo of this solution, check NobodyXu/llvm-toolchain.
Related
I have a monolithic GitHub project that contains multiple applications that I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want to redeploy the others. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things but decided that the only viable approach is to perform this deploy step manually within my own 03_build.sh script, because there's no way to do this dynamically within CodeBuild's Deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic and I'm trying to pass variables between scripts: install->prebuild->build
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up storing the version & previous version within the package.json):
# diff exits non-zero when the files differ, i.e. when the API version changed
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in my next stage I read these variables to determine whether I should run tests on the project 02_prebuild.sh:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export env vars. I'm wondering whether anyone knows how to update environment variables from within the build process itself, or whether anyone has a better approach to achieve my goals (a conditional/variable build process on CodeBuild).
EDIT:
So an approach that I've managed to get working is: instead of using env variables, I'm creating new files with specific names using fs and then reading the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
I won't edit the original question as it's still an issue and I'd like to know how/if other people have solved this. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
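For what it's worth, a minimal bash sketch of that file-marker idea (the paths and file names are assumptions, not the author's actual code):

```bash
# In 01_install.sh: record the decision as a marker file instead of exporting a variable
mkdir -p /tmp/build-flags
touch /tmp/build-flags/test_api

# In 02_prebuild.sh: read the marker back; files survive across script invocations,
# unlike variables exported inside a child shell
if [ -f /tmp/build-flags/test_api ]; then
  echo "🤖 Run tests on API project..."
fi
```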
Source control repos like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified in the commit. The Lambda function can then process this event to figure out what to deploy. If you're struggling with deploying to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.
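As a sketch of that idea: a GitHub push event lists changed paths under commits[].added/modified/removed, so (assuming the payload has been saved to event.json) the deploy decision could be made with something like:

```bash
# Collect every added/modified path from the push payload, then branch per project
changed=$(jq -r '.commits[] | (.added[], .modified[])' event.json)
echo "$changed" | grep -q '^API/'      && echo "deploy API"
echo "$changed" | grep -q '^REACTAPP/' && echo "deploy REACTAPP"
```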
We are using Cloud Build for continuous deployment on GCP. When pushing commits too fast (e.g. on development), the triggered builds run in parallel. Sometimes those interfere with one another, for example when two App Engine deployments are running at the same time.
Is there a way or best practice to force builds which are triggered from the same build trigger to run one after another?
Regards,
Carsten
I've done this by adding an initial step on my cloudbuild.yaml file. What it does is:
Gets all ongoing build IDs via gcloud builds list --ongoing --format='value(id)' --filter="substitutions.TRIGGER_NAME=$TRIGGER_NAME".
Loops through them but skips the first one; the list is sorted by creation time (newest first), so this won't stop the latest build, which is at the first index.
Runs gcloud builds cancel ${on_going_build[i]} to cancel each remaining build.
Please see the cloudbuild.yaml below
steps:
  - id: "Stop Other Ongoing Build"
    name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - -c
      - |
        on_going_build=($(gcloud builds list --ongoing --format='value(id)' --filter="substitutions.TRIGGER_NAME=$TRIGGER_NAME" | xargs))
        for (( i=0; i<${#on_going_build[@]}; i++ )); do
          if [ "$i" -gt "0" ]; then # skip the current (latest) build
            echo "Cancelling build ${on_going_build[i]}"
            gcloud builds cancel "${on_going_build[i]}"
          fi
        done
You can't with the standard setup. But you can define a custom builder: create one which checks whether a build is already running for your project and repository. If yes, return an error code and fail the build; otherwise, continue processing.
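A sketch of what such a check could run (the filter is an assumption; adapt it to your trigger or repository):

```bash
#!/bin/bash
# Fail the build if another build from the same trigger is already running
running=$(gcloud builds list --ongoing \
  --filter="substitutions.TRIGGER_NAME=$TRIGGER_NAME" \
  --format='value(id)' | wc -l)
if [ "$running" -gt 1 ]; then
  echo "Another build is already running; aborting." >&2
  exit 1
fi
```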
I've achieved sequential builds by using Logging Router + Pub/Sub triggers.
My first build is triggered by a commit and builds an image.
I inspected the logs and found that we have a single message like textPayload="Pushing gcr.io/my-project/my-repo/my_image:latest" when the build is finished.
By using the above as a filter this can then be routed to a Pub/Sub topic which triggers my second build that is using the image from the first one.
Might not work for all use cases since you need to find a single event emitted that signifies a successful build, but it definitely works when building images with the above.
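For illustration, the sink could be created along these lines (the sink, project, topic, and image names are placeholders):

```bash
# Route the "image pushed" log line to a Pub/Sub topic that a second trigger listens on
gcloud logging sinks create trigger-second-build \
  pubsub.googleapis.com/projects/my-project/topics/second-build-trigger \
  --log-filter='resource.type="build" AND textPayload="Pushing gcr.io/my-project/my-repo/my_image:latest"'
```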
I have a problem with the Extra sync options setting in TeamCity.
On the Edit VCS Root page I am specifying 'Label/changelist to sync:' with a changelist number, for example 900001, and in the 'Extra sync options:' field I also added a changelist whose changes I want applied, 900002.
In the log I see the message
[Updating sources for root 'Current1.02', revision/label: #900001] Running 'p4 sync -f //depot/.../.../.../MainView.cpp#900002' in directory C:\BuildFolder\Desktop
As a result, the .exe shows no sign of the changes that were submitted in 900002.
Why does it not apply those changes? Where could the problem be?
I'm not familiar with this plugin, but if it's running this command:
p4 sync -f //depot/.../.../.../MainView.cpp#900002
it might be hitting problems with too many embedded wildcards.
Our application uses VSTS for our CI flow and we have a requirement that the patch version should increase by one each time code is merged back into master.
We have created a shell script that will bump the version of the application and tag the repo, but now we are looking for the best place in the flow to inject it. We've thought of the following:
Master commit hook build - The problem with putting it here is that we have protected the master branch with certain policies (must resolve all comments, project must build, etc.). Because of this, the build agent that runs the script does not have access to push changes.
Pull Request build - This option would integrate the script into the build job that is used to verify the build. This actually works, but when the updates are pushed to the branch, this triggers an infinite build loop because the PR automatically rebuilds the branch. In general, this option seems more brittle.
I would love to have option 1 work out, but despite all attempts to grant the agent proper roles, it still does not have access to the repo. We're getting the following error:
TF402455: Pushes to this branch are not permitted; you must use a pull request to update this branch.
Is there a standard way to do this that I'm missing?
To update the file version and add a tag for the new commit through the CI build, you can add a PowerShell task. Detailed steps below:
Set permission and configure in build definition
First go to the Version Control tab and allow the items below for the Build Service account:
Branch creation: Allow
Contribute: Allow
Read: Inherited allow
Tag creation: Inherited allow
Then in your build definition, add the variable system.prefergit with the value true in the Variables tab, and enable Allow scripts to access OAuth token in the Options tab.
For more details, you can refer to Run Git commands in a script.
Add a PowerShell script
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" fetch
git checkout master
# Get the line of the file that contains the version
# Split the line and keep only the version number (major.minor.patch)
# Check the version numbers (e.g. whether patch is less than 9) and increase the version
# Replace the old version with the new version in the file
git add .
git commit -m "change the version to $newVersion"
git tag $newVersion
git push origin master --tags
Note: Even if the new version and tag are pushed to the remote repo successfully, the PowerShell task may fail. So you should set the task after the PowerShell task with the custom option "Even if a previous task has failed".
You can try to update your branch policy per the documentation at https://learn.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops and https://learn.microsoft.com/en-us/azure/devops/organizations/security/permissions?view=azure-devops#git-repository-object-level
Try updating your branch policy to set "Bypass policies when pushing" to "Allow".
I am trying to run Continuous Integration for iOS with an Xcode server running validation tests against Gerrit.
In order to get Xcode to pull from the Gerrit server I had to upgrade its libgit2.dylib to version 0.21.5.
I downloaded it from https://codeload.github.com/libgit2/libgit2/zip/v0.21.5
Does anyone have suggestions on how to get Gerrit to trigger Xcode builds of particular branches?
An easy way is to create an Xcode bot that will perform the build. You can set the bot to poll Gerrit's repository periodically for the desired hook (most likely 'commit').
http://bjmiller.me/post/72937258798/continuous-integration-with-xcode-5-xctest-os-x is a good step-by-step guide on setting up an Xcode bot, but keep in mind that you are using Gerrit as the git repository.
With an Xcode bot created, you could also create a Gerrit hook that triggers a build in the same manner that an Xcode git repository would: Custom Trigger Scripts for Bot (Xcode 5 CI)
The whole thing is complicated but...
Set up a Jenkins job that is triggered by Gerrit. (My entire goal is to bring iOS tools to parity with Android.)
When that job runs, it performs the following shell script. This can be improved by polling the server first and parsing out the BOT_HASH, but I just did it by hand. The bot is set to integrate manually.
curl -kg -X POST "https://[XCODE_SERVER]:20343/api/bots/[BOT_HASH]/integrations/"
The bot has the following script as a pre-integration step
cd [PROJECT]
ssh -p 29418 xcode@[GERRIT_SERVER] 'gerrit query label:Verified=0 project:[PROJECT] status:open limit:1 --current-patch-set' >/tmp/junk.txt
export commit=`grep revision: /tmp/junk.txt | grep -oE '[^ ]+$'`
export ref=`grep ref: /tmp/junk.txt | grep -oE '[^ ]+$'`
git fetch "http://[GERRIT_SERVER]:8081/[PROJECT]" $ref && git checkout FETCH_HEAD
git checkout -b $commit
ssh -p 29418 xcode@[GERRIT_SERVER] 'gerrit review -p [PROJECT] -m "Starting Test" '$commit
This checks the Gerrit server for the most recent update to the project that hasn't been verified. It might not get the right one, but eventually they will all be checked. It then updates the git repository to no longer point to the commit at the head but at the one we want to check. Finally, it posts a comment so users know it is being looked at.
This relies on a user xcode existing on the Gerrit server with the proper permissions. You can inline usernames and passwords, or you can set up the SSH key from _xcsbuildd.
The success script looks like
export commit=`grep revision: /tmp/junk.txt | grep -oE '[^ ]+$'`
ssh -p 29418 xcode@[GERRIT_SERVER] 'gerrit review -p [PROJECT] -m "Test from the script xcbot://[XCODE_SERVER]/botID/'$XCS_BOT_ID'/integrationID/'$XCS_INTEGRATION_TINY_ID'" --verified +1 '$commit
This marks it as verified and posts a link that leads directly to the integration. The link is clickable in email but not on the webpage.
The failure looks like this
export commit=`grep revision: /tmp/junk.txt | grep -oE '[^ ]+$'`
ssh -p 29418 xcode@[GERRIT_SERVER] 'gerrit review -p [PROJECT] -m "Test from the script xcbot://[XCODE_SERVER]/botID/'$XCS_BOT_ID'/integrationID/'$XCS_INTEGRATION_TINY_ID'" --verified -1 '$commit
It is up to you what you consider success or failure. Currently I succeed on warnings and success and fail on build errors and test failures.
To get it to recheck, remove the vote and manually trigger the bot.