Our Gerrit version is 3.4.1. When I execute ssh -p 29418 username@xx.xx.xx.xx gerrit gsql, it returns fatal: gerrit: gsql: not found.
And when I execute ssh -p 29418 username@xx.xx.xx.xx gerrit --help, there is no gsql command in the returned list of Gerrit commands.
How can I access the Gerrit database?
Gerrit 3.0 onwards removed the use of an external database in favor of the Git-based NoteDb, and the gsql command was removed along with it.
As of version 3.x, Gerrit switched its internal database to NoteDb.
You can directly use git to access the notes (which represent the database content). For example, to query a change, you could do:
$ git init
$ git fetch https://gerrit.googlesource.com/gerrit refs/changes/40/329240/meta
$ git log -p FETCH_HEAD
(Example adapted from the NoteDb backend documentation page, where you can find more information as well.)
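The same works against your own Gerrit server over SSH; a sketch, where <project> is a placeholder, <NN> is the last two digits of the change number and <change> is the change number itself:
$ git fetch ssh://username@xx.xx.xx.xx:29418/<project> refs/changes/<NN>/<change>/meta
$ git log -p FETCH_HEAD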
I have a Raspberry Pi running Ubuntu that I am using to host a Discord bot. Is there any way to detect when the master branch of the bot's code repository has changed, and then run a script on the Pi to stop the bot, pull the changes and restart it? I have already written the script; I'm just not sure how to trigger it.
There are two main solutions to this: cronjobs and webhooks. A cronjob is likely the easiest, but webhooks give more external control.
Cronjobs
A cronjob will run a script at a given interval. If you set a cronjob on the Pi to run git fetch, it will update the remote-tracking refs. You can then use git rev-parse <branch> to get the commit ID of a given branch. So in bash you can do:
#!/bin/bash
REMOTE=origin
BRANCH=master

# Update the remote-tracking refs, then compare the local branch to the
# remote branch and run the deploy script only if they differ.
git fetch "$REMOTE"
if [[ "$(git rev-parse "$BRANCH")" != "$(git rev-parse "$REMOTE/$BRANCH")" ]]; then
    # Run your script, e.g. (placeholder path):
    ./deploy.sh
fi
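Assuming the script above is saved on the Pi as /home/pi/check-and-deploy.sh and the repository lives in /home/pi/discord-bot (both placeholder paths), a crontab entry (added with crontab -e) that checks every five minutes could look like this:
*/5 * * * * cd /home/pi/discord-bot && /home/pi/check-and-deploy.sh >> /home/pi/check-and-deploy.log 2>&1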
Webhooks
If you are hosting your code on a major platform - like GitLab, GitHub, or Bitbucket - it likely supports webhooks for this. A webhook is where you give the repo host a URL which they will call when a certain event happens (such as a push to master). If you are using somebody else's repository, you can make a fork that mirrors theirs and add the webhook to your fork.
This requires your Pi to be accessible from the web and running an HTTP server; you will also need either a static IP address or a DDNS service for this to work.
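A very rough sketch of such a receiver, assuming ncat (from the nmap package) is installed and deploy.sh is the stop/pull/restart script from the question; it runs the script for any incoming connection without verifying the payload, so a dedicated webhook receiver is the more robust choice:
ncat --listen --keep-open 9000 --sh-exec '
    # Send a minimal HTTP response so the sender does not hang, then kick
    # off the deploy script in the background.
    printf "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
    /home/pi/deploy.sh >> /home/pi/deploy.log 2>&1 &
'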
I'm trying to use a post-receive git hook to automate the deploy of a simple Maven project by triggering a Jenkins pipeline I set up. The source is hosted in a GitHub repo, while Jenkins runs in a container on my PC. So far, the hook is not triggered after I push to the master branch.
Thing is, if I run the script manually it just works! I also tried setting chmod +x on the post-receive file with Git Bash (after all, I'm on Windows), unfortunately without success: the hook still does not get triggered. What might be the issue?
I already tried looking for answers on similar topics here on Stack Overflow, but nothing solved my issue. FYI, the post-receive script is below (nothing fancy, as you can see):
#!/bin/bash
JENKINS_URL="http://localhost:8080"
JOB="deploy-to-slave-pipeline"
JENKINS_CREDENTIALS="theuser:11d422ee679503eeb328c5b1998327cc7f"

echo "Triggering Jenkins job..."
# Fetch a CSRF crumb, then trigger the build with it. Note the double quotes
# around the crumb URL so that $JENKINS_URL actually expands.
crumb=$(curl -u "$JENKINS_CREDENTIALS" -s "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")
curl -u "$JENKINS_CREDENTIALS" -H "$crumb" -X POST "$JENKINS_URL/job/$JOB/build?delay=0sec"
EDIT
As pointed out by @bk2204, post-receive is a server-side hook. What I needed was a webhook, which can be set up in the Settings → Webhooks page of your GitHub repo. Just configure it so that the Payload URL is your Jenkins URL followed by /github-webhook/.
Then all you have to do is set your Jenkins job to get triggered by GitHub, by checking the related option in the Build Triggers section.
And then you're good to go! Also, if you're running your Jenkins instance locally, you could use ngrok to expose it and test your CI/CD pipeline!
[ref. https://dzone.com/articles/adding-a-github-webhook-in-your-jenkins-pipeline]
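For example (a sketch; ngrok generates the public forwarding URL for you):
ngrok http 8080
# ngrok prints a forwarding URL (something like https://<random-id>.ngrok.io);
# use that URL followed by /github-webhook/ as the webhook's Payload URL.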
A post-receive hook is run on the server side, not on the client side. That means it runs at GitHub (assuming you're pushing to GitHub) and not on your local machine.
Normally, you'd want a GitHub webhook to notify you of the push event, but you cannot use one here because the machine is running on localhost: the endpoint has to be reachable on a public IP address, since GitHub has to send an HTTP request to it.
Due to build time restrictions on Docker Hub, I decided to split the Dockerfile of a time-consuming automated build into three files.
Each one of those "sub-builds" finishes within Docker Hub's time limits.
I have now the following setup within the same repository:
| branch | dockerfile | tag |
| ------ | ------------------ | ------ |
| master | /step-1.Dockerfile | step-1 |
| master | /step-2.Dockerfile | step-2 |
| master | /step-3.Dockerfile | step-3 |
The images build on each other in the following order:
step-1.Dockerfile : FROM ubuntu
step-2.Dockerfile : FROM me/complex-image:step-1
step-3.Dockerfile : FROM me/complex-image:step-2
A separate web application triggers the building of step-1 using the "build trigger" URL provided by Docker Hub (to which the {"docker_tag": "step-1"} payload is added). However, Docker Hub doesn't provide a way to automatically trigger step-2 and then step-3 afterwards.
How can I automatically trigger the following build steps in their respective order? (i.e., trigger step-2 after step-1 finishes; then trigger step-3 after step-2 finishes.)
NB: I don't want to set up separate repositories for each of the step-i images and then link them using Docker Hub's "Repository Links." I just want to link tags in the same repository.
Note: until now, my solution has been to attach a Docker Hub webhook to a web application that I've made. When step-n finishes (i.e., Docker Hub calls my web application's URL with a JSON payload containing the tag name of step-n), the web application uses the "build trigger" to trigger step-n+1. It works as expected; however, I'm wondering whether there's a "better" way of doing things.
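For reference, the trigger call made by the web application is roughly the following (a sketch; TRIGGER_URL stands for the secret build trigger URL shown on the repository's Build Triggers page):
curl -X POST \
     -H "Content-Type: application/json" \
     --data '{"docker_tag": "step-2"}' \
     "$TRIGGER_URL"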
As requested by Ken Cochrane, here are the initial Dockerfile and the "build script" that it uses. I was just trying to dockerize Cling (a C++ interpreter). It needs to compile LLVM, Clang and Cling. As you might expect, depending on the machine, it needs a few hours to do so, and Docker Hub allows "only" 2-hour builds at most :) The "sub-build" images that I added later (still in the develop branch) each build a part of the whole thing. I'm not sure that there is any further optimization to be made here.
Also, in order to test various ideas (and avoid waiting hours for the result), I have set up another repository with a similar structure (the only difference is that its Dockerfiles don't do as much work).
UPDATE 1: On Option 5: as expected, the curl from step-1.Dockerfile has been ignored:
Settings → Build Triggers → Last 10 Trigger Logs
| Date/Time | IP Address | Status | Status Description | Request Body | Build Request |
| ------------------------- | --------------- | ------- | ------------------------ | -------------------------- | ------------- |
| April 30th, 2016, 1:18 am | <my.ip.v4.addr> | ignored | Ignored, build throttle. | {u'docker_tag': u'step-2'} | null |
Another problem with this approach is that it requires me to put the build trigger's (secret) token in the Dockerfile for everyone to see :) (hopefully, Docker Hub has an option to invalidate it and regenerate another one)
UPDATE 2: Here is my current attempt:
It is basically a Heroku-hosted application with an APScheduler periodic "trigger" that starts the initial build step, and a Flask webhook handler that "propagates" the build (i.e., it holds the ordered list of build tags; each time the webhook calls it, it triggers the next build step).
I recently had the same requirement to chain dependent builds, and achieved it this way using Docker Cloud automated builds:
Create a repository with build rules for each Dockerfile that needs to be built.
Disable the Autobuild option for all build rules in dependent repositories.
Add a shell script named hooks/post_push in each directory containing a Dockerfile that has dependents, with the following code:
# POST to each dependent repository's build trigger, passing along the
# branch/tag that was just built.
for url in $(echo "$BUILD_TRIGGERS" | sed "s/,/ /g"); do
    curl -X POST -H "Content-Type: application/json" --data "{ \"build\": true, \"source_name\": \"$SOURCE_BRANCH\" }" "$url"
done
For each repository with dependents, add a Build Environment Variable named BUILD_TRIGGERS to the automated build, and set the Value to a comma-separated list of the build trigger URLs of each dependent automated build.
Using this setup, a push to the root source repository will trigger a build of the root image. Once it completes and is pushed, the post_push hook will be executed. In the hook, a POST is made to each dependent repository's build trigger, containing the name of the branch or tag being built in the request's body. This will cause the appropriate build rule of the dependent repository to be triggered.
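For example, the BUILD_TRIGGERS value on the root repository might look like this (hypothetical placeholders; the real URLs are copied from the Build Triggers page of each dependent automated build):
BUILD_TRIGGERS=<trigger-url-of-dependent-1>,<trigger-url-of-dependent-2>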
How long is the build taking? Can you post your Dockerfile?
Option 1: Find out what is taking so long in your automated build and why it isn't finishing in time. If you post it here, we can see if there is anything you can do to optimize.
Option 2: What you are already doing now: using a 3rd-party app to trigger the builds in the given order.
Option 3: I'm not sure if this will work for you, since you are using the same repo, but normally you would use repo links for this feature and chain them: when one finishes, it triggers the next. But since you have one repo, it won't work.
Option 4: Break it up into multiple repos; then you can use repo links.
Option 5: Total hack, last resort (not sure if it will work). Add a curl command as the last line of your Dockerfile that POSTs to the repo's build trigger link with the tag for the next step. You might need to add a sleep in the next step to wait for the previous image to finish being pushed to the Hub, if the next step needs that tag.
Honestly, the best one is Option 1: whatever you are doing should be able to finish in the allotted time; you are probably doing some things we can optimize to make the whole thing faster. If you get it to come in under the time limit, then nothing else is needed.
It's possible to do this by tweaking the Build Settings in the Docker Hub repositories.
First, create an Automated Build for /step-1.Dockerfile of your GitHub repository, with the tag step-1. This one doesn't require any special settings.
Next, create another Automated Build for /step-2.Dockerfile of your GitHub repository, with the tag step-2. In the Build Settings, uncheck "When active, builds will happen automatically on pushes". Also add a Repository Link to me/step-1.
Do the same for step-3 (linking it to me/step-2).
Now, when you push to the GitHub repository, it will trigger step-1 to build; when that finishes, step-2 will build, and after that, step-3 will build.
Note that you need to wait for the previous stage to successfully build once before you can add a repository link to it.
I just tried the other answers and they are not working for me, so I invented another way of chaining builds by using a separate branch for each build rule, e.g.:
master # This is for docker image tagged base
docker-build-stage1 # tag stage1
docker-build-latest # tag latest
docker-build-dev # tag dev
in which stage1 depends on base, latest depends on stage1, and dev is based on latest.
In the post_push hook of each dependency, I call the script below, passing the branch of its direct dependent as the first argument:
#!/bin/bash -x
git clone https://github.com/NobodyXu/llvm-toolchain.git
cd llvm-toolchain
git checkout ${1}
git merge --ff-only master
# Set up push.default for push
git config --local push.default simple
# Set up username and passwd
# About the credential, see my other answer:
# https://stackoverflow.com/a/57532225/8375400
git config --local credential.helper store
echo "https://${GITHUB_ROBOT_USER}:${GITHUB_ROBOT_ACCESS_TOKEN}@github.com" > ~/.git-credentials
exec git push origin HEAD
The variables GITHUB_ROBOT_USER and GITHUB_ROBOT_ACCESS_TOKEN are environment variables set in the Docker Hub automated build configuration.
Personally, I prefer to register a new robot account on GitHub specifically for this, with two-factor authentication enabled, invite it as a collaborator, and use an access token instead of a password. This is safer than using your own account, which has access to far more repositories than needed, and it is also easier to manage.
You need to disable the repository link, otherwise there will be a lot of unexpected build jobs in Docker Hub.
If you want to see a demo of this solution, check NobodyXu/llvm-toolchain.
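As a rough sketch of how the pieces fit together, assuming the script above is saved as merge_and_push.sh (a hypothetical name), the hooks/post_push file on the master branch would contain something like:
#!/bin/bash
# Fast-forward the branch of the direct dependent (stage1) so that its
# Docker Hub build rule fires once this image has been built and pushed.
./merge_and_push.sh docker-build-stage1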
I would like to alter the commit rules for Gerrit, but somehow I am unable to follow the steps described in the cookbook (for example here:
http://saros-build.imp.fu-berlin.de/gerrit/Documentation/prolog-cookbook.html#_the_rules_pl_file)
On my local gerrit system I simply created an empty project
ssh user@localhost -p 29418 gerrit create-project --empty-commit --name demo-project
Next I cloned the new project
git clone ssh://user@localhost:29418/demo-project
Then according to the description I tried
/demo-project (master)% git fetch origin refs/meta/config:config
which resulted in
fatal: Couldn't find remote ref refs/meta/config
Could you tell me what I am doing wrong? Feels like something very basic...
Thanks,
JS
It was due to missing access rights.
I had to grant the user explicit Read/Submit rights on refs/meta/config in Gerrit, despite the user being in the Administrators group.
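With that permission in place, the fetch from the cookbook works and the project's rules can be edited and pushed back, roughly like this (a sketch of the cookbook's workflow; pushing also requires push rights on refs/meta/config):
git fetch origin refs/meta/config:config
git checkout config
# edit rules.pl / project.config, commit, then push the branch back:
git push origin HEAD:refs/meta/config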
Jörg