Git extra hook, pre-upload - ruby

I'm a university student. For my Final Year Project I need a hook that executes automatically on the server side of the repository before objects are pulled to the client side (the counterpart of the "post-receive" hook, which runs after objects have been pushed to the server side).
I have done a lot of research on this and came across grack and rjgit_grack; links are attached below.
Grack : https://github.com/grackorg/grack
Rjgit_grack : https://github.com/grackorg/rjgit_grack
Grack and rjgit_grack are gems from https://rubygems.org/. The Grack project aims to replace the built-in git-http-backend CGI handler distributed with C Git with a Rack application, and rjgit_grack is an alternative adapter for Grack that supports extra Git hooks, including the one I need: a "preUpload" hook that executes immediately before an upload operation is performed, i.e. before data is sent to the client. However, I was unable to get it to work for my project for various reasons.
Why doesn't Git have a hook that executes immediately before data is sent to the client? Any advice on getting such a hook, or something with similar functionality?

The idea behind Git hooks is that they do at least one of two things: change the state of the repository or the current operation, or optionally block a behavior from occurring.
A hook that executes before objects are pushed wouldn't be able to do either one of these: it has no data about how to change the repository, since it doesn't have any objects, and without any data about what's going on, there's no way that it can usefully determine whether to block a push.
For your project, it might be useful to wrap the git HTTP backend script with a wrapper that executes before the push service starts, or one that looks for the flush pkt-line delimiter (0000) by intercepting the data. You could also patch Git to have the hook you need for your project at the appropriate location.
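As a sketch of the wrapper idea: Git's Smart HTTP protocol routes fetches through `/info/refs?service=git-upload-pack` and `/git-upload-pack`, so a thin middleware can run a check before delegating those requests to the real backend. The following is a hypothetical illustration in Python (a WSGI middleware), not grack's or rjgit_grack's actual API; the `pre_upload_hook` callable and the inner `backend` app are placeholder names.

```python
from urllib.parse import parse_qs

def requested_service(path_info, query_string):
    """Identify which Git Smart HTTP service a request targets."""
    if path_info.endswith("/git-upload-pack"):
        return "upload-pack"
    if path_info.endswith("/git-receive-pack"):
        return "receive-pack"
    if path_info.endswith("/info/refs"):
        svc = parse_qs(query_string).get("service", [""])[0]
        return svc.removeprefix("git-") or None
    return None

class PreUploadMiddleware:
    """Run a hook before any upload (fetch/clone) request reaches the backend."""

    def __init__(self, backend, pre_upload_hook):
        self.backend = backend       # the real git-http-backend WSGI app
        self.hook = pre_upload_hook  # callable(environ) -> bool: allow upload?

    def __call__(self, environ, start_response):
        svc = requested_service(environ.get("PATH_INFO", ""),
                                environ.get("QUERY_STRING", ""))
        if svc == "upload-pack" and not self.hook(environ):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"upload rejected by pre-upload hook\n"]
        return self.backend(environ, start_response)
```

The same routing logic applies whichever server you wrap; only the two upload endpoints need to be intercepted, and receive-pack (push) traffic passes through untouched.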

Alter 'status' request interval of CloudBuild submit

I'm trying to set up CI/CD for a mono repository using Google Cloud Build. We have a single Cloud Build trigger that starts a build on a new commit; it does some general steps and then starts a build for every (micro)service in the mono repository using gcloud builds submit.
This means, however, that if 4 or 5 people push code to the repository at roughly the same time, we can have around 50-70 concurrent builds running in Cloud Build. That in itself isn't an issue for us. The only issue is that when this happens, the following error pops up:
{
  "code": 429,
  "message": "Quota exceeded for quota metric 'Build and Operation Get requests' and limit 'Build and Operation Get requests per minute' of service 'cloudbuild.googleapis.com' for consumer 'project_number:<PROJECT_NUMBER>'.",
  "status": "RESOURCE_EXHAUSTED",
  "details": [{
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "reason": "RATE_LIMIT_EXCEEDED",
    "domain": "googleapis.com",
    "metadata": {
      "service": "cloudbuild.googleapis.com",
      "consumer": "projects/<PROJECT_NUMBER>",
      "quota_limit": "GetRequestsPerMinutePerProject",
      "quota_metric": "cloudbuild.googleapis.com/get_requests"
    }
  }]
}
In other words: we are running into quota limits. The quota allows us to make only 900 operational requests per minute.
We already tried switching to private pools in the hope that the above quota limit was only there for when you don't use private pools, but this unfortunately still makes us hit the quota.
Now, I am trying to find out if I can decrease the amount of these operational requests.
A possible solution might be related to how I am using gcloud builds submit. When you run gcloud builds submit, it starts a new build, waits for the build to finish, and shows the build's output. To achieve this, I presume gcloud polls every few seconds to find out the status of the build. I suspect these 'status' requests are what hits the Cloud Build quota, which is why I'm trying to see how I can lower the number of these requests per minute.
One option is to simply decrease the number of builds running in parallel, which is unfortunately not an option in my situation: executing them sequentially simply takes more time than is acceptable.
Another option would be to increase the time between such 'status' requests. However, on this page I unfortunately did not find a CLI flag to alter this.
Note: I did find the --async flag, but that does NOT help me, since I still want the process to wait until the build has succeeded. I also found --suppress-logs, which does NOT help either, since those requests presumably don't go to Cloud Build but to the GCS bucket where the logs are stored.
The only option left that I can think of is to start my builds with the --async flag and then manually poll, at a longer interval, to see whether each build has succeeded. However, that feels like a lot of manual work, for which I'd need to write and maintain some bash scripts. This preferably isn't a path I would like to take unless really necessary.
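For what it's worth, the --async-plus-polling route doesn't have to be much more than a loop. Below is a hedged Python sketch: the `get_status` callable is a stand-in for however you query a build (e.g. shelling out to `gcloud builds describe BUILD_ID --format='value(status)'` — an assumption to verify against your gcloud version). The point is that you control the interval, so even 50-70 builds polled every 30 seconds stays far under 900 requests per minute.

```python
import time

def wait_for_builds(build_ids, get_status, interval_seconds=30, timeout_seconds=3600):
    """Poll all builds at a controlled interval until each leaves a pending state.

    get_status(build_id) -> str, e.g. "QUEUED", "WORKING", "SUCCESS", "FAILURE".
    Returns a dict of build_id -> final status; raises TimeoutError on timeout.
    """
    pending = set(build_ids)
    final = {}
    deadline = time.monotonic() + timeout_seconds
    while pending:
        for build_id in list(pending):
            status = get_status(build_id)
            if status not in ("QUEUED", "WORKING", "PENDING"):
                final[build_id] = status
                pending.discard(build_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"builds still running: {sorted(pending)}")
            time.sleep(interval_seconds)
    return final
```

The build IDs would come from the submit step, e.g. capturing the output of `gcloud builds submit --async` for each service (again, verify the exact flags and output format on your gcloud version).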
Does anyone know of another way of achieving this?
If 4 or 5 people are pushing code to the repository
This shouldn't happen. The reason is that you should use the "push" trigger on the main branch, not on a development branch.
What do I mean by this?
I mean that builds should occur on the main branch, which corresponds to the joint effort of those five users plus a responsible party in charge of unifying their changes.
So, really, your users should be pushing to the development branch, and pushes to main should be reserved for things that need to be built.
How can we work around this if we're only allowed one branch or are required to have updates visible on one branch?
My recommendation would be to use the tag filter, i.e. filter the pushes by tag, as mentioned in the documentation. That way only the pushes of the person in charge of merging the changes will be built (assuming that person pushes the tag you've set).
TL;DR
Don't create push triggers for Cloud Build on a branch multiple people are working on. Either create the trigger with a tag filter or have separate development and main branches (people work on dev; builds are only made from pushes to main).

VAADIN: Size of UI.access() push queue

I would like to monitor the pushes to my clients done with the well-known UI.access() sequence on the server side.
The background is that I have to propagate lots of pushes to my clients, and I want to make sure nothing gets queued up.
The only thing I found is the client-side RPCQueue, which has a size(), but I have no idea whether that is the right thing to look at, nor how to access it.
Thanks for any hint.
Gerry
If you want to know the size of the queue of tasks that have been enqueued using UI.access but not yet run, then you can use VaadinSession.getPendingAccessQueue.
This will, however, not give the full picture since it doesn't cover changes that have been applied to the server-side state (i.e. the UI.access task has already been executed) but not yet sent to the client. Those types of changes are tracked in a couple of different places depending on the type of change and the Vaadin version you're using.
For this kind of use case, it might be good to use the built-in beforeClientResponse functionality to apply your own changes as late as possible instead of applying changes eagerly.
With Vaadin versions up to 8, you do this by overriding the beforeClientResponse method in your component or extension class. You need to use markAsDirty() to ensure that beforeClientResponse will eventually be run for that instance.
With Vaadin 10 and newer, there's instead UI.beforeClientResponse, to which you give a callback that the framework runs once at an appropriate time.

How to trigger perforce changes before the submit using TeamCity

I currently have a CI system that triggers on submit to a particular stream, then builds and tests the change.
However, as said, this happens upon submit, meaning the change is already merged before the testing.
So my question is: how can I trigger on the changes at an earlier stage? What is the best approach?
We are not using any IDEs for development.
Thanks!
To do it on the Perforce side, you'd use a change-content trigger, which runs prior to submit while the files are available in a staging area on the server (the in-flight change is treated as a shelf and can be accessed using the #=change syntax). This allows a trigger script to access the content in-flight and reject it before it's finalized.
While a content trigger is running, the files are locked, and the submit will block the client session until it's finalized on the server and can report success, so you'd want to be careful about which codelines you enable something like this on.
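A minimal sketch of such a trigger script, assuming Python is available on the server; the depot path, the `p4 files` invocation, and the "no binary artifacts" policy are all placeholder assumptions for illustration. It would be wired into the triggers table as something like `check change-content //depot/main/... "python check_change.py %change%"`.

```python
import subprocess
import sys

def files_in_change(change):
    """List the in-flight files of a pending change via the @=change shelf syntax."""
    out = subprocess.run(
        ["p4", "files", f"//...@={change}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("#")[0] for line in out.splitlines() if line.strip()]

def violations(paths, forbidden_suffixes=(".exe", ".dll", ".zip")):
    """Pure policy check, kept separate from the p4 plumbing so it is easy to test."""
    return [p for p in paths if p.lower().endswith(forbidden_suffixes)]

if __name__ == "__main__":
    bad = violations(files_in_change(sys.argv[1]))
    if bad:
        # Any output plus a nonzero exit rejects the submit; the text is shown
        # to the submitting user.
        print("submit rejected, binary artifacts are not allowed:")
        for p in bad:
            print("  " + p)
        sys.exit(1)
    sys.exit(0)
```

In a CI-oriented setup, the same script could instead kick off a TeamCity build against the shelved files and reject or accept based on the result, at the cost of holding the submit open for the duration of the build.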

How to cancel/abort an XCode Bot Integration in Before Integration Script

I have a bot that runs on commit; it increases the build number and pushes to the same branch. I check the commit's user in the Before Integration script, and if it is the CI user (which is used only and exclusively to push the build-number commits) I want to abort the current integration. I saw this one:
https://stackoverflow.com/a/30062418/767329
/xcode/api/integrations/INTEGRATION_ID/cancel
This one makes a curl call to stop the integration, but I want to stop the current integration before it starts. I know I could also check and only push the increment commit if the bot wasn't run because of the CI user's increment commit, but I don't want even the archive step to run if it is a CI user commit (I want the integration to be aborted before it even starts).
Unfortunately there is no way to cancel an integration before it even starts. You could use a pre-integration trigger to stop the integration from going further given whatever conditions you are looking for.
If your only goal is to bump the build number, I would suggest you use the Xcode Server environment variable 'XCS_INTEGRATION_NUMBER' in your build number field.
Whenever Xcode Server integrates your project, it will automatically use the integration number as the build number. These will always be unique.
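As a sketch of the pre-integration trigger mentioned above, assuming the trigger runs in the checked-out source directory and that a nonzero exit is an acceptable way to halt the trigger chain early (both assumptions to verify on your Xcode Server version); the CI author name is a placeholder.

```python
import subprocess
import sys

CI_AUTHOR = "ci-bot"  # placeholder: the account used only for build-number commits

def last_commit_author(repo_dir="."):
    """Author name of the HEAD commit in the integrated checkout."""
    return subprocess.run(
        ["git", "log", "-1", "--format=%an"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.strip()

def should_skip(author, ci_author=CI_AUTHOR):
    """Skip integrations triggered by the CI user's own bump commits."""
    return author == ci_author

if __name__ == "__main__":
    if should_skip(last_commit_author()):
        print("HEAD is a CI build-number commit; stopping integration")
        sys.exit(1)  # nonzero exit stops the integration from going further
```

Note this still doesn't cancel the integration before it starts (per the answer above, that isn't possible); it only bails out as early as the trigger mechanism allows.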

Notify me when there is an update on the upstream of a repository I have forked

Say I have forked a repository to my github account.
Is it possible to trigger a notification if there are changes in the original repository [the upstream]?
Is it possible to trigger a webhook for that event?
Edit:
It seems that my question is not very clear.
We have been using webhooks, but those are on the repos we own and maintain, so every time there is a push or a commit, etc., we all get notified.
My question was, and is: is it possible to do the same for forked repositories whose parent we do not own? We want an event to be triggered when a change is made to the parent repository [something we do not control].
The reason for forking the repository, even when we may not add to or edit it, is to have control over the version of the code deployed into the overall project, avoiding the regression issues that cloning directly from the parent repository can cause when or if changes are made [we have scheduled auto updates for our plugins].
This process gives us code control, but it also takes a lot of time to manually sync these repos, without which we lose the updates.
Yup, there are webhooks found here.
https://developer.github.com/v3/repos/hooks/
Specifically PushEvent:
https://developer.github.com/v3/activity/events/types/#pushevent
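Note that webhooks can only be installed on repositories you administer, so for an upstream you don't control a common workaround is to poll the upstream's latest commit through the public API and fire your own notification when the SHA changes. A hedged sketch below: the comparison logic takes the fetch function as a parameter so it's easy to test, and `latest_upstream_sha` queries the public commits endpoint (subject to the usual unauthenticated rate limits).

```python
import json
import urllib.request

def latest_upstream_sha(owner, repo):
    """Newest commit SHA on the upstream's default branch via the public API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)[0]["sha"]

def check_for_update(fetch_latest_sha, last_seen_sha):
    """Compare upstream HEAD against the last SHA we recorded.

    Returns (changed, new_sha). Persist new_sha between runs (a file, a DB row,
    ...) and send your own notification, or kick off a sync, when changed is True.
    """
    current = fetch_latest_sha()
    return (current != last_seen_sha, current)
```

Run on a schedule (cron, a scheduled CI job, etc.), this also addresses the manual-sync pain: the same job that detects a new upstream SHA can fetch and merge it into the fork.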