Trigger Bamboo plan from Bitbucket webhooks - continuous-integration

I have spent a couple of hours trying to figure out why I am not able to trigger a Bamboo build from a Bitbucket webhook, and I have found nothing yet.
Issue:
I want to track when a PR is merged or a branch is deleted, which as far as I can tell cannot be tracked from Bamboo itself, so I need to have a webhook in Bitbucket that calls a Bamboo REST API, based on the page below, if there is no better idea.
Based on this page I thought I could trigger a build via a webhook:
https://confluence.atlassian.com/bamboo/triggering-a-bamboo-build-from-bitbucket-cloud-using-webhooks-873949130.html
But this solution is not working, because each time I get this error message:
{"message":"Anonymous user can't access this resource. If it should be available, modify anonymous user permissions at Administration > Security settings","status-code":401}
The only access we have for the Anonymous group is 'view', which as I understand is not enough to call this API from Bitbucket:
https://confluence.atlassian.com/bamboo/bamboo-permissions-369296034.html
So I don't know what to do or how to track whether a PR is merged or a branch is deleted.
I would appreciate it if someone could tell me what the problem is.
FYI: the Bamboo and Bitbucket versions are the latest.
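For context, the kind of REST call such a webhook would ultimately need to make looks roughly like the sketch below, using Bamboo's queue endpoint for triggering a plan build. The server URL, plan key PROJ-PLAN, and service-account credentials are placeholders; an unauthenticated version of this request is exactly what produces the 401 above.

require 'net/http'
require 'uri'

# Trigger a plan build through Bamboo's REST API. Host, plan key and
# credentials are placeholders; an authenticated service account avoids
# the anonymous-user 401.
uri = URI('https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN')
request = Net::HTTP::Post.new(uri)
request.basic_auth('ci-user', 'secret')

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts "#{response.code} #{response.body}"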

What is your Bamboo version? This was addressed in Bamboo 6.7.0: at Bamboo > Administration > Security settings you can grant or deny anonymous users access to a given webhook.

The easiest way is to enable triggers for anonymous users. But, as @Hamed mentioned, allowing anonymous access is not feasible in some environments. The problem is that we cannot even go with <User>:<Password>@<Bamboo URL>, as the auth details get stripped off.
One possible way of doing this is to keep a proxy between Bitbucket and Bamboo and add the authentication headers at the proxy level.
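A minimal sketch of that proxy, assuming nginx: a server block that injects a Basic auth header before forwarding the webhook calls to Bamboo. The host names and the base64 credential string are placeholders.

# nginx reverse proxy that adds credentials to incoming webhook calls
server {
    listen 443 ssl;
    server_name hooks.example.com;  # placeholder; point the Bitbucket webhook here

    location /bamboo/ {
        # <base64> is the base64 encoding of ci-user:password (placeholder)
        proxy_set_header Authorization "Basic <base64>";
        proxy_pass https://bamboo.example.com/;  # placeholder Bamboo host
    }
}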

Related

Artifactory deploy issue - unauthorized

I am deploying artifacts to JFrog Artifactory in the cloud, which is throwing 'unauthorized' for a few artifact deployments but not for others.
Did anyone face a similar issue?
Also, I would like to check if there is a way to restore the initial Artifactory user permissions. I made some changes to the permissions, and now many options that were there initially are missing (I am not able to create users, groups, or repositories, and I cannot see the default repositories).
Can someone advise how to restore the default settings for this user?
Below are the answers to your questions.
I am deploying artifacts to JFrog Artifactory in the cloud, which is throwing 'unauthorized' for a few artifact deployments but not for others.
Did anyone face a similar issue?
Answer: It's weird; if a user has permission to deploy to a particular repository, then that same user can deploy any artifact to that repository, unless an include/exclude pattern is set at the repository level.
Further details can be found here.
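As a hypothetical illustration of such a pattern: if the target repository's include pattern is com/mycompany/**, then deploying com/mycompany/app/1.0/app-1.0.jar succeeds, while deploying org/thirdparty/lib/1.0/lib-1.0.jar is rejected with an 'unauthorized' response, even though the user's permissions are identical in both cases. The paths and pattern here are made up for the example.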
I would like to check if there is a way to restore the initial Artifactory user permissions, as I made some changes to permissions and now many options that were there initially are missing (not able to create users, groups, or repositories; couldn't see the default repositories).
Can someone advise how to restore the default settings for this user?
Answer: As it's a cloud instance, we will not have any control over restoring the user's permissions to a previous state. If it were an on-prem instance, this could be achieved by restoring from a backup. You can reach out to the JFrog Support cloud team and check if they can help you with it.
If you remember the user's previous permissions, you can log in with the admin user and set them; otherwise, the other option is to create a new user with the required permissions.

GitLab Custom CI configuration path and merge request

For one of our repositories we set "Custom CI configuration path" inside GitLab to a remote gitlab-ci.yml. We want to do this to prevent developers from changing the gitlab-ci.yml file (protected files are only available in EE Premium and up). But apart from this purpose, the Custom CI configuration path feature should work for merge requests anyway.
Being in repo
group1/repo1
we set
.gitlab-ci.yml@group1/repo1-ci
The repo1-ci repository exists, and CI works correctly when we push to the configured branches, etc.
For Merge Request functionality GitLab tells us:
Detached merge request pipeline #123 failed for ...
Project group1/repo1-ci not found or access denied!
We added the developers to the repo1-ci repo as Developers, so they would be able to read the files. It does not help. In any case, the expectation is that the pipeline is not run with user permissions, so it should simply find the gitlab-ci.yml file.
Any ideas on this?
So our expectations were right, and it seems that we have to add one important thing to our considerations:
If a user interacts in the GitLab UI with the merge request features and you are using "Custom CI configuration path" for your gitlab-ci.yml file, please ensure that:
- the user has at least read permission on that remote file, even if you moved it to another repo on purpose (e.g. to use enhanced file protection in PREMIUM/ULTIMATE, or to push/merge-protect the branches against the Developer role);
- the user's permission change has been applied to a running session.
The last part failed for our users, as it only worked one day later. It seems that they just continued working from their open merge request page, and GitLab checks the accessibility out of that session (using a cookie, token, or something similar that was not updated with the new access to the remote repo/file).
It works!
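If you need to apply that read permission for many users, one way is the GitLab members API. A hedged sketch: the project path, user id, and token below are placeholders, and access level 20 corresponds to Reporter, which is enough to read files.

require 'net/http'
require 'uri'

# Grant a user Reporter access on the CI config repo via the GitLab API.
# The project path is URL-encoded; user id and token are placeholders.
uri = URI('https://gitlab.example.com/api/v4/projects/group1%2Frepo1-ci/members')
request = Net::HTTP::Post.new(uri)
request['PRIVATE-TOKEN'] = 'REDACTED'
request.set_form_data('user_id' => '42', 'access_level' => '20')  # 20 = Reporter

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts "#{response.code} #{response.body}"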

How to create a webhook between Bitbucket and Azure DevOps?

We have all our repositories in Bitbucket, and I'm trying to set up a continuous integration service in Azure DevOps that would build the project after each push.
We have created a dedicated user account for the Bitbucket repositories that has read-only access to all repositories.
However, creating a CI webhook trigger from Bitbucket to Azure DevOps requires admin access to the repositories. We do not want to give that level of access to the CI user account.
I could add the webhook to the Bitbucket repository manually, but I'm missing the URL to which the webhook should post the trigger.
The URL is something like https://dev.azure.com/myorganization/_apis/public/hooks/externalEvents?publisherId ...
I think it's called a deployment trigger URL, but I cannot find it anywhere. Does the new Azure DevOps support manually adding webhooks, or do we have to work around it somehow?
I'm in the same boat with you all. I don't want to give my CI account "Admin" rights to ANY repo.
My workaround so far has been to give the CI account temporary access in order to create the webhook when the pipeline is first saved, then downgrade it after the webhook has been created, knowing that any changes will require another temporary permission elevation.
FWIW, the webhook URL that is used is this:
https://[REDACTED].visualstudio.com/_apis/public/hooks/externalEvents?publisherId=bitbucket&channelId=[REDACTED]&api-version=5.1-preview
As you can see, we are kind of in an understandable Catch-22 here, because we could conceivably create the pipeline and get that channelId to use to manually create the webhook in Bitbucket, but can't even SAVE a pipeline without repo Admin rights, so we can't get the channelId.
I wish there was a way to disable the webhook creation so we could manually create it on the Bitbucket side.
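For what it's worth, if you do manage to capture the externalEvents URL once (e.g. during a temporary permission elevation), you can create the webhook on the Bitbucket side through Bitbucket Cloud's REST API instead of the UI. A minimal sketch, assuming placeholder workspace, repo slug, and app-password credentials; the channelId still has to come from an existing pipeline.

require 'net/http'
require 'uri'
require 'json'

# Create a push webhook on a Bitbucket Cloud repository by hand.
# Workspace, repo slug and credentials are placeholders.
uri = URI('https://api.bitbucket.org/2.0/repositories/myworkspace/myrepo/hooks')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.basic_auth('ci-user', 'app-password')
request.body = {
  description: 'Azure DevOps trigger',
  url: 'https://dev.azure.com/myorganization/_apis/public/hooks/externalEvents?publisherId=bitbucket&channelId=...&api-version=5.1-preview',
  active: true,
  events: ['repo:push']
}.to_json

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts "#{response.code} #{response.body}"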
I know that this has been a long time since it was asked, but recently I was faced with the exact same issue and I thought I should add this here for anyone struggling to find out where these URLs are coming from.
I was seeing in Bitbucket two webhooks in the format https://dev.azure.com/[myorganization]/_apis/public/hooks/externalEvents?publisherId=... and I was trying to figure out how these were created in the first place.
As it turns out, when you create a new Bitbucket pipeline in Azure and select a repository for this pipeline, Azure automatically creates these webhooks for us in Bitbucket! In other words, there doesn't seem to be a way to deduce these URLs from anywhere; rather, they are created by Azure upon creation of the pipeline, and they are deleted by Azure once you delete the pipeline from Azure.

How to push from GitLab to GitHub with webhooks

My Google-fu is failing me for what seems obvious if I can only find the right manual.
I have a GitLab server which was installed by our hosting provider.
The GitLab server has many projects.
For some of these projects, I want GitLab to automatically push to a remote repository (in this case GitHub) every time there is a push from a local client to GitLab.
Like this: client --> gitlab --> github
Any tags and branches should also be pushed.
AFAICT I have 3 options:
1. Configure the local client with two remotes and push simultaneously to GitLab and GitHub. I want to avoid this because developers.
2. Add a git post-receive hook in the repository on the GitLab server. This would be the most flexible (I have sufficient Linux experience to write shell scripts as git hooks) and I have found documentation on how to do this, but I want to avoid it too, because then the hosting provider would need to give me shell access.
3. Use webhooks in GitLab. I am unfamiliar with even the basics of webhooks, and I am unable to locate understandable documentation or even a simple step-by-step example. This is the documentation from GitLab that I found, and I do not understand it: http://demo.gitlab.com/help/web_hooks/web_hooks
I would appreciate good pointers, and I will summarize and document a solution when I find it.
EDIT
I'm using this Ruby code for a webhook:
require 'sinatra/base'
require 'json'

class PewPewPew < Sinatra::Base
  post '/pew' do
    # GitLab POSTs the push event as a JSON body.
    push = JSON.parse(request.body.read)
    puts "I got some JSON: #{push.inspect}"
  end
end

PewPewPew.run! if __FILE__ == $0
Next: find out how to tell the GitLab server that it has to push a repository. I am going back to the GitLab API.
EDIT
I think I have an idea. On the server where I run the webhook, I pull from GitLab and then I push to GitHub. I can even do some "magic" (running tests, building jars, deploying to Artifactory, ...) before I push to GitHub. In fact, it would be great if Jenkins were able to push to a remote repository after a successful build; then I wouldn't need to write my own webhook, because I'm pretty sure Jenkins already provides a webhook for GitLab, either natively or via a plugin. But I don't know. Yet.
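In the meantime, here is a minimal sketch of that pull-then-push idea, building on the Sinatra handler above. It assumes a bare mirror clone created beforehand (e.g. with git clone --mirror) at a hypothetical path, with a "github" remote already added; the endpoint name and the object_kind check (present in GitLab push payloads) are assumptions too.

require 'sinatra/base'
require 'json'

class MirrorHook < Sinatra::Base
  # Hypothetical local bare mirror clone of the GitLab repository.
  MIRROR_PATH = '/var/repos/myproject.git'

  post '/mirror' do
    event = JSON.parse(request.body.read)
    # GitLab identifies the event type in the payload; only react to pushes.
    halt 200 unless event['object_kind'] == 'push'

    # Refresh everything from GitLab (remote "origin"), then mirror all
    # branches and tags to GitHub (remote "github", configured beforehand).
    system('git', '--git-dir', MIRROR_PATH, 'fetch', '--prune', 'origin') or halt 500
    system('git', '--git-dir', MIRROR_PATH, 'push', '--mirror', 'github') or halt 500
    200
  end
end

MirrorHook.run! if __FILE__ == $0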
EDIT
I solved it in Jenkins.
You can set more than one git remote in a Jenkins job. I used Git Publisher as a post-build action and it worked like a charm, exactly what I wanted.
Option 1 would work, of course.
Option 2 is possible but dangerous, because gitlab-shell automatically symlinks hooks into repositories for you, and those are necessary for permission checks (https://github.com/gitlabhq/gitlab-shell/tree/823aba63e444afa2f45477819770fec3cb5f0159/hooks), so I'd rather stay away from it.
Option 3: webhooks are not directly suitable; they make an HTTP request in a fixed format on certain events (in your case, push), not Git protocol requests.
Of course, you could write a server that consumes the hook, clones, and pushes, but a service (single push and no deployment) or GitLab CI (which already implements hook management) would be strictly better solutions.
Services would be the best option if someone implemented one: they live in the source tree, would do a single push, and require no extra deployment overhead.
GitLab CI, or other CIs like Jenkins, is the best option currently available. These are essentially already-implemented servers for the webhooks, which automatically clone for you; all you have to do then is push from them.
The keywords you want to Google for are "gitlab mirror github". That has led me to GitLab repository mirroring, for instance. There seems to be no perfect, easy solution today.
Also, this has already been proposed at the feature request forum: http://feedback.gitlab.com/forums/176466-general/suggestions/4614663-automatic-push-to-remote-mirror-repo-after-push-to Always check there ;) and go upvote the request.
The key difficulty now is how to store the push credentials.
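One possible approach to the credentials problem, sketched under the assumption that you control the machine running the mirror: give the mirror its own deploy key with write access on the GitHub side, keep the key outside the repository, and point git at it only for this push. Paths and names below are placeholders, and GIT_SSH_COMMAND requires Git 2.3 or later.

# one-time setup: add GitHub as a second remote on the bare mirror
git --git-dir /var/repos/myproject.git remote add github git@github.com:myorg/myproject.git

# push using the dedicated deploy key (hypothetical path)
GIT_SSH_COMMAND='ssh -i /var/repos/keys/github_deploy -o IdentitiesOnly=yes' \
git --git-dir /var/repos/myproject.git push --mirror github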
I solved it in Jenkins. You can set more than one git remote in a Jenkins job. I used Git Publisher as a post-build action and it worked like a charm, exactly what I wanted.
I added "<job>-publisher" jobs that run after "<job>" is built successfully. I could have done it in one job, but I decided to split it up. The build jobs are triggered by a webhook in GitLab; the publisher jobs use an @daily schedule from the BuildResultTrigger plugin.

How can I prevent pseudo-users from being created for anonymous Hudson / Jenkins job builds?

With the Hudson or Jenkins continuous integration servers, when a build is triggered either by an anonymous user, or by the CI server polling the repository, a pseudo-user is created with the data scraped from the commit information of the last commit.
How do I prevent this, as it's cluttering the list of registered users? I try to default to using post-receive hooks for scheduling builds, but for some repositories (e.g. those hosted on SourceForge) this is not an option, as the machine running the repository is prevented from accessing external URLs.
You can't prevent these from being created, as they are involved in how Jenkins logging and tracking works. However, if you need to see a list of only "real" users, you can do this easily by going to Manage Jenkins > Manage Users; users that lack a login will not appear there.
