Build a manual GitLab CI pipeline job using a specific commit ID

I need to run a GitLab CI pipeline manually, not from the latest commit on my master branch but from a specific commit ID. I have tried running the pipeline manually with the variable below, passing the commit's SHA as its value, but it has no effect.
Input variable key: CI_COMMIT_SHA

At the time of this writing, GitLab only supports branch/tag pipelines, merge request pipelines and scheduled pipelines. You can't run a GitLab pipeline for a specific commit, since the same commit may belong to multiple branches.
To do what you want, you need to create a branch from the commit you want to run the pipeline for. Then you can run the manual pipeline on that branch.
See this answer for step-by-step instructions on how to create a branch from a commit directly in the GitLab UI.
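If you prefer the command line, the same workaround can be scripted with plain git: create a branch pointing at the commit, push it, and run the manual pipeline on that branch. A minimal self-contained sketch (it builds a throwaway local repo; the branch name and commits are made up — against a real project you would push the branch to GitLab at the end):

```shell
set -e
# Demo: create a branch that points at an older commit -- the same step you'd
# run against a real GitLab remote before starting a manual pipeline on it.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
old_sha=$(git rev-parse HEAD)
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"
git branch pipeline-from-commit "$old_sha"   # branch now points at "first"
git rev-parse pipeline-from-commit
# Against a real repo you would then: git push origin pipeline-from-commit
```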

Alternatively, use the existing workspace (created by GitLab CI) to run .gitlab-ci.yml, and from there check out the code again in a different directory using the commit ID, then perform all the operations there.
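A sketch of that workaround, with a throwaway local repo standing in for the real remote (in a real job you would clone $CI_REPOSITORY_URL and read the commit ID from a manual pipeline variable; all other names here are made up):

```shell
set -e
# Demo of the workaround: from an existing checkout, clone the same repo into
# a scratch directory and check out a specific commit ID there.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "old"
commit_id=$(git rev-parse HEAD)                  # stands in for the manual input
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "new"
workdir=$(mktemp -d)
git clone -q "$repo" "$workdir"                  # in CI: "$CI_REPOSITORY_URL"
git -C "$workdir" checkout -q "$commit_id"
git -C "$workdir" rev-parse HEAD                 # scratch copy is at commit_id
```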

Related

How to trigger a Jenkins pipeline once a Bitbucket pipeline is finished?

I have the following requirement: I have a Jenkins pipeline that I want to be triggered once a Bitbucket pipeline has finished successfully. The problem is that I also need to pass some parameters, and I don't want to use an asynchronous process like Bitbucket webhooks.
Is there another way to trigger the Jenkins pipeline automatically while receiving multiple parameters?
I should mention that these parameters can also be retrieved from the AWS resources created by that Bitbucket pipeline.
I faced the same issue, but I found a solution using the Additional Behaviours section of the Git plugin.
Its option Polling ignores commits with certain messages allows you to:
ignore any revisions committed with a message matching the regular expression pattern when determining if a build needs to be triggered.
I add [skip ci] to the message of the commit made after the Bitbucket pipeline finishes, and use the custom regex ^(?!.*\[skip ci\]).*$ to ignore every commit that does not include that tag.
As a result, Jenkins is only triggered once the Bitbucket pipeline has finished.
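A quick sanity check of that regex (using GNU grep's -P for the lookahead; the commit messages are examples). The polling option ignores commits whose message matches the pattern, so only messages containing [skip ci] can trigger the build:

```shell
pattern='^(?!.*\[skip ci\]).*$'
# A message without the tag matches the pattern, so Jenkins ignores it:
echo 'normal change' | grep -Pq "$pattern" && echo "ignored"
# A message with the tag does not match, so it can trigger the build:
echo 'update after pipeline [skip ci]' | grep -Pq "$pattern" || echo "triggers"
```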

Converting DSL jobs into a pipeline in Jenkins

I'm trying to migrate old-fashioned Jenkins DSL jobs (written in Groovy) to the new declarative pipeline form.
Since I'm very new to pipelines and could not find any answer to my noob problem, I'll first describe my scenario:
Supposing I have 3 DSL jobs: one to build and save the generated artifact in a repository like Artifactory, another to tag the master branch, and a last one used to deploy to prod. All jobs use the same Git repository.
The building job is usually run many times during development. It can be triggered manually or as a response to events in the Git repo, e.g. merge requests and pushes.
For simplicity, let's assume the tagging job only needs to tag the master branch in the repo. This will only be run once in a while, manually, when we are pretty sure the master branch will go to prod.
Artifact gets deployed using a third job, also manually.
So here are my questions:
As I understand it, we can only have one Jenkinsfile per branch in the repo, so how can I configure such a setup using a pipeline defined in a single Jenkinsfile?
How can I manually trigger the tagging job only (meaning compile/test/generate the artifact without uploading, and then, if everything tests OK, tag the version)?
In this situation, would it be easier to implement only the building job in the pipeline and keep the others as DSL scripts?
Many thanks for any suggestions!

GitLab Pipeline trigger: rerun latest tagged pipeline

We have an app (let’s call it the main repo) on GitLab CE that has a production build & deploy pipeline, which is only triggered when a tag is pushed. This is achieved in .gitlab-ci.yml via:
only:
- /^v.*$/
except:
- branches
We also have two other (let’s call them side) repositories (e.g. translations and utils). What I’d like to achieve is to rerun the latest (semver) tag’s pipeline of main, when either of those other side repositories’ master branches receives a push. A small detail is that one of the repositories is on GitHub, but I’d be happy to get them working on GitLab first and then work from there.
I presume I’d need to use the GitLab API to trigger the pipeline. What I’ve currently set up for the side repo on GitLab is a webhook integration for push events:
https://gitlab.com/api/v4/projects/{{ID}}/ref/master/trigger/pipeline?token={{TOKEN}}, where ID is the ID of the main project and TOKEN a deploy token for it.
However, this will only trigger a master pipeline for our main repo. How could I get this to (also) rerun the latest tag’s pipeline (or the latest tagged pipeline)?
Secondly, how would I go about triggering this on GitHub?
Either you can create a new pipeline, specifying a ref that can be a branch or a tag; in this case you need to know the exact tag value: https://docs.gitlab.com/ee/api/pipelines.html#create-a-new-pipeline
Or you can retry an already-executed pipeline by providing its id, which you can get from https://docs.gitlab.com/ee/api/pipelines.html#list-project-pipelines by sorting by id and filtering by ref. However, that gives you the last pipeline with a tag matching /^v.*$/, which may not be the specific version you need.
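If "latest tag" means "highest version", git can resolve it locally with a version-aware sort before the trigger API is called. A self-contained sketch (the demo repo and tag values are made up; the project ID and trigger token in the commented curl call are placeholders):

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "x"
git tag v1.2.0 && git tag v1.10.0 && git tag v1.9.1
# Version-aware sort: v1.10.0 ranks above v1.9.1, unlike a plain string sort.
latest_tag=$(git tag --list 'v*' --sort=-v:refname | head -n 1)
echo "$latest_tag"
# Then hand the tag to the trigger API:
# curl --request POST \
#   "https://gitlab.com/api/v4/projects/<ID>/trigger/pipeline?token=<TOKEN>&ref=$latest_tag"
```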

Marking a commit/build for deploy

So we currently just deploy master, but we are running into issues where we want to deploy the exact commit/build that all our testing ran against. This is normally a snapshot of master at 4:30 pm. We run our build configuration for all tests automatically at 4:30 pm (let's call this build config ALLTESTS), so we can control how this commit/build is marked in the ALLTESTS config.
We separate testing and deploy, so when a deploy is executed (either manually or automatically) it should only pick a branch/tag/commit/build that has been marked. Adding the tests to our deploy build config is not a viable solution.
Originally I had planned on using Git tags. A tag called deploy would be removed from its previous commit and added to the desired one, and when the deployment is triggered that commit would be deployed.
The issue I ran into is that there isn't an easy way to manually add Git tags in a build step. Should I just write a command-line build step that uses git commands to remove the deploy tag from whatever commit has it and add it to the commit that is running?
Is there a better TeamCity way to do this? I have successfully gotten TeamCity tags to work via the REST API, but I am not sure those fit the need either.
I suppose I could write PowerShell that parses the REST API to get the last successful build id in ALLTESTS and then feed that into the deploy somehow. How would I go about getting a build number and using that as the basis of a deploy?
Should I just write a command-line build step that uses git commands to remove the deploy tag from whatever commit has it and add it to the commit that is running?
The quick answer is no: there is no need to delete and re-create the tag. You can force-move it instead:
git tag -f deploy <commit-sha>
and the tag will be updated to point at the given commit (push it with --force so the remote copy of the tag moves too).
Cheers.
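A self-contained sketch of the whole move, with a local bare repo standing in for the real remote (the repo and commits are made up; in TeamCity the current commit's SHA is available as the BUILD_VCS_NUMBER parameter):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git
git clone -q origin.git work && cd work
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
git tag deploy
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"
git tag -f deploy HEAD                        # force-move the tag locally
git push -q --force origin refs/tags/deploy   # move it on the remote too
git rev-parse deploy                          # now points at "second"
```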

How can I compare metrics between a branch and master as part of pull request checks?

I have a Ruby library which 'measures' a text file and dumps the measurements to a file. I use Travis CI to show these results whenever someone makes a pull request to change the file in a GitHub repo. My goal is to make a pass/fail check based on whether the metrics are improving.
When a pull request is submitted and Travis CI runs my rakefile, I want to compare my metrics in the pull request branch to the metrics of the master branch.
Assume I have a rake task which runs metrics on a text file and spits out the results, and that I can compare two result files.
task :run_metrics do
  require_relative "lib/metered_object"
  metered_object = MeteredObject.new(File.expand_path("./list.txt"))
  metered_object.set_targets({ "metric1" => 10, "metric2" => 500 })
  File.write("pull_request_output.txt", metered_object.display_metrics)
  metered_object.compare_metrics("pull_request_output.txt", "old_metrics_output.txt")
end
How should I use git and rake to either store and retrieve old_metrics_output.txt, or generate a new metrics file for master, in order to compare the newly created pull request metrics to it?
Bonus points if there's a common name for this pattern/practice I have yet to discover.
Travis CI clones only the specific branch you are testing, so in a PR build of branch feature into master, something like git clone --depth=50 --branch=feature ... is done at the start of the build.
If you want to compare against master, you need to fetch the master branch as well, for example with git fetch origin master (optionally shallow, e.g. --depth=3).
After that, you can use whatever tools you need to make the comparison.
Please note that if you are on a private repository, the credentials used to clone the repository initially will have been removed by the time you can fetch. If that is your case, have a look at the docs for ways to authenticate your interaction with GitHub.
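A self-contained sketch of the fetch step, with a local "upstream" repo standing in for GitHub and the feature/master branch names from the question (the metrics commands at the end are only indicative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "base"
git branch -M master
git checkout -q -b feature
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "change"
cd "$tmp"
# Travis-style single-branch clone of the PR branch:
git clone -q --branch feature --single-branch upstream build && cd build
# The missing piece: make master available for comparison.
git fetch -q origin master:refs/remotes/origin/master
git rev-parse origin/master
# e.g. git diff origin/master -- list.txt, then run the metrics task on both
# versions of the file and compare the two result files.
```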
