I'm using the "checkout build script from SCM" option, paired with a lightweight checkout.
I would like to add repository polling to that.
This is the Jenkinsfile I use:
pipeline {
    agent any
    triggers {
        pollSCM('H/1 * * * *')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: 'master']],
                    userRemoteConfigs: [[url: 'file:///home/my-secret-home/workspace/pipeline-test']]])
            }
        }
        stage('Echo!') {
            steps {
                sh 'echo TEST'
            }
        }
    }
}
Although the job runs, the Git polling log insists that 'Polling has not run yet.'
Is configuring such behavior possible at all?
No, it doesn't work.
With a lightweight checkout, the mapping to the remote branches is lost, so git doesn't know where to look for further updates.
You can also confirm this by running git pull on the local repository. It returns:
There is no tracking information for the current branch.
Please specify which branch you want to merge with.
See git-pull(1) for details.

    git pull <remote> <branch>

If you wish to set tracking information for this branch you can do so with:

    git branch --set-upstream-to=origin/<branch> master
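If you only need git pull to work again on such a clone, you can restore the tracking information by hand with exactly the command git suggests (a minimal sketch for the master branch, assuming the clone still has an origin remote configured):

# Point the local master branch at its remote counterpart,
# then pull as usual.
git branch --set-upstream-to=origin/master master
git pull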
I am looking for help with our Jenkins Pipeline setup. I had a Jenkins pipeline job working just fine: the Groovy script was checked out from a Perforce stream (in the stage "Declarative: Checkout SCM") and then run. The script itself performs, at its core, a p4 sync and a p4 reconcile.
pipeline {
    agent {
        node {
            customWorkspace "workspaces/MY_WORKSPACE"
        }
    }
    stages {
        stage('Sync') {
            steps {
                script {
                    p4sync(
                        charset: 'none',
                        credential: '1',
                        format: "jenkins-${NODE_NAME}-MY_WORKSPACE",
                        populate: syncOnly(force: false, have: true, modtime: false, parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'], pin: '', quiet: true, revert: true),
                        source: streamSource('//depot/STREAM')
                    )
                }
            }
        }
        stage('Reconcile') {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: '1', passwordVariable: 'SVC_USER_PW', usernameVariable: 'SVC_USER_NAME')]) {
                        bat label: 'P4 reconcile', script: """
                            p4 -c "%P4_CLIENT%" -p "%P4_PORT%" -u ${SVC_USER_NAME} -P ${SVC_USER_PW} -s reconcile -e -a -d -f "//depot/STREAM/some/folder/location/*.file"
                        """
                    }
                }
            }
        }
    }
}
Due to an exterior requirement, we decided to move all our pipeline script files to a separate depot on the same Perforce server and changed the pipeline script checkout accordingly.
Now the pipeline script checkout step ("Declarative: Checkout SCM") creates a new workspace called jenkins-NODE_NAME-buildsystems (for the pipeline script depot //buildsystems), which uses the same local workspace root directory D:\some\path\workspaces\MY_WORKSPACE on the build node as the actual workspace jenkins-NODE_NAME-MY_WORKSPACE, created and synced in the first pipeline step by p4sync. So Perforce ends up with two workspaces sharing the same local root directory, which can cause all sorts of problems in itself. In addition, inside the pipeline the P4 environment variable P4_CLIENT points to the wrong workspace, jenkins-NODE_NAME-buildsystems (so the reconcile won't work), which should only have been used by the pipeline script checkout, not by the pipeline itself.
Which brings me to my question: how can I separate the workspace of the pipeline script checkout from the one used by p4sync in the pipeline script? In the pipeline I can specify a customWorkspace, but there is no such option in the Jenkins configuration for the pipeline script checkout, and the latter weirdly seems to follow that customWorkspace statement, maybe because jenkins-NODE_NAME-MY_WORKSPACE had already been opened by Perforce on the node?
Any hints are much appreciated.
Thanks,
Stefan
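A hypothetical workaround for the wrong P4_CLIENT in the reconcile step, sketched under the assumption that the sync workspace name always follows the format string given to p4sync above, would be to compute the client name in the script instead of trusting the environment variable:

script {
    // NODE_NAME is provided by Jenkins; this mirrors the format
    // string passed to p4sync, so it names the sync workspace.
    def p4Client = "jenkins-${NODE_NAME}-MY_WORKSPACE"
    withCredentials([usernamePassword(credentialsId: '1', passwordVariable: 'SVC_USER_PW', usernameVariable: 'SVC_USER_NAME')]) {
        bat label: 'P4 reconcile', script: """
            p4 -c "${p4Client}" -p "%P4_PORT%" -u ${SVC_USER_NAME} -P ${SVC_USER_PW} -s reconcile -e -a -d -f "//depot/STREAM/some/folder/location/*.file"
        """
    }
}

This does not solve the underlying two-workspaces-one-root problem, but it at least makes the reconcile target the intended client.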
I have n feature branches that will go through an MR and merge into a develop branch.
I have a pipeline with 3 stages:
stages:
  - feature-push
  - develop-mr-retag
  - develop-mr-rollout
feature-push runs on any push to a feature branch (not the develop branch we are merging into). It will test, build, and push an app in a docker image tagged with the name of the feature branch.
The latter two stages should run on commits to the develop branch after a merge request is approved and merged (assuming the source branch passed the feature-push stage). They need to roll out the new image to some k8s pods, and they need the name of the source branch to find the correct image.
I want to use ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME} for this, but I don't think that variable exists for pipelines run after a merge, only in merge_request pipelines. Those seem to be triggered before an MR is approved, which I don't want, as this is a deployment.
Is this possible or should I find a different approach?
Edit: To clarify, I need to run my docker build before the MR is merged, to know that it can build successfully. I don't want to just throw away that build if it's the one that gets merged, so that's why I want to build/push before the MR and deploy the previously built image after the MR.
I'm looking for the same.
For the moment, I parse the commit title of the merge commit to extract the source branch, and I check that the branch found really exists. Here is the relevant code:
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME=$(sed -r "s/^Merge branch '(.*)' into .*/\1/i"<<<$CI_COMMIT_TITLE)
if [ $(git ls-remote --heads ${CI_REPOSITORY_URL} $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME | wc -l) -ne 1 ]; then echo "Can't find source branch ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" && exit 1; fi
But if anybody modifies the default commit title, this will not work.
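For context, here is a minimal sketch of how that parsing step could sit in a .gitlab-ci.yml job for the develop stages (the job name and trigger rule are illustrative, not from the original setup):

develop-mr-retag:
  stage: develop-mr-retag
  only:
    - develop
  script:
    # Recover the source branch from the merge commit title ...
    - SOURCE_BRANCH=$(sed -r "s/^Merge branch '(.*)' into .*/\1/i" <<< "$CI_COMMIT_TITLE")
    # ... and verify that it really exists on the remote.
    - if [ $(git ls-remote --heads "$CI_REPOSITORY_URL" "$SOURCE_BRANCH" | wc -l) -ne 1 ]; then echo "Can't find source branch $SOURCE_BRANCH"; exit 1; fi
    # $SOURCE_BRANCH can now be used to locate the image built by feature-push.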
Is it possible to use parameters to allow users to pass a git SHA to a multibranch pipeline while defaulting to the head of the branch? Also, I would only need this functionality for the master branch.
I'm using:
- Jenkinsfile in code
- Jenkins Declarative Pipeline
I was able to do this with declarative pipelines using the following:
pipeline {
    options {
        skipDefaultCheckout()
    }
    ...
    steps {
        script {
            if (params.GIT_REVISION == 'HEAD') {
                checkout scm
            } else {
                checkout([$class: 'GitSCM',
                          branches: [[name: "${params.GIT_REVISION}"]],
                          doGenerateSubmoduleConfigurations: false,
                          extensions: [],
                          submoduleCfg: [],
                          userRemoteConfigs: [[credentialsId: 'XXXXXXX', url: 'git@github.com:xxxxx/xxxxx.git']]
                ])
            }
            ...
        }
    }
}
Yes, this is possible, but I guess you have to use scripted pipelines instead of declarative ones.
If the current branch is master, you configure a parameter for this build (as this isn't super intuitive, I wrote a blog article about it a while ago). params.INPUT_REVISION, for example, would then store the given revision, and you can set the default to HEAD or fall back to it if the parameter is not yet specified (e.g. on the first run).
You then supply this revision to the checkout(scm) step so that it doesn't check out the current master branch but the specified revision.
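A minimal sketch of that idea in a scripted pipeline (the parameter name and the fallback are illustrative):

// Declare a string parameter; on the very first run it does not
// exist yet, so fall back to HEAD.
properties([
    parameters([
        string(name: 'INPUT_REVISION', defaultValue: 'HEAD', description: 'Git revision to build')
    ])
])
def revision = params.INPUT_REVISION ?: 'HEAD'

node {
    if (revision == 'HEAD') {
        // Default behaviour: build whatever the multibranch job resolved.
        checkout scm
    } else {
        // Build the user-supplied revision instead, reusing the
        // job's own remote configuration.
        checkout([$class: 'GitSCM',
                  branches: [[name: revision]],
                  userRemoteConfigs: scm.userRemoteConfigs])
    }
}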
Is there a way to push back to the repo that the Jenkins Pipeline checked out using the same process that the code was checked out with (using GIT_ASKPASS)?
I currently have a workaround solution for achieving this by grabbing the credentials like this:
withCredentials([usernamePassword(credentialsId: 'github', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    sh('git push https://$GIT_USER:$GIT_PASS@github.com/orgname/private-repo.git master')
}
I'm not a Groovy developer, but I found a method in the git-client-plugin that I would like to use. Is there a way to use the following method directly in the Jenkinsfile?
launchCommandWithCredentials
https://github.com/jenkinsci/git-client-plugin/blob/master/src/main/java/org/jenkinsci/plugins/gitclient/CliGitAPIImpl.java#L1649
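I don't know of a supported way to call that method from a Jenkinsfile, but if the goal is just to let git ask for the password itself via GIT_ASKPASS, a hand-rolled sketch could look like this (the helper path and credentials id are illustrative):

withCredentials([usernamePassword(credentialsId: 'github', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    sh '''
        # GIT_ASKPASS points git at a program that prints the secret
        # on stdout whenever git needs a password.
        printf '#!/bin/sh\necho "$GIT_PASS"\n' > "$WORKSPACE/askpass.sh"
        chmod +x "$WORKSPACE/askpass.sh"
        # The username is embedded in the URL, so git only asks for the password.
        GIT_ASKPASS="$WORKSPACE/askpass.sh" git push https://$GIT_USER@github.com/orgname/private-repo.git master
    '''
}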
Using Rugged, how do you perform the following operations: fetch, pull, and rebase?
I am using the development branch, and the documentation found here serves as my guide to the Remote class.
EDIT: Since git pull is just shorthand for git fetch followed by git merge FETCH_HEAD, the better question is how to perform git fetch, git merge, and git rebase.
git fetch:
remote = Rugged::Remote.lookup(repo, "origin")
remote.connect(:fetch) do |r|
  r.download
  r.update_tips!
end
git merge:
merge_index = repo.merge_commits(
  Rugged::Branches.lookup(repo, "master").tip,
  Rugged::Branches.lookup(repo, "origin/master").tip
)
raise "Conflict detected!" if merge_index.conflicts?

merge_commit = Rugged::Commit.create(repo, {
  parents: [
    Rugged::Branches.lookup(repo, "master").tip,
    Rugged::Branches.lookup(repo, "origin/master").tip
  ],
  tree: merge_index.write_tree(repo),
  message: 'Merged `origin/master` into `master`',
  author: { name: "User", email: "example@test.com" },
  committer: { name: "User", email: "example@test.com" },
  update_ref: 'master'
})
git rebase:
Rebasing has not been implemented in libgit2 yet, and is thus not available in Rugged.
In general, your use case sounds very high-level, while the Rugged API is currently more focused on low-level git repository access and modification. Eventually we'll also have many higher-level helpers (like a simpler/correct pull), but we're not there yet.
The answer above seems to be outdated; the syntax has changed. I need to implement a pull action, which I am trying to do with a fetch followed by a merge and commit. For fetching I use the fetch method like this:
repo.fetch('origin', [repo.head.name], credentials: credits)
And it does seem to actually fetch something, since the returned hash is full of information about what has been fetched. However, nothing is written to disk: I would expect the branch to be behind several commits when I run git status on the command line, but it is not. If I fetch a second time with the same command, nothing is fetched, probably because everything was already fetched the first time, but then I don't see where that fetched data went.
Now, if I do the fetch manually on the command line and then try to merge the local copy of the remote branch into the local branch (local changes are already committed) using the following code:
ref_name = repo.head.name                        # refs/heads/branchname
branch_name = ref_name.sub(/^refs\/heads\//, '') # branchname
remote_name = "#{remote}/#{branch_name}"         # origin/branchname
remote_ref = "refs/remotes/#{remote_name}"       # refs/remotes/origin/branchname
local_branch = repo.branches[branch_name]
remote_branch = repo.branches[remote_name]
index = repo.merge_commits(local_branch.target, remote_branch.target)
options = {
  author: { time: Time.now }.merge(author),
  committer: { time: Time.now }.merge(committer),
  message: 'merged',
  parents: [
    local_branch.target,
    remote_branch.target
  ],
  tree: index.write_tree(repo),
  update_ref: 'HEAD'
}
Rugged::Commit.create repo, options
It creates the commit as expected; the commit is written to disk and visible in the history. But for some reason the branch now has uncommitted changes, and the local file contents have not changed. I would expect them to have the contents of the fetched commit.
Can anyone please provide a working example for a fetch, merge, and commit? The version of Rugged at the time of writing is 0.22.0b3.
Update 1
This brings my working tree to the desired state:
repo.checkout ref_name, strategy: :force
Update 2
I found out how to fetch and save the fetched state to disk:
r = repo.remotes[remote]
r.fetch(credentials: git_credentials)
r.save
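Putting the pieces from this thread together, a complete fetch, merge, commit, and checkout round trip could look roughly like this (an untested sketch against Rugged 0.22.x; git_credentials, the repository path, and the signature details are assumptions):

require 'rugged'

repo   = Rugged::Repository.new('/path/to/repo')
remote = repo.remotes['origin']

# Fetch and persist the updated remote refs to disk.
remote.fetch(credentials: git_credentials)
remote.save

branch_name   = repo.head.name.sub(/^refs\/heads\//, '') # e.g. "master"
local_branch  = repo.branches[branch_name]
remote_branch = repo.branches["origin/#{branch_name}"]

# Merge the remote branch into the local one.
index = repo.merge_commits(local_branch.target, remote_branch.target)
raise 'Conflict detected!' if index.conflicts?

Rugged::Commit.create(repo,
  parents:    [local_branch.target, remote_branch.target],
  tree:       index.write_tree(repo),
  message:    "merged origin/#{branch_name} into #{branch_name}",
  author:     { name: 'User', email: 'user@example.com', time: Time.now },
  committer:  { name: 'User', email: 'user@example.com', time: Time.now },
  update_ref: 'HEAD')

# Force-checkout so the working tree matches the new merge commit.
repo.checkout(repo.head.name, strategy: :force)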