Goal
I'm trying to orchestrate a dependency chain using the GitHub Organization plugin along with Jenkins Pipeline.
As the products I'm building have a number of shared dependencies, I'm using NuGet packages to manage dependency versioning and updates.
However, I'm having trouble getting the necessary artifacts/info to the projects doing the orchestration.
Strategy
On an SCM change, any upstream shared library should build a NuGet package and orchestrate any downstream builds that need new references:
I am hardcoding the downstream orchestration in each upstream project. So if A is built, B and C (which depend on A) will be built with the latest artifact from A. After that, D (which depends on B and C) and E (which depends on A and C) will be built with the latest artifacts from A, B and C as needed, and so on. These are all triggered from A's Jenkinsfile in stages, as the dependencies are built, using the "build job: 'JobName'" syntax (a rough sketch follows). I couldn't find a way to simply pass the orchestration downstream at each step, because the dependencies diverge and converge downstream and I don't want to trigger multiple builds of the same downstream project with different references to the upstream projects.
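As a rough sketch only (job names and the version parameter are placeholders, not the actual projects), those stages in A's Jenkinsfile could look something like this:
stage('Direct dependants') {
    // B and C only need A, so they can run in parallel
    parallel(
        B: { build job: 'B', parameters: [string(name: 'A_VERSION', value: env.A_VERSION)] },
        C: { build job: 'C', parameters: [string(name: 'A_VERSION', value: env.A_VERSION)] }
    )
}
stage('Second-level dependants') {
    // D depends on B and C; E depends on A and C; each is triggered exactly once, from here
    build job: 'D'
    build job: 'E'
}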
I can pass the artifact information for the parent project down to any downstream jobs, but the problem I'm facing is that the parent project doesn't have any assembly versioning information for the downstream artifacts (which is needed to orchestrate jobs further downstream). Stash/unstash doesn't seem to have any cross-job functionality, and archive/unarchive has been deprecated.
TLDR:
I need a method of either passing a string or text file upstream to a job mid-execution (from multiple downstream jobs), OR a method for multiple downstream jobs with shared downstream dependencies to coordinate and jointly pass information to a further-downstream job (triggering it only once).
Thanks!
This article may be useful for you: https://www.cloudbees.com/blog/using-workflow-deliver-multi-componentapp-pipeline
Sometimes the artifact approach is needed.
Upstream job:
void runStaging(String VERSION) {
    stagingJob = build job: 'staging-start', parameters: [
        string(name: 'VERSION', value: VERSION),
    ]
    step([$class: 'CopyArtifact',
        projectName: 'staging-start',
        filter: 'IP',
        selector: [$class: 'SpecificBuildSelector',
            buildNumber: stagingJob.id
        ]
    ])
    IP = sh(returnStdout: true, script: "cat IP").trim()
    ...
}
Downstream job:
sh 'echo 10.10.0.101 > IP'
archiveArtifacts 'IP'
I ended up using the built-in "archive" step (see the Pipeline Syntax snippet generator) in combination with the Copy Artifact plugin (which must be invoked as a Java-style step with the class name), as sketched below.
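A minimal sketch of that combination, where 'downstream-job', version.txt, and downstreamBuild (the object returned by the build step) are illustrative names:
In the downstream job:
sh 'echo 1.2.3 > version.txt'   // record the assembly version for upstream consumers
archive 'version.txt'
In the orchestrating job, after build job: 'downstream-job' has returned downstreamBuild:
step([$class: 'CopyArtifact',
    projectName: 'downstream-job',
    filter: 'version.txt',
    selector: [$class: 'SpecificBuildSelector', buildNumber: downstreamBuild.id]
])
def downstreamVersion = readFile('version.txt').trim()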
I would prefer to be able to merge the workflows rather than having to orchestrate the downstream builds from every build that has anything to build downstream, but I haven't been able to find a solution to that end thus far.
You could use the buildVariables of the build result.
Main job - configuration: pipeline job
node {
    x = build job: 'test1', quietPeriod: 2
    echo "$x.buildVariables.value1fromx"
}
test1 - configuration: pipeline job
node {
    env.value1fromx = "bull"
    env.value2fromx = "bear"
}
Related
We have a huge monolith application which is built by multiple tools (shell scripts, Ant and Maven). The build process is quite complex:
a lot of manual steps
hidden dependencies between Ant targets
different steps must be executed depending on the operating system in use
We decided to simplify it by creating Gradle scripts which wrap all this logic (it is quite impossible to fix, so we created a wrapper which standardizes the way of executing all the logic). We have to download some files from the Maven repository, but we cannot use the dependencies syntax:
we don't always need to download all the files
the versions of the downloaded artifacts are dynamic (they depend on configuration located in a completely different place)
we need a path to the downloaded files (e.g. we have to unpack an artifact distributed as a zip)
How can we achieve this?
The easiest way to achieve this is to create a dynamic configuration with the dependencies and then resolve it. The resolve method returns the paths to the dependencies on the local disk. It is important to use a unique name for every configuration; otherwise executing the logic twice would fail (you cannot overwrite the configuration named XYZ).
Here is an example method which returns the path to an artifact. If the artifact is already available in the Gradle cache it won't be downloaded a second time, but the path will of course still be returned. In this example all artifacts are downloaded from Maven Central.
Method:
ext.resolveArtifact = { CharSequence identifier ->
    def configurationName = "resolveArtifact-${UUID.randomUUID()}"
    return rootProject.with {
        configurations.create(configurationName)
        dependencies.add(configurationName, identifier)
        return configurations.getByName(configurationName, {
            repositories {
                mavenCentral()
            }
        }).resolve()[0]
    }
}
Usage:
def jaCoCoZip = resolveArtifact('org.jacoco:jacoco:0.8.6')
def jaCoCoAgent = resolveArtifact('org.jacoco:org.jacoco.agent:0.8.6')
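If, as mentioned above, a downloaded artifact is distributed as a zip and needs unpacking, the resolved path can be fed straight into a Copy task. A small sketch, where the task name and target directory are illustrative:
task unpackJaCoCo(type: Copy) {
    // Wrap the resolution in a closure so it is deferred to execution time
    from { zipTree(resolveArtifact('org.jacoco:jacoco:0.8.6')) }
    into "$buildDir/jacoco-dist"
}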
REST service: http://host:8000/v1/config/resources/removeCollection?put:database=string&put:uris=string*
I want to deploy this REST service extension in MarkLogic using Gradle. How can I deploy it?
If you're already using ml-gradle, you can add your implemented interface to marklogic\src\main\ml-modules\services and deploy it using the mlLoadModules task. The mlCreateResource task, as part of the scaffolding, would also add metadata in marklogic\src\main\ml-modules\services\metadata.
I recommend looking at ml-gradle. You can easily hook it up in Gradle by adding a few lines, the most important being:
plugins { id "com.marklogic.ml-gradle" version "4.0.4" }
As described in the readme, you can optionally follow that by invoking the mlNewProject task, which will provide you with a useful scaffold structure for a typical ml-gradle project.
ml-gradle gives you access to all kinds of tasks, including one called mlLoadModules to deploy source and REST extensions. There is also a built-in task for removing collections in any database, called mlDeleteCollections. You can look at the task reference to get a glimpse of all the tasks, or just run gradle tasks.
HTH!
Two easy ways of using the Gradle plugin to invoke the REST API:
Method One:
If you don't have a project just yet:
create a gradle.properties file in which you define four parameters: host, mlUsername, mlPassword, RestPort
create a build.gradle file in the same folder:
plugins {
    id "com.marklogic.ml-gradle" version "4.0.4"
}

task FCdeleteCollections(type: com.marklogic.gradle.task.datamovement.DeleteCollectionsTask) {
    ………………..
    collections = ["{collection-name}"]
}
Invoke the gradle task:
[root# ~] # gradle FCdeleteCollections
Method Two:
If you have already scaffolded the project, in my opinion it is safer to invoke a one-time deletion task like this:
[root# ~] # gradle -Pdatabase={db-name} mlDeleteCollections -Pcollections={collection-name}
My preference is to invoke such tasks through the Java API / DMSDK.
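For reference, a hedged sketch of that Java API / DMSDK route, written as a standalone Groovy script; host, port, credentials and the collection name are placeholders, and it assumes the MarkLogic Java Client API is on the classpath:
import com.marklogic.client.DatabaseClientFactory
import com.marklogic.client.datamovement.DeleteListener
import com.marklogic.client.query.StructuredQueryBuilder

def client = DatabaseClientFactory.newClient('localhost', 8000,
        new DatabaseClientFactory.DigestAuthContext('admin', 'admin'))
def dmm = client.newDataMovementManager()
def batcher = dmm.newQueryBatcher(new StructuredQueryBuilder().collection('collection-name'))
        .withConsistentSnapshot()
        .onUrisReady(new DeleteListener())                     // deletes the documents in each batch of URIs
        .onQueryFailure({ failure -> failure.printStackTrace() })
dmm.startJob(batcher)
batcher.awaitCompletion()
dmm.stopJob(batcher)
client.release()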
What I have for Java
I am using Jenkins as my CI/CD server and I created a Jenkinsfile for my Java project; for scanning and quality I am using the Maven Sonar plugin. The mvn sonar:sonar command generates a file at target/sonar/report-task.txt. The file contains information related to the scanning process; using that information I can call the SonarQube REST API with the generated taskId, then call it again with the analysisId and decide whether the pipeline is broken based on the quality conditions.
What I want for JavaScript (or any other type of project)
I am trying to do something similar for a JavaScript project, but this time using sonar-scanner from the command line, and I realized that no report-task.txt file is generated (I believe this file is only generated by the Maven Sonar plugin). So I would like to know if there is a way to generate that kind of information.
I really need the taskId value in order to make dynamic calls to the SonarQube REST API once the scanner process has started.
Since you're using a Jenkinsfile, there's no need to do this manually. From the docs:
node {
    stage('SCM') {
        git 'https://github.com/foo/bar.git'
    }
    stage('SonarQube analysis') {
        withSonarQubeEnv('My SonarQube Server') {
            sh 'mvn clean package sonar:sonar'
        } // SonarQube taskId is automatically attached to the pipeline context
    }
}
// No need to occupy a node
stage("Quality Gate") {
    timeout(time: 1, unit: 'HOURS') { // Just in case something goes wrong, pipeline will be killed after a timeout
        def qg = waitForQualityGate() // Reuse taskId previously collected by withSonarQubeEnv
        if (qg.status != 'OK') {
            error "Pipeline aborted due to quality gate failure: ${qg.status}"
        }
    }
}
If you're not using Maven to build and analyze, just substitute the appropriate commands.
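For example, assuming the sonar-scanner CLI is installed on the agent and a sonar-project.properties file is present in the workspace, the analysis stage (inside the node block) might become:
stage('SonarQube analysis') {
    withSonarQubeEnv('My SonarQube Server') {
        sh 'sonar-scanner' // the taskId is still attached to the pipeline context for waitForQualityGate()
    }
}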
The file that contains information related to the scanning process is:
.scannerwork/report-task.txt
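If you do want the taskId yourself for direct Web API calls, one hedged approach is to read that file in the pipeline; readProperties and readJSON come from the Pipeline Utility Steps plugin, and SONAR_AUTH_TOKEN is a placeholder credential:
// Read the keys written by sonar-scanner and query the Compute Engine task directly
def props = readProperties file: '.scannerwork/report-task.txt'
def ceTaskUrl = props['ceTaskUrl']   // e.g. <serverUrl>/api/ce/task?id=<taskId>
def taskJson = sh(returnStdout: true, script: "curl -s -u ${env.SONAR_AUTH_TOKEN}: '${ceTaskUrl}'").trim()
def analysisId = readJSON(text: taskJson).task.analysisId
echo "SonarQube analysisId: ${analysisId}"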
I am struggling with the Gradle build lifecycle; specifically with the split between the configuration and execution phases. I have read a number of sections in the Gradle manual and have seen a number of ideas online, but have not found a solution to the following problem:
I want to run a specific task to produce an artifact at the end of my java-library-distribution build that is a flattened version of the runtime configuration jars. That is, I only want to produce the artifact when I run the specific task to create the artifact.
I have created the following task:
task packageSamplerTask(type: Tar, dependsOn: distTar) {
    description "Packages the build jars including dependencies as a flattened tar file. Artifact: ${distsDir}/${archivesBaseName}-${version}.tar"
    from tarTree("${distsDir}/${archivesBaseName}-${version}.tar").files
    classifier = 'dist'
    into "${distsDir}/${archivesBaseName}-dist-${version}.tar"
}
Although this task does produce the required artifact, the task runs during Gradle's configuration phase. This behavior has the following consequences:
Irrespective of which task I run from the command line, this packageSamplerTask task is always run, often unnecessarily; and
If I clean the project, then the build fails on the next run because $distsDir doesn't exist during the configuration phase (obviously).
It appears that if I extend the Copy task in this manner I'm always going to get this kind of premature behavior.
Is there a way to use the << closure / doLast declarations to get what I want? Or is there something else I'm missing / should be doing?
Update
After further work I have clarified my requirements and resolved my question, specifically:
"I want to package my code and my code's dependencies as a flat archive of jars that can be deployed as a jMeter plugin. The package can then be installed by unpacking into the jMeter lib/ext directory, as is. The package, therefore, must not include the jMeter jars (and their dependencies) which are used for building and testing"
Because Gradle doesn't appear to support the Maven-like provided dependency management, I created a new configuration for my package which excludes the jMeter jars.
configurations {
    jmpackage {
        extendsFrom runtime
        exclude group: 'org.apache.jmeter', name: 'ApacheJMeter_core', version: '2.11'
        exclude group: 'org.apache.jmeter', name: 'ApacheJMeter_java', version: '2.11'
    }
}
And then created the following task (using the closure recommendation from Peter Niederwieser):
task packageSamplerTask(type: Tar, dependsOn: assemble) {
    from { libsDir }
    from { configurations.jmpackage.getAsFileTree() }
    classifier = 'dist'
}
This solution appears to work, and it allows me to use just the Gradle java plugin, too.
The task declaration is fine, but the flattening needs to be deferred too:
...
from { tarTree("${distsDir}/${archivesBaseName}-${version}.tar").files }
Also, the Tar file should be referred to in a more abstract way. For example:
from { tarTree(distTar.archivePath).files }
First, your task isn't executed in the configuration phase; but, like EVERY task, it is configured in that phase. Your closure is just a configuration of your task (a configuration closure, not an action closure). That is why your code is "executed" in the configuration phase.
If you want your code to be executed in the execution phase, you have to put it in a doLast or doFirst closure. But in your case it is better to keep it in a configuration closure, because you are configuring your task.
To make sure your build doesn't fail because of the missing folder, you can create it with distsDir.mkdirs(), as sketched below.
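Putting the two answers' suggestions together, a minimal sketch of the deferred version of the task from the question (under the same java-library-distribution setup) might look like:
task packageSamplerTask(type: Tar, dependsOn: distTar) {
    classifier = 'dist'
    // Create the distributions directory at execution time, before flattening
    doFirst {
        distsDir.mkdirs()
    }
    // Defer the flattening to execution time as well
    from { tarTree(distTar.archivePath).files }
}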
I have the following structure for my project:
Root
|- A
|- C (depends on A)
\- B (depends on A)
For all sub-projects we use our own plugin for generate-resources: https://github.com/terma/gradle-sqlj-plugin/blob/master/src/main/groovy/org/github/terma/sqljgradleplugin/SqljPlugin.groovy. The task from the plugin doesn't depend on anything; however, the JavaCompile task depends on it.
When I build my project, in the build log I see:
:A:myPluginTask
:B:myPluginTask
:C:myPluginTask
:A:compileJava
:A:processResources
:A:classes
:B // next normal build way
Question: why does Gradle execute my plugin task for all sub-projects before the Java tasks? And why does it then execute the Java tasks in the normal way, first all Java tasks for A, then B, and so on?
Optional question: how does Gradle build the task execution graph, separately for each project or across projects?
Thanks a lot.
All that can be said (and relied upon) is that Gradle will choose a task order that satisfies the declared task relationships (dependsOn, mustRunAfter, shouldRunAfter, finalizedBy). All execution dependencies are between tasks (not projects), and it's common that tasks belonging to different projects will get executed in alternation (or in parallel if --parallel is used). There is a single task execution graph for the whole build.
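If a specific order is required, it has to be declared as one of those relationships. A hedged sketch, using the task name from your log and assuming the plugin has already been applied when this line runs:
// In B/build.gradle: make B's generated-resources task run after A has compiled
myPluginTask.mustRunAfter(':A:compileJava')
// or turn it into a hard dependency instead of a pure ordering constraint:
// myPluginTask.dependsOn(':A:compileJava')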