I am trying to write a shared library which combines global variables and shared functions to perform automated build and deployment tasks for our project.
The project layout is described below.
The project has two major parts:
Global shared variables, which are placed in the vars folder.
Supporting Groovy classes that abstract the logic, which in turn are called from the global variables.
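A typical layout for such a library looks roughly like this (the package path under src is an assumption for illustration, not my exact structure):

vars/
    PhoenixLib.groovy                               (global variable / pipeline step)
src/
    org/phoenix/PhoenixEurekaService.groovy         (supporting Groovy class)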
Inside the Groovy class I am using println to log debugging information.
But it never gets printed out when the class is invoked through the Jenkins pipeline job.
The console log of the Jenkins job is as below:
Can someone show me how to propagate logs from the Groovy class to the Jenkins job console? Only the println output from the global shared script shows up in the console log.
I just found a way to do it, by calling the println step that is available in the Jenkins job.
Basically, I create a wrapper function, as below, in the Groovy class PhoenixEurekaService:
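A minimal sketch of such a wrapper (the method name here is an assumption):

class PhoenixEurekaService implements Serializable {
    def steps   // the pipeline context ('this' from the pipeline script)

    PhoenixEurekaService(steps) {
        this.steps = steps
    }

    // hypothetical wrapper: delegate logging to the pipeline's println step
    def log(String message) {
        steps.println message
    }
}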
Here steps is actually the Jenkins pipeline context, passed into the Groovy class via its constructor. This way we can call any step available in the Jenkins job from inside the Groovy class.
In the global Groovy script PhoenixLib.groovy:
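A sketch of how the global variable might construct the class and pass the pipeline context in (the call contents are assumptions):

// vars/PhoenixLib.groovy
// add an import here if the class lives in a package under src/
def call() {
    // 'this' is the pipeline script, so every pipeline step is reachable through it
    def service = new PhoenixEurekaService(this)
    service.log('Hello from the shared library class')
}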
I am not sure if there is another way to do that...
All the commands/DSL, e.g. println, sh, bat, checkout etc., cannot be accessed directly from a shared library.
ref: https://jenkins.io/doc/book/pipeline/shared-libraries/.
You can access steps by passing them into the shared library.
//your library
package org.foo

class Utilities implements Serializable {
    def steps

    Utilities(steps) { this.steps = steps }

    def mvn(args) {
        steps.println "Hello world"
        steps.echo "Hello echo"
        //steps.sh "${steps.tool 'Maven'}/bin/mvn -o ${args}"
    }
}
Jenkinsfile
@Library('utils') import org.foo.Utilities
def utils = new Utilities(this)
node {
    utils.mvn '!!! so this how println can be worked out or any other step!!'
}
I am not 100% sure if this is what you are looking for, but printing things in a shared library can be achieved by passing steps and using echo. See "Accessing steps" in https://jenkins.io/doc/book/pipeline/shared-libraries/
This answer addresses those who need to log something from deep in the call stack. It can be cumbersome to pass the pipeline "steps" object all the way down the shared library's call stack, especially when the call hierarchy gets complex. One can therefore create a helper class with a static Closure that holds the logic for printing to the console. For example:
class JenkinsUtils {
    static Closure<Void> log = { throw new RuntimeException("Logger not configured") }
}
In the steps Groovy file this needs to be initialized (ideally in a @NonCPS block), for example in your Jenkinsfile (or var file):
@NonCPS
def setupLogging() {
    JenkinsUtils.log = { String msg -> println msg }
}

def call() {
    ...
    setupLogging()
    ...
}
And then, from any arbitrary shared-library class, one can print to the console simply like:
class RestClient {
    void doStuff() {
        JenkinsUtils.log("...")
    }
}
I know this is still a hacky workaround, but I could not find any better working solution even though I spent quite some time researching.
I also posted this as a gist on my GitHub profile.
Related
I'm a little bit confused about the correct way to create custom tasks in Gradle. In the tutorial on creating custom tasks, they use tasks.register like this:
def check = tasks.register("check")
def verificationTask = tasks.register("verificationTask") {
    // Configure verificationTask
}
check.configure {
    dependsOn verificationTask
}
Elsewhere (still in the official Gradle documentation), they create new tasks this way:
task('hello') {
    doLast {
        println "hello"
    }
}
task('copy', type: Copy) {
    from(file('srcDir'))
    into(buildDir)
}
and
tasks.create('hello') {
    doLast {
        println "hello"
    }
}
tasks.create('copy', Copy) {
    from(file('srcDir'))
    into(buildDir)
}
Finally, in https://docs.gradle.org/current/userguide/task_configuration_avoidance.html they suggest moving from the second/third case to the first one. Does that mean the second and third cases are obsolete? If so, why does Gradle still make such heavy use of the old API in its documentation?
Which variant should a user use?
The Gradle API has many ways to define tasks. There is no "right" or "wrong" way for application developers so long as you are consistent, but it does matter for Gradle plugin authors.
The Task Configuration Avoidance doc you linked states (emphasis mine):
As of Gradle 5.1, we recommend that the configuration avoidance APIs be used whenever tasks are created by custom plugins.
So if you are a plugin author, use task configuration avoidance wherever possible.
For everyone else (application developers), it doesn't particularly matter, to an extent, so long as you are consistent across your entire application.
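For reference, the eager task()/tasks.create snippets from the question translate directly into the configuration-avoidance API, e.g. (a sketch):

tasks.register('hello') {
    doLast {
        println "hello"
    }
}
tasks.register('copy', Copy) {
    from(file('srcDir'))
    into(buildDir)
}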
I was using some global methods in the /vars directory of the shared library, and everything worked fine. Now I need to keep the state of the process, so I'm writing a Groovy class.
Basically I have a class called ClassTest.groovy in /src which is something like this:
class ClassTest {
    String testString
    def method1() { ... }
    def method2() { ... }
}
and at the beginning of the pipeline
library 'testlibrary#'
import ClassTest
with result:
WorkflowScript: 2: unable to resolve class ClassTest @ line 2, column 1.
import ClassTest
Before, I was just doing
library 'testlibrary#' _
and using the methods as
script {
    libraryTest.method1()
    ...
    libraryTest.method2()
}
where the methods were in a file /vars/libraryTest.groovy, and everything worked. So I know that the shared library is there, but I'm confused about the way Groovy / Jenkins handles classes / shared libraries.
What's the correct way to import a class? I cannot find a simple example (with the Groovy file, the file structure and the pipeline) in the documentation.
EDIT:
I moved the file to 'src/com/company/ClassTest.groovy' and modified the pipeline as
@Library('testlibrary#') import com.company.ClassTest
def notification = new ClassTest()
but now the error is
unexpected token: package @ line 2
The first two lines of the Groovy file are:
// src/com/company/ClassTest.groovy
package com.company;
So far this is what I've found.
To load the library in the pipeline I used:
@Library('testlibrary#') import com.company.ClassTest
def notification = new ClassTest()
In the class file, no package instruction. I guess I don't need one because I don't have any other files or classes, so I don't really need a package. Also, I got an error when using the same name for the class and for the file the class is in; the error specifically complained and asked for one of them to be changed. I guess these two things are related to Jenkins.
That works, and the library is loaded.
(Maybe it can help someone else)
I was having the same issue.
Once I added a package-info.java inside the folder com/lib/, containing
/**
* com.lib package
*/
package com.lib;
and added package com.lib as the first line of each file, it started to work.
I had the same problem.
After some trial and error with the Jenkins docs (https://www.jenkins.io/doc/book/pipeline/shared-libraries/#using-libraries),
I found that when I wanted to import a class from my shared library, I needed to do it like this:
// thanks to '_', the classes are imported automatically.
// MUST have the '@' at the beginning, otherwise it will not work.
@Library('my-shared-library@BRANCH') _

// only by calling them can you tell if they exist or not.
def exampleObject = new example.GlobalVars()
// then call methods or attributes from the class.
exampleObject.runExample()
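Putting the pieces from these answers together, a minimal end-to-end layout following the documented convention might look like this (library name, branch and class contents are illustrative):

// Shared library repository layout:
//   src/com/company/ClassTest.groovy
//   vars/                      (optional global variables)

// src/com/company/ClassTest.groovy
package com.company

class ClassTest implements Serializable {
    String testString
    def method1() { return 'method1 called' }
}

// Jenkinsfile
@Library('testlibrary@master') import com.company.ClassTest

def notification = new ClassTest()
notification.method1()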
I'm looking at a simple example of a custom task in a Gradle build file from Mastering Gradle by Mainak Mitra (page 70). The build script is:
println "Working on custom task in build script"
class SampleTask extends DefaultTask {
String systemName = "DefaultMachineName"
String systemGroup = "DefaultSystemGroup"
#TaskAction
def action1() {
println "System Name is "+systemName+" and group is "+systemGroup
}
#TaskAction
def action2() {
println 'Adding multiple actions for refactoring'
}
}
task hello(type: SampleTask)
hello {
systemName='MyDevelopmentMachine'
systemGroup='Development'
}
hello.doFirst {println "Executing first statement "}
hello.doLast {println "Executing last statement "}
If I run the build script with gradle -q :hello, the output is:
Executing first statement
System Name is MyDevelopmentMachine and group is Development
Adding multiple actions for refactoring
Executing last statement
As expected, the doFirst action executes first, the two custom actions execute in the order in which they were defined, and then the doLast action executes. If I comment out the lines adding the doFirst and doLast actions, the output is:
Adding multiple actions for refactoring
System Name is MyDevelopmentMachine and group is Development
The custom actions now execute in the reverse of the order in which they were defined. I'm not sure why.
I think it's simply a case that the ordering is not deterministic, and you get different results depending on how you further configure the task.
Why do you want two separate @TaskAction methods, as opposed to a single one that calls other methods in a deterministic sequence? I can't think of a particular advantage of doing it that way (I realize it's from a sample given in a book).
Most other samples I find only have a single method,
@TaskAction
void execute() { ... }
which I think makes more sense and is more predictable.
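For example, the SampleTask above could be rewritten with a single action that calls the two pieces of work in an explicit order (a sketch, not the book's code; the method names are made up):

class SampleTask extends DefaultTask {
    String systemName = "DefaultMachineName"
    String systemGroup = "DefaultSystemGroup"

    @TaskAction
    void runActions() {
        // explicit, deterministic ordering of the two former actions
        printSystemInfo()
        printRefactoringNote()
    }

    private void printSystemInfo() {
        println "System Name is " + systemName + " and group is " + systemGroup
    }

    private void printRefactoringNote() {
        println 'Adding multiple actions for refactoring'
    }
}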
Patrice M. is correct; the way those methods will be executed is non-deterministic.
In detail
@TaskAction-annotated methods are processed by AnnotationProcessingTaskFactory.
But first the task action methods are fetched with DefaultTaskClassInfoStore and the results are stored in TaskClassInfo.
You can see that Class.getDeclaredMethods() is used to fetch all methods in order to check whether they carry the @TaskAction annotation.
And here is the definition of public Method[] getDeclaredMethods() throws SecurityException.
Its description contains the following note:
The elements in the returned array are not sorted and are not in any
particular order.
Link to Gradle discussion forum with the topic about @TaskAction
It doesn't guarantee the order.
For your information, I've also added one more link, to an issue I raised a few years ago; Gradle should either give a warning here or replace this with a better solution.
https://github.com/gradle/gradle/issues/8118
Currently, I have a few utility functions defined in the top level build.gradle in a multi-project setup, for example like this:
def utilityMethod() {
    doSomethingWith(project) // project is magically defined
}
I would like to move this code into a plugin, which will make the utilityMethod available within a project that applies the plugin. How do I do that? Is it a project.extension?
This seems to work using:
import org.gradle.api.Plugin
import org.gradle.api.Project

class FooPlugin implements Plugin<Project> {
    void apply(Project target) {
        target.extensions.create("foo", FooExtension)
        target.task('sometask', type: GreetingTask)
    }
}

class FooExtension {
    def sayHello(String text) {
        println "Hello " + text
    }
}
Then in the client build.gradle file you can do this:
task HelloTask << {
    foo.sayHello("DOM")
}
c:\plugintest>gradle -q HelloTask
Hello DOM
https://docs.gradle.org/current/userguide/custom_plugins.html
I implemented this recently; a full example is available on GitHub.
The injection basically boils down to
target.ext.utilityMethod = SomeClass.&utilityMethod
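In the context of a plugin's apply method, that injection might look roughly like this (class and method names are illustrative):

import org.gradle.api.Plugin
import org.gradle.api.Project

class UtilityPlugin implements Plugin<Project> {
    void apply(Project target) {
        // expose a static helper as an extra property on the project
        target.ext.utilityMethod = SomeClass.&utilityMethod
    }
}

class SomeClass {
    static void utilityMethod(Project project) {
        println "Doing something with ${project.name}"
    }
}

A consuming build.gradle that applies the plugin can then call utilityMethod(project) directly.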
Beware:
This method could potentially conflict with some other plugin, so you should consider whether to use static imports instead.
Based on Answer 23290820.
Plugins are not meant to provide common methods but tasks.
When it comes to extensions, they should be used to gather input for the applied plugins:
Most plugins need to obtain some configuration from the build script.
One method for doing this is to use extension objects.
More details here.
Have a look at Peter's answer; using closures carried via ext might be what you are looking for.
I have a Puppet module which deploys a JAR file and writes some properties files (using ERB templates).
Recently we added a "mode" feature to the application, meaning the application can run in different modes depending on the values entered in the manifest.
My hierarchy is as follows:
setup
    config
        files
    install
Meaning setup calls the config class and the install class.
The install class deploys the relevant RPM file according to the mode(s).
The config class checks the modes and, for each mode, calls the files class with the specific mode and directory parameters; the reason for this structure is that the values of the properties depend on the actual mode.
The technical problem is that if I have multiple modes in the manifest (which is my goal) I need to call the files class twice:
if grep($modesArray, $online_str) == [$online_str] {
    class { 'topology::files':
        dir  => $code_dir,
        mode => $online_str,
    }
}

$offline_str = "offline"
$offline_suffix = "_$offline_str"

if grep($modesArray, $offline_str) == [$offline_str] {
    $dir = "$code_dir$offline_suffix"
    class { 'topology::files':
        dir  => $dir,
        mode => $offline_str,
    }
}
However, in Puppet you cannot declare the same class twice.
I am trying to figure out how I can call a class twice, or find some other construct whose parameters I can access from my ERB files, but I can't work it out.
The documentation says it's possible but doesn't say how (I checked here https://docs.puppetlabs.com/puppet/latest/reference/lang_classes.html#declaring-classes).
So to summarize, is there a way to either:
Call the same class more than once with different parameters, or
(Some other way to) create multiple files based on the same ERB file (with different parameters each time)?
You can simply turn your class into a defined type (define):
define topology::files($dir, $mode) {
    file { "${dir}/filename":
        content => template("topology/${mode}.erb"),
    }
}
That will apply a different template for each mode
And then, instantiate it as many times as you want:
if grep($modesArray, $online_str) == [$online_str] {
    topology::files { "topology_files_${online_str}":
        dir  => $code_dir,
        mode => $online_str,
    }
}

$offline_str = "offline"
$offline_suffix = "_$offline_str"

if grep($modesArray, $offline_str) == [$offline_str] {
    $dir = "$code_dir$offline_suffix"
    topology::files { "topology_files_${offline_str}":
        dir  => $dir,
        mode => $offline_str,
    }
}
Your interpretation of the documentation is off the mark.
Classes in Puppet should be considered singletons. There is exactly one instance of each class. It is part of a node's manifest or it is not. The manifest can declare the class as often as it wants using the include keyword.
Beware of declaring it using the resource-like syntax:
class { 'classname': }
This can appear at most once in a manifest. Parameter values are now permanently bound to your class. Your node has chosen what specific shape the class should take for it.
Without seeing the code for your class, your question makes me believe that you are trying to use Puppet as a scripting engine. It is not. Puppet only allows you to model a target state. There are some powerful mechanics to implement complex workflows to achieve that state, but you cannot use it to run arbitrary transformations in an arbitrary order.
If you add the class code, we can try and give some advice on how to restructure it to make Puppet do what you need. I'm afraid that may not be possible, though. If it is indeed necessary to sync one or more resources to different states at different times (scripting engine? ;-) during the transaction, you should instead implement that whole workflow as an actual script and have Puppet run that through an exec resource whenever appropriate.