I want to be able to load a custom class from my pipeline script.
Specifically, I'm looking at having my pipeline script's checkout fetch the Jenkinsfile and a number of .groovy files in the same directory (e.g. classA.groovy, classB.groovy).
I would expect to be able to have my pipeline script simply do a:
import classA
import classB
However, this results in an "Unable to resolve class" error.
I've tried the pipeline-classpath-step-plugin, but it requires the main pipeline script to call its new step (AddToClassPath) and then load an additional file; only that file can then do the imports.
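In other words, instead of plain imports at the top of the Jenkinsfile, I'd end up with something like this (the step name is as I understand the plugin, and its exact signature is my assumption; load is the built-in step):
node {
    checkout scm
    AddToClassPath(pwd()) // hypothetical usage of the plugin's step
    // only a second file, evaluated via the built-in `load` step, can then
    // do the imports; pipelineBody.groovy ends with `return this`
    def body = load 'pipelineBody.groovy'
    body.run() // run() is a placeholder method defined in pipelineBody.groovy
}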
Is there some other way to modify (or even to see) the classpath the script is running with?
Related
I just got started on a Jenkins pipeline school assignment. The pipeline structure resides in a library, and there is a client module that refers to it in its Jenkinsfile.
Is it possible to delegate execution to a class in the client module without invoking a script through the shell (sh or cmd)?
I want to invoke a class directly. Does Jenkins maintain classloader isolation between the library and the external module?
Does it unpack both the client and the library into a common workspace and then load them together?
Or
Can I load a class lazily at runtime? Say, the pipeline in the library knows the client class (via a startup parameter or script invocation), then instantiates it and transfers control to it? Something like an SPI.
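Conceptually, something like this in the library (all names are placeholders):
// vars/runClient.groovy in the library: receive the client class name as a
// parameter, instantiate it reflectively, and hand control to it
def call(Map params) {
    def clazz = Class.forName(params.clientClass, true, this.class.classLoader)
    def handler = clazz.newInstance()
    handler.run(this) // run() is a placeholder method on the client class
}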
App design:
Library ABC with src/vars/resources.
Module X uses @Library('ABC'); its Jenkinsfile invokes the pipeline in the library ABC.
Module X structure
src/
package/class1
I am trying to use GroovyClassLoader to load a Groovy source file for class1 from the library script and invoke it, roughly as sketched below. The Jenkins pipeline, however, does not permit GroovyClassLoader in the script. How do I whitelist it?
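What I'm attempting in the library script (class, method, and path names are placeholders):
// load a client-module source file at runtime and invoke it
def loader = new GroovyClassLoader(this.class.classLoader)
def clazz = loader.parseClass(new File("${env.WORKSPACE}/src/package/Class1.groovy"))
def instance = clazz.newInstance()
instance.execute(this) // execute() is a placeholder method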
I assume that your Jenkins shared library has the following structure.
ABC
|
|----src/
|----resources/
|----vars/
You can create a Test.groovy file under src (at src/com/jenkins/Test.groovy, to match the package):
package com.jenkins

class Test {
    def execute(def dsl) {
        dsl.echo "I am inside the Test class"
        println("I am also inside the Test class")
    }
}
Create a pipeline.groovy under the vars directory.
An example pipeline.groovy would be:
import com.jenkins.Test

def call(final Map parameters = [:]) {
    node('master') {
        // instantiate the class and pass the script object (`this`) so it
        // can call pipeline steps such as echo
        new Test().execute(this) // this utilizes the class that you have defined
    }
}
Now the module can have a Jenkinsfile with the following content.
@Library('ABC') _
pipeline() // This invokes the call() method in pipeline.groovy
I need to run a simple Python script to copy a set of files from one directory to another during the build phase of my Sphinx documentation.
Copying function:
location: source/_plugins/copy_firmware_files.py
import json, os, sys
from pathlib import Path
import shutil

def copy_firmware_files(device):
    # copy firmware files
    ...
I'm importing this module into my conf.py, as the configuration file contains the device name, which makes it a simple way to execute the code. I'm currently doing this as below:
Configuration File (conf.py)
location: source/conf.py
import os
import sys

sys.path.append(os.path.abspath("_plugins"))
from copy_firmware_files import *

# initialize files depending on build
copy_firmware_files(device_name)
The above works as intended, i.e. the relevant files are copied to their respective folders before the build. However, I'm unsure if it's the "proper" way to do so. Is there a more correct way of achieving the same result?
There are several ways, but it sounds like two are most appropriate. The firmware files are static files and can be treated as such by Sphinx.
html_extra_path is one option.
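A minimal conf.py sketch (the firmware directory name is an assumption):
# conf.py
# everything under firmware/ (relative to conf.py) is copied verbatim
# into the root of the built HTML, without being processed by Sphinx
html_extra_path = ["firmware"]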
If you want to present links to download the files, you can use the download directive; Sphinx copies each referenced file into the build output (under _downloads) and links to it.
See :download:`this example script <../example.py>`.
I've got Groovy Job DSL code (a main script) in place that creates ~100 test jobs. The idea is that it should be possible to execute them manually, and we also want pipelines that execute them at night.
So, the pipeline jobs (multiple) will be created too. Not a problem.
But since the main DSL Groovy script will be quite big, I want the corresponding pipeline scripts (which the pipeline jobs will load) to be placed in separate files. Since there are so many jobs, I've already placed the configuration of all jobs in a separate file, where they are defined in a map.
The logic that creates all the test jobs is placed in the main script, and it loops through the map located in the separate file. Works fine.
I don't want the job names that must be configured in the pipeline definition to be hardcoded (duplicated information)! So the plan was to create the pipeline definition files from the same main-script logic that creates the test jobs. Then all relevant information, such as job names and target hosts, is available on every iteration through the map.
Any ideas how separate pipeline script files can be created and manipulated from the Groovy Job DSL script?
I've tried creating files with standard Groovy code, but it was obvious that they were created on the Jenkins master, and this has to happen on the slave.
def newFile = new File("${WORKSPACE}/scripts/jenkins_job_dsl/pipeline.conf")
print "${newFile}"
I got this in the seed job when the new File code was used:
...
/home/builduser/workspace/Test_and_demo/Richard_Test/seed_job_richard/scripts/jenkins_job_dsl/pipeline.conf
FATAL: No such file or directory
13:33:50 java.io.IOException: No such file or directory
...
Is your problem reading the file pipeline.conf that already exists in the seed job's workspace?
You should be able to use readFileFromWorkspace, like this:
// read the file release.groovy from the seed job's workspace
// and configure a Groovy build step using that script
def releaseScript = readFileFromWorkspace('release.groovy')
job('example-1') {
    steps {
        groovyCommand(releaseScript)
    }
}

// read the file run.bat from the workspace of job project-a
// and use it to configure another job
def runScript = readFileFromWorkspace('project-a', 'run.bat')
job('example-2') {
    steps {
        batchFile(runScript)
    }
}
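To tie this back to the map-driven setup in the question: the same seed script can combine readFileFromWorkspace with pipelineJob, so the job names never have to be hardcoded in the pipeline scripts. A hedged sketch, where the map keys and the placeholder tokens are assumptions:
// assumed shape of the map defined in the separate configuration file
def testJobs = [
    [jobName: 'test-db', targetHost: 'db-host-1'],
    [jobName: 'test-ui', targetHost: 'ui-host-1'],
]

// one pipeline script template, read once from the seed job's workspace
def template = readFileFromWorkspace('pipeline-template.groovy')

testJobs.each { cfg ->
    pipelineJob("${cfg.jobName}-nightly") {
        definition {
            cps {
                // substitute per-job values instead of hardcoding them
                script(template
                        .replace('@JOB_NAME@', cfg.jobName)
                        .replace('@TARGET_HOST@', cfg.targetHost))
            }
        }
    }
}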
We are trying to use some custom helper functions from a .jar library in our Jenkinsfile. To achieve this, we want to use the @Grab annotation from Groovy Grape. Our Jenkinsfile looks like this:
@Grab('com.company:jenkins-utils:1.0')
import com.company.jenkinsutils.SomeClass
pipeline {
    ...
}
When trying to run the pipeline, we get the following error message:
java.lang.RuntimeException: No suitable ClassLoader found for grab
I already tried specifying @GrabConfig(systemClassLoader = true), however to no avail. I suppose it has to do with the pipeline scripts running in sandbox mode? Is there any way to make this work?
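For reference, the variant with @GrabConfig looked like this:
@GrabConfig(systemClassLoader = true)
@Grab('com.company:jenkins-utils:1.0')
import com.company.jenkinsutils.SomeClass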
I am looking for a simple way to write short shell scripts that call into jar files.
Having to keep track of (and install) all those jar files for the runtime classpath partly defeats the purpose of using a script (as opposed to building a runnable jar file in Eclipse). I'd like Maven (or something equivalent) to manage this.
Imagine:
#!/usr/bin/the-cool-shell
use org.apache.commons/lang/3.0.0
use org.json/json
import org.json.*;
new JSONObject("{}");
And this should fetch the required artifacts from Maven automatically (at basically zero overhead after the first download).
What are my options?
If you were using Groovy and the Groovy shell, you could use the Grape infrastructure.
#!/usr/bin/env groovy
@Grab('log4j:log4j:1.2.14')
import org.apache.log4j.Level
import org.apache.log4j.Logger
def logger = Logger.getLogger(GroovyShell.class)
Logger.rootLogger.level = Level.INFO
logger.info 'I am using the Log4j library by using Grape'
As for your exact example, this would work:
#!/usr/bin/env groovy
@Grapes([
    @Grab('org.apache.commons:commons-lang3:3.0'),
    @Grab('org.json:json:20090211')
])
import org.json.*
new JSONObject('{}')
In this case I used Groovy syntax, but ordinary Java syntax is also fine.
Taken from the Javadoc of the @Grapes annotation:
Sometimes we will need more than one grab per class, but we can only add
one annotation type per annotatable node. This class allows for multiple
grabs to be added.
You could try Gradle. It's a build-management tool, but it uses Groovy for its build scripts and the Maven dependency model, so your script could be a Gradle 'build' script that just does something other than building software.
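A minimal sketch of that idea, reusing the org.json dependency from above (the task name is arbitrary):
// build.gradle: resolve the dependency from Maven Central for the
// build script itself, then run arbitrary logic instead of a build
buildscript {
    repositories { mavenCentral() }
    dependencies { classpath 'org.json:json:20090211' }
}

task script {
    doLast {
        // the "build" just executes our code
        println new org.json.JSONObject('{}')
    }
}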