jenkins-pipeline load scoping "method code too large"

I'm setting up a pretty complex pipeline to handle legacy builds.
There are currently 8 stages, and more on the way - perhaps a total of 12-15 stages.
Each stage does pretty similar actions:
- take a list, and for each item
- create an entry in a map that
- allocates a node and executes a set of BAT scripts (yes, Windows)
and then run the map in parallel (roughly the shape sketched below).
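For context, a minimal sketch of that per-stage shape (the 'windows' label and script names here are illustrative, not from the question):

def runStage(String stageName, List items) {
    stage(stageName) {
        def branches = [:]
        items.each { item ->
            // each item becomes a parallel branch that grabs a node and runs its BAT scripts
            branches[item] = {
                node('windows') {
                    bat "build_${item}.bat"
                }
            }
        }
        parallel branches
    }
}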
The current pipeline is about 1,000 lines long, and I'm getting the "method code too large" error.
I'm in the process of refactoring this DSL into separate loadable scripts.
So far, so good.
But I just ran a test that indicates loading a script is additive to the overall pipeline. So I'd like to learn what is best to do here.
Test:
base.groovy:
def myVar // which I thought was global (scoped to basRef)
def setTest() { myVar = 'abc' }
def getTest() { return myVar }
return this // so that load() returns this script object
pipeline.groovy:
stage('one') {
    def basRef = load('base.groovy')
    basRef.setTest()
    echo basRef.getTest()
}
stage('two') {
    def basRef = load('base.groovy')
    echo basRef.getTest()
}
Stage one shows "abc", as expected.
Stage two ALSO shows "abc".
My questions:
How do I know using loadable files will not result in "method too large"?
What is the scope of a loadable file?
I've tried setting basRef = null to allow garbage collection to work, but I'm not sure it does.
Thanks for any guidance on this.

Related

For reporting testcases to Xray we are using Karate Runtime Hooks; the issue here is that testcases are not getting executed while using hooks, even though the hooks code ran [duplicate]

Strange behaviour when I call a feature file for test cleanup using the afterFeature hook. The cleanup feature file is called correctly - I can see the print from the Background section of the file - but for some reason the execution hangs at the Scenario Outline.
I have tried running the feature with the JUnit 5 runner and also in the IntelliJ IDE by right-clicking on the feature file, but I get the same issue: the execution hangs.
This is my main feature file:
Feature: To test afterFeature hook
Background:
* def num1 = 100
* def num2 = 200
* def num3 = 300
* def dataForAfterFeature =
"""
[
{"id":'#(num1)'},
{"id":'#(num2)'},
{"id":'#(num3)'}
]
"""
* configure afterFeature = function(){ karate.call('after.feature'); }
Scenario: Test 1
* print 'Hello World 1'
Scenario: Test 2
* print 'Hello World 2'
The afterFeature file:
@ignore
Feature: Called after the calling feature's run is completed
Background:
* def dynamicData = dataForAfterFeature
* print 'dynamicData: ' + dynamicData
Scenario Outline: Print dynamic data
* print 'From after feature for id: ' + <id>
Examples:
| dynamicData |
The execution stalls at the Scenario Outline. I can see the printed value of the dynamicData variable in the console, but nothing happens after that.
It seems like the outline loop is not starting, or has crashed. I was not able to get details from the log, as the test never finishes and no error is reported. What else can I check, or what might be the issue?
If it is not easily reproducible, what test cleanup workaround do you recommend?
For now, I have the following workaround: I have added a test clean-up scenario at the end of the feature that has the tests. I have stopped parallel execution for these tests, and to be honest I do not mind these tests not running in parallel, as they are fast to run anyway.
IDs to delete:
* def idsToDelete =
"""
[
101,
102,
103
]
"""
Test clean up scenario:
# Test data clean-up scenario
Scenario: Delete test data
# JS function that calls the delete-test-data feature.
* def deleteTestDataFun =
"""
function(x) {
var temp = [x];
// Call to feature. Pass argument as json object.
karate.call('delete-test-data.feature', { id: temp });
}
"""
* karate.forEach(idsToDelete, deleteTestDataFun)
This calls the delete-test-data feature once per id, passing each id that needs to be deleted.
Delete test data feature:
Feature: To delete test data
Background:
* def idVal = id
Scenario: Delete
Given path 'tests', 'delete', idVal
Then method delete
Yeah, I personally recommend a strategy to always pre-clean-up, because you cannot guarantee that an "after" hook gets called, e.g. if the machine is switched off.
Sometimes the simplest option is to do this as plain old Java code in your JUnit test-suite. So maybe a one-liner after using the Runner is sufficient.
It gets tricky if you need to keep track of dynamic data that your tests have created. What I would do is write a Java singleton, use it in your tests to "collect" the IDs that need to be deleted, and then use it in your JUnit class. You can use things like @AfterClass.
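A minimal sketch of that idea (all class and method names here are illustrative):

// TestDataRegistry.java - a singleton that collects ids created during the test run
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TestDataRegistry {
    private static final TestDataRegistry INSTANCE = new TestDataRegistry();
    private final List<String> ids = Collections.synchronizedList(new ArrayList<>());

    private TestDataRegistry() { }

    public static TestDataRegistry instance() { return INSTANCE; }

    public void collect(String id) { ids.add(id); }

    public List<String> collected() { return ids; }
}

From a feature you could collect ids with something like * eval Java.type('TestDataRegistry').instance().collect(someId), and then delete them in the JUnit class:

// runs once after all tests in this class have finished
@AfterClass
public static void deleteTestData() {
    for (String id : TestDataRegistry.instance().collected()) {
        // call your delete endpoint / delete-test-data.feature here
    }
}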
Please try and replicate using the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue - because this can indeed be a bug with Scenario Outline.
Finally, you can evaluate ExecutionHook which has an afterSuite() callback: https://github.com/intuit/karate/issues/970#issuecomment-557443551
EDIT: in 1.0 - it has become RuntimeHook: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#hooks

Trying to loop through an array and pass each value into a gradle task

I'm trying to do a series of things which should be quite simple, but are causing me a lot of pain. At a high level, I want to loop through an array and pass each value into a gradle task which should return an array of its own. I then want to use this array to set some Jenkins config.
I have tried a number of ways of making this work, but here is my current set-up:
project.ext.currentItemEvaluated = "microservice-1"
task getSnapshotDependencies {
    def item = currentItemEvaluated
    def snapshotDependencies = []
    // this does a load of stuff like looping through gradle dependencies,
    // which means this really needs to be a gradle task rather than a
    // function etc. It eventually populates the snapshotDependencies array.
    return snapshotDependencies
}
jenkins {
    jobs {
        def items = getItems() // returns an array of projects to loop through
        items.each { item ->
            "${item}-build" {
                project.ext.currentItemEvaluated = item
                def dependencies = project.getSnapshotDependencies
                dsl {
                    configure configureLog()
                    // set some config here using the returned dependencies array
                }
            }
        }
    }
}
I can't really change how the jenkins block is set up, as it's already well matured, so I need to work within that structure if possible.
I have tried numerous ways of trying to pass a variable into the task - here I am using a project variable. The problem seems to be that the task evaluates before the jenkins block, and I can't work out how to properly evaluate the task again with the newly set currentItemEvaluated variable.
Any ideas on what else I can try?
After some more research, I think the problem here is that there is no concept of 'calling a task' in Gradle. Gradle tasks are just a graph of tasks and their dependencies, so they execute in an order that only adheres to those dependencies.
I eventually had to solve this problem without trying to call a Gradle task: I have a build task print the relevant data to a file, and my jenkins block reads from the file, roughly as sketched below.
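A rough sketch of that shape (the task name, configuration, and file path here are illustrative assumptions, not the actual build):

// execution-time task: writes the snapshot dependencies to a file
task writeSnapshotDependencies {
    doLast {
        def snapshots = configurations.runtime.allDependencies
            .findAll { it.version?.endsWith('-SNAPSHOT') }
            .collect { "${it.group}:${it.name}:${it.version}" }
        file("$buildDir/snapshot-dependencies.txt").text = snapshots.join('\n')
    }
}

// the jenkins block then just reads the file (written by an earlier build) at configuration time:
def dependencies = file("$buildDir/snapshot-dependencies.txt").readLines()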

Jenkins parameterized matrix job

I am in the process of setting up a Jenkins job to run a bunch of tests on some c++ code. The code is generated during one Jenkins job. There are a number of sub-projects, with their code in their own folders.
My thought is to have a matrix job where each configuration runs the tests on one folder of code files. There are two things that I am not sure of the best way to do, though...
I would like to set up the matrix job to automatically pick up if more sub-folders are added. Something like passing a list of folders to the job as a parameter, and having that parameter used as the axis for the job.
I would like the test to not be run on a specific folder unless some of the code in that folder was changed by the parent job.
Right now, how to set up this test is completely open - I am trawling for ideas. If you have ever set up something like this, how did you do it?
I had a similar task - running a matrix job with a variable number of folders as one axis. The folders were in version control, but could just as easily be artifacts. What I've done is create two jobs: one main and normal, the other slave and matrix. Here is the code that needs to be run as elevated Groovy in the main job:
import hudson.model.*
def currentBuild = Thread.currentThread().executable;
def jobName = 'SlaveMatrixJob' // Name of the matrix job to configure
def axisFolders = []
def strings =""
// Get the matrix job
def job = hudson.model.Hudson.instance.getItem(jobName)
assert job != null, "The job $jobName could not be found"
// Check it is a matrix job
assert job.getClass() == hudson.matrix.MatrixProject.class, "The job $jobName is of class '${job.getClass().name}', but expecting 'hudson.matrix.MatrixProject'"
// Get the folders
new File("C:\\Path\\Path").eachDirMatch ~/_test.*/, {it ->
println "Got folder: ${it.name}"
axisFolders << it.name
}
// Check if the array is empty
assert !axisFolders.isEmpty(), "No folders found to set in the matrix, aborting"
//Sort them
axisFolders.sort()
// Now set new axis list for test folders
def newAxisList = new hudson.matrix.AxisList()
newAxisList.add(new hudson.matrix.TextAxis('TEST_FOLDERS', axisFolders))
job.setAxes(newAxisList)
println "Matrix Job $jobName new axis list: ${job.getAxes().toString()}"
What this does, basically, is get all the folders in C:\Path\Path starting with _test and insert them into the SlaveMatrixJob axis parameter named TEST_FOLDERS.
I had to go with two jobs, since I was not able to make this dynamic update work without installing additional plugins, which was not possible at the time.
For the second point, you could add logic to the script to check if the folders have been updated since the last build, and skip the ones that weren't. Or you could search for some plugins, but my advice is to go with the script for simpler tasks.
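For example, the folder loop above could be extended along these lines (a sketch - whether a folder's modification time is a reliable change signal depends on how the parent job writes the files):

// only keep folders modified since the last successful run of the matrix job
def lastSuccess = job.getLastSuccessfulBuild()
def cutoff = lastSuccess ? lastSuccess.getTimeInMillis() : 0L
new File("C:\\Path\\Path").eachDirMatch(~/_test.*/) { dir ->
    if (dir.lastModified() > cutoff) {
        axisFolders << dir.name
    }
}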

Alternative to rspec double that does not fail test even if allow receive is not specified for a function

Many times one outcome may have two different consequences that need to be tested with a test double. For example, if a network connection is successful, I'd like to log a message and also pass the resource to another object that will store it internally. On the other hand, it feels unclean to put these two in one test. For example, this code fails:
describe SomeClass do
  let(:logger) { double('Logger') }
  let(:registry) { double('Registry') }
  let(:cut) { SomeClass.new(logger, registry) } # doubles injected into the class under test
  let(:player) { Player.new }

  describe "#connect" do
    context "connection is successful" do
      it "should log info" do
        logger.should_receive(:info).with('Player connected successfully')
        cut.connect player
      end
      it "should register player" do
        registry.should_receive(:register).with(player)
        cut.connect player
      end
    end
  end
end
I could specify in each test that the function in the other one might get called, but that looks like unnecessary duplication. In that case I'd rather make this one test.
I also don't like that it's never explicit in the test that a method should NOT be called.
Does anyone know about an alternative that has an explicit 'should_not_receive' message instead of automatically rejecting calls that are not explicitly specified?
RSpec supports should_not_receive, which is equivalent to should_receive(...).exactly(0).times, as discussed in this message from the original author of RSpec.
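With that, the "should NOT be called" expectation can be made explicit in its own example, e.g. (the context and log message here are illustrative, not from the question):

context "connection fails" do
  it "logs an error and does not register the player" do
    logger.should_receive(:error).with('Player connection failed')
    registry.should_not_receive(:register)
    cut.connect player
  end
end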

Is there any way to delay a resource's attribute resolution until the "execute" phase?

I have two LWRPs. The first deals with creating a disk volume, formatting it, and mounting it on a virtual machine; we'll call this resource cloud_volume. The second resource (what it does is not really important) needs a UUID for the newly formatted volume as a required attribute; we'll call this resource foobar.
The resources cloud_volume and foobar are used in a recipe something like the following:
volumes.each do |mount_point, volume|
  cloud_volume "#{mount_point}" do
    size volume['size']
    label volume['label']
    action [:create, :initialize]
  end
  foobar "#{mount_point}" do
    disk_uuid node[:volumes][mount_point][:uuid] # This is set by cloud_volume
    action [:do_stuff]
  end
end
So, when I do a chef run I get a Required argument disk_identifier is missing! exception.
After doing some digging, I discovered that recipes are processed in two phases: a compile phase and an execute phase. It looks like the issue is at compile time, as that is the point at which node[:volumes][mount_point][:uuid] is not yet set.
Unfortunately I can't use the trick that Opscode has here, as notifications are being used in the cloud_volume LWRP (so it would fall into the anti-pattern shown in the documentation).
So, after all this, my question is, is there any way to get around the requirement that the value of disk_uuid be known at compile time?
A cleaner way would be to use lazy attribute evaluation. This will evaluate node[:volumes][mount_point][:uuid] at execution time instead of compile time:
foobar "#{mount_point}" do
disk_uuid lazy { node[:volumes][mount_point][:uuid] }
action [:do_stuff]
end
Disclaimer: this is the way to go with older Chef (< 11.6.0), before lazy attribute evaluation was added.
Wrap your foobar resource in a ruby_block and define foobar dynamically. This way, after the compile stage you will have Ruby code in the resource collection, and it will be evaluated in the run stage.
ruby_block "mount #{mount_point} using foobar" do
block do
res = Chef::Resource::Foobar.new( mount_point, run_context )
res.disk_uuid node[:volumes][mount_point][:uuid]
res.run_action :do_stuff
end
end
This way, node[:volumes][mount_point][:uuid] will not be known at compile time, but it also will not be accessed at compile time. It will only be accessed in the run stage, by which point it should already be set.
