I'm trying to upload multiple test cases in one go. How can I upload multiple test cases at one time in ALM?
All flow files that you want to upload should be updated with the name attribute.
Make sure the src folder has a properties file named "multipleFlows.properties"; if it does not, create it.
Update the multipleFlows.properties file with all the flow IDs and flow XML paths that you would like to upload through ALMSync, as shown below.
For example, the multipleFlows.properties file should follow this format:
flow1_id=flow1_xml_path
flow2_id=flow2_xml_path
flow3_id=flow3_xml_path
flow4_id=flow4_xml_path
Open the Run Configuration ALMSync >> Arguments tab and update the arguments as follows:
createTestCase flow_map multipleFlows
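For illustration, a filled-in multipleFlows.properties might look like this (the flow IDs and paths below are hypothetical placeholders, not values from the original post):
Flow1234=C:/ALMSync/flows/LoginFlow.xml
Flow1235=C:/ALMSync/flows/SearchFlow.xml
Flow1236=C:/ALMSync/flows/CheckoutFlow.xml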
I want to set the value of 'teamcity.build.branch' dynamically, according to the result of another TC build configuration that is part of the build pipeline.
Is that even possible? It looks like the value is evaluated and used at the start of the build pipeline.
Use case:
I am executing a TC build configuration that generates a unique number.
In the connected TC build configuration that is part of the same pipeline, I want that number to be used in 'teamcity.build.branch', just for visualization purposes.
I am already using a service message to overwrite the parameter, but the change is not taken into account. It looks like the value is read at a very early stage of the build process.
Check the reference below, which covers build numbers and Git branch names:
https://octopus.com/blog/teamcity-version-numbers-based-on-branches
You could overwrite the value of the parameter by using a simple script that emits a "set parameter" service message.
By using a dedicated service message in your build script, you can dynamically update build parameters of the build right from a build step (...)
With that approach, here are the steps that you need to perform:
In the first build config, define a custom build parameter and set its value to the unique number you're generating. Do this directly from the script that generates the unique number by writing something like this to STDOUT (see the sketch after these steps):
##teamcity[setParameter name='magicNumber' value='1234']
In the dependent build config, you now have access to that parameter. Using a second build script, you can overwrite the teamcity.build.branch with the same mechanism:
##teamcity[setParameter name='teamcity.build.branch' value='the new value']
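Going back to the first step, here is a minimal sketch of a script-based build step that generates the number and emits the service message (the generator below is a placeholder; substitute whatever produces your unique number):
// Build-step script: generate the unique number and publish it to
// TeamCity via a setParameter service message on STDOUT.
def magicNumber = System.currentTimeMillis() % 100000 // stand-in generator
println "##teamcity[setParameter name='magicNumber' value='${magicNumber}']"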
Note 1: I recommend against overwriting the built-in parameters, because this might have strange side-effects. Rather, define a custom parameter in the second build config and use that for your visualization purposes.
Note 2: In case you decide to ignore Note 1, it may be necessary to overwrite the build parameters by setting the dependency property as outlined in the docs in section "Overriding Dependencies Properties":
##teamcity[setParameter name='reverse.dep.*.teamcity.build.branch' value='the new value']
Is there a way to capture and store (or write to a file) the values returned in the response (checkpoint values)?
Using HP UFT 11.52
Thanks,
Lynn
I figured it out. In UFT API tests, under Standard Activities, there are file function modules, including "Write to File". I added the module to the test, set the path and other properties, passed the variable to the file, and it worked! Couldn't be easier.
I mentioned this in my other answer: you can also write it programmatically if you have a dynamic array response. Please refer to the link below:
https://stackoverflow.com/a/28012383/3972994
After running a test, you can find a Snapshots/LastIteration directory in the test folder.
In it you can find the return value for each step, saved in a txt file.
Note that if you data-drive the step, only the last iteration will be saved to file.
However, in the test's log (Test dir/Log/vtd_user.log) you can find all the iterations persisted.
Thanks,
Yossi
You do not need to use the standard activities if you do this:
var iResponse = this.Activity.responsebody;
// Write the response body to a file, overwriting any previous content
System.IO.File.WriteAllText(@"directoryPath\fileName.txt", iResponse.ToString());
The above will write the response to the file, rewriting it on every run.
I am in the process of setting up a Jenkins job to run a bunch of tests on some C++ code. The code is generated by one Jenkins job. There are a number of sub-projects, with their code in their own folders.
My thought is to have a matrix job where each configuration runs the tests on one folder of code files. There are two things I am not sure about the best way to handle, though...
I would like to set up the matrix job to automatically pick up if more sub-folders are added. Something like passing a list of folders to the job as a parameter, and have that parameter used as the axis for the job.
I would like the test to not be run on a specific folder unless some of the code in that folder was changed by the parent job.
Right now, how to set up this test is completely open; I am trawling for ideas. If you have ever set up something like this, how did you do it?
I had a similar task: running a matrix job with a variable number of folders as one axis. The folders were in version control but could easily be artifacts. What I did was create two jobs: one main and normal, the other a slave matrix job. Here is the code that needs to be run as elevated Groovy in the main job:
import hudson.model.*
def currentBuild = Thread.currentThread().executable;
def jobName = 'SlaveMatrixJob' // Name of the matrix job to configure
def axisFolders = []
def strings =""
// Get the matrix job
def job = hudson.model.Hudson.instance.getItem(jobName)
assert job != null, "The job $jobName could not be found"
// Check it is a matrix job
assert job.getClass() == hudson.matrix.MatrixProject.class, "The job $jobName is of class '${job.getClass().name}', but expecting 'hudson.matrix.MatrixProject'"
// Get the folders
new File("C:\\Path\\Path").eachDirMatch ~/_test.*/, {it ->
println "Got folder: ${it.name}"
axisFolders << it.name
}
// Check if the array is empty
assert !axisFolders.isEmpty(), "No folders found to set in the matrix, aborting"
//Sort them
axisFolders.sort()
// Now set new axis list for test folders
def newAxisList = new hudson.matrix.AxisList()
newAxisList.add(new hudson.matrix.TextAxis('TEST_FOLDERS', axisFolders))
job.setAxes(newAxisList)
println "Matrix Job $jobName new axis list: ${job.getAxes().toString()}"
What this does, basically, is grab all the folders in C:\Path\Path starting with _test and insert them into the SlaveMatrixJob axis named TEST_FOLDERS.
I had to go with two jobs, since I was not able to make this dynamic update work without installing additional plugins, which was not possible at the time.
For the second point, you could add logic to the script to check whether the folders have been updated since the last build and skip the ones that weren't; a rough sketch follows below. Or you could search for plugins, but my advice is to go with the script for simpler tasks.
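As a sketch of that check, building on the script above (job and axisFolders are already defined there); treat this as an assumption to validate, since a directory's lastModified timestamp only changes when its direct entries change:
// Keep only folders modified since the last build of the matrix job;
// when there is no previous build, keep everything.
def lastBuild = job.getLastBuild()
def lastBuildTime = lastBuild ? lastBuild.getTimeInMillis() : 0L
axisFolders = axisFolders.findAll { name ->
    new File("C:\\Path\\Path", name).lastModified() > lastBuildTime
}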
I have two jobs in Jenkins, both of which need the same parameter.
How can I run the first job with a parameter so that when it triggers the second job, the same parameter is used?
You can use the Parameterized Trigger Plugin, which will let you pass parameters from one task to another.
You also need to add the parameter you passed from upstream to the downstream job.
1. Post-Build Actions > select "Trigger parameterized build on other projects"
2. Enter the environment variable with a value. The value can also be a Jenkins build parameter.
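For example, the "Predefined parameters" field of that trigger takes one KEY=value pair per line; the names below are hypothetical:
DEPLOY_ENV=staging
UPSTREAM_BUILD=${BUILD_NUMBER}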
Detailed steps can be seen here:
https://itisatechiesworld.wordpress.com/jenkins-related-articles/jenkins-configuration/jenkins-passing-a-parameter-from-one-job-to-another/
Hope it's helpful :)
The accepted answer here does not work for my use case. I needed to be able to dynamically create parameters in one job and pass them into another. As Mark McKenna mentions, there is seemingly no way to export a variable from a shell build step to the post-build actions.
I achieved a workaround using the Parameterized Trigger Plugin by writing the values to a file and using that file as the parameters to import, via 'Add post-build action' -> 'Trigger parameterized build...' and then selecting 'Add Parameters' -> 'Parameters from properties file'.
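A minimal sketch of that workaround (the file name and key are illustrative, not from the original answer): an "Execute shell" build step writes the dynamically created values to a properties file, e.g.
echo "API_VERSION=$(date +%Y%m%d).${BUILD_NUMBER}" > trigger.properties
and the post-build trigger's 'Parameters from properties file' field then points at trigger.properties.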
I think the answer above needs some updating:
I was trying to create a dynamic directory to store my upstream build artifacts, so I wanted to pass my upstream job's build number to the downstream job. I tried the above steps but couldn't make it work. Here is how it worked:
I copied the artifacts from my current job using the Copy Artifact plugin.
In the post-build action of the upstream job, I added the variable "SOURCE_BUILD_NUMBER=${BUILD_NUMBER}" and configured it to trigger the downstream job.
Everything worked, except that my downstream job was not able to read $SOURCE_BUILD_NUMBER to create the directory.
I found out that to use this variable, I had to define the same variable in the downstream job as a parameter variable.
This is because newer versions of Jenkins require you to define the variable in the downstream job as well. I hope this is helpful.
(for fellow googlers)
If you are building a serious pipeline with the Build Flow Plugin, you can pass parameters between jobs with the DSL like this:
Supposing an available string parameter "CVS_TAG", in order to pass it to other jobs:
build("pipeline_begin", CVS_TAG: params['CVS_TAG'])
parallel (
// will be scheduled in parallel.
{ build("pipeline_static_analysis", CVS_TAG: params['CVS_TAG']) },
{ build("pipeline_nonreg", CVS_TAG: params['CVS_TAG']) }
)
// will be triggered after previous jobs complete
build("pipeline_end", CVS_TAG: params['CVS_TAG'])
Hint for displaying available variables/params:
// output values
out.println '------------------------------------'
out.println 'Triggered Parameters Map:'
out.println params
out.println '------------------------------------'
out.println 'Build Object Properties:'
build.properties.each { out.println "$it.key -> $it.value" }
out.println '------------------------------------'
Just adding my answer in addition to Nigel Kirby's, as I can't comment yet:
In order to pass a dynamically created parameter, you can also export the variable in the 'Execute Shell' tile and then pass it through 'Trigger parameterized build on other projects' => 'Predefined parameters' => 'YOUR_VAR=$YOUR_VAR'. My team uses this feature to pass the npm package version from the build job to deployment jobs.
UPDATE: the above only works for parameters injected by Jenkins; a parameter created in the shell still needs the properties-file method, e.g. echo YOUR_VAR=${YOUR_VAR} > variable.properties, passing that file downstream.
I faced the same issue when I had to pass a POM version to a downstream Rundeck job.
What I did was use parameter injection via a properties file, as follows:
1) Creating properties in a properties file via shell:
Build actions:
Execute a shell script
Inject environment variables
E.g.: properties definition
2) Passing the defined properties to the downstream job:
Post-build actions:
Trigger parameterized build on other projects
Add parameters: Current build parameters
Add parameters: Predefined parameters
E.g.: properties sending
3) It was then possible to use $POM_VERSION in the downstream Rundeck job.
/!\ Jenkins version: 1.636
/!\ For some reason, when creating the triggered build, it was necessary to add the option 'Current build parameters' for the properties to be passed.
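To make those steps concrete, here is a sketch; the file name and the Maven invocation are assumptions, not from the original post (-DforceStdout needs maven-help-plugin 3.1.0+):
# Execute a shell script: extract the version from pom.xml into a properties file
echo "POM_VERSION=$(mvn -q help:evaluate -Dexpression=project.version -DforceStdout)" > pom.properties
The 'Inject environment variables' step then points at pom.properties, and the predefined parameter sent to the downstream job would be POM_VERSION=${POM_VERSION}.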
Reading through the answers, I don't see another option that I like so will offer it as well. I love the parameterization of jobs, but it doesn't always scale well. If you have jobs which are not directly downstream of the first job but farther down the pipeline, you don't really want to parameterize every job in the pipeline so as to be able to pass the parameters all the way through. Or if you have a large number of parameters used by a variety of other jobs (especially those not necessarily tied to one parent or master job), again parameterization doesn't work.
In these cases, I favor outputting the values to a properties file and then injecting that in whatever job I need using the EnvInject plugin. This can be done dynamically, which is another way to solve the issue from another answer above where parameterized jobs were still used. This solution scales very well in many scenarios.
This can be done via a Groovy function:
Upstream Jenkinsfile: the param CREDENTIALS_ID is passed downstream
pipeline {
    agent any
    stages {
        stage('trigger downstream') {
            steps {
                build job: 'my_downstream_job_name',
                    parameters: [string(name: 'CREDENTIALS_ID', value: 'other_credentials_id')]
            }
        }
    }
}
Downstream Jenkinsfile: if the param CREDENTIALS_ID is not passed from upstream, the function returns a default value
def getCredentialsId() {
    if (params.CREDENTIALS_ID) {
        return params.CREDENTIALS_ID
    } else {
        return "default_credentials_id"
    }
}
pipeline {
    agent any
    environment {
        // falls back to the default when CREDENTIALS_ID is not passed in
        TEST_PASSWORD = credentials("${getCredentialsId()}")
    }
    stages {
        stage('test') {
            steps {
                // TEST_PASSWORD is available here, masked in the console log
                echo 'credentials resolved'
            }
        }
    }
}
You can use the Hudson Groovy builder to do this.
First job in pipeline (screenshot in original answer)
Second job in pipeline (screenshot in original answer)
I figured it out, with almost two hours' worth of trial and error.
This WORKS and is what you do to pass variables to a remote job:
def handle = triggerRemoteJob(remoteJenkinsName: 'remoteJenkins', job: 'RemoteJob', parameters: "param1=${env.PARAM1}\nparam2=${env.param2}")
Use \n to separate two parameters, with no spaces.
As opposed to
parameters: '''someparams'''
we use
parameters: "someparams"
The " ... " (double quotes, not two single quotes) is what gets us the values of the desired variables.
The ''' ... ''' or ' ... ' (three single quotes or plain single quotes) will not resolve those values.
All parameters here are defined in the environment{} block at the start of the pipeline and are modified in stages > steps > script blocks wherever necessary.
I also tested and found that when you use " ... " you cannot embed something like ''' ... "..." ''' or "... '..' ..." or any combination of them.
The catch here is that when you are using "..." in the parameters section, you cannot pass a literal string value; for example, this WILL NOT WORK:
def handle = triggerRemoteJob(remoteJenkinsName: 'remoteJenkins', job: 'RemoteJob', parameters: "param1=${env.PARAM1}\nparam2='param2'")
If you want to pass something like the above, you will need to set an environment variable param2='param2' and then use ${env.param2} in the parameters section of the remote trigger plugin step.
You can also make a job write into a properties file somewhere and have another job read it. One of the ways to do that is to inject variables via the EnvInject plugin.