I am using a Jenkins plugin to upload test run results to Jira. Using this plugin I can send two JSON blobs of data for the import, but the variables in those JSON blobs can only be environment variables (not variables generally available in the Jenkinsfile).
When I run it, it recognizes environment variables that come from the parameters block (this is a parameterized build), but it does not recognize any environment variables I set, either in an environment {} block in the pipeline or by nesting the build step in a withEnv() {} block.
As a sanity check, right before the step in question, I echo two environment variables, one from the parameters block and one from the environment block, and both print to the console as expected. Once consumed by the plugin, however, only the variables coming from the parameters block are read as variables; the rest are left as literal strings.
So is there some difference in how these environment variables are stored/managed behind the scenes that might play into this?
So, for example, here are the parameters and environment blocks:
parameters {
    choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'staging', 'prod'], description: 'Select the environment to run against.')
    choice(name: 'TESTS', choices: ['All', 'API', 'Web'], description: 'Select the tests to run.')
}
environment {
    PROJECT_KEY = "$jiraProjectKey"
    TEST_PLAN_KEY = "$testPlanKeys[$env.ENVIRONMENT]"
    PRODUCT_NAME = "$productName"
    TEAM_NAME = "$teamName"
}
I then use these environment variables in the JSON blobs to set the Summary field of a Test Execution in Jira, with a line that looks like this:
...
"summary": "${ENVIRONMENT} - ${PRODUCT_NAME} - ${TESTS} Tests",
...
The resulting issue summary is:
dev - ${PRODUCT_NAME} - API Tests
So it will properly interpret the environment variables set by the parameters block, but not ones I set explicitly in the environment block.
In the JSON blobs that you are sending inline, make sure that you delimit multiline strings with """ and not with '''.
Replace:
... importInfo: '''{...'''
by:
...importInfo: """{..."""
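For example, using the summary line from the question (the rest of the plugin call is omitted here):

// with ''' the string is a literal, so ${PRODUCT_NAME} reaches the plugin unexpanded
importInfo: '''{
    "summary": "${ENVIRONMENT} - ${PRODUCT_NAME} - ${TESTS} Tests"
}'''

// with """ Groovy interpolates ${...} from the pipeline before the plugin sees the JSON
importInfo: """{
    "summary": "${ENVIRONMENT} - ${PRODUCT_NAME} - ${TESTS} Tests"
}"""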
I'm trying to run an npm script from the Jenkins pipeline via SAP Project Piper's npmExecuteScripts:
npmExecuteScripts:
  runScripts: ["testScript"]
That works! Now, I want to pass some arguments to my script.
According to the Project Piper documentation, there is a property scriptOptions, which takes care of passing arguments to the called script:
Options are passed to all runScripts calls separated by a --. ./piper npmExecuteScripts --runScripts ci-e2e --scriptOptions --tag1 will correspond to npm run ci-e2e -- --tag1
Unfortunately, I can't figure out the proper syntax for that command.
I've tried several combinations of using scriptOptions, e.g.:
scriptOptions: ["myArg"]
scriptOptions: ["myArg=myVal"]
and many others, but still without the desired outcome.
How can I call an npm-script and pass arguments / parameters to the script using the Project Piper's npmExecuteScripts?
To solve the issue, it's important to bear in mind that, in contrast to the regular argument-value mapping via the npm_config_ prefix, SAP Project Piper's scriptOptions doesn't perform any mapping: it passes an array of argument-value pairs «as is», and this array can then be picked up via process.argv.
The Jenkins pipeline configuration:
npmExecuteScripts:
  runScripts: ["testScript"]
  scriptOptions: ["arg1=Val1", "arg2=Val2"]
package.json:
"scripts": {
"testScript": "node ./testScript.mjs"
}
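Per the scriptOptions documentation quoted above, this configuration should effectively result in:
npm run testScript -- arg1=Val1 arg2=Val2
so the key=value pairs reach the script's argument list untouched.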
The server-side script:
/**
 * @param {Array.<String>} args - array of console input arguments to be parsed
 */
const testScript = function testScript(args) {…}

testScript(process.argv.slice(2));
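For illustration, here is a minimal version of testScript.mjs that parses those pairs; the key=value splitting is just one possible convention used by this sketch, not something Piper or npm does for you:

// testScript.mjs — with the scriptOptions above, process.argv.slice(2) is ["arg1=Val1", "arg2=Val2"]
const testScript = function testScript(args) {
  // split each "key=value" pair into an object (parsing scheme assumed by this sketch)
  const parsed = Object.fromEntries(args.map((pair) => pair.split('=')));
  console.log(parsed); // { arg1: 'Val1', arg2: 'Val2' }
};

testScript(process.argv.slice(2));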
P.S. Just to compare, the regular way to pass an argument's value to an npm script looks like:
npm run testScript --arg=Val
and the corresponding script entry in package.json:
"testScript": "echo \"*** My argument's value: ${npm_config_arg} ***\""
The output:
*** My argument's value: Val ***
The npm-script engine performs an argument-value mapping under the hood by using the npm_config_-prefix.
I have a Terraform project with which I was trying to use the Jenkins Custom Checkbox plugin (Custom Checkbox Parameter) so that I can build separate applications dynamically using the same IaC. However, I'm getting the following error when passing the plugin's name parameter into the Terraform plan and apply commands:
syntax error: bad substitution
The idea for all this is just to click on "select all" or each individual app and run the build, and this will create the IaC for the given application(s).
I have a terraform plan that I am running as a smoke test to verify the parameters above are being passed in correctly before running the apply. This looks like the following:
sh 'terraform plan -var-file="terraform-dev.tfvars" -var "app_name=[${params[${please-work}]}]" -input=false'
The documentation for the plugin states that you can reference the checked items using this format: "${params['please-work']}", which is what I've done above. That said, one caveat is that I'm having to wrap the values in quotes for this to work, since the variable is defined in Terraform as list(string).
NOTE: I have tested that all this works if I just hardcode the app names with the escapes as following:
sh 'terraform plan -var-file="terraform-dev.tfvars" -var "app_name=[\\"app-1\\",\\"app-2\\"]" -input=false'
Again, what I need is for this to work with the -var "app_name=[${params[${please-work}]}]" without throwing that error.
If needed, here is the setup for the JSON that the plugin is using:
Additionally, when I run echo "${params['please-work']}" in the initial build step, I can see the values are being set the way I need them: they come back as "app-1", "app-2".
Again, everything but that one bit is working. I've tried various ways to escape the needed strings to get this to work, and I need insight on a path forward. Any help would be greatly appreciated.
You are passing the script argument in your sh step method as a literal (single-quoted) string, and therefore the Groovy pipeline interpreter will not interpolate the params pipeline object within it. You are also passing the variable value for app_name with [] syntax (an attempted list constructor?), which is not syntactically valid for shell, Terraform, or JSON, but is for Jenkins Pipeline and Groovy, with undesired behavior (it is unclear what is desired here). Finally, please-work is a literal string and not a Jenkins Pipeline or Groovy variable, and since params is technically an object and not a Map, you must use the . syntax and not the [] syntax for accessors. You must update with:
sh(label: 'Execute Terraform Plan', script: "terraform plan -var-file='terraform-dev.tfvars' -var 'app_name=${params.please-work}' -input=false")
If another issue arises after fixing all of this, then it would be recommended to convert the plugin usage to the pipeline with a parameters directive, and probably also to remove unusual characters (e.g. -) from the parameter name.
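As a quick illustration of the interpolation point (pleaseWork is a hypothetical parameter name without the dash):

// single quotes: Groovy passes the text literally, the shell sees ${params.pleaseWork} and fails with "bad substitution"
sh 'echo ${params.pleaseWork}'

// double quotes: Groovy interpolates the parameter before the shell runs
sh "echo ${params.pleaseWork}"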
Thanks for helping me think through this, Matt. I was able to resolve the issue with the following shell script in the declarative pipeline:
sh "terraform plan -var-file='terraform-dev.tfvars' -var 'app_name=[${params['please-work']}]' -input=false"
This is working now.
I have written a Terraform configuration with a variable definition like:
variable "GOOGLE_CLOUD_REGION" {
type = string
}
When I run terraform plan I am asked to fill in this variable even though this variable is set within my environment.
Is there a way to tell terraform to work with current env vars? Or do I have to export them and pass them somehow manually one-by-one?
You can define the environment variable TF_VAR_GOOGLE_CLOUD_REGION to set that variable.
If you are using bash, it might look like this:
export TF_VAR_GOOGLE_CLOUD_REGION="$GOOGLE_CLOUD_REGION"
terraform apply ...
From Environment Variables under Configuration Language: Input Variables.
As a fallback for the other ways of defining variables, Terraform searches the environment of its own process for environment variables named TF_VAR_ followed by the name of a declared variable.
This can be useful when running Terraform in automation, or when running a sequence of Terraform commands in succession with the same variables. For example, at a bash prompt on a Unix system:
$ export TF_VAR_image_id=ami-abc123
$ terraform plan
...
You can create a file that ends with .tfvars or .tfvars.json and then specify that file when you run a plan or apply:
terraform apply -var-file="example.tfvars"
If you name the file terraform.tfvars or terraform.tfvars.json, or give it a name ending in .auto.tfvars or .auto.tfvars.json, then Terraform automatically loads the variable definition file and you don't have to specify it manually when you run a plan.
An example of what the terraform.tfvars file will look like:
first_env_var = "environment_variable_one"
second_env_var = "environment_variable_two"
An example of what the terraform.tfvars.json file will look like:
{
  "image_id": "ami-abc123",
  "availability_zone_names": ["us-west-1a", "us-west-1c"]
}
I would approach this by creating a variables.tf file within the project directory. Within the required variable block you can specify a default:
variable "GOOGLE_CLOUD_REGION" {
type = string
default = "us-west1"
}
This will then be used as the default value during each run, and you will not be prompted.
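However the value is supplied, the declared variable is then referenced elsewhere in the configuration in the usual way, for example (the provider block here is only an illustration):

provider "google" {
  region = var.GOOGLE_CLOUD_REGION
}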
We used to be able to check if a parameter is available via:
binding.variables.containsKey()
or
getBinding().hasVariable()
But that no longer works, at least as of Jenkins v2.39. (These functions work for variables set within the Groovy script, but not for the parameters from 'Build with Parameters'.)
Instead of using binding.variables.containsKey() to check, you should use:
params.containsKey()
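A minimal usage sketch inside a pipeline script (the parameter name is illustrative):

if (params.containsKey('ENVIRONMENT')) {
    echo "ENVIRONMENT was supplied: ${params.ENVIRONMENT}"
} else {
    echo 'ENVIRONMENT is not defined for this build'
}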
I would like to know the difference between a global variable and node.run_state, and the recommended approach to use.
test.rb
dbpassword = ''

ruby_block "load_databag_secret" do
  block do
    secret_key = Chef::EncryptedDataBagItem.load_secret("/home/test/db_key")
    db_keys = Chef::EncryptedDataBagItem.load("mydatabag", "mydatabagitem", secret_key)
    dbpassword = db_keys['DB_PASSWORD']
    node.run_state['password'] = db_keys['DB_PASSWORD']
  end
end
execute "Enable on hosts" do
command lazy { "echo #{node.run_state['password']} > /home/app/db.txt" }
end
template "/config/properties" do
source "properties.erb"
variables(lazy {
:db_password => { node.run_state['password'] },
})
Alternatively, I could use node.run_state['password'] in place of the global variable throughout this .rb file.
Now, the execute command worked fine: I am able to see the password in the echoed file db.txt. However, when I used lazy in the template variables, it output an empty value for db_password in the template.
So, a few issues. First, what you have there isn't a global variable, it's a local variable. Globals in Ruby start with $. Second, you can't assign to a local variable from an enclosing scope like that in Ruby (or, indeed, in most languages). That assignment just creates a second dbpassword local variable scoped to the block. You could, however, use a mutation rather than a variable assignment (e.g. dbpassword << whatever). Third, you can't actually use lazy deeply inside the variables hash like that, it has to be at the top level. Fourth, you can straight up side-step all of this if you're only using the value once, as in that example:
template "/config/properties" do
source "properties.erb"
variables lazy {
secret_key = Chef::EncryptedDataBagItem.load_secret("/home/test/db_key")
db_keys = Chef::EncryptedDataBagItem.load("mydatabag", "mydatabagitem", secret_key)
{db_password: db_keys['DB_PASSWORD']}
}
end
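For reference, if the value is needed in more than one resource, here is a sketch of the run_state approach from the question with lazy moved to the top level (paths and data bag names as in the question):

ruby_block "load_databag_secret" do
  block do
    secret_key = Chef::EncryptedDataBagItem.load_secret("/home/test/db_key")
    db_keys = Chef::EncryptedDataBagItem.load("mydatabag", "mydatabagitem", secret_key)
    # stash the value in run_state so later resources can read it at converge time
    node.run_state['password'] = db_keys['DB_PASSWORD']
  end
end

template "/config/properties" do
  source "properties.erb"
  # lazy at the top level of variables, not nested inside the hash
  variables lazy { { :db_password => node.run_state['password'] } }
end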
Just for completeness, in case others find this via Google: with real global variables the biggest difference is unit testing. The run state is tied to the converge, so individual unit tests won't see each other's values, which is always nice (though of course you could work around this in your code).