I need to use a variable as part of a string that will be used to address another variable in a GString.
In essence, what I'd like to do is: ${${it}_checkout}
The whole code line would be:
def checkouts = repos.collect{"${it} = ${${it}_checkout} "}
With repos being a list of repositories to checkout.
Each repo has a property called <repo>_checkout.
For instance, if I have two repos, called foo and bar, I'll have two variables called foo_checkout and bar_checkout, containing the branches to check out.
I'm trying to construct the following string: "foo=$foo_checkout bar=$bar_checkout".
That will be translated to "foo=master bar=dev"
Is there a way to do this?
Yeah, just do:
def checkouts = repos.collect{ "$it = ${it}_checkout" }
Or, depending on how you declare your properties, you can do:
root_checkout = 'woo'
repo_checkout = 'yay'
['root', 'repo'].collect { r -> "$r = ${getProperty(r + '_checkout')}" }
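For completeness, a minimal sketch of that second approach applied to the foo/bar example from the question (the *_checkout property names come from the question; everything else is illustrative):
// Assumed setup: foo_checkout and bar_checkout are script-level (binding) properties.
foo_checkout = 'master'
bar_checkout = 'dev'

def repos = ['foo', 'bar']
// getProperty() resolves the dynamically built name against the script's binding.
def checkouts = repos.collect { r -> "$r=${getProperty(r + '_checkout')}" }.join(' ')

assert checkouts == 'foo=master bar=dev'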
I want to read a list of objects from a yaml file via Terraform code and map it to a local variable. I also need to look up an object by its key and get its values from the yaml file. Can anyone suggest a suitable solution?
My yaml file looks like the one below. Here use will be the primary key:
list_details:
  some_list:
    - use: a
      path: somepath
      description: "some description"
    - use: b
      path: somepath2
      description: "some description 2"
I have loaded the yaml file into a local value in my Terraform code like this:
locals {
  list = yamldecode(file("${path.module}/mylist.yaml"))
}
Now the problem is: how can I get one object with its values by passing the "use" value to the list?
"
Assuming that the use values are unique, you can re-organize your list into a map:
locals {
  list_as_map = { for val in local.list["list_details"]["some_list"] :
                  val["use"] => val["path"] }
}
which gives list_as_map as:
"a" = "somepath"
"b" = "somepath2"
then you can access the path based on the value of use:
path_for_a = local.list_as_map["a"]
Update
If you want to keep the description as well, it's better to do:
list_as_map = { for val in local.list["list_details"]["some_list"] :
  val["use"] => {
    path        = val["path"]
    description = val["description"]
  }
}
then you access the path or description as:
local.list_as_map["a"].path
local.list_as_map["a"].description
I have a .env file that contains the following data
API_URL=${API_URL}
API_KEY=${API_KEY}
API_SECRET=${API_SECRET}
Setting environment variables in Jenkins and passing them to the pipeline is clear. But it is not clear how I replace ${API_URL}, ${API_KEY} and ${API_SECRET} in the .env file with their values from the Jenkins environment variables. Also, how do I loop through all the Jenkins variables?
This basically requires two steps:
Get all environment variables
Replace values of environment variables in the template (.env) file
Let's start with #2, because it dictates which kind of data #1 must produce.
2. Replace variables in a template
We can use Groovy's SimpleTemplateEngine for this task.
def result = new SimpleTemplateEngine().createTemplate( templateStr ).make( dataMap )
Here templateStr is the template string (the content of your .env file) and dataMap must be a Map of string keys and values (the actual values of the environment variables). Getting the template string is trivial (use the Jenkins readFile step); reading the environment variables into a Map is slightly more involved.
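To make this concrete before wiring it into Jenkins, here is a minimal, self-contained sketch of just this step; the keys and values are made up:
import groovy.text.SimpleTemplateEngine

// Template with ${...} placeholders (single-quoted so Groovy itself doesn't interpolate it).
def templateStr = 'API_URL=${API_URL}\nAPI_KEY=${API_KEY}'
// Data map standing in for the real environment variables.
def dataMap = [API_URL: 'http://someurl', API_KEY: '123']

def result = new SimpleTemplateEngine().createTemplate( templateStr ).make( dataMap )
assert result.toString() == 'API_URL=http://someurl\nAPI_KEY=123'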
1. Read environment variables into a Map
I wrote "slightly more involved" because Groovy goodness makes this task quite easy aswell.
#Chris has already shown how to read environment variables into a string. What we need to do is split this string, first into separate lines and then each line into key and value. Fortunately, Groovy provides the member function splitEachLine of the String class, which can do both steps with a single call!
There is a little caveat, because splitEachLine is one of the functions that doesn't behave well in Jenkins pipeline context - it would only return the first line. Moving the critical code into a separate function, annotated with #NonCPS works around this problem.
@NonCPS
Map<String,String> envStrToMap( String envStr ) {
    def envMap = [:]
    envStr.splitEachLine('=') {
        envMap[it[0]] = it[1]
    }
    return envMap
}
Finally
Now we have all ingredients for letting Jenkins cook us a tasty template soup!
Here is a complete pipeline demo. It uses scripted style, but it should be easy to use in declarative style as well. Just replace node with a script block.
import groovy.text.SimpleTemplateEngine

node {
    // TODO: Replace the hardcoded string with:
    // def tmp = readFile file: 'yourfile.env'
    def tmp = '''\
API_URL=${API_URL}
API_KEY=${API_KEY}
API_SECRET=${API_SECRET}'''

    withEnv(['API_URL=http://someurl', 'API_KEY=123', 'API_SECRET=456']) {
        def envMap = getEnvMap()
        echo "envMap:\n$envMap"
        def tmpResolved = new SimpleTemplateEngine().createTemplate( tmp ).make( envMap )
        writeFile file: 'test.env', text: tmpResolved.toString()
        // Just for demo, to let me see the result
        archiveArtifacts artifacts: 'test.env'
    }
}
// Read all environment variables into a map.
// Here, @NonCPS must NOT be used, because we are calling a Jenkins step.
Map<String,String> getEnvMap() {
    def envStr = sh(script: 'env', returnStdout: true)
    return envStrToMap( envStr )
}
// Split a multiline string, where each line consists of key and value separated by '='.
// It is critical to use @NonCPS to make splitEachLine() work!
@NonCPS
Map<String,String> envStrToMap( String envStr ) {
    def envMap = [:]
    envStr.splitEachLine('=') {
        envMap[it[0]] = it[1]
    }
    return envMap
}
The pipeline creates an artifact "test.env" with this content:
API_URL=http://someurl
API_KEY=123
API_SECRET=456
You can access the variables by executing a simple shell command in a scripted pipeline:
def variables = sh(script: 'env|sort', returnStdout: true)
Then, programmatically in Groovy, convert it to a list and iterate over it with an each or for loop, as in the sketch below.
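A rough sketch of that loop (it assumes variables holds the output of the sh step above; depending on your Jenkins version you may need to move the string handling into a @NonCPS helper, as shown in the other answer):
for (line in variables.readLines()) {
    if (!line.contains('=')) continue          // skip blank or malformed lines
    def (key, value) = line.split('=', 2)      // split only on the first '='
    echo "${key} -> ${value}"
}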
As for replacing the variables: if you're not using any solution that can access env variables, you can fall back to simple text operations, such as running sed on that file (see the sketch below).
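For example, a hedged sketch of that sed route in a scripted pipeline might look like this (it assumes a Linux agent with GNU sed, a file named .env, and values that contain no characters special to sed such as | or &); the Groovy string is single-quoted, so the shell rather than Groovy decides which ${...} references get expanded:
sh '''
    # The single-quoted part keeps ${API_URL} literal for sed; the double-quoted part
    # is expanded by the shell to the value from the Jenkins environment.
    sed -i 's|${API_URL}|'"${API_URL}"'|g'       .env
    sed -i 's|${API_KEY}|'"${API_KEY}"'|g'       .env
    sed -i 's|${API_SECRET}|'"${API_SECRET}"'|g' .env
'''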
I'm currently trying to write a small script which parses weekly reports (e-mails) and stores the data I want in variables, so I can handle them further.
The functionality is already working, but it's sort of hacked together at the moment and I'd like to learn how the code should ideally look.
This is my code at the moment - I shortened it a bit and made it more generic:
activated_accounts_rx = Regexp.new(/example/)
canceled_accounts_rx = Regexp.new(/example/)
converted_accounts_rx = Regexp.new(/example/)
File.open("weekly_report.txt") do |f|
input = f.read
activated_accounts = input.scan(activated_accounts_rx).join
canceled_accounts = input.scan(canceled_accounts_rx).join
converted_accounts = input.scan(converted_accounts_rx).join
end
I thought of something like this and I know that it can't work, but I don't know how I can get it to work:
var_names = ["activated_accounts",
"canceled_accounts",
"converted_accounts"]
regex = { "#{var_names[0]}": Regexp.new(/example/),
"#{var_names[1]}": Regexp.new(/example/) }
File.open("weekly_report.txt") do |f|
input = f.read
for name in var_names
name = input.scan(regex[:#{name}]).join
end
end
I would like to end up with variables like this:
activated_accounts = 13
canceled_accounts = 21
converted_accounts = 5
Can someone help me please?
You probably don't need to have a separate array for your variables. Just use them as the keys in the hash. Then you can access the values later from another hash. Better yet, if you don't need the regexes anymore, you can just replace that value with the scanned contents.
regexHash = {
  activated_accounts: Regexp.new(/example/),
  canceled_accounts:  Regexp.new(/example/),
  converted_accounts: Regexp.new(/example/)
}

values = {}
contents = File.open("weekly_report.txt").read

regexHash.each do |var, regex|
  values[var] = contents.scan(regex).join
end
To access your values later just use
values[:var_name] # values[:activated_accounts] for example
If you want to keep the names in a separate array, you can convert them to symbols with to_sym:
regex = { var_names[0].to_sym => Regexp.new(/example/),
          var_names[1].to_sym => Regexp.new(/example/) } # Rocket notation!
And:
for name in var_names
  name = input.scan(regex[name.to_sym]).join
end
Anyway, I prefer Rob Wagner's advice.
I have a string which looks like the following:
string = " <SET-TOPIC>INITIATE</SET-TOPIC>
<SETPROFILE>
<PROFILE-KEY>predicates_live</PROFILE-KEY>
<PROFILE-VALUE>yes</PROFILE-VALUE>
</SETPROFILE>
<think>
<set><name>first_time_initiate</name>yes</set>
</think>
<SETPROFILE>
<PROFILE-KEY>first_time_initiate</PROFILE-KEY>
<PROFILE-VALUE>YES</PROFILE-VALUE>
</SETPROFILE>"
My objective is to be able to read out each top-level tag that is in caps from the parse. I use a case statement to evaluate the top-level key, such as <SETPROFILE> (there can be lots of different values), and then run a method that does different things with the contents of the tag.
What this means is I need to be able to know very easily:
top_level_keys = ['SET-TOPIC', 'SETPROFILE', 'SETPROFILE']
and, when I pass in the key, know the full value:
parsed[0].value = {:PROFILE-KEY => predicates_live, :PROFILE-VALUE => yes}
parsed[0].key = ['SET-TOPIC']
I currently parse the whole string as follows:
doc = Nokogiri::XML::DocumentFragment.parse(string)
parsed = doc.search('*').each_with_object({}) { |n, h|
  h[n.name] = n.text
}
As a result, I only end up with the second occurrence of the tag; the values from the first tag do not show up in the parsed variable.
I have control over what the tags are, if that helps.
But I need to know the contents of both tags as a result of the parse, because I need to apply a method for each instance of the node.
Note: the string also contains just regular text, both before, in between, and after the XML-like tags.
It depends on what you are trying to achieve. The problem is that you are overwriting hash keys with new values. The easiest way to collect all values is to store them in an array:
parsed = doc.search('*').each_with_object({}) do |n, h|
  # h[n.name] = n.text :: removed because it overrides values
  (h[n.name] ||= []) << n.text
end
I am trying...
loadRecipe('existingpackage')

class NewPackage(PackageRecipe):
    name = 'newpackage-test'
    p = existingpackage.version
    print p
but I am getting an error that existingpackage is not defined.
You got it right that loadRecipe needs the name of the package. But to access information from the recipe, you should use the class defined there, not the package name or the recipe filename. (That's also quite natural; recipes can sometimes define more than one class.)
For example, in a firefox plugin, I want the version of firefox, so that the plugin can be installed to the right place.
loadRecipe('firefox')

class FirefoxPackageSearch(PackageRecipe):
    [snip]
    def setup(r):
        [snip]
        r.macros.ff_version = '.'.join(FireFox.version.split('.')[:2])
I load the firefox recipe and use FireFox.version to get what I want.
Since Conary is (almost) just like coding in Python:
p = ExistingPackage.version   # ExistingPackage = the class defined in the loaded recipe
print "Your package's version number: " + p
rhs = p.split("_", 1)
print "Latest changeset of your package: " + rhs[1]