passing parameters from external file (template) to puppet script - ruby

Hi, I'm trying to execute an exe file using a Puppet script. My exe file accepts 3 parameters, like param1, param2 and param3. All I want is to pass these parameters through an external file. How can I do this?
Here is my sample code:
exec { "executing exe file":
command => 'copyfile.exe "DestinatoinPath" "sourcefilename" "destinationfilename" ',
}
All I want is to pass all these values from the external file and use them here.
Can someone help me resolve this?
Here is my attempt:
Here is my directory structure:
puppet\modules\mymodule\manifests\myfile.pp and
puppet\modules\mymodule\templates\params.erb
and my erb file contains the path values, e.g.: d:\test1.txt e:\test1.txt testfilename
$myparams = template("mymodule/params.erb")
exec { "executing exe file":
  command => "copyfile.exe ${myparams}",
}

EDIT:
The root of the problem was trying to apply the module manifest directly, which made the template lookup fail. The solution was to not use a module and to specify the full template path.
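A minimal sketch of that fix (the absolute template path below is illustrative; Puppet's template() function also accepts fully qualified paths):
# stand-alone manifest, applied directly with `puppet apply`
$myparams = template('/etc/puppet/templates/params.erb')
exec { 'executing exe file':
  command => "copyfile.exe ${myparams}",
}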
There are 2 main ways to go about it:
Declare the variables in scope
# acceptable for a throwaway manifest
$path = "DestinationPath"
$source = "sourcefilename"
$destination = "destinationfilename"
exec { "executing exe file":
  command => "copyfile.exe ${path} ${source} ${destination}",
}
Wrap it in a parameterized class/defined type
# parameterized class, included only once
class executing_exe_file ($path, $source, $destination) {
  exec { "executing exe file":
    command => "copyfile.exe ${path} ${source} ${destination}",
  }
}
OR
# defined resource, can be repeated multiple times
define executing_exe_file ($path, $source, $destination) {
  exec { "executing exe file":
    command => "copyfile.exe ${path} ${source} ${destination}",
  }
}
THEN
executing_exe_file { "executing exe file":
  path        => "DestinationPath",
  source      => "sourcefilename",
  destination => "destinationfilename",
}
Also, as a side note, you have to make sure copyfile.exe is fully qualified.
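For example (a sketch only, reusing the variables from above; the install location of copyfile.exe is an assumption):
exec { "executing exe file":
  command => "C:\\tools\\copyfile.exe ${path} ${source} ${destination}",
}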

Related

How to return output of shell script into Jenkinsfile [duplicate]

I have something like this in a Jenkinsfile (Groovy) and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh(
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh(
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
    GIT_COMMIT_EMAIL = sh(
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step, but as far as I know, there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but the sh/bat steps can now return the standard output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
Scripted pipeline:
try {
    // Fails with non-zero exit if dir1 does not exist
    def dir1 = sh(script: 'ls -la dir1', returnStdout: true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
Unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately, hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!).
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
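A minimal sketch of that parsing (it assumes the message always ends with "script returned exit code N"; in a sandboxed pipeline the Matcher calls may require script approval):
try {
    sh(script: 'ls -la dir1', returnStdout: true)
} catch (Exception ex) {
    // pull the exit code out of the exception message
    def m = (ex.message =~ /script returned exit code (\d+)/)
    def exitCode = m.find() ? m.group(1).toInteger() : -1
    echo "Parsed exit code: ${exitCode}"
}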
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1 - but it's then up to you to parse it out of the main output, and you won't get the output if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards), i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script: "ls -la dir1 2>${stderrfile}", returnStdout: true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script: "ls -la dir1 >${outfile} 2>&1", returnStatus: true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is the directory listing from stdout
} else {
    // output is the error message from stderr
}
Caveat: the above code is Unix/Linux-specific; Windows requires completely different shell commands.
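On Windows, a sketch of the same idea using the bat step (the dir command is illustrative):
def outfile = 'stdout.out'
def status = bat(script: "dir dir1 > ${outfile} 2>&1", returnStatus: true)
def output = readFile(outfile).trim()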
Here is a sample case, which I believe will make it clearer:
node('master') {
    stage('stage1') {
        def commit = sh(returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]} "
    }
}
For those who need to use the output in subsequent shell commands rather than in Groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
            echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script {
    sh "your-shell-command-here > command"
    command_var = readFile('command').trim()
    sh "export command_var=${command_var}"
}
Replace the placeholder with the shell command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using the shell, store the values in Groovy, and get the parameters from each line.
Here the delimiter is a comma (,).
Ex: releaseModules.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter, configurable-wf-report), the build number (3rd parameter, 94) and the commit id (4th parameter, 23crb1):
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split('\n').findAll { !it.startsWith(',') }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()
    echo Modname
    echo buildid
    echo tag
}
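With the two sample lines above, the loop should print something like this (derived from the example data; sort -u puts the temppweb line first):
configurable-temppweb-report
394
rvu3crb1
configurable-wf-report
94
23crb1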
If you don't have a single sh command but a block of sh commands, returnStdout won't work.
I had a similar issue, where I applied something which is not a clean way of doing this, but it eventually worked and served the purpose.
Solution:
In the shell block, echo the value and write it to some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
Example:
steps {
    script {
        sh '''
            # using '>' so a new file is created every time, to get the newest value of PATH
            echo $PATH > path.txt
        '''
        path = readFile(file: 'path.txt')
        path = path.trim() // local groovy variable assignment
        // One can assign these values to env and params as below:
        env.PATH = path // if you want to assign it to an env var
        params.PATH = path // if you want to assign it to a params var
    }
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
Output:
2
Note that it is not a simple single quote but a backquote ( ` ).

Access string variable from bash in jenkinsfile groovy script

I'm building several Android apps in a Docker image using Gradle and a bash script. The script is triggered by Jenkins, which runs the Docker image.
In the bash script I gather information about the successes of the builds. I want to pass that information to the Groovy script of the Jenkinsfile.
I tried to create a txt file in the Docker container, but the Groovy script in the Jenkinsfile cannot find that file.
This is the groovy script of my jenkinsfile:
script {
    try {
        sh script: '''
            #!/bin/bash
            ./jenkins.sh
        '''
    } catch (e) {
        currentBuild.result = "FAILURE"
    } finally {
        String buildResults = null
        try {
            def pathToBuildResults = "[...]/buildResults.txt"
            buildResults = readFile "${pathToBuildResults}"
        } catch (e) {
            buildResults = "error receiving build results. Error: " + e.toString()
        }
    }
}
In my jenkins.sh bash script I do the following:
[...]
buildResults+=" $appName: Build Failed!"  # this is done for several apps
echo "$buildResults" > "$pathToBuildResults"  # this works; I checked that the file is created
[...]
The file is created, but Groovy cannot find it. I think the reason is that the Jenkins script does not run inside the Docker container.
How can I access the string buildResults of the bash script in my groovy jenkins script?
One option to avoid the need to read a results file is to modify your jenkins.sh script to print the results to standard output instead of writing them to a file, and then use the sh step to capture that output.
Something like:
script {
    try {
        String buildResults = sh returnStdout: true, script: '''
            #!/bin/bash
            ./jenkins.sh
        '''
        // You now have the output of jenkins.sh inside the buildResults parameter
    } catch (e) {
        currentBuild.result = "FAILURE"
    }
}
This way you avoid having to handle output files and directly get the results you need, which you can then parse and use however you like.
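For instance (purely illustrative), iterating over the captured lines in Groovy:
buildResults.readLines().each { line ->
    echo line
}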

Calling Shell-methods by chain of files in subdirectories

I'm trying to call methods from file to file with a structure like:
/root
  /subDir
    /subSubDir
      inSubSub.sh
    inSub.sh
  inRoot.sh
Files contents:
inRoot.sh:
#!/bin/bash
source ./subDir/inSub.sh
subMethod;
inSub.sh:
#!/bin/bash
source ./subSubDir/inSubSub.sh
subMethod () {
echo "I'm in sub"
}
subSubMethod;
inSubSub.sh:
#!/bin/bash
subSubMethod () {
echo "I'm in subSub"
}
subSubMethod;
Result of running $ ./inRoot.sh
subDir/inSub.sh: line 2: subSubDir/inSubSub.sh: No such file or directory
subDir/inSub.sh: line 6: subSubMethod: command not found
I'm in sub
So, it works for the first call but doesn't work deeper.
btw: using . ./ instead of source ./ returns the same
How to do it right, if it's possible?
You must change your inSub.sh like this:
cat ./subDir/inSub.sh
#!/bin/bash
var="${BASH_SOURCE[0]}"                    # path of this file, even when sourced
source "${var%/*}"/subSubDir/inSubSub.sh   # ${var%/*} strips the file name, leaving the directory
subMethod () {
  echo "I'm in sub"
}
subSubMethod;
This works because a relative source path is resolved against the caller's current working directory, not against the location of the sourced file; BASH_SOURCE gives the sourced file's own path to anchor the lookup.

Are there any existing methods for importing functions from other scripts without sourcing the entire script?

I am working on a large shell program and need a way to import functions from other scripts as required without polluting the global scope with all the internal functions from that script.
UPDATE: However, those imported functions have internal dependencies. So the imported function must be executed in the context of its own script.
I came up with this solution and wonder if there is any existing strategy out there and if not, perhaps this is a really bad idea?
PLEASE TAKE A LOOK AT THE POSTED SOLUTION BEFORE RESPONDING
example usage of my solution:
main.sh
import user get_name
import user set_name
echo "hello $(get_name)"
echo "Enter a new user name:"
while true; do
  read user_input < /dev/tty
  [ -n "$user_input" ] && break  # assumed intent: repeat until something was entered
done
set_name "$user_input"
user.sh
import state
set_name () {
  state save "user_name" "$1"
}
get_name () {
  state get_value "user_name"
}
As one approach, you could put a comment in the script to indicate where you want to stop sourcing:
$ cat script
fn() { echo "You are running fn"; }
#STOP HERE
export var="Unwanted name space pollution"
And then, if you are using bash, source it like this:
source <(sed '/#STOP HERE/q' script)
<(...) is process substitution, and our process, sed '/#STOP HERE/q' script, just extracts the lines from script until the stop line is reached.
Adding more precise control
We can select particular sections from a file if we add both start and stop flags:
$ cat script
export var1="Unwanted name space pollution"
#START
fn1() { echo "You are running fn1"; }
#STOP
export var2="More unwanted name space pollution"
#START
fn2() { echo "You are running fn2"; }
#STOP
export var3="More unwanted name space pollution"
And then source the file like this:
source <(sed -n '/#START/,/#STOP/p' script)
Create a standalone shell script that does this:
it takes 2 arguments, the file name and the function name;
it sources the input file first;
it then uses declare -f <function name> to print that function's definition.
In your code you can then include functions like this:
eval "$(./importfunctions.sh filename functionname)"
What is happening here:
Step 1 basically reads the file and sources it in a new shell environment, then echoes the function declaration.
Step 2 evals that declaration into our main code.
So the final result is as if we wrote just that function in our main script.
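A minimal sketch of such a helper, under the assumptions above (the name importfunctions.sh and its interface are this answer's own):
#!/bin/bash
# importfunctions.sh -- print the declaration of one function from a file
# usage: importfunctions.sh <script-file> <function-name>
source "$1"      # source the target file in this separate shell
declare -f "$2"  # print only the requested function's declaration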
When the functions in the script are indented up to the closing } and all start with the keyword function, you can include specific functions without changing the original files:
largeshell.sh
#!/bin/bash
function demo1 {
  echo "d1"
}
function demo2 {
  echo "d2"
}
function demo3 {
  echo "d3"
}
function demo4 {
  echo "d4"
}
echo "Main code of largeshell... "
demo2
Now source demo1() and demo3() while leaving out demo4():
source <(sed -n '/^function demo1 /,/^}/p' largeshell.sh)
source <(sed -n '/^function demo3 /,/^}/p' largeshell.sh)
demo1
demo4
Or source all functions in a loop:
for f in demo1 demo3; do
  echo sourcing $f
  source <(sed -n '/^function '$f' /,/^}/p' largeshell.sh)
done
demo1
demo4
You can make it fancier when you source a special script that will:
grep all strings starting with largeshell., like largeshell.demo1
generate functions like largeshell.demo1 that will call demo1
and source all functions that are called (a sketch of such a generator follows after the example below).
Your new script will look like
source function_includer.sh
largeshell.demo1
largeshell.demo4
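A rough sketch of what that special script could do (entirely illustrative; it assumes bash, the function-style layout shown above, and that it is sourced from the calling script so $0 names that script):
#!/bin/bash
# function_includer.sh -- hypothetical generator sketch
for fn in $(grep -o 'largeshell\.[A-Za-z0-9_]*' "$0" | cut -d. -f2 | sort -u); do
  # source just that one function from largeshell.sh
  source <(sed -n '/^function '"$fn"' /,/^}/p' largeshell.sh)
  # define the namespaced wrapper that forwards to the real function
  eval "largeshell.$fn() { $fn \"\$@\"; }"
done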
EDIT:
You might want to reconsider your requirements.
The above solution is not only slow, but it will also make life hard for the people who maintain largeshell.sh. As soon as they refactor their code or replace it with something in another language, they have to refactor, test and deploy your code as well.
A better path is extracting the functions from largeshell.sh into some smaller files ("modules") and putting them in a shared directory (shlib?).
With names such as sqlutil.sh, datetime.sh, formatting.sh, mailstuff.sh and comm.sh you can pick the include files you need (and largeshell.sh will include them all).
It's been a while and it would appear that my original solution is the best one out there. Thanks for the feedback.

Set Environment Variables with Puppet

I am using Vagrant with Puppet to set up virtual machines for development environments. I would like to simply set a few environment variables in the .pp file. I am using VirtualBox and a Vagrant base box for 64-bit Ubuntu.
I have this currently.
$bar = 'bar'
class foobar {
  exec { 'foobar':
    command => "export Foo=${bar}",
  }
}
but when provisioning I get an error: Could not find command 'export'.
This seems like it should be simple enough; am I missing some sort of require or path for the exec type? I noticed in the documentation there is an environment option for setting up environment variables; should I be using that?
If you only need the variables available in the Puppet run, what's wrong with:
Exec { environment => [ "foo=$bar" ] }
?
The simplest way to accomplish this is to put your env vars in /etc/environment; this ensures they are available to everything (or pretty much everything).
Something like this:
class example ($somevar) {
  file { '/etc/environment':
    content => inline_template("SOMEVAR=${somevar}"),
  }
}
The reason for having the class parameterised is so you can target it from Hiera with automatic parameter lookup (http://docs.puppetlabs.com/hiera/1/puppet.html#automatic-parameter-lookup) ... if you're sticking something in /etc/environment, it's usually best if you actually make it environment specific.
Note: I've only tested this on Ubuntu.
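With automatic parameter lookup, a Hiera entry like the following (the value is illustrative) would feed $somevar:
# hieradata, YAML
example::somevar: 'production'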
The way I got around it is to also use /etc/profile.d:
$bar = 'bar'
file { '/etc/profile.d/my_test.sh':
  content => "export Foo=${bar}",
  mode    => '0755',
}
This ensures that every time you log in (e.g. via ssh), the variable $Foo gets exported to your environment. After you apply through Puppet and log in (e.g. ssh localhost), echo $Foo would return bar.
You can set an environment variable by defining it on a line in /etc/environment, and you can ensure a line inside a file using file_line (from the puppetlabs-stdlib module). Combine these two into the following solution:
file_line { "foo_env_var":
  ensure => present,
  line   => "Foo=${bar}",
  path   => "/etc/environment",
}
You could try the following, which sets the environment variable for this exec:
class foobar {
  exec { 'foobar':
    command => "/bin/bash -c \"export Foo=${bar}\"",
  }
}
Something like this would work while preserving existing contents of the /etc/environment file:
/code/environments/{environment}/manifests/environment/variable.pp:
define profile::environment::variable (
  $variable_name,
  $value,
  $ensure = present,
) {
  file_line { $variable_name:
    path   => '/etc/environment',
    ensure => $ensure,
    line   => "${variable_name}=${value}",
    match  => "${variable_name}=",
  }
}
Usage (in the body of a node manifest):
profile::environment::variable { 'JAVA_HOME':
  variable_name => 'JAVA_HOME',
  value         => '/usr/lib/jvm/java-1.8.0',
}
I know this is an old question, but I was able to set the PS1 prompt value and add it to my .bashrc file like this:
$PS1 = '\[\e[0;31m\]\u\[\e[m\] \[\e[1;34m\]\w\[\e[m\] \$ '
and within a class:
exec {"vagrant-prompt":
unless => "grep -F 'export PS1=\"${PS1}\"' ${HOME_DIR}/.bashrc",
command => "echo 'export PS1=\"${PS1}\"' >> ${HOME_DIR}/.bashrc",
user => "${APP_USER}",
}
The -F makes grep interpret it as a fixed string. Otherwise grep won't find the line and the command will keep appending to the .bashrc file.
Another variation. This has the advantage that stdlib isn't required (as is with file_line solutions), and the existing content of /etc/environment is preserved:
exec { 'echo foo=bar>>/etc/environment':
  onlyif => 'test -f /etc/environment',
  unless => 'grep "foo=bar" /etc/environment',
  path   => '/usr/bin',
}
Check out the documentation https://puppet.com/docs/puppet/5.5/types/exec.html
class envcheck {
  file { '/tmp/test':
    ensure => file,
  }
  exec { 'foobar':
    command     => 'echo $bar >> /tmp/test',
    environment => ['bar=foo'],
    path        => ['/bin/'],
  }
}
Creating an empty file, because the echo happens in the shell Puppet runs the command in, not in the one we're looking at.
Setting an environment variable bar equal to foo.
Setting the path for the echo binary; this isn't normally necessary for system commands, but it's useful to know about.
After a run, /tmp/test should contain foo.
