I want to get the changed projects from a git commit and install the packages based on that. Here is my code:
bash 'get_project' do
  code <<-EOH
    filelist=$(git diff-tree --no-commit-id --name-only -r $1)
    for file in ${filelist[@]}; do
      project=$(echo $file | cut -d "/" -f1)
      projectList+=($project)
    done
    for changedProject in $(echo "${projectList[@]}" | sort | uniq); do
      INSTALLABLE_RPM=application-$changedProject
    done
  EOH
  environment 'INSTALLABLE_RPM' => '$INSTALLABLE_RPM'
end
zypper_package ENV['INSTALLABLE_RPM']
My idea is to generate the INSTALLABLE_RPM variable with bash and install the package with zypper. Unfortunately it doesn't work; the zypper_package resource can't resolve the variable.
I ran out of ideas :-(
The environment property of the bash resource is meant to supply existing environment variables to the bash command(s) being executed.
(These variables must already exist for the command to be run with them.)
Specifying environment variables here will not set them in the shell, and from within the code block you will not be able to access Ruby's ENV hash.
There may not be a straightforward way to do this. One option is to write this package (list?) to a file, then read the file contents into a variable and use that with the zypper_package resource.
Example:
Since you use a for loop in the shell code, I believe you end up with a list of packages, so I treat pkg_list as an Array. I've set compile_time to true because the variable assignment below the bash resource runs at compile time, so the file must already exist by then.
bash 'get_project' do
  code <<-EOH
    # your code as-it-is
    for changedProject in $(echo "${projectList[@]}" | sort | uniq); do
      echo "application-$changedProject" >> /tmp/rpm_packages
    done
  EOH
  compile_time true
end
pkg_list = File.read('/tmp/rpm_packages').split
zypper_package pkg_list
# remove the file for good measure :)
file '/tmp/rpm_packages' do
  action :delete
end
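As a side note, the de-duplication in the original shell snippet operates on a single echoed line, so sort | uniq has nothing to work on. Here is a standalone sketch of the same idea in plain bash (assumptions: the commit hash is passed as the first argument, and the hypothetical application-<project> package naming from the question is kept):
#!/usr/bin/env bash
# Sketch: collect the top-level directories touched by a commit and
# turn them into package names, one per line, in /tmp/rpm_packages.
commit=$1

projectList=()
for file in $(git diff-tree --no-commit-id --name-only -r "$commit"); do
  projectList+=("${file%%/*}")   # first path component = project
done

# De-duplicate line by line, then emit one package name per project.
printf '%s\n' "${projectList[@]}" | sort -u | while read -r changedProject; do
  echo "application-${changedProject}" >> /tmp/rpm_packages
done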
Related
I am using GitLab to deploy a project and have some environment variables set up in the GitLab console, which I use in my GitLab deployment script below:
- export S3_BUCKET="$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})"
- aws s3 rm s3://$S3_BUCKET --recursive
My environmental variables are declared like so:
Key: s3_bucket_development
Value: https://dev.my-bucket.com
Key: s3_bucket_production
Value: https://prod.my-bucket.com
The plan is that it grabs the bucket URL from the environment variables depending on which branch is trying to deploy (CI_COMMIT_REF_NAME).
The problem is that the S3_BUCKET variable does not seem to get set properly and I get the following error:
> export S3_BUCKET=$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
> /scripts-30283952-2040310190/step_script: line 150: https://dev.my-bucket.com: No such file or directory
It looks like it picks up the environment variable value fine but does not set it properly. Any ideas why?
It seems like you are trying to get the value of the variables S3_BUCKET_DEVELOPMENT and S3_BUCKET_PRODUCTION based on the value of CI_COMMIT_REF_NAME. You can do this by using parameter indirection:
$ a=b
$ b=c
$ echo "${!a}" # c
In your case you would need a temporary variable as well; something like this might work:
- s3_bucket_variable=S3_BUCKET_${CI_COMMIT_REF_NAME^^}
- s3_bucket=${!s3_bucket_variable}
- aws s3 rm "s3://$s3_bucket" --recursive
You are basically telling bash to execute a command named https://dev.my-bucket.com, which obviously doesn't exist.
Since VAR=$(command) assigns the output of a command, you should probably use echo:
export S3_BUCKET=$(eval echo \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
Simple test:
VAR=HELL; OUTPUT="$(eval echo "\$S${VAR^^}")"; echo $OUTPUT
/bin/bash
It dynamically constructs the variable name SHELL and then successfully prints its value.
My end goal is to parse a Visual Studio Team Services SSH git URL and use it to clone origin and my fork. I'm on Windows and I use Git Bash; I've made a few shell scripts to help me clone. Before, when we used gitweb, it was easy for me to parse: I could run git_clone myproject, git_clone myproject.git, or git_clone git://ourgitserver.ourcompany.com/myproject.git, and the script would clone the above as origin and also add a remote with my user name in the form of ssh://git@ourgitserver.ourcompany.com/myproject.git (and it handled namespaces well too). Well, we started using VSTS and I want to do the same thing.
The git_clone method has changed a few times because of how people tell/IM/email me the link to a git project. I want to be able to just copy and paste it with minimal changes. So far I have a simple git_vsts_clone which requires two parameters: the name of the project and the name of the repository. (In gitweb we would reference the namespace as VSTS's project, and the project would be VSTS's repository.) For the time being I'd like to take either the SSH URL or the two parameters and do all the git things. In brief, this is what I have so far:
function git_vsts_clone {
  local projectName=$1
  local repositoryName=$2
  if MISSING_ARG "usage: git_vsts_clone <project name> <repositoryName>\n projectName must be provided\n repositoryName must be provided" $projectName; then return 1; fi;
  if MISSING_ARG "usage: git_vsts_clone <project name> <repositoryName>\n projectName must be provided\n repositoryName must be provided" $repositoryName; then return 1; fi;
  local gitServer="ssh://mycompany@vs-ssh.visualstudio.com:22/${projectName}/_ssh/${repositoryName}"
  local clonePath="/c/git/${projectName}/${repositoryName}"
  local user_name=${USER:-${USERNAME}}
  if [ ! -d $clonePath ]; then
    INFO "Cloning $gitServer"
    git clone $gitServer $clonePath || { ERROR "ERROR cloning $gitServer"; return 1;}
    pushd $clonePath
    INFO "Updating Submodules (gsui)"
    git submodule update --init
    INFO "adding user fork ${user_name}"
    git remote add $user_name $gitServer.$user_name
    git fetch $user_name
    popd
    INFO "Opening $clonePath in vscode"
  fi
  code $clonePath
}
The last time I tried to parse a URL in bash I struggled with splitting an item into an array, so I decided to try Ruby (since it has an easy split method). I've tried things like:
$ gitServer='ssh://mycompany@vs-ssh.visualstudio.com:22/someProject/_ssh/myRepo'
$ ruby -e "a = '$gitServer'; b=a.split('/'); p b"
["ssh:", "", "mycompany@vs-ssh.visualstudio.com:22", "someProject", "_ssh", "myRepo"]
$ foo=`ruby -e "a = '$gitServer'; b=a.split('/'); p b"`
$ echo "${foo[3]}"
$ echo "${foo[0]}"
["ssh:", "", "mycompany@vs-ssh.visualstudio.com:22", "someProject", "_ssh", "myRepo"]
So I'm stuck. I don't have to use Ruby; it just seemed like an easy solution... now not so much. How can I get the project and repository name out of the URL, in either plain bash or bash calling Ruby?
Here is a way you can get the values into environment variables using Ruby:
Assuming you have a URL environment variable containing the git repo URL, such as the one created by the line below:
export URL='ssh://mycompany@vs-ssh.visualstudio.com:22/someProject/_ssh/myRepo'
You can do the following to put your desired values into other environment variables:
export PROJECT=`ruby -e "puts ENV['URL'].split('/')[3]"`
export REPO_NAME=`ruby -e "puts ENV['URL'].split('/')[5]"`
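If you'd rather avoid Ruby entirely, plain bash can do the same split. A minimal sketch, assuming the ssh://.../<project>/_ssh/<repo> layout shown above:
gitServer='ssh://mycompany@vs-ssh.visualstudio.com:22/someProject/_ssh/myRepo'

# Split the URL on "/" into an array; the indexes match the Ruby output above.
IFS='/' read -r -a parts <<< "$gitServer"

projectName=${parts[3]}      # someProject
repositoryName=${parts[5]}   # myRepo

echo "project:    $projectName"
echo "repository: $repositoryName"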
I've been trying to set a variable in a Puppet manifest that can be used across the puppet run. I have the following variables:
$package = 'hello'
$package_ensure = 'present'
$package_version = '4.4.1'
$package_maj_version = '4'
I'm trying to add another variable:
$ensure
using a bash if statement with the above variables (since this is a source install, I can't use an rpm command to check whether the hello program is installed):
if [ -d "/opt/${package}${package_maj_version}" ]; then echo present; else echo absent; fi
but I haven't been able to find a way to do so. I keep getting errors such as:
Error: Could not parse for environment production: Could not match ${package}${package_maj_version}"
Any help on this would be greatly appreciated.
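One way to feed the result of that shell check into Puppet (a sketch only, not necessarily the only approach) is an executable external fact: a script dropped into Facter's facts.d directory that prints key=value pairs, which the manifest can then read as a fact instead of a bash-derived variable. The fact name hello_ensure below is made up for illustration:
#!/usr/bin/env bash
# Hypothetical external fact script, e.g. /etc/facter/facts.d/hello_ensure.sh
# (or /opt/puppetlabs/facter/facts.d on newer agents). Facter runs it and
# exposes the printed key as a fact the manifest can read.
package="hello"
package_maj_version="4"

if [ -d "/opt/${package}${package_maj_version}" ]; then
  echo "hello_ensure=present"
else
  echo "hello_ensure=absent"
fi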
So everything is in the title.
Is there a way I can pass arguments to :
msf> resource path/to/resource.rc <arg1> <arg2>
Or
msfconsole -r resource.rc <arg1> <arg2>
Those arguments would be passed into the Ruby resource code as follows:
<ruby>
ip = ARGV[1]
port = ARGV[2]
...
...
</ruby>
Unfortunately resource files don't accept arguments, but they do accept Ruby blocks, so you can do it with a bit of trickery. Make a resource file that looks something like this, using Ruby's ENV hash to pull in the environment variable "DSTIP":
metasploit-framework [git:master]$ cat /tmp/test.rc
<ruby>
run_single("set RHOST #{ENV['DSTIP']}")
</ruby>
Now when I run msfconsole, I can set that DSTIP variable, and when MSF starts up it will set RHOST to whatever was in that environment variable:
metasploit-framework [git:master]$ DSTIP=192.168.1.1 ./msfconsole -r /tmp/test.rc -Lq
[*] Processing /tmp/test.rc for ERB directives.
[*] resource (/tmp/test.rc)> Ruby Code (40 bytes)
RHOST => 192.168.1.1
You can do this with as many environment variables as you want. As for running it from within msfconsole: I tried changing the environment variable after msfconsole was already running, with no luck. I'm sure some bearded Linux master knows a way to do it, but I don't, I'm sorry.
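For example, launching with two values at once would look like this (DPORT is a hypothetical second variable; the resource file would read ENV['DPORT'] the same way it reads ENV['DSTIP']):
DSTIP=192.168.1.1 DPORT=445 ./msfconsole -r /tmp/test.rc -Lq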
Side note: you can also use Ruby file reads to pull text in from a file (think configuration file).
Hope this helps!
mubix
I'm trying to work out the best way to set some environment variables with puppet.
I could use exec and just do export VAR=blah; however, that would only last for the current session. I also thought about just appending it to the end of a file such as .bashrc, but then I don't think there is a reliable way to check whether it is already there, so it would end up getting added with every run of Puppet.
I would take a look at this related question.
*.sh scripts in /etc/profile.d are read at user-login time (as the post says, at the same time /etc/profile is sourced)
Variables export-ed in any script placed in /etc/profile.d will therefore be available to your users.
You can then use a file resource to ensure this action is idempotent. For example:
file { "/etc/profile.d/my_test.sh":
content => 'export MYVAR="123"'
}
Or an alternate means to an idempotent result:
Example
if ! grep -q PINTO_HOME /root/.bashrc ; then
  echo "export PINTO_HOME=/opt/local/pinto" >> /root/.bashrc ;
fi
This option permits the environment variable to be set only when the presence of the pinto application makes it warranted, rather than having to compose a user's .bash_profile regardless of what applications may wind up on the box.
If you add it to your bashrc, you can check that it's in the ENV hash by doing
ENV['VAR']
which will return => "blah"
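A quick way to sanity-check that (a sketch; VAR/blah are just the placeholder names from above): append the export to your bashrc, start a new interactive shell so it gets sourced, and read it back through Ruby's ENV hash:
echo 'export VAR=blah' >> ~/.bashrc
# in a fresh shell (so ~/.bashrc has been sourced):
ruby -e "puts ENV['VAR']"    # prints: blah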
If you take a look at GitHub's Boxen, they source a script (/opt/boxen/env.sh) from ~/.profile. This script runs a bunch of stuff, including:
for f in $BOXEN_HOME/env.d/*.sh ; do
  if [ -f $f ] ; then
    source $f
  fi
done
These scripts, in turn, set environment variables for their respective modules.
If you want the variables to affect all users /etc/profile.d is the way to go.
However, if you want them for a specific user, something like .bashrc makes more sense.
In response to "I don't think there is a reliable way to check whether it is already there, so it would end up getting added with every run of Puppet": there is now a file_line resource available from the puppetlabs stdlib module:
"Ensures that a given line is contained within a file. The implementation matches the full line, including whitespace at the beginning and end. If the line is not contained in the given file, Puppet appends the line to the end of the file to ensure the desired state. Multiple resources can be declared to manage multiple lines in the same file."
Example:
file_line { 'sudo_rule':
  path => '/etc/sudoers',
  line => '%sudo ALL=(ALL) ALL',
}

file_line { 'sudo_rule_nopw':
  path => '/etc/sudoers',
  line => '%sudonopw ALL=(ALL) NOPASSWD: ALL',
}