How to set an environment variable using Chef? - ruby

There's a similar question to this, but I can't manage to make it work:
I want to simply set an env variable, then use it:
execute "start zookeeper" do
cwd "/opt/zookeeper-3.4.5/bin"
command "./zkServer.sh start"
environment "JVMFLAGS" => "-Xmx#{heap_jvm} -Xms#{heap_jvm}"
user "root"
action :run
end
I've also tried using bash to "export JVMFLAGS='-blabla'", but the script still runs with the variable unset. Is there something preventing my sh script from seeing the variable?
I could use the sh file like a template and replace the occurrence of JVMFLAGS... But I want to check if there's a better solution..

Have you tried setting the environment variable through Ruby just before the execute block? Chef actually recommends using ENV (see the note on that page).
ENV['JVMFLAGS'] = "-Xmx#{heap_jvm} -Xms#{heap_jvm}"
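Combined with the execute block from the question, that could look like the following sketch (heap_jvm is assumed to be set elsewhere in the recipe):
# Set the variable in the chef-client process itself; child processes
# spawned by execute inherit its environment.
ENV['JVMFLAGS'] = "-Xmx#{heap_jvm} -Xms#{heap_jvm}"

execute "start zookeeper" do
  cwd "/opt/zookeeper-3.4.5/bin"
  command "./zkServer.sh start"
  user "root"
  action :run
end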
Another possibility is to add JVMFLAGS to the command itself.
execute "start zookeeper" do
[...]
command "JVMFLAGS=-Xmx#{heap_jvm} -Xms#{heap_jvm} ./zkServer.sh start"
[...]
end

Related

Jenkins - Passing variable password to external shell

I am using the build step "Execute shell script on remote host" and I'm injecting a password into my project.
Jenkins calls a script.sh, but the script does not print the variable PASS passed by Jenkins.
How can a variable issued by Jenkins be passed to my external script?
PASS=${PASSWORD}
echo PASSWORD=$PASS
sh /root/script.sh
You need to export your variable in order to make it available to subshells:
export PASS=${PASSWORD}
If you don't want other programs you invoke in the same script to see your password, consider this safer way:
PASS=${PASSWORD} /root/script.sh
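To see the difference side by side, here is a minimal sketch; PASSWORD is assumed to be the credential Jenkins injects into the build environment:
#!/bin/bash
# Option 1: export makes PASS visible to every child process of this shell,
# including anything /root/script.sh itself launches later.
export PASS=${PASSWORD}
sh /root/script.sh

# Option 2: PASS is set only in the environment of this one command,
# so other programs invoked later in the build step never see it.
PASS=${PASSWORD} /root/script.sh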

Chef run sh script

I have a problem trying to run a shell script via Chef (with docker-provisioning).
This is how I try to execute my script:
bash 'shell_try' do
  user "root"
  run = "#{some_path_to_script}/my_script.sh some_params"
  code " #{run} > stdout.txt 2> stderr.txt"
end
(Note that this script should run other scripts and processes and write logs.)
There are no errors in the output, but when I log into the machine and run ps aux, the process isn't running.
I guess something is wrong with permissions (or env variables), because when I run the same command manually, it works.
A bash resource just runs the provided script text directly. If you want to run a long-running process, you would generally set up an Upstart or systemd service and use a service resource to start it.
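As a rough sketch of that approach (the my_script service name is hypothetical, and a unit or init script for it is assumed to already exist on the node):
# Let the init system own the long-running process; Chef just
# enables it at boot and starts it now.
service 'my_script' do
  action [:enable, :start]
end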
Finally found a solution (thanks to @coderanger):
1. Install supervisor: download the supervisor cookbook and add
include_recipe 'supervisor::default'
2. Add my service to supervisor:
supervisor_service "name" do
  action :enable
  #action :start
  command '/path/script.sh start'
end
3. Run the supervisor service.
All done!
Please see the Chef documentation for your resource: https://docs.chef.io/resource_bash.html. The bash resource does not support a run attribute. The text of the code attribute is run as a bash script. The default action is to run the script unless told otherwise by the resource.
bash 'shell_try' do
  user "root"
  code " #{run} > stdout.txt 2> stderr.txt"
  action :run
end
The text of the code attribute is written to a temporary file, which is then run using the attributes specified in the resource.
The line run = "#{some_path_to_script}/my_script.sh some_params" by itself only assigns a Ruby local variable; it does not execute anything.
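A minimal sketch of a working version, interpolating the command straight into code (assuming some_path_to_script is defined earlier in the recipe):
bash 'shell_try' do
  user "root"
  # The script text below is written to a temp file and executed with bash.
  code "#{some_path_to_script}/my_script.sh some_params > stdout.txt 2> stderr.txt"
  action :run
end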

How can I instruct Capistrano 3 to load my shell environment variables set on the remote host?

I want to instruct Capistrano to load environment variables that are defined on the remote server. How can I do that?
It seems that when I export my environment variables inside the .bashrc file, they are not taken into account by Capistrano. Capistrano seems to execute /usr/bin/env to create the environment for remote commands, but this does not seem to load the environment variables from .bashrc.
I am also using rvm-capistrano (just in case it might help).
Any clue?
Capistrano actually does load .bashrc. But near the top of the file you will find one of the following lines:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
# If not running interactively, don't do anything
[[ $- != *i* ]] && return
# If not running interactively, don't do anything
case $- in
    *i*) ;;
    *) return;;
esac
If you do any exporting after the above lines, it will not be reached by Capistrano. The solution was simply to move my exports above this line, and Capistrano works how I want.
This solution was also noted at this GitHub issue.
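As a sketch, a .bashrc ordered this way works for non-interactive shells (the exported variable here is just a placeholder):
# ~/.bashrc
# Exports that Capistrano's non-interactive shell must see go first.
export MY_APP_ENV=production   # hypothetical example variable

# If not running interactively, don't do anything
[ -z "$PS1" ] && return

# Interactive-only setup (prompt, aliases, etc.) stays below this line.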
You can pass your current environment variables to a remote execution with ssh by issuing:
env | ssh user@host remote_program
The example below is also taken from here:
on roles(:app), in: :sequence, wait: 5 do
  within "/opt/sites/example.com" do
    # commands in this block execute in the
    # directory: /opt/sites/example.com
    as :deploy do
      # commands in this block execute as the "deploy" user.
      with rails_env: :production do
        # commands in this block execute with the environment
        # variable RAILS_ENV=production
        rake "assets:precompile"
        runner "S3::Sync.notify"
      end
    end
  end
end
It looks like you can use with to set environment variables for your execution. So read your current environment variables and set them using with.
Capistrano doesn't load .bashrc since it's not an interactive shell. As far as I remember it does load .bash_profile, so you will probably have better luck using that.
In Capistrano 3 it's set :default_env, { ... }
Like here:
set :default_env, {
  'env_var1' => 'value1',
  'env_var2' => 'value2'
}

Calling Puppet from bash script

I'm trying to call puppet from a bash script and whilst it works, it causes my script to end prematurely.
#!/bin/bash
...
function runPuppetLocally()
{
    echo "...running Puppet locally"
    exec puppet agent --test
    echo "Puppet complete"
}
runPuppetLocally
I presume Puppet is issuing an exit or something similar which causes my script to end. Is there a means by which I can call it without it terminating my script?
Why do you use exec? Read help exec:
Replace the shell with the given command.
Your script is replaced with puppet. If you do not want it to replace your shell, call it normally, i.e.
puppet agent --test
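With exec dropped, the function from the question becomes:
function runPuppetLocally()
{
    echo "...running Puppet locally"
    puppet agent --test    # runs as a child process, so the script continues
    echo "Puppet complete"
}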

Why does using set -e cause my script to fail when called in crontab?

I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text, but when I try to cron it there are problems. It seems to run (I see an entry in the cron log showing it was kicked off), but nothing happens: it doesn't output anything and doesn't do any of its file operations. It also doesn't appear anywhere in the running processes, so it appears to be exiting immediately.
After some troubleshooting I found that removing "set -e" resolved the issue; it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit?
Thanks for the help,
Ryan
With set -e, the script will stop at the first command which gives a non-zero exit status. This does not necessarily mean that you will see an error message.
Here is an example, using the false command which does nothing but exit with an error status.
Without set -e:
$ cat test.sh
#!/bin/sh
false
echo Hello
$ ./test.sh
Hello
$
But the same script with set -e exits without printing anything:
$ cat test2.sh
#!/bin/sh
set -e
false
echo Hello
$ ./test2.sh
$
Based on your observations, it sounds like your script is failing for some reason (presumably related to the different environment, as Jim Lewis suggested) before it generates any output.
To debug, add set -x to the top of the script (as well as set -e) to show commands as they are executed.
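For example, the top of the script might look like this sketch:
#!/bin/bash
set -e   # exit on the first command that returns non-zero
set -x   # trace each command to stderr, so the cron mail/log shows
         # exactly which command the script died on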
When your script runs under cron, the environment variables and path may be set differently than when the script is run directly by a user. Perhaps that's why it behaves differently?
To test this: create a new script that does nothing but printenv and echo $PATH.
Run this script manually, saving the output, then run it as a cron job, saving that output.
Compare the two environments. I am sure you will find differences...an interactive login shell will have had its environment set up by sourcing a ".login", ".bash_profile", or similar script (depending on the user's shell). This generally will not happen in a cron job, which is usually the reason for a cron job behaving differently from running the same script in a login shell.
To fix this: at the top of the script, either explicitly set the environment variables and PATH to match the interactive environment, or source the user's ".bash_profile", ".login", or other setup script, depending on which shell they're using.
