While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a Sinatra application, riding Unicorn and Nginx. I know Nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the Unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. As far as I'm concerned, there's no reason not to keep app-specific configuration files together with the rest of the app.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http stuff does or doesn't spawn a login shell. I did some homework, and asked Google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell-environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can also function as a control script (if prodEnv use DB=ProdDB, etc.), there is one more piece that simplifies this problem: Bash and ksh both support a feature called sourcing files. This is an operation the shell provides to open a file and execute what is in it, just as if it were inlined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible to child processes:
export dbServer webServer
An alternative combined assignment-and-export form is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how different your environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
    prodServerName )
        envFile=/path/2/prod/envFile ;;
    QAServerName )
        envFile=/path/2/qa/envFile ;;
    devServerName )
        envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
#NOW call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data-processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. Figaro works both with and without Heroku; it uses a YAML file (kept in config/ and git-ignored) to keep track of variables. Another option is dotenv, which reads variables from a .env file. There's also an article that covers all of these options.
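For illustration, a dotenv-style .env file is just a list of key=value pairs (the names here are made up):
DB_SERVER=DevDBServer
SECRET_TOKEN=changeme
The gem loads these into ENV at startup (via Dotenv.load or the Rails integration), so the file can stay git-ignored while the code reads plain environment variables.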
To spawn a login shell you need to invoke sudo like this:
sudo -i -u <user> <command>
You can also use -E to preserve the environment. This allows some variables to be passed from your current environment to the command invoked with sudo (subject to the sudoers policy).
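For example, something like this would pass MYVAR through to a command run as another user, assuming the sudoers policy allows keeping the environment (the user name is made up):
export MYVAR=hello
sudo -E -u appuser printenv MYVAR   # prints "hello" if the policy permits it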
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file called variables in a directory above the application root. In this file I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS="source /path/to/variables" in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD", where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
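A rough sketch of that arrangement, with made-up paths, variable names, and user (the real init.d script has much more around it):
# /path/to/variables -- sourced at startup (hypothetical contents)
export RACK_ENV=production
export SECRET_TOKEN=changeme
# inside /etc/init.d/unicorn (sketch)
USER=deploy
GET_VARS="source /path/to/variables"
CMD="bundle exec unicorn -c /path/to/unicorn.rb -D"
su - $USER -c "$GET_VARS && $CMD"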
I'm working on a cluster and using custom toolkits (more specifically, the SRA Toolkit). In order to use it, I first had to download it (and unpack it) to a specific folder in my directory.
Then I had to modify .bashrc to include the following segment:
# User specific aliases and functions
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Now I can use stuff from the SRA Toolkit on the bash command line, e.g.
prefetch SR111111
My question is: can I use those tools without modifying my .bashrc?
The reason I want to do that is that I wrote a .sh script that takes a long time to run, and my cluster uses the Sun Grid Engine job management system. I submitted my script to it, only to see the process fail because an SRA Toolkit command I used was unrecognized.
EDIT (1):
I changed the location of my prefetch command, and now it is:
/MYNAME/APPS/SRA_TOOLS/bin
which is different from how it is in .bashrc:
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
I then ran what @Darkman suggested (put if/then/else/fi around it, with the export under else; my reconstruction is below). The output is that it didn't find the SRA tools (because the path in .bashrc is different), but it found them under else, and the script is running normally on the job management system. Weird, but it works.
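For reference, my reconstruction of that guard looks like this (the exact test @Darkman suggested may differ):
if command -v prefetch >/dev/null 2>&1; then
    echo "prefetch already on PATH"
else
    export PATH="$PATH:/MYNAME/APPS/SRA_TOOLS/bin"
fi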
Thanks everybody.
I need to access the folder that is created dynamically during each bot integration. On one of the runs it was something like this:
/Library/Developer/XcodeServer/Integrations/Caches/a3c682dd0c4d569a3bc84e58eab88a48/DerivedData/Build/Products/Debug-iphonesimulator/my.app
I would like to get to this folder in a post trigger; how do I go about it? Based on the WWDC talk, it seems like some environment variables like XCS_INTEGRATION_RESULT and XCS_ERROR_COUNT etc. are being used. I can also see something like PROJECT_DIR in the logs.
But I can't access any of these variables from my command line (is it because I am a different user than the bot?).
Also where can I find the list of variables created by this CI system?
I have been echoing set to the bot log; the first line of my bot script is simply
set
When you view the log after the integration is complete it will be in your trigger output.
XCS_ANALYZER_WARNING_CHANGE=0
XCS_ANALYZER_WARNING_COUNT=0
XCS_ARCHIVE=/Library/Developer/XcodeServer/Integrations/Integration-76eb5292bd7eff1bfe4160670c2d4576/Archive.xcarchive
XCS_BOT_ID=4f7c7e65532389e2a741d29758466c18
XCS_BOT_NAME='Reader'
XCS_BOT_TINY_ID=00B0A7D
XCS_ERROR_CHANGE=0
XCS_ERROR_COUNT=0
XCS_INTEGRATION_ID=76eb5292bd7eff1bfe4160670c2d4576
XCS_INTEGRATION_NUMBER=15
XCS_INTEGRATION_RESULT=warnings
XCS_INTEGRATION_TINY_ID=FF39BC2
XCS_OUTPUT_DIR=/Library/Developer/XcodeServer/Integrations/Integration-76eb5292bd7eff1bfe4160670c2d4576
XCS_PRODUCT='Reader.ipa'
XCS_SOURCE_DIR=/Library/Developer/XcodeServer/Integrations/Caches/4f7c7e65532389e2a741d29758466c18/Source
XCS_TESTS_CHANGE=0
XCS_TESTS_COUNT=0
XCS_TEST_FAILURE_CHANGE=0
XCS_TEST_FAILURE_COUNT=0
XCS_WARNING_CHANGE=36
XCS_WARNING_COUNT=36
@Viktor is correct: these variables only exist during their respective sessions. @Pappy gave a great list of those variables.
They can be used in a script like so:
IPA_PATH="${XCS_OUTPUT_DIR}/${XCS_BOT_NAME}.ipa"
echo $IPA_PATH
I'm not familiar with Xcode Server, but generally, when Unix/CI systems export environment variables, they only exist for the current session.
If you want to set an environment variable persistently, you have to set it in an initializer file like ~/.bash_profile or ~/.bashrc so it always gets set/loaded when a shell session starts (e.g. when you log in with Terminal; the exact file depends on what kind of shell you start).
It wouldn't make much sense to export these persistently either, because different integrations would simply overwrite each other's exported environment variables (they all set the same ones).
That's why systems which communicate through environment variables usually don't write the variables into a persistent initializer file, but rather just export them. With export, the variable is accessible from the process which exports it and from the child processes that process starts.
For example in a bash script if you export a variable you can access it from the bash script after the export and from any command/program you start from the bash script, but when the bash script finishes the environment won't be accessible anymore.
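A quick way to see that scoping from a bash script (purely illustrative):
#!/bin/bash
export GREETING=hello
bash -c 'echo "child sees: $GREETING"'   # child processes inherit the exported variable
# once this script exits, GREETING is gone from the invoking shell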
edit
Just to clarify a bit: you should be able to access these environment variables from a post-trigger script run by Xcode Server, but you most likely won't be able to access them from your Terminal/command line.
Also where can I find the list of variables created by this CI system?
You can print all the available environment variables with the env command. In a bash script simply type env in a new line like this:
#!/bin/bash
env
This will print all the available environment variables (not just the ones defined by Xcode Server!) - you can simply pipe it to a file for inspection if you want to, like this:
#!/bin/bash
env > $HOME/envinspect.txt
After this script runs you can simply open the envinspect.txt file in the user's home folder.
In REE, and MRI 1.9+, ruby's garbage collector can be tuned:
http://www.rubyenterpriseedition.com/documentation.html#_garbage_collector_performance_tuning
http://smartic.us/2010/10/27/tune-your-ruby-enterprise-edition-garbage-collection-settings-to-run-tests-faster/
http://blog.evanweaver.com/articles/2009/04/09/ruby-gc-tuning/
But none of these articles say where to put this configuration. I imagine that if it's in the environment, ruby will pick it up when it starts -- however, there's no way to check this as far as I can tell. The settings don't show up in any runtime constants that I can find.
So, where do I put this configuration, and how can I double-check that it's being used?
These settings are environment variables, so you would just need to set them in the parent process of the ruby process itself. Many people recommend creating a simple shell script for this purpose, perhaps calling it /usr/local/bin/ruby-custom:
#!/bin/bash
export RUBY_HEAP_MIN_SLOTS=20000
export RUBY_HEAP_SLOTS_INCREMENT=20000
...etc...
exec "/path/to/ruby" "$#"
The first few lines set whichever custom variables you want, and the last line invokes ruby itself, passing it whatever arguments this script was initially given.
You will next need to mark this script as executable (chmod a+x /usr/local/bin/ruby-custom) and then configure Passenger to use it as the ruby executable, by adding this to your Apache .conf file:
PassengerRuby /usr/local/bin/ruby-custom
I manage a large number of shell (ksh) scripts on server A. Each script begins with the line...
#!/usr/bin/ksh
When I deploy to machine B, C, and D I frequently need to use a different shell such as /bin/ksh, /usr/local/bin/ksh or even /usr/dt/bin/ksh. Assume I am unable to install a new version of ksh and I am unable to create links in any protected directories such as /usr/local/bin. At the moment I have a sed script which modifies all the scripts but I would prefer not to do this. I would like to standardize the header so that it no longer needs to be changed from server to server. I don't mind using something like
#!~/ksh
And creating a link which is on every server, but I have had problems in the past with finding home via "~" when using rsh (or maybe it was ssh) to call a script (on AIX specifically, I think). Another option might be to create a link in my home directory, ensure that it is first in my PATH, and simply use
#!ksh
Looking for a good solution. Thanks.
Update 8/26/11 - Here is the solution I came up with. The installation script looks for the various versions of ksh installed on the server and then copies one of the ksh93 binaries to /tmp/ksh93. The scripts in the framework all refer to #!/tmp/ksh93, and they don't need to be changed from one server to another. The script also sets some variables so that if the file is ever removed from /tmp, it will immediately be put back the next time a scheduled task runs, which is at a minimum every minute.
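The restore logic is roughly this (the candidate paths are the ones I happen to check, and I've left out the ksh88-vs-ksh93 version test for brevity):
#!/bin/sh
# run from the every-minute scheduled task; re-create /tmp/ksh93 if it is missing
if [ ! -x /tmp/ksh93 ]; then
    for k in /usr/bin/ksh /bin/ksh /usr/local/bin/ksh /usr/dt/bin/ksh; do
        if [ -x "$k" ]; then
            cp "$k" /tmp/ksh93
            break
        fi
    done
fi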
As rettops noted, you can use:
#!/usr/bin/env ksh
This will likely work for you. However, there can be some drawbacks. See Wikipedia on Shebang for a fairly thorough discussion.
#! /usr/bin/env ksh
will use whatever ksh is in the user's path.
This keeps happening to me all the time:
1) I write a script (ruby, shell, etc).
2) I run it; it works.
3) I put it in crontab so it runs in a few minutes, so I know it runs from there.
4) It doesn't. There's no error trace, so it's back to step 2 or 3 a thousand times.
When my ruby script fails in crontab, I can't really know why it fails, because when I pipe the output like this:
ruby script.rb >& /path/to/output
I sorta get the output of the script, but I don't get any of the errors from it, and I don't get the errors coming from bash (like if ruby is not found or the file isn't there).
I have no idea which environment variables are set and whether or not that's the problem. It turns out that to run a ruby script from crontab you have to export a ton of environment variables.
Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?
When debugging, I have to reset the timer and go back to waiting. Very time consuming.
How can I test things in crontab better, or avoid these problems altogether?
"Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?"
Yes:
bash -li -c /path/to/script
From the man page:
[vindaloo:pgl]:~/p/test $ man bash | grep -A2 -m1 -- -i
-i If the -i option is present, the shell is interactive.
-l Make bash act as if it had been invoked as a login shell (see
INVOCATION below).
G'day,
One of the basic problems with cron is that it gives you a minimal environment. In fact, you only get four environment variables set, and they are:
SHELL - set to /bin/sh
LOGNAME - set to your userid as found in /etc/passwd
HOME - set to your home dir. as found in /etc/passwd
PATH - set to "/usr/bin:/bin"
That's it.
However, what you can do is take a snapshot of the environment you want and save that to a file.
Now make your cron job run a trivial wrapper script that sources this env file and then executes your Ruby script.
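Concretely: capture the environment once with env > $HOME/cronenv, then make the wrapper something like this (paths are just examples, and this assumes the snapshot contains no values that need shell quoting):
#!/bin/sh
set -a               # auto-export every variable the sourced file sets
. "$HOME/cronenv"
set +a
exec ruby /path/to/script.rb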
BTW, having a wrapper source a common env file is an excellent way to enforce a consistent environment for multiple cronjobs. It also enforces the DRY principle, because it gives you just one place to update things as required, instead of having to search through a bunch of scripts for a specific string if, say, a logging location is changed or a different utility is now being used, e.g. gnutar instead of vanilla tar.
Actually, this technique is used very successfully with The Build Monkey, which is used to implement continuous integration for a major software project common to several major world airlines: 3,500 kSLOC being checked out and built several times a day, and over 8,000 regression tests run once a day.
HTH
'Avahappy,
Run a set command from inside the ruby script, fire it from crontab, and you'll see exactly what's set and what's not.
To find out the environment in which cron runs jobs, add this cron job:
{ echo "\nenv\n" && env|sort ; echo "\nset\n" && set; } | /usr/bin/mailx -s 'my env' you@example.com
Or send the output to a file instead of email.
You could write a wrapper script, called for example rbcron, which looks something like:
#!/bin/bash
RUBY=ruby
export VAR1=foo
export VAR2=bar
export VAR3=baz
$RUBY "$*" 2>&1
This redirects standard error from ruby to standard output. Then you run rbcron in your cron job, and the standard output contains out+err of ruby, plus any "bash" errors coming from rbcron itself. In your cron entry, redirect with > /path/to/output 2>&1 to send both output and error messages to /path/to/output.
If you really want to run it as yourself, you may want to invoke ruby from a shell script that sources your .profile/.bashrc etc. That way it'll pull in your environment.
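Something along these lines (script name and paths made up):
#!/bin/bash
source "$HOME/.profile"   # pull in your normal login environment
exec ruby /path/to/script.rb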
However, the downside is that it's not isolated from your environment, and if you change that, you may find your cron jobs suddenly stop working.