So I have a gem that has two executables, say run and run_nohup. I have created an executable file where I add all the environment stuff required to execute run, and I have added this file to the PATH.
Example:
env variable1=value variable2=value /opt/my_gem/bin/run "$@"
Now my question is, is there another way to do the same for run_nohup without duplicating this work? I ask because I am installing all of this with Chef, and it would require me to create more templates, basically duplicating the old template except for the last part where I call run_nohup.
$0 is the name used to invoke the current program; thus, you can look at it to determine how you were called, or manipulate it (in the below case, stripping the directory name and using only the filename):
#!/bin/sh
exec env variable1=value variable2=value /opt/my_gem/bin/"${0##*/}" "$@"
You can take this single executable, save it in two files named run and run_nohup (which can be hardlinked together, if you like), and it'll call the appropriate tool from /opt/my_gem/bin for the name it's invoked with.
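For example (the install location here is illustrative), you could save the wrapper once and hard-link the second name to it:
chmod +x /usr/local/bin/run
ln /usr/local/bin/run /usr/local/bin/run_nohup   # both names are the same file
Invoked as run it execs /opt/my_gem/bin/run; invoked as run_nohup it execs /opt/my_gem/bin/run_nohup.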
Aside: It would be slightly more efficient (save a few microseconds) to have the shell export the environment updates rather than calling through env:
#!/bin/sh
variable1=value variable2=value exec /opt/my_gem/bin/"${0##*/}" "$@"
Related
I'm working on a cluster and using custom toolkits (more specifically the SRA Toolkit). In order to use it, I first had to download it (and unpack it) to a specific folder in my directory.
Then I had to modify .bashrc to include the following segment:
# User specific aliases and functions
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Now I can use the SRA Toolkit commands from the bash command line, e.g.
prefetch SR111111
My question is, can I use those tools without modifying my .bashrc?
The reason I want to do that is that I wrote a .sh script that takes a long time to run, and my cluster uses the Sun Grid Engine job management system. I submitted my script to it, only to see the process fail, because an SRA Toolkit command I used was unrecognized.
EDIT (1):
I modified the location where my prefetch command is, and now it looks like:
/MYNAME/APPS/SRA_TOOLS/bin
which is different from how it is in .bashrc:
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
And I ran what @Darkman suggested (wrap the check in an IF/THEN/ELSE/FI and put the export under ELSE), as sketched below. The output is that it didn't find the SRA Toolkit on the existing PATH (because the path in .bashrc is different), but it found it under ELSE and the script is running normally. Weird. It works on my job management system.
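For reference, a minimal sketch of the kind of check that suggestion describes, inside the job script (simplified; the toolkit path is the one above):
if command -v prefetch >/dev/null 2>&1; then
    echo "SRA Toolkit already on PATH"
else
    export PATH="$PATH:/MYNAME/APPS/SRA_TOOLS/bin"
fi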
Thanks everybody.
So I have a bash script right now which automates the git process for me. I have made the shell script accessible from everywhere. I want to invoke the script with a command like "ctdir" instead of typing "initialize_directory.sh" every time. Is there a way to make this possible?
There are (at least) three ways to do this:
First, if it's on your path, you can simply rename it to ctdir.
Second, you can create an alias for it in your startup scripts (like $HOME/.bashrc):
alias ctdir='initialize_directory.sh'
Third, you can create a function to do the work (again, defining it in your startup scripts):
ctdir() {
    initialize_directory.sh "$@"    # forward any arguments to the script
}
Just remember to make sure you load up your modified startup scripts after making the changes. New shells should pick the changes up but you may need to re-source it manually from an existing shell (or just exit and restart).
Agreed with @paxdiablo, the best way is to create an alias.
The following steps will work in Linux:
Step 1: Name the alias.
Type the following at the command line:
alias ctdir='initialize_directory.sh'
Step 2: Edit the .bashrc file.
This file is usually present in your home directory.
Add the alias from step 1 at the end of the .bashrc file to make it permanent and reusable in every session:
vi ~/.bashrc
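After saving, reload the file so the alias takes effect in your current session:
source ~/.bashrc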
In REE, and MRI 1.9+, ruby's garbage collector can be tuned:
http://www.rubyenterpriseedition.com/documentation.html#_garbage_collector_performance_tuning
http://smartic.us/2010/10/27/tune-your-ruby-enterprise-edition-garbage-collection-settings-to-run-tests-faster/
http://blog.evanweaver.com/articles/2009/04/09/ruby-gc-tuning/
But none of these articles say where to put this configuration. I imagine that if it's in the environment, ruby will pick it up when it starts -- however, there's no way to check this as far as I can tell. The settings don't show up in any runtime constants that I can find.
So, where do I put this configuration, and how can I double-check that it's being used?
These settings are environment variables, so you would just need to set them in the parent process of the ruby process itself. Many people recommend creating a simple shell script for this purpose, perhaps calling it /usr/local/bin/ruby-custom:
#!/bin/bash
export RUBY_HEAP_MIN_SLOTS=20000
export RUBY_HEAP_SLOTS_INCREMENT=20000
...etc...
exec "/path/to/ruby" "$#"
The first few lines set whichever custom variables you want, and the last line invokes ruby itself, passing it whatever arguments this script was initially given.
You will next need to mark this script as executable (chmod a+x /usr/local/bin/ruby-custom) and then configure Passenger to use it as the ruby executable, by adding this to your Apache .conf file:
PassengerRuby /usr/local/bin/ruby-custom
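One way to double-check that the settings are actually reaching ruby (assuming the wrapper path above) is to ask the wrapper what it sees:
/usr/local/bin/ruby-custom -e 'puts ENV["RUBY_HEAP_MIN_SLOTS"]'
If that prints your value, any ruby process started through the wrapper, including the ones Passenger spawns, gets the same environment.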
While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a sinatra application, riding Unicorn and Nginx. I know nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http spawns a login shell or not. I did some homework, and asked google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can also function as a control script (if prodEnv use DB=ProdDB, etc.), there is one more piece that simplifies this problem. Bash/ksh both support a feature called sourcing files. This is an operation that the shell provides to open a file and execute what is in it, just as if it were in-lined in the main script. Like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible to the programs the wrapper launches:
export dbServer webServer
An alternate combined assignment/export form is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how non-identical your different environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
    prodServerName )
        envFile=/path/2/prod/envFile ;;
    QAServerName )
        envFile=/path/2/qa/envFile ;;
    devServerName )
        envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
#NOW call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. figaro works both with and without Heroku; it uses a YAML file (under config/ and git-ignored) to keep track of variables. Another option is dotenv, which reads variables from a .env file. There is also an article covering all of these options.
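For example, a .env file is just KEY=value lines kept out of version control (the names here are made up):
# .env -- git-ignored; dotenv reads these once it is loaded by the app
DATABASE_URL=postgres://localhost/myapp
SESSION_SECRET=change-me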
To spawn a login shell you need to invoke sudo like this:
sudo -i -u <user> <command>
Also, you may use -E to preserve the environment. This will allow some variables to be passed from your current environment to the command invoked with sudo.
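For example (the user and command are illustrative):
sudo -i -u http /opt/my_app/bin/start    # run with http's login environment
sudo -E -u http /opt/my_app/bin/start    # keep the calling environment, if the sudoers policy allows it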
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file in a directory above the application root called variables. In this script I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS=source /path/to/variables in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD" where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
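A rough sketch of how those pieces fit together (paths and values are illustrative):
# /path/to/variables -- sourced before Unicorn starts
export RACK_ENV=production
export SOME_API_KEY=secret

# excerpt from /etc/init.d/unicorn
GET_VARS="source /path/to/variables"
su - $USER -c "$GET_VARS && $CMD"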
I'm trying to change the directory of the shell I start the ruby script from, via the ruby script itself...
My goal is to build a little program to manage favorite directories and easily switch among them.
Here's what I did
#!/usr/bin/ruby
Dir.chdir("/Users/luca/mydir")
and then tried executing it in many ways...
my_script (this doesn't change the directory)
. my_script (this is interpreted as bash)
. $(ruby my_script) (this is interpreted as bash too!)
any idea?
Cannot be done. Child processes cannot modify their parent's environment (including the current working directory of the parent). The . (also known as source) trick only works with shell scripts because you are telling the shell to run that code in the current process (rather than spawning a subprocess to run it). Just for fun, try putting exit in a file you run this way (spoiler: you will get logged out).
If you wish to have the illusion of this working you need to create shell functions that call your Ruby script and have the shell function do the actual cd. Since the functions run in the current process, they can change the directory. For instance, given this ruby script (named temp.rb):
#!/usr/bin/ruby
print "/tmp";
You could write this BASH function (in, say, your ~/.profile):
function gotmp {
    cd "$(~/bin/temp.rb)"
}
And then you could say gotmp at the commandline and have the directory be changed.
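For example, from an interactive shell:
$ gotmp
$ pwd
/tmp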
#!/usr/bin/env ruby
`../your_script`
Like this?
Or start your script in the directory where you want it to do its work.
Maybe I don't get your question. Provide some more details.