Use non-built-in bash commands without modifying .bashrc

I'm working on a cluster and using custom toolkits (more specifically, the SRA Toolkit). In order to use it, I first had to download (and unpack) it to a specific folder in my directory.
Then I had to modify .bashrc to include the following segment:
# User specific aliases and functions
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Now I can use the tools from the SRA Toolkit on the bash command line, e.g.
prefetch SR111111
My question is, can I use those tools without modifying my .bashrc?
The reason I want to do this is that I wrote a .sh script that takes a long time to run, and my cluster uses the Sun Grid Engine job management system. I submitted my script to it, only to see the process fail because an SRA Toolkit command I used was unrecognized.
EDIT (1):
I modified the location where my prefetch command is, and now it looks like:
/MYNAME/APPS/SRA_TOOLS/bin
which is different from how it is in .bashrc:
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
And I ran what @Darkman suggested (put IF THEN ELSE FI and, under ELSE, put the export). The output is that it didn't find SRA Tools (because the path in .bashrc is different), but it found them under ELSE and the script is running normally. Weird. It works on my job management system.
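For reference, the check looks roughly like this (the paths are the ones from this edit; adjust them to your install):
if command -v prefetch >/dev/null 2>&1; then
    echo "prefetch already on PATH"
else
    # path from EDIT (1); adjust to wherever the toolkit is unpacked
    export PATH="$PATH:/MYNAME/APPS/SRA_TOOLS/bin"
fi
prefetch SR111111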
Thanks everybody.

Related

How do I give my shell script a simple word to make it run, for example "mkdir"?

So I have a bash script right now which automates the git process for me. I have made the shell script accessible from everywhere. I want to give the script a command like "ctdir" instead of typing in "initialize_directory.sh" every time. Is there a way to make this possible?
There are (at least) three ways to do this:
First, if it's on your path, you can simply rename it to ctdir.
Second, you can create an alias for it in your startup scripts (like $HOME/.bashrc):
alias ctdir='initialize_directory.sh'
Third, you can create a function to do the work (again, defining it in your startup scripts):
ctdir() {
    initialize_directory.sh "$@"    # forward any arguments to the underlying script
}
Just remember to load your modified startup scripts after making the changes. New shells should pick the changes up, but you may need to re-source the file manually from an existing shell (or just exit and restart).
Agreed with @paxdiablo, the best way is to create an alias.
The following steps will work in Linux:
Name the alias.
Type the following at the command line:
alias ctdir='initialize_directory.sh'
Edit the .bashrc file.
This file is usually present in your home directory.
Add the alias mentioned in step 1 at the end of the .bashrc file to make it permanent and reusable in every session.
vi ~/.bashrc
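After saving, reload the file so the alias takes effect in the current session (new shells pick it up automatically):
source ~/.bashrc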

Instead of giving a command for batch mode, give an .scm file path?

It is possible to supply batch commands directly with the -b flag, but if the commands become very long, this is no longer an option. Is there a way to give the path to an .scm script that was written to a file, without having to move the file into the scripts directory?
Not as far as I know. What you give in the -b flag is a Scheme statement, which implies your function has already been loaded by the script executor process. You can of course add more directories that are searched for scripts using Edit>Preferences>Folders>Scripts.
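For illustration, a typical batch invocation passes one Scheme statement per -b flag; my-registered-function is a placeholder for a function already loaded from a scripts folder:
gimp -i -b '(my-registered-function "input.png")' -b '(gimp-quit 0)'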
If you write your script in Python, the problem is a bit different, since you can alter the Python path before loading the script code; but the command line remains a bit long.

Get Input for a bash script by capturing it from a VIM session

I am creating a new CLI application where I want to get some sensitive input from the user. Since this input can be quite descriptive and the information is a bit sensitive, I wanted to allow the user to enter a command like this from the app:
app new entry
after which I want to provide the user with a Vim session where they can write this descriptive input; when they exit the Vim session, the input will be captured by my script and used for further processing.
Can someone tell me a way (probably some hidden Vim feature, since I am always amazed by them) to do so without creating any temporary file? As explained in a comment below, I would prefer a somewhat in-memory file: since the information is a bit sensitive, I would like to process it first via my script and only then write it to disk in an encrypted manner.
Git actually does this: when you type git commit, a new Vim instance is created and a temporary file is used in this instance. In that file, you type your commit message.
Once Vim is closed again, the content of the temporary file is read and used by Git. Afterwards, the temporary file is deleted.
So, to get what you want, you need the following steps:
create a unique temporary file (Create a tempfile without opening it in Ruby)
open Vim on that file (Ruby, Difference between exec, system and %x() or Backticks)
wait until Vim gets terminated again (also contained in the above SO thread)
read the temporary file (How can I read a file with Ruby?)
delete the temporary file (Deleting files in ruby)
That's it.
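A minimal shell sketch of that flow (the links above describe the Ruby equivalents):
#!/bin/bash
tmpfile=$(mktemp) || exit 1          # 1. create a unique temporary file
"${EDITOR:-vim}" "$tmpfile"          # 2./3. open Vim and wait until it exits
message=$(cat "$tmpfile")            # 4. read the temporary file
rm -f "$tmpfile"                     # 5. delete the temporary file
printf 'captured: %s\n' "$message"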
You can make the shell create a file descriptor attached to a second script and make Vim write there, like this (but you need to split the script into two parts: one that calls Vim and one that processes its input):
# First script
…
vim --cmd $'aug ScriptForbidReading\nau BufReadCmd /proc/self/fd/* :' --cmd 'aug END' >(second-script)
Notes:
second-script might actually be a function defined in the first script (at least in zsh). This also requires bash or zsh (tested only on the latter).
Requires *nix; it may not work on some OSes considered to be *nix.
BufReadCmd is needed because Vim hangs when trying to read a write-only descriptor.
It is suggested that you set the filetype (if needed) right away, without relying on ftdetect plugins, in case your script is not the only one that will use this method.
Zsh will wait for second-script to finish, so you may continue the script right after the vim command in case information from second-script is not needed (it would be hard to get it back from there).
The second script will be launched from a subshell, so no variable modifications it makes will be seen in code running after the vim call.
The second script will receive whatever Vim saves on its standard input. The parent's standard input is not directly accessible, but using </dev/tty will probably work.
This is for a zsh/bash script. Nothing really prevents you from using the same idea in Ruby (it is likely more convenient and does not require splitting into two scripts), but I do not know Ruby well enough to say how one would create file descriptors there.
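For illustration, second-script can be as simple as a reader on standard input; this is a hypothetical sketch:
#!/bin/bash
# second-script: receives on stdin whatever Vim writes to the descriptor
input=$(cat)
printf 'captured %d bytes\n' "${#input}"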
Using vim for this seems like overkill.
The highline ruby gem might do what you need:
require 'highline'
irb> pw = HighLine.new.ask('info: ') {|q| q.echo = false }
info:
=> "abc"
The user's text is not displayed when you set echo to false.
This is also safer than creating a file and then deleting it, because then you'd have to ensure that the delete was secure (overwriting the file several times with random data so it can't be recovered; see the shred or srm utilities).
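For example, with GNU coreutils (the path is a placeholder):
shred -u /path/to/tempfile    # overwrite the file, then remove it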

Ruby, Unicorn, and environment variables

While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a sinatra application, riding Unicorn and Nginx. I know nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http stuff does or does not spawn a login shell. I did some homework and asked google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can function as a control script (if prodEnv use DB=ProdDB, etc.), there is one more piece that simplifies this problem. Bash and ksh both support a feature called sourcing files. This is an operation that the shell provides to open a file and execute what is in it, just as if it were in-lined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. Other filenames will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible:
export dbServer webServer
An alternative combined assignment/export is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how non-identical your different environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
    prodServerName )
        envFile=/path/2/prod/envFile ;;
    QAServerName )
        envFile=/path/2/qa/envFile ;;
    devServerName )
        envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
# Now call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data-processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. Figaro works both with and without Heroku; it uses a YAML file (in config/, git-ignored) to keep track of variables. Another option is dotenv, which reads variables from a .env file. There is also an article covering all of these options.
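For illustration, a dotenv-style .env file is just KEY=value lines (the names are placeholders):
DB_SERVER=DevDBServer
WEB_SERVER=QAWebServer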
To spawn a login shell you need to invoke sudo like this:
sudo -i -u <user> <command>
Also, you may use -E to preserve the environment. This will allow some variables to be passed from your current environment to the command invoked with sudo.
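For example (MY_VAR is a placeholder; whether -E is honored depends on the sudoers policy):
MY_VAR=hello sudo -E -u http printenv MY_VAR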
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file in a directory above the application root called variables. In this script I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS=source /path/to/variables in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD" where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
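A sketch of the pieces described above (paths and names follow the description; the full init.d script is not reproduced here):
# /path/to/variables -- one export per line
export VAR=value

# inside /etc/init.d/unicorn
GET_VARS="source /path/to/variables"
su - $USER -c "$GET_VARS && $CMD"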

How can I standardize shell script header #! on different hosts?

I manage a large number of shell (ksh) scripts on server A. Each script begins with the line...
#!/usr/bin/ksh
When I deploy to machines B, C, and D I frequently need to use a different shell such as /bin/ksh, /usr/local/bin/ksh or even /usr/dt/bin/ksh. Assume I am unable to install a new version of ksh and I am unable to create links in any protected directories such as /usr/local/bin. At the moment I have a sed script which modifies all the scripts, but I would prefer not to do this. I would like to standardize the header so that it no longer needs to be changed from server to server. I don't mind using something like
#!~/ksh
And creating a link which exists on every server, but I have had problems finding home via "~" in the past when using rsh (maybe it was ssh) to call a script (AIX specifically, I think). Another option might be to create a link in my home directory, ensure that it is first in my PATH, and simply use
#!ksh
Looking for a good solution. Thanks.
Update 8/26/11 - Here is the solution I came up with. The installation script looks for the various versions of ksh installed on the server and then copies one of the ksh93 programs to /tmp/ksh93. The scripts in the framework all refer to #!/tmp/ksh93, and they don't need to be changed from one server to the other. The script also sets some variables so that if the file is ever removed from /tmp, it will immediately be put back the next time a scheduled task runs, which is at a minimum every minute.
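A hypothetical sketch of that installer step; the candidate paths come from the question, and the probe relies on ${.sh.version}, which only expands under ksh93:
#!/bin/sh
for k in /usr/bin/ksh /bin/ksh /usr/local/bin/ksh /usr/dt/bin/ksh; do
    # ${.sh.version} is ksh93-only; other shells fail this probe
    if [ -x "$k" ] && "$k" -c 'echo "${.sh.version}"' 2>/dev/null | grep -q 93; then
        cp "$k" /tmp/ksh93
        break
    fi
done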
As rettops noted, you can use:
#!/usr/bin/env ksh
This will likely work for you. However, there can be some drawbacks. See Wikipedia on Shebang for a fairly thorough discussion.
#! /usr/bin/env ksh
will use whatever ksh is in the user's path.
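Combined with the link idea from the question, a per-user link can make the env approach resolve to a specific build (paths here are illustrative):
mkdir -p ~/bin
ln -s /usr/dt/bin/ksh ~/bin/ksh     # link whichever ksh this host actually has
export PATH="$HOME/bin:$PATH"       # ensure env finds the link first
Scripts beginning with #!/usr/bin/env ksh will then run under ~/bin/ksh.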
