Alternative for aliases to be seen inside shell scripts - bash

I have set some aliases inside .bashrc that I need to be visible inside a shell script I can't modify.
Since I can't expand the aliases inside that script, what alternative do I have?
(For example, I need to define python2.6 to be the same as python)

Define and export functions instead of using aliases.
Let's say your script uses mv without -i or -v and you want to add them, but can't modify the script.
function mv () { command mv -iv "$@"; }
export -f mv
Now your script will use those options. You can define the function from the command line or in a wrapper script.
The Bash manual says: "For almost every purpose, shell functions are preferred over aliases."
Well written scripts use absolute paths to executables (e.g. /bin/mv). Doing so will prevent this technique from working and is good security practice.
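Applied to the question's python2.6 example, a minimal sketch (the script name is a placeholder, and this only helps if that script runs under bash and calls python2.6 by name rather than by an absolute path):
python2.6() { python "$@"; }
export -f python2.6
./script_i_cannot_modify.sh   # hypothetical name; inside it, python2.6 now resolves to the exported function
# note: some bash versions refuse to import exported functions whose names are not plain
# identifiers (the dot); if that bites, use a wrapper or PATH approach instead (next answer).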

If you can wrap the script, you can define aliases in the wrapper and source (. /path/to/script) the script. Both functions and aliases should work that way.
If you can't, you have to put the commands in PATH. Either as symlinks or as scripts.
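For the question's python2.6 example, two sketches, assuming /path/to/script is the unmodifiable script:
Wrapper variant (contents of a hypothetical wrapper script):
#!/bin/bash
python2.6() { python "$@"; }   # a function; an alias would additionally need: shopt -s expand_aliases
. /path/to/script
PATH variant (a symlink in a directory searched before the system ones):
mkdir -p ~/bin
ln -sf "$(command -v python)" ~/bin/python2.6
PATH=~/bin:$PATH /path/to/script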

Bash functions are more versatile than aliases, and can serve the same purpose.

There is a shell option to expand aliases: shopt -s expand_aliases. However, it is switched off for a reason: aliases in shell scripts are a support nightmare. The recommended alternative is to use the full command.
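For reference, a minimal sketch of that option in a script you can edit (the alias mirrors the question's example):
#!/bin/bash
shopt -s expand_aliases      # aliases are not expanded in non-interactive shells by default
alias python2.6=python
python2.6 --version          # now runs: python --version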

Related

Adding bash functions to the $PATH or to bash shell automatically

Is there a way to add bash functions to the $PATH, or to the bash shell, without requiring an end-user to source them manually?
In other words, if we have a software library that exports only bash functions, we normally require the end-user to source the bash scripts with
. "$HOME/.the_scripts/"*.sh
and then use them. But is there a way to somehow get the bash functions into the shell without requiring the user to add a line of code to ~/.bashrc or ~/.bash_profile, etc?
What am I trying to do? I am trying to obviate the need for users to add a call to source a bash script for a library they just installed.
One suggestion I got was to write a container script to a folder that is already in the $PATH.
Say I have a script like so:
#!/usr/bin/env bash
my_func(){
    echo "this is my func, $1, $2, $3"
    export foo="my_func"
}
my_func a b c
I could write that script to a folder in $PATH and then execute the script, which will then call the bash function(s).
Not sure how great/universal a solution this is, but it would work for some use cases, I suppose. It will not work if you want to export environment variables to the current shell, because, as far as I know, the bash function(s) would run in a child shell, not in the calling shell / current script.
If you read about the Bash Startup Files, you'll notice that /etc/profile is one of the files that is processed. If you read that file, you'll see that it sources all *.sh files in /etc/profile.d.
If you can have your script libraries installed in /etc/profile.d, the functions will be available for all interactive login shell sessions.
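A sketch of what such a drop-in file could look like (the filename is hypothetical; it reuses the function from the script above):
# /etc/profile.d/mylib.sh -- sourced by /etc/profile for login shells
my_func() {
    echo "this is my func, $1, $2, $3"
}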

Dynamically loading aliases into current shell

Haven't been able to crack this one; any help would be appreciated.
I have a small script that does some logic and then tries to create aliases for me to load into my shell. I've tried several different approaches, but haven't been able to get anything better than this, which feels like a nasty hack.
script.pl > new_aliases.txt && source new_aliases.txt && rm new_aliases.txt
Ideally, this happens inside a .bashrc file so it's loaded when the shell starts. The best I've been able to do is wrap the line above in a shell function and then call that manually after the shell starts.
Inside my .profile
function load_aliases () {
    script.pl > new_aliases.txt && source new_aliases.txt && rm new_aliases.txt
}
Then after shell starts...
load_aliases
Like I said, this does what I want, but 1) it's damn ugly and 2) it's manual.
Looks like you want
eval "$(script.pl)"
If you want this in an interactive function, you should put in the full path to script.pl so it works regardless of which directory you are in; or, of course, have script.pl in your PATH.
If you want to put this in a function in your .profile, you need to make sure the output doesn't produce any Bashisms, because .profile is shared with other shells. Maybe put it in .bash_profile instead (but note that if you create a new .bash_profile, this disables reading .profile when you start Bash, so you will then want to source .profile explicitly from your .bash_profile).
For example, source and function are (rather superfluous IMHO) Bash extensions, which are not valid commands in regular sh.
The usual caveats about eval apply, of course, but this is no more unsafe than what you are already doing.
Why don't you put the load_aliases command into your .bashrc?
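Putting it together, the .bashrc entry can be as small as this (the absolute path to script.pl is an assumption; use wherever it actually lives):
eval "$(/home/me/bin/script.pl)"    # hypothetical absolute path to script.pl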
In addition to the eval solution, you can also use process substitution to treat the output of script.pl as a file.
source <(script.pl)
This should work, although I can't shake the nagging suspicion that I ran into an issue once with source expecting a real file, not what is essentially a pipe.
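As a sketch, assuming script.pl prints ordinary alias definitions (the aliases shown are invented for illustration):
# script.pl might emit lines such as:
#   alias gs='git status'
#   alias today='date +%F'
source <(script.pl)     # no temporary file to create or clean up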

'which' command is incorrect

I have a shell script in my home directory called "echo". I added my home directory to my path, so that this echo would replace the other one.
To do this, I used: export PATH=/home/me:$PATH
When I do which echo, it shows the one I want. /home/me/echo
But when I actually do something like echo asdf it uses the system echo.
Am I doing something wrong?
which is an external command, so it doesn't have access to your current shell's built-in commands, functions, or aliases. In fact, at least on my system, /usr/bin/which is a shell script, so you can examine it and see how it works.
If you want to know how your shell will interpret a command, use type rather than which. If you're using bash, type -a will print all possible meanings in order of precedence. Consult your shell's documentation for details.
For most shells, built-in commands take precedence over commands in your $PATH. The whole point of having a built-in echo, for example, is that it's faster than loading /bin/echo into memory.
If you want your own echo command to override the shell's built-in echo, you can define it as a shell function.
On the other hand, overriding the built-in echo command doesn't strike me as a good idea in the first place. If it behaves the same as the built-in echo, there's not much point. If it doesn't, then it could break scripts that use echo expecting it to work a certain way. If possible, I suggest giving your command a different name. If it's an enhanced version of echo, you could even call it Echo.
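For instance, checking what the shell will actually run typically looks like this (output shown is typical for bash with /home/me prepended to PATH; exact paths vary by system):
type -a echo
# echo is a shell builtin
# echo is /home/me/echo
# echo is /usr/bin/echo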
It is likely using the shell's builtin.
If you want the one in your path you can do
`which echo` asdf
From this little article that explains the rules, here's a list in descending order of precedence:
Aliases
Shell functions
Shell builtin commands
Hash tables
PATH variable
echo is a shell builtin command (at least in bash) and PATH has the lowest priority. I guess you'll need to create a function or an alias.
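For completeness, either of these would make plain echo resolve to the script from the question (both shadow the builtin, with the caveats mentioned above):
echo() { /home/me/echo "$@"; }    # function: takes precedence over the builtin
alias echo='/home/me/echo'        # alias: higher precedence still, in interactive shells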

Source a script with a different shell type

Let's say a script is called with /bin/sh. Is it possible to source another script from that script and to have it be interpreted with #!/bin/bash?
It would appear that the #!/bin/bash doesn't do anything...
And by source, at this point I am meaning the functionality of manipulating the parent environment.
No. The whole point of sourcing a script is that the script is interpreted by the shell doing the sourcing. If, as is often the case, /bin/sh is bash, then you will get the desired behavior. Otherwise, you are out of luck.
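A tiny illustration of why (filenames are hypothetical):
child.sh:
#!/bin/bash
CHILD_VAR=hello          # plain POSIX here; bash-only syntax (e.g. arrays) would fail under dash
parent.sh (run with: sh parent.sh):
. ./child.sh             # interpreted by the *current* shell; the #!/bin/bash line is just a comment
echo "CHILD_VAR=$CHILD_VAR"    # prints "CHILD_VAR=hello" because child.sh ran in this shell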
Try the source command, or the dot operator. You might also try the env command. Note: make sure you export variables if you're using source (or the dot).

shebang script interpreter from shell variable

I have a number of scripts which need to specify the python binary which runs them:
#! /home/nargle/python/bin/python2.6
I need to adapt these scripts to work at two different sites. Lots of tools are installed in different places, so at new site 2 the script needs to start with:
#! /user/nargle/python/bin/python2.6
..
I want to replace directly-quoted paths with environment variables which are set differently for each site. What I would like is for this to work:
#! $MY_PYTHON_PATH
but it doesn't! I am slightly hazy on where to research this. Is it the executing shell (be it bash, csh or whatever) which detects the '#!' at the start of a script (be it bash, python or whatever) and fires up the interpreter/shell to run it?
I feel that there must be some way to do this. Please advise!
Oh yes, there is one more constraint: we cannot use $PATH for this. This may seem like a stupid restriction, but this is for a large environment with many users.
The environment is RHEL 5.7.
EDIT It has been suggested to use a shell script and that is the current plan: it works fine:
$MY_PYTHON_PATH some_script file.py $@
The problem is really that we have lots of people using the python files, and lots of automated tests which would need to be changed. If it has to be done it has to be done, but if possible I want to minimise the impact of a change of working practice for scores of people.
EDIT It would also be possible to have a link in a location which is the same on both systems, and which links to the real binary in a different target on each system. This is quite feasible but seems kind of messy: we use the linux 'modules' package to set up environment variables for many tools and it would be nice if we could take the python path from our modulefiles.
EDIT It isn't the answer but this feels like the kind of evil hack I was looking for:
http://docs.nscl.msu.edu/daq/bluebook/html/x3237.html
.. see "Example 4-2. #! lines for bash and for tclsh"
EDIT I hoped this might work but it didn't:
#! /usr/bin/env PATH=$PATH:$MY_PYTHON_PATH python2.6
The common solution is to change the shebang to
#!/usr/bin/env python2.6
Then, just arrange your $PATH to point to the right python2.6 on each machine.
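A sketch of the per-site arrangement (directories taken from the question; where the export lives, e.g. a profile script or a modulefile, is up to the site):
# site 1
export PATH=/home/nargle/python/bin:$PATH
# site 2
# export PATH=/user/nargle/python/bin:$PATH
Then every script can carry the same #!/usr/bin/env python2.6 line unchanged.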
Write a wrapper shell script. If you have script.py, write a script.py.sh with the following content:
#!/bin/bash
PYTHON_SCRIPT=$( echo "$0" | sed -e 's/\.sh$//' )
exec $MY_PYTHON_PATH $PYTHON_SCRIPT "$@"
Disclaimer: This isn't tested, just wrote it off the top of my head.
Now just set up your MY_PYTHON_PATH on each machine, and call script.py.sh instead of script.py.
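A usage sketch (the interpreter path is site 1's from the question; the argument is made up):
export MY_PYTHON_PATH=/home/nargle/python/bin/python2.6
./script.py.sh input.dat      # strips the .sh, then runs: python2.6 ./script.py input.dat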
Summary This solution is only second-best, since it requires a lot of script calls to be changed from script.py to script.py.sh, something that should be avoided if at all possible.
Alternative
Use env to call a python-finder script, which just calls the python binary contained in $MY_PYTHON_PATH. The python-finder script has to be in the same location on both machines, use symlinks if necessary.
#!/usr/bin/env /usr/local/bin/python-finder.sh
The contents of python-finder.sh:
#!/bin/bash
exec $MY_PYTHON_PATH "$@"
This works because, for interpreter scripts (those starting with a shebang), execve runs the interpreter (here env) and passes it the script's filename as an argument; env in turn passes that on to the command it calls.
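Roughly, the chain when ./script.py is executed looks like this (paths as above):
# execve("./script.py", ["./script.py", "arg1"], ...)
#   -> /usr/bin/env /usr/local/bin/python-finder.sh ./script.py arg1
#   -> /usr/local/bin/python-finder.sh ./script.py arg1
#   -> exec "$MY_PYTHON_PATH" ./script.py arg1     # i.e. the site's python2.6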
I was being silly: using variable expansion with env does work.
#! /usr/bin/env PATH="$PATH:$MY_PYTHON_PATH" python2.6
We can do:
#!/bin/bash
"exec" "python" "$0"
print "Hello World"
from http://rosettacode.org/wiki/Multiline_shebang#Python
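A commented sketch of why the trick works (Python 2 syntax as on that page; adding "$@" is my addition so arguments are forwarded):
#!/bin/bash
# to bash: quotes are removed, so this line re-executes the file with python and never returns
# to python: the same line is just adjacent string literals, a harmless no-op expression
"exec" "python" "$0" "$@"
print "Hello World"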
