Dynamically loading aliases into current shell - bash

Haven't been able to crack this one; any help would be appreciated.
I have a small script that does some logic and then tries to create aliases for me to load into my shell. I've tried several different approaches, but haven't been able to come up with anything better than this, which feels like a nasty hack:
script.pl > new_aliases.txt && source new_aliases.txt && rm new_aliases.txt
Ideally, this happens inside a .bashrc file so it's loaded when the shell starts. The best I've been able to do is wrap the line above in a shell function and then call it manually after the shell starts.
Inside my .profile:
function load_aliases () {
script.pl > new_aliases.txt && source new_aliases.txt && rm new_aliases.txt
}
Then after shell starts...
load_aliases
Like I said, this does what I want, but 1) it's damn ugly and 2) it's manual.

Looks like you want
eval "$(script.pl)"
If you want this in an interactive function, you should put in the full path to script.pl so it works regardless of which directory you are in; or, of course, have script.pl in your PATH.
If you want to put this in a function in your .profile, you need to make sure the output doesn't produce any Bashisms, because .profile is shared with other shells. Maybe put it in .bash_profile instead (but note that if you create a new .bash_profile, Bash will no longer read .profile at startup, so you will then want to source .profile explicitly from your .bash_profile).
For example, source and function are (rather superfluous IMHO) Bash extensions, which are not valid commands in regular sh.
The usual caveats about eval apply, of course, but this is no more unsafe than what you are already doing.
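For instance, the line in ~/.bashrc might look like this (a sketch; the path is an assumption, adjust it to wherever script.pl actually lives):
# in ~/.bashrc -- assumed location of the script
eval "$(~/bin/script.pl)"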

Why don't you put the load_aliases command into your .bashrc?

In addition to the eval solution, you can also use process substitution to treat the output of script.pl as a file.
source <(script.pl)
This should work, although I can't shake the nagging suspicion that I ran into an issue once with source expecting a real file, not what is essentially a pipe.
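If you want it to happen automatically, a minimal sketch for ~/.bashrc, assuming script.pl is somewhere on your PATH:
# in ~/.bashrc
if command -v script.pl >/dev/null 2>&1; then   # guard so a missing script doesn't break shell startup
    source <(script.pl)
fi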

Related

Source a script with a different shell type

Let's say a script is called with /bin/sh. Is it possible to source another script from that script and to have it be interpreted with #!/bin/bash?
It would appear that the #!/bin/bash doesn't do anything...
And by source, at this point I am meaning the functionality of manipulating the parent environment.
No. The whole point of sourcing a script is that the script is interpreted by the shell doing the sourcing. If, as is often the case, /bin/sh is bash, then you will get the desired behavior. Otherwise, you are out of luck.
Try the source command, or the dot operator. You might also try the env command. Note: make sure you export the variables if you're using source (or the dot operator).
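If the goal is really for an sh script to pick up settings computed by a bash script, a common workaround is to have the bash script print shell assignments and eval them in the caller. A sketch, where settings.bash is a hypothetical script that only prints lines like export FOO=bar:
#!/bin/sh
# settings.bash is hypothetical; it must print only valid sh assignments
eval "$(bash settings.bash)"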

saving bash functions

First of all, I'd like to be corrected on my vocabulary. I assume Terminal provides an environment for bash, which is a type/version of shell. Is this correct?
I'm trying to utilize bash and shell more in my development processes to speed up deployment. However, I'm only beginning to understand the basics outside the commands I've learned from growing up.
I've started making functions in Terminal to help automate some of my more repetitious tasks. This is all fine and dandy until I exit Terminal.
I assume that shell runs in an instance, so that instance is lost when I exit terminal. I notice that shell leaves a .bash_history, also accessible using 'history', where I can see my old functions from old sessions. However, of course, they no longer appear to execute.
I recognize that I could create a shell script, but this introduces compiling issues as well as having to pay more attention to where the scripts are stored relatively.
Question: When I create bash functions using command(){}, they do not persist after closing the terminal. Can I make them persist, so that in new terminal sessions I have access to my old functions without resorting to shell scripts?
Edit: I also wanted to mention that I tried extensively to find an answer to this using traditional means, but search terms like "save" and "exit" tend to lead to other aspects of the shell.
Your first statement is correct. A terminal instance runs a type of shell (bash, sh, csh)
You can add them to your ~/.bashrc file or add the saved script path (no compiling needed) to your PATH variable.
You could also just copy scripts to /usr/local/bin for quick access anytime. You would have no need to keep track of where they are relatively. This is quite handy and makes your scripts available to other users (or not if permissions are set correctly)
See the Using History Interactively section of the Bash Reference Manual for ways you can execute commands from your history.
For example, typing !?foo and pressing Enter will execute the most recent command containing "foo". I like to have shopt -s histverify histreedit in my ~/.bashrc so I can edit and confirm the command, if necessary, rather than executing it immediately.
Also see the Commands For Manipulating The History section for keystrokes you can use to search for history entries to recall and execute.
For example, pressing Ctrl-r and typing foo will recall the most recent command containing "foo". You can press Ctrl-r additional times to continue searching in reverse for additional matching commands. Press Enter when you're ready to execute the one currently shown or Ctrl-g to abort the search.
If you add stty -ixon to your ~/.bashrc, then you can use Ctrl-s to search forward through history after you've begun searching backward.
Of course, you can save your functions by editing ~/.bashrc and adding them to it. I prefer to keep my functions in a file I created called ~/bin/functions and then add a line to ~/.bashrc to source that file. The line looks like . ~/bin/functions.
I save larger scripts in /usr/local/bin or ~/bin. The former should already be in your path and you can add the latter to your path by editing ~/.bashrc.
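Putting those two pieces together, the relevant lines in ~/.bashrc might look something like this (a sketch using the paths mentioned above):
# load personal functions, if the file exists
if [ -f ~/bin/functions ]; then
    . ~/bin/functions
fi
# make scripts in ~/bin runnable by name
export PATH="$HOME/bin:$PATH"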
After you type in the functions on the command line, you can recall them using command-line editing (as @Dennis Williamson mentioned), but there is an easier method: declare -f. This command lists all current functions, and you can redirect them to a file:
/home/user1> function myfunc {
> echo "Hollow world!"
> }
/home/user1> declare -f > myfuncs
/home/user1> more myfuncs
myfunc ()
{
    echo "Hollow world!"
}
Note how Bash changes the function syntax from Korn shell to Bourne shell! Fortunately there is no difference between the two in Bash (unlike ksh93).
When you need to load the function it is a simple matter:
/home/user1> source myfuncs
/home/user1> myfunc
Hollow world!
You don't need execute permissions by the way, only read.
You could (as others have said) add this to one of your start-up files, like .bashrc.
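A small sketch of what that could look like (the file name myfuncs follows the example above; the grep guard just avoids adding the source line twice):
declare -f > ~/myfuncs                                                  # dump the current function definitions
grep -qxF '. ~/myfuncs' ~/.bashrc || echo '. ~/myfuncs' >> ~/.bashrc    # source them at startup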
You can create a simple library which would contain all your functions. This would basically solve your problem of storing functions.
yeshwantnaik$ cat my_functions.lib
function do_something(){
#your code goes here
}
Save it wherever you want. Preferably to your $HOME directory.
Load the library. Don't miss the dot in the beginning.
yeshwantnaik$ . $HOME/my_functions.lib
Now you can run your function directly.
yeshwantnaik$ do_something
Let Linux do stuff for you
You can skip the step of manually loading the library by letting Linux do it for you automatically when you log in.
Run the commands below:
echo ". $HOME/my_functions.lib" >> ~/.bashrc
echo ". $HOME/my_functions.lib" >> ~/.bash_profile
source ~/.bashrc
source ~/.bash_profile
That's it. You can directly execute your function from the command line without doing anything.

Is it possible to "unsource" in bash?

I have sourced a script in bash source somescript.sh. Is it possible to undo this without restarting the terminal? Alternatively, is there a way to "reset" the shell to the settings it gets upon login without restarting?
EDIT: As suggested in one of the answers, my script sets some environment variables. Is there a way to reset to the default login environment?
It is typically sufficient to simply re-exec a shell:
$ exec bash
This is not guaranteed to undo anything (sourcing the script may remove files, or execute any arbitrary command), but if your setup scripts are well written you will get a relatively clean environment. You can also try:
$ su - $(whoami)
Note that both of these solutions assume that you are talking about resetting your current shell, and not your terminal as (mis?)stated in the question. If you want to reset the terminal, try
$ reset
No. Sourcing a script executes the commands contained therein. There is no guarantee that the script doesn't do things that can't be undone (like remove files or whatever).
If the script only sets some variables and/or runs some harmless commands, then you can "undo" its action by unsetting the same variables, but even then the script might have replaced variables that already had values before with new ones, and to undo it you'd have to remember what the old values were.
If you source a script that sets some variables for your environment but you want this to be undoable, I suggest you start a new (sub)shell first and source the script in the subshell. Then to reset the environment to what it was before, just exit the subshell.
The best option seems to be to use unset to unset the environment variables that sourcing produces. Adding OLD_PATH=$PATH; export OLD_PATH to the .bashrc profile saves a backup of the login path in case one needs to revert the $PATH.
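A minimal sketch of that idea, assuming PATH is the only thing you need to roll back (restore_path is a made-up helper name):
# in ~/.bashrc
export OLD_PATH="$PATH"                       # snapshot the login PATH
restore_path () { export PATH="$OLD_PATH"; }  # call this after sourcing something that mangled PATH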
Not the most elegant solution, but this appears to do what you want:
exec $SHELL -l
My favorite approach for this would be to use a subshell, within () parentheses:
#!/bin/bash
(
source some_script.sh
#do something
)
# the environment from before the previous subshell should be restored here
# ...
(
source other_script.sh
#do something else
)
# the environment from before the previous subshell should be restored here
see also
https://unix.stackexchange.com/questions/138463/do-parentheses-really-put-the-command-in-a-subshell
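A quick demonstration that the parent environment survives (sets_foo.sh is a hypothetical script that does export FOO=inner):
$ export FOO=outer
$ ( . sets_foo.sh; echo "inside: $FOO" )
inside: inner
$ echo "outside: $FOO"
outside: outer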
I don't think undoing executed commands is possible in bash. You can try tset or reset for terminal initialization.
Depending on what you're sourcing, you can make the script source/unsource itself.
#!/bin/bash
if [ "$IS_SOURCED" == true ] ; then
unset -f foo
export IS_SOURCED==false
else
foo () { echo bar ; }
export IS_SOURCED==true
fi
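Usage would look something like this (the file name toggle.sh is purely for illustration):
$ . ./toggle.sh    # first source: defines foo
$ foo
bar
$ . ./toggle.sh    # second source: removes foo again
$ foo
bash: foo: command not found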

Execute a bash function upon entering a directory

I'd like to execute a particular bash function when I enter a new directory. Something like:
alias cd="cd $@ && myfunction"
$@ doesn't work there, and adding a backslash doesn't help. I'm also a little worried about messing with cd, and it would be nice if this worked for other commands which change directory, like pushd and popd.
Any better aliases/commands?
Aliases don't accept parameters. You should use a function. There's no need to execute it automatically every time a prompt is issued.
function cd () { builtin cd "$@" && myfunction; }
The builtin command allows you to call the original cd from inside the redefined one without causing infinite recursion. Quoting the parameters makes it work in case there are spaces in directory names.
The Bash docs say:
For almost every purpose, shell functions are preferred over aliases.
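The same pattern covers pushd and popd, which the question also asks about (a sketch, assuming myfunction is already defined):
pushd () { builtin pushd "$@" && myfunction; }
popd () { builtin popd "$@" && myfunction; }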
The easiest solution I can come up with is this:
myfunction() {
    if [ "$PWD" != "$MYOLDPWD" ]; then
        MYOLDPWD="$PWD";
        # strut yer stuff here..
    fi
}
export PROMPT_COMMAND=myfunction
That ought to do it. It'll work with all commands, and will get triggered before the prompt is displayed.
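One caveat: if you already have a PROMPT_COMMAND set, append your function rather than overwriting it, e.g.:
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }myfunction"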
There are a few other versions of this out there, including
smartcd, which I wrote, and has a ton of features including templating and temporary variable saving
ondir, which is smaller and much simpler
Both of these support both bash and zsh
I've written a ZSH script utilizing the callback function chpwd to source project specific ZSH configurations. I'm not sure if it works with Bash, but I think it'll be worth a try. If it doesn't find a script file in the directory you're cd'ing into, it'll check the parent directories until it finds a script to source (or until it reaches /). It also calls a function unmagic when cd'ing out of the directory, which allows you to clean up your environment when leaving a project.
http://github.com/jkramer/home/blob/master/.zsh/func/magic
Example for a "magic" script:
export BASE=$PWD # needed for another script of mine that allows you to cd into the project's base directory by pressing ^b
ctags -R --languages=Perl $PWD # update ctags file when entering the project directory
export PERL5LIB="$BASE/lib"
# function that starts the catalyst server
function srv {
perl $BASE/script/${PROJECT_NAME}_server.pl
}
# clean up
function unmagic {
unfunction srv
unset PERL5LIB
}

bash trickery using --init-file

I have a bash script (let's call it /usr/bin/bosh) with the following shebang line:
#!/bin/bash --init-file
It defines a couple of functions, and generally puts the interactive shell in an environment where the user can control a bunch of stuff that I want. This works pretty well. Now for the interesting part: I'd like to let users use this in-between layer for writing new scripts, without explicitly having to source this one. Is that at all possible?
I tried writing a script (let's call it /usr/bin/foo) with the shebang line
#!/usr/bin/bosh
which, I thought, would be rewritten to execute the command
/usr/bin/bosh /usr/bin/foo
which in turn would result in
/bin/bash --init-file /usr/bin/bosh /usr/bin/foo
But it doesn't work: /usr/bin/foo gets executed, but /usr/bin/bosh is not sourced before that.
How can I make it source the init file even though the script is not interactive? Or would I have to write a wrapper script for that? I thought of having a script like this
#!/bin/bash
. /usr/bin/bosh
. "$1"
But that wouldn't turn into an interactive shell if I don't specify a script to run, which would be kind of a shame.
EDIT
For clarification, what I'm really asking is: how can I make bash source a file (like --init-file does) regardless of whether it's interactive (before starting the interactive part) or not (before executing the script)? If there's no way, is there any other way to solve my problem?
The program specified by the #! cannot be another script, I'm afraid, at least not until Linux kernel 2.6.27.9, which allows this feature. If you run strace on foo you'll see that the exec fails with ENOEXEC (exec format error), because the kernel will not accept a script as the interpreter for another script.
What is happening is that instead of /usr/bin/bosh being executed and handed foo as input, your login shell silently falls back to executing foo itself in a sub-shell, which is why it seems to almost work.
A wrapper or a small C program that launches bash the way you want is probably your only option. Even with a kernel upgrade, it will not quite work the way you want, I'm afraid.
Everything you ever wanted to know about #! here: http://www.in-ulm.de/~mascheck/various/shebang/
EDIT: If your kernel really does support chained scripts, then a work-around for /usr/bin/bosh might be something like:
#!/bin/bash
if [ ! "$PS1" ]; then
    # non-interactive: re-exec interactively, with this file as the init file
    exec /bin/bash --init-file "$0" -i "$@"
fi
... rest of bosh init file ...
An exec seems to be unavoidable to get this to work the way you want it to.
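For completeness, a sketch of the wrapper idea from the question that still gives you an interactive shell when no script is passed (the wrapper is a separate, hypothetical file; treat this as a starting point, not a drop-in solution):
#!/bin/bash
# hypothetical wrapper, e.g. /usr/bin/boshrun; /usr/bin/bosh is the init file from the question
if [ $# -eq 0 ]; then
    exec /bin/bash --init-file /usr/bin/bosh -i   # no script given: drop into the interactive environment
else
    . /usr/bin/bosh    # load the bosh environment into this non-interactive shell
    . "$1"             # then source the user's script with that environment in place
fi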
A script is not a runtime environment; that may be your problem. The shebang defines the runtime environment, i.e. /bin/java, /bin/python, /bin/bash, /bin/dash. Your script is not such an environment. Your "wrapper example" would be the appropriate approach.
