How to change the parent working directory from which my executable is run? - shell

My project is a command-line application in Haskell that uses the optparse-applicative package to handle parsing of command-line options.
I am trying to implement a functionality which can change the working directory of the shell from which the command was run.
For example, I want to do something like
$ program foo
and have the shell change to a specified directory my program has associated with the passed option foo.
$ program foo
input 'foo'; now changing to directory '~/mydirectories/foodirectory'
[~/mydirectories/foodirectory]$
I attempted to implement this functionality using the setCurrentDirectory function from the directory package, but it does not appear to affect the working directory of the shell from which the program was run; no change in the working directory is visible. I imagine it only sets a working directory internal to my program's own process, since I see no change in the shell that launched the executable.
Is it possible to have my program change directories in this manner? Such directory-switching functionality would improve the convenience of using my application.
How can I implement directory-switching functionality from a command-line executable?

In general, you can't do this. No program can change its parent process's directory, no matter what language it's written in, using standard UNIX interfaces. A program can only change its own directory (and, by extension, the initial working directory of any programs it subsequently starts).
However, with the shell's active participation, you can make something very similar. Consider instructing your users to run a shell function such as the following:
# this wrapper works on any POSIX shell, but will avoid setting a global variable "dir" if
# the active shell (like bash, ksh, or even dash) supports local variables.
chdirWithMyHaskellProgram() {
    local dir 2>/dev/null || \
        declare dir 2>/dev/null || \
        : "current shell does not support local variables; proceeding anyhow"
    dir="$(myHaskellProgram "$@")" && cd -- "$dir"
}
Calling chdirWithMyHaskellProgram --foo-bar will invoke myHaskellProgram --foo-bar and change to the directory named in its output, if and only if the exit status indicates success.
Consider having myHaskellProgram --eval emit the source code of the above function. If you implement this, your users would need to put eval "$(myHaskellProgram --eval)" in their dotfiles, but would thereafter have the desired functionality.
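For illustration, here is roughly what the setup and a session could look like, assuming your program prints the chosen directory on stdout (the home path below is just a placeholder):
# in ~/.bashrc (or another dotfile), load the wrapper emitted by your program:
eval "$(myHaskellProgram --eval)"

# afterwards, in an interactive shell (diagnostics, if any, should go to stderr
# so that stdout carries only the directory path):
$ chdirWithMyHaskellProgram foo
$ pwd
/home/user/mydirectories/foodirectory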

Related

How to execute a script within a subfolder of a directory in path

Consider the following folder structure:
$ tree ~/test_path
test_path
`-- sub_folder
`-- script.sh
1 directory, 1 file
Let's say that you have added test_path to your path by
export PATH=$PATH:~/test_path
$ whereis sub_folder
sub_folder: /home/murtraja/test_path/sub_folder
Now how to execute script.sh by calling sub_folder/script.sh?
$ sub_folder/script.sh
bash: sub_folder/script.sh: No such file or directory
EDIT: I don't want to change the call sub_folder/script.sh because it is made by another script which I cannot (or would rather not) change.
Short answer: You can't, but depending on the set of constraints you're facing, there might be another way to handle it.
Long answer: When a command name contains at least one slash character, it is treated as a path to the executable (i.e. the shell doesn't search the directories in $PATH). The command name sub_folder/script.sh contains a slash but doesn't start with one, so it is resolved relative to the current working directory.
So there are a couple of possibilities for making this work:
If you can cd to ~/test_path before running this, it'll find it directly. Of course, this may break other things (i.e. anything else that uses relative paths and/or plain filenames and expects them to be resolved somewhere else). Also, be sure to check for errors when you cd, or the script could execute in an unexpected directory, with unexpected consequences.
If the script needs to execute from a different working directory, you might be able to create a symbolic link from sub_folder in that working directory to ~/test_path/sub_folder. But depending on where the script's working directory is, this may be impossible or unsafe. I'd avoid using this option if possible.
There's also an option that depends on a weird/nonstandard feature of bash: the ability to define function names with slash in them. But this has weird limitations depending on the version of bash you have:
You can define a function like this:
sub_folder/script.sh() { ~/test_path/sub_folder/script.sh "$@"; }
and then either use export -f sub_folder/script.sh (so bash subprocesses inherit it), or do this in a wrapper script and then source the script you can't change from there (so it's the same shell, and inheritance isn't necessary).
Difficulty: some versions of bash refuse to export functions with weird names, and some refuse to inherit them. So the export method might or might not work (and might break unexpectedly due to an update). The source method might be better, but also might cause other trouble.
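A minimal sketch of the wrapper-script variant (the wrapper's name and the path of the script you can't change are hypothetical):
#!/bin/bash
# wrapper.sh: define the slash-named function, then source the script you
# can't modify so it runs in this same shell and finds sub_folder/script.sh
sub_folder/script.sh() { ~/test_path/sub_folder/script.sh "$@"; }
source /path/to/the-script-you-cannot-change.sh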
If there's any way at all to change the other script, that'd really be the best option.
If you add ~/test_path/sub_folder itself to your PATH, you can call the script directly by name (and tab completion will work as well):
$ script.sh

How can I trick bash to think I ran a script from within the directory where the binary is located?

I have a binary (emulator from the Android SDK Tools) that I use frequently. I've added it to my $PATH variable so that I can easily run the command with solely emulator.
However, it turns out that the binary uses relative paths; it complains if I run it from a different directory. I'd really like to run the command from any folder without having to cd into the directory where the binary lives. Is there any way to make the script/bash think I ran the command from the directory where it's located?
A function is an appropriate tool for the job:
emu() ( cd /dir/with/emulator && exec ./emulator "$@" )
Let's break this down into pieces:
Using a function rather than an alias allows arguments to be substituted at an arbitrary location, rather than only at the end of the string an alias defines. That's critical, in this case, because we want the arguments to be evaluated inside a subshell, as described below:
Using parentheses, instead of curly brackets, causes this function's body to run in a subshell; that ensures that the cd only takes effect for that subshell, and doesn't impact the parent.
Using && to connect the cd and the following command ensures that we abort correctly if the cd fails.
Using exec tells the subshell to replace itself in memory with the emulator, rather than having an extra shell sitting around that does nothing but wait for the emulator to exit.
"$#" expands to the list of arguments passed to the function.
Add an alias (emu) to your ~/.bashrc:
alias emu="(cd /dir/with/emulator; ./emulator)"
I would look into command line aliases.
If you are on Linux and using bash, you can edit ~/.bash_profile and add a line such as:
alias emulator='./path-to-binary/emulator'
On Windows it's a little different, but a similar thing can be done there as well.

Why does this script work in the current directory but fail when placed in the path?

I wish to replace my failing memory with a very small shell script.
#!/bin/sh
if ! [ -a $1.sav ]; then
mv $1 $1.sav
cp $1.sav $1
fi
nano $1
is intended to save the original version of a script. If the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp).
This works as intended if, after I make it executable with chmod I launch it from within the directory where I am editing, e.g. with
./safe.sh filename
However, when I move it into /usr/bin and then I try to run it in a different directory (without the leading ./) it fails with:
-bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy
My question is, when I move this script into the path (verified by echo $PATH) why does it then fail?
D'oh? Inquiring minds want to know how to make this work.
The . command is not normally used to run standalone scripts, and that seems to be what is confusing you. . is more typically used interactively to add new bindings to your environment (e.g. defining shell functions). It is also used to similar effect within scripts (e.g. to load a script "library").
Once you mark the script executable (per the comments on your question), you should be able to run it equally well from the current directory (e.g. ./safe.sh filename) or from wherever it is in the path (e.g. safe.sh filename).
You may want to remove .sh from the name, to fit with the usual conventions of command names.
BTW: watch the capitalization in the script; writing If instead of if, for example, would also make it fail.
The error bad interpreter: Text file busy occurs if the script is open for writing (see this SE question and this SF question). Make sure you don't have it open (e.g. in an editor) when attempting to run it.
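If you're not sure what still has the script open, something like this can point at the culprit (assumes lsof is installed; the path is the one from the error message):
$ lsof /usr/bin/safe.sh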

saving bash functions

First of all, I'd like to be corrected on my vocabulary. I assume Terminal provides an environment for bash, which is a type/version of shell. Is this correct?
I'm trying to utilize bash and shell more in my development processes to speed up deployment. However, I'm only beginning to understand the basics outside the commands I've learned from growing up.
I've started making functions in Terminal to help automate some of my more repetitious tasks. This is all fine and dandy until I exit Terminal.
I assume that shell runs in an instance, so that instance is lost when I exit terminal. I notice that shell leaves a .bash_history, also accessible using 'history', where I can see my old functions from old sessions. However, of course, they no longer appear to execute.
I recognize that I could create a shell script, but this introduces compiling issues as well as having to pay more attention to where the scripts are stored relatively.
Question: When I create bash functions using command(){}, they do not persist after closing the terminal. Can I make them persist, so that in new terminal sessions I have access to my old functions without resorting to shell scripts?
Edit: I also wanted to mention that I tried extensively to find an answer to this using traditional means, but search terms like "save" and "exit" usually lead to other aspects of the shell.
Your first statement is correct: a terminal instance runs a type of shell (bash, sh, csh, ...).
You can add them to your ~/.bashrc file or add the saved script path (no compiling needed) to your PATH variable.
You could also just copy scripts to /usr/local/bin for quick access anytime. You would have no need to keep track of where they are relative to your current directory. This is quite handy and makes your scripts available to other users (or not, if you set the permissions accordingly).
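As a minimal sketch of the PATH approach, assuming you keep your scripts in a directory such as ~/bin:
# create the directory and put it on PATH for future sessions
mkdir -p ~/bin
echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc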
See the Using History Interactively section of the Bash Reference Manual for ways you can execute commands from your history.
For example, typing !?foo and pressing Enter will execute the most recent command containing "foo". I like to have shopt -s histverify histreedit in my ~/.bashrc so I can edit and confirm the command, if necessary, rather than executing it immediately.
Also see the Commands For Manipulating The History section for keystrokes you can use to search for history entries to recall and execute.
For example, pressing Ctrl-r and typing foo will recall the most recent command containing "foo". You can press Ctrl-r additional times to continue searching in reverse for additional matching commands. Press Enter when you're ready to execute the one currently shown or Ctrl-g to abort the search.
If you add stty -ixon to your ~/.bashrc, then you can use Ctrl-s to search forward through history after you've begun searching backward.
Of course, you can save your functions by editing ~/.bashrc and adding them to it. I prefer to keep my functions in a file I created called ~/bin/functions and then add a line to ~/.bashrc to source that file. The line looks like . ~/bin/functions.
I save larger scripts in /usr/local/bin or ~/bin. The former should already be in your path and you can add the latter to your path by editing ~/.bashrc.
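For example, the sourcing line can carry a guard so ~/.bashrc doesn't complain if the file is missing (a minor variation on the line mentioned above):
[ -f ~/bin/functions ] && . ~/bin/functions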
After you type in the functions on the command line you can recall them using command-line editing (as @Dennis Williamson mentioned), but there is an easier method: declare -f. This command lists all current functions, and you can redirect them to a file:
/home/user1> function myfunc {
> echo "Hollow world!"
> }
/home/user1> declare -f > myfuncs
/home/user1> more myfuncs
myfunc ()
{
echo "Hollow world"
}
Note how Bash changes the function syntax from Korn shell to Bourne shell! Fortunately there is no difference between the two in Bash (unlike ksh93).
When you need to load the function it is a simple matter:
/home/user1> source myfuncs
/home/user1> myfunc
Hollow world!
You don't need execute permissions by the way, only read.
You could (as others have said) add this to one of your start-up files, like .bashrc.
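If you only want to save a single function rather than all of them, declare -f also accepts a name (using the example function from above):
/home/user1> declare -f myfunc > myfuncs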
You can create a simple library which would contain all your functions. This would basically solve your problem of storing functions.
yeshwantnaik$ cat my_functions.lib
function do_something() {
    : # your code goes here
}
Save it wherever you want. Preferably to your $HOME directory.
Load the library. Don't miss the dot in the beginning.
yeshwantnaik$ . $HOME/my_functions.lib
Now you can run your function directly.
yeshwantnaik$ do_something
Let Linux do stuff for you
You can skip the step of manually loading the library by letting Linux do it for you automatically when you log in.
Run the commands below:
echo ". $HOME/my_functions.lib" >> ~/.bashrc
echo ". $HOME/my_functions.lib" >> ~/.bash_profile
source ~/.bashrc
source ~/.bash_profile
That's it. You can directly execute your function from the command line without doing anything.

Directory based environment variable scope - how to implement?

I have a set of tools which I need to pass parameters depending on the project I'm working on. I'd like to be able to automatically set a couple of environment variables based on the current directory. So when I switched between directories, my commonly used env vars would also change. Example:
Let's say the current directory is foo; thus if I do:
~/foo$ ./myscript --var1=$VAR1
VAR1 would have some foo-based value.
Then, let's say I switch to the bar directory. If I do:
~/bar$ ./myscript --var1=$VAR1
VAR1 should now have some bar-based value.
Is that possible? How?
The ondir program lets you specify actions to run when you enter and leave directories in a terminal.
There is direnv, which helps you do this much more easily and in an elegant way. Just define a .envrc file in your project directory with all the env variables needed, and direnv will source it once you cd into that folder.
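A minimal sketch, assuming direnv is installed and hooked into your shell, using the directories from the question:
# ~/foo/.envrc
export VAR1=foo-value

# approve it once; afterwards entering ~/foo loads it and leaving unloads it:
$ cd ~/foo
$ direnv allow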
I've written another implementation of this, which is somewhat similar to ondir. I didn't actually know about ondir when I started working on it. There are some key differences that may be useful, however.
smartcd is written entirely in shell, and is fully compatible with bash and zsh, even the more esoteric options
smartcd will run scripts for every directory along the path between the directory you're leaving and the one you're entering (up to their common ancestor and back down), not just for the two endpoints. This means you can have a ~/foo script that will execute whether you "cd ~/foo" or "cd ~/foo/bar"
it has "variable stashing" which is a more automatic way of dealing with your environment variables, whereas ondir requires you to explicitly and manually remove and/or reset your variables
smartcd can work with "autocd" turned on by hooking your prompt command (PROMPT_COMMAND in bash, precmd in zsh)
You can find smartcd at https://github.com/cxreg/smartcd
This is not something that is directly supported with the built-in features of bash or any other common shell. However, you can create your own "cd" command that will do whatever you want. For example, you could alias cd to do the cd and then run a special script (eg: ~/bin/oncd). That script could look up the new directory in a database and run some commands, or see if there's a special file (eg: .env) in the directory and load it, etc.
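A minimal sketch of the second idea as a shell function in ~/.bashrc (the .env filename is just a convention):
cd() {
    builtin cd "$@" || return
    # if the directory we just entered has a .env file, load it into this shell
    if [ -f ./.env ]; then
        . ./.env
    fi
}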
I do this sort of thing a lot. I create several identically named batch files in directories where I need them that only set the variables and call the common script. I even have a batch file that creates the other small files.
This is not pretty, but you can use a combination of exported environment variables and the value of $PWD.
For example:
export VAR1=prefix
export prefix${HOME////_}_foo=42
export prefix${HOME////_}_bar=blah
Then myscript only needs to eval echo \${$VAR1${PWD////_}} to get at the directory-based value.
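To make the name mangling concrete, a sketch assuming HOME=/home/user and the foo directory from the question:
# ${HOME////_} turns /home/user into _home_user, so the exports above define
#   prefix_home_user_foo=42   and   prefix_home_user_bar=blah
# inside ~/foo, ${PWD////_} is _home_user_foo, so the eval expands ${prefix_home_user_foo}:
cd ~/foo
eval echo \${$VAR1${PWD////_}}    # prints 42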
How about wrapping your script with a function? (The function can be placed either in your bash profile/bashrc file or in the system-wide ones to make it available to all users.)
myscript () { case $PWD in
    /path/to/foo) /path/to/myscript --var1=$VAR1 ;;
    /path/to/bar) /path/to/myscript --var2=$VAR1 ;;
    *) ;;
    esac
}
Hence the function myscript will call the real "myscript" knowing what to do based on the current working directory.
Take this as an example:
hmontoliu@ulises:/tmp$ myscript () { case $PWD in /tmp) echo I\'m in tmp;; /var) echo I\'m in var;; *) echo I\'m neither in tmp nor in var; esac; }
hmontoliu@ulises:/tmp$ myscript
I'm in tmp
hmontoliu@ulises:/tmp$ cd /var
hmontoliu@ulises:/var$ myscript
I'm in var
hmontoliu@ulises:/var$ cd /etc
hmontoliu@ulises:/etc$ myscript
I'm neither in tmp nor in var

Resources