How does autojump change the current working directory? [closed] - bash

autojump works by maintaining a database of the directories you use the most from the command line. You can then cd to directories via shortcuts; e.g., to jump to a directory whose path contains "foo", I can just call j foo instead of cd /full/path/to/foo.
$ pwd
/some/directory
$ j foo
/full/path/to/foo
I'm trying to understand how autojump is able to change the directory by calling cd inside its bash script. As far as I know, such a script is executed in a separate shell, so a cd inside it should not affect the calling shell. Here's the part in autojump's code that calls cd.
For example, running the following script doesn't change the directory outside of the script, so how is autojump able to achieve it?
#!/bin/bash
# myscript.sh
cd path/to/foo
$ pwd
/some/directory
$ ./myscript.sh
$ pwd
/some/directory

How do zoxide and autojump change the current working directory?
Normally, with cd.
As far as I know, such a script is executed in a separate shell.
So they are not scripts; they are shell functions, defined in (and therefore run by) your interactive shell.
How is it able to call the function from the terminal in a way that changes the directory in the terminal?
Like so:
$ cat mycdfunction.sh
mycdfunction() {
    echo "jumping to my place"
    cd /tmp/myplace
}
$ . mycdfunction.sh
$ mycdfunction
jumping to my place
$ pwd
/tmp/myplace
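For completeness, here is a simplified sketch of how tools like autojump and zoxide typically hook in (this is not their exact code; the function name and call are illustrative). A small function is sourced into your interactive shell, usually by a line added to ~/.bashrc; the helper binary only prints the best-matching directory, and the function, running in the current shell, performs the cd:
j() {
    local dest
    # the binary resolves the query against its database and prints the match
    dest="$(autojump "$@")" || return 1
    # the cd happens here, inside the current shell, so it persists
    [ -d "$dest" ] && cd "$dest" && pwd
}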

Related

bash/zsh - current directory (dot `.` ) command with file argument in rc file [closed]

I'm having difficulty finding information on what the following command is called or what it does. This command is meant to be called from .rc files.
What is this syntax called in bash and zsh? I know it works in both, as the shell file is meant for both. What does it do when placed inside a file? What does it do in a file, in a function, and on the command line?
And what is the equivalent in fish (Friendly Interactive SHell)? Will the same bash/zsh shell file work with that equivalent command in config.fish?
> . /path/to/sh-file.sh # what does this do? nothing happens on cli when called.
> cat /path/to/sh-file.sh
[ -d "${_DATA:-$HOME/.kdcd}" ] && { echo ... }
_kd () {
....
}
This . does not refer to the current working directory as it usually does elsewhere. Rather, it is the short (and, it seems, the original) form of the source command, which executes the specified file in the current shell (even if the file is not executable), rather than in a child shell, which is what happens when you run it with ./file or bash file. Functions defined, variables assigned, and modifications to the environment made in a child shell are unknown to the parent shell, so if you want a file to do something to the current shell, you probably want to source it.
$ type .
. is a shell builtin
$ help .
.: . filename [arguments]
Execute commands from a file in the current shell.
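A quick way to see the difference this makes (the file name here is just an illustration): a variable assigned in an executed script disappears with the child shell, while the same file sourced with . changes the current shell.
$ cat setvar.sh
MYVAR=hello
$ bash setvar.sh; echo "${MYVAR:-unset}"
unset
$ . setvar.sh; echo "${MYVAR:-unset}"
hello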
A common use of the source or . command is to be able to use aliases or functions defined in a particular file immediately. For example, after making changes to a shell rc file, one may run . .bashrc (or . .zshrc, etc.) to pick up the new configuration right away.
It is also common for one file to source another, as in your example. Since a shell's rc file is executed automatically in every interactive shell of that type when it starts, you can make the configuration in other files available to every interactive shell you run by having the rc file source those files.
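A minimal sketch of that pattern (the file names are just examples):
# in ~/.bashrc: pull per-topic configuration into every interactive shell
for f in ~/.bash_aliases ~/.bash_functions; do
    [ -r "$f" ] && . "$f"
done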
Both . and source work the same way in the fish shell, but, of course, the syntax of the functions etc. in the file you source may be incompatible with shells other than the one they were written for.

mkdir: omit leading pathname when creating multiple directories? [duplicate]

I'm sure this question has been asked elsewhere but I can't seem to phrase it in a way that returns a useful Google result.
I am creating a dozen directories that all have the same root path, and I don't want to have to cd into it to be able to make these directories. The current command looks something like this, which is awful and repetitive:
$ mkdir frontend/app/components/Home frontend/app/components/Profile \
frontend/app/components/Post frontend/app/components/Comment
An ideal syntax would be something along the lines of:
$ mkdir frontend/app/components/{Home, Profile, Post, Comment}
Is there something like this already that I just haven't found? I don't want to have to run a for loop just to make a few directories.
Your wish is granted :-).
mkdir doesn't know this syntax and doesn't have to; shells like bash or zsh understand the {...,...,...} brace syntax.
Just remove the spaces from your "along the lines of" example and it works:
mkdir frontend/app/components/{Home,Profile,Post,Comment}
The shell will expand it to
mkdir frontend/app/components/Home frontend/app/components/Profile frontend/app/components/Post frontend/app/components/Comment
Since it is done by the shell, it works with any command.
Remove the spaces around the commas and use the -p option, so that missing parent directories are created as well:
mkdir -p frontend/app/components/{Home,Profile,Post,Comment}
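Because the expansion is done by the shell before mkdir runs, you can preview it with echo, and you can nest braces for deeper trees (the styles directory below is just an illustration):
$ echo mkdir -p frontend/app/components/{Home,Profile,Post,Comment}
mkdir -p frontend/app/components/Home frontend/app/components/Profile frontend/app/components/Post frontend/app/components/Comment
$ mkdir -p frontend/app/{components/{Home,Profile,Post,Comment},styles}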

What is the rationale for Bash ignoring errors when executing a script? [closed]

Bash -- and probably other shells too -- ignores any errors by default and just continues with the next command. I wonder why the shell was designed that way. After all, one would normally want to abort a script in which every command needs the state produced by the previous one.
I don't know the exact reasons; probably something like this:
Every check takes extra time. For better performance there is no additional check every time, and no popup asking "Are you sure? [Yes] [No] [Ignore]".
You are worried about code like
cd /
ls
cd $HOME/temp;
rm -rf *
Terrible when you do not have a temp dir (a script made by a normal user and executed by root)!
Anybody who has root access must be aware of the responsibility and dangers (s)he has. That is why you shouldn't execute scripts you don't trust (and shouldn't have the current dir in your PATH). The person who wrote that script is at fault as well. Without checks on $?, the script should be changed into something like
cd / && ls
cd "${HOME}"/temp && rm -rf *
or
cd / && ls
if [ ${#HOME} -gt 1 ]; then
    rm -rf "${HOME}"/temp/*
fi
Are these examples not proof that exit-on-error would be better? I do not think so. Even if the shell exited on errors, things could still go terribly wrong when you do not check everything:
cd /
ls
rm -rf /$HOME/temp/*
When HOME is set to / or to a string containing a space (or ending with a space), that last command might still run and do real damage. Always triple-check your scripts; you are working with power.
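For what it's worth, exit-on-error behaviour is available as an opt-in. A minimal sketch of the usual settings (the temp path is only an example):
#!/bin/bash
set -e          # abort as soon as a command fails
set -u          # expanding an unset variable (e.g. a missing HOME) is an error
set -o pipefail # a pipeline fails if any command in it fails

cd "$HOME/temp" # if this cd fails, the script stops here instead of reaching rm
rm -rf ./*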

Folder shortcut in bash [closed]

I have a git repo in a folder that is nested inside multiple other folders. When I want to use the command line for git, I have to do cd /Desktop/.../.../.../.../repo_folder to run git commands. Is there any way that I can set a shortcut to a folder, or get to a folder faster? Having to type a 70-character path is not ideal.
Thanks in advance!
You can use the cdable_vars option of bash that allows you to call cd with a variable name. If the argument passed to cd is not a directory, then it is assumed to be a variable name and the value of the variable is used as the destination directory.
Example of use: if you put this in your ~/.bashrc:
alias show='cat ~/.dirs'
# save [name]: remember the current directory under "name" (default: its basename)
save () {
    here=$(pwd)
    if (( $# == 0 )); then
        name=$(basename "$here")
    elif (( $# > 1 )); then
        echo "usage: save [<name>]"
        return 1
    else
        name=$1
    fi
    sed -i -e "/^$name=/d" ~/.dirs
    echo "$name=\"$here\"" >> ~/.dirs
    source ~/.dirs
}
source ~/.dirs
shopt -s cdable_vars
Then, when your current directory is one that you want to remember, just type:
save my_dir
and the next time you want to go there, just type:
cd my_dir
As long as there is no my_dir directory in the place where you type it, it will bring you where you want. The save argument is optional; if you do not provide it, the defined shorthand will be the basename of the current directory:
cd /Desktop/.../.../.../.../repo_folder
save
will define repo_folder as the shorthand for this directory.
The ~/.dirs file contains the variable definitions for your favourite directories. You can edit it by hand if you wish. These definitions are evaluated every time you launch a new bash shell. Beware that they may overwrite other variables that you also need; if that is a problem, I advise you to choose unique shorthands (my_dir_repo_folder instead of repo_folder). And remember the second pitfall: when you type
cd foo
you may go either to the local sub-directory foo, if there is one, or to the directory for which you defined the foo shorthand. And there is a third one: if you redefine a shorthand, the previous one is overwritten. So this trick is handy but somewhat dangerous, because when you cd you no longer know for sure that you are really where you want to be. Customizing your prompt to show the current path may be a good idea.
The show alias is just a way to list all currently defined shorthands.
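If you only need a handful of fixed shortcuts, a stripped-down version of the same idea works without the save helper (the path below is a placeholder):
shopt -s cdable_vars               # let cd fall back to variable names
repo=~/Desktop/a/b/c/d/repo_folder # placeholder: the long path you keep typing
cd repo                            # goes to $repo when there is no ./repo directory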
Just set "start in" param in your bash shortcut properties and every time you run bash it will open your repo folder :)

tab-expansion and "./" bash shell [closed]

Maybe someone here would be able to help me out. I have installed Ubuntu 12.04 LTS (Kubuntu) on two machines. The .bashrc and .bash_profile files are identical, as the file structure on each machine is the same.
On machine 1: I run bash scripts within a terminal window with the simple: ./scriptname.sh
On machine 2: I cannot do this and must use: sh scriptname.sh
Nor can I use ./ and tab-complete the script filename.
All executable bits are set correctly, all files and folders have the correct permissions. In the header of the scripts the shebang is set correctly.
Any ideas why this would be occurring?
If I try to execute the script with ./file_motion_grab.sh:
bash: ./file_motion_grab.sh: Permission denied
When I try ls -l, I get:
-rwxrwxrwx 1 adelie adelie 351 Nov 4 20:32 file_motion_grab.sh
Output of getfacl is:
# file: file_motion_grab.sh
# owner: adelie
# group: adelie
user::rwx
group::rwx
other::rwx
More generally, any new script on the second machine must be invoked with sh scriptname.sh. Something is probably wrong in the .bash files, but I am not sure where to look.
I would recommend trying ls -al to check the permissions on the file and the directory. Also, try getfacl file.sh, because sometimes there are ACL permissions that override the normal Unix permission bits.
Then I would try head -n 1 file.sh | xxd, to look at the first line, and make sure the shebang is there properly as the first two characters of the file. Sometimes, hidden characters, like a Unicode BOM, can cause it not to be interpreted properly.
Then I would check the permissions on the shell itself. ls -l /bin/bash and getfacl /bin/bash. I would also check to see if this happens with other interpreters; can you use #!/bin/sh for a script? #!/bin/python (or Perl, or Ruby, or something of the sort)? Distinguishing whether this happens only for /bin/bash or for other shells would be helpful.
Also, take a look at ls /proc/sys/fs/binfmt_misc to see if you have any binary formats configured that might interfere with normal interpretation of a shell script.
Try testing from another account as well (if you can). Is the problem unique to your account? I would also try rebooting, in case there is just some transient corruption that is causing a problem (again, if you can).
(answer was originally a series of comments)
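Collected in one place, the checks suggested above look roughly like this (using the asker's file name):
ls -al .                              # permissions on the script and on its directory
getfacl file_motion_grab.sh           # ACL entries that may override the mode bits
head -n 1 file_motion_grab.sh | xxd   # is the shebang really the first bytes (no hidden BOM)?
ls -l /bin/bash; getfacl /bin/bash    # permissions on the interpreter itself
ls /proc/sys/fs/binfmt_misc           # extra binary formats that could interfere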
