Searching for file existence at any depth in bash

I need to check if a file exists in a directory using bash. I have tried the method below, but it needs the complete path as input.
if [ -e /*/my_file.txt ] ;
then
echo "file found"
else
echo "not found"
fi
Is there any way I can check whether the file exists at any depth dynamically?
NOTE: I don't want to use find, as it takes a lot of time to execute.

If you are using bash 4, you can write patterns that recursively descend a hierarchy:
shopt -s globstar
found=0
for f in /**/myfile.txt; do
    if [[ -e $f ]]; then
        found=1
        echo "File found"
        break
    fi
done
if [[ $found -ne 1 ]]; then
    echo "File not found"
fi
Using find:
found=$(find / -name myfile.txt)
if [[ -n $found ]]; then
    echo "File found"
else
    echo "File not found"
fi
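If what makes find feel slow is that it keeps scanning after the first hit, GNU find can stop at the first match (a small sketch; -print -quit is a GNU extension, so this assumes GNU find):
found=$(find / -name myfile.txt -print -quit 2>/dev/null)
if [[ -n $found ]]; then
    echo "File found: $found"
else
    echo "File not found"
fi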

If speed is really your concern, file globbing like ls * */* */*/* will not help you much, and it has its limits: you will eventually hit the "Argument list too long" error. find is a useful and very flexible tool for finding things, but like globbing it has to scan the directory tree on every invocation. For occasional searches, such as maintenance, this is perfectly acceptable. But if it is part of a processing pipeline, the speed is not acceptable; for that you need an optimised database.
The simplistic way
Almost every UNIX I know ships with locate.
If it is preinstalled, you can search like this:
$ locate -b '\my_file.txt'
The backslash in front of my_file.txt is intentional; it switches off wildcard matching. Adding -i gives a case-insensitive search.
If the command is not available, it should be installable from your OS repository. For Debian/Ubuntu: apt install locate. For the first initialisation, run /etc/cron.daily/locate as root or with sudo.
The database is updated on a daily basis. For some applications this interval is probably too long; by moving the cron job from daily to, say, every 3 hours, you get more recent results.
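If you need fresher results on demand, you can also refresh the database manually (the exact package providing updatedb varies, e.g. mlocate or plocate):
$ sudo updatedb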
The realtime way ...
This is a bit out of the scope of this answer, but you would need some kind of daemon that watches kernel inotify events for directory changes. These in turn would be reflected in a database that can be queried through some API, like Spotlight on macOS or Tracker on GNOME.
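As a rough illustration of the idea (not a production indexer), you could maintain your own plain-text index with inotifywait from inotify-tools and query it with grep instead of walking the tree; the watched directory and index file below are made up for the sketch:
#!/bin/bash
# Hypothetical real-time index: append newly created paths to a flat file.
INDEX=/var/tmp/file-index.txt
WATCHED=/data
# Run the watcher in the background; it records every created or moved-in path.
inotifywait -mrq -e create -e moved_to --format '%w%f' "$WATCHED" >> "$INDEX" &
# Later, a lookup is just a grep against the index instead of a filesystem scan:
grep '/my_file.txt$' "$INDEX"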

find is the proper solution.
However, you can use bash's expansion feature:
if ls */* | grep -q my_file.txt
then echo file found
else echo file not found
fi
Note that the above solution will not find my_file.txt if it is at the top level.
If my_file.txt is part of a directory name, you might get a wrong result.
If there are many (thousands of) directories and many files, the */* expansion might exceed the bash limit ("Argument list too long").
You can use ls * */* */*/* | grep, subject to the limit stated above.

Related

move downloaded files to particular directories - inotify for dummies?

I wonder whether nobody else wants to move files downloaded from the web into file-specific directories? Bank account statements go to various "Bank account statements" folders, depending on which account it is, invoices go to their specific folders, etc.
Assuming they all have a characteristic beginning to their file names, it should not be so difficult to automate. There is a Firefox plugin for that, but I could never make it work. So my idea was to use inotify for the download folder (or another suitable tool), but I never found really dummy-proof instructions on how to achieve that.
Would someone be so kind and provide all the steps to set up such a service?
Thanks a lot in advance,
Wolf
After quite some trial and error, I came to a working solution. This script sorts downloaded files matching certain patterns into the defined target directories:
#!/bin/bash
cd <my-download-directory> || exit 1
inotifywait -mq -e moved_to --format %f <my-download-directory> | while read -r FILE
do
    echo "name of new file: $FILE"
    if [[ $FILE =~ regex-search-pattern1 ]]; then mv "$FILE" <target-directory1>; fi
    if [[ $FILE =~ regex-search-pattern2 ]]; then mv "$FILE" <target-directory2>; fi
    if [[ $FILE =~ regex-search-pattern3 ]]; then mv "$FILE" <target-directory3>; fi
    # ... more if desired
done
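To keep the watcher running after you close the terminal, you can start it in the background, for example (the script name here is just a placeholder):
nohup ./sort-downloads.sh >/dev/null 2>&1 &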
In general such a feature is quite useful - I'd really like to have a GUI for it.
Cheers,
Wolf

How to make folders for individual files within a directory via bash script?

So I've got a movie collection that's dumped into a single folder (I know, bad practice in retrospect.) I want to organize things a bit so I can use Radarr to grab all the appropriate metadata, but I need all the individual files in their own folders. I created the script below to try and automate the process a bit, but I get the following error.
Script
#! /bin/bash
for f in /the/path/to/files/* ;
do
[[ -d $f ]] && continue
mkdir "${f%.*}"
mv "$f" "${f%.*}"
done
EDIT
So I've now run the script through Shellcheck.net per the suggestion of Benjamin W. It doesn't throw any errors according to the site, though I still get the same errors when I try running the command.
EDIT 2*
No errors now, but the script does nothing when executed.
Assignments are evaluated only once, and not whenever the variable being assigned to is used, which I think is what your script assumes.
You could use a loop like this:
for f in /path/to/all/the/movie/files/*; do
mkdir "${f%.*}"
mv "$f" "${f%.*}"
done
This uses parameter expansion instead of cut to get rid of the file extension.
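If the folder also contains directories you have already created (as guarded against in the question's own script), a slightly more defensive sketch of the same loop, assuming the same path as above, could be:
for f in /path/to/all/the/movie/files/*; do
    [[ -d $f ]] && continue          # skip anything that is already a folder
    dir="${f%.*}"                    # strip the extension to get the folder name
    mkdir -p "$dir" && mv "$f" "$dir"/
done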

Simple BASH script needed: moving and renaming files

Decades ago I was a programmer (IBM assembly, Fortran, COBOL, MS DOS scripting, a bit of Visual Basic.) Thus I'm familiar with the generalities of IF-Then-Else, For loops, etc.
However, I'm now needing to delve into Bash for my current job, and I'm having a difficult time with syntax and appropriate commands for what I need.
I'm in need of a trivial (concept-wise) script, which will:
Determine if a specific folder (e.g., ~/Desktop/Archive Folder) exists on the user Desktop
If not, create it ("Archive")
Move all files/folders on the desktop - except for ~/Desktop/Archive - into "Archive Folder", AND append a timestamp to the end of the filenames being moved.
It is this very last piece - the timestamp addition - which is holding me up.
I'm hoping a clear and simple solution can be sent my way. Here is what I've come up with so far:
#!/bin/bash
shopt -s extglob
FOLDERARCH="Archive Folder"
cd ~/Desktop
if [ ! -d $"FOLDERARCH" ]; then
mkdir "$FOLDERARCH"
echo "$FOLDERARCH did not exist, was created"
fi
mv !(-d "$FOLDERARCH") "$FOLDERARCH"
One final note: the script above works (without the timestamp piece) yet also ends with the message
mv: rename Archive Folder to Folder/Archive Folder: Invalid argument
Why?
Any help will be deeply, deeply appreciated. Please assume I know essentially zilch about the BASH environment, cmds and their arguments - this first request for assistance marks my first step into the journey of becoming at least proficient.
Update
First: much gratitude for the replies I've gotten; they've been very useful.
I've now got what is essentially a working version, but with some oddities I do not understand and, after hours of attempted research, have yet to understand/solve.
I'm hoping for some insight; I feel I'm on the verge of making some real headway in comprehending, but these anomalies are hindering my progress. Here's my (working, with "issues") code so far:
shopt -s extglob
FOLDERARCH="Archives"
NEWARCH=$(date +%F_%T)
cd ~/Desktop
if [ ! -d $"FOLDERARCH" ]; then
mkdir "$FOLDERARCH"
echo "$FOLDERARCH did not exist, was created"
fi
mkdir "$FOLDERARCH/$NEWARCH"
mv !(-d "$FOLDERARCH") $FOLDERARCH/$NEWARCH
This in fact largely accomplishes my goal, but:
In the case where the desktop Archives folder already exists, I'm expecting the if-then construct to simply fall through (with no echo message) to the following mkdir command, but instead the "Archives did not exist, was created" message is output anyway (erroneously). Any answers as to why?
The script completes with the following msg:
mv: rename Archives to Archives/2016-01-10_00:06:54/Archives: Invalid argument
I don't understand this at all; what should be happening is that all files/folders on the desktop EXCEPT the /Desktop/Archives folder should be moved into a newly created "subfolder" of /Desktop/Archives, e.g., /Desktop/Archives/2016-01-10_00:06:54. In fact, the move accomplishes my goal, but that the message arises makes no sense to me. What is the invalid argument?
One last note: at this point in my newbie-status I'm looking for code which is clear and easy to read, versus much more elegant/sophisticated one-line piped-command solutions. I look forward to working my way up to those in due time.
You have several options. One of the simplest is to loop over the directories below ~/Desktop and if they are not "$FOLDERARCH", move them to "$FOLDERARCH", e.g.:
for i in */; do
[ "$i" != "$FOLDERARCH"/ ] && mv "$i" "$FOLDERARCH"
done
I haven't run a test case, but something similar to the following should work.
#!/bin/bash
shopt -s extglob
FOLDERARCH="Archive Folder"
cd ~/Desktop || { printf "failed to change to '~/Desktop'\n"; exit 1; }
if [ ! -d "$FOLDERARCH" ]; then
if mkdir "$FOLDERARCH"; then
echo "$FOLDERARCH did not exist, was created"
else
echo "error: failed to create '$FOLDERARCH'"
exit 1
fi
fi
for i in */; do
[ "$i" != "$FOLDERARCH"/ ] && mv "$i" "$FOLDERARCH"
done
I apologize, I forgot the datestamp portion. As pointed out in the comments, you can include the datestamp (set the format to your taste) with something similar to the following:
tstamp=$(date +%s)
for i in */; do
[ "$i" != "$FOLDERARCH"/ ] && mv "$i" "$FOLDERARCH/${i}_${tstamp}"
done
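If the goal is the updated layout from the question (everything on the Desktop, files and folders alike, moved into a timestamped subfolder of Archives), a minimal sketch along those lines, which also avoids the "Invalid argument" message by skipping Archives explicitly, could be:
#!/bin/bash
FOLDERARCH="Archives"
NEWARCH=$(date +%F_%T)
cd ~/Desktop || exit 1
mkdir -p "$FOLDERARCH/$NEWARCH" || exit 1
for i in *; do
    [ "$i" = "$FOLDERARCH" ] && continue    # never try to move the archive into itself
    mv "$i" "$FOLDERARCH/$NEWARCH/"
done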

Getting relative paths in BASH

I already searched for this, but I guess there was no great demand for working with paths.
So I'm trying to write a bash script to convert my music collection using tta and cue files.
My directory structure is as following: /Volumes/External/Music/Just/Some/Dirs/Album.tta for the tta files and /Volumes/External/Cuesheets/Just/Some/Dirs/Album.cue for cue sheets.
My current approach is setting /Volumes/External as "root_dir" and get the relative path of the album.tta file to $ROOT_DIR/Music (in this case this would be Just/Some/Dirs/Album.tta), then add this result to $ROOT_DIR/Cuesheets and change the suffix from .tta to .cue.
My current problem is that dirname returns paths as they are, which means /Volumes/External/Music/Just/Some/Dirs does not get converted to ./Just/Some/Dirs/ when my current folder is $ROOT_DIR/Music and the absolute path was given.
Add://Here is the script if anybody has similar problems:
#!/bin/bash
ROOT_DIR=/Volumes/External
BASE="$1"
if [ ! -f "$BASE" ]
then
echo "Not a file"
exit 1
fi
if [ -n "$2" ]
then
OUTPUT_DIR="$HOME/tmp"
else
OUTPUT_DIR="$2"
fi
mkfdir -p "$OUTPUT_DIR" || exit 1
BASE=${BASE#"$ROOT_DIR/Music/"}
BASE=${BASE%.*}
TTA_FILE="$ROOT_DIR/Music/$BASE.tta"
CUE_FILE="$ROOT_DIR/Cuesheets/$BASE.cue"
shntool split -f "${CUE_FILE}" -o aiff -t "%n %t" -d "${OUTPUT_DIR}" "${TTA_FILE}"
exit 0
If your Cuesheets dir is always in the same directory as your Music, you can just remove root_dir from the path, and what is left is the relative path. If you have the path to your Album.tta in album_path (album_path=/Volumes/External/Music/Just/Some/Dirs/Album.tta) and your root_dir set (root_dir=/Volumes/External), just do ${album_path#$root_dir}. This trims root_dir from the front of album_path, leaving /Music/Just/Some/Dirs/Album.tta; strip the Music/ part as well (as the script above does with ${BASE#"$ROOT_DIR/Music/"}) if you want Just/Some/Dirs/Album.tta.
See bash docs for more information on bash string manipulation
EDIT:// Changed ${$album_path#$root_dir} to ${album_path#$root_dir}
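For the concrete case in the question, the whole path rewrite can be done with two such expansions; a short sketch reusing the names from the script above:
ROOT_DIR=/Volumes/External
album_path=/Volumes/External/Music/Just/Some/Dirs/Album.tta
rel=${album_path#"$ROOT_DIR/Music/"}        # -> Just/Some/Dirs/Album.tta
cue="$ROOT_DIR/Cuesheets/${rel%.tta}.cue"   # -> /Volumes/External/Cuesheets/Just/Some/Dirs/Album.cue
echo "$cue"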
Okay, so I've tackled this a couple of ways in the past. I don't recommend screwing with paths and pwd environment variables; I've seen some catastrophic events because of it.
Here's what I would do
CURRENTDIR=/Volumes/External/Music   # make sure you check its existence in your script
...
SEDVAL=$(echo "$CURRENTDIR" | sed 's/\//\\\//g')   # escape the slashes for use in sed
# run your loop, iterating through the files
for a in $(find ./ -name '*ogg'); do
    FILE=$(echo "$a" | sed "s/$SEDVAL/./g")   # strip the initial directory and replace it with .
    convert_file "$FILE"                      # whatever action is to be performed
done
If this is something you might do frequently I would actually just write a separate script just for this.

How to manage Long Paths in Bash?

I have a problem managing long paths. How can I quickly get to paths like
/Users/User/.../.../.../.../.../Dev/C/card.c
I tried an alias
alias cd C='cd /Users/User/.../.../.../.../.../Dev/C'
but I am unable to make aliases of two separate words. I have long lists of bash aliases and paths in CDPATH, so I am hesitant to add more. How can I manage long paths?
[Ideas for Replies]
The user litb's reply revealed some of my problems with management. Things such as "CTRL+R", "!-3:1:2:4:x" and "incremental search" are hard for me; they probably help in navigating long directories and, in that sense, in managing them.
Using symlinks is probably the best idea; but you can do it even more easily than dumping them all into your home directory.
As you mentioned, BASH has a feature called CDPATH which comes in really handy here.
Just make a hidden folder in your homedir (so it doesn't clutter your homedir too much):
$ mkdir ~/.paths
$ cd ~/.paths
$ ln -s /my/very/long/path/name/to/my/project project
$ ln -s /some/other/very/long/path/to/my/backups backups
$ echo 'CDPATH=~/.paths' >> ~/.bashrc
$ source ~/.bashrc
This creates a directory in your homedir called ".paths" which contains symlinks to all your long directory locations which you regularly use, then sets the CDPATH bash variable to that directory (in your .bashrc) and re-reads the .bashrc file.
Now, you can go to any of those paths from anywhere:
$ cd project
$ cd backups
Leaving you with a short CDPATH, no cluttering aliases, and more importantly: a really easy way to navigate to those long paths from other applications, such as UI applications, by just going into ~/.paths or adding that directory to your UI application's sidebar.
Probably the easiest all-round solution you can have.
Consider using symbolic links. I have a ~/work/ directory where I place symlinks to all my current projects.
You may also use shell variables:
c='/Users/User/.../.../.../.../.../Dev/C'
Then:
cd "$c"
Create symlinks in your home directory (or somewhere else of your choosing)
ln -s longDirectoryPath ~/MySymLinkName
See man ln for more details.
Probably the easiest solution is to use:
alias cdc='cd /Users/User/.../.../.../.../.../Dev/C'
alias cdbin='cd /Users/User/.../.../.../.../.../Dev/bin'
alias cdtst='cd /Users/User/.../.../.../.../.../Dev/tst'
if you're only really working on one project at a time. If you work on multiple projects, you could have another alias which changed the directories within those aliases above.
So, you'd use something like:
proj game17
cdc
make
proj roman_numerals
cdbin
rm -f *
proj game17 ; cdc
Since this is a useful thing to have, I decided to put together a series of scripts that can be used. They're all based around a configuration file that you place in your home directory, along with aliases that source the scripts. The file "~/.cdx_data" is of the form:
scrabble:top=~/dev/scrabble
scrabble:src=~/dev/scrabble/src
scrabble:bin=~/dev/scrabble/bin
sudoku:top=~/dev/scrabble
sudoku:src=~/dev/scrabble/src
sudoku:bin=~/dev/scrabble/bin
sudoku:data=~/dev/scrabble/data
and lists all the relevant projects (scrabble and sudoku in this case) and their directories (which may be different for each project, but have top, bin, src and data in this example).
The first action is to initialize stuff, so put:
. ~/.cdx_init
at the end of your .bash_profile and create the "~/.cdx_init" file as:
alias cdxl='. ~/.cdx_list'
alias projl='. ~/.cdx_projlist'
alias cdx='. ~/.cdx_goto'
alias proj='. ~/.cdx_proj'
This sets up the four aliases to source the files which I'll include below. Usage is:
cdxl - List all directories in current project.
projl - List all projects.
proj - Show current project.
proj <p> - Set current project to <p> (if allowed).
cdx - Show current project/directory and expected/actual real
directory, since they can get out of sync if you mix cd and cdx.
cdx . - Set actual real directory to expected directory (in other words,
get them back into sync).
cdx <d> - Set directory to <d> (if allowed).
The actual scripts follow. First, ".cdx_list", which just lists the allowed directories in the current project (the pipelines are split across lines for readability, with the pipe at the end of each line so they remain valid):
echo "Possible directories are:"
cat ~/.cdx_data |
grep "^${CDX_PROJ}:" |
sed -e 's/^.*://' -e 's/=.*$//' |
sort -u |
sed 's/^/ /'
Similarly, ".cdx_projlist" shows all the possible projects:
echo "Possible projects are:"
cat ~/.cdx_data |
grep ':' |
sed 's/:.*$//' |
sort -u |
sed 's/^/ /'
In the meaty scripts, ".cdx_proj" sets and/or shows the current project:
if [[ "$1" != "" ]] ; then
grep "^$1:" ~/.cdx_data >/dev/null 2>&1
if [[ $? != 0 ]] ; then
echo "No project name '$1'."
projl
else
export CDX_PROJ="$1"
fi
fi
echo "Current project is: [${CDX_PROJ}]"
and ".cdx_goto" is the same for directories within the project:
if [[ "$1" == "." ]] ; then
CDX_TMP="${CDX_DIR}"
else
CDX_TMP="$1"
fi
if [[ "${CDX_TMP}" != "" ]] ; then
grep "^${CDX_PROJ}:${CDX_TMP}=" ~/.cdx_data >/dev/null 2>&1
if [[ $? != 0 ]] ; then
echo "No directory name '${CDX_TMP}' for project '${CDX_PROJ}'."
cdxl
else
export CDX_DIR="${CDX_TMP}"
cd $(grep "^${CDX_PROJ}:${CDX_DIR}=" ~/.cdx_data |
    sed 's/^.*=//' |
    head -1 |
    sed "s:^~:$HOME:")
fi
fi
CDX_TMP=$(grep "^${CDX_PROJ}:${CDX_DIR}=" ~/.cdx_data |
    sed 's/^.*=//' |
    head -1 |
    sed "s:^~:$HOME:")
echo "Current project is: [${CDX_PROJ}]"
echo "Current directory is: [${CDX_DIR}]"
echo " [${CDX_TMP}]"
echo "Actual directory is: [${PWD}]"
unset CDX_TMP
It uses three environment variables which are reserved for its own use: "CDX_PROJ", "CDX_DIR" and "CDX_TMP". Other than those and the afore-mentioned files and aliases, there are no other resources used. It's the simplest, yet most adaptable solution I could come up with. Best of luck.
Revisiting. Today I received this link from a social bookmarking site, then I immediately remembered this question:
Navigation with bm
We keep a simple, plain text bookmarks file and use a tool called bm to do the look-ups. The tool can also be used to edit the bookmark index dynamically, as shown below, where we add the directories from the previous example to the index.
Once I have cd'ed into such a long directory, I have it in the history. Then I just type Ctrl-R for the "(reverse-i-search)" prompt and type a few characters, like Dev/C, that appear somewhere in the path, and it shows me the command I issued back then, so I can easily jump to it again.
That works pretty well in practice, because it won't find an entry if you haven't typed that path for quite some time, in which case doing extra work to make things easier probably wouldn't be worth it anyway. But it definitely will find it if you used it recently, which is exactly what I need.
In some way, it's a self-organizing cache for long commands & path names :)
You might want to consider using a script like this in your .bashrc. I've used it on a daily basis ever since I read that post. Pretty bloody useful.
The user jhs suggested the pushd and popd commands. I share here some of my bash aliases, which I found in the Unix Power Tools book. They are very cool when your directories get way too long :)
#Moving fast between directories
alias pd=pushd
alias pd2='pushd +2'
alias pd3='pushd +3'
alias pd4='pushd +4'
The command 'pushd +n' "rotates" the stack. The reverse command 'popd +n' deletes the nth entry of the stack. If your stack gets too long, use 'repeat n popd'. For example, if your stack is 12 directories long:
repeat 11 popd
When you want to see your stack, write 'dirs' (a plain 'pushd' swaps the top two entries and also prints the stack). For further reading, I recommend the book, pages 625-626.
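Note that repeat is a csh/zsh construct; in bash you can get the same effect with a plain loop. A small example, assuming you want to drop 11 entries:
for i in {1..11}; do popd > /dev/null; done
dirs -v    # show what is left of the stack, one numbered entry per line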
In your .bashrc, find the prompt definition
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\W\[\033[00m\]\$ '
and replace the \w with \W (I already have it changed here). This will show only the last directory you are working in; you can still get the full path by typing pwd.
There are fundamental well-known ideas, like creating aliases:
alias cdfoo="cd /long/path/to/foo"
and also "dropping pebbles"
export foo=/long/path/to/foo
and also making the above "project-based". I use 'ticket based' directories.
topdir=ticket_12345
alias cdfoo="cd home/me/sandbox/$topdir/long/path/to/foo"
export foo="/home/me/sandbox/$topdir/long/path/to/foo"
but beyond all this, sometimes it's just handy to jump back and forth to where you've been recently, using command-line menus. (pushd and popd are cumbersome, IMHO).
I use acd_func.sh (listed below). Once defined, you can do
cd --
to see a list of recent directories, with a numerical menu
cd -2
to go to the second-most recent directory.
Very easy to use, very handy.
Here's the code:
# Insert into .profile, .bash_profile or wherever
# acd_func 1.0.5, 10-nov-2004
# petar marinov, http:/geocities.com/h2428, this is public domain
cd_func ()
{
local x2 the_new_dir adir index
local -i cnt
if [[ $1 == "--" ]]; then
dirs -v
return 0
fi
the_new_dir=$1
[[ -z $1 ]] && the_new_dir=$HOME
if [[ ${the_new_dir:0:1} == '-' ]]; then
#
# Extract dir N from dirs
index=${the_new_dir:1}
[[ -z $index ]] && index=1
adir=$(dirs +$index)
[[ -z $adir ]] && return 1
the_new_dir=$adir
fi
#
# '~' has to be substituted by ${HOME}
[[ ${the_new_dir:0:1} == '~' ]] && the_new_dir="${HOME}${the_new_dir:1}"
#
# Now change to the new dir and add to the top of the stack
pushd "${the_new_dir}" > /dev/null
[[ $? -ne 0 ]] && return 1
the_new_dir=$(pwd)
#
# Trim down everything beyond 11th entry
popd -n +11 2>/dev/null 1>/dev/null
#
# Remove any other occurrence of this dir, skipping the top of the stack
for ((cnt=1; cnt <= 10; cnt++)); do
x2=$(dirs +${cnt} 2>/dev/null)
[[ $? -ne 0 ]] && return 0
[[ ${x2:0:1} == '~' ]] && x2="${HOME}${x2:1}"
if [[ "${x2}" == "${the_new_dir}" ]]; then
popd -n +$cnt 2>/dev/null 1>/dev/null
cnt=cnt-1
fi
done
return 0
}
alias cd=cd_func
if [[ $BASH_VERSION > "2.05a" ]]; then
# ctrl+w shows the menu
bind -x "\"\C-w\":cd_func -- ;"
fi
This might also be a useful function to put in your .bashrc; it moves up either a number of directories, or to a named directory, i.e. if you're in /a/b/c/d/ you can do up 3 or up a to end up in a.
I have no idea where I found this; if you know, please comment or add the attribution.
function up()
{
dir=""
if [ -z "$1" ]; then
dir=..
elif [[ $1 =~ ^[0-9]+$ ]]; then
x=0
while [ $x -lt ${1:-1} ]; do
dir=${dir}../
x=$(($x+1))
done
else
dir=${PWD%/$1/*}/$1
fi
cd "$dir";
}
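A quick usage example, assuming you start in /a/b/c/d:
up 2           # now in /a/b
up             # no argument moves up one level, to /a
cd /a/b/c/d
up a           # jumps straight back up to /a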
If you want to switch to zsh, this is very easy-- just use "alias -g" (global alias, i.e. an alias that works anywhere in the command, not just the first word).
# alias -g c=/my/super/long/dir/name
# cd c
# pwd
/my/super/long/dir/name
In bash, I think the closest thing you'll get to 'aliasing' style is to write a function:
function ccd {
case "$1" in
c) cd /blah/blah/blah/long/path/number/one ;;
foo) cd "/blah/blah/totally/different path" ;;
"multiword phrase") cd /tmp ;;
esac
}
This means using something other than "cd" as the command when you want a shortcut, but other than that, it's flexible; you can also add an "ls" to the function so that it always reminds you what's in the directory after you cd, etc.
(Note that to use a multiword argument as above, you need to quote it on the command line, like this:
ccd "multiword phrase"
so it's not really all that convenient. But it'll work if you need to.)
Based on Andrew Medico's suggestion, check out J
Look into pushd, which allows you to maintain a stack of directories which you can push onto, pop off of, or rearrange.
Check out autojmp or dirmarks
Management requires both fast creation and removal of directories. Create many directories at once:
mkdir -p user/new_dir/new_dir/.../new_dir
Recursively remove many directories (be very careful when you are in lower directories!):
rm -r dir/.../new_dir/
For further reading, the cheat sheet may help you:
http://www.scribd.com/doc/2082838/Bash-Command-Line-History-Cheat-Sheet
It contains some nuggets, but I find it rather hard to read. I cannot get commands, like Meta+>, working. They probably help you in navigating long directories.
I realize the question is pretty old, but none of the scripts out there satisfied me, so I wrote a new one.
Here's the requirements I had in mind:
1) Use only bash commands -- I intend to use this on many different unices -- Linux, cygwin, HP-UX, AIX, and a couple others, so I couldn't depend on grep being consistent. Luckily I do have bash everywhere I work.
2) Short code -- I wanted to be able to bind this to a key in GNU screen, and just hit that key to paste the script into the current bash shell I'm using, so that I don't have to setup bash profiles on every system I use. Anything super long would be annoying and take too much time to paste.
3) No file usage -- Don't want to be littering shared logons with random files.
4) Act just like "cd" in the normal case. Don't want to have to think about which command to use before I start typing.
5) Provide "up" usage like this answer: How to manage Long Paths in Bash?
6) Keep a list of recently used directories, and switch to the most recent.
Here's the script:
#Jump History - Isaiah Damron
function jfind() {
lp=${JNHIST//==${PWD}==/==}
lp=${lp%%${lp#==*$1*==}}
lp=${lp##${lp%==*$1*==*}}
lp=${lp//==/}
[[ -d "$lp" ]] && echo $lp && return 0
return 1;
}
function jadd() {
[[ -z "$JNHIST" ]] && export JNHIST='=='
[[ 3000 -lt ${#JNHIST} ]] && export JNHIST=${JNHIST:0:3000} && export JNHIST="${JNHIST%==*}=="
export JNHIST="==$PWD${JNHIST//==${PWD}==/==}"
}
function j() {
{ cd $* 2> /dev/null && jadd; } \
|| { cd ${PWD/$1*/}$1 2> /dev/null && jadd; } \
|| { jfind $1 \
&& { cd $( jfind $1 ) 2> /dev/null && jadd; } ; } \
|| cd $*
}
function jh() {
[[ -z "$1" ]] && echo -e ${JNHIST//==/\\n}
[[ -n "$1" ]] && jfind $1 && cd $(jfind $1) && jadd
}
Usage:
jh [parameters]
If called on its own, without any parameters, it outputs the current history list. If it has a parameter, then it searches through the history for the most recently used directory that contains the string $1, and cd's to it.
j {parameters}
Does cd parameters. If that fails, it checks whether any of the parent directories of $PWD match $1 and cd's to it. If that fails, it calls jh $1. If that also fails, it outputs the result of cd parameters.
Note: I used '==' as an internal separator. Hopefully you don't have any directories that contain a '==', but if you do you'll have to change around the script. Just :%s/==/whatever/g
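A hypothetical session showing how the pieces fit together (the directory names are invented for the example):
j /var/log/nginx     # behaves like a plain cd and records the directory
cd /tmp              # note: a plain cd does not update the history
j nginx              # no such directory here, so it falls back to the history search
jh                   # with no argument, prints the remembered directories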
