Display Only Files And Dotfiles In The Default "ls" Command Format? - bash

EDIT 2: For those with the same question, please be sure to read the comments on the accepted answer, as they go more in depth about how to properly use the command. I will post the final script once it is done, with my own explanation of what is going on.
I'm trying to write a simple bash function that clears the terminal screen, then performs multiple ls commands to show the different content types of the current directory in different colors.
I've managed to write something in Python which kind of does the same thing, but it has some big drawbacks (specifically, the behavior for special characters in filenames is "iffy" on Cygwin, and it's a pain to make it fit properly on the screen). I want it to look something like this:
*With non-dotfiles in green (I haven't included them in the screenshot for privacy reasons).
I've managed to list both hidden directories and visible directories, but the files are giving me trouble. Every method I've tried of getting all the files with ls uses the -l argument, or falls back to something like find or grep, all of which output the files in a single column (not what I want).
Is it possible to use the ls command (or something else) to output only the files or dotfiles of a directory while keeping ls's default output format?
EDIT 1: Here's what the script currently looks like (not much yet, but some people want to see it)
function test() {
clear;
GOLD=229;
RED=203;
BLUE=39;
WHITE=15;
GREEN=121;
# Colored legend.
tput sgr0;
tput bold; # bold is its own capability; setaf expects a color number.
# echo's "-n" suppresses the new line at the end.
tput setaf $GOLD;
echo -n "Working Directory ";
tput setaf $RED;
echo -n "Hidden Directories ";
tput setaf $BLUE;
echo -n "Visible Directories ";
tput setaf $WHITE;
echo -n "Hidden Files ";
tput setaf $GREEN;
echo "Visible Files";
pwd;
ls -d .*/; # List hidden directories.
ls -d */; # List visible directories.
# List dotfiles.
# List files.
}

To list only the dot files in the current directory:
find . -maxdepth 1 -type f -name ".*" | xargs ls --color=tty
To list only the other files:
find . -maxdepth 1 -type f -not -name ".*" | xargs ls --color=tty
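Note that piping find into xargs this way splits on whitespace, so filenames containing spaces will break it. A more robust sketch, assuming GNU find and xargs for -print0/-0:
# NUL-delimited handoff survives spaces (and even newlines) in filenames
find . -maxdepth 1 -type f -name ".*" -print0 | xargs -0 ls --color=tty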

For non-hidden files and directories, try
ls -d *
For hidden files and directories, use
ls -d .*
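One caveat: the .* glob is expanded by the shell and also matches the special entries . and .., so ls -d .* lists the current and parent directories too. A portable sketch that skips them:
# .[!.]* matches dotfiles except . and .. themselves;
# ..?* catches the rare names that start with two dots
ls -d .[!.]* ..?*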

Using the answer provided by matzeri, I managed to get a nice start. That answer is still accepted, because it answered the question that I asked. Here's the current version of the script, which I've turned into a function for easy access. I'm gonna go through it and explain some things that needed to be modified from the answer to give me the result I wanted, but didn't specifically ask for.
c() {
stty -echo; # Disable keyboard inputs during function.
local HAS_DOTDIRS=false;
local HAS_DIRS=false;
local HAS_DOTFILES=false;
local HAS_FILES=false;
local BLUE=39;
local GOLD=229;
local GREEN=121;
local PINK=170;
local RED=203;
local WHITE=15;
local CYG_CWD=$(pwd);
local WIN_CWD=$(cygpath -w "$(pwd)" | sed 's/\\/\//g');
local DOTDIRS=$(ls -Ad .*/); # Get directories with '.' prefix.
local DIRS=$(ls -Ad */); # Get normal directories.
local DOTFILES=$(find . -maxdepth 1 -type f -name ".*" | sed 's/^..//');
local FILES=$(find . -maxdepth 1 -type f -not -name ".*" | sed 's/^..//');
clear;
tput sgr0;
tput bold;
local LEGEND="$(tput setaf $GOLD)Cygwin Working Directory ";
LEGEND+="$(tput setaf $PINK)Windows Working Directory ";
if ! [ -z "$DOTDIRS" ] ; then
HAS_DOTDIRS=true
LEGEND+="$(tput setaf $RED)Hidden Directories ";
fi
if ! [ -z "$DIRS" ] ; then
HAS_DIRS=true
LEGEND+="$(tput setaf $BLUE)Visible Directories ";
fi
if ! [ -z "$DOTFILES" ] ; then
HAS_DOTFILES=true
LEGEND+="$(tput setaf $WHITE)Hidden Files ";
fi
if ! [ -z "$FILES" ] ; then
HAS_FILES=true
LEGEND+="$(tput setaf $GREEN)Visible Files";
fi
echo "$LEGEND";
echo "";
echo "$(tput setaf $GOLD)$CYG_CWD";
echo "$(tput setaf $PINK)$WIN_CWD";
tput setaf $RED
ls_list "$HAS_DOTDIRS" "$DOTDIRS"
tput setaf $BLUE
ls_list "$HAS_DIRS" "$DIRS"
tput setaf $WHITE
ls_list "$HAS_DOTFILES" "$DOTFILES"
tput setaf $GREEN
ls_list "$HAS_FILES" "$FILES"
tput sgr0;
stty echo; # Enable keyboard inputs after function.
}
ls_list() {
# $1 - boolean - condition to print
# $2 - list - names to 'ls'
if $1 ; then
echo "$2" | xargs -d '\n' ls -d;
fi
}
Firstly, I wrapped the whole thing with stty to stop the user from messing up the output with keyboard input.
Then I create booleans that let the ls outputs at the end run with minimal delay between them (testing a stored boolean rather than re-checking whether each list is empty).
The color variables are simply used by the tput command, which lets you change the color of the terminal's text.
Note: Some of the color values might be off; that's because I had issues with my Cygwin theme (currently using Nord), and these values look like their names on my setup. Your mileage may vary.
To fellow Cygwin users: I've had a few issues with tput not working when it would've worked on an actual Linux machine. If you're having trouble with tput, consider defining LS_COLORS (which can be formatted to accept RGB values) and setting the CLICOLOR variable to true, then exporting both. This can replace the tput command, although you might then want to store the initial value of both, and you have to keep exporting the LS_COLORS variable.
The CYG_CWD and WIN_CWD are there because Cygwin paths aren't the same as Windows paths, and I like having both. The sed command replaces Windows' "\" with "/" for easy copy+paste.
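As an aside, cygpath can emit forward slashes itself via its "mixed" mode, which would make the sed step unnecessary (a sketch, assuming a reasonably recent Cygwin):
local WIN_CWD=$(cygpath -m "$(pwd)"); # e.g. C:/cygwin64/home/user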
This is where the answer to the question starts. The DOTDIRS, DIRS, DOTFILES, and FILES variables will get lists.
The ls command already lets us filter for directories and for directories starting with ., via ls -Ad */ and ls -Ad .*/. ls ignores entries starting with . unless told otherwise (which is what -A does), a trailing / matches only directories, and the -d parameter makes ls name the directories themselves instead of showing us what's in them.
As for dotfiles and regular files, ls doesn't really let us get them, because you can't specify a character which defines a file like you can with / for directories. Instead, we can use find . -maxdepth 1 -type f with the appropriate use of the -name or -not -name arguments to filter our files. There is an issue, however: the command will prefix the results with ./. So .bashrc becomes ./.bashrc. I don't want that. The output is thus piped to the sed command, which substitutes the first two characters with nothing (effectively removing them).
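With GNU find (which Cygwin ships), the sed step could also be avoided entirely, since -printf '%P\n' prints each result relative to the starting point; a sketch:
# %P strips the leading './' starting point, so ./.bashrc prints as .bashrc
find . -maxdepth 1 -type f -name ".*" -printf '%P\n'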
From here, we simply determine which lists have elements, set the appropriate boolean values to ensure quick ls outputs at the end, and print the legend, all the while changing the colors the terminal uses with the tput command.
Eventually, the ls_list function is called. If the given boolean is true, it performs the echo "$2" | xargs -d '\n' ls -d command. Essentially, it echoes the list of items found, which is piped into xargs. xargs passes the values piped to it as parameters to the ls command; the -d '\n' argument changes the separator from whitespace to newlines, because some names might have spaces in them. What ls does when you give it filenames is simply print them in the regular ls output format (which is exactly what I want). I added the -d parameter so that it names directories instead of showing their contents, since I'm using that function for all my lists.
Finally, reset whatever tput set with sgr0 and re-enable keyboard input with stty!
This is what the output looks like for my /home directory.
Final notes:
It feels slightly slower than my Python script, but it won't bug out with special characters in filenames, and it will always respect the terminal's window size, since that's what ls does. I think the "slowness" might be because the 4 xargs ls operations don't print at the same time, while the Python one printed everything at once. I can live with 1-2 seconds vs 0.5-1 second.

Related

Associative array, file names referring to the path, for dmenu

I started playing with dmenu, and it seems like it can automate almost everything. Unfortunately I'm not familiar with bash; learning it is on my list.
I have a folder for my markdowns with subfolders containing my files. I'm trying to have a script to show them in dmenu while using an alias.
If the path to a file is
/home/user/docs/markdown/practice01/rmd/network.rmd
I would like to have
network
as an option in my dmenu. So when I choose
network -----> /home/user/docs/markdown/practice01/rmd/network.rmd
Here is my broken script. There are a few things I'm missing.
This way I get the full path in my dmenu, which I don't need. I tried to read about associative arrays but I can't figure them out in bash.
This script works, but if I decide to press ESC and exit, it still opens up an empty vim in my directory. Hence, I should learn if statements, huh!
#!/bin/bash
DMenu=("dmenu -l 10 -i -nb "#eaeaea" -sb "#E53935" -nf "#474747"")
cd ~/docs/markdown/
target=$(find -type f -name '*.rmd' | $DMenu)
st vim "$target"
I made a little example. But the problem is that it's manual work to add each file, which we definitely don't want to do, right!
#!/bin/bash
declare -A dotfiles
dotfiles[i3]="/home/user/dotfiles/i3/.config/i3/config"
dotfiles[vimrc]="/home/user/dotfiles/vim/.vimrc"
list=("i3\nvimrc")
target=$(echo -e $list | dmenu -i -nb "#eaeaea" -sb "#E53935" -nf "#474747")
st vim "${dotfiles["$target"]}"
Thank you
Associative arrays can be weird... but returning output to a variable makes it easier to manipulate as any other string in bash, as shown in the example below:
prefix="$HOME/git/notes"
suffix=".md"
shopt -s nullglob globstar
item=( "$prefix"/**/*${suffix}) # Search *.md in all dirs/subdirs
item=( "${item[#]#"$prefix"/}" )
item=( "${item[#]%${suffix}}" ) # Removes '.md' string from item name
result=$(printf '%s\n' "${item[#]}" | dmenu)
[[ -n $result ]] || exit # exit if nothing is found
gedit "${prefix}/${result}.md" # Open file by adding again '.md'
When the percent sign (%) is used in the pattern ${variable%substring}, it will return content of the variable with the shortest occurrence of substring deleted from the back of the variable.
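A quick illustration of the four related operators, since they are easy to mix up:
f="notes/bash/arrays.md"
echo "${f%.md}"   # notes/bash/arrays (shortest match deleted from the back)
echo "${f%%/*}"   # notes (longest match deleted from the back)
echo "${f#*/}"    # bash/arrays.md (shortest match deleted from the front)
echo "${f##*/}"   # arrays.md (longest match deleted from the front)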
Listed below for reference are 2 examples I wrote, one in Bash and the other in Python, for managing pass and markdown notes with dmenu:
dmenu-pass.sh
dmenu-launch.py
Also, listed below are a couple nice articles that might help you out:
The weird, wondrous world of Bash arrays
Advanced Bash-Scripting Guide: Manipulating Strings
Instead of putting some code in an array, use a function!
my_dmenu() {
dmenu -l 10 -i -nb "#eaeaea" -sb "#e53935" -nf "#474747"
}
If your markdown files are all in the same folder (and not in subfolders), you certainly don't need find: use a glob instead! And if your files are in subfolders, still use a glob (with the globstar shell option).
All in all:
#!/bin/bash
my_dmenu() {
dmenu -l 10 -i -nb "#eaeaea" -sb "#e53935" -nf "#474747"
}
base_dir=~/docs/markdown
# Also, check the return code of cd!
cd "$base_dir" || { echo >&2 "Can't cd to $base_dir. Exiting"; exit 1; }
# Using a glob: use the shell option nullglob
shopt -s nullglob
files=( *.rmd )
# Check that there are some files found:
if (( ${#files[@]} == 0 )); then
echo "No files found. Exiting."
exit 1
fi
# Now we're ready to send the files to dmenu:
chosen_file=$(printf '%s\n' "${files[@]}" | my_dmenu)
# If dmenu returns nothing: don't launch vim!
if [[ ! $chosen_file ]]; then
echo "No files selected. Exiting."
exit 1
fi
# Now you can launch vim!
st vim "$chosen_file"
If you also want to find the *.rmd files in subfolders: use instead:
shopt -s nullglob globstar
files=( **/*.rmd )
Edit to address the requirement in your comment (and the edit of your question):
If you want to strip the .rmd suffix to show in dmenu, use:
chosen_file=$(printf '%s\n' "${files[@]%.rmd}" | my_dmenu)
# ...
st vim "$chosen_file.rmd"
The expansion ${files[@]%.rmd} will strip the suffix .rmd from each field of the array files. Don't forget to add this suffix back when you edit the file (as shown in the last line).
dmenuoptions="-l 10 -i -nb '#eaeaea' -sb '#E53935' -nf '#474747'"
st -e vim $(find ~/docs/markdown -type f -name '*.rmd' | dmenu $dmenuoptions)

Test -d directory true - subdirectory false (POSIX)

I'm trying to print all directories/subdirectories from a given start directory.
for i in $(ls -A -R -p); do
if [ -d "$i" ]; then
printf "%s/%s \n" "$PWD" "$i"
fi
done;
This script returns all of the directories found in the . directory and all of the files in that directory, but for some reason the test fails for subdirectories. All of the directories end up in $i and the output looks exactly the same.
Let's say I have the following structure:
foo/bar/test
echo $i prints
foo/
bar/
test/
While the contents of the folders are listed like this:
./foo:
file1
file2
./bar:
file1
file2
However the test statement just prints:
PWD/TO/THIS/DIRECTORY/foo
For some reason it returns true for the first level directories, but false for all of the subdirectories.
(ls is probably not a good way of doing this and I would be glad for a find statement that solves all of my issues, but first I want to know why this script doesn't work the way you'd think.)
As pointed out in the comments, the issue is that the ls -R output contains header lines such as ./foo: (note the trailing colon), and the entries beneath them are bare names, so the strings being tested are not valid paths relative to $PWD and -d is false.
I guess that this command gives you the output you want (although it requires Bash):
# enable globstar for **
# disabled in non-interactive shell (e.g. a script)
shopt -s globstar
# print each path ending in a / (all directories)
# ** expands recursively
printf '%s\n' **/*/
The standard way would either to do the recursion yourself, or to use find:
find . -type d
Consider your output:
dir1:
dir1a
Now, the following will be true:
[ -d dir1/dir1a ]
but that's not what your code does; instead, it runs:
[ -d dir1a ]
To avoid this, don't attempt to parse ls; if you want to implement recursion in baseline POSIX sh, do it yourself:
callForEachEntry() {
# because calling this without any command provided would try to execute all found files
# as commands, checking for safe/correct invocation is essential.
if [ "$#" -lt 2 ]; then
echo "Usage: callForEachEntry starting-directory command-name [arg1 arg2...]" >&2
echo " ...calls command-name once for each file recursively found" >&2
return 1
fi
# try to declare variables local, swallow/hide error messages if this fails; code is
# defensively written to avoid breaking if recursing changes either, but may be faulty if
# the command passed as an argument modifies "dir" or "entry" variables.
local dir entry 2>/dev/null ||: "not strict POSIX, but available in dash"
dir=$1; shift
for entry in "$dir"/*; do
# skip if the glob matched nothing
[ -e "$entry" ] || [ -L "$entry" ] || continue
# invoke user-provided callback for the entry we found
"$#" "$entry"
# recurse last for if on a baseline platform where the "local" above failed.
if [ -d "$entry" ]; then
callForEachEntry "$entry" "$#"
fi
done
}
# call printf '%s\n' for each file we recursively find; replace this with the code you
# actually want to call, wrapped in a function if appropriate.
callForEachEntry "$PWD" printf '%s\n'
find can also be used safely, but not as a drop-in replacement for the way ls was used in the original code -- for dir in $(find . -type d) is just as buggy. Instead, see the "Complex Actions" and "Actions In Bulk" section of Using Find.
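As a minimal sketch of the bulk-action style described there, -exec ... {} + hands each found directory to the command as a separate argument, so no output ever gets parsed:
# every directory becomes one argument to printf; spaces and newlines are safe
find . -type d -exec printf '%s\n' {} +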

Need help writing bash script to move folders around

What I need to do is replace the folder amtlib.framework in each Adobe app on my Mac.
If I do:
cd /Applications; ls | grep Adobe, this gives me all the folders which I need.
here's some pseudo code:
apps = ls | grep Adobe
for each x in apps
if (x/x.app/contents/frameworks/amtlib.framwork) //if this folder exists
add .bak extension //amtlib.framework.bak
copy ~/Downloads/.../amtlib.framwork to x/x.app/contents/frameworks/
how would i implement this as a bash script?
Something like
for x in $( ls | grep Adobe) ; do
if [[ -d "${x}"/"${x}".app/contents/frameworks/amtlib.framwork ]] ; then
# add .bak extension # //amtlib.framework.bak
#? mkdir "${x}"/"${x}".app/contents/frameworks/amtlib.framwork.bak
#? /bin/mv "${x}"/"${x}".app/contents/frameworks/amtlib.framwork "${x}"/"${x}".app/contents/frameworks/amtlib.framwork.bak
/bin/cp ~/Downloads/.../amtlib.framwork "${x}"/"${x}".app/contents/frameworks/
else
: # ??? what do you want to do if there's not
fi
done # loop
If you're likely to have spaces in your dirnames (not sure if OSX supports -print0), try
find . -name 'Adobe' -print0 \
| while read x ; do
if ....
As an FYI, assignments in bash are done like (without spaces around the =):
apps=$(ls | grep Adobe)
Depending on the situation then, you'll want to use "$apps", or just plain $apps, which leaves each word in the list as a separate token. (If there are spaces in your filename or path, 1 path/file is now 2 words, and will cause issues.) There are also array notations to use, apps=( $(ls | grep Adobe) ), and using those vars like ${#apps[@]} (number of elems), ${apps[@]} (all elems), ${apps[0]} (first elem) is possible.
Also, it's not clear what your intent with add .bak extension is for. My best guess is my 2nd option, /bin/mv ... .bak.
IHTH.
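For what it's worth, here is a glob-based sketch of the whole task that avoids parsing ls entirely; the source path is hypothetical, and it uses the corrected amtlib.framework spelling:
src=~/Downloads/amtlib.framework # adjust to wherever your copy actually lives
cd /Applications || exit 1
shopt -s nullglob # with no matches, the loop simply doesn't run
for x in *Adobe*; do
  fw="$x/$x.app/contents/frameworks"
  if [ -d "$fw/amtlib.framework" ]; then
    mv "$fw/amtlib.framework" "$fw/amtlib.framework.bak" # add .bak extension
    cp -R "$src" "$fw/"
  fi
done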
First of all, there's a typo in the original post that's made its way throughout the examples given. The folder you are looking to rename/replace is amtlib.framework, not framwork.
Second, for some reason, the test for existence of the .bak directory is not working for me, even when I split this out to a separate if-then statement it doesn't work:
cd /Applications
for x in *Adobe* ; do
printf "$x \n"
printf "%s" " "
if [ -d "$x/$x.app/contents/frameworks/amtlib.framework.bak" ]; then
printf "removing old bak... "
fi
if [ -d "$x/$x.app/contents/frameworks/amtlib.framework" ]; then
printf "moving... "
printf "copying... "
printf "%s\n" "done!"
else
printf "%s\n" "nothing to do here!"
fi
done
cd ~
Finally, if I understand the goal correctly, you will fail to update a couple of apps that have an additional folder level (e.g., Acrobat Pro and Illustrator).

Filter Hidden Files with Bash (for Batch Image Resize Script)

I'm writing a script to batch resize images. Originally I was applying an operation for file in $(ls $1), but I would like to be able to use globbing, so I'm looking at something more like for file in $(echo $1). The problem is that dotglob may or may not be enabled, so echo * could return hidden files (notably, .DS_Store), which cause convert to throw an error and stop the script. I would like the default behavior of the command to be that if I cd into a directory full of images and execute resize * 400x400 jpg, all of the images will be resized excluding hidden files, regardless of whether dotglob is enabled.
So, in pseudo code, I'm looking for:
for file in $(echo $1 | [filter-hidden-files])
Here is my script with the older behavior. Will update with new behavior when I find a solution:
#!/bin/bash
# resize [folder] [sizeXxsizeY] [outputformat]
# if [outputformat] is omitted, the input file format is assumed
for file in $(ls $1)
do
IMGNAME=$(echo "$file" | cut -d'.' -f1)
if test -z $3
then
EXTENSION=$(echo "$file" | cut -d'.' -f2)
convert $1/$file -resize $2 -quality 100 $1/$IMGNAME-$2.$EXTENSION
echo "$file => $IMGNAME-$2.$EXTENSION"
else
convert $1/$file -resize $2 -quality 100 $1/$IMGNAME-$2.$3
echo "$file => $IMGNAME-$2.$3"
fi
done
Here is the current script:
#!/bin/bash
# resize [pattern] [sizeXxsizeY] [outputformat]
# if [outputformat] is omitted, the input file format is assumed
for file in $(echo $1)
do
IMGNAME=$(echo "$file" | cut -d'.' -f1)
if test -z $3 && test -f $3
then
EXTENSION=$(echo "$file" | cut -d'.' -f2)
convert $file -resize $2 -quality 100 $IMGNAME-$2.$EXTENSION
echo "$file => $IMGNAME-$2.$EXTENSION"
else
convert $file -resize $2 -quality 100 $IMGNAME-$2.$3
echo "$file => $IMGNAME-$2.$3"
fi
done
Given the command resize * 400x400, convert throws an error as it cannot process .DS_Store (a hidden file residing in every folder on an OSX system). As I will never be processing hidden images, I would like to filter them out automatically. I've been trying to do this with grep or find, but I haven't figured it out yet.
New script goes here:
for file in $(echo $1)
do
I would suggest changing the commandline of your script from resize * 400x400 to resize 400x400 *, the script would be more like the standard unix tools that way (grep, sed, ln, etc.). You would not have to unset the dotglob option in your script (which is better since it's up to the user of the script if he wants hidden files globbed or not).
Your script would look something like this:
#!/bin/bash
SIZE=$1
# Remove original $1 (the size) from the list of arguments
shift
for i in "$@"
do
# Use $SIZE here
etc....
If you do not want to change the commandline for your script, you could try setting GLOBIGNORE:
export GLOBIGNORE=".*"
Or if extglob is set you could try file globbing like so:
echo !(.*)
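Both suggestions come with a caveat: GLOBIGNORE affects every subsequent glob in the shell, and !(.*) only works with the extglob option enabled. A sketch of the extglob route:
shopt -s extglob # enable extended patterns like !(...)
for file in !(.*); do # every entry that does not start with a dot
  printf '%s\n' "$file"
done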
There is a dotglob shell option that decides if files starting with . are included when globbing. You can check if this is the case with
shopt dotglob
You also can explicitly disable it in your script:
#!/bin/bash
shopt -u dotglob
for file in $1/*; do
...
done
There's no need to use ls with a for loop; most of the time it's useless. Also, a for loop with * doesn't return hidden files unless you specifically ask for them. To show hidden files:
for files in .*
do
echo "$files"
done
As you are getting a list of files in $*, you can check them one by one:
for i in $*
do
expr $i : '^\..*' > /dev/null && continue
# process file
done
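The same check can be written with bash's own pattern matching, which avoids forking expr once per file (a sketch; "$@" also keeps names with spaces intact):
for i in "$@"; do
  [[ $i == .* ]] && continue # skip hidden files
  # process file
done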

How to manage Long Paths in Bash?

I have a problem managing long paths. How can I get quickly to paths like
/Users/User/.../.../.../.../.../Dev/C/card.c
I tried an alias
alias cd C='cd /Users/User/.../.../.../.../.../Dev/C'
but I am unable to make an alias out of two separate words. I have long lists of Bash aliases and paths in CDPATH, so I am hesitant to add more. How can I manage long paths?
[Ideas for Replies]
The user litb's reply revealed some of my problems with this kind of management. Things such as "CTRL+R", "!-3:1:2:4:x" and incremental search are hard for me. They probably help in navigating long directories and, in that sense, in managing them.
Using symlinks is probably the best idea; but you can do it even easier than dumping them all into your home directory.
As you mentioned, BASH has a feature called CDPATH which comes in really handy here.
Just make a hidden folder in your homedir (so it doesn't clutter your homedir too much):
$ mkdir ~/.paths
$ cd ~/.paths
$ ln -s /my/very/long/path/name/to/my/project project
$ ln -s /some/other/very/long/path/to/my/backups backups
$ echo 'CDPATH=~/.paths' >> ~/.bashrc
$ source ~/.bashrc
This creates a directory in your homedir called ".paths" which contains symlinks to all your long directory locations which you regularly use, then sets the CDPATH bash variable to that directory (in your .bashrc) and re-reads the .bashrc file.
Now, you can go to any of those paths from anywhere:
$ cd project
$ cd backups
Leaving you with a short CDPATH, no cluttering aliases, and more importantly: a really easy way to navigate to those long paths from other applications, such as UI applications, by just going into ~/.paths or adding that directory to your UI application's sidebar.
Probably the easiest all-round solution you can have.
Consider using symbolic links. I have a ~/work/ directory where I place symlinks to all my current projects.
You may also use shell variables:
c='/Users/User/.../.../.../.../.../Dev/C'
Then:
cd "$c"
Create symlinks in your home directory (or somewhere else of your choosing)
ln -s longDirectoryPath ~/MySymLinkName
See man ln for more details.
Probably the easiest solution is to use:
alias cdc='cd /Users/User/.../.../.../.../.../Dev/C'
alias cdbin='cd /Users/User/.../.../.../.../.../Dev/bin'
alias cdtst='cd /Users/User/.../.../.../.../.../Dev/tst'
if you're only really working on one project at a time. If you work on multiple projects, you could have another alias which changed the directories within those aliases above.
So, you'd use something like:
proj game17
cdc
make
proj roman_numerals
cdbin
rm -f *
proj game17 ; cdc
Since this is a useful thing to have, I decided to put together a series of scripts that can be used. They're all based around a configuration file that you place in your home directory, along with aliases to source scripts. The file "~/.cdx_data" is of the form:
scrabble:top=~/dev/scrabble
scrabble:src=~/dev/scrabble/src
scrabble:bin=~/dev/scrabble/bin
sudoku:top=~/dev/scrabble
sudoku:src=~/dev/scrabble/src
sudoku:bin=~/dev/scrabble/bin
sudoku:data=~/dev/scrabble/data
and lists all the relevant projects (scrabble and sudoku in this case) and their directories (which may be different for each project, but have top, bin, src and data in this example).
The first action is to initialize stuff, so put:
. ~/.cdx_init
at the end of your .bash_profile and create the "~/.cdx_init" file as:
alias cdxl='. ~/.cdx_list'
alias projl='. ~/.cdx_projlist'
alias cdx='. ~/.cdx_goto'
alias proj='. ~/.cdx_proj'
This sets up the four aliases to source the files which I'll include below. Usage is:
cdxl - List all directories in current project.
projl - List all projects.
proj - Show current project.
proj <p> - Set current project to <p> (if allowed).
cdx - Show current project/directory and expected/actual real
directory, since they can get out of sync if you mix cd and cdx.
cdx . - Set actual real directory to expected directory (in other words,
get them back into sync).
cdx <d> - Set directory to <d> (if allowed).
The actual script follow. First, ".cdx_list" which just lists the allowed directories in the current project (pipelines are broken into multiple lines for readability but they should all be on one line).
echo "Possible directories are:"
cat ~/.cdx_data
| grep "^${CDX_PROJ}:"
| sed -e 's/^.*://' -e 's/=.*$//'
| sort -u
| sed 's/^/ /'
Similarly, ".cdx_projlist" shows all the possible projects:
echo "Possible projects are:"
cat ~/.cdx_data
| grep ':'
| sed 's/:.*$//'
| sort -u
| sed 's/^/ /'
In the meaty scripts, ".cdx_proj" sets and/or shows the current project:
if [[ "$1" != "" ]] ; then
grep "^$1:" ~/.cdx_data >/dev/null 2>&1
if [[ $? != 0 ]] ; then
echo "No project name '$1'."
projl
else
export CDX_PROJ="$1"
fi
fi
echo "Current project is: [${CDX_PROJ}]"
and ".cdx_goto" is the same for directories within the project:
if [[ "$1" == "." ]] ; then
CDX_TMP="${CDX_DIR}"
else
CDX_TMP="$1"
fi
if [[ "${CDX_TMP}" != "" ]] ; then
grep "^${CDX_PROJ}:${CDX_TMP}=" ~/.cdx_data >/dev/null 2>&1
if [[ $? != 0 ]] ; then
echo "No directory name '${CDX_TMP}' for project '${CDX_PROJ}'."
cdxl
else
export CDX_DIR="${CDX_TMP}"
cd $(grep "^${CDX_PROJ}:${CDX_DIR}=" ~/.cdx_data
| sed 's/^.*=//'
| head -1
| sed "s:^~:$HOME:")
fi
fi
CDX_TMP=$(grep "^${CDX_PROJ}:${CDX_DIR}=" ~/.cdx_data
| sed 's/^.*=//'
| head -1
| sed "s:^~:$HOME:")
echo "Current project is: [${CDX_PROJ}]"
echo "Current directory is: [${CDX_DIR}]"
echo " [${CDX_TMP}]"
echo "Actual directory is: [${PWD}]"
unset CDX_TMP
It uses three environment variables which are reserved for its own use: "CDX_PROJ", "CDX_DIR" and "CDX_TMP". Other than those and the afore-mentioned files and aliases, there are no other resources used. It's the simplest, yet most adaptable solution I could come up with. Best of luck.
Revisiting. Today I received this link from a social bookmarking site, then I immediately remembered this question:
Navigation with bm
We keep a simple, plain text bookmarks file and use a tool called bm to do the look-ups. The tool can also be used to edit the bookmark index dynamically, as shown below, where we add the directories from the previous example to the index.
Once I cd'ed into such a long directory, I have it in the history. Then I just type Ctrl-R for the "(reverse-i-search)" prompt and type a few characters, like Dev/C, that appear somewhere in the path, and it shows me the command I issued back then, and I can easily jump to it again.
That works pretty well in practice. It won't find an entry if you haven't typed that path for quite some time, which would mean the work of making things easier probably wouldn't have been worth it anyway. But it will definitely find it if you used it recently, which is exactly what I need.
In some way, it's a self-organizing cache for long commands & path-names :)
You might want to consider using a script like this in your .bashrc. I've used it on a daily basis ever since I read that post. Pretty bloody useful.
The user jhs suggested Pushd and Popd-commands. I share here some of my Bash-scripts that I found in Unix Power Tools -book. They are very cool when your directories get a way too long :)
#Moving fast between directories
alias pd=pushd
alias pd2='pushd +2'
alias pd3='pushd +3'
alias pd4='pushd +4'
The command 'pushd +n' "rotates" the stack. The reverse command 'popd +n' deletes the nth entry of the stack. If your stack gets too long, use 'repeat n popd'. For example, if your stack is 12 directories long:
repeat 11 popd
When you want to see your stack, use 'dirs'. For further reading, I recommend the book on pages 625-626.
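Note that repeat is a csh/zsh construct; in bash the same stack trimming would be a loop (a sketch):
for i in {1..11}; do popd > /dev/null; done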
In your .bashrc find
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\W\[\033[00m\]\$ '
and replace the \w with \W. I already have it changed here. This will only give you the name of the directory where you are working, not the full path. You can get the full directory by typing pwd.
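For reference, the two escapes differ like this (a minimal sketch):
PS1='\w \$ ' # full working directory, e.g. ~/Dev/C $
PS1='\W \$ ' # last component only, e.g. C $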
There are fundamental well-known ideas, like creating aliases:
alias cdfoo="cd /long/path/to/foo"
and also "dropping pebbles"
export foo=/long/path/to/foo
and also making the above "project-based". I use 'ticket based' directories.
topdir=ticket_12345
alias cdfoo="cd home/me/sandbox/$topdir/long/path/to/foo"
export foo="/home/me/sandbox/$topdir/long/path/to/foo"
but beyond all this, sometimes it's just handy to jump back and forth to where you've been recently, using command-line menus. (pushd and popd are cumbersome, IMHO).
I use acd_func.sh (listed below). Once defined, you can do
cd --
to see a list of recent directories, with a numerical menu
cd -2
to go to the second-most recent directory.
Very easy to use, very handy.
Here's the code:
# Insert into .profile, .bash_profile or wherever
# acd_func 1.0.5, 10-nov-2004
# petar marinov, http:/geocities.com/h2428, this is public domain
cd_func ()
{
local x2 the_new_dir adir index
local -i cnt
if [[ $1 == "--" ]]; then
dirs -v
return 0
fi
the_new_dir=$1
[[ -z $1 ]] && the_new_dir=$HOME
if [[ ${the_new_dir:0:1} == '-' ]]; then
#
# Extract dir N from dirs
index=${the_new_dir:1}
[[ -z $index ]] && index=1
adir=$(dirs +$index)
[[ -z $adir ]] && return 1
the_new_dir=$adir
fi
#
# '~' has to be substituted by ${HOME}
[[ ${the_new_dir:0:1} == '~' ]] && the_new_dir="${HOME}${the_new_dir:1}"
#
# Now change to the new dir and add to the top of the stack
pushd "${the_new_dir}" > /dev/null
[[ $? -ne 0 ]] && return 1
the_new_dir=$(pwd)
#
# Trim down everything beyond 11th entry
popd -n +11 2>/dev/null 1>/dev/null
#
# Remove any other occurrence of this dir, skipping the top of the stack
for ((cnt=1; cnt <= 10; cnt++)); do
x2=$(dirs +${cnt} 2>/dev/null)
[[ $? -ne 0 ]] && return 0
[[ ${x2:0:1} == '~' ]] && x2="${HOME}${x2:1}"
if [[ "${x2}" == "${the_new_dir}" ]]; then
popd -n +$cnt 2>/dev/null 1>/dev/null
cnt=cnt-1
fi
done
return 0
}
alias cd=cd_func
if [[ $BASH_VERSION > "2.05a" ]]; then
# ctrl+w shows the menu
bind -x "\"\C-w\":cd_func -- ;"
fi
This might also be a useful function to put in your .bashrc; it moves up either a number of directories, or to a named directory, i.e. if you're in /a/b/c/d/ you can do up 3 or up a to end up in a.
I have no idea where I found this; if you know, please comment or add the attribution.
function up()
{
dir=""
if [ -z "$1" ]; then
dir=..
elif [[ $1 =~ ^[0-9]+$ ]]; then
x=0
while [ $x -lt ${1:-1} ]; do
dir=${dir}../
x=$(($x+1))
done
else
dir=${PWD%/$1/*}/$1
fi
cd "$dir";
}
If you want to switch to zsh, this is very easy-- just use "alias -g" (global alias, i.e. an alias that works anywhere in the command, not just the first word).
# alias -g c=/my/super/long/dir/name
# cd c
# pwd
/my/super/long/dir/name
In bash, I think the closest thing you'll get to 'aliasing' style is to write a function:
function ccd {
case "$1" in
c) cd /blah/blah/blah/long/path/number/one ;;
foo) cd "/blah/blah/totally/different path" ;;
"multiword phrase") cd /tmp ;;
esac
}
This means using something other than "cd" as the command when you want a shortcut, but other than that, it's flexible; you can also add an "ls" to the function so that it always reminds you what's in the directory after you cd, etc.
(Note that to use a multiword argument as above, you need to quote it on the command line, like this:
ccd "multiword phrase"
so it's not really all that convenient. But it'll work if you need to.)
Based on Andrew Medico's suggestion, check out J
Look into pushd, which allows you to maintain a stack of directories which you can push onto, pop off of, or rearrange.
Check out autojmp or dirmarks
Management requires both fast creation and removal of directories. Create many directories:
mkdir -p user/new_dir/new/_dir/.../new_dir
Remove recursively many directories (be very careful when you are in lower directories!):
rm -r dir/.../new_dir/
For further reading, the cheat sheet may help you:
http://www.scribd.com/doc/2082838/Bash-Command-Line-History-Cheat-Sheet
It contains some nuggets, but I find it rather hard to read. I cannot get commands, like Meta+>, working. They probably help you in navigating long directories.
I realize the question is pretty old, but none of the scripts out there satisfied me, so I wrote a new one.
Here's the requirements I had in mind:
1) Use only bash commands -- I intend to use this on many different unices -- Linux, cygwin, HP-UX, AIX, and a couple others, so I couldn't depend on grep being consistent. Luckily I do have bash everywhere I work.
2) Short code -- I wanted to be able to bind this to a key in GNU screen, and just hit that key to paste the script into the current bash shell I'm using, so that I don't have to setup bash profiles on every system I use. Anything super long would be annoying and take too much time to paste.
3) No file usage -- Don't want to be littering shared logons with random files.
4) Act just like "cd" in the normal case. Don't want to have to think about which command to use before I start typing.
5) Provide "up" usage like this answer: How to manage Long Paths in Bash?
6) Keep a list of recently used directories, and switch to the most recent.
Here's the script:
#Jump History - Isaiah Damron
function jfind() {
lp=${JNHIST//==${PWD}==/==}
lp=${lp%%${lp#==*$1*==}}
lp=${lp##${lp%==*$1*==*}}
lp=${lp//==/}
[[ -d "$lp" ]] && echo $lp && return 0
return 1;
}
function jadd() {
[[ -z "$JNHIST" ]] && export JNHIST='=='
[[ 3000 -lt ${#JNHIST} ]] && export JNHIST=${JNHIST:0:3000} && export JNHIST="${JNHIST%==*}=="
export JNHIST="==$PWD${JNHIST//==${PWD}==/==}"
}
function j() {
{ cd $* 2> /dev/null && jadd; } \
|| { cd ${PWD/$1*/}$1 2> /dev/null && jadd; } \
|| { jfind $1 \
&& { cd $( jfind $1 ) 2> /dev/null && jadd; } ; } \
|| cd $*
}
function jh() {
[[ -z "$1" ]] && echo -e ${JNHIST//==/\\n}
[[ -n "$1" ]] && jfind $1 && cd $(jfind $1) && jadd
}
Usage:
jh [parameters]
If called on its own, without any parameters, it outputs the current history list. If it has a parameter, then it searches through the history for the most recently used directory that contains the string $1, and cd's to it.
j {parameters}
Does cd parameters. If that fails, it checks if any of the parent directories of $PWD match $1, and cd's to it. If that fails, then it calls jh $1. If that fails, then it outputs the result of cd parameters
Note: I used '==' as an internal separator. Hopefully you don't have any directories that contain a '==', but if you do you'll have to change around the script. Just :%s/==/whatever/g
