File globbing without word splitting? - bash

This is a simplified example to hopefully illustrate my problem.
I have a script that takes a parameter to be used as a wildcard. Sometimes this wildcard contains whitespace. I need to be able to use the wildcard for globbing, but word splitting is causing it to fail.
For example, consider the following example files:
$ ls -l "/home/me/dir with whitespace"
total 0
-rw-r--r-- 1 me Domain Users 0 Jun 25 16:58 file_a.txt
-rw-r--r-- 1 me Domain Users 0 Jun 25 16:58 file_b.txt
My script - simplified to use a hard-coded pattern variable - looks like this:
#!/bin/bash
# Here this is hard coded, but normally it would be passed via parameter
# For example: pattern="${1}"
# The whitespace and wildcard can appear anywhere in the pattern
pattern="/home/me/dir with whitespace/file_*.txt"
# First attempt: without quoting
ls -l ${pattern}
# Result: word splitting AND globbing
# ls: cannot access /home/me/dir: No such file or directory
# ls: cannot access with: No such file or directory
# ls: cannot access whitespace/file_*.txt: No such file or directory
####################
# Second attempt: with quoting
ls -l "${pattern}"
# Result: no word splitting, no globbing
# ls: cannot access /home/me/dir with whitespace/file_*.txt: No such file or directory
Is there a way to enable globbing, but disable word splitting?
Do I have any options except manually escaping whitespace in my pattern?

Keep the glob outside the quotes so that it can expand:
pattern="/home/me/dir with whitespace/file_"
ls -l "${pattern}"*
EDIT:
Based on the edited question and comments, you can use find:
find . -path "./$pattern" -print0 | xargs -0 ls -l
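Note that -path matches the whole pathname exactly as find prints it, so the pattern has to line up with the search root. For the absolute pattern from the question, a sketch (reusing the question's paths) would be:
find /home/me -path "/home/me/dir with whitespace/file_*.txt" -print0 | xargs -0 ls -l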

I finally got it!
The trick is modifying the internal field separator (IFS) to be null. This prevents word splitting on unquoted variables until IFS is reverted to its old value or until it becomes unset.
Example:
$ pattern="/home/me/dir with whitespace/file_*.txt"
$ ls -l $pattern
ls: cannot access /home/me/dir: No such file or directory
ls: cannot access with: No such file or directory
ls: cannot access whitespace/file_*.txt: No such file or directory
$ IFS=""
$ ls -l $pattern
-rw-r--r-- 1 me Domain Users 0 Jun 26 09:14 /home/me/dir with whitespace/file_a.txt
-rw-r--r-- 1 me Domain Users 0 Jun 26 09:14 /home/me/dir with whitespace/file_b.txt
$ unset IFS
$ ls -l $pattern
ls: cannot access /home/me/dir: No such file or directory
ls: cannot access with: No such file or directory
ls: cannot access whitespace/file_*.txt: No such file or directory
I found out the hard way that you cannot set IFS and use it on the same command line as ls. For example, this doesn't work:
$ IFS="" ls -l $pattern
This is because the command line has already undergone word splitting before the temporary IFS assignment takes effect.
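In a script, it's safer to scope the IFS change tightly: save it, expand the pattern into an array while splitting is disabled, then restore it. A minimal sketch, assuming the same pattern as above:
#!/bin/bash
pattern="/home/me/dir with whitespace/file_*.txt"
old_ifs="$IFS"
IFS=""               # disable word splitting; globbing still happens
files=( $pattern )   # unquoted on purpose: the glob expands, nothing splits
IFS="$old_ifs"       # restore immediately
ls -l "${files[@]}"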

Related

failed to copy of a series of file using the for loop, cannot find the problem

I was trying to copy some numbered files with for i in {};do cp ***;done, but I encountered an error.
$ for i in {0..2};do cp ./CTCC_noSDS_0$i_hg19.bwt2pairs.validPairs 3_dataset_pool/;done
cp: cannot stat ‘./CTCC_noSDS_0.bwt2pairs.validPairs’: No such file or directory
cp: cannot stat ‘./CTCC_noSDS_0.bwt2pairs.validPairs’: No such file or directory
cp: cannot stat ‘./CTCC_noSDS_0.bwt2pairs.validPairs’: No such file or directory
The file names look like this:
-rw-r--r-- 1 jiangxu lc_lc 456M Nov 12 20:22 CTCC_noSDS_00_hg19.bwt2pairs.validPairs
-rw-r--r-- 1 jiangxu lc_lc 466M Nov 12 20:23 CTCC_noSDS_01_hg19.bwt2pairs.validPairs
-rw-r--r-- 1 jiangxu lc_lc 473M Nov 12 20:23 CTCC_noSDS_02_hg19.bwt2pairs.validPairs
I can cp the files one by one manually, but I cannot use the for loop. It seems that the system just ignores the $i for no reason. Could anyone tell me what the problem with the command is?
Using a similar method to the OP
You can do something like this (call this script b.bash):
#!/bin/bash
DST_DIR=./mydir/
SRC_DIR=./
for i in {1..5}; do
    echo "[*] Trying to find files with number $i"
    if [ "$i" -lt 10 ]; then
        potential_file="$SRC_DIR/CTCC_noSDS_0${i}_hg19.bwt2pairs.validPairs"
    else
        potential_file="$SRC_DIR/CTCC_noSDS_${i}_hg19.bwt2pairs.validPairs"
    fi
    if [ -f "$potential_file" ]; then
        echo "[!] Moving $potential_file"
        cp "$potential_file" "$DST_DIR"
    fi
done
Let us say we have the following in the current directory:
$ ls -1
b.bash
CTCC_noSDS_00_hg19.bwt2pairs.validPairs
CTCC_noSDS_01_hg19.bwt2pairs.validPairs
CTCC_noSDS_02_hg19.bwt2pairs.validPairs
CTCC_noSDS_12_hg19.bwt2pairs.validPairs
mydir
And let us say we want to copy these files to mydir. If we run the above script, we see this output:
$ ./b.bash
[*] Trying to find files with number 1
[!] Moving .//CTCC_noSDS_01_hg19.bwt2pairs.validPairs
[*] Trying to find files with number 2
[!] Moving .//CTCC_noSDS_02_hg19.bwt2pairs.validPairs
[*] Trying to find files with number 3
[*] Trying to find files with number 4
[*] Trying to find files with number 5
Then looking in mydir we see the files:
$ ls -1 mydir/
CTCC_noSDS_01_hg19.bwt2pairs.validPairs
CTCC_noSDS_02_hg19.bwt2pairs.validPairs
Note that the above for loop only goes to 5; you can modify that as you see fit. Also note the if statement in the for loop and what it is for: it handles the zero-padding of single-digit numbers (a simpler alternative is sketched below).
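As an aside, bash's printf -v can do that zero-padding directly, so the filename can be built the same way for any number; a minimal sketch:
# printf -v stores the zero-padded number in a variable (bash-specific).
for i in {1..5}; do
    printf -v num '%02d' "$i"    # 1 -> 01, 12 -> 12
    echo "CTCC_noSDS_${num}_hg19.bwt2pairs.validPairs"
done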
Using find : )
You can instead use the find command like so:
find . -type f -iname 'CTCC_noSDS_*_hg19.bwt2pairs.validPairs' -exec cp {} mydir/ \;
Here is a safer/stricter find command using regex:
find . -regextype sed -regex ".*CTCC_noSDS_[0-9]\+_hg19\.bwt2pairs\.validPairs" -exec cp {} ./mydir/ \;
The shell did not ignore the variable; you are misusing variable expansion. Valid characters for variable names include underscores (and you have one right next to $i), so what the shell actually sees is a variable called i_hg19, which is undefined. Thus, the resulting filename is nonexistent.
The solution is to wrap $i between curly braces like this:
cp ./CTCC_noSDS_0${i}_hg19.bwt2pairs.validPairs 3_dataset_pool/
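Putting it together, the corrected loop from the question becomes:
for i in {0..2}; do cp ./CTCC_noSDS_0${i}_hg19.bwt2pairs.validPairs 3_dataset_pool/; done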

Why wildcards don't pass through xargs

The problem I ran into is that a wildcard doesn't get expanded when passed through xargs to a command, for some reason.
Assume we have a file file1 with this content:
-l
-a
f*
We need to pass these arguments to ls via xargs:
cat file1 | xargs -n3 ls
The output is equivalent to that of the ls -la command, plus an extra message from the terminal:
ls: cannot access f*: No such file or directory.
But a matching file is in the directory (ls -la f* returns suitable output).
If we substitute file1 for f*, for example, we get the right output too.
Can you explain why this happens? Thanks.
EDIT1:
It seems interesting to add how we can pass arguments from file1 through the shell interpreter to the ls command. Here is an example:
ls `xargs -n3 < file1`
Now shell expansion happens before the ls invocation, resulting in the same output as ls -la f*.
The f* expression is also known as a shell glob, and is supposed to be interpreted by the shell. You can try it out independently e.g. by running echo f*.
When you run ls -la f* from the shell, the shell interprets it according to your directory contents, and calls ls with the expanded version, like: ls -la file1 file2 file3. You can get some commands confused this way if the glob matches no file and the command expects one.
But when you pass that argument to ls through xargs, the shell doesn't get a chance to expand it, and ls feels like it's invoked exactly as ls -la f*. It recognizes -la as options, and f* as a filename to list. That file happens not to exist, so it fails with the error message you see.
You could get it to fail on a more usual filename:
$ ls non_existing_file
ls: cannot access non_existing_file: No such file or directory.
Or you could get it to succeed by actually having a file of that name:
$ touch 'f*'
$ xargs ls -la <<< 'f*'
-rw-rw-r-- 1 jb jb 0 2013-11-13 23:08 f*
$ rm 'f*'
Notice how I had to use single quotes so that the shell would not interpret f* as a glob when creating and deleting it.
Unfortunately, you can't get it expanded when it is passed directly from xargs to ls, since there is no shell there.
The contents of file are passed to xargs via standard input, so the shell never sees them to process the glob. xargs then passes them to ls, again without the shell ever seeing them, so f* is treated as a literal 2-character file name.
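If you really do want the glob expanded on the receiving end, one workaround (a sketch; the usual warnings about feeding untrusted input to a shell apply) is to have xargs start a shell of its own, which then performs the expansion:
# The inner shell expands the unquoted $@, so f* is globbed there.
xargs -n3 sh -c 'exec ls $@' sh < file1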

Get proper whitespace in bash script [duplicate]

This question already has answers here:
How can I escape white space in a bash loop list?
(20 answers)
Closed 9 years ago.
I'm not very experienced in bash scripting, so consider me studying it through practice. Recently I was trying to make a simple script that should list all files at least 1 GB in size, and I ran into a problem with escaping whitespace in the names.
It works fine in the terminal if I do:
$ find /home/dem -size +1000M -print|sed -e 's/ /\\ /'
/home/dem/WEB/CMS/WP/Themes/Premium_elegant_themes/ETPSD.rar
/home/dem/VirtualBox\ VMs/Lubuntu13.04x86/Lubuntu13.04x86.vdi
/home/dem/VirtualBox\ VMs/Win7/Win7-test.vdi
/home/dem/VirtualBox\ VMs/FreeBSD9.1/FreeBSD9.1.vdi
/home/dem/VirtualBox\ VMs/backup_Lubuntu13.04x86/Lubuntu13.04x86.vdi
/home/dem/VirtualBox\ VMs/Beini-1.2.3/Beini-1.2.3.vdi
/home/dem/VirtualBox\ VMs/BackTrack5RC3/BackTrack5RC3.vdi
/home/dem/VirtualBox\ VMs/WinXPx32/WinXPx32.vdi
But in this script:
#!/bin/bash
for i in "$( find /home/dem -size +1000M -print|sed -e 's/ /\\ /' )"
do
res="$( ls -lh $i )"
echo $res
done
It gives errors and, as you may see, the left part is stripped:
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/Lubuntu13.04x86/Lubuntu13.04x86.vdi: No such file or directory
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/Win7/Win7-test.vdi: No such file or directory
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/FreeBSD9.1/FreeBSD9.1.vdi: No such file or directory
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/backup_Lubuntu13.04x86/Lubuntu13.04x86.vdi: No such file or directory
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/Beini-1.2.3/Beini-1.2.3.vdi: No such file or directory
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/BackTrack5RC3/BackTrack5RC3.vdi: No such file or directory
ls: cannot access /home/dem/VirtualBox\: No such file or directory
ls: cannot access VMs/WinXPx32/WinXPx32.vdi: No such file or directory
-rw-rw-r-- 1 dem dem 3.1G Jul 13 02:54 /home/dem/Downloads/BT5R3-GNOME-32/BT5R3-GNOME-32.iso -rw------- 1 dem dem 1.1G Dec 27 2012 /home/dem/WEB/CMS/WP/Themes/Premium_elegant_themes/ETPSD.rar
I need the script to show the files with whitespace in their names, and to retrieve the actual size of each file as ls -lh does.
Without sed formatting:
$ find /home/dem -size +1000M -print
/home/dem/WEB/CMS/WP/Themes/Premium_elegant_themes/ETPSD.rar
/home/dem/VirtualBox VMs/Lubuntu13.04x86/Lubuntu13.04x86.vdi
/home/dem/VirtualBox VMs/Win7/Win7-test.vdi
/home/dem/VirtualBox VMs/FreeBSD9.1/FreeBSD9.1.vdi
/home/dem/VirtualBox VMs/backup_Lubuntu13.04x86/Lubuntu13.04x86.vdi
/home/dem/VirtualBox VMs/Beini-1.2.3/Beini-1.2.3.vdi
/home/dem/VirtualBox VMs/BackTrack5RC3/BackTrack5RC3.vdi
/home/dem/VirtualBox VMs/WinXPx32/WinXPx32.vdi
xargs is great for simple cases, though it needs -0 (NUL-delimited inputs) to behave correctly when handling filenames with newlines in their paths (which are legal on UNIX). If you really do need to read the filenames into a shell script, you can do it like so:
while IFS='' read -r -d '' filename; do
ls -lh "$filename"
done < <(find /home/dem -size +1000M -print0)
...or like so, using functionality in modern versions of the POSIX standard for find to duplicate the behavior of xargs:
find /home/dem -size +1000M -exec ls -lh '{}' +
Simply use xargs:
find /home/dem -size +1000M -print0 | xargs -0 ls -lh
In shell scripts, parameters are divided by whitespace, which can be troublesome if you are looking for file names that contain whitespace. This is a problem when you use a for loop, because the loop treats each run of whitespace as a parameter separator:
$ ls -l
this is file number one
this is file number two
$ for file in $(find . -type f)
> do
> echo "My file is '$file'"
> done
My file is 'this'
My file is 'is'
My file is 'file'
My file is 'number'
My file is 'one'
My file is 'this'
My file is 'is'
My file is 'file'
My file is 'number'
My file is 'two'
In this case, the for is treating each space as a separate file which is what you don't want. There are other issues with for:
The for loop cannot start until it finishes processing the command in the $(...).
It is possible to overrun your command line buffer. What the shell does is execute the command in $(...) and then replace the $(...) with its results. If you used a find command that returned a few hundred thousand files, you would probably overrun your command line buffer. Even worse, it happens silently: unless you take a look, you will never know that files were dropped. In fact, I've seen cases where someone testing a shell script with this type of for ... $(...) loop thinks everything is great, and then the command fails in a very critical situation.
It is inefficient because it has to spawn a separate shell process. Okay, it's not that big a deal anymore, but still...
A better way to handle this is to use a while read loop. IN BASH, it would look like this:
find ... -print0 | while read -d $'\0' file
do
....
done
The -print0 parameter prints out all found files, but separates them with a NUL character. The while read -d $'\0' ... syntax breaks the input on the NUL character instead of on newlines as it normally does. Thus, even if your file names have newlines in them (and file names are allowed to contain newlines in Unix), the while read -d $'\0' ... will still read them properly.
Even better, this solves a few other problems:
The command line buffer can't be overloaded.
Your while read loop will execute in parallel with the find. No need for the find to find all of your files first.
You're not spawning a separate shell process just to collect the file names.
Observe:
$ ls -l
this is file number one
this is file number two
$ find . -type f -print0 | while read -d $'\0' file
> do
> echo "My file is '$file'"
> done
My file is 'this is file number one'
My file is 'this is file number two'
By the way, another command called xargs has a similar parameter:
find . -type f -mtime +100 -print0 | xargs -0 rm
The xargs command takes the file names from STDIN and passes them to the command it is given. It guarantees that the parameters passed will not overrun the command line buffer; if they would, xargs runs the command passed to it multiple times.
Normally (like for), xargs splits file names on whitespace. However, you can pass it a parameter to split names on NULs instead.
THIS PARAMETER DIFFERS FROM SYSTEM TO SYSTEM
Sorry for the shouting, but I need to make this very clear. Different systems have different parameters for the xargs command, and you need to refer to the man page to see which parameter your system takes. On my Mac, it is -0. GNU xargs takes --null (and -0 as well). And some Unix versions may not have this parameter at all.
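If you can't rely on your xargs supporting NUL-delimited input at all, POSIX find can run the command itself and sidestep the whole problem:
# POSIX: -exec ... {} + batches arguments like xargs, with no delimiter issues.
find . -type f -mtime +100 -exec rm {} +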

Listing only directories using ls in Bash? [closed]

This command lists directories in the current path:
ls -d */
What exactly does the pattern */ do?
And how can we give the absolute path in the above command (e.g. ls -d /home/alice/Documents) for listing only directories in that path?
*/ is a pattern that matches all of the subdirectories in the current directory (* would match all files and subdirectories; the / restricts it to directories). Similarly, to list all subdirectories under /home/alice/Documents, use ls -d /home/alice/Documents/*/
Four ways to get this done, each with a different output format
1. Using echo
Example: echo */, echo */*/
Here is what I got:
cs/ draft/ files/ hacks/ masters/ static/
cs/code/ files/images/ static/images/ static/stylesheets/
2. Using ls only
Example: ls -d */
Here is exactly what I got:
cs/ files/ masters/
draft/ hacks/ static/
Or as a list (with detailed info): ls -dl */
3. Using ls and grep
Example: ls -l | grep "^d"
Here is what I got:
drwxr-xr-x 24 h staff 816 Jun 8 10:55 cs
drwxr-xr-x 6 h staff 204 Jun 8 10:55 draft
drwxr-xr-x 9 h staff 306 Jun 8 10:55 files
drwxr-xr-x 2 h staff 68 Jun 9 13:19 hacks
drwxr-xr-x 6 h staff 204 Jun 8 10:55 masters
drwxr-xr-x 4 h staff 136 Jun 8 10:55 static
4. Bash Script (Not recommended for filenames containing spaces)
Example: for i in $(ls -d */); do echo ${i%%/}; done
Here is what I got:
cs
draft
files
hacks
masters
static
If you'd like to have '/' as the ending character, the command would be: for i in $(ls -d */); do echo ${i}; done
cs/
draft/
files/
hacks/
masters/
static/
I use:
ls -d */ | cut -f1 -d'/'
This creates a single column without a trailing slash - useful in scripts.
For only the top-level directories:
find /home/alice/Documents -maxdepth 1 -type d
For all directories, recursively:
find /home/alice/Documents -type d
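If you have GNU find, you can also strip the leading path in the same command with -printf (a GNU-only sketch):
# -mindepth 1 skips the starting directory itself; %f prints just the name.
find /home/alice/Documents -mindepth 1 -maxdepth 1 -type d -printf '%f\n'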
Four (more) Reliable Options.
An unquoted asterisk * will be interpreted as a pattern (glob) by the shell. The shell will use it in pathname expansion. It will then generate a list of filenames that match the pattern.
A simple asterisk will match all filenames in the PWD (present working directory). A more complex pattern like */ will match all pathnames that end in /, and thus all directories. That is why the command:
1.- echo.
echo */
echo ./*/ ### Avoid misinterpreting filenames like "-e dir"
will be expanded (by the shell) to echo all directories in the PWD.
To test this: Create a directory (mkdir) named like test-dir, and cd into it:
mkdir test-dir; cd test-dir
Create some directories:
mkdir {cs,files,masters,draft,static}              # Safe directories.
mkdir {*,-,--,-v\ var,-h,-n,dir\ with\ spaces}     # Some a bit less secure.
touch -- 'file with spaces' '-a' '-l' 'filename'   # And some files.
The command echo ./*/ will remain reliable even with odd named files:
./--/ ./-/ ./*/ ./cs/ ./dir with spaces/ ./draft/ ./files/ ./-h/
./masters/ ./-n/ ./static/ ./-v var/
But the spaces in filenames make reading a bit confusing.
If instead of echo we use ls, the shell is still what expands the list of filenames; the shell is the reason we get a list of directories in the PWD. The -d option to ls makes it list the directory entry itself instead of the contents of each directory (as it would by default).
ls -d */
However, this command is (somewhat) less reliable: it will fail with the odd-named files listed above, choking on several of the names, and you would need to erase them one by one until you find the ones causing problems.
2.- ls
GNU ls will accept the "end of options" (--) marker.
ls -d ./*/ ### More reliable BSD ls
ls -d -- */ ### More reliable GNU ls
3.- printf
To list each directory in its own line (in one column, similar to ls -1), use:
$ printf "%s\n" */ ### Correct even with "-", spaces or newlines.
And, even better, we could remove the trailing /:
$ set -- */; printf "%s\n" "${@%/}" ### Correct with spaces and newlines.
An attempt like
$ for i in $(ls -d */); do echo ${i%%/}; done
will fail in several ways:
It mangles some names (via ls -d */), as already shown above.
It is affected by the value of IFS.
It splits names on spaces and tabs (with the default IFS).
Each newline in a name starts a new echo command.
4.- Function
Finally, using the argument list inside a function will not affect the argument list of the current running shell. Simply
$ listdirs(){ set -- */; printf "%s\n" "${@%/}"; }
$ listdirs
presents this list:
--
-
*
cs
dir with spaces
draft
files
-h
masters
-n
static
-v var
These options are safe with several types of odd filenames.
The tree command is also pretty useful here. By default it will show all files and directories to a complete depth, with some ASCII characters showing the directory tree.
$ tree
.
├── config.dat
├── data
│   ├── data1.bin
│   ├── data2.inf
│   └── sql
│       └── data3.sql
├── images
│   ├── background.jpg
│   ├── icon.gif
│   └── logo.jpg
├── program.exe
└── readme.txt
But if you want just the directories, without the ASCII tree and with the full path from the current directory, you can do:
$ tree -dfi
.
./data
./data/sql
./images
The arguments being:
-d List directories only.
-f Prints the full path prefix for each file.
-i Makes tree not print the indentation lines, useful when used in conjunction with the -f option.
And if you then want the absolute path, you could start by specifying the full path to the current directory:
$ tree -dfi "$(pwd)"
/home/alice/Documents
/home/alice/Documents/data
/home/alice/Documents/data/sql
/home/alice/Documents/images
And to limit the number of subdirectories, you can set the max level of subdirectories with -L level, e.g.:
$ tree -dfi -L 1 "$(pwd)"
/home/alice/Documents
/home/alice/Documents/data
/home/alice/Documents/images
More arguments can be seen with man tree.
In case you're wondering why output from 'ls -d */' gives you two trailing slashes, like:
[prompt]$ ls -d */
app// cgi-bin// lib// pub//
it's probably because somewhere your shell or session configuration files alias the ls command to a version of ls that includes the -F flag. That flag appends a character to each output name (that's not a plain file) indicating the kind of thing it is. So one slash is from matching the pattern '*/', and the other slash is the appended type indicator.
To get rid of this issue, you could of course define a different alias for ls. However, to temporarily bypass the alias, you can prefix the command with a backslash:
\ls -d */
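Two other standard ways to bypass an alias, for the record:
command ls -d */   # "command" skips aliases and shell functions
unalias ls         # or remove the alias for the rest of the session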
Actual ls solution, including symlinks to directories
Many answers here don't actually use ls (or only use it in the trivial sense of ls -d, while using wildcards for the actual subdirectory matching). A true ls solution is useful, since it allows the use of ls options for sorting order, etc.
Excluding symlinks
One solution using ls has been given, but it does something different from the other solutions in that it excludes symlinks to directories:
ls -l | grep '^d'
(possibly piping through sed or awk to isolate the file names)
Including symlinks
In the (probably more common) case that symlinks to directories should be included, we can use the -p option of ls, which makes it append a slash character to names of directories (including symlinked ones):
ls -1p | grep '/$'
or, getting rid of the trailing slashes:
ls -1p | grep '/$' | sed 's/\/$//'
We can add options to ls as needed (if a long listing is used, the -1 is no longer required).
Note: if we want trailing slashes, but don't want them highlighted by grep, we can hackishly remove the highlighting by making the actual matched portion of the line empty:
ls -1p | grep -P '(?=/$)'
For a plain list of the current directory, it'd be:
ls -1d */
If you want it sorted and clean:
ls -1d */ | cut -c 1- | rev | cut -c 2- | rev | sort
Remember: uppercase and lowercase characters sort differently.
I just add this to my .bashrc file (you could also just type it on the command line if you only need/want it for one session):
alias lsd='ls -ld */'
Then lsd will produce the desired result.
Here is what I am using
ls -d1 /Directory/Path/*;
If you don't need hidden directories listed, I offer:
ls -l | grep "^d" | awk -F" " '{print $9}'
And if you do need hidden directories listed, use:
ls -Al | grep "^d" | awk -F" " '{print $9}'
Or
find -maxdepth 1 -type d | awk -F"./" '{print $2}'
For listing only directories:
ls -l | grep ^d
For listing only files:
ls -l | grep -v ^d
Or also you can do as:
ls -ld */
Try this one. It works on all Linux distributions.
ls -ltr | grep drw
ls and awk (without grep)
No need to use grep, since awk can perform regular expression checks itself, so it is enough to do this:
ls -l | awk '/^d/ {print $9}'
where
ls -l lists files with permissions
awk filters the output
/^d/ is a regular expression that matches only lines starting with the letter d (as in directory), looking at the permissions at the start of each line
{print} would print all columns
{print $9} prints only the 9th column (the name) from the ls -l output
Very simple and clean
To show folder lists without /:
ls -d */|sed 's|[/]||g'
I found this solution the most comfortable; I'll add it to the list:
find * -maxdepth 0 -type d
The difference is that it has no ./ at the beginning, and the folder names are ready to use.
Test whether the item is a directory with test -d:
for i in $(ls); do test -d "$i" && echo "$i"; done
FYI, if you want to print all the files on multiple lines, you can do ls -1, which prints each file on a separate line.
file1
file2
file3
*/ is a filename matching pattern that matches directories in the current directory.
To list directories only, I like this function:
# Long list only directories
llod () {
ls -l --color=always "$#" | grep --color=never '^d'
}
Put it in your .bashrc file.
Usage examples:
llod # Long listing of all directories in current directory
llod -tr # Same but in chronological order oldest first
llod -d a* # Limit to directories beginning with letter 'a'
llod -d .* # Limit to hidden directories
Note: it will break if you use the -i option. Here is a fix for that:
# Long list only directories
llod () {
ls -l --color=always "$#" | egrep --color=never '^d|^[[:digit:]]+ d'
}
file * | grep directory
Output (on my machine) --
[root@rhel6 ~]# file * | grep directory
mongo-example-master: directory
nostarch: directory
scriptzz: directory
splunk: directory
testdir: directory
The above output can be refined more by using cut:
file * | grep directory | cut -d':' -f1
mongo-example-master
nostarch
scriptzz
splunk
testdir
* could be replaced with any path that's permitted
file - determine file type
grep - searches for lines containing the string directory
-d - to specify a field delimiter
-f1 - denotes field 1
A one-liner to list only directories from "here", with a file count for each:
for i in `ls -d */`; do g=`find ./$i -type f -print| wc -l`; echo "Directory $i contains $g files."; done
Using Perl:
ls | perl -nle 'print if -d;'
I partially solved it with:
cd "/path/to/pricipal/folder"
for i in $(ls -d .*/); do sudo ln -s "$PWD"/${i%%/} /home/inukaze/${i%%/}; done
 
ln: «/home/inukaze/./.»: cannot overwrite a directory
ln: «/home/inukaze/../..»: cannot overwrite a directory
ln: accessing «/home/inukaze/.config»: too many levels of symbolic links
ln: accessing «/home/inukaze/.disruptive»: too many levels of symbolic links
ln: accessing «/home/inukaze/innovations»: too many levels of symbolic links
ln: accessing «/home/inukaze/sarl»: too many levels of symbolic links
ln: accessing «/home/inukaze/.e_old»: too many levels of symbolic links
ln: accessing «/home/inukaze/.gnome2_private»: too many levels of symbolic links
ln: accessing «/home/inukaze/.gvfs»: too many levels of symbolic links
ln: accessing «/home/inukaze/.kde»: too many levels of symbolic links
ln: accessing «/home/inukaze/.local»: too many levels of symbolic links
ln: accessing «/home/inukaze/.xVideoServiceThief»: too many levels of symbolic links
Well, this solved the major part for me :)
Here is a variation using tree which outputs directory names only on separate lines, yes it's ugly, but hey, it works.
tree -d | grep -E '^[├|└]' | cut -d ' ' -f2
or with awk
tree -d | grep -E '^[├|└]' | awk '{print $2}'
This is probably better, however, and will retain the / after the directory name.
ls -l | awk '/^d/{print $9}'
If you have spaces in your folder names, printing $9 alone won't work; try the command below:
ls -l yourfolder/alldata/ | grep '^d' | awk '{print $9" " $10}'
Output:
testing_Data
Folder 1
To answer the original question, */ has nothing to do with ls per se; it is done by the shell/Bash, in a process known as globbing.
This is why echo */ and ls -d */ output the same elements. (The -d flag makes ls output the directory names and not contents of the directories.)
Adding on to make it full circle: to retrieve the path of every folder, use a combination of Albert's answer and Gordan's. That should be pretty useful.
for i in $(ls -d /pathto/parent/folder/*/); do echo ${i%%/}; done
Output:
/pathto/parent/folder/childfolder1
/pathto/parent/folder/childfolder2
/pathto/parent/folder/childfolder3
/pathto/parent/folder/childfolder4
/pathto/parent/folder/childfolder5
/pathto/parent/folder/childfolder6
/pathto/parent/folder/childfolder7
/pathto/parent/folder/childfolder8
Here is what I use for listing only directory names:
ls -1d /some/folder/*/ | awk -F "/" "{print \$(NF-1)}"

Rename multiple sequentially numbered files and change numbering format using BASH?

I have a bunch of sequentially named files in this format: imageXXX.jpg. So it would be like image001.jpg and onward. I just want to keep the number part of this, and get rid of the prepended 0's. So instead, that file would be named 1.jpg. How could I achieve this using Bash?
Pure Bash:
shopt -s extglob
for f in image*.jpg; do mv "$f" "${f/#image*(0)}"; done
Additional code could check for name collisions or handle other error conditions. You could use mv -i to prompt before files are overwritten.
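A quick dry run (with a couple of assumed filenames) shows what that parameter expansion does; echo prints the mv commands instead of executing them:
$ touch image001.jpg image010.jpg
$ shopt -s extglob
$ for f in image*.jpg; do echo mv "$f" "${f/#image*(0)}"; done
mv image001.jpg 1.jpg
mv image010.jpg 10.jpg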
On Linux the venerable Perl utility rename is friendly:
$ rename 's/^image0+//' image*.jpg
You should be aware that stripping leading zeros will ruin the sort order; that is, *.jpg then orders like:
1.jpg
10.jpg
11.jpg
...
2.jpg
20.jpg
If you want to keep the order, just use
$ rename 's/^image//' image*.jpg
instead.
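If rename isn't available, a pure-Bash sketch of that order-preserving variant (it strips only the image prefix and keeps the zeros):
for f in image*.jpg; do mv "$f" "${f#image}"; done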
added in response to system identification
You can likely script it in bash alone, but it would be non-trivial, and the failure cases really need to be handled correctly. Yeah, hoisting Perl onto a system is non-trivial too, but it is easy, and that wheel has already been invented:
Fedora Core 8 Perl RPM: http://rpm.pbone.net/index.php3/stat/4/idpl/5152898/dir/fedora_8/com/perl-5.8.8-30.n0i.51.fc8.i386.rpm.html
CPAN rename: http://metacpan.org/pod/File::Rename
added in response to silent failure
rename, like chmod, will complain if you give it malformed arguments, but both are silent if what you request has no effect. For example:
$ ls -l junk
-rw-r--r-- 1 msw msw 0 2010-09-24 01:59 junk
$ chmod 688 junk
chmod: invalid mode: '688'
$ chmod 644 junk # was already mode 644; nothing happened, no error
$ rename 's/bob/alice/' ju*k
# there was no 'bob' in 'junk' to substitute, no change, no error
$ ls -l junk
-rw-r--r-- 1 msw msw 0 2010-09-24 01:59 junk
$ rename 's/un/ac/' j*k # but there is an 'un' in 'junk', change it
$ ls -l j*k
-rw-r--r-- 1 msw msw 0 2010-09-24 01:59 jack
You can make rename less silent:
$ rename --verbose 's/ac/er/' j*k
jack renamed as jerk
$ rename --verbose 's/ac/er/' j*k # nothing to rename
$
