How to list only files and not directories of a directory in Bash?

How can I list all the files in a folder, but not its subdirectories or the files inside them? In other words: how can I list only the files?

Using find:
find . -maxdepth 1 -type f
Using the -maxdepth 1 option ensures that you only look in the current directory (or, if you replace the . with some path, that directory). If you want a full recursive listing of all files in that and subdirectories, just remove that option.
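If you need the result in a variable rather than on screen, here is a minimal sketch using mapfile (assumes bash 4.4+ and a find that supports -print0, e.g. GNU or BSD):
# Collect the plain files into a bash array, robust to spaces/newlines in names
mapfile -d '' -t files < <(find . -maxdepth 1 -type f -print0)
printf '%s\n' "${files[@]}"    # entries keep the leading ./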

ls -p | grep -v /
ls -p appends a / to each directory name; grep -v / then filters those entries out.

carlpett's find-based answer (find . -maxdepth 1 -type f) works in principle, but is not quite the same as using ls: you get a potentially unsorted list of filenames all prefixed with ./, and you lose the ability to apply ls's many options;
also find invariably finds hidden items too, whereas ls' behavior depends on the presence or absence of the -a or -A options.
An improvement, suggested by Alex Hall in a comment on the question, is to combine shell globbing with find:
find * -maxdepth 0 -type f # find -L * ... includes symlinks to files
However, while this addresses the prefix problem and gives you alphabetically sorted output, you still have neither (inline) control over inclusion of hidden items nor access to ls's many other sorting / output-format options.
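For instance, a sketch of getting that inline control over hidden items by toggling bash's dotglob option around the glob:
shopt -s dotglob               # make * match hidden (dot) entries too
find * -maxdepth 0 -type f
shopt -u dotglob               # restore the default globbing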
Hans Roggeman's ls + grep answer is pragmatic, but locks you into using long (-l) output format.
To address these limitations I wrote the fls (filtering ls) utility, which provides the output flexibility of ls while adding type-filtering capability: simply place type-filtering characters such as f for files, d for directories, and l for symlinks before a list of ls arguments (run fls --help or fls --man to learn more).
Examples:
fls f # list all files in current dir.
fls d -tA ~ # list dirs. in home dir., including hidden ones, most recent first
fls f^l /usr/local/bin/c* # List matches that are files, but not (^) symlinks (l)
Installation
Supported platforms
When installing from the npm registry: Linux and macOS
When installing manually: any Unix-like platform with Bash
From the npm registry
Note: Even if you don't use Node.js, its package manager, npm, works across platforms and is easy to install; try
curl -L https://git.io/n-install | bash
With Node.js installed, install as follows:
[sudo] npm install fls -g
Note:
Whether you need sudo depends on how you installed Node.js / io.js and whether you've changed permissions later; if you get an EACCES error, try again with sudo.
The -g ensures global installation and is needed to put fls in your system's $PATH.
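To confirm that the installation put fls on your $PATH, a generic shell check (nothing fls-specific) is:
command -v fls    # should print the path of the installed command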
Manual installation
Download this bash script as fls.
Make it executable with chmod +x fls.
Move it or symlink it to a folder in your $PATH, such as /usr/local/bin (macOS) or /usr/bin (Linux).

Listing content of some directory, without subdirectories
I like using ls options, for example:
-l use a long listing format
-t sort by modification time, newest first
-r reverse order while sorting
-F, --classify append indicator (one of */=>#|) to entries
-h, --human-readable with -l and -s, print sizes like 1K 234M 2G etc...
Sometimes --color, and all the others (see ls --help).
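Put together, an illustrative invocation might look like this (mix and match to taste):
ls -lhtrF --color    # long, human-readable, oldest first, with type indicators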
Listing everything but folders
This will show files, symlinks, devices, pipes, sockets, etc.:
find /some/path -maxdepth 1 ! -type d
This can easily be sorted by date:
find /some/path -maxdepth 1 ! -type d -exec ls -hltrF {} +
Or, listing regular files only:
find /some/path -maxdepth 1 -type f
sorted by size:
find /some/path -maxdepth 1 -type f -exec ls -lSF --color {} +
Prevent listing of hidden entries:
To skip hidden entries, whose names begin with a dot, you can add ! -name '.*':
find /some/path -maxdepth 1 ! -type d ! -name '.*' -exec ls -hltrF {} +
Finally
You can replace /some/path with . to list the current directory, or with .. for the parent directory.

You can also use ls with grep or egrep and put it in your profile as an alias:
ls -l | egrep -v '^d'
ls -l | grep -v '^d'
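For example, a minimal sketch of such an alias in ~/.bashrc (the name lsf is just an illustration):
alias lsf="ls -l | grep -v '^d'"    # long listing without directories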

find files: ls -l /home | grep "^-" | tr -s ' ' | cut -d ' ' -f 9
find directories: ls -l /home | grep "^d" | tr -s ' ' | cut -d ' ' -f 9
find links: ls -l /home | grep "^l" | tr -s ' ' | cut -d ' ' -f 9
tr -s ' ' squeezes runs of spaces, turning the output into a space-delimited stream
the cut command then treats the space as the delimiter and returns the 9th field (normally the file/directory/link name).
I use this all the time!
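Bear in mind that field 9 breaks for filenames containing spaces; if that is a concern, a sketch using GNU find avoids parsing ls output entirely:
find /home -maxdepth 1 -type f -printf '%f\n'    # names only, spaces are fine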

You are welcome!
ls -l | grep '^-'
Looking just for the name, pipe to cut or awk.
ls -l | grep '^-' | awk '{print $9}'
ls -l | grep '^-' | cut -d " " -f 13

{ find . -maxdepth 1 -type f | xargs ls -1t | less; }
I added xargs to make it work, and used -1 instead of -l to show only the filenames without the additional ls info.

You can use one of these (note that *.* only matches names containing a dot):
echo *.* | cut -d ' ' -f 1- --output-delimiter=$'\n'
echo *.* | tr ' ' '\n'
echo *.* | sed 's/\s\+/\n/g'
ls -Ap | sort | grep -v /

This method does not use external commands.
bash$ res=$( IFS=$'\n'; AA=(`compgen -d`); IFS='|'; eval compgen -f -X '#("${AA[*]}")' )
bash$ echo "$res"
. . .

Just adding on to carlpett's answer.
For a more useful view of the files, you can pipe the output to ls via xargs (piping directly into ls would not work, since ls ignores its stdin):
find . -maxdepth 1 -type f | xargs ls -lt | less
Shows the most recently modified files in a list format, quite useful when you have downloaded a lot of files, and want to see a non-cluttered version of the recent ones.

"find '-maxdepth' " does not work with my old version of bash, therefore I use:
for f in $(ls) ; do if [ -f $f ] ; then echo $f ; fi ; done

Related

"find | xargs | ls" not running ls on filenames from find

So I have a directory with files and sub-directories in it. I want to get all the files recursively and then list them in long format, sorted by the modified date. Here's what I came up with.
find . -type f | xargs -d "\n" | ls -lt
However this only lists the files in the current directory and not the sub-directories. I don't understand why, given that the following prints out all the files.
find . -type f | xargs -d "\n" | cat
Any help appreciated.
xargs can only start ls if it's passed ls as an argument. When you pipe from xargs into ls, only one copy of ls is started -- by the parent shell -- and it isn't given any of the filenames from find | xargs as arguments -- instead they're on its stdin, but ls never reads its stdin, so it doesn't even know that they're there.
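A quick illustration of the difference:
printf '%s\n' /etc/hosts | ls          # ls ignores stdin: lists the current directory
printf '%s\n' /etc/hosts | xargs ls    # xargs turns stdin into arguments: lists /etc/hosts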
Thus, you need to remove the | character:
# Does what you specified in the common case, but buggy; don't use this
# (filenames can contain newlines!)
# ...also, xargs -d is GNU-only
find . -type f | xargs -d '\n' ls -lt
...or, better:
# uses NUL separators, which cannot exist inside filenames
# also, while a non-POSIX extension, this is supported in both GNU and BSD xargs
find . -type f -print0 | xargs -0 ls -lt
...or, even better than that:
# no need for xargs at all here; find -exec can do the same thing
# -exec ... {} + is POSIX-mandated functionality since 2008
find . -type f -exec ls -lt {} +
Much of the content in this answer is also covered in the Actions, Complex Actions, and Actions in Bulk sections of Using Find, which is well worth reading.

Argument list too long for ls while moving files from one dir to other in bash shell

Below is the command I am using for moving files from dir a to dir b
ls /<someloc>/a/* | tail -2000 | xargs -I{} mv {} /<someloc>/b/
-bash: /usr/bin/ls: Argument list too long
Folder a has millions of files.
Need your help to fix this, please.
If the locations of both directories are on the same disk/partition and folder b is originally empty, you can do the following
$ rmdir /path/to/b
$ mv /other/path/to/a /path/to/b
$ mkdir /other/path/to/a
If folder b is not empty, then you can do something like this:
find /path/to/a/ -type f -exec mv -t /path/to/b {} +
If you just want to move 2000 files, you can do
find /path/to/a/ -type f -print | tail -2000 | xargs mv -t /path/to/b
But this can be problematic with some filenames. A cleaner way is to use find's -print0, but the problem is that head and tail can't process NUL-delimited input, so you have to use awk for this.
# first 2000 files (mimics head)
find /path/to/a -type f -print0 \
| awk 'BEGIN{RS=ORS="\0"}(NR<=2000)' \
| xargs -0 mv -t /path/to/b
# last 2000 files (mimics tail, using a 2000-entry ring buffer)
find /path/to/a -type f -print0 \
| awk 'BEGIN{RS=ORS="\0"}{a[NR%2000]=$0}END{for(i=NR-1999;i<=NR;++i)if(i>0)print a[i%2000]}' \
| xargs -0 mv -t /path/to/b
The ls in the code in the question does nothing useful. The glob (/<someloc>/a/*) produces a sorted list of files, and ls just copies it (after re-sorting it), if it works at all. See “Argument list too long”: How do I deal with it, without changing my command? for the reason why ls is failing.
One way to make the code work is to replace ls with printf:
printf '%s\n' /<someloc>/a/* | tail -2000 | xargs -I{} mv {} /<someloc>/b/
printf is a Bash builtin, so running it doesn't involve executing an external program, and the "Argument list too long" problem doesn't occur.
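If you are curious, the limit that triggers the error applies to the execve system call (i.e. only when launching external programs) and can be inspected with getconf:
getconf ARG_MAX    # maximum combined size of arguments and environment, in bytes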
This code will still fail if any of the files contains a newline character in its name. See the answer by kvantour for alternatives that are not vulnerable to this problem.

How to use "grep" command to list all the files executable by user in current directory?

My command was this:
ls -l|grep "\-[r,-][w,-]x*"|tr -s " " | cut -d" " -f9
but as a result I get all the files, not only the ones the user has the right to execute (the owner's x bit set).
I'm running Linux (Ubuntu).
You can use find with the -perm option:
find . -maxdepth 1 -type f -perm -u+x
OK -- if you MUST use grep:
ls -l | grep '^[^d]..[sx]' | awk '{ print $9 }'
Don't use grep. If you want to know if a file is executable, use test -x. To check all files in the current directory, use find or a for loop:
for f in *; do test -f "$f" -a -x "$f" && echo "$f"; done
or
find . -maxdepth 1 -type f -exec test -x {} \; -print
Use awk with match
ls -l | awk 'match($1,/^...x/) {print $9}'
match($1,/^...x/) matches the first field against the regular expression ^...x, i.e. it selects lines where the owner permissions end in x.

Script to count number of files in each directory

I need to count the number of files on a large number of directories. Is there an easy way to do this with a shell script (using find, wc, sed, awk or similar)? Just to avoid having to write a proper script in python.
The output would be something like this:
$ <magic_command>
dir1 2
dir2 12
dir3 5
The number after the dir name would be the number of files. A plus would be able to turn counting of dot/hidden files on and off.
Thanks!
Try this one:
du -a | cut -d/ -f2 | sort | uniq -c | sort -nr
from http://www.linuxquestions.org/questions/linux-newbie-8/how-to-find-the-total-number-of-files-in-a-folder-510009/#post3466477
find <dir> -type f | wc -l
find -type f lists all files under the specified directory, one per line; wc -l counts the number of lines read from stdin.
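One caveat: filenames containing newlines inflate that count. A sketch that sidesteps this by counting NUL delimiters instead (GNU tools assumed):
find <dir> -type f -print0 | tr -dc '\0' | wc -c    # one NUL byte per file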
Also for future reference: answers like this are a google away.
More or less what I was looking for:
find . -type d -exec sh -c 'echo "$1" "$(ls "$1" | wc -l)"' sh {} \;
Try ls | wc -l: ls lists the entries in your directory, one per line, and wc -l counts them.
One way like this:
$ for dir in $(find . -type d )
> do
> echo $dir $(ls -A $dir | wc -l )
> done
Just remove the -A option if you do not want to count hidden files.
find . -type d | xargs ls -1 | perl -lne 'if(/^\./ || eof){print $a." ".$count;$a=$_;$count=-1}else{$count++}'
Below is a test:
> find . -type d
.
./SunWS_cache
./wicked
./wicked/segvhandler
./test
./test/test2
./test/tempdir.
./signal_handlers
./signal_handlers/part2
> find . -type d | xargs ls -1 | perl -lne 'if(/^\./ || eof){print $a." ".$count;$a=$_;$count=-1}else{$count++}'
.: 79
./SunWS_cache: 4
./signal_handlers: 6
./signal_handlers/part2: 5
./test: 6
./test/tempdir.: 0
./test/test2: 0
./wicked: 4
./wicked/segvhandler: 9
A generic version of Mehdi Karamosly's solution to list the folders of any directory without changing the current directory:
DIR=~/test/ sh -c 'cd $DIR; du -a | cut -d/ -f2 | sort | uniq -c | sort -nr'
Explanation:
Extract directory into variable
Start new shell
Change directory in that shell, so that the current shell's directory stays the same
Process
I use these functions:
nf()(for d;do echo $(ls -A -- "$d"|wc -l) "$d";done)
nfr()(for d;do echo $(find "$d" -mindepth 1|wc -l) "$d";done)
Both assume that filenames don't contain newlines.
Here are bash-only versions:
nf()(shopt -s nullglob dotglob;for d;do a=("$d"/*);echo "${#a[@]} $d";done)
nfr()(shopt -s nullglob dotglob globstar;for d;do a=("$d"/**);echo "${#a[@]} $d";done)
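Usage is illustrative:
nf Documents Downloads    # one line per directory: count, then name
nfr .                     # recursive count for the current directory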
I liked the output from the du based answer, but when I was looking at a large filesystem it was taking ages, so I put together a small ls based script which gives the same output, but much quicker:
for dir in `ls -1A ~/test/`;
do
echo "$dir `ls -R1Ap ~/test/$dir | grep -Ev "[/:]|^\s*$" | wc -l`"
done
You can also redirect the output of the ls command to a text file and then count the number of lines in that file:
ls $LOCATION > outText.txt; NUM_FILES=$(wc -l < outText.txt); echo $NUM_FILES
find -type f -printf '%h\n' | sort | uniq -c | sort -n

Get the newest directory to a variable in Bash

I would like to find the newest sub directory in a directory and save the result to variable in bash.
Something like this:
ls -t /backups | head -1 > $BACKUPDIR
Can anyone help?
BACKUPDIR=$(ls -td /backups/*/ | head -1)
$(...) evaluates the statement in a subshell and returns the output.
There is a simple solution to this using only ls:
BACKUPDIR=$(ls -td /backups/*/ | head -1)
-t orders by time (latest first)
-d lists the directories themselves rather than their contents
*/ only lists directories
head -1 returns the first item
I didn't know about */ until I found Listing only directories using ls in bash: An examination.
This is a pure Bash solution:
topdir=/backups
BACKUPDIR=
# Handle subdirectories beginning with '.', and empty $topdir
shopt -s dotglob nullglob
for file in "$topdir"/* ; do
[[ -L $file || ! -d $file ]] && continue
[[ -z $BACKUPDIR || $file -nt $BACKUPDIR ]] && BACKUPDIR=$file
done
printf 'BACKUPDIR=%q\n' "$BACKUPDIR"
It skips symlinks, including symlinks to directories, which may or may not be the right thing to do. It skips other non-directories. It handles directories whose names contain any characters, including newlines and leading dots.
Well, I think this solution is the most efficient:
path="/my/dir/structure/*"
backupdir=$(find $path -type d -prune | tail -n 1)
Explanation why this is a little better:
We do not need sub-shells (aside from the one for getting the result into the bash variable).
We do not need a useless -exec ls -d at the end of the find command; find already prints the paths itself.
We can easily alter this, e.g. to exclude certain patterns. For example, if you want the second newest directory, because backup files are first written to a tmp dir in the same path:
backupdir=$(find $path -type d -prune -not -name "*temp_dir" | tail -n 1)
The above solution doesn't take into account things like files being written to and removed from the directory, which can result in the parent directory being returned instead of the newest subdirectory.
Another issue is that this solution assumes that the directory contains only other directories and no files being written.
Let's say I create a file called "test.txt" and then run this command again:
echo "test" > test.txt
ls -t /backups | head -1
test.txt
The result is test.txt showing up instead of the last modified directory.
The proposed solution "works" but only in the best case scenario.
Assuming you have a maximum of 1 directory depth, a better solution is to use:
find /backups/* -type d -prune -exec ls -d {} \; |tail -1
Just swap the "/backups/" portion for your actual path.
If you want to avoid showing an absolute path in a bash script, you could always use something like this:
LOCALPATH=/backups
DIRECTORY=$(cd $LOCALPATH; find * -type d -prune -exec ls -d {} \; |tail -1)
With GNU find you can get list of directories with modification timestamps, sort that list and output the newest:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\0" | sort -z -n | cut -z -f2- | tail -z -n1
or newline-separated:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\n" | sort -n | cut -f2- | tail -n1
With POSIX find (that does not have -printf) you may, if you have it, run stat to get file modification timestamp:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -c '%Y %n' {} \; | sort -n | cut -d' ' -f2- | tail -n1
Without stat, a pure-shell solution may be used by replacing the [[ bash extension with [, as in the answer above.
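A sketch of that pure-shell fallback (the -nt test is an extension to POSIX test, but widely supported):
newest=
for d in ./*/; do
  [ -d "$d" ] || continue                               # skip if the glob matched nothing
  if [ -z "$newest" ] || [ "$d" -nt "$newest" ]; then
    newest=$d
  fi
done
printf '%s\n' "$newest"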
Your "something like this" was almost a hit:
BACKUPDIR=$(ls -t ./backups | head -1)
Combining what you wrote with what I have learned solved my problem too. Thank you for raising this question.
Note: I run the line above from Git Bash in a Windows environment, in a file called ./something.bash.
