I am not that experienced with bash and I have a question.
I am trying to select some files (let's use ls for this example) that do not match a certain array of patterns.
I have files named Test,Test1,Test2,Test3,Test4,Test5,Test6,Test7,Test8,Test9,Test10 that I need to select in two groups.
1st group is Test5,Test6,Test7,Test8,Test9,Test10 which I can list by using:
ls -l T*@(5|6|7|8|9|10) or ls -l T*{5,6,7,8,9,10}
The 2nd group is the tricky one (for me) because of the Test and Test1 files. I am trying to invert the previous selection/listing, or somehow select the rest.
I have tried several things with no luck:
ls -l T*[!5678910]
ls -l T*[!@(5|6|7|8|9|10)]
ls -l T*[!5][!6][!7][!8][!9][!10]
ls -l T*@(1|2|3|4|)
P.S.: the actual filenames have extra characters after the number.
You can invert a pattern like this:
# enable extended glob; without it, the extended patterns below will not work
shopt -s extglob
# create our files
touch {Test,Test1,Test2,Test3,Test4,Test5,Test6,Test7,Test8,Test9,Test10}
# list the files matching the pattern
ls -1 T*@(5|6|7|8|9|10)
Test10
Test5
Test6
Test7
Test8
Test9
# list the files NOT matching the pattern
ls -1 T!(*@(5|6|7|8|9|10))
Test
Test1
Test2
Test3
Test4
You may use an empty string as an option in the list:
ls -l Test{,1,2,3,4}
[EDIT] But in general I don't see a way to do an inverted match in bash alone. Also, I see now that there may be other characters after the numbers (non-numerical, I suppose, or you would have no way to distinguish). I would use `grep`, possibly with the `-v` flag for negation.
ls -1 | grep -v "Test\(\(5\)\|\(6\)\|\(7\)\|\(8\)\|\(9\)\|\(10\)\)"
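The same negation is a bit easier to read with extended regular expressions; a sketch equivalent to the command above:
ls -1 | grep -vE "Test(5|6|7|8|9|10)"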
I'm not sure why it doesn't work with *, but it works with more specific patterns:
ls -l Test@(1|2|3|4|)
ls -l Test?([1-4])
ls -l T+([^0-9])?([1-4])
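Given the P.S. about extra characters after the number, here is a sketch that keeps the inversion while allowing arbitrary suffixes (extglob must be on):
shopt -s extglob
ls -ld Test!(@(5|6|7|8|9|10)*)
The !( ) rejects any name that starts with Test followed by 5 through 10, whatever comes after the number, and keeps everything else.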
If ls is not your only acceptable command, you can also use find with regex to achieve your purpose:
Create all test files:
touch Test{,1,2,3,4,5,6,7,8,9,10}
Find your 1st group of files with regex:
find -type f -regex "\./Test\([5-9]\|10\)"
Reversing is simple: just add a ! before the -regex option:
find -type f ! -regex "\./Test\([5-9]\|10\)"
This works on Linux (GNU find).
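If the real names carry extra characters after the number (as in the P.S.), a sketch that still inverts correctly is to let the regex absorb the tail (GNU find assumed; -regex matches the whole path, so the trailing .* is what permits the suffix):
find . -type f ! -regex "\./Test\([5-9]\|10\).*"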
I have a directory that contains
frame0.png
frame1.png
frame2.png
...
frame20.png
I want to use wildcards such that ls -l shows me the files ordered by the number. I tried
ls -l frame?.png frame??.png
because I thought it would first search for the items with just one digit, order them, and then do the same with two digits, but the output is
frame0.png
frame10.png
frame11.png
...
frame1.png
frame20.png
frame2.png
...
frame9.png
How can I circumvent that bash orders them like that?
If you have GNU utilities, then use the -v option to get a natural sort:
ls -lv frame*.png
If you don't have GNU ls, then try this find + sort combination:
find . -maxdepth 1 -name 'frame*.png' | sort -V
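If you need to feed that order into a loop rather than just look at it, here is a sketch that avoids parsing ls entirely (GNU sort assumed, for -z and -V):
while IFS= read -r -d '' f; do
    printf '%s\n' "$f"   # replace with whatever you want to do per file
done < <(printf '%s\0' frame*.png | sort -zV)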
I have a txt file which contains list of file names
Example:
10.jpg
11.jpg
12.jpeg
...
In a folder, these files should be protected from deletion, and all other files should be deleted.
So I want the opposite logic of this question: Shell command/script to delete files whose names are in a text file
How can I do that?
Use extglob and Bash extended pattern matching !(pattern-list):
!(pattern-list)
Matches anything except one of the given patterns
where a pattern-list is a list of one or more patterns separated by a |.
extglob
If set, the extended pattern matching features described above are enabled.
So for example:
$ ls
10.jpg 11.jpg 12.jpeg 13.jpg 14.jpg 15.jpg 16.jpg a.txt
$ shopt -s extglob
$ shopt | grep extglob
extglob on
$ cat a.txt
10.jpg
11.jpg
12.jpeg
$ tr '\n' '|' < a.txt
10.jpg|11.jpg|12.jpeg|
$ ls !(`tr '\n' '|' < a.txt`)
13.jpg 14.jpg 15.jpg 16.jpg a.txt
According to the example, the files to be deleted are 13.jpg 14.jpg 15.jpg 16.jpg a.txt.
So with extglob and !(pattern-list), we can select the files that are not named in the text file.
Additionally, if you want to exclude the entries starting with ., then you could switch on the dotglob option with shopt -s dotglob.
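Putting it together for the actual deletion, a minimal sketch (assumes extglob is on and that the names in a.txt contain no glob metacharacters; -i is just a safety prompt):
shopt -s extglob
rm -i -- !($(tr '\n' '|' < a.txt))
Directories that don't match the list will make rm complain rather than be removed, which is usually what you want.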
This is one way that will work with bash GLOBIGNORE:
$ cat file2
10.jpg
11.jpg
12.jpg
$ ls *.jpg
10.jpg 11.jpg 12.jpg 13.jpg
$ echo $GLOBIGNORE
$ GLOBIGNORE=$(tr '\n' ':' <file2 )
$ echo $GLOBIGNORE
10.jpg:11.jpg:12.jpg:
$ ls *.jpg
13.jpg
As you can see, globbing ignores whatever (file, pattern, etc.) is listed in the GLOBIGNORE bash variable.
This is why the last ls reports only 13.jpg, since 10.jpg, 11.jpg, and 12.jpg are ignored.
As a result, rm *.jpg will remove only 13.jpg on my system:
$ rm -iv *.jpg
rm: remove regular empty file '13.jpg'? y
removed '13.jpg'
When you are done, you can just set GLOBIGNORE to null:
$ GLOBIGNORE=
It's worth mentioning that GLOBIGNORE can also hold glob patterns instead of single filenames, like *.jpg or my*.mp3, etc.
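If you'd rather not touch your interactive shell's state at all, the whole thing can be confined to a subshell; a sketch:
( GLOBIGNORE=$(tr '\n' ':' < file2); rm -iv -- *.jpg )
When the subshell exits, GLOBIGNORE in your session is untouched.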
Alternative:
We can use other tools (grep, awk, etc.) to compare the file names in the ignore file with the files under the current directory:
$ awk 'NR==FNR{f[$0];next}(!($0 in f))' file2 <(find . -type f -name '*.jpg' -printf '%f\n')
13.jpg
$ rm -iv "$(awk 'NR==FNR{f[$0];next}(!($0 in f))' file2 <(find . -type f -name '*.jpg' -printf '%f\n'))"
rm: remove regular empty file '13.jpg'? y
removed '13.jpg'
Note: This also makes use of bash process substitution, and will break if filenames include new lines.
Another alternative to George Vasiliou's answer would be to read the file with the names of the files to keep using the Bash builtin mapfile and then check for each of the files to be deleted whether it is in that list.
#! /bin/bash -eu
mapfile -t keepthose <keepme.txt
declare -a deletethose
for f in "$#"
do
keep=0
for not in "${keepthose[@]}"
do
[ "${not}" = "${f}" ] && keep=1 || :
done
[ ${keep} -gt 0 ] || deletethose+=("${f}")
done
# Remove the 'echo' if you really want to delete files.
echo rm -f "${deletethose[@]}"
The -t option causes mapfile to trim the trailing newline character from the lines it reads from the file. No other white-space will be trimmed, though. This might be what you want if your file names actually contain white-space but it could also cause subtle surprises if somebody accidentally puts a space before or after the name of an important file they want to keep.
Note that I'm first building a list of the files that should be deleted and then delete them all at once rather than deleting each file individually. This saves some sub-process invocations.
The lookup in the list, as coded above, has linear complexity which gives the overall script quadratic complexity (precisely, N × M where N is the number of command-line arguments and M the number of entries in the keepme.txt file). If you only have a few dozen files, this should be fine. In Bash 4 and later you can do better: associative array keys may be arbitrary strings, so the names from keepme.txt can be used as keys and each membership test becomes a constant-time lookup. If you are concerned with performance for very many files, a more powerful language like Python might still be worth consideration.
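A minimal sketch of that associative-array variant (Bash 4+, same file and variable names as above; like the loop above, it still compares plain strings):
#! /bin/bash -eu
declare -A keep
while IFS= read -r name
do
keep["$name"]=1
done <keepme.txt
declare -a deletethose
for f in "$@"
do
[ -n "${keep[$f]:-}" ] || deletethose+=("${f}")
done
# Remove the 'echo' if you really want to delete files.
echo rm -f "${deletethose[@]}"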
I would also like to mention that the above example simply compares strings. It will not realize that important.txt and ./important.txt are the same file and hence delete the file. It would be more robust to convert the file name to a canonical path using readlink -f before comparing it.
Furthermore, your users might want to be able to put globbing patterns (like important.*) into the list of files to keep. If you want to handle those, extra logic would be required.
Overall, specifying what files to not delete seems a little dangerous as the error is on the bad side.
Provided there are no spaces or special escaped characters in the file names, either of these (or variations of them) should work:
rm -v $(stat -c %n * | sort - excluded_file_list | uniq -u)
stat -c %n * | grep -vf excluded_file_list | xargs rm -v
Is there a bash command which counts the number of files that match a pattern?
For example, I want to get the count of all files in a directory which match this pattern: log*
This simple one-liner should work in any shell, not just bash:
ls -1q log* | wc -l
ls -1q will give you one line per file, even if they contain whitespace or special characters such as newlines.
The output is piped to wc -l, which counts the number of lines.
Lots of answers here, but some don't take into account
file names with spaces, newlines, or control characters in them
file names that start with hyphens (imagine a file called -l)
hidden files that start with a dot (if the glob were *.log instead of log*)
directories that match the glob (e.g. a directory called logs that matches log*)
empty directories (i.e. the result is 0)
extremely large directories (listing them all could exhaust memory)
Here's a solution that handles all of them:
ls 2>/dev/null -Ubad1 -- log* | wc -l
Explanation:
-U causes ls to not sort the entries, meaning it doesn't need to load the entire directory listing in memory
-b prints C-style escapes for nongraphic characters, crucially causing newlines to be printed as \n.
-a prints out all files, even hidden files (not strictly needed when the glob log* implies no hidden files)
-d prints out directories without attempting to list the contents of the directory, which is what ls normally would do
-1 makes sure that it's on one column (ls does this automatically when writing to a pipe, so it's not strictly necessary)
2>/dev/null redirects stderr so that if there are 0 log files, the error message is ignored. (Note that shopt -s nullglob would cause ls to list the entire working directory instead.)
wc -l consumes the directory listing as it's being generated, so the output of ls is never in memory at any point in time.
-- separates the file names from the options, so that a matched name beginning with a hyphen is not treated as an argument to ls
The shell will expand log* to the full list of files, which may exhaust memory if there are a lot of files, so running it through grep is better:
ls -Uba1 | grep ^log | wc -l
This last one handles extremely large directories of files without using a lot of memory (albeit it does use a subshell). The -d is no longer necessary, because it's only listing the contents of the current directory.
You can do this safely (i.e. won't be bugged by files with spaces or \n in their name) with bash:
$ shopt -s nullglob
$ logfiles=(*.log)
echo ${#logfiles[@]}
You need to enable nullglob so that you don't get the literal *.log in the $logfiles array if no files match. (See How to "undo" a 'set -x'? for examples of how to safely reset it.)
For a recursive search:
find . -type f -name '*.log' -printf x | wc -c
wc -c will count the number of characters in the output of find, while -printf x tells find to print a single x for each result. This avoids any problems with files with odd names which contain newlines etc.
For a non-recursive search, do this:
find . -maxdepth 1 -type f -name '*.log' -printf x | wc -c
The accepted answer for this question is wrong, but I have low rep so can't add a comment to it.
The correct answer to this question is given by Mat:
shopt -s nullglob
logfiles=(*.log)
echo ${#logfiles[@]}
The problem with the accepted answer is that wc -l counts the number of newline characters, and counts them even if they print to the terminal as '?' in the output of 'ls -l'. This means that the accepted answer FAILS when a filename contains a newline character. I have tested the suggested command:
ls -l log* | wc -l
and it erroneously reports a value of 2 even if there is only 1 file matching the pattern whose name happens to contain a newline character. For example:
touch log$'\n'def
ls log* -l | wc -l
An important comment
(not enough reputation to comment)
This is BUGGY:
ls -1q some_pattern | wc -l
If shopt -s nullglob happens to be set and nothing matches the pattern, it prints the number of ALL regular files, not 0 (tested on CentOS-8 and Cygwin): the unmatched pattern expands to nothing, so ls runs without arguments and lists the whole directory. Who knows what other surprising behaviors ls has?
This is CORRECT and much faster:
shopt -s nullglob; files=(some_pattern); echo ${#files[@]};
It does the expected job.
And the running times differ.
The 1st: 0.006 s on CentOS and 0.083 s on Cygwin (when used with care).
The 2nd: 0.000 s on CentOS and 0.003 s on Cygwin.
If you have a lot of files and you don't want to use the elegant shopt -s nullglob and bash array solution, you can use find and so on as long as you don't print out the file name (which might contain newlines).
find -maxdepth 1 -name "log*" -not -name ".*" -printf '%i\n' | wc -l
This will find all files that match log* and that don't start with a dot. The -not -name ".*" is redundant here, but it's worth noting that the default for ls is to not show dot-files, while the default for find is to include them.
This is a correct answer, and handles any type of file name you can throw at it, because the file name is never passed around between commands.
But, the shopt nullglob answer is the best answer!
Here is my one liner for this.
file_count=$( shopt -s nullglob ; set -- "$directory_to_search_inside"/* ; echo $# )
You can define such a command easily, using a shell function. This method does not require any external program and does not spawn any child process. It does not attempt hazardous ls parsing and handles “special” characters (whitespaces, newlines, backslashes and so on) just fine. It only relies on the file name expansion mechanism provided by the shell. It is compatible with at least sh, bash and zsh.
The line below defines a function called count which prints the number of arguments with which it has been called.
count() { echo $#; }
Simply call it with the desired pattern:
count log*
For the result to be correct when the globbing pattern has no match, the shell option nullglob (or failglob — which is the default behavior on zsh) must be set at the time expansion happens. It can be set like this:
shopt -s nullglob # for sh / bash
setopt nullglob # for zsh
Depending on what you want to count, you might also be interested in the shell option dotglob.
Unfortunately, with bash at least, it is not easy to set these options locally. If you don’t want to set them globally, the most straightforward solution is to use the function in this more convoluted manner:
( shopt -s nullglob ; shopt -u failglob ; count log* )
If you want to recover the lightweight syntax count log*, or if you really want to avoid spawning a subshell, you may hack something along the lines of:
# sh / bash:
# the alias is expanded before the globbing pattern, so we
# can set required options before the globbing gets expanded,
# and restore them afterwards.
count() {
eval "$_count_saved_shopts"
unset _count_saved_shopts
echo $#
}
alias count='
_count_saved_shopts="$(shopt -p nullglob failglob)"
shopt -s nullglob
shopt -u failglob
count'
As a bonus, this function is of a more general use. For instance:
count a* b* # count files which match either a* or b*
count $(jobs -ps) # count stopped jobs (sh / bash)
By turning the function into a script file (or an equivalent C program), callable from the PATH, it can also be composed with programs such as find and xargs:
find "$FIND_OPTIONS" -exec count {} \+ # count results of a search
You can use the -R option to list the files recursively, including those inside subdirectories:
ls -R | wc -l    # to count all the files
ls -R | grep log | wc -l    # to count the files whose names contain the word log
You can use patterns with grep. Note that ls -R also prints directory headers and blank lines, so the counts are approximate.
I've given this answer a lot of thought, especially given the don't-parse-ls stuff. At first, I tried
<WARNING! DID NOT WORK>
du --inodes --files0-from=<(find . -maxdepth 1 -type f -print0) | awk '{sum+=int($1)}END{print sum}'
</WARNING! DID NOT WORK>
which worked if there was only a filename like
touch $'w\nlf.aa'
but failed if I made a filename like this
touch $'firstline\n3 and some other\n1\n2\texciting\n86stuff.jpg'
I finally came up with what I'm putting below. Note I was trying to get a count of all files in the directory (not including any subdirectories). I think it, along with the answers by @Mat and @Dan_Yard, meets at least most of the requirements set out by @mogsie (I'm not sure about memory). I think the answer by @mogsie is correct, but I always try to stay away from parsing ls unless it's an extremely specific situation.
awk -F"\0" '{print NF-1}' < <(find . -maxdepth 1 -type f -print0) | awk '{sum+=$1}END{print sum}'
More readably:
awk -F"\0" '{print NF-1}' < \
<(find . -maxdepth 1 -type f -print0) | \
awk '{sum+=$1}END{print sum}'
This is doing a find specifically for files, delimiting the output with a null character (to avoid problems with spaces and linefeeds), then counting the null characters. Since find appends exactly one null character to each name, the number of null characters equals the number of files.
To answer the OP's question, there are two cases to consider
1) Non-recursive search:
awk -F"\0" '{print NF-1}' < \
<(find . -maxdepth 1 -type f -name "log*" -print0) | \
awk '{sum+=$1}END{print sum}'
2) Recursive search. Note that what's inside the -name parameter might need to be changed for slightly different behavior (hidden files, etc.).
awk -F"\0" '{print NF-1}' < \
<(find . -type f -name "log*" -print0) | \
awk '{sum+=$1}END{print sum}'
If anyone would like to comment on how these answers compare to those I've mentioned in this answer, please do.
Note, I got to this thought process while getting this answer.
This can be done with standard POSIX shell grammar.
Here is a simple count_entries function:
#!/usr/bin/env sh
count_entries()
{
# Emulating Bash nullglob
# If argument 1 is not an existing entry
if [ ! -e "$1" ]
# argument is a returned pattern
# then shift it out
then shift
fi
echo $#
}
for a compact definition:
count_entries(){ [ ! -e "$1" ]&&shift;echo $#;}
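Example usage (with the log* pattern from the question):
count_entries log*
With matches, it prints how many there are; with none, the unexpanded pattern fails the -e test, gets shifted out, and 0 is printed.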
A full-featured, POSIX-compatible file counter by type:
#!/usr/bin/env sh
count_files()
# Count the file arguments matching the file operator
# Synopsis:
# count_files operator FILE [...]
# Arguments:
# $1: The file operator
# Allowed values:
# -a FILE True if file exists.
# -b FILE True if file is block special.
# -c FILE True if file is character special.
# -d FILE True if file is a directory.
# -e FILE True if file exists.
# -f FILE True if file exists and is a regular file.
# -g FILE True if file is set-group-id.
# -h FILE True if file is a symbolic link.
# -L FILE True if file is a symbolic link.
# -k FILE True if file has its `sticky' bit set.
# -p FILE True if file is a named pipe.
# -r FILE True if file is readable by you.
# -s FILE True if file exists and is not empty.
# -S FILE True if file is a socket.
# -t FD True if FD is opened on a terminal.
# -u FILE True if the file is set-user-id.
# -w FILE True if the file is writable by you.
# -x FILE True if the file is executable by you.
# -O FILE True if the file is effectively owned by you.
# -G FILE True if the file is effectively owned by your group.
# -N FILE True if the file has been modified since it was last read.
# $#: The files arguments
# Output:
# The number of matching files
# Return:
# 1: Unknown file operator
{
operator=$1
shift
case $operator in
-[abcdefghLkprsStuwxOGN])
for arg; do
# If file is not of required type
if ! test "$operator" "$arg"; then
# Shift it out
shift
fi
done
echo $#
;;
*)
printf 'Invalid file operator: %s\n' "$operator" >&2
return 1
;;
esac
}
count_files "$#"
Example usages:
count_files -f log*.txt
count_files -d datadir*
Alternative: count non-directory entries without a loop:
#!/bin/sh
# Creates strings of as many dots as expanded arguments
# dotted string for entries matching star pattern
star=$(printf '%.0s.' ./*)
# dotted string for entries matching star slash pattern (directories)
star_dir=$(printf '%.0s.' ./*/)
# dotted string for entries matching dot star pattern
dot_star=$(printf '%.0s.' ./.*)
# dotted string for entries matching dot star slash pattern (directories)
dot_star_dir=$(printf '%.0s.' ./.*/)
# Print pattern matches count excluding directories matches
printf 'Files count: %d\n' $((
${#star} - ${#star_dir} +
${#dot_star} - ${#dot_star_dir}
))
Here is a generic Bash function you can use in your scripts.
# @see https://stackoverflow.com/a/11307382/430062
function countFiles {
shopt -s nullglob
logfiles=($1)
echo ${#logfiles[@]}
}
FILES_COUNT=$(countFiles "$file-*")
ls -1 log* | wc -l
This lists one file per line and pipes the result to the word-count command wc with the -l switch, which counts lines.
Here's what I always do:
ls log* | awk 'END{print NR}'
To count everything, just pipe ls to wc -l:
ls | wc -l
To count with a pattern, pipe to grep first:
ls | grep log | wc -l
I know **/*.ext expands to all files in all subdirectories matching *.ext, but what is a similar expansion that includes all such files in the current directory as well?
This will work in Bash 4:
ls -l {,**/}*.ext
In order for the double-asterisk glob to work, the globstar option needs to be set (it is off by default):
shopt -s globstar
From man bash:
globstar
If set, the pattern ** used in a filename expansion context will
match all files and zero or more directories and subdirectories.
If the pattern is followed by a /, only directories and
subdirectories match.
Now I'm wondering if there might have once been a bug in globstar processing, because now using simply ls **/*.ext I'm getting correct results.
Regardless, I looked at the analysis kenorb did using the VLC repository and found some problems with that analysis and in my answer immediately above:
The comparisons to the output of the find command are invalid since specifying -type f doesn't include other file types (directories in particular) and the ls commands listed likely do. Also, one of the commands listed, ls -1 {,**/}*.* (which would seem to be based on mine above), only outputs names that include a dot for those files that are in subdirectories. The OP's question and my answer include a dot since what is being sought is files with a specific extension.
Most importantly, however, is that there is a special issue using the ls command with the globstar pattern **. Many duplicates arise since the pattern is expanded by Bash to all file names (and directory names) in the tree being examined. Subsequent to the expansion the ls command lists each of them and their contents if they are directories.
Example:
In our current directory is the subdirectory A and its contents:
A
└── AB
└── ABC
├── ABC1
├── ABC2
└── ABCD
└── ABCD1
In that tree, ** expands to "A A/AB A/AB/ABC A/AB/ABC/ABC1 A/AB/ABC/ABC2 A/AB/ABC/ABCD A/AB/ABC/ABCD/ABCD1" (7 entries). If you do echo ** that's the exact output you'd get and each entry is represented once. However, if you do ls ** it's going to output a listing of each of those entries. So essentially it does ls A followed by ls A/AB, etc., so A/AB gets shown twice. Also, ls is going to set each subdirectory's output apart:
...
<blank line>
directory name:
content-item
content-item
So using wc -l counts all those blank lines and directory name section headings which throws off the count even farther.
This is yet another reason why you should not parse ls.
As a result of this further analysis, I recommend not using the globstar pattern in any circumstance other than iterating over a tree of files in this manner:
for entry in **
do
something "$entry"
done
As a final comparison, I used a Bash source repository I had handy and did this:
shopt -s globstar dotglob
diff <(echo ** | tr ' ' '\n') <(find . | sed 's|\./||' | sort)
0a1
> .
I used tr to change spaces to newlines which is only valid here since no names include spaces. I used sed to remove the leading ./ from each line of output from find. I sorted the output of find since it is normally unsorted and Bash's expansion of globs is already sorted. As you can see, the only output from diff was the current directory . output by find. When I did ls ** | wc -l the output had almost twice as many lines.
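If you just want to see what the glob itself expands to, one entry per line and without ls re-listing directory contents, here is a small sketch:
shopt -s globstar nullglob
printf '%s\n' **/*.ext
nullglob keeps printf from echoing the literal pattern when nothing matches.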
You can use: **/*.* to include all files recursively (enable by: shopt -s globstar).
Here is the behavior of other variations:
Testing in a sample VLC repository folder containing 3472 files:
(The total of 3472 files was counted with: find . -type f | wc -l.)
ls -1 **/*.* - returns 3338
ls -1 {,**/}*.* - returns 3341 (as proposed by Dennis)
ls -1 {,**/}* - returns 8265
ls -1 **/* - returns 7817, except hidden files (as proposed by Dennis)
ls -1 **/{.[^.],}* - returns 7869 (as proposed by Dennis)
ls -1 {,**/}.?* - returns 15855
ls -1 {,**/}.* - returns 20321
So I think the closest method to list all files recursively is the first example (**/*.*), as per gniourf-gniourf's comment (assuming the files have the proper extensions, or use a specific one), as the second example gives a few more duplicates, like below:
$ diff -u <(ls -1 {,**/}*.*) <(ls -1 **/*.*)
--- /dev/fd/63 2015-04-19 15:25:07.000000000 +0100
+++ /dev/fd/62 2015-04-19 15:25:07.000000000 +0100
@@ -1,6 +1,4 @@
COPYING.LIB
-COPYING.LIB
-Makefile.am
Makefile.am
@@ -45,7 +43,6 @@
compat/tdestroy.c
compat/vasprintf.c
configure.ac
-configure.ac
and the others generate even more duplicates.
To include hidden files, use: shopt -s dotglob (disable with shopt -u dotglob). It's not recommended, because it can affect commands such as mv or rm and you can accidentally remove the wrong files.
This will print all files in the current directory and its subdirectories which end in '.ext'.
find . -name '*.ext' -print
Why not just use brace expansion to include the current directory as well?
./{*,**/*}.ext
Brace expansion happens before glob expansion, so this adds the current directory to the recursive pattern in a single word. Note that ** still needs globstar (Bash 4+) to actually recurse; without it, ** behaves like a single *.
Also, it's considered good practice in bash to include the leading ./ in your glob patterns.
$ find . -type f
That will list all of the files in the current directory and its subdirectories. You can then run some other command on the results using -exec:
$ find . -type f -exec grep "foo" {} \;
That will grep each file from the find for the string "foo".
I know you can do it with a find, but is there a way to send the output of ls to mv in the unix command line?
ls is a tool used to DISPLAY some statistics about filenames in a directory.
It is not a tool you should use to enumerate files and pass them to another tool. Parsing ls is almost always the wrong thing to do, and it is buggy in many ways.
For a detailed document on the badness of parsing ls, which you should really go read, check out: http://mywiki.wooledge.org/ParsingLs
Instead, you should use either globs or find, depending on what exactly you're trying to achieve:
mv * /foo
find . -exec mv {} /foo \;
The main source of badness of parsing ls is that ls dumps all filenames into a single string of output, and there is no way to tell the filenames apart from there. For all you know, the entire ls output could be one single filename!
The secondary source of badness of parsing ls comes from the broken way in which half the world uses bash. They think for magically does what they would like it to do when they do something like:
for file in `ls` # Never do this!
for file in $(ls) # Exactly the same thing.
for is a bash builtin that iterates over arguments. And $(ls) takes the output of ls and cuts it apart into arguments wherever there are spaces, newlines or tabs. Which basically means you're iterating over words, not over filenames. Even worse, you're asking bash to take each of those mutilated filename words and then treat them as globs that may match filenames in the current directory. So if you have a filename which contains a word which happens to be a glob that matches other filenames in the current directory, that word will disappear and all those matching filenames will appear in its stead!
mv `ls` /foo # Exact same badness as the ''for'' thing.
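The safe counterpart is to let the shell glob directly and quote the expansion; a sketch for the for-loop case:
for file in ./*
do
# "$file" is one complete filename, whatever characters it contains
mv -- "$file" /foo/
done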
One way is with backticks:
mv `ls *.boo` subdir
Edit: however, this is fragile and not recommended -- see @lhunath's answer for detailed explanations and recommendations.
None of the answers so far are safe for filenames with spaces in them. Try this:
for i in *; do mv "$i" some_dir/; done
You can of course use any glob pattern you like in place of *.
Not exactly sure what you're trying to achieve here, but here's one possibility:
The "xargs" part is the important piece everything else is just setup. The effect of this is to take everything that "ls" outputs and add a ".txt" extension to it.
$ mkdir xxx #
$ cd xxx
$ touch a b c x y z
$ ls
a b c x y z
$ ls | xargs -Ifile mv file file.txt
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
Something like this could also be achieved by:
$ touch a b c x y z
$ for i in `ls`;do mv $i ${i}.txt; done
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
I sort of like the second way better. I can NEVER remember how xargs works without reading the man page or going to my "cute tricks" file.
Hope this helps.
Check out find -exec {}, as it might be a better option than ls, but it depends on what you're trying to achieve.
/bin/ls | tr '\n' '\0' | xargs -0 -i% mv % /path/to/destdir/
"Useless use of ls", but should work. By specifying the full path to ls(1) you avoid clashes with aliasing of ls(1) mentioned in some of the previous posts. The tr(1) command together with "xargs -0" makes the command work with filenames containing (ugh) whitespace. It won't work with filenames containing newlines, but having filenames like that in the file system is to ask for trouble, so it probably won't be a big problem. But filenames with newlines could exist, so a better solution would be to use "find -print0":
find /path/to/srcdir -type f -print0 | xargs -0 -i% mv % dest/
You shouldn't use the output of ls as the input of another command. Files with spaces in their names are difficult to handle, as is the inclusion of ANSI escape sequences if you have:
alias ls='ls --color=always'
for example.
Always use find or xargs (with -0) or globbing.
Also, you didn't say whether you want to move files or rename them. Each would be handled differently.
edit: added -0 to xargs (thanks for the reminder)
Backticks work well, as others have suggested. See xargs, too. And for really complicated stuff, pipe it into sed, make the list of commands you want, then run it again with the output of sed piped into sh.
Here's an example with find, but it works fine with ls, too:
http://github.com/DonBranson/scripts/blob/f09d24629ab6eb3ce509d4d3078818430306b063/jarfinder.sh
#!/bin/bash
for i in $( ls * );
do
mv "$i" /backup/"$i"
done
Otherwise, it's the find solution by sybreon, and, as suggested, NOT the mv `ls` solution.
Just use find or your shell's globbing!
find . -mindepth 1 -maxdepth 1 -exec mv {} /tmp/blah/ \;
..or..
mv * /tmp/blah/
You don't have to worry about colour in the ls output, or other piping strangeness - Linux allows basically any character in a filename except a null byte (and the / separator). For example:
$ touch "blah\new|
> "
$ ls | xargs file
blahnew|: cannot open `blahnew|' (No such file or directory)
..but find works perfectly:
$ find . -exec file {} \;
./blah\new|
: empty
So this answer doesn't send the output of ls to mv, but as @lhunath explained, using ls is almost always the wrong tool for the job. Use shell globs or a find command.
For more complicated cases (often in a script), using bash arrays to build up the argument list from shell globs or find commands can be very useful. One can create an array and push to it with the appropriate conditional logic. This also handles spaces in filenames properly.
For example:
myargs=()
# don't push if the glob does not match anything
shopt -s nullglob
myargs+=(myfiles*)
To push files matching a find to the array: https://stackoverflow.com/a/23357277/430128.
The last argument should be the target location:
myargs+=("Some target directory")
Use myargs in the invocation of a command like mv:
mv "${myargs[#]}"
Note the quoting of the array myargs to pass array elements with spaces correctly.
You surround the ls with back quotes and put it after the mv, so like this...
mv `ls` somewhere/
But keep in mind that if any of your file names have spaces in them it won't work very well.
Also it would be simpler to just do something like this: mv filepattern* somewhere/