how to delete a file with a quote in the file name - bash

When I do ls in my directory, I get a bunch of these:
data.log".2015-01-22"
data.log".2015-01-23"
However, when I try to remove one, the shell strips the double quotes before rm ever sees them:
$ rm data.log".2015-01-22"
rm: cannot remove `data.log.2015-01-22': No such file or directory
If I could somehow do something like ls | escape quotes | xargs rm
So yeah, how do I remove these files containing "?
Update
While most answers work, I was actually trying to do this:
ls | rm
So it was failing for some files. How can I escape a quote in a pipe after ls? Most of the answers actually address manual manipulation of the files, which works, but I was asking about escaping/replacing the quotes after the ls. Sorry if my question was confusing.

If you only need to do this once in a while interactively, use
rm -i -- *
and answer y or n as appropriate. This can be used to get rid of many files having funny characters in their name.
It has the advantage of not needing to type/escape funny characters, blanks, etc, since the shell globbing with * does that for you. It is also as short as it gets, so easy to memorize.
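A hypothetical session (file names invented for illustration) might look like:
$ rm -i -- *
rm: remove regular file 'data.log".2015-01-22"'? y
rm: remove regular file 'keep.me'? n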

Use single quotes to quote the double quotes, or backslash:
rm data.log'"'*
rm data.log\"*
Otherwise, double quotes are interpreted by the shell and removed from the string.
Answer to the updated question:
Don't process the output of ls. Filenames can contain spaces, newlines, etc.
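For the updated question, the pipeline-free equivalent of ls | escape quotes | xargs rm is simply a glob, since the shell matches the double quote like any other character; a minimal sketch:
printf '%s\n' *\"*   # preview every file whose name contains a double quote
rm -- *\"*           # then remove them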

You could do it like this:
find . -type f -name '*"*' -exec rm {} +

If you have a particularly stubborn file whose name contains all kinds of characters, for example:
''$'\336\017\200\336\024\b''J'$'\336\017\220\336\024\b''V'$'\336\017\240\336\024\b''^'$'\336\017\260\336\024\b''f'$'\336\017\300\336\024\b''p'$'\336\017\320\336\024\b''y'$'\336\017\340\336\024\b\202\336\017\360\336\024\b\212\336\017\337\024\b\223\336\017\020\337\024\b\234\336\017'
If it were me, I'd forget dealing with interpolation of strings (unless you are really good at using special characters) and use the file index number, or inode.
For example, assume your culprit is in the current working directory, then the following procedure would work:
# visually inspect list of files with index numbers (inodes) for the culprit
ls -i .
# remove the culprit by inode
find . -maxdepth 1 -inum <inode> -exec rm {} \;
and be done with it.
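For illustration, a made-up session (the inode number here is invented):
$ ls -i .
393273 data.log".2015-01-22"   393280 notes.txt
$ find . -maxdepth 1 -inum 393273 -exec rm {} \;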

Escape the quote with single quotes
$ touch '" and spaces also "'
$ ls
" and spaces also "
$ rm '" and spaces also "'
$ ls
$
In your case:
$ rm 'data.log".2015-01-22"' 'data.log".2015-01-23"'

First suggestion: modify the tool that creates file names with quotes in them... :)
Failing that, try a little wildcard magic with your tool of choice; I would use tr:
ls | escape quotes | xargs rm ## becomes something like:
ls | tr "[\",']" '?' | xargs rm ## does not work on my system but this does:
rm -v $(ls *sing* *doub* | tr "[\",']" '?')
Output is:
removed `"""double"""'
removed `\'\'\'single\'\'\''
The test files were created like this:
$ touch "'''single'''" '"""double"""'
$ ls -l *sing* *doub*
-rw-rw-r-- 1 dale dale 0 Feb 15 09:48 """double"""
-rw-rw-r-- 1 dale dale 0 Feb 15 09:48 '''single'''
If your patterns are consistent, the other way might be to simplify:
$ rm -v *sing* *doub*
removed `\'\'\'single\'\'\''
removed `"""double"""'
For your example:
rm -v data.*${YEAR}-${MONTH}-${DAY}* ## data.log".2015-01-22" OR
rm -v data.*${YEAR}-${MONTH}-${DAY}? ## data.log".2015-01-22"

ls | rm
doesn't work because rm gets the files to remove from the command line arguments, not from standard input. You can use xargs to translate standard input to arguments.
ls | xargs -d '\n' rm
But to just delete the files you want, quote the part of the name containing the single quote:
rm 'data.log"'.*


How to look for files that have an extra character at the end?

I have a strange situation. A group of folks asked me to look at their hacked Wordpress site. When I got in, I noticed there were extra files here and there that had an extra non-printable character at the end. Bash shows it as a \r.
Just next to these files with the weird character is the original file. I'm trying to locate all these suspicious files and delete them. But the correct Bash incantation is eluding me.
find . | grep -i \?
and
find . | grep -i '\r'
aren't working
How do I use bash to find them?
Remove all files with filename ending in \r (carriage return), recursively, in current directory:
find . -type f -name $'*\r' -exec rm -fv {} +
Use ls -lh instead of rm to view the file list without removing.
Use rm -fvi to prompt before each removal.
-name GLOB specifies a matching glob pattern for find.
$'\r' is bash syntax for C style escapes.
You said "non-printable character", but ls indicates it's specifically a carriage return. The pattern '*[^[:graph:]]' matches filenames ending in any non-printable character, which may be relevant.
To remove all files and directories matching $'*\r' and all contents recursively: find . -name $'*\r' -exec rm -rfv {} +.
You have to pass carriage return character literally to grep. Use ANSI-C quoting in Bash.
find . -name $'*\r'
find . | grep $'\r'
find . | sed '/\x0d/!d'
If it is a special character:
Recursive look up
grep -ir $'\r'
# sample output
# empty line
Recursive look up + just print file name
grep -lir $'\r'
# sample output
file.txt
If it is not a special character (a literal backslash followed by r):
You need to escape the backslash \ with a backslash so it becomes \\
Recursive look up
grep -ir '\\r$'
# sample output
file.txt:file.php\r
Recursive look up + just print file name
grep -lir '\\r$'
# sample output
file.txt
help:
-i    case insensitive
-r    recursive mode
-l    print only the file name
\\    a backslash escaping another backslash, matching a literal \
$     matches the end of the line
$''   ANSI-C quoting; the value is a special character, e.g. \r, \t
shopt -s globstar # Enable **
shopt -s dotglob # Also cover hidden files
offending_files=(**/*$'\r')
should store into the array offending_files a list of all files which are compromised in that way. Of course you could also glob for **/*$'\r'*, which searches for all files having a carriage return anywhere in the name (not necessarily at the end).
You can then log the name of those broken files (which might make sense for auditing) and remove them.
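A minimal sketch of that last step (the log file name is just an example):
printf '%s\n' "${offending_files[@]}" >> removed-files.log
rm -v -- "${offending_files[@]}"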

Shell script to delete files whose names are not in a text file

I have a txt file which contains a list of file names
Example:
10.jpg
11.jpg
12.jpeg
...
In a folder, these files should be protected from deletion, and all other files should be deleted.
So I want the opposite logic of this question: Shell command/script to delete files whose names are in a text file
How to do that?
Use extglob and Bash extended pattern matching !(pattern-list):
!(pattern-list)
Matches anything except one of the given patterns
where a pattern-list is a list of one or more patterns separated by a |.
extglob
If set, the extended pattern matching features described above are enabled.
So for example:
$ ls
10.jpg 11.jpg 12.jpeg 13.jpg 14.jpg 15.jpg 16.jpg a.txt
$ shopt -s extglob
$ shopt | grep extglob
extglob on
$ cat a.txt
10.jpg
11.jpg
12.jpeg
$ tr '\n' '|' < a.txt
10.jpg|11.jpg|12.jpeg|
$ ls !(`tr '\n' '|' < a.txt`)
13.jpg 14.jpg 15.jpg 16.jpg a.txt
So running rm on that pattern would delete 13.jpg 14.jpg 15.jpg 16.jpg and a.txt in this example (note that a.txt itself is only protected if you add it to the list).
So with extglob and !(pattern-list), we can match exactly the files that are not listed in the text file.
Additionally, if you want to exclude the entries starting with ., then you could switch on the dotglob option with shopt -s dotglob.
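Putting it all together, a sketch of the actual deletion (the trailing | that tr produces adds an empty alternative to the pattern, which no real filename can match, so it is harmless):
shopt -s extglob
rm -- !($(tr '\n' '|' < a.txt))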
This is one way that will work with bash GLOBIGNORE:
$ cat file2
10.jpg
11.jpg
12.jpg
$ ls *.jpg
10.jpg 11.jpg 12.jpg 13.jpg
$ echo $GLOBIGNORE
$ GLOBIGNORE=$(tr '\n' ':' <file2 )
$ echo $GLOBIGNORE
10.jpg:11.jpg:12.jpg:
$ ls *.jpg
13.jpg
As is obvious, globbing ignores whatever (file, pattern, etc.) is included in the GLOBIGNORE bash variable.
This is why the last ls reports only 13.jpg: files 10, 11 and 12.jpg are ignored.
As a result using rm *.jpg will remove only 13.jpg in my system:
$ rm -iv *.jpg
rm: remove regular empty file '13.jpg'? y
removed '13.jpg'
When you are done, you can just set GLOBIGNORE to null:
$ GLOBIGNORE=
It's worth mentioning that GLOBIGNORE can also hold glob patterns instead of single filenames, like *.jpg or my*.mp3, etc.
Alternative :
We can use programming techniques (grep, awk, etc) to compare the file names present in ignorefile and the files under current directory:
$ awk 'NR==FNR{f[$0];next}(!($0 in f))' file2 <(find . -type f -name '*.jpg' -printf '%f\n')
13.jpg
$ rm -iv "$(awk 'NR==FNR{f[$0];next}(!($0 in f))' file2 <(find . -type f -name '*.jpg' -printf '%f\n'))"
rm: remove regular empty file '13.jpg'? y
removed '13.jpg'
Note: This also makes use of bash process substitution, and will break if filenames include new lines.
Another alternative to George Vasiliou's answer would be to read the file with the names of the files to keep using the Bash builtin mapfile and then check for each of the files to be deleted whether it is in that list.
#! /bin/bash -eu
mapfile -t keepthose <keepme.txt
declare -a deletethose
for f in "$@"
do
    keep=0
    for not in "${keepthose[@]}"
    do
        [ "${not}" = "${f}" ] && keep=1 || :
    done
    [ ${keep} -gt 0 ] || deletethose+=("${f}")
done
# Remove the 'echo' if you really want to delete files.
echo rm -f "${deletethose[@]}"
The -t option causes mapfile to trim the trailing newline character from the lines it reads from the file. No other white-space will be trimmed, though. This might be what you want if your file names actually contain white-space but it could also cause subtle surprises if somebody accidentally puts a space before or after the name of an important file they want to keep.
Note that I'm first building a list of the files that should be deleted and then delete them all at once rather than deleting each file individually. This saves some sub-process invocations.
The lookup in the list, as coded above, has linear complexity, which gives the overall script quadratic complexity (precisely, N × M where N is the number of command-line arguments and M the number of entries in the keepme.txt file). If you only have a few dozen files, this should be fine. In Bash 4 and later, an associative array gives constant-time membership tests (keys may be arbitrary strings, not just identifiers); see the sketch below. If you are concerned with performance for many files, a more powerful language like Python might still be worth consideration.
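A minimal sketch of that Bash 4 variant, assuming the same keepme.txt and argument convention as the script above:
declare -A keep
while IFS= read -r name; do
    [ -n "${name}" ] || continue   # skip blank lines; an empty subscript is an error
    keep["${name}"]=1
done <keepme.txt
declare -a deletethose
for f in "$@"; do
    [ -n "${keep[$f]:-}" ] || deletethose+=("$f")
done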
I would also like to mention that the above example simply compares strings. It will not realize that important.txt and ./important.txt are the same file, and would hence delete it. It would be more robust to convert each file name to a canonical path using readlink -f before comparing.
Furthermore, your users might want to be able to put globbing patterns (like important.*) into the list of files to keep. If you want to handle those, extra logic would be required.
Overall, specifying what files to not delete seems a little dangerous as the error is on the bad side.
Provided there are no spaces or special characters in the file names, either of these (or variations of these) would work:
rm -v $(stat -c %n * | sort - excluded_file_list | uniq -u)
stat -c %n * | grep -vf excluded_file_list | xargs rm -v

Why am I getting some extra, weird characters when making a file from grep output?

I am doing a very basic command that never gave me trouble in the past, but is inexplicably returning undesired characters now.
I am in BASH on linux, and simply want to search through a directory and make a file containing filenames that match a pattern:
ls | grep "*.file_ID" > my_list.txt
...This works fine, and if I cat the data:
cat my_list.txt
seriesA.file_ID
seriesB.file_ID
seriesC.file_ID
However, when I try to feed this file into downstream processes, I keep getting weird errors, as if the file isn't properly formatted as a list of file names. When I open the file in vim to reveal any unnecessary characters, I find the file actually looks like this:
vi my_list.txt
^[[00mseriesA.file_ID^[[00m
^[[00mseriesB.file_ID^[[00m
^[[00mseriesC.file_ID^[[00m
For some reason, every line is started and ended with the characters ^[[00m. If I delete these characters, all of the downstream processes work fine. However, I need to have my scripts automatically make such a file list, so I can't keep going in and manually deleting these chars.
Does anyone know what is producing the ^[[00m characters? I don't have any idea where they are coming from, and need to be able to generate files without them.
Thanks!
Probably your GREP_OPTIONS environment variable contains --color=always, which causes the output to be stuffed with control characters, even when piped to a file.
Use --color=auto instead.
http://www.gnu.org/software/grep/manual/html_node/Environment-Variables.html
Even better, don't use grep:
ls *.file_ID > my_list.txt
Usually grep is given --color=auto automatically.
If its output piped into a csv file looks like this: ^[[00...^[[00m
you would have to type this in the terminal:
grep --color=auto "your regex" > example.csv
If you want it to be a permanent situation, where you do not have to type --color=auto every time, type this in the terminal:
export GREP_OPTIONS='--color=auto'
more info:
https://linuxcommando.blogspot.com/2007/10/grep-with-color-output.html
Don't use ls:
printf "%s\n" *.file_ID > my_list.txt
This should take care of it (assuming GNU find and no directory traversing):
find . -maxdepth 1 -type f -name "*.file_ID" -printf "%f\n" > my_list.txt
Example:
~> ls *file_ID*
a.file_ID b.file_ID c.file_ID
~> find . -maxdepth 1 -type f -name "*.file_ID" -printf "%f\n" > my_list.txt
~> cat my_list.txt
a.file_ID
b.file_ID
c.file_ID
As far as the "^[[00m" characters, check your ls options:
~> alias -p | grep "ls="
You may get something like:
alias ls='/bin/ls $LS_OPTIONS'
If so, check env for this:
~> env | grep LS_OP
LS_OPTIONS=-N --color=tty -T 0
The character string you're referencing is used to turn off colors, so your shell likely has been set to show colors. Removing and/or changing the ls alias should resolve it.
The weird characters such as ^[[00m are escape characters for colorizing the output. Color output for ls is most likely enabled through an alias in your environment.
To avoid getting these color characters, you can try disabling the ls alias temporarily with a backslash:
\ls *.txt
Or you can use printf command instead.
printf "%s\n" *.txt

Using sed to mass rename files

Objective
Change these filenames:
F00001-0708-RG-biasliuyda
F00001-0708-CS-akgdlaul
F00001-0708-VF-hioulgigl
to these filenames:
F0001-0708-RG-biasliuyda
F0001-0708-CS-akgdlaul
F0001-0708-VF-hioulgigl
Shell Code
To test:
ls F00001-0708-*|sed 's/\(.\).\(.*\)/mv & \1\2/'
To perform:
ls F00001-0708-*|sed 's/\(.\).\(.*\)/mv & \1\2/' | sh
My Question
I don't understand the sed code. I understand what the substitution
command
$ sed 's/something/something_else/'
means. And I understand regular expressions somewhat. But I don't
understand what's happening here:
\(.\).\(.*\)
or here:
& \1\2/
The former, to me, just looks like it means: "a single character,
followed by a single character, followed by any length sequence of a
single character"--but surely there's more to it than that. As far as
the latter part:
& \1\2/
I have no idea.
First, I should say that the easiest way to do this is to use the
prename or rename commands.
On Ubuntu, OSX (Homebrew package rename, MacPorts package p5-file-rename), or other systems with perl rename (prename):
rename s/0000/000/ F0000*
or on systems with rename from util-linux-ng, such as RHEL:
rename 0000 000 F0000*
That's a lot more understandable than the equivalent sed command.
But as for understanding the sed command, the sed manpage is helpful. If
you run man sed and search for & (using the / command to search),
you'll find it's a special character in s/foo/bar/ replacements.
s/regexp/replacement/
Attempt to match regexp against the pattern space. If success‐
ful, replace that portion matched with replacement. The
replacement may contain the special character & to refer to that
portion of the pattern space which matched, and the special
escapes \1 through \9 to refer to the corresponding matching
sub-expressions in the regexp.
Therefore, \(.\) matches the first character, which can be referenced by \1.
Then . matches the next character, which is always 0.
Then \(.*\) matches the rest of the filename, which can be referenced by \2.
The replacement string puts it all together using & (the original
filename) and \1\2 which is every part of the filename except the 2nd
character, which was a 0.
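You can watch sed build one of those mv commands by running just the first stage of the pipeline:
$ echo F00001-0708-RG-biasliuyda | sed 's/\(.\).\(.*\)/mv & \1\2/'
mv F00001-0708-RG-biasliuyda F0001-0708-RG-biasliuyda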
This is a pretty cryptic way to do this, IMHO. If for
some reason the rename command was not available and you wanted to use
sed to do the rename (or perhaps you were doing something too complex
for rename?), being more explicit in your regex would make it much
more readable. Perhaps something like:
ls F00001-0708-*|sed 's/F0000\(.*\)/mv & F000\1/' | sh
Being able to see what's actually changing in the
s/search/replacement/ makes it much more readable. Also it won't keep
sucking characters out of your filename if you accidentally run it
twice or something.
You've had your sed explanation; now you can use just the shell, no need for external commands:
for file in F0000*
do
echo mv "$file" "${file/#F0000/F000}"
# ${file/#F0000/F000} means replace the pattern that starts at beginning of string
done
I wrote a small post with examples on batch renaming using sed couple of years ago:
http://www.guyrutenberg.com/2009/01/12/batch-renaming-using-sed/
For example:
for i in *; do
mv "$i" "`echo $i | sed "s/regex/replace_text/"`";
done
If the regex contains groups (e.g. \(subregex\)), then you can use them in the replacement text as \1, \2, etc.
The easiest way would be:
for i in F00001*; do mv "$i" "${i/F00001/F0001}"; done
or, portably,
for i in F00001*; do mv "$i" "F0001${i#F00001}"; done
This replaces the F00001 prefix in the filenames with F0001.
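To check what either expansion produces before committing to the mv:
$ i=F00001-0708-RG-biasliuyda
$ echo "${i/F00001/F0001}" "F0001${i#F00001}"
F0001-0708-RG-biasliuyda F0001-0708-RG-biasliuyda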
credits to mahesh here: http://www.debian-administration.org/articles/150
The sed command
s/\(.\).\(.*\)/mv & \1\2/
means to replace:
\(.\).\(.*\)
with:
mv & \1\2
just like a regular sed command. However, the parentheses, the &, and the \1…\9 markers change it a little.
The search string matches (and remembers as pattern 1) the single character at the start, followed by a single character, followed by the rest of the string (remembered as pattern 2).
In the replacement string, you can refer to these matched patterns to use them as part of the replacement. You can also refer to the whole matched portion as &.
So what that sed command is doing is creating a mv command based on the original file (for the source) and character 1 and 3 onwards, effectively removing character 2 (for the destination). It will give you a series of lines along the following format:
mv F00001-0708-RG-biasliuyda F0001-0708-RG-biasliuyda
mv abcdef acdef
and so on.
Using perl rename (a must have in the toolbox):
rename -n 's/0000/000/' F0000*
Remove -n switch when the output looks good to rename for real.
There are other tools with the same name which may or may not be able to do this, so be careful.
The rename command that is part of the util-linux package won't.
If you run the following command (GNU)
$ rename
and the usage text mentions perlexpr, then this seems to be the right tool.
If not, to make it the default (usually already the case) on Debian and derivative like Ubuntu :
$ sudo apt install rename
$ sudo update-alternatives --set rename /usr/bin/file-rename
For archlinux:
pacman -S perl-rename
For RedHat-family distros:
yum install prename
The 'prename' package is in the EPEL repository.
For Gentoo:
emerge dev-perl/rename
For *BSD:
pkg install gprename
or p5-File-Rename
For Mac users:
brew install rename
If you don't have this command with another distro, search your package manager to install it or do it manually:
cpan -i File::Rename
Old standalone version can be found here
man rename
This tool was originally written by Larry Wall, Perl's dad.
The backslash-paren stuff means, "while matching the pattern, hold on to the stuff that matches in here." Later, on the replacement text side, you can get those remembered fragments back with "\1" (first parenthesized block), "\2" (second block), and so on.
If all you're really doing is removing the second character, regardless of what it is, you can do this:
s/.//2
but your command is building a mv command and piping it to the shell for execution.
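For instance (the trailing 2 tells sed to act on the second match of the pattern rather than the first):
$ echo F00001-0708-RG-biasliuyda | sed 's/.//2'
F0001-0708-RG-biasliuyda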
This is no more readable than your version:
find -type f | sed -n 'h;s/.//4;x;s/^/mv /;G;s/\n/ /g;p' | sh
The fourth character is removed because find is prepending each filename with "./".
Here's what I would do:
for file in *.[Jj][Pp][Gg] ;do
echo mv -vi \"$file\" `jhead $file|
grep Date|
cut -b 16-|
sed -e 's/:/-/g' -e 's/ /_/g' -e 's/$/.jpg/g'` ;
done
Then if that looks ok, add | sh to the end. So:
for file in *.[Jj][Pp][Gg] ;do
echo mv -vi \"$file\" `jhead $file|
grep Date|
cut -b 16-|
sed -e 's/:/-/g' -e 's/ /_/g' -e 's/$/.jpg/g'` ;
done | sh
for i in *; do mv "$i" "$(echo "$i" | sed 's/AAA/BBB/')"; done
The parentheses capture particular strings for use by the backslashed numbers.
ls F00001-0708-*|sed 's|^F0000\(.*\)|mv & F000\1|' | bash
Some examples that work for me:
$ tree -L 1 -F .
.
├── A.Show.2020.1400MB.txt
└── Some Show S01E01 the Loreming.txt
0 directories, 2 files
## remove "1400MB" (I: ignore case) ...
$ for f in *; do mv 2>/dev/null -v "$f" "`echo $f | sed -r 's/.[0-9]{1,}mb//I'`"; done;
renamed 'A.Show.2020.1400MB.txt' -> 'A.Show.2020.txt'
## change "S01E01 the" to "S01E01 The"
## \U& : change (here: regex-selected) text to uppercase;
## note also: no need here for `\1` in that regex expression
$ for f in *; do mv 2>/dev/null "$f" "`echo $f | sed -r "s/([0-9] [a-z])/\U&/"`"; done
$ tree -L 1 -F .
.
├── A.Show.2020.txt
└── Some Show S01E01 The Loreming.txt
0 directories, 2 files
$
2>/dev/null suppresses extraneous output (warnings ...)
reference [this thread]: https://stackoverflow.com/a/2372808/1904943
change case: https://www.networkworld.com/article/3529409/converting-between-uppercase-and-lowercase-on-the-linux-command-line.html

How do you send the output of ls to mv?

I know you can do it with a find, but is there a way to send the output of ls to mv in the unix command line?
ls is a tool used to DISPLAY some statistics about filenames in a directory.
It is not a tool you should use to enumerate them and pass them to another tool for using it there. Parsing ls is almost always the wrong thing to do, and it is bugged in many ways.
For a detailed document on the badness of parsing ls, which you should really go read, check out: http://mywiki.wooledge.org/ParsingLs
Instead, you should use either globs or find, depending on what exactly you're trying to achieve:
mv * /foo
find . -exec mv {} /foo \;
The main source of badness of parsing ls is that ls dumps all filenames into a single string of output, and there is no way to tell the filenames apart from there. For all you know, the entire ls output could be one single filename!
The secondary source of badness of parsing ls comes from the broken way in which half the world uses bash. They think for magically does what they would like it to do when they do something like:
for file in `ls` # Never do this!
for file in $(ls) # Exactly the same thing.
for is a bash builtin that iterates over arguments. And $(ls) takes the output of ls and cuts it apart into arguments wherever there are spaces, newlines or tabs. Which basically means you're iterating over words, not over filenames. Even worse, you're asking bash to take each of those mutilated filename words and treat them as globs that may match filenames in the current directory. So if you have a filename which contains a word which happens to be a glob that matches other filenames in the current directory, that word will disappear and all those matching filenames will appear in its stead!
mv `ls` /foo # Exact same badness as the ''for'' thing.
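A two-line demonstration of the word splitting (file name invented):
$ touch 'foo bar'
$ for f in $(ls); do echo "[$f]"; done
[foo]
[bar]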
One way is with backticks:
mv `ls *.boo` subdir
Edit: however, this is fragile and not recommended -- see @lhunath's answer for detailed explanations and recommendations.
None of the answers so far are safe for filenames with spaces in them. Try this:
for i in *; do mv "$i" some_dir/; done
You can of course use any glob pattern you like in place of *.
Not exactly sure what you're trying to achieve here, but here's one possibility:
The "xargs" part is the important piece everything else is just setup. The effect of this is to take everything that "ls" outputs and add a ".txt" extension to it.
$ mkdir xxx #
$ cd xxx
$ touch a b c x y z
$ ls
a b c x y z
$ ls | xargs -Ifile mv file file.txt
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
Something like this could also be achieved by:
$ touch a b c x y z
$ for i in `ls`;do mv $i ${i}.txt; done
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
I sort of like the second way better. I can NEVER remember how xargs works without reading the man page or going to my "cute tricks" file.
Hope this helps.
Check out find -exec {} as it might be a better option than ls but it depends on what you're trying to achieve.
/bin/ls | tr '\n' '\0' | xargs -0 -i% mv % /path/to/destdir/
"Useless use of ls", but should work. By specifying the full path to ls(1) you avoid clashes with aliasing of ls(1) mentioned in some of the previous posts. The tr(1) command together with "xargs -0" makes the command work with filenames containing (ugh) whitespace. It won't work with filenames containing newlines, but having filenames like that in the file system is to ask for trouble, so it probably won't be a big problem. But filenames with newlines could exist, so a better solution would be to use "find -print0":
find /path/to/srcdir -type f -print0 | xargs -0 -i% mv % dest/
You shouldn't use the output of ls as the input of another command. Files with spaces in their names are difficult, as is the inclusion of ANSI escape sequences, if you have, for example:
alias ls='ls --color=always'
Always use find or xargs (with -0) or globbing.
Also, you didn't say whether you want to move files or rename them. Each would be handled differently.
edit: added -0 to xargs (thanks for the reminder)
Backticks work well, as others have suggested. See xargs, too. And for really complicated stuff, pipe it into sed, make the list of commands you want, then run it again with the output of sed piped into sh.
Here's an example with find, but it works fine with ls, too:
http://github.com/DonBranson/scripts/blob/f09d24629ab6eb3ce509d4d3078818430306b063/jarfinder.sh
#!/bin/bash
for i in $( ls * );
do
    mv "$i" /backup/"$i"
done
Otherwise, it's the find solution by sybreon, and as suggested NOT the accepted mv `ls` solution.
Just use find or your shell's globbing!
find . -mindepth 1 -maxdepth 1 -exec mv {} /tmp/blah/ \;
..or..
mv * /tmp/blah/
You don't have to worry about colour in the ls output, or other piping strangeness - Linux allows basically any character in a filename except a null byte and the slash. For example:
$ touch "blah\new|
> "
$ ls | xargs file
blahnew|: cannot open `blahnew|' (No such file or directory)
..but find works perfectly:
$ find . -exec file {} \;
./blah\new|
: empty
So this answer doesn't send the output of ls to mv, but as @lhunath explained, using ls is almost always the wrong tool for the job. Use shell globs or a find command.
For more complicated cases (often in a script), using bash arrays to build up the argument list from shell globs or find commands can be very useful. One can create an array and push to it with the appropriate conditional logic. This also handles spaces in filenames properly.
For example:
myargs=()
# don't push if the glob does not match anything
shopt -s nullglob
myargs+=(myfiles*)
To push files matching a find to the array: https://stackoverflow.com/a/23357277/430128.
The last argument should be the target location:
myargs+=("Some target directory")
Use myargs in the invocation of a command like mv:
mv "${myargs[#]}"
Note the quoting of the array myargs to pass array elements with spaces correctly.
You surround the ls with back quotes and put it after the mv, so like this...
mv `ls` somewhere/
But keep in mind that if any of your file names have spaces in them it won't work very well.
Also it would be simpler to just do something like this: mv filepattern* somewhere/
