How do you send the output of ls to mv? - bash

I know you can do it with a find, but is there a way to send the output of ls to mv in the unix command line?

ls is a tool used to DISPLAY some statistics about filenames in a directory.
It is not a tool for enumerating filenames and passing them along to another tool. Parsing ls is almost always the wrong thing to do, and it breaks in many ways.
For a detailed document on the badness of parsing ls, which you should really go read, check out: http://mywiki.wooledge.org/ParsingLs
Instead, you should use either globs or find, depending on what exactly you're trying to achieve:
mv * /foo
find . -exec mv {} /foo \;
The main source of badness of parsing ls is that ls dumps all filenames into a single string of output, and there is no way to tell the filenames apart from there. For all you know, the entire ls output could be one single filename!
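A quick way to see the problem for yourself, in an empty scratch directory (assuming GNU ls; the second filename here deliberately contains a newline):
$ touch 'a.txt' $'b\nc.txt'
$ ls | cat
a.txt
b
c.txt
Two files went in, but the piped output looks like three perfectly plausible filenames.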
The secondary source of badness of parsing ls comes from the broken way in which half the world uses bash. They think for magically does what they would like it to do when they do something like:
for file in `ls` # Never do this!
for file in $(ls) # Exactly the same thing.
for is a bash builtin that iterates over arguments. And $(ls) takes the output of ls and cuts it apart into arguments wherever there are spaces, newlines or tabs. Which basically means, you're iterating over words, not over filenames. Even worse, you're asking bash to take each of those mutilated filename words and then treat them as globs that may match filenames in the current directory. So if you have a filename which contains a word which happens to be a glob that matches other filenames in the current directory, that word will disappear and all those matching filenames will appear in its stead!
mv `ls` /foo # Exact same badness as the for loop above.
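To see the word splitting for yourself (scratch directory again; the filenames are made up):
$ touch 'one file.txt' 'two file.txt'
$ for f in $(ls); do echo "<$f>"; done
<one>
<file.txt>
<two>
<file.txt>
Four mangled words instead of two filenames, and if any of those words had happened to contain glob characters, they would additionally have been expanded against the current directory.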

One way is with backticks:
mv `ls *.boo` subdir
Edit: however, this is fragile and not recommended -- see @lhunath's answer above for detailed explanations and recommendations.

None of the answers so far are safe for filenames with spaces in them. Try this:
for i in *; do mv "$i" some_dir/; done
You can of course use any glob pattern you like in place of *.
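For instance, to move only text files (some_dir is the directory from the example above; the -- guards against names that start with a dash):
for i in *.txt; do mv -- "$i" some_dir/; done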

Not exactly sure what you're trying to achieve here, but here's one possibility:
The "xargs" part is the important piece everything else is just setup. The effect of this is to take everything that "ls" outputs and add a ".txt" extension to it.
$ mkdir xxx #
$ cd xxx
$ touch a b c x y z
$ ls
a b c x y z
$ ls | xargs -Ifile mv file file.txt
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
Something like this could also be achieved by:
$ touch a b c x y z
$ for i in `ls`;do mv $i ${i}.txt; done
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
$
I sort of like the second way better. I can NEVER remember how xargs works without reading the man page or going to my "cute tricks" file.
Hope this helps.
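If you want the same rename without parsing ls at all, a null-delimited variant along these lines should also work (assuming GNU find and xargs):
$ find . -maxdepth 1 -type f -print0 | xargs -0 -I{} mv {} {}.txt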

Check out find -exec {}, as it might be a better option than ls, but it depends on what you're trying to achieve.

/bin/ls | tr '\n' '\0' | xargs -0 -i% mv % /path/to/destdir/
"Useless use of ls", but should work. By specifying the full path to ls(1) you avoid clashes with aliasing of ls(1) mentioned in some of the previous posts. The tr(1) command together with "xargs -0" makes the command work with filenames containing (ugh) whitespace. It won't work with filenames containing newlines, but having filenames like that in the file system is to ask for trouble, so it probably won't be a big problem. But filenames with newlines could exist, so a better solution would be to use "find -print0":
find /path/to/srcdir -type f -print0 | xargs -0 -i% mv % dest/

You shouldn't use the output of ls as the input to another command. Filenames with spaces are hard to handle correctly, and so are the ANSI escape sequences that get mixed in if you have, for example:
alias ls='ls --color=always'
Always use find or xargs (with -0) or globbing.
Also, you didn't say whether you want to move files or rename them. Each would be handled differently.
edit: added -0 to xargs (thanks for the reminder)
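For example (the destination directory and the .bak suffix are made up), moving is just a glob, while renaming is easiest with a loop:
mv -- *.txt /some/dest/                                # move
for f in *.txt; do mv -- "$f" "${f%.txt}.bak"; done    # rename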

Backticks work well, as others have suggested. See xargs, too. And for really complicated stuff, pipe it into sed, make the list of commands you want, then run it again with the output of sed piped into sh.
Here's an example with find, but it works fine with ls, too:
http://github.com/DonBranson/scripts/blob/f09d24629ab6eb3ce509d4d3078818430306b063/jarfinder.sh

#!/bin/bash
for i in $( ls * );
do
mv "$i" /backup/"$i"
done
Otherwise, go with the find solution by sybreon, and as suggested NOT the accepted mv `ls` solution.

Just use find or your shells globing!
find . -mindepth 1 -maxdepth 1 -exec mv {} /tmp/blah/ \;
..or..
mv * /tmp/blah/
You don't have to worry about colour in the ls output, or other piping strangeness - Linux allows basically any character in a filename except the null byte and the slash. For example:
$ touch "blah\new|
> "
$ ls | xargs file
blahnew|: cannot open `blahnew|' (No such file or directory)
..but find works perfectly:
$ find . -exec file {} \;
./blah\new|
: empty

So this answer doesn't send the output of ls to mv, but as @lhunath explained, using ls is almost always the wrong tool for the job. Use shell globs or a find command.
For more complicated cases (often in a script), using bash arrays to build up the argument list from shell globs or find commands can be very useful. One can create an array and push to it with the appropriate conditional logic. This also handles spaces in filenames properly.
For example:
myargs=()
# don't push if the glob does not match anything
shopt -s nullglob
myargs+=(myfiles*)
To push files matching a find to the array: https://stackoverflow.com/a/23357277/430128.
The last argument should be the target location:
myargs+=("Some target directory")
Use myargs in the invocation of a command like mv:
mv "${myargs[#]}"
Note the quoting of the array myargs to pass array elements with spaces correctly.
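Putting the pieces together, a rough sketch (the find path and pattern here, ./old-dir and *.log, are made-up placeholders, as is the target name):
shopt -s nullglob
myargs=()
myargs+=(myfiles*)                 # from a glob
# from a find, NUL-delimited so spaces and newlines survive
while IFS= read -r -d '' f; do
myargs+=("$f")
done < <(find ./old-dir -type f -name '*.log' -print0)
myargs+=("Some target directory")  # target goes last
mv -- "${myargs[@]}"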

You surround the ls with back quotes and put it after the mv, so like this...
mv `ls` somewhere/
But keep in mind that if any of your file names have spaces in them it won't work very well.
Also it would be simpler to just do something like this: mv filepattern* somewhere/

Related

Shell script to delete files whose names are not in a text file

I have a txt file which contains a list of file names
Example:
10.jpg
11.jpg
12.jpeg
...
The files in this list should be protected from deletion, and all other files in the folder should be deleted.
So I want the opposite logic of this question: Shell command/script to delete files whose names are in a text file
How do I do that?
Use extglob and Bash extended pattern matching !(pattern-list):
!(pattern-list)
Matches anything except one of the given patterns
where a pattern-list is a list of one or more patterns separated by a |.
extglob
If set, the extended pattern matching features described above are enabled.
So for example:
$ ls
10.jpg 11.jpg 12.jpeg 13.jpg 14.jpg 15.jpg 16.jpg a.txt
$ shopt -s extglob
$ shopt | grep extglob
extglob on
$ cat a.txt
10.jpg
11.jpg
12.jpeg
$ tr '\n' '|' < a.txt
10.jpg|11.jpg|12.jpeg|
$ ls !(`tr '\n' '|' < a.txt`)
13.jpg 14.jpg 15.jpg 16.jpg a.txt
According to the example, the files that would be deleted are 13.jpg 14.jpg 15.jpg 16.jpg a.txt, i.e. exactly the names not listed in a.txt.
So with extglob and !(pattern-list), we can obtain exactly the files that are not named in the text file.
Additionally, if you want to exclude the entries starting with ., then you could switch on the dotglob option with shopt -s dotglob.
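Putting it together, the deletion itself could then be a sketch along these lines, mirroring the ls example above (-i is there as a safety net):
shopt -s extglob
rm -i !(`tr '\n' '|' < a.txt`)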
This is one way that will work with bash GLOBIGNORE:
$ cat file2
10.jpg
11.jpg
12.jpg
$ ls *.jpg
10.jpg 11.jpg 12.jpg 13.jpg
$ echo $GLOBIGNORE
$ GLOBIGNORE=$(tr '\n' ':' <file2 )
$ echo $GLOBIGNORE
10.jpg:11.jpg:12.jpg:
$ ls *.jpg
13.jpg
As is obvious, globbing ignores whatever (file, pattern, etc.) is included in the GLOBIGNORE bash variable.
This is why the last ls reports only file 13.jpg, since files 10, 11 and 12.jpg are ignored.
As a result using rm *.jpg will remove only 13.jpg in my system:
$ rm -iv *.jpg
rm: remove regular empty file '13.jpg'? y
removed '13.jpg'
When you are done, you can just set GLOBIGNORE to null:
$ GLOBIGNORE=
It is worth mentioning that in GLOBIGNORE you can also use glob patterns instead of single filenames, like *.jpg or my*.mp3, etc.
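For instance (the patterns here are made up):
GLOBIGNORE='*.jpg:my*.mp3'
ls *          # now lists everything except .jpg files and files matching my*.mp3
GLOBIGNORE=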
Alternative:
We can use programming techniques (grep, awk, etc.) to compare the file names listed in the ignore file with the files under the current directory:
$ awk 'NR==FNR{f[$0];next}(!($0 in f))' file2 <(find . -type f -name '*.jpg' -printf '%f\n')
13.jpg
$ rm -iv "$(awk 'NR==FNR{f[$0];next}(!($0 in f))' file2 <(find . -type f -name '*.jpg' -printf '%f\n'))"
rm: remove regular empty file '13.jpg'? y
removed '13.jpg'
Note: This also makes use of bash process substitution, and will break if filenames include new lines.
Another alternative to George Vasiliou's answer would be to read the file with the names of the files to keep using the Bash builtin mapfile and then check for each of the files to be deleted whether it is in that list.
#! /bin/bash -eu
mapfile -t keepthose <keepme.txt
declare -a deletethose
for f in "$#"
do
keep=0
for not in "${keepthose[#]}"
do
[ "${not}" = "${f}" ] && keep=1 || :
done
[ ${keep} -gt 0 ] || deletethose+=("${f}")
done
# Remove the 'echo' if you really want to delete files.
echo rm -f "${deletethose[#]}"
The -t option causes mapfile to trim the trailing newline character from the lines it reads from the file. No other white-space will be trimmed, though. This might be what you want if your file names actually contain white-space but it could also cause subtle surprises if somebody accidentally puts a space before or after the name of an important file they want to keep.
Note that I'm first building a list of the files that should be deleted and then delete them all at once rather than deleting each file individually. This saves some sub-process invocations.
The lookup in the list, as coded above, has linear complexity, which gives the overall script quadratic complexity (precisely, N × M where N is the number of command-line arguments and M the number of entries in the keepme.txt file). If you only have a few dozen files, this should be fine. If you have many more, note that Bash 4 associative arrays (declare -A) accept arbitrary strings as keys, so the names to keep can be loaded into one for constant-time membership tests; for really large file sets, a more powerful language like Python might still be worth consideration.
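For what it's worth, a minimal sketch of that associative-array variant (assumes Bash 4+ and the same keepme.txt and positional arguments as above):
declare -A keep
while IFS= read -r name; do
keep["$name"]=1
done < keepme.txt
deletethose=()
for f in "$@"; do
# add to the delete list only if the name is not a key in 'keep'
[ -n "${keep[$f]+x}" ] || deletethose+=("$f")
done
echo rm -f -- "${deletethose[@]}"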
I would also like to mention that the above example simply compares strings. It will not realize that important.txt and ./important.txt are the same file and hence delete the file. It would be more robust to convert the file name to a canonical path using readlink -f before comparing it.
Furthermore, your users might want to be able to put globbing patterns (like important.*) into the list of files to keep. If you want to handle those, extra logic would be required.
Overall, specifying which files not to delete seems a little dangerous, since any mistake errs on the side of deleting too much.
Provided there are no spaces or special escaped chars in the file names, either of these (or variations of these) would work:
rm -v $(stat -c %n * | sort - excluded_file_list | uniq -u)
stat -c %n * | grep -vf excluded_file_list | xargs rm -v

Why am I getting some extra, weird characters when making a file from grep output?

I am doing a very basic command that never gave me trouble in the past, but is inexplicably returning undesired characters now.
I am in BASH on linux, and simply want to search through a directory and make a file containing filenames that match a pattern:
ls | grep "*.file_ID" > my_list.txt
...This works fine, and if I cat the data:
cat my_list.txt
seriesA.file_ID
seriesB.file_ID
seriesC.file_ID
However, when I try to feed this file into downstream processes, I keep getting weird errors, as if the file isn't properly formatted as a list of file names. When I open the file in vim to reveal any unnecessary characters, I find the file actually looks like this:
vi my_list.txt
^[[00mseriesA.file_ID^[[00m
^[[00mseriesB.file_ID^[[00m
^[[00mseriesC.file_ID^[[00m
For some reason, every line is started and ended with the characters ^[[00m. If I delete these characters, all of the downstream processes work fine. However, I need to have my scripts automatically make such a file list, so I can't keep going in and manually deleting these chars.
Does anyone know what is producing the ^[[00m characters? I don't have any idea where they are coming from, and need to be able to generate files without them.
Thanks!
Probably your GREP_OPTIONS environment variable contains --color=always, which causes the output to be stuffed with control characters, even when piped to a file.
Use --color=auto instead.
http://www.gnu.org/software/grep/manual/html_node/Environment-Variables.html
Even better, don't use grep:
ls *.file_ID > my_list.txt
grep is usually set up as grep --color=auto, which only emits color codes when writing to a terminal.
If your output is piped into a file (a csv, say) and it comes out looking like ^[[00...^[[00m, make sure you run it as:
grep --color=auto "your regex" > example.csv
If you don't want to have to type "--color=auto" every time, you can make it the default by typing this in the terminal:
export GREP_OPTIONS='--color=auto'
more info:
https://linuxcommando.blogspot.com/2007/10/grep-with-color-output.html
Don't use ls:
printf "%s\n" *.file_ID > my_list.txt
This should take care of it (assuming GNU find and no directory traversing):
find . -maxdepth 1 -type f -name "*.file_ID" -printf "%f\n" > my_list.txt
Example:
~> ls *file_ID*
a.file_ID b.file_ID c.file_ID
~> find . -maxdepth 1 -type f -name "*.file_ID" -printf "%f\n" > my_list.txt
~> cat my_list.txt
a.file_ID
b.file_ID
c.file_ID
As for the "^[[00m" characters, check your ls options:
~> alias -p | grep "ls="
You may get something like:
alias ls='/bin/ls $LS_OPTIONS'
If so, check env for this:
~> env | grep LS_OP
LS_OPTIONS=-N --color=tty -T 0
The character string you're referencing is used to turn off colors, so your shell has likely been set up to show colors. Removing and/or changing the ls alias should resolve it.
The weird characters such as ^[[00m are escape characters for colorizing the output. Color output for ls is most likely enabled through an alias in your environment.
To avoid getting these color characters, you can try disabling the ls alias temporarily with a backslash:
\ls *.txt
Or you can use printf command instead.
printf "%s\n" *.txt

Removing last n characters from Unix Filename before the extension

I have a bunch of files in a Unix directory:
test_XXXXX.txt
best_YYY.txt
nest_ZZZZZZZZZ.txt
I need to rename these files as
test.txt
best.txt
nest.txt
I am using ksh on AIX. Please let me know how I can accomplish the above using a single command.
Thanks,
In this case, it seems you have an _ to start every section you want to remove. If that's the case, then this ought to work:
for f in *.txt
do
g="${f%%_*}.txt"
echo mv "${f}" "${g}"
done
Remove the echo if the output seems correct, or replace the last line with done | ksh.
If the files aren't all .txt files, this is a little more general:
for f in *
do
ext="${f##*.}"
g="${f%%_*}.${ext}"
echo mv "${f}" "${g}"
done
If this is a one-time (or infrequent) occasion, I would create a script with:
$ ls > rename.sh
$ vi rename.sh
:%s/\(.*\)/mv \1 \1/
(edit manually to remove all the XXXXX from the second file names)
:x
$ source rename.sh
If this need occurs frequently, I would need more insight into what XXXXX, YYY, and ZZZZZZZZZZZ are.
Addendum
Modify this to your liking:
ls | sed "{s/\(.*\)\(............\)\.txt$/mv \1\2.txt \1.txt/}" | sh
It transforms filenames by omitting 12 characters before .txt and passing the resulting mv command to a shell.
Beware: if a filename does not match the pattern, the shell ends up executing the filename itself rather than a mv command. I omitted a way to select only matching filenames; one possibility is sketched below.
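One way to emit mv commands only for names that actually match is to have sed print nothing by default (-n) and print only substituted lines (the trailing p); here the fixed 12-character count is replaced by "everything after the last underscore", which is just one possible interpretation of the pattern:
ls | sed -n 's/\(.*\)_[^_]*\.txt$/mv & \1.txt/p' | sh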

List all files that do not match pattern using ls [duplicate]

Possible Duplicate:
How can I use inverse or negative wildcards when pattern matching in a unix/linux shell?
I've read the man page for ls, and I can't find the option to list all files that do not match the file selector. Do you know how to perform this operation?
For example:
let's say my directory is this:
> ls
a.txt b.mkv c.txt d.mp3 e.flv
Now I would like to do something that does the following
> ls -[SOME_OPTION] *.txt
b.mkv d.mp3 e.flv
Is there such an option?
If not, is there a way to pipe the output of ls to another function (possibly sed) that shows only the ones that I would like?
I don't know exactly how to do this, but I'm imagining it would be something like:
> ls | sed [SOMETHING]
I really should learn how to use sed, awk, and grep, but I keep getting stuck at understanding how to write the regexes. I understand the concept of regular expressions clearly, but I get confused between regexes that use different syntax.
Any help would be much appreciated!
EDIT:
I forgot to mention that I am running Mac OS X, so the functions may be slightly different from the ones discussed in other answers for the unix/linux shell (hence some of my confusion with sed, awk, and grep).
This may help you (GNU ls):
ls --ignore='*.txt'
It will not display the .txt files in your directory.
Maybe this command will help you:
find ./ -maxdepth 1 ! -path "*txt"
ls | grep -v '\.txt$'
Does this help?
ls just lists whatever arguments it is presented with. *.txt gets expanded to a.txt c.txt before ls sees it; try echo *.txt.
To do what you ask with sed you can delete the pattern from its input, for example:
ls | sed '/\.txt$/d'
Would delete all lines ending with .txt.
With bash and zsh you can have the shell do the inverted expansion, with bash it would be:
ls !(*.txt)
zsh:
ls *~*.txt
Note that both shells need the extended glob option to be enabled, shopt -s extglob with bash and setopt extendedglob with zsh.
One way using find:
find . -maxdepth 1 -type f -not -name "*.txt" -printf "%f\n"

How can I process a list of files that includes spaces in its names in Unix?

I'm trying to list the files in a directory and do something to them in the Mac OS X prompt.
It should go like this: for f in $(ls -1); do echo $f; done
If I have files without spaces in their names (fileA.txt, fileB.txt), the echo works fine.
If the files include spaces in their names ("file A.txt", "file B.txt"), I get 4 strings (file, A.txt, file, B.txt).
I've tried quoting the listing command, but it only changed the problem.
If I do this: for f in "$(ls -1)"; do echo "$f"; done
I get: file A.txt\nfile B.txt
(It displays correctly, but it is a single string, and I need the 2 lines separated.)
Step away from ls if at all possible. Use find from the findutils package.
find /target/path -type f -print0 | xargs -0 your_command_here
-print0 will cause find to output the names separated by NUL characters (ASCII zero). The -0 argument to xargs tells it to expect the arguments separated by NUL characters too, so everything will work just fine.
Replace /target/path with the path under which your files are located.
-type f will only locate files. Use -type d for directories, or omit altogether to get both.
Replace your_command_here with the command you'll use to process the file names. (Note: If you run this from a shell using echo for your_command_here you'll get everything on one line - don't get confused by that shell artifact, xargs will do the expected right thing anyway.)
Edit: Alternatively (or if you don't have xargs), you can use the much less efficient
find /target/path -type f -exec your_command_here \{\} \;
\{\} \; is the escape for {} ;. Here {} is the placeholder for the currently processed file and ; terminates the -exec clause. find will then invoke your_command_here with {} replaced by the file name, and since your_command_here is launched by find and not by the shell, the spaces won't matter.
The second version will be less efficient since find will launch a new process for each and every file found. xargs is smart enough to pipe the commands to a newly launched process if it can figure it's safe to do so. Prefer the xargs version if you have the choice.
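A middle ground worth knowing: find can also terminate -exec with + instead of \;, which batches as many file names into each invocation as will fit, much like xargs does:
find /target/path -type f -exec your_command_here {} +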
for f in *; do echo "$f"; done
should do what you want. Why are you using ls instead of * ?
In general, dealing with spaces in shell is a PITA. Take a look at the $IFS variable, or better yet at Perl, Ruby, Python, etc.
Here's an answer using $IFS as discussed by derobert
http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
You can pipe the filenames into read. For example, to cat all files in the directory:
ls -1 | while read FILENAME; do cat "$FILENAME"; done
This means you can still use ls, as you have in your question, or any other command that produces $IFS delimited output.
The while loop makes it much easier to do several things to the argument, and makes complex processing more readable in my opinion. A contrived example:
ls -1 | while read FILE
do
echo 1: "$FILE"
echo 2: "$FILE"
done
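A slightly more defensive variant of the same loop: IFS= keeps read from trimming leading and trailing whitespace, and -r keeps it from interpreting backslashes (filenames containing newlines still break this, as they break anything based on ls):
ls -1 | while IFS= read -r FILE
do
echo "$FILE"
done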
Look at the --quoting-style option.
For instance, --quoting-style=c would produce:
$ ls --quoting-style=c
"file1" "file2" "dir one"
Check out the manpage for xargs:
it works like this:
ls -1 /tmp/*.jpeg | xargs rm
