Make xargs execute the command once for each line of input - xargs

How can I make xargs execute the command exactly once for each line of input given?
Its default behavior is to chunk the lines and execute the command once, passing multiple lines to each instance.
From http://en.wikipedia.org/wiki/Xargs:
find /path -type f -print0 | xargs -0 rm
In this example, find feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. This is more efficient than this functionally equivalent version:
find /path -type f -exec rm '{}' \;
I know that find has the "exec" flag. I am just quoting an illustrative example from another resource.

The following will only work if you do not have spaces in your input:
xargs -L 1
xargs --max-lines=1 # synonym for the -L option
from the man page:
-L max-lines
Use at most max-lines nonblank input lines per command line.
Trailing blanks cause an input line to be logically continued on
the next input line. Implies -x.

It seems to me all existing answers on this page are wrong, including the one marked as correct. That stems from the fact that the question is ambiguously worded.
Summary: If you want to execute the command "exactly once for each line of input," passing the entire line (without newline) to the command as a single argument, then this is the best UNIX-compatible way to do it:
... | tr '\n' '\0' | xargs -0 -n1 ...
If you are using GNU xargs and don't need to be compatible with all other UNIXes (FreeBSD, Mac OS X, etc.), then you can use the GNU-specific option -d:
... | xargs -d\\n -n1 ...
Now for the long explanation…
There are two issues to take into account when using xargs:
how does it split the input into "arguments"; and
how many arguments to pass the child command at a time.
To test xargs' behavior, we need a utility that shows how many times it's being executed and with how many arguments. I don't know if there is a standard utility to do that, but we can code it quite easily in bash:
#!/bin/bash
echo -n "-> "; for a in "$@"; do echo -n "\"$a\" "; done; echo
Assuming you save it as show in your current directory and make it executable, here is how it works:
$ ./show one two 'three and four'
-> "one" "two" "three and four"
Now, if the original question is really about point 2. above (as I think it is, after reading it a few times over) and it is to be read like this (changes in bold):
How can I make xargs execute the command exactly once for each argument of input given? Its default behavior is to chunk the input into arguments and execute the command as few times as possible, passing multiple arguments to each instance.
then the answer is -n 1.
Let's compare xargs' default behavior, which splits the input around whitespace and calls the command as few times as possible:
$ echo one two 'three and four' | xargs ./show
-> "one" "two" "three" "and" "four"
and its behavior with -n 1:
$ echo one two 'three and four' | xargs -n 1 ./show
-> "one"
-> "two"
-> "three"
-> "and"
-> "four"
If, on the other hand, the original question was about point 1. above (input splitting) and it is to be read like this (many people coming here seem to think that's the case, or are confusing the two issues):
How can I make xargs execute the command with exactly one argument for each line of input given? Its default behavior is to chunk the lines around whitespace.
then the answer is more subtle.
One would think that -L 1 could be of help, but it turns out it doesn't change argument parsing. It only executes the command once for each input line, with as many arguments as there were on that input line:
$ echo $'one\ntwo\nthree and four' | xargs -L 1 ./show
-> "one"
-> "two"
-> "three" "and" "four"
Not only that, but if a line ends with whitespace, it is appended to the next:
$ echo $'one \ntwo\nthree and four' | xargs -L 1 ./show
-> "one" "two"
-> "three" "and" "four"
Clearly, -L is not about changing the way xargs splits the input into arguments.
The only argument that does so in a cross-platform fashion (excluding GNU extensions) is -0, which splits the input around NUL bytes.
Then, it's just a matter of translating newlines to NUL with the help of tr:
$ echo $'one \ntwo\nthree and four' | tr '\n' '\0' | xargs -0 ./show
-> "one " "two" "three and four"
Now the argument parsing looks all right, including the trailing whitespace.
Finally, if you combine this technique with -n 1, you get exactly one command execution per input line, whatever input you have, which may be yet another way to look at the original question (possibly the most intuitive, given the title):
$ echo $'one \ntwo\nthree and four' | tr '\n' '\0' | xargs -0 -n1 ./show
-> "one "
-> "two"
-> "three and four"
As mentioned above, if you are using GNU xargs you can replace the tr with the GNU-specific option -d:
$ echo $'one \ntwo\nthree and four' | xargs -d\\n -n1 ./show
-> "one "
-> "two"
-> "three and four"

If you want to run the command for every line (i.e. result) coming from find, then what do you need the xargs for?
Try:
find path -type f -exec your-command {} \;
where the literal {} gets substituted by the filename and the literal \; is needed for find to know that the custom command ends there.
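As an aside, POSIX find can also terminate -exec with + instead of \;, which batches many filenames into each invocation of the command, much like xargs does (your-command must then accept multiple filename arguments, as rm does):
find path -type f -exec your-command {} +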
EDIT:
(after the edit of your question clarifying that you know about -exec)
From man xargs:
-L max-lines
Use at most max-lines nonblank input lines per command line. Trailing
blanks cause an input line to be logically continued on the next input line.
Implies -x.
Note that filenames ending in blanks would cause you trouble if you use xargs:
$ mkdir /tmp/bax; cd /tmp/bax
$ touch a\ b c\ c
$ find . -type f -print | xargs -L1 wc -l
0 ./c
0 ./c
0 total
0 ./b
wc: ./a: No such file or directory
So if you don't care about the -exec option, you'd better use -print0 and -0:
$ find . -type f -print0 | xargs -0L1 wc -l
0 ./c
0 ./c
0 ./b
0 ./a

How can I make xargs execute the command exactly once for each line of input given?
-L 1 is the simple solution, but it does not work if any of the filenames contain spaces. This is the point of find's -print0 argument: it separates the arguments with the '\0' character instead of whitespace. Here's an example:
echo "file with space.txt" | xargs -L 1 ls
ls: file: No such file or directory
ls: with: No such file or directory
ls: space.txt: No such file or directory
A better solution is to use tr to convert newlines to null (\0) characters, and then use the xargs -0 argument. Here's an example:
echo "file with space.txt" | tr '\n' '\0' | xargs -0 ls
file with space.txt
If you then need to limit the number of calls you can use the -n 1 argument to make one call to the program for each input:
echo "file with space.txt" | tr '\n' '\0' | xargs -0 -n 1 ls
This also allows you to filter the output of find before converting the line breaks into NULs.
find . -name \*.xml | grep -v /target/ | tr '\n' '\0' | xargs -0 tar -cf xml.tar
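One caveat with the tar example: if the file list is long enough for xargs to split it over several tar invocations, each -cf will overwrite the archive created by the previous one. A sketch of a workaround, assuming GNU tar, which can read the NUL-delimited list itself in a single invocation:
find . -name \*.xml | grep -v /target/ | tr '\n' '\0' | tar -cf xml.tar --null -T -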

These two ways also work, and will work for other commands that are not using find!
xargs -I '{}' rm '{}'
xargs -i rm '{}'
example use case:
find . -name "*.pyc" | xargs -i rm '{}'
will delete all pyc files under this directory even if the pyc files contain spaces.

Another alternative...
find /path -type f | while read ln; do echo "processing $ln"; done
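A more robust sketch of the same loop, assuming bash and a find that supports -print0, which also copes with leading/trailing whitespace, backslashes and even newlines in filenames:
find /path -type f -print0 | while IFS= read -r -d '' ln; do echo "processing $ln"; done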

find path -type f | xargs -L1 command
is all you need.

The following command will find all the files (-type f) in /path and then copy them using cp to the current folder. Note the use of -I % to specify a placeholder character in the cp command line so that arguments can be placed after the file name.
find /path -type f -print0 | xargs -0 -I % cp % .
Tested with xargs (GNU findutils) 4.4.0

You can limit the number of lines, or arguments (if there are spaces between each argument) using the --max-lines or --max-args flags, respectively.
-L max-lines
Use at most max-lines nonblank input lines per command line. Trailing blanks cause an input line to be logically continued on the next input
line. Implies -x.
--max-lines[=max-lines], -l[max-lines]
Synonym for the -L option. Unlike -L, the max-lines argument is optional. If max-args is not specified, it defaults to one. The -l option
is deprecated since the POSIX standard specifies -L instead.
--max-args=max-args, -n max-args
Use at most max-args arguments per command line. Fewer than max-args arguments will be used if the size (see the -s option) is exceeded,
unless the -x option is given, in which case xargs will exit.
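A quick way to see the difference between the two options: -L counts input lines (each line is still split into words), while -n counts individual arguments.
$ printf 'a b\nc\nd e f\n' | xargs -L 2 echo
a b c
d e f
$ printf 'a b\nc\nd e f\n' | xargs -n 2 echo
a b
c d
e f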

@Draemon's answer seems to be right with "-0", even with spaces in the file names.
I was trying the xargs command and found that "-0" works perfectly with "-L". Even spaces are handled correctly (if the input was null-terminated). The following is an example:
#touch "file with space"
#touch "file1"
#touch "file2"
The following will split the nulls and execute the command on each argument in the list :
#find . -name 'file*' -print0 | xargs -0 -L1
./file with space
./file1
./file2
so -L1, when used with "-0", will execute the command once for each null-terminated argument. To see the difference try:
#find . -name 'file*' -print0 | xargs -0 | xargs -L1
./file with space ./file1 ./file2
even this will execute once :
#find . -name 'file*' -print0 | xargs -0 | xargs -0 -L1
./file with space ./file1 ./file2
The command will execute only once, as "-L" doesn't split on the null byte here. You need to provide both "-0" and "-L" for this to work.

It seems I don't have enough reputation to add a comment to Tobia's answer above, so I am adding this "answer" to help those of us wanting to experiment with xargs the same way on Windows platforms.
Here is a Windows batch file that does the same thing as Tobia's quickly coded "show" script:
@echo off
REM
REM cool trick of using "set" to echo without new line
REM (from: http://www.psteiner.com/2012/05/windows-batch-echo-without-new-line.html)
REM
if "%~1" == "" (
exit /b
)
<nul set /p=Args: "%~1"
shift
:start
if not "%~1" == "" (
<nul set /p=, "%~1"
shift
goto start
)
echo.

In your example, the point of piping the output of find to xargs is that the standard behavior of find's -exec option is to execute the command once for each found file. If you're using find, and you want its standard behavior, then the answer is simple - don't use xargs to begin with.

Execute the ant task clean-all on every build.xml in the current folder or any sub-folder.
find . -name 'build.xml' -exec ant -f {} clean-all \;

Related

Making xargs work in Cygwin

Linux/bash, taking the list of lines on input and using xargs to work on each line:
% ls -1 --color=never | xargs -I{} echo {}
a
b
c
Cygwin, take 1:
$ ls -1 --color=never | xargs -I{} echo {}
xargs: invalid option -- I
Usage: xargs [-0prtx] [-e[eof-str]] [-i[replace-str]] [-l[max-lines]]
[-n max-args] [-s max-chars] [-P max-procs] [--null] [--eof[=eof-str]]
[--replace[=replace-str]] [--max-lines[=max-lines]] [--interactive]
[--max-chars=max-chars] [--verbose] [--exit] [--max-procs=max-procs]
[--max-args=max-args] [--no-run-if-empty] [--version] [--help]
[command [initial-arguments]]
Cygwin, take 2:
$ ls -1 --color=never | xargs echo
a b c
(yes, I know there's a universal method of ls -1 --color=never | while read X; do echo ${X}; done, I have tested that it works in Cygwin too, but I'm looking for a way to make xargs work correctly in Cygwin)
damienfrancois's answer is correct. You probably want to use -n to force echo to print one file name at a time.
However, if you are really interested in processing each file one at a time, you may be better off using find:
$ find . -maxdepth 1 -exec echo {} \;
A few things:
This will pick up file names that begin with a period (including '.')
This will put a ./ in front of your file names.
The echo being used is from /bin/echo and not the built-in shell version of echo.
However, it doesn't depend upon the shell executing ls *, possibly causing issues (such as coloring file names, or printing out files in sub-directories, which your command will do).
The purpose of xargs was to minimize the execution of a particular command:
$ find . -type f | xargs foo
In this case, xargs will execute foo only a minimal number of times. foo will only execute when the command line buffer gets full, or there are no more file names. However, if you are forcing an execution after each name, you're probably better off using find. It's a lot more flexible and you're not depending upon shell behavior.
Use the -n argument of xargs, which is really the one you should be using, as -I is an option that serves to give the argument a 'name' so you can make it appear anywhere in the command line:
$ ls -1 --color=never | xargs echo
a b c
$ ls -1 --color=never | xargs -n 1 echo
a
b
c
From the manpage:
-n max-args
Use at most max-args arguments per command line
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names read from standard input.
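To illustrate the "appear anywhere" point, a hypothetical example (it assumes a backup/ directory exists) where the replacement string is placed in the middle of the command; with the older Cygwin xargs shown above you would use -i instead of -I {}:
$ ls -1 --color=never | xargs -I {} cp {} backup/{}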

From UNIX shell, how to find all files containing a specific string, then print the 4th line of each file?

I want to find all files within the current directory that contain a given string, then print just the 4th line of each file.
grep --null -l "$yourstring" * | # List all the files containing your string
xargs -0 sed -n '4p;q' # Print the fourth line of said files.
Different editions of grep have slightly different incantations of --null, but it's usually there in some form. Read your manpage for details.
Update: I believe one of the null file list incantations of grep is a reasonable solution that will cover the vast majority of real-world use cases, but to be entirely portable, if your version of grep does not support any null output it is not perfectly safe to use it with xargs, so you must resort to find.
find . -maxdepth 1 -type f -exec grep -q "$yourstring" {} \; -exec sed -n '4p;q' {} +
Because find arguments can almost all be used as predicates, the -exec grep -q… part filters the files that are eventually fed to sed down to only those that contain the required string.
From another user:
grep -Frl string . | xargs -n 1 sed -n 4p
Give the below GNU find command a try:
find . -maxdepth 1 -type f -exec grep -l 'yourstring' {} \; | xargs -I {} awk 'NR==4{print; exit}' {}
It finds all the files in the current directory that contain the specific string, and prints line number 4 of each such file.
This while loop should work:
while read -d '' -r file; do
echo -n "$file: "
sed '4q;d' "$file"
done < <(grep --null -l "some-text" *.txt)

how to grep large number of files?

I am trying to grep 40k files in the current directory and I am getting this error.
for i in $(cat A01/genes.txt); do grep $i *.kaks; done > A01/A01.result.txt
-bash: /usr/bin/grep: Argument list too long
How does one normally grep thousands of files?
Thanks
Upendra
This makes David sad...
Everyone so far is wrong (except for anubhava).
Shell scripting is not like any other programming language because much of the interpretation of lines comes from the power of the shell interpolating them before the command is actually executed.
Let's take something simple:
$ set -x
$ ls
+ ls
bar.txt foo.txt fubar.log
$ echo The text files are *.txt
+ echo The text files are bar.txt foo.txt
The text files are bar.txt foo.txt
$ set +x
$
The set -x allows you to see how the shell actually interpolates the glob and then passes that back to the command as input. The line prefixed with + shows what is actually being executed.
You can see that the echo command isn't interpreting the *. Instead, the shell grabs the * and replaces it with the names of the matching files. Then and only then does the echo command actually execute.
When you have 40K plus files, and you do grep *, you're expanding that * to the names of those 40,000 plus files before grep even has a chance to execute, and that's where the error message /usr/bin/grep: Argument list too long is coming from.
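As an aside, you can see the limit that is being hit: on POSIX systems, getconf reports the maximum combined size (in bytes) of the arguments and environment that can be passed to a new process:
$ getconf ARG_MAX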
Fortunately, Unix has a way around this dilemma:
$ find . -name "*.kaks" -type f -maxdepth 1 | xargs grep -f A01/genes.txt
The find . -name "*.kaks" -type f -maxdepth 1 will find all of your *.kaks files, and the -maxdepth 1 will only include files in the current directory. The -type f makes sure you only pick up files and not a directory.
The find command pipes the names of the files into xargs, and xargs will append the names of the files to the grep -f A01/genes.txt command. However, xargs has a trick up its sleeve. It knows how long the command line buffer is, and will execute the grep when the command line buffer is full, then pass in another series of files to the grep. This way, grep gets executed maybe three or ten times (depending upon the size of the command line buffer), and all of our files are used.
Unfortunately, xargs uses whitespace as a separator for the file names. If your files contain spaces or tabs, you'll have trouble with xargs. Fortunately, there's another fix:
$ find . -name "*.kaks" -type f -maxdepth 1 -print0 | xargs -0 grep -f A01/genes.txt
The -print0 will cause find to print out the names of the files separated not by newlines, but by the NUL character. The -0 parameter for xargs tells xargs that the file separator isn't whitespace, but the NUL character. This fixes the issue.
You could also do this too:
$ find . -name "*.kaks" -type f -maxdepth 1 -exec grep -f A01/genes.txt {} \;
This will execute the grep for each and every file found, instead of what xargs does, which is to run grep only as many times as needed, with as many files as fit on the command line. The advantage of this is that it avoids shell interference entirely. However, it may or may not be less efficient.
What would be interesting is to experiment and see which one is more efficient. You can use time to see:
$ time find . -name "*.kaks" -type f -maxdepth 1 -exec grep -f A01/genes.txt {} \;
This will execute the command and then tell you how long it took. Try it with the -exec and with xargs and see which is faster. Let us know what you find.
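For the xargs side of that comparison, something like this should do (bash's time keyword times the entire pipeline):
$ time find . -name "*.kaks" -type f -maxdepth 1 -print0 | xargs -0 grep -f A01/genes.txt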
You can combine find with grep like this:
find . -maxdepth 1 -name '*.kaks' -exec grep -H -f A01/genes.txt '{}' \; > A01/A01.result.txt
You can use the recursive feature of grep:
for i in $(cat A01/genes.txt); do
grep -r $i .
done > A01/A01.result.txt
though if you want to select only kaks files:
for i in $(cat A01/genes.txt); do
find . -iregex '.*\.kaks$' -exec grep $i {} \;
done > A01/A01.result.txt
Put another for loop inside your outer one:
for f in *.kaks; do
grep -H $i "$f"
done
By the way, are you interested in finding EVERY occurrence in each file, or merely whether the search string exists in there one or more times? If it is "good enough" to know the string occurs one or more times, you can specify "-m 1" to grep and it will not bother reading/searching the rest of the file after finding the first match, which could potentially save lots of time.
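Putting that together, a minimal sketch of the nested-loop approach with early exit per file (paths taken from the question; -m 1 is supported by GNU and BSD grep):
while read -r i; do
    for f in *.kaks; do
        grep -H -m 1 "$i" "$f"
    done
done < A01/genes.txt > A01/A01.result.txt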
The following solution has worked for me:
Problem:
grep -r "example\.com" *
-bash: /bin/grep: Argument list too long
Solution:
grep -r "example\.com" .
["In newer versions of grep you can omit the “.“, as the current directory is implied."]
Source:
Reinlick, J. https://www.saotn.org/bash-grep-through-large-number-files-argument-list-too-long/

xargs with multiple commands

In the current directory, I'd like to print each filename and the contents of that file.
I can print filenames or contents separately by
find . | grep "file_for_print" | xargs echo
find . | grep "file_for_print" | xargs cat
but what I want is printing them together like this:
file1
line1 inside file1
line2 inside file1
file2
line1 inside file2
line2 inside file2
I read xargs with multiple commands as argument
and tried
find . | grep "file_for_print" | xargs -I % sh -c 'echo; cat;'
but doesn't work.
I'm not familiar with xargs, so don't know what exactly "-I % sh -c" means.
could anyone help me? thank you!
find . | grep "file_for_print" | xargs -I % sh -c 'echo %; cat %;' (OP was missing %s)
To start with, there is virtually no difference between:
find . | grep "file_for_print" | xargs echo
and
find . -name "file_for_print*"
except that the second one will not match filenames like this_is_not_the_file_for_print, and it will print the filenames one per line. It will also be a lot faster, because it doesn't need to generate and print the entire recursive directory structure just in order for grep to toss most of it away.
find . -name "file_for_print*"
is actually exactly the same as
find . -name "file_for_print*" -print
where the -print action prints each matched filename followed by a newline. If you don't provide find with any actions, it assumes you wanted -print. But it has more tricks up its sleeve than that. For example:
find . -name "file_for_print*" -exec cat {} \;
The -exec action causes find to execute the following command, up to the \;, replacing {} with each matching file name.
find does not limit itself to a single action. You can tell it to do however many you want. So:
find . -name "file_for_print*" -print -exec cat {} \;
will probably do pretty well what you want.
For lots more information on this very useful utility, type:
man find
or
info find
and read all about it.
Since it's not been said yet: -I % tells xargs to replace '%' with the arguments in the command you give it. The sh -c '...' just means run the commands '...' in a new shell.
So
xargs -I % sh -c 'echo %; cat %;'
will run echo [filename] followed by cat [filename] for every filename given to xargs. The echo and cat commands will be executed inside a different shell process but this usually doesn't matter. Your version didn't work because it was missing the % signs inside the command passed to xargs.
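A slightly safer sketch passes the filename to the inner shell as a positional parameter instead of substituting it into the command text, so filenames containing quotes or $ characters can't break the shell snippet:
find . | grep "file_for_print" | xargs -I % sh -c 'echo "$1"; cat "$1"' sh %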
For what it's worth I would use this command to achieve the same thing:
find -name "*file_for_print*" | parallel 'echo {}; cat {};'
because it's simpler (parallel automatically uses {} as the substitution character and can take multiple commands by default).
In this specific case, each command is executed for each individual file anyway, so there's no advantage in using xargs. You may just append -exec twice to your 'find':
find . -name "*file_for_print*" -exec echo {} \; -exec cat {} \;
In this case, -print could be used instead of the first -exec echo, as pointed out by rici, but this example shows the ability to execute two arbitrary commands with a single find.
What about writing your own bash function?
#!/bin/bash
myFunction() {
while read -r file; do
echo "$file"
cat "$file"
done
}
find . -name "file_for_print*" | myFunction

Delete all but the most recent X files in bash

Is there a simple way, in a pretty standard UNIX environment with bash, to run a command to delete all but the most recent X files from a directory?
To give a bit more of a concrete example, imagine some cron job writing out a file (say, a log file or a tar-ed up backup) to a directory every hour. I'd like a way to have another cron job running which would remove the oldest files in that directory until there are less than, say, 5.
And just to be clear: if there's only one file present, it should never be deleted.
The problems with the existing answers:
inability to handle filenames with embedded spaces or newlines.
in the case of solutions that invoke rm directly on an unquoted command substitution (rm `...`), there's an added risk of unintended globbing.
inability to distinguish between files and directories (i.e., if directories happened to be among the 5 most recently modified filesystem items, you'd effectively retain fewer than 5 files, and applying rm to directories will fail).
wnoise's answer addresses these issues, but the solution is GNU-specific (and quite complex).
Here's a pragmatic, POSIX-compliant solution that comes with only one caveat: it cannot handle filenames with embedded newlines - but I don't consider that a real-world concern for most people.
For the record, here's the explanation for why it's generally not a good idea to parse ls output: http://mywiki.wooledge.org/ParsingLs
ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}
Note: This command operates in the current directory; to target a directory explicitly, use a subshell ((...)) with cd:
(cd /path/to && ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {})
The same applies analogously to the commands below.
The above is inefficient, because xargs has to invoke rm separately for each filename.
However, your platform's specific xargs implementation may allow you to solve this problem:
A solution that works with GNU xargs is to use -d '\n', which makes xargs consider each input line a separate argument, yet passes as many arguments as will fit on a command line at once:
ls -tp | grep -v '/$' | tail -n +6 | xargs -d '\n' -r rm --
Note: Option -r (--no-run-if-empty) ensures that rm is not invoked if there's no input.
A solution that works with both GNU xargs and BSD xargs (including on macOS) - though technically still not POSIX-compliant - is to use -0 to handle NUL-separated input, after first translating newlines to NUL (0x0) chars., which also passes (typically) all filenames at once:
ls -tp | grep -v '/$' | tail -n +6 | tr '\n' '\0' | xargs -0 rm --
Explanation:
ls -tp prints the names of filesystem items sorted by how recently they were modified, in descending order (most recently modified items first) (-t), with directories printed with a trailing / to mark them as such (-p).
Note: It is the fact that ls -tp always outputs file / directory names only, not full paths, that necessitates the subshell approach mentioned above for targeting a directory other than the current one ((cd /path/to && ls -tp ...)).
grep -v '/$' then weeds out directories from the resulting listing, by omitting (-v) lines that have a trailing / (/$).
Caveat: Since a symlink that points to a directory is technically not itself a directory, such symlinks will not be excluded.
tail -n +6 skips the first 5 entries in the listing, in effect returning all but the 5 most recently modified files, if any.
Note that in order to exclude N files, N+1 must be passed to tail -n +.
xargs -I {} rm -- {} (and its variations) then invokes rm on all these files; if there are no matches at all, xargs won't do anything.
xargs -I {} rm -- {} defines placeholder {} that represents each input line as a whole, so rm is then invoked once for each input line, but with filenames with embedded spaces handled correctly.
-- in all cases ensures that any filenames that happen to start with - aren't mistaken for options by rm.
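If you want the retention count as a parameter, a minimal sketch (the variable name keep is only illustrative), using the N+1 rule noted above:
keep=5
ls -tp | grep -v '/$' | tail -n +$((keep + 1)) | xargs -I {} rm -- {}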
A variation on the original problem, in case the matching files need to be processed individually or collected in a shell array:
# One by one, in a shell loop (POSIX-compliant):
ls -tp | grep -v '/$' | tail -n +6 | while IFS= read -r f; do echo "$f"; done
# One by one, but using a Bash process substitution (<(...)),
# so that the variables inside the `while` loop remain in scope:
while IFS= read -r f; do echo "$f"; done < <(ls -tp | grep -v '/$' | tail -n +6)
# Collecting the matches in a Bash *array*:
IFS=$'\n' read -d '' -ra files < <(ls -tp | grep -v '/$' | tail -n +6)
printf '%s\n' "${files[@]}" # print array elements
Remove all but 5 (or whatever number) of the most recent files in a directory.
rm `ls -t | awk 'NR>5'`
(ls -t|head -n 5;ls)|sort|uniq -u|xargs rm
This version supports names with spaces:
(ls -t|head -n 5;ls)|sort|uniq -u|sed -e 's,.*,"&",g'|xargs rm
Simpler variant of thelsdj's answer:
ls -tr | head -n -5 | xargs --no-run-if-empty rm
ls -tr displays all the files, oldest first (-t newest first, -r reverse).
head -n -5 displays all but the 5 last lines (ie the 5 newest files).
xargs rm passes the selected files to rm (batching them as needed); --no-run-if-empty avoids calling rm when there is nothing to delete.
find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -r -z -n | awk 'BEGIN { RS="\0"; ORS="\0"; FS="" } NR > 5 { sub("^[0-9]*(.[0-9]*)? ", ""); print }' | xargs -0 rm -f
Requires GNU find for -printf, and GNU sort for -z, and GNU awk for "\0", and GNU xargs for -0, but handles files with embedded newlines or spaces.
All these answers fail when there are directories in the current directory. Here's something that works:
find . -maxdepth 1 -type f | xargs -x ls -t | awk 'NR>5' | xargs -L1 rm
This:
works when there are directories in the current directory
tries to remove each file even if the previous one couldn't be removed (due to permissions, etc.)
fails safe when the number of files in the current directory is excessive and xargs would normally screw you over (the -x)
doesn't cater for spaces in filenames (perhaps you're using the wrong OS?)
ls -tQ | tail -n+4 | xargs rm
List filenames by modification time, quoting each filename. Exclude first 3 (3 most recent). Remove remaining.
EDIT after helpful comment from mklement0 (thanks!): corrected -n+3 argument, and note this will not work as expected if filenames contain newlines and/or the directory contains subdirectories.
Ignoring newlines is ignoring security and good coding. wnoise had the only good answer. Here is a variation on his that puts the filenames in an array $x
while IFS= read -rd ''; do
x+=("${REPLY#* }");
done < <(find . -maxdepth 1 -printf '%T@ %p\0' | sort -r -z -n )
For Linux (GNU tools), an efficient & robust way to keep the n newest files in the current directory while removing the rest:
n=5
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
sort -z -nrt ' ' -k1,1 |
sed -z -e "1,${n}d" -e 's/[^ ]* //' |
xargs -0r rm -f
For BSD, find doesn't have the -printf predicate, stat can't output NULL bytes, and sed + awk can't handle NULL-delimited records.
Here's a solution that doesn't support newlines in paths but that safeguards against them by filtering them out:
#!/bin/bash
n=5
find . -maxdepth 1 -type f ! -path $'*\n*' -exec stat -f '%.9Fm %N' {} + |
sort -nrt ' ' -k1,1 |
awk -v n="$n" -F'^[^ ]* ' 'NR > n {printf "%s%c", $2, 0}' |
xargs -0 rm -f
note: I'm using bash because of the $'\n' notation. For sh you can define a variable containing a literal newline and use it instead.
Solution for UNIX & Linux (inspired from AIX/HP-UX/SunOS/BSD/Linux ls -b):
Some platforms don't provide find -printf, nor stat, nor support NUL-delimited records with stat/sort/awk/sed/xargs. That's why using perl is probably the most portable way to tackle the problem, because it is available by default in almost every OS.
I could have written the whole thing in perl but I didn't. I only use it for substituting stat and for encoding-decoding-escaping the filenames. The core logic is the same as the previous solutions and is implemented with POSIX tools.
note: perl's default stat has a resolution of a second, but starting from perl-5.8.9 you can get sub-second resolution with the stat function of the module Time::HiRes (when both the OS and the filesystem support it). That's what I'm using here; if your perl doesn't provide it then you can remove the -MTime::HiRes=stat from the command line.
n=5
find . '(' -name '.' -o -prune ')' -type f -exec \
perl -MTime::HiRes=stat -le '
foreach (@ARGV) {
@st = stat($_);
if ( @st > 0 ) {
s/([\\\n])/sprintf( "\\%03o", ord($1) )/ge;
print sprintf( "%.9f %s", $st[9], $_ );
}
else { print STDERR "stat: $_: $!"; }
}
' {} + |
sort -nrt ' ' -k1,1 |
sed -e "1,${n}d" -e 's/[^ ]* //' |
perl -l -ne '
s/\\([0-7]{3})/chr(oct($1))/ge;
s/(["\n])/"\\$1"/g;
print "\"$_\"";
' |
xargs -E '' sh -c '[ "$#" -gt 0 ] && rm -f "$@"' sh
Explanations:
For each file found, the first perl gets the modification time and outputs it along the encoded filename (each newline and backslash characters are replaced with the literals \012 and \134 respectively).
Now each timestamp-plus-filename record is guaranteed to be single-line, so POSIX sort and sed can safely work with this stream.
The second perl decodes the filenames and escapes them for POSIX xargs.
Lastly, xargs calls rm for deleting the files. The sh command is a trick that prevents xargs from running rm when there are no files to delete.
I realize this is an old thread, but maybe someone will benefit from this. This command will find files in the current directory:
for F in $(find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n' | sort -r -n | tail -n+5 | awk '{ print $2; }'); do rm $F; done
This is a little more robust than some of the previous answers as it allows you to limit your search domain to files matching expressions. First, find files matching whatever conditions you want. Print those files with the timestamps next to them.
find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n'
Next, sort them by the timestamps:
sort -r -n
Then, knock off the 4 most recent files from the list:
tail -n+5
Grab the 2nd column (the filename, not the timestamp):
awk '{ print $2; }'
And then wrap that whole thing up into a for statement:
for F in $(); do rm $F; done
This may be a more verbose command, but I had much better luck being able to target conditional files and execute more complex commands against them.
If the filenames don't have spaces, this will work:
ls -C1 -t| awk 'NR>5'|xargs rm
If the filenames do have spaces, something like
ls -C1 -t | awk 'NR>5' | sed -e "s/^/rm '/" -e "s/$/'/" | sh
Basic logic:
get a listing of the files in time order, one column
get all but the first 5 (n=5 for this example)
first version: send those to rm
second version: gen a script that will remove them properly
With zsh
Assuming you don't care about present directories and you will not have more than 999 files (choose a bigger number if you want, or create a while loop).
[ 6 -le `ls *(.)|wc -l` ] && rm *(.om[6,999])
In *(.om[6,999]), the . means files, the o means sort order up, the m means by date of modification (put a for access time or c for inode change), the [6,999] chooses a range of file, so doesn't rm the 5 first.
Adaptation of @mklement0's excellent answer with some parameters and without needing to navigate to the folder containing the files to be deleted...
TARGET_FOLDER="/my/folder/path"
FILES_KEEP=5
ls -tp "$TARGET_FOLDER"**/* | grep -v '/$' | tail -n +$((FILES_KEEP+1)) | xargs -d '\n' -r rm --
[Ref(s).: https://stackoverflow.com/a/3572628/3223785 ]
Thanks! 😉
Found an interesting command in Sed One-Liners - Delete last 3 lines - and found it perfect for another way to skin the cat (okay, not really), but here's the idea:
#!/bin/bash
# sed cmd chng #2 to value file wish to retain
cd /opt/depot
ls -1 MyMintFiles*.zip > BigList
sed -n -e :a -e '1,2!{P;N;D;};N;ba' BigList > DeList
for i in `cat DeList`
do
echo "Deleted $i"
rm -f $i
#echo "File(s) gonzo "
#read junk
done
exit 0
Removes all but the 10 latest (most recents) files
ls -t1r | head -n $(echo $(ls -1 | wc -l) - 10 | bc) | xargs rm
If less than 10 files no file is removed and you will have :
error head: illegal line count -- 0
To count files with bash
I needed an elegant solution for the busybox (router); all xargs or array solutions were useless to me - no such command available there. find and mtime is not the proper answer as we are talking about 10 items and not necessarily 10 days. Espo's answer was the shortest and cleanest and likely the most universal one.
Error with spaces and when no files are to be deleted are both simply solved the standard way:
rm "$(ls -td *.tar | awk 'NR>7')" 2>&-
A bit more educational version: we can do it all if we use awk differently. Normally, I use this method to pass (return) variables from awk to the shell. As we read all the time that this cannot be done, I beg to differ: here is the method.
Example for .tar files with no problems regarding spaces in the filenames. To test, replace "rm" with "ls".
eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}')
Explanation:
ls -td *.tar lists all .tar files sorted by the time. To apply to all the files in the current folder, remove the "d *.tar" part
awk 'NR>7... skips the first 7 lines
print "rm \"" $0 "\"" constructs a line: rm "file name"
eval executes it
Since we are using rm, I would not use the above command in a script! Wiser usage is:
(cd /FolderToDeleteWithin && eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}'))
In this case, using the ls -t command will not do any harm even on such silly examples as: touch 'foo " bar' and touch 'hello * world'. Not that we ever create files with such names in real life!
Sidenote. If we wanted to pass a variable to the sh this way, we would simply modify the print (simple form, no spaces tolerated):
print "VarName="$1
to set the variable VarName to the value of $1. Multiple variables can be created in one go. This VarName becomes a normal sh variable and can be normally used in a script or shell afterwards. So, to create variables with awk and give them back to the shell:
eval $(ls -td *.tar | awk 'NR>7 { print "VarName=\""$1"\"" }'); echo "$VarName"
leaveCount=5
fileCount=$(ls -1 *.log | wc -l)
tailCount=$((fileCount - leaveCount))
# avoid negative tail argument
[[ $tailCount -lt 0 ]] && tailCount=0
ls -t *.log | tail -$tailCount | xargs rm -f
I made this into a bash shell script. Usage: keep NUM DIR where NUM is the number of files to keep and DIR is the directory to scrub.
#!/bin/bash
# Keep last N files by date.
# Usage: keep NUMBER DIRECTORY
echo ""
if [ $# -lt 2 ]; then
echo "Usage: $0 NUMFILES DIR"
echo "Keep last N newest files."
exit 1
fi
if [ ! -e "$2" ]; then
echo "ERROR: directory '$2' does not exist"
exit 1
fi
if [ ! -d "$2" ]; then
echo "ERROR: '$2' is not a directory"
exit 1
fi
pushd "$2" > /dev/null
ls -tp | grep -v '/' | tail -n +"$(($1 + 1))" | xargs -I {} rm -- {}
popd > /dev/null
echo "Done. Kept $1 most recent files in $2."
ls $2|wc -l
Modified version of the answer of #Fabien if you want to specify a path. Useful if you're running the script elsewhere.
ls -tr /path/foo/ | head -n -5 | xargs -I% --no-run-if-empty rm /path/foo/%

Resources