xargs to execute a string - what am I doing wrong? - shell

I'm trying to rename all files in the current directory so that uppercase names are converted to lowercase. I'm trying to do it like this:
ls -1|gawk '{print "`mv "$0" "tolower($0)"`"}'|xargs -i -t eval {}
I have two files in the directory, Y and YY
-t was added for debugging, and the output is:
eval `mv Y y`
xargs: eval: No such file or directory
if I execute the eval on its own, it works and moves Y to y.
I know there are other ways to achieve this, but I'd like to get this working if I can!
Cheers

eval is a shell builtin command, not a standalone executable. Thus, xargs cannot run it directly. You probably want:
ls -1 | gawk '{print "`mv "$0" "tolower($0)"`"}' | xargs -i -t sh -c "{}"
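You can see the distinction from any shell: type reports builtins, while command -v prints the path of an on-disk executable, which is the only kind of command xargs can run. sh works in eval's place because sh itself is such an executable, and it then interprets the string it is handed:
type eval        # eval is a shell builtin
command -v mv    # prints a path such as /bin/mv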

Although you're looking for an xargs solution, another way to do the same thing is with tr (assuming sh/bash/ksh syntax):
for i in *; do mv $i `echo $i | tr '[A-Z]' '[a-z]'`; done
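For what it's worth, quoting the expansions makes the same loop survive names with spaces (a sketch; printf replaces echo to avoid backslash surprises, and -- protects names starting with a dash):
for i in *; do mv -- "$i" "$(printf '%s' "$i" | tr '[:upper:]' '[:lower:]')"; done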

If your files are created by creative users, you will see files like:
My brother's 12" records
The solutions so far do not work on files like that. If you have GNU Parallel installed, this will work (even on files with creative names):
ls | parallel 'mv {} "$(echo {} | tr "[:upper:]" "[:lower:]")"'
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ

You can use eval with xargs as shown below.
Note: I only tested this in the bash shell.
ls -1| gawk '{print "mv "$0" /tmp/"toupper($0)""}'| xargs -I {} sh -c "eval {}"
or
ls -1| gawk '{print "mv "$0" /tmp/"toupper($0)""}'| xargs -I random_var_name sh -c "eval random_var_name"
I generally use this approach when I want to avoid a one-liner for loop.
e.g.
for file in $(find /some/path | grep "pattern");do somecmd $file; done
The same can be written like this:
find /some/path | grep "pattern"| xargs -I {} sh -c "somecmd {}"
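If the pattern only needs to match the filename, find can do the filtering and the execution itself, with no pipe at all (a sketch assuming a name-based pattern):
find /some/path -name '*pattern*' -exec somecmd {} \;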

Related

Shell command xargs & sed doesn't work

I just want to batch modify the suffix of the files, but it doesn't work!
The command line I used as below:
ls *html | xargs -I{} echo "\`echo {} | sed 's/html/css/g'\`"
However, when I just used ls *html, it shows:
file1.html file2.html file3.html file4.html file5.html
When I used ls *html | sed 's/html/css/g', it shows as I expected!
like this:
file1.css file2.css file3.css file4.css file5.css
I work on Mac OS. Could anyone give me some suggestions?
Thanks in advance.
Because the backquotes are inside double quotes, the command substitution gets executed immediately by the shell, not by xargs on each file.
The result is the same as
ls *html | xargs -I{} echo "{}"
However, if you use single quotes, you run into other troubles. You end up having to do something like this:
ls *html | xargs -I{} sh -c 'echo `echo {} | sed '\''s/html/css/g'\''`'
but it gets to be a mess, and we haven't even got to the actual renaming yet.
Using a loop is a bit nicer:
for file in *html; do
newname=${file%html}css # ${file%html} strips the trailing "html"
mv "$file" "$newname"
done
Using GNU Parallel:
ls *html | parallel echo before {} after {.}.css
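Here {.} is the input line with its extension stripped, so the actual rename would presumably be:
ls *html | parallel mv {} {.}.css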

Bash: moving a group of files of a certain size with grep, awk and xargs

At work, I need to upload images to a website. They cannot be larger than 300 KB. In order to group the images that are ready to be uploaded, I devised the following line in Bash:
du -h * | grep "[0-2]..K" | awk '{print $2}' | xargs mv Ready/
This did not work, however, because the shell returned the following:
usage: mv [-f | -i | -n] [-v] source target
mv [-f | -i | -n] [-v] source ... directory
Finally, I resorted to a for-loop to accomplish the same:
for file in $(du -h * | grep "[0-2]..K" | awk '{print $2}')
do
mv -v ${file} Ready/
done
Can somebody explain why the first line doesn't work? It is probably something very simple I'm missing, but I can't seem to find it.
I'm on Mac OS X 10.7, Bash version 4.3.
I would use the find command to get all files smaller than a certain size, makes the code a lot cleaner and easier to read like so:
find . -size -300k -name '*.png' -exec mv {} Ready/ \;
The reason your first command fails is that xargs appends the incoming filenames to the end of the command line, producing mv Ready/ file1 file2 ..., while mv needs the target directory to come last. Use -I to place each filename explicitly before Ready/. This should work:
du -h * | grep "[0-2]..K" | awk '{print $2}' | xargs -I {} mv {} Ready/
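If you'd rather not parse du output at all (its format varies between platforms), a batched find sketch should work on both GNU and macOS userlands (the trailing sh only fills the $0 slot):
find . -maxdepth 1 -type f -size -300k -exec sh -c 'mv "$@" Ready/' sh {} +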

how to pipe commands in ubuntu

How do I pipe commands and their results in Ubuntu when writing them in the terminal? I would write the following commands in sequence:
$ ls | grep ab
abc.pdf
cde.pdf
$ cp abc.pdf cde.pdf files/
I would like to pipe the results of the first command into the second command, and write it all on one line. How do I do that?
something like
$ cp "ls | grep ab" files/
(the above is a contrived example and can be written as cp *.pdf files/)
Use the following:
cp `ls | grep ab` files/
Well, since the xargs person gave up, I'll offer my xargs solution:
ls | grep ab | xargs echo | while read f; do cp $f files/; done
Of course, this solution suffers from an obvious flaw: files with spaces in them will cause chaos.
An xargs solution without this flaw? Hmm...
ls | grep ab | xargs -d '\n' bash -c 'docp() { cp "$@" files/; }; docp "$@"' bash
Seems a bit klunky, but it works. Unless you have files with newlines in them, I mean. However, anyone who does that deserves what they get. Even that is solvable:
find . -mindepth 1 -maxdepth 1 -name '*ab*' -print0 | xargs -0 bash -c 'docp() { cp "$@" files/; }; docp "$@"' bash
To use xargs, you need to ensure that the filename arguments are the last arguments passed to the cp command. You can accomplish this with the -t option to cp to specify the target directory:
ls | grep ab | xargs cp -t files/
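Note that -t is a GNU coreutils extension. On BSD/macOS, where cp lacks -t, BSD xargs offers -J to splice the filenames in before the target instead (a sketch):
ls | grep ab | xargs -J % cp % files/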
Of course, even though this is a contrived example, you should not parse the output of ls.

How to use > in an xargs command?

I want to find a bash command that will let me grep every file in a directory and write the output of that grep to a separate file. My guess would have been to do something like this
ls -1 | xargs -I{} "grep ABC '{}' > '{}'.out"
but, as far as I know, xargs doesn't like the double-quotes. If I remove the double-quotes, however, then the command redirects the output of the entire command to a single file called '{}'.out instead of to a series of individual files.
Does anyone know of a way to do this using xargs? I just used this grep scenario as an example to illustrate my problem with xargs so any solutions that don't use xargs aren't as applicable for me.
Do not make the mistake of doing this:
sh -c "grep ABC {} > {}.out"
This will break under a lot of conditions, including funky filenames and is impossible to quote right. Your {} must always be a single completely separate argument to the command to avoid code injection bugs. What you need to do, is this:
xargs -I{} sh -c 'grep ABC "$1" > "$1.out"' -- {}
Applies to xargs as well as find.
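A batched sketch of the same idea spawns one shell per group of files instead of one per file; the literal sh fills the $0 slot and the filenames land in "$@":
find . -type f -exec sh -c 'for f; do grep ABC "$f" > "$f.out"; done' sh {} +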
By the way, never use xargs without the -0 option (unless for very rare and controlled one-time interactive use where you aren't worried about destroying your data).
Also don't parse ls. Ever. Use globbing or find instead: http://mywiki.wooledge.org/ParsingLs
Use find for everything that needs recursion and a simple loop with a glob for everything else:
find /foo -exec sh -c 'grep "$1" > "$1.out"' -- {} \;
or non-recursive:
for file in *; do grep "$file" > "$file.out"; done
Notice the proper use of quotes.
A solution without xargs is the following:
find . -mindepth 1 -maxdepth 1 -type f -exec sh -c "grep ABC '{}' > '{}.out'" \;
...and the same can be done with xargs, it turns out:
ls -1 | xargs -I {} sh -c "grep ABC '{}' > '{}.out'"
Edit: single quotes added after remark by lhunath.
I assume your example is just an example and that you may need > for other things. GNU Parallel http://www.gnu.org/software/parallel/ may be your rescue. It does not need additional quoting as long as your filenames do not contain \n:
ls | parallel "grep ABC {} > {}.out"
If you have filenames with \n in it:
find . -print0 | parallel -0 "grep ABC {} > {}.out"
As an added bonus you get the jobs run in parallel.
Watch the intro videos to learn more: http://pi.dk/1
The 10-second installation will try to do a full installation; if that fails, a personal installation; if that fails, a minimal installation:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
If you need to move it to a server, that does not have GNU Parallel installed, try parallel --embed.
Actually, most of the answers here do not work with all filenames (if they contain double and single quotes), including the answers by lhunath and Stephan202.
This solution works with filenames with single and double quotes:
find . -mindepth 1 -print0 | xargs -0 -I{} sh -c 'grep ABC "$1" > "$1.out"' -- {}
Here's a test with filename with both single and double quotes:
echo ABC > "I'm here.txt"
# lhunath solution (hangs waiting for input)
$ find . -exec sh -c 'grep "$1" > "$1.out"' -- {} \;
# Stephan202 solutions
$ find . -mindepth 1 -maxdepth 1 -type f -exec sh -c "grep ABC '{}' > '{}.out'" \;
grep: ./Im: No such file or directory
grep: here.txt > ./Im here.txt.out: No such file or directory
$ ls -1 | xargs -I {} sh -c "grep ABC '{}' > '{}.out'"
xargs: unterminated quote
# this solution
$ find . -mindepth 1 -print0 | xargs -0 -I{} sh -c 'grep ABC "$1" > "$1.out"' -- {}
$ ls -1
"I'm here.txt"
"I'm here.txt.out"

Delete all but the most recent X files in bash

Is there a simple way, in a pretty standard UNIX environment with bash, to run a command to delete all but the most recent X files from a directory?
To give a bit more of a concrete example, imagine some cron job writing out a file (say, a log file or a tar-ed up backup) to a directory every hour. I'd like a way to have another cron job running which would remove the oldest files in that directory until there are less than, say, 5.
And just to be clear, if there's only one file present, it should never be deleted.
The problems with the existing answers:
inability to handle filenames with embedded spaces or newlines.
in the case of solutions that invoke rm directly on an unquoted command substitution (rm `...`), there's an added risk of unintended globbing.
inability to distinguish between files and directories (i.e., if directories happened to be among the 5 most recently modified filesystem items, you'd effectively retain fewer than 5 files, and applying rm to directories will fail).
wnoise's answer addresses these issues, but the solution is GNU-specific (and quite complex).
Here's a pragmatic, POSIX-compliant solution that comes with only one caveat: it cannot handle filenames with embedded newlines - but I don't consider that a real-world concern for most people.
For the record, here's the explanation for why it's generally not a good idea to parse ls output: http://mywiki.wooledge.org/ParsingLs
ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}
Note: This command operates in the current directory; to target a directory explicitly, use a subshell ((...)) with cd:
(cd /path/to && ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {})
The same applies analogously to the commands below.
The above is inefficient, because xargs has to invoke rm separately for each filename.
However, your platform's specific xargs implementation may allow you to solve this problem:
A solution that works with GNU xargs is to use -d '\n', which makes xargs consider each input line a separate argument, yet passes as many arguments as will fit on a command line at once:
ls -tp | grep -v '/$' | tail -n +6 | xargs -d '\n' -r rm --
Note: Option -r (--no-run-if-empty) ensures that rm is not invoked if there's no input.
A solution that works with both GNU xargs and BSD xargs (including on macOS) - though technically still not POSIX-compliant - is to use -0 to handle NUL-separated input, after first translating newlines to NUL (0x0) chars., which also passes (typically) all filenames at once:
ls -tp | grep -v '/$' | tail -n +6 | tr '\n' '\0' | xargs -0 rm --
Explanation:
ls -tp prints the names of filesystem items sorted by how recently they were modified, in descending order (most recently modified items first) (-t), with directories printed with a trailing / to mark them as such (-p).
Note: It is the fact that ls -tp always outputs file / directory names only, not full paths, that necessitates the subshell approach mentioned above for targeting a directory other than the current one ((cd /path/to && ls -tp ...)).
grep -v '/$' then weeds out directories from the resulting listing, by omitting (-v) lines that have a trailing / (/$).
Caveat: Since a symlink that points to a directory is technically not itself a directory, such symlinks will not be excluded.
tail -n +6 skips the first 5 entries in the listing, in effect returning all but the 5 most recently modified files, if any.
Note that in order to exclude N files, N+1 must be passed to tail -n +.
xargs -I {} rm -- {} (and its variations) then invokes rm on all these files; if there are no matches at all, xargs won't do anything.
xargs -I {} rm -- {} defines placeholder {} that represents each input line as a whole, so rm is then invoked once for each input line, but with filenames with embedded spaces handled correctly.
-- in all cases ensures that any filenames that happen to start with - aren't mistaken for options by rm.
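To parameterize how many files to keep, a hypothetical n variable slots straight into the tail expression (per the N+1 rule above):
n=5
ls -tp | grep -v '/$' | tail -n +$((n + 1)) | xargs -I {} rm -- {}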
A variation on the original problem, in case the matching files need to be processed individually or collected in a shell array:
# One by one, in a shell loop (POSIX-compliant):
ls -tp | grep -v '/$' | tail -n +6 | while IFS= read -r f; do echo "$f"; done
# One by one, but using a Bash process substitution (<(...),
# so that the variables inside the `while` loop remain in scope:
while IFS= read -r f; do echo "$f"; done < <(ls -tp | grep -v '/$' | tail -n +6)
# Collecting the matches in a Bash *array*:
IFS=$'\n' read -d '' -ra files < <(ls -tp | grep -v '/$' | tail -n +6)
printf '%s\n' "${files[@]}" # print array elements
Remove all but 5 (or whatever number) of the most recent files in a directory.
rm `ls -t | awk 'NR>5'`
(ls -t|head -n 5;ls)|sort|uniq -u|xargs rm
This version supports names with spaces:
(ls -t|head -n 5;ls)|sort|uniq -u|sed -e 's,.*,"&",g'|xargs rm
Simpler variant of thelsdj's answer:
ls -tr | head -n -5 | xargs --no-run-if-empty rm
ls -tr displays all the files, oldest first (-t newest first, -r reverse).
head -n -5 displays all but the 5 last lines (i.e. the 5 newest files).
xargs rm calls rm for each selected file.
find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -r -z -n | awk 'BEGIN { RS="\0"; ORS="\0"; FS="" } NR > 5 { sub("^[0-9]*(.[0-9]*)? ", ""); print }' | xargs -0 rm -f
Requires GNU find for -printf, and GNU sort for -z, and GNU awk for "\0", and GNU xargs for -0, but handles files with embedded newlines or spaces.
All these answers fail when there are directories in the current directory. Here's something that works:
find . -maxdepth 1 -type f | xargs -x ls -t | awk 'NR>5' | xargs -L1 rm
This:
works when there are directories in the current directory
tries to remove each file even if the previous one couldn't be removed (due to permissions, etc.)
fails safe when the number of files in the current directory is excessive and xargs would normally screw you over (the -x)
doesn't cater for spaces in filenames (perhaps you're using the wrong OS?)
ls -tQ | tail -n+4 | xargs rm
List filenames by modification time, quoting each filename. Exclude first 3 (3 most recent). Remove remaining.
EDIT after helpful comment from mklement0 (thanks!): corrected -n+3 argument, and note this will not work as expected if filenames contain newlines and/or the directory contains subdirectories.
Ignoring newlines is ignoring security and good coding. wnoise had the only good answer. Here is a variation on his that puts the filenames in an array $x
while IFS= read -rd ''; do
x+=("${REPLY#* }");
done < <(find . -maxdepth 1 -printf '%T@ %p\0' | sort -r -z -n )
For Linux (GNU tools), an efficient & robust way to keep the n newest files in the current directory while removing the rest:
n=5
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
sort -z -nrt ' ' -k1,1 |
sed -z -e "1,${n}d" -e 's/[^ ]* //' |
xargs -0r rm -f
For BSD, find doesn't have the -printf predicate, stat can't output NULL bytes, and sed + awk can't handle NULL-delimited records.
Here's a solution that doesn't support newlines in paths but that safeguards against them by filtering them out:
#!/bin/bash
n=5
find . -maxdepth 1 -type f ! -path $'*\n*' -exec stat -f '%.9Fm %N' {} + |
sort -nrt ' ' -k1,1 |
awk -v n="$n" -F'^[^ ]* ' 'NR > n {printf "%s%c", $2, 0}' |
xargs -0 rm -f
note: I'm using bash because of the $'\n' notation. For sh you can define a variable containing a literal newline and use it instead.
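For the record, that sh workaround might look like this (a sketch): store a literal newline in a variable and use it where bash had $'\n':
nl='
'
find . -maxdepth 1 -type f ! -path "*${nl}*" -exec stat -f '%.9Fm %N' {} +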
Solution for UNIX & Linux (inspired from AIX/HP-UX/SunOS/BSD/Linux ls -b):
Some platforms don't provide find -printf, nor stat, nor support NUL-delimited records with stat/sort/awk/sed/xargs. That's why using perl is probably the most portable way to tackle the problem, because it is available by default in almost every OS.
I could have written the whole thing in perl but I didn't. I only use it for substituting stat and for encoding-decoding-escaping the filenames. The core logic is the same as the previous solutions and is implemented with POSIX tools.
note: perl's default stat has a resolution of a second, but starting from perl-5.8.9 you can get sub-second resolution with the stat function of the module Time::HiRes (when both the OS and the filesystem support it). That's what I'm using here; if your perl doesn't provide it then you can remove the -MTime::HiRes=stat from the command line.
n=5
find . '(' -name '.' -o -prune ')' -type f -exec \
perl -MTime::HiRes=stat -le '
foreach (@ARGV) {
@st = stat($_);
if ( @st > 0 ) {
s/([\\\n])/sprintf( "\\%03o", ord($1) )/ge;
print sprintf( "%.9f %s", $st[9], $_ );
}
else { print STDERR "stat: $_: $!"; }
}
' {} + |
sort -nrt ' ' -k1,1 |
sed -e "1,${n}d" -e 's/[^ ]* //' |
perl -l -ne '
s/\\([0-7]{3})/chr(oct($1))/ge;
s/(["\n])/"\\$1"/g;
print "\"$_\"";
' |
xargs -E '' sh -c '[ "$#" -gt 0 ] && rm -f "$@"' sh
Explanations:
For each file found, the first perl gets the modification time and outputs it along with the encoded filename (each newline and backslash character is replaced with the literal \012 or \134 respectively).
Now each timestamped filename is guaranteed to be single-line, so POSIX sort and sed can safely work with this stream.
The second perl decodes the filenames and escapes them for POSIX xargs.
Lastly, xargs calls rm to delete the files. The sh -c wrapper is a trick that prevents rm from running when there are no files to delete.
I realize this is an old thread, but maybe someone will benefit from this. This command will find files in the current directory:
for F in $(find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n' | sort -r -z -n | tail -n+5 | awk '{ print $2; }'); do rm $F; done
This is a little more robust than some of the previous answers, as it allows you to limit the search domain to files matching expressions. First, find files matching whatever conditions you want. Print those files with the timestamps next to them.
find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n'
Next, sort them by the timestamps:
sort -r -z -n
Then, knock off the 4 most recent files from the list:
tail -n+5
Grab the 2nd column (the filename, not the timestamp):
awk '{ print $2; }'
And then wrap that whole thing up into a for statement:
for F in $(...); do rm $F; done
This may be a more verbose command, but I had much better luck being able to target conditional files and execute more complex commands against them.
If the filenames don't have spaces, this will work:
ls -C1 -t| awk 'NR>5'|xargs rm
If the filenames do have spaces, something like
ls -C1 -t | awk 'NR>5' | sed -e "s/^/rm '/" -e "s/$/'/" | sh
Basic logic:
get a listing of the files in time order, one column
get all but the first 5 (n=5 for this example)
first version: send those to rm
second version: gen a script that will remove them properly
With zsh
Assuming you don't care about any directories that are present, and that you will not have more than 999 files (choose a bigger number if you want, or create a while loop).
[ 6 -le `ls *(.)|wc -l` ] && rm *(.om[6,999])
In *(.om[6,999]), the . means plain files, the o means sorted, the m means by date of modification, newest first (put a for access time or c for inode change), and [6,999] chooses a range of files, so it doesn't rm the first 5.
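If your zsh accepts a negative end index in glob-qualifier ranges, as it does for array subscripts (an assumption worth testing with echo before trusting it with rm), the 999 cap can be dropped:
[ 6 -le `ls *(.)|wc -l` ] && rm *(.om[6,-1])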
Adaptation of @mklement0's excellent answer with some parameters, and without needing to navigate to the folder containing the files to be deleted...
TARGET_FOLDER="/my/folder/path"
FILES_KEEP=5
ls -tp "$TARGET_FOLDER"**/* | grep -v '/$' | tail -n +$((FILES_KEEP+1)) | xargs -d '\n' -r rm --
[Ref(s).: https://stackoverflow.com/a/3572628/3223785 ]
Thanks! 😉
Found an interesting command in Sed One-Liners ("Delete last 3 lines") and found it perfect for another way to skin the cat (okay, not really), but here's the idea:
#!/bin/bash
# sed cmd chng #2 to value file wish to retain
cd /opt/depot
ls -1 MyMintFiles*.zip > BigList
sed -n -e :a -e '1,2!{P;N;D;};N;ba' BigList > DeList
for i in `cat DeList`
do
echo "Deleted $i"
rm -f $i
#echo "File(s) gonzo "
#read junk
done
exit 0
Removes all but the 10 latest (most recent) files:
ls -t1r | head -n $(echo $(ls -1 | wc -l) - 10 | bc) | xargs rm
If there are 10 or fewer files, no file is removed, and you will get an error like:
head: illegal line count -- 0
I needed an elegant solution for the busybox (router); all xargs or array solutions were useless to me - no such command available there. find and mtime is not the proper answer as we are talking about 10 items and not necessarily 10 days. Espo's answer was the shortest and cleanest and likely the most universal one.
Errors with spaces and when no files are to be deleted are both simply solved the standard way:
rm "$(ls -td *.tar | awk 'NR>7')" 2>&-
Bit more educational version: we can do it all if we use awk differently. Normally, I use this method to pass (return) variables from awk to the sh. As we read all the time that this cannot be done, I beg to differ: here is the method.
Example for .tar files with no problem regarding the spaces in the filename. To test, replace "rm" with the "ls".
eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}')
Explanation:
ls -td *.tar lists all .tar files sorted by the time. To apply to all the files in the current folder, remove the "d *.tar" part
awk 'NR>7... skips the first 7 lines
print "rm \"" $0 "\"" constructs a line: rm "file name"
eval executes it
Since we are using rm, I would not use the above command in a script! Wiser usage is:
(cd /FolderToDeleteWithin && eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}'))
In this case, using the ls -t command will not do any harm on such silly examples as: touch 'foo " bar' and touch 'hello * world'. Not that we ever create files with such names in real life!
Sidenote. If we wanted to pass a variable to the sh this way, we would simply modify the print (simple form, no spaces tolerated):
print "VarName="$1
to set the variable VarName to the value of $1. Multiple variables can be created in one go. This VarName becomes a normal sh variable and can be normally used in a script or shell afterwards. So, to create variables with awk and give them back to the shell:
eval $(ls -td *.tar | awk 'NR>7 { print "VarName=\""$1"\"" }'); echo "$VarName"
leaveCount=5
fileCount=$(ls -1 *.log | wc -l)
tailCount=$((fileCount - leaveCount))
# avoid negative tail argument
[[ $tailCount -lt 0 ]] && tailCount=0
ls -t *.log | tail -$tailCount | xargs rm -f
I made this into a bash shell script. Usage: keep NUM DIR where NUM is the number of files to keep and DIR is the directory to scrub.
#!/bin/bash
# Keep last N files by date.
# Usage: keep NUMBER DIRECTORY
echo ""
if [ $# -lt 2 ]; then
echo "Usage: $0 NUMFILES DIR"
echo "Keep last N newest files."
exit 1
fi
if [ ! -e "$2" ]; then
echo "ERROR: directory '$2' does not exist"
exit 1
fi
if [ ! -d "$2" ]; then
echo "ERROR: '$2' is not a directory"
exit 1
fi
pushd "$2" > /dev/null
ls -tp | grep -v '/' | tail -n +"$(($1 + 1))" | xargs -I {} rm -- {}
popd > /dev/null
echo "Done. Kept $1 most recent files in $2."
ls "$2" | wc -l
Modified version of @Fabien's answer if you want to specify a path. Useful if you're running the script from elsewhere.
ls -tr /path/foo/ | head -n -5 | xargs -I% --no-run-if-empty rm /path/foo/%
