Here's a snippet:
var=`ls | shuf | head -2 | xargs cat | sed -e 's/\(.\)/\1\n/g' | shuf | tr -d '\n'`
This selects two random files from the current directory, concatenates their contents, shuffles the characters, and assigns the result to var. This works fine most of the time, but in about one case in a thousand, the output of ls is bound to var instead (and it's not just that output; see EDIT II). What could be the explanation?
Some more potentially relevant facts:
the directory contains at least two files
there are only text files in the directory
file names don't contain spaces
the files are anywhere from 5 to about 1000 characters in length
the snippet is part of a larger script that is run in two instances in parallel
bash version: GNU bash, version 4.1.5(1)-release (i686-pc-linux-gnu)
uname: Linux 2.6.35-28-generic-pae #50-Ubuntu
EDIT: I ran the snippet by itself a couple of thousand times with no errors. Then I tried running it with various other parts of the whole script. Here's a configuration that produces errors:
cd dir_with_text_files
var=`ls | shuf | head -2 | xargs cat | sed -e 's/\(.\)/\1\n/g' | shuf | tr -d '\n'`
cd ..
There are several hundred lines of the script between the cds, but this is the minimal configuration that reproduces the error. Note that in the anomalous case, var receives the listing of the current (outer) directory, not of dir_with_text_files.
EDIT II: I've been looking at the outputs in more detail. The ls output doesn't appear alone; it appears along with the two shuffled files (between their contents, or before or after them, intact). But it gets better; let me set the stage to talk about particular directories.
[~/projects/upload] ls -1
checked // dir
lines // dir, the files to shuffle are here
pages // also dir
proxycheck
singlepost
uploader
indexrefresh
t
tester
So far I had seen the output of ls run from upload, but now I saw the output of ls */* (also run from upload). It was of the form "someMangledText ls moreMangledText ls */* finalBatchOfText". Is it possible that the character sequence ls, which undoubtedly was generated by the shuffle, was somehow executed?
No problems here either.
I would also rewrite the above to this:
sed 's:\(.\):\1\n:g' < <(shuf -e * | head -2 | xargs cat) | shuf | tr -d '\n'
Do not use ls to list a directory's contents; use * instead.
Moreover, do some debugging. Use a shebang followed by:
set -e
set -o pipefail
and run the script like this:
/bin/bash -x /path/to/script
and do inspect the output.
Instead of debugging the whole script, you can surround just the part that seems problematic with set -x and set +x:
set -x
...code that may have problems...
set +x
so that the output focuses on that part of the code.
Also, use the pipefail option.
Some definitions:
-e : Exit immediately if a simple command exits with a non-zero status, unless the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of a && or || list, or if the command's return status is being inverted using !. A trap on ERR, if set, is executed before the shell exits
-x : Print a trace of simple commands, for commands, case commands, select commands, and arithmetic for commands and their arguments or associated word lists after they are expanded and before they are executed. The value of the PS4 variable is expanded and the resultant value is printed before the command and its expanded arguments
pipefail : If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully
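For example, here is a minimal sketch of pipefail's effect that you can paste into a shell:
false | true
echo $?             # 0: without pipefail, only the last command's status counts
set -o pipefail
false | true
echo $?             # 1: with pipefail, the failing false determines the status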
For debugging purposes you may also clear the environment using env -i and filter out non-printable characters:
#!/usr/bin/env -i /bin/bash --
set -ef
set -o pipefail
unset IFS PATH LC_ALL
IFS=$' \t\n'
PATH="$(PATH=/bin:/usr/bin getconf PATH)"
LC_ALL=C
export IFS PATH LC_ALL
#var="$((find . -type f -maxdepth 1 -print0 | shuf -z -n 2 | xargs -0 cat) | sed -e 's/\(.\)/\1\n/g' | shuf | tr -d '\n')"
var="$((find . -type f -maxdepth 1 -print0 | shuf -z -n 2 | xargs -0 cat) | tr -cd '[[:print:]]' | grep -o '.' | shuf | tr -d '\n')"
Before running the script you may also disable the GNU readline library and ! style history expansion:
bash --noediting
set +H
Based on what you say with regard to your failure rates, and given the success of the other tests performed by the posters above, this sounds like a problem that could be caused by an occasional directory-change failure. Is the directory you're accessing in this script mounted from a remote machine, by any chance? If so, it might just be a small, temporary network-related failure that isn't being handled properly. (Just a guess.)
Related
What is the correct way to xargs to feed a list of strings as inputs to the middle of a command?
For example, say I want to move all files that come through a complicated series of "pipes" to the home directory. Something like
$ ... | ... | ... | awk '{print $2}' | xargs -L1 mv ~/
This tries to move the home directory to each input, rather than each input to the home directory.
Someone previously asked a question about this, but the answers were not helpful:
Unix - "xargs" - output "in the middle" (not at the end!)
Is there a way to place the xargs input into a specific part of the command, and not just at the end?
Using GNU Parallel it looks like this:
$ ... | ... | ... | awk '{print $2}' | parallel mv {} ~/
It will run one job per file. Faster is:
$ ... | ... | ... | awk '{print $2}' | parallel -X mv {} ~/
It will insert multiple names before running the job.
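To preview the difference, you can put echo in front of mv (a sketch; a, b, and c are hypothetical file names, /home/user stands in for your home directory, and output order may vary):
$ printf '%s\n' a b c | parallel echo mv {} ~/
mv a /home/user/
mv b /home/user/
mv c /home/user/
$ printf '%s\n' a b c | parallel -X echo mv {} ~/
mv a b c /home/user/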
You didn't mention which xargs you are using.
If you use BSD xargs instead of GNU xargs, there is a powerful option, -J, that can do what you want:
-J replstr
If this option is specified, xargs will use the data read from standard input to replace the first occurrence of replstr instead of appending that data after all other arguments. This option will not affect how many arguments will be read from input (-n), or the size of the command(s) xargs will generate (-s). The option just moves where those arguments will be placed in the command(s) that are executed. The replstr must show up as a distinct argument to xargs. It will not be recognized if, for instance, it is in the middle of a quoted string. Furthermore, only the first occurrence of the replstr will be replaced. For example, the following command will copy the list of files and directories which start with an uppercase letter in the current directory to destdir:
/bin/ls -1d [A-Z]* | xargs -J % cp -Rp % destdir
Examples
$ seq 3 | xargs -J '#' echo '#' fourth fifth
1 2 3 fourth fifth
$ seq 3 | xargs -J '#' ruby -e 'pp ARGV' '#' fourth fifth
["1", "2", "3", "fourth", "fifth"]
(The replstr # is quoted here so that the shell does not treat it as the start of a comment.)
The Ruby code pp ARGV pretty-prints the command-line arguments it receives. I usually use it this way to debug xargs invocations.
I have to run many Python scripts that differ in just one parameter. I name them runv1.py, runv2.py, ..., runv20.py. I have the original script, say runv1.py. Then I make all the copies I need with
cat runv1.py | tee runv{2..20..1}.py
So I have runv1.py, ..., runv20.py. But the parameter is still v=1 in all of them.
Q: How can I also replace the v parameter so that it is read from the file name? E.g., in runv4.py, v=4. I would like to know if there is a one-line shell command or combination of commands. Thank you!
PS: Editing each file directly is not a proper solution when there are too many files.
The for loop below should serve your purpose, I think:
for i in `ls | grep "runv[0-9][0-9]*.py"`
do
l=`echo "$i" | tr -d '[a-z.]'`
sed -i "s/v=1/v=$l/" "runv$l.py"
done
The command below passes the parameter, extracted from the file name itself, to each script:
ls | grep "runv[0-9][0-9]*.py" | tr -d '[a-z.]' | awk '{print "./runv"$0".py "$0}' | xargs -L1 sh
At the end, instead of sh you can use python, bash, or ksh.
How can I perform an operation for each item listed by grep individually?
Background:
I use grep to list all files containing a certain pattern:
grep -l '<pattern>' directory/*.extension1
I want to delete all listed files but also all files having the same file name but a different extension: .extension2.
I tried using the pipe, but it seems to take the output of grep as a whole.
In find there is the -exec option, but grep has nothing like that.
If I understand your specification, you want:
grep --null -l '<pattern>' directory/*.extension1 | \
xargs -n 1 -0 -I{} bash -c 'rm "$1" "${1%.*}.extension2"' -- {}
This is essentially the same as what @triplee's comment describes, except that it's newline-safe.
What's going on here?
grep with --null will return output delimited with nulls instead of newlines. Since file names can have newlines in them, delimiting with newlines makes it impossible to parse the output of grep safely; null, however, is not a valid character in a file name and thus makes a nice delimiter.
xargs will take a stream of whitespace-delimited items and execute a given command, passing as many of those items as possible (each as a separate argument) to that command (or to echo if no command is given). Thus if you said:
printf 'one\ntwo three \nfour\n' | xargs echo
xargs would execute echo one two three four; note that the space inside two three was itself treated as a delimiter. This is not safe for file names, which may contain embedded spaces and newlines.
The -0 switch to xargs changes it from splitting on whitespace to splitting on a null delimiter. This makes it match the output we got from grep --null and makes it safe for processing a list of file names.
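A null-delimited analogue of the earlier experiment (a minimal sketch):
printf 'one\0two three\0four\0' | xargs -0 echo
Here xargs executes echo one 'two three' four: the embedded space survives because only the null byte separates items.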
Normally xargs simply appends the input to the end of a command. The -I switch to xargs changes this to substituting the input for the specified replacement string. To get the idea, try this experiment:
printf 'one\ntwo three \nfour\n' | xargs -I{} echo foo {} bar
And note the difference from the earlier printf | xargs command.
In the case of my solution the command I execute is bash, to which I pass -c. The -c switch causes bash to execute the commands in the following argument (and then terminate) instead of starting an interactive shell. The next block, 'rm "$1" "${1%.*}.extension2"', is the first argument to -c and is the script which will be executed by bash. Any arguments following the script argument to -c are assigned as the positional parameters of the script. Thus, if I were to say:
bash -c 'echo $0' "Hello, world"
Then Hello, world would be assigned to $0 (the first argument to the script) and inside the script I could echo it back.
Since $0 is normally reserved for the script name I pass a dummy value (in this case --) as the first argument and, then, in place of the second argument I write {}, which is the replacement string I specified for xargs. This will be replaced by xargs with each file name parsed from grep's output before bash is executed.
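Putting the pieces together, a single substituted invocation looks like this (a sketch; the file name is hypothetical, and rm is replaced by echo rm so it only prints):
bash -c 'echo rm "$1" "${1%.*}.extension2"' -- 'directory/some file.extension1'
The -- lands in $0 and the file name in $1, exactly as xargs arranges it for each item.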
The mini shell script might look complicated but it's rather trivial. First, the entire script is single-quoted to prevent the calling shell from interpreting it. Inside the script I invoke rm and pass it two file names to remove: the $1 argument, which was the file name passed in when the replacement string was substituted above, and ${1%.*}.extension2. The latter is a parameter expansion on the $1 variable. The important part is %.*, which says:
% "Match from the end of the variable and remove the shortest string matching the pattern.
.* The pattern is a single period followed by anything.
This effectively strips the extension, if any, from the file name. You can observe the effect yourself:
foo='my file.txt'
bar='this.is.a.file.txt'
baz='no extension'
printf '%s\n'"${foo%.*}" "${bar%.*}" "${baz%.*}"
Since the extension has been stripped I concatenate the desired alternate extension .extension2 to the stripped file name to obtain the alternate file name.
If the following prints the commands you want to run, pipe its output through /bin/sh:
grep -l 'RE' folder/*.ext1 | sed 's/\(.*\).ext1/rm "&" "\1.ext2"/'
Or if sed makes you itchy:
grep -l 'RE' folder/*.ext1 | while read file; do
echo rm "$file" "${file%.ext1}.ext2"
done
Remove echo if the output looks like the commands you want to run.
But you can do this with find as well:
find /path/to/start -name \*.ext1 -exec grep -q 'RE' {} \; -print | ...
where ... is either the sed script or the three lines from while to done.
The idea here is that find will ... well, "find" things based on the qualifiers you give it -- namely, that they match the file glob "*.ext1", AND that the result of the exec is successful. The -q tells grep to look for RE in {} (the file supplied by find) and exit TRUE or FALSE without generating any output of its own.
The only real difference between doing this in find vs doing it with grep is that you get to use find's awesome collection of conditions to narrow down your search further if required. man find for details. By default, find will recurse into subdirectories.
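If you want find to mimic the original glob's non-recursive behavior, GNU find (at least) accepts -maxdepth:
find /path/to/start -maxdepth 1 -name \*.ext1 -exec grep -q 'RE' {} \; -print | ...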
You can pipe the list to xargs:
grep -l '<pattern>' directory/*.extension1 | xargs rm
As for the second set of files with a different extension, I'd do this (as usual use xargs echo rm when testing to make a dry run; I haven't tested it, it may not work correctly with filenames with spaces in them):
filelist=$(grep -l '<pattern>' directory/*.extension1)
echo $filelist | xargs rm
echo ${filelist//.extension1/.extension2} | xargs rm
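The ${filelist//.extension1/.extension2} part is bash's global pattern substitution on the variable's value; a minimal sketch:
filelist='a.extension1 b.extension1'
echo "${filelist//.extension1/.extension2}"    # prints: a.extension2 b.extension2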
Pipe the result to xargs; it will let you run a command for each match.
The following is a simple Bash command line:
grep -li 'regex' "filename with spaces" "filename"
No problems. Also the following works just fine:
grep -li 'regex' $(<listOfFiles.txt)
where listOfFiles.txt contains a list of filenames to be grepped, one
filename per line.
The problem occurs when listOfFiles.txt contains filenames with
embedded spaces. In all cases I've tried (see below), Bash splits the
filenames at the spaces so, for example, a line in listOfFiles.txt
containing a name like ./this is a file.xml ends up trying to run
grep on each piece (./this, is, a and file.xml).
I thought I was a relatively advanced Bash user, but I cannot find a
simple magic incantation to get this to work. Here are the things I've
tried.
grep -li 'regex' `cat listOfFiles.txt`
Fails as described above (I didn't really expect this to work), so I
thought I'd put quotes around each filename:
grep -li 'regex' `sed -e 's/.*/"&"/' listOfFiles.txt`
Bash interprets the quotes as part of the filename and gives "No such
file or directory" for each file (and still splits the filenames with
blanks)
for i in $(<listOfFiles.txt); do grep -li 'regex' "$i"; done
This fails as for the original attempt (that is, it behaves as if the
quotes are ignored) and is very slow since it has to launch one 'grep'
process per file instead of processing all files in one invocation.
The following works, but requires some careful double-escaping if
the regular expression contains shell metacharacters:
eval grep -li 'regex' `sed -e 's/.*/"&"/' listOfFiles.txt`
Is this the only way to construct the command line so it will
correctly handle filenames with spaces?
Try this:
(IFS=$'\n'; grep -li 'regex' $(<listOfFiles.txt))
IFS is the Internal Field Separator. Setting it to $'\n' tells Bash to use the newline character to delimit filenames. Its default value is $' \t\n' and can be printed using cat -etv <<<"$IFS".
Enclosing the command in parentheses starts a subshell, so that only the commands within the parentheses are affected by the custom IFS value.
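You can observe the effect of IFS on word splitting directly (a minimal sketch using the question's hypothetical file name):
(set -- $(printf './this is a file.xml\n./other.xml\n'); echo $#)             # 5 words
(IFS=$'\n'; set -- $(printf './this is a file.xml\n./other.xml\n'); echo $#)  # 2 words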
cat listOfFiles.txt |tr '\n' '\0' |xargs -0 grep -li 'regex'
The -0 option on xargs tells xargs to use a null character rather than white space as a filename terminator. The tr command converts the incoming newlines to a null character.
This meets the OP's requirement that grep not be invoked multiple times. It has been my experience that for a large number of files avoiding the multiple invocations of grep improves performance considerably.
This scheme also avoids a bug in the OP's original method because his scheme will break where listOfFiles.txt contains a number of files that would exceed the buffer size for the commands. xargs knows about the maximum command size and will invoke grep multiple times to avoid that problem.
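You can watch xargs split a long input into several invocations (a sketch; the exact count depends on your system's command-line limits):
seq 200000 | xargs echo | wc -l    # prints a number greater than 1: echo ran several times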
A related problem with using xargs and grep is that grep prefixes its output with the filename when invoked with multiple files. Because xargs invokes grep with multiple files, the output will carry the filename prefix; but not when listOfFiles.txt contains just one file, or when the last of several invocations happens to receive only one filename. To achieve consistent output, add /dev/null to the grep command:
cat listOfFiles.txt |tr '\n' '\0' |xargs -0 grep -i 'regex' /dev/null
Note that this was not an issue for the OP, because he was using the -l option on grep; however, it is likely to be an issue for others.
This works:
while read file; do grep -li dtw "$file"; done < listOfFiles.txt
With Bash 4, you can also use the builtin mapfile function to set an array containing each line and iterate on this array:
$ tree
.
├── a
│ ├── a 1
│ └── a 2
├── b
│ ├── b 1
│ └── b 2
└── c
├── c 1
└── c 2
3 directories, 6 files
$ mapfile -t files < <(find -type f)
$ for file in "${files[#]}"; do
> echo "file: $file"
> done
file: ./a/a 2
file: ./a/a 1
file: ./b/b 2
file: ./b/b 1
file: ./c/c 2
file: ./c/c 1
Though it may overmatch (each space becomes a ?, which matches any single character), this is my favorite solution:
grep -i 'regex' $(cat listOfFiles.txt | sed -e "s/ /?/g")
Do note that if you somehow ended up with a list file that has Windows line endings (\r\n), none of the notes above about the input field separator $IFS (and quoting the argument) will work; so make sure the line endings are \n. (I use SciTE to show the line endings and to easily change them from one to the other.)
Also, cat piped into while read ... seems to work (apparently without the need to set separators):
cat <(echo -e "AA AA\nBB BB") | while read file; do echo $file; done
... although for me it was more relevant for a "grep" through a directory with spaces in filenames:
grep -rlI 'search' "My Dir"/ | while read file; do echo "$file"; grep 'search\|else' "$file"; done
I'm well aware of the source (aka .) utility, which will take the contents from a file and execute them within the current shell.
Now, I'm transforming some text into shell commands, and then running them, as follows:
$ ls | sed ... | sh
ls is just a random example; the original text can be anything. sed, too, is just an example of transforming text. The interesting bit is sh: I pipe whatever I got to sh, and it runs it.
My problem is that this means starting a new subshell. I'd rather have the commands run within my current shell, as I could do with source some-file if I had the commands in a text file.
I don't want to create a temp file because it feels dirty.
Alternatively, I'd like to start my subshell with the exact same characteristics as my current shell.
update
OK, the solutions using backticks certainly work, but I often need to do this while checking and changing the output, so I'd much prefer a way to pipe the result into something at the end.
sad update
Ah, the /dev/stdin thing looked so pretty, but, in a more complex case, it didn't work.
So, I have this:
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/git mv -f $1 $2.doc/i' | source /dev/stdin
Which ensures all .doc files have their extension lowercased.
And which, incidentally, can be handled with xargs, but that's beside the point.
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/$1 $2.doc/i' | xargs -L1 git mv
So, when I run the former, it'll exit right away, nothing happens.
The eval command exists for this very purpose.
eval "$( ls | sed... )"
More from the bash manual:
eval
eval [arguments]
The arguments are concatenated together into a single command, which is then read and executed, and its exit status is returned as the exit status of eval. If there are no arguments or only empty arguments, the return status is zero.
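Because eval runs in the current shell, anything the generated commands change persists, which is the property source provides for files; a minimal sketch:
eval "$(echo 'greeting=hello')"
echo "$greeting"    # hello: the assignment happened in this shell, not in a subshell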
$ ls | sed ... | source /dev/stdin
UPDATE: This works in bash 4.0, as well as tcsh, and dash (if you change source to .). Apparently this was buggy in bash 3.2. From the bash 4.0 release notes:
Fixed a bug that caused `.' to fail to read and execute commands from non-regular files such as devices or named pipes.
Try using process substitution, which replaces output of a command with a temporary file which can then be sourced:
source <(echo id)
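Since source still executes in the current shell, this keeps exactly the property the question asks for; a minimal sketch:
source <(echo 'counter=42')
echo "$counter"    # 42: the assignment persists in the current shell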
Wow, I know this is an old question, but I've found myself with the same exact problem recently (that's how I got here).
Anyway - I don't like the source /dev/stdin answer, but I think I found a better one. It's deceptively simple actually:
echo ls -la | xargs xargs
Nice, right? Actually, this still doesn't do what you want, because if you have multiple lines it will concatenate them into a single command instead of running each command separately. So the solution I found is:
ls | ... | xargs -L 1 xargs
the -L 1 option means you use (at most) 1 line per command execution. Note: if your line ends with a trailing space, it will be concatenated with the next line! So make sure each line ends with a non-space.
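You can verify the trailing-space gotcha directly (a minimal sketch):
printf 'one \ntwo\n' | xargs -L 1    # prints: one two   (the trailing space glued the lines)
printf 'one\ntwo\n' | xargs -L 1     # prints: one, then two, on separate lines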
Finally, you can do
ls | ... | xargs -L 1 xargs -t
to see what commands are executed (-t is verbose).
Hope someone reads this!
`ls | sed ...`
I sort of feel like ls | sed ... | source - would be prettier, but unfortunately source doesn't understand - to mean stdin.
I believe this is "the right answer" to the question:
ls | sed ... | while read line; do $line; done
That is, one can pipe into a while loop; the read command takes one line from its stdin and assigns it to the variable $line. $line then becomes the command executed within the loop, and this continues until there are no further lines in the input.
This still won't work with some control structures (like another loop), but it fits the bill in this case.
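For instance, syntax such as pipes inside a line is not re-parsed, because $line only undergoes word splitting (a minimal sketch):
printf 'echo hello\nuname\n' | while read line; do $line; done   # simple commands work
printf 'date | wc -l\n' | while read line; do $line; done        # fails: date receives '|', 'wc' and '-l' as literal arguments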
To use mark4o's solution on bash 3.2 (macOS), a here string can be used instead of a pipeline, as in this example:
. /dev/stdin <<< "$(grep '^alias' ~/.profile)"
I think what you want is command substitution with backticks: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
See section 3.4.5.
Why not use source then?
$ ls | sed ... > out.sh ; source out.sh