Within a bash script I am trying to construct a string for grep -E so that it appears as
grep -E 'alice|bar|bob|foo'
If I test the grep at the command line -- ls * | grep -E 'alice|bar|bob|foo' -- things work as expected: it matches all the files whose names appear in the extended regular expression.
The issue I've found is that, within a bash script, it will not match the first and last strings if I construct the string as 'alice|bar|bob|foo'.
Broken testcase:
#!/bin/bash
touch foo.txt bar.txt alice.txt bob.txt
touch alice.tmp bob.tmp foo.tmp crump.tmp dammitall.tmp
EXCLUDE_PATTERN=$(echo *.txt | sed 's/\.txt /|/g' | sed 's/\.txt//')
EXCLUDE_PATTERN="'""$EXCLUDE_PATTERN""'"
echo "Excluding files that match the string $EXCLUDE_PATTERN"
for file in *.tmp
do
if echo $file | grep -q -E $EXCLUDE_PATTERN
then
echo "Keeping $file"
else
echo "Deleting $file"
rm -f $file
fi
done
Outputs:
Excluding files that match the string 'alice|bar|bob|foo'
Deleting alice.tmp
Keeping bob.tmp
Deleting crump.tmp
Deleting dammitall.tmp
Deleting foo.tmp
... and yet I don't want it to delete alice.tmp or foo.tmp, because they're in the regex!
I assume the shell is receiving some characters it doesn't get when the string is expanded in this script, but I can't for the life of me figure out how the string passed to grep -E is getting hosed by the "broken" script above.
Variations like EXCLUDE_PATTERN="'$EXCLUDE_PATTERN'" don't seem to help. Haven't found the magic string.
Edit to include useful comment below:
Using set -x indicates that bash does the single-quote wrapping itself, so the incorrect code above does this EXCLUDE_PATTERN=''\''alice|bar|bob|foo'\''' which is just adding single quotes around single quotes.
Why are you adding the single quote marks? Just remove this line:
EXCLUDE_PATTERN="'""$EXCLUDE_PATTERN""'"
I'm getting the following without that line:
Excluding files that match the string alice|bar|bob|foo
Keeping alice.tmp
Keeping bob.tmp
Deleting crump.tmp
Deleting dammitall.tmp
Keeping foo.tmp
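Putting the fix together, a corrected version of the test case might look like this: the quote-wrapping line is removed, and the variable expansions are double-quoted, which lets the variable expand while still passing the pattern to grep as a single argument.

```shell
#!/bin/bash
touch foo.txt bar.txt alice.txt bob.txt
touch alice.tmp bob.tmp foo.tmp crump.tmp dammitall.tmp

# Build the value alice|bar|bob|foo -- no literal quote characters in it
EXCLUDE_PATTERN=$(echo *.txt | sed 's/\.txt /|/g' | sed 's/\.txt//')
echo "Excluding files that match the string $EXCLUDE_PATTERN"

for file in *.tmp
do
    # Double-quote the expansions so grep sees the pattern as one argument
    if echo "$file" | grep -q -E "$EXCLUDE_PATTERN"
    then
        echo "Keeping $file"
    else
        echo "Deleting $file"
        rm -f "$file"
    fi
done
```

With this version, alice.tmp, bob.tmp and foo.tmp are kept while crump.tmp and dammitall.tmp are deleted.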
Related
I'm trying to iterate through all the files in a directory and rename them from the prefix ABC to XYZ using the command below
while read file; do mv \"$file\" \"$(echo $file | sed -e s/ABC/XYZ/g)\" ; done < <(ls -1)
When I throw an echo in front of the mv, everything looks like it should work fine, and copy/pasting the outputted command works fine, but it won't execute correctly within the context of the loop, giving me a usage error as if the command is malformed, like below.
usage: mv [-f | -i | -n] [-v] source target
mv [-f | -i | -n] [-v] source ... directory
Even though the outputted command from the check with echo gives
mv "ABC Test1" "XYZ Test1"
which should be a valid command and works if I copy paste.
Any idea what is going on?
Replace:
while read file; do mv \"$file\" \"$(echo $file | sed -e s/ABC/XYZ/g)\" ; done < <(ls -1)
With:
for file in *
do
mv "$file" "${file//ABC/XYZ}"
done
Notes:
This is very important: Never parse ls. ls is only designed to produce human-friendly output.
To iterate over all files in a directory, use for file in *; do ...; done. This will work reliably for all manner of file names, including names with newlines, blanks, or other difficult characters.
\" produces a literal character, not a syntactic character. Since we want the syntactic meaning of " here, we leave it unescaped.
There are times when one needs sed but this isn't one of them.
The shell is capable of doing simple substitutions without all the issues associated with command substitution. Thus, $(echo $file | sed -e s/ABC/XYZ/g) can be replaced with ${file//ABC/XYZ}.
The form ${var//old/new} is called pattern substitution and is documented in man bash.
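A quick sketch of the difference between the two substitution forms (the value here is made up for illustration):

```shell
file="ABC one ABC two"
echo "${file//ABC/XYZ}"   # double slash: replace every match -> XYZ one XYZ two
echo "${file/ABC/XYZ}"    # single slash: replace first match only -> XYZ one ABC two
```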
Very stupid mistake. There was no need to escape the quotes in the mv command; taking those out makes it work as expected. Escaping the quotes shows the correct output with echo but does not give the intended behavior.
while read file; do mv "$file" "$(echo $file | sed -e s/ABC/XYZ/g)" ; done < <(ls -1)
Information and Problems
I am learning Linux commands now, and was simply practicing the grep command in bash.
I want to match every file whose name begins with the character "a"... quite a simple requirement... From what I understand the regex should be something like a.*, but it doesn't work the way I thought.
Some of the filenames that should match don't.
My Command
I typed commands in a Ubuntu Mate 16.04 VirtualBox terminal.
I have created a directory called test. In the test directory, I have three files,
a.txt
a1.txt
a2.txt
Here the following is my command using grep.
ls -a | grep -E -e a.*
But the output is simply
a.txt
I think .* should mean any number of whatever characters. So a1.txt and a2.txt should match the regex, but they don't.
However, if I try
ls -a | grep -E -e ^a.*
ls -a | grep -E -e a.+
Both of these commands work as I expected; all the filenames match.
a.txt
a1.txt
a2.txt
I cannot figure out what is going wrong.
What I have tried
I have searched through the questions; there exists a question very similar to mine, but that problem is about extended grep versus basic grep, which definitely isn't my situation.
Use more quotes!
With the literal command you ran in your question:
ls -a | grep -E -e a.*
...your shell will replace a.* with a list of filenames in the current directory matching a.* as a glob pattern before grep is started at all. (See also the full bash-hackers page on globbing).
If a.* is placed inside quotes, as in:
ls -a | grep -E 'a.*'
...then this string will no longer be evaluated as a glob. You might also want to anchor the regex with ^, to search only at the beginning:
ls -a | grep -E '^a.*'
That said, ls is not a tool built for programmatic use -- it isn't guaranteed to emit filenames in unmodified literal form, so it's not certain that all possible names will be emitted in such a way that grep or other tools will parse them correctly (indeed, ls can't emit all possible names in literal form, since it uses newline delimiters between names, whereas newline literals are actually possible within names themselves). Consider using find for this kind of processing:
while IFS= read -r -d '' filename; do
printf 'Found file: %q\n' "$filename"
done < <(find . -regex '.*/a[^/]*' -print0)
...will work even with files having intentionally difficult-to-process names; consider, for example, mkdir -p $'\n/etc/passwd\n' && touch $'\n/etc/passwd\n/a.txt'.
You are misunderstanding how the shell is parsing your command. When you do this:
ls -a | grep -E -e a.*
The shell globs the command before it is passed to ls or grep. The result of the glob is this:
ls -a | grep -E -e a.txt
Because in globbing, the dot is a literal character, so a.* only matches a.txt (a1.txt and a2.txt have no dot immediately after the a).
You need to put the regexes in quotes, e.g.
ls -a | grep -E -e 'a.*'
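You can watch the glob expansion happen by passing the same word to echo, which simply prints its arguments (the filenames here are the ones from the question):

```shell
mkdir -p test && cd test
touch a.txt a1.txt a2.txt
echo a.*      # unquoted: the shell expands the glob before echo runs
echo 'a.*'    # quoted: echo receives the literal string
```

The first echo prints a.txt, the only name with a literal dot after the a; the second prints a.* unchanged, which is what grep should receive.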
I have assigned some strings to one variable, for example: ab=jyoti,priya, pranit
I have one file : Name.txt which contains -
jyoti
prathmesh
John
Kelvin
pranit
I want to delete the records from the Name.txt file that are contained in the ab variable.
Please suggest if this can be done.
If ab is a shell variable, you can easily turn it into an extended regular expression, and use it with grep -E:
grep -E -x -v "${ab//,/|}" Name.txt
The string substitution ${ab//,/|} returns the value of $ab with every , substituted with a | which turns it into an extended regular expression, suitable for passing as an argument to grep -E.
The -v option says to remove matching lines.
The -x option specifies that the match needs to cover the whole input line, so that a short substring will not cause an entire longer line to be removed. Without it, ab=prat would cause pratmesh to be removed.
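A worked example with the sample data from the question (assuming no stray space after the comma in ab):

```shell
ab=jyoti,priya,pranit
printf '%s\n' jyoti prathmesh John Kelvin pranit > Name.txt

# ${ab//,/|} expands to jyoti|priya|pranit
grep -E -x -v "${ab//,/|}" Name.txt
# prints:
# prathmesh
# John
# Kelvin
```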
If you really require a sed solution, the transformation should be fairly trivial. grep -E -v -x 'aaa|bbb|ccc' is equivalent to sed '/^\(aaa\|bbb\|ccc\)$/d' (with some dialects disliking the backslashes, and others requiring them).
To do an in-place edit (modify Name.txt without a temporary file), try this:
sed -i "/^\(${ab//,/\|}\)\$/d" Name.txt
This is not entirely robust against strings containing whitespace or other shell metacharacters, but for simple comma-separated names like these it should be fine.
Try with
sed -e 's/\bjyoti\b//g;s/\bpriya\b//g' < Name.txt
(using \b assuming you need word boundaries)
This will do it:
for param in $(echo "$ab" | sed -e 's/ *//g' -e 's/,/ /g'); do res=$(sed -e "s/$param//g" < Name.txt); echo "$res" > Name.txt; done
echo "$res"
I am using this command
cat text.csv | while read a ; do grep $a text1.csv >> text2.csv; done
text.csv has file names with full path. The file names are having spaces.
Example: C:\Users\Downloads\File Name.txt
text1.csv contains logs showing user id and the file name with full path.
Example: MyName,C:\Users\Downloads\File Name.txt
When I run the command, I get an error:
grep: Name: No such file or directory
I know the error is because of the spaces in the file names. I would like to know how I can get rid of this error.
Use your grep pattern with double quotes, otherwise the shell will treat it as separate arguments to grep:
while read a ; do grep "$a" text1.csv >> text2.csv; done < text.csv
There is no need for the extra cat, hence I changed it in my answer.
Quote the variable:
cat text.csv | while read a ; do grep "$a" text1.csv >> text2.csv; done
In general, you should usually quote variables, unless you specifically want the value to undergo word splitting and wildcard expansion.
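A minimal illustration of the word splitting: printf prints each of its arguments on its own line here, so it shows exactly how many arguments grep would receive (the path is the one from the question).

```shell
a="C:\Users\Downloads\File Name.txt"
printf '<%s>\n' $a     # unquoted: split at the space into two arguments
printf '<%s>\n' "$a"   # quoted: a single argument
```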
The following is a simple Bash command line:
grep -li 'regex' "filename with spaces" "filename"
No problems. Also the following works just fine:
grep -li 'regex' $(<listOfFiles.txt)
where listOfFiles.txt contains a list of filenames to be grepped, one
filename per line.
The problem occurs when listOfFiles.txt contains filenames with
embedded spaces. In all cases I've tried (see below), Bash splits the
filenames at the spaces so, for example, a line in listOfFiles.txt
containing a name like ./this is a file.xml ends up trying to run
grep on each piece (./this, is, a and file.xml).
I thought I was a relatively advanced Bash user, but I cannot find a
simple magic incantation to get this to work. Here are the things I've
tried.
grep -li 'regex' `cat listOfFiles.txt`
Fails as described above (I didn't really expect this to work), so I
thought I'd put quotes around each filename:
grep -li 'regex' `sed -e 's/.*/"&"/' listOfFiles.txt`
Bash interprets the quotes as part of the filename and gives "No such
file or directory" for each file (and still splits the filenames with
blanks)
for i in $(<listOfFiles.txt); do grep -li 'regex' "$i"; done
This fails as for the original attempt (that is, it behaves as if the
quotes are ignored) and is very slow since it has to launch one 'grep'
process per file instead of processing all files in one invocation.
The following works, but requires some careful double-escaping if
the regular expression contains shell metacharacters:
eval grep -li 'regex' `sed -e 's/.*/"&"/' listOfFiles.txt`
Is this the only way to construct the command line so it will
correctly handle filenames with spaces?
Try this:
(IFS=$'\n'; grep -li 'regex' $(<listOfFiles.txt))
IFS is the Internal Field Separator. Setting it to $'\n' tells Bash to use the newline character to delimit filenames. Its default value is $' \t\n' and can be printed using cat -etv <<<"$IFS".
Enclosing the script in parentheses starts a subshell, so that only commands within the parentheses are affected by the custom IFS value.
cat listOfFiles.txt |tr '\n' '\0' |xargs -0 grep -li 'regex'
The -0 option on xargs tells xargs to use a null character rather than white space as a filename terminator. The tr command converts the incoming newlines to a null character.
This meets the OP's requirement that grep not be invoked multiple times. It has been my experience that for a large number of files avoiding the multiple invocations of grep improves performance considerably.
This scheme also avoids a bug in the OP's original method: his scheme would break when listOfFiles.txt contains enough files to exceed the maximum command-line length. xargs knows about the maximum command size and will invoke grep multiple times to avoid that problem.
A related problem with using xargs and grep is that grep prefixes its output with the filename when invoked with multiple files. Because xargs invokes grep with multiple files, the output will usually carry that prefix -- but not when listOfFiles.txt contains just one file, or when the last invocation happens to receive only one filename. To achieve consistent output, add /dev/null to the grep command:
cat listOfFiles.txt |tr '\n' '\0' |xargs -0 grep -i 'regex' /dev/null
Note that this was not an issue for the OP because he was using the -l option on grep; however, it is likely to be an issue for others.
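To see the /dev/null trick in action (the filename here is hypothetical): grep prints filename prefixes whenever it is given more than one file to search, and /dev/null can never contain a match, so adding it merely forces the multi-file behavior.

```shell
printf 'match me\n' > only.txt
grep 'match' only.txt             # one file: prints just the line
grep 'match' /dev/null only.txt   # two files: prints only.txt:match me
```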
This works:
while read file; do grep -li dtw "$file"; done < listOfFiles.txt
With Bash 4, you can also use the mapfile builtin to read each line into an array, then iterate over that array:
$ tree
.
├── a
│ ├── a 1
│ └── a 2
├── b
│ ├── b 1
│ └── b 2
└── c
├── c 1
└── c 2
3 directories, 6 files
$ mapfile -t files < <(find -type f)
$ for file in "${files[@]}"; do
> echo "file: $file"
> done
file: ./a/a 2
file: ./a/a 1
file: ./b/b 2
file: ./b/b 1
file: ./c/c 2
file: ./c/c 1
Though it may overmatch, this is my favorite solution:
grep -i 'regex' $(cat listOfFiles.txt | sed -e "s/ /?/g")
Do note that if you somehow ended up with a list file that has Windows line endings (\r\n), none of the notes above about the input field separator $IFS (and quoting the argument) will work; so make sure the line endings are \n. (I use SciTE to show the line endings, and easily change them from one to the other.)
Also, cat piped into while read ... seems to work (apparently without needing to set separators):
cat <(echo -e "AA AA\nBB BB") | while read file; do echo "$file"; done
... although for me it was more relevant for grepping through a directory with spaces in filenames:
grep -rlI 'search' "My Dir"/ | while read file; do echo "$file"; grep 'search\|else' "$file"; done