Grep to Print all file content [duplicate] - bash

How can I make grep print the entire file when the file contains a match for the pattern, instead of printing just the matching line?
I tried using grep -C2 to print two lines of context above and below each match, but that doesn't always work, since the number of lines I need is not fixed.
I am not searching just a single file; I am searching an entire directory in which some files may contain the given pattern, and I want those files printed completely.
I am also running a second grep on the result of the first grep, without printing the first grep's output.

Simple grep + cat combination:
grep 'pattern' file && cat file
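To cover a whole directory with the same idea, here is a minimal loop-based sketch (the directory name is a placeholder; the -f test skips subdirectories, and quoting keeps names with spaces intact):
for f in directory/*; do
    [ -f "$f" ] && grep -q 'pattern' "$f" && cat "$f"
done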

Use grep's -l option to list the paths of files with matching contents, then print the contents of these files using cat.
grep -lR 'regex' 'directory' | xargs -d '\n' cat
The command above cannot handle filenames with newlines in them.
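If you have GNU grep, its -Z (--null) option makes -l terminate each filename with a NUL byte instead of a newline, which xargs -0 can consume safely, newlines and all:
grep -lRZ 'regex' 'directory' | xargs -0 cat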
Another way around the newline issue, one that also allows more sophisticated checks, is the find command.
The following command prints the content of all regular files in directory.
find 'directory' -type f -exec cat {} +
To print only the content of files whose content matches the regexes regex1 and regex2, use
find 'directory' -type f \
-exec grep -q 'regex1' {} \; -and \
-exec grep -q 'regex2' {} \; \
-exec cat {} +
The line breaks are only for readability; without the backslashes you can write everything on one line.
Note the -q option for grep: it suppresses grep's output, and grep's exit status alone tells find whether the file matched.
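As a side note, if you only want the names of the matching files rather than their contents, swap the final -exec cat for -print:
find 'directory' -type f \
-exec grep -q 'regex1' {} \; \
-exec grep -q 'regex2' {} \; \
-print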

Related

How to print the file names from which I grep some lines

I'm trying to get some lines from several json files using the following code:
cat $(find ./*/*/folderA/*DTI*.json) | grep -i -E '(phaseencodingdirection|phaseencodingaxis)' > phase_direction
It worked! The problem is that I don't know which line comes from which file.
With find ./*/*/preprocessing/*DTI*.json -type f -printf "%f\n" I can print those names, but they appear at the end, not paired with their respective phaseencodingdirection|phaseencodingaxis lines.
I don't know how to combine those commands so that each extracted line is printed together with the name of the file it came from.
Could you help me?
the problem is that I don't know which line comes from which file
Well no, you don't, because you have concatenated the contents of all the files into a single stream. If you want to be able to identify at the point of pattern matching which file each line comes from then you have to give that information to grep in the first place. Like this, for example:
find ./*/*/folderA/*DTI*.json |
xargs grep -i -E -H '(phaseencodingdirection|phaseencodingaxis)' > phase_direction
The xargs program converts lines read from its standard input into arguments to the specified command (grep in this case). The -H option to grep causes it to list the filename of each match along with the matching line itself.
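For illustration, with -H each output line is prefixed with the file it came from, so the result looks something like this (hypothetical paths and values):
./subj01/ses1/folderA/scan_DTI.json:    "PhaseEncodingDirection": "j-",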
Alternatively, this variation on the same thing is a little simpler, and closer in some senses to the original:
grep -i -E -H '(phaseencodingdirection|phaseencodingaxis)' \
$(find ./*/*/folderA/*DTI*.json) > phase_direction
That takes xargs out of the picture, and moves the command substitution directly to the argument list of grep.
But now observe that if the pattern ./*/*/folderA/*DTI*.json does not match any directories then find isn't actually doing anything useful for you. There is then no directory recursion to be done, and you haven't specified any tests, so the command substitution will simply expand to all the paths that match the pattern, just like the pattern would do if expanded without find. Thus, this is probably best of all:
grep -i -E -H '(phaseencodingdirection|phaseencodingaxis)' \
./*/*/folderA/*DTI*.json > phase_direction
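One caveat when relying on the glob alone: if it matches nothing, bash passes the pattern through literally and grep complains about a nonexistent file. If that matters, you can make an unmatched glob a hard error first (a bash-specific option):
shopt -s failglob
grep -i -E -H '(phaseencodingdirection|phaseencodingaxis)' \
./*/*/folderA/*DTI*.json > phase_direction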
Use the filenames as arguments to grep rather than cat.
grep -i -H -E '(phaseencodingdirection|phaseencodingaxis)' $(find ./*/*/folderA/*DTI*.json) > phase_direction
The -H option forces grep to include filenames in the output even if there's only one file.
But since your arguments to find are filenames, not directories to search recursively, there's no need to use it at all. Just pass the wildcard directly to grep. There's also no need to begin with ./. Any non-absolute pathname is interpreted relative to the current directory.
grep -i -H -E '(phaseencodingdirection|phaseencodingaxis)' */*/folderA/*DTI*.json > phase_direction
You may use recursive grep:
grep -iER 'phaseencodingdirection|phaseencodingaxis' --include='*DTI*.json' */*/folderA

Recursively find and open files

I want to search through all subdirectories and files to find files with a specific extension. When I find a file with the extension, I need to open it, find a specific string from within the file and store it within a txt file.
This is what I have so far for finding all of the correct files:
find . -name ".ext" ! -path './ExcludeThis*'
This is what I have for opening the file and getting the part of the file I want and storing it:
LINE=$(head .ext | grep search_string)
SUBSTR=$(echo $LINE | cut -f2 -d '"')
echo $SUBSTR >> results.txt
I am struggling with how to combine the two. I have looked at for f in **/* with an if statement inside to check the .ext extension, removing the need for find altogether, but **/* seems to work on directories only and not files.
A breakdown of any solutions would be very much appreciated too; I am new to shell scripting. Thanks.
find -name "*.ext" \! -path './ExcludeThis*' -exec head -q '{}' \+ |
grep search_string | cut -f2 -d'"' >> results.txt
find explanation
find -name "*.ext" \! -path './ExcludeThis*' -exec head -q '{}' \+
For each matched file name, this executes head (with \+, the command line is built by appending each selected file name at the end, so the total number of invocations of the command is much smaller than the number of matched files).
Notice I replaced .ext with *.ext (the former matches only a file named exactly .ext), and ! with \! (to protect it from interpretation by the shell).
The head option -q is necessary because head prints a header for each file when given multiple files (which happens here because of \+).
In addition, if no path is given, find uses the current directory (.) by default, i.e. find . -name ... is equivalent to find -name ....
pipeline explanation
<find ... -exec head> | grep search_string | cut -f2 -d'"' >> results.txt
As head writes the lines (10 per file by default) into the pipe, grep reads them.
When grep matches search_string in one of them, it writes that line into the next pipe.
There, cut takes the second field (delimited by ") of every line and appends it to results.txt.
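If you also want to know which file each result came from, a single awk invocation can reproduce the head/grep/cut pipeline while exposing the filename; a sketch assuming the same search_string and quote-delimited fields:
find -name "*.ext" \! -path './ExcludeThis*' \
-exec awk -F'"' 'FNR<=10 && /search_string/ {print FILENAME ": " $2}' {} \+ >> results.txt
Here FNR<=10 mimics head, the pattern match mimics grep, and $2 with -F'"' mimics cut.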

In bash, how to batch show the text of certain line in files?

I want to batch-print the text of a certain line of files in a certain directory, which can usually be done with the following commands:
for file in `find ./ -name "results.txt"`;
do
sed -n '12p' < ${file};
done
The 12th line of each file named "results.txt" contains the text I want to output.
But I wonder if we can use a pipeline to do this operation. I have tried the following commands:
find ./ -name "results.txt" | xargs sed -n '12p'
or
find ./ -name "results.txt" | xargs sed -n '12p' < {} \;
But neither works.
Could you give some advice or recommend some references, please?
All are welcome. Thanks in advance!
This should do it
find ./ -name results.txt -exec sed '12!d' {} ';'
@Steven Penny's answer is the most elegant and best-performing solution, but to shed some light on why your solution didn't work:
find ./ -name "results.txt" | xargs sed -n '12p'
causes all filenames(1) to be passed at once(2) to sed. Since sed counts lines cumulatively, across input files, only 1 line will be printed for all input files, namely line 12 from the first input file.
Keeping in mind that find's -exec action is the best solution, if you still wanted to solve this problem with xargs, you'd have to use xargs's -I option as follows, so as to ensure that sed is called once per input line (filename) (% is a self-chosen placeholder):
find ./ -name "results.txt" | xargs -I % sed '12q;d' %
Footnotes:
(1) with word splitting applied, which would break with paths with embedded spaces, but that's a separate issue.
(2) assuming they don't make the entire command exceed the max. length of a command line; either way, multiple filenames are passed at once.
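For completeness, a NUL-safe version of the same xargs approach, assuming GNU find and xargs:
find ./ -name "results.txt" -print0 | xargs -0 -I % sed '12q;d' %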
As an aside: parsing command output with for as in your first snippet is NEVER a good idea - see http://mywiki.wooledge.org/ParsingLs and http://mywiki.wooledge.org/BashFAQ/001
Your use of xargs results in running sed with multiple file arguments. But as you can see, sed doesn't reset the record number to 1 when it starts reading a new file. For example, try running the following command against files with more than 12 lines each.
sed -n '12p' x.txt y.txt
If you want to use xargs, you might consider using awk:
find . -name 'results.txt' | xargs awk 'FNR==12'
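A small variation prints the source file next to each line, since awk exposes the current file's name in FILENAME:
find . -name 'results.txt' | xargs awk 'FNR==12 {print FILENAME ": " $0}'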
P.S: I personally like using the for loop.

Find all files with text "example.html" and replace with "example.php" works only if no spaces are in file name

I have used the following to do a recursive find and replace within files, to update hrefs to point to a new page correctly:
#!/bin/bash
oldstring='features.html'
newstring='features.php'
grep -rl $oldstring public_html/ | xargs sed -i s#"$oldstring"#"$newstring"#g
It worked, except for a few files that had spaces in the name.
This isn't a real problem, as the files with spaces in their names are backups/duplicates I created while testing. But I'd like to understand how to properly pass paths with spaces to sed here. Would anybody know how this could be corrected in this one-liner?
find public_html/ -type f -exec grep -q "$oldstring" {} \; -print0 |
xargs -0 sed -i '' s#"$oldstring"#"$newstring"#g
find will print all the filenames for which the grep command is successful. I use the -print0 option to print them with the NUL character as the delimiter. This goes with the -0 option to xargs, which treats NUL as the argument delimiter on its input, rather than breaking the input at whitespace.
Actually, you don't even need grep and xargs, just run sed from find:
find public_html/ -type f -exec sed -i '' s#"$oldstring"#"$newstring"#g {} +
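One possible refinement: running sed -i over every file rewrites each one (and updates its mtime) even when nothing matches. A sketch that first filters with grep -q, written for GNU sed (drop the '' argument that the BSD/macOS sed shown above requires):
find public_html/ -type f -exec grep -q "$oldstring" {} \; -exec sed -i "s#$oldstring#$newstring#g" {} +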
Here's a lazy approach:
grep -rl $oldstring public_html/ | xargs -d'\n' sed -i "s#$oldstring#$newstring#g"
By default, xargs uses whitespace as the delimiter for arguments coming from its input. So, for example, if you have two files named 'a b' and 'c', it will execute the command:
sed -i 's/.../.../' a b c
By telling xargs explicitly to use newline as the delimiter with -d '\n', it will correctly handle 'a b' as a single argument and quote it when running the command:
sed -i 's/.../.../' 'a b' c
I called it a lazy approach because, as @Barmar pointed out, this won't work if your file names contain newline characters. If you need to handle such cases, use @Barmar's method with find ... -print0 and xargs -0 ...
PS: I also changed s#"$oldstring"#"$newstring"#g to "s#$oldstring#$newstring#g", which is equivalent, but more readable.

Bash Script which recursively makes all text in files lowercase

I'm trying to write a shell script which recursively goes through a directory and, in each file, converts all uppercase letters to lowercase ones. To be clear, I'm not trying to change the file names but the text in the files.
Considerations:
This is an old Fortran project which I am trying to make more accessible
I do not want to create a new file but rather write over the old one with the changes
There are several different file extensions in this directory, including .par .f .txt and others
What would be the best way to go about this?
To convert a file from upper case to lower case you can use ex (a good friend of ed, the standard editor):
ex -s file <<EOF
%s/[[:upper:]]\+/\L&/g
wq
EOF
or, if you like stuff on one line:
ex -s file <<< $'%s/[[:upper:]]\+/\L&/g\nwq'
Combining with find, you can then do:
find . -type f -exec bash -c "ex -s -- \"\$0\" <<< $'%s/[[:upper:]]\+/\L&/g\nwq'" {} \;
This method is 100% safe regarding spaces and funny symbols in the file names. No auxiliary files are created, copied or moved; files are only edited.
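Since the question mentions several specific extensions, you could also narrow the find to just those (extensions taken from the question):
find . -type f \( -name '*.par' -o -name '*.f' -o -name '*.txt' \) \
-exec bash -c "ex -s -- \"\$0\" <<< $'%s/[[:upper:]]\+/\L&/g\nwq'" {} \;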
Edit.
Using glenn jackmann's suggestion, you can also write:
find . -type f -exec bash -c 'printf "%s\n" "%s/[[:upper:]]\+/\L&/g" "wq" | ex -- -s "$0"' {} \;
(the pro is that it avoids awkward escapes; the con is that it's longer).
You can translate all uppercase characters (A–Z) to lowercase (a–z) using the tr command and specifying a range of characters, as in:
$ tr 'A-Z' 'a-z' <be.fore >af.ter
There is also special syntax in tr for specifying this sort of range for upper- and lowercase conversions:
$ tr '[:upper:]' '[:lower:]' <be.fore >af.ter
The tr utility copies the given input to the output, substituting or deleting selected characters. The name tr is short for translate or transliterate. It takes two sets of characters as parameters and replaces occurrences of characters in the first set with the corresponding characters from the second set, i.e. it translates characters.
tr "set1" "set2" < input.txt > output.txt
Although tr doesn't support regular expressions, it does support ranges of characters. Just make sure that both arguments end up with the same number of characters. If the second argument is shorter, its last character will be repeated to match the length of the first argument. If the first argument is shorter, the second argument will be truncated to match the length of the first.
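For example, with the second set shorter than the first (this is GNU tr's behavior; POSIX leaves unequal-length sets unspecified):
$ printf 'abcde\n' | tr 'abcde' 'xy'
xyyyy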
sed -e 's/\(.*\)/\L\1/g' *
or you could pipe the files in from find
Expanding on @nullrevolution's solution:
find /path_to_files -type f -exec sed --in-place -e 's/\(.*\)/\L\1/g' '{}' \;
This one-liner will look at all files in all subdirectories below the base directory /path_to_files.
WARNING: This will change the case in ALL files in EVERY directory under /path_to_files, so make sure that is what you want before you execute it. You can limit the scope of the find to certain file extensions:
find /path_to_files -type f -name \*.txt -exec sed --in-place -e 's/\(.*\)/\L\1/g' '{}' \;
You may also want to make a backup of the original file before modifying the original:
find /path_to_files -type f -name '*.txt' -exec sed --in-place=-orig -e 's/\(.*\)/\L\1/g' '{}' \;
This keeps the modified file under the original name, while leaving an unmodified copy with "-orig" appended to the file name (i.e. file.txt becomes file.txt-orig).
An explanation of each piece:
find /path_to_files This sets the base directory to the path provided.
-type f This will search the directory hierarchy for files only.
-exec COMMAND '{}' \; This executes the provided command once for each matched file. The '{}' is replaced by the current file name. The \; indicates the end of the command.
sed --in-place -e 's/\(.*\)/\L\1/g' The --in-place option makes the changes to the file without keeping a backup. The regular expression uses a backreference \1 to refer to the entire line, and \L converts it to lower case.
Optional
(For a more archaic solution.)
find /path_to_files -type f -exec dd if='{}' of='{}'-lc conv=lcase \;
Identifying text files can be a bit tricky in Unix-like environments. You can do something like this:
set -e -o noclobber
while IFS= read -r f; do
    tr 'A-Z' 'a-z' <"$f" >"$f.$$"
    mv "$f.$$" "$f"
done < <(find "$start_directory" -type f -exec file {} + | grep 'text' | cut -d: -f1)
This will fail on filenames with embedded colons or newlines, but should work on others, including those with spaces.
