I have many files that contain strings like:
code="AA123",code_mark="AI123"
code="AA234",code_mark="AI234"
code="AA456",code_mark="AI456"
code="AA789",code_mark="AI789"
code="AA321",code_mark="AI321"
code="AA111",code_mark="AI111"
code="AA222",code_mark="AI221"
The prefixes AA and AI should always appear like that in each line, but there are some cases where code_mark="AA###" appears instead of code_mark="AI###". I want to find those lines, e.g.:
code="AA451",code_mark="AA451"
code="AA121",code_mark="AA121"
code="AA272",code_mark="AA272"
I tried for multiple hours, but the only thing I have managed is to grep the text between the quotes. Is there any way to grep all the lines that do not match the above pattern?
Use grep, like so:
grep 'AA.*AA' input_file(s)
# or:
grep -v 'AA.*AI' input_file(s)
Example:
Create input file:
cat > in.txt <<EOF
code="AA123",code_mark="AI123" (this is on one line)
code="AA234",code_mark="AI234" (this is on one line)
code="AA456",code_mark="AI456" (this is on one line)
code="AA789",code_mark="AI789' (this is on one line)
code="AA321",code_mark="AI321" (this is on one line)
code="AA111",code_mark="AI111" (this is on one line)
code="AA222",code_mark="AI221" (this is on one line)
code="AA451",code_mark="AA451" (wrong case)
code="AA121",code_mark="AA121" (wrong case)
code="AA272",code_mark="AA272" (wrong case)
EOF
Find the desired lines:
# Lines that have 'AA' followed by 'AA':
grep 'AA.*AA' in.txt
# or:
# Lines that do not have 'AA' followed by 'AI':
grep -v 'AA.*AI' in.txt
Prints:
code="AA451",code_mark="AA451" (wrong case)
code="AA121",code_mark="AA121" (wrong case)
code="AA272",code_mark="AA272" (wrong case)
SEE ALSO:
grep manual
I am trying to find all numbers in a JSON file and replace each with half its original value, using sed on a Mac. For example, here I search for 2010 and replace it with 1005:
file="data.json"
sed -i '' -E 's,([^0-9]|^)2010([^0-9]|$),\1 1005\2,g' "$file"
I would like to find all number instances and replace them with half their values. It would need to work on decimals, e.g. 2009 would become 1004.5 and 10.5 would become 5.25.
I'm aware this could otherwise match each individual digit, so perhaps it would need to find numbers with non-numeric characters on either side of them.
Edit: I would like it to be flexible and work on all kinds of text files, not just JSON files (.txt, .html, .rtf, etc.).
You may use Perl with a regex with the e modifier (sed itself cannot do arithmetic in the replacement, which is why a tool that can evaluate code is used here):
perl -pe 's{(?<!\d)(\d+(?:\.\d+)?)(?!\d)}{$1/2}ge' file
To modify the file in place, add the -i option:
perl -i -pe 's{(?<!\d)(\d+(?:\.\d+)?)(?!\d)}{$1/2}ge' file
perl -pi.bak -e 's{(?<!\d)(\d+(?:\.\d+)?)(?!\d)}{$1/2}ge' file # To save a backup of the original file
A quick demo:
s="abc_2010_and+2009+or-10.5"
perl -pe 's{(?<!\d)(\d+(?:\.\d+)?)(?!\d)}{$1/2}ge' <<< "$s"
# => abc_1005_and+1004.5+or-5.25
The (?<!\d)(\d+(?:\.\d+)?)(?!\d) regex matches
(?<!\d) - no digit immediately to the left is allowed
(\d+(?:\.\d+)?) - Group 1 ($1): 1+ digits followed by an optional sequence of . and 1+ digits
(?!\d) - no digit immediately to the right is allowed.
The RHS, $1/2, is an expression that divides the Group 1 value by 2; it is evaluated as code thanks to the e modifier at the end of the regex.
With GNU awk for multi-char RS and RT it'd just be:
awk -v RS='[0-9]+([.][0-9]+)?' -v ORS= 'RT{$0=$0 RT/2} 1'
e.g. borrowing @Wiktor's example:
$ s="abc_2010_and+2009+or-10.5"
$ awk -v RS='[0-9]+([.][0-9]+)?' -v ORS= 'RT{$0=$0 RT/2} 1' <<< "$s"
abc_1005_and+1004.5+or-5.25
If you want to overwrite an input file then add -i inplace:
awk -i inplace -v RS...1' file
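Spelled out in full with the same program as above (GNU awk 4.1 or later provides -i inplace):
awk -i inplace -v RS='[0-9]+([.][0-9]+)?' -v ORS= 'RT{$0=$0 RT/2} 1' file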
I have a text file that is basically one giant Excel-style table flattened onto a single line. An example would be like this:
Name,Age,Year,Michael,27,2018,Carl,19,2018
I need to change every third occurrence of a comma into a newline so that I get
Name,Age,Year
Michael,27,2018
Carl,19,2018
Please let me know if that is too ambiguous and as always thank you in advance for all the help!
With GNU sed:
sed -E 's/(([^,]*,){2}[^,]*),/\1\n/g'
To change the number of fields per line, change {2} to one less than the number of fields. For example, to change every fifth comma (as in the title of your question), you would use:
sed -E 's/(([^,]*,){4}[^,]*),/\1\n/g'
In the regular expression, [^,]*, is "zero or more characters other than , followed by a ,"; in other words, it is a single comma-delimited field including its trailing comma. This won't work if the fields are quoted strings with internal commas or newlines.
Regardless of what Linux's man sed says, the -E flag is an extension to POSIX sed which causes sed to use extended regular expressions (EREs) rather than basic regular expressions (see man 7 regex). -E also works on BSD sed, the default on Mac OS X. (Thanks to @EdMorton for the note.)
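For instance, applied to the one-line sample from the question (GNU sed, as discussed above):
echo 'Name,Age,Year,Michael,27,2018,Carl,19,2018' | sed -E 's/(([^,]*,){2}[^,]*),/\1\n/g'
Name,Age,Year
Michael,27,2018
Carl,19,2018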
With GNU awk for multi-char RS:
$ awk -v RS='[,\n]' '{ORS=(NR%3 ? "," : "\n")} 1' file
Name,Age,Year
Michael,27,2018
Carl,19,2018
With any awk:
$ awk -v RS=',' '{sub(/\n$/,""); ORS=(NR%3 ? "," : "\n")} 1' file
Name,Age,Year
Michael,27,2018
Carl,19,2018
Try this:
$ cat /tmp/22.txt
Name,Age,Year,Michael,27,2018,Carl,19,2018,Nooka,35,1945,Name1,11,19811
$ echo "Name,Age,Year"; grep -o "[a-zA-Z][a-zA-Z0-9]*,[1-9][0-9]*,[1-9][0-9]\{3\}" /tmp/22.txt
Michael,27,2018
Carl,19,2018
Nooka,35,1945
Name1,11,1981
Or use ,[1-9][0-9]\{3\} (as above) if you don't want to write [0-9] three more times for the YYYY part.
PS: This solution will give you only YYYY for the year; even if the data for the year is 19811 (a typo, for example), you'll still get 1981.
You are looking for 3 fragments, each without a comma, separated by commas.
The last fields can cause problems (not ending with a comma, and maybe fewer than three fields).
The following command handles that:
grep -Eo "([^,]*[,]{0,1}){0,3}" inputfile
This might work for you (GNU sed):
sed 's/,/\n/3;P;D' file
Replace the third , with a newline, print the first line of the pattern space, delete it, and repeat.
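For example, applied to the sample line (GNU sed, as noted):
echo 'Name,Age,Year,Michael,27,2018,Carl,19,2018' | sed 's/,/\n/3;P;D'
Name,Age,Year
Michael,27,2018
Carl,19,2018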
Using grep, you can print lines that match your search query. Adding the -C 2 option will print two lines of surrounding context, like this:
> grep -C 2 'lorem'
some context
some other context
**lorem ipsum**
another line
yet another line
Similarly, you can use grep -B 2 or grep -A 2 to print matching lines with two preceding or two following lines, respectively, for example:
> grep -A 2 'lorem'
**lorem ipsum**
another line
yet another line
Is it possible to skip the matching line and only print the context? Specifically, I would like to only print the line that is exactly 2 lines above a match, like this:
> <some magic command>
some context
If you can allow a couple of grep instances to be used, you can try something like this, as I mentioned in the comments:
$ grep -v "lorem" < <(grep -A2 "lorem" file)
another line
yet another line
$ grep -A2 "lorem" file | grep -v "lorem"
another line
yet another line
If you are interested in a dose of awk, there is a cool way to do it:
$ awk -v count=2 '{a[++i]=$0;}/lorem/{for(j=NR-count;j<NR;j++)print a[j];}' file
some context
some other context
It works by storing each line of the file in an array; when the pattern lorem is found, the awk special variable NR, which holds the record (row) number, points at the exact line containing the pattern. Looping over the count lines before it (count is set on the command line with -v count=2) prints the lines needed.
If you want to print the matching line as well, just change the for-loop condition to j<=NR instead of j<NR. That's it!
There’s no way to do this purely through a grep command. If there’s only one instance of lorem in the text, you could pipe the output through head.
grep -B2 lorem file | head -1
If there may be multiple occurrence of lorem, you could use awk:
awk '{second_previous=previous; previous=current_line; current_line=$0}; /lorem/ { print second_previous; }'
This awk command saves the current line, the previous line, and the one before that in variables, so when it encounters a line containing lorem it prints the line from two lines earlier. If lorem happens to occur on the first or second line of the input, only an empty line would be printed.
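As a quick check, feeding the five sample lines from the question through it:
printf 'some context\nsome other context\n**lorem ipsum**\nanother line\nyet another line\n' | awk '{second_previous=previous; previous=current_line; current_line=$0}; /lorem/ { print second_previous; }'
some context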
awk, as others have said, is your friend here. You don't need complex loops or arrays or other junk, though; basic patterns suffice.
When you use -B N (and the --no-group-separator flag), you get output in groups of M=N+1 lines. To select precisely one of those lines (in your question, you want the very first of each group), you can use modular arithmetic (tested with GNU awk).
awk -vm=3 -vx=1 'NR%m==x{print}'
You can think of the lines being numbered like this: they count up until you reach the match, at which point they go back to zero. So set m to N+1 and x to the line you want to extract.
1 some context
2 some other context
0 **lorem ipsum**
So the final command would be
grep -B2 --no-group-separator lorem $input | awk -vm=3 -vx=1 'NR%m==x{print}'
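Run against the sample text from the question (assuming it is saved as file), this prints just some context. To grab a different line of each group, change x; for example, x=2 picks the line immediately above the match:
grep -B2 --no-group-separator lorem file | awk -vm=3 -vx=2 'NR%m==x{print}'
some other context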
Using this:
grep -A1 -B1 "test_pattern" file
will produce one line before and after the matched pattern in the file. Is there a way to display not lines but a specified number of characters?
The lines in my file are pretty big, so I am not interested in printing the entire line, but rather only in observing the match in context. Any suggestions on how to do this?
3 characters before and 4 characters after
$> echo "some123_string_and_another" | grep -o -P '.{0,3}string.{0,4}'
23_string_and
grep -E -o ".{0,5}test_pattern.{0,5}" test.txt
This will match up to 5 characters before and after your pattern. The -o switch tells grep to show only the match, and -E to use an extended regular expression. Make sure to put quotes around your expression, or it might be interpreted by the shell.
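For instance, with a made-up input just to show the shape of the output:
echo "xxxABCDEtest_patternFGHIJxxx" | grep -E -o ".{0,5}test_pattern.{0,5}"
ABCDEtest_patternFGHIJ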
You could use
awk '/test_pattern/ {
match($0, /test_pattern/); print substr($0, RSTART - 10, RLENGTH + 20);
}' file
You mean, like this:
grep -o '.\{0,20\}test_pattern.\{0,20\}' file
?
That will print up to twenty characters on either side of test_pattern. The \{0,20\} notation is like *, but specifies zero to twenty repetitions instead of zero or more. The -o says to show only the match itself, rather than the entire line.
I'll never easily remember these cryptic command modifiers, so I took the top answer and turned it into a function in my ~/.bashrc file:
cgrep() {
    # For files that are one huge line, tens of thousands of characters long.
    # Use cgrep to print 30 characters before and after the search pattern.
    if [ $# -eq 2 ] ; then
        # Format was 'cgrep "search string" /path/to/filename'
        grep -o -P ".{0,30}$1.{0,30}" "$2"
    else
        # Format was 'cat /path/to/filename | cgrep "search string"'
        grep -o -P ".{0,30}$1.{0,30}"
    fi
} # cgrep()
Here's what it looks like in action:
$ ll /tmp/rick/scp.Mf7UdS/Mf7UdS.Source
-rw-r--r-- 1 rick rick 25780 Jul 3 19:05 /tmp/rick/scp.Mf7UdS/Mf7UdS.Source
$ cat /tmp/rick/scp.Mf7UdS/Mf7UdS.Source | cgrep "Link to iconic"
1:43:30.3540244000 /mnt/e/bin/Link to iconic S -rwxrwxrwx 777 rick 1000 ri
$ cgrep "Link to iconic" /tmp/rick/scp.Mf7UdS/Mf7UdS.Source
1:43:30.3540244000 /mnt/e/bin/Link to iconic S -rwxrwxrwx 777 rick 1000 ri
The file in question is one continuous 25K-character line, and it is hopeless to find what you are looking for using regular grep.
Notice the two different ways you can call cgrep, which parallel the ways grep itself is usually called.
There is a "niftier" way of writing the function where "$2" is only passed when it is set, which would save a few lines of code. I don't have it handy though; something like ${parm2} $parm2. If I find it I'll revise the function and this answer.
With gawk, you can use the match function:
x="hey there how are you"
echo "$x" |awk --re-interval '{match($0,/(.{4})how(.{4})/,a);print a[1],a[2]}'
ere are
If you are OK with perl, here is a more flexible solution: the following will print three characters before the pattern, followed by the actual pattern, and then 5 characters after the pattern.
echo hey there how are you |perl -lne 'print "$1$2$3" if /(.{3})(there)(.{5})/'
ey there how
This can also be applied to words instead of just characters. The following will print one word before the actual matching string:
echo hey there how are you |perl -lne 'print $1 if /(\w+) there/'
hey
The following will print one word after the pattern:
echo hey there how are you |perl -lne 'print $2 if /(\w+) there (\w+)/'
how
The following will print one word before the pattern, then the actual word, and then one word after the pattern:
echo hey there how are you |perl -lne 'print "$1$2$3" if /(\w+)( there )(\w+)/'
hey there how
If using ripgrep, this is how you would do it:
rg -o ".{0,5}test_pattern.{0,5}" test.txt
You can use one grep with a regexp to find the match plus a second grep to highlight it:
echo "some123_string_and_another" | grep -o -P '.{0,3}string.{0,4}' | grep string
23_string_and
With ugrep you can specify -ABC context with option -o (--only-matching) to show the match with extra characters of context before and/or after the match, fitting the match plus the context within the specified -ABC width. For example:
ugrep -o -C30 pattern testfile.txt
gives:
1: ... long line with an example pattern to match. The line could...
2: ...nother example line with a pattern.
On a terminal, the same output is shown with color highlighting. Multiple matches on a line are either shown with [+nnn more], or, with option -k (--column-number), each match is shown individually with its own context and column number.
The context width is the number of Unicode characters displayed (UTF-8/16/32), not just ASCII.
I personally do something similar to the posted answers, but since the dot key, like any key, can be tapped or held down, and I often don't need a lot of context (if I needed more I might grab whole lines with grep -C, but often, like you, I don't want the lines before and after), I find it quicker to just tap the dot key once per character of context I want, or hold it down for more.
e.g. echo zzzabczzzz | grep -o '.abc..'
This matches the abc pattern with one character before and two after (in regex language, a dot matches any character). Others used dots too, but with curly braces to specify the repetition.
If I wanted to be strict about matching between 0 and x characters, or exactly y characters, then I'd use the curly braces and -P, as others have done (see the sketch below).
There is also a setting for whether the dot matches a newline, but you can look into that if it's a concern.
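For instance, a stricter version of the same idea might look like this (the counts are arbitrary, just for illustration: up to 3 characters before and exactly 4 after):
echo zzzzzabczzzzz | grep -o -P '.{0,3}abc.{4}'
zzzabczzzz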