sed/awk - print text between patterns spanning multiple lines - bash

I am new to scripting and am trying to learn how to extract text that lies between two different patterns. However, I still cannot figure out how to do it in the following scenario:
If I have my input file reading:
Hi I would like
to print text
between these
patterns
and my expected output is like:
I would like
to print text
between these
i.e. my first search pattern is "Hi": skip the pattern itself, but print everything on the same line following the match. My second search pattern is "patterns": I would like to completely avoid printing this line and any lines beyond it.
I tried the following:
sed -n '/Hi/,/patterns/p' test.txt
[output]
Hi I would like
to print text
between these
patterns
Next, I tried:
awk ' /'"Hi"'/ {flag=1;next} /'"pattern"'/{flag=0} flag { print }' test.txt
[output]
to print text
between these
Can someone help me out in identifying how to achieve this?
Thanks in advance

You have the right idea (a mini state machine in awk), but you need some slight modifications, as per the following transcript:
pax> echo 'Hi I would like
to print text
between these
patterns ' | awk '
/patterns/ { echo = 0 }
/Hi / { gsub("^.*Hi ", "", $0); echo = 1 }
{ if (echo == 1) { print } }'
Or, in compressed form:
awk '/patterns/{e=0}/Hi /{gsub("^.*Hi ","",$0);e=1}{if(e==1){print}}'
The output of that is:
I would like
to print text
between these
as requested.
The way this works is as follows. The echo variable is initially 0, meaning that no echoing will take place.
Each line is checked in turn. If it contains patterns, echoing is disabled.
If it contains Hi followed by a space, echoing is turned on, and gsub is used to modify the line, removing everything up to and including the Hi.
Then, regardless, the line (possibly modified) is echoed when the echo flag is on.
Now, there are going to be edge cases, such as:
lines containing two occurrences of Hi; or
lines containing something before the patterns.
You haven't specified how they should be handled, so I didn't bother, but the basic concept would be the same (one possible treatment of the second case is sketched below).
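For instance, if you wanted to keep any text that precedes patterns on the terminating line, a minimal sketch might look like this, assuming the marker itself and everything after it should still be dropped:
awk '
/patterns/ { sub(/ *patterns.*$/, "")       # keep any text before the marker
             if (echo && length($0)) print  # print it if anything is left
             echo = 0; next }
/Hi /      { gsub("^.*Hi ", "", $0); echo = 1 }
echo       { print }' test.txt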

Updated the solution to remove the line "patterns":
$ sed -n '/^Hi/,/patterns/{s/^Hi //;/^patterns/d;p;}' file
I would like
to print text
between these

This might work for you (GNU sed):
sed '/Hi /!d;s//\n/;s/.*\n//;ta;:a;s/patterns.*$//;tb;$!{n;ba};:b;/^$/d' file
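That one-liner is fairly dense; for comparison, here is a somewhat more readable sketch of the same idea, assuming GNU sed and that Hi begins the block and patterns ends it:
sed -n '/Hi /{s///;:a;p;n;/patterns/!ba}' file
The empty s/// reuses the last regex (Hi ) to strip the marker; the loop then prints lines until one containing patterns arrives.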

Just set a flag (f) when you find and replace Hi at the start of a line, clear it when you find patterns, then invoke the default print when the flag is set (sub() returns the number of substitutions made, so it doubles as the condition):
$ awk 'sub(/^Hi /,""){f=1} /patterns/{f=0} f' file
I would like
to print text
between these

Related

How do I read into a .txt and extract a certain string corresponding to a found string?

A folder contains a README.txt and several DICOM files named emr_000x.sx (where x are numerical values). The README.txt contains different lines, one of which includes the characters "xyz" along with a corresponding emr_000x.sx.
I would like to: read into the .txt, identify which line contains "xyz", and extract the emr_000x.sx from that line only.
For reference, the line in the .txt is formatted in this way:
A:emr_000x.sx, B:00001, C:number, D(characters)string_string_number_**xyz**_number_number
I think using grep might be helpful, but I am not familiar enough with bash to code this myself. Does anyone know how to solve this? Many thanks!
You can use awk to match fields in your CSV:
awk -F, '$4 ~ "xyz" {sub(/^A:/, "", $1); print $1}' README.txt
I like sed for this sort of thing.
sed -nE '/xyz/{ s/^.*A:([^,]+),.*/\1/; p; }' README.txt
This says, "On lines where you see xyz replace the whole line with the non-commas between A: and a comma, then print the line."
-n is no printing unless I say so. (p means print.)
-E just means to use Extended regexes.
/xyz/{...} means "on lines where you see xyz do the stuff between the curlies."
s/^.*A:([^,]+),.*/\1/ will substitute the matched part (which should be the whole line) with just the part between the parens.
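Since grep was mentioned: if your grep is GNU grep built with PCRE support, a sketch like the following would also work; the \K discards everything matched so far, and the lookahead requires xyz to appear later on the same line:
grep -oP '^A:\K[^,]+(?=.*xyz)' README.txt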

The for loop overwrites the output file or duplicates entries

Say I have 250 files, from which I need to extract certain information and store it in a text file.
I have tried a for loop in the shell as follows,
text= 'home/path/tothe/textfiles'
for sam in $(find ${text} -name \*_PG.tsv);do
#echo ${sam}
awk '{if($2=="ID") print FILENAME"\t""yes""\t""SAP""\t""LUFTA"}' ${sam}
done >> ${text}/metadata.txt
With the > operator the output text file is overwritten, and with >> the entries are appended multiple times, producing duplicates.
I would like to know what I should change to get rid of these issues. Thanks for suggestions!
I think you can do this with a single invocation of awk:
path=home/path/tothe/textfiles
awk -v OFS='\t' '$2 == "ID" {
print FILENAME, "yes", "SAP", "LUFTA"
}' "$path"/*_PG.tsv > "$path"/metadata.txt
Careful with your variable assignments: there should be no spaces around the =.
Use the shell to expand the list of files, without find.
Pass the full list of files as arguments to awk, instead of looping one by one.
Set the Output Field Separator (OFS) instead of writing \t to separate your fields.
Redirect the output to the metadata file.
I assume that your awk script is behaving as you expect; I removed the useless if, since awk scripts are written like condition { action }. I guess you only want one line of output per file, so you can probably skip to the next file once a match is found, to avoid processing the rest of each file (see the sketch below).
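A minimal sketch of that, assuming GNU awk (nextfile is a gawk extension; a plain exit would stop processing all the remaining files in a single invocation like this one):
awk -v OFS='\t' '$2 == "ID" {
print FILENAME, "yes", "SAP", "LUFTA"
nextfile # done with this file, move on to the next
}' "$path"/*_PG.tsv > "$path"/metadata.txt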

Batch create files with name and content based on input file

I am a macOS user trying to batch create a bunch of files. I have a text file with a column of several hundred terms/subjects, e.g.:
hydrogen
oxygen
nitrogen
carbon
etcetera
I want to programmatically fill a directory with text files generated from this subject list. For example, "hydrogen.txt" and "oxygen.txt" and so on, with each file created by iterating through the lines of my list_of_names.txt file. Some lines are one word, but other lines are two or three words (eg: "carbon monoxide"). This I have figured out how to do:
awk 'NF>0' list_of_names.txt | while read line; do touch "${line}.txt"; done
Additionally I need to create two lines of content within each of these files, and the content is both static and dynamic...
# filename
#elements/filename
...where in the example above the pound sign ("#") and "elements/" would be the same in all of the files created, but "filename" would be variable (eg: "hydrogen" for "hydrogen.txt" and "oxygen" for "oxygen.txt" etc). One further wrinkle is that if any spaces appear at all on the second line of content, there needs to be a trailing pound sign. For example:
# filename
#elements/carbon monoxide#
...although this last part is not a dealbreaker and I can use grep to modify list_of_names.txt such that phrases like "carbon monoxide" become "carbon_monoxide" and just deal with the repercussions of this later. (But if it is easy to preserve the spaces, I would prefer that.)
After a couple of hours of searching and attempts to use sed, awk, and so on, I am stuck at a directory full of files with the correct filename.txt format, but I can't get further than this. Mostly I think my efforts are failing because the solutions I can find use commands I am not familiar with, are structured for GNU tools, and don't execute correctly in Terminal on macOS.
I am amenable to processing this in multiple steps (i.e. make all of the files.txt first, then run a second step to populate the content of the files), or as a single command that makes the files and all of their content simultaneously ('simultaneously' on a human timescale).
My horrible pseudocode (IN CAPS) for how this would look as 2 steps:
awk 'NF>0' list_of_names.txt | while read line; do touch "${line}.txt"; done
awk 'NF>0' list_of_names.txt | while read line; OPEN "${line}.txt" AND PRINT "# ${line}\n#elements/${line}"; IF ${line} CONTAINS CHARACTER " " PRINT "#"; done
You could use a simple Bash loop and create the files in one shot:
#!/bin/bash
while read -r name; do # loop through input file content
[[ $name ]] || continue # skip empty lines
output=("# $name") # initialize the array with first element
trailing=
[[ $name = *" "* ]] && trailing="#" # name has spaces in it
output+=("#elements/$name$trailing") # second line, with trailing # if needed
printf '%s\n' "${output[@]}" > "$name.txt" # write array content to the output file
done < list_of_names.txt
Doing it in awk:
awk '
NF {
trailing = (/ / ? "#" : "")
out=$0".txt"
printf("# %s\n#elements/%s%s\n", $0, $0, trailing) > out
close(out)
}
' list_of_names.txt
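For example, with either version above, a subject line of carbon monoxide produces a file named carbon monoxide.txt containing:
# carbon monoxide
#elements/carbon monoxide#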
Doing the whole job in awk will yield better performance than in bash, which isn't really suited to processing text like this.
It seems to me that this should cover the requirements you've specified:
awk '
{
out=$0 ".txt"
printf "# %s\n#elements/%s%s\n", $0, $0, (/ / ? "#" : "") >> out
close(out)
}
' list_of_subjects.txt
Though you could shrink it to a one-liner:
awk '{printf "# %s\n#elements/%s%s\n",$0,$0,(/ /?"#":"")>($0".txt");close($0".txt")}' list_of_subjects.txt

Merging specific lines in a CSV file if a specific pattern is found

I have been stuck creating a script for the scenario below:
I have a file a.csv with content as
123,fsfs,4124124,412412
1314,fasfwe,42145,rwr
1234,fwtrwqt,twt
wqrfsdgaseg
12424,23532,fafwe,gewgt
14214,wet,wertwtw,wet
What happens is, due to some application issue, the CSV content of one line gets printed onto a second line.
My task is to find such occurrences and merge those lines in a new file, so the new file will contain only well-formed CSV records. I tried a few things using sed, but couldn't succeed.
The idea: a line whose third field is non-empty but whose fourth field is missing must be incomplete, so print it with a trailing comma and no newline; the following line then completes the record.
$ awk -F, '!length($4) && length($3) {printf "%s,", $0; next} 1' file
123,fsfs,4124124,412412
1314,fasfwe,42145,rwr
1234,fwtrwqt,twt,wqrfsdgaseg
12424,23532,fafwe,gewgt
14214,wet,wertwtw,wet
All previous answers seem great, but I wanted to add a sed answer as well because sed is awesome! (And sed was added as a tag so we were missing a sed answer.)
This answer should work across multiple lines, provided that the break always happens at a separator and that the separator itself is omitted (see the input example for these assumptions).
sed ':l;/\([^,]*,\)\{3\}[^,]*/!{;N;s/\n/,/g;bl;}' <file_in >file_out
What it does is:
defines a label (:l)
tests whether there are four fields (/\([^,]*,\)\{3\}[^,]*/)
if there are not (!), executes the block ({;N;s/\n/,/g;bl;})
The block:
appends the next line to the pattern space (N)
replaces the newline with the separator (s/\n/,/g)
loops by branching back to our :l label (bl)
Proof:
$ sed ':l;/\([^,]*,\)\{3\}[^,]*/!{;N;s/\n/,/g;bl;}' <<EOF
> 123,fsfs,4124124,412412
> 1314,fasfwe,42145,rwr
> 1234,fwtrwqt,twt
> wqrfsdgaseg
> 12424,23532,fafwe,gewgt
> 14214,wet,wertwtw,wet
> EOF
123,fsfs,4124124,412412
1314,fasfwe,42145,rwr
1234,fwtrwqt,twt,wqrfsdgaseg
12424,23532,fafwe,gewgt
14214,wet,wertwtw,wet

'grep +A': print everything after a match [duplicate]

This question already has answers here: How to get the part of a file after the first line that matches a regular expression (12 answers). Closed 7 years ago.
I have a file that contains a list of URLs. It looks like below:
file1:
http://www.google.com
http://www.bing.com
http://www.yahoo.com
http://www.baidu.com
http://www.yandex.com
....
I want to get all the records after: http://www.yahoo.com, results looks like below:
file2:
http://www.baidu.com
http://www.yandex.com
....
I know that I could use grep to find the line number where yahoo.com lies, using:
grep -n 'http://www.yahoo.com' file1
3:http://www.yahoo.com
But I don't know how to get the part of the file after line number 3. Also, I know there is a flag, grep -A, that prints lines after your match. However, you need to specify how many lines you want after the match, and I am wondering whether there is something to get around that issue. Like:
Pseudocode:
grep -n 'http://www.yahoo.com' -A all file1 > file2
I know we could use the line number I got and wc -l to get the number of lines after yahoo.com, however... it feels pretty lame.
AWK
If you don't mind using AWK:
awk '/yahoo/{y=1;next}y' data.txt
This script has two parts:
/yahoo/ { y = 1; next }
y
The first part states that if we encounter a line with yahoo, we set the variable y=1 and then skip that line (the next command jumps to the next line, thus skipping any further processing of the current line). Without the next command, the yahoo line itself would be printed.
The second part is a short hand for:
y != 0 { print }
Which means: for each line, if the variable y is non-zero, we print that line. In AWK, if you refer to a variable, that variable is created and is either zero or the empty string, depending on context. Before encountering yahoo, the variable y is 0, so the script does not print anything. After encountering yahoo, y is 1, so every line after that point is printed.
Sed
Or, using sed, the following will delete everything up to and including the line with yahoo:
sed '1,/yahoo/d' data.txt
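One caveat: if the very first line of the file itself matches, a 1,/yahoo/ range will not end until the next match, because the end address is only searched for on lines after the start. GNU sed accepts 0 as the start address to handle exactly this case:
sed '0,/yahoo/d' data.txt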
This is much easier to do with sed than with grep. sed can apply any of its one-letter commands to an inclusive range of lines; the general syntax for this is
START , STOP COMMAND
except without any spaces. START and STOP can each be a number (meaning "line number N", starting from 1); a dollar sign (meaning "the end of the file"), or a regexp enclosed in slashes, meaning "the first line that matches this regexp". (The exact rules are slightly more complicated; the GNU sed manual has more detail.)
So, you can do what you want like so:
sed -n -e '/http:\/\/www\.yahoo\.com/,$p' file1 > file2
The -n means "don't print anything unless specifically told to", and the script means "from the first appearance of a line that matches the regexp /http:\/\/www\.yahoo\.com/ to the end of the file, print".
This will include the line with http://www.yahoo.com/ on it in the output. If you want everything after that point but not that line itself, the easiest way to do that is to invert the operation:
sed -e '1,/http:\/\/www\.yahoo\.com/d' file1 > file2
which means "for line 1 through the first line matching the regexp /http:\/\/www\.yahoo\.com/, delete the line" (and then, implicitly, print everything else; note that -n is not used this time).
awk '/yahoo/ ? c++ : c' file1
Or golfed
awk '/yahoo/?c++:c' file1
Result
http://www.baidu.com
http://www.yandex.com
This is most easily done in Perl:
perl -ne 'print unless 1 .. m(http://www\.yahoo\.com)' file
In other words, print all lines that aren’t between line 1 and the first occurrence of that pattern.
Using this script:
# Get the line number of the "yahoo" match
index=`grep -n "yahoo" filepath | cut -d':' -f1`
# Get the total number of lines in the file
totallines=`wc -l filepath | cut -d' ' -f1`
# Subtract index from totallines
result=`expr $totallines - $index`
# Print that many lines after the match, dropping the matched line itself
grep -A $result "yahoo" filepath | tail -n +2
