grep a few columns from a file to another file in shell - shell

The following data is present in file1.txt:
mudId|~|mudType|~|mudNAme|~|mudDate|~|mudEndDate
100|~|Balance|~|Abc|~|21-09-2020|~|22-09-2020
101|~|Clone|~|Bcd|~|11-07-2020|~|12-07-2020
102|~|Ledger|~|Def|~|12-06-2019|~|13-06-2019
How can I grep only the columns mudId, mudType and mudDate, with all the rows, into another file?
The columns are separated by |~|

To meet your criteria of specifying the field names from the heading row, you can use awk with a regular expression as the field-separator variable (e.g. "[|][~][|]"). For the first record (line), read the field names as array indexes and set each value to the current field index. In the second rule, simply output the fields whose indexes are stored in your array under the strings "mudId", "mudType" and "mudDate".
For example you can do:
awk '
BEGIN { FS="[|][~][|]"; OFS="|~|" }
FNR==1 { for(i=1;i<=NF;i++) arr[$i]=i; next }
{ print $arr["mudId"], $arr["mudType"], $arr["mudDate"] }
' file
(note: the above intentionally generalizes to meet your criteria where you want to specify the string names of the fields to output)
If you simply want to write fields 1, 2, & 4 to a new file, you would do:
awk -v FS="[|][~][|]" -v OFS="|~|" 'FNR>1 {print $1,$2,$4}' file
Example Use/Output
Simply copy/middle-mouse paste the above into an xterm where file is in the current directory, e.g.
$ awk '
> BEGIN { FS="[|][~][|]"; OFS="|~|" }
> FNR==1 { for(i=1;i<=NF;i++) arr[$i]=i; next }
> { print $arr["mudId"], $arr["mudType"], $arr["mudDate"] }
> ' file
100|~|Balance|~|21-09-2020
101|~|Clone|~|11-07-2020
102|~|Ledger|~|12-06-2019
(note: if you want the new file space-delimited, just remove OFS="|~|")
or
$ awk -v FS="[|][~][|]" -v OFS="|~|" 'FNR>1 {print $1,$2,$4}' file
100|~|Balance|~|21-09-2020
101|~|Clone|~|11-07-2020
102|~|Ledger|~|12-06-2019
To write the contents to a new file, just redirect the output to a new filename (e.g. for the last command above, change the trailing ' file to ' file > newfile).
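For instance, to capture the fields-1,2,4 version in a file named newfile:
awk -v FS="[|][~][|]" -v OFS="|~|" 'FNR>1 {print $1,$2,$4}' file > newfile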
Look things over and let me know if you have further questions.

If the column layout is fixed as mudId|~|mudType|~|mudNAme|~|mudDate|~|mudEndDate, try this:
sed 's/|~|/\t/g' file1.txt | awk '{print $1"|~|"$2"|~|"$4}'
If a tab could already occur in file1.txt, change \t to some other character that does not occur in the file, and add -F'<that character>' after awk so the fields are split on it.
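For example, using a comma as the intermediate delimiter (assuming , never occurs in file1.txt):
sed 's/|~|/,/g' file1.txt | awk -F',' '{print $1"|~|"$2"|~|"$4}'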

Related

awk: select first column and value in column after matching word

I have a .csv where each row corresponds to a person (first column) and attributes with values that are available for that person. I want to extract the names and values of a particular attribute for persons where that attribute is available. The doc is structured as follows:
name,attribute1,value1,attribute2,value2,attribute3,value3
joe,height,5.2,weight,178,hair,
james,,,,,,
jesse,weight,165,height,5.3,hair,brown
jerome,hair,black,breakfast,donuts,height,6.8
I want a file that looks like this:
name,attribute,value
joe,height,5.2
jesse,height,5.3
jerome,height,6.8
Using this earlier post, I've tried a few different awk methods but am still having trouble getting both the first column and whatever column has the desired value for the attribute (say height). For example, the following returns everything.
awk -F "height," '{print $1 "," FS$2}' file.csv
I could grep only the rows with height in them, but I'd prefer to do everything in a single line if I can.
You may use this awk:
cat attrib.awk
BEGIN {
FS=OFS=","
print "name,attribute,value"
}
NR > 1 && match($0, k "[^,]+") {
print $1, substr($0, RSTART+1, RLENGTH-1)
}
# then run it as
awk -v k=',height,' -f attrib.awk file
name,attribute,value
joe,height,5.2
jesse,height,5.3
jerome,height,6.8
# or this one
awk -v k=',weight,' -f attrib.awk file
name,attribute,value
joe,weight,178
jesse,weight,165
With your shown samples, please try the following awk code, written and tested in GNU awk. In short: set RS (the record separator) to the regex ^[^,]*,height,[^,]* and print RT, the text it matched, to get the expected output.
awk -v RS='^[^,]*,height,[^,]*' 'RT{print RT}' Input_file
I'd suggest a sed one-liner:
sed -n 's/^\([^,]*\).*\(,height,[^,]*\).*/\1\2/p' file.csv
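Against the sample file.csv this prints the matching rows only (no header line, since sed -n prints only on a match):
joe,height,5.2
jesse,height,5.3
jerome,height,6.8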
One awk idea:
awk -v attr="height" '
BEGIN { FS=OFS="," }
FNR==1 { print "name", "attribute", "value"; next }
{ for (i=2;i<=NF;i+=2) # loop through even-numbered fields
if ($i == attr) { # if field value is an exact match to the "attr" variable then ...
print $1,$i,$(i+1) # print current name, current field and next field to stdout
next # no need to check rest of current line; skip to next input line
}
}
' file.csv
NOTE: this assumes the input value (height in this example) matches a field in the file exactly (including capitalization)
This generates:
name,attribute,value
joe,height,5.2
jesse,height,5.3
jerome,height,6.8
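The attribute is easy to swap; for instance, running the same script with -v attr="hair" against the sample should yield:
name,attribute,value
joe,hair,
jesse,hair,brown
jerome,hair,black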
With a perl one-liner:
$ perl -lne '
print "name,attribute,value" if $.==1;
print "$1,$2" if /^(\w+).*(height,\d+\.\d+)/
' file
output
name,attribute,value
joe,height,5.2
jesse,height,5.3
jerome,height,6.8
awk accepts variable-value arguments following a -v flag before the script. Thus, the name of the required attribute can be passed into an awk script using the general pattern:
awk -v attr=attribute1 ' {} ' file.csv
Inside the script, the value of the passed variable is referenced by the variable name, in this case attr.
Your criteria are to print column 1 (holding the name), the column whose header matches the required attribute, and the column immediately after it (holding the value).
Thus, the following script allows you to fish out the column headed "attribute1" and its next neighbour:
awk -v attr=attribute1 ' BEGIN {FS=","} /attr/{for (i=1;i<=NF;i++) if($i == attr) col=i;} {print $1","$col","$(col+1)} ' file.csv
(Note that /attr/ here is a literal regex matching the text "attr", which occurs in the header names attribute1, attribute2, ...; it is not the awk variable attr. Its effect is that col gets set while the header line is read.)
result:
name,attribute1,value1
joe,height,5.2
james,,
jesse,weight,165
jerome,hair,black
another column (attribute 3):
awk -v attr=attribute3 ' BEGIN {FS=","} /attr/{for (i=1;i<=NF;i++) if($i == attr) col=i;} {print $1","$col","$(col+1)} ' file.csv
result:
name,attribute3,value3
joe,hair,
james,,
jesse,hair,brown
jerome,height,6.8
Just change the value of the -v attr= argument for the required column.

How to find content in a file and replace the adjacent value

Using bash, how do I find a string and update the string next to it? For example, I pass the value
my.site.com|test2.spin:80
proxy_pass.map
my.site2.com test2.spin:80
my.site.com test.spin:8080;
The expected output is proxy_pass.map updated with
my.site2.com test2.spin:80
my.site.com test2.spin:80;
I tried using awk
awk '{gsub(/^my\.site\.com\s+[A-Za-z0-9]+\.spin:8080;$/,"my.site2.comtest2.spin:80"); print}' proxy_pass.map
but it does not seem to work. Is there a better way to approach the problem?
One awk idea, assuming spacing needs to be maintained:
awk -v rep='my.site.com|test2.spin:80' '
BEGIN { split(rep,a,"|") # split "rep" variable and store in
site[a[1]]=a[2] # associative array
}
$1 in site { line=$0 # if 1st field is in site[] array then make copy of current line
match(line,$1) # find where 1st field starts (in case 1st field does not start in column #1)
newline=substr(line,1,RSTART+RLENGTH-1) # save current line up through matching 1st field
line=substr(line,RSTART+RLENGTH) # strip off 1st field
match(line,/[^[:space:];]+/) # look for string that does not contain spaces or ";" and perform replacement, making sure to save everything after the match (";" in this case)
newline=newline substr(line,1,RSTART-1) site[$1] substr(line,RSTART+RLENGTH)
$0=newline # replace current line with newline
}
1 # print current line
' proxy_pass.map
If the input looks like:
$ cat proxy_pass.map
my.site2.com test2.spin:80
my.site.com test.spin:8080;
this awk script generates:
my.site2.com test2.spin:80
my.site.com test2.spin:80;
NOTES:
if multiple replacements need to be performed, I'd suggest placing them in a file and having awk process said file first (see the sketch after these notes)
the 2nd match() is hardcoded based on OP's example; depending on actual file contents it may be necessary to expand the regex used in the 2nd match()
once satisfied with the result, the original input file can be updated in a couple of ways ... a) if using GNU awk, then awk -i inplace -v rep... or b) save the result to a temp file and then mv the temp file to proxy_pass.map
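A minimal sketch of that multi-replacement idea, assuming a hypothetical replacements.map holding "hostname new-value" pairs one per line (note this variant collapses runs of spaces to a single space on lines it changes):
$ cat replacements.map
my.site.com test2.spin:80
$ awk '
FNR==NR { site[$1]=$2; next }                       # 1st file: build hostname -> new-value map
$1 in site { sub(/[^[:space:];]+/, site[$1], $2) }  # swap the 2nd field, keeping any trailing ";"
1                                                   # print all lines
' replacements.map proxy_pass.map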
If the number of spaces between the columns is not significant, a simple
proxyf=proxy_pass.map
tmpf=$$.txt
awk '$1 == "my.site.com" { $2 = "test2.spin:80;" } {print}' <$proxyf >$tmpf && mv $tmpf $proxyf
should do. If you need the columns to be lined up nicely, you can replace the print by a suitable printf .... statement.
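For example, a sketch that left-aligns the first column in a 20-character field (the width is an arbitrary choice):
awk '$1 == "my.site.com" { $2 = "test2.spin:80;" } { printf "%-20s%s\n", $1, $2 }' <$proxyf >$tmpf && mv $tmpf $proxyf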
With your shown samples and attempts, please try the following awk code. A shell variable named var holds the value my.site.com|test2.spin:80 and is passed to the awk program, where it becomes the awk variable var1.
In the BEGIN section, the split function splits the value of var1 into an array named arr on the | separator, with num holding the number of elements produced. A for loop then walks arr two elements at a time, building an array arr2 whose keys are the odd-positioned elements and whose values are the even-positioned ones (so each key is followed by its value).
In the main block, if $1 is a key in arr2, its stored value is printed in place of $2; otherwise $2 is printed unchanged, as per the requirement.
##Shell variable named var is being created here...
var="my.site.com|test2.spin:80"
awk -v var1="$var" '
BEGIN{
num=split(var1,arr,"|")
for(i=1;i<=num;i+=2){
arr2[arr[i]]=arr[i+1]
}
}
{
print $1,(($1 in arr2)?arr2[$1]:$2)
}
' Input_file
Or, if you want to preserve the spacing between the 1st and 2nd fields, try the following slight tweak of the above code, written and tested with your shown samples only.
awk -v var1="$var" '
BEGIN{
num=split(var1,arr,"|")
for(i=1;i<=num;i+=2){
arr2[arr[i]]=arr[i+1]
}
}
{
match($0,/[[:space:]]+/)
print $1 substr($0,RSTART,RLENGTH) (($1 in arr2)?arr2[$1]:$2)
}
' Input_file
NOTE: this program can take multiple values, separated by |, in the shell variable to be passed and checked in the awk program, but it expects them strictly in the format key|value|key|value...
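For example, two replacements in a single pass (the second pair is made up for illustration):
var="my.site.com|test2.spin:80|my.site2.com|test9.spin:81"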
#!/bin/sh -x
# split the "key|value" argument into the hostname and the replacement value
f1=$(echo "my.site.com|test2.spin:80" | cut -d'|' -f1)
f2=$(echo "my.site.com|test2.spin:80" | cut -d'|' -f2)
# append the replacement line, using % as a stand-in for a tab
echo "${f1}%${f2};" >> proxy_pass.map
# copy the file to p1, turning the % into a real tab
tr '%' '\t' < proxy_pass.map >> p1
# ed script: go to the last line, step back one line, delete it
# (the stale my.site.com entry), then write and quit
cat > ed1 <<EOF
$
-1
d
wq
EOF
ed -s p1 < ed1
mv -v p1 proxy_pass.map
rm -v ed1
This might work for you (GNU sed):
<<<'my.site.com|test2.spin:80' sed -E 's#\.#\\.#g;s#^(\S+)\|(\S+)#/^\1\\b/s/\\S+/\2/2#' |
sed -Ef - file
Build a sed script from the input arguments and apply it to the input file.
The input arguments are first prepared so that their metacharacters (in this case the .'s) are escaped.
Then the first argument is used to prepare a match command and the second is used as the replacement value in a substitution command.
The result is piped into a second sed invocation that takes the sed script and applies it to the input file.
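To see the sed script that the first stage builds, run it on its own:
$ <<<'my.site.com|test2.spin:80' sed -E 's#\.#\\.#g;s#^(\S+)\|(\S+)#/^\1\\b/s/\\S+/\2/2#'
/^my\.site\.com\b/s/\S+/test2\.spin:80/2
i.e. on lines whose first field is my.site.com, replace the second whitespace-delimited string with test2.spin:80.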

How to match multiple patterns and print a different number of lines after each pattern using awk

I have a big file with thousands of lines that looks like:
>ENST00001234.1
ACGTACGTACGG
TTACCCAGTACG
ATCGCATTCAGC
>ENST00002235.4
TTACGCAT
TAGGCCAG
>ENST00005546.9
TTTATCGC
TTAGGGTAT
I want to grep specific ids (after the > sign), for example ENST00001234.1, and then get the lines after the match until the next > [regardless of the number of lines]. I need to grep about 63 ids in this way at once.
If I grep the ids ENST00001234.1 and ENST00005546.9, the ideal output should be:
>ENST00001234.1
ACGTACGTACGG
TTACCCAGTACG
ATCGCATTCAGC
>ENST00005546.9
TTTATCGC
TTAGGGTAT
I tried awk '/ENST00001234.1/ENST00005546.9/{print}' but it did not help.
You can set > as the record separator:
$ awk -F'\n' -v RS='>' -v ORS= '$1=="ENST00001234.1"{print RS $0}' ip.txt
>ENST00001234.1
ACGTACGTACGG
TTACCCAGTACG
ATCGCATTCAGC
-F'\n' to make it easier to compare the search term with first line
-v RS='>' set > as input record separator
-v ORS= clear the output record separator, otherwise you'll get extra newline in the output
$1=="ENST00001234.1" this will do string comparison and matches the entire first line, otherwise you'll have to escape regex metacharacters like . and add anchors
print RS $0 if match is found, print > and the record content
If you want to match more than one search terms, put them in a file:
$ cat f1
ENST00001234.1
ENST00005546.9
$ awk 'BEGIN{FS="\n"; ORS=""}
NR==FNR{a[$0]; next}
$1 in a{print RS $0}' f1 RS='>' ip.txt
>ENST00001234.1
ACGTACGTACGG
TTACCCAGTACG
ATCGCATTCAGC
>ENST00005546.9
TTTATCGC
TTAGGGTAT
Here, the contents of f1 are used to build the keys for array a. Once the first file is read, RS='>' will change the record separator for the second file.
$1 in a will check if the first line matches a key in array a
EDIT (generic solution): if you have to look for multiple strings in Input_file, mention all of them, comma-separated, in the awk variable search; the program then prints all matched records (their respective lines).
awk -v search="ENST00001234.1,ENST00002235.4" '
BEGIN{
num=split(search,arr,",")
for(i=1;i<=num;i++){
look[">"arr[i]]
}
}
/^>/{
if($0 in look){ found=1 }
else { found="" }
}
found
' Input_file
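With the sample file, this should print both matched records:
>ENST00001234.1
ACGTACGTACGG
TTACCCAGTACG
ATCGCATTCAGC
>ENST00002235.4
TTACGCAT
TAGGCCAG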
If you want to read the ids to be searched for from another file, try the following, where look_file is the file holding all the ids and Input_file is the actual content file.
awk '
FNR==NR{
look[">"$0]
next
}
/^>/{
if($0 in look){ found=1 }
else { found="" }
}
found
' look_file Input_file
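Here look_file simply holds the bare ids, one per line, e.g.:
$ cat look_file
ENST00001234.1
ENST00005546.9
With the sample Input_file, this reproduces the expected output shown in the question.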
For a single text search, try the following, written and tested with the shown samples in GNU awk. Put the string that needs to be searched for in the variable search, as per your requirement.
awk -v search="ENST00001234.1" '
/^>/{
if($0==">"search){ found=1 }
else { found="" }
}
found
' Input_file
Explanation: a detailed explanation of the above.
awk -v search="ENST00001234.1" ' ##Start the awk program and set the search variable to the value we need to look for.
/^>/{ ##If a line starts with >, do the following.
if($0==">"search){ found=1 } ##If the current line equals ">" followed by the search value, set found to 1.
else { found="" } ##Otherwise set found to NULL.
}
found ##If found is set, print the current line.
' Input_file ##Mention the Input_file name here.
There is no need to reinvent the wheel. There are several bioinformatics tools for this task (extract fasta sequences using a list of sequence ids). For example, seqtk subseq:
Extract sequences with names in file name.lst, one sequence name per line:
seqtk subseq in.fq name.lst > out.fq
It works with fasta files as well.
Use conda install seqtk or conda create --name seqtk seqtk to install the seqtk package, which has other useful functionalities, and is very fast.
SEE ALSO:
Retrieve FASTA sequences using sequence ids
Extract fasta sequences from a file using a list in another file
How To Extract A Sequence From A Big (6Gb) Multifasta File?
extract sequences from multifasta file by ID in file using awk

Extract first 5 fields from semicolon-separated file

I have a semicolon-separated file with 10 fields on each line. I need to extract only the first 5 fields.
Input:
A.txt
1;abc ;xyz ;0.0000;3.0; ; ;0.00; ; xyz;
Output file:
B.txt
1;abc ;xyz ;0.0000;3.0;
You can cut fields 1-5:
cut -d';' -f1-5 file
If the ending ; is needed, you can append it with another tool, or use grep (assuming your grep has the -P option):
kent$ grep -oP '^(.*?;){5}' file
1;abc ;xyz ;0.0000;3.0;
In sed you can match the pattern [^;]*; five times:
sed 's/\(\([^;]*;\)\{5\}\).*/\1/' A.txt
or, when your sed supports -r:
sed -r 's/(([^;]*;){5}).*/\1/' A.txt
cut -f-5 -d";" A.txt > B.txt
Where:
- -f selects the fields (-5 means from the start through field 5)
- -d provides the delimiter (here the semicolon)
Given that the input is field-based, using awk is another option:
awk 'BEGIN { FS=OFS=";"; ORS=OFS"\n" } { NF=5; print }' A.txt > B.txt
If you're using BSD/macOS, insert $1=$1; after NF=5; to make this work.
FS=OFS=";" sets both the input field separator, FS, and the output field separator, OFS, to a semicolon.
The input field separator is used to break each input record (line) into fields.
The output field separator is used to rebuild the record when individual fields are modified or the number of fields are modified.
ORS=OFS"\n" sets the output record separator to a semicolon followed by a newline, given that a trailing ; should be output.
Simply omit this statement if the trailing ; is undesired.
{ NF=5; print } truncates the input record to 5 fields, by setting NF, the number (count) of fields to 5 and then prints the modified record.
It is at this point that OFS comes into play: the first 5 fields are concatenated to form the output record, using OFS as the separator.
Note: BSD/macOS Awk doesn't modify the record just by setting NF; you must additionally modify a field explicitly for the changed field count to take effect: a dummy operation such as $1=$1 (assigning field 1 to itself) is sufficient.
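Putting that together, the BSD/macOS-friendly form of the command above would be:
awk 'BEGIN { FS=OFS=";"; ORS=OFS"\n" } { NF=5; $1=$1; print }' A.txt > B.txt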
awk '{print $1,$2,$3}' A.txt >B.txt
1;abc ;xyz ;0.0000;3.0;
(This only happens to work because, with awk's default whitespace field separator, the sample line's first three space-separated chunks are 1;abc, ;xyz and ;0.0000;3.0;. It will break on lines with a different spacing layout.)

convert white space to tab on first line of a tab delimited file

I have multiple tab delimited files with the same column headers. However, the headers (1st row of the files) are delimited by white spaces instead of tabs. How can I convert the white space to tabs on the first line of a tab-delimited file?
You can use sed for one line only:
sed -i.bak $'1s/ /\t/g' file.csv
(The $'…' is shell ANSI-C quoting, available in bash, ksh and zsh; it turns \t into a literal tab before sed sees it.)
Sounds like you can use awk:
awk -v OFS='\t' 'NR == 1 { $1 = $1 } 1' file
Assigning the first field of the first line $1 to itself causes awk to reformat the line, inserting the output field separator OFS (defined as a tab character). 1 is the shortest true condition, so awk does the default: { print } for every line.
To overwrite "in-place", use a temp file:
awk -v OFS='\t' 'NR == 1 { $1 = $1 } 1' file > tmp && mv tmp file
Note that this will interpret any number of spaces as a single field separator.
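A quick way to check the result is to make the tabs visible with GNU cat -A (the two-line sample input here is made up):
$ printf 'a b c\n1\t2\t3\n' | awk -v OFS='\t' 'NR == 1 { $1 = $1 } 1' | cat -A
a^Ib^Ic$
1^I2^I3$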
