Write specific columns of one file into another file: can anyone give a more concise solution? - shell

I have a troublesome problem: writing a specific column of one file into another file. In more detail, I have file1 as shown below, and I need to write its first column, excluding the header row, into file2 as a single line with the fields separated by the '|' sign. I have a solution using sed and awk, but it is missing the last step of inserting the result at the top of file2, and given how powerful awk, sed, etc. are, I believe there should be a more concise solution. So, who can offer me a more concise script?
sed '1d;s/ .//' ./file1 | awk '{printf "%s|", $1; }' | awk '{if (NR != 0) {print substr($1, 1, length($1) - 1)}}'
file1:
col_name data_type comment
aaa string null
bbb int null
ccc int null
file2:
xxx ccc(whatever is this)
The result of file2 should be this :
aaa|bbb|ccc
xxx ccc(whatever is this)

Assuming there's no whitespace in the column 1 data, in increasing length:
sed -i "1i$(awk 'NR > 1 {print $1}' file1 | paste -sd '|')" file2
or
ed file2 <<END
1i
$(awk 'NR > 1 {print $1}' file1 | paste -sd '|')
.
wq
END
or
{ awk 'NR > 1 {print $1}' file1 | paste -sd '|'; cat file2; } | sponge file2  # sponge is from moreutils
or
mapfile -t lines < <(tail -n +2 file1)
col1=( "${lines[#]%%[[:blank:]]*}" )
new=$(IFS='|'; echo "${col1[*]}"; cat file2)
echo "$new" > file2

This might work for you (GNU sed):
sed -z 's/[^\n]*\n//;s/\(\S*\).*/\1/mg;y/\n/|/;s/|$/\n/;r file2' file1
Process file1 "wholemeal" by using the -z command line option.
Remove the first line.
Remove all columns other than the first.
Replace newlines by |'s
Replace the last | by a newline.
Append file2.
Alternative using just command line utils:
tail -n +2 file1 | cut -d' ' -f1 | paste -s -d'|' | cat - file2
Tail file1 from line 2 onwards.
Using the results from the tail command, isolate the first column using a space as the column delimiter.
Using the results from the cut command, serialize all the lines into one, delimited by |'s.
Using the result from the paste command, append file2 to it using the cat command.

I'm learning awk at the moment.
awk 'BEGIN{a=""} {if(NR>1) a = a $1 "|"} END{a=substr(a, 1, length(a)-1); print a}' file1
Edit: Here's another version that uses an array:
awk 'NR > 1 {a[++n]=$1} END{for(i=1; i<=n; ++i){if(i>1) printf("|"); printf("%s", a[i])} printf("\n")}' file1

Here is a simple Awk script to merge the files as per your spec.
awk '# From the first file, merge all lines except the first
NR == FNR { if (FNR > 1) { printf "%s%s", sep, $1; sep = "|"; } next }
# We are in the second file; add a newline after data from first file
FNR == 1 { printf "\n" }
# Simply print all lines from file2
1' file1 file2
The NR==FNR condition is true when we are reading the first input file: the overall line number NR is equal to the line number within the current file FNR. The final 1 is a common idiom for printing all input lines which make it this far into the script (the next in the first block prevents lines from the first file from reaching this far).
For conciseness, you can remove the comments.
awk 'NR == FNR { if (FNR > 1) { printf "%s%s", sep, $1; sep = "|"; } next }
FNR == 1 { printf "\n" } 1' file1 file2
Generally speaking, Awk can do everything sed can do, so piping sed into Awk (or vice versa) is nearly always a useless use of sed.
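For instance, the question's three-stage sed-and-awk pipeline collapses into a single awk call. This sketch only produces the joined line; inserting it at the top of file2 still has to be done separately, as in the answers above:
awk 'NR > 1 { printf "%s%s", sep, $1; sep = "|" } END { print "" }' file1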

Related

using awk how to merge 2 files, say A & B and do a left outer join function and include all columns in both files

I have multiple files with different numbers of columns. I need to merge the first and second files with a left outer join (relative to the first file) in awk, and print all columns of both files, matching on the first column of each.
I have tried the code below to get close to my output, but I can't print the empty fields where no matching number is found in the second file. join needs sorted input and takes more time than awk, and my files are big, around 30 million records.
awk -F ',' '{
    if (NR==FNR) { r[$1]=$0 }
    else {
        if ($1 in r)
            r[$1] = r[$1] gensub($1,"",1)
    }
} END { for (i in r) { print r[i] } }' file1 file2
file1
number,column1,column2,..columnN
File2
numbr,column1,column2,..columnN
Output
number,file1.column1,file1.column2,..file1.columnN,file2.column1,file2.column3...,file2.columnN
file1
1,a,b,c
2,a,b,c
3,a,b,c
5,a,b,c
file2
1,x,y
2,x,y
5,x,y
6,x,y
7,x,y
desired output
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,,
5,a,b,c,x,y
$ cat tst.awk
BEGIN { FS=OFS="," }
NR==FNR {
    tail = gensub(/[^,]*,/,"",1)
    if ( FNR == 1 ) {
        empty = gensub(/[^,]/,"","g",tail)
    }
    file2[$1] = tail
    next
}
{ print $0, ($1 in file2 ? file2[$1] : empty) }
$ awk -f tst.awk file2 file1
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y
The above uses GNU awk for gensub(), with other awks it's just one more step to do [g]sub() on the appropriate variable after initially assigning it.
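For example, here is a sketch of that extra step for POSIX awks, using sub() and gsub() on copies of the record instead of gensub():
BEGIN { FS=OFS="," }
NR==FNR {
    tail = $0
    sub(/[^,]*,/, "", tail)        # drop the key field
    if ( FNR == 1 ) {
        empty = tail
        gsub(/[^,]/, "", empty)    # keep only the commas
    }
    file2[$1] = tail
    next
}
{ print $0, ($1 in file2 ? file2[$1] : empty) }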
An interesting (to me at least!) alternative you might want to test for a performance difference is:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR==FNR {
    tail = gensub(/[^,]*,/,"",1)
    idx[$1] = NR
    file2[NR] = tail
    if ( FNR == 1 ) {
        file2[""] = gensub(/[^,]/,"","g",tail)
    }
    next
}
{ print $0, file2[idx[$1]] }
$ awk -f tst.awk file2 file1
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y
but I don't really expect it to be any faster and it MAY even be slower.
You can try:
awk 'BEGIN{FS=OFS=","}
FNR==NR{d[$1]=substr($0,index($0,",")+1); next}
{print $0, ($1 in d?d[$1]:",")}' file2 file1
You get:
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y
join to the rescue:
$ join -t $',' -a 1 -e '' -o 0,1.2,1.3,1.4,2.2,2.3 file1.txt file2.txt
Explanation:
-t $',': Field separator token.
-a 1: Do not discard records from file 1 if not present in file 2.
-e '': Missing records will be treated as an empty field.
-o: Output format.
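Note that join requires both inputs to be sorted on the join field. If they are not already sorted, something like this sketch with process substitution should work:
$ join -t ',' -a 1 -e '' -o 0,1.2,1.3,1.4,2.2,2.3 <(sort file1.txt) <(sort file2.txt)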
file1.txt
1,a,b,c
2,a,b,c
3,a,b,c
5,a,b,c
file2.txt
1,x,y
2,x,y
5,x,y
6,x,y
7,x,y
Output
1,a,b,c,x,y
2,a,b,c,x,y
3,a,b,c,,
5,a,b,c,x,y

Replacing a specific field using awk, sed or any POSIX tool

I have a huge file1, which has values as follows:
a 1
b 2
c 3
d 4
e 5
I have another huge file2, which is colon delimited with seven fields as follows:
a:2543:2524:2542:252:536365:54654
c:5454:5454:654:54:87:54
d:87:65:1:98:32:87
I want to look up each variable from file1 and replace the value of the 7th column of the matching line in file2 with the variable's value, so the output should be as follows:
a:2543:2524:2542:252:536365:1
c:5454:5454:654:54:87:3
d:87:65:1:98:32:4
Maybe this awk will work, assuming the files are as posted:
awk 'NR==FNR{a[$1]=$2; next} $1 in a{$7=a[$1]} 1' file1 FS=: OFS=: file2
a:2543:2524:2542:252:536365:1
c:5454:5454:654:54:87:3
d:87:65:1:98:32:4
So I came up with a solution; it ended up being a couple of lines of code. Maybe there is a better way to do it, but this works!
while read -r line ; do
    var1=$(echo "$line" | awk '{print $1}')
    var2=$(echo "$line" | awk '{print $2}')
    awk -v var1="$var1" -v var2="$var2" -F ':' 'BEGIN { OFS = ":" } $1==var1 {sub(".*",var2,$7)} {print}' file2 > file2.tmp
    mv file2.tmp file2
done < file1
cat file2
This should work; it assumes both files are sorted on the first column. I'd be very interested in any performance comparisons with your solution for very large files, both speed and memory. --complement is Linux-specific but easily replaced if not available (a portable alternative is sketched after the script):
file0=$1 # file with the replacement values
file1=$2 # file whose 7th field is to be replaced
# normalize on colon delimiter
tr ' ' : <"$file0" |
# join on first field
join -t: "$file1" - |
# delete the old column 7 (the joined-in value takes its place)
cut --complement -d: -f7
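If --complement is unavailable, the same effect comes from selecting the kept fields explicitly; a sketch assuming the joined lines always have eight fields (the key, the six remaining original fields, and the appended replacement value):
cut -d: -f1-6,8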

How can I use bash to split only some elements of a text file?

I'm trying to figure out how to make a .txt file (myGeneFile.txt) of IDs and genes that looks like this:
Probe Set ID Gene Symbol
1007_s_at DDR1 /// MIR4640
1053_at RFC2
117_at HSPA6
121_at PAX8
1255_g_at GUCA1A
1294_at MIR5193 /// UBA7
into this:
DDR1
MIR4640
RFC2
HSPA6
PAX8
GUCA1A
MIR5193
UBA
First I tried doing this:
cat myGeneFile.txt | tail -n +2 | awk '{split($2,a,"///"); print a[1] "\t" a[2] "\t" a[3] "\t" a[4] "\t" a[5];}' > test.txt
(i.e., I removed the top (header) line of the file, tried splitting the second field on the delimiter ///, and then printed any genes that might appear)
Then, I tried doing this:
cat myGeneFile.txt | tail -n +2 | awk '{print $2}' | grep -o -E '\w+' > test.txt
(literally listing out all of the words in the second column)
I got the same output in both cases: a long list of just the first gene in each row (e.g., MIR4640 and UBA7 were missing).
Any ideas?
EDIT: Thanks @CodeGnome for your help. I ended up using that code and modifying it, because I discovered that my file had between 1 and 30 different gene names in each row. So, I used:
awk 'NR == 1 {next}
    {
        gsub("///", "")
        print $2
    }
    {
        for (i=3; i<=30; i++)
            if ($i) {print $i}
    }' myGeneFile.txt > test2.txt
@GlenJackson also had a solution that worked really well:
awk 'NR>1 {for (i=2; i<=NF; i++) if ($i != "///") print $i}' file
My awk take:
awk 'NR>1 {for (i=2; i<=NF; i++) if ($i != "///") print $i}' file
or sed
sed '
1d # delete the header
s/[[:blank:]]\+/ /g # squeeze whitespace
s/^[^ ]\+ // # remove the 1st word
s| ///||g # delete all "///" words
s/ /\n/g # replace spaces with newlines
' file
Use Conditional Print Statements Inside an AWK Action
The following gives the desired output by removing unwanted characters with sub(), and then using multiple print statements to create the line breaks. The second print statement is conditional, and only triggers when the third field isn't empty; this avoids creating extraneous empty lines in the output.
$ awk 'NR == 1 {next}
    {
        sub("///", "")
        print $2
        if ($3) {print $3}
    }' myGeneFile.txt
DDR1
MIR4640
RFC2
HSPA6
PAX8
GUCA1A
MIR5193
UBA7
This will work:
tail -n+2 tmp | sed -E 's/ +/ /g' | cut -d' ' -f2- | sed 's_ */// *_\n_g'
Here's what is happening:
tail -n+2 Strip off the header
sed -E 's/ +/ /g' Condense the whitespace
cut -d' ' -f2- Use cut to select all fields but the first, using a single space as the delimiter
sed 's_ */// *_\n_g' Convert all /// (and any surrounding whitespace) to a newline
You don't need the initial cat, it's usually better to simply pass the input file as an argument to the first command. If you want the file name in a place that is easy to change, this is a better option as it avoids the additional process (and I find it easier to change the file if it's at the end):
(tail -n+2 | sed -E 's/ +/ /g' | cut -d' ' -f2- | sed 's_ */// *_\n_g') < tmp
Given the existing input and the modified requirement (from the comment on Morgen's answer) the following should do what you want (for any number of gene columns).
awk 'NR > 1 {
    for (i = 2; i <= NF; i++) {
        if ($i == "///")
            continue
        print $i
    }
}' input.txt
Your criteria for selecting which strings to output is not entirely clear, but here's another command that at least produces your expected output:
tail -n +2 myGeneFile.txt | grep -oE '\<[A-Z][A-Z0-9]*\>'
It basically just 1) skips the first line and 2) finds all other words (delimited by non-word characters and/or start/end of line) that consist entirely of uppercase letters or digits, with the first being a letter.

Comparing values in two files

I am comparing two files, each having one column and n number of rows.
file 1
vincy
alex
robin
file 2
Allen
Alex
Aaron
ralph
robin
If the data of file1 is present in file2, it should return 1, or else 0, in a tab-separated file.
Something like this
vincy 0
alex 1
robin 1
What I am doing is
#!/bin/bash
for i in `cat file1 `
do
cat file2 | awk '{ if ($1=="'$i'") print 1 ; else print 0 }'>>binary
done
The above code is not giving me the output which I am looking for.
Kindly have a look and suggest a correction.
Thank you
The simple awk solution:
awk 'NR==FNR{ seen[$0]=1 } NR!=FNR{ print $0 " " seen[$0] + 0}' file2 file1
A simple explanation: for the lines in file2, NR==FNR, so the first action is executed and we simply record that a line has been seen. In file1, the 2nd action is taken and the line is printed, followed by a space, followed by a "0" or a "1", depending on if the line was seen in file2.
AWK loves to do this kind of thing.
awk 'FNR == NR {a[tolower($1)]; next} {f = 0; if (tolower($1) in a) {f = 1}; print $1, f}' file2 file1
Swap the positions of file2 and file1 in the argument list to make file1 the dictionary instead of file2.
When FNR (the record number in the current file) and NR (the record number of all records so far) are equal, then the first file is the one being processed. Simply referencing an array element brings it into existence. This sets up the dictionary. The next instruction reads the next record.
Once FNR and NR aren't equal, subsequent file(s) are being processed and their data is looked up in the dictionary array.
The following code should do it.
Take a close look at the BEGIN and END sections.
#!/bin/bash
rm -f binary
for i in $(cat file1); do
    awk 'BEGIN {isthere=0;} { if ($1=="'$i'") isthere=1;} END { print "'$i'",isthere}' < file2 >> binary
done
There are several decent approaches. You can simply use line-by-line set math:
{
    grep -xF -f file2 file1 | sed $'s/$/\t1/'
    grep -vxF -f file2 file1 | sed $'s/$/\t0/'
} > somefile.txt
Another approach would be to simply combine the files and use uniq -c, then just swap the numeric column with something like awk:
sort file1 file2 | uniq -c | awk '{ print $2"\t"$1 }'
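As written, this prints the occurrence count (1 or 2) rather than the requested 0/1 flag, and it also emits lines that appear only in file2. A sketch that restricts the output to file1's lines and maps counts to flags, assuming neither file contains duplicate lines:
sort file1 file2 | uniq -c |
    awk 'NR==FNR { want[$1]; next }
         $2 in want { print $2 "\t" ($1 > 1 ? 1 : 0) }' file1 -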
The comm command exists to do this kind of comparison for you.
The following approach does only one pass and scales well to very large input lists:
#!/bin/bash
while read -r; do
    if [[ $REPLY = $'\t'* ]] ; then
        printf '%s\t1\n' "${REPLY#?}"
    else
        printf '%s\t0\n' "$REPLY"
    fi
done < <(comm -2 <(tr '[A-Z]' '[a-z]' <file1 | sort) <(tr '[A-Z]' '[a-z]' <file2 | sort))
See also BashFAQ #36, which is directly on-point.
Another solution, if you have Python installed; if you're familiar with Python, the solution only needs a bit of formatting.
#!/usr/bin/env python
f1 = [line.strip() for line in open('file1')]
f2 = [line.strip() for line in open('file2')]
f1_in_f2 = [int(x in f2) for x in f1]
for n, c in zip(f1, f1_in_f2):
    print '%s\t%s' % (n, c)

get Nth line in file after parsing another file

I have one of my large file as
foo:43:sdfasd:daasf
bar:51:werrwr:asdfa
qux:34:werdfs:asdfa
foo:234:dfasdf:dasf
qux:345:dsfasd:erwe
...............
Here the 1st column (foo, bar, qux, etc.) holds file names, and the 2nd column (43, 51, 34, etc.) holds line numbers. I want to print the Nth line (specified by the 2nd column) of each file (specified by the 1st column).
How can I automate the above in a Unix shell?
Actually, the above file is generated while compiling, and I want to print the warning lines in the code.
-Thanks,
while IFS=: read name line rest
do
    head -n "$line" "$name" | tail -1
done < input.txt
while IFS=: read file line message; do
echo "$file:$line - $message:"
sed -n "${line}p" "$file"
done <yourfilehere
awk 'NR==4 {print}' yourfilename
or
cat yourfilename | awk 'NR==4 {print}'
The above will work for the 4th line of your file. You can change the number as per your requirement.
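To tie this to the question's input file, the line number can be passed into awk with -v instead of being hard-coded; a sketch combining it with the read loop from the answers above:
while IFS=: read -r name line rest; do
    awk -v n="$line" 'NR == n { print; exit }' "$name"
done < input.txt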
Just in awk, but probably worse performance than the answers by @kev or @MarkReed.
However, it does process each file just once. Requires GNU awk.
gawk -F: '
BEGIN {OFS=FS}
{
    files[$1] = 1
    lines[$1] = lines[$1] " " $2
    msgs[$1, $2] = $3
}
END {
    for (file in files) {
        split(lines[file], l, " ")
        n = asort(l)
        count = 0
        for (i=1; i<=n; i++) {
            while (++count <= l[i])
                getline line < file
            print file, l[i], msgs[file, l[i]]
            print line
        }
        close(file)
    }
}
'
This might work for you (GNU sed):
sed 's/^\([^:]*\):\([^:]*\).*/sed -n "\2p" \1/' file |
sort -k4,4 |
sed ':a;$!N;s/^\(.*\)\(".*\)\n.*"\(.*\)\2/\1;\3\2/;ta;P;D' |
sh
sed -nr '3{s/^([^:]*):([^:]*):.*$/\1 \2/;p}' namesNnumbers.txt
qux 34
-n: no output by default,
-r: extended regular expressions (simplifies using the parens),
on line 3, do {...;p} (print at the end),
s: substitute the whole name:number:rest line with just "name number"
So to work with the values:
fnUln=$(sed -nr '3{s/^([^:]*):([^:]*):.*$/\1 \2/;p}' namesNnumbers.txt)
fn=$(echo ${fnUln/ */})
ln=$(echo ${fnUln/* /})
sed -n "${ln}p" "$fn"
