Sort first column while merging second column - sorting

I am looking for a solution to the following problem. I have a text file with geneIDs in the first column and single GO terms in the second. Because each gene has multiple annotated GO terms, identical geneIDs occur multiple times (with different GO terms in the second column). I only want unique geneIDs with their GO terms merged:
I have:
TRINITY_DN10151_c0_g1 GO:0004175
TRINITY_DN10151_c0_g1 GO:0004252
TRINITY_DN10151_c0_g1 GO:0006508
TRINITY_DN10151_c0_g1 GO:0008233
TRINITY_DN102626_c42_g1 GO:0005198
TRINITY_DN102626_c42_g1 GO:0042302
TRINITY_DN102626_c58_g1 GO:0004175
I want:
TRINITY_DN10151_c0_g1 GO:0004175-GO:0004252-GO:0006508-GO:0008233
TRINITY_DN102626_c42_g1 GO:0005198-GO:0042302
etc..
Further, it is important (and I have truly no idea how to solve this) that each GO term combination occurs in only one form. So if two genes have the same GO term combination (A, B and C) in column 2, they should both end up with A-B-C, not one with A-B-C and the other with A-C-B.
I have tried to use sort and uniq, but in the end I was only deleting rows.
Can someone help me with a unix solution?

You can do it with a rather cryptic sed command. (Every sed command is trivial or cryptic.)
sort filename | sed -e :a -e '$!N;s/^\([^ ]* \) *\(.*\)\n\1 */\1\2-/;ta' -e 'P;D'
Loosely translated, this says "Append the next line to this one and replace the newline and second gene name with a hyphen, as long as the two gene names are the same".
And the sort brings identical gene IDs onto adjacent lines so they can be merged, which also keeps the GO term order consistent across genes.
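If you prefer awk, a minimal sketch of the same merge (assuming the file is called filename and the columns are whitespace-separated) would be:
sort filename | awk '
    $1 == prev { line = line "-" $2; next }                    # same gene: append the GO term
    { if (line != "") print line; prev = $1; line = $1 " " $2 }
    END { if (line != "") print line }'
Like the sed version, it relies on the initial sort to bring identical gene IDs next to each other.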

Related

How to extract specific rows based on row number from a file

I am working on an RNA-Seq data set consisting of around 24000 rows (genes) and 1100 columns (samples), which is tab separated. For the analysis, I need to choose a specific gene set. It would be very helpful if there were a method to extract rows based on row number; that would be easier for me than working with the gene names.
Below is an example of the data (4x4):
gene     Sample1    Sample2    Sample3
A1BG     5658       5897       6064
AURKA    3656       3484       3415
AURKB    9479       10542      9895
From this, say for example, I want rows 1, 3 and 4, without a specific pattern.
I have also asked on biostars.org.
You may use a for-loop to build the sed options like below:
var="-n"
for i in 1 3,4    # Put your space-separated line numbers/ranges here
do
    var="${var} -e ${i}p"
done
sed $var filename
Note: in any case, the requirement mentioned here would still be a pain, as it involves too much typing.
Say you have a file, or a program that generates, a list of the line numbers you want. You can edit that list with sed to turn it into a sed script that prints those lines, and pass it to a second invocation of sed.
In concrete terms, say you have a file called lines that says which lines you want (or it could equally be a program that generates the lines on its stdout):
1
3
4
You can make that into a sed script like this (command followed by its output):
sed 's/$/p/' lines
1p
3p
4p
Now you can pass that to another sed as the commands to execute:
sed -n -f <(sed 's/$/p/' lines) FileYouWantLinesFrom
This has the advantage of being independent of the maximum argument-list length, because the sed commands come from a pseudo-file, i.e. they are not passed as arguments.
If you don't like/use bash and process substitution, you can do the same like this:
sed 's/$/p/' lines | sed -n -f /dev/stdin FileYouWantLinesFrom
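If awk is an option, the same idea fits in a single command (a sketch that reuses the same lines file of wanted line numbers):
awk 'NR == FNR { want[$1]; next } FNR in want' lines FileYouWantLinesFrom
The first pass reads the line numbers into an array; the second pass prints every line of the data file whose number is in that array.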

grep: keep lines by number in specific column

I know how to do it with awk; for example, to keep lines which contain the number 3 in the second column: $ awk '$2 == 3'
But how to do the same with only grep?
What about for first column?
Grep is not great for this; awk is better. But assuming your columns are separated by spaces, you want:
grep -E '^[^ ]+ +3( |$)'
Explanation: find something that has a start of line, followed by one or more non-space characters (first column), then one or more space characters (column separator), then the number 3, then either a space (because there's another column) or end of line (if there's no other column).
(Updated to fix syntax after testing.)
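For the first column there is nothing to skip, so the pattern is simpler (an untested sketch along the same lines):
grep -E '^3( |$)' filename
That is: a 3 at the start of the line, followed by either a space or the end of the line.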
Here is the longer explanation for my mysterious command grep -P '^[^\t]*\t3\t' your_file from the comments:
I assumed that the column delimiter is a tab. grep without -P would require some awkward workarounds to match a literal tab directly. The -P option makes it possible to just write \t without any problems. If, for example, your delimiter is ; then you can replace the \t with ; and you don't need the -P option.
Having said that, let's explain the idea behind the regular expression. You said you want to match a 3 in the second column:
^ means: at the beginning of the line
[^\t]* means: zero or more (*) occurrences of something that is not a tab ([^\t]; here the ^ means "not a")
followed by tab
followed by 3
followed by tab
Now we have effectively expressed the idea that we need a 3 as the content of the second column (\t3\t) and we are not interested in the precise content of the first column. The ^[^\t]*\t is only necessary to express the idea "what follows is in the second column".
If you want to match something in the fourth column, you could use this to "skip" the first three columns and match a 4 in the fourth column:
^([^\t]*\t){3}4 (note the parentheses and the {3}).
As you can see, there are many details to get right, and awk is much more elegant and easier.
You can read this up in the documentation of grep, and then you will need to study something about regular expressions.
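For comparison, the awk equivalents are short; a minimal sketch, assuming tab-separated columns:
awk -F'\t' '$2 == 3' your_file    # 3 in the second column
awk -F'\t' '$1 == 3' your_file    # 3 in the first column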

Change field name and edit a csv file

I have a CSV file that I am looking at and trying to manipulate in bash. There are several things that I am trying to edit. The structure is like so, where the first row contains the column (field) headers:
cat,dog,hippopotamus,zebra
1,,3,2
three species, five species,only one,multiple
at,home, at, home, wild, wild
How can I edit the field (column) names in the csv?
head -1 test.csv
shows what the field (column) names are, but the output still contains the commas, and it doesn't let me change the field names at all.
The other part is that I only want to edit titles that are greater than 8 characters in length, in which case I will just take the first 8 characters. I'm guessing I would use some sort of loop based on string length, but since I don't know how to even edit the field name of just one column, I'm not sure how to do this. In the scenario above, that means changing hippopotamus to hippopot.
How can I replace empty cells in the csv to NA or NULL?
sed -i 's/ /NULL/g'
I thought this would work, but it doesn't.
Some of the cells have commas within them, messing with the , delimiter. I used the code below and it seems to work, but is there a better/safer way to do this?
sed -i "s/, /_/g"
Or, in a similar situation: if multiple columns contain strings that sometimes have spaces within them, but I only want to remove the spaces in one of the columns while leaving the other columns alone, how can I achieve this?
sed -i 's/ //g' test.csv
Sed allows a line number as a command prefix, so that a command works only on that single line (or a range of numbers, to work on the lines in that range). Try something like this:
sed -e '1s/cat/Feline/' test.csv > test2.csv
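Along the same lines, you can restrict a substitution to the header row to shorten every field name longer than 8 characters to its first 8 characters (a sketch, assuming the headers themselves contain no quoted commas):
sed -e '1s/\([^,]\{8\}\)[^,]*/\1/g' test.csv > test2.csv
With the sample data above this turns hippopotamus into hippopot and leaves the shorter names untouched.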
CSV files will store an empty field as either a comma at start of line, a comma at end of line, or a comma followed by another comma:
Field1,Field2,Field3
,"<-- empty field1",field3
field1,,"<-- empty field2"
field1,"empty field3-->",
You can use the following sed commands to fix these:
sed -e 's/^,/NA,/;s/,$/,NA/' -e ':loop' -e 's/,,/,NA,/g;tloop' test.csv
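Applied to the small example above, that produces:
Field1,Field2,Field3
NA,"<-- empty field1",field3
field1,NA,"<-- empty field2"
field1,"empty field3-->",NA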
Your solution appears good. Be aware, however, that CSV is allowed to have quotes around any string containing a comma; that's legitimate CSV. It's also the point where sed stops being a good tool for manipulating CSV files. ;-) One suggestion would be to replace "interior" commas with "%2C", which is the URL encoding for a comma. That's pretty distinctive, and at least somewhat standard.
sed numbers groups starting from the left-most paren. If your groups match multiple times, you can only get the last match contents, but if an outer group contains the multi-match, the outer group is still valid. (I assume here that you have already replaced the "interior" commas with something else.)
sed -e ':loop' -e 's/^\(\([^,]*,\)\{3\}\)\([^ ,]*\) /\1\3/;tloop' test.csv
This will remove the first space in column 4, then loop. It will stop when it finds the comma that ends the column, or end-of-line.
Note that the first part, called \1, is general. You can replace the 3 with whatever field, minus one, and that will get you to the start of the field. The actual work is in the second part, \3, where you can do what you like. (Note that \2 is included within \1, and not particularly useful.)
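For example, to remove the spaces in column 2 instead, replace the 3 with 1 (a sketch under the same assumption that interior commas have already been replaced):
sed -e ':loop' -e 's/^\(\([^,]*,\)\{1\}\)\([^ ,]*\) /\1\3/;tloop' test.csv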

bash sort characters before numbers

How do I call sort so that the following list is the result of sorting any permutation of it? In particular, standard sort sorts [0-9] before [A-Za-z], but I need [A-Za-z] before [0-9]. I read the man page but nothing seems to fit. I read that the locale influences the sorting of individual characters, but which locale is the right one?
01_abc
02_abc
02_01_abc
02_02_abc
02_02_01_abc
02_02_02_abc
02_02_03_abc
02_03_abc
03_abc
04_abc
It's pretty ugly, but given a file data containing the sample data, and a program shuffle that randomizes the sequence of the lines in the file, and sed and sort, you can use:
shuffle data |
sed -e 's/[a-z][a-z]*/a:&/g' -e 's/[0-9][0-9]/d:&/g' |
sort |
sed 's/[ad]://g'
The first sed tags strings of letters with a: (for alpha), and strings of digits with d: (for digit), where the crucial point is that a comes before d in any plausible (ascending) sort order. This means that the data gets sorted with letters before digits. The second sed removes the tags.
Example of the steps:
$ shuffle data
02_03_abc
04_abc
02_02_abc
02_02_02_abc
03_abc
02_02_01_abc
02_01_abc
02_abc
02_02_03_abc
01_abc
$ shuffle data | sed -e 's/[a-z][a-z]*/a:&/g' -e 's/[0-9][0-9]/d:&/g'
d:01_a:abc
d:04_a:abc
d:02_d:02_d:02_a:abc
d:02_a:abc
d:03_a:abc
d:02_d:03_a:abc
d:02_d:02_a:abc
d:02_d:02_d:03_a:abc
d:02_d:02_d:01_a:abc
d:02_d:01_a:abc
$ shuffle data | sed -e 's/[a-z][a-z]*/a:&/g' -e 's/[0-9][0-9]/d:&/g' | sort
d:01_a:abc
d:02_a:abc
d:02_d:01_a:abc
d:02_d:02_a:abc
d:02_d:02_d:01_a:abc
d:02_d:02_d:02_a:abc
d:02_d:02_d:03_a:abc
d:02_d:03_a:abc
d:03_a:abc
d:04_a:abc
$ shuffle data | sed -e 's/[a-z][a-z]*/a:&/g' -e 's/[0-9][0-9]/d:&/g' | sort | sed 's/[ad]://g'
01_abc
02_abc
02_01_abc
02_02_abc
02_02_01_abc
02_02_02_abc
02_02_03_abc
02_03_abc
03_abc
04_abc
$
It is important to tag the letter sequences before tagging the number sequences; otherwise the letter-tagging pass would also tag the d in the d: markers. Note that each time it is run (at least for small numbers of runs), shuffle produces a different permutation of the data.
This technique, modifying the input so that the key sorts the way you want, can be applied to other sort operations too. Sometimes (often, even) it is better to prefix the key data to the unmodified line, separating the parts with, for example, a tab; that makes it easier to remove the sort key afterwards. For example, if you need to sort dates with alphabetic month names, you may well need to map the month names to numbers.
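As a rough sketch of that idea, suppose each line of a hypothetical file dates starts with a three-letter month name; you could prepend a zero-padded month number as a tab-separated key, sort, and strip the key again:
awk 'BEGIN { n = split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m)
             for (i = 1; i <= n; i++) num[m[i]] = sprintf("%02d", i) }
     { print num[$1] "\t" $0 }' dates | sort | cut -f2-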

How to remove duplicate phrases from a document?

Is there a simple way to remove duplicate content from a large text file? It would be great to be able to detect duplicate sentences (as separated by "."), or even better to find duplicates of sentence fragments (such as 4-word pieces of text).
Removing duplicate words is easy enough, as other people have pointed out. Anything more complicated than that, and you're into Natural Language Processing territory. Bash isn't the best tool for that -- you need a slightly more elegant weapon for a civilized age.
Personally, I recommend Python and its NLTK (Natural Language Toolkit). Before you dive into that, it's probably worth reading up a little bit on NLP so that you know what you actually need to do. For example, the "4-word pieces of text" are known as 4-grams (n-grams in the generic case) in the literature. The toolkit will help you find those, and more.
Of course, there are probably alternatives to Python/NLTK, but I'm not familiar with any.
Remove duplicate lines while keeping the original order:
nl -w 8 "$infile" | sort -k2 -u | sort -n | cut -f2
The first stage of the pipeline prepends every line with its line number to record the original order. The second stage sorts the original data on the second field, keeping only unique lines (-u).
The third restores the original order by sorting numerically on the first column. The final cut removes the first column.
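The same effect, keeping only the first occurrence of each line while preserving the order, is also available as a common awk idiom:
awk '!seen[$0]++' "$infile"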
You can remove duplicate lines (which have to be exactly equal) with uniq if you sort your textfile first.
$ cat foo.txt
foo
bar
quux
foo
baz
bar
$ sort foo.txt
bar
bar
baz
foo
foo
quux
$ sort foo.txt | uniq
bar
baz
foo
quux
Apart from that, there's no simple way of doing what you want. (How will you even split sentences?)
You can use grep with backreferences.
If you write grep "\([[:alpha:]]*\)[[:space:]]*\1" -o <filename> it will match any two identical words following one another. I.e. if the file content is this is the the test file , it will output the the.
(Explanation: [[:alpha:]] matches any character a-z or A-Z, the asterisk * after it means it may appear any number of times, \(\) is used for grouping so that it can be backreferenced later, then [[:space:]]* matches any number of spaces and tabs, and finally \1 matches the exact sequence that was captured inside the \(\) brackets.)
Likewise, if you want to match a group of 4 words that is repeated two times in a row, the expression will look like grep "\(\([[:alpha:]]*[[:space:]]*\)\{4\}\)\1" -o <filename> - it will match e.g. a b c d a b c d.
Now we need to allow an arbitrary character sequence in between the matches. In theory this should be done by inserting .* just before the backreference, i.e. grep "\(\([[:alpha:]]*[[:space:]]*\)\{4\}\).*\1" -o <filename>, but this doesn't seem to work for me - it matches just any string and ignores the backreference.
The short answer is that there's no easy method. In general, any solution needs to first decide how to split the input document into chunks (sentences, sets of 4 words each, etc.) and then compare them to find duplicates. If it's important that the ordering of the non-duplicate elements be the same in the output as it was in the input, then this only complicates matters further.
The simplest bash-friendly solution would be to split the input into lines based on whatever criteria you choose (e.g. split on each ., although doing this quote-safely is a bit tricky), then use standard duplicate detection mechanisms (e.g. | sort | uniq -c | sort -n | sed -E -ne '/^[[:space:]]+1/!{s/^[[:space:]]+[0-9]+ //;p;}'), and then, for each resulting line, remove that text from the input.
Presuming that you had a file that was properly split into lines per "sentence" then
sort lines_of_input_file | uniq -c | sort -n | sed -E -ne '/^[[:space:]]+1/!{s/^[[:space:]]+[0-9]+ //;p;}' | while IFS= read -r match ; do sed -i '' -e 's/'"$match"'//g' input_file ; done
Might be sufficient. Of course it will break horribly if the $match contains any data which sed interprets as a pattern. Another mechanism should be employed to perform the actual replacement if this is an issue for you.
Note: If you're using GNU sed the -E switch above should be changed to -r
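One possible workaround (a sketch, not bulletproof for every edge case) is to escape the regex metacharacters in $match before handing it to sed, keeping the BSD-style sed -i '' used above:
escaped=$(printf '%s\n' "$match" | sed 's/[][\.*^$/]/\\&/g')
sed -i '' -e "s/$escaped//g" input_file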
I just created a script in python, that does pretty much what I wanted originally:
import string
import sys

def find_all(a_str, sub):
    start = 0
    while True:
        start = a_str.find(sub, start)
        if start == -1:
            return
        yield start
        start += len(sub)

if len(sys.argv) != 2:
    sys.exit("Usage: find_duplicate_fragments.py some_textfile.txt")

file = sys.argv[1]

infile = open(file, "r")
text = infile.read()

text = text.replace('\n', '')                      # remove newlines
table = string.maketrans("", "")
text = text.translate(table, string.punctuation)   # remove punctuation characters
text = text.translate(table, string.digits)        # remove numbers
text = text.upper()                                # to uppercase

while text.find("  ") > -1:
    text = text.replace("  ", " ")                 # strip double-spaces

spaces = list(find_all(text, " "))                 # find all spaces

# scan through the whole text in packets of four words
# and check for multiple appearances
for i in range(0, len(spaces) - 4):
    searchfor = text[spaces[i] + 1:spaces[i + 4]]
    duplist = list(find_all(text[spaces[i + 4]:len(text)], searchfor))
    if len(duplist) > 0:
        print len(duplist), ': ', searchfor
BTW: I'm a Python newbie, so any hints on better Python practice are welcome!

Resources