Remove shortest leading whitespace from all lines - shell

I have some text with leading whitespace on every line. I want to remove the whitespace from the line that has the least of it (if it's simpler, this requirement could be changed to the first line) and then remove the same amount of whitespace from all other lines.
E.g. I have this text:
    var flatten = function(result, next_array) {
        console.log('current result', result);
        return result.concat(next_array);
    };
    [1, [2], [3, 4]]
        .reduce(flatten, []);
And I want to end up with this text:
var flatten = function(result, next_array) {
    console.log('current result', result);
    return result.concat(next_array);
};
[1, [2], [3, 4]]
    .reduce(flatten, []);
Basically, I want to shift the text over until there's at least one line with no whitespace on the left and preserve all other leading whitespace on all other lines.
The use case for this is copying code from the middle of a section of code to paste as an example elsewhere. What I currently do is copy the code, paste into vim with paste insert mode, use << until I get the desired output, and copy the buffer. The same could be done in TextMate with Cmd-[.
What I want is to do this with a shell script so I could, for example, trigger it with a hotkey to take my clipboard contents, remove the desired whitespace, and paste the result.

This awk one-liner could do it for you too. It assumes you want to remove at least one space (because I see in your example there is an empty line without any leading spaces, but all lines are shifted left anyway).
test with your example:
kent$ cat f
    var flatten = function(result, next_array) {
        console.log('current result', result);
        return result.concat(next_array);
    };
    [1, [2], [3, 4]]
        .reduce(flatten, []);
kent$ awk -F '\\S.*' '{l=length($1);if(l>0){if(NR==1)s=l; else s=s>l?l:s;}a[NR]=$0}END{for(i=1;i<=NR;i++){sub("^ {"s"}","",a[i]);print a[i]}}' f
var flatten = function(result, next_array) {
    console.log('current result', result);
    return result.concat(next_array);
};
[1, [2], [3, 4]]
    .reduce(flatten, []);
EDIT
I don't think awk scripts are unreadable, but you do have to know awk syntax. Anyway, I am adding some explanation:
The awk script has two blocks: the first block is executed as each line of your file is read; the END block is executed after the last line has been read. See the comments below for an explanation.
awk -F '\\S.*'      #use the regex '\\S.*' as the delimiter: the first non-whitespace char through the end of the line,
                    #so that each line is separated into two fields:
                    #field 1: the leading spaces
                    #field 2: the rest of the text
'{                  #block 1
l=length($1);       #get the length of field 1 ($1), which is the leading spaces; save it in l
if(l>0){            #if l > 0
if(NR==1)s=l;       #NR is the line number; on the first line s is not set yet, so let s=l
else s=s>l?l:s;}    #otherwise, if l<s let s=l, else keep the value of s
a[NR]=$0            #after reading each line, save it in array a with the line number as index
}                   #this block just computes the number of leading spaces on the "shortest" line, in s
END{for(i=1;i<=NR;i++){  #loop over line numbers 1..last in array a
sub("^ {"s"}","",a[i]);  #replace s leading spaces with nothing
print a[i]}              #print the array element (after the replacement)
}' file             #file is the input file
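For readers who prefer the same logic spread out, here is a readable re-statement as a shell function (a sketch, not from the original answer; the name dedent is made up, it reads stdin, and it uses match() instead of the -F trick):

```shell
# A sketch restating the one-liner above: buffer all lines, find the
# minimum leading-space count over non-empty lines, then strip it.
dedent() {
    awk '
    {
        match($0, /^ */); l = RLENGTH        # count leading spaces on this line
        if ($0 != "" && (s == "" || l < s))  # ignore empty lines for the minimum
            s = l
        a[NR] = $0                           # buffer the whole input
    }
    END {
        for (i = 1; i <= NR; i++)
            print substr(a[i], s + 1)        # every non-empty line has at least s leading spaces
    }'
}
```

For example, `printf '    a\n  b\n' | dedent` shifts both lines left by two spaces.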

These functions can be defined in your .bash_profile to have access to them anywhere, rather than creating a script file. They don't require the first line to match:
shiftleft(){
    len=$(grep -e "^[[:space:]]*$" -v "$1" | sed -E 's/([^ ]).*/x/' | sort -r | head -1 | wc -c)
    cut -c $(($len-1))- "$1"
}
Usage: shiftleft myfile.txt
This works with a file, but would have to be modified to work with pbpaste piped to it...
NOTE: Definitely inspired by the answer by @JoSo, but fixes the errors in there (uses sort -r and cut -c N-, adds the missing $ on len, and doesn't get hung up by blank lines without whitespace).
EDIT: A version to work with the contents of the clipboard on OSX:
shiftclip(){
    len=$(pbpaste | grep -e "^[[:space:]]*$" -v | sed -E 's/([^ ]).*/x/' | sort -r | head -1 | wc -c)
    pbpaste | cut -c $(($len-1))-
}
Usage for this version:
Copy the text of interest and then type shiftclip. To copy output directly back to the clipboard, do shiftclip | pbcopy

len=$(sed 's/[^ ].*//' <"$file"| sort | head -n1 | wc -c)
cut -c "$((len))"- <"$file"
Or, a bit less readable but avoiding the overhead of a sort:
len=$(awk 'BEGIN{m=-1} {sub("[^ ].*", "", $0); l = length($0); m = (m==-1||l<m) ? l : m; } END { print m+1 }' <"$file")
cut -c "$((len))"- <"$file"
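Since both steps read the file twice, stdin has to be buffered if you want to compose this with pbpaste; a sketch (the function name dedent_stdin is made up):

```shell
# A sketch combining the two lines above into a pipe-friendly function.
# stdin is buffered in a temp file because it must be read twice.
dedent_stdin() {
    tmp=$(mktemp) || return 1
    cat >"$tmp"
    len=$(sed 's/[^ ].*//' <"$tmp" | sort | head -n 1 | wc -c)
    cut -c "$((len))"- <"$tmp"
    rm -f "$tmp"
}
```

On macOS you could then do, for example, `pbpaste | dedent_stdin | pbcopy`.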

Hmmm... this isn't very beautiful, and also assumes you have access to Bash, but if you can live with your "first line" rule:
#!/bin/bash
file=$1
spaces=$(head -n 1 "$file" | sed -E 's/^([ ]*).*/\1/')
sed -E "s/^$spaces//" "$file"
It also assumes only the space character (i.e., you need to tweak for tabs) but you get the idea.
Assuming your example is in a file snippet.txt, put the bash code in a script (e.g., "shiftleft"), "chmod +x" the script, then run with:
shiftleft snippet.txt
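A variant along the same lines that also tolerates tabs (a sketch; the function name shiftleft_firstline is made up, and `[[:blank:]]` matches both spaces and tabs):

```shell
# A sketch of the same first-line approach, but handling tabs as well as spaces:
# capture the first line's blank prefix, then strip it from every line.
shiftleft_firstline() {
    prefix=$(head -n 1 "$1" | sed -E 's/^([[:blank:]]*).*/\1/')
    sed -E "s/^$prefix//" "$1"
}
```

Usage is the same: shiftleft_firstline snippet.txt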

Related

How should I use sed to delete specific strings while keeping duplicates with more characters?

I generated a list of files, which has 17417 lines like:
./usr
./usr/share
./usr/share/mime-info
./usr/share/mime-info/libreoffice7.0.mime
./usr/share/mime-info/libreoffice7.0.keys
./usr/share/appdata
./usr/share/appdata/libreoffice7.0-writer.appdata.xml
./usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
./usr/share/appdata/libreoffice7.0-draw.appdata.xml
./usr/share/appdata/libreoffice7.0-impress.appdata.xml
./usr/share/appdata/libreoffice7.0-base.appdata.xml
./usr/share/appdata/libreoffice7.0-calc.appdata.xml
./usr/share/applications
./usr/share/applications/libreoffice7.0-xsltfilter.desktop
./usr/share/applications/libreoffice7.0-writer.desktop
./usr/share/applications/libreoffice7.0-base.desktop
./usr/share/applications/libreoffice7.0-math.desktop
./usr/share/applications/libreoffice7.0-startcenter.desktop
./usr/share/applications/libreoffice7.0-calc.desktop
./usr/share/applications/libreoffice7.0-draw.desktop
./usr/share/applications/libreoffice7.0-impress.desktop
./usr/share/icons
./usr/share/icons/gnome
./usr/share/icons/gnome/16x16
./usr/share/icons/gnome/16x16/mimetypes
./usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
The thing is, I want to delete the lines like:
./usr
./usr/share
./usr/share/mime-info
./usr/share/appdata
./usr/share/applications
./usr/share/icons
./usr/share/icons/gnome
./usr/share/icons/gnome/16x16
./usr/share/icons/gnome/16x16/mimetypes
and the "." at the start, so the result must be like:
/usr/share/mime-info/libreoffice7.0.mime
/usr/share/mime-info/libreoffice7.0.keys
/usr/share/appdata/libreoffice7.0-writer.appdata.xml
/usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
/usr/share/appdata/libreoffice7.0-draw.appdata.xml
/usr/share/appdata/libreoffice7.0-impress.appdata.xml
/usr/share/appdata/libreoffice7.0-base.appdata.xml
/usr/share/appdata/libreoffice7.0-calc.appdata.xml
/usr/share/applications/libreoffice7.0-xsltfilter.desktop
/usr/share/applications/libreoffice7.0-writer.desktop
/usr/share/applications/libreoffice7.0-base.desktop
/usr/share/applications/libreoffice7.0-math.desktop
/usr/share/applications/libreoffice7.0-startcenter.desktop
/usr/share/applications/libreoffice7.0-calc.desktop
/usr/share/applications/libreoffice7.0-draw.desktop
/usr/share/applications/libreoffice7.0-impress.desktop
/usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
Is this possible using sed? Or is it more practical using another tool?
With your list in the file named list, you could do:
sed -n 's/^[.]//;/\/.*[._].*$/p' list
Where:
sed -n suppresses printing of pattern-space; then
s/^[.]// is the substitution form that simply removes the first character '.' from each line; then
/\/.*[._].*$/p matches lines that contain a '.' or '_' after the last '/', with p causing that line to be printed.
Example Use/Output
$ sed -n 's/^[.]//;/\/.*[._].*$/p' list
/usr/share/mime-info/libreoffice7.0.mime
/usr/share/mime-info/libreoffice7.0.keys
/usr/share/appdata/libreoffice7.0-writer.appdata.xml
/usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
/usr/share/appdata/libreoffice7.0-draw.appdata.xml
/usr/share/appdata/libreoffice7.0-impress.appdata.xml
/usr/share/appdata/libreoffice7.0-base.appdata.xml
/usr/share/appdata/libreoffice7.0-calc.appdata.xml
/usr/share/applications/libreoffice7.0-xsltfilter.desktop
/usr/share/applications/libreoffice7.0-writer.desktop
/usr/share/applications/libreoffice7.0-base.desktop
/usr/share/applications/libreoffice7.0-math.desktop
/usr/share/applications/libreoffice7.0-startcenter.desktop
/usr/share/applications/libreoffice7.0-calc.desktop
/usr/share/applications/libreoffice7.0-draw.desktop
/usr/share/applications/libreoffice7.0-impress.desktop
/usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
Note, without GNU sed that allows chaining of expressions with ';' you would need:
sed -n -e 's/^[.]//' -e '/\/.*[._].*$/p' list
Assuming you want to delete the line(s) which are included in other pathname(s), would you please try:
sort -r list.txt | awk '            # sort the list in the reverse order
{
    sub("^\\.", "")                 # remove leading dot
    s = prev; sub("/[^/]+$", "", s) # remove the rightmost slash and following characters
    if (s != $0) print              # if s != $0, it means $0 is not a prefix of the previous line
    prev = $0                       # keep $0 for the next line
}'
Result:
/usr/share/mime-info/libreoffice7.0.mime
/usr/share/mime-info/libreoffice7.0.keys
/usr/share/icons/gnome/16x16/mimetypes/libreoffice7.0-oasis-formula.png
/usr/share/applications/libreoffice7.0-xsltfilter.desktop
/usr/share/applications/libreoffice7.0-writer.desktop
/usr/share/applications/libreoffice7.0-startcenter.desktop
/usr/share/applications/libreoffice7.0-math.desktop
/usr/share/applications/libreoffice7.0-impress.desktop
/usr/share/applications/libreoffice7.0-draw.desktop
/usr/share/applications/libreoffice7.0-calc.desktop
/usr/share/applications/libreoffice7.0-base.desktop
/usr/share/appdata/org.libreoffice7.0.kde.metainfo.xml
/usr/share/appdata/libreoffice7.0-writer.appdata.xml
/usr/share/appdata/libreoffice7.0-impress.appdata.xml
/usr/share/appdata/libreoffice7.0-draw.appdata.xml
/usr/share/appdata/libreoffice7.0-calc.appdata.xml
/usr/share/appdata/libreoffice7.0-base.appdata.xml

Sorting the contents within a column using Shell Script Line by Line in a File

I am sorting a file by a column using the command:
cat myFile | sort -u -k3
Now I want to sort data within a column of the file. Can anyone please help and tell me how I can achieve it?
My data looks like this in the file named Student.csv:
Name,Age,Marks,Grades
Sam,21,"34,56,21,67","C,B,D,A"
Josh,25,"90,89,78,45","A,A,B,C"
Output:
Name,Age,Marks,Grades
Sam,21,"21,34,56,67","A,B,C,D"
Josh,25,"45,78,89,90","A,A,B,C"
Will Appreciate the help, Thanks
You should export your CSV with a field separator that does not exist within the texts. Otherwise it becomes hugely cumbersome to deal with this.
Afterwards you can easily sort by specifying the separator and the field.
Example if you would use | as separator:
Name|Age|Marks|Grades
Sam|21|"34,56,21,67"|"C,B,D,A"
Josh|25|"90,89,78,45"|"A,A,B,C"
Then execute:
cat myFile | sort -u -k3 -t\|
or:
sort -u -k3 -t\| <myFile
Afterwards you could put your semi-colons back:
sort -u -k3 -t\| <myFile | sed 's/|/;/g'
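As a tiny self-contained illustration of field-separated sorting (the sample data here is made up):

```shell
# Sort rows numerically by the second |-separated field.
printf 'b|2\na|9\nc|1\n' | sort -t'|' -k2 -n
```

which prints c|1, then b|2, then a|9.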
Did it, but I'm too tired to explain how; brain's hitting a brick wall. There's a lot to unpack there, and it'll take half-a-day to explain. I'll write all the steps in a couple hours after I get a nap in, otherwise there's gonna be 50 typos in that description.
cat Student.csv | head -n1 && cat Student.csv | tail -n+2 | awk -F \" '{split($2,a,",");asort(a);b="";for(i in a)b=b a[i] ",";split($4,c,",");asort(c);d="";for(i in c)d=d c[i] ",";printf "%s\"%s\",\"%s\"\n",$1,substr(b,1,length(b)-1),substr(d,1,length(d)-1)}'
Alternatively:
cat Student.csv | tee >(head -n1) >(tail -n+2 | awk -F \" '{split($2,a,",");asort(a);b="";for(i in a)b=b a[i] ",";split($4,c,",");asort(c);d="";for(i in c)d=d c[i] ",";printf "%s\"%s\",\"%s\"\n",$1,substr(b,1,length(b)-1),substr(d,1,length(d)-1)}') >/dev/null ; sleep 0.1
Output:
Name,Age,Marks,Grades
Sam,21,"21,34,56,67","A,B,C,D"
Josh,25,"45,78,89,90","A,A,B,C"
https://www.tutorialspoint.com/awk/index.htm
Edit -- 'kay, the explanation:
cat concatenates (glues) files together, but when you just give it one arg, then that's what it prints out.
You can do the next part in one or two steps, I'll explain the first method. | pipe directs the output to another command. We all know this, or we wouldn't be here right now... however someday, someone will come across this post, and wonder what it does.
head prints out the first few lines of what you give it. Here, I specified -n1 number of lines = one, so it would print out the header:
Name,Age,Marks,Grades
&& continues to the next command, so long as that initial instruction was a success.
cat Student.csv again, but this time piped into tail, which prints the last few lines, of whatever you give it. -n+2 specifies to spit out everything from line number 2, and beyond.
We then pipe those contents into AWK https://en.wikipedia.org/wiki/AWK ...I'm sure you could do it with sed https://en.wikipedia.org/wiki/Sed, and I started with that, but sed tends to be simpler than awk, so you'd need far more chained commands to achieve the same thing. Lisp might be able to do it more concisely, but it sounded like you were asking for shell builtins. Python's also decent with strings, but again, sh.
-F \" designates a literal " as the field separator, so that we can group the contents into 3 categories:
Sam,21, " 34,56,21,67 " , "C,B,D,A"
$1 = Sam,21,
$2 = 34,56,21,67
$3 = ,
$4 = C,B,D,A
You actually get 4, but I'm throwing out that comma in the third position. It's easy enough to put it back in.
We now need to sort those numbers, so split($2,a,",") returns an array, in this case, named a, from the contents of $2, which has been delimited by the , symbol.
a = [ 34, 56, 21, 67 ]
; separates AWK commands, you can mostly ignore those. If there were simply a space, awk would try to concatenate items together, and we don't want that yet.
Next, array sort asort( a ), the contents of a -- https://www.tutorialspoint.com/awk/awk_string_functions.htm
a = [ 21, 34, 56, 67 ]
Here would be a perfect time for Python's string .join() method https://www.w3schools.com/python/ref_string_join.asp
However, we don't have that available to us, and AWK doesn't seem to have it, as far as I know, so we have to roll our own here. So construct a string, b, whose contents will be appended with each item in a. Single-quotes often won't do on the command line, so you'll see double-quotes.
b=""
for( i in a ) b=b a[i] ","
b begins empty. Iterating a for-loop over a's contents, we arrive at an appending which includes commas. Leave the trailing comma for now, it'll get trimmed off in a bit.
21,34,56,67,
Exact same procedure for $4, but we name the array c this time, and the string in which those contents are concatenated with commas, d -- split( $4, c, "," ) ; asort( c ) ; d="" ; for( i in c ) d=d c[i] "," You can name them anything you like, just happened to have ABCD staring me in the face from those grade listings, so that's what I went with.
OK, now we have everything we need.
$1 = Sam,21,
b = 21,34,56,67,
d = A,B,C,D,
Let's format a string so they're all together.
printf "%s\"%s\",\"%s\"\n"
This will print $1 in the first %s string position, then a literal double-quote,
b into the second %s string position, next ",",
followed by d in the third %s position,
all wrapped up with a final double-quote and a newline.
However, b and d both have trailing commas, so we trim those off with AWK's substr() command. -- https://www.tutorialspoint.com/awk/awk_string_functions.htm Knowing where to begin is easy enough, but we need to chop those at one-from-the-end.
substr( b, 1, length(b) -1 )
substr( d, 1, length(d) -1 )
It'd be nice if you could just specify -2, and have it count backwards, like you can in Lua, Python, et al... but that doesn't seem to do in AWK, so whatevs. Ya live, ya learn. And there you have it, all your ducks in a row.
Sam,21,"21,34,56,67","A,B,C,D"
This does it, maybe not elegantly, but it's within the required guidelines. I'm sure there are code-golfing possibilities in there somewhere, but it's solid logic you can follow.
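As an aside, the roll-your-own join and trailing-comma trim described above can be avoided in plain shell with tr, sort, and paste (a sketch; the function name sort_csv_field is made up, and sort -n assumes numeric items like the Marks column):

```shell
# Sort the items inside one comma-separated cell. paste -s re-joins the
# lines with the given delimiter, so there is no trailing comma to trim.
sort_csv_field() {
    printf '%s\n' "$1" | tr ',' '\n' | sort -n | paste -sd, -
}
```

For example, `sort_csv_field "34,56,21,67"` prints 21,34,56,67.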

Unix bash - using cut to regex lines in a file, match regex result with another similar line

I have a text file: file.txt, with several thousand lines. It contains a lot of junk lines which I am not interested in, so I use the cut command to pick out the lines I am interested in first. Each entry I am interested in is listed twice in the text file: once in a "definition" section, and again in a "value" section. I want to retrieve the first value from the "definition" section, and then for each entry found there, find its corresponding "value" section entry.
The first entry starts with ' gl_ ', while the 2nd entry would look like ' "gl_ ', starting with a '"'.
This is the code I have so far for looping through the text document, which then retrieves the values I am interested in and appends them to a .csv file:
while read -r line
do
if [[ $line == gl_* ]] ; then (param=$(cut -d'\' -f 1 $line) | def=$(cut -d'\' -f 2 $line) | type=$(cut -d'\' -f 4 $line) | prompt=$(cut -d'\' -f 8 $line))
while read -r glline
do
if [[ $glline == '"'$param* ]] ; then val=$(cut -d'\' -f 3 $glline) |
"$project";"$param";"$val";"$def";"$type";"$prompt" >> /filepath/file.csv
done < file.txt
done < file.txt
This seems to throw some syntax errors related to unexpected tokens near the first 'done' statement.
Example of text that needs to be parsed, and paired:
gl_one\User Defined\1\String\1\\1\Some Text
gl_two\User Defined\1\String\1\\1\Some Text also
gl_three\User Defined\1\Time\1\\1\Datetime now
some\junk
"gl_one\1\Value1
some\junk
"gl_two\1\Value2
"gl_three\1\Value3
So effectively, the while loop reads each line until it hits the first line that starts with 'gl_', which then stores that value (ie. gl_one) as a variable 'param'.
It then starts the nested while loop that looks for the line that starts with a ' " ' in front of the gl_, and is equivalent to the 'param' value. In other words, the
script should couple the lines gl_one and "gl_one, gl_two and "gl_two, gl_three and "gl_three.
The text file is large, and these are settings that have been defined this way. I need to collect the values for each gl_ parameter, to save them together in a .csv file with their corresponding "gl_ values.
Wanted regex output stored in variables would be something like this:
first while loop:
$param = gl_one, $def = User Defined, $type = String, $prompt = Some Text
second while loop:
$val = Value1
Then it stores these variables to the file.csv, with semi-colon separators.
Currently, I have an error for the first 'done' statement, which seems to indicate an issue with the quotation marks. Apart from this,
I am looking for general ideas and comments on the script. I.e., I am not entirely sure I am matching the quotation-mark parameters "gl_ correctly, or whether the
semi-colons as .csv separators are added correctly.
Edit: Overall, the script runs now, but extremely slow due to the inner while loop. Is there any faster way to match the two lines together and add them to the .csv file?
Any ideas and comments?
This will generate a file containing the data you want:
cat file.txt | grep gl_ | sed -E "s/\"//" | sort | sed '$!N;s/\n/\\/' | awk -F'\' '{print $1"; "$5"; "$7"; "$NF}' > /filepath/file.csv
It uses grep to extract all lines containing 'gl_',
then sed to remove the leading '"' from the lines that contain one [I have assumed there are no further '"' in the line].
The lines are sorted,
sed removes the newline from each pair of lines,
and awk then prints the required columns according to your requirements.
The output is routed to the file.
LANG=C sort -t\\ -sd -k1,1 <file.txt |\
sed '
/^gl_/{ # if definition
N; # append next line to buffer
s/\n"gl_[^\\]*//; # if value, strip first column
t; # and start next loop
}
D; # otherwise, delete the line
' |\
awk -F\\ -v p="$project" -v OFS=\; '{print p,$1,$10,$2,$4,$8 }' \
>>/filepath/file.csv
sort lines so gl_... appears immediately before "gl_... (LANG=C pins down the collation order) - assumes definition appears before value
sed to help ensure matching definition and value (may still fail if duplicate/missing value), and tidy for awk
awk to pull out relevant fields
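If the sorting feels fragile, a plain one-pass awk pairing is another option (a sketch, not from the answers above; the function name pair_defs is made up, it reads the file on stdin, assumes each definition line appears before its value line, and prints just the parameter id and its value):

```shell
# Pair each gl_ definition line with its "gl_ value line in one pass.
pair_defs() {
    awk -F'\\' '
    /^gl_/  { def[$1] = $0; next }                 # remember each definition line by its id
    /^"gl_/ { id = substr($1, 2)                   # strip the leading quote from the id
              if (id in def) print id ";" $3 }    # emit id and its value, ;-separated
    '
}
```

Against the sample data, this would print lines like gl_one;Value1.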

Script using sed and grep gives unintended output

I have a "source.fasta" file that contains information in the following format:
>TRINITY_DN80_c0_g1_i1 len=723 path=[700:0-350 1417:351-368 1045:369-722] [-1, 700, 1417, 1045, -2]
CGTGGATAACACATAAGTCACTGTAATTTAAAAACTGTAGGACTTAGATCTCCTTTCTATATTTTTCTGATAACATATGGAACCCTGCCGATCATCCGATTTGTAATATACTTAACTGCTGGATAACTAGCCAAAAGTCATCAGGTTATTATATTCAATAAAATGTAACTTGCCGTAAGTAACAGAGGTCATATGTTCCTGTTCGTCACTCTGTAGTTACAAATTATGACACGTGTGCGCTG
>TRINITY_DN83_c0_g1_i1 len=371 path=[1:0-173 152:174-370] [-1, 1, 152, -2]
GTTGTAAACTTGTATACAATTGGGTTCATTAAAGTGTGCACATTATTTCATAGTTGATTTGATTATTCCGAGTGACCTATTTCGTCACTCGATGTTTAAAGAAATTGCTAGTGTGACCCCAATTGCGTCAGACCAAAGATTGAATCTAGACATTAATTTCCTTTTGTATTTGTATCGAGTAAGTTTACAGTCGTAAATAAAGAATCTGCCTTGAACAAACCTTATTCCTTTTTATTCTAAAAGAGGCCTTTGCGTAGTAGTAACATAGTACAAATTGGTTTATTTAACGATTTATAAACGATATCCTTCCTACAGTCGGGTGAAAAGAAAGTATTCGAAATTAGAATGGTTCCTCATATTACACGTTGCTG
>TRINITY_DN83_c0_g1_i2 len=218 path=[1:0-173 741:174-217] [-1, 1, 741, -2]
GTTGTAAACTTGTATACAATTGGGTTCATTAAAGTGTGCACATTATTTCATAGTTGATTTGATTATTCCGAGTGACCTATTTCGTCACTCGATGTTTAAAGAAATTGCTAGTGTGACCCCAATTGCGTCAGACCAAAGATTGAATCTAGACATTAATTTCCTTTTGTATTTGTACCGAGTAAGTTTCCAGTCGTAAATAAAGAATCTGCCAGATCGGA
>TRINITY_DN99_c0_g1_i1 len=326 path=[1:0-242 221:243-243 652:244-267 246:268-325] [-1, 1, 221, 652, 246, -2]
ATCGGTACTATCATGTCATATATCTAGAAATAATACCTACGAATGTTATAAGAATTTCATAACATGATATAACGATCATACATCATGGCCTTTCGAAGAAAATGGCGCATTTACGTTTAATAATTCCGCGAAAGTCAAGGCAAATACAGACCTAATGCGAAATTGAAAAGAAAATCCGAATATCAGAAACAGAACCCAGAACCAATATGCTCAGCAGTTGCTTTGTAGCCAATAAACTCAACTAGAAATTGCTTATCTTTTATGTAACGCCATAAAACGTAATACCGATAACAGACTAAGCACACATATGTAAATTACCTGCTAAA
>TRINITY_DN90_c0_g1_i1 len=1240 path=[1970:0-527 753:528-1239] [-1, 1970, 753, -2]
GTCGATACTAGACAACGAATAATTGTGTCTATTTTTAAAAATAATTCCTTTTGTAAGCAGATTTTTTTTTTCATGCATGTTTCGAGTAAATTGGATTACGCATTCCACGTAACATCGTAAATGTAACCACATTGTTGTAACATACGGTATTTTTTCTGACAACGGACTCGATTGTAAGCAACTTTGTAACATTATAATCCTATGAGTATGACATTCTTAATAATAGCAACAGGGATAAAAATAAAACTACATTGTTTCATTCAACTCGTAAGTGTTTATTTAAAATTATTATTAAACACTATTGTAATAAAGTTTATATTCCTTTGTCAGTGGTAGACACATAAACAGTTTTCGAGTTCACTGTCG
>TRINITY_DN84_c0_g1_i1 len=301 path=[1:0-220 358:221-300] [-1, 1, 358, -2]
ACTATTATGTAGTACCTACATTAGAAACAACTGACCCAAGACAGGAGAAGTCATTGGATGATTTTCCCCATTAAAAAAAGACAACCTTTTAAGTAAGCATACTCCAAATTAAGGTTTAATTAGCTAAGTGAGCGCGAAAAATGATCAAATATACCGACGTCCATTTGGGGCCTATCCTTTTTAGTGTTCCTAATTGAAATCCTCACGTATACAGCTAGTCACTTTTAAATCTTATAAACATGTGATCCGTCTGCTCATTTGGACGTTACTGCCCAAAGTTGGTACATGTTTCGTACTCACG
>TRINITY_DN84_c0_g1_i2 len=301 path=[1:0-220 199:221-300] [-1, 1, 199, -2]
ACTATTATGTAGTACCTACATTAGAAACAACTGACCCAAGACAGGAGAAGTCATTGGATGATTTTCCCCATTAAAAAAAGACAACCTTTTAAGTAAGCATACTCCAAATTAAGGTTTAATTAGCTAAGTGAGCGCGAAAAATGATCAAATATACCGACGTCCATTTGGGGCCTATCCTTTTTAGTGTTCCTAATTGAAATCCTCACGTATACAGCTAGTCAGCTAACCAAAGATAAGTGTCTTGGCTTGGTATCTACAGATCTCTTTTCGTAATTTCGTGAGTACGAAACATGTACCAACT
>TRINITY_DN72_c0_g1_i1 len=434 path=[412:0-247 847:248-271 661:272-433] [-1, 412, 847, 661, -2]
GTTAATTTAGTGGGAAGTATGTGTTAAAATTAGTAAATTAGGTGTTGGTGTGTTTTTAATATGAATCCGGAAGTGTTTTGTTAGGTTACAAGGGTACGGAATTGTAATAATAGAAATCGGTATCCTTGAGACCAATGTTATCGCATTCGATGCAAGAATAGATTGGGAAATAGTCCGGTTATCAATTACTTAAAGATTTCTATCTTGAAAACTATTTCTAATTGGTAAAAAAACTTATTTAGAATCACCCATAGTTGGAAGTTTAAGATTTGAGACATCTTAAATTTTTGGTAGGTAATTTTAAGATTCTATCGTAGTTAGTACCTTTCGTTCTTCTTATTTTATTTGTAAAATATATTACATTTAGTACGAGTATTGTATTTCCAATATTCAGTCTAATTAGAATTGCAAAATTACTGAACACTCAATCATAA
>TRINITY_DN75_c0_g1_i1 len=478 path=[456:0-477] [-1, 456, -2]
CGAGCACATCAGGCCAGGGTTCCCCAAGTGCTCGAGTTTCGTAACCAAACAACCATCTTCTGGTCCGACCACCAGTCACATGATCAGCTGTGGCGCTCAGTATACGAGCACAGATTGCAACAGCCACCAAATGAGAGAGGAAAGTCATCCACATTGCCATGAAATCTGCGAAAGAGCGTAAATTGCGAGTAGCATGACCGCAGGTACGGCGCAGTAGCTGGAGTTGGCAGCGGCTAGGGGTGCCAGGAGGAGTGCTCCAAGGGTCCATCGTGCTCCACATGCCTCCCCGCCGCTGAACGCGCTCAGAGCCTTGCTCATCTTGCTACGCTCGCTCCGTTCAGTCATCTTCGTGTCTCATCGTCGCAGCGCGTAGTATTTACG
There are close to 400,000 sequences in this file.
I have another file ids.txt in the following format:
>TRINITY_DN14840_c10_g1_i1
>TRINITY_DN8506_c0_g1_i1
>TRINITY_DN12276_c0_g2_i1
>TRINITY_DN15434_c5_g3_i1
>TRINITY_DN9323_c8_g3_i5
>TRINITY_DN11957_c1_g7_i1
>TRINITY_DN15373_c1_g1_i1
>TRINITY_DN22913_c0_g1_i1
>TRINITY_DN13029_c4_g5_i1
I have 100 sequence ids in this file. When I match these ids to the source file I want an output that gives me the match for each of these ids with the entire sequence.
For example, for an id:
>TRINITY_DN80_c0_g1_i1
I want my output to be:
>TRINITY_DN80_c0_g1_i1
CGTGGATAACACATAAGTCACTGTAATTTAAAAACTGTAGGACTTAGATCTCCTTTCTATATTTTTCTGATAACATATGGAACCCTGCCGATCATCCGATTTGTAATATACTTAACTGCTGGATAACTAGCCAAAAGTCATCAGGTTATTATATTCAATAAAATGTAACTTGCCGTAAGTAACAGAGGTCATATGTTCCTGTTCGTCACTCTGTAGTTACAAATTATGACACGTGTGCGCTG
I want all hundred sequences in this format.
I used this code:
while read p; do
    echo ''$p >> out.fasta
    grep -A 400000 -w $p source.fasta | sed -n -e '1,/>/ {/>/ !{'p''}} >> out.fasta
done < ids.txt
But my output is different in that only the last id has a sequence and the rest dont have any sequence associated:
>TRINITY_DN14840_c10_g1_i1
>TRINITY_DN8506_c0_g1_i1
>TRINITY_DN12276_c0_g2_i1
....
>TRINITY_DN10309_c6_g3_i1
>TRINITY_DN6990_c0_g1_i1
TTTTTTTTTTTTTGTGGAAAAACATTGATTTTATTGAATTGTAAACTTAAAATTAGATTGGCTGCACATCTTAGATTTTGTTGAAAGCAGCAATATCAACAGACTGGACGAAGTCTTCGAATTCCTGGATTTTTTCAGTCAAGAGATCAACAGACACTTTGTCGTCTTCAATGACACACATGATCTGCAGTTTGTTGATACCATATCCAACAGGTACAAGTTTGGAAGCTCCCCAGAGGAGACCATCCATTTCGATGGTGCGGACCTGGTTTTCCATTTCTTTCATGTCTGTTTCATCATCCCATGGCTTGACGTCAAGGATTATAGATGATTTAGCAATGAGAGCAGGTTTCTTCGATTTTTTGTCAGCATAAGCTTTCAGACGTTCTTCACGAATTCTGGCGGCCTCTGCATCCTCTTCCTCGTCGCCAGATCCGAATAGGTCGACGTCATCATCGTCGTCATCCTTAGCAGCGGGTGCAGGTGCTGTGGTGGTCTTTCCGCCAGCGGTCAGAGGGCTAGCTCCAGCCGCCCAGGATTTGCGCTCCTCGGCATTGTAGGAGGCAATCTGGTTGTACCACCGGAGAGCGTGGGGCAAGCTTGCGCTCGGGGCCTTGCCGACTTGTTGGAACACTTGGAAATCGGCTTGAGTTGGTGTGTAACCTGACACATAACTCTTATCAGCTAAGAAATTGTTAAGCTCATTAAGGCCTTGTGCGGTTTTAACGTCTCCTACTGCCATTTTTATTTAAAAAAGTAGTTTTTTTCGAGTAATAGCCACACGCCCCGGCACAATGTGAGCAAGAAGGAATGAAAAAGAAATCTGACATTGACATTGCCATGAAATTGACTTTCAAAGAACGAATGAATTGAACTAATTTGAACGG
I am only getting the desired output for the 100th id from my ids.txt. Could someone help me find where my script is wrong? I would like to get all 100 sequences when I run the script.
Thank you
I have added Google Drive links to the files I am working with:
ids.txt
Source.fasta
Repeatedly looping over a large file is inefficient; you really want to avoid running grep (or sed or awk) more than once if you can avoid it. Generally speaking, sed and Awk will often easily allow you to specify actions for individual lines in a file, and then you run the script on the file just once.
For this particular problem, the standard Awk idiom with NR==FNR comes in handy. This is a mechanism which lets you read a number of keys into memory (concretely, when NR==FNR it means that you are processing the first input file, because the overall input line number is equal to the line number within this file) and then check if they are present in subsequent input files.
Recall that Awk reads one line at a time and performs all the actions whose conditions match. The conditions are a simple boolean, and the actions are a set of Awk commands within a pair of braces.
awk 'NR == FNR { s[$0]; next }
  # If we fall through to here, we have finished processing the first file.
  # If we see a wedge and p is 1, reset it -- this is a new sequence
  /^>/ && p { p = 0 }
  # If the prefix of this line is in s, we have found a sequence we want.
  ($1$2 in s) || ($1 in s) || ((substr($1, 1, 1) " " substr($1, 2)) in s) {
      if ($1 ~ /^>./) { print $1 } else { print $1 $2 }; p = 1; next }
  # If p is true, we want to print this line
  p' ids.txt source.fasta >out.fasta
So when we are reading ids.txt, the condition NR==FNR is true, and so we simply store each line in the array s. The next causes the rest of the Awk script to be skipped for this line.
On subsequent reads, when NR!=FNR, we use the variable p to control what to print. When we see a new sequence header, we first reset p to 0 (in case it was 1 from a previous iteration); then we check whether it is in s, and if so, we set p to one. The last line simply prints the line if p is not empty or zero. (An empty action is shorthand for the action { print }.)
The slightly complex condition to check if $1 is in s might be too complicated -- I put in some normalizations to make sure that a space between the > and the sequence identifier is tolerated, regardless of there was one in ids.txt or not. This can probably be simplified if your files are consistently formatted.
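A minimal demonstration of the NR==FNR idiom, with made-up data in place of the FASTA files:

```shell
# The first file supplies keys; the second file is filtered against them.
keys=$(mktemp); data=$(mktemp)
printf 'apple\ncherry\n' > "$keys"
printf 'apple 1\nbanana 2\ncherry 3\n' > "$data"
awk 'NR == FNR { want[$1]; next }   # reading the first file: store keys
     $1 in want                     # second file: print lines whose key was stored
' "$keys" "$data"
rm -f "$keys" "$data"
```

This prints "apple 1" and "cherry 3" but skips "banana 2".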
Only with GNU grep and sed:
grep -A 1 -w -F -f ids.txt source.fasta | sed 's/ .*//'
See: man grep
$ awk 'NR==FNR{a[$1];next} $1 in a{c=2} c&&c--' ids.txt source.fasta
>TRINITY_DN80_c0_g1_i1 len=723 path=[700:0-350 1417:351-368 1045:369-722] [-1, 700, 1417, 1045, -2]
CGTGGATAACACATAAGTCACTGTAATTTAAAAACTGTAGGACTTAGATCTCCTTTCTATATTTTTCTGATAACATATGGAACCCTGCCGATCATCCGATTTGTAATATACTTAACTGCTGGATAACTAGCCAAAAGTCATCAGGTTATTATATTCAATAAAATGTAACTTGCCGTAAGTAACAGAGGTCATATGTTCCTGTTCGTCACTCTGTAGTTACAAATTATGACACGTGTGCGCTG
The above was run using your posted source.fasta and this ids.txt:
$ cat ids.txt
>TRINITY_DN14840_c10_g1_i1
>TRINITY_DN80_c0_g1_i1
First, group all ids into one expression separated by | like this:
cat ids.txt | tr '\n' '|' | awk '{print "\"" $0 "\""}'
Remove the last | symbol from the expression.
Now you can grep using the output you got from previous command like this
egrep -E ">TRINITY_DN14840_c10_g1_i1|>TRINITY_DN8506_c0_g1_i1|>TRINITY_DN12276_c0_g2_i1|>TRINITY_DN15434_c5_g3_i1|>TRINITY_DN9323_c8_g3_i5|>TRINITY_DN11957_c1_g7_i1|>TRINITY_DN15373_c1_g1_i1|>TRINITY_DN22913_c0_g1_i1|>TRINITY_DN13029_c4_g5_i1" source.fasta
This will print only the matching lines
Editing as per tripleee's comments:
Using the following prints the output properly,
assuming the ID and sequence are on different lines:
tr '\n' '|' <ids.txt | sed 's/|$//' | grep -A 1 -E -f - source.fasta
This might work for you (GNU sed):
sed 's#.*#/^&/{s/ .*//;N;p}#' idFile | sed -nf - fastafile
Convert the idFile into a sed script and run it against the fastaFile.
The best way to do this is using either Python or Perl. I was able to make a script for extracting the ids using Python, as follows.
#script to extract sequences from a source file based on ids in another file
#the source is a fasta file with a header and a sequence that follows in one line
#the ids file contains one id per line
#both the id and source file should contain the character '>' at the beginning, which signifies an id
def main():
    #asks the user for the ids file
    file1 = raw_input('ids file: ')
    #opens the ids file into memory
    ids_file = open(file1, 'r')
    #asks the user for the fasta file
    file2 = raw_input('fasta file: ')
    #opens the fasta file into memory; you need your memory to be larger than the file size, or python will hard crash
    fasta_file = open(file2, 'r')
    #asks the user for the file name of the output file
    file3 = raw_input('enter the output filename: ')
    #opens the output file for writing
    output_file = open(file3, 'w')
    #split the ids into an array
    ids_lines = ids_file.read().splitlines()
    #split the fasta file into an array; each id is followed by its sequence
    fasta_lines = fasta_file.read().splitlines()
    #initialize loop counters
    i = 0
    j = 0
    #loop until we run out of either fasta lines or ids
    while j < len(fasta_lines) and i < len(ids_lines):
        #if statement to match ids from both files and bring in the matching sequences
        if ids_lines[i] == fasta_lines[j]:
            #output statements including newline characters
            output_file.write(fasta_lines[j])
            output_file.write('\n')
            output_file.write(fasta_lines[j + 1])
            output_file.write('\n')
            #increment i so that we go on to the next id
            i = i + 1
            #reset j so we start all over for the new id
            j = 0
        else:
            #when there is no match, skip the sequence in the middle as well, which is j + 1
            j = j + 2
    ids_file.close()
    fasta_file.close()
    output_file.close()

main()
The code is not perfect but works for any number of ids. I have tested it on my samples, one of which contained 5000 ids, and the program worked fine. If there are improvements to the code, please make them; I am a relative newbie at programming, so the code is a bit crude.

shell: how to read a certain column in a certain line into a variable

I want to extract the first column of the last line of a text file. Instead of output the content of interest in another file and read it in again, can I just use some command to read it into a variable directly?
For example, if my file is like this:
...
123 456 789(this is the last line)
What I want is to read 123 into a variable in my shell script. How can I do that?
One approach is to extract the line you want, read its columns into an array, and emit the array element you want.
For the last line:
#!/bin/bash
# ^^^^- not /bin/sh, to enable arrays and process substitution
read -r -a columns < <(tail -n 1 "$filename") # put last line's columns into an array
echo "${columns[0]}" # emit the first column
Alternately, awk is an appropriate tool for the job:
line=2
column=1
var=$(awk -v line="$line" -v col="$column" 'NR == line { print $col }' <"$filename")
echo "Extracted the value: $var"
That said, if you're looking for a line close to the start of a file, it's often faster (in a runtime-performance sense) and easier to stick to shell builtins. For instance, to take the third column of the second line of a file:
{
    read -r _            # throw away first line
    read -r _ _ value _  # extract third value of second line
} <"$filename"
This works by using _s as placeholders for values you don't want to read.
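For example, on a made-up two-line file:

```shell
# Take the third column of the second line using only builtins.
f=$(mktemp)
printf 'h1 h2 h3\n10 20 30\n' > "$f"
{
    read -r _              # throw away the first line
    read -r _ _ value _    # third field of the second line
} < "$f"
echo "$value"              # prints 30
rm -f "$f"
```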
I guess that by "first column" you mean "first word", don't you?
If it is guaranteed that the last line doesn't start with a space, you can do
tail -n 1 YOUR_FILE | cut -d ' ' -f 1
You could also use sed:
$> var=$(sed -nr '$s/(^[^ ]*).*/\1/p' "file.txt")
The -nr tells sed not to output data by default (-n) and to use extended regular expressions (-r, to avoid needing to escape the parentheses, which would otherwise be written \( \)). The $ is an address that specifies the last line. The regular expression anchors the beginning of the line with ^, matches everything that is not a space ([^ ]*) and puts the result into a capture group ( ), then gets rid of the rest of the line (.*) by replacing the line with the capture group \1; p then prints the line.
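For instance, against a made-up two-line file (using -E, the spelling BSD sed also accepts, in place of GNU's -r):

```shell
# The same sed approach: print only the first word of the last line.
f=$(mktemp)
printf 'first line\n123 456 789\n' > "$f"
var=$(sed -nE '$s/(^[^ ]*).*/\1/p' "$f")
echo "$var"                # prints 123
rm -f "$f"
```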
