I am a biologist who is starting to have to learn some elementary scripting skills to deal with large DNA sequence data sets, so please go easy on me. I am doing this all in bash. I have a file with my data formatted like this:
CLocus_58919_Sample_25_Locus_33235_Allele_0
TGCAGGTGCTTCCAGTTGTCTTTGTAGCGTCCCACCATGATCTGCAGGTCCTTG
CLocus_58919_Sample_9_Locus_54109_Allele_0
TGCAGGTGCTTCCAGTTGTCTTTGTAGCGTCCCACCATGATCTGCAGGTCCTTG
What I need to do is loop through this file and write all the sequences from the same sample into their own file. Just to be clear, these sequences come from samples 25 and 9. So my idea was to use awk to reformat my file in the following way:
CLocus_58919_Sample_25_Locus_33235_Allele_0_TGCAGGTGCTTCCAGTTGTCTTTGTAGCGTCCCACCATGATCTGCAGGTCCTTG
CLocus_58919_Sample_9_Locus_54109_Allele_0_TGCAGGTGCTTCCAGTTGTCTTTGTAGCGTCCCACCATGATCTGCAGGTCCTTG
then pipe this into another awk if statement to say "if sample == $i, then write out that entire line to a file named sample.$i". Here is my code so far:
#!/bin/bash
a=`ls /scratch/tkchafin/data/raw | wc -l`;
b=1;
c=$((a-b));
mkdir /scratch/tkchafin/data/phylogenetics
for ((i=0; i<=$((c)); i++)); do
awk 'ORS=NR%2?"_":"\n"' $1 | awk -F_ '{if($4==$i) print}' >> /scratch/tkchafin/data/phylogenetics/sample.$i
done;
I understand this is not working because $i is in single quotes so bash is not recognizing it. I know awk has a -v option for passing external variables to it, but I don't know how I would apply that in this case. I tried to move the for loop inside the awk statement but this does not produce the desired result either. Any help would be much appreciated.
You can have awk write directly to the desired output file, without a shell loop:
awk -F_ '(NR % 2) == 1 { line1 = $0; fn="/scratch/tkchafin/data/phylogenetics/sample."$4; }
(NR % 2) == 0 { print line1"_"$0 > fn; }' "$1"
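If you have many distinct samples, a non-GNU awk can run out of simultaneously open files. A variant of the same idea that closes each output file after writing (note the >> so a reopened file is appended to rather than truncated):
awk -F_ '(NR % 2) == 1 { line1 = $0; fn="/scratch/tkchafin/data/phylogenetics/sample."$4; }
(NR % 2) == 0 { print line1"_"$0 >> fn; close(fn); }' "$1"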
But to show how you would use -v in your version, it would be:
for ((i=0; i<=$((c)); i++)); do
awk 'ORS=NR%2?"_":"\n"' "$1" | awk -F_ -v i=$i '$4 == i' >> /scratch/tkchafin/data/phylogenetics/sample.$i
done;
I'd be very grateful for your help with something probably quite simple.
I have a table (table2.txt), which has a single column of randomly generated numbers, and is about a million lines long.
2655087
3721239
5728533
9082076
2016819
8983893
9446748
6607974
I want to create a loop that repeats 10,000 times, so that for iteration 1, I print lines 1 to 4 to a file (file0.txt), for iteration 2, I print lines 5 to 8 (file1.txt), and so on.
What I have so far is this:
#!/bin/bash
for i in {0..10000}
do
awk 'NR==((4 * "$i") +1)' table2.txt > file"$i".txt
awk 'NR==((4 * "$i") +2)' table2.txt >> file"$i".txt
awk 'NR==((4 * "$i") +3)' table2.txt >> file"$i".txt
awk 'NR==((4 * "$i") +4)' table2.txt >> file"$i".txt
done
Desired output for file0.txt:
2655087
3721239
5728533
9082076
Desired output for file1.txt:
2016819
8983893
9446748
6607974
Something is going wrong with this, because I am getting identical outputs from all my files (i.e. they all look like the desired output of file0.txt). Hopefully you can see from my script that during the second iteration, i.e. when i=1, I want the output to be the values of rows 5, 6, 7 and 8.
This is probably a very simple syntax error, and I would be grateful if you can tell me where I'm going wrong (or give me a less cumbersome solution!)
Thank you very much.
The beauty of awk is that you can do this in one awk line:
awk 'BEGIN { c = 0 }
{ print > ("file" c ".txt") }
(NR % 4 == 0) { ++c }
(c == 10001) { exit }' <file>
This can be slightly more optimized and made more file-handle friendly (cf. James Brown):
awk 'BEGIN{f="file0.txt" }
{ print > f }
(NR % 4 == 0) { close(f); f = "file" (++c) ".txt" }
(c == 10001) { exit }' <file>
Why did your script fail?
Your script is failing because you used single quotes, so the shell never expanded the variable you tried to pass to awk. Your lines should read:
awk 'NR==((4 * '$i') +1)' table2.txt > file"$i".txt
but this is very ugly and should be improved with
awk -v i=$i 'NR==(4*i+1)' table2.txt > file"$i".txt
Why is your script slow?
The way you are processing your file is by doing a loop of 10001 iterations. Per iteration, you perform 4 awk calls. Each awk call reads the full file and writes out a single line, so in the end the file is read 40004 times.
To optimise your script step by step, I would do the following:
Terminate awk to stop reading the file after the line is printed:
#!/bin/bash
for i in {0..10000}; do
awk -v i=$i 'NR==(4*i+1){print; exit}' table2.txt > file"$i".txt
awk -v i=$i 'NR==(4*i+2){print; exit}' table2.txt >> file"$i".txt
awk -v i=$i 'NR==(4*i+3){print; exit}' table2.txt >> file"$i".txt
awk -v i=$i 'NR==(4*i+4){print; exit}' table2.txt >> file"$i".txt
done
Merge the 4 awk calls into a single one. This prevents reading the first lines over and over per loop cycle.
#!/bin/bash
for i in {0..10000}; do
awk -v i=$i '(NR<=4*i) {next} # skip line
(NR>4*(i+1)) {exit} # exit awk
1' table2.txt > file"$i".txt # print line
done
Remove the final loop (see the top of this answer).
This is functionally the same as @JamesBrown's answer, just written more awk-ishly, so don't accept this; I only posted it to show the more idiomatic awk syntax, since you can't put formatted code in a comment.
awk '
(NR%4)==1 { close(out); out="file" c++ ".txt" }
c > 10000 { exit }
{ print > out }
' file
See why-is-using-a-shell-loop-to-process-text-considered-bad-practice for some of the reasons why you should avoid shell loops for manipulating text.
With just head and split you can do it very simply:
chunk=4
files=10000
head -n $(($chunk*$files)) table2.txt |
split -d -a 5 --additional-suffix=.txt -l $chunk - file
Basically, read the first 40,000 lines (chunk × files) and split them into chunks of 4 consecutive lines, using file as prefix and .txt as suffix for the new files.
If you want a numeric identifier, you will need 5 digits (-a 5), as pointed out in the comments (credit: @kvantour).
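With those options the chunks land in file00000.txt, file00001.txt, and so on:
$ ls | head -3
file00000.txt
file00001.txt
file00002.txt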
Another awk:
$ awk '{if(NR%4==1){if(i==10000)exit;close(f);f="file" i++ ".txt"}print > f}' file
$ ls
file file0.txt file1.txt
Explained:
awk ' {
if(NR%4==1) { # use mod to recognize first record of group
if(i==10000) # exit after 10000 files
exit # test with 1
close(f) # close previous file
f="file" i++ ".txt" # make a new filename
}
print > f # output record to file
}' file
Sometimes I want a bash script that's mostly a help file. There are probably better ways to do things, but sometimes I want to just have a file called "awk_help" that I run, and it dumps my awk notes to the terminal.
How can I do this easily?
Another idea: use #!/bin/cat. This will literally answer the title of your question, since the shebang line will be displayed as well.
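For example, a minimal sketch (notes.sh is a hypothetical file name):
#!/bin/cat
# everything below is just notes
awk '/pattern/ { print "action" }' file
Running ./notes.sh makes the kernel execute /bin/cat notes.sh, so the whole file, shebang line included, is printed to the terminal.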
Turns out it can be done as pretty much a one-liner, thanks to @CharlesDuffy for the suggestions!
Just put the following at the top of the file, and you're done (the && exit stops bash from trying to execute the notes below as commands; the line never shows up in the output because it contains EZREMOVEHEADER itself):
cat "$BASH_SOURCE" | grep -v EZREMOVEHEADER && exit
So for my awk_help example, it'd be:
cat "$BASH_SOURCE" | grep -v EZREMOVEHEADER
# Basic form of all awk commands
awk search pattern { program actions }
# advanced awk
awk 'BEGIN {init} search1 {actions} search2 {actions} END { final actions }' file
# awk boolean example for matching "(me OR you) OR (john AND ! doe)"
awk '( /me|you/ ) || (/john/ && ! /doe/ )' /path/to/file
# awk - print # of lines in file
awk 'END {print NR,"coins"}' coins.txt
# Sum up gold ounces in column 2, and find out value at $425/ounce
awk '/gold/ {ounces += $2} END {print "value = $" 425*ounces}' coins.txt
# Print the last column of each line in a file, using a comma (instead of space) as a field separator:
awk -F ',' '{print $NF}' filename
# Sum the values in the first column and pretty-print the values and then the total:
awk '{s+=$1; print $1} END {print "--------"; print s}' filename
# functions available
length($0) > 72, toupper,tolower
# count the # of times the word PASSED shows up in the file /tmp/out
cat /tmp/out | awk 'BEGIN {X=0} /PASSED/{X+=1; print $1 X}'
# awk regex operators
https://www.gnu.org/software/gawk/manual/html_node/Regexp-Operators.html
I found another solution that works on Mac/Linux and works exactly as one would hope.
Just use the following as your "shebang" line, and it'll output everything from line 2 on down:
test.sh
#!/usr/bin/tail -n+2
hi there
how are you
Running this gives you what you'd expect:
$ ./test.sh
hi there
how are you
and another possible solution - just use less, and that way your file will open in a searchable pager
#!/usr/bin/less
and this way you can grep it for something too (when its output is not a terminal, less simply copies its input through), e.g.
$ ./test.sh | grep something
I'm trying to write a simple script to make several replacements in a big text file. I have a "map" file which contains the records to be searched and replaced, one per line, separated by a space, and an "input" file where I need the changes to be made. The example files and the script I wrote are beneath.
Map file
new_0 old_0
new_1 old_1
new_2 old_2
new_3 old_3
new_4 old_4
Input file
itsa(old_0)single(old_2)string(old_1)with(old_5)ocurrences(old_4)ofthe(old_3)records
Script
#!/bin/bash
while read -r mapline ; do
mapf1=`awk 'BEGIN {FS=" "} {print $1}' <<< "$mapline"`
mapf2=`awk 'BEGIN {FS=" "} {print $2}' <<< "$mapline"`
for line in $(cat "input") ; do
if [[ "${line}" == *"${mapf2}"* ]] ; then
sed "s/${mapf2}/${mapf1}/g" <<< "${line}"
fi
done < "input"
done < "map"
The thing is that the searches and replacements are made correctly, but I can't find a way to save the output of each iteration and work on it in the next. So my output looks like this:
itsa(new_0)single(old_2)string(old_1)withocurrences(old_4)ofthe(old_3)records
itsa(old_0)single(old_2)string(new_1)withocurrences(old_4)ofthe(old_3)records
itsa(old_0)single(new_2)string(old_1)withocurrences(old_4)ofthe(old_3)records
itsa(old_0)single(old_2)string(old_1)withocurrences(old_4)ofthe(new_3)records
itsa(old_0)single(old_2)string(old_1)withocurrences(new_4)ofthe(old_3)records
Yet, the desired output would look like this:
itsa(new_0)single(new_2)string(new_1)withocurrences(new_4)ofthe(new_3)records
Can anyone shed some light on these dark waters? Thanks in advance!
Improving the existing script
Improvements:
Use "$()" instead of ``. It supports whitespace and is easier to read.
Don't execute sed for each line. sed already loops over all lines and is faster than a loop in bash.
The adapted script:
text="$(< input)"
while read -r mapline; do
mapf1="$(awk 'BEGIN {FS=" "} {print $1}' <<< "$mapline")"
mapf2="$(awk 'BEGIN {FS=" "} {print $2}' <<< "$mapline")"
text="$(sed "s/${mapf2}/${mapf1}/g" <<< "$text")"
done < "map"
echo "$text"
The variable $text contains the complete input file and is modified in each iteration. The output of this script is the file after all replacements were done.
Alternative approach
Convert the map file into a pattern for sed and execute sed just once using that pattern.
pattern="$(sed 's#\(.*\) \(.*\)#s/\2/\1/g#' map)"
sed "$pattern" input
The first command is the conversion step. The file
new_0 old_0
new_1 old_1
...
will result in the pattern
s/old_0/new_0/g
s/old_1/new_1/g
...
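Running it against the sample files above gives (old_5 is left untouched, since the map has no line for it):
$ sed "$pattern" input
itsa(new_0)single(new_2)string(new_1)with(old_5)ocurrences(new_4)ofthe(new_3)records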
It is possible in GNU Awk as follows:
awk 'FNR==NR{hash[$2]=$1; next} \
{for (i=1; i<=NF; i++)\
{for(key in hash) \
{if (match($i,key)) {$i=sprintf("(%s)",hash[key]); break}}}print}' \
map-file FS='[()]' OFS= input-file
produces an output as,
itsa(new_0)single(new_2)string(new_1)withold_5ocurrences(new_4)ofthe(new_3)records
Another in Gnu awk, using split and ternary operator(s):
$ awk '
NR==FNR { a[$2]=$1; next }
{
n=split($0,b,"[()]")
for(i=1;i<=n;i++)
printf "%s%s",(i%2 ? b[i] : (b[i] in a? "(" a[b[i]] ")":"")),(i==n?ORS:"")
}' map foo
itsa(new_0)single(new_2)string(new_1)withocurrences(new_4)ofthe(new_3)records
First you read the map into a hash. When processing the file, split each record by ( and ). Every even-indexed piece (i%2==0) is a token that could be in the map. While printing, a ternary operator tests whether the piece is found in a; when there is a match, the mapped value is output in parentheses.
I am extracting the values in the fourth column of a file and trying to add them.
#!/bin/bash
cat tag_FLI1 | awk '{print $4}'>tags
$t=0
for i in `cat tags`
do
$t=$t+$i (this is the position of trouble)
done
echo $t
error on line 6.
Thank you in advance for your time.
In case of using only awk for the task:
If fields are separated with blanks:
awk '{ sum += $4 } END { print sum }' tag_FLI1
Otherwise, use FS variable, like:
awk 'BEGIN { FS = "|" } { sum += $4 } END { print sum }' tag_FLI1
That's not how you do arithmetic in bash. To add the values from two variables x and y and store the result in a third variable z, it should look like this:
z=$((x + y))
However, you could more simply just do everything in awk, replacing your awk '{print $4}' with:
awk '{ sum += $4 } END { print sum }'
The awk approach will also correctly handle floating point numbers, which the bash approach will not.
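For instance, with made-up decimal values in column 4:
$ printf 'a b c 1.5\na b c 2.25\n' | awk '{ sum += $4 } END { print sum }'
3.75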
You need to use a numeric context for adding the numbers. Also, cat is not needed here, as awk can read from a file. Unless you use "tags" in another script, you don't need to create the file. Also, if you are using bash and not perl or php, there shouldn't be a "$" on the left side of a variable assignment.
t=0
while read -r i
do
t=$((t + i))
done < <(awk '{print $4}' tag_FLI1)
echo "$t"
That can be done in just one line:
awk '{sum += $4} END {print sum}' tag_FLI1
However, if this is a learning exercise for bash, have a look at this example:
#!/bin/bash
sum=0
while read -r line; do
(( sum += $line ))
done < <(awk '{print $4}' tag_FLI1)
echo $sum
There were essentially 3 issues with your code:
Variables are assigned using VAR=..., not $VAR=.... See http://tldp.org/LDP/abs/html/varassignment.html
The way you sum the numbers is incorrect. See arithmetic expansion for examples of how to do it.
It is not necessary to use an intermediate file just to iterate through the output of a command. Use a while loop as shown above, but beware of this caveat.
I'm writing a bash script to get data from procmail and produce an output CSV file.
At the end, I need to translate from this:
---label1: 123456
---label2: 654321
---label3: 246810
---label4: 135791
---label5: 101010
to this:
label1,label2,label3,label4,label5
123456,654321,246810,135791,101010
I could do this easily with a Ruby script, but I don't want to call any script other than the bash script itself. So I've thought of doing it with sed.
I can extract the data I want like this:
sed -nr 's/^---(\S+): (\S+)$/\1,\2/p'
But I don't know how to transpose it. Can you help me?
If you're running sed I'd argue that you are calling another script. So why not write it in Ruby, if that's easier to write and maintain?
If you're worried about having multiple files, you could embed the Ruby code in the bash script as a here document (Ruby can read a program from its standard input).
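A minimal sketch of that idea (input.txt is a hypothetical data file; ruby - reads the program from standard input and passes the remaining arguments on to ARGF):
#!/bin/bash
ruby - input.txt <<'RUBY'
head, data = [], []
ARGF.each_line do |line|
  if line =~ /^---(\S+): (\S+)$/
    head << $1
    data << $2
  end
end
puts head.join(",")
puts data.join(",")
RUBY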
you can just do everything in awk
awk 'BEGIN{
FS=": "
}
{
gsub("---","")
label[++c] = $1
num[++d] =$2
}
END{
for(i=1;i<c;i++){
printf "%s,", label[i]
}
print label[c]
for(i=1;i<d;i++){
printf "%s,", num[i]
}
print num[d]
}' file
output
# ./test.sh
label1,label2,label3,label4,label5
123456,654321,246810,135791,101010
redirect the output to a csv file as needed
Perhaps this is what you are looking for? (from unix.com; the snippet arrived garbled here, so the loop body below is a reconstruction that pastes each column out as one comma-separated row)
num=$(awk -F"," 'NR==1 { print NF }' data)
i=1
while (( i <= num ))
do
    # reconstructed loop body: emit column i as a comma-joined row
    cut -d"," -f"$i" data | paste -s -d"," - >> tmpdata
    (( i = i + 1 ))
done
mv tmpdata data
Finally I resolved it using Ruby, as suggested by Dave Webb, but as a one-liner rather than a here document, with the following script:
ruby -ne 'BEGIN{@head=[];@data=[]}; @head << $1 && @data << $2 if $_.match(/^---(\S+): (\S+)$/); END{puts @head.join(",");puts @data.join(",")}' $FILE
I didn't know that I could use BEGIN, END blocks to set the variables and output the results.
You said you needed a bash script:
#!/bin/bash
labels=`awk '/---/{printf("%s,",$1)}' file.txt `
values=`awk '/---/{printf("%s,",$2)}' file.txt `
labels=`echo $labels|sed 's/---//g;s/\://g;s/\,$//'`
values=`echo $values|sed 's/\,$//'`
echo $labels
echo $values
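With the sample data above, this echoes:
label1,label2,label3,label4,label5
123456,654321,246810,135791,101010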
You can do something like this from within the shell script:
cat demo | awk -F':' '{print $1}' | sed -e s/'---'// | tr '\n' ',' | sed 's/,$/\n/' > csv_file
cat demo | awk '{print $2}' | tr '\n' ',' | sed 's/,$/\n/' >> csv_file