How to use awk to select text from a file starting from a line number until a certain string - bash

I have a file that I want to read starting from a certain line number, until a certain string. I already used
awk "NR>=$LINE && NR<=$((LINE + 121)) {print}" db_000022_model1.dlg
to read from a specific line until an incremented line number, but now I need to make it stop by itself at a certain string in order to be able to use it on other files.
DOCKED: ENDBRANCH 7 22
DOCKED: TORSDOF 3
DOCKED: TER
DOCKED: ENDMDL
I want it to stop after it reaches
DOCKED: ENDMDL
#!/bin/bash
# This script is for extracting the pdb files from a sorted list of scored
# ligands
mkdir top_poses
for d in $(head -20 summary_2.0.sort | cut -d, -f1 | cut -d/ -f1)
do
cd "$d"||continue
# find the cluster with the highest population within the dlg
RUN=$(grep '###*' "$d.dlg" | sort -k10 -r | head -1 | cut -d\| -f3 | sed 's/ //g')
LINE=$(grep -ni "BEGINNING GENETIC ALGORITHM DOCKING $RUN of 100" "$d.dlg" | cut -d: -f1)
echo "$LINE"
# extract the best pose and correct the format
awk -v line="$((LINE + 14))" "NR>=line; /DOCKED: ENDMDL/{exit}" "$d.dlg" | sed 's/^........//' > "$d.pdbqt"
# convert the pdbqt file into pdb
#obabel -ipdbqt $d.pdbqt -opdb -O../top_poses/$d.pdb
cd ..
done
When I try the
awk -v line="$((LINE + 14))" "NR>=line; /DOCKED: ENDMDL/{exit}" "$d.dlg" | sed 's/^........//' > "$d.pdbqt"
just like that in the shell terminal, it works. But in the script it outputs an empty file.

Depending on your requirements for handling DOCKED: ENDMDL occurring before your target line:
awk -v line="$LINE" 'NR>=line; /DOCKED: ENDMDL/{exit}' db_000022_model1.dlg
or:
awk -v line="$LINE" 'NR>=line{print; if (/DOCKED: ENDMDL/) exit}' db_000022_model1.dlg

Related

How to remove all but the last 3 parts of FQDN?

I have a list of IP lookups and I wish to remove all but the last 3 parts, so:
98.254.237.114.broad.lyg.js.dynamic.163data.com.cn
would become
163data.com.cn
I have spent hours searching for clues, including parameter substitution, but the closest I got was:
$ string="98.254.237.114.broad.lyg.js.dynamic.163data.com.cn"
$ string1=${string%.*.*.*}
$ echo $string1
Which gives me the inverted answer of:
98.254.237.114.broad.lyg.js.dynamic
which is everything but the last 3 parts.
A script to do a list would be better than just the static example I have here.
Using CentOS 6, I don't mind if it by using sed, cut, awk, whatever.
Any help appreciated.
Thanks, now that I have working answers, may I ask a follow-up: to then process the resulting list, and if the last part (after the last '.') is 3 characters, e.g. .com, .net, etc., then to keep just the last 2 parts.
If this is against protocol, please advise how to ask a follow-up question.
If parameter expansion inside another parameter expansion is supported, you can use this:
$ s='98.254.237.114.broad.lyg.js.dynamic.163data.com.cn'
$ # removing last three fields
$ echo "${s%.*.*.*}"
98.254.237.114.broad.lyg.js.dynamic
$ # pass output of ${s%.*.*.*} plus the extra . to be removed
$ echo "${s#${s%.*.*.*}.}"
163data.com.cn
You can also reverse the line, get the required fields, and then reverse again; this makes it easier to change the field numbers:
$ echo "$s" | rev | cut -d. -f1-3 | rev
163data.com.cn
$ echo "$s" | rev | cut -d. -f1-4 | rev
dynamic.163data.com.cn
$ # and easy to use with file input
$ cat ip.txt
98.254.237.114.broad.lyg.js.dynamic.163data.com.cn
foo.bar.123.baz.xyz
a.b.c.d.e.f
$ rev ip.txt | cut -d. -f1-3 | rev
163data.com.cn
123.baz.xyz
d.e.f
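The parameter expansion from above can also be wrapped in a loop for file input; a minimal sketch using the same ip.txt:
$ while IFS= read -r s; do echo "${s#${s%.*.*.*}.}"; done < ip.txt
163data.com.cn
123.baz.xyz
d.e.f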
echo $string | awk -F. '{ if (NF == 2) { print $0 } else { print $(NF-2)"."$(NF-1)"."$NF } }'
NF signifies the total number of fields separated by "." and so we want the last piece ($NF), the last but one ($(NF-1)), and the last but two ($(NF-2)).
$ echo $string | awk -F'.' '{printf "%s.%s.%s\n",$(NF-2),$(NF-1),$NF}'
163data.com.cn
Brief explanation:
Set the field separator to .
Print only the last 3 fields using $(NF-2), $(NF-1), and $NF.
And there's also another option you may try (FPAT requires GNU awk 4.0+; the dots are bracketed so they match literally):
$ echo $string | awk -v FPAT='[^.]+[.][^.]+[.][^.]+$' '{print $NF}'
163data.com.cn
It sounds like this is what you need:
awk -F'.' '{sub("([^.]+[.]){"NF-3"}","")}1'
e.g.
$ echo "$string" | awk -F'.' '{sub("([^.]+[.]){"NF-3"}","")}1'
163data.com.cn
but with just 1 sample input/output it's just a guess.
Regarding your follow-up question, this might be what you're asking for:
$ echo "$string" | awk -F'.' '{n=(length($NF)==3?2:3); sub("([^.]+[.]){"NF-n"}","")}1'
163data.com.cn
$ echo 'www.google.com' | awk -F'.' '{n=(length($NF)==3?2:3); sub("([^.]+[.]){"NF-n"}","")}1'
google.com
A version which uses expr:
expr "$string" : '.*\.\(.*\..*\..*\)'
To use it with a file you can iterate with xargs:
File:
head list.dat
98.254.237.114.broad.lyg.js.dynamic.163data.com.cn
98.254.34.56.broad.kkk.76onepi.co.cn
98.254.237.114.polst.a65dal.com.cn
iterating the whole file:
cat list.dat | xargs -I^ -L1 expr "^" : '.*\.\(.*\..*\..*\)'
Note: it won't be very efficient at large scale, so consider on your own whether it is good enough for you.
Regexp explanation:
.*\.      \( .* \. .* \. .* \)
\__/      \___________________/
  |                 |
  |                 the \( \) brackets mark the part we extract;
  |                 each \. matches a dot separating the last three fields
  |
  the rest of the string and the final dot, which we are not
  interested in (outside the brackets)
details:
http://tldp.org/LDP/abs/html/string-manipulation.html -> Substring Extraction

How do I check if the number of lines in a set of files is not equal to a certain number? (Bash)

I want to detect which one of my files is corrupt, and by corrupt I mean that the file does not have 102 lines in it. I want the for loop that I'm writing to output an error message giving me the file names of the corrupt files. I have files named ethane1.log ethane2.log ethane3.log ... ethane10201.log.
for j in {1..10201}
do
if [ ! (grep 'C 2- C 5' ethane$j.log | cut -c 22- | tail -n +2 | awk '{for (i=1;i<=NF;i++) print $i}' | wc -l) == 102]
then echo "Ethane$j.log is corrupt."
fi
done
When the file is not corrupt, the command:
grep 'C 2- C 5' ethane$j.log | cut -c 22- | tail -n +2 | awk '{for (i=1;i<=NF;i++) print $i}' | wc -l
returns:
102
Otherwise it returns another number.
The only thing is, I'm not sure of the syntax for the if construct (how to capture the 102 output of wc -l in a variable, and then how to check whether or not it is equal to 102).
A sample output would be:
Ethane100.log is corrupt.
Ethane2010.log is corrupt.
Ethane10201.log is corrupt.
To count lines, use wc -l:
wc -l ethane*.log | grep -v '^ *102 ' | head -n-1
grep -v removes matching lines
^ matches the start of a line
space* matches any number of spaces (0 or more)
head removes some trailing lines
-n-1 removes the last line (the total)
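If you want the exact "is corrupt" message from the question, a minimal sketch wrapping this line count in the question's own loop (wc -l < file prints just the count, with no filename):
for j in {1..10201}
do
if [ "$(wc -l < "ethane$j.log")" -ne 102 ]
then echo "Ethane$j.log is corrupt."
fi
done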
Using gawk
awk 'ENDFILE{if(FNR!=102)print FNR,FILENAME}' ethane*.log
At the end of each file, this checks that the per-file line count is 102 and, if not, prints the number of lines and the filename. (FNR is the per-file record count; NR would keep counting across files.)
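If gawk (which provides ENDFILE) isn't available, a rough POSIX-awk equivalent keeps a per-file count in an array (note: files with zero lines won't be reported):
awk '{n[FILENAME]++} END{for (f in n) if (n[f] != 102) print n[f], f}' ethane*.log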

Unix uniq command to CSV file

I have a text file (list.txt) containing single and multi-word English phrases. My goal is to do a word count for each word and write the results to a CSV file.
I have figured out the command to write the number of instances of each word, sorted from largest to smallest. That command is:
$ tr 'A-Z' 'a-z' < list.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r > output.txt
The problem is the way the new file (output.txt) is formatted. There are 3 leading spaces, followed by the number of occurrences, followed by a space, followed by the word. Then on to a next line. Example:
9784 the
6368 and
4211 for
2929 to
What would I need to do in order to get the results in a more desired format, such as CSV? For example, I'd like it to be:
9784,the
6368,and
4211,for
2929,to
Even better would be:
the,9784
and,6368
for,4211
to,2929
Is there a way to do this with a Unix command, or do I need to do some post-processing within a text editor or Excel?
Use awk as follows:
> cat input
9784 the
6368 and
4211 for
2929 to
> cat input | awk '{ print $2 "," $1}'
the,9784
and,6368
for,4211
to,2929
Your full pipeline will be:
$ tr 'A-Z' 'a-z' < list.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r | awk '{ print $2 "," $1}' > output.txt
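If you instead wanted the first format from the question (count,word), the same idea applies with the fields kept in their original order:
$ tr 'A-Z' 'a-z' < list.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r | awk '{ print $1 "," $2 }' > output.txt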
Or use sed to strip the leading spaces and replace the remaining space with a comma (note this produces the count,word order):
cat extra_set.txt | sort -i | uniq -c | sort -nr | sed 's/^ *//' | sed 's/ /,/'

'Read' command stripping '\n' string

I want to extract data from a file which looks like this:
BK20120802130531:/home/michael/Scripts/usb_backup.sh
BK20120802130531:/home/michael/Scripts/yad_0.17.1.1-1_i386.deb
BK20120802130731:/home/michael/Scripts/gbk.sh
BK20120802130131:/home/michael/Scripts/alt-notify-send.sh
BK20120802130131:/home/michael/Scripts/bk.bak
BK20120802130131:/home/michael/Scripts/bk.sh
BK20120802130131:/home/michael/Scripts/demande_password.sh
The idea is to show on the screen (without creating a temporary file, nor modifying the original file) the following:
alt-notify-send.sh
/home/michael/Scripts
bk.bak
/home/michael/Scripts
bk.sh
/home/michael/Scripts
demande_password.sh
/home/michael/Scripts
gbk.sh
/home/michael/Scripts
usb_backup.sh
/home/michael/Scripts
yad_0.17.1.1-1_i386.deb
/home/michael/Scripts
To sum up:
1. Strip the characters before ':'
2. Put the filenames before their corresponding directory
3. Sort the filenames in alphabetical order
4. Put a newline between each filename and its corresponding directory
I succeeded in doing all this, but there is still an ugly thing in my code concerning point 4:
cut -f 2 -d ':' $big_file | \
sort -u | \
while read file ; do
echo "$(basename "$file")zipzapzupzop$(dirname "$file")" # <-- ugly thing #1
done | \
sort -dfb | \
while read line ; do
echo $line
done | \
sed 's/zipzapzupzop/\n/' # <-- ugly thing #2
At the beginning, I had written:
echo "$(basename "$file")\n$(dirname "$file")"
in place of ugly thing #1, in order to be able to do
echo -e "$line"
in the second while loop. However, the read command strips the '\' each time, so that I obtain:
alt-notify-send.shn/home/michael/Scripts
bk.bakn/home/michael/Scripts
bk.shn/home/michael/Scripts
demande_password.shn/home/michael/Scripts
gbk.shn/home/michael/Scripts
usb_backup.shn/home/michael/Scripts
yad_0.17.1.1-1_i386.debn/home/michael/Scripts
I tried to protect the '\' character with another '\', but the result is the same.
man read
is of no help either. So, what is the proper way to do this?
read is a shell builtin, and man read may be giving you the docs for the (mostly unrelated) syscall.
read -r will prevent read from processing \ sequences.
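A quick illustration of the stripping, and how -r prevents it:
$ printf '%s\n' 'a\nb' | while read line; do echo "$line"; done
anb
$ printf '%s\n' 'a\nb' | while read -r line; do echo "$line"; done
a\nb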
The whole thing could have been done with a single awk script though (this one uses gawk's asort):
awk '
{
start = index($0, ":") + 1
end = match($0, "[^/]*$")
out[NR] = substr($0, end) "\n" substr($0, start, end - start - 1)
}
END {
asort(out)
for (i = 1; i <= NR; i++)
print out[i]
}'
If you don't need to handle spaces in filenames, you can do this:
cat "$big_file" | sed 's/.*://' | while read file; do
echo "$(basename $file) $(dirname $file)"
done | sort | awk '{print $1"\n"$2}'
You can do it with the following pipeline (should be on one line, I've split it and added comments for readability):
| sed -e 's/^[^:]*://' # Remove from start of line to first ':'
-e 's?/\([^/]*$\)? \1?' # Replace final '/' with a space
| sort -k2 # Sort on column 2 (filename)
| awk '{print $2"\n"$1}' # Reverse fields
See the following transcript:
echo 'BK20120802130531:/home/michael/Scripts/usb_backup.sh
BK20120802130531:/home/michael/Scripts/yad_0.17.1.1-1_i386.deb
BK20120802130731:/home/michael/Scripts/gbk.sh
BK20120802130131:/home/michael/Scripts/alt-notify-send.sh
BK20120802130131:/home/michael/Scripts/bk.bak
BK20120802130131:/home/michael/Scripts/bk.sh
BK20120802130131:/home/michael/Scripts/demande_password.sh'
| sed -e 's/^[^:]*://'
-e 's?/\([^/]*$\)? \1?'
| sort -k2
| awk '{print $2"\n"$1}'
alt-notify-send.sh
/home/michael/Scripts
bk.bak
/home/michael/Scripts
bk.sh
/home/michael/Scripts
demande_password.sh
/home/michael/Scripts
gbk.sh
/home/michael/Scripts
usb_backup.sh
/home/michael/Scripts
yad_0.17.1.1-1_i386.deb
/home/michael/Scripts
Just keep in mind that sort may not work as expected with lines containing spaces.
Assuming you do not have hash marks (#) in your filenames, you could use this coreutils pipeline:
cut -d: -f2- infile \
| sed -r 's,(.*)/([^/]*)$,\2#\1,' \
| sort -t'#' \
| tr '#' '\n'
cut removes the first part.
sed splits the path, swaps filename and directory, and delimits them with a #.
sort sorts the #-delimited text.
tr finally replaces the # with a newline.
If you know the number of path elements, you can use the simpler version (note that with the leading '/', the filename here is the 5th /-separated field):
cut -d: -f2- infile \
| sort -t/ -k5,5 \
| sed -r 's,(.*)/([^/]*)$,\2\n\1,'

sed move text in .txt to next line

I am trying to parse out a text file that looks like the following:
EMPIRE,STATE,BLDG,CO,494202320000008,336,5,AVE,ENT,NEW,YORK,NY,10003,N,3/1/2012,TensionCode,VariableICAP,PFJICAP,Residential,%LBMPZone,L,9,146.0,,,10715.0956,,,--,,0,,,J,TripNumber,ServiceClass,PreviousAccountNumber,MinMonthlyDemand,TODCode,Profile,Tax,Muni,41,39,00000000000000,9952,54,Y,Non-Taxable,--,FromDate,ToDate,Use,Demand,BillAmt,12/29/2011,1/31/2012,4122520,6,936.00,$293,237.54
what I would like to see is the data stacked
- EMPIRE STATE BLDG CO
- 494202320000008
- 336 5 AVE ENT
- NEW YORK NY
and so on. If anything, after each comma I would want the following text to go to a new line. Ultimately, for the last part, from where it states FromDate onward, I would like to have it in a txt file like
- From Date ToDate use Demand BillAmt
- 12/29/2011 1/31/2012 4122520 6,936.00 $293,237.54.
I am using cygwin on a windows XP machine. Thank you in advance for any assistance.
For getting the last line into a separate file:
echo -e "From Date\tToDate\tuse\tDemand\tBillAmt" > lastlinefile.txt
cat originalfile.txt | sed 's/,FromDate/~FromDate/' | awk -v FS="~" '{print $2}' | sed 's/FromDate,ToDate,Use,Demand,BillAmt,//' | sed 's/,/\t/g' >> lastlinefile.txt
For the rest:
cat originalfile.txt | sed -r 's/,FromDate.*//' | sed 's/,/\n/g' | sed -r 's/$/\n\n/' > nocommas.txt
Your mileage may vary as far as the '\n' in the second command is concerned. If it doesn't work properly, replace it with a space (assuming your data doesn't have spaces).
Or, if you like, a shell script to operate on a file and split it:
#!/bin/bash
if [ -z "$1" ]
then echo "Usage: $0 filename.txt; exit; fi
echo -e "From Date\tToDate\tuse\tDemand\tBillAmt" > "$1_lastline.txt"
cat "$1" | sed 's/,FromDate/~Fromdate/' | awk -v FS="~" '{print $2}' | sed 's/FromDate,ToDate,use,Demand,BillAmt,//' | sed 's/,/\t/' >> "$1_lastline.txt"
cat "$1" | sed -r 's/,Fromdate[^\n]+//' | sed 's/,/\n/' | sed -r 's/$/\n\n' > "$1_fixed.txt"
Just paste it into a file and run it. It's been years since I used Cygwin... you may have to chmod +x the file first.
I'm providing you two answers depending on how you wanted the file. The previous answer split it into two files, this one keeps it all in one file in the format:
EMPIRE
STATE
BLDG
CO
494202320000008
336
5
AVE
ENT
NEW
YORK
NY
From Date ToDate use Demand BillAmt
12/29/2011 1/31/2012 4122520 6,936.00 $293,237.54.
That's the best I can do with the delimiters you have in place. If you'd left it as something like "EMPIRE STATE BUILDING CO,494202320000008,336 5 AVE ENT,NEW YORK,NY" it'd be a lot easier.
#!/bin/bash
if [ -z "$1" ]
then echo "Usage: $0 filename.txt; exit; fi
cat "$1" | sed 's/,FromDate/~Fromdate/' | awk -v FS="~" '{gsub(",","\n",$1);print $1;print "FromDate\tToDate\tuse\tDemand\tBillAmt";gsub("FromDate,ToDate,use,Demand,BillAmt","",$2);gsub(",","\t",$2);print $2}' >> "$1_fixed.txt"
again, just paste it into a file and run it from Cygwin: ./filename.sh
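
For what it's worth, the simple "new line after each comma" part of the question is mostly covered by tr alone (values that themselves contain commas, like the dollar amounts, will still get split):
tr ',' '\n' < originalfile.txt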