Using AWK to preserve lines based on a single field being duplicated in a CSV file - bash

Would someone help me form a script in Bash to keep only the unique lines, based solely on identifying duplicate values in a single field (the first field)?
If I have data like this:
123456,23423,Smith,John,Jacob,Main St.,,Houston,78003
654321,54524,Smith,Jenny,,Main St.,,Houston,78003
332423,9023432,Gonzales,Michael,,Everyman,,Dallas,73423
123456,324324,Bryant,Kobe,,Special St.,,New York,2311
234324,232411,Willis,Bruce,,Sunset Blvd,,Hollywood,90210
438329,34233,Moore,Mike,,Whatever,,Detroit,92343
654321,43234,Smith,Jimbo,,Main St.,,Houston,78003
And I'd like to keep only the lines which do not have matching first fields
(the result would be a file with these contents, based on the sample above):
332423,9023432,Gonzales,Michael,,Everyman,,Dallas,73423
234324,232411,Willis,Bruce,,Sunset Blvd,,Hollywood,90210
438329,34233,Moore,Mike,,Whatever,,Detroit,92343
What would the bash/awk approach be? Thanks in advance.
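One way to do it (a sketch, not from the original thread): have awk read the file twice. The first pass counts how often each first field occurs; the second pass prints only lines whose first field occurred exactly once, preserving the original order. The filename file.csv is a placeholder:

# First pass (NR==FNR): tally each first field.
# Second pass: print a line only if its first field appeared exactly once.
awk -F, 'NR==FNR {count[$1]++; next} count[$1]==1' file.csv file.csv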

Related

Bash: Sort file numerically, but only where the first field matches a pattern

Due to poor past naming practices, I'm left with a list of names that is proving to be a challenge to work with. The bottom line is that I want the most current name (by date) to be placed in a variable. All the names are listed (unsorted) in a file called bar.txt.
In this case I can't rename, and there's no way to get the actual dates of the images; these names are all I have to go on. The names can follow one of several patterns:
foo
YYYYMMDD-foo
YYYYMMDD##-foo
foo can be anything from a single character to a long string of letters/numbers/symbols. I am interested only in the names matching the second pattern, YYYYMMDD-foo, as those are from after we started tagging consistently.
I would like to end up with a variable containing the most recent date that follows the pattern YYYYMMDD-foo.
I know sort -k1 -n < bar.txt, but then I'm not sure how to isolate the second pattern's results to extract what I need.
How do I sort the file to ignore anything but the second pattern, and return the most current date?
Sample
Given that bar.txt looks like this:
test
2017120901-develop-BUILD-31
20170326-TEST-1.2.0
20170406-BUILD-40-1.2.0-test
2010818_001
I would want to extract 20170406-BUILD-40-1.2.0-test
Since your requirement involves 1) getting only the names of a certain format and 2) sorting them to pick the latest one, you can use Awk and GNU sort together to achieve it:
awk -F'-' 'length($1) == 8' file | sort -nrk1 | head -1
20170406-BUILD-40-1.2.0-test
The solution works by keeping only those lines whose first hyphen-delimited column has exactly 8 characters, matching the YYYYMMDD layout. Once those are filtered, a reverse numeric sort is applied on the first field and the most recent line is obtained with head.
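Since the goal was to land the most recent name in a variable, command substitution wraps the pipeline up. A sketch, with an all-digits check on the first segment added as a defensive tweak (not part of the original answer):

# Keep names whose first hyphen-delimited segment is exactly 8 digits,
# sort newest-first, and take the top line.
latest=$(awk -F'-' 'length($1) == 8 && $1 ~ /^[0-9]+$/' bar.txt | sort -nrk1 | head -1)
echo "$latest"    # 20170406-BUILD-40-1.2.0-test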

Use bash to extract data between two regular expressions while keeping the formatting

I have a question about a small piece of code using the awk command; I have not found an answer/solution anywhere.
I am trying to parse an output file and extract all data between the 1st expression (including) ATOMIC and 2nd expression (excluding) Bond. This data is to be sent to a new file $1_geom. So far I have the following:
`awk '/ATOMIC/{flag=1;next}/Bond lengths in Bohr/{flag=0}flag' $1` >> $1_geom
This script will extract the correct data for me, but there are 2 problems:
The line ATOMIC is not extracted with the data.
The data is extracted and appended to a single line. I want the data to retain the formatting from the parsed file (5 columns, a variable number of lines). Is there a different way to append data (other than >>) so that I can keep the formatting?
Any help is appreciated, thank you.
The next is causing the first match to be skipped; take it out if you don't want that.
The backticks by themselves are a shell syntax error (unless your Awk script happens to produce valid shell commands). I'm guessing you have a useless echo or something like that in your actual script which disarms the error, but instead produces the symptoms you describe.
This was part of a csh script and I did have an "echo" in front of this line. Removing the "echo" makes it work perfectly and addresses the 2 questions I had.
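Putting both fixes together, the corrected line would presumably look like this (removing next keeps the ATOMIC line, and dropping the backticks and the leading echo lets the redirection do its job):

# Print from the line containing ATOMIC (inclusive) up to, but not
# including, the line containing "Bond lengths in Bohr".
awk '/ATOMIC/{flag=1} /Bond lengths in Bohr/{flag=0} flag' "$1" >> "${1}_geom"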

Replace specific commas in a csv file

I have a file like this:
gene_id,transcript_id(s),length,effective_length,expected_count,TPM,FPKM,id
ENSG00000000003.14,ENST00000373020.8,ENST00000494424.1,ENST00000496771.5,ENST00000612152.4,ENST00000614008.4,2.23231E3,2.05961E3,2493,2.112E1,1.788E1,00065a62-5e18-4223-a884-12fca053a109
ENSG00000001084.10,ENST00000229416.10,ENST00000504353.1,ENST00000504525.1,ENST00000505197.1,ENST00000505294.5,ENST00000509541.5,ENST00000510837.5,ENST00000513939.5,ENST00000514004.5,ENST00000514373.2,ENST00000514933.1,ENST00000515580.1,ENST00000616923.4,3.09456E3,2.92186E3,3111,1.858E1,1.573E1,00065a62-5e18-4223-a884-12fca053a109
The problem is that the file should have been tab delimited instead of comma delimited, because the values starting with ENST (i.e. transcript_id(s)) are grouped in one column.
The number of ENST IDs is different in each line.
Each ENST ID has the same pattern: it starts with ENST, followed by 11 digits, then a period and 1-3 digits: ^ENST[0-9]{11}[.][0-9]{1,3}.
I want to convert all the commas between ENST IDs to a : or some other character so that I can read this as a CSV file. Any help would be much appreciated. Thanks!
I imagine something as simple as
sed 's|,ENST|:ENST|g;s|:|,|' < /path/to/your/file
should work. No reason to over-complicate.
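The trick is that the g-flag substitution turns every comma preceding an ENST ID into a colon, including the one separating gene_id from the first transcript; the second, non-global substitution then restores only the first colon on each line back to a comma. Run on the sample (first two lines of output shown), this gives:

sed 's|,ENST|:ENST|g;s|:|,|' file
gene_id,transcript_id(s),length,effective_length,expected_count,TPM,FPKM,id
ENSG00000000003.14,ENST00000373020.8:ENST00000494424.1:ENST00000496771.5:ENST00000612152.4:ENST00000614008.4,2.23231E3,2.05961E3,2493,2.112E1,1.788E1,00065a62-5e18-4223-a884-12fca053a109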

Extracting data from text file after 2 conditions have been met

I'm working on a bash script at the moment which extracts data from a text file called carslist.txt, with each car (and its corresponding characteristics) on a separate line. I've been able to extract and save data from the text file after it meets a single condition (example below), but I can't figure out how to do it for two conditions.
Single condition example:
grep 'Vauxhall' $CARFILE > output/Vauxhall_Cars.txt
output:
Vauxhall:Vectra:1999:White:2
Vauxhall:Corsa:1999:White:5
Vauxhall:Cavalier:1995:White:2
Vauxhall:Nova:1994:Black:8
From the examples above, how would I extract the data if I wanted both conditions, Vauxhall and White, to be met before extracting?
The grep example above only requires Vauxhall to match before pulling and saving the data, but I have no idea how to do it for two. I've tried piping the patterns together as Vauxhall | White, but after that I was out of ideas.
Thanks in advance.
I would recommend using awk, like this:
awk -F: '$1=="Vauxhall" && $4=="White"' input.file
As I'm using : as the field separator, I simply need to check the values of field 1 and 4.
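If you would rather stick with grep, chaining two greps also works, though unlike the awk version it matches the words anywhere on the line rather than in specific fields (the output filename below is illustrative):

# First grep keeps Vauxhall lines; the second keeps those that also contain White.
grep 'Vauxhall' "$CARFILE" | grep 'White' > output/Vauxhall_White_Cars.txt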

return line of strings between two strings in a ruby variable

I would like to extract a line of strings but am having difficulties using the correct RegEx. Any help would be appreciated.
String to extract: KSEA 122053Z 21008KT 10SM FEW020 SCT250 17/08 A3044 RMK AO2 SLP313 T01720083 50005
For some reason Stack Overflow won't let me cut and paste the XML data here since it includes "<>" characters. Basically I am trying to extract the data between "raw_text" ... "/raw_text" from an XML document that will always be formatted like the following: http://www.aviationweather.gov/adds/dataserver_current/httpparam?dataSource=metars&requestType=retrieve&format=xml&hoursBeforeNow=3&mostRecent=true&stationString=PHNL%20KSEA
However, the station name, in this case "KSEA", will not always be the same. It will change based on user input into a search variable.
Thanks in advance.
If I can assume that every string you want starts with KSEA, then the answer would be:
.*(KSEA.*?)KSEA.*
Using ? lets .* match as little as possible (a non-greedy match).
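Since the station name changes with user input, anchoring on the XML tags instead of KSEA may be more robust. A sketch using the shell tools seen elsewhere on this page, assuming GNU grep with PCRE support and the response saved as metars.xml (a placeholder name), with each raw_text element on a single line:

# \K drops the opening tag from the match; the non-greedy .*? plus the
# lookahead stop at the closing tag, so only the METAR text prints.
grep -oP '<raw_text>\K.*?(?=</raw_text>)' metars.xml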
