Bash: Find line similar to searchstring

I have a csvfile like:
col1,col2
A,100foo
A,104foo
B,110bar
C,111bar
Now I have a searchstring
B,112
Which shall return line:
B,110bar
Or a searchstring
A,103
Which shall return A,100foo
So I am always looking for the last line that is 'smaller' than the searchstring.
The second column is not a number, so I cannot do math operations.
I more need something like an 'inaccurate search'.
Can I do that in Bash?
The file can be sorted alphabetically, so I was thinking about something 'grep-like' and then taking the line before.

It's not really clear how inaccurate the search is allowed to be.
Would searching for all lines that begin with the same first character as the searchstring do?
str="B,110"
grep "^${str:0:1}" csvfile
or are there more requirements on the format of the line?
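If the file really is sorted, the "take the line before" idea can be made precise as a binary search for the greatest line that sorts at or below the searchstring. A minimal sketch in Python (the sample lines are hard-coded here for illustration, and the function name is my own):

```python
import bisect

def floor_line(lines, searchstring):
    # Index of the first line that sorts strictly after the searchstring;
    # the line just before it is the greatest line <= searchstring.
    i = bisect.bisect_right(lines, searchstring)
    return lines[i - 1] if i > 0 else None

lines = ["A,100foo", "A,104foo", "B,110bar", "C,111bar"]
print(floor_line(lines, "B,112"))  # B,110bar
print(floor_line(lines, "A,103"))  # A,100foo
```

Because the comparison is plain string ordering, this only works when the sorted text order agrees with the intended numeric order (as it does in the sample, where the numbers have equal width).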

Related

What's an efficient way of checking the format of a file in Ruby?

I have a file like:
Fruit.Store={
#blabla
"customer-id:12345,item:store/apple" = (1,2); #blabla
"customer-id:23456,item:store/banana" = (1,3); #blabla
"customer-id:23456,item:store/watermelon" = (1,4);
#blabla
"customer-id:67890,item:store/watermelon" = (1,6);
#The following two are unique
"customer-id:0000,item:store/" = (100, 100);
#
"" = (0,0)
};
Except for the comments, each line has the same format: customer-id and item:store/ are fixed, and customer-id is a 5-digit number. The last two records are unique. How could I make sure the file is in the right format, elegantly? I am thinking about using a flag for the first special line Fruit.Store={, then splitting each of the following lines by "," and "=", and if the split line is not correct, matching it against the last two records. I want to use Ruby for this. Any advice? Thank you.
I am also thinking about using regular expression for the format, and wrote:
^"customer:\d{5},item:store\/\D*"=\(\d*,\d*\);
but I want to combine these two situations (with comment and without comment):
^"customer:\d{5},item:store\/\D*"=\(\d*,\d*\);$
^"customer:\d{5},item:store\/\D*"=\(\d*,\d*\);#.*$
How could I do it? Thanks.
Using regular expressions could be a good option since each line has a fixed format; and you almost got it, your regex just needed a few tweaks:
(?:#.*|^"customer-id:\d{5},item:store\/\D*" *= *\(\d*, *\d*\); *(?:#.*)?)$
This is what was added to your current regex:
Option to be a comment line (#.*) or (|) a regular line (everything after |).
Check for possible spaces before and after =, after the comma (,) that separates the digits in parenthesis, and at the end of the line.
Option to include another comment at the end of the line ((?:#.*)?).
So just compare each line against this regex to check for the right format.
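To apply the check line by line, the few lines that intentionally break the pattern (the opening Fruit.Store={, the closing };, and the two unique records) can be matched literally. A sketch of that logic, written in Python rather than Ruby for illustration; the contents of the special set are an assumption about which lines are exempt:

```python
import re

# The answer's pattern: a comment line, or a record line with optional
# spaces around '=' and an optional trailing comment.
record = re.compile(
    r'(?:#.*|"customer-id:\d{5},item:store/\D*" *= *\(\d*, *\d*\); *(?:#.*)?)$'
)

# Lines that intentionally break the pattern, matched literally (assumed set).
special = {
    'Fruit.Store={',
    '"customer-id:0000,item:store/" = (100, 100);',
    '"" = (0,0)',
    '};',
}

def valid_line(line):
    # A line is well-formed if it is one of the special lines
    # or fully matches the comment-or-record pattern.
    line = line.strip()
    return line in special or record.match(line) is not None
```

The same structure carries over directly to Ruby with Regexp#match and a Set of literal lines.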

Using awk or sed to print column of CSV file enclosed in double quotes

I'm working with a CSV file like the one below: comma-delimited, each cell enclosed in double quotes, but some cells contain a double quote and/or a comma inside the double-quote enclosure. The actual file contains around 300 columns and 200,000 rows.
"Column1","Column2","Column3","Column4","Column5","Column6","Column7"
"abc","abc","this, but with "comma" and a quote","18"" inch TV","abc","abc","abc"
"cde","cde","cde","some other, "cde" here","cde","cde","cde"
I'll need to remove some unneeded columns, merge the last few columns (with </br> between them instead of ","), and move the second column to the end. Anything within the cells should stay the same as in the original file, with double quotes and commas intact. Below is an example of the output that I need.
"Column1","Column4","Column5","Column2"
"abc","18"" inch TV","abc</br>abc</br>abc","abc"
"cde","some other, "cde" here","cde</br>cde</br>cde","cde"
In this example I want to remove column3 and merge column 5, 6, 7.
Below is the code that I tried to use, but it misreads double quotes and/or commas, so it detects the end of a field somewhere different from what I expected.
awk -vFPAT='([^,]*)|("[^"]+")' -vOFS=, '{print $1,$4,$5"</br>"$6"</br>"$7",$2}' inputfile.csv
sed -i 's#"</br>"#</br>#g' inputfile.csv
sed is used to remove beginning and ending double quote of a cell.
In the output file that I'm getting right now, if the previous field contains a double quote, that quote is treated as the beginning of a cell, so the following values are often pushed over by a column.
Other code that I have used considers every comma to be the beginning of a cell, so that doesn't work either.
awk -F',' 'BEGIN{OFS=",";} {print $1,$4,$5"</br>"$6"</br>"$7",$2}' inputfile.csv
sed -i 's#"</br>"#</br>#g' inputfile.csv
Any help is greatly appreciated. Thanks!
CSV is a loose format. There may be subtle variations in formatting. Your particular format may or may not be expressible with a regular grammar/regular expression. (See this question for a discussion about this.) Even if your particular formatting can be expressed with regular expressions, it may be easier to just whip out a parser from an existing library.
It is not a bash/awk/sed solution as you may have wanted or needed, but Python has a csv module for parsing CSV files. There are a number of options to tweak the formatting. Try something like this:
#!/usr/bin/python
import csv
with open('infile.csv', 'r') as infile, open('outfile.csv', 'wb') as outfile:
    inreader = csv.reader(infile)
    outwriter = csv.writer(outfile, quoting=csv.QUOTE_ALL)
    for row in inreader:
        # Merge fields 5,6,7 (indexes 4,5,6) into one
        row[4] = "</br>".join(row[4:7])
        del row[5:7]
        # Copy second field to the end
        row.append(row[1])
        # Remove second and third fields
        del row[1:3]
        # Write manipulated row
        outwriter.writerow(row)
Note that in Python, indexes start at 0 (e.g. row[1] is the second field). The first index of a slice is inclusive, the last is exclusive (row[1:3] is row[1] and row[2] only). Your formatting seems to require quotes around every field, hence the quoting=csv.QUOTE_ALL. There are more options at Dialects and Formatting Parameters.
The above code produces the following output:
"Column1","Column4","Column5</br>Column6</br>Column7","Column2"
"abc","18"" inch TV","abc</br>abc</br>abc","abc"
"cde","some other, cde"" here""","cde</br>cde</br>cde","cde"
There are two issues with this:
It doesn't treat the first row any differently, so the headers of columns 5, 6, and 7 are merged like the other rows.
Your input CSV contains "some other, "cde" here" (third row, fourth column) with unescaped quotes around the cde. There is another case of this on line two, but it disappeared from the output since it is in column 3, which is removed. The result contains incorrectly escaped quotes.
If these quotes are properly escaped, your sample input CSV file becomes
infile.csv (escaped quotes):
"Column1","Column2","Column3","Column4","Column5","Column6","Column7"
"abc","abc","this, but with ""comma"" and a quote","18"" inch TV","abc","abc","abc"
"cde","cde","cde","some other, ""cde"" here","cde","cde","cde"
Now consider this modified Python script that doesn't merge columns on the first row:
#!/usr/bin/python
import csv
with open('infile.csv', 'r') as infile, open('outfile.csv', 'wb') as outfile:
    inreader = csv.reader(infile)
    outwriter = csv.writer(outfile, quoting=csv.QUOTE_ALL)
    first_row = True
    for row in inreader:
        if first_row:
            first_row = False
        else:
            # Merge fields 5,6,7 (indexes 4,5,6) into one
            row[4] = "</br>".join(row[4:7])
        # Drop fields 6 and 7 on every row, so the header row
        # loses the merged columns' headers too
        del row[5:7]
        # Copy second field (index 1) to the end
        row.append(row[1])
        # Remove second and third fields
        del row[1:3]
        # Write manipulated row
        outwriter.writerow(row)
The output outfile.csv is
"Column1","Column4","Column5","Column2"
"abc","18"" inch TV","abc</br>abc</br>abc","abc"
"cde","some other, ""cde"" here","cde</br>cde</br>cde","cde"
This is your sample output, but with properly escaped "some other, ""cde"" here".
This may not be precisely what you wanted, not being a sed or awk solution, but I hope it is still useful. Processing more complicated formats may justify more complicated tools. Using an existing library also removes a few opportunities to make mistakes.
This might be an oversimplification of the problem but this has worked for me with your test data:
cat /tmp/inputfile.csv | sed 's#\"\,\"#|#g' | sed 's#"</br>"#</br>#g' | awk 'BEGIN {FS="|"} {print $1 "," $4 "," $5 "</br>" $6 "</br>" $7 "," $2}'
Please note that I am on a Mac; that's probably why I had to wrap the commas in the awk script in quotation marks.

Conditional search and replace in Shell

I have a string processing requirement where I want to take the line at line number n, edit it (replace #2 with #3), and then insert the edited string at line number n+1.
Here is what my input file looks like
Input File:-
x/a y/a z/a
x/a#2 y/a#2 z/a#2
x/b y/b z/b
x/b#2 y/b#2 z/b#2
Expected output is as below. Notice the third line with #3.
x/a y/a z/a
x/a#2 y/a#2 z/a#2
x/a#3 y/a#3 z/a#3
x/b y/b z/b
x/b#2 y/b#2 z/b#2
What I have tried:-
I have a basic understanding of sed, so I was able to search and replace a string using:
sed '/a#2/ s/a#2/a#3/' -i $file
However, I am not able to figure out how to insert the edited string as the next line after the one where it was picked up.
Any help will be appreciated.
TIA
You can simply print the line you want to edit, before you edit it:
sed '/a#2/{ p; s/a#2/a#3/g; }'
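The same keep-then-append-edited logic can be sketched in Python, operating on a list of lines (the function name is my own):

```python
def duplicate_and_edit(lines):
    # Mirror sed's '/a#2/{ p; s/a#2/a#3/g; }': every line is kept, and each
    # line containing 'a#2' is followed by a copy with 'a#2' replaced by 'a#3'.
    out = []
    for line in lines:
        out.append(line)
        if "a#2" in line:
            out.append(line.replace("a#2", "a#3"))
    return out
```

As with the sed command, only lines matching the a#2 pattern are duplicated; the b#2 line at the end of the sample passes through unchanged.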

Multiple sequence alignment. Convert multi-line format to single-line format?

I have a multiple sequence alignment file in which the lines from the different sequences are interspersed, as in the format outputed by clustal and other popular multiple sequence alignment tools. It looks like this:
TGFb3_human_used_for_docking ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPY
tr|B3KVH9|B3KVH9_HUMAN ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPY
tr|G3UBH9|G3UBH9_LOXAF ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPY
tr|G3WTJ4|G3WTJ4_SARHA ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPY
TGFb3_human_used_for_docking LRSADTTHST-
tr|B3KVH9|B3KVH9_HUMAN LRSADTTHST-
tr|G3UBH9|G3UBH9_LOXAF LRSTDTTHST-
tr|G3WTJ4|G3WTJ4_SARHA LRSADTTHST-
Each line begins with a sequence identifier, and then a sequence of characters (in this case describing the amino acid sequence of a protein). Each sequence is split into several lines, so you see that the first sequence (with ID TGFb3_human_used_for_docking) has two lines. I want to convert this to a format in which each sequence has a single line, like this:
TGFb3_human_used_for_docking ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPYLRSADTTHST-
tr|B3KVH9|B3KVH9_HUMAN ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPYLRSADTTHST-
tr|G3UBH9|G3UBH9_LOXAF ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPYLRSTDTTHST-
tr|G3WTJ4|G3WTJ4_SARHA ALDTNYCFRNLEENCCVRPLYIDFRQDLGWKWVHEPKGYYANFCSGPCPYLRSADTTHST-
(In this particular examples the sequences are almost identical, but in general they aren't!)
How can I convert from multi-line multiple sequence alignment format to single-line?
Looks like you need to write a script of some sort to achieve this. Here's a quick example I wrote in Python. It won't line the white-space up prettily like in your example (if you care about that, you'll have to mess around with formatting), but it gets the rest of the job done:
#Create a dictionary to accumulate full sequences
full_sequences = {}
#Loop through original file (replace test.txt with your file name)
#and add each line to the appropriate dictionary entry
with open("test.txt") as infile:
    for line in infile:
        line = [element.strip() for element in line.split()]
        if len(line) < 2:
            continue
        full_sequences[line[0]] = full_sequences.get(line[0], "") + line[1]
#Now loop through the dictionary and write each entry as a single line
outstr = ""
with open("test.txt", "w") as outfile:
    for seq in full_sequences:
        outstr += seq + "\t\t" + full_sequences[seq] + "\n"
    outfile.write(outstr)

Grep for displaying count of multiple strings in a single file

Another question: can I get the count of items that are unique? In my previous case I just took a simple instance. My business requirement is this: I have strings like the below:
happy=7
happy=5
happy=5
Basically I will be using a regex to search for the word happy; I would give something like "happy=*". I need the output as "count of happy = 2", as there is one duplicate instance.
Use awk:
awk '/happy/{ happy+=1 } /sad/ {sad += 1 }
END { print "happy =", happy+0, "sad = ", sad+0 }'
Note that like grep -c, this does not count occurrences of each word but the number of lines that match each word.
You're better off using something like perl or awk, where you can increment counters based on conditional statements.
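Since the awk line counts matching lines rather than distinct values, the "one duplicate instance" requirement needs one set of seen values per key. A small Python sketch of that idea (function name is my own):

```python
import re
from collections import defaultdict

def count_unique(lines):
    # Collect the distinct values seen for each key=value pair,
    # so a second 'happy=5' is not counted twice.
    values = defaultdict(set)
    for line in lines:
        m = re.match(r"(\w+)=(\w+)", line)
        if m:
            values[m.group(1)].add(m.group(2))
    return {key: len(vals) for key, vals in values.items()}
```

For the sample input this returns a count of 2 for happy, matching the requested "count of happy = 2". In a shell pipeline the same effect can be had by de-duplicating before counting, e.g. with sort -u.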