awk - match positive whole numbers and floating point numbers only - bash

I have an input file in .csv format which contains tax invoice entries separated by pipes.
for example:
Header--TIN | NAME | INV NO | DATE | NET | TAX | OTHERS | TOTAL
Record1-29001234768 | A S Spares | AB012 | 23/07/2016 | 5600.25 | 200.70 | 10.05 | 5811.00
Record2-29450956221 | HONDA Spare Parts | HOSS0987 |29/09/2016 | 70000 | 2200 | 0 | 72200
The NET value, TAX value, OTHER charges and TOTAL value columns of each record may contain positive whole numbers or positive floating point numbers with 2-4 places after the decimal point.
Now my requirement is to check whether these columns meet the specified constraints, using an appropriate regular expression in awk.
I need to match these 4 columns with a regular expression such that if I encounter any value other than a positive whole number or a positive floating point number, I print an error message to the user.
I've tried the following, but it doesn't seem to work.
if (!($5 ~ /[0-9]+/) || !($5 ~ /[0-9]+[.][0-9]+/) || ($5 <= 0))
  { printf("NET VALUE (Violates constraints)") }
Can anyone give the proper working regular expression, or any implementation using a built-in function, to meet my requirements?

Sounds like your validation should be:
$5 ~ /^[0-9]+(\.[0-9]{2,4})?$/
If it matches that, then it's valid (either a positive whole number, or a whole number followed by . and 2 to 4 digits).
The anchors to the start and end of the field are important!
As rightly pointed out in the comments, if you want to accept numbers with no digits before the decimal point, then you will have to go for a more complex regular expression.
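For completeness, a minimal sketch of how that check might look across all four numeric columns, assuming the records are in a file called invoices.csv and that NET, TAX, OTHERS and TOTAL are fields 5-8 as in the sample:
awk -F'|' 'NR > 1 {
  for (i = 5; i <= 8; i++) {
    val = $i
    gsub(/^ +| +$/, "", val)                # trim the spaces around the pipes first
    if (val !~ /^[0-9]+(\.[0-9]{2,4})?$/)
      printf("Line %d, column %d violates constraints: %s\n", NR, i, val)
  }
}' invoices.csv
Trimming the stray spaces matters because the anchored pattern will not match a value with leading or trailing blanks; NR > 1 simply skips the header row, and very old awks may need interval-expression support for the {2,4} part.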

Related

PowerAutomate - replace nth occurrence of character

I'm attempting to parse an email body into an Excel file.
After some manipulations, my current output is an array, where each line is data related to a product.
[  
"Periods: 01.01.2023 - 01.02.2023 | Code: 111 | Code2: 1111 | product-name",  
"Periods: 01.01.2023 - 01.02.2023 | Code: 222 | Code2: 2222 | product-name2"
]
I need to replace the 3rd occurrence of " | " with " | Product: ", so I can get a Product field before the product name.
I've tried to use Apply to each -> current item -> various ways to find the 3rd occurrence and replace it, but I can't get it to work.
Any suggestions?
You should be able to loop through each item and perform a simple replace expression like this ...
replace(item(), split(item(), ' | ')[3], concat('Product: ', split(item(), ' | ')[3]))
That should get you across the line. Of course, I'm basing my answer on the limited information you provided.

Number of integers in a file using Command Line Interface

How to count number of integers in a file using egrep?
I tried to solve it as a pattern-finding problem. Actually, I am facing the problem of how to represent a continuous range of characters [0-9] with a space before the beginning and a space or dot after the end. I think the latter can be solved by using \< and \> respectively. Also, it should not include a dot in between, otherwise it will not be an integer. I am unable to convert this logic into a regular expression using the available tools and techniques.
My name is 2322.
33 is my sister.
I am blessed with a son named 55.
Why are you so 69. Is everything 33.
66.88 is not an integer
55whereareyou?
The right answer should be 5, i.e. for 2322, 33, 55, 69 and 33.
grep -Eo '(^| )([0-9]+[\.\?\=\:]?( |$))+' | wc -w
-E              extended regex
-o              extract only what is found
(^| )           starts with beginning of line or a space
[0-9]+          digits
[\.\?\=\:]?     optional dot, question mark, etc.
( |$)           ends with end of line or a space
(...)+          repeat 1 time or more (to detect integers like "123 456")
wc -w           count words
Note: 123. 123? 123: are also counted as integers
Test:
#!/bin/bash
exec 3<<EOF
My name is 2322.
33 is my sister.
I am blessed with a son named 55.
Why are you so 69. Is everything 33.
66.88 is not an integer
55whereareyou?
two integers 123 456.
how many tables in room 400? 50.
50? oh I thought it was 40.
23: It's late, 23:00 already
EOF
grep -Eo '(^| )([0-9]+[\.\?\=\:]?( |$))+' <&3 | \
tee >(sleep 0.5; echo -n "integer counted: "; wc -w; )
Outputs:
2322.
33
55.
69.
33.
123 456.
400? 50.
50?
40.
23:
integer counted: 12
Based on the observation that you want 66.88 excluded, I'm guessing
grep -Ec '(^| )[0-9]+\.?( |$)' file
which finds a run of digits preceded by the start of line or a space, optionally followed by a dot, followed by either a space or end of line.
The -c option says to report the number of lines which contain a match (so not strictly the number of matches, if there are lines which contain multiple matches) and the -E option enables extended regular expression syntax, i.e. what was traditionally called egrep (though the command name is now obsolescent).
If you need to count matches, the -o option prints each match on a separate line, which you can then pass to wc -l (or in lucky cases combine with grep -c, but check first; this doesn't work e.g. with GNU grep currently).
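If you do want the per-match count, a rough sketch of that variant (the file name file is an assumption):
grep -Eo '(^| )[0-9]+\.?( |$)' file | wc -l
On the six sample lines above this should print 5, matching the expected answer.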
On my Ubuntu box this code is working fine
grep -P '((^)|(\s+))[-+]?\d+\.?((\s+)|($))' test

CSV - How to add columns based on an existing column?

What is the best way to do this and how?
I gather things called sed, AWK and bash may be relevant.
I have used AWK once for one command, the others never.
I have searched and other apparently similar questions do not have an answer I need.
I have columns, which I have called fields, in a CSV file:
_________________________
field1 | field2 | field3|
-------------------------
1990AB | 123456 | 123456|
-------------------------
I want to add fields based on these three original fields to appear as follows:
_______________________________________________________
field1 | field2 | field3 | field1a | field2a | field3a |
-------------------------------------------------------
1990AB | 123456 | 123456| 1990 | 12345 | 12345 |
-------------------------------------------------------
where:
field1a = 1990: column 1 is always 4 digits followed by alpha characters; take the first 4
field2a = 12345: column 2 is always 6 digits; take the first 5
field3a = 12345: column 3 is always 6 digits; take the first 5
These are one-time-per-file actions, prior to database import.
I am on macOS and the file has about 6 million records. This is my 2nd attempt at this question as my first was apparently not good. In this area I am a 100% novice.
awk to the rescue!
this should be easy to read even if you have no prior experience with awk
$ awk -F, -v OFS=, 'NR==1 {for(i=1;i<=3;i++) $(++NF)=$i"a"}
NR>1 {$(++NF)=substr($1,1,4);
$(++NF)=substr($2,1,5);
$(++NF)=substr($3,1,5)}1' file
NR is line number, special treatment for header, NF is number of fields, here incrementing for each additional column and $i is field value at position i. The last 1 is shorthand for printing the line. Initial options are for setting input field delimiter (F) and output field delimiter (OFS) to comma.
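For reference, a quick sketch of the input and output on the sample row, assuming the real file is comma-separated (as the -F, option implies) rather than the pipe-drawn table in the question:
$ cat file
field1,field2,field3
1990AB,123456,123456
$ awk -F, -v OFS=, '...' file     # the one-liner above, abbreviated
field1,field2,field3,field1a,field2a,field3a
1990AB,123456,123456,1990,12345,12345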

Using a non-literal value in Apache Derby's OFFSET clause

Using Derby, is it possible to offset by a value from the query rather than an integer literal?
When I run this query, it complains about the value I've given to the offset clause.
select
    PRIZE."NAME" as "Prize Name",
    PRIZE."POSITION" as "Position",
    (select
         PARTICIPANT."NAME"
     from PARTICIPANT
     order by POINTS desc
     offset PRIZE."POSITION" rows fetch next 1 row only   <-- notice I'm trying to pass in a value to offset by
    ) as "Participant"
from PRIZE
With the expectation that the results would look like this:
| Prize Name   | Position | Participant   |
|--------------|----------|---------------|
| Gold medal   | 1        | Mari Loudi    |
| Silver medal | 2        | Keesha Vacc   |
| Bronze medal | 3        | Melba Hammit  |
| Hundredth    | 100      | James Thornby |
The documentation suggests that it's possible to pass in a value from Java code, but I'm trying to use a value from the query itself.
By the way, this is just an example schema to illustrate the point.
I know there are other ways to achieve the ranking, but I'm specifically interested if there's a way to pass values to the offset clause.

Regexp issue involving reverse polish calculator

I'm trying to use a regular expression to solve a reverse polish calculator problem, but I'm having issues with converting the mathematical expressions into conventional form.
I wrote:
puts '35 29 1 - 5 + *'.gsub(/(\d*) (\d*) (\W)/, '(\1\3\2)')
which prints:
35 (29-1)(+5) *
expected
(35*((29-1)+5))
but I'm getting a different result. What am I doing wrong?
I'm assuming you meant you tried
puts '35 29 1 - 5 + *'.gsub(/(\d*) (\d*) (\W)/, '(\1\3\2)')
Anyway, you have to use the quantifier + instead of *, since otherwise you will match an empty string for \d* as one of your captures, hence the (+5):
/(\d+) (\d+) (\W)/
I would further extend/constrain the expression to something like:
/([\d+*\/()-]+)\s+([\d+*\/()-]+)\s+([+*\/-])/
([\d+*\/()-]+)   arbitrary atom, e.g. "35", "(29-1)", "((29-1)+5)"
\s+              whitespace
([\d+*\/()-]+)   arbitrary atom, e.g. "35", "(29-1)", "((29-1)+5)"
\s+              whitespace
([+*\/-])        valid operators: +, -, * and /
...and instead of using gsub, use sub in a while loop that quits when it detects that no more substitutions can be made. This is very important because otherwise, you will violate the order of operations. For example, take a look at this Rubular demo. You can see that by using gsub, you might potentially replace the second triad of atoms, "5 + *", when really a second iteration should substitute an "earlier" triad after substituting the first triad!
WARNING: The - (minus) character must appear first or last in a character class, since otherwise it will specify a range! (Thanks to @JoshuaCheek.)
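To make the sub-in-a-loop idea concrete, here is a minimal Ruby sketch (the variable names are illustrative):
expr    = '35 29 1 - 5 + *'
pattern = /([\d+*\/()-]+)\s+([\d+*\/()-]+)\s+([+*\/-])/

loop do
  reduced = expr.sub(pattern, '(\1\3\2)')  # rewrite the leftmost "atom atom operator" triad
  break if reduced == expr                 # no substitution made, so we're done
  expr = reduced
end

puts expr   # => (35*((29-1)+5))
Because sub only rewrites the leftmost match on each pass, earlier triads are always folded before later ones, which is exactly what the order-of-operations point above requires.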
