Validating phone numbers with yup -- google-libphonenumber, yup-phone, regex validation

I have been trying to validate a phone number with yup.
I've tried a few regex examples
const phoneSchema = yup
  .string()
  .matches(phoneRegExp, "Phone number is not valid")
  .required();
flipflop
/^((\\+[1-9]{1,4}[ \\-]*)|(\\([0-9]{2,3}\\)[ \\-]*)|([0-9]{2,4})[ \\-]*)*?[0-9]{3,4}?[ \\-]*[0-9]{3,4}?$/
LGenzelis
/^((\+[1-9]{1,4}[ -]?)|(\([0-9]{2,3}\)[ -]?)|([0-9]{2,4})[ -]?)*?[0-9]{3,4}[ -]?[0-9]{3,4}$/
and I've tried two libraries, yup-phone and google-libphonenumber,
but I am unsure what the matrix of valid and invalid phone number formats should be. A QA flagged the initial validation because it allowed multiple + signs in front of the number (+++12345), yet the libraries consider this valid. One of the regexes seems to work well, but it then allows a + sign in the middle of the number (+234+32432).
I am also unsure how to validate the number with google-libphonenumber when it could be from any region -- the expected behaviour is that users could enter UK/international or US numbers into the system. Would it be a case of detecting the region of the entered number and solving validation that way?
// google-libphonenumber (the require/getInstance setup lines are added here for context;
// "value" is the phone string being validated)
const { PhoneNumberUtil } = require("google-libphonenumber");
const phoneUtil = PhoneNumberUtil.getInstance();

let res;
let region = "US";
try {
  const number = phoneUtil.parseAndKeepRawInput(value, region);
  res = phoneUtil.isValidNumber(number).toString();
} catch (e) {
  res = "false";
}
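On the region question: the region hint passed to parseAndKeepRawInput only matters when the number is typed in national format; a number entered with a leading + carries its own country code. So one option is to try the value against a short list of expected regions and accept it if any of them validates. This is only a sketch, assuming the same google-libphonenumber API as above (the helper name and region list are illustrative):

const { PhoneNumberUtil } = require("google-libphonenumber");
const phoneUtil = PhoneNumberUtil.getInstance();

// Regions the system expects national-format numbers from (illustrative).
const candidateRegions = ["GB", "US"];

function isValidForAnyRegion(value) {
  return candidateRegions.some((region) => {
    try {
      const number = phoneUtil.parseAndKeepRawInput(value, region);
      return phoneUtil.isValidNumber(number);
    } catch (e) {
      // A parse error just means "not parseable under this region hint".
      return false;
    }
  });
}

// isValidForAnyRegion("+61 1 2345 6789")  -> parsed via its +61 country code regardless of the hint
// isValidForAnyRegion("(012) 456 7890")   -> tried under each region hint in turn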
//google lib phone
https://codesandbox.io/s/google-phone-lib-demo-forked-k29kk8
//yup phone
https://codesandbox.io/s/yup-phone-validation-forked-ixgnjj
//regex by flipflop
https://codesandbox.io/s/regex-flipflop-phone-number-validation-forked-ggvel7
//regex by LGenzelis
https://codesandbox.io/s/regex-lgens-phone-number-validation-forked-szuy6o
This is my number matrix, but some of these seem to fall through the validation process (a test sketch follows the list):
//valid numbers
Standard Telephone numbers
+61 1 2345 6789
+61 01 2345 6789 (the zero is not required but entered by the user anyway)
01 2345 6789
01-2345-6789
(01) 2345 6789
(01) 2345-6789
1234 5678
1234-5678
12345678
Mobile Numbers
0123 456 789
0123456789
International Phone Numbers
US Format - +1 (012) 456 7890
US Virgin Islands (four digit international code) +1-340 123 4567
// invalid numbers
1234+5678
+++12345678
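To see exactly which entries fall through, it can help to run the whole matrix through the schema in one pass. A minimal sketch, assuming a yup string schema like the phoneSchema at the top of the question (the sample arrays just restate part of the matrix above):

const validSamples = [
  "+61 1 2345 6789", "01 2345 6789", "(01) 2345-6789", "12345678",
  "0123 456 789", "+1 (012) 456 7890", "+1-340 123 4567",
];
const invalidSamples = ["1234+5678", "+++12345678", "+234+32432"];

for (const value of validSamples) {
  if (!phoneSchema.isValidSync(value)) console.log("expected valid, failed:", value);
}
for (const value of invalidSamples) {
  if (phoneSchema.isValidSync(value)) console.log("expected invalid, passed:", value);
}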

If you want to implement this in a React front end, you can use the validation function from the react-phone-number-input npm package:
import { isValidPhoneNumber } from 'react-phone-number-input'
phone: Yup.string()
  .required("Phone number is required")
  .test("is-valid-phone", "Phone number is invalid", (value) => {
    return isValidPhoneNumber(value || '');
  }),
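A minimal usage sketch of that schema (the surrounding object shape and the sample value are illustrative; the sample is written in international format since no default country is supplied to isValidPhoneNumber here):

import * as Yup from "yup";
import { isValidPhoneNumber } from "react-phone-number-input";

const schema = Yup.object().shape({
  phone: Yup.string()
    .required("Phone number is required")
    .test("is-valid-phone", "Phone number is invalid", (value) =>
      isValidPhoneNumber(value || "")
    ),
});

schema
  .validate({ phone: "+61 1 2345 6789" })
  .then(() => console.log("valid"))
  .catch((err) => console.log(err.errors)); // logs "valid" or the error messages, depending on the number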

Related

Validation in elementor form

In an Elementor form I want to validate the phone field to accept ONLY 9-10 digits. With fewer than 9 digits it works fine, but with 11 digits and up it still sends the form. What do I have to add to limit it to 10 digits? Thanks. Here is my code:
// Elementor Form Telephone Number Validation
//======================================
add_action( 'elementor_pro/forms/validation/tel', function( $field, $record, $ajax_handler ) {
    // Match this format XXXXXXXXXX, 1234567890
    if ( preg_match( '/[0-9]{10}/', $field['value'] ) !== 1 ) {
        $ajax_handler->add_error( $field['id'], 'Enter a valid phone number with 9-10 digits, without a hyphen or space' );
    }
}, 10, 3 );
/[0-9]{10}/ matches if it contains 10 digits at any position, so "abc-0123456789.xzy" would be fine.
/^[0-9]{10}$/ matches if it is exactly 10 digits, nothing more and nothing less.
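The anchoring behaviour is the same in any regex flavour, so it is easy to check outside of PHP as well. A quick JavaScript sketch (and since the question asks for 9-10 digits, /^[0-9]{9,10}$/ bounds the length too):

const unanchored = /[0-9]{10}/;
const anchored = /^[0-9]{9,10}$/;

console.log(unanchored.test("abc-0123456789.xzy")); // true  -- 10 digits appear somewhere in the string
console.log(anchored.test("abc-0123456789.xzy"));   // false -- extra characters are rejected
console.log(anchored.test("012345678"));             // true  -- 9 digits
console.log(anchored.test("0123456789"));            // true  -- 10 digits
console.log(anchored.test("01234567890"));           // false -- 11 digits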

Doing BigDecimal accounting with Ruby. When can I convert to float?

I basically want to know the best way to convert my BigDecimal numbers to a readable format (float, perhaps) for the purpose of displaying them to the client.
In order to figure out liabilities, I do contributions - distributions for each address.
For example, if a person contributes 2 units to an address and that same address distributes 1 unit back to the person, then there is still a 1 unit liability. That's what's going on below. These numbers are all units of cryptocurrency.
Here's another example:
Say address1 and address2 contribute 2 coins each to address3, and address3 distributes 1.0 coins to address1 and 0.5 coins to address2, then address3 has a 1.0 coin liability to address1 and a 1.5 coin liability to address2.
So the actual data using BigDecimal below:
contributions =
{"1444"=>#<BigDecimal:7f915c08f030,'0.502569E2',18(36)>,
"alice"=>#<BigDecimal:7f915c084018,'0.211E1',18(27)>,
"address1"=>#<BigDecimal:7f915c0a4430,'0.87161E1',18(36)>,
"address2"=>#<BigDecimal:7f915c0943f0,'0.84811E1',18(36)>,
"address3"=>#<BigDecimal:7f915c0a43e0,'0.385E0',9(18)>,
"address6"=>#<BigDecimal:7f915c09ebe8,'0.1E1',9(18)>,
"address7"=>#<BigDecimal:7f915c09eb98,'0.1E1',9(18)>,
"address8"=>#<BigDecimal:7f915c09d428,'0.15E1',18(18)>,
"address9"=>#<BigDecimal:7f915c09d3d8,'0.15E1',18(18)>,
"address10"=>#<BigDecimal:7f915c0a7540,'0.132E1',18(36)>,
"address11"=>#<BigDecimal:7f915c0af8a8,'0.392E1',18(36)>,
"address12"=>#<BigDecimal:7f915c0a4980,'0.14E1',18(36)>,
"address13"=>#<BigDecimal:7f915c0af858,'0.2133333333 3333333333 33333334E1',36(54)>,
"address14"=>#<BigDecimal:7f915c0a54c0,'0.3533333333 3333333333 33333334E1',36(45)>,
"address15"=>#<BigDecimal:7f915c0a66e0,'0.1533333333 3333333333 33333334E1',36(36)>,
"sdfds"=>#<BigDecimal:7f915c0a6118,'0.1E0',9(27)>,
"sf"=>#<BigDecimal:7f915c0a6028,'0.1E0',9(27)>,
"address20"=>#<BigDecimal:7f915c0ae688,'0.3E0',9(18)>,
"address21"=>#<BigDecimal:7f915c0ae638,'0.3E0',9(18)>,
"address23"=>#<BigDecimal:7f915c0ae070,'0.1E0',9(27)>,
"address22"=>#<BigDecimal:7f915c0adf80,'0.1E0',9(27)>,
"add1"=>#<BigDecimal:7f915c0ad328,'0.1E0',9(18)>,
"add2"=>#<BigDecimal:7f915c0ad2d8,'0.1E0',9(18)>,
"addx"=>#<BigDecimal:7f915c0acd10,'0.1E0',9(27)>,
"addy"=>#<BigDecimal:7f915c0acc20,'0.1E0',9(27)>}
and the distributions:
distributions =
{"1444"=>#<BigDecimal:7f915a9068f0,'0.502569E2',18(63)>,
"alice"=>#<BigDecimal:7f915a8f44e8,'0.211E1',18(27)>,
"address1"=>#<BigDecimal:7f915a906800,'0.87161E1',18(54)>,
"address2"=>#<BigDecimal:7f915a906710,'0.84811E1',18(54)>,
"address3"=>#<BigDecimal:7f915a906620,'0.385E0',9(36)>,
"address6"=>#<BigDecimal:7f915a8fdea8,'0.1E1',9(27)>,
"address7"=>#<BigDecimal:7f915a8fddb8,'0.1E1',9(27)>,
"address8"=>#<BigDecimal:7f915a8fd5e8,'0.15E1',18(18)>,
"address9"=>#<BigDecimal:7f915a8fd4f8,'0.15E1',18(18)>,
"address10"=>#<BigDecimal:7f915a8fc9b8,'0.132E1',18(36)>,
"address11"=>#<BigDecimal:7f915a9071b0,'0.3920000000 0000003E1',27(45)>,
"address12"=>#<BigDecimal:7f915a907660,'0.1400000000 0000001E1',27(36)>,
"address13"=>#<BigDecimal:7f915a9070c0,'0.2133333333 3333337E1',27(45)>,
"address14"=>#<BigDecimal:7f915a906530,'0.3533333333 3333333333 33333334E1',36(54)>,
"address15"=>#<BigDecimal:7f915a8fc148,'0.1533333333 3333334E1',27(27)>,
"sdfds"=>#<BigDecimal:7f915a907f98,'0.1E0',9(27)>,
"sf"=>#<BigDecimal:7f915a907e08,'0.1E0',9(27)>,
"address20"=>#<BigDecimal:7f915a906ad0,'0.3000000000 0000003E0',18(27)>,
"address21"=>#<BigDecimal:7f915a9069e0,'0.3000000000 0000003E0',18(27)>,
"address23"=>#<BigDecimal:7f915a9063c8,'0.1E0',9(27)>,
"address22"=>#<BigDecimal:7f915a906238,'0.1E0',9(27)>,
"add1"=>#<BigDecimal:7f915a9060a8,'0.5E-1',9(27)>,
"add2"=>#<BigDecimal:7f915a905f18,'0.1E0',9(27)>}
Ideally, I want my liabilities to look like this:
{"add1"=>0.05,
"addx"=>0.1>,
"addy"=>0.1>}
But they look like this:
{"1444"=>0.0,
"alice"=>0.0,
"address1"=>0.0,
"address2"=>0.0,
"address3"=>0.0,
"address6"=>0.0,
"address7"=>0.0,
"address8"=>0.0,
"address9"=>0.0,
"address10"=>0.0,
"address11"=>-3.0e-16,
"address12"=>-1.0e-16,
"address13"=>-3.66666666666e-16,
"address14"=>0.0,
"address15"=>-6.6666666666e-17,
"sdfds"=>0.0,
"sf"=>0.0,
"address20"=>-3.0e-17,
"address21"=>-3.0e-17,
"address23"=>0.0,
"address22"=>0.0,
"add1"=>0.05,
"add2"=>0.0,
"addx"=>#<BigDecimal:7f915c0acd10,'0.1E0',9(27)>,
"addy"=>#<BigDecimal:7f915c0acc20,'0.1E0',9(27)>}
I don't want to include -3.66666666666e-16 because that's essentially 0, and even Ruby treats it that way: -3.66666666666e-16 > 0 returns false.
This is what I have... is there a better way? The code below calculates the liabilities by subtracting dis from con and only selecting the liabilities that are greater than 0.0. That makes sense to me, and it excludes one-time grants of coins (there must be a matching contribution for there to be a liability). Then I convert everything to floats so it's readable. Does this look right?
# con = contribution, dis = distribution for the same address
liab = @contributions.merge(@distributions) do |key, con, dis|
  con - dis
end.select { |addr, amount| amount > 0.0 && @contributions.keys.include?(addr) }

# convert the remaining BigDecimal amounts to floats for display
liab.merge(liab) do |k, old, new|
  new.to_f
end
I want the amount returned in float format, not the big decimal object. Is what I'm doing okay? Will I keep accuracy if I convert to float at the end?

U2 Universe update multivalue field error

I am using the UniVerse U2 .NET toolkit to update records in a UniVerse database. So far we have had no issue updating non-multivalue fields with the following code:
Open_Again:
Try
    db_connectionU2 = openConnU2()
    db_connectionU2.Open()
Catch ex As Exception
    GoTo Open_Again
End Try

Dim cmdWIP As New U2Command
'cmdWIP = New U2Command("DELETE FROM MPS", db_connectionU2)
cmdWIP = New U2Command("UPDATE POH SET EPOS=#FLAG where PONO='C11447'", db_connectionU2)
cmdWIP = New U2Command("UPDATE CURCVRD F8=#F8 where F0='51747*1'", db_connectionU2)
cmdWIP.Parameters.Add(New U2Parameter("#F8", U2Type.VarChar)).Value = "t"
cmdWIP.Connection = db_connectionU2
cmdWIP.ExecuteNonQuery()
cmdWIP.Dispose()
cmdWIP = Nothing
db_connectionU2.Close()
db_connectionU2.Dispose()
db_connectionU2 = Nothing
but it has a problem when we try to write to a multivalue field. It returns the error "Column being update from single to multi is illegal"; see the red box in the screenshot for the message and the value we are writing in.
Thank you
You need to look at the DICT of that file and make sure your entries are marked as MultiValued and have a Multi-Value Association.
Here is an example from the HS.SALES demo account.
>LIST DICT CUSTOMER
DICT CUSTOMER 03:56:47pm 01 Dec 2016 Page 1
Type &
Field......... Field. Field........ Conversion.. Column......... Output Depth &
Name.......... Number Definition... Code........ Heading........ Format Assoc..
CUSTID D 0 P(0N) Customer ID 10R S
#ID D 0 CUSTOMER 10L S
SAL D 1 Salutation 5T S
FNAME D 2 First Name 12T S
LNAME D 3 Last Name 16T S
COMPANY D 4 Company Name 20T S
ADDR1 D 5 Address line 1 30T S
ADDR2 D 6 Address line 2 30T S
CITY D 7 City 12T S
STATE D 8 P(2A) State 2L S
MCU
ZIP D 9 P(5N) Zip 5L S
PHONE D 10 P("("3N")"3N Telephone 13R S
-4N)
PRODID D 11 P(1A4N) Product 5L M ORDER
S
SER_NUM D 12 P(6N) Serial# 6L M ORDER
S
Notice how PRODID has "M ORDERS" after it (the S drops to the next line thanks to the 80-character width of my terminal). This tells UniVerse that it is a multivalued field with an association called ORDERS, which lets the SQL interpreter know how to update things.
It gets a bit more complicated, and I would recommend looking up HS.ADMIN and specifically HS.SCRIB for tips on formatting things for non-Pick-style consumption. Check the UVodbc guide for more info on that.

Showing bits in objects

I'm reading a book about C++. The author shows this enum:
[Flags] enum class FlagBits{ Ready = 1, ReadMode = 2, WriteMode = 4,
EOF = 8, Disabled = 16};
FlagBits status = FlagBits::Ready | FlagBits::ReadMode | FlagBits::EOF;
and he says that status equals '0000 0000 0000 0000 0000 0000 0000 1011', but when I write status to the console:
Console::WriteLine(L"Current status: {0}", status);
it shows 'Current status: Ready, ReadMode, EOF'. How does he know that, and how can I write status to the console to show its binary form?
You should look into System::Convert::ToString
int main(array<System::String ^> ^args)
{
    FlagBits status = FlagBits::Ready | FlagBits::ReadMode | FlagBits::EOF;
    Console::WriteLine(L"Current status: {0}", System::Convert::ToString( ( int ) status, 2 ) );
    Console::ReadLine();
    return 0;
}
Output: Current Status: 1011
Edit: if you want the leading-zero padding, just do:
Console::WriteLine(L"Current status: {0}", System::Convert::ToString( ( int ) status, 2 )->PadLeft( 32, '0' ) );
If you want it segmented into byte size pieces, then just split up the result and insert space / hyphens.
The first thing will be to cast the value to an integer. I'm not sure of the best way to do this in C++/CLI, but in C it would be (int)status.
C++ does not offer a way to display a value in binary, but it does allow hexadecimal. Here's the statement for that:
Console::WriteLine(L"Current status: {0:x}", (int)status);
The output should be 0000000b.
First, the author knows that because status is being OR'ed with the three enum values
FlagBits::Ready = 1 // Binary 0001
FlagBits::ReadMode = 2 // Binary 0010
FlagBits::EOF = 8 // Binary 1000
Just add these three values together and you'll get the 1011 the author talks about (you can truncate all leading zeroes). If you haven't come across bitwise operations before: the pipe | is used to do a bitwise OR operation on the values. You just add up all digits that are 1, like this:
0001
0010
+1000
-----
=1011
Second: like the previous poster, Mark Ransom, I don't actually know whether C# is capable of printing values in binary form the way the "oldschool" printf() function in C or std::cout in C++ can. My first thought would be to use the BitConverter class of .NET and write such a binary-print function myself.
Hope that helps.
EDIT: Found an example here using the BitConverter I mentioned. I didn't check it in detail, but on first looks it seems alright: http://www.eggheadcafe.com/software/aspnet/33292766/print-a-number-in-binary-format.aspx

Determining All Possibilities for a Random String?

I was hoping someone with better math capabilities would assist me in figuring out the total possibilities for a string given its length and character set.
i.e. [a-f0-9]{6}
What are the possibilities for this pattern of random characters?
It is equal to the number of characters in the set raised to the 6th power.
In Python (3.x) interpreter:
>>> len("0123456789abcdef")
16
>>> 16**6
16777216
>>>
EDIT 1:
Why 16.7 million? Well, 000000 ... 999999 = 10^6 = 1M, 16/10 = 1.6 and
>>> 1.6**6
16.77721600000000
EDIT 2:
To create a list in Python, do: print(['{0:06x}'.format(i) for i in range(16**6)])
However, this is too huge. Here is a simpler, shorter example:
>>> ['{0:06x}'.format(i) for i in range(100)]
['000000', '000001', '000002', '000003', '000004', '000005', '000006', '000007', '000008', '000009', '00000a', '00000b', '00000c', '00000d', '00000e', '00000f', '000010', '000011', '000012', '000013', '000014', '000015', '000016', '000017', '000018', '000019', '00001a', '00001b', '00001c', '00001d', '00001e', '00001f', '000020', '000021', '000022', '000023', '000024', '000025', '000026', '000027', '000028', '000029', '00002a', '00002b', '00002c', '00002d', '00002e', '00002f', '000030', '000031', '000032', '000033', '000034', '000035', '000036', '000037', '000038', '000039', '00003a', '00003b', '00003c', '00003d', '00003e', '00003f', '000040', '000041', '000042', '000043', '000044', '000045', '000046', '000047', '000048', '000049', '00004a', '00004b', '00004c', '00004d', '00004e', '00004f', '000050', '000051', '000052', '000053', '000054', '000055', '000056', '000057', '000058', '000059', '00005a', '00005b', '00005c', '00005d', '00005e', '00005f', '000060', '000061', '000062', '000063']
>>>
EDIT 3:
As a function:
def generateAllHex(numDigits):
    assert(numDigits > 0)
    ceiling = 16**numDigits
    for i in range(ceiling):
        formatStr = '{0:0' + str(numDigits) + 'x}'
        print(formatStr.format(i))
This will take a while to print at numDigits = 6.
I recommend dumping this to file instead like so:
def generateAllHex(numDigits, fileName):
    assert(numDigits > 0)
    ceiling = 16**numDigits
    formatStr = '{0:0' + str(numDigits) + 'x}'
    with open(fileName, 'w') as fout:
        for i in range(ceiling):
            fout.write(formatStr.format(i) + '\n')  # one hex string per line
If you are just looking for the number of possibilities, the answer is (charset.length)^(length). If you need to actually generate a list of the possibilities, just loop through each character, recursively generating the remainder of the string.
e.g.
void generate(char[] charset, int length)
{
    generate("", charset, length);
}

void generate(String prefix, char[] charset, int length)
{
    for (int i = 0; i < charset.length; i++)
    {
        if (length == 1)
            System.out.println(prefix + charset[i]);
        else
            generate(prefix + charset[i], charset, length - 1); // recurse with the character, not the loop index
    }
}
The number of possibilities is the size of your alphabet, to the power of the size of your string (in the general case, of course)
Assuming your string size is 4 (_ _ _ _) and your alphabet = { 0, 1 }: there are 2 possibilities (0 or 1) for the first place, the second place, and so on.
So it all comes out to: alphabet_size^string_size.
first: 000000
last: ffffff
These are just the 6-digit hexadecimal numbers, counted in order.
For any given set of possible values, the number of permutations is the number of possibilities raised to the power of the number of items.
In this case, that would be 16 to the 6th power, or 16777216 possibilities.
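The same arithmetic as a one-liner, for any character set and length (a sketch; the charset and length here mirror the [a-f0-9]{6} pattern in the question):

// Number of distinct strings of a given length over a given character set.
const charset = "0123456789abcdef";
const strLength = 6;
console.log(charset.length ** strLength); // 16777216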
