Showing bits in objects - enums

I'm reading a book about C++. The author shows this enum:
[Flags] enum class FlagBits { Ready = 1, ReadMode = 2, WriteMode = 4, EOF = 8, Disabled = 16 };
FlagBits status = FlagBits::Ready | FlagBits::ReadMode | FlagBits::EOF;
and he says that status equals '0000 0000 0000 0000 0000 0000 0000 1011', but when I write status to the console:
Console::WriteLine(L"Current status: {0}", status);
it shows: 'Current status: Ready, ReadMode, EOF'. How does he know that, and how can I write status to the console to show its binary form?

You should look into System::Convert::ToString:
int main(array<System::String ^> ^args)
{
    FlagBits status = FlagBits::Ready | FlagBits::ReadMode | FlagBits::EOF;
    Console::WriteLine(L"Current status: {0}", System::Convert::ToString( (int) status, 2 ));
    Console::ReadLine();
    return 0;
}
Output: Current status: 1011
Edit: if you want the leading-zero padding, just do:
Console::WriteLine(L"Current status: {0}", System::Convert::ToString( (int) status, 2 )->PadLeft( 32, L'0' ));
If you want it segmented into byte-sized pieces, just split up the result and insert spaces or hyphens.
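The padding-and-grouping step described above can be sketched as follows (Python used purely to illustrate the string manipulation; the variable names are mine):

```python
# Pad a flags value to 32 binary digits, then group into 8-bit bytes.
status = 0b1011  # Ready | ReadMode | EOF
bits = format(status, "b").rjust(32, "0")
grouped = " ".join(bits[i:i + 8] for i in range(0, 32, 8))
print(grouped)  # 00000000 00000000 00000000 00001011
```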

The first thing is to cast the value to an integer. I'm not sure of the best way to do this in C++/CLI, but in C it would be (int)status.
The format string does not offer a way to display a value in binary, but it does allow hexadecimal. Here's the statement for that:
Console::WriteLine(L"Current status: {0:x8}", (int)status);
The output should be 0000000b.

First, the author knows that because status is the result of OR'ing the three enum values:
FlagBits::Ready    = 1 // Binary 0001
FlagBits::ReadMode = 2 // Binary 0010
FlagBits::EOF      = 8 // Binary 1000
Just add these three values together and you'll get the 1011 the author talks about (you can truncate all leading zeroes). If you haven't come across bitwise operations before: the pipe | performs a bitwise OR operation on the values. Since no two of these flags share a bit, you can simply add up the digits that are 1, like this:
0001
0010
+1000
-----
=1011
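The same sum can be checked mechanically (a Python sketch; the flag names are mine):

```python
# Each flag occupies its own bit, so OR-ing them sets each bit independently.
READY, READ_MODE, EOF_FLAG = 1, 2, 8
status = READY | READ_MODE | EOF_FLAG
assert status == 0b1011  # == 11 decimal

# OR only behaves like addition because the flag bits don't overlap:
assert (READY | READY) == 1  # whereas READY + READY == 2
```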
Second: like the previous poster, Mark Ransom, I don't actually know whether .NET format strings can print values in binary form (for that matter, neither the "oldschool" printf() in C nor std::cout in C++ offer a binary conversion directly). My first thought would be to use the BitConverter class of .NET and write such a binary-print function myself.
Hope that helps.
EDIT: Found an example here using the BitConverter I mentioned. I didn't check it in detail, but at first glance it seems alright: http://www.eggheadcafe.com/software/aspnet/33292766/print-a-number-in-binary-format.aspx

Related

Validating phone numbers with yup -- google-libphonenumber, yup-phone, regex

I have been trying to validate a phone number with yup.
I've tried a few regex examples:
const phoneSchema = yup
  .string()
  .matches(phoneRegExp, "Phone number is not valid")
  .required();
flipflop
/^((\+[1-9]{1,4}[ \-]*)|(\([0-9]{2,3}\)[ \-]*)|([0-9]{2,4})[ \-]*)*?[0-9]{3,4}?[ \-]*[0-9]{3,4}?$/
LGenzelis
/^((\+[1-9]{1,4}[ -]?)|(\([0-9]{2,3}\)[ -]?)|([0-9]{2,4})[ -]?)*?[0-9]{3,4}[ -]?[0-9]{3,4}$/
and I've tried two libs, yup-phone and google-libphonenumber,
but I am unsure what the matrix of valid and invalid phone number formats should be. A QA flagged the initial validation because it allowed multiple + signs in front of the number (+++12345), yet the libs consider this valid. One of the regexes seems to work well, but it then allows another + sign in the middle of the number (+234+32432).
I am also concerned about how to validate the number with google-libphonenumber if it could be from any region; the expected behaviour is that users can enter UK, international, or US numbers into the system. Would it be a case of detecting the region of the entered number and solving validation that way?
let region = "US";
try {
  const number = phoneUtil.parseAndKeepRawInput(value, region);
  res = phoneUtil.isValidNumber(number).toString();
} catch (e) {
  res = "false";
}
//google lib phone
https://codesandbox.io/s/google-phone-lib-demo-forked-k29kk8
//yup phone
https://codesandbox.io/s/yup-phone-validation-forked-ixgnjj
//regex by flipflop
https://codesandbox.io/s/regex-flipflop-phone-number-validation-forked-ggvel7
//regex by LGenzelis
https://codesandbox.io/s/regex-lgens-phone-number-validation-forked-szuy6o
This is my number matrix, but some of these seem to fall through the validation process:
//valid numbers
Standard Telephone numbers
+61 1 2345 6789
+61 01 2345 6789 (the zero is not required but was entered by the user anyway)
01 2345 6789
01-2345-6789
(01) 2345 6789
(01) 2345-6789
1234 5678
1234-5678
12345678
Mobile Numbers
0123 456 789
0123456789
International Phone Numbers
US Format - +1 (012) 456 7890
US Virgin Islands (four digit international code) +1-340 123 4567
// invalid numbers
1234+5678
+++12345678
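To illustrate just the two QA complaints (a run of leading + signs, and a + in the middle of the number), here is a rough Python sketch of an anchored pattern. This is only an illustration of why anchoring plus a restricted character class rejects those two cases; it is not a substitute for google-libphonenumber and the pattern is my own, not from any of the libs above:

```python
import re

# At most one leading '+', then digits/spaces/hyphens/parentheses, ending in a digit.
pattern = re.compile(r"^\+?[\d(][\d ()\-]*\d$")

assert pattern.match("+61 1 2345 6789")
assert pattern.match("(01) 2345-6789")
assert pattern.match("12345678")
assert not pattern.match("+++12345678")  # repeated '+' signs rejected
assert not pattern.match("1234+5678")    # '+' not allowed mid-number
```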
If you want to implement this in a React front end, you can use the validation function from the react-phone-number-input npm package:
import { isValidPhoneNumber } from 'react-phone-number-input'
phone: Yup.string()
  .required("Phone number is required")
  .test("is-valid-phone", "Phone number is invalid", (value) => {
    return isValidPhoneNumber(value || '');
  }),

`bytes.fromhex` and `to_bytes` method in Raku?

I have a Python3 function that combines two bytes values: one is built with the bytes.fromhex() method, and the other with the to_bytes() method:
from datetime import datetime

def bytes_add() -> bytes:
    bytes_a = bytes.fromhex('6812')
    bytes_b = datetime.now().month.to_bytes(1, byteorder='little', signed=False)
    return bytes_a + bytes_b
Is it possible to write the same function in Raku? (If so, how do I control the byteorder and signed params?)
As for byteorder, say we convert the number 1024 to bytes in Python:
(1024).to_bytes(2, byteorder='little') # Output: b'\x00\x04', byte 00 is before byte 04
By contrast, converting the number 1024 to a Buf or Blob in Raku:
buf16.new(1024) # Output: Buf[uint16]:0x<0400>, byte 00 is after byte 04
Is there any way to get Buf[uint16]:0x<0004> in the above example in Raku?
Update:
Inspired by codesections, I tried to figure out a solution similar to codesections's answer:
sub bytes_add() {
    my $bytes_a = pack("H*", '6812');
    my $bytes_b = buf16.new(DateTime.now.month);
    $bytes_a ~ $bytes_b;
}
But I still don't know how to control the byteorder.
Is it possible to write a same function as above in Raku?
Yes. I'm not 100% sure I understand the overall goal of the function you provided, but a literal/line-by-line translation is certainly possible. If you would like to elaborate on the goal, it may also be possible to achieve the same goal in an easier/more idiomatic way.
Here's the line-by-line translation:
sub bytes-add(--> Blob) {
    my $bytes-a = Blob(<0x68 0x12>);      # bytes.fromhex('6812') means bytes 0x68, 0x12
    my $bytes-b = Blob(DateTime.now.month);
    Blob(|$bytes-a, |$bytes-b)
}
The output of bytes-add is printed by default using its hexadecimal representation (e.g. Blob:0x<68 12 09> for a September run). If you'd like to print it more like Python prints its byte literals, you can do so with bytes-add».chr.raku, which prints as ("h", "\x[12]", "\t").
if so, How to control byteorder?
Because the code above constructs the Blob from a List, you can simply .reverse the list to use the opposite order.
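For anyone cross-checking what that reversal should reproduce, the Python byteorder semantics from the question are explicit, and both int.to_bytes and struct expose them:

```python
import struct

# 1024 == 0x0400; the two byte orders swap the 0x04 and 0x00 bytes.
assert (1024).to_bytes(2, byteorder="little") == b"\x00\x04"
assert (1024).to_bytes(2, byteorder="big") == b"\x04\x00"

# struct gives the same control with '<' (little) and '>' (big) prefixes:
assert struct.pack("<H", 1024) == b"\x00\x04"
assert struct.pack(">H", 1024) == b"\x04\x00"
```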

Writing binary file using in Extendscript. Incorrect file size

Further to my question here, I'm writing a list of hex colours to a binary file from within Photoshop using ExtendScript. So far so good.
The only problem is that the binary file written with the code below is 119 bytes; when the same data is cut, pasted and saved using Sublime Text 3, it's only 48 bytes, which then causes complications later on.
This is my first time in binary land, so I may be a little lost. I suspect it's either an encoding issue (which could explain the roughly 2.5x file size), or that I'm doing something very wrong trying to recreate the file in a literal, character-for-character sense.*
// Initially, my data is an array of strings
var myArray = [
    "1a2b3c",
    "4d5e6f",
    "a10000",
    "700000",
    "d10101",
    "dc0202",
    "c30202",
    "de0b0b",
    "d91515",
    "f06060",
    "fbbaba",
    "ffeeee",
    "303030",
    "000000",
    "000000",
    "000000"
];
// I then separate them into four-character chunks
// in groups of 8
var data = "1a2b 3c4d 5e6f a100 0070 0000 d101 01dc\n" +
"0202 c302 02de 0b0b d915 15f0 6060 fbba\n" +
"baff eeee 3030 3000 0000 0000 0000 0000";
var afile = "D:\\temp\\bin.act"
var f = new File(afile);
f.encoding = "BINARY";
f.open ("w");
// f.write(data);
// amended code
for (var i = 0; i < data.length; i++)
{
    var bytes = String.fromCharCode(data.charCodeAt(i));
    f.write(bytes);
}
f.close();
alert("Written " + afile);
* ...or it's the tracking on my VHS.
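For what it's worth, the 119 bytes are exactly the characters of the hex text (3 lines of 39 characters plus 2 newlines), while decoding the hex pairs yields the intended 48 bytes (16 colours x 3 bytes each). A quick Python check of that arithmetic (Python used only to illustrate; the actual writing still happens in ExtendScript):

```python
data = ("1a2b 3c4d 5e6f a100 0070 0000 d101 01dc\n"
        "0202 c302 02de 0b0b d915 15f0 6060 fbba\n"
        "baff eeee 3030 3000 0000 0000 0000 0000")

# Writing the text itself emits one byte per character:
assert len(data) == 119

# Decoding each hex pair instead gives the intended raw bytes:
raw = bytes.fromhex("".join(data.split()))
assert len(raw) == 48
assert raw[:3] == b"\x1a\x2b\x3c"
```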
I'm rubbish at JavaScript, but I have hacked something together that will show you how to write 3 bytes of hex to a file in binary. I hope it is enough for you to work out how to do the rest!
I saved this file as /Users/mark/StackOverflow/AdobeJavascript.jsx
alert("Starting");
// Open binary file
var afile = "/Users/mark/StackOverflow/data.bin"
var f = new File(afile);
f.encoding = "BINARY";
f.open ("w");
// Define hex string
str = "1a2b3c"
for (offset = 0; offset < str.length; offset += 2) {
    i = parseInt(str.substring(offset, offset + 2), 16);
    f.write(String.fromCharCode(i));
}
f.close();
alert("Done");
If you dump the data.bin you'll see 3 bytes:
xxd data.bin
00000000: 1a2b 3c
You can write more of your values by simply changing the string to:
str = "1a2b3c"+ "4d5e6f"+ "a10000";
I also discovered how to run ExtendScript from a shell script in Terminal which is my "happy place" so I'll add that in here for my own reference:
#!/bin/bash
osascript << EOF
tell application "Adobe Photoshop CC 2019"
    do javascript "#include /Users/mark/StackOverflow/AdobeJavascript.jsx"
end tell
EOF
The corresponding reading part of this answer is here.

Why are my byte arrays not different even though print() says they are?

I am new to Python, so please forgive me if I'm asking a dumb question. In my function I generate a random byte array for a given number of bytes called "input_data"; then I inject some bit errors byte by byte and store the result in another byte array called "output_data". The print function shows that it works exactly as expected: the bytes are different. But if I compare the byte arrays afterwards, they seem to be identical!
import binascii
import random

def simulate_ber(packet_length, ber, verbose=False):
    # generate input data
    input_data = bytearray(random.getrandbits(8) for _ in xrange(packet_length))
    if (verbose):
        print(binascii.hexlify(input_data) + " <-- simulated input vector")
    output_data = input_data
    # add bit errors
    num_errors = 0
    for byte in range(len(input_data)):
        error_mask = 0
        for bit in range(0, 7, 1):
            if (random.uniform(0, 1) * 100 < ber):
                error_mask |= 1 << bit
                num_errors += 1
        output_data[byte] = input_data[byte] ^ error_mask
    if (verbose):
        print(binascii.hexlify(output_data) + " <-- output vector")
        print("number of simulated bit errors: " + str(num_errors))
    if (input_data == output_data):
        print("data identical")
number of packets: 1
bytes per packet: 16
simulated bit error rate: 5
start simulation...
0d3e896d61d50645e4e3fa648346091a <-- simulated input vector
0d3e896f61d51647e4e3fe648346001a <-- output vector
number of simulated bit errors: 6
data identical
Where is the bug? I am sure the problem is somewhere between my ears...
Thank you in advance for your help!
output_data = input_data
Python variables are references. When you do the above, both names refer to the same object in memory, e.g.:
>>> y=['Hello']
>>> x=y
>>> x.append('World!')
>>> x
['Hello', 'World!']
>>> y
['Hello', 'World!']
Construct output_data as a new bytearray instead and you should be good:
output_data = bytearray(input_data)
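A minimal sketch of the difference between aliasing and copying (the names here are mine):

```python
input_data = bytearray(b"\x0d\x3e\x89")

alias = input_data            # same object: changes show up under both names
copy = bytearray(input_data)  # independent copy with the same contents

copy[0] ^= 0x02               # flip a bit in the copy only
assert input_data == bytearray(b"\x0d\x3e\x89")  # original untouched
assert copy != input_data

alias[0] ^= 0x02              # flipping through the alias mutates the original
assert input_data[0] == 0x0f
assert alias is input_data    # still one and the same object
```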

Extracting plain text output from binary file

I am working with Graphchi's pagerank example: https://github.com/GraphChi/graphchi-cpp/wiki/Example-Apps#pagerank-easy
The example app writes a binary file with vertex information that I would like to read/convert to a plain text file (to later load into R or some other language).
The documentation states that:
"GraphChi will write the values of the edges in a binary file, which is easy to handle in other programs. Name of the file containing vertex values is GRAPH-NAME.4B.vout. Here "4B" refers to the vertex-value being a 4-byte type (float)."
The 'easy to handle' part is what I'm struggling with - I have experience with high level languages but not C++ or dealing with binary files. I have found a few things through searching stackoverflow but no luck yet in reading this file. Ideally this would be done through bash or python.
thanks very much for your help on this.
Update: hexdump graph-name.4B.vout | head -5 gives:
0000000 999a 3e19 7468 3e7f 7d2a 3e93 d8e0 3ec4
0000010 cec6 3fe4 d551 3f08 eff2 3e54 999a 3e19
0000020 999a 3e19 3690 3e8c 0080 3f38 9ea3 3ef5
0000030 b7d6 3f66 999a 3e19 10e3 3ee1 400c 400d
0000040 a3df 3e7c 999a 3e19 979c 3e91 5230 3f18
Here is example code showing how you can use GraphChi to write the output out as a string:
https://github.com/GraphChi/graphchi-cpp/wiki/Vertex-Aggregators
But the file is just a simple byte array. Here is an example of how to read it in Python (Python 2):
import struct
from array import array as binarray
import sys

inputfile = sys.argv[1]
data = open(inputfile).read()
a = binarray('c')
a.fromstring(data)
s = struct.Struct("f")
l = len(a)
print "%d bytes" % l
n = l / 4
for i in xrange(0, n):
    x = s.unpack_from(a, i * 4)[0]
    print("%d %f" % (i, x))
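In current Python 3, the same file can be read more directly with struct.iter_unpack (a sketch, assuming the floats are little-endian 4-byte values, as on x86; the function name is mine):

```python
import struct

def read_floats(path):
    """Read a file of packed 4-byte little-endian floats, one value per vertex."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [value for (value,) in struct.iter_unpack("<f", data)]
```

struct.iter_unpack raises an error if the file length is not a multiple of 4, which doubles as a sanity check that the file really holds 4-byte values.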
I was having the same trouble. Luckily I work with a bunch of network engineers who helped me out! On macOS or Linux, the following command prints the 4B.vout data one line per node, with the integer values matching those given in the summary file. If your file is called e.g. filename.4B.vout, then some command-line Perl gets you:
cat filename.4B.vout | LANG= perl -0777 -e '$,="\n"; print unpack("L*",<>),"";'
Edited to add: this is for the assignments of connected-component ID and community ID, written implicitly: the 1st line is the value for the node labeled 0, the 2nd line for the node labeled 1, etc. But I am copy-pasting here, so I'm not sure how it would need to change for floats. It works great for the integer values per node.
