Upper and Lower Half of UInt16 in Enum?

I need to make an F# enum where the upper and lower bytes of the enum's value have meaning.
To keep the code readable and maintainable in the future, I want to write each half of the uint16 explicitly. The documentation refers to the number as the first 8 bits and the second 8 bits. Endianness is outside the scope of my question.
I have tried this, but it does not work.
type headers : uint16 =
    | firstHeader = (0 <<< 8) + (0)
    | secondHeader = (1 <<< 8) + (4)
    | thirdHeader = (2 <<< 8) + (10)
    | fourthHeader = (3 <<< 8) + (1)
The following works, but it obfuscates the number's meaning and introduces the possibility of calculating the composite number incorrectly.
type headers =
    | firstHeader = 0us
    | secondHeader = 260us
    | thirdHeader = 522us
    | fourthHeader = 769us
It does not need to be an enum, but I do need the same functionality (i.e. being able to refer to the value as headers.firstHeader and to get the underlying value with let x = uint16 headers.firstHeader).
Does anyone know how to accomplish something like this?

You can't do this with an enum in F# because enum values must be literals. However, you could simply declare these as values within a module instead:
[<RequireQualifiedAccess>]
module headers =
    let firstHeader = (0 <<< 8) + (0)
    let secondHeader = (1 <<< 8) + (4)
    let thirdHeader = (2 <<< 8) + (10)
    let fourthHeader = (3 <<< 8) + (1)
The RequireQualifiedAccess attribute forces uses of the values to explicitly refer to headers, just like an enum (e.g. headers.firstHeader).
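If the values need to have the type uint16 (as in the question) rather than the default int, one possible variant of the same idea, sketched here rather than taken from the original answer, is to use us-suffixed literals so the shift and addition stay in uint16 throughout:

[<RequireQualifiedAccess>]
module headers =
    // High byte shifted into place, low byte added; the us suffix keeps every operand a uint16.
    let firstHeader : uint16 = (0us <<< 8) + 0us
    let secondHeader : uint16 = (1us <<< 8) + 4us
    let thirdHeader : uint16 = (2us <<< 8) + 10us
    let fourthHeader : uint16 = (3us <<< 8) + 1us

// Usage then looks just like the enum version from the question:
let x = headers.secondHeader   // 260us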

Related

How to recode this string variable into a new variable?

I want to recode my variable Ucod in Stata, which has >100000 different observations, into 3-4 categories in the form of a new variable.
The problem is that I don't want to enter every value of Ucod in a recode command. For example, I want to use a condition like: if a value in Ucod starts with I (e.g., I234, I345, I587), recode it to CVD.
I have tried the strpos() function with different conditions but was unsuccessful.
Attaching a picture of my data and the variable Ucod.
You could just use gen and a series of replace commands:
gen ucod_category = 0 if ucod >= "I00" & ucod <= "I519"
replace ucod_category = 1 if ucod >= "I60" & ucod <= "I698"
Then label these categories as CVD, Stroke, etc. This should sort in the expected way for your ICD-10 codes with the decimal point omitted (e.g. "I519" < "I60").
However, it might be more convenient to convert ucod into a number (with the first digit 0 for A, 1 for B, etc.) so that you can recode it with labels in a single command:
gen ucod_numeric = (ascii(substr(ucod, 1, 1)) - 65) * 100 + real(substr(ucod, 2, .)) / cond(strlen(ucod) == 4, 10, 1)
recode ucod_numeric (800/851.9=0 "CVD") (860/869.8=1 "Stroke"), generate(ucod_category)
Again, this should sort in the expected order: I519 (which becomes 851.9) < I60 (860).
EDIT: since ascii() isn't working (possibly a Stata version issue), you can try something like this to turn the leading letter into a number:
gen ucod_letter_code = -1
forvalues i = 0/25 {
    replace ucod_letter_code = `i' if substr(ucod, 1, 1) == char(`i' + 65)
}
gen ucod_numeric = ucod_letter_code * 100 + real(substr(ucod, 2, .)) / cond(strlen(ucod) == 4, 10, 1)
recode ucod_numeric (800/851.9=0 "CVD") (860/869.8=1 "Stroke"), generate(ucod_category)

Finding the formula for an alphanumeric code

A script I am making scans a 5-character code and assigns it a number based on the characters in the code. The code is a randomly generated number/letter combination, for example 7D3B5 or HH42B, where any position can be any one of 36 (26 + 10) characters.
Now, the issue I am having is that I would like to work out the code's number, from 0 to 36^5 - 1, based on the code. For example:
00000 = 0
00001 = 1
00002 = 2
0000A = 10
0000B = 11
0000Z = 35
00010 = 36
00011 = 37
So on and so forth until the final possible code, which is:
ZZZZZ = 60466175 (36^5 - 1)
What I need is a formula that works out, say, G47DU in its number form, following the examples above.
Something like this?
// Map a single character to its base-36 digit value: '0'-'9' -> 0-9, 'A'-'Z' -> 10-35.
function getCount(s){
  if (!isNaN(s))
    return Number(s);
  return s.charCodeAt(0) - 55; // 'A' has char code 65, so 65 - 55 = 10
}
// Treat the string as a base-36 number, most significant character first.
function f(str){
  let result = 0;
  for (let i = 0; i < str.length; i++)
    result += Math.pow(36, str.length - i - 1) * getCount(str[i]);
  return result;
}
const strs = [
  '00000',
  '00001',
  '00002',
  '0000A',
  '0000B',
  '0000Z',
  '00010',
  '00011',
  'ZZZZZ'
];
for (const str of strs)
  console.log(str, f(str));
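Worth noting: JavaScript's parseInt already understands radix 36 (digits 0-9 followed by letters A-Z, case-insensitive), so the conversion can also be a one-liner:
console.log(parseInt('G47DU', 36)); // 27070050
console.log(parseInt('ZZZZZ', 36)); // 60466175, i.e. 36^5 - 1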
You are trying to create a base-36 numeral system. Since there are 5 'digits', each digit running from 0 to Z, the value can go from 0 to 36^5 - 1. (Comparing this with the hexadecimal system, in hexadecimal each 'digit' goes from 0 to F.) To convert this to decimal, you can use the same method used to convert from the hex or binary systems to the decimal system.
It will be something like d4 * (36 ^ 4) + d3 * (36 ^ 3) + d2 * (36 ^ 2) + d1 * (36 ^ 1) + d0 * (36 ^ 0)
Note: Here 36 is the total number of symbols.
d0, d1, d2, d3, d4 can range from 0 to 35 in decimal (Important: Not 0 to 36).
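As a concrete check against the code the question asks about, G47DU has digit values G=16, 4, 7, D=13, U=30, so:
16 * 36^4 + 4 * 36^3 + 7 * 36^2 + 13 * 36 + 30 = 26873856 + 186624 + 9072 + 468 + 30 = 27070050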
Also, you can extend this to any number of digits or symbols, and you could even implement operations like addition and subtraction directly in this system (it would be fun to implement). But it is easier to convert to decimal, do the operations, and convert back.

Why am I getting a negative integer after adding two positive 16-bit integers?

I am a newbie to Go; in fact, I am new to typed programming and only know JS.
While going through simple examples in Go tutorials, I found that adding a1 + a2 produces a negative integer value:
var a1 int16 = 127
var a2 int16 = 32767
var rr int16 = a1 + a2
fmt.Println(rr)
Result:
-32642
Expected:
The compiler throws an error because the sum exceeds the int16 maximum,
(or) Go automatically converts the int16 to int32, giving
32,894
Can you explain why it is showing -32642?
This is the result of Integer Overflow behaving as defined in the specification.
You don't see your expected results, because
Overflow happens at runtime, not compile time.
Go is statically typed.
32,894 is greater than the max value representable by an int16.
It’s very simple.
A signed 16-bit integer maps the positive values to 0 - 32767 (0x0000 - 0x7FFF) and the negative values to the range 0x8000 (-32768) through 0xFFFF (-1).
For example, 0 - 1 = -1, and it is stored as 0xFFFF.
Now in your specific case: 32767 + 127.
You overflow, because 32767 is the maximum value for a signed 16-bit integer; but if you carry out the addition anyway, 0x7FFF + 0x7F = 0x807E, and interpreting 0x807E as a signed 16-bit integer gives -32642.
You can read more here: Signed number representations
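To see both behaviours side by side, here is a small self-contained sketch (not from the original answer) that prints the wrapped int16 result and the value you get by widening to int32 before adding:

package main

import "fmt"

func main() {
    var a1 int16 = 127
    var a2 int16 = 32767

    // int16 arithmetic wraps around on overflow at runtime.
    wrapped := a1 + a2
    fmt.Println(wrapped) // -32642

    // Widening the operands first gives the mathematically expected sum.
    widened := int32(a1) + int32(a2)
    fmt.Println(widened) // 32894
}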
Additionally, check these constants from the math package:
const (
    MaxInt8   = 1<<7 - 1
    MinInt8   = -1 << 7
    MaxInt16  = 1<<15 - 1
    MinInt16  = -1 << 15
    MaxInt32  = 1<<31 - 1
    MinInt32  = -1 << 31
    MaxInt64  = 1<<63 - 1
    MinInt64  = -1 << 63
    MaxUint8  = 1<<8 - 1
    MaxUint16 = 1<<16 - 1
    MaxUint32 = 1<<32 - 1
    MaxUint64 = 1<<64 - 1
)
And check the human-readable version of these values here.
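For instance, one way to guard against the overflow up front (a sketch, not from the original answers; the addInt16Checked name is illustrative) is to do the arithmetic in a wider type and range-check against math.MaxInt16:

package main

import (
    "fmt"
    "math"
)

// addInt16Checked returns the sum and a flag reporting whether it fits in an int16.
func addInt16Checked(a, b int16) (int16, bool) {
    sum := int32(a) + int32(b) // cannot overflow in int32
    if sum > math.MaxInt16 || sum < math.MinInt16 {
        return 0, false
    }
    return int16(sum), true
}

func main() {
    if r, ok := addInt16Checked(127, 32767); ok {
        fmt.Println(r)
    } else {
        fmt.Println("overflow") // printed for 127 + 32767
    }
}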

Sort order of a Bitwise Enum

Is there an intelligent way to identify the order of a Bitwise Enum?
Take this enum for example:
[System.FlagsAttribute()]
internal enum AnswersSort : int
{
    None = 0,
    Bounty = 1,
    MarkedAnswer = 2,
    MostVotes = 4
}
If I combine these in different ways:
var result = AnswersSort.MarkedAnswer | AnswersSort.MostVotes | AnswersSort.Bounty;
result = AnswersSort.MostVotes | AnswersSort.Bounty | AnswersSort.MarkedAnswer;
Both results are 7 and the order is lost. Is there a way to do this without using an array or a list? Ideally I'm looking for a solution using an enum but I'm not sure how or if it's possible.
If you have 10 values, each one fits in 4 bits, so you could treat an ordered combination as a single 40-bit value, encoded with 4 bits per position. So, given your two examples:
var result = AnswersSort.MarkedAnswer | AnswersSort.MostVotes | AnswersSort.Bounty;
result = AnswersSort.MostVotes | AnswersSort.Bounty | AnswersSort.MarkedAnswer;
The first would be encoded as
0010 0100 0001
---- ---- ----
  |    |    |
  |    |    +-- Bounty       (0001)
  |    +------- MostVotes    (0100)
  +------------ MarkedAnswer (0010)
You could build that in a 64-bit integer:
long first = BuildValue(AnswersSort.MarkedAnswer, AnswersSort.MostVotes, AnswersSort.Bounty);

long BuildValue(params AnswersSort[] values)
{
    long result = 0;
    foreach (var val in values)
    {
        // Shift the existing contents up one nibble, then place the next value in the low nibble,
        // so the first argument ends up in the most significant position.
        result = result << 4;
        result |= (int)val;
    }
    return result;
}
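Going the other way, recovering the order from the packed value, is just a matter of peeling off 4 bits at a time. A possible sketch (the Decode name is illustrative, not from the original answer; it needs System.Collections.Generic and assumes None (0) is never packed):

List<AnswersSort> Decode(long packed)
{
    var items = new List<AnswersSort>();
    while (packed != 0)
    {
        // The low nibble holds the most recently encoded (i.e. last) value.
        items.Add((AnswersSort)(packed & 0xF));
        packed >>= 4;
    }
    items.Reverse(); // restore first-to-last order
    return items;
}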

Swift: convert a 2's complement number in String form into a Double

I have a sensor which generates 16-bit values in 2's complement, delivered as strings. I need to do some maths with these values, so ultimately I need to convert them to Doubles. For example, "f112" is a 2's complement value which should produce -3822.
I am very much a Swift beginner and feel there must be an easier way than this:
let value2 = UInt32("f112", radix: 16)
var value5: Int32
if value2! > 32767 {
    // handle -ve values
    value5 = Int32(bitPattern: value2! | 0xFFFF0000)
} else {
    // +ve
    value5 = Int32(bitPattern: value2!)
}
let doubleValue = Double(value5)
There are probably several solutions; this is one.
First it creates the UInt16 value and converts it to Int so the arithmetic is not confined to 16 bits.
Then it subtracts 0x10000 (65536) if the most significant bit is set.
let value2 = Int(UInt16("f112",radix:16)!)
let doubleValue = value2 > 0x7fff ? Double(value2 - 0x10000) : Double(value2)
or using the bitwise NOT operator (~)
let value2 = UInt16("f112",radix:16)!
let doubleValue = value2 > 0x7fff ? -Double(~value2 + 1) : Double(value2)
my solution is a little bit different from vadian's
let str = "f112"
// if you are sure about str (so force unwrapping is fine)
let d = Double(Int16(bitPattern: UInt16(str, radix: 16)!)) // -3822
// or a more 'safe' version, which returns 0 in case of an invalid parameter
let d0 = Double(Int16(bitPattern: UInt16(str, radix: 16) ?? 0)) // -3822
By the way:
Int16("f112", radix: 16) == nil // true!
This looked like a bug to me at first, but it is expected behaviour: 0xf112 is 61714, which is outside Int16's range (its maximum is 32767), so the failable Int16(_:radix:) initializer returns nil instead of reinterpreting the bit pattern. That is exactly why Int16(bitPattern:) on a UInt16 is needed.
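A minimal check that makes the distinction visible (illustrative only, not from either answer):

let raw = UInt16("f112", radix: 16)!   // 61714 fits in UInt16
print(Int16.max)                       // 32767, so 61714 cannot be parsed directly as Int16
print(Int16(bitPattern: raw))          // -3822, the same 16 bits reinterpreted as signed
print(Double(Int16(bitPattern: raw)))  // -3822.0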
