Convert Enum to Binary (via Integer or something similar) - enums

I have an Ada enum with 2 values type Polarity is (Normal, Reversed), and I would like to convert them to 0, 1 (or True, False--as Boolean seems to implicitly play nice as binary) respectively, so I can store their values as specific bits in a byte. How can I accomplish this?

An easy way is a lookup table:
Bool_Polarity : constant array (Polarity) of Boolean :=
  (Normal => False, Reversed => True);
then use it as
B : Boolean := Bool_Polarity (P);
Of course there is nothing wrong with using the 'Pos attribute, but the LUT makes the mapping readable and very obvious.
As it is constant, you'd like to hope it optimises away during the constant folding stage, and it seems to: I have used similar tricks compiling for AVR with very acceptable executable sizes (down to 0.6k to independently drive 2 stepper motors)

RM 3.5.5, Operations of Discrete Types, includes the function S'Pos(Arg : S'Base), which "returns the position number of the value of Arg, as a value of type universal integer." Hence,
Polarity'Pos(Normal) = 0
Polarity'Pos(Reversed) = 1
You can change the numbering using 13.4 Enumeration Representation Clauses.
...and, of course:
Boolean'Val(Polarity'Pos(Normal)) = False
Boolean'Val(Polarity'Pos(Reversed)) = True
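As a small compilable sketch of the above (my own addition, not part of the original answer; the bit position is an arbitrary choice), the 'Pos value can be packed into a byte with Interfaces.Unsigned_8 and Shift_Left:
with Interfaces; use Interfaces;

procedure Pos_Demo is
   type Polarity is (Normal, Reversed);

   --  'Pos yields 0 for Normal and 1 for Reversed (a universal integer),
   --  which converts directly to an unsigned byte.
   function To_Bit (P : Polarity) return Unsigned_8 is
     (Unsigned_8 (Polarity'Pos (P)));

   Byte         : Unsigned_8 := 0;
   Bit_Position : constant Natural := 7;  --  arbitrary choice of bit
begin
   Byte := Byte or Shift_Left (To_Bit (Reversed), Bit_Position);
   pragma Assert (Byte = 2#1000_0000#);
end Pos_Demo;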

I think what you are looking for is a record type with a representation clause:
with Ada.Unchecked_Conversion;

procedure Main is

   type Byte_T is mod 2**8;
   for Byte_T'Size use 8;

   type Filler7_T is mod 2**7;
   for Filler7_T'Size use 7;

   type Polarity_T is (Normal, Reversed);
   for Polarity_T use (Normal => 0, Reversed => 1);
   for Polarity_T'Size use 1;

   type Byte_As_Record_T is record
      Filler   : Filler7_T;
      Polarity : Polarity_T;
   end record;

   for Byte_As_Record_T use record
      Filler   at 0 range 0 .. 6;
      Polarity at 0 range 7 .. 7;
   end record;
   for Byte_As_Record_T'Size use 8;

   function Convert is new Ada.Unchecked_Conversion
     (Source => Byte_As_Record_T,
      Target => Byte_T);

   function Convert is new Ada.Unchecked_Conversion
     (Source => Byte_T,
      Target => Byte_As_Record_T);

begin
   -- TBC
   null;
end Main;
As Byte_As_Record_T and Byte_T are the same size, you can use unchecked conversion to convert between the two types safely.
The representation clause for Byte_As_Record_T lets you specify which bits/bytes Polarity_T occupies (I chose the 8th bit).
My definition of Byte_T might not be what you want, but as long as it is 8 bits long the principle still works. From Byte_T you can also safely convert to Integer, Natural or Positive, and you can use the same technique to go directly to/from a 32-bit Integer and a 32-bit record type.
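As a rough usage sketch (my own addition, intended to stand in for the TBC body of Main above; the aggregate values are illustrative), the record can be built, converted to a byte and converted back:
declare
   R    : constant Byte_As_Record_T := (Filler => 0, Polarity => Reversed);
   B    : constant Byte_T           := Convert (R);  --  typically 2#1000_0000#, depending on bit order
   Back : constant Byte_As_Record_T := Convert (B);
begin
   pragma Assert (Back.Polarity = Reversed);
   null;
end;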

Two points here:
1) Enumerations are already stored as binary. Everything is. In particular, your enumeration, as defined above, will be stored as a 0 for Normal and a 1 for Reversed, unless you go out of your way to tell the compiler to use other values.
If you want to get that value out of the enumeration as an Integer rather than an enumeration value, you have two options: the 'Pos attribute returns the 0-based position of the value within the enumeration, and Unchecked_Conversion returns the actual value the computer stores for it. (There is no difference between the two unless an enumeration representation clause was used.)
2) Enumerations are nice, but don't reinvent Boolean. If your enumeration can only ever have two values, you don't gain anything useful by making a custom enumeration, and you lose a lot of useful properties that Boolean has. Booleans can be directly selected off of in loops and if checks. Booleans have and, or, xor, etc. defined for them. Booleans can be put into packed arrays, and then those same operators are defined bitwise across the whole array.
A particular pet peeve of mine is when people end up defining themselves a custom boolean with the logic reversed (so its true condition is 0). If you do this, the ghost of Ada Lovelace will come back from the grave and force you to listen to an exhaustive explanation of how to calculate Bernoulli sequences with a Difference Engine. Don't let this happen to you!
So if it would never make sense to have a third enumeration value, you just name objects something appropriate describing the True condition (eg: Reversed_Polarity : Boolean;), and go on your merry way.
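To illustrate the packed-array point above, here is a minimal sketch (my own, with assumed type names):
procedure Packed_Bool_Demo is
   type Bits_8 is array (0 .. 7) of Boolean;
   pragma Pack (Bits_8);  --  one bit per component

   A : constant Bits_8 := (0 => True, others => False);
   B : constant Bits_8 := (7 => True, others => False);
   C : Bits_8;
begin
   C := A or B;  --  "or" applies bitwise across the whole array
   pragma Assert (C (0) and C (7));
end Packed_Bool_Demo;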

It seems all I needed to do was pragma Pack ([type name]); (where [type name] is the record type containing the Polarity component) to compress the value down to a single bit.
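A hedged illustration of that accepted approach (Settings_T and its components are assumed names, not taken from the question):
procedure Pack_Demo is
   type Polarity is (Normal, Reversed);

   type Settings_T is record
      Polar   : Polarity;
      Enabled : Boolean;
   end record;
   pragma Pack (Settings_T);  --  each component squeezed down to a single bit

   S : Settings_T := (Polar => Reversed, Enabled => False);
begin
   pragma Assert (Settings_T'Size <= 8);
   S.Polar := Normal;
end Pack_Demo;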


Julia type instability: Array of LinearInterpolations

I am trying to improve the performance of my code by removing any sources of type instability.
For example, I have several instances of Array{Any} declarations, which I know generally destroy performance. Here is a minimal example (greatly simplified compared to my code) of a 2D Array of LinearInterpolation objects, i.e
using Interpolations

n, m = 5, 5
abstract_arr = Array{Any}(undef, n+1, m+1)
arr_x = LinRange(1, 10, 100)
for l in 1:n
    for alpha in 1:m
        abstract_arr[l, alpha] = LinearInterpolation(arr_x, alpha .* arr_x .^ n)
    end
end
so that typeof(abstract_arr) gives Array{Any,2}.
How can I initialize abstract_arr to avoid using Array{Any} here?
And how can I do this in general for Arrays whose entries are structures like Dicts() where the Dicts() are dictionaries of 2-tuples of Float64?
If you make a comprehension, the type will be figured out for you:
arr = [LinearInterpolation(arr_x, alpha .* arr_x .^ n) for l in 1:n, alpha in 1:m]
isconcretetype(eltype(arr)) # true
When it can predict the type & length, it will make the right array the first time. When it cannot, it will widen or extend it as necessary. So probably some of these will be Vector{Int}, and some Vector{Union{Nothing, Int}}:
[rand()>0.8 ? nothing : 0 for i in 1:3]
[rand()>0.8 ? nothing : 0 for i in 1:3]
[rand()>0.8 ? nothing : 0 for i in 1:10]
The main trick is that you just need to know the type of the object that is returned by LinearInterpolation, and then you can specify that instead of Any when constructing the array. To determine that, let's look at the typeof one of these objects
julia> typeof(LinearInterpolation(arr_x,arr_x.^2))
Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, Vector{Float64}, BSpline{Linear{Throw{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Linear{Throw{OnGrid}}}, Tuple{LinRange{Float64}}}, BSpline{Linear{Throw{OnGrid}}}, Throw{Nothing}}
This gives a fairly complicated type, but we don't necessarily need to use the whole thing (though in some cases it might be more efficient to). So for instance, we can say
using Interpolations
n, m = 5, 5
abstract_arr = Array{Interpolations.Extrapolation}(undef, n+1, m+1)
arr_x = LinRange(1, 10, 100)
for l in 1:n
    for alpha in 1:m
        abstract_arr[l, alpha] = LinearInterpolation(arr_x, alpha .* arr_x .^ n)
    end
end
which gives us a result of type
julia> typeof(abstract_arr)
Matrix{Interpolations.Extrapolation} (alias for Array{Interpolations.Extrapolation, 2})
Since the return type of this LinearInterpolation does not seem to be of known size, and
julia> isbitstype(typeof(LinearInterpolation(arr_x,arr_x.^2)))
false
each assignment to this array will still trigger allocations, and consequently there actually may not be much or any performance gain from the added type stability when it comes to filling the array. Nonetheless, there may still be performance gains down the line when it comes to using values stored in this array (depending on what is subsequently done with them).
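If you would rather not spell out that long type by hand, one option (a sketch of my own, assuming Interpolations.jl is available) is to capture the concrete type from a sample object and use it as the element type:
using Interpolations

arr_x = LinRange(1, 10, 100)
sample = LinearInterpolation(arr_x, arr_x .^ 2)
T = typeof(sample)                        # the full concrete Extrapolation type

n, m = 5, 5
conc_arr = Array{T}(undef, n + 1, m + 1)  # concretely typed, unlike Array{Any}
for l in 1:n, alpha in 1:m
    conc_arr[l, alpha] = LinearInterpolation(arr_x, alpha .* arr_x .^ n)
end
isconcretetype(eltype(conc_arr))          # true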

How to Define a Constant Value of a User-defined Type in Go?

I am implementing a bit-vector in Go:
// A bit vector uses a slice of unsigned integer values or “words,”
// each bit of which represents an element of the set.
// The set contains i if the ith bit is set.
// The following program demonstrates a simple bit vector type with these methods.
type IntSet struct {
    words []uint64 // uint64 is important because we need control over number and value of bits
}
I have defined several methods (e.g. membership test, adding or removing elements, set operations like union, intersection etc.) on it which all have a pointer receiver. Here is one such method:
// Has returns true if the given integer is in the set, false otherwise
func (this *IntSet) Has(m int) bool {
    // details omitted for brevity
}
Now, I need to return an empty set that is a true constant, so that I can use the same constant every time I need to refer to an IntSet that contains no elements. One way is to return something like &IntSet{}, but I see two disadvantages:
Every time an empty set is to be returned, a new value needs to be allocated.
The returned value is not really constant since it can be modified by the callers.
How do you define a null set that does not have these limitations?
If you read https://golang.org/ref/spec#Constants you see that constants are limited to basic types. A struct or a slice or array will not work as a constant.
I think the best you can do is to make a function that returns a copy of an internal empty set. If callers modify the copy, that isn't something you can prevent.
Directly modifying it would be difficult for them anyway, since the words slice inside IntSet is lowercase and therefore unexported. If you added a field next to words, say mut bool, you could add an if-mut check to every method that changes the IntSet and return an error or panic when the set isn't mutable.
With that, you could keep users from modifying constant, non-mutable IntSet values.
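Here is a minimal sketch of that suggestion (EmptySet, Add and the sealed field are illustrative names of mine, not part of the original code):
package intset

// IntSet is the bit-vector from the question, extended with a "sealed" flag
// so a shared value can refuse further modification.
type IntSet struct {
    words  []uint64
    sealed bool
}

// EmptySet returns a fresh empty set; since it is a new value on each call,
// callers cannot corrupt a shared "constant".
func EmptySet() *IntSet {
    return &IntSet{}
}

// Add inserts m into the set, panicking if the set has been sealed.
func (s *IntSet) Add(m int) {
    if s.sealed {
        panic("intset: attempt to modify a sealed IntSet")
    }
    word, bit := m/64, uint(m%64)
    for word >= len(s.words) {
        s.words = append(s.words, 0)
    }
    s.words[word] |= 1 << bit
}

// Has reports whether the set contains m.
func (s *IntSet) Has(m int) bool {
    word, bit := m/64, uint(m%64)
    return word < len(s.words) && s.words[word]&(1<<bit) != 0
}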

Retrieving keys with maximum values in a Hashmap in java 8 [duplicate]

As of Java 1.5, you can pretty much interchange Integer with int in many situations.
However, I found a potential defect in my code that surprised me a bit.
The following code:
Integer cdiCt = ...;
Integer cdsCt = ...;
...
if (cdiCt != null && cdsCt != null && cdiCt != cdsCt)
mismatch = true;
appeared to be incorrectly setting mismatch when the values were equal, although I can't determine under what circumstances. I set a breakpoint in Eclipse and saw that the Integer values were both 137, and I inspected the boolean expression and it said it was false, but when I stepped over it, it was setting mismatch to true.
Changing the conditional to:
if (cdiCt != null && cdsCt != null && !cdiCt.equals(cdsCt))
fixed the problem.
Can anyone shed some light on why this happened? So far, I have only seen the behavior on my localhost on my own PC. In this particular case, the code successfully made it past about 20 comparisons, but failed on 2. The problem was consistently reproducible.
If it is a prevalent problem, it should be causing errors on our other environments (dev and test), but so far, no one has reported the problem after hundreds of tests executing this code snippet.
Is it still not legitimate to use == to compare two Integer values?
In addition to all the fine answers below, the following stackoverflow link has quite a bit of additional information. It actually would have answered my original question, but because I didn't mention autoboxing in my question, it didn't show up in the selected suggestions:
Why can't the compiler/JVM just make autoboxing “just work”?
The JVM caches Integer values, so comparison with == is only guaranteed to work for numbers between -128 and 127.
Refer: #Immutable_Objects_.2F_Wrapper_Class_Caching
You can't compare two Integers with a simple ==: they're objects, so most of the time the references won't be the same.
There is a trick: for Integers between -128 and 127, the references will be the same, because autoboxing uses Integer.valueOf(), which caches small integers.
If the value p being boxed is true, false, a byte, a char in the range \u0000 to \u007f, or an int or short number between -128 and 127, then let r1 and r2 be the results of any two boxing conversions of p. It is always the case that r1 == r2.
Resources :
JLS - Boxing
On the same topic :
autoboxing vs manual boxing java
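A minimal demonstration of the caching behaviour described above (137 is just the value from the question; any value outside -128..127 behaves the same way on a default JVM):
public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;   // boxed via Integer.valueOf -> same cached object
        Integer c = 137, d = 137;   // outside the cache -> usually distinct objects
        System.out.println(a == b);       // true
        System.out.println(c == d);       // false on a default JVM
        System.out.println(c.equals(d));  // true: compares values, not references
    }
}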
"==" always compare the memory location or object references of the values. equals method always compare the values. But equals also indirectly uses the "==" operator to compare the values.
Integer maintains an internal cache for values from -128 to +127. If the == operator is used on two Integers whose value lies in that range, both references point to the same cached object, so it returns true; outside that range the boxed objects are generally distinct, so it returns false.
An Integer is a reference type, so comparing two Integers with == checks whether they point to the same object, not whether they hold the same value. Hence the issue you're seeing. The reason it works so well with plain int types is that the value contained by the Integer is unboxed and the primitives are compared.
May I add that if you're doing what you're doing, why have the if statement to begin with?
mismatch = ( cdiCt != null && cdsCt != null && !cdiCt.equals( cdsCt ) );
The issue is that your two Integer objects are just that, objects. They do not match because you are comparing your two object references, not the values within. Obviously .equals is overridden to provide a value comparison as opposed to an object reference comparison.
Besides these great answers, what I have learned is:
NEVER compare objects with == unless you intend to compare them by their references.
Also, to make == compare correctly, you can simply unbox one of the Integer values before doing the comparison, like:
if (firstInteger.intValue() == secondInteger) {..
The second operand will then be auto-unboxed (of course you have to check for nulls first).

Verilog, using enum with don't cares

Is it possible to use enum with don't cares? I've tried the following
typedef enum reg [31:0] {
    BLTZ = 32'b000001_?????_00000_????????????????,
    BGEZ = 32'b000001_?????_00001_????????????????,
    BEQ  = 32'b000100_?????_?????_????????????????,
    BNE  = 32'b000101_?????_?????_????????????????,
.
.
.
Then using the syntax given by doulos.com, I tried the following to see if I can get an "ADD" instruction to be displayed on the waveform viewer
op_mne_e op_mnemonic;
assign op_mnemonic = op_mne_e'(32'b000000_?????_?????_?????_?????_10000);
but what I see is
000000zzzzzzzzzzzzzzzzzzzz10000
Is it possible to have something similar to a casez for enum?
I have edited the tags on this question, because you are asking about SystemVerilog, not Verilog. What we call Verilog is now a subset of the SystemVerilog standard, IEEE 1800.
In SystemVerilog, enumeration types have an underlying base type. By default this type is int, which is a 2-state type (each bit can only take the values 0 or 1). You can specify other base types if you wish. Each member of the enumeration type is represented by a different value of the base type.
You have specified a 4-state, 32-bit base type: reg [31:0]*. Those 4 states are 0, 1, Z (or ?) and X. So, each member of the enumeration type is represented by a 4-state value, ie some combination of 0, 1, Z (or ?) and X. But, when you display the value with a "%b" format specifier, that's what you get: you get the underlying 4-state value (using Zs, not ?s).
http://www.edaplayground.com/x/3khr
In a casez statement, a Z or a ? represents a don't care. So, you can use such an enum with a 4-state base type in a casez statement if you wish:
casez (op_mnemonic)
    BLTZ : $display("BLTZ");
    BGEZ : $display("BGEZ");
    BEQ  : $display("BEQ");
    BNE  : $display("BNE");
endcase
but, as we're speaking SystemVerilog here, why not use case ... inside instead?
case (op_mnemonic) inside
    BLTZ : $display("BLTZ");
    BGEZ : $display("BGEZ");
    BEQ  : $display("BEQ");
    BNE  : $display("BNE");
endcase
http://www.edaplayground.com/x/4g3J
case ... inside is usually considered safer than the old casez, because it exhibits asymmetrical wildcard matching. In other words, unlike in a casez, in a case ... inside an X or Z (or ?) in the test expression (op_mnemonic in this case) does not act like a don't care (but does in the branch expression, of course).
*It would be more usual in SystemVerilog to specify logic [31:0], which is identical; logic is generally preferred over reg in SystemVerilog.
If you want the labels of your enum variable displayed in the waveform, you will need to set the radix to display it. Most tools default to displaying in binary. SystemVerilog has a number of operators that treat 'z' as a don't care (casez is one of them) so '?' is allowed as part of a numeric literal in place of a 'z'. However, that '?' gets immediately converted over to a 'z' and you will never see a '?' printed out.
If you are trying to assign a value to an enum and have it decode the instruction and pick a matching label, that won't work. You would need to loop through the enum values and use the wildcard equality operator ==? to find a match.
But if you are only doing this to get a label in the waveform, Modelsim/Questa has a radix define command that will decode the instruction for you.
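A rough sketch of that looping approach (the enum members come from the question; the instruction word is illustrative):
module tb;
  typedef enum logic [31:0] {
    BLTZ = 32'b000001_?????_00000_????????????????,
    BGEZ = 32'b000001_?????_00001_????????????????,
    BEQ  = 32'b000100_?????_?????_????????????????,
    BNE  = 32'b000101_?????_?????_????????????????
  } op_mne_e;

  logic [31:0] instr = 32'b000100_00001_00010_0000000000000100; // a BEQ-shaped word
  op_mne_e     m;

  initial begin
    m = m.first();
    repeat (m.num()) begin
      // ==? is the wildcard equality operator: ?/z bits in m act as don't cares
      if (instr ==? m)
        $display("matched %s", m.name());
      m = m.next();
    end
  end
endmodule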

Ada Enumerated Type Range

As I understand it, Ada uses 0-based indexes on its enumerated types. So in Status_Type below, the ordinal value goes from 0 to 5.
type Status_Type is
(Undefined,
Available,
Fout,
Assigned,
Effected,
Cleared);
My question is.. what are the ordinal values for the following examples? Do they start at 0 or do they start from the ordinal value from the super type?
subtype Sub_Status_Type is Status_Type
  range Available .. Effected;
subtype Un_Status_Type is Sub_Status_Type
  range Fout .. Assigned;
Would Sub_Status_Type ordinal values go from 1 to 4 or from 0 to 3?
Would Un_Status_Type ordinal values go from 3 to 4 or from 1 to 2 or from 0 to 1?
For the subtypes, 'Pos will return the same value as it would have for the base type (1 .. 4 and 2 .. 3 respectively, I believe). Subtypes aren't really new and different types so much as they are the same old type with some extra limitations on its possible values.
But it should be noted that these values are assigned behind the scenes. It really should make no difference to you what they are, unless you are using the 'Val and 'Pos attributes, or you are interfacing to code written outside of Ada (or to hardware).
Plus, if it does end up mattering, you should know that the situation is actually more complicated. 'Pos and 'Val don't return the actual bit value the compiler uses for those enumeration values when it generates code. They just return their "ordinal position": their offset from the first value.
By default they will usually be the same thing. However, you can change the value assignments (but not the ordinal position assignments) yourself with a for ... use clause, like in the code below:
for Status_Type use
  (Undefined => 1,
   Available => 2,
   Fout      => 4,
   Assigned  => 8,
   Effected  => 16,
   Cleared   => 32);
The position number is defined in terms of the base type. So Sub_Status_Type'Pos(Assigned) is the same as Status_Type'Pos(Assigned), and the position values of Sub_Status_Type go from 1 to 4, not 0 to 3.
(And note that the position number isn't affected by an enumeration representation clause; it always starts at 0 for the first value of the base type.)
Incidentally, it would have been easy enough to find out by running a small test program that prints the values of Sub_Status_Type'Pos(...) -- which would also have told you that you can't use the reserved word out as an identifier.
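For reference, a small test program of the sort suggested might look like this (my own sketch, using the question's declarations with Fout):
with Ada.Text_IO; use Ada.Text_IO;

procedure Pos_Test is
   type Status_Type is
     (Undefined, Available, Fout, Assigned, Effected, Cleared);
   subtype Sub_Status_Type is Status_Type range Available .. Effected;
begin
   for S in Sub_Status_Type loop
      Put_Line (Status_Type'Image (S) & " =>"
                & Integer'Image (Sub_Status_Type'Pos (S)));
   end loop;
   --  Prints positions 1 .. 4, the same values Status_Type'Pos would give.
end Pos_Test;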
As I understand it, Ada uses 0 based indexes on its enumerated types
Yes, it uses 0 for the indexes, or rather for the position of the values of the type. This is not the value of the enumeration literals, and not the binary representation of them.
what are the ordinal values for the following examples?
There are no "ordinal" values. The values of the type are the ones you specified. You are confusing "value", "representation", and "position" here.
The values of your Status_Type are Undefined, Available, Fout, Assigned, Effected, and Cleared.
The positions are 0, 1, 2, 3, 4, and 5. These are what you can use to translate with 'Pos and 'Val.
The representation defaults to the position, but you can freely assign other values (as long as you keep the correct order). The representation is what gets used if you write the value to a file, send it through a socket, or load it into a register.
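A sketch of that position-versus-representation distinction, using the question's type ('Enum_Rep is Ada 2022, long available as a GNAT extension):
with Ada.Text_IO; use Ada.Text_IO;

procedure Rep_Demo is
   type Status_Type is
     (Undefined, Available, Fout, Assigned, Effected, Cleared);
   for Status_Type use
     (Undefined => 1, Available => 2, Fout => 4,
      Assigned  => 8, Effected => 16, Cleared => 32);
begin
   Put_Line (Integer'Image (Status_Type'Pos (Assigned)));       --  3, the position
   Put_Line (Integer'Image (Status_Type'Enum_Rep (Assigned)));  --  8, the representation
end Rep_Demo;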
I think the best way to answer your questions is in reverse:
A subtype is, mathematically speaking, a continuous subset of its parent type. So, if the type SIZES is (1, 2, 3, 4, 5, 6, 7, 8) and you define a subtype MEDIUM as (4,5) the first element of MEDIUM is 4.
Example:
type Small_Natural is range 0 .. 16;
subtype Small_Positive is Small_Natural range
  Small_Natural'Succ (Small_Natural'First) .. Small_Natural'Last;
This defines two small sets of possible-values, which are tightly related: namely that Positive numbers are all the Natural Numbers save Zero.
I used this form to illustrate that with a few text-changes we have the following example:
type Device is (Not_Present, Power_Save, Read, Write);
subtype Device_State is Device range
  Device'Succ (Device'First) .. Device'Last;
And here we are modeling the intuitive notion that a device must be present to have a state, but note that the values in the subtype ARE [exactly] the values in the type from which they are derived.
This answers your second question: Yes, an element of an enumeration would have the same value that its parent-type would.
As to the first, I believe the starting position is actually implementation defined (if not, then I assume the LRM defaults it to 0). You are, however, free to override the representation and provide your own numbering, the only restriction being that elements earlier in the enumeration must be given smaller values than later ones [IIRC].
