What is the range of a VHDL integer definition in VHDL-2019?

While using the VHDL-2019 IEEE spec, section 5.2.3.1 General:
"However, an implementation shall allow the declaration of any integer
type whose range is wholly contained within the bounds –(2**63) and
(2**63)–1 inclusive."
(I added the exponential **)
Does this mean –(2**63) = -9223372036854775808 ?
In the 1993 spec it states -((2**31) - 1) and ((2**31) - 1)
-2147483647 & 2147483647
Does the new VHDL spec have an error in that definition?
Ken

The change is quite intentional. See LCS2016_026c. Note that this gives the same range as a 64-bit integer in programming languages. The non-symmetrical range comes from two's complement numbers, which are the basis of integer types in VHDL tool implementations; the age of big iron with decimal-based ALUs has long faded.
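To answer the arithmetic part of the question directly: yes, -(2**63) is -9223372036854775808. A quick check (Python here purely for its arbitrary-precision integers, not VHDL code):

lo_2019 = -(2**63)       # -9223372036854775808, IEEE Std 1076-2019 lower bound
hi_2019 = 2**63 - 1      #  9223372036854775807, IEEE Std 1076-2019 upper bound
lo_1993 = -(2**31 - 1)   # -2147483647, IEEE Std 1076-1993 lower bound (symmetric)
hi_1993 = 2**31 - 1      #  2147483647, IEEE Std 1076-1993 upper bound
print(lo_2019, hi_2019, lo_1993, hi_1993)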
The previous symmetrical range was not an implementation concern: VHDL arithmetic semantics require run-time detection of overflow or underflow. This change allows simpler detection, based on a change of sign without testing values, while performing arithmetic in a universal integer wider than 64 bits.
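As a rough illustration of that idea (a sketch only, not how any particular simulator implements it): do the arithmetic in a wider integer type and flag any result that falls outside the representable range, which with a full two's-complement range reduces to a simple bounds check.

INT_MIN, INT_MAX = -(2**63), 2**63 - 1

def checked_add(a, b):
    # Python ints are unbounded, standing in for the wider "universal integer".
    r = a + b
    if r < INT_MIN or r > INT_MAX:
        raise OverflowError("result leaves the 64-bit integer range")
    return r

checked_add(INT_MAX, -1)    # fine
# checked_add(INT_MAX, 1)   # would raise OverflowError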
The value range increase is an attempt to force synthesis vendors to support more than the minimum range specified in previous editions of the standard. How well that works out (and over what implementation interval) will be a matter of history at some future date. There are also secondary effects based on index ranges (IEEE Std 1076-2019 5.3.2.2 Index constraints and discrete ranges) and positional correspondence for enumerated types (5.2.2 Enumerated types, 5.2.2.1 General). It's not practicable to simulate (or synthesize) composite objects with extreme index value ranges, starting with stack size issues. Industry practice isn't settled, and may yet result in today's HDLs being obsoleted.
Concerns about the accuracy of the standard's semantic description can be addressed to the IEEE-SA's VASG subcommittee, which encourages participation by interested parties. You will find Stack Overflow vhdl tag denizens here who have been involved in the standardization process.

Related

Should I use double data structure to store very large Integer values?

int types support a much smaller range of numbers than double. For example, I want to use an integer number with a large range. Should I use double for this purpose, or is there an alternative?
Is arithmetic slow in doubles?
Whether double arithmetic is slow as compared to integer arithmetic depends on the CPU and the bit size of the integer/double.
On modern hardware floating point arithmetic is generally not slow. Even though the general rule may be that integer arithmetic is typically a bit faster than floating point arithmetic, this is not always true. For instance, multiplication and division can even be significantly faster for floating point than for their integer counterparts (see this answer).
This may be different for embedded systems with no hardware support for floating point. Then double arithmetic will be extremely slow.
Regarding your original problem: you should note that a 64 bit long long int can store integers exactly over its whole range (up to 2^63), while double can store integers exactly only up to 2^53. double can hold larger values, but not every integer: they will get rounded.
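The 2^53 limit is easy to see directly (a small sketch; plain Python floats are IEEE 754 doubles on common platforms):

print(float(2**52) == float(2**52 + 1))   # False: still exact below 2**53
print(float(2**53) == float(2**53 + 1))   # True: 2**53 + 1 rounds back to 2**53
print(int(float(2**63 - 1)))              # 9223372036854775808: rounded past the int64 maximum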
The nice thing about floating point is that it is much more convenient to work with. You have special symbols for infinity (Inf) and for undefined (NaN). This makes division by zero, for instance, possible rather than an exception. One can also use NaN as a return value in case of error or abnormal conditions. With integers one often uses -1 or the like to indicate an error; this can propagate through calculations undetected, while NaN does not go undetected as it propagates.
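A small illustration of that propagation point (a sketch; float('nan') and math.isnan are standard Python):

import math

err = float('nan')                # error marker
print(err * 2 + 10)               # nan: the error stays visible through arithmetic
print(math.isnan(err * 2 + 10))   # True

sentinel = -1                     # integer error marker
print(sentinel * 2 + 10)          # 8: the error silently disappears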
Practical example: the programming language MATLAB has double as the default data type. It is used even in cases where integers are typical, e.g. array indexing. Even though MATLAB is an interpreted language and not as fast as a compiled language such as C or C++, it is quite fast and a powerful tool.
Bottom line: using double instead of integers will not be slow. It is perhaps not the most efficient choice, but the performance hit is not severe (at least not on modern desktop computer hardware).

64 bit integer and 64 bit float homogeneous representation

Assume we have some sequence as input. For performance reasons we may want to convert it to a homogeneous representation, and in order to do so we try to convert everything to the same type. Here let's consider only 2 types in the input - int64 and float64 (in my simple code I will use numpy and Python; that is not the point of this question - one may think only about 64-bit integers and 64-bit floats).
First we may try to cast everything to float64.
So we want an input like:
31 1.2 -1234
to be converted to float64. If everything were int64 we could leave it unchanged ("already homogeneous"), and if something else were found we would return "not homogeneous". Pretty straightforward.
But here is the problem. Consider a slightly modified input:
31000000 1.2 -1234
The idea is clear - we need to check that our "caster" is able to handle int64 values that are large in absolute value properly:
import numpy as np
format(np.float64(31000000), '.0f')  # just convert to float64 and print
'31000000'
Seems like no problem at all. So let's get to the point right away:
im = np.iinfo(np.int64).max          # maximum of int64 type
format(np.float64(im), '.0f')        # '9223372036854775808'
format(np.float64(im - 100), '.0f')  # '9223372036854775808' - same value!
Now this is really undesirable - we lose information that may be needed. I.e. we want to preserve all the information provided in the input sequence.
So our im and im-100 values cast to the same float64 representation. The reason for this is clear - float64 has only 53 significand bits out of 64 in total. That is why its precision is enough to represent log10(2^53) ~= 15.95, i.e. about 16 decimal digits of an int64 without any information loss. But the int64 type holds up to 19 digits.
So we end up with roughly the [10^16; 10^19] range (more precisely (2^53; int64.max]) in which an int64 may only be represented with information loss.
Q: What decision should one make in such a situation in order to represent int64 and float64 homogeneously?
I see several options for now:
Just convert the whole int64 range to float64 and "forget" about the possible information loss.
The motivation here is "the majority of inputs will rarely contain int64 values > 10^16".
EDIT: This clause was misleading. In the clear formulation we don't consider such solutions (but I left it here for completeness).
Do not make such automatic conversions at all; only convert if explicitly requested.
I.e. we accept the performance drawbacks, for any int-float arrays, even ones like the simplest 1st case.
Calculate a threshold for performing the conversion to float64 without possible information loss, and use it when making the casting decision. If an int64 above this threshold is found - do not convert (return "not homogeneous").
We've already calculated this threshold: it is log10(2^53) rounded, i.e. about 16 decimal digits (a value threshold of 2^53); see the sketch after this list.
Create a new type "fint64". This is an exotic decision, but I'm considering even this one for completeness.
The motivation here consists of 2 points. First: it is a frequent situation that a user wants to store int and float types together. Second: the structure of the float64 type. I don't quite understand why one would need a ~308-digit value range if the significand holds only ~16 of them and the other ~292 are essentially noise. So we might use one of the float64 exponent bits to indicate whether a float or an int is stored there. But for int64 it would definitely be a drawback to lose 1 bit, because it would halve our integer range. In exchange we would gain the possibility to freely store ints along with floats without any additional overhead.
EDIT: While my initial thinking of this was as an "exotic" decision, in fact it is just a variant of another alternative - a composite type for our representation (see the 5th clause). But I need to add that my 1st composition has a definite drawback - losing some range for both float64 and int64. What we would rather do is not subtract 1 bit but add one bit, which acts as a flag for whether an int or a float is stored in the following 64 bits.
As proposed by @Brendan, one may use a composite type consisting of "a combination of 2 or more primitive types". So using additional primitives we may cover our "problem" range for int64, for example, and get a homogeneous representation in this "new" type.
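A minimal sketch of the threshold idea from the 3rd option (assuming numpy as in the code above; the helper name to_homogeneous is mine, not an established API, and 2^53 is used as a deliberately conservative threshold):

import numpy as np

FLOAT64_EXACT_INT_LIMIT = 2**53  # every int64 up to here maps to a distinct float64

def to_homogeneous(values):
    # Return a float64 array if the cast is lossless, otherwise None ("not homogeneous").
    ints = [v for v in values if isinstance(v, (int, np.integer))]
    if any(abs(int(v)) > FLOAT64_EXACT_INT_LIMIT for v in ints):
        return None
    return np.asarray(values, dtype=np.float64)

print(to_homogeneous([31, 1.2, -1234]))                # lossless: cast to float64
print(to_homogeneous([np.iinfo(np.int64).max, 1.2]))   # None: casting would lose information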
EDITs:
Because a question arose here, I need to be very specific: the application in question does the following - it converts a sequence of int64 or float64 values to some homogeneous representation, losslessly if possible. The solutions are compared by performance (e.g. the total excess RAM needed for the representation). That is all. No other requirements are considered here (because we should consider the problem in its minimal state - not write a whole application). Correspondingly, an algorithm that represents our data in a homogeneous state losslessly (so we are sure we have not lost any information) fits into our app.
I've decided to remove the words "app" and "user" from the question - they were also misleading.
When choosing a data type there are 3 requirements:
if values may have different signs
needed precision
needed range
Of course hardware doesn't provide a lot of types to choose from, so you'll need to select the next largest provided type. For example, if you want to store values ranging from 0 to 500 with 8 bits of precision, then hardware won't provide anything like that and you will need to use either a 16-bit integer or 32-bit floating point.
When choosing a homogeneous representation there are 3 requirements:
if values may have different signs; determined from the requirements from all of the original types being represented
needed precision; determined from the requirements from all of the original types being represented
needed range; determined from the requirements from all of the original types being represented
For example, if you have integers from -10 to +10000000000 you need a 35 bit integer type that doesn't exist so you'll use a 64-bit integer, and if you need floating point values from -2 to +2 with 31 bits of precision then you'll need a 33 bit floating point type that doesn't exist so you'll use a 64-bit floating point type; and from the requirements of these two original types you'll know that a homogeneous representation will need a sign flag, a 33 bit significand (with an implied bit), and a 1-bit exponent; which doesn't exist so you'll use a 64-bit floating point type as the homogeneous representation.
However; if you don't know anything about the requirements of the original data types (and only know that whatever the requirements were they led to the selection of a 64-bit integer type and a 64-bit floating point type), then you'll have to assume "worst cases". This leads to needing a homogeneous representation that has a sign flag, 62 bits of precision (plus an implied 1 bit) and an 8 bit exponent. Of course this 71 bit floating point type doesn't exist, so you need to select the next largest type.
Also note that sometimes there is no "next largest type" that hardware supports. When this happens you need to resort to "composed types" - a combination of 2 or more primitive types. This can include anything up to and including "big rational numbers" (numbers represented by 3 big integers in "numerator / divisor * (1 << exponent)" form).
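A hedged sketch of such a "composed type" (the class name BigRational and its layout are illustrative only, and this sketch assumes a non-negative exponent):

from dataclasses import dataclass

@dataclass
class BigRational:
    # Value is numerator / divisor * (1 << exponent), each part an arbitrary-precision integer.
    numerator: int
    divisor: int
    exponent: int

    def as_float(self):
        # Lossy view for display; the exact value lives in the three integers.
        return self.numerator / self.divisor * (1 << self.exponent)

x = BigRational(numerator=3, divisor=4, exponent=10)  # 0.75 * 2**10
print(x.as_float())  # 768.0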
Of course if the original types (the 64-bit integer type and 64-bit floating point type) were primitive types and your homogeneous representation needs to use a "composed type"; then your "for performance reasons we may want to convert it in homogeneous representation" assumption is likely to be false (it's likely that, for performance reasons, you want to avoid using a homogeneous representation).
In other words:
If you don't know anything about the requirements of the original data types, it's likely that, for performance reasons, you want to avoid using a homogeneous representation.
Now...
Let's rephrase your question as "How to deal with design failures (choosing the wrong types which don't meet requirements)?". There is only one answer, and that is to avoid the design failure. Run-time checks (e.g. throwing an exception if the conversion to the homogeneous representation caused precision loss) serve no purpose other than to notify developers of design failures.
It is actually very basic: use 64-bit floating point. Floating point is an approximation, and you will lose precision for many ints. But there are no uncertainties other than "might this originally have been integral" and "does the original value deviate by more than 1.0".
I know of one non-standard floating point representation that would be more powerful (to be found on the net). That might (or might not) help cover the ints.
The only way to have an exact int mapping would be to reduce the int range, and guarantee (say) 60-bit ints to be exact, with the remaining range approximated by floating point. Floating point would have to be reduced too, either in exponent range as mentioned, or in precision (the mantissa).

Generate a Random number in Uppaal

My question is: can I generate a random number in Uppaal?
I would like to generate a number from a range of values. Even more, I would like to generate not just integers but double values as well.
for example: double [7.25,18.3]
I found this question that was talking about the same thing. I tried it.
However, I got this error: syntax error unexpected T_SELECT.
It doesn't work. I'm pretty new to the Uppaal world; I would appreciate any help you can provide.
Regards,
This is a common and misunderstood question in Uppaal.
Simple answer:
double val; // declaration
val = random(18.3-7.25)+7.25; // use in update, works in SMC (Uppaal v4.1)
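The same scaling trick, written outside Uppaal just to make the arithmetic explicit (a sketch; random.random() is Python, not Uppaal syntax):

import random

lo, hi = 7.25, 18.3
# random(MAX) in Uppaal SMC draws uniformly from [0, MAX); scaling and shifting
# the draw the same way yields a uniform value in [lo, hi).
val = random.random() * (hi - lo) + lo
assert lo <= val < hi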
Verbose answer:
Uppaal supports symbolic analysis as well as statistical, and the treatment and possibilities are radically different. So one has to decide first what kind of analysis is needed. Usually one starts with simple symbolic analysis and then augments with stochastic features; sometimes stochastic behavior also needs to be checked symbolically.
In symbolic analysis (queries A[], A<>, E<>, E[] etc.), random is synonymous with non-deterministic, i.e. if the model contains some "random" behavior, then verification should check all of it anyway. Therefore such behavior is modelled as non-deterministic choices between edges. It is easy to set up a set of edges over an integer range by using a select statement on the edge, where a temporary variable is declared and its value can be used in guards, synchronization and updates. Symbolic analysis supports only integer data types (no floating point types like double) and continuous ranges over clocks (specified by constraints in guards and invariants).
Statistical analysis (via Monte-Carlo simulations, queries like Pr[...](<> p), E[...](max: var), simulate, etc) supports double types and floating point functions like sin, cos, sqrt, random(MAX) (uniform distribution over [0, MAX)), random_normal(mean, dev) etc. in addition to int data types. Clock variables can also be treated as floating point type, except that their derivative is set to 1 by default (can be changed in the invariants which allow ODEs -- ordinary differential equations).
It is possible to create models with floating point operations (including random) and still apply symbolic analysis, provided that the floating point variables do not influence/constrain the model behavior and act merely as a cost function over the state space. Here are systematic rules to achieve this:
a) the clocks used in ODEs must be declared of hybrid clock type.
b) hybrid clock and double type variables cannot appear in guard and invariant constraints. Only ODEs are allowed over the hybrid clocks in the invariant.

Representation of negative integers

Does ISO-Prolog have any prescriptions / recommendations
regarding the representation of negative integers and operations on them? 2's complement, maybe?
Asking as a programmer/user: Are there any assumptions I can safely make when performing bit-level operations on negative integers?
ISO/IEC 13211-1 has several requirements for integers, but a concrete representation is not required. If the integer representation is bounded, one of the following conditions holds:
7.1.2 Integer
...
minint = -(maxint)
minint = -(maxint+1)
Further, the evaluable functors listed in 9.4 Bitwise functors, that is (>>)/2, (<<)/2, (/\)/2, (\/)/2, (\)/1, and xor/2, are implementation defined for negative values. E.g.,
9.4.1 (>>)/2 – bitwise right shift
9.4.1.1 Description
...
The value shall be implementation defined depending on whether the shift is logical (fill with zeros) or arithmetic (fill with a copy of the sign bit). The value shall be implementation defined if VS is negative, or VS is larger than the bit size of an integer.
Note that implementation defined means that a conforming processor has to document this in the accompanying documentation. So before using a conforming processor, you have to read the manual.
De facto, every current Prolog processor (that I am aware of) provides an arithmetic right shift and uses 2's complement.
Strictly speaking these are two different questions:
Actual physical representation: this isn't visible at the Prolog level, and therefore the standard quite rightly has nothing to say about it. Note that many Prolog systems have two or more internal representations (e.g. two's complement fixed size and sign+magnitude bignums) but present a single integer type to the programmer.
Results of bitwise operations: while the standard defines these operations, it leaves much of their behaviour implementation defined. This is a consequence of (a) not having a way to specify the width of a bit pattern, and (b) not committing to a specific mapping between negative numbers and bit patterns.
This not only means that all bitwise operations on negative numbers are officially not portable, but also has the curious effect that the result of bitwise negation is totally implementation-defined (even for positive arguments): Y is \1 could legally give -2, 268435454, 2147483646, 9223372036854775806, etc. All you know is that negating twice returns the original number.
In practice, fortunately, there seems to be a consensus towards "The bitwise arithmetic operations behave as if operating on an unlimited length two's complement representation".
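Python's unbounded integers happen to follow exactly that convention, so they make a convenient illustration of what "an unlimited length two's complement representation" means for the bitwise operations (illustration only, not a statement about any particular Prolog system):

print(~1)         # -2: negation behaves as if there were infinitely many sign bits
print(~~1)        # 1: negating twice returns the original number
print(-1 >> 10)   # -1: arithmetic shift replicates the sign bit
print(-8 & 0xFF)  # 248: masking exposes the low bits of the two's complement pattern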

Why use 1 instead of -1?

At the 29 min mark of http://channel9.msdn.com/Events/GoingNative/2013/Writing-Quick-Code-in-Cpp-Quickly Andrei Alexandrescu says, when using constants, to prefer 0, and mentions that hardware knows how to handle it. I have done some assembly and I know what he is talking about, and about the zero flag on CPUs.
Then he says to prefer the constant 1 rather than -1. IIRC -1 is not actually special, but because it is negative the sign flag on CPUs would be set. 1, from my current understanding, is simply a positive number: there is no processor flag for it and no way to distinguish it from 0 or other positive numbers.
But Andrei says to prefer 1 over -1. Why? What does hardware do with 1 that is better than -1?
First, it should be noted that Andrei Alexandrescu emphasized the difference between zero and the other two good constants; the difference between using one and negative one is less significant. He also bundles compiler issues with hardware issues, i.e., the hardware might be able to perform the operation efficiently but the compiler will not generate the appropriate machine code given a reasonably clear expression in the chosen higher level language.
While I cannot read his mind, there are at least two aspects that may make one better than negative one.
Many ISAs provide comparison operations (or flag to GPR transfers) that return zero or one (e.g., MIPS has Set on Less Than) not zero or negative one. (SIMD instructions are an exception; SIMD comparisons typically generate zero or negative one [all bits set].)
At least one implementation of SPARC made loading smaller signed values more expensive, and I seem to recall that at least one ISA did not provide an instruction for loading signed bytes. Naive implementation of sign extension adds latency because whether to set or clear the more significant bits is not known until the value has been loaded.
Negative one does have some benefits. As you mentioned, testing for negativity is often relatively easy, so if negative one is the only negative value used it may be handled less expensively. Also, conditionally clearing a value based on zero or negative one is simply an and operation. (For conditionally setting or clearing a single bit, one rather than negative one would be preferred since such would involve only a shift and an and.)
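A hedged illustration of both of those points (Python for brevity; on hardware each function body maps to a plain AND, or a shift plus an AND):

def conditional_clear(value, keep):
    # With an all-ones mask (-1) or an all-zeros mask (0), the select is a single AND.
    mask = -1 if keep else 0
    return value & mask

def extract_bit(value, n):
    # With the constant 1, isolating a single bit is just a shift and an AND.
    return (value >> n) & 1

print(conditional_clear(0b1011, True))   # 11 (kept)
print(conditional_clear(0b1011, False))  # 0 (cleared)
print(extract_bit(0b1011, 1))            # 1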
