Can an integer in VHDL have an X or Z value?

I was curious whether I need to initialize an integer in VHDL for it to get a value of zero by default.
Would an integer initialize to zero or to an X?

No. An integer is purely a numerical type: it has no binary representation, hence cannot have meta-values.
The default integer type range is implementation defined, but most current implementations use a nearly full 32-bit range, from -2^31+1 to 2^31-1.
All scalar types initialise by default to type'left, so a plain integer variable starts at its most negative value, not zero.

Related

What is the purpose of arbitrary precision constants in Go?

Go features untyped exact numeric constants with arbitrary size and precision. The spec requires all compilers to support integer constants of at least 256 bits, and floating-point constants of at least 272 bits (256 bits for the mantissa and 16 bits for the exponent). So compilers are required to faithfully and exactly represent expressions like this:
const (
    PI       = 3.1415926535897932384626433832795028841971
    Prime256 = 84028154888444252871881479176271707868370175636848156449781508641811196133203
)
This is interesting... and yet I cannot find any way to actually use any such constant that exceeds the maximum precision of the 64-bit concrete types int64, uint64, float64, complex128 (which is just a pair of float64 values). Even the standard library big-number types big.Int and big.Float cannot be initialized from large numeric constants; they must instead be deserialized from string constants or other expressions.
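For example (a minimal sketch using the standard math/big API), a 256-bit value like Prime256 above can only reach runtime by way of a string:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // A 256-bit value cannot live in any runtime integer type,
    // so it has to arrive as a string at runtime.
    p, ok := new(big.Int).SetString(
        "84028154888444252871881479176271707868370175636848156449781508641811196133203", 10)
    if !ok {
        panic("invalid integer literal")
    }
    fmt.Println(p.BitLen()) // 256
}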
The underlying mechanics are fairly obvious: the constants exist only at compile time and must be coerced to some value representable at runtime in order to be used at runtime. They are a language construct that exists only in code and during compilation. You cannot retrieve the raw value of a constant at runtime; it is not stored at some address in the compiled program itself.
So the question remains: Why does the language make such a point of supporting enormous constants when they cannot be used in practice?
TL;DR: Go's arbitrary-precision constants give you the possibility to work with "real" numbers rather than "boxed" numbers, so artifacts like overflow, underflow, and infinity corner cases are avoided. You can work at higher precision, and only the result has to be converted to a limited-precision type, mitigating the effect of intermediate errors.
From The Go Blog: Constants (emphasis mine, answering your question):
Numeric constants live in an arbitrary-precision numeric space; they are just regular numbers. But when they are assigned to a variable the value must be able to fit in the destination. We can declare a constant with a very large value:
const Huge = 1e1000
—that's just a number, after all—but we can't assign it or even print it. This statement won't even compile:
fmt.Println(Huge)
The error is, "constant 1.00000e+1000 overflows float64", which is true. But Huge might be useful: we can use it in expressions with other constants and use the value of those expressions if the result can be represented in the range of a float64. The statement,
fmt.Println(Huge / 1e999)
prints 10, as one would expect.
In a related way, floating-point constants may have very high precision, so that arithmetic involving them is more accurate. The constants defined in the math package are given with many more digits than are available in a float64. Here is the definition of math.Pi:
Pi = 3.14159265358979323846264338327950288419716939937510582097494459
When that value is assigned to a variable, some of the precision will be lost; the assignment will create the float64 (or float32) value closest to the high-precision value. This snippet
pi := math.Pi
fmt.Println(pi)
prints 3.141592653589793.
Having so many digits available means that calculations like Pi/2 or other more intricate evaluations can carry more precision until the result is assigned, making calculations involving constants easier to write without losing precision. It also means that there is no occasion in which the floating-point corner cases like infinities, soft underflows, and NaNs arise in constant expressions. (Division by a constant zero is a compile-time error, and when everything is a number there's no such thing as "not a number".)
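A small hedged illustration of that last point: a constant expression is evaluated exactly at compile time, and only its final result has to fit the destination type:

package main

import "fmt"

// huge is an untyped constant far beyond any machine integer; it is
// legal as long as it is never forced into a runtime type.
const huge = 1 << 100

func main() {
    var x int64 = huge >> 90 // the whole expression is folded at compile time
    fmt.Println(x)           // 1024
    // var y int64 = huge    // would not compile: constant overflows int64
}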
See related: How does Go perform arithmetic on constants?

Overflow in a random number generator and 4-byte vs. 8-byte integers

The famous linear congruential random number generator, also known as the minimal standard, uses the formula
x(i+1) = 16807 * x(i) mod (2^31 - 1)
I want to implement this in Fortran.
However, as pointed out by "Numerical Recipes", directly implementing the formula with the default integer type (32-bit) will cause 16807*x(i) to overflow.
So the book recommends Schrage's algorithm, which is based on an approximate factorization of m. This method can still be implemented with the default integer type.
However, Fortran actually has an integer(8) type, whose range of -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 is much bigger than 16807*x(i) could ever be.
Yet the book says:
It is not possible to implement equations (7.1.2) and (7.1.3) directly
in a high-level language, since the product of a and m − 1 exceeds the
maximum value for a 32-bit integer.
So why can't we just use Integer(8) type to implement the formula directly?
Whether or not you can have 8-byte integers depends on your compiler and your system. What's worse is that the actual value to pass as the kind to get a specific precision is not standardized. While most Fortran compilers I know use the number of bytes (so kind 8 means 64 bits), this is not guaranteed.
You can use the selected_int_kind function to get a kind of integer that has a certain range. This code compiles on my 64-bit computer and works fine:
program ran
    implicit none
    integer, parameter :: i8 = selected_int_kind(R=18)
    integer(kind=i8) :: x
    integer :: i
    x = 100
    do i = 1, 100
        x = my_rand(x)
        write(*, *) x
    end do
contains
    function my_rand(x)
        implicit none
        integer(kind=i8), intent(in) :: x
        integer(kind=i8) :: my_rand
        my_rand = mod(16807_i8 * x, 2_i8**31 - 1)
    end function my_rand
end program ran
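For comparison, here is a minimal sketch of the same direct formula in Go (a translation of the idea, not the Fortran above), where the roughly 46-bit product is computed in int64 and so cannot overflow:

package main

import "fmt"

const m int64 = 1<<31 - 1 // the Mersenne modulus 2^31 - 1

// next applies x -> 16807*x mod (2^31 - 1) directly: the product of
// 16807 and any x < 2^31 is below 2^46, so int64 cannot overflow.
func next(x int64) int64 {
    return (16807 * x) % m
}

func main() {
    x := int64(100)
    for i := 0; i < 5; i++ {
        x = next(x)
        fmt.Println(x)
    }
}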
Update and explanation of @VladimirF's comment below:
Modern Fortran provides an intrinsic module called iso_fortran_env that supplies constants referencing the standard variable types. In your case, one would use this:
program ran
    use, intrinsic :: iso_fortran_env, only: int64
    implicit none
    integer(kind=int64) :: x
and then continue as above. This code is easier to read than the old selected_int_kind. (Why did R have to be 18 again?)
Yes. The simplest thing is to append _8 to the integer constants to make them 8 bytes. I know it is "old style" Fortran, but it is portable and unambiguous.
By the way, when you write
16807*x mod (2^31-1)
note that masking, as in
iand(16807_8*x, Z'7FFFFFFF')
(an AND with all bits set to one except the sign bit), keeps only the low 31 bits, i.e. it computes the result mod 2^31, not mod (2^31 - 1). For the exact Mersenne modulus m = 2^31 - 1 the high bits must also be folded back in, e.g. x = iand(x, m) + ishft(x, -31) followed by one conditional subtraction of m, which is still cheaper than a general mod.
Update after comment:
the mask can also be written as
iand(16807_8*x, 2147483647_8)
if your compiler does not accept the hexadecimal BOZ constant in that context.
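For reference, here is a minimal sketch of the Schrage factorization mentioned in the question, written in Go; with the standard Park-Miller constants q = m/a = 127773 and r = m mod a = 2836, every intermediate value fits in 32 bits:

package main

import "fmt"

// Schrage's factorization writes m = a*q + r with q = m/a, r = m mod a.
// Every intermediate value stays below 2^31, so 32-bit arithmetic
// suffices and no overflow can occur.
const (
    m int32 = 2147483647 // 2^31 - 1
    a int32 = 16807
    q int32 = 127773 // m / a
    r int32 = 2836   // m mod a
)

func next(x int32) int32 {
    x = a*(x%q) - r*(x/q)
    if x < 0 {
        x += m
    }
    return x
}

func main() {
    x := int32(100)
    for i := 0; i < 5; i++ {
        x = next(x)
        fmt.Println(x)
    }
}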

Applying mod operator on large integer

I am generating large integers in my Fortran code which are used as seeds for the random number function I'm using. The problem is that I have several of those, and I noticed that sometimes the generated numbers are too big and simply come out as 2147483647, which, to my understanding, is the limit of a default 4-byte integer in Fortran.
I want to solve this by taking the mod of my number with that limit. How do I achieve this?
Most Fortran compilers provide 64-bit integers, which give at least 18 decimal digits. (Fortran 2008 requires that 64-bit integers be supported.) You can select these via:
integer, parameter :: VeryLongInt_K = selected_int_kind (18)
integer (kind=VeryLongInt_K) :: variable
or you can use the ISO_FORTRAN_ENV intrinsic module and the int64 kind constant to select 64-bit integers:
use, intrinsic :: ISO_FORTRAN_ENV
integer (int64) :: variable
Just use a larger integer kind.
integer, parameter :: ip = selected_int_kind(number_of_digits_you_need)
integer(ip) :: var !all the integer variables that need to be large
The integers will be able to hold larger values, and the mod function will generate results of the same kind.
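A minimal Go sketch of the same idea (the values here are arbitrary placeholders): do the arithmetic in a 64-bit type, then reduce with mod before the value has to fit a 32-bit consumer:

package main

import "fmt"

const limit = 2147483647 // the 4-byte integer limit, 2^31 - 1

func main() {
    // Compute the seed in 64-bit arithmetic so nothing saturates...
    big := int64(104729) * int64(1299709)
    // ...then fold it into 32-bit range with mod before narrowing.
    seed := int32(big % limit)
    fmt.Println(seed)
}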

How to convert fixed-point VHDL type back to float?

I am using the IEEE fixed-point package in VHDL.
It works well, but I am now facing a problem concerning the string representation of these types in a test bench: I would like to dump them to a text file.
I have found that it is indeed possible to directly write ufixed or sfixed using:
write(buf, to_string(x)); --where x is either sfixed or ufixed (and buf : line)
But then I get values like 11110001.10101 (for an sfixed q8.5 representation).
So my question: how do I convert these fixed-point numbers back to reals (and then to strings)?
The variable needs to be split into two std_logic_vector parts: the integer part can be converted to a string using standard conversions, but the fractional part needs a different conversion. For the integer part, loop dividing by 10 and convert each modulo remainder into an ASCII character, building the string up from the lowest digit to the highest. For the fractional part, also loop, but multiply by 10 and take the floor to isolate the next digit and its corresponding character, then subtract that digit from the fraction, and so on. This is the concept; I worked it out in MATLAB to test it and am making a VHDL version I will share soon. I was surprised not to find such a useful function anywhere. Of course the fixed-point format Q(N,M) can vary, with all sorts of values for N and M, whereas floating point is standardized.
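The digit-by-digit scheme described above translates directly into code; here is a minimal Go sketch of it (the Q(N,M) layout, function name, and example values are illustrative assumptions, not the promised VHDL version):

package main

import "fmt"

// fixedToString renders a fixed-point value as a decimal string.
// raw is the two's-complement mantissa and fracBits the number of
// fractional bits of a Q(N,M) value; digits is how many decimal
// places to emit.
func fixedToString(raw int64, fracBits uint, digits int) string {
    neg := raw < 0
    if neg {
        raw = -raw
    }
    intPart := raw >> fracBits
    frac := raw & (1<<fracBits - 1)

    // Integer part: divide by 10 repeatedly, collecting the modulo
    // remainders from the lowest digit up to the highest.
    s := ""
    if intPart == 0 {
        s = "0"
    }
    for intPart > 0 {
        s = string(rune('0'+intPart%10)) + s
        intPart /= 10
    }

    // Fractional part: multiply by 10; the integer part of the result
    // is the next digit. Subtract it out and repeat.
    f := "."
    for i := 0; i < digits; i++ {
        frac *= 10
        f += string(rune('0' + frac>>fracBits))
        frac &= 1<<fracBits - 1
    }
    if neg {
        s = "-" + s
    }
    return s + f
}

func main() {
    // 13 with 5 fractional bits is 13/32 = 0.40625.
    fmt.Println(fixedToString(13, 5, 5)) // prints 0.40625
}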

What are Go arrays indexed by?

I am writing some 'memory allocator' type code, using an array and indexes rather than pointers. I'm hoping that the size of an index into the array is smaller than a pointer, since I am storing 'pointers' as integer indexes into an array rather than as 64-bit machine pointers.
I can't see anything in the Go spec that says what an array is indexed by. Obviously it's some kind of integer. Passing very large values makes the runtime complain that I can't pass negative numbers, so I'm guessing that it's somehow cast to a signed integer. So is it an int32? I'm guessing it's not an int64, because I didn't touch the top bit (which would have made it a negative number in two's complement).
Arrays may be indexed by any integer type.
The Array types section of the Go Programming Language Specification says that in an array type definition,
The length is part of the array's type and must be a constant
expression that evaluates to a non-negative integer value.
In an index expression such as a[x]:
x must be an integer value and 0 <= x < len(a)
But there is a limitation on the magnitude of an index; the description of Length and capacity says:
The built-in functions len and cap take arguments of various types and
return a result of type int. The implementation guarantees that the
result always fits into an int.
So the declared size of an array, or the index in an index expression, can be of any integer type (int, uint, uintptr, int8, int16, int32, int64, uint8, uint16, uint32, uint64), but it must be non-negative and within the range of type int (which is the same size as either int32 or int64 -- though it's a distinct type from either).
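A quick hedged illustration of those rules:

package main

import "fmt"

func main() {
    a := [4]string{"a", "b", "c", "d"}

    // Any integer type works in an index expression.
    var i8 int8 = 2
    var u16 uint16 = 3
    fmt.Println(a[i8], a[u16]) // c d

    // Constant indexes on arrays are range-checked at compile time:
    // fmt.Println(a[-1]) // compile error: negative index
    // fmt.Println(a[4])  // compile error: index out of bounds
}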
It's a very interesting question indeed. I have not found any direct rules in the documentation either; instead I've found two great discussions in the Go discussion groups.
In the first one, among many things, I found an answer to why indexes are implemented as int rather than uint:
Algorithms can benefit from the ability to express negative offsets
and such. If indexes were unsigned you'd always need a conversion in
these cases.
The second one specifically talks about the possibility (but only the possibility!) of using int64 for large arrays, mentioning the limitation of the len and cap functions (a limitation that is actually stated in the docs):
The built-in functions len and cap take arguments of various types and
return a result of type int. The implementation guarantees that the
result always fits into an int.
I do agree, though, that a more official point of view wouldn't hurt.
Arrays and slices are indexed by ints. An int is defined as being a 32- or 64-bit signed integer. The most common implementation (6g) uses 32-bit integers regardless of the architecture at this point in time. However, it is planned that an int will eventually be 64 bits on 64-bit machines, and therefore the same length as a pointer.
The language spec defines three implementation-dependent numeric types:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
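You can check what sizes a given implementation chose with unsafe.Sizeof (a minimal sketch):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    var i int
    var u uint
    var p uintptr
    // On a typical 64-bit platform all three print 8; on 32-bit, 4.
    fmt.Println(unsafe.Sizeof(i), unsafe.Sizeof(u), unsafe.Sizeof(p))
}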
