Ada Float/Integer endianness

I'm a new developer in Ada, so forgive me if I'm not clear enough.
I'm facing a problem and I don't know where the fault comes from. First, the context:
I have a set of tests that work on QEMU (big-endian). I wanted to run them on a native x86 PC using pragma Default_Scalar_Storage_Order (High_Order_First). I noticed that some of my tests worked perfectly, but not the ones involving floats. To keep it simple, I wrote a test involving a FLOAT and an INT.
with AUNIT.ASSERTIONS; use AUNIT.ASSERTIONS;
with BASIC_TYPES;
with BASIC_TYPES.STREAM;
with INTERFACES;
with ADA.INTEGER_TEXT_IO;
with ADA.FLOAT_TEXT_IO;
with ADA.TEXT_IO;
with STREAMS;
with SYSTEM;

package body TEST.TEST is

   function Integer2Hexa (Hex_Int : Integer; Bits_Nbr : Integer) return String is
      Hexa : String (1 .. Bits_Nbr);
   begin
      Ada.Integer_Text_IO.Put (Hexa, Hex_Int, 16);
      return Hexa;
   end Integer2Hexa;

   function NAME (T : TEST) return AUNIT.MESSAGE_STRING is
      pragma UNREFERENCED (T);
   begin
      return AUNIT.FORMAT ("Test package");
   end NAME;

   IntegerNbr : BASIC_TYPES.INT32_T;
   FloatNbr   : INTERFACES.IEEE_Float_32;

   procedure RUN_TEST (T : in out TEST) is
      PACKED_ARRAY : BASIC_TYPES.UINT8_ARRAY_NC_T (1 .. 8) := (others => 0);
      MY_STREAM    : STREAMS.STREAM_T;
      use type BASIC_TYPES.UINT8_ARRAY_NC_T;
   begin
      IntegerNbr := 479037433;
      FloatNbr   := 2.0012151e+09;
      ADA.TEXT_IO.PUT_LINE ("Default bit order: " & SYSTEM.Default_Bit_Order'IMG);
      ADA.TEXT_IO.PUT_LINE ("Integer size : " & INTEGER'IMAGE (INTEGER'SIZE));
      ADA.TEXT_IO.PUT ("16#4EEE903D#"); -- 2.0012151e+09 in FLOAT BIG ENDIAN
      ADA.TEXT_IO.PUT (Integer2Hexa (Integer (IntegerNbr), 32)); -- 16#1C8D87F9# in INT BIG ENDIAN
      ADA.TEXT_IO.NEW_LINE;
      -- Init the stream
      STREAMS.INIT (MY_STREAM      => MY_STREAM,
                    STREAM_ADDRESS => PACKED_ARRAY (PACKED_ARRAY'FIRST)'ADDRESS,
                    STREAM_SIZE    => PACKED_ARRAY'LENGTH);
      BASIC_TYPES.STREAM.WRITE_FLOAT_T (MY_STREAM  => MY_STREAM,
                                        ITEM       => FloatNbr,
                                        ALIGN_MODE => STREAMS.PACK);
      BASIC_TYPES.STREAM.WRITE_INT32_T (MY_STREAM  => MY_STREAM,
                                        ITEM       => IntegerNbr,
                                        ALIGN_MODE => STREAMS.PACK);
      if not ASSERT (PACKED_ARRAY = (16#4e#, 16#ee#, 16#90#, 16#3d#, 16#1c#, 16#8d#, 16#87#, 16#f9#), "PACKED_ARRAY incorrect") then
         for I in PACKED_ARRAY'RANGE loop
            ADA.TEXT_IO.PUT (Integer2Hexa (Integer (PACKED_ARRAY (I)), 8));
         end loop;
         ADA.TEXT_IO.NEW_LINE;
      end if;
   end RUN_TEST;

end TEST.TEST;
I noticed that the INT is written correctly, but the FLOAT is not (it is written little-endian). The expected output is
16#4e#, 16#ee#, 16#90#, 16#3d#, 16#1c#, 16#8d#, 16#87#, 16#f9#
but I get
16#3d#, 16#90#, 16#ee#, 16#4e#, 16#1c#, 16#8d#, 16#87#, 16#f9#
I used this site to confirm my results: https://www.scadacore.com/tools/programming-calculators/online-hex-converter/
I don't know whether the conversion driven by the pragma is correctly applied to the FLOAT. I invoke it from the Compiler package of my gpr file, with this line in PRAGMAS.txt: pragma Default_Scalar_Storage_Order(High_Order_First);
package Compiler is
   for Local_Configuration_Pragmas use "PRAGMAS.txt";
   for Switches ("ada") use ("-g");
end Compiler;
Does the problem come from the way I use the pragma?
Here are the called procedures:
procedure WRITE_FLOAT_T
  (MY_STREAM  : in out STREAMS.STREAM_T;
   ITEM       : in BASIC_TYPES.FLOAT_T;
   ALIGN_MODE : in STREAMS.ALIGN_MODE_T)
is
   pragma UNREFERENCED (ALIGN_MODE);
   -- Temporary types for non pack case
   type TMP_TYPE_T is new STANDARD.FLOAT;
   for TMP_TYPE_T'VALUE_SIZE use FLOAT_T_SIZE_C;
   TMP_TYPE : TMP_TYPE_T;
   subtype BITS_FIELD_T is STREAMS.BIT_FIELD_ARR_NC_T (1 .. STREAMS.SIZE_T (FLOAT_T_SIZE_C));
   function TO_BITS_ARRAY is new UNCHECKED_CONVERSION (TMP_TYPE_T, BITS_FIELD_T);
begin
   -- Convert item to a temporary type
   TMP_TYPE := TMP_TYPE_T (ITEM);
   STREAMS.WRITE (MY_STREAM => MY_STREAM,
                  DATA      => TO_BITS_ARRAY (TMP_TYPE));
end WRITE_FLOAT_T;

procedure WRITE (MY_STREAM : in out STREAM_T;
                 DATA      : in BIT_FIELD_ARR_NC_T) is
begin
   if MY_STREAM.ERROR_CODE = NO_ERROR
     and then MY_STREAM.WRITE_OFFSET + DATA'LENGTH - 1 <= MY_STREAM.STREAM_SIZE * 8
   then
      if (MY_STREAM.WRITE_OFFSET mod 8 = 1) and then (DATA'LENGTH mod 8 = 0) then
         -- Byte mode
         WRITE_BYTES (MY_STREAM => MY_STREAM,
                      DATA      => DATA);
      else
         -- Bit mode
         WRITE_BITS (MY_STREAM => MY_STREAM,
                     DATA      => DATA);
      end if;
   elsif MY_STREAM.ERROR_CODE = NO_ERROR then
      -- Update ERROR_CODE on first error
      MY_STREAM.ERROR_CODE := END_ERROR;
   end if;
end WRITE;

procedure WRITE_BYTES (MY_STREAM : in out STREAM_T;
                       DATA      : in BIT_FIELD_ARR_NC_T) is
   BYTE_FIELD_ARR : BYTE_FIELD_ARR_NC_T (1 .. MY_STREAM.STREAM_SIZE);
   for BYTE_FIELD_ARR'ADDRESS use MY_STREAM.STREAM_ADDRESS;
   TMP_BYTE_FIELD_ARR : BYTE_FIELD_ARR_NC_T (1 .. DATA'LENGTH / 8);
   for TMP_BYTE_FIELD_ARR'ADDRESS use DATA'ADDRESS;
begin
   -- Write byte field
   BYTE_FIELD_ARR ((MY_STREAM.WRITE_OFFSET + 7) / 8 ..
                   (MY_STREAM.WRITE_OFFSET + 7) / 8 + DATA'LENGTH / 8 - 1) := TMP_BYTE_FIELD_ARR;
   MY_STREAM.WRITE_OFFSET := MY_STREAM.WRITE_OFFSET + DATA'LENGTH;
end WRITE_BYTES;
Thank you in advance!
Q.Dherb

According to the documentation of Scalar_Storage_Order:
This implementation-defined attribute only applies to array and record types. It has no effect on the memory layout of a scalar type such as Float or Integer. Whatever the value of Default_Scalar_Storage_Order, on a big-endian machine the integer 16#12345678# is represented as 12 34 56 78, and on a little-endian machine as 78 56 34 12.
For an array, the attribute determines the order of the storage elements (usually bytes) within each scalar component. In your case, every array component is no larger than one storage element, which means the Scalar_Storage_Order clause has no effect.
Here is an example that shows the effect of this clause for arrays:
with Ada.Text_IO;
with System;
with Interfaces;
with Ada.Streams;
with Ada.Integer_Text_IO;

procedure Scalar_Storage_Element_Exemple is
   type T_U16_Arr_Le is array (Positive range <>) of Interfaces.Unsigned_16
     with Component_Size => 16, Scalar_Storage_Order => System.Low_Order_First;
   type T_U16_Arr_Be is array (Positive range <>) of Interfaces.Unsigned_16
     with Component_Size => 16, Scalar_Storage_Order => System.High_Order_First;
   type T_U8_Arr_Le is array (Positive range <>) of Interfaces.Unsigned_8
     with Component_Size => 8, Scalar_Storage_Order => System.Low_Order_First;
   type T_U8_Arr_Be is array (Positive range <>) of Interfaces.Unsigned_8
     with Component_Size => 8, Scalar_Storage_Order => System.High_Order_First;

   Arr_16_LE : T_U16_Arr_Le (1 .. 2) := (16#1234#, 16#5678#);
   Arr_16_BE : T_U16_Arr_Be (1 .. 2) := (16#1234#, 16#5678#);
   Arr_8_LE  : T_U8_Arr_Le (1 .. 4)  := (16#12#, 16#34#, 16#56#, 16#78#);
   Arr_8_BE  : T_U8_Arr_Be (1 .. 4)  := (16#12#, 16#34#, 16#56#, 16#78#);

   Sea_16_LE : Ada.Streams.Stream_Element_Array (1 .. 4) with Address => Arr_16_LE'Address;
   Sea_16_BE : Ada.Streams.Stream_Element_Array (1 .. 4) with Address => Arr_16_BE'Address;
   Sea_8_LE  : Ada.Streams.Stream_Element_Array (1 .. 4) with Address => Arr_8_LE'Address;
   Sea_8_BE  : Ada.Streams.Stream_Element_Array (1 .. 4) with Address => Arr_8_BE'Address;

   function byte2Hexa (byte : Integer) return String is
      Hexa : String (1 .. 8);
   begin
      Ada.Integer_Text_IO.Put (Hexa, byte, 16);
      return Hexa;
   end byte2Hexa;
begin
   for byte of Sea_16_LE loop
      Ada.Text_IO.Put (byte2Hexa (Integer (byte)));
   end loop;
   -- display 16#34# 16#12# 16#78# 16#56#
   -- each item of the array is in LE
   Ada.Text_IO.New_Line;

   for byte of Sea_16_BE loop
      Ada.Text_IO.Put (byte2Hexa (Integer (byte)));
   end loop;
   -- 16#12# 16#34# 16#56# 16#78#
   -- each item of the array is in BE
   Ada.Text_IO.New_Line;

   for byte of Sea_8_LE loop
      Ada.Text_IO.Put (byte2Hexa (Integer (byte)));
   end loop;
   -- 16#12# 16#34# 16#56# 16#78#
   -- no effect as size of component is no larger than a storage element
   Ada.Text_IO.New_Line;

   for byte of Sea_8_BE loop
      Ada.Text_IO.Put (byte2Hexa (Integer (byte)));
   end loop;
   -- 16#12# 16#34# 16#56# 16#78#
   -- no effect as size of component is no larger than a storage element
end Scalar_Storage_Element_Exemple;
Your float serialization works on QEMU because that target is already big-endian, so the Scalar_Storage_Order clause merely confirms the native order and has no effect.
It doesn't work on x86 because the native endianness is little-endian, and, as explained above, a big-endian Scalar_Storage_Order clause has no effect on the types involved. The end result is a little-endian float.
Provided you use the same logic for both serializations (the integer code is not shown, so I assume it differs), Integer and Float should have behaved the same here.
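To make the float come out big-endian on a little-endian host, you can wrap it in a composite whose component is wider than one storage element, so that Scalar_Storage_Order actually applies. Here is a minimal sketch of that idea; the type and procedure names are mine, not from the question's BASIC_TYPES/STREAMS packages, and it is untested against that code base:

```ada
with Ada.Text_IO;
with Ada.Streams;
with Interfaces;
with System;

procedure Float_BE_Sketch is
   --  One-component array: the 32-bit component is larger than a byte,
   --  so the High_Order_First clause forces big-endian byte order.
   type F32_BE_Arr is array (1 .. 1) of Interfaces.IEEE_Float_32
     with Component_Size => 32, Scalar_Storage_Order => System.High_Order_First;

   F   : F32_BE_Arr := (1 => 2.0012151e+09);
   Sea : Ada.Streams.Stream_Element_Array (1 .. 4) with Address => F'Address;
begin
   for B of Sea loop
      Ada.Text_IO.Put (Ada.Streams.Stream_Element'Image (B));
   end loop;
   --  Expected on any host: 78 238 144 61, i.e. 16#4E# 16#EE# 16#90# 16#3D#
   Ada.Text_IO.New_Line;
end Float_BE_Sketch;
```

The same wrapper trick works for INT32_T; forcing the order at the array or record level, rather than per scalar, is the portable route.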

It's not entirely clear, because you've included a lot of confusing detail, but I think you're trying to write to streams in an endianness-independent way in order to communicate (over the net?) between machines of different endianness.
The issue with your procedure WRITE_FLOAT_T is that its ITEM is a plain float, so Scalar_Storage_Order has no effect.
The way I've used Scalar_Storage_Order is to declare the record I wanted to send,
type SNTP_Packet is record
   -- contents
end record
with
  Bit_Order            => System.High_Order_First,
  Scalar_Storage_Order => System.High_Order_First,
  Size                 => 48 * 8;

for SNTP_Packet use record
   -- placement of content
end record;

subtype Net_Packet is Ada.Streams.Stream_Element_Array (1 .. 48);
-- This is what actually gets streamed

function To_Net_Packet
  is new Ada.Unchecked_Conversion (SNTP_Packet, Net_Packet);
function To_SNTP_Packet
  is new Ada.Unchecked_Conversion (Net_Packet, SNTP_Packet);
You could use pragma Default_Scalar_Storage_Order, but then I'm not sure what happens about the need to make Bit_Order match.
Alternatively, if you want to be able to use e.g. Float'Write, you can alter the way that GNAT streams fundamental types.
The Ada runtime handles streaming for fundamental types using the package System.Stream_Attributes, in files s-stratt.ads, s-stratt.adb, and provides an alternative implementation in s-stratt__xdr.adb (in the latest compilers; older compilers may use a different file name, but there'll be an xdr in there).
Getting the compiler to use this alternate version isn't very straightforward, but this worked for me:
copy s-stratt__xdr.adb to s-stratt.adb in your working directory
use gnatmake -a to compile the necessary parts of the runtime locally (-gnatpg says "compile for the runtime"):
gnatmake -a -f s-stratt.adb -gnatpg
build your program:
gprbuild main.adb
Note, gprbuild doesn’t support -a. It might be possible to use a project file to allow you to make a library containing the modified runtime components.

You are trying to encode data in big-endian (probably for network transmission) independently of your host endianness.
You expect both operands of your UNCHECKED_CONVERSION to honour Scalar_Storage_Order => System.High_Order_First, defined here:
If the opposite storage order is specified, then whenever the value of a scalar component of an object of type S is read, the storage elements of the enclosing machine scalar are first reversed.
Your problem comes from the use of an old gcc version.
I tried to decompose the problem by testing the conversion of a FLOAT_T from System.Default_Bit_Order to System.High_Order_First via UNCHECKED_CONVERSION, with the following code:
inc.ads:
with SYSTEM;

package inc is
   type BITS32_T is mod (2 ** 32);
   for BITS32_T'SIZE use 32;
   subtype UINT32_T is BITS32_T;
   subtype INT32_UNSIGNED_T is UINT32_T;

   type SIZE_T is new UINT32_T;
   subtype INDEX_T is SIZE_T range 1 .. SIZE_T'LAST;

   type BIT_T is mod (2 ** 1);
   for BIT_T'SIZE use 1;

   type BITS8_T is mod (2 ** 8);
   for BITS8_T'SIZE use 8;

   -- 64-bit signed integer
   type INT64_T is range -(2 ** (64 - 1)) .. (2 ** (64 - 1) - 1);
   for INT64_T'SIZE use 64;
   subtype INT64_SIGNED_T is INT64_T;

   type BIT_FIELD_ARR_NC_T is array (INDEX_T range <>) of BIT_T;
   for BIT_FIELD_ARR_NC_T'COMPONENT_SIZE use 1;
   for BIT_FIELD_ARR_NC_T'Scalar_Storage_Order use System.High_Order_First;

   type BYTE_FIELD_ARR_HOST_ENDIANNESS_NC_T is array (INDEX_T range <>) of BITS8_T;
   for BYTE_FIELD_ARR_HOST_ENDIANNESS_NC_T'COMPONENT_SIZE use 8;

   type BIT_FIELD_ARR_HOST_ENDIANNESS_NC_T is array (INDEX_T range <>) of BIT_T;
   for BIT_FIELD_ARR_HOST_ENDIANNESS_NC_T'COMPONENT_SIZE use 1;
end inc;
test_types.adb:
with inc;
with INTERFACES;
with Ada.Text_IO;
with Ada.Integer_Text_IO;
with UNCHECKED_CONVERSION;

procedure TEST_TYPES is
   longfloat : INTERFACES.IEEE_FLOAT_64 := INTERFACES.IEEE_FLOAT_64 (1e11);
   int64     : inc.INT64_T := 16#1122334455667788#;

   ---------------- TYPE used to print representation in memory ----------------
   subtype BYTES_ARRAY_T is inc.BYTE_FIELD_ARR_HOST_ENDIANNESS_NC_T (1 .. 8);
   -------- bit arrays --------
   subtype BITS_FIELD_T is inc.BIT_FIELD_ARR_NC_T (1 .. 64);
   subtype BITS_FIELD_HOST_ENDIANNESS_T is inc.BIT_FIELD_ARR_HOST_ENDIANNESS_NC_T (1 .. 64);

   ---------------- FLOAT with BIG ENDIAN encoding ----------------
   type TMP_TYPE_T is new STANDARD.LONG_FLOAT;
   for TMP_TYPE_T'VALUE_SIZE use 64;
   TMP_TYPE : TMP_TYPE_T;
   function TO_BYTES_ARRAY is new UNCHECKED_CONVERSION (TMP_TYPE_T, BITS_FIELD_T);
   bytes : BITS_FIELD_T;

   ---------------- FLOAT with host ENDIANNESS ----------------
   function TO_BYTES_HOST_ENDIANNESS_ARRAY is new UNCHECKED_CONVERSION (TMP_TYPE_T, BITS_FIELD_HOST_ENDIANNESS_T);
   bytesNoEndian : BITS_FIELD_HOST_ENDIANNESS_T;

   ---------------- INTEGER with ENDIAN CONVERSION ----------------
   type TMP_Integer_T is new STANDARD.LONG_LONG_INTEGER;
   for TMP_Integer_T'VALUE_SIZE use 64;
   TMP_Integer : TMP_Integer_T;
   function TO_BYTES_ARRAY_Integer is new UNCHECKED_CONVERSION (TMP_Integer_T, BITS_FIELD_T);
   bytes_integer : BITS_FIELD_T;

   ---------------- INTEGER without ENDIAN CONVERSION ----------------
   function TO_BYTES_ARRAY_HOST_ENDIANNESS_Integer is new UNCHECKED_CONVERSION (TMP_Integer_T, BITS_FIELD_HOST_ENDIANNESS_T);
   bytes_no_endian_integer : BITS_FIELD_HOST_ENDIANNESS_T;

   -- representation in memory
   float_rep            : BYTES_ARRAY_T;
   float_bits_field_rep : BYTES_ARRAY_T;
   int_rep              : BYTES_ARRAY_T;
   int_bits_field_rep   : BYTES_ARRAY_T;
   for float_rep'ADDRESS use bytesNoEndian'ADDRESS;
   for float_bits_field_rep'ADDRESS use bytes'ADDRESS;
   for int_rep'ADDRESS use bytes_no_endian_integer'ADDRESS;
   for int_bits_field_rep'ADDRESS use bytes_integer'ADDRESS;

   ------------------ FUNCTION FROM STACKOVERFLOW ------------------
   function byte2hexa (byte : Integer) return String is
      Hexa : String (1 .. 8);
   begin
      Ada.Integer_Text_IO.Put (Hexa, byte, 16);
      return Hexa;
   end byte2hexa;

   procedure array2hexa (bytes : BYTES_ARRAY_T) is
   begin
      for I in bytes'RANGE loop
         Ada.Integer_Text_IO.Put (Integer (bytes (I)), Base => 16);
      end loop;
      Ada.Text_IO.New_Line;
   end array2hexa;
begin
   -- test serialisation of the float
   TMP_TYPE := TMP_TYPE_T (longfloat);
   bytesNoEndian := TO_BYTES_HOST_ENDIANNESS_ARRAY (TMP_TYPE);
   Ada.Text_IO.Put_Line ("float in native endianess ");
   array2hexa (float_rep);
   Ada.Text_IO.New_Line;

   Ada.Text_IO.Put_Line ("float into BigEndian Bit array");
   TMP_TYPE := TMP_TYPE_T (longfloat);
   bytes := TO_BYTES_ARRAY (TMP_TYPE);
   array2hexa (float_bits_field_rep);
   Ada.Text_IO.New_Line;

   -- test serialisation of the integer
   TMP_Integer := TMP_Integer_T (int64);
   bytes_no_endian_integer := TO_BYTES_ARRAY_HOST_ENDIANNESS_Integer (TMP_Integer);
   Ada.Text_IO.Put_Line ("Integer in native endianess ");
   array2hexa (int_rep);
   Ada.Text_IO.New_Line;

   Ada.Text_IO.Put_Line ("Integer into BigEndian Bit array");
   TMP_Integer := TMP_Integer_T (int64);
   bytes_integer := TO_BYTES_ARRAY_Integer (TMP_Integer);
   array2hexa (int_bits_field_rep);
end TEST_TYPES;
In my proposed code, the problem is that the endianness of the elements of BITS_FIELD_T is clearly defined, but the behavior of UNCHECKED_CONVERSION is undefined regarding the conversion from the endianness of the Float to the endianness of BITS_FIELD_T.
Surprisingly, with gcc (GCC) 6.2.1 20161010 (for GNAT Pro 17.2 20170606),
UNCHECKED_CONVERSION converts the endianness of the integer but not of the floating point:
float in native endianess
16#0# 16#0# 16#0# 16#E8# 16#76# 16#48# 16#37# 16#42#
float into BigEndian Bit array
16#0# 16#0# 16#0# 16#E8# 16#76# 16#48# 16#37# 16#42#
Integer in native endianess
16#88# 16#77# 16#66# 16#55# 16#44# 16#33# 16#22# 16#11#
Integer into BigEndian Bit array
16#11# 16#22# 16#33# 16#44# 16#55# 16#66# 16#77# 16#88#
but with gcc (GCC) 7.3.1 20181018 (for GNAT Pro 20.0w 20181017),
floating-point values are correctly swapped:
float in native endianess
16#0# 16#0# 16#0# 16#E8# 16#76# 16#48# 16#37# 16#42#
float into BigEndian Bit array
16#42# 16#37# 16#48# 16#76# 16#E8# 16#0# 16#0# 16#0#
One solution (for the old compiler) is to pass through an intermediate big-endian record before the UNCHECKED_CONVERSION:
procedure WRITE_LONG_FLOAT_T
  (MY_STREAM  : in out STREAMS.STREAM_T;
   ITEM       : in BASIC_TYPES.LONG_FLOAT_T;
   ALIGN_MODE : in STREAMS.ALIGN_MODE_T)
is
   pragma UNREFERENCED (ALIGN_MODE);
   -- Temporary types for non pack case
   type TMP_TYPE_T is new STANDARD.LONG_FLOAT;
   for TMP_TYPE_T'VALUE_SIZE use LONG_FLOAT_T_SIZE_C;
   TMP_TYPE : TMP_TYPE_T;
   subtype BITS_FIELD_T is STREAMS.BIT_FIELD_ARR_NC_T (1 .. STREAMS.SIZE_T (LONG_FLOAT_T_SIZE_C));

   -- Intermediate record forcing big-endian scalar storage
   type ITEM_ENDIAN_T is record
      TMP_TYPE_ENDIAN : TMP_TYPE_T;
   end record;
   for ITEM_ENDIAN_T'Bit_Order use System.High_Order_First;
   for ITEM_ENDIAN_T'Scalar_Storage_Order use System.High_Order_First;
   ITEM_ENDIAN : ITEM_ENDIAN_T;

   function TO_BITS_ARRAY_ENDIAN is new UNCHECKED_CONVERSION (ITEM_ENDIAN_T, BITS_FIELD_T);
begin
   -- Convert item to a temporary type
   TMP_TYPE := TMP_TYPE_T (ITEM);
   ITEM_ENDIAN.TMP_TYPE_ENDIAN := TMP_TYPE;
   STREAMS.WRITE (MY_STREAM => MY_STREAM,
                  DATA      => TO_BITS_ARRAY_ENDIAN (ITEM_ENDIAN));
end WRITE_LONG_FLOAT_T;
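If you are stuck on the old compiler and are using GNAT, another workaround is to swap the bytes yourself instead of relying on Scalar_Storage_Order, using the GNAT-specific GNAT.Byte_Swapping package. A sketch of the idea follows; treat the exact generic signature as an assumption on very old GNAT versions:

```ada
with Ada.Text_IO;
with Ada.Unchecked_Conversion;
with Interfaces;
with System;
with GNAT.Byte_Swapping;

procedure Swap_Sketch is
   use type System.Bit_Order;

   --  View the float's bits as a 64-bit modular integer, then swap on LE hosts.
   function To_U64 is new Ada.Unchecked_Conversion
     (Interfaces.IEEE_Float_64, Interfaces.Unsigned_64);
   function Swapped8 is new GNAT.Byte_Swapping.Swapped8 (Interfaces.Unsigned_64);

   F : constant Interfaces.IEEE_Float_64 := 1.0e11;
   U : Interfaces.Unsigned_64 := To_U64 (F);
begin
   if System.Default_Bit_Order = System.Low_Order_First then
      U := Swapped8 (U);  --  bytes now in network (big-endian) order
   end if;
   Ada.Text_IO.Put_Line (Interfaces.Unsigned_64'Image (U));
end Swap_Sketch;
```

This sidesteps the compiler-version dependency entirely, at the cost of being GNAT-specific.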

One way (or another source of ideas) is to use the IEEE 754 representation packages: http://www.dmitry-kazakov.de/ada/components.htm#IEEE_754
An endianness-neutral use of those packages is done here: http://excel-writer.sf.net/

Related

Type enumeration in VHDL

What is the enumeration type in VHDL?
Where can I use it to make code shorter and more understandable?
For example, consider the statement below:
TYPE st_State IS (st_Idle, st_CheckHeader1, st_CheckHeader2, st_ReceiveData)
When must I use it?
Your example is only a declaration of a type named st_State, and this type contains four elements. Each element gets a number from 0 to Elements - 1. This is similar to a C typedef combined with a C enum.
Please check this explanation for more detailed information.
A typical application for this is a state machine to name the different states:
architecture Top_Arch of Top is
    type State_Type is (S0, S1, S2, S3);
    signal CurrentState : State_Type := S0;
begin
    process(Clock)
    begin
        if(rising_edge(Clock)) then
            case CurrentState is
                when S0 => ...
                when S1 => ...
                when S2 => ...
                when S3 => ...
            end case;
        end if;
    end process;
end Top_Arch;
Using this method results in more readable and cleaner code, but it is equivalent to this approach (untested):
architecture Top_Arch of Top is
    signal CurrentState : INTEGER RANGE 0 to 3 := 0;
begin
    process(Clock)
    begin
        if(rising_edge(Clock)) then
            case CurrentState is
                when 0 => ...
                when 1 => ...
                when 2 => ...
                when 3 => ...
            end case;
        end if;
    end process;
end Top_Arch;
NOTE: Check the range constraint. You have to use it, because the case statement must cover every possible value of the signal: either restrict the integer (here to 0 .. 3, i.e. 2 bits) or add a when others choice. Otherwise you would have to cover every possible integer value.
So you need at least a type declaration (type <YourType> is ...) to declare your custom type, and a signal of that type to use it (CurrentState in the above example).
Enumerated types have many other uses than just states in state machines.
You can use them as index types in arrays, loop variables, etc. For example,
type channel is (R, G, B);
type Colour is array (channel) of byte;  -- "type" added so Colour can be used as a type below
constant Black : Colour := (R => 0, G => 0, B => 0);
signal VGA_Out : Colour;
-- in a process
for c in channel loop
    VGA_Out(c) <= A(c) + B(c); -- mix video signals A and B
end loop;
and so on

How to constrain VHDL-2008 integer_vector?

VHDL-2008 defines
type integer_vector is array (natural range <>) of integer
and it can be used to create arrays of unconstrained integers just fine:
signal sUnconstrainedIntA : integer_vector(0 to 1) := (others => 0);
However, how do you declare an array of constrained integers, e.g.:
-- does not work:
-- signal sConstrainedTestIntA : integer_vector(0 to 1) range 0 to 3 := (others => 0);
-- ** Error: filetest.vhd(65): Range constraints cannot be applied to array types.
-- ** Error: filetest.vhd(65): Range expression is type Integer; expecting type std.STANDARD.INTEGER_VECTOR
-- What you can do is:
type my_int_array is array (natural range <>) of integer range 0 to 3;
signal sConstrainedIntA : my_int_array(0 to 1) := (others => 0);
Is there a way to constrain the integers in the array without the custom type?
VHDL 2008 supports package generic parameters. You could try something like:
package foo_pkg is
    generic(l, h: integer);
    subtype my_integer is integer range l to h;
    type my_integer_vector is array(natural range <>) of my_integer;
end package foo_pkg;

package foo_pkg_m17_p39 is new work.foo_pkg
    generic map(l => -17, h => 39);

package foo_pkg_p57_p134 is new work.foo_pkg
    generic map(l => 57, h => 134);

entity foo is
    port(iv1: work.foo_pkg_m17_p39.my_integer_vector(0 to 7);
         iv2: work.foo_pkg_p57_p134.my_integer_vector(0 to 7)
        );
end entity foo;
Not very user-friendly, because you need one package instantiation declaration per integer constraint, but it is the closest thing I found to what you ask for.
Even if it looks more complicated than what you expected, it still allows you to factor out your custom code for all variants of my_integer_vector.

How would I create a function to convert from an integer to std_logic vector in VHDL?

I am seeking help as I am learning this language construct.
Here is what I have:
function int_slv(val,width: integer) return std_logic_vector is
variable R: std_logic_vector(0 to width-1):=(others=>'0')
variable b:integer:= width;
begin
if (b>32) then
b=32;
else
assert 2**bits >val report
"value too big for std_logic_vector"
severity warning
end if;
for i in 0 to b-1 loop
if val ((val/(2**i)) MOD 2 = 1) then
R(i)='1';
end if;
end loop;
return(R);
end int_slv;
In addition to five syntax errors, there were one wrong identifier, a modulo expression written as if it were an array index, and several sets of redundant parentheses. Your modified code:
library ieee;
use ieee.std_logic_1164.all;

package int2bv_pkg is
    function int_slv (val, width: integer) return std_logic_vector;
end package;

package body int2bv_pkg is
    function int_slv (val, width: integer) return std_logic_vector is
        variable R: std_logic_vector(0 to width - 1) := (others => '0'); -- added ';'
        variable b: integer := width;
    begin
        if b > 32 then
            b := 32;  -- ":=" is used for variable assignment
        else
            assert 2 ** width > val report  -- width, not bits
                "value too big for std_logic_vector"
                severity warning;  -- missing semicolon at the end of the assertion
        end if;
        for i in 0 to b - 1 loop
            if val / 2 ** i MOD 2 = 1 then  -- not val (...)
                R(i) := '1';  -- ":=" variable assignment
            end if;
        end loop;
        return R;  -- parentheses not needed
    end int_slv;
end package body int2bv_pkg;
analyzes (compiles). The exponentiation operator "**" has the highest priority; the division operators "/" and "mod" have the same priority and are evaluated in the order they are found (left to right). It's likely worthwhile learning VHDL operator precedence.
You were using "=" for variable assignment when you should have been using ":=" in two places, you were missing two semicolons and were using the identifier bits (which isn't declared in your function) where apparently you meant width.
The modified example analyzes, and hasn't been tested absent a Minimal, Complete and Verifiable example in the question.
Note that a package body is a design unit as is a package declaration. There are various other places in other design units you can introduce a function body.
You could also note that 2 ** 31 (2147483648) is outside the guaranteed range of an integer in VHDL: the INTEGER value range is only guaranteed to span -2147483647 to +2147483647 at a minimum.
This implies that wherever you use a value derived from an expression equivalent to 2 ** 31, you can incur a range error during execution (either at elaboration or during simulation).
This pretty much says you need a VHDL implementation with a larger INTEGER value range, or you need to rethink what you're doing.
As a matter of course there are integer to unsigned and integer to signed functions found in package numeric_std in library IEEE.
The result of such can be type converted to std_logic_vector, and the source code can make great learning aids on how to wend through the limitations VHDL imposes. These to_signed or to_unsigned functions would be capable of dealing with the maximum value an INTEGER can hold and specify the length of the resulting array type while providing zero or sign filling for array lengths greater than the INTEGER's binary value. That utility extends to clipping using length as well.
VHDL -2008 package numeric_std_unsigned contains a function To_StdLogicVector that does what your int_slv function is intended to do although limited to a NATURAL range for the integer type input.
As #user1155120 has already indicated, the VHDL-2008 package numeric_std_unsigned has a builtin to_stdlogicvector. And #user1155120 already pointed out the to_signed and to_unsigned in numeric_std are available as well.
So, to expand on the previous answer, you can do:
constant C : integer := -6817563;
constant C_VEC : std_logic_vector(31 downto 0) := std_logic_vector(to_signed(c, 32));
And this mechanism will accept the full range of integer. You can also use to_unsigned, but this is limited to the range of natural.

No function declarations for operator + error in VHDL

In this piece of code I get this error for the line with +
function func (bv1 : in bit_vector; bv2 : in integer) return bit_vector is
variable temp : natural := 2**bv2;
variable result : bit_vector(1 to 32);
begin
report "asd" & natural'image(temp);
result <= bv1 + temp; // this line causes the error
return result;
end func;
The error is :
No function declarations for operator +
How can I solve this? I also get a similar error for "=" as well.
Don't use bit_vectors (or std_logic_vectors, really) for anything you want to do arithmetic on.
Use the ieee.numeric_std library and then declare your signals (or whatever) to be of type signed or unsigned, depending on what kind of vector you want. (Or, of course, you can just use integers and their subtypes.)
It's because you try to add a natural to a bit_vector which does not work because they are of different types. So you'll have to use a converter, e.g. as shown here within one of the functions. The other method is to stick to all the same types, but that isn't always possible.
Some initial problems with the code: the VHDL comment markup is --, not //, and assignment to the result variable must use :=, since <= is for signal assignment.
Then, the reason for the error
No function declarations for operator +
is that VHDL is a strongly typed language, so it is not possible simply to add a natural and a bit_vector, as attempted in result <= bv1 + temp.
Instead you need to use the package numeric_bit_unsigned and, for example, convert temp to bit_vector with the function to_bitvector before adding.
The resulting code can then be:
library ieee;
use ieee.numeric_bit_unsigned.all;
...
function func (bv1 : in bit_vector; bv2 : in integer) return bit_vector is
    variable temp : natural := 2**bv2;
    variable result : bit_vector(1 to 32);
begin
    report "asd" & natural'image(temp);
    result := bv1 + to_bitvector(temp, result'length); -- the previously failing line
    return result;
end func;
You should check that the length is enough to hold the required values.
However, instead of the bit_vector type, you may consider std_logic_vector (depending on the design), since std_logic_vector has additional values that may reveal design problems in simulation.

How to use generic parameters that depend on other generic parameters for entities?

I am trying to convert some Verilog code that produces a slower clock from a faster clock for a UART module. The original verilog code is based on the module over at fpga4fun.com, and this is my attempt to translate it for my VHDL-based design.
entity baud_generator is
    generic(
        f_clk       : integer := 50000000;  -- default: 50 MHz
        baud        : integer := 115200;    -- default: 115,200 baud
        accum_width : integer := 16;
        accum_inc   : integer := (baud sll accum_width) / f_clk
    );
    port(
        clock      : in  std_logic;
        reset_n    : in  std_logic;
        enable     : in  std_logic;
        baud_clock : out std_logic
    );
end entity baud_generator;
However, my compiler, Aldec-HDL, doesn't like the following line:
accum_inc : natural := (baud sll accum_width) / f_clk
Here is the exact error message:
# Error: COMP96_0300: baud_generator.vhd : (20, 52): Cannot reference "f_clk" until the interface list is complete.
# Error: COMP96_0300: baud_generator.vhd : (20, 28): Cannot reference "baud" until the interface list is complete.
# Error: COMP96_0071: baud_generator.vhd : (20, 28): Operator "sll" is not defined for such operands.
# Error: COMP96_0104: baud_generator.vhd : (20, 27): Undefined type of expression.
# Error: COMP96_0077: baud_generator.vhd : (20, 27): Assignment target incompatible with right side. Expected type 'INTEGER'.
In verilog, I have something like this:
module baud_generator(
    input  clock,
    input  reset_n,
    input  enable,
    output baud_clock
);
    parameter f_clock     = 50000000;
    parameter baud        = 115200;
    parameter accum_width = 16;
    parameter accum_inc   = (baud << accum_width) / f_clock;
    //...
endmodule
What is it that I need to modify in that line to make the compiler happy? Is it possible to use generics chained together like that?
This basically says that you cannot use the values of earlier generics in computations that calculate (default values for) other generics.
Just use accum_inc as a constant, not as a generic.
Also, the sll (shift left logical) operator is meant for bit patterns (the unsigned and signed types in the ieee.numeric_std and ieee.numeric_bit packages), not for integers. You can do the same by multiplying by a power of two.
It looks to me like accum_inc is a constant, not a parameter (as it's calculated from the generics, so there's no reason to override it)
So it doesn't want to be in the generic part - simply move it to the architecture and make it a constant (and as Philippe noted, do your shifting with multiplies):
constant accum_inc : integer := (baud * (2**accum_width)) / f_clk;
You may find that you overflow what integers can manage, depending on the values of the generics, so you might find you want to use unsigned vectors in the generics and/or calculation.
