Extend bit pattern to generic vector size in VHDL - vhdl

constant alternate_bits : std_logic_vector(C_BIT_SIZE-1 downto 0) := X;
What do I write in place of X to set it to an alternating pattern of bits, while keeping it generic and without getting upset if C_BIT_SIZE isn't even?
For example, if C_BIT_SIZE = 4 it should produce "1010" and if C_BIT_SIZE = 5 it should produce "01010". (And it should work for any value of C_BIT_SIZE >= 1.)

A function can be used:
-- Returns std_logic_vector(BIT_SIZE-1 downto 0) with bits on even indexes
-- as '0' and bits on odd indexes as '1', e.g. 5 bit vector as "01010".
function alternate_fun(BIT_SIZE : natural) return std_logic_vector is
  variable res_v : std_logic_vector(BIT_SIZE - 1 downto 0);
begin
  res_v := (others => '0');
  for i in 1 to BIT_SIZE / 2 loop
    res_v(2 * i - 1) := '1';
  end loop;
  return res_v;
end function;
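The constant from the question can then be initialised by calling the function with the generic, along the lines of:
constant alternate_bits : std_logic_vector(C_BIT_SIZE-1 downto 0) := alternate_fun(C_BIT_SIZE);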

I wrote a function that seems to do the trick but I'm interested in alternate tidier answers:
subtype data_vector is std_logic_vector(C_BIT_SIZE-1 downto 0);

function make_alternate_bits return data_vector is
  variable bits : data_vector;
begin
  for i in 0 to C_BIT_SIZE-1 loop
    if (i mod 2) = 0 then
      bits(i) := '0';
    else
      bits(i) := '1';
    end if;
  end loop;
  return bits;
end function;

constant alternate_bits : data_vector := make_alternate_bits;

Related

std_logic_vector (to_unsigned(X, Y));

This is a test-bench, and I have these signals:
signal DATA_INPUT :std_logic_vector(0 to 31);
signal rand_num :integer;
I am trying to put random numbers into this 32-bit signal like this:
DATA_INPUT <= std_logic_vector(to_unsigned(rand_num, 32));
My question is, I need numbers wider than 31 bits, but when a random number goes above 2147483647 (which is INTEGER'high) I get this error:
near "4294967295": (vcom-119) Integer value exceeds INTEGER'high.
# ** Error: tb.vhd: (vcom-1144) Value -1 (of type
std.STANDARD.NATURAL) is out of range 0 to 2147483647.
I tried to modify the TO_UNSIGNED() function to change the NATURAL input to something else, but without success.
Here are the TO_UNSIGNED function from IEEE and the random generator process:
function TO_UNSIGNED(ARG, SIZE: NATURAL) return UNSIGNED is
  variable RESULT: UNSIGNED (SIZE-1 downto 0);
  variable i_val: NATURAL := ARG;
begin
  if (SIZE < 1) then return NAU; end if;
  for i in 0 to RESULT'left loop
    if (i_val MOD 2) = 0 then
      RESULT(i) := '0';
    else RESULT(i) := '1';
    end if;
    i_val := i_val/2;
  end loop;
  if not(i_val=0) then
    assert NO_WARNING
      report "numeric_std.TO_UNSIGNED : vector truncated"
      severity WARNING;
  end if;
  return RESULT;
end TO_UNSIGNED;
Random generator:
process
  variable seed1, seed2  : positive;
  variable rand          : real;
  variable range_of_rand : real := 46340.0;
begin
  uniform(seed1, seed2, rand);
  rand_num <= integer(rand*range_of_rand);
  wait for 1 ns;
end process;
You can make a new, bigger random number by combining two.
The simplest solution is to convert two random integers to vectors and then concatenate until you get the number of bits you need. This gives you 64 bits:
DATA_INPUT <= std_logic_vector(to_unsigned(rand_num, 32)) & std_logic_vector(to_unsigned(rand_num, 32));
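Note that concatenating rand_num with itself just repeats the same pattern in both halves. For two genuinely independent halves you can draw twice per cycle, roughly like the sketch below (rand_num2 is an extra integer signal not in the original code, DATA_INPUT would have to be declared 64 bits wide, and uniform comes from ieee.math_real):
process
  variable seed1, seed2  : positive := 1;
  variable rand          : real;
  variable range_of_rand : real := 46340.0;
begin
  uniform(seed1, seed2, rand);
  rand_num  <= integer(rand*range_of_rand);   -- upper 32 bits
  uniform(seed1, seed2, rand);
  rand_num2 <= integer(rand*range_of_rand);   -- lower 32 bits
  wait for 1 ns;
end process;

DATA_INPUT <= std_logic_vector(to_unsigned(rand_num, 32)) & std_logic_vector(to_unsigned(rand_num2, 32));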

Unexpected function output when function parameter is negated

I have a priority encoding function that returns a vector containing a 1 at the position where the first 1 is found in the input vector. The function works as expected, unless I try to negate the input vector. Here's an example that demonstrates the unexpected behavior:
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;

entity tb IS
end tb;

architecture run of tb is

  constant N : natural := 5;

  function get_first_one_in_vec (vec_in: std_logic_vector) return std_logic_vector is
    variable ret: std_logic_vector(vec_in'high downto vec_in'low);
  begin
    ret := (others => '0');
    for i in vec_in'low to vec_in'high loop
      if vec_in(i)='1' then
        ret(i) := '1';
        exit;
      end if;
    end loop;
    return ret;
  end get_first_one_in_vec;

  signal a          : std_logic_vector(N-1 downto 0);
  signal abar       : std_logic_vector(N-1 downto 0);
  signal first_a    : std_logic_vector(N-1 downto 0);
  signal first_nota : std_logic_vector(N-1 downto 0);
  signal first_abar : std_logic_vector(N-1 downto 0);

begin

  process
  begin
    a <= "10100";
    wait for 10 ns;
    a <= "01011";
    wait for 10 ns;
    wait;
  end process;

  abar       <= not(a);
  first_a    <= get_first_one_in_vec(a);
  first_nota <= get_first_one_in_vec(not(a));
  first_abar <= get_first_one_in_vec(abar);

end run;
To my understanding, first_nota should be the same as first_abar. However, my simulator (ModelSim - Intel FPGA Starter Edition 10.5b, rev. 2016.10) thinks otherwise: the waveforms for first_nota and first_abar differ.
What am I missing here?
This works OK:
function get_first_one_in_vec (vec_in: std_logic_vector) return std_logic_vector is
  variable ret: std_logic_vector(vec_in'length downto 1);
  variable inp: std_logic_vector(vec_in'length downto 1) := vec_in;
begin
  ret := (others => '0');
  for i in inp'right to inp'left loop
    if inp(i)='1' then
      ret(i) := '1';
      exit;
    end if;
  end loop;
  return ret;
end get_first_one_in_vec;
https://www.edaplayground.com/x/3zP_
Why does yours not work? Well, when you call your function with the not operator* as part of the expression:
first_nota <= get_first_one_in_vec(not a);
the numbering of the input to the function is changed to 1 to a'length by the not operator. Why? Here is the code for the not operator, and you can see why:
-------------------------------------------------------------------
-- not
-------------------------------------------------------------------
FUNCTION "not" ( l : std_logic_vector ) RETURN std_logic_vector IS
-- pragma built_in SYN_NOT
-- pragma subpgm_id 204
--synopsys synthesis_off
ALIAS lv : std_logic_vector ( 1 TO l'LENGTH ) IS l;
VARIABLE result : std_logic_vector ( 1 TO l'LENGTH ) := (OTHERS => 'X');
--synopsys synthesis_on
BEGIN
--synopsys synthesis_off
FOR i IN result'RANGE LOOP
result(i) := not_table( lv(i) );
END LOOP;
RETURN result;
--synopsys synthesis_on
END;
---------------------------------------------------------------------
Anyway, this breaks your code (which starts scanning from the other end of the word).
One way of making a function agnostic to the numbering of its input is to normalise the input like this:
variable inp: std_logic_vector(vec_in'length downto 1) := vec_in;
Once you have done this, you're in control. So, instead of looping from 'high downto 'low, we can be more explicit and loop from 'right to 'left:
for i in inp'right to inp'left loop
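One way to see the renumbering for yourself (a sketch): add a report statement at the top of the original function body. Called with a it prints left 4 and right 0; called with not a it prints left 1 and right 5.
report "vec_in'left=" & integer'image(vec_in'left) & "  vec_in'right=" & integer'image(vec_in'right);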
* not is an operator, not a function; you don't need the brackets.

Vivado synthesis: complex assignment not supported

I implemented a modified Booth multiplier in VHDL. I need to synthesize it with Vivado, but it fails with this error:
"complex assignment not supported".
This is the shifter code that causes the error:
entity shift_register is
  generic (
    N : integer := 6;
    M : integer := 6
  );
  port (
    en_s         : in  std_logic;
    cod_result   : in  std_logic_vector (N+M-1 downto 0);
    position     : in  integer;
    shift_result : out std_logic_vector(N+M-1 downto 0)
  );
end shift_register;

architecture shift_arch of shift_register is
begin
  process(en_s)
    variable shift_aux : std_logic_vector(N+M-1 downto 0);
    variable i         : integer := 0;  -- just for convenience
  begin
    if(en_s'event and en_s ='1') then
      i := position;
      shift_aux := (others => '0');
      shift_aux(N+M-1 downto i) := cod_result(N+M-1-i downto 0); --ERROR!!
      shift_result <= shift_aux;
    end if;
  end process;
end shift_arch;
The Booth multiplier works with any operand width, so I cannot replace this generic code with a specific one.
Please help me! Thanks a lot.
There's a way to make your index addressing static for synthesis.
First, based on the slice assignment we can tell position must have a value within the range of shift_aux, otherwise you'd end up with null slices (IEEE Std 1076-2008 8.5 Slice names).
That can be shown in the entity declaration:
library ieee;
use ieee.std_logic_1164.all;

entity shift_register is
  generic (
    N : integer := 6;
    M : integer := 6
  );
  port (
    en_s         : in  std_logic;
    cod_result   : in  std_logic_vector (N + M - 1 downto 0);
    position     : in  integer range 0 to N + M - 1;  -- range ADDED
    shift_result : out std_logic_vector(N + M - 1 downto 0)
  );
end entity shift_register;
What's changed is the addition of a range constraint to the port declaration of position. The idea is to support simulation, where the default value of an integer is integer'left. Simulating your shift_register would fail on the rising edge of en_s if position (the actual driver) did not provide an initial value in the index range of shift_aux.
From a synthesis perspective an unbounded integer requires taking both positive and negative values into account; your indexing only uses non-negative values.
The same can be done in the declaration of the variable i in the process:
variable i: integer range 0 to N + M - 1 := 0; -- range ADDED
To address the immediate synthesis problem we recast the assignment as a for loop.
Xilinx support issue AR# 52302 tells us the issue is using dynamic values for indexes.
The solution is to let the loop index do the addressing:
architecture shift_loop of shift_register is
begin
  process (en_s)
    variable shift_aux : std_logic_vector(N + M - 1 downto 0);
    -- variable i: integer range 0 to N + M - 1 := 0;  -- range ADDED
  begin
    if en_s'event and en_s = '1' then
      -- i := position;
      shift_aux := (others => '0');
      for i in 0 to N + M - 1 loop
        -- shift_aux(N + M - 1 downto i) := cod_result(N + M - 1 - i downto 0);
        if i = position then
          shift_aux(N + M - 1 downto i) := cod_result(N + M - 1 - i downto 0);
        end if;
      end loop;
      shift_result <= shift_aux;
    end if;
  end process;
end architecture shift_loop;
Since i becomes a static value when the loop is unrolled in synthesis, it can be used in the calculation of indexes.
Note this gives us an N + M input multiplexer where each input is selected when i = position.
This construct can actually be collapsed into a barrel shifter by optimization, although for large values of N and M the number of variables involved might take prohibitive synthesis effort or simply fail.
When synthesis is successful, each output element in the assignment collapses into a separate multiplexer, matching Patrick's barrel shifter.
For sufficiently large values of N and M we can define the depth, in number of multiplexer layers in the barrel shifter, based on the number of bits in a binary expression of the integer range of the distance.
That either requires a declared integer type or subtype for position, or finding the log2 value of N + M. We can use the log2 value because it is only used statically (XST supports log2(x), where x is a real, for determining static values; the function is found in the IEEE package math_real). This gives us the binary length of position: how many bits are required to describe the shift distance, which is the number of levels of multiplexers.
architecture barrel_shifter of shift_register is
begin
  process (en_s)
    use ieee.math_real.all;    -- log2 [real return real]
    use ieee.numeric_std.all;  -- to_unsigned, unsigned
    constant DISTLEN : natural := integer(log2(real(N + M)));  -- binary length
    type muxv is array (0 to DISTLEN - 1) of unsigned (N + M - 1 downto 0);
    variable shft_aux : muxv;
    variable distance : unsigned (DISTLEN - 1 downto 0);
  begin
    if en_s'event and en_s = '1' then
      distance := to_unsigned(position, DISTLEN);  -- position in binary
      shft_aux := (others => (others => '0'));
      for i in 0 to DISTLEN - 1 loop
        if i = 0 then
          if distance(i) = '1' then
            shft_aux(i) := SHIFT_LEFT(unsigned(cod_result), 2 ** i);
          else
            shft_aux(i) := unsigned(cod_result);
          end if;
        else
          if distance(i) = '1' then
            shft_aux(i) := SHIFT_LEFT(shft_aux(i - 1), 2 ** i);
          else
            shft_aux(i) := shft_aux(i - 1);
          end if;
        end if;
      end loop;
      shift_result <= std_logic_vector(shft_aux(DISTLEN - 1));
    end if;
  end process;
end architecture barrel_shifter;
XST also supports ** if the left operand is 2 and the value of i is treated as a constant in the sequence of statements found in a loop statement.
This could be implemented with signals instead of variables or structurally in a generate statement instead of a loop statement inside a process, or even as a subprogram.
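As an aside, here is a sketch of the generate-statement variant just mentioned; it is not one of the two architectures discussed below, and the architecture name, the stage signal, and the output register arrangement are my own assumptions rather than anything from the original. The multiplexer chain is combinational here and only the final stage is registered.
architecture barrel_generate of shift_register is
  use ieee.math_real.all;    -- log2
  use ieee.numeric_std.all;  -- unsigned, to_unsigned, SHIFT_LEFT
  constant DISTLEN : natural := integer(log2(real(N + M)));
  -- stage(0) is the input, stage(i + 1) is the output of multiplexer layer i
  type muxv is array (0 to DISTLEN) of unsigned (N + M - 1 downto 0);
  signal stage    : muxv;
  signal distance : unsigned (DISTLEN - 1 downto 0);
begin
  distance <= to_unsigned(position, DISTLEN);
  stage(0) <= unsigned(cod_result);

  MUX_LAYERS:
    for i in 0 to DISTLEN - 1 generate
      stage(i + 1) <= SHIFT_LEFT(stage(i), 2 ** i) when distance(i) = '1' else stage(i);
    end generate;

  -- only the last stage is registered, on the rising edge of en_s as in the original
  process (en_s)
  begin
    if rising_edge(en_s) then
      shift_result <= std_logic_vector(stage(DISTLEN));
    end if;
  end process;
end architecture barrel_generate;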
The basic idea here with these two architectures derived from yours is to produce something synthesis eligible.
The advantage of the second architecture over the first is a reduction in the amount of synthesis effort during optimization for larger values of N + M.
Neither of these architectures has been verified, since the original lacks a testbench. They both analyze and elaborate.
Writing a simple case testbench:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shift_register_tb is
end entity;

architecture foo of shift_register_tb is
  constant N : integer := 6;
  constant M : integer := 6;
  signal clk  : std_logic := '0';
  signal din  : std_logic_vector (N + M - 1 downto 0) := (0 => '1', others => '0');
  signal dout : std_logic_vector (N + M - 1 downto 0);
  signal dist : integer := 0;
begin

  DUT:
    entity work.shift_register
      generic map (
        N => N,
        M => M
      )
      port map (
        en_s         => clk,
        cod_result   => din,
        position     => dist,
        shift_result => dout
      );

  CLOCK:
    process
    begin
      wait for 10 ns;
      clk <= not clk;
      if now > (N + M + 2) * 20 ns then
        wait;
      end if;
    end process;

  STIMULI:
    process
    begin
      for i in 1 to N + M loop
        wait for 20 ns;
        dist <= i;
        din <= std_logic_vector(SHIFT_LEFT(unsigned(din), 1));
      end loop;
      wait;
    end process;

end architecture;
And simulating reveals that the range of position and the number of loop iterations only need to cover the number of bits in the multiplier and not the multiplicand. We don't need a full barrel shifter.
That can easily be fixed in both shift_register architectures, and it has the side effect of making the shift_loop architecture much more attractive: it would be easier to synthesize based on the multiplier bit length (presumably M) rather than the product bit length (N + M).
And that would give you:
library ieee;
use ieee.std_logic_1164.all;

entity shift_register is
  generic (
    N : integer := 6;
    M : integer := 6
  );
  port (
    en_s         : in  std_logic;
    cod_result   : in  std_logic_vector (N + M - 1 downto 0);
    position     : in  integer range 0 to M - 1;  -- range ADDED
    shift_result : out std_logic_vector(N + M - 1 downto 0)
  );
end entity shift_register;

architecture shift_loop of shift_register is
begin
  process (en_s)
    variable shift_aux : std_logic_vector(N + M - 1 downto 0);
    -- variable i: integer range 0 to M - 1 := 0;  -- range ADDED
  begin
    if en_s'event and en_s = '1' then
      -- i := position;
      shift_aux := (others => '0');
      for i in 0 to M - 1 loop
        -- shift_aux(N + M - 1 downto i) := cod_result(N + M - 1 - i downto 0);
        if i = position then  -- This creates an N + M - 1 input MUX
          shift_aux(N + M - 1 downto i) := cod_result(N + M - 1 - i downto 0);
        end if;
      end loop;  -- The loop is unrolled in synthesis, i is CONSTANT
      shift_result <= shift_aux;
    end if;
  end process;
end architecture shift_loop;
Modifying the testbench:
STIMULI:
  process
  begin
    for i in 1 to M loop  -- WAS N + M loop
      wait for 20 ns;
      dist <= i;
      din <= std_logic_vector(SHIFT_LEFT(unsigned(din), 1));
    end loop;
    wait;
  end process;
gives a result showing the shifts are over the range of the multiplier value (specified by M).
So the moral here is you don't need a full barrel shifter, only one that works over the multiplier range and not the product range.
The last bit of code should be synthesis eligible.
You are trying to create a range using a run-time varying value, and this is not supported by the synthesis tool. cod_result(N+M-1 downto 0); would be supported, because N, M, and 1 are all known at synthesis time.
If you're trying to implement a multiplier, you will get the best result using x <= a * b, and letting the synthesis tool choose the best way to implement it. If you have operands wider than the multiplier widths in your device, then you need to look at the documentation to determine the best route, which will normally involve pipelining of some sort.
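For illustration, a minimal sketch of that approach (a registered multiply; the entity and signal names are mine, not from the question, and the operands are shown as unsigned, so use signed for a signed Booth-style multiply):
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity simple_mult is
  generic (
    N : integer := 6;
    M : integer := 6
  );
  port (
    clk : in  std_logic;
    a   : in  std_logic_vector(N - 1 downto 0);
    b   : in  std_logic_vector(M - 1 downto 0);
    x   : out std_logic_vector(N + M - 1 downto 0)
  );
end entity simple_mult;

architecture rtl of simple_mult is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- the synthesis tool chooses how to implement the multiplier
      x <= std_logic_vector(unsigned(a) * unsigned(b));
    end if;
  end process;
end architecture rtl;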
If you need a run-time variable shift, look for a 'Barrel Shifter'. There are existing answers on these, for example this one.

Is there a way to print the values of a signal to a file from a modelsim simulation?

I need to get the values of several signals to check them against the simulation (the simulation is in Matlab). There are many values, and I want to get them in a file so that I could run it in a script and avoid copying the values by hand.
Is there a way to automatically print the values of several signals into a text file?
(The design is implemented in VHDL)
First make functions that convert std_logic and std_logic_vector to
string like:
function to_bstring(sl : std_logic) return string is
  variable sl_str_v : string(1 to 3);  -- std_logic image with quotes around
begin
  sl_str_v := std_logic'image(sl);
  return "" & sl_str_v(2);  -- "" & character to get string
end function;

function to_bstring(slv : std_logic_vector) return string is
  alias slv_norm : std_logic_vector(1 to slv'length) is slv;
  variable sl_str_v : string(1 to 1);  -- String of std_logic
  variable res_v    : string(1 to slv'length);
begin
  for idx in slv_norm'range loop
    sl_str_v := to_bstring(slv_norm(idx));
    res_v(idx) := sl_str_v(1);
  end loop;
  return res_v;
end function;
Using the bit-wise format has the advantage that any non-01 values will show
with the exact std_logic value, which is not the case for e.g. hex
presentation.
Then make a process that writes the strings from std_logic and std_logic_vector to a file, for example at rising_edge(clk), like:
library std;
use std.textio.all;
...
process (clk) is
  variable line_v : line;
  file out_file   : text open write_mode is "out.txt";
begin
  if rising_edge(clk) then
    write(line_v, to_bstring(rst) & " " & to_bstring(cnt_1) & " " & to_bstring(cnt_3));
    writeline(out_file, line_v);
  end if;
end process;
The example above uses rst as std_logic, and cnt_1 and cnt_3 as
std_logic_vector(7 downto 0). The resulting output in "out.txt" is then:
1 00000000 00000000
1 00000000 00000000
1 00000000 00000000
0 00000000 00000000
0 00000001 00000011
0 00000010 00000110
0 00000011 00001001
0 00000100 00001100
0 00000101 00001111
0 00000110 00010010
I would like to present a flexible way to convert std_logic(_vector) to a string:
First you can define two functions to convert std_logic-bits and digits to a character:
FUNCTION to_char(value : STD_LOGIC) RETURN CHARACTER IS
BEGIN
  CASE value IS
    WHEN 'U' => RETURN 'U';
    WHEN 'X' => RETURN 'X';
    WHEN '0' => RETURN '0';
    WHEN '1' => RETURN '1';
    WHEN 'Z' => RETURN 'Z';
    WHEN 'W' => RETURN 'W';
    WHEN 'L' => RETURN 'L';
    WHEN 'H' => RETURN 'H';
    WHEN '-' => RETURN '-';
    WHEN OTHERS => RETURN 'X';
  END CASE;
END FUNCTION;

function to_char(value : natural) return character is
begin
  if (value < 10) then
    return character'val(character'pos('0') + value);
  elsif (value < 16) then
    return character'val(character'pos('A') + value - 10);
  else
    return 'X';
  end if;
end function;
And now it's possible to define two to_string functions which convert from boolean and std_logic_vector to string:
function to_string(value : boolean) return string is
begin
  return str_to_upper(boolean'image(value));  -- ite(value, "TRUE", "FALSE");
end function;
FUNCTION to_string(slv : STD_LOGIC_VECTOR; format : CHARACTER; length : NATURAL := 0; fill : CHARACTER := '0') RETURN STRING IS
  CONSTANT int     : INTEGER  := ite((slv'length <= 31), to_integer(unsigned(resize(slv, 31))), 0);
  CONSTANT str     : STRING   := INTEGER'image(int);
  CONSTANT bin_len : POSITIVE := slv'length;
  CONSTANT dec_len : POSITIVE := str'length;  -- log10ceilnz(int);
  CONSTANT hex_len : POSITIVE := ite(((bin_len MOD 4) = 0), (bin_len / 4), (bin_len / 4) + 1);
  CONSTANT len     : NATURAL  := ite((format = 'b'), bin_len,
                                 ite((format = 'd'), dec_len,
                                 ite((format = 'h'), hex_len, 0)));
  VARIABLE j       : NATURAL  := 0;
  VARIABLE Result  : STRING(1 TO ite((length = 0), len, imax(len, length))) := (OTHERS => fill);
BEGIN
  IF (format = 'b') THEN
    FOR i IN Result'reverse_range LOOP
      Result(i) := to_char(slv(j));
      j         := j + 1;
    END LOOP;
  ELSIF (format = 'd') THEN
    Result(Result'length - str'length + 1 TO Result'high) := str;
  ELSIF (format = 'h') THEN
    FOR i IN Result'reverse_range LOOP
      Result(i) := to_char(to_integer(unsigned(slv((j * 4) + 3 DOWNTO (j * 4)))));
      j         := j + 1;
    END LOOP;
  ELSE
    REPORT "unknown format" SEVERITY FAILURE;
  END IF;
  RETURN Result;
END FUNCTION;
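A hypothetical usage (the constant word and the values in the comments are only for illustration), for example in a testbench process:
process is
  constant word : std_logic_vector(15 downto 0) := x"0A3F";
begin
  report "hex:     " & to_string(word, 'h');          -- "0A3F"
  report "binary:  " & to_string(word, 'b');          -- "0000101000111111"
  report "decimal: " & to_string(word, 'd', 6, ' ');  -- "  2623"
  wait;
end process;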
This to_string function can convert std_logic_vectors to binary (format='b'), decimal (format='d') and hex (format='h') representations. Optionally you can define a minimum length for the string, if length is greater than 0, and a fill character if the required length of the std_logic_vector is shorter than length.
And here are the required helper functions:
-- calculate the minimum of two inputs
function imin(arg1 : integer; arg2 : integer) return integer is
begin
  if arg1 < arg2 then return arg1; end if;
  return arg2;
end function;

-- if-then-else for strings
FUNCTION ite(cond : BOOLEAN; value1 : STRING; value2 : STRING) RETURN STRING IS
BEGIN
  IF cond THEN
    RETURN value1;
  ELSE
    RETURN value2;
  END IF;
END FUNCTION;

-- a resize function for std_logic_vector
function resize(vec : std_logic_vector; length : natural; fill : std_logic := '0') return std_logic_vector is
  constant high2b : natural := vec'low + length - 1;
  constant highcp : natural := imin(vec'high, high2b);
  variable res_up : std_logic_vector(vec'low to high2b);
  variable res_dn : std_logic_vector(high2b downto vec'low);
begin
  if vec'ascending then
    res_up := (others => fill);
    res_up(vec'low to highcp) := vec(vec'low to highcp);
    return res_up;
  else
    res_dn := (others => fill);
    res_dn(highcp downto vec'low) := vec(highcp downto vec'low);
    return res_dn;
  end if;
end function;
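Two more helpers are referenced by to_string but not listed above: imax and an integer overload of ite. Plausible definitions (my assumption, mirroring the given helpers) would be:
-- calculate the maximum of two inputs (assumed counterpart to imin above)
function imax(arg1 : integer; arg2 : integer) return integer is
begin
  if arg1 > arg2 then return arg1; end if;
  return arg2;
end function;

-- if-then-else for integers (assumed counterpart to the string version above)
function ite(cond : boolean; value1 : integer; value2 : integer) return integer is
begin
  if cond then
    return value1;
  else
    return value2;
  end if;
end function;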
OK, this solution looks a bit long, but if you gather some of these functions -- and maybe overload them for several types -- you get an extended type-converting system in which you can convert nearly every type to every other type or representation.
Because there's more than one way to skin a cat:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
-- library std;
use std.textio.all;

entity changed_morten is
end entity;

architecture foo of changed_morten is

  signal clk   : std_logic := '0';
  signal rst   : std_logic := '1';
  signal cnt_1 : unsigned (7 downto 0);
  signal cnt_3 : unsigned (7 downto 0);

  function string_it (arg: unsigned) return string is
    variable ret : string (1 to arg'LENGTH);
    variable str : string (1 to 3);  -- enumerated type "'X'"
    alias varg   : unsigned (1 to arg'LENGTH) is arg;
  begin
    if arg'LENGTH = 0 then
      ret := "";
    else
      for i in varg'range loop
        str    := std_logic'IMAGE(varg(i));
        ret(i) := str(2);  -- the actual character
      end loop;
    end if;
    return ret;
  end function;

begin

  PRINT:
    process (clk) is
      variable line_v : line;
      variable str    : string (1 to 3);  -- size matches character enumeration
      file out_file   : text open write_mode is "out.txt";
    begin
      if rising_edge(clk) then
        str := std_logic'IMAGE(rst);
        write ( line_v,
                str(2) & " " &
                string_it(cnt_1) & " " &
                string_it(cnt_3) & " "
              );
        writeline(out_file, line_v);
      end if;
    end process;

  COUNTER1:
    process (clk, rst)
    begin
      if rst = '1' then
        cnt_1 <= (others => '0');
      elsif rising_edge(clk) then
        cnt_1 <= cnt_1 + 1;
      end if;
    end process;

  COUNTER3:
    process (clk, rst)
    begin
      if rst = '1' then
        cnt_3 <= (others => '0');
      elsif rising_edge(clk) then
        cnt_3 <= cnt_3 + 3;
      end if;
    end process;

  RESET:
    process
    begin
      wait until rising_edge(clk);
      wait until rising_edge(clk);
      wait until rising_edge(clk);
      rst <= '0';
      wait;
    end process;

  CLOCK:
    process
    begin
      wait for 10 ns;
      clk <= not clk;
      if Now > 210 ns then
        wait;
      end if;
    end process;

end architecture;
And mostly because Morten's expression
"" & std_logic'image(sl)(2); -- "" & character to get string
isn't accepted by ghdl: it's not an indexed name, because the string is unnamed.
The issue appears to be that the function call ('IMAGE) is not recognized as a prefix for the indexed name. Any ghdl users would want to use an intermediary named string target for the output of the attribute function call (shown in the string_it function and inline in the PRINT process). I submitted a bug report.
Addendum
Another way to express Morten's to_bstring(sl : std_logic) return string function is:
function to_bstring(sl : std_logic) return string is
  variable sl_str_v : string(1 to 3) := std_logic'image(sl);  -- character literal length 3
begin
  return "" & sl_str_v(2);  -- "" & character to get string
end function;
And the reason this works is that function calls are dynamically elaborated, meaning the string sl_str_v is created each time the function is called.
See IEEE Std 1076-1993 12.5 Dynamic elaboration, b.:
Execution of a subprogram call involves the elaboration of the
parameter interface list of the corresponding subprogram declaration;
this involves the elaboration of each interface declaration to create
the corresponding formal parameters. Actual parameters are then
associated with formal parameters. Finally, if the designator of the
subprogram is not decorated with the 'FOREIGN attribute defined in
package STANDARD, the declarative part of the corresponding subprogram
body is elaborated and the sequence of statements in the subprogram
body is executed.
The description of dynamic elaboration of a subprogram call has been expanded a bit in IEEE Std 1076-2008, 14.6.

VHDL How to convert 32 bit variable to 4 x 8bit std_logic_vector?

I have a question which is probably in 2 parts:
I am using a (nominally 32 bit) integer variable which I would like to write to an 8 bit UART as 4 bytes (i.e., as binary data)
i.e. variable Count : integer range 0 to 2147483647;
How should I chop the 32 bit integer variable into 4 separate 8 bit std_logic_vectors as expected by my UART code, and how should I pass these to the UART one byte at a time?
I am aware std_logic_vector(to_unsigned(Count, 32)) will convert the integer variable into a 32 bit std_logic_vector, but then what? Should I create a 32 bit std_logic_vector, assign the converted Count value to it, then subdivide it using something like the following code? I realise the following assumes the count variable does not change during the 4 clock cycles, assumes the UART can accept a new byte every clock cycle, and lacks any means of re-triggering the 4 byte transmit cycle, but am I on the right track here, or is there a better way?
variable CountOut : std_logic_vector(31 downto 0);

process (clock)
  variable Index : integer range 0 to 4 := 0;
begin
  if rising_edge(clock) then
    CountOut <= std_logic_vector(to_unsigned(Count, 32);
    if (Index = 0) then
      UartData(7 downto 0) <= CountOut(31 downto 24);
      Index := 1;
    elsif (Index = 1) then
      UartData(7 downto 0) <= CountOut(23 downto 16);
      Index := 2;
    elsif (Index = 2) then
      UartData(7 downto 0) <= CountOut(15 downto 8);
      Index := 3;
    elsif (Index =31) then
      UartData(7 downto 0) <= CountOut(7 downto 0);
      Index := 4;
    else
      Index := Index;
    end if;
  end if;
end process;
Any comments or recommendations would be appreciated.
Thanks,
MAI-AU.
You seem to be on the right track. I believe there are two basic solutions to this problem:
Register the output value as a 32-bit vector, and use different ranges for each output operation (as you did in your code example)
Register the output value as a 32-bit vector, and shift this value 8 bits at a time after each output operation. This way you can use the same range in all operations. The code below should give you an idea:
process (clock)
  -- Index runs 0 (load) then 1..4 (one byte per clock), so it needs range 0 to 5
  variable Index : integer range 0 to 5 := 0;
begin
  if rising_edge(clock) then
    if (Index = 0) then
      CountOut <= std_logic_vector(to_unsigned(Count, 32));
      Index := Index + 1;
    elsif (Index < 5) then                   -- four iterations: Index = 1, 2, 3, 4
      UartData <= CountOut(31 downto 24);    -- always take the top byte...
      CountOut <= CountOut sll 8;            -- ...then shift the next byte up
      -- note: sll on std_logic_vector is VHDL-2008; otherwise use numeric_std shift_left
      Index := Index + 1;
    end if;
  end if;
end process;
Also, please check your assignments: in your example, CountOut is declared as a variable but is assigned to as if it were a signal.
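If CountOut is meant to stay a variable, the load would use a variable assignment, i.e. something like:
CountOut := std_logic_vector(to_unsigned(Count, 32));  -- ':=' for a variable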
There's nothing wrong with the code you've shown. You can do something to separate the assignment to UartData using Index, to allow a loop.
library ieee;
use ieee.std_logic_1164.all;

entity union is
end entity;

architecture foo of union is
  type union32 is array (integer range 1 to 4) of std_logic_vector(7 downto 0);
  signal UartData : std_logic_vector(7 downto 0);
begin
  TEST:
    process
      variable quad : union32;
      constant fourbytes : std_logic_vector(31 downto 0) := X"deadbeef";
    begin
      quad := union32'(fourbytes(31 downto 24), fourbytes(23 downto 16),
                       fourbytes(15 downto 8),  fourbytes(7 downto 0));
      for i in union32'RANGE loop
        wait for 9.6 us;
        UartData <= Quad(i);
      end loop;
      wait for 9.6 us;  -- to display the last byte
      wait;             -- one ping only
    end process;
end architecture;
Or use a type conversion function to hide complexity:
library ieee;
use ieee.std_logic_1164.all;

entity union is
  type union32 is array (integer range 1 to 4) of std_logic_vector(7 downto 0);
end entity;

architecture fee of union is

  signal UartData : std_logic_vector(7 downto 0);

  function toquad (inp : std_logic_vector(31 downto 0)) return union32 is
  begin
    return union32'(inp(31 downto 24), inp(23 downto 16),
                    inp(15 downto 8),  inp(7 downto 0));
  end function;

begin
  TEST:
    process
      variable quad : union32;
      constant fourbytes : std_logic_vector(31 downto 0) := X"deadbeef";
    begin
      quad := toquad(fourbytes);
      for i in union32'RANGE loop
        wait for 9.6 us;
        UartData <= Quad(i);
      end loop;
      wait for 9.6 us;  -- to display the last byte
      wait;             -- one ping only
    end process;
end architecture;
And gives the same answer.
