Converting a std_logic_vector to integer within Process to test values? - vhdl

What I'm trying to do is pretty simple: just generating a pulse from a basic counter. My code is shown below. My question: is there an efficient way of comparing a std_logic_vector and an integer? I only need to compare them at that one point in the process. Also, can you do arithmetic on a 4-bit signal as shown in my code? Do you need a specific library?
signal Top16: std_logic; -- 1 clk spike at 16x baud rate
signal Div16: std_logic_vector(3 downto 0);
DIVISOR: natural := 120 -- Can be 120 or 60, depending on user preference.
------------------------------------------------------------------------
process (RST, LCLK_MULT_BUFG)
begin
if RST='1' then
Top16 <= '0'; --1 bit signal
Div16 <= x"0"; -- 4 bit signal
elsif rising_edge(LCLK_MULT_BUFG) then
Top16 <= '0';
if Div16 = Divisor then -----> signal to integer comparison?
Div16 <= 0;
Top16 <= '1';
else
Div16 <= Div16 + 1; -----arithmetic on std_logic_vector??
end if;
end if;
EDIT:
The number of bits within the Div16 std_logic_vector will vary depending on the size of Divisor chosen (shown below). How to correctly format this? What libraries will be needed?
DIVISOR: natural := 120 -- Can be 120 or 60, depending on user preference.
constant COUNTER_BITS : natural := integer(ceil(log2(real(DIVISOR))));
signal Div16: std_logic_vector(COUNTER_BITS);

If at all possible, avoid the non-standard std_logic_unsigned library. It would be better to use numeric_std and declare Div16 as unsigned.
signal Div16: unsigned(3 downto 0);
Then your comparison and arithmetic should simply work. And of course it's synthesisable.
Your bonus question should also be synthesisable, though DIVISOR ought to be a constant so that it can be evaluated at compile time (ceil and log2 come from ieee.math_real), and I think you meant
signal Div16: unsigned(COUNTER_BITS - 1 downto 0);
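Putting this answer together, here is a minimal self-contained sketch of the counter using numeric_std. The entity name and port list beyond the signals in the question are assumed, and the counter wraps at DIVISOR - 1 so that the pulse repeats every DIVISOR clocks:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity baud_tick is
  port ( RST, LCLK_MULT_BUFG : in  std_logic;
         Top16               : out std_logic );
end entity;

architecture rtl of baud_tick is
  constant DIVISOR : natural := 120;
  signal Div16 : unsigned(6 downto 0);  -- wide enough for 0 .. DIVISOR-1
begin
  process (RST, LCLK_MULT_BUFG)
  begin
    if RST = '1' then
      Top16 <= '0';
      Div16 <= (others => '0');
    elsif rising_edge(LCLK_MULT_BUFG) then
      Top16 <= '0';
      if Div16 = DIVISOR - 1 then  -- "="(unsigned, natural) from numeric_std
        Div16 <= (others => '0');
        Top16 <= '1';
      else
        Div16 <= Div16 + 1;        -- "+"(unsigned, natural) from numeric_std
      end if;
    end if;
  end process;
end architecture;
```

The comparison and increment both resolve to numeric_std overloads with a natural on the right, so no explicit conversion is needed anywhere in the process.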

For the arithmetic you can use std_logic_unsigned. This library contains the following functions:
function "+"(L: STD_LOGIC_VECTOR; R: INTEGER) return STD_LOGIC_VECTOR;
function "+"(L: INTEGER; R: STD_LOGIC_VECTOR) return STD_LOGIC_VECTOR;
For the comparison you can just leave it like this if you are using std_logic_unsigned. This library contains the following functions:
function "="(L: STD_LOGIC_VECTOR; R: INTEGER) return BOOLEAN;
function "="(L: INTEGER; R: STD_LOGIC_VECTOR) return BOOLEAN;
You could also define Div16 as unsigned and then use numeric_std. This library contains the following functions for comparison:
function "=" ( L: NATURAL; R: UNSIGNED) return BOOLEAN;
function "=" ( L: UNSIGNED; R: NATURAL) return BOOLEAN;
And for the addition:
function "+" ( L: UNSIGNED; R: NATURAL) return UNSIGNED;
function "+" ( L: NATURAL; R: UNSIGNED) return UNSIGNED;
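For comparison, here is a small self-contained sketch of the same counter kept as a std_logic_vector with the non-standard std_logic_unsigned overloads above; the entity and port names are invented for the example:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;  -- non-standard Synopsys package

entity count16 is
  port ( clk, rst : in  std_logic;
         tick     : out std_logic );
end entity;

architecture rtl of count16 is
  signal Div16 : std_logic_vector(3 downto 0);
begin
  process (rst, clk)
  begin
    if rst = '1' then
      Div16 <= (others => '0');
      tick  <= '0';
    elsif rising_edge(clk) then
      tick <= '0';
      if Div16 = 15 then            -- "="(std_logic_vector, integer)
        Div16 <= (others => '0');
        tick  <= '1';
      else
        Div16 <= Div16 + 1;         -- "+"(std_logic_vector, integer)
      end if;
    end if;
  end process;
end architecture;
```

This compiles as written, but as noted above, numeric_std with an unsigned signal is the portable choice.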


Xilinx ISE: found '0' definitions of operator "+", cannot determine exact overloaded matching definition for "+"

I am writing a Bin-to-BCD multiplier, and in the top module Xilinx ISE gives this error:
Line 30: found '0' definitions of operator "+", cannot determine exact
overloaded matching definition for "+"
even though I have mapped the ports in the top module:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
-- Uncomment the following library declaration if using
-- arithmetic functions with Signed or Unsigned values
--use IEEE.NUMERIC_STD.ALL;
-- Uncomment the following library declaration if instantiating
-- any Xilinx primitives in this code.
-- library UNISIM;
-- use UNISIM.VComponents.all;
entity EightDisplayControl is
Port ( clk : in STD_LOGIC;
leftL, near_leftL : in STD_LOGIC_VECTOR (3 downto 0);
near_rightL, rightL : in STD_LOGIC_VECTOR (3 downto 0);
leftR, near_leftR : in STD_LOGIC_VECTOR (3 downto 0);
near_rightR, rightR : in STD_LOGIC_VECTOR (3 downto 0);
select_display : out STD_LOGIC_VECTOR (7 downto 0);
segments : out STD_LOGIC_VECTOR (6 downto 0));
end EightDisplayControl;
architecture Behavioral of EightDisplayControl is
signal Display : std_logic_vector(2 downto 0);
signal div : std_logic_vector(16 downto 0);
signal convert_me : std_logic_vector(3 downto 0);
begin
div<= div+1 when rising_edge(clk);
Display <= div(16 downto 14);
process(Display, leftL, near_leftL, near_rightL, rightL, leftR, near_leftR, near_rightR, rightR)
begin
if Display ="111" then select_display <= "11111110"; convert_me <= leftL;
elsif Display ="110" then select_display <= "11111101"; convert_me <= near_leftL;
elsif Display ="101" then select_display <= "11111011"; convert_me <= near_rightL;
elsif Display ="100" then select_display <= "11110111"; convert_me <= rightL;
elsif Display ="011" then select_display <= "11101111"; convert_me <= leftR;
elsif Display ="010" then select_display <= "11011111"; convert_me <= near_leftR;
elsif Display ="001" then select_display <= "10111111"; convert_me <= near_rightR;
else select_display <= "01111111"; convert_me <= rightR;
end if;
end process;
decoder : entity work.segment_decoder
port map (convert_me, segments);
end Behavioral;
As has already been stated in the comments, the problem is that you have declared the signal div as a std_logic_vector. The ieee.numeric_std package does not define an addition operator for std_logic_vector.
Looking in the library we see:
--============================================================================
-- Id: A.3
function "+" (L, R: UNSIGNED) return UNSIGNED;
-- Result subtype: UNSIGNED(MAX(L'LENGTH, R'LENGTH)-1 downto 0).
-- Result: Adds two UNSIGNED vectors that may be of different lengths.
-- Id: A.4
function "+" (L, R: SIGNED) return SIGNED;
-- Result subtype: SIGNED(MAX(L'LENGTH, R'LENGTH)-1 downto 0).
-- Result: Adds two SIGNED vectors that may be of different lengths.
-- Id: A.5
function "+" (L: UNSIGNED; R: NATURAL) return UNSIGNED;
-- Result subtype: UNSIGNED(L'LENGTH-1 downto 0).
-- Result: Adds an UNSIGNED vector, L, with a non-negative INTEGER, R.
-- Id: A.6
function "+" (L: NATURAL; R: UNSIGNED) return UNSIGNED;
-- Result subtype: UNSIGNED(R'LENGTH-1 downto 0).
-- Result: Adds a non-negative INTEGER, L, with an UNSIGNED vector, R.
-- Id: A.7
function "+" (L: INTEGER; R: SIGNED) return SIGNED;
-- Result subtype: SIGNED(R'LENGTH-1 downto 0).
-- Result: Adds an INTEGER, L(may be positive or negative), to a SIGNED
-- vector, R.
-- Id: A.8
function "+" (L: SIGNED; R: INTEGER) return SIGNED;
-- Result subtype: SIGNED(L'LENGTH-1 downto 0).
-- Result: Adds a SIGNED vector, L, to an INTEGER, R.
--============================================================================
This clearly shows that addition is defined only for combinations of unsigned, signed, natural, and integer operands; there is no overload for std_logic_vector.
As @Tricky has stated in the comments, you need to declare div as an unsigned.
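Concretely, with numeric_std the fix is to declare div as unsigned and cast only where a std_logic_vector is needed. A sketch of just the affected lines, assuming the rest of the architecture is unchanged:

```vhdl
use ieee.numeric_std.all;            -- already in the design's context clause

signal div : unsigned(16 downto 0);  -- was std_logic_vector(16 downto 0)

div     <= div + 1 when rising_edge(clk);        -- "+"(UNSIGNED, NATURAL), Id: A.5
Display <= std_logic_vector(div(16 downto 14));  -- cast the slice back for the
                                                 -- std_logic_vector signal
```

Slicing an unsigned yields an unsigned, so only the assignment to the std_logic_vector signal needs the explicit conversion.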

VHDL - GHDL Initialise std_logic_vector with smaller bit length

I have a signal dataIn : std_logic_vector ( 15 downto 0);
I want to give an input of fewer than 16 bits, for example dataIn <= x"000a", and have those bits occupy the most significant bits with the rest zero.
In Verilog you can do that very easily, but in VHDL you get the error:
"string length does not match that of the anonymous integer subtype defined t... ".
I know that if you use 16x"bit_string" solves the problem but this is only for VHDL-2008 and ghdl doesn't support yet VHDL-2008.
Is there any method for IEEE Std 1076-2002?
For VHDL-87/93/2002 you could use the resize function from the numeric_std package.
library ieee;
use ieee.numeric_std.all;
...
constant FOO : std_logic_vector(2 downto 0) := "010";
signal dataIn : std_logic_vector(15 downto 0) := std_logic_vector(resize(unsigned(FOO), 16));
Note that the resize function is only defined for types signed and unsigned.
If you want the short bit string to be placed into the MSBs, note that resize keeps the value in the LSBs; you would need to shift the result left, or concatenate zeros on the right as the function below does.
Often you will find it easier to define a dedicated function which encapsulates more complicated initializations.
constant FOO : std_logic_vector(2 downto 0) := "010";
function init_dataIn (bar : std_logic_vector; len : integer) return std_logic_vector is
begin
return bar & (len - bar'length - 1 downto 0 => '0');
end function init_dataIn;
signal dataIn : std_logic_vector(15 downto 0) := init_dataIn(FOO, 16);

Direction independent slicing

I'm creating a package with some functions I use often, and some of the functions need to take slices of their parameters. I usually use the downto direction for all my signals, but sometimes signals change their direction unexpectedly; e.g., appending a zero bit (sig & '0') seems to change the direction to ascending (to).
Is there a way to slice arrays (std_logic_vector, unsigned, signed) independent of their direction? For example how would you implement a function taking the lowest two bits? The only implementation I came up with uses an additional constant with the expected direction:
function take_two(x : std_logic_vector) return std_logic_vector is
constant cx : std_logic_vector(x'length-1 downto 0) := x;
begin
return cx(1 downto 0);
end function;
I've also tried something like x(x'low+1 downto x'low) but Quartus doesn't like this.
The question is actually not about the input, but about the required output: what do you prefer?
If you look at how functions are implemented in for instance std_logic_1164-body.vhdl, your function would similarly be something like (in a complete example):
entity e is end entity;
library ieee;
architecture a of e is
use ieee.std_logic_1164.all;
signal test : std_logic_vector(7 downto 0) := "10010110";
signal output : std_logic_vector(2 downto 0);
function slice(s: STD_LOGIC_VECTOR; u, l : natural) return STD_LOGIC_VECTOR is
alias sv : STD_LOGIC_VECTOR (s'length-1 downto 0) is s;
variable result : STD_LOGIC_VECTOR (u downto l);
begin
for i in result'range loop
result(i) := sv(i);
end loop;
return result;
end function;
begin
output <= slice(test & '0', 5, 3); -- test becomes 'to' range.
-- output still becomes "101"
end architecture;

VHDL - Division using restoring algorithm - the code is not working

I wrote simple code that divides (Numerator / Denominator) using the restoring algorithm. The syntax is fine, but in simulation I only get "11111111" in the quotient!
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;
entity DIV_8bit is
Port (
Numerator : in STD_LOGIC_vector(7 downto 0);
Denominator : in STD_LOGIC_vector(7 downto 0);
Quotient : out STD_LOGIC_vector(7 downto 0);
Remainder : out STD_LOGIC_vector(7 downto 0)
);
end DIV_8bit;
architecture Behavioral of DIV_8bit is
begin
process(Numerator , Denominator)
variable Num: std_logic_vector(7 downto 0) := Numerator;
variable p :std_logic_vector (7 downto 0) := (others=>'0');
variable Den: std_logic_vector (7 downto 0) := Denominator;
variable i : integer :=0;
begin
Division_loop: for i in 0 to 7 loop
p(7 downto 1) := p(6 downto 0);
p(0) := Num(7);
Num(7 downto 1) := Num(6 downto 0) ;
p := p - Den;
if (p < 0) then
P := P + Denominator;
Num(0) := '0';
else
Num(0) := '1';
end if ;
end loop;
Quotient <= Num;
Remainder <= p;
end process;
end Behavioral;
You should not use the package ieee.std_logic_unsigned. Instead, use the package ieee.numeric_std. This package defines the data types unsigned and signed as array types closely related to std_logic_vector. These types tell the compiler how to interpret the bit sequence as a number. The package also allows mixing unsigned and signed numbers in one module, which is not possible with ieee.std_logic_unsigned.
Among others, the package defines appropriate operators to compare unsigned / signed values with integers.
To convert to and from std_logic_vector, for example if your inputs and outputs are of this type, just use the following type conversions (shown here as functions, but actually they are not):
function std_logic_vector(x : unsigned) return std_logic_vector;
function std_logic_vector(x : signed) return std_logic_vector;
function signed (x : std_logic_vector) return signed;
function unsigned (x : std_logic_vector) return unsigned;
You can also convert from and to integers:
function to_integer (x : unsigned) return integer;
function to_integer (x : signed) return integer;
function to_unsigned(x: integer; size : natural) return unsigned;
function to_signed (x: integer; size : natural) return signed;
There is a lot more useful functionality in the package ieee.numeric_std, such as arithmetic operators with integers, sign extension, and much more.
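A short illustrative fragment of those conversions (hypothetical signal names; the three assignments are independent examples, not a working design):

```vhdl
signal slv : std_logic_vector(7 downto 0);
signal u   : unsigned(7 downto 0);
signal n   : integer range 0 to 255;

u   <= unsigned(slv);                                 -- type conversion, no logic
n   <= to_integer(u);                                 -- unsigned -> integer
slv <= std_logic_vector(to_unsigned(n, slv'length));  -- integer -> unsigned -> slv
```

Note that unsigned(slv) and std_logic_vector(u) are type conversions between closely related array types (free in hardware), while to_integer / to_unsigned are functions that change the representation.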

Conversion from numeric_std unsigned to std_logic_vector in vhdl

I have a question related to conversion from numeric_std unsigned to std_logic_vector. I am using moving-average filter code that I found online, filtering my ADC values to stabilise them.
The filter package code is:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
package filterpack is
subtype number is unsigned(27 downto 0);
type numbers is array(natural range <>) of number;
function slv_to_num(signal slv: in std_logic_vector) return number;
procedure MAF_filter(
signal x: in number;
signal h: inout numbers;
signal y: out number
);
end filterpack;
package body filterpack is
function slv_to_num(signal slv: in std_logic_vector) return number is
variable x: number := (others => '0');
begin
for i in slv'range loop
if slv(i) = '1' then
x(i+4) := '1';
end if;
end loop;
return x;
end function slv_to_num;
procedure MAF_filter(
signal x: in number;
signal h: inout numbers;
signal y: out number
) is
begin
h(0) <= x + h(1); -- h[n] = x[n] + h[n-1]
y <= h(0) - h(h'high); -- y[n] = h[n] - h[n-M]
end MAF_filter;
end package body filterpack;
In my top level file, I call the MAF_filter procedure.
Asign_x: x <= slv_to_num(adc_dat);
Filter: MAF_filter(x,h,y);
The adc_dat is defined as:
adc_dat : out std_logic_vector (23 downto 0);
I want to convert the output of the MAF_filter to std_logic_vector (23 downto 0). Can anyone tell me how I can convert the filter output 'y' to std_logic_vector?
Many Thanks!
What do you want to do with the 4 extra bits? Your type number has 28 bits, but your signal adc_dat has only 24.
If it's ok to discard them, you could use:
adc_dat <= std_logic_vector(y(adc_dat'range));
Also, is there a reason not to write your function slv_to_num as shown below?
function slv_to_num(signal slv: in std_logic_vector) return number is
begin
return number(slv & "0000");
end function slv_to_num;
The conversion has to solve two problems: the type difference you noted, and the fact that the two words are different sizes.
The type difference is easy : std_logic_vector (y) will give you the correct type. Because the two types are related types, this is just a cast.
The size difference ... only you have the knowledge to do that.
adc_dat <= std_logic_vector(y(23 downto 0)) will give you the LSBs of Y - i.e. the value of Y itself, but can overflow. Or as Rick says, adc_dat <= std_logic_vector(y(adc_dat'range)); which is usually better, but I wanted to expose the details.
adc_dat <= std_logic_vector(y(27 downto 4)) cannot overflow, but actually gives you y/16.
