I'm looking to implement the functions y = a and b; y = (a or b) and (c or d).
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
entity task1_tb is
--  Port ( );
end task1_tb;
architecture Behavioral of task1_tb is
-- declaring the component
component task1
Port ( a : in STD_LOGIC;
b : in STD_LOGIC;
y : out STD_LOGIC);
end component;
signal y,a,b: std_logic;
signal counter: unsigned(1 downto 0):="00";
begin
uut: task1 port map(a => a, b => b, y => y );
end Behavioral;
How can I assign a (bit 1) and b (bit 2) so it will test every possible value, with a 20 ns delay between each combination? I've been trying to learn VHDL these past two days for a school project and I'm not even sure if what I have is right.
You're looking to use a wait for <duration> in your stimulus process.
process
begin
for i in 0 to 2**2-1 loop --2**(number of input bits)-1
(a, b) <= to_unsigned(i,2);
wait for 20 ns;
end loop;
wait;
end process;
Credit to user1155120 for refinements.
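Putting it together, here is a minimal sketch of the complete testbench architecture from the question with that stimulus process dropped in (the unused counter signal is removed and the process label stim is my own choice; numeric_std is already in the question's context clause, which to_unsigned needs):
architecture Behavioral of task1_tb is
-- declaring the component
component task1
Port ( a : in STD_LOGIC;
b : in STD_LOGIC;
y : out STD_LOGIC);
end component;
signal y, a, b : std_logic;
begin
uut: task1 port map(a => a, b => b, y => y);
stim: process
begin
for i in 0 to 2**2-1 loop -- 2**(number of input bits)-1, i.e. every a/b combination
(a, b) <= to_unsigned(i, 2); -- aggregate target: a gets the MSB, b the LSB
wait for 20 ns;
end loop;
wait; -- suspend forever once all combinations have been applied
end process;
end Behavioral;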
I'm trying to write code in VHDL to create a 16-to-1 mux using 2-to-1 muxes.
I thought that to do this we would need 15 two-to-one multiplexers, and by wiring them together in a structural model I wrote the code below.
First I wrote a 2-to-1 mux:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity MUX_2_1 is
port (
w0 , w1 : IN STD_LOGIC;
SELECT_I: IN std_logic;
DATA_O: out std_logic
);
end MUX_2_1;
architecture MUX_2_1_arch of MUX_2_1 is
--
begin
--
WITH SELECT_I SELECT
DATA_O <= w0 WHEN '0',
w1 WHEN '1',
'X' when others;
--
end MUX_2_1_arch;
and made a package for it, just to make it simple and easy to use:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
PACKAGE mux2to1_package IS
COMPONENT mux2to1
PORT (w0, w1: IN STD_LOGIC ;
SELECT_I: IN std_logic;
DATA_O: out std_logic ) ;
END COMPONENT ;
END mux2to1_package ;
and then my 16 to 1 mux looks like this:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
USE work.mux2to1_package.all ;
ENTITY mux16to1 IS
PORT (w : IN STD_LOGIC_VECTOR(15 DOWNTO 0) ;
s : IN STD_LOGIC_VECTOR(3 DOWNTO 0) ;
f : OUT STD_LOGIC ) ;
END mux16to1 ;
ARCHITECTURE Structure OF mux16to1 IS
SIGNAL im : STD_LOGIC_VECTOR(7 DOWNTO 0) ;
SIGNAL q : STD_LOGIC_VECTOR(3 DOWNTO 0);
SIGNAL p : STD_LOGIC_VECTOR(1 DOWNTO 0);
BEGIN
Mux1: mux2to1 PORT MAP ( w(0), w(1), s(0), im(0)) ;
Mux2: mux2to1 PORT MAP ( w(2), w(3), s(0), im(1)) ;
Mux3: mux2to1 PORT MAP ( w(4), w(5), s(0), im(2)) ;
Mux4: mux2to1 PORT MAP ( w(6), w(7), s(0), im(3)) ;
Mux5: mux2to1 PORT MAP ( w(8), w(9), s(0), im(4)) ;
MUX6: mux2to1 PORT MAP ( w(10), w(11), s(0), im(5));
Mux7: mux2to1 PORT MAP ( w(12), w(13), s(0), im(6)) ;
Mux8: mux2to1 PORT MAP ( w(14), w(15), s(0), im(7)) ;
Mux9: mux2to1 PORT MAP ( im(0), im(1), s(1), q(0)) ;
Mux10: mux2to1 PORT MAP ( im(2), im(3), s(1), q(1)) ;
Mux11: mux2to1 PORT MAP ( im(4), im(5), s(1), q(2)) ;
Mux12: mux2to1 PORT MAP ( im(6), im(7), s(1), q(3)) ;
Mux13: mux2to1 PORT MAP ( q(0), q(1), s(2), p(0)) ;
Mux14: mux2to1 PORT MAP ( q(2), q(3), s(2), p(1)) ;
Mux15: mux2to1 PORT MAP ( p(0), p(1), s(3), f) ;
END Structure ;
and also my testbench is:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
USE work.mux2to1_package.all ;
ENTITY Mux_test IS
END Mux_test;
ARCHITECTURE test OF Mux_test IS
COMPONENT mux16to1 PORT(w : IN STD_LOGIC_VECTOR(15 DOWNTO 0) ;
s : IN STD_LOGIC_VECTOR(3 DOWNTO 0) ;
f : OUT STD_LOGIC ) ;
END COMPONENT;
SIGNAL wi : STD_LOGIC_VECTOR(15 DOWNTO 0) ;
SIGNAL selecting : STD_LOGIC_VECTOR(3 DOWNTO 0) ;
SIGNAL fi : STD_LOGIC ;
BEGIN
a1: mux16to1 PORT MAP(wi , selecting , fi);
wi<= "0101110010001010" , "1001000101010101" after 100 ns;
selecting <= "0011" , "1010" after 20 ns , "1110" after 40 ns, "1100" after 60 ns , "0101" after 80 ns,
"0011" after 100 ns , "1010" after 120 ns , "1110" after 140 ns, "1100" after 160 ns , "0101" after 180 ns;
END ARCHITECTURE;
my simulation:
But when I try to simulate this, nothing shows on my output. I'm thinking that maybe that's because I wrote my code in the concurrent part and the signals im, q and p are not initialized yet, so I tried using default values of "00000000" for im, "0000" for q and "00" for p when declaring the signals, but then I got a bunch of errors saying "Instance mux2to1 is unbound" in simulation and nothing actually changed.
Any idea what the problem is?
Also, I think there is something wrong with my select inputs logically, but I don't understand how I should use the selects correctly for this problem.
I would appreciate it if anyone could help me.
Binding of components declared with component declarations can either be explicit, using a configuration specification to supply a binding indication, or it can rely on a default binding indication.
A default binding indication relies on finding an entity declared in a referenced library whose name matches the component name. That's not the case here: your entity is named MUX_2_1 (case insensitive) while the component name is mux2to1.
It's not illegal to have unbound components in VHDL; it's the equivalent of not loading a part at a particular location on a printed circuit board or breadboard. It simply produces no output, which shows up in simulation here as a 'U'.
Here the solutions could be to change the name of the entity (in both the entity declaration and its architecture) from MUX_2_1 to mux2to1, to change the component declaration to MUX_2_1, or to provide a configuration specification supplying an explicit binding indication as a block declarative item in the architecture of mux16to1, of the form
ARCHITECTURE Structure OF mux16to1 IS
SIGNAL im : STD_LOGIC_VECTOR(7 DOWNTO 0) ;
SIGNAL q : STD_LOGIC_VECTOR(3 DOWNTO 0);
SIGNAL p : STD_LOGIC_VECTOR(1 DOWNTO 0);
for all: mux2to1 use entity work.MUX_2_1; -- ADDED
When used, the latter method provides '1' and '0' outputs on the testbench signal fi during simulation.
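The configuration specification can also name the architecture explicitly if you want to pin the binding down completely; a one-line sketch using the entity and architecture names from the question:
for all: mux2to1 use entity work.MUX_2_1(MUX_2_1_arch); -- bind to a specific architecture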
The testbench can be made more elaborate to demonstrate that the selects are valid. One way would be with marching '0's or '1's in w elements while scanning all the elements and looking for a mismatch:
library ieee;
use ieee.std_logic_1164.all;
entity mux16to1_tb is
end mux16to1_tb;
architecture test of mux16to1_tb is
component mux16to1 is
port (
w: in std_logic_vector(15 downto 0);
s: in std_logic_vector(3 downto 0);
f: out std_logic
);
end component;
signal w: std_logic_vector(15 downto 0);
signal s: std_logic_vector(3 downto 0);
signal f: std_logic;
function to_string (inp: std_logic_vector) return string is
variable image_str: string (1 to inp'length);
alias input_str: std_logic_vector (1 to inp'length) is inp;
begin
for i in input_str'range loop
image_str(i) := character'VALUE(std_ulogic'IMAGE(input_str(i)));
end loop;
return image_str;
end function;
begin
DUT:
mux16to1
port map (
w => w,
s => s,
f => f
);
STIMULI:
process
use ieee.numeric_std.all;
begin
for i in w'reverse_range loop
w <= (others => '1');
w(i) <= '0';
for j in w'reverse_range loop
s <= std_logic_vector(to_unsigned(j, s'length));
wait for 10 ns;
end loop;
end loop;
wait;
end process;
VALIDATE:
process
begin
for x in w'reverse_range loop
for y in w'reverse_range loop
wait for 10 ns;
assert f = w(y)
report
LF & HT & "f = " & std_ulogic'image(f) & " " &
"expected " & std_ulogic'image(w(y)) &
LF & HT & "w = " & to_string(w) &
LF & HT & "s = " & to_string(s)
severity ERROR;
end loop;
end loop;
wait;
end process;
end architecture;
The output f of mux16to1 is checked for every select value and every marching-'0' pattern in w. Any mismatch between f and the selected element of w is reported with diagnostic information.
Here we see that mux16to1 implements a 16:1 selection properly, without the need to modify the original poster's design.
Without error injection, the testbench waveforms for w, s and f can be viewed in a waveform display to validate correct operation.
I am getting 'U' in the waveform instead of the proper output. I don't understand why it is happening this way. Can anyone please correct my mistake? Here is the code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity circuit1 is port (
A, B: in std_logic;
F1 : out std_logic);
end circuit1;
architecture structural of circuit1 is
signal A_B, B_A: std_logic;--internal signal declarations for A_B and B_A
component and_1 is port (--Component declaration for and_1
i1, i2: in std_logic;
o1: out std_logic);
end component;
component nor_1 is port (--Component declaration for nor_1
i1, i2: in std_logic;
o1: out std_logic);
end component;
begin
--Component placement and connections (formally called component instantiations)
C1: and_1 port map (i1 => A, i2 => B, o1 => A_B);
C2: and_1 port map (i1 => B, i2 => A, o1 => B_A);
C3: nor_1 port map (i1 => A_B, i2 => B_A, o1 => F1);
end structural;
Here is my testbench code. I have tried to assign different values to A and B, and I want the simulation to give the output accordingly.
library IEEE;
use IEEE.Std_logic_1164.all;
use IEEE.Numeric_Std.all;
entity circuit1_tb is
end;
architecture bench of circuit1_tb is
component circuit1 port (
A, B: in std_logic;
F1 : out std_logic);
end component;
signal A, B: std_logic;
signal F1: std_logic;
begin
uut: circuit1 port map ( A => A,
B => B,
F1 => F1 );
stimulus: process
begin
-- Put initialisation code here
A<='1';
B<='1';
F1<='1';
-- Put test bench stimulus code here
A<='0';
B<='0';
wait for 100 ns;
A<='0';
B<='1';
wait for 100 ns;
A<='1';
B<='0';
wait for 100 ns;
A<='1';
B<='1';
wait for 100 ns;
wait;
end process;
end;
Waveform:
I have already done the code, and it works. However, when I try to write the testbench, I run into trouble. The input x is set up as 8 bits, declared as x: IN BIT_VECTOR (N -1 DOWNTO 0).
When I write the testbench I cannot enter the number of bits.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
USE ieee.std_logic_unsigned.all;
ENTITY Count_ones IS
GENERIC (N: INTEGER := 8); -- number of bits
PORT ( x: IN BIT_VECTOR (N -1 DOWNTO 0); y: OUT NATURAL RANGE 0 TO N);
END ENTITY ;
architecture Behavioral of Count_ones is
TYPE count is Array (N DOWNTO 1) OF Natural;
signal a : count;
begin
a(0) <= 1 when (x(0) = '1')
else
0;
gen: FOR i IN N-1 DOWNTO 0
GENERATE
a(i+1) <= (a(i)+1) when (x(i)='0')
else
a(i);
END GENERATE;
y <= a(N-1);
end Behavioral;
The Test Bench:
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
USE ieee.std_logic_unsigned.all;
ENTITY Count_ones_TB IS
END Count_ones_TB;
ARCHITECTURE behavior OF Count_ones_TB IS
COMPONENT Count_ones
PORT(
x : IN std_logic_vector(7 downto 0);
y : OUT std_logic_vector(0 to 3)
);
END COMPONENT;
--Inputs
signal x : std_logic_vector(7 downto 0) := (others => '0');
--Outputs
signal y : std_logic_vector(0 to 3);
BEGIN
-- Instantiate the Unit Under Test (UUT)
uut: Count_ones PORT MAP (
x => x,
y => y
);
stim_proc: process
begin
x <= "00010101";
wait for 100 ns;
x <= "00001001";
wait for 100 ns;
x <= "11111111101"
wait for 100ns;
-- insert stimulus here
wait;
end process;
END;
The error is
Entity port x does not match with type std_logic_vector of component port
Entity port y does not match with type std_logic_vector of component port
Please help me, I really cannot figure out how to solve this.
The answer to your specific question is that the types of the ports in the entity, the ports in the component and the types of the signals must match. Here is your code with those errors and many more corrected.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
USE ieee.std_logic_unsigned.all;
ENTITY Count_ones IS
GENERIC (N: INTEGER := 8); -- number of bits
PORT ( x: IN BIT_VECTOR (N -1 DOWNTO 0); y: OUT NATURAL RANGE 0 TO N);
END ENTITY ;
architecture Behavioral of Count_ones is
TYPE count is Array (N DOWNTO 0) OF Natural;
signal a : count;
begin
a(0) <= 1 when (x(0) = '1')
else
0;
gen: FOR i IN N-1 DOWNTO 0
GENERATE
a(i+1) <= (a(i)+1) when (x(i)='0')
else
a(i);
END GENERATE;
y <= a(N-1);
end Behavioral;
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
USE ieee.std_logic_unsigned.all;
ENTITY Count_ones_TB IS
END Count_ones_TB;
ARCHITECTURE behavior OF Count_ones_TB IS
COMPONENT Count_ones
GENERIC (N: INTEGER := 8); -- number of bits
PORT ( x: IN BIT_VECTOR (N -1 DOWNTO 0);
y: OUT NATURAL RANGE 0 TO N);
END COMPONENT;
--Inputs
signal x : BIT_VECTOR(7 downto 0) := (others => '0');
--Outputs
signal y : natural;
BEGIN
-- Instantiate the Unit Under Test (UUT)
uut: Count_ones PORT MAP (
x => x,
y => y
);
stim_proc: process
begin
x <= "00010101";
wait for 100 ns;
x <= "00001001";
wait for 100 ns;
x <= "11111101";
wait for 100 ns;
-- insert stimulus here
wait;
end process;
END;
However, I must point out that you are a long way from achieving your goal of trying to count the number of ones.
Because of that:
My corrections to your code are not the only correct answer. In fact, my corrections are not even a good answer. I have simply made the minimum corrections to make your code compile and run. You need to think very carefully about what type all the ports and signals in your design should be.
My corrections will not make your code work, i.e. count the number of ones.
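For a sense of direction only, here is a sketch (not the corrected code above, and not the original poster's generate-based structure) of one way the counting itself could be written, keeping the generic N and the x and y ports from the question's entity:
architecture counting of Count_ones is
begin
    process (x)
        variable ones : natural range 0 to N;
    begin
        ones := 0;
        for i in x'range loop
            if x(i) = '1' then   -- count the set bits
                ones := ones + 1;
            end if;
        end loop;
        y <= ones;
    end process;
end architecture;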
I'm trying to learn VHDL through P. Ashenden's book, The Designer's Guide to VHDL. Chapter one's exercise 10 asks you to write a 2-to-1 MUX (I'm assuming 1 bit wide) in VHDL and simulate it. I apologize in advance for being a complete noob. This is my first VHDL code.
My MUX didn't produce any errors or warnings in synthesis. My test bench doesn't produce errors or warnings, either. However, the simulation comes up completely blank, except for the names of the signals.
I've tried looking at a multitude of other MUX examples online (as well as a testbench example from the book), all of which gave errors when I tried synthesizing them, so I wasn't confident enough to use them as guides and didn't get much out of them. I'm not sure what I'm doing wrong here. I'd include an image of the simulation, but I don't have enough rep points :(
Also, I realize that a good MUX should also handle cases where it receives no select input/high-impedance values, etc. In this case, I'm just trying to get the toy model working.
The MUX code is:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity MUXtop is
Port (a, b, sel: in bit;
z: out bit);
end MUXtop;
architecture behav of MUXtop is
begin
choose: process is
begin
if sel = '0' then
z <= b;
else
z <= a;
end if;
end process choose;
end architecture behav;
The test bench code is:
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
ENTITY MUXtest IS
END MUXtest;
ARCHITECTURE behavior OF MUXtest IS
-- Component Declaration for the Unit Under Test (UUT)
COMPONENT MUXtop
PORT(
a : IN bit;
b : IN bit;
sel : IN bit;
z : OUT bit
);
END COMPONENT MUXtop;
--Inputs
signal a : bit := '0';
signal b : bit := '0';
signal sel : bit := '0';
--Outputs
signal z : bit;
BEGIN
-- Instantiate the Unit Under Test (UUT)
uut: MUXtop PORT MAP (
a => a,
b => b,
sel => sel,
z => z
);
-- Stimulus process
stimulus: process
begin
wait for 10 ns;
a <= '1';
wait for 10 ns;
sel <= '1';
wait for 10 ns;
b <= '1';
wait;
end process stimulus;
END architecture behavior;
You don't need a use clause for package std_logic_1164 when using type bit (declared in package standard).
Your process statement choose in MUXtop has no sensitivity list, which causes the process to execute continually in simulation. (It won't get anywhere until you trip over a delta cycle iteration limit, which might be set to infinity.)
I added a sensitivity list, commented out the superfluous use clauses in the two design units and added some more stimulus steps as well as a final wait for 10 ns; to allow the last action to be seen in your testbench:
library IEEE;
-- use IEEE.STD_LOGIC_1164.ALL;
entity MUXtop is
Port (a, b, sel: in bit;
z: out bit);
end MUXtop;
architecture behav of MUXtop is
begin
choose: process (a, b, sel) -- is
begin
if sel = '0' then
z <= b;
else
z <= a;
end if;
end process choose;
end architecture behav;
LIBRARY ieee;
-- USE ieee.std_logic_1164.ALL;
ENTITY MUXtest IS
END MUXtest;
ARCHITECTURE behavior OF MUXtest IS
-- Component Declaration for the Unit Under Test (UUT)
COMPONENT MUXtop
PORT(
a : IN bit;
b : IN bit;
sel : IN bit;
z : OUT bit
);
END COMPONENT MUXtop;
--Inputs
signal a : bit := '0';
signal b : bit := '0';
signal sel : bit := '0';
--Outputs
signal z : bit;
BEGIN
-- Instantiate the Unit Under Test (UUT)
uut: MUXtop PORT MAP (
a => a,
b => b,
sel => sel,
z => z
);
-- Stimulus process
stimulus: process
begin
wait for 10 ns;
a <= '1';
wait for 10 ns;
sel <= '1';
wait for 10 ns;
sel <= '0'; -- added
wait for 10 ns; -- added
b <= '1';
wait for 10 ns; -- added
wait;
end process stimulus;
END architecture behavior;
And that gives:
I am facing a confusing problem in my program. I need to port map (call) a component, and inside that component I need to do another port mapping (call), which is illegal in VHDL. Do you have an alternative solution to this problem? Here is an example of what I mean.
Here I start my program:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity binary1 is
port( N: in std_logic;
d: out integer);
end binary1 ;
Architecture Behavior1 of binary1 is
Here is a component for example:
component binary_integer_1 is
port ( b1: in std_logic;
int1: out integer);
end component;
The command for calling the component:
begin
s0: binary_integer_1 port map(n,d);
end Behavior1 ;
Also, here is the main program:
library ieee;
use ieee.std_logic_1164.all;
entity binary_integer_1 is
port ( b1: in std_logic;
int1: out integer);
end binary_integer_1;
architecture Behavior4 of binary_integer_1 is
begin
process(b1)
begin
if b1 = '1' then
int1 <= 1;
else
int1 <= 0;
end if;
end process;
end Behavior4;
For example, if I want to do a port map inside the upper entity, I get an illegal statement.
Please provide me with another way to do it.
I did a small example of a three level design hierarchy. The entity and architecture pairs are listed from bottom to top.
entity comp1 is
port (
x: in integer;
y: out integer
);
end entity;
architecture foo of comp1 is
begin
y <= x after 2 ns;
end architecture;
entity comp2 is
port (
a: in integer;
b: out integer
);
end entity;
architecture fum of comp2 is
component comp1 is
port (
x: in integer;
y: out integer
);
end component;
begin
INST_COMP1:
comp1 port map (X => A, Y => B);
end architecture;
entity top is
end entity;
architecture fum of top is
component comp2 is
port (
a: in integer;
b: out integer
);
end component;
signal a: integer := 0;
signal b: integer;
begin
INST_COMP2:
comp2 port map (a => a, b => b);
TEST:
process
begin
wait for 5 ns;
a <= 1;
wait for 5 ns;
a <= 2;
wait for 5 ns;
a <= 3;
wait for 5 ns;
wait;
end process;
end architecture;
ghdl -a component.vhdl
ghdl -e top
ghdl -r top --wave=top.ghw
(open top.ghw with gtkwave and set up the waveform display).
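For reference, ghdl -a analyzes the source file, ghdl -e elaborates the top entity, and ghdl -r runs the simulation, with --wave writing the waveform to top.ghw.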
So we have a top-level entity top, which happens to be a testbench (no ports); it instantiates component comp2, which contains an instantiated component comp1, which provides a 2 ns delayed assignment from its input to its output.
The most negative value of integer b is the left value of the integer range and is its default initial value, just as for std_logic the left value is 'U'. The output shows the default value until simulation time advances to an occurrence of x being assigned to y in comp1 (after 2 ns). The transition to 0 happened because of the initial value (0) of signal a in top, which drives x.
I used integers to avoid context clauses (a library clause and a use clause). I could have used direct entity instantiation, but you showed a component declaration.
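For completeness, a sketch of what direct entity instantiation would look like inside comp2 (the architecture name fum_direct is only for illustration; comp1 and its architecture foo must already be analyzed into library work):
architecture fum_direct of comp2 is
begin
INST_COMP1:
entity work.comp1(foo) port map (x => a, y => b); -- no component declaration needed
end architecture;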