signal a : std_logic_vector (7 downto 0) := (others => '0');
a <= a (6 downto 0) & '0';
So I understand that a is a signal that is 8 bits and all of those bits are 0. Is the next line assigning bits 6 down to 0 to be zero again?
Maybe the equivalent syntax will help you understand:
a(7 downto 0) <= a(6 downto 0) & '0';
So a(7) gets the value of a(6), a(6) the value of a(5), ... and a(0) is '0'.
This code describes a shift register (assuming the statement is enclosed in a synchronous process, which it should be to avoid a combinational loop) where the value is shifted left by one bit every clock cycle.
In VHDL, & is the concatenation operator, so "0101" & "1010" equals "01011010". In your example, the 7 LSBs are concatenated with a '0' to form a new, shifted 8-bit vector.
It's a very compact way of shifting the bits one position to the left, filling the rightmost bit with '0'.
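To make the intent concrete, here is a minimal sketch of that line inside a synchronous process (the clock name clk is an assumption; it is not part of the original snippet):
process (clk)
begin
    if rising_edge(clk) then
        a <= a(6 downto 0) & '0';  -- shift left by one, fill the LSB with '0'
    end if;
end process;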
I programmed an 8-bit shifter in VHDL:
entity 8b is
port(s, clk : in std_logic; p : out std_logic_vector (7 downto 0));
end entity;
architecture arch of 8b is
Signal iq : std_logic_vector (7 downto 0);
begin
process(clk)
begin
if rising_edge(clk) then
iq(7) <= s;
iq(6 downto 0) <= iq(7 downto 1);
end if;
end process;
p <= iq;
end architecture;
The idea is that I'm taking the input and giving it to my first D flip-flop.
Then, over the next 7 cycles, the other flip-flops get the other serial inputs, which are then presented on the parallel output p.
However, I'm not sure if this logic is flawed because this is the solution we got for this exercise:
architecture behavior of 8b is
signal p_intern : std_logic_vector(7 downto 0);
begin
P <= p_intern;
process(CLK)
begin
if rising_edge(CLK) then
p_intern <= p_intern(6 downto 0) & S;
end if;
end process;
end architecture;
But I don't get the p_intern <= p_intern(6 downto 0) & S; part.
Can someone please explain the logic behind this and if my version is also valid?
The only difference between the two implementations seems to be the lines
iq(7) <= s;
iq(6 downto 0) <= iq(7 downto 1);
vs.
p_intern <= p_intern(6 downto 0) & S;
and that iq is named p_intern. Let's assume they are both named iq for the sake of comparison.
Let's see what they are doing:
The first implementation (yours) assigns to the positions of iq:

position:   7    6      5      ...  1      0
new value:  s    iq(7)  iq(6)  ...  iq(2)  iq(1)

The second implementation (the solution) assigns:

position:   7      6      5      ...  1      0
new value:  iq(6)  iq(5)  iq(4)  ...  iq(0)  s
Here, iq(6 downto 0) & s means "concatenate s to the right of iq(6 downto 0)".
So they are not equivalent. Your implementation shifts in the values from the left, and the solution shifts in the values from the right. Which one is correct depends on the specification (presumably the solution is correct).
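For comparison, each behaviour can also be written as a single concatenation; these are alternatives, not lines to use together:
iq <= s & iq(7 downto 1);    -- yours: shift right, s enters at iq(7)
iq <= iq(6 downto 0) & s;    -- solution: shift left, s enters at iq(0)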
I'm trying to make a 16-bit binary to BCD conversion.
I found this link for an 8-bit converter and I'm trying to extend it to 16 bits.
http://vhdlguru.blogspot.nl/2010/04/8-bit-binary-to-bcd-converter-double.html
I don't know what I'm doing wrong: rpm_1000 keeps changing and rpm_100 stays at 4. Does anyone have an idea what I did wrong?
process (Hex_Display_Data)
variable i : integer:=0;
variable bcd : std_logic_vector(19 downto 0) := (others => '0');
variable bint : std_logic_vector(15 downto 0) := Hex_Display_Data;
begin
for i in 0 to 15 loop -- repeating 16 times.
bcd(19 downto 1) := bcd(18 downto 0); --shifting the bits.
bcd(0) := bint(15); -- shift bit in
bint(15 downto 1) := bint(14 downto 0); --removing msb
bint(0) :='0'; -- adding a '0'
if(i < 15 and bcd(3 downto 0) > "0100") then --add 3 if BCD digit is greater than 4.
bcd(3 downto 0) := bcd(3 downto 0) + "0011";
end if;
if(i < 15 and bcd(7 downto 4) > "0100") then --add 3 if BCD digit is greater than 4.
bcd(7 downto 4) := bcd(7 downto 4) + "0011";
end if;
if(i < 15 and bcd(11 downto 8) > "0100") then --add 3 if BCD digit is greater than 4.
bcd(11 downto 8) := bcd(11 downto 8) + "0011";
end if;
if(i < 15 and bcd(15 downto 12) > "0100") then --add 3 if BCD digit is greater than 4.
bcd(15 downto 12) := bcd(15 downto 12) + "0011";
end if;
end loop;
rpm_1000 <= bcd(15 downto 12);
rpm_100 <= bcd(11 downto 8);
rpm_10 <= bcd(7 downto 4);
rpm_1 <= bcd(3 downto 0);
end process ;
Note that anything representable in four BCD digits (0 to 9999) can be wholly contained in 14 bits of input (your Hex_Display_Data); the unused bcd 'bits' (19 downto 16) will get eaten during synthesis, along with all the add-3s that can never fire because their upper two bits are '0' (never > 4).
If you constrain your bcd variable to four BCD digits (16 bits) and your loop to 14 iterations:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity bin2bcd is
port (
input: in std_logic_vector (15 downto 0);
ones: out std_logic_vector (3 downto 0);
tens: out std_logic_vector (3 downto 0);
hundreds: out std_logic_vector (3 downto 0);
thousands: out std_logic_vector (3 downto 0)
);
end entity;
architecture fum of bin2bcd is
alias Hex_Display_Data: std_logic_vector (15 downto 0) is input;
alias rpm_1: std_logic_vector (3 downto 0) is ones;
alias rpm_10: std_logic_vector (3 downto 0) is tens;
alias rpm_100: std_logic_vector (3 downto 0) is hundreds;
alias rpm_1000: std_logic_vector (3 downto 0) is thousands;
begin
process (Hex_Display_Data)
type fourbits is array (3 downto 0) of std_logic_vector(3 downto 0);
-- variable i : integer := 0; -- NOT USED
-- variable bcd : std_logic_vector(15 downto 0) := (others => '0');
variable bcd: std_logic_vector (15 downto 0);
-- variable bint : std_logic_vector(15 downto 0) := Hex_Display_Data;
variable bint: std_logic_vector (13 downto 0); -- SEE process body
begin
bcd := (others => '0'); -- ADDED for EVERY CONVERSION
bint := Hex_Display_Data (13 downto 0); -- ADDED for EVERY CONVERSION
for i in 0 to 13 loop
bcd(15 downto 1) := bcd(14 downto 0);
bcd(0) := bint(13);
bint(13 downto 1) := bint(12 downto 0);
bint(0) := '0';
if i < 13 and bcd(3 downto 0) > "0100" then
bcd(3 downto 0) :=
std_logic_vector (unsigned(bcd(3 downto 0)) + 3);
end if;
if i < 13 and bcd(7 downto 4) > "0100" then
bcd(7 downto 4) :=
std_logic_vector(unsigned(bcd(7 downto 4)) + 3);
end if;
if i < 13 and bcd(11 downto 8) > "0100" then
bcd(11 downto 8) :=
std_logic_vector(unsigned(bcd(11 downto 8)) + 3);
end if;
if i < 13 and bcd(15 downto 12) > "0100" then
bcd(15 downto 12) :=
std_logic_vector(unsigned(bcd(15 downto 12)) + 3);
end if;
end loop;
(rpm_1000, rpm_100, rpm_10, rpm_1) <=
fourbits'( bcd (15 downto 12), bcd (11 downto 8),
bcd ( 7 downto 4), bcd ( 3 downto 0) );
end process ;
end architecture;
Note the use of aliases so your names can be used in an otherwise compatible Minimal, Complete and Verifiable example, which your question did not provide.
Aggregate signal assignment is also taken from the original; your assignment to the individual digits should work just fine.
There are two changes besides limiting the conversion to 14 bits and the number of BCD digits to match the number of digits output.
The bcd and bint variables are now initialized every time the process resumes (it is sensitive to updates of Hex_Display_Data). More than likely, these were causing your otherwise unverifiable errors.
Extraneous parentheses have been removed.
You didn't supply context clauses. The code shown uses package numeric_std rather than the -2008 package numeric_std_unsigned, keeping compatibility with earlier revisions of the standard while still using IEEE-authored packages.
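For reference, under a VHDL-2008 tool the -2008 package mentioned above would be pulled in like this (context clause sketch only), letting "+" and the comparisons work directly on std_logic_vector:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std_unsigned.all;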
You'll get something that works, provable with a testbench:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity bin2bcd_tb is
end entity;
architecture foo of bin2bcd_tb is
signal input: std_logic_vector (15 downto 0) := (others => '0');
signal ones: std_logic_vector (3 downto 0);
signal tens: std_logic_vector (3 downto 0);
signal hundreds: std_logic_vector (3 downto 0);
signal thousands: std_logic_vector (3 downto 0);
begin
DUT:
entity work.bin2bcd
port map (
input => input,
ones => ones,
tens => tens,
hundreds => hundreds,
thousands => thousands
);
STIMULUS:
process
begin
for i in 0 to 1001 loop
wait for 20 ns;
input <= std_logic_vector(to_unsigned(9999 - i, 16));
end loop;
wait for 20 ns;
wait;
end process;
end architecture;
Some other stimulus scheme could be used to exercise roll-over on all four BCD digits.
This testbench provides input values starting at 9999 and decrementing 1001 times to show all four digits transitioning.
It can easily be modified to prove every transition of every BCD digit.
In summary, the errors you were encountering appear to come from the difference in elaboration between variables in a subprogram, where bcd and bint would be dynamically elaborated and initialized on every function call, and variables in a process, where they are initialized only once.
From examining Xilinx's UG901 (Vivado Design Suite User Guide: Synthesis, 2015.3), Chapter 4: VHDL Support (Combinatorial Processes, case Statements, for-loop Statements), the for loop appears to be supported for synthesis. The remaining question would be support for repeated assignment to variables in repeated sequences of sequential statements, which should also be supported; there is at least one other double-dabble question on Stack Overflow where successful synthesis using such a for loop has been reported.
Note that constraining the input to 14 bits doesn't detect the effect of larger binary values (> 9999), but neither does your original process, which likewise provides only 4 BCD output digits. You could deal with that by checking whether the input value is greater than 9999 (x"270F"), as sketched below.
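One possible form of that guard, replacing the final aggregate assignment in the process above (forcing every digit to 9 is just an arbitrary way to flag the condition):
if unsigned(Hex_Display_Data) > 9999 then
    (rpm_1000, rpm_100, rpm_10, rpm_1) <=
        fourbits'("1001", "1001", "1001", "1001");
else
    (rpm_1000, rpm_100, rpm_10, rpm_1) <=
        fourbits'(bcd (15 downto 12), bcd (11 downto 8),
                  bcd ( 7 downto 4), bcd ( 3 downto 0));
end if;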
Each + 3 represents one LUT of depth in an FPGA (4-bit input, 4-bit output), and some number of them are layered in series depending on the size of the converted number (the range of i). The time needed for propagation through the add-3 stages is offset by the rate at which the display can be visually interpreted; if you updated Hex_Display_Data in the millisecond range you likely could not tell the difference visually.
Running the loop 16 times will cause the stale value in the BCD registers to be multiplied by 65536 (mod 100000) and added to the value from the binary register. Say the value is 4000: 4000 x 65536 yields 44000, 44000 x 65536 yields 84000, 84000 x 65536 yields 24000, 24000 x 65536 yields 64000, and 64000 x 65536 yields 4000 again.
To make the algorithm work, you must start out by clearing the BCD registers on every conversion, as sketched below. It also wouldn't hurt to fix the comment about how many times your loop runs.
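A minimal sketch of that change applied to the original process: bcd and bint get their starting values at the top of the process body (executed on every run) instead of in the declarations (applied only once, at elaboration):
process (Hex_Display_Data)
    variable bcd  : std_logic_vector(19 downto 0);
    variable bint : std_logic_vector(15 downto 0);
begin
    bcd  := (others => '0');       -- cleared for every conversion
    bint := Hex_Display_Data;      -- sampled for every conversion
    for i in 0 to 15 loop
        -- shift / add-3 loop body exactly as in the question
    end loop;
    rpm_1000 <= bcd(15 downto 12);
    rpm_100  <= bcd(11 downto 8);
    rpm_10   <= bcd(7 downto 4);
    rpm_1    <= bcd(3 downto 0);
end process;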
Incidentally, a practical implementation of a binary to BCD converter should generally accept a clock input, and perform one step for each active clock edge. If your VHDL is running entirely in simulation the complexity of the resulting logic won't matter, but trying to perform everything at once in real hardware will be rather expensive. By contrast, the hardware to do a simple shift of the binary number and a multiply-by-two of the BCD number will be much simpler. Note that if you do things "all at once", the most significant bit of the output will depend upon the second-least-significant bit of the input, meaning the input signal will have to propagate through all the logic in one step. By contrast, if you shift by one bit per clock cycle, each bit of the output will depend only upon at most four bits of the input (since each digit will be in the range 0-9 before the adjustment phase, adding 3 will never cause a carry out).
Also, the "double dabble" algorithm requires that the adjustment be performed before the BCD shifts, but it looks as though the code is performing the adjustment after. Doing the adjustment after is fine if one looks at bit
ranges 16..13, 12..9, 8..5, and 4..1 rather than 15..12, etc. Alternatively, one could specify that the value of bits 19..17 should be the value of bits 18..16, the value of bits 16..13 should be either the value of bits 15..12 (if less than 5) or the value of bits 15..12, plus three (if greater), etc. Such a formulation would set the value of each bit in exactly one place, which would make it easier to see how it should be rendered into hardware.
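Putting those two suggestions together (one double-dabble step per clock edge, with each digit adjusted before the shift), a clocked converter might look like the sketch below; the entity, port and signal names are made up, and the input is again limited to values that fit in four digits:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity bin2bcd_serial is
    port (
        clk    : in  std_logic;
        start  : in  std_logic;                      -- load a new value
        bin_in : in  std_logic_vector(13 downto 0);  -- 0 to 9999
        bcd    : out std_logic_vector(15 downto 0);
        done   : out std_logic
    );
end entity;
architecture sketch of bin2bcd_serial is
    signal bcd_r : std_logic_vector(15 downto 0) := (others => '0');
    signal bin_r : std_logic_vector(13 downto 0) := (others => '0');
    signal step  : integer range 0 to 14 := 14;
begin
    process (clk)
        variable v : std_logic_vector(15 downto 0);
    begin
        if rising_edge(clk) then
            if start = '1' then
                bcd_r <= (others => '0');
                bin_r <= bin_in;
                step  <= 0;
            elsif step < 14 then
                v := bcd_r;
                for d in 0 to 3 loop             -- adjust every digit first
                    if unsigned(v(4*d + 3 downto 4*d)) > 4 then
                        v(4*d + 3 downto 4*d) :=
                            std_logic_vector(unsigned(v(4*d + 3 downto 4*d)) + 3);
                    end if;
                end loop;
                bcd_r <= v(14 downto 0) & bin_r(13);  -- then shift one bit in
                bin_r <= bin_r(12 downto 0) & '0';
                step  <= step + 1;
            end if;
        end if;
    end process;
    bcd  <= bcd_r;
    done <= '1' when step = 14 else '0';
end architecture;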
I tried to implement an addition of two signed numbers. The first one is 32 bits; the second one is also 32 bits and corresponds to the result of the earlier addition. The VHDL code is below:
Entity Sum_Position is
port
(
Clk: in std_logic;
Reset: in std_logic;
Raz_position: in std_logic;
Position_In: in std_logic_vector(31 downto 0);
Position_Out: out std_logic_vector(31 downto 0)
);
end Sum_Position;
Architecture Arch_position of sum_Position is
-- create local signals
signal position_before: signed (31 downto 0):= (OTHERS => '0');
-- both signals have one more bit than the original
signal Position_s : SIGNED(Position_In'length downto 0):= (OTHERS => '0');
signal Position_Before_s : SIGNED(Position_In'length downto 0):= (OTHERS => '0');
signal Sum_Pos_s : SIGNED(Position_In'length downto 0):= (OTHERS => '0');
Begin -- begin of architecture
-- convert type and perform a sign-extension
Position_s <=SIGNED(Position_In(31) & Position_In);
Position_Before_s<=resize(signed(position_before), Position_Before_s'length);
Sum_of_position: process(Clk, Reset)
begin
IF (Reset='0') THEN -- when reset is selected
-- initialize all values
Sum_Pos_s<= (OTHERS => '0');
ELSIF (Clk'event and Clk = '1') then
-- addition of two 33 bit values
Sum_Pos_s <= Position_s + Position_Before_s;
END IF;
end process Sum_of_position;
-- resize to require size and type conversion
position_before <= (OTHERS => '0') WHEN Raz_position='1' else
signed(resize(Sum_Pos_s, position_before'length));
-- Resize and output the result
Position_Out <= (OTHERS => '0') WHEN Raz_position='1' else
std_logic_vector(resize(Sum_Pos_s, Position_Out'length));
end Arch_position;
But I seem to have an overflow, because the result is very strange. Can you please suggest a solution?
Best regards.
First of all, your code is quite unclear.
Secondly, there is no reason for position_before(_s) to be asynchronous; it should be clocked, e.g. (summarized):
begin
IF (Reset='0') THEN -- when reset is selected
-- initialize all values
Sum_Pos_s<= (OTHERS => '0');
ELSIF (Clk'event and Clk = '1') then
Position_Before_s <= Sum_Pos_s;
Sum_Pos_s <= Position_s + Position_Before_s;
END IF;
end process Sum_of_position;
Thirdly, the answer to your question: you pass floats to your VHDL engine, interpret them as signed and add them. You should look at how IEEE 754 floats are built: there is a fixed field for the sign bit, one for the exponent and one for the mantissa. You can't just add everything up.
Step 1 is to express both on the same exponent basis. Then add the adjusted mantissas and keep the exponent. Then rescale the mantissa for the most significant bit to correspond to 0.5.
What you do is the following:
0.4 + 40 = (0.1) * 4 + (10) * 4
The mantissas are both 4.
The exponents are -1 and 1. Without the fields overflowing, your result becomes an exponent of 0 and a mantissa of 8, so 8 instead of 40.4.
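If the operands really are IEEE-754 single-precision bit patterns and your tool supports VHDL-2008, the ieee.float_pkg types perform the alignment, addition and renormalisation described above for you. A rough sketch with made-up signal names (treat the exact usage as an assumption to check against your tool's documentation):
library ieee;
use ieee.std_logic_1164.all;
use ieee.float_pkg.all;  -- VHDL-2008
-- inside an architecture:
signal a_bits, b_bits : std_logic_vector(31 downto 0);
signal sum_out        : std_logic_vector(31 downto 0);
signal sum_f          : float32;
-- concurrent statements:
sum_f   <= to_float(a_bits) + to_float(b_bits);  -- bit patterns reinterpreted as float32, then added
sum_out <= to_slv(sum_f);                        -- result back as a 32-bit vector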
Most modern VHDL tools have integer types (signed and unsigned).
These are usually 32 bits wide unless you add a range constraint.
I suggest you consider using integers rather than std_logic_vector.
You can convert between types like casts in C.
This is my favourite diagram on casting/converting VHDL types; I have printed it out and put it on my wall: http://www.bitweenie.com/listings/vhdl-type-conversion
A page on integers in VHDL: http://vhdl.renerta.com/mobile/source/vhd00039.htm
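A small sketch of the conversions between the two styles, using ieee.numeric_std (the signal names here are made up):
-- declarations:
signal count   : integer range 0 to 255;        -- constrained integer
signal slv_in  : std_logic_vector(7 downto 0);
signal slv_out : std_logic_vector(7 downto 0);
-- conversions:
count   <= to_integer(unsigned(slv_in));             -- vector -> integer
slv_out <= std_logic_vector(to_unsigned(count, 8));  -- integer -> vector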
So here's the problem. I've written code for a binary divider that should output 7-bit seven-segment codes to an 8-digit seven-segment display (two digits each for the dividend, divisor, quotient and remainder, in that order). This display on my dev board has one 7-bit input (a to g) and a 3-bit select.
So the basic idea is that I have to output the dividend, divisor, quotient and remainder sequentially, continuously and fast enough that, to the human eye, the output looks constant even though each of the eight digits is enabled one at a time according to what my output is.
The divider gives all the outputs (dividend, divisor, quotient, remainder) in binary; each is converted by a function to 8-bit BCD, and that BCD number is then broken down into two 4-bit BCD digits by another function (so I now have 8 values: 2 representing the dividend, 2 representing the divisor, etc.). These 4-bit digits are converted by yet another function to seven-segment codes.
Here is the full code:
library IEEE;
use IEEE.STD_LOGIC_1164.all;
use IEEE.STD_LOGIC_UNSIGNED.all;
use IEEE.STD_LOGIC_ARITH.all;
entity division is
generic(SIZE: INTEGER := 8);
port(reset: in STD_LOGIC; --reset
en: in STD_LOGIC; --enable
clk: in STD_LOGIC; --clock
num: in STD_LOGIC_VECTOR((SIZE - 1) downto 0); --dividend
den: in STD_LOGIC_VECTOR((SIZE - 1) downto 0); --divisor
whatgoes:out STD_LOGIC_VECTOR(6 downto 0) --output
);
end division;
architecture behav of division is
signal bufreg: STD_LOGIC_VECTOR((2 * SIZE - 1) downto 0); --signal array to hold both accumulator and dividend registers as one i.e bufreg(18 bits)
signal dbuf: STD_LOGIC_VECTOR((SIZE - 1) downto 0); --signal array to hold the divisor
signal count: INTEGER range 0 to SIZE; --count to determine when to stop
signal MYcount: INTEGER range 0 to 100;
signal res: STD_LOGIC_VECTOR((SIZE - 1) downto 0); --result/quotient
signal rm : STD_LOGIC_VECTOR((SIZE - 1) downto 0); --remainder
alias ADreg is bufreg((2 * SIZE - 1) downto SIZE); --ADreg is is alias for top half of bufreg register(17th to 9th bit)
alias DVNDreg is bufreg((SIZE - 1) downto 0); --DVNDreg is is alias for bottom half of bufreg register(8th to 0th bit)
--Function definitions
function to_bcd ( bin : std_logic_vector(7 downto 0) ) return std_logic_vector; --converts 8 bit binary to 8 bit BCD
function m7seg (bin : std_logic_vector(3 downto 0) ) return std_logic_vector; --converts 4 bit BCD to 7 bit 7segment
function breakdown1 ( bin : std_logic_vector(7 downto 0) ) return std_logic_vector; --breaks an 8 bit BCD into a 4 bit BCD with lower bits
function breakdown2 ( bin : std_logic_vector(7 downto 0) ) return std_logic_vector; ----breaks an 8 bit BCD into a 4 bit BCD with higher bits
--this function assigns the first 4 bits of an 8 bit BCD number to a 4-bit vector
function breakdown1 ( bin : std_logic_vector(7 downto 0) ) return std_logic_vector is
variable bint : std_logic_vector(3 downto 0) :=bin(3 downto 0);
begin
return bint;
end breakdown1;
--this function assigns the last 4 bits of an 8 bit BCD number to a 4-bit vector
function breakdown2 ( bin : std_logic_vector(7 downto 0) ) return std_logic_vector is
variable bint : std_logic_vector(3 downto 0) :=bin(7 downto 4);
begin
return bint;
end breakdown2;
--This function converts 8 bit binary to 8 bit BCD
function to_bcd ( bin : std_logic_vector(7 downto 0) ) return std_logic_vector is
variable i : integer:=0;
variable bcd : std_logic_vector(7 downto 0) :=(others => '0');
variable bint : std_logic_vector(7 downto 0) :=bin;
variable bcd2 : std_logic_vector(7 downto 0) :=(others => '0');
begin
for i in 0 to 7 loop -- repeating 8 times.
bcd(7 downto 1) := bcd(6 downto 0); --shifting the bits.
bcd(0) := bint(7);
bint(7 downto 1) := bint(6 downto 0);
bint(0) :='0';
if(i < 7 and bcd(3 downto 0) > "0100") then --add 3 if BCD digit is greater than 4.
bcd(3 downto 0) := bcd(3 downto 0) + "0011";
end if;
if(i < 7 and bcd(7 downto 4) > "0100") then --add 3 if BCD digit is greater than 4.
bcd(7 downto 4) := bcd(7 downto 4) + "0011";
end if;
--if(i < 7 and bcd(11 downto 8) > "0100") then --add 3 if BCD digit is greater than 4.
--bcd(11 downto 8) := bcd(11 downto 8) + "0011";
--end if;
end loop;
bcd2(7 downto 0):=bcd(7 downto 0);
return bcd2;
end to_bcd;
--This function converts 4 bit bcd to 7 segment
function m7seg (bin : std_logic_vector(3 downto 0))return std_logic_vector is
variable bint : std_logic_vector(3 downto 0):=bin(3 downto 0);
variable out7 : std_logic_vector(6 downto 0);
begin
case bint is
when "0000"=> out7:="1111110";
when "0001"=> out7:="0110000";
when "0010"=> out7:="1101101";
when "0011"=> out7:="1111001";
when "0100"=> out7:="0110011";
when "0101"=> out7:="1011011";
when "0110"=> out7:="X011111";
when "0111"=> out7:="1110000";
when "1000"=> out7:="1111111";
when "1001"=> out7:="111X011";
when others=> out7:="0000000";
end case;
return out7;
end m7seg;
begin
--our process begins here (shift and subtract/ Non restoring division)
p_001: process(reset, en, clk, bufreg)
begin
if reset = '1' then
res <= (others => '0');
rm <= (others => '0');
dbuf <= (others => '0');
bufreg <= (others => '0');
count <= 0;
MYcount <= 1;
elsif rising_edge(clk) then
if en = '1' then
case count is
when 0 =>
ADreg <= (others => '0');
DVNDreg <= num;
dbuf <= den;
res <= DVNDreg;
rm <= ADreg;
count <= count + 1;
when others =>
if bufreg((2 * SIZE - 2) downto (SIZE - 1)) >= dbuf then
ADreg <= '0' & (bufreg((2 * SIZE - 3) downto (SIZE - 1)) - dbuf((SIZE - 2) downto 0));
DVNDreg <= DVNDreg ((SIZE - 2) downto 0) & '1';
else
bufreg <= bufreg((2 * SIZE - 2) downto 0) & '0';
end if;
if count /= SIZE then
count <= count + 1;
else
count <= 0;
end if;
end case;
end if;
res <= DVNDreg;
rm <= ADreg;
MYcount<=MYcount+1;
whatgoes<=(others => '0');
case MYcount is
when 2 =>
whatgoes<=m7seg(breakdown1(to_bcd(rm))); --first 7segment(lower bits of remainder)
when 3 =>
whatgoes<=m7seg(breakdown2(to_bcd(rm))); --second 7segment (higher bits of remainder)
when 4 =>
whatgoes<=m7seg(breakdown1(to_bcd(res))); --third 7segment (lower bits of result/quotient)
when 5 =>
whatgoes<=m7seg(breakdown2(to_bcd(res))); --fourth 7segment (higher bits of result/quotient)
when 6 =>
whatgoes<=m7seg(breakdown1(to_bcd(den))); --fifth 7segment (lower bits of divisor)
when 7 =>
whatgoes<=m7seg(breakdown2(to_bcd(den))); --sixth 7segment (higher bits of divisor)
when 8 =>
whatgoes<=m7seg(breakdown1(to_bcd(num))); --seventh 7segment (lower bits of number/dividend)
when 9 =>
whatgoes<=m7seg(breakdown2(to_bcd(num))); --eigth 7segment (higher bits of number/dividend)
when 10 =>
MYcount<=1;
when others =>
NULL;
end case;
end if;
end process;
end behav;
When I try to run a simulation, it gives me all kinds of funky stuff. I want the output (whatgoes(6 downto 0)) to change with the rising edge of the clock (clk). The problem is that, since I'm a beginner at VHDL, I've been having a lot of problems with synthesizing sequential statements.
Inside the process p_001, with enable, clock and reset in the sensitivity list, I put this case statement. It executes under a rising-edge condition.
Code extract:
case MYcount is
when 2 =>
whatgoes<=m7seg(breakdown1(to_bcd(rm))); --first 7segment(lower bits of remainder)
when 3 =>
whatgoes<=m7seg(breakdown2(to_bcd(rm))); --second 7segment (higher bits of remainder)
when 4 =>
whatgoes<=m7seg(breakdown1(to_bcd(res))); --third 7segment (lower bits of result/quotient)
when 5 =>
whatgoes<=m7seg(breakdown2(to_bcd(res))); --fourth 7segment (higher bits of result/quotient)
when 6 =>
whatgoes<=m7seg(breakdown1(to_bcd(den))); --fifth 7segment (lower bits of divisor)
when 7 =>
whatgoes<=m7seg(breakdown2(to_bcd(den))); --sixth 7segment (higher bits of divisor)
when 8 =>
whatgoes<=m7seg(breakdown1(to_bcd(num))); --seventh 7segment (lower bits of number/dividend)
when 9 =>
whatgoes<=m7seg(breakdown2(to_bcd(num))); --eigth 7segment (higher bits of number/dividend)
when 10 =>
MYcount<=1;
when others =>
NULL;
end case;
I'm pretty sure my problem lies here since the rest of my code works fine.
I apologize for uploading such a convoluted mess of code. I'm genuinely stuck and I've been at this for a good number of hours.
Any help would be greatly appreciated. I know it takes a special kind of devotion and patience to answer such a long, boring and noobish problem.
But to whoever can help or provide a link to something that has an answer to my kind of problem, you'd have done me a great service.
I'm using ISE 14.3 and iSim.
So, thanks to rick, I solved this.
He helped me realize that I was forgetting to drive the 3-bit select output. As it turns out, driving it using a case statement and counting variable solved my problem of executing the code sequentially.
I know the code is not exactly written in an organized way, but I hope I'll get better with time.
process (clk,tmp,rm,res,den,num)
variable CLR: boolean:=true;
begin
if (CLR=true) then
tmp <= "000";
CLR:=false;
elsif (clk'event and clk='1') then
tmp <= tmp + 1;
if tmp<=8 then
CLR:=true;
end if;
end if;
case tmp is
when "000" =>
whatgoes<=m7seg(breakdown1(to_bcd(rm))); --first 7segment(lower bits of remainder)
when "001" =>
whatgoes<=m7seg(breakdown2(to_bcd(rm))); --second 7segment (higher bits of remainder)
when "010" =>
whatgoes<=m7seg(breakdown1(to_bcd(res))); --third 7segment (lower bits of result/quotient)
when "011" =>
whatgoes<=m7seg(breakdown2(to_bcd(res))); --fourth 7segment (higher bits of result/quotient)
when "100" =>
whatgoes<=m7seg(breakdown1(to_bcd(den))); --fifth 7segment (lower bits of divisor)
when "101" =>
whatgoes<=m7seg(breakdown2(to_bcd(den))); --sixth 7segment (higher bits of divisor)
when "110" =>
whatgoes<=m7seg(breakdown1(to_bcd(num))); --seventh 7segment (lower bits of number/dividend)
when "111" =>
whatgoes<=m7seg(breakdown2(to_bcd(num))); --eigth 7segment (higher bits of number/dividend)
when others =>
NULL;
end case;
sel<=tmp;
end process;
I'm basically shooting in the dark here; maybe if you post a simulation picture it will help us understand your problem better. Anyway, since we're at it, why not talk about a few random issues:
The code would be easier to understand (and to work with) if you'd split it into a few blocks, each with a single purpose. You could have one block do the division, and output only the quotient and remainder. Another block could take in 8 BCD values, and multiplex them so that they appear correctly on your board's displays. If we can concentrate on one part of the problem at a time, it will be easier to spot anything wrong.
You mention a 3-bit select on the LCD, but I don't see in your code where you drive it. Maybe you should output something based on your signal MYcount?
To make sure your functions are working OK, you could put them in a package and create a self-checking testbench. At least that's how I'd do it. This would take that variable out of the equation.
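For instance, if m7seg were moved into a hypothetical package work.seg7_pkg, a tiny self-checking testbench could assert a few expected patterns (the expected values below are taken from your case statement):
library ieee;
use ieee.std_logic_1164.all;
use work.seg7_pkg.all;  -- assumed package holding m7seg
entity m7seg_tb is
end entity;
architecture sim of m7seg_tb is
begin
    process
    begin
        assert m7seg("0000") = "1111110" report "wrong pattern for 0" severity error;
        assert m7seg("1000") = "1111111" report "wrong pattern for 8" severity error;
        assert m7seg("1111") = "0000000" report "wrong pattern for an invalid digit" severity error;
        report "m7seg checks done" severity note;
        wait;
    end process;
end architecture;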
Please post some simulation results so that we can help you out.
So, I'm developing an ALU for the MIPS architecture and I'm trying to implement a shift left and a shift right so that the ALU can shift by any number of bits.
The idea I had is to convert the shift amount to an integer (stored in X) and select the slice of the input that ends up in the result, but Quartus doesn't accept a variable as a range bound, only constants.
What could I do to make this work?
(The relevant cases are on the lines "WHEN "1000" => ..." and "WHEN "1001" => ...".)
Thanks.
PROCESS ( ALU_ctl, Ainput, Binput, X )
BEGIN
-- Select ALU operation
--ALU_output_mux <= X"00000000"; --padrao
CASE ALU_ctl IS
WHEN "1000" => ALU_output_mux(31 DOWNTO X) <= (Ainput( 31-X DOWNTO 0 ));
WHEN "1001" => ALU_output_mux(31-X DOWNTO 0) <= (Ainput( 31 DOWNTO X ));
WHEN OTHERS => ALU_output_mux <= X"00000000";
END CASE;
END PROCESS;
If Quartus doesn't like it you have two choices:
Write it some way that Quartus does like: you're trying to infer a barrel shifter, so you could write one out longhand and then instantiate that. Potentially expensive in time.
Get a different synthesizer that will accept it. Potentially expensive in money.
I have had issues with this in Quartus as well, although your code also has some implicit latches (you are not assigning all bits of the output in your two shift cases).
The work-around I use is to define an intermediate array with all the possible results, then select one of those results using your selector. In your case, something like the following:
subtype DWORD_T is std_logic_vector( 31 downto 0);
type DWORD_A is array (natural range <>) of DWORD_T;
signal shift_L : DWORD_A(31 downto 0);
signal shift_R : DWORD_A(31 downto 0);
signal zero : DWORD_T;
...
zero <= (others=>'0');
process (Ainput)
begin
for index in Ainput'range loop
shift_L(index) <= Ainput(31 - index downto 0) & zero(index - 1 downto 0);
shift_R(index) <= zero(index - 1 downto 0) & Ainput(31 downto index);
end loop;
end process;
with ALU_ctl select
    ALU_output_mux <= shift_L(to_integer(X)) when "1000",
                      shift_R(to_integer(X)) when "1001",
                      (others => '0')        when others;
You could work around this by using a generate or a for loop to create each shift/rotate level, or you can use the standard numeric_std functions (shift_left, shift_right, rotate_left, rotate_right) for shifting and rotating.
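A sketch of that second option, assuming the design is moved to ieee.numeric_std (rather than std_logic_arith) and that X is an unsigned shift amount; use X directly if it is already an integer:
case ALU_ctl is
    when "1000" =>
        ALU_output_mux <= std_logic_vector(shift_left(unsigned(Ainput), to_integer(X)));
    when "1001" =>
        ALU_output_mux <= std_logic_vector(shift_right(unsigned(Ainput), to_integer(X)));
    when others =>
        ALU_output_mux <= (others => '0');
end case;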