Is there an algorithm or code to find the square root of an integer in VHDL? The code must not use any of these libraries:
IEEE.std_logic_arith.all;
IEEE.std_logic_unsigned.all;
IEEE.math_real.all;
IEEE.std_logic_signed.all;
See VHDL samples
...
32-bit parallel integer square root
The VHDL source code is sqrt32.vhdl
The output of the VHDL simulation is sqrt32.out
The schematic was never drawn. sqrt8m.vhdl was expanded
using "generate" statements to create sqrt32.vhdl
It only references package ieee.std_logic_1164, accepts a std_logic_vector of length 32, and returns one of length 16.
Amazing what you can find googling with the search terms square root VHDL.
Addendum
I got curious, and a testbench for sqrt32.vhdl is small to write. There's an error in the code: it's not functional. The apparent way to correct it would be to re-implement it; it likely suffers from an erroneous assumption in the expansion of sqrt8m.vhdl mentioned as the source (which could also be validated).
There are other square root VHDL models available. Sequential (successive-subtraction divider) models are not uncommon in books on VHDL arithmetic, alongside the various implementations of division (e.g. non-restoring).
There's also a square root function in the -2008 IEEE package float_pkg, which is synthesis-eligible and has the dynamic range for a 32-bit integer in the mantissa of a 64-bit floating-point number. It's not one of the proscribed packages, and the package has the necessary conversion routines.
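As a rough, untested sketch of that route (x, xf and y are placeholder names, with x an integer; the conversions use float_pkg's own to_float/to_integer, here the size_res overload of to_float):
library ieee;
use ieee.float_pkg.all;
...
variable xf : float64;
variable y  : integer;
...
xf := to_float(x, xf);  -- integer x -> float64 (mantissa wide enough for 32-bit values)
xf := sqrt(xf);         -- float_pkg square root
y  := to_integer(xf);   -- truncate back to integer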
It appears you are looking for synthesizable code; in that case, the question should mention that.
Some mathematical operations are usually supported by synthesis tools, like integer addition (a + b), integer negation (-a), integer subtraction (a - b), and integer multiplication (*), while other mathematical operations are not, like the square root operation.
So a synthesizable square root operation must be implemented separately, as suggested by user1155120, and the implementation will depend on requirements for arguments, accuracy, throughput, latency, size, etc.
But for simulation purposes (not synthesizable), you can use ieee.math_real.sqrt; the example below prints the square root of 2.0:
library ieee;
use ieee.math_real.all;
...
report real'image(sqrt(2.0)) severity NOTE;
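Wrapped in a minimal self-contained testbench (the names tb and sim are arbitrary), the same report looks like:
library ieee;
use ieee.math_real.all;

entity tb is
end entity;

architecture sim of tb is
begin
    process
    begin
        report real'image(sqrt(2.0)) severity NOTE;
        wait;  -- run once, then stop
    end process;
end architecture;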
If the literal wording is that the code cannot contain IEEE.math_real.all;, you can circumvent the problem by selecting only what you need from math_real.
A selective use clause naming IEEE.math_real.sqrt; will then do what you want.
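That is, the context clause shrinks to:
library ieee;
use ieee.math_real.sqrt;  -- only sqrt is made visible, not all of math_real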
However, while this satisfies the letter of the question asked above, I cannot guarantee it satisfies the intent.
A better answer would be to take ANY algorithm for computing square root - there are many, easily found in the usual sources. Implement it in VHDL, and test it.
Try this solution based on Mr. Crenshaw's algorithm:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
entity SQRT is
    Generic ( b : natural range 4 to 32 := 16 );  -- b should be even
    Port ( value  : in  STD_LOGIC_VECTOR (b-1 downto 0);
           result : out STD_LOGIC_VECTOR (b/2-1 downto 0));
end SQRT;
architecture Behave of SQRT is
begin
    process (value)
        variable vop  : unsigned(b-1 downto 0);  -- remaining operand
        variable vres : unsigned(b-1 downto 0);  -- partial result
        variable vone : unsigned(b-1 downto 0);  -- current "one" bit, an even power of two
    begin
        vone := to_unsigned(2**(b-2), b);
        vop  := unsigned(value);
        vres := (others => '0');
        while (vone /= 0) loop                   -- iterates b/2 times
            if (vop >= vres + vone) then
                vop  := vop - (vres + vone);
                vres := vres/2 + vone;
            else
                vres := vres/2;
            end if;
            vone := vone/4;
        end loop;
        result <= std_logic_vector(vres(result'range));
    end process;
end;
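A hypothetical instantiation for the default b = 16 (x and y are placeholder signals):
signal x : std_logic_vector(15 downto 0);
signal y : std_logic_vector(7 downto 0);
...
u_sqrt : entity work.SQRT
    generic map ( b => 16 )
    port map ( value => x, result => y );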
Related
I am new to VHDL. I am trying to add 2 vectors of 5-bit unsigned numbers. In the following code the signal firstsum gives proper output in the waveform, but the vector sum does not show any output. I am using Quartus II. What is the error in this code?
library IEEE;
use IEEE.STD_LOGIC_1164.all;
use ieee.numeric_std.all;

package UVEC is
    subtype UINT5 is std_logic_vector (4 downto 0);
    type UVEC5 is array (2 downto 0) of UINT5;
    subtype UINT6 is std_logic_vector (5 downto 0);
    type UVEC6 is array (2 downto 0) of UINT6;
end UVEC;

library IEEE;
use IEEE.STD_LOGIC_1164.all;
use ieee.numeric_std.all;
use work.UVEC.all;

entity FP_Vecsum1 is
    port(
        a, b     : in  UVEC5;
        sum      : out UVEC6;
        firstsum : out UINT6
    );
end FP_Vecsum1;

architecture FP_Vecsum1_MX of FP_Vecsum1 is
begin
    firstsum <= std_logic_vector(('0' & unsigned(a(0))) + ('0' & unsigned(b(0))));
    sum(0)   <= std_logic_vector(('0' & unsigned(a(0))) + ('0' & unsigned(b(0))));
    sum(1)   <= std_logic_vector(('0' & unsigned(a(1))) + ('0' & unsigned(b(1))));
    sum(2)   <= std_logic_vector(('0' & unsigned(a(2))) + ('0' & unsigned(b(2))));
end FP_Vecsum1_MX;
Welcome to the VHDL world.
I also haven't found anything wrong with your code, but you can try the following; maybe it will help:
First, cast the signals to unsigned at the beginning of your architecture, before doing the math:
a_us(0) <= unsigned(a(0));
a_us(1) <= unsigned(a(1));
a_us(2) <= unsigned(a(2));
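(Note: this assumes you also declare unsigned counterparts of the question's array types, for example:)
type UVEC5U is array (2 downto 0) of unsigned(4 downto 0);  -- hypothetical unsigned twin of UVEC5
signal a_us, b_us : UVEC5U;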
This is quite convenient: your ports to the outside world stay type-neutral vectors, while the math inside your component is explicitly signed or unsigned. Do the conversion once, and you're free.
Second, instead of manually doing the zero extension, now that your vectors are declared unsigned, you can use the resize function to automatically bring the summed vectors to the result length:
sum(0) <= std_logic_vector(resize(a_us(0),sum(0)'length) + resize(b_us(0),sum(0)'length));
You can also use a little trick: add a zero of the relevant vector width:
sum(0) <= std_logic_vector( to_unsigned(0,sum(0)'length) + a_us(0) + b_us(0) );
It might look a little longer, but in my opinion it's more robust code.
Hope this helps,
ilan.
I have two codes, one in Verilog and another in VHDL, which count the number of ones in a 16-bit binary number. Both do the same thing, but after synthesising with Xilinx ISE, I get different synthesis reports.
Verilog code:
module num_ones_for(
    input  [15:0] A,
    output reg [4:0] ones
);
    integer i;
    always @(A)
    begin
        ones = 0;                       // initialize count variable.
        for (i = 0; i < 16; i = i + 1)  // for all the bits.
            ones = ones + A[i];         // Add the bit to the count.
    end
endmodule
VHDL code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity num_ones_for is
    Port ( A    : in  STD_LOGIC_VECTOR (15 downto 0);
           ones : out STD_LOGIC_VECTOR (4 downto 0));
end num_ones_for;

architecture Behavioral of num_ones_for is
begin
    process(A)
        variable count : unsigned(4 downto 0) := "00000";
    begin
        count := "00000";                     -- initialize count variable.
        for i in 0 to 15 loop                 -- for all the bits.
            count := count + ("0000" & A(i)); -- Add the bit to the count.
        end loop;
        ones <= std_logic_vector(count);      -- assign the count to output.
    end process;
end Behavioral;
Number of LUTs used: 25 (VHDL) versus 20 (Verilog).
Combinational delay of the circuit: 3.330 ns versus 2.597 ns.
As you can see, the Verilog code looks much more efficient. Why is that?
The only difference I can see is how 4 zeros are appended on the MSB side in the VHDL code, but I did this because otherwise VHDL throws an error.
Is this because of the tool I am using, the HDL language, or the way I wrote the code?
You will need to try a number of different experiments before coming to any conclusions. But my observation is that Verilog is used more frequently in the most critical capacity/area/performance designs, so the majority of tool development effort goes into handling the Verilog language first.
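For example, one experiment worth trying: in the VHDL process, count with a conditional increment instead of the width-padding concatenation, and compare the reports again:
count := "00000";
for i in 0 to 15 loop
    if A(i) = '1' then   -- add the bit without resizing it first
        count := count + 1;
    end if;
end loop;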
What issues could I run into with this code? I was thinking that there could be an issue if the result from the addition is bigger than what 15 bits can represent (32767), or if I get a negative number in the subtraction.
library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;
use ieee.std_logic_arith.all;
use ieee.numeric_std.all;

entity test is
    port( input  : in  std_logic_vector(14 downto 0);
          sel    : out boolean;
          output : out std_logic_vector(14 downto 0));
end test;

architecture test of test is
    constant first  : integer := 1050;
    constant second : integer := 33611;
begin
    output <= input - first;
    output <= input + second;
    sel    <= input < first;
end test;
The primary issue you have is that the design intent is not communicated so it is impossible to distinguish correct from incorrect results - in that sense, whatever it does must be right!
I differ from David's opinion in one respect: where he says "std_logic_vector is an unsigned representation", I suggest that std_logic_vector is neither signed nor unsigned; it is just a bag of bits. If it happens to follow unsigned rules, that's an accident of the set of libraries you have included.
Instead, I would delete the non-standard libraries:
use ieee.std_logic_unsigned.all;
use ieee.std_logic_arith.all;
and use exclusively the standard libraries:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
Then - if the input and output ports are meant to represent unsigned numbers, the best thing to do is say so...
port( input  : in  unsigned(14 downto 0);
      sel    : out boolean;
      output : out unsigned(14 downto 0));
(If you are not allowed to change the port types, you can use unsigned signals internally, and type convert between them and the ports.)
Now as regards the expressions, they may overflow (and in the case of "second" obviously will!).
In simulation, these overflows OUGHT to be reported as arithmetic errors. (Note : at least one simulator runs with overflow checks off as the default setting! Just dumb...)
As the designer, you decide what the correct semantics for overflows are:
1. They represent bugs. Simulate with overflow checks enabled, detect and fix the bugs.
2. They are permitted, and e.g. negative numbers represent large positive numbers. Express this in the code, e.g. as output <= (input - first) mod 2**output'length; now anyone reading the code understands that overflow is allowed and simply wraps.
3. Overflow should saturate to the positive or negative limit. Signal this by writing output <= saturate(input - first); I'll leave writing the Saturate function as an exercise (a sketch of one case follows)...
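For the unsigned case here, one minimal sketch of the idea, under a hypothetical name sat_sub (the general Saturate function is still your exercise):
-- saturating unsigned subtract: clamps at zero instead of wrapping
function sat_sub(a, b : unsigned) return unsigned is
begin
    if a >= b then
        return a - b;
    else
        return to_unsigned(0, a'length);  -- the lower limit
    end if;
end function;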
The adding operators "+" and "-" are performed bitwise: std_logic_vector is an array type whose element base type, std_ulogic, represents 'bits' as a multi-level value system that includes metavalues. The result is bounded by the longer of the two operands; there is no overflow, any carry out of the top bit is simply lost.
See the source for package std_logic_unsigned:
function "+"(L: STD_LOGIC_VECTOR; R: STD_LOGIC_VECTOR) return STD_LOGIC_VECTOR is
-- pragma label_applies_to plus
constant length: INTEGER := maximum(L'length, R'length);
variable result : STD_LOGIC_VECTOR (length-1 downto 0);
begin
result := UNSIGNED(L) + UNSIGNED(R);-- pragma label plus
return std_logic_vector(result);
end;
Which uses the unsigned add from std_logic_arith:
function "+"(L: UNSIGNED; R: UNSIGNED) return UNSIGNED is
-- pragma label_applies_to plus
-- synopsys subpgm_id 236
constant length: INTEGER := max(L'length, R'length);
begin
return unsigned_plus(CONV_UNSIGNED(L, length),
CONV_UNSIGNED(R, length)); -- pragma label plus
end;
And this uses unsigned_plus, also found in std_logic_arith:
function unsigned_plus(A, B: UNSIGNED) return UNSIGNED is
    variable carry   : STD_ULOGIC;
    variable BV, sum : UNSIGNED (A'left downto 0);
    -- pragma map_to_operator ADD_UNS_OP
    -- pragma type_function LEFT_UNSIGNED_ARG
    -- pragma return_port_name Z
begin
    if (A(A'left) = 'X' or B(B'left) = 'X') then
        sum := (others => 'X');
        return(sum);
    end if;
    carry := '0';
    BV := B;
    for i in 0 to A'left loop
        sum(i) := A(i) xor BV(i) xor carry;
        carry := (A(i) and BV(i)) or
                 (A(i) and carry) or
                 (carry and BV(i));
    end loop;
    return sum;
end;
std_logic_vector is an unsigned representation: there is no concept of negative numbers; it's a bag of bits. If you want to signify signed operations, you should be using package numeric_std, and either type convert or use operands for your relational and adding operators that are type signed.
That being said, you'll get the same answers using std_logic_vector with Synopsys's std_logic_unsigned package or unsigned with the IEEE numeric_std package.
(And your last two use clauses aren't needed by the code you show.)
The reason you don't need a use clause making packages numeric_std or std_logic_arith visible is that you aren't using signed or unsigned types, and package std_logic_unsigned has its own use clause for std_logic_arith and otherwise has declarations for everything you're using in your design specification ("+", "-" and "<").
I have an input std_logic_vector of (0 to X).
X ranges from 0 to 1000 bytes, and the code should support any value of X.
I would like to slice the input into 128-bit blocks for further processing and operations.
a) How can it be done?
b) Is there a way to make the following pseudo-code work, so I can adapt it to solve a)?
I need to use the loop index for naming the signals, but I guess that's not possible in VHDL(?)
for i in 0 to N loop
block_i <= input (X, X-127);
end loop;
Thanks in advance.
Something like this?
library ieee;
use ieee.std_logic_1164.all;

entity slicer is
    generic(X : natural := 1000);
    port (input : in std_logic_vector(X*128-1 downto 0));
end entity;

architecture rtl of slicer is
    type block_type is array(0 to X-1) of std_logic_vector(127 downto 0);
    signal blocks : block_type;
begin
    slicing : for i in 0 to X-1 generate
        blocks(i) <= input(128*(i+1)-1 downto 128*i);
    end generate;
end rtl;
You have a few options for how to accomplish this. One is to use the flattened 1-D array that is selectively sliced, as demonstrated by JCLL. Another option is to create a new type that is an array of an array.
subtype word is std_logic_vector(127 downto 0);    -- Constrained subtype
type word_vec is array(natural range <>) of word;  -- New unconstrained type
...
entity foo is
    port (
        X : in word_vec  -- Gets our constraint when instantiated
    );
end entity;
...
for i in X'range loop
    blocks(i) <= X(i);
end loop;
This solution skips the arithmetic needed for the 1-D slicing but is limited by the need for a constrained type for the elements of word_vec. This last limitation is lifted in VHDL-2008 where you can do the following:
-- Both unconstrained arrays
type word_vec is array(natural range <>) of std_logic_vector;
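With that declaration, the element width is pinned down wherever a signal or port of the type is declared, e.g.:
signal blocks : word_vec(0 to 7)(127 downto 0);  -- 8 blocks of 128 bits each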
The best solution depends on what your task is and how much flexibility you need for size changes in the future.
A final less appealing option is to use a 2-D array but that gets ugly when you need more than bitwise access.
Yes, you can assign parts of a large logic vector to another smaller vector. I'm not sure about your specific implementation (you did not provide the signal types and sizes -- is the large vector 1000 bytes or 1000 bits?). However, if you know what X is at synthesis time, use generics, like:
entity foo is
    generic(X : natural);
    port(input   : in  std_logic_vector(X-1 downto 0);
         block_i : out std_logic_vector(127 downto 0));
end entity;
Otherwise you just need to pass in the size as well; note that a port can't be used to constrain another port, so declare the input at its worst-case width:
entity foo is
    port(input   : in  std_logic_vector(1000*8-1 downto 0);  -- worst-case width
         block_i : out std_logic_vector(127 downto 0);
         X       : in  natural);
end entity;
And then use the size when you are assigning parts to block_i.
Note that you will need to either use the generic or a hard-coded constant (i.e. 1000 for the worst case) for the loop; VHDL does not like variable loop ranges. You can work around this, but I usually don't need to (see: Using FOR loop in VHDL with a variable)
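A sketch of that pattern, assuming the generic form above, that X is a multiple of 128, and a hypothetical array signal block_arr to hold the slices:
slicing : process(input)
begin
    for i in 0 to X/128 - 1 loop  -- generic bound: static at elaboration
        block_arr(i) <= input(128*(i+1)-1 downto 128*i);
    end loop;
end process;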
I'm learning VHDL, and I'm having a problem with some code I'm trying to write to resolve a bound-check exception.
Here is my basic summarized code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use ieee.std_logic_arith.all;
use IEEE.NUMERIC_STD.ALL;
use ieee.std_logic_unsigned.all;
...
port(
Address: in std_logic_vector(15 downto 0);
...
constant SIZE : integer := 4096;
variable addr: integer range 0 to SIZE-1 := 0;
...
process ...
addr := conv_integer(Address) and (SIZE-1); --error here
The error message I get is
src/memory.vhd:37:35: no function declarations for operator "and"
Basically, my goal is to have a 16-bit address bus reference a memory of only 4096 bytes. Why do I get this odd error? Am I missing a library include or something?
First: don't use both std_logic_arith and numeric_std together; they conflict. And you don't need std_logic_arith.
You can't do bitwise ANDs on integers, so you need to do something like:
addr := to_integer(unsigned(Address) and to_unsigned(SIZE-1, Address'length));
But you'll probably want to guarantee SIZE is a power-of-2
What I tend to do is create a constant in bits and work up from there:
constant mem_bits : integer := 16;
constant SIZE     : integer := 2**mem_bits;
then
addr := to_integer(unsigned(Address(mem_bits-1 downto 0)));
I don't think and is defined for integers, although there might be a standard library that includes that functionality.
Why not keep your address as a std_logic_vector, though? When it comes to addresses, you often want to do easy decoding by looking directly at certain bits, so I think it makes rather good sense.
Just make addr a std_logic_vector(11 downto 0) and assign the lowest 12 bits of Address to it; that will ignore the upper 4 bits and give you 4096 bytes of space (for an 8-bit databus).
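In code, roughly (addr shown as a variable to match the question):
variable addr : std_logic_vector(11 downto 0);
...
addr := Address(11 downto 0);  -- low 12 bits: 4096 locations; top 4 bits ignored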
and does not make sense for an integer. An integer is a number within a range, but it has no predefined representation in binary for a bitwise operator to work on.
You can use something like the syntax below:
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.std_logic_arith.all;

entity testand is
    generic (nBITS : integer := 32);
    port (
        i : in  integer;
        a : in  std_logic_vector(nBITS-1 downto 0);
        o : out std_logic_vector(nBITS-1 downto 0));
end entity;

architecture beh of testand is
    signal v : std_logic_vector(a'length-1 downto 0);
begin
    v <= std_logic_vector(conv_unsigned(i, o'length));
    o <= v and a;
end architecture;
In your specific case you could also use "mod SIZE" instead of "and (SIZE-1)".
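For example, keeping the question's conv_integer conversion:
addr := conv_integer(Address) mod SIZE;  -- mod is defined for integers, unlike "and"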