How to make the two's complement of a number without using an adder - VHDL

In two's complement, to invert the sign of a number you usually just invert every bit and add 1.
For example:
011 (3)
100 + 1 = 101 (-3)
In VHDL this is:
a <= std_logic_vector(unsigned(not(a)) + 1);
In this way the synthesizer uses an N-bit adder.
Is there another more efficient solution without using the adder?

I would guess there's not an easier way to do it, but the adder is probably not as bad as you think it is.
If you are trying to, say, invert a 32-bit number, the synthesis tool might start with a 32-bit adder structure. However, upon then seeing that the B input is always tied to 1, it can 'hollow out' a lot of the structure because of the gates with constant inputs (AND gates with one pin tied to ground, OR gates with one pin tied to logic 1, etc).
So what you're left with I'd imagine would be a reasonably efficient blob of logic which just increments an input number.

If you are just trying to create a two's complement bit pattern, then a unary - also works.
a = 3'b001;    // 1
b = -a;        // 3'b111 (-1)
c = ~a + 1;    // 3'b111 (-1)
Tim has also correctly pointed out that just because you use a + or imply one through a unary -, the synthesis tools are free to optimise this.
A full adder has 3 inputs (A, B, Carry_in) and 2 outputs (Sum, Carry_out). Since for our use the second operand is only 1 bit wide and there is no carry into the LSB, we do not need a full adder.
A half adder, which has 2 inputs (A, B) and 2 outputs (Sum, Carry), is perfect here.
For the LSB, the half adder's B input will be tied high: that is the +1. For the remaining bits, the B input is used to propagate the carry from the previous bit.
There is no way that I am aware of to state in Verilog that you want a half adder, but adding 1 bit to a number of any size only requires half adders rather than full adders.
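As a concrete illustration, here is a minimal sketch (the entity and signal names are mine, not from the thread) of what such an increment-only structure boils down to: one half adder per bit, an XOR for the sum and an AND for the carry, with the "+1" injected as the carry into the LSB.
library ieee;
use ieee.std_logic_1164.all;

entity increment is
    generic (N : positive := 8);
    port (
        a : in  std_logic_vector(N-1 downto 0);
        y : out std_logic_vector(N-1 downto 0)
    );
end entity;

architecture rtl of increment is
    signal carry : std_logic_vector(N downto 0);
begin
    carry(0) <= '1';                        -- the "+1" enters as the LSB carry
    gen : for i in 0 to N-1 generate
        y(i)       <= a(i) xor carry(i);    -- half-adder sum
        carry(i+1) <= a(i) and carry(i);    -- half-adder carry
    end generate;
end architecture;
Two's complement negation is then just this chain fed with not a, which is essentially what a synthesizer is expected to reduce unsigned(not a) + 1 to.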

Related

Why is my 4-bit binary half adder circuit design showing incompatible widths in Logisim?

I am trying to create a half adder circuit in Logisim to add two 4-bit binary numbers, but Logisim tells me that I have incompatible widths, and I therefore have to change the bit width of every single component, including the carry-out, which is supposed to be 1 bit (showing carry 1 or carry 0). Now, I understand that my output has to be at least 4 bits long and that I need an extra bit as a carry out, but even when I change the widths the way Logisim wants, my design no longer works.
Half adder of two 4-bit binaries
It's simply because the input width on the AND gate is 1 bit. You can't have a 4-bit output going into a 1-bit input.

Iterating over bits in FPGA

Now I'm trying to figure out the best method for iterating over bits in an FPGA. I'm using a variation of the fast powering algorithm, a.k.a. exponentiation by squaring (more precisely, it's the double-and-add algorithm for elliptic curve mathematics). To implement it in hardware, I know I must use an FSM which does the iteration. My problem is how to properly "handle" moving from bit to bit. My first thought was to switch the order of bytes, but when my k = 17 is a 32-bit value I must discard the first 27 bits, so that's a rather poor idea. Another concept was a "moving" 0001000 pattern bitwise ANDed with the number, but that also requires finding the first nonzero bit.
TL;DR
Take, for example, k = 17 (32 bits, so 27 zeros followed by 10001); I want to iterate 5 times (that means I start the iteration on the first "real" bit of the number), knowing each bit I iterate over.
Language doesn't matter - I need only the algorithm, not a solution in a specific language. However, if it is easily done in Verilog, I wouldn't mind. :P
A dedicated combinatorial circuit to find the first nonzero bit, shift it to the first position and tell you the shift amount should be fairly light on resources.
In principle, the compiler should be able to find this solution on its own and improve on it:
if none of the top 16 bits are set, set bit 4 of the shift amount, and shift by 16.
if none of the top 8 bits are set, set bit 3 of the shift amount, and shift by 8.
...
The compiler should be able to find further optimizations on this.
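In VHDL, a sketch of that binary-search normalizer for a 32-bit k could look like this (purely combinational; the entity and signal names are mine, not from the answer, and k = 0 would need separate handling):
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity normalize32 is
    port (
        k         : in  std_logic_vector(31 downto 0);
        k_norm    : out std_logic_vector(31 downto 0);
        shift_amt : out std_logic_vector(4 downto 0)
    );
end entity;

architecture comb of normalize32 is
begin
    process (k)
        variable v : unsigned(31 downto 0);
        variable s : unsigned(4 downto 0);
    begin
        v := unsigned(k);
        s := (others => '0');
        if v(31 downto 16) = 0 then s(4) := '1'; v := shift_left(v, 16); end if;
        if v(31 downto 24) = 0 then s(3) := '1'; v := shift_left(v, 8);  end if;
        if v(31 downto 28) = 0 then s(2) := '1'; v := shift_left(v, 4);  end if;
        if v(31 downto 30) = 0 then s(1) := '1'; v := shift_left(v, 2);  end if;
        if v(31) = '0'          then s(0) := '1'; v := shift_left(v, 1); end if;
        k_norm    <= std_logic_vector(v);
        shift_amt <= std_logic_vector(s);
    end process;
end architecture;
After this block the leading '1' of k sits at bit 31 of k_norm and shift_amt tells you how many positions were skipped, so the FSM can start iterating at the first significant bit.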
I do not code for FPGAs, but still:
rewrite the algorithm to iterate the number x from LSB to MSB
then in each iteration shift x right by 1 bit
stop when x == 0.
This way you have the bit scan inside your main loop and do not need additional cycles for it.
Checking x != 0 is done easily by ORing all of its bits together.
C++ code example:
DWORD x = ...;
for (; x != 0; x >>= 1)
{
    // here is your iteration loop stuff, like:
    if (DWORD(x & 1) != 0) ...;
}
Something like:
always @* begin
    casex (num)
        32'b1xxx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx: k = 32;
        32'b01xx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx: k = 31;
        32'b001x_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx: k = 30;
        // ... and so on down to k = 1, plus a default for num == 0
    endcase
end
Should give you the value of k.
You can have a shift register that can be parallel-loaded, so you can write a 1 to the kth bit and know when your iterations have ended.
If you loop from 0 to 31 and discard the 27 leading zeros... you aren't necessarily wasting cycles. It depends on whether you've put this inside a synchronous process or an asynchronous one.
One gives you a rather small clocked circuit with a 32 clock latency.
The other gives you a giant rat's nest of ANDs and ORs which won't run at a very high frequency.
Depends on what you want. Remember though, that even if you do decide to loop over 32 clocks, you can PIPELINE it such that you start a new calculation every clock. It might take you 32 clocks to get an answer, but you CAN do them at high speed.

In VHDL, how to count the leading zeros of a vector?

I'm working on a VHDL project and I'm facing a problem calculating the length of a vector. I know there is a 'length attribute of a vector, but that is not the length I'm looking for. For example, I have the std_logic_vector
E : std_logic_vector(7 downto 0);
then
E <= "00011010";
so len = E'length = 8, but I'm not looking for this. I want to calculate len after discarding the leftmost zeros, so len = 5.
I know that I can use a for loop, checking '0' bits from left to right and stopping when a '1' bit occurs. But that's not efficient, because I have 1024 or more bits and that will slow my circuit. So is there any method or algorithm to calculate the length in an efficient way, such as combinational logic with log(n) levels of gates (where n = number of bits)?
What you do with your "bit counting" is very similar to the logarithm (base 2).
This is commonly used in VHDL to figure out how many bits are required to represent a signal. For example if you want to store up to N elements in RAM, the number of bits required for addressing that RAM is ceil(log2(N)). For this I use:
function log2ceil(m : natural) return natural is
begin  -- note: for log(0) we return 0
    for n in 0 to integer'high loop
        if 2**n >= m then
            return n;
        end if;
    end loop;
end function log2ceil;
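For instance, a hypothetical use of it when sizing an address bus (DEPTH and addr are made-up names, and ieee.numeric_std is assumed for unsigned):
constant DEPTH : natural := 300;
signal   addr  : unsigned(log2ceil(DEPTH) - 1 downto 0);   -- 9 bits, since 2**9 = 512 >= 300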
Usually, you want to do this at synthesis time with constants, and speed is no concern. But you can also generate FPGA logic, if that's really what you want.
As others have mentioned, a "for" loop in VHDL is just used to generate combinational logic (a lookup table), which may be slow due to long signal paths, but it still only takes a single clock. What can happen is that your maximum clock frequency goes down. Usually this is only a problem if you operate on vectors larger than 64 bits (you mentioned 1024 bits) with clocks faster than 100 MHz. Maybe the synthesizer has already told you that this is your problem; otherwise, I suggest you try it first.
Then you have to split the operation up over multiple clocks and store some intermediate result in an FF. (I would forget upfront about trying to outsmart the synthesizer by rearranging your code. A lookup table is a table. Why should it matter how you generate the values in this table? But make sure you tell the synthesizer about "don't care" values if you have them.)
If speed is your concern, use the first clock to check all 16-bit blocks in parallel (independently of each other), and then use a second clock cycle to combine the results of all 16-bit blocks into a single result. If the amount of FPGA logic is your concern, implement a state machine that checks a single 16-bit block at every clock cycle (a rough sketch of the parallel variant is shown below).
But be careful that you don't re-invent the CPU while doing that.
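A rough two-stage sketch of the "all 16-bit blocks in parallel, then combine" variant (all entity and signal names are mine and the width is fixed at 1024 bits for concreteness; this is not code from the answer):
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity clz1024_2stage is
    port (
        clk : in  std_logic;
        e   : in  std_logic_vector(1023 downto 0);
        lz  : out integer range 0 to 1024
    );
end entity;

architecture rtl of clz1024_2stage is
    signal nonzero : std_logic_vector(63 downto 0);    -- one flag per 16-bit block
    signal e_reg   : std_logic_vector(1023 downto 0);
begin
    -- clock 1: reduce every 16-bit block to a single "contains a '1'" flag
    stage1 : process (clk)
    begin
        if rising_edge(clk) then
            for j in 0 to 63 loop
                if unsigned(e(16*j + 15 downto 16*j)) = 0 then
                    nonzero(j) <= '0';
                else
                    nonzero(j) <= '1';
                end if;
            end loop;
            e_reg <= e;                                 -- keep the data for stage 2
        end if;
    end process;

    -- clock 2: pick the first non-empty block from the MSB side, finish inside it
    stage2 : process (clk)
        variable count : integer range 0 to 1024;
    begin
        if rising_edge(clk) then
            count := 1024;                              -- default: the vector is all zeros
            for j in 63 downto 0 loop
                if nonzero(j) = '1' then
                    for i in 15 downto 0 loop
                        if e_reg(16*j + i) = '1' then
                            count := (63 - j)*16 + (15 - i);
                            exit;
                        end if;
                    end loop;
                    exit;
                end if;
            end loop;
            lz <= count;
        end if;
    end process;
end architecture;
The result is available two clocks after e is presented, and a new vector can be accepted every clock.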
The problem with using a loop is that when you synthesize you might get a very long chain of logic.
Another way to look at your problem is to find the index of the most significant set bit.
To do this you can use a priority encoder. The nice thing about this is you can make a large priority encoder by using smaller priority encoders in a tree structure, so the delay is O(log N) instead of O(N).
Here is a 4 bit priority encoder:
http://en.wikibooks.org/wiki/VHDL_for_FPGA_Design/Priority_Encoder
You can make a 16-bit priority encoder using five of these blocks (four leaves plus a fifth to combine their outputs), then a 256-bit encoder from sixteen 16-bit encoders plus one more to combine them, etc.
But since you have so many bits it is going to be fairly huge.
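A sketch of that tree structure (the entity, function and signal names are mine, and the 4-bit encoder is written inline as a function rather than taken from the linked page): a 16-bit priority encoder built from four 4-bit leaf encoders plus a fifth one that arbitrates between them.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity prio16 is
    port (
        d     : in  std_logic_vector(15 downto 0);
        q     : out std_logic_vector(3 downto 0);   -- index of the highest '1'
        valid : out std_logic                       -- '1' if any input bit is set
    );
end entity;

architecture tree of prio16 is
    -- 4-bit priority encoder packed as "index(1:0) & valid"
    function pe4(x : std_logic_vector(3 downto 0)) return std_logic_vector is
    begin
        if    x(3) = '1' then return "11" & '1';
        elsif x(2) = '1' then return "10" & '1';
        elsif x(1) = '1' then return "01" & '1';
        elsif x(0) = '1' then return "00" & '1';
        else                  return "00" & '0';
        end if;
    end function;

    type res_array is array (0 to 3) of std_logic_vector(2 downto 0);
    signal leaf  : res_array;
    signal flags : std_logic_vector(3 downto 0);
begin
    gen : for k in 0 to 3 generate
        leaf(k) <= pe4(d(4*k + 3 downto 4*k));            -- four leaf encoders
    end generate;

    flags <= leaf(3)(0) & leaf(2)(0) & leaf(1)(0) & leaf(0)(0);

    combine : process (leaf, flags)
        variable top : std_logic_vector(2 downto 0);
    begin
        top   := pe4(flags);                              -- fifth encoder picks the winning leaf
        valid <= top(0);
        q     <= top(2 downto 1)                          -- which leaf (upper 2 bits of the index)
                 & leaf(to_integer(unsigned(top(2 downto 1))))(2 downto 1);  -- index inside it
    end process;
end architecture;
The same pattern repeats for wider inputs: each extra level adds 2 bits to the index and one more encoder delay, which is where the O(log N) comes from.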
Well, VHDL is not software; it does not take time to perform an operation like this, it just takes resources from your FPGA.
You can divide your 1024-bit data into 32-bit sections and OR all the bits in each section; this way, you check 32 bits at a time. It is not really necessary, since the for loop would work perfectly fine for what you want to do: just write the code, look for the first 1 in the array, stop the loop, and use the loop index as the pointer to the first 1 in your array. I didn't compile this code, but something like this should work for you:
FirstOne <= 1023;
for i in E'range loop          -- for a "downto" vector this scans from the MSB down
    if E(i) = '1' then
        FirstOne <= i;
        exit;
    end if;
end loop;
It will not be such a big block inside an FPGA after all.
Most synthesizers these days support recursive functions. And indeed this will give you complexity comparable to log(N) where N is the number of bits:
Cut your vector into halves
If the top half are all zeros
The leading bit of your answer is '1', low bits depend on the bottom half vector
Otherwise
The leading bit of your answer is '0', low bits depend on the top half vector
Recurse on the half vector of interest chosen above
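A sketch of that recursion as a VHDL function (the function name is mine; it assumes ieee.numeric_std is visible and can live in a package or in an architecture's declarative part):
function count_leading_zeros(v : std_logic_vector) return natural is
    constant n  : natural := v'length;
    alias    vv : std_logic_vector(n - 1 downto 0) is v;  -- normalize the index range
begin
    if n = 1 then
        if vv(0) = '1' then return 0; else return 1; end if;
    elsif unsigned(vv(n - 1 downto n/2)) = 0 then
        -- top half all zeros: it contributes its full width to the count
        return (n - n/2) + count_leading_zeros(vv(n/2 - 1 downto 0));
    else
        -- a '1' lives somewhere in the top half
        return count_leading_zeros(vv(n - 1 downto n/2));
    end if;
end function count_leading_zeros;
For the 8-bit example above, count_leading_zeros("00011010") returns 3, so len = 8 - 3 = 5.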

VHDL - creating a variable number of signals

I'm creating a full adder with a variable number of bits. I've got a component that is a half-adder which takes in three inputs (the two bits to add, and a carry in bit) and gives 2 outputs (one bit output and a carry out bit).
I need to tie the carry out of one half-adder to the carry in of another. And I need to do this a variable number of times (if I'm adding 4 digit numbers, I'll need 4 half adders. If I'm doing 32 bit numbers, I'll need 32 half adders).
I was going to tie the carry outs of one half-adder to the carry in of another using signals, but I don't know how to create a variable number of signals.
I can instantiate a variable number of half-adders using a for-loop in a process, but since signals are defined outside of processes, I can't use a for loop for it. I don't know how I should tie the half-adders together.
The easiest way to write an adder in VHDL is not to worry about full adders and half adders, but just type:
a <= b + c;
where a, b and c are signed or unsigned
95% of the time, the synthesis tools will do a better job than you would.
I think you want variable-width signals, not a variable number of signals.
Your signals need to be std_logic_vector(31 downto 0) for example - and then you wire up the bits of those signals to your half-adders appropriately.
Of course, since those signals are numbers, don't use std_logic_vector; use signed or unsigned (and the ieee.numeric_std library).
And (as Philippe rightly points out) unless this is a learning exercise, just use the + operator.
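If it is a learning exercise and you do want explicit adder cells, here is a sketch of the usual pattern (the ripple_adder entity and the port names of the assumed one-bit full_adder cell, which is what the question calls a half-adder, are mine; adapt them to your own component): declare one signal that is a vector of carries, and let a for-generate loop, outside any process, wire the chain.
library ieee;
use ieee.std_logic_1164.all;

entity ripple_adder is
    generic (N : positive := 4);
    port (
        a, b : in  std_logic_vector(N-1 downto 0);
        cin  : in  std_logic;
        sum  : out std_logic_vector(N-1 downto 0);
        cout : out std_logic
    );
end entity;

architecture structural of ripple_adder is
    -- one carry "wire" per stage, declared once for any N
    signal carry : std_logic_vector(N downto 0);
begin
    carry(0) <= cin;

    gen : for i in 0 to N-1 generate
        -- "full_adder" and its port names are assumed; substitute your own cell
        cell : entity work.full_adder
            port map (
                a    => a(i),
                b    => b(i),
                cin  => carry(i),
                sum  => sum(i),
                cout => carry(i+1)
            );
    end generate;

    cout <= carry(N);
end architecture;
The carry vector gives you the "variable number of signals": it is a single signal whose width tracks N, and the generate loop lives outside any process, so ordinary signal declarations and assignments apply.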

downto vs. to in VHDL

I'm not sure I understand the difference between 'downto' vs. 'to' in VHDL.
I've seen some online explanations, but I still don't think I understand. Can anyone lay it out for me?
If you take a processor, for little-endian systems we can use "downto" and for big-endian systems we use "to".
For example,
signal t1 : std_logic_vector(7 downto 0); --7th bit is MSB and 0th bit is LSB here.
and,
signal t2 : std_logic_vector(0 to 7); --0th bit is MSB and 7th bit is LSB here.
You are free to use both types of representations, just have to make sure that other parts of the design are written accordingly.
This post says something different:
"The term big endian (or little endian) designates the byte order in byte oriented processors and doesn't fit for VHDL bit vectors. The technical term is ascending and descending array range. Predefined numerical types like signed and unsigned are restricted to descending ranges by convention."
So, this answer can be confusing...
One goes up, one goes down:
-- gives 0, 1, 2, 3:
for i in 0 to 3 loop
-- gives 3, 2, 1, 0:
for i in 3 downto 0 loop
An interesting online reference I found is here, where, among other things, under the section "Array Assignments," you can read:
Two array objects can be assigned to each other, as long as they are of the same type and same size. It is important to note that the assignment is by position, and not by index number. There is no concept of a most significant bit defined within the language. It is strictly interpreted by the user who uses the array. Here are examples of array assignments:
with the following declaration:
....
SIGNAL z_bus: BIT_VECTOR (3 DOWNTO 0);
SIGNAL a_bus: BIT_VECTOR (1 TO 4);
....
z_bus <= a_bus;
is the same as:
z_bus(3) <= a_bus(1);
z_bus(2) <= a_bus(2);
z_bus(1) <= a_bus(3);
z_bus(0) <= a_bus(4);
Observations:
1) Any difference between "downto" and "to" appears when we want to use a bit-vector not just to represent an array of bits, where each bit has independent behavior, but to represent an integer number. Then there is a difference in bit significance, because of the way numbers are processed by circuits like adders, multipliers, etc.
In this arguably special case, assuming 0 < x < y, it is a usual convention that when using x to y, x is the most significant bit (MSB) and y the least significant bit (LSB). Conversely, when using y downto x, y is the MSB and x the LSB. You can say the difference, for bit-vectors representing integers, comes from the fact that the index of the MSB comes first, whether you use "to" or "downto" (though the first index is smaller than the second when using "to" and larger when using "downto").
2) You must note that y downto x meaning y is the MSB and, conversely, x to y meaning x is the MSB are merely conventions, usually followed by Intellectual Property (IP) cores you can find implemented, even for free. It is also the convention used by the IEEE VHDL libraries, I think, when converting between bit-vectors and integers. But there is nothing difficult about structurally modeling, e.g., a 32-bit adder that uses input bit-vectors of the form y downto x and treats y as the LSB, or that uses input bit-vectors of the form x to y where x is used as the LSB...
Nevertheless, it is reasonable to use the notation x downto 0 for a non-negative integer, because each bit position then corresponds to the power of 2 by which that digit is multiplied when adding up to the number's value. This seems to have been carried over into most other practice involving integers.
3) Bit order has nothing to do with endianness. Endianness refers to byte ordering (well, byte ordering is a form of bit ordering...). Endianness is an issue exposed at the Instruction Set Architecture (ISA) level, i.e., it is visible to the programmer, who may access the same memory address with different operand sizes (e.g., word, byte, double word, etc). Bit ordering in the implementation (as in the question) is never exposed at the ISA level. Only the semantics of relative bit positions are visible to the programmer (e.g., shift left logical can actually be implemented by shifting right a register whose bit significance is reversed in the implementation).
(It is amazing how many answers that mention this have been voted up!)
In vector types, the left-most bit is the most significant. Hence, for a 0 to n range, bit 0 is the MSB; for an n downto 0 range, bit n is the MSB.
This comes in handy when you are combining IP which uses both big-endian and little-endian bit orderings to keep your head straight!
For example, MicroBlaze is big-endian and uses 0 as its MSB. I interfaced one to an external device which was little-endian, so I used 15 downto 0 on the external pins and remapped them to 16 to 31 on the MicroBlaze end in my interface core.
Note that a plain le_vec <= be_vec; assigns by position, not by index, so if you want to preserve each bit's significance you have to remap the bits explicitly.
Mostly it just keeps you from mixing up the bit order when you instantiate components. You wouldn't want to store the LSB in X(0) and pass that to a component that expects X(0) to contain the MSB.
Practically speaking, I tend to use DOWNTO for vectors of bits (STD_LOGIC_VECTOR(7 DOWNTO 0) or UNSIGNED(31 DOWNTO 0)) and TO for RAMs (TYPE data_ram IS ARRAY(NATURAL RANGE <>) OF UNSIGNED(15 DOWNTO 0); SIGNAL r : data_ram(0 TO 1023);) and integral counters (SIGNAL counter : NATURAL RANGE 0 TO max_delay;).
To expand on @KerrekSB's answer, consider a priority encoder:
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
USE ieee.numeric_std.ALL;

ENTITY prio IS
    PORT (
        a : IN  STD_LOGIC_VECTOR(7 DOWNTO 1);
        y : OUT STD_LOGIC_VECTOR(2 DOWNTO 0)
    );
END ENTITY;

ARCHITECTURE seq OF prio IS
BEGIN
    PROCESS (a)
    BEGIN
        y <= "000";
        FOR i IN a'LOW TO a'HIGH LOOP
            IF a(i) = '1' THEN
                y <= STD_LOGIC_VECTOR(TO_UNSIGNED(i, y'LENGTH));
            END IF;
        END LOOP;
    END PROCESS;
END ARCHITECTURE;
The direction of the loop (TO or DOWNTO) controls what happens when multiple inputs are asserted (example: a := "0010100", i.e. a(5) and a(3) set). With TO, the highest numbered input wins (y <= "101"). With DOWNTO, the lowest numbered input wins (y <= "011"). This is because the last assignment in a process takes precedence. But you could also use an exit statement to determine the priority.
I was taught that a good rule is to use "downto" where maintaining binary order is important (for instance an 8-bit signal holding a character) and "to" when the bits are not necessarily related, for instance if each bit in the signal represents an LED that you are turning on and off.
connecting a 4 bit "downto" and a 4 bit "to" looks something like
sig1(3 downto 0)<=sig2(0 to 3)
-------3--------------------0
-------2--------------------1
-------1--------------------2
-------0--------------------3
taking part of the signal instead sig1(2 downto 1) <= sig2(0 to 1)
-------2--------------------0
-------1--------------------1
Though there is nothing wrong with any of the answers above, I have always believed that the provision of both is to support two paradigms.
First is number representation. If I write the number 7249 you immediately interpret it as 7 thousand 2 hundred and forty-nine. Numbers are read from left to right, with the most significant digit on the left. This is the 'downto' case.
The second is time representation where we always think of time progressing from left to right. On a clock the numbers increase over time and 2 always follows 1. Here I naturally write the order of bits from left 'to' right in time ascending order regardless of the representation of the bits. In RS232 for instance we start with a start bit followed by 8 data bits (LSB first) then a stop bit. Here the MSB is on the right; the 'to' case.
As said, the most important thing is not to mix them arbitrarily. In decoding an RS232 stream we may end up doing just that, turning bits received in time order into bytes which are MSB first, but this is very much the exception rather than the rule.
