Integer to Binary Conversion in Simulink - fpga

This might look like a repetition of my earlier question, but I think it's not.
I am looking for a technique to convert a signal from decimal format to binary format.
I intend to use the Simulink blocks in the Xilinx library to convert from decimal to binary format.
So if the input is 3, the expected output should be 11 (over 2 clock cycles); I am looking for the output to be produced serially.
Please suggest how to do it; any pointers on the internet would be helpful.
Thanks

You are correct; what you need is the Parallel to Serial block from System Generator.
It is described in this document:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_ref.pdf
This block is a rate changing block. Check the mentions of the parallel to serial block in these documents for further descriptions:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_gs.pdf
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_user.pdf

Use a normal constant block with a Matlab variable in it; this already gives the output in "normal" binary (assuming you set its properties to unsigned with the binary point at 0).
Then you need to write a small serialiser block which takes that input, latches it into a shift register, and then shifts the register once per clock cycle, with the bit that "falls off the end" becoming your output bit. Depending on which way you shift, you can make it come out MSB first or LSB first.
You'll have to build the shift register out of ordinary registers and a mux before each one to select whether you are doing a parallel load or shifting. (This is the sort of thing which is a couple of lines of code in VHDL, but a right faff in graphics).
If you have to increase the serial rate, you need to clock it from a faster clock - you could use a DCM to generate this.
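For reference, here is a minimal sketch of that serialiser in VHDL (my own code; the width, port names, and MSB-first direction are assumptions, not anything from System Generator):

library ieee;
use ieee.std_logic_1164.all;

entity serialiser is
  generic (WIDTH : positive := 8);
  port (
    clk      : in  std_logic;
    load     : in  std_logic;                              -- pulse high for one cycle to latch a new word
    data_in  : in  std_logic_vector(WIDTH - 1 downto 0);
    data_out : out std_logic
  );
end entity;

architecture rtl of serialiser is
  signal sreg : std_logic_vector(WIDTH - 1 downto 0) := (others => '0');
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        sreg <= data_in;                          -- parallel load
      else
        sreg <= sreg(WIDTH - 2 downto 0) & '0';   -- shift towards the MSB
      end if;
    end if;
  end process;

  data_out <= sreg(WIDTH - 1);                    -- MSB comes out first; shift the other way for LSB first
end architecture;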

Matlab has a dec2bin function that will convert a decimal number to a binary string. So, for example, dec2bin(3) would return '11'.
There's also a corresponding bin2dec which takes a binary string and converts it to a decimal number, so bin2dec('11') would return 3.
If you want to convert a non-integer decimal number to a binary string, you'll first want to determine the smallest binary place you want to represent, and then do a little bit of pre- and post-processing, combined with dec2bin, to get the result you're looking for. So, if the smallest binary place you want is the 1/512th place (or 2^-9), then you could do the following (where binPrecision equals 1/512):
function result = myDec2Bin(decNum, binPrecision)
    isNegative = (decNum < 0);
    % Integer part: plain dec2bin on the absolute value.
    intPart       = floor(abs(decNum));
    binaryIntPart = dec2bin(intPart);
    % Fractional part: scale by 1/binPrecision so it becomes an integer, then dec2bin it.
    fracPart       = abs(decNum) - intPart;
    scaledFracPart = round(fracPart / binPrecision);
    scaledBinRep   = dec2bin(scaledFracPart);
    % Zero-pad the fractional bits to log2(1/binPrecision) places: add a leading 1
    % one decimal place above, convert to text, then drop that leading character.
    temp   = num2str(10^log2(1/binPrecision) + str2num(scaledBinRep), '%d');
    result = [binaryIntPart, '.', temp(2:end)];
    if isNegative
        result = ['-', result];
    end
end
The result of myDec2Bin(0.256, 1/512) would then be 0.010000011, and the result of myDec2Bin(-0.984, 1/512) would be -0.111111000. (Note that the output is a string.)

Related

Fastest algorithm to convert hexadecimal numbers into decimal form without using a fixed length variable to store the result

I want to write a program to convert hexadecimal numbers into their decimal forms without using a variable of fixed length to store the result because that would restrict the range of inputs that my program can work with.
Let's say I were to use a variable of type long long int to calculate, store and print the result. Doing so would limit the range of hexadecimal numbers that my program can handle to between 8000000000000001 and 7FFFFFFFFFFFFFFF. Anything outside this range would cause the variable to overflow.
I did write a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations but it runs much slower, even for numbers that are as big as 7FFFFFFFF!
Then I stumbled onto this site which could take numbers that are way outside the range of a 64 bit variable. I tried their converter with numbers much larger than 16^65 - 1 and still couldn't get it to overflow. It just kept on going and printing the result.
I figured that they must be using a much better algorithm for hex to decimal conversion, one that isn't limited to 64 bit values.
So far, Google's search results have only led me to algorithms that use some fixed-length variable for storing the result.
That's why I am here. I wanna know if such an algorithm exists and if it does, what is it?
Well, it sounds like you already did it when you wrote "a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations".
Converting from base 16 (hexadecimal) to base 10 means implementing multiplication and addition of numbers in a base-10^x representation. Then, for each hex digit d, you calculate result = result*16 + d; for example, 0x1A2B becomes ((1*16 + 10)*16 + 2)*16 + 11 = 6699. When you're done you have the same number in a base-10 representation that is easy to write out as a decimal string.
There could be any number of reasons why your string-based method was slow. If you provide it, I'm sure someone could comment.
The most important trick for making it reasonably fast, though, is to pick the right base to convert to and from. I would probably do the multiplication and addition in base 10^9, so that each digit is as large as possible while still fitting into a 32-bit integer, and process 7 hex digits at a time, which is as many as I can while only multiplying by single digits.
For every 7 hex digits, I'd convert them to a number d, and then do result = result * 16^7 + d.
Then I can get the 9 decimal digits for each resulting digit in base 10^9.
This process is pretty easy, since you only have to multiply by single digits. I'm sure there are faster, more complicated ways that recursively break the number into equal-sized pieces.

Make a previously unknown number of parallel operations in VHDL

I'm working on a project for which I need to do calculations with vectors (orthogonalizing a matrix using the Gram-Schmidt method). The length of these vectors is not known in advance; the design must be able to adapt to different lengths. One of these calculations is computing a new vector C, which is the result of adding A and B. Each element of the vectors is a fixed-point number.
I want C(i) = A(i) + B(i) for all elements of the vectors (for i = 0 to N, where N is the vector length).
I can find 2 solutions for this but both present some problems:
1- I can declare, in the entity, vectors whose length is set by a generic and then just write a for loop which goes through the whole vector.
for I in 0 to N loop
C(I)<=A(I)+B(I);
end loop;
The problem with this solution is that the execution would be sequential, and therefore slow. I'm not completely sure about this and I don't know how to check it, but I guess that the compiler is not smart enough to notice that it can be processed in parallel. In this application speed is a key factor.
2- I can declare vectors which are as long as the maximum possible length for the actual data and fill them with zeroes. Then I could just assign:
C(0)<=A(0)+B(0);
C(1)<=A(1)+B(1);
C(2)<=A(2)+B(2);
...
C(Nmax)<=A(Nmax)+B(Nmax);
This is not an elegant solution, and in this application N can be between 3 and 300, so it could be a complete waste and tedious to program.
3- I want to find a third solution which is able to create a number (set by the generic) of combinational calculations following a template such as C(i) = A(i) + B(i). Is there any solution like this? It would effectively be a loop whose iterations are not executed sequentially but all at the same time.
I know that similar stuff can be done using CUDA but this project is actually a comparison between GPUs and FPGAs, so changing the platform is not a suitable solution either.
Thank you in advance
Edit: I have thought of another, unsatisfactory, solution, but I want to share it in case it is helpful for somebody else checking this in the future. Given that A and B have the same length, you can write them in a 1-D format, that is: A(normal) = [1001,1100,0011], A(1-D) = 100111000011. The same would be done with B.
If you know beforehand that the sum of any two possible numbers can be expressed in the same number of bits, there will be no problem. So with 4 unsigned bits you would have to make sure that in every possible case the numbers in A or B are not higher than 0111. You could then just write C(1-D) = A(1-D) + B(1-D) and assign C(0) = C(1-D)(3 downto 0), C(1) = C(1-D)(7 downto 4), etc.
If you cannot make sure that the numbers are not higher than 0111 (in the 4-bit case), it won't work.
You might be able to use the length attribute to create a loop depending on the size of your vector.
https://www.csee.umbc.edu/portal/help/VHDL/attribute.html
As mentioned in the comment to the question, the loop should be unrolled as long as it is not synchronized to the clock.
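To make that concrete, here is a minimal sketch (my own code, not from the thread; the element width, port names, and flat-vector packing are all assumptions) showing that a for ... generate over a generic N produces N adders side by side, so every element of C is computed in parallel rather than sequentially:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity vec_add is
  generic (
    N : positive := 8;    -- number of vector elements
    W : positive := 16    -- bits per element
  );
  port (
    a : in  std_logic_vector(N*W - 1 downto 0);
    b : in  std_logic_vector(N*W - 1 downto 0);
    c : out std_logic_vector(N*W - 1 downto 0)
  );
end entity;

architecture rtl of vec_add is
begin
  -- One adder per element; all N additions exist side by side in hardware.
  gen_add : for i in 0 to N - 1 generate
    c((i+1)*W - 1 downto i*W) <=
      std_logic_vector(unsigned(a((i+1)*W - 1 downto i*W)) +
                       unsigned(b((i+1)*W - 1 downto i*W)));
  end generate;
end architecture;

A plain for loop inside a combinational process unrolls into the same hardware; the loop syntax does not by itself imply sequential execution.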

Comparing vector of double

I am trying to compare two vectors.
v1 = {0.520974 , 0.438171 , 0.559061}
v2 = {0.520974 , 0.438171 , 0.559061}
I write v1 to a file, read it back, and that's v2. For some reason, when I compare the two vectors I get false!
When I do: v1[0]-v2[0] I get 4.3123e-8
Thanks,
Double values, unlike integers, are fragile with respect to being written out and read back: the string that represents them does not necessarily carry complete information.
The main reason for that is rounding: if you had 1/7 and wanted to write it on paper in the same format as in your question, you'd get:
0.142857
That's exact to 6 decimal places, but no more than that, and the difference shows up. The only difference in the computer is that it counts in binary (and rounds in binary, too), and is further complicated by the fact that at output (or input) you coerce that into decimal (or back, respectively) and round again at each step. All of those are sources of little errors.
If you want to be able to save and reload your doubles exactly (on the same machine), do it in their native binary representation using a binary write and read. If you want them to be human-readable, you need to sacrifice exact reconstruction. You'd then need to compare them up to a small allowed deviation, e.g. treat v1[i] and v2[i] as equal when |v1[i] - v2[i]| is below some tolerance such as 1e-6.

How to convert fixed-point VHDL type back to float?

I am using the IEEE fixed-point package in VHDL.
It works well, but I am now facing a problem concerning the string representation of these values in a test bench: I would like to dump them to a text file.
I have found that it is indeed possible to write a ufixed or sfixed directly using:
write(buf, to_string(x)); --where x is either sfixed or ufixed (and buf : line)
But then I get values like 11110001.10101 (for sfixed q8.5 representation).
So my question: how do I convert these fixed-point numbers back to reals (and then to strings)?
The variable needs to be split into two std_logic_vector parts: the integer part and the fractional part. The integer part can be converted with the usual approach: loop, divide by 10, and convert each remainder (modulo 10) into an ASCII character, building the string up from the lowest digit to the highest. The fractional part also needs a loop, but the conversion is a bit different: multiply by 10, take the floor, and isolate that digit to get the corresponding character; that integer is then subtracted from the fraction, and so on. This is the concept; I worked it out in MATLAB to test it and am making a VHDL version that I will share soon. I was surprised not to find such a useful function anywhere. Of course the fixed-point format Q(N,M) can vary, with all sorts of values for N and M, whereas floating point is standardized.
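As an illustration of the fractional-part loop described above, here is a minimal simulation-only sketch (my own code, not the promised version; the package and function names are made up, and it assumes the fraction field has at most 27 bits so the intermediate value fits in a 32-bit VHDL integer):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package fixed_str_util is
  -- frac_bits: the fraction bits of the fixed-point value, passed in as a std_logic_vector;
  -- ndigits: how many decimal places to produce (digits are truncated, not rounded).
  function frac_to_string(frac_bits : std_logic_vector; ndigits : positive) return string;
end package;

package body fixed_str_util is
  function frac_to_string(frac_bits : std_logic_vector; ndigits : positive) return string is
    constant F      : natural := frac_bits'length;                 -- number of fraction bits
    variable frac   : natural := to_integer(unsigned(frac_bits));  -- fraction scaled by 2**F
    variable digit  : natural;
    variable result : string(1 to ndigits);
  begin
    for i in 1 to ndigits loop
      frac      := frac * 10;          -- multiply the fraction by 10
      digit     := frac / 2**F;        -- its integer part is the next decimal digit
      result(i) := character'val(character'pos('0') + digit);
      frac      := frac mod 2**F;      -- remove that digit and keep the remainder
    end loop;
    return result;
  end function;
end package body;

For the q8.5 example above, the fraction bits "10101" represent 21/32 = 0.65625, and frac_to_string("10101", 5) returns "65625"; the integer bits can be handled with the divide-by-10 loop in the same way.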

Grabbing the digits of a number without using division or modulus

I am trying to implement a 7-segment counter using VHDL.
The counter starts from 0 and increments an integer value to a max of 9999.
The value is passed to a block that is supposed to "split" the number into digits so that I can display them on the 7-segment displays, which are multiplexed...
I have already done this on a PIC using several methods, such as interrupts... but now that I am trying to do this on an FPGA (a Xilinx Spartan-3E Starter Board, to be exact), I noticed while implementing the code I've written that I can't use division or modulus, because they cannot be implemented...
Edit: I know I could just map each of the values 0..9999 individually, but that is far-fetched.
Surely there is another way, but I can't think of it.
Any hint on a workaround would be very appreciated!
Well, if your number is kept in decimal form (BCD, one digit per 4 bits), just extract the bits containing each digit and send them to your display multiplexor. The LSD is num[3:0], the MSD is num[15:12], etc.
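To make that concrete, here is a minimal sketch (my own code; the port names, the enable tick, and the four-digit width are assumptions) of a 0-9999 counter kept directly in BCD, so each 7-segment digit is just a 4-bit slice and no division or modulus is ever needed:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity bcd_counter is
  port (
    clk    : in  std_logic;
    tick   : in  std_logic;                     -- count-enable pulse, e.g. once per second
    digits : out std_logic_vector(15 downto 0)  -- four BCD digits, MSD in bits 15..12
  );
end entity;

architecture rtl of bcd_counter is
  type bcd_array is array (0 to 3) of unsigned(3 downto 0);
  signal d : bcd_array := (others => (others => '0'));
begin
  process(clk)
    variable carry : boolean;
  begin
    if rising_edge(clk) then
      if tick = '1' then
        carry := true;                          -- start by incrementing the LSD
        for i in 0 to 3 loop
          if carry then
            if d(i) = 9 then
              d(i) <= (others => '0');          -- this digit rolls over...
              carry := true;                    -- ...and carries into the next one
            else
              d(i)  <= d(i) + 1;
              carry := false;
            end if;
          end if;
        end loop;
      end if;
    end if;
  end process;

  digits <= std_logic_vector(d(3)) & std_logic_vector(d(2)) &
            std_logic_vector(d(1)) & std_logic_vector(d(0));
end architecture;

If instead you already have the value as a plain binary number and only need to display it, the usual division-free route is the shift-and-add-3 ("double dabble") binary-to-BCD algorithm.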
