Two questions really, but let me make them more descriptive:
I am implementing a modulator which involves matrix multiplication with complex vectors.
Just to give an example :
cck_encoding_table(1,:)= [ 1j 1 1j -1 1j 1 -1j 1 ];
cck_encoding_table(2,:)= [ -1j -1 -1j 1 1j 1 -1j 1 ];
cck_encoding_table(3,:)= [ -1j 1 -1j -1 -1j 1 1j 1 ];
cck_encoding_table(4,:)= [ 1j -1 1j 1 -1j 1 1j 1 ];
Basically, I need to implement this in Simulink (Xilinx), and eventually in hardware:
cck_n_code=exp(1j*Phi1)*cck_encoding_table(index+1,:);
My question: how do I model matrix multiplication with complex vectors? My understanding is that I should use a complex multiplier, but that only multiplies two complex operands.
If I have to multiply more than two complex values in a single clock cycle, would that be possible?
I am not expecting a finished model as an answer, just a possible approach/direction, if any, to solve the problem.
Thanks for reading,
Kiran
Simply write out the low-level equations that result from your matrix multiply. Each element of the output will be the result of summing a collection of multiplications of elements from your input vector and matrix.
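As an illustration, here is a minimal Python sketch of those low-level equations (the names cmul and mat_vec are made up for the example); each real multiply and add in it corresponds to a multiplier or adder you would instantiate:

def cmul(ar, ai, br, bi):
    # One complex multiply broken into its real operations:
    # four real multiplies, one subtract, one add.
    return (ar * br - ai * bi,   # real part
            ar * bi + ai * br)   # imaginary part

def mat_vec(matrix, vector):
    # matrix: list of rows of (re, im) pairs; vector: list of (re, im) pairs.
    out = []
    for row in matrix:
        acc_r, acc_i = 0.0, 0.0
        for (ar, ai), (br, bi) in zip(row, vector):
            pr, pi = cmul(ar, ai, br, bi)
            acc_r += pr   # these running sums become your adder tree
            acc_i += pi   # (or an accumulator, if done sequentially)
        out.append((acc_r, acc_i))
    return out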
If you need to do it fast, then put down as many complex multipliers and adders as you need and wire the input elements up to them - that'll give you all your outputs at once and require you to have all your inputs available at once.
Alternatively, put your inputs into a memory block (or probably 2, one for the vector, one for the matrix) and arrange some logic which will feed the correct address into that memory block to iterate over the elements in an appropriate order. Those inputs go to a complex multiplier and then on to a complex accumulator (you might have to model that from an adder and resettable register). Your control logic will need to reset this accumulator periodically.
The output of the accumulator can be fed on to a next stage, or stored in another memory block (another address for your control logic to manage).
If your encoding table is always going to have elements drawn only from the set {1, -1, j, -j}, then you can encode these as 2 bits rather than storing fully represented complex numbers, and write a custom piece of logic which takes advantage of this fact to create a much simpler complex multiplier than the general-purpose one.
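For example, with an assumed 2-bit encoding (00 -> 1, 01 -> j, 10 -> -1, 11 -> -j), multiplying by a table entry needs no real multiplier at all, only a swap of the real/imaginary parts and optional negation. A quick Python sketch of the idea:

def mul_by_table_entry(xr, xi, code):
    # code is the assumed 2-bit encoding of the table element.
    if code == 0b00:        # * 1
        return xr, xi
    if code == 0b01:        # * j  : (xr + j*xi)*j  = -xi + j*xr
        return -xi, xr
    if code == 0b10:        # * -1
        return -xr, -xi
    return xi, -xr          # 0b11 : (xr + j*xi)*-j =  xi - j*xr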
I've made an I2S transmitter to generate a "sound" out of my FPGA. The next step I would like to take is to create a sine wave. I've made 16 samples in a LUT. My question is how to implement something like this in VHDL, and also how to load the samples in sequence. Who has tried this already and could share his knowledge?
I've made a Lookup table with 16 samples:
 0.000000000    0
 0.382683432    1/8π
 0.707106781    1/4π
 0.923879533    3/8π
 1.000000000    1/2π
 0.923879533    5/8π
 0.707106781    3/4π
 0.382683432    7/8π
 0.000000000    1π
-0.382683432    1 1/8π
-0.707106781    1 1/4π
-0.923879533    1 3/8π
-1.000000000    1 1/2π
-0.923879533    1 5/8π
-0.707106781    1 3/4π
-0.382683432    1 7/8π
 0.000000000    2π
The simplest solution is to make a ROM which is just a big case statement.
FPGA synthesis tools will map this onto one or more LUTs.
Note that for bigger tables only 1/4 of the wave is stored; the other values are derived from its symmetry.
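A small sketch of that quarter-wave trick (Python, values only; you would generate the actual VHDL ROM contents / case statement from something like this):

import math

N = 16                                  # samples per full period
# store only the first quarter, 0 .. pi/2 inclusive
quarter = [math.sin(2 * math.pi * k / N) for k in range(N // 4 + 1)]

def sample(k):
    k %= N
    if k <= N // 4:               # first quarter: read directly
        return quarter[k]
    if k <= N // 2:               # second quarter: mirror around pi/2
        return quarter[N // 2 - k]
    if k <= 3 * N // 4:           # third quarter: negated first quarter
        return -quarter[k - N // 2]
    return -quarter[N - k]        # fourth quarter: negated mirror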
I would like to send out 24-bit samples; do you also know how to do that with this data (binary!)?
24 bits (signed) means you have to convert your floating-point values to integer values in the range -8388608..8388607. (For symmetry reasons you would use -8388607..8388607.)
Thus multiply the sine values (which you know are in the range -1..1) by 8388607.
The frequency of the sine depends on how fast (many samples per second) you send.
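A quick sketch (Python) of that scaling step, mapping the -1..1 samples onto 24-bit signed integers and showing the bit patterns you would load into the LUT:

samples = [0.0, 0.382683432, 0.707106781, 0.923879533, 1.0]   # etc.

for s in samples:
    value = int(round(s * 8388607))      # -8388607 .. 8388607
    pattern = value & 0xFFFFFF           # 24-bit two's-complement bit pattern
    print(f"{value:>9d}  ->  {pattern:024b}")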
My actual vector has 110 elements that I'll use as features extracted from images in MATLAB; I took this one (tb) to simplify:
tb=[22.9 30.0 30.3 27.8 24.1 28.2 26.4 12.6 39.7 38.0];
normalized_V = tb/norm(tb);
I = mat2gray(tb);
For normalized_V I got: 0.2503 0.3280 0.3312 0.3039 0.2635 0.3083 0.2886 0.1377 0.4340 0.4154.
For I I got: 0.3801 0.6421 0.6531 0.5609 0.4244 0.5756 0.5092 0 1.0000 0.9373.
Which of those two methods should I use, if either, and why? And should I reduce the feature vector to a single element after extraction for better training, or leave it as a 110-element vector?
Normalization can be performed in several ways, such as the following:
Normalizing the vector between 0 and 1. In that case, just use: (tb-min(tb))/(max(tb)-min(tb)) (this is what mat2gray does).
Making the maximum point 1. In that case, just use: tb/max(tb) (similar to the tb/norm(tb) scaling you used before).
Making the mean 0 and the standard deviation 1. This is the most common method when the values are going to be used as features in a classification procedure, so I think it is the one you should use right now: zscore(tb) (or, equivalently, (tb-mean(tb))/std(tb)).
So, your final values would be:
zscore(tb)
ans =
-0.6664
0.2613
0.3005
-0.0261
-0.5096
0.0261
-0.2091
-2.0121
1.5287
1.3066
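For comparison only, here is a rough NumPy equivalent of the three options above (the MATLAB one-liners remain the reference):

import numpy as np

tb = np.array([22.9, 30.0, 30.3, 27.8, 24.1, 28.2, 26.4, 12.6, 39.7, 38.0])

range_01   = (tb - tb.min()) / (tb.max() - tb.min())   # 0..1, like mat2gray
max_is_one = tb / tb.max()                              # maximum mapped to 1
z_scored   = (tb - tb.mean()) / tb.std(ddof=1)          # zero mean, unit std (ddof=1 matches MATLAB's std)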
Edit:
In regard to your second question, it depends on the number of observations. Every classifier takes an MxN matrix of data and an Mx1 vector of labels as inputs. Here, M refers to the number of observations, whereas N refers to the number of features. Usually, in order to avoid over-fitting, it is recommended to keep the number of features below one tenth of the number of observations (i.e., M > 10N).
So, in your case, if you use the entire set of 110 features, you should have a minimum of 1100 observations; otherwise you can have problems with over-fitting.
In two's complement, to invert the sign of a number you usually just negate every bit and add 1.
For example:
011 (3)
100 + 1 = 101 (-3)
In VHDL this is:
a <= std_logic_vector(unsigned(not(a)) + 1);
In this way the synthesizer uses an N-bit adder.
Is there another more efficient solution without using the adder?
I would guess there's not an easier way to do it, but the adder is probably not as bad as you think it is.
If you are trying to, say, invert a 32-bit number, the synthesis tool might start with a 32-bit adder structure. However, upon seeing that the B input is always tied to 1, it can 'hollow out' a lot of the structure due to the unused gates (AND gates with one pin tied to ground, OR gates with one pin tied to logic 1, etc.).
So what you're left with I'd imagine would be a reasonably efficient blob of logic which just increments an input number.
If you are just trying to create a two's complement bit pattern then a unary - also works.
a = 3'b001 ; // 1
b = -a ; //3'b111 -1
c = ~a + 1 ; //3'b111 -1
Tim has also correctly pointed out that just because you use a + or imply one through a unary -, the synthesis tools are free to optimise this.
A full adder has 3 inputs (A, B, Carry_in) and 2 outputs (Sum, Carry_out). Since for our use the second input is only 1 bit wide, and at the LSB there is no carry in, we do not need a full adder.
A half adder, which has 2 inputs (A, B) and 2 outputs (Sum, Carry), is perfect here.
For the LSB, the half adder's B input will be high: that is the +1. For the rest of the bits, the B inputs will be used to propagate the carry from the previous bit.
There is no way that I am aware of to tell Verilog that you want a half adder, but adding 1 to a number of any size only requires half adders rather than full adders.
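A little Python sketch of that carry chain, just to show why an "invert and add 1" never needs a full adder (bit lists are LSB first, names are made up):

def increment(bits):
    # bits: list of 0/1, LSB first. Returns bits + 1 (same width, wraps).
    carry = 1                    # the "+1" goes in as the first carry
    out = []
    for b in bits:
        out.append(b ^ carry)    # half-adder sum
        carry = b & carry        # half-adder carry to the next bit
    return out

def negate(bits):
    # Two's-complement negation: invert every bit, then add 1.
    return increment([1 - b for b in bits])

# negate([1, 1, 0]) -> [1, 0, 1]  (011 = 3 becomes 101 = -3)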
Can anyone please explain arithmetic encoding for data compression with implementation details? I have searched the internet and found Mark Nelson's post, but the implementation technique is still unclear to me after trying for many hours.
Mark Nelson's explanation of arithmetic coding can be found at
http://marknelson.us/1991/02/01/arithmetic-coding-statistical-modeling-data-compression/
The main idea of arithmetic compression is its capability to code a probability using exactly the amount of data it requires.
This amount of data is known, as proven by Shannon, and can be calculated simply with the following formula: -log2(p)
For example, if p=50%, then you need 1 bit.
And if p=25%, you need 2 bits.
That's simple enough for probabilities which are powers of 2 (and in this special case, Huffman coding could be enough). But what if the probability is 63%? Then you need -log2(0.63) = 0.67 bits. Sounds tricky...
This property is especially important if your probability is high. If you can predict something with a 95% accuracy, then you only need 0.074 bits to represent a good guess. Which means you are going to compress a lot.
Now, how to do that ?
Well, it's simpler than it sounds. You will divide your range depending on probabilities. For example, if you have a range of 100, 2 possible events, and a probability of 95% for the 1st one, then the first 95 values will say "Event 1", and the last 5 remaining values will say "Event 2".
OK, but on computers, we are accustomed to using powers of 2. For example, with 16 bits, you have a range of 65536 possible values. Just do the same: take the first 95% of the range (which is 62259) to say "Event 1", and the rest to say "Event 2". You obviously have a problem of "rounding" (precision), but as long as you have enough values to distribute, it does not matter too much. Furthermore, you are not constrained to 2 events; you could have a myriad of events. All that matters is that values are allocated in proportion to the probabilities of each event.
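A sketch of that subdivision for a 16-bit range (the numbers follow the text above):

RANGE = 1 << 16                   # 65536 possible values
split = int(0.95 * RANGE)         # 62259: values 0..62258 mean "Event 1",
                                  #        values 62259..65535 mean "Event 2"
def which_event(value):
    return "Event 1" if value < split else "Event 2"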
OK, but now I have 62259 possible values to say "Event 1", and 3277 to say "Event 2". Which one should I choose?
Well, any of them will do. Whether it is 1, 30, 5500 or 62256, it still means "Event 1".
In fact, deciding which value to select will not depend on the current guess, but on the next ones.
Suppose I'm having "Event 1". So now I have to choose any value between 0 and 62258. On the next guess, I have the same distribution (95% Event 1, 5% Event 2). I will simply allocate the distribution map with these probabilities, except that this time it is distributed over the 62259 remaining values. And we continue like this, reducing the range of values with each guess.
So in fact, we are defining "ranges" which narrow with each guess. At some point, however, there is a problem of accuracy, because very few values remain.
The idea is to simply "inflate" the range again. For example, each time the range goes below 32768 (2^15), you output the highest bit and multiply the rest by 2 (effectively shifting the values one bit left). By continuously doing this, you output bits one by one, as they are settled by the series of guesses.
Now the relation with compression becomes obvious: when the range narrows swiftly (e.g., a 5% event), you output a lot of bits to get the range back above the limit. On the other hand, when the probability is very high, the range narrows very slowly. You can even have a lot of guesses before outputting your first bits. That's how it is possible to compress an event to "a fraction of a bit".
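Here is a toy sketch of that "narrow the range, emit settled bits, inflate again" loop. It works on the interval [0, 1) with exact fractions rather than 16-bit integers, so there are no precision concerns, and the message and probabilities are made up for the example:

from fractions import Fraction as F

p_event1 = F(95, 100)                 # the 95% / 5% split from the text

def narrow(low, high, event_is_1):
    # Shrink [low, high) to the sub-range of the chosen event.
    mid = low + (high - low) * p_event1
    return (low, mid) if event_is_1 else (mid, high)

low, high, bits = F(0), F(1), []
for event_is_1 in [True, True, True, False, True]:    # an arbitrary message
    low, high = narrow(low, high, event_is_1)
    # Renormalize: while the whole range sits in one half, that output bit
    # is settled, so emit it and double the range back up.
    while high <= F(1, 2) or low >= F(1, 2):
        bit = 0 if high <= F(1, 2) else 1
        bits.append(bit)
        low, high = 2 * low - bit, 2 * high - bit

print(bits, low, high)    # bits emitted so far, plus the still-open range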
I've intentionally used the terms "probability", "guess" and "events" to keep this article generic. But for data compression, you just replace them with the way you want to model your data. For example, the next event can be the next byte; in this case, you have 256 of them.
Maybe this script could be useful to build a better mental model of an arithmetic coder: gen_map.py. Originally it was created to facilitate debugging of an arithmetic coder library and to simplify generation of unit tests for it. However, it creates nice ASCII visualizations that could also be useful in understanding arithmetic coding.
A small example. Imagine we have an alphabet of 3 symbols: 0, 1 and 2, with probabilities 1/10, 2/10 and 7/10 respectively, and we want to encode the sequence [1, 2]. The script will give the following output (ignore the -b N option for now):
$ ./gen_map.py -b 6 -m "1,2,7" -e "1,2"
000000111111|1111|111222222222222222222222222222222222222222222222
------011222|2222|222000011111111122222222222222222222222222222222
---------011|2222|222-------------00011111122222222222222222222222
------------|----|-------------------------00111122222222222222222
------------|----|-------------------------------01111222222222222
------------|----|------------------------------------011222222222
==================================================================
000000000000|0000|000000000000000011111111111111111111111111111111
000000000000|0000|111111111111111100000000000000001111111111111111
000000001111|1111|000000001111111100000000111111110000000011111111
000011110000|1111|000011110000111100001111000011110000111100001111
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
001100110011|0011|001100110011001100110011001100110011001100110011
010101010101|0101|010101010101010101010101010101010101010101010101
The first 6 lines (before the ==== line) represent the range from 0.0 to 1.0, recursively subdivided into intervals proportional to the symbol probabilities. Annotated first line:
[1/10][ 2/10 ][ 7/10 ]
000000111111|1111|111222222222222222222222222222222222222222222222
Then we subdivide each interval again:
[ 0.1][ 0.2 ][ 0.7 ]
000000111111|1111|111222222222222222222222222222222222222222222222
[ 0.7 ][.1][ 0.2 ][ 0.7 ]
------011222|2222|222000011111111122222222222222222222222222222222
[.1][ .2][ 0.7 ]
---------011|2222|222-------------00011111122222222222222222222222
Note that some intervals are not subdivided. That happens when there is not enough space to represent every subinterval within the given precision (which is specified by the -b option).
Each line corresponds to a symbol from the input (in our case, the sequence [1, 2]). By following the subintervals for each input symbol we get the final interval that we want to encode with a minimal number of bits. In our case it's the first '2' subinterval on the second line:
[ This one ]
------011222|2222|222000011111111122222222222222222222222222222222
The following 7 lines (after ====) represent the same interval 0.0 to 1.0, but subdivided according to binary notation. Each line is a bit of output, and by choosing between 0 and 1 you choose the left or right half-subinterval. For example, the bits 01 correspond to the subinterval [0.25, 0.5) on the second line:
[ This one ]
000000000000|0000|111111111111111100000000000000001111111111111111
The idea of the arithmetic coder is to output bits (0 or 1) until the corresponding interval is entirely inside (or equal to) the interval determined by the input sequence. In our case it's 0011. The ~~~~ line shows where we have enough bits to unambiguously identify the interval we want.
Vertical lines formed by | symbol show the range of bit sequences (rows) that could be used to encode the input sequence.
First of all thanks for introducing me to the concept of arithmetic compression!
I can see that this method has the following steps:
Creating the mapping: calculate the fraction of occurrence for each letter, which gives a range size for each symbol of the alphabet. Then order them and assign actual ranges from 0 to 1.
Given a message calculate the range (pretty straightforward IMHO)
Find the optimal code
The third part is a bit tricky. Use the following algorithm.
Let b be the optimal representation. Initialize it to the empty string (''). Let x be the minimum value and y the maximum value.
1. Double x and y: x=2*x, y=2*y.
2. If both of them are greater than 1, append 1 to b, subtract 1 from both x and y, and go to step 1.
3. If both of them are less than 1, append 0 to b and go to step 1.
4. If x<1 but y>1, append 1 to b and stop.
b essentially contains the fractional part of the number you are transmitting. E.g. if b=011, then the fraction corresponds to 0.011 in binary.
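In case it helps, here is that procedure written out as a small Python sketch (using exact fractions to avoid rounding); the subtract-1 in step 2 is what keeps the doubling from running forever:

from fractions import Fraction

def interval_to_bits(x, y):
    # Shortest bit string b such that the binary fraction 0.b lies in [x, y),
    # for 0 <= x < y <= 1.
    b = ""
    while True:
        x, y = 2 * x, 2 * y
        if y <= 1:                 # whole interval in the lower half -> bit 0
            b += "0"
        elif x >= 1:               # whole interval in the upper half -> bit 1
            b += "1"
            x, y = x - 1, y - 1
        else:                      # doubled interval straddles 1, so a final 1 lands inside
            return b + "1"

# Example: the interval [3/16, 5/16) is encoded as "01", i.e. 0.01 = 1/4.
print(interval_to_bits(Fraction(3, 16), Fraction(5, 16)))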
What part of implementation do you not understand?
I'm having trouble understanding the algorithm being used in this FPGA circuit. It deals with redundant versus non-redundant number format. I have seen some mathematical (formal) definitions of non-redundant format but I just can't really grasp it.
Excerpt from this paper describing the algorithm:
Figure 3 shows a block diagram of the scalable Montgomery multiplier. The kernel contains p w-bit PEs for a total of wp bit cells. Z is stored in carry-save redundant form. If PE p completes Z^0 before PE1 has finished Z^(e-1), the result must be queued until PE1 becomes available again. The design in [5] queues the results in redundant form, requiring 2w bits per entry. For large n the queue consumes significant area, so we propose converting Z to nonredundant form to save half the queue space, as shown in Figure 4. On the first cycle, Z is initialized to 0. When no queuing is needed, the carry-save redundant Z' is bypassed directly to avoid the latency of the carry-propagate adder. The nonredundant Z result is also an output of the system.
And the diagrams:
And here is the 'improved' PE block diagram ('improved' has to do with some unrelated aspects).
I don't have a picture of the 'not improved' FIFO, but I think it is just a straight, normal FIFO. What I don't understand is: do the FIFO's CPA and 3-input MUX somehow convert between formats?
Understanding redundant versus non-redundant formats (with concrete examples) is the first step; understanding how this circuit achieves it would be step 2.
A bit of background and a look at users.ece.utexas.edu/~adnan/vlsi-05-backup/lec12Datapath.ppt suggests the following:
Doing a proper binary add is relatively slow and/or area-consuming, because of the time that it takes to propagate the carries properly.
If you work bit-wise in parallel you can take three binary numbers, sum the bits at the same location in each number, and produce two binary numbers.
Slide 27 points out that 0001 + 0111 + 1101 = 1011 + 0101(0).
Since a multiplier needs to do a LOT of additions, you build the adder tree as a collection of reductions of 3 numbers to 2 numbers, eventually ending up with two numbers as output, abcde...z and ABCDE...Z0. This is your output in redundant form, and the true answer is in fact abcde...z + ABCDE...Z0.
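A tiny Python sketch of that 3-to-2 reduction (carry-save addition): three numbers in, two numbers out, with no carry propagation anywhere. The width and numbers follow the slide's example:

def carry_save_add(a, b, c, width=4):
    sum_bits, carry_bits = 0, 0
    for i in range(width):
        ai, bi, ci = (a >> i) & 1, (b >> i) & 1, (c >> i) & 1
        sum_bits   |= (ai ^ bi ^ ci) << i                             # per-bit sum
        carry_bits |= ((ai & bi) | (ai & ci) | (bi & ci)) << (i + 1)  # carries, shifted left
    return sum_bits, carry_bits        # redundant form: true value = sum + carry

s, c = carry_save_add(0b0001, 0b0111, 0b1101)
assert (s, c) == (0b1011, 0b01010)          # matches 1011 + 0101(0)
assert s + c == 0b0001 + 0b0111 + 0b1101    # both equal 21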