How to convert and report time read in ps to frequency in Hz or MHz or KHz in VHDL? - vhdl

In the VHDL test bench, declared
'''
signal t1 : time;
signal t2 : time;
'''
The simulation is in picoseconds. The assignments in the architecture are
'''
t1 <= now;  -- assign the current simulation time to t1
t2 <= now;  -- assign the current simulation time to t2 at a later point in the test bench
'''
Question :
How do I convert the time delta in ps (t2 - t1) to a frequency? Hz, MHz, or kHz, it does not matter.

There isn't a special way to do this in VHDL - physical types like time are designed to make unit conversions simpler, not to swap between different sorts of physical quantities.
What that means is that you're going to need to implement some steps to do the conversion, whether in a function or as part of a sequential process. You could even define a frequency type if you're so inclined.
To do the conversion:
Convert the time value into a quantity of type integer. This is done by dividing by another span of time. The divisor you pick determines the accuracy of the calculation, but 1 fs is the fundamental time unit and will give you the finest granularity.
Convert the integer into real and perform the f = 1/t calculation to get a value for the frequency as a real value.
If you need to, multiply by the appropriate power of 10 to get your desired units and cast back to integer if that's what you're after (if you define a frequency type, you could assign units, and it will have the same conversion support that time has).
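For example, here is a minimal sketch of those steps as a test-bench function (the function name and the picosecond resolution are my own choices, not anything built in):

function period_to_hz (t : time) return real is
    -- Step 1: divide by a unit of time to get a dimensionless integer.
    -- Dividing by 1 ps keeps the count inside integer range for periods up to ~2 ms;
    -- divide by 1 fs instead if you need the finest granularity and your periods are short enough.
    constant ticks_ps : integer := t / 1 ps;
begin
    -- Step 2: f = 1/t, with the count converted from picoseconds to seconds.
    return 1.0e12 / real(ticks_ps);
end function;

-- usage later in the test bench, e.g. with a real variable freq_mhz:
--   freq_mhz := period_to_hz(t2 - t1) / 1.0e6;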

Related

How to add a LUT in VHDL to generate a sine

I've made an I2S transmitter to generate a "sound" out of my FPGA. The next step I would like to do is create a sine. I've made 16 samples in a LUT. My question is how to implement something like this in VHDL, and also how to load the samples in sequence. Has anyone tried this already and can share their knowledge?
I've made a Lookup table with 16 samples:
 0              0π
 0.382683432    1/16π
 0.707106781    1/8π
 0.923879533    3/16π
 1              1/4π
 0.923879533    5/16π
 0.707106781    3/8π
 0.382683432    7/16π
 3.23114E-15    1π
-0.382683432    1 1/16π
-0.707106781    1 1/8π
-0.923879533    1 3/16π
-1              1 1/4π
-0.923879533    1 5/16π
-0.707106781    1 3/8π
-0.382683432    1 7/16π
-6.46228E-15    2π
The simplest solution is to make a ROM, which is just a big case statement.
FPGA synthesis tools will map this onto one or more LUTs.
Note that for bigger tables only 1/4 of the wave is stored; the other values are derived from its symmetry.
I would like to send out 24-bit samples; do you also know how to do that with this data (binary!)?
24 bits (signed) means you have to convert your floating-point values to integer values in the range -8388608..8388607 (for symmetry reasons you would use -8388607..8388607).
Thus multiply the sine values (which you know are in the range -1..1) by 8388607.
The frequency of the sine depends on how fast (many samples per second) you send.
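Putting those points together, here is a minimal sketch of a 16-entry sine ROM with a 24-bit signed output, one sample per clock cycle. The entity and signal names are illustrative, the values are approximately round(sin(2*pi*k/16) * 8388607), and the constant array is just a more compact way of writing the big case statement:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sine_rom is
    port (
        clk    : in  std_logic;
        sample : out signed(23 downto 0)
    );
end entity;

architecture rtl of sine_rom is
    type rom_t is array (0 to 15) of integer range -8388608 to 8388607;
    -- approximately round(sin(2*pi*k/16) * 8388607) for k = 0 .. 15
    constant ROM : rom_t := (
         0,  3210181,  5931639,  7750062,  8388607,  7750062,  5931639,  3210181,
         0, -3210181, -5931639, -7750062, -8388607, -7750062, -5931639, -3210181);
    signal addr : unsigned(3 downto 0) := (others => '0');
begin
    process(clk)
    begin
        if rising_edge(clk) then
            sample <= to_signed(ROM(to_integer(addr)), 24);  -- registered ROM output
            addr   <= addr + 1;                              -- step through the table, wraps after 15
        end if;
    end process;
end architecture;

Every clock cycle the address counter advances one step, so the output sample rate equals the clock rate and the sine frequency is the clock frequency divided by 16.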

Does the CONSTANT declaration stores the values in Block-RAM or in flipflops of an FPGA?

For example, if I want to store my filter coefficients in n-Tap FIR filters using constants, will the CONSTANT declaration store my values in Block RAMs or registers using FPGA flipflops? Also can SIGNAL be used to store the coefficients without using RAM cells?
The constants themselves aren't "stored" anywhere - their values are simply substituted into the VHDL code where you use them.
Where they're stored depends on how you use them and how the code is optimized.
If you're multiplying a signal by a constant two, for example, no elements are used at all - the data bus will be simply connected in a way that effectively shifts the value left by one bit.
Or, they may end up as hard-wired inputs to other elements like multipliers in your case.
Either way, you should look into the synthesis results to thoroughly understand the generated RTL.
[...] will the CONSTANT declaration store my values in Block RAMs or registers using FPGA flipflops?
Whether constants are stored in memory blocks or registers, or merged into the Boolean equations, depends on how you implement the algorithm. Let's have a look at the following mathematical equation (not VHDL code):
y = c_1 * x_1 + c_2 * x_2 + c_3 * x_3 +... + c_N * x_N
N is the number of coefficients, x_i are the input values, and c_i are the constant coefficients.
You can implement this equation in VHDL / hardware with:
N parallel multipliers and an adder tree to sum up the products; all done combinationally within one clock cycle, or even pipelined with a throughput of one result per clock cycle.
Or N sequentially executed multiply-accumulate steps, with one multiply-accumulate per clock cycle.
You can even take a combination of both.
In case 1, the synthesizer optimizes each multiplication with a constant:
just wiring if the coefficient is a power of two,
addition if the binary representation of the coefficient contains a small number of ones (5*x = x + 4*x),
or a multiplier hard macro with a constant value (VDD, GND) connected to one of its inputs.
Thus, in case 1 no memory or registers are required to store the constants.
In case 2, the synthesizer will map the multiply-accumulate step to a hardware multiplier plus an adder. This multiplier and adder will be re-used for all N steps, so the coefficients must be looked up in a memory. If you have a lot of coefficients, then memory blocks (Block-RAM) are used; the current iteration step i makes up the memory address. If you have only a small number of coefficients, then they can also be stored in distributed memory (LUT-RAM) or computed via Boolean equations. But even in this case, the coefficients will not be mapped to flip-flops, because their values do not change over time.
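To illustrate case 2, here is a minimal sketch (signal names, widths, and coefficient values are placeholders, and ieee.numeric_std is assumed) in which the constant coefficient array is indexed by the iteration step and left to the synthesizer to map onto LUTs, LUT-RAM, or Block-RAM:

-- in the architecture declarative part:
constant N : positive := 16;
type coeff_t is array (0 to N-1) of signed(17 downto 0);
constant COEFFS : coeff_t := (others => to_signed(1, 18));  -- placeholder coefficient values

signal i   : integer range 0 to N-1 := 0;
signal acc : signed(47 downto 0) := (others => '0');
signal y   : signed(47 downto 0);
signal x_i : signed(17 downto 0);  -- current input sample, provided elsewhere

-- in the architecture body:
process(clk)
begin
    if rising_edge(clk) then
        if i = N-1 then
            y   <= acc + COEFFS(i) * x_i;  -- last tap: register the result
            acc <= (others => '0');
            i   <= 0;
        else
            acc <= acc + COEFFS(i) * x_i;  -- one multiply-accumulate per clock cycle
            i   <= i + 1;
        end if;
    end if;
end process;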
Also can SIGNAL be used to store the coefficients without using RAM cells?
Yes, of course. With a proper synchronous description they will be mapped to flip-flops.
The used storage element:
registers
distributed RAM (LUTRAM)
BlockRAM
...depends on your chosen VHDL description and size.
You should use a constant instead of a signal. Moreover, it can be helpful to use synchronous read operations to infer registered outputs.
Look into the synthesis report to validate the intended description.
Building on Paebbels' answer: it depends. They can also be implemented in distributed ROM (LUTROM), depending on the synthesis tool. For example, Xilinx's Vivado synthesis guide (UG901) describes how to infer RAM/ROM.
For your example of a FIR filter, you might have something like:
type coeff_array is array(natural range<>) of std_logic_vector(17 downto 0);
constant coeffs : coeff_array(0 to N-1) := ( x"XXXX", x"XXXX", ..., x"XXXX" );
Now, whether this is a distributed ROM or RAM depends on the tool. A quick test with Vivado shows this construct synthesizes to a sea of gates (just LUT logic). However, it can be forced into a block RAM (aka block ROM) by:
signal coeffs : coeff_array(0 to N-1) := ( x"XXXX", x"XXXX", ..., x"XXXX" );
attribute ROM_STYLE : string;
attribute ROM_STYLE of coeffs : signal is "block";
The means to infer any specific type of structure (LUTs, LUTRAM, LUTROM, block ROM, block RAM) depends upon the tools in question. Run the test through synthesis to see what you get. And look at the synthesis guide for the synthesizer you are using to figure out how to get what you want.

Why does dividing two time.durations in Go result in another time.duration?

I don't understand what it means to divide a time.Duration in Go.
For example, this is super lovely:
d,_ := time.ParseDuration("4s")
fmt.Println(d/4)
prints 1s, which is ace, because (naively) 4 seconds divided by 4 is 1 second.
It gets a little confusing though when we find out that the 4 in the denominator has to be a duration. So although:
d1 := time.Duration(4)
fmt.Println(d/d1)
also prints 1s, we know that d1 is actually 4ns and I'm entirely unconvinced that 4 seconds divided by 4 nanoseconds is 1 second.
I'm confused because a duration divided by duration should be dimensionless (I think, right?), whereas a duration divided by a dimensionless number should have units of time.
And I know that type != unit, but I'm clearly misunderstanding something, or quite possibly a set of things. Any help to clear this up would be most appreciated!
Here is a go playground of the above examples. https://play.golang.org/p/Ny2_ENRlX6. And just for context, I'm trying to calculate the average time between events. I can fall back to using floats for seconds, but am trying to stay in time.Duration land.
Mathematically, you're correct: dividing two time.Durations should result in a dimensionless quantity. But that's not how Go's type system works. Any mathematical operation results in a value of the same type as the inputs. You'll have to explicitly convert the result of the division to an int64 to get an "untyped" quantity.
This is because time.Duration is an int64; see the documentation of the time package.
You divide 4000000000 (4s) by 4 (4ns) and you get 1000000000 (1s). You should look at the operands as if they were integers, not typed values. The Duration type makes them look like physical quantities, but for the division operation they are just numbers.
There are no units attached to a time.Duration. A time.Duration represents the physical concept of a duration (measured in seconds and having a unit) by providing a distinct type, namely the time.Duration type. But technically it is just an int64.
If you try to attach actual units to types you'll enter unit-hell: What would a (time.Duration * time.Duration)/acceleration.Radial * mass.MetricTon be? Undefined most probably.
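Since the original goal was the average time between events, here is a minimal sketch of how to stay in time.Duration land (the variable names and values are made up for illustration):

package main

import (
	"fmt"
	"time"
)

func main() {
	total := 10 * time.Second // sum of the gaps between events (example value)
	events := 4               // number of gaps

	// Dividing by a dimensionless count: the conversion to time.Duration only
	// satisfies the type system; underneath it is plain int64 division.
	avg := total / time.Duration(events)
	fmt.Println(avg) // 2.5s

	// To get a genuinely dimensionless ratio, convert both sides explicitly.
	ratio := float64(total) / float64(5*time.Second)
	fmt.Println(ratio) // 2
}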

MPI Fortran WTIME not working well

I am coding using Fortran MPI and I need to get the run time of the program. Therefore I tried to use the WTIME() function but I am getting some strange results.
Part of the code is like this:
program heat_transfer_1D_parallel
implicit none
include 'mpif.h'
integer myid,np,rc,ierror,status(MPI_STATUS_SIZE)
integer :: N,N_loc,i,k,e !e = number extra points (filled with 0s)
real :: time,tmax,start,finish,dt,dx,xmax,xmin,T_in1,T_in2,T_out1,T_out2,calc_T,t1,t2
real,allocatable,dimension(:) :: T,T_prev,T_loc,T_loc_prev
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD,np,ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD,myid,ierror)
...
t1 = MPI_WTIME()
time = 0.
do while (time.le.tmax)
...
end do
...
call MPI_BARRIER(MPI_COMM_WORLD,ierror)
t2 = MPI_WTIME()
call MPI_FINALIZE(ierror)
if(myid.eq.0) then
write(*,"(8E15.7)") T(1:N-e)
write(*,*)t2
write(*,*)t1
end if
And the output values for t1 and t2 are the same and very big: 1.4240656E+09
Any ideas why? Thank you so much.
From the documentation: "Return value: Time in seconds since an arbitrary time in the past." They didn't specify how far back ;-) Only t2-t1 is meaningful here...
Also, the return value of MPI_Wtime() is double precision! t1 and t2 are declared as single precision floats.
Building on the answer from Alexander Vogt, I'd like to add that many Unix implementations of MPI_WTIME use gettimeofday(2) (or similar) to retrieve the system time and then convert the returned struct timeval into a floating-point value. Timekeeping in Unix is done by tracking the number of seconds elapsed since the Epoch (00:00 UTC on 01.01.1970). While writing this, the value is 1424165330.897136 seconds and counting.
With many Fortran compilers, REAL defaults to a single-precision floating-point representation that can only hold 7.22 decimal digits, while you need at least 9 (more if subsecond precision is needed). The high-precision time value above becomes 1.42416538E9 when stored in a REAL variable. The next nearest value that can be represented by the type is 1.4241655E9. Therefore you cannot measure time periods shorter than (1.4241655E9 - 1.42416538E9), or 120 seconds.
Use double precision.
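A minimal sketch of that fix in the program above (only the changed pieces are shown):

! MPI_WTIME returns DOUBLE PRECISION, so declare the timestamps accordingly.
double precision :: t1, t2
...
t1 = MPI_WTIME()
! ... time-stepping loop ...
t2 = MPI_WTIME()
if (myid.eq.0) write(*,*) 'elapsed seconds: ', t2 - t1  ! only the difference is meaningful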

Integer to Binary Conversion in Simulink

This might look like a repetition of my earlier question, but I think it's not.
I am looking for a technique to convert the signal in the Decimal format to binary format.
I intend to use the Simulink blocks in the Xilinx Library to convert decimal to binary format.
So if the input is 3, the expected output should be 11 (2 clock cycles). I am looking for the output to be obtained serially.
Please suggest how to do it; any pointers on the internet would be helpful.
Thanks
You are correct: what you need is the Parallel to Serial block from System Generator.
It is described in this document:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_ref.pdf
This block is a rate changing block. Check the mentions of the parallel to serial block in these documents for further descriptions:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_gs.pdf
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_user.pdf
Use a normal constant block with a MATLAB variable in it; this already gives the output in "normal" binary (assuming you set its properties to unsigned with the binary point at 0).
Then you need to write a small serialiser block, which takes that input, latches it into a shift register and then shifts the register once per clock cycle, with the bit that "falls off the end" becoming your output bit. Depending on which way you shift, you can make it come out MSB first or LSB first.
You'll have to build the shift register out of ordinary registers and a mux before each one to select whether you are doing a parallel load or shifting. (This is the sort of thing which is a couple of lines of code in VHDL, but a right faff in graphics).
If you have to increase the serial rate, you need to clock it from a faster clock - you could use a DCM to generate this.
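For reference, here is a minimal sketch of that serialiser in VHDL (signal names are illustrative; 2 bits wide to match the "3 -> 11" example, shifting out MSB first):

signal shift_reg : std_logic_vector(1 downto 0) := (others => '0');

process(clk)
begin
    if rising_edge(clk) then
        if load = '1' then
            shift_reg <= parallel_in;                                  -- parallel load
        else
            shift_reg <= shift_reg(shift_reg'high - 1 downto 0) & '0'; -- shift one place per clock
        end if;
    end if;
end process;

serial_out <= shift_reg(shift_reg'high);  -- the bit that "falls off the end"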
Matlab has a dec2bin function that will convert from a decimal number to a binary string. So, for example dec2bin(3) would return 11.
There's also a corresponding bin2dec which takes a binary string and converts to a decimal number, so that bin2dec('11') would return 3.
If you're wanting to convert a non-integer decimal number to a binary string, you'll first want to determine what's the smallest binary place you want to represent, and then do a little bit of pre- and post-processing, combined with dec2bin to get the results you're looking for. So, if the smallest binary place you want is the 1/512th place (or 2^-9), then you could do the following (where binPrecision equals 1/512):
function result = myDec2Bin(decNum, binPrecision)
    isNegative = (decNum < 0);
    % Integer part, converted directly.
    intPart       = floor(abs(decNum));
    binaryIntPart = dec2bin(intPart);
    % Fractional part, scaled so that one LSB equals binPrecision.
    fracPart       = abs(decNum) - intPart;
    scaledFracPart = round(fracPart / binPrecision);
    scaledBinRep   = dec2bin(scaledFracPart);
    % Prepend a leading 1 (as a decimal digit) so num2str keeps the leading
    % zeros of the fractional field, then drop it again.
    temp   = num2str(10^log2(1/binPrecision) + str2num(scaledBinRep), '%d');
    result = [binaryIntPart, '.', temp(2:end)];
    if isNegative
        result = ['-', result];
    end
end
The result of myDec2Bin(0.256, 1/512) would then be 0.010000011, and the result of myDec2Bin(-0.984, 1/512) would be -0.111111000. (Note that the output is a string.)
