How to find delay of circuit? - fpga

How do I find the delay, throughput, and maximum operating frequency for my circuit in Vivado?
The values that I have are: worst negative slack = 2.055 ns, total negative slack = 0 ns, number of failing endpoints = 0, total number of endpoints = 22082.

Delay and throughput depend on your design. A worst negative slack (WNS) of +2.055 ns means your critical path meets timing with 2.055 ns to spare, so the clock period could be shortened by roughly that amount: the achievable maximum frequency is about 1 / (current clock period − WNS). Despite the name, a positive WNS means timing is met.
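As a rough sketch of that calculation, assuming a hypothetical 10 ns (100 MHz) clock constraint (the actual constraint is not given in the question):

```python
# Sketch: estimating the achievable clock from worst negative slack (WNS).
# The 10 ns target period is an assumed constraint, not from the question.
target_period_ns = 10.0   # hypothetical clock constraint (100 MHz)
wns_ns = 2.055            # positive WNS from the timing report

# With positive slack the critical path needs only (target - WNS) ns,
# so the clock could in principle be tightened by that amount.
achievable_period_ns = target_period_ns - wns_ns
f_max_mhz = 1000.0 / achievable_period_ns

print(round(achievable_period_ns, 3))  # 7.945
print(round(f_max_mhz, 1))             # 125.9
```

The real achievable frequency also depends on how place-and-route changes once you tighten the constraint, so treat this as a first estimate only.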

Related

Is there any relation between advertising interval, walking speed, and window size of moving average filter?

My beacons have advertisement interval of 330ms. I use an iOS device to scan the advertisement packet whose scanning rate is 1 scan per second on average. I want to use the moving average filter to smooth the fluctuating RSSI values. Considering the walking speed of 1.2 m/s and the advertisement interval of 330 ms, what should be the size of a window in the moving average filter? Is there any mathematical relationship between them?
Thank you.
There is no one correct answer here. It is a trade-off between noise in the distance estimate and lag time.
The larger (and longer) your statistical sample, the more lag there will be in a running average. A 20-second window will tell you where you were on average over the last 20 seconds, and filter out a lot of noise. A 5-second running average will tell you where you were on average over the last 5 seconds, but with much more noise in the calculation.
How much lag and how much noise you can tolerate both depend on your use case. Use cases that are very time-sensitive may sacrifice accuracy for the sake of less lag. Conversely, use cases needing greater accuracy may accept more lag to filter out more noise in the estimate.
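To make the trade-off concrete, here is a small sketch using the scan interval and walking speed from the question (the window sizes are arbitrary examples, not recommendations):

```python
# Sketch: the lag/noise trade-off for a moving-average RSSI filter.
# Scan interval and walking speed come from the question; the window
# sizes are arbitrary examples.
scan_interval_s = 1.0   # iOS delivers roughly one RSSI sample per second
walking_speed = 1.2     # m/s

for window in (3, 5, 10):
    # While a window of N samples fills, the user walks this far, so the
    # averaged RSSI effectively describes a position "smeared" over it.
    smear_m = window * scan_interval_s * walking_speed
    print(window, round(smear_m, 1))
# 3 samples -> 3.6 m, 5 -> 6.0 m, 10 -> 12.0 m of movement per window
```

Pick the largest window whose positional smearing you can still tolerate for your use case; there is no single mathematically "correct" value.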

Whether combinational circuit will have less frequency of operation than sequential circuit?

I have designed the SHA-3 algorithm in two ways: combinational and sequential.
The sequential design (the one with a clock), when synthesized, gives this design summary:
Minimum clock period 1.275 ns and maximum frequency 784.129 MHz.
The combinational one, which is designed without a clock and has been placed between input and output registers, gives this synthesis report:
Minimum clock period 1701.691 ns and maximum frequency 0.588 MHz.
So I want to ask: is it correct that the combinational design has a lower operating frequency than the sequential one?
As far as theory is concerned, a combinational design should be faster than a sequential one. But the simulation results I am getting for the sequential design appear only after 30 clock cycles, whereas the combinational one shows no delay in the output, as there is no clock. In that sense the combinational design is faster, since we get the output instantly, so why is its frequency of operation lower than the sequential one's? Why is this design slow? Can anyone explain?
The design has been simulated in Xilinx ISE.
Now I have applied pipelining to the combinational logic by inserting registers between the 5 main blocks that do the computation. These registers are controlled by a clock, so the pipelined design now gives this design summary:
Clock period 1.575 ns and frequency 634.924 MHz
Minimum period 1.718 ns and frequency 581.937 MHz.
So this 1.575 ns is the delay between any two adjacent registers, not the propagation delay of the entire algorithm. How can I calculate the propagation delay of the entire pipelined algorithm?
What you are seeing is pipelining and its performance benefits. The combinational circuit makes each input go through the propagation delays of the entire algorithm, which will take up to 1701.691 ns on the FPGA you are working with, because the slowest critical path in the combinational circuitry needed to calculate the result takes up to that long. Your simulator is not telling you everything, since a behavioral simulation will not show gate propagation delays. You'll just see the instant calculation of your combinational function in your simulation.
In the sequential design, you have multiple smaller steps, the slowest of which takes 1.275ns in the worst case. Each of those steps might be easier to place-and-route efficiently, meaning that you get overall better performance because of the improved routing of each step. However, you will need to wait 30 cycles for a result, simply because the steps are part of a synchronous pipeline. With the correct design, you could improve this and get one output per clock cycle, with a 30-cycle delay, by having a full pipeline and passing data through it at every clock cycle.
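The latency/throughput distinction can be sketched numerically from the figures quoted in the question (note that 1/1.275 ns gives 784.3 MHz; the tool's 784.129 MHz reflects an unrounded period):

```python
# Sketch: latency vs throughput for the two designs, using the synthesis
# figures quoted in the question.
comb_delay_ns = 1701.691   # full combinational critical path
pipe_period_ns = 1.275     # slowest stage of the sequential design
pipe_cycles = 30           # cycles until the first result appears

# Latency: time from input to first valid output.
comb_latency_ns = comb_delay_ns                  # one long combinational settle
pipe_latency_ns = pipe_cycles * pipe_period_ns   # 38.25 ns

# Throughput: results per second once a full pipeline streams data.
comb_throughput_mhz = 1000.0 / comb_delay_ns     # ~0.588 MHz
pipe_throughput_mhz = 1000.0 / pipe_period_ns    # ~784 MHz

print(round(pipe_latency_ns, 2))      # 38.25
print(round(comb_throughput_mhz, 3))  # 0.588
print(round(pipe_throughput_mhz, 1))  # 784.3
```

So the pipelined design delivers its first result later than an idealized zero-delay simulation suggests, but once full it can emit one result per 1.275 ns instead of one per 1701.691 ns.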

Maximum clock delay Xilinx ISE

My design uses a Xilinx FPGA.
The synthesis report shows the following results:
Timing Summary:
---------------
Speed Grade: -3
Minimum period: No path found
Minimum input arrival time before clock: 1.903ns
Maximum output required time after clock: 150.906ns
Maximum combinational path delay: 97.819ns
I do not know if I should use 150.906 ns or 97.819 ns to calculate throughput.
What is maximum clock delay?
I'm not sure which figure you are after with respect to circuit timing, but maybe my explanation will give you the right hint.
First of all, the maximum clock delay can be found in the Static Timing Report after Place & Route. But this figure is mostly meaningless on its own, because one must also take into account the maximum data delay from any input or to any output. That result is already provided by the synthesis report. Please note that this report only provides estimated results; real results are only available from the Static Timing Report.
If you are looking for the maximum clock frequency (the inverse of the minimum clock period), then your synthesis report states that your design does not include a path from one FF to another driven by the same clock ("Minimum period: No path found").
If you want to communicate synchronously with another IC on your PCB, then the other 3 numbers are relevant. For example, the line "maximum output required time after clock" states that all output signals are valid 151 ns after the clock signal toggles at the input pin (rising or falling edge, depending on your design). If any of these outputs drives the inputs of another IC, and if this IC is driven by the same clock source, then you must add the "minimum input arrival time" of this second IC (found in its data sheet). If this time is, for example, 49 ns, then the minimum period of your shared clock would be (your) 151 ns + 49 ns = 200 ns, which would be 5 MHz.
The same applies to the "minimum input arrival time before clock" of your FPGA design, which must be added to the "maximum output required time" of the driving IC. If this time is, for example, 31 ns, then the minimum period of your shared clock would be 31 ns + (your) 2 ns = 33 ns, which would be 30 MHz.
In the same way, the "maximum combinational path delay" must be added to the "maximum output required time" of the IC that drives your inputs, plus the "minimum input arrival time" of the IC your FPGA is driving. Given the same example figures from above, the minimum period of your shared clock would be 31 ns + (your) 98 ns + 49 ns = 178 ns, which would be 5.6 MHz.
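A quick sketch of those three budget calculations, using the report's figures plus the same hypothetical partner-IC timings (49 ns setup, 31 ns clock-to-output) used as examples above:

```python
# Sketch of the three system-synchronous timing budgets described above.
# The partner-IC numbers are the hypothetical example values from the text.
t_out_after_clk = 150.906   # FPGA "maximum output required time after clock"
t_in_before_clk = 1.903     # FPGA "minimum input arrival time before clock"
t_comb_path = 97.819        # FPGA "maximum combinational path delay"
t_partner_setup = 49.0      # example input setup time of the driven IC
t_partner_out = 31.0        # example clock-to-output time of the driving IC

# FPGA output -> other IC's input:
p1 = t_out_after_clk + t_partner_setup              # ~200 ns -> ~5 MHz
# other IC's output -> FPGA input:
p2 = t_partner_out + t_in_before_clk                # ~33 ns -> ~30 MHz
# input pad -> combinational logic -> output pad:
p3 = t_partner_out + t_comb_path + t_partner_setup  # ~178 ns -> ~5.6 MHz

for p in (p1, p2, p3):
    print(round(p, 3), "ns ->", round(1000.0 / p, 1), "MHz")
```

The slowest of the three budgets that actually applies to your board determines the shared clock you can use.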
More details are explained in Xilinx Timing Constraint User Guide. Above, I explained the System Synchronous mode.
A more compact representation for Xilinx Vivado is given in Vivado Design Suite User Guide - Using Constraints.
There was also a presentation on this available on the internet earlier, but I can no longer find the source PDF.

Are cycles in computing equal to time?

I have a book describing energy saving compiler algorithms with a variable having "cycles" as measuring unit for the "distance" until something happens (an HDD is put into idle mode).
But the results for efficiency of the algorithm have just "time" on one axis of a diagram, not "cycles". So is it safe to assume (i.e. my understanding of the cycle concept) that unless something like dynamic frequency scaling is used, cycles are equal to real physical time (seconds for example)?
Cycles map directly to real physical time as long as the clock frequency is fixed: for example, a CPU with a 1 GHz frequency executes 1,000,000,000 cycles per second, which is the same as 1/1,000,000,000 seconds per cycle, or, in other words, one cycle per nanosecond. With dynamic frequency scaling, the duration of a cycle changes whenever the frequency changes, so cycles alone no longer correspond to a fixed amount of time.
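A minimal sketch of the conversion at a fixed 1 GHz clock (the cycle count is an arbitrary example):

```python
# Sketch: converting cycles to wall-clock time at a fixed clock frequency.
# The cycle count is an arbitrary example.
f_hz = 1_000_000_000   # 1 GHz, the answer's example
cycles = 2_500_000

seconds = cycles / f_hz   # each cycle lasts 1/f_hz seconds
print(seconds)            # 0.0025 (2.5 ms)

# Under dynamic frequency scaling f_hz varies over time, so a cycle count
# alone no longer pins down a fixed amount of wall-clock time.
```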

Difficult to understand the Gaussian Random Timer?

I have read the Gaussian Random Timer info in the JMeter user manual, but it is difficult to understand. Does anyone have an idea about this? An explanation with an example would be highly appreciated. Thanks in advance.
The Gaussian Random Timer has a random deviation (based on Gauss curve distribution) around the constant delay offset.
For example:
Deviation: 100 ms
Constant Delay Offset: 300 ms
The delay will vary between 200 ms (300 - 100) and 400 ms (300 + 100) based on Gauss distribution for about 68% of the cases.
I'll try to explain it with one of the examples already posted:
Constant delay offset: 1000 ms
Deviation: 500 ms
Approximately 68% of the delays will be between [500, 1500] ms (=[1000 - 500, 1000 + 500] ms).
According to the docs (emphasis mine):
The total delay is the sum of the Gaussian distributed value (with mean 0.0 and standard deviation 1.0) times the deviation value you specify, and the offset value
Apache JMeter invokes Random.nextGaussian()*range to calculate the delay. As explained on Wikipedia, the value of nextGaussian() will be within [-1, 1] only for about 68% of the cases. In theory, it could take any value (though the probability of getting values outside this interval decreases very quickly with the distance from it).
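The same formula can be sketched in Python, with random.gauss(0, 1) standing in for Java's Random.nextGaussian() (a simulation sketch, not JMeter's actual code):

```python
import random

# Sketch of the timer's formula: delay = offset + nextGaussian() * deviation.
# random.gauss(0, 1) plays the role of Java's Random.nextGaussian().
offset_ms = 300.0
deviation_ms = 100.0

random.seed(42)  # deterministic for the demo
delays = [offset_ms + random.gauss(0.0, 1.0) * deviation_ms
          for _ in range(100_000)]

# Fraction of delays inside the one-deviation band [200, 400] ms.
in_one_sigma = sum(200.0 <= d <= 400.0 for d in delays) / len(delays)
print(round(in_one_sigma, 2))  # typically ~0.68
print(min(delays) < 200.0 or max(delays) > 400.0)  # True: outliers do occur
```

This reproduces the point above: only about 68% of the delays land in [offset − deviation, offset + deviation], and occasional delays fall well outside it.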
As a proof, I have written a simple JMeter test that launches one thread with a dummy sampler and a Gaussian Random Timer: 3000 ms constant delay, 2000 ms deviation:
To rule out cpu load issues, I have configured an additional concurrent thread with another dummy sampler and a Constant Timer: 5000 ms:
The results are quite enlightening:
Take for instance samples 10 and 12: 9h53'04.449" - 9h52'57.776" = 6.673", that is a deviation of 3.673" in contrast to the 2.000" configured! You can also verify that the constant timer only deviates by about 1 ms, if at all.
I could find a very nice explanation of these gaussian timers in the Gmane jmeter user's list: Timer Question.
Gaussian Random Timer is very similar to Uniform Random Timer.
In the Uniform Random Timer, the variation around the constant offset follows a uniform distribution.
In the Gaussian Random Timer, the variation around the constant offset follows a Gaussian (bell-curve) distribution.
Constant Delay Offset (mu) = 300 ms, Deviation (si) = 100 ms
mu - si = 200, mu + si = 400: there is a 68% chance that the time gap between two threads is in the range [200, 400]
mu - 2(si) = 100, mu + 2(si) = 500: there is a 95% chance that the time gap is in the range [100, 500]
mu - 3(si) = 0, mu + 3(si) = 600: there is a 99.7% chance that the time gap is in the range [0, 600]
Carrying on like this, the probability approaches 100% as the range widens.
I am restricting myself to three iterations because mu - 4(si) yields a negative value, and elapsed time is always positive.
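The 68/95/99.7 figures quoted above can be checked against the normal distribution's CDF; a small sketch:

```python
import math

# Sketch: probability that a standard normal variable Z lies within k
# standard deviations of the mean, P(|Z| <= k) = erf(k / sqrt(2)).
def within_k_sigma(k):
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(within_k_sigma(k), 4))
# 1 -> 0.6827, 2 -> 0.9545, 3 -> 0.9973
```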
But it would be unrealistic to depend on the Gaussian timer when you need exact pacing, as we also have the Constant Timer and the Constant Throughput Timer, which have no standard deviation (si).
Hope that it helps.
