I have an FPGA that is taking in serial data at a bit rate of, say, 4.8 kbps.
Now I am not sure what clock frequency my FPGA should run at to handle the data properly.
Will the clock speed simply need to be at minimum 4800 Hz?
It goes the other way round: you first have to determine how many clock cycles you need to process a single input "tick". If one cycle is enough to complete your processing, then 4800 Hz might be fine.
But if you need two cycles, then you would probably go with double speed.
This is a pretty generic answer, but your question is also pretty generic, so this is probably the best you can hope for without adding more detail to your question.
Will the clock speed simply need to be at minimum 4800 Hz?
Theoretical: yes, practical: no.
Theoretical.
You can receive a 4800 baud signal with a 4800 Hz clock, but only if the clock is at exactly the right frequency and phase (the incoming 4800 baud will deviate; no clock is perfect). For that you would need something like a PLL sitting in a measurement feedback loop, watching the signal and keeping the clock in step.
Practical.
Much easier is to use e.g. a 1 MHz FPGA clock and over-sampling. Even then you have the same problem as with a dedicated clock: you need to know where the bit boundaries are, so again some sort of clock-locking or edge-recognition mechanism is required. In effect you have to build the equivalent of a PLL, but you can do it all with registers and counters.
When running at 1 MHz (which is very slow for an FPGA) you have plenty of clock cycles to process your data.
Both methods depend on the protocol you are using, which you did not mention; they are only possible for some types of signals/serial protocols. For example, if the signal can stay low or high for many bit periods, that would cause problems for either method.
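To make the "registers and counters" idea concrete, here is a minimal sketch of the oversampling approach in VHDL. It assumes a UART-style protocol where the line idles high and a falling edge marks the start, a 1 MHz sample clock, and 4800 baud (roughly 208 clocks per bit); all names are invented:

    library ieee;
    use ieee.std_logic_1164.all;

    -- Oversampling receiver sketch: wait for the start edge, then sample
    -- the line in the middle of each bit period.
    entity rx_oversample is
      generic (
        CLKS_PER_BIT : integer := 208  -- 1 MHz / 4800 baud, rounded
      );
      port (
        clk      : in  std_logic;  -- 1 MHz system clock
        rx       : in  std_logic;  -- asynchronous serial input
        bit_out  : out std_logic;  -- recovered bit value
        bit_tick : out std_logic   -- one-cycle strobe per recovered bit
      );
    end entity;

    architecture rtl of rx_oversample is
      signal rx_m, rx_s : std_logic := '1';  -- two-stage input synchronizer
      signal count      : integer range 0 to CLKS_PER_BIT - 1 := 0;
      signal busy       : std_logic := '0';
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          rx_m <= rx;    -- first flop may go metastable
          rx_s <= rx_m;  -- second flop gives a clean copy
          bit_tick <= '0';
          if busy = '0' then
            if rx_s = '0' then        -- falling edge: start detected
              busy  <= '1';
              count <= 0;
            end if;
          else
            if count = CLKS_PER_BIT / 2 then
              bit_out  <= rx_s;       -- sample at the bit centre
              bit_tick <= '1';
            end if;
            if count = CLKS_PER_BIT - 1 then
              count <= 0;             -- realign at each bit boundary
            else
              count <= count + 1;
            end if;
          end if;
        end if;
      end process;
    end architecture;

A real receiver would additionally count a fixed number of bits per frame, check the stop bit, and re-arm on the next start edge instead of free-running; the point here is only that a synchronizer plus a counter stands in for the PLL.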
Related
I'm looking for advice on a less than ideal situation.
I've inherited a project where we have a hardware design issue. We generate a clock to a chip, which feeds the clock back in over a non-clock-capable input. This works at up to 160 MHz, but we are looking to increase the clock rate, so I'm researching I/O options. This clock is used to register 8 parallel data inputs.
Right now the data inputs go through a delay and an IDDR block, and the output is fed to a FIFO. Our clock is still routed to a BUFG, so we have:
Data - IDELAY - IDDR - FIFO
Clock - BUFG ----^------^
I read somewhere that routing through a BUFG has a large delay, so a BUFR/BUFIO is better. Is this the case? Have I missed a better option?
When you say generating a clock to "a chip", I will assume that you mean the Kintex-7.
The delay is not a problem in itself. The issue is to set up your timing constraints properly so that static timing analysis can check whether you violate any setup or hold time across all corners.
If you look at the DS182 datasheet, the AC switching characteristics will give you a rough idea of how well the chip can perform.
However, the best approach is to let the timing analyzer inside Vivado calculate whether your desired clock frequency can close timing.
You just need to make sure that:
The data input is synchronous to your final clock.
If it isn't, then clock that data input through two stages of registers with respect to the final clock (a standard two-flop synchronizer; see the sketch at the end of this answer).
Specify your timing constraints
Run through synthesis and implementation
Check the timing to see that there are no violations.
Or maybe I did not understand something about what you are trying to do.
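For the second point above, here is a minimal sketch of such a two-stage synchronizer (names are made up; the ASYNC_REG attribute is a Xilinx hint that keeps the two flops together and marks them as a synchronizer):

    library ieee;
    use ieee.std_logic_1164.all;

    entity sync_2ff is
      port (
        clk      : in  std_logic;  -- destination (final) clock
        async_in : in  std_logic;  -- signal from the other clock domain
        sync_out : out std_logic   -- now synchronous to clk
      );
    end entity;

    architecture rtl of sync_2ff is
      signal meta, stable : std_logic := '0';
      attribute ASYNC_REG : string;
      attribute ASYNC_REG of meta, stable : signal is "TRUE";
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          meta   <= async_in;  -- first stage: may go metastable
          stable <= meta;      -- second stage: resolves it
        end if;
      end process;
      sync_out <= stable;
    end architecture;

Note that this is only safe for single-bit signals; a multi-bit bus needs a FIFO or a handshake instead.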
I am studying the Common Clock Framework and have a question about muxed clocks.
Suppose we want to set a particular rate on a muxed clock, and the clock's current parent cannot provide the desired rate (the parent runs at a lower rate).
Is there any function or mechanism that automatically switches the clock's parent (from its parent list) and sets the desired rate?
One possible solution: we can call set_parent() manually and then call set_rate(), which can set the desired rate. But what if we could just call set_rate() and have it switch the clock's parent automatically and set the desired rate?
Some clocks may up-scale a parent clock using a PLL, so having a parent at a lower rate doesn't mean that automatically trying to increase the parent clock is the best solution. The Common Clock Framework (CCF) is meant to allow multiple drivers/sub-systems access to a shared resource. The CCF doesn't try to be intelligent, as the way different clock trees behave is difficult to know generically.
One possible solution: we can call set_parent() manually and then call set_rate(), which can set the desired rate.
I think you mean to call get_parent() and then use set_rate()? Some of the time it is not easy to call set_parent(), as the parent may be fixed. You need to read your SoC documentation. In some cases there are multiple input clocks available; i.e., the real clock hierarchy is not a tree but a DAG, although the active hierarchy is tree-like.
But what if we could just call set_rate() and have it switch the clock's parent automatically and set the desired rate?
This might make sense for the particular SoC clock you are looking at, but not generically. There may be dozens of clocks dependent on a parent, and it may be possible to re-rate grandparents, etc. It is probably not the best choice to re-rate the system clock just because an audio driver wants a clock that is a few Hz off.
It is possible to write the clock driver so that it will re-rate the parent if a request is made on a child that cannot otherwise be satisfied. However, this is part of the clock driver, not the CCF generally.
Example
For instance, an SoC might have an audio clock with three input sources:
A dedicated 48 kHz clock
A low-speed bus clock (general platform clock)
A USB clock
Option 1 gives the best sound quality with the highest power consumption. Option 2 is meant to be generic but may not match audio rates well, resulting in sub-optimal DAC/wave/sound generation. Option 3 might be good for some sort of USB sound slave, but if you are not otherwise using USB it may be expensive in power consumption.
In the case above, set_parent() may be a way to get the desired rate, if the SoC clock driver supports it.
There is no intelligence in the CCF; if there is some flexibility, it is in the clock driver, and this depends on the clock hardware. It is up to the programmer to read the SoC documentation and determine the best way to configure the clock tree. You should probably also examine the clock driver for your SoC and Linux version to see what it supports. You cannot generically change the clock rate of parents in a driver, as other devices may depend on them. If you need this for a particular SoC in an SoC family, you need to special-case it by examining the device tree to see which SoC the driver is running on. That is the case where you can use get_parent() and set_rate() for the particular SoC.
Reference: A question on older Linux clock structure.
I very recently began experimenting with FPGAs. In researching things around the net I've noticed in several places that designs might use multiple separate PLL clocks of the exact same speed. Why is that?
One example I will give is this site: Parallella Linux Quick Start
They have their FCLK_CLK1 and FCLK_CLK2 both at 200 MHz. Why is this recommended, rather than a single 200 MHz clock for both? Is it just customary to give each major component its own clock even if it is the same? Or am I missing something?
Besides the reasons already mentioned, there are several other reasons why two PLL clocks of the same speed might exist.
Even if the frequency is exactly the same, differences might exist in clock phase or jitter. Using one PLL with a fixed clock phase and another with an adjustable clock phase can be useful for properly sampling external input signals or for maintaining the correct phase difference between clock and output data. Techniques like that were especially popular before components such as IDELAY and ODELAY were widely available.
Crystal oscillators also have small deviations from their marked value. If you have a communication link between two boards and each board has its own oscillator, then one board's main clock might run at 200.01 MHz while the other's runs at 199.99 MHz. In many cases both FPGAs will then use their locally generated low-jitter clock as their main clock, but will also use the recovered remote clock to sample incoming data. You can see this in Ethernet PHYs: a 100 Mbit PHY usually has a 25 MHz receive clock recovered from the input signal and a locally generated 25 MHz transmit clock.
There are many reasons to use multiple clocks of the exact same speed, so I will just state a few. However, I don't have any deep knowledge of your particular example.
Magic on FPGA.
As stated in the comments, an FPGA is a highly complex device. Only the vendor knows exactly what is happening inside, so they may give you advice that can seem weird.
Clock distribution.
If you have a design with just one clock source, it is critical to route the clock correctly. The clock has to arrive everywhere at the same time, which is hard for the PnR tool to manage. Today's FPGAs, with their dedicated clock networks, don't usually have this issue.
Different IPs on one FPGA.
If you have different IPs/designs that you fuse onto one FPGA, the IPs can use different clocks. If you want to split them apart again later, you will need multiple clock sources anyway. Besides, you are forced to add synchronizing registers wherever a signal crosses a clock domain, so while merging your IPs you don't mix everything up, which is good design style. This may also be the case in your example:
HDMI support is provided by an IP core from Analog Devices ...
Output.
Maybe the additional clock is only used as an output on some I/O port.
Low-Power.
In today's CMOS technology, most power is wasted on transitions (transistor switching) and static leakage (the damn things are so small, they just leak current). With multiple clock domains you have the opportunity to have fewer transitions per second, or you can switch off parts of your device completely.
In this slide, things look a little off to me. Clock cycle time, or clock period, is already the time required per clock cycle. The question is: does the term "Clock Rate" make sense?
It also says, "Hardware designers must often trade off clock rate against cycle count." But those are inversely related: if one increases the clock speed, the clock period (time per clock cycle) decreases automatically. Why would there be a choice?
Or am I missing something?
First things first: slides aren't always the best way to discuss technical issues. Don't take any slide as gospel; there's often a huge amount of handwaving going on to support big claims with little evidence.
That said, there are tradeoffs:
a faster clock is usually better: you get more integer or floating-point operations done per second
but if the faster clock doesn't line up well with external memory clocks, some of those cycles might be wasted
slower clocks might draw less power
faster clocks allow an operating system kernel to get more work done with every wakeup and return to sleep faster, thus they might draw less power
faster clocks might mean some operations take more clock cycles to actually execute (think of the supremely deep pipelines of the Pentium 4: branch mispredictions were very costly, and despite its faster clock than the Pentium III or Pentium M, real-world speeds were very similar across the processor types)
"Clock Rate" simply means frequency, which is the reciprocal of the time of a single clock cycle, so the equations make perfect sense.
Regarding the second question: cycle count is the same as "CPU clock cycles"; it is not the same as the clock period or time per clock cycle.
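To see why there is a genuine trade-off, consider the basic performance equation, here with purely hypothetical numbers:

$$\text{CPU time} = \text{cycle count} \times \text{clock period} = \frac{\text{instruction count} \times \text{CPI}}{\text{clock rate}}$$

For a program of $10^9$ instructions, a design running at 4 GHz whose deep pipeline costs CPI = 2 needs $(10^9 \times 2)/(4 \times 10^9) = 0.5$ s, while a design at 2.5 GHz with CPI = 1 needs $(10^9 \times 1)/(2.5 \times 10^9) = 0.4$ s. The faster clock loses because it increased the cycle count, which is exactly the Pentium 4 story told in the previous answer.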
I am writing VHDL code to derive an 8 MHz clock from 24 MHz and 12 MHz clocks. Can anyone please help me with this? Thanks in advance.
Is this for an FPGA, or something else? Are you really dividing a clock, or just a signal? For a divide-by-three counter, try this link:
http://www.asic-world.com/examples/vhdl/divide_by_3.html
And for a 2/3 ratio:
http://www.edaboard.com/thread42620.html
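If it really is a divide-by-three of the 24 MHz input, a minimal VHDL sketch might look like the following (names invented; the output has a 33% duty cycle, and, as the next answer explains, dividing a real clock in fabric logic is usually the wrong approach on an FPGA):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    -- Divide-by-3 sketch: clk_out is high one input cycle out of every
    -- three, giving an 8 MHz, 33% duty-cycle signal from a 24 MHz input.
    entity div3 is
      port (
        clk_in  : in  std_logic;  -- 24 MHz input
        rst     : in  std_logic;  -- synchronous reset
        clk_out : out std_logic   -- 8 MHz output
      );
    end entity;

    architecture rtl of div3 is
      signal count : unsigned(1 downto 0) := "00";
    begin
      process (clk_in)
      begin
        if rising_edge(clk_in) then
          if rst = '1' or count = 2 then
            count <= (others => '0');
          else
            count <= count + 1;
          end if;
        end if;
      end process;
      clk_out <= '1' when count = 0 else '0';
    end architecture;

A 50% duty cycle divide-by-three needs both clock edges, which is one more reason to prefer the dedicated clocking resources described below.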
As Martin has already said, follow the Xilinx recommendations and use a clock management block to divide your clock down to a lower rate.
While you might be tempted to implement a clock divider using logic and a counter, you will not obtain good synthesis results.
Here are some tips:
Be sure to closely read and follow recommendations for the clock management hardware for your device. There can be quite a few "gotchas" related to power-up, reset, loss of clock lock, etc.
Make sure that you are operating the clock management device within its specifications. See your device's datasheet for more information (in this case for the S3-A).
Use FPGA Editor to verify correct placement and configuration of your clock management units (i.e. did it end up in the right spot on the chip)
Adhere to recommended practices for feedback clocks, and clock buffering.
Use a DCM or PLL (depending on the family of FPGA) - there are examples in the documentation. If you tell us which family, I might be able to point you more directly.
EDIT:
As you say Spartan-3A DSP, you need to either:
Use the Core Generator Clocking Wizard to create a VHDL or Verilog file with the components you need, and hope you never need to understand what's going on, or
Read the libraries guide and the DCM section of the user guide for that chip, instantiate a DCM on your own, and apply the correct generics/parameters to it (a rough sketch follows below).
Don't forget to apply a reset pulse to the DCM after configuration has finished, and make sure that pulse lasts long enough. The minimum pulse length is different for each family; I don't recall off the top of my head what it is for that chip, so check the datasheet.
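For option 2, here is a rough sketch of what a bare DCM_SP instantiation producing 8 MHz from 24 MHz might look like. The generic values and names are illustrative only; check the Spartan-3A libraries guide for the exact generics, their defaults, and the reset requirements:

    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;

    entity clk_8mhz is
      port (
        clk24  : in  std_logic;  -- 24 MHz input clock
        rst    : in  std_logic;  -- reset pulse after configuration
        clk8   : out std_logic;  -- 8 MHz output clock
        locked : out std_logic
      );
    end entity;

    architecture rtl of clk_8mhz is
      signal clkfx : std_logic;
    begin
      -- CLKFX = CLKIN * CLKFX_MULTIPLY / CLKFX_DIVIDE = 24 MHz * 2/6 = 8 MHz
      dcm_i : DCM_SP
        generic map (
          CLKIN_PERIOD   => 41.666,  -- ns, for 24 MHz
          CLKFX_MULTIPLY => 2,
          CLKFX_DIVIDE   => 6,
          CLK_FEEDBACK   => "NONE"   -- CLKFX-only use needs no feedback
        )
        port map (
          CLKIN    => clk24,
          CLKFB    => '0',
          RST      => rst,
          PSEN     => '0',
          PSINCDEC => '0',
          PSCLK    => '0',
          DSSEN    => '0',
          CLKFX    => clkfx,
          LOCKED   => locked,
          CLK0     => open, CLK90    => open, CLK180 => open,
          CLK270   => open, CLK2X    => open, CLK2X180 => open,
          CLKDV    => open, CLKFX180 => open, PSDONE => open,
          STATUS   => open
        );

      -- Put the derived clock on a global clock buffer
      bufg_i : BUFG port map (I => clkfx, O => clk8);
    end architecture;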