Any of you have any material about this?
I want to show an std_logic_vector(0 to 29) on the oscilloscope.
That's 30 bits ... you don't want to probe 30 pins.
I'd use 2 spare pins and roll a simple serial interface off a suitable (e.g. 1 MHz) clock and a /32 counter.
One pin shifts out each bit according to the count, the other is set when you send the first bit, as a convenient triggering signal.
Either let it free run, or tell it to start (inside the FPGA) every time you update that signal.
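A minimal VHDL sketch of that idea, assuming you already have a suitable slow debug clock (around 1 MHz); all names here are invented for illustration:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity debug_serializer is
    port (
        slow_clk   : in  std_logic;                  -- e.g. ~1 MHz debug clock
        data_in    : in  std_logic_vector(0 to 29);  -- the vector you want to see
        serial_out : out std_logic;                  -- scope channel 1
        trigger    : out std_logic                   -- scope channel 2 (trigger on this)
    );
end entity;

architecture rtl of debug_serializer is
    signal count : unsigned(4 downto 0) := (others => '0');  -- free-running /32 counter
begin
    process (slow_clk)
    begin
        if rising_edge(slow_clk) then
            count <= count + 1;                           -- wraps 0..31
            if count < 30 then
                serial_out <= data_in(to_integer(count)); -- shift bits 0..29 out in order
            else
                serial_out <= '0';                        -- pad the last two slots
            end if;
            if count = 0 then
                trigger <= '1';                           -- one-cycle pulse as bit 0 goes out
            else
                trigger <= '0';
            end if;
        end if;
    end process;
end architecture;

For a stable picture you may want to copy data_in into a shadow register each time the counter wraps, so the vector cannot change halfway through a scan.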
Most FPGA vendors provide some kind of in-system debugger (like ChipScope for Xilinx ISE designs). These give you a very powerful way to debug your FPGA design and let you record waveforms on hundreds of internal signals.
I am working on implementing a DDR SDRAM controller for a class and am not allowed to use the Xilinx MIG core.
After thrashing with the design, I am currently working synchronously to my system clock at 100 MHz and creating a divided signal "clock" (generated using a counter) that is sent out on the I/O pins to the DDR SDRAM. I have some logic that gives me the "rising" edge strobes of this signal clock, as I am aware that I cannot use a signal to clock a process.
However, this divided-clock method runs very slowly, and I have concerns that I am not meeting the minimum required frequency of the external DDR SDRAM. I am hoping to speed up my read/write bursts, but my Spartan-3E will struggle with anything higher than 100 MHz. After looking around online, I found this code on EDA Board:
process (Input_Clk, Reset_Control)
begin
    if (Reset_Control = '1') then
        Output_Data <= (others => '0');
    elsif rising_edge(Input_Clk) then
        Output_Data <= Input_Data1;
    elsif falling_edge(Input_Clk) then
        Output_Data <= Input_Data2;
    end if;
end process;
I have written a lot of VHDL, but have never seen something like this before. I'm sure this works fine in simulation, but it doesn't look synthesizable to me. The poster said this should be supported by 1076.6-2004. Does this infer two flip-flops, one clocked on the rising edge and one on the falling edge whose outputs both feed into a 2:1 mux? Does Xilinx support this? I want to avoid having to instantiate a DCM as crossing these clock domains will definitely slow me down and will add undesired complexity. Is there a way to safely generate my DDR data that is being sent to and received from DDR SDRAM without the Xilinx primitive for the MIG? How would I perform the receiving of DDR data in Verilog?
For reference, we have to code in Verilog, so I'm not too sure how to translate that VHDL process to a Verilog always block if it is synthesizable. We are using the Micron MT46V32M16, if that is relevant.
Here are the timing diagrams for what I am trying to replicate:
I would say that implementing a DDR controller 'for class' is rather challenging. In the companies I worked for, such designs were left to senior engineers to build.
First, about the VHDL code shown:
Yes, you are right: that cannot be synthesized.
The approach to capturing double-clocked (DDR) inputs is to have two data paths, one registered on the rising edge and one on the falling edge. In a second stage the two are put in parallel at double the data width. Thus a 32-bit wide DDR interface produces 64 data bits per 'system' clock.
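To make the two-path idea concrete, here is a minimal VHDL sketch (matching the VHDL snippet above; the same structure maps directly onto two Verilog always blocks). All signal names are made up, and on a Spartan-3E you would normally use the dedicated IDDR2 input registers rather than plain fabric flip-flops:

-- Illustrative only: capture one path per clock edge, then recombine on the rising edge.
process (capture_clk)
begin
    if rising_edge(capture_clk) then
        data_rise <= ddr_dq;                -- sample taken on the rising edge
    end if;
end process;

process (capture_clk)
begin
    if falling_edge(capture_clk) then
        data_fall <= ddr_dq;                -- sample taken on the falling edge
    end if;
end process;

process (capture_clk)
begin
    if rising_edge(capture_clk) then
        sdr_word <= data_rise & data_fall;  -- double-width word, single (rising-edge) domain
    end if;
end process;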
More difficult is clocking the arriving data in at the right time. As your read timing diagram shows, the data transitions are aligned with the clock edge, so you need a delayed clock to sample in the middle of the data window. In an ASIC that is done using a 'tune-able' delay line which is calibrated at start-up and regularly re-checked for phase. In an FPGA that would require some esoteric logic.
I have not been close to DDR chips for a while, but I think all the modern ones (DDR2 and up?) output a clock/strobe themselves to help with capturing the read data.
Also, after you have clocked the read data in using that shifted clock, you have to get the data back into the system clock domain, which requires an asynchronous FIFO.
I hope that gets you started.
I am trying to learn VHDL, and am writing a simple transmitter for serial data. However, I encountered a problem - I need a clock to run it, and the datasheet for my FPGA (MAX II) says this:
Output of the internal oscillator for MAX II devices: 3.3-5.5 MHz
So there is no way to reliably set the frequency of the internal FPGA oscillator? And if there is, how do you do it efficiently?
Thanks!
No, there is no way to set the frequency of the internal oscillator. It's most likely an RC oscillator built into the die, so its frequency will depend heavily on silicon process variations and temperature.
If you need something more precise, it'll need to be external to the CPLD.
I am designing a processor using an Altera DE1 kit.
I will be running test bench to stress the processor.
I want to know if there is any way to measure only the power consumption of my design and neglecting the other power dissipation caused by the DE1 board.
TIA for the answer.
Measure power at an idle state. The idle state can be many things. This needs to be decided by you:
The board operating when the FPGA is not programmed (no bitstream loaded).
FPGA loaded, but you hold down the reset for the logic.
Place the FPGA in some kind of suspended state (sleep mode).
Now that you have your reference power measurement, measure the power again with your design running fully. Subtract one from the other and you will have a result that is close to what you are looking for (the board may consume a slightly different amount at each idle state than it does while your design is running normally).
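For example, with made-up numbers: if the board draws 1.80 W with the FPGA unconfigured and 2.35 W with your design running flat out, your design accounts for roughly 0.55 W, subject to the caveat above.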
You should be able to replace the 0-ohm resistor R29 with a shunt resistor and measure the core current of the FPGA through that. It's right in series with VCCINT, so it should reflect only the current used by the FPGA logic.
There's also R30 in series with VCCIO, if you want to include IO power consumption as well.
The resistor names are from this schematic (the only one I could find so far): http://d1.amobbs.com/bbs_upload782111/files_33/ourdev_586508CWZW3R.pdf
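As a made-up example of the arithmetic: with a 0.1 ohm shunt in place of R29 and 45 mV measured across it, the core is drawing 0.45 A, which is about 0.54 W if the core supply is 1.2 V (check the schematic for the actual VCCINT level).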
I very recently began experimenting with FPGAs. In researching things around the net I've noticed in several places that designs might use multiple separate PLL clocks of the exact same speed. Why is that?
One example I will give is this site: Parallella Linux Quick Start
They have their FCLK_CLK1 and FCLK_CLK2 both at 200MHz. Why is this recommended and not a single clock at 200MHz for both? Is it just customary to give each major component their own clock even if it is the same? Or am I missing something?
Besides the reasons already mentioned, there are several other reasons why two PLL clocks of the same speed might exist.
Even if the frequency is exactly the same, differences might exist in clock phase or jitter. Using one PLL with fixed clock phase and another one with adjustable clock phase can be useful for proper sampling of external input signals or maintaining the correct phase difference between clock and output data. Techniques like that were especially popular before components such as IDELAY and ODELAY were widely available.
Crystal oscillators also have small deviations from their marked value. If you have a communication link between two boards and each board has its own oscillator, one board's main clock might run at 200.01 MHz and the other's at 199.99 MHz. In many cases both FPGAs will then use their locally generated low-jitter clock as their main clock, but will also use the recovered remote clock to sample incoming data. You can see this in Ethernet PHYs: a 100 Mbit PHY usually has a 25 MHz receive clock recovered from the input signal and a locally generated 25 MHz transmit clock.
There are many reasons to use multiple clocks of the exact same speed, so I will just state a few. However, I don't have any deep knowledge of your particular example.
Magic on FPGA.
As stated in the comments, an FPGA is a highly complex device. Only the vendor knows exactly what is happening inside, so they might give you advice that looks weird.
Clock distribution.
If you have a design with just one clock source, it is critical to route the clock correctly: it has to arrive everywhere at (nearly) the same time, which is hard for the place-and-route tool to manage. Today's FPGAs have dedicated clock networks, so they don't usually have this issue.
Different IPs on one FPGA.
If you have different IPs/designs that you combine on one FPGA, the IPs can use different clocks. If you ever want to split them apart again, you will need multiple clock sources anyway. Besides, crossing a clock domain forces you to add synchronizing registers, so when merging your IPs you don't mix everything up, which is good design style. This may also be the case in your example:
HDMI support is provided by an IP core from Analog Devices ...
Output.
Maybe the additional clock is only used as an output on some I/O port.
Low-Power.
In today's CMOS technology, most power is spent on transitions (transistor switching) and static leakage (the transistors are so small that they simply leak current). With multiple clock domains you have the opportunity to have fewer transitions per second, or you can switch off parts of your device completely.
I am writing code in VHDL to convert a 24 MHz and a 12 MHz clock to an 8 MHz clock. Can anyone please help me with this coding? Thanks in advance.
Is this for an FPGA? Or something else? Are you really dividing a clock, or just a signal? For a divide by three counter, try this link:
http://www.asic-world.com/examples/vhdl/divide_by_3.html
And for the 12 MHz case (a 2/3 ratio):
http://www.edaboard.com/thread42620.html
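If you really only need a signal at one third of the input rate (and not a proper low-skew clock, see the next answer), a plain counter is enough. A rough VHDL sketch, with invented names and a 33 % duty cycle rather than the 50 % of the asic-world example:

-- clk_div3 pulses high for one input-clock period out of every three (24 MHz -> 8 MHz rate).
-- Treat it as a clock enable for your logic, not as a clock in its own right.
signal count    : unsigned(1 downto 0) := (others => '0');
signal clk_div3 : std_logic := '0';
...
process (clk_24mhz)
begin
    if rising_edge(clk_24mhz) then
        if count = 2 then
            count    <= (others => '0');
            clk_div3 <= '1';
        else
            count    <= count + 1;
            clk_div3 <= '0';
        end if;
    end if;
end process;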
As Martin has already said, use a clock management block, per the Xilinx recommendations, to divide your clock down to a lower rate.
While you might be tempted to implement a clock divider using logic and a counter, you will not obtain good synthesis results.
Here are some tips:
Be sure to closely read and follow recommendations for the clock management hardware for your device. There can be quite a few "gotchas" related to power-up, reset, loss of clock lock, etc.
Make sure that you are operating the clock management device within its specifications. See your device's datasheet for more information (in this case for the S3-A).
Use FPGA Editor to verify correct placement and configuration of your clock management units (i.e. did it end up in the right spot on the chip)
Adhere to recommended practices for feedback clocks, and clock buffering.
Use a DCM or PLL (depending on the family of FPGA) - there's examples in the documentation. If you tell us which family, I might be able to point you more directly.
EDIT:
As you say it's a Spartan-3A DSP, you need to either:
Use the Core Generator Clocking Wizard to create a VHDL or Verilog file for you with the components you need in it, and hope you never need to understand what's going on, or
Read the libraries guide and the DCM section of the user guide for that chip, instantiate a DCM yourself, and apply the correct generics/parameters to it (a rough sketch is at the end of this answer).
Don't forget to apply a reset pulse to the DCM after configuration has finished, and make sure that pulse lasts long enough. The minimum pulse length is different for each family; I don't recall off the top of my head what it is for that chip, so check the datasheet.
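For the second option, the instantiation ends up looking roughly like the sketch below. This is only an outline, not checked against your exact part: the signal names are invented, the generics assume a 24 MHz input and use the DFS output (24 MHz x 2 / 6 = 8 MHz; for a 12 MHz input you would use 2/3 instead), and you should verify every value against the Spartan-3A DSP data sheet and the libraries guide.

library unisim;
use unisim.vcomponents.all;
...
dcm_8mhz : DCM_SP
    generic map (
        CLKIN_PERIOD   => 41.667,     -- assumed 24 MHz input
        CLKFX_MULTIPLY => 2,          -- Fout = Fin * M / D = 24 * 2 / 6 = 8 MHz
        CLKFX_DIVIDE   => 6,
        CLK_FEEDBACK   => "NONE"      -- only CLKFX is used, so no feedback path
    )
    port map (
        CLKIN    => clk_24mhz,
        CLKFB    => '0',
        RST      => dcm_rst,          -- the post-configuration reset pulse mentioned above
        DSSEN    => '0',
        PSCLK    => '0',
        PSEN     => '0',
        PSINCDEC => '0',
        CLKFX    => clk_8mhz_unbuf,
        LOCKED   => dcm_locked
    );

bufg_8mhz : BUFG
    port map (
        I => clk_8mhz_unbuf,
        O => clk_8mhz                 -- clock your logic with this; hold logic in reset until dcm_locked asserts
    );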