Tristate buffers in Quartus II - VHDL

I need to clear up a problem with an external input to a CPLD by putting it through a tristate buffer. I know Quartus II has a tristate-buffer megafunction, but I am curious - if I simply tell it to output 'Z' on the specific pin, will it automatically synthesize so that a tristate buffer is enabled on that pin, or do I have to instantiate the megafunction/write a buffer myself?

Chapter 10 – Recommended HDL coding style – in the Quartus manual will tell you everything you need to know: http://www.altera.com/literature/hb/qts/qts_qii51007.pdf
In summary, tri-state buffers will be inferred on output ports if you drive them with a 'Z'.

You can do it either way. If you assign 'Z' to the pin (NOTE: it has to be an upper-case Z; lower-case confuses Quartus) a tri-state buffer will be inferred. Alternatively, you can directly instantiate various low-level I/O primitives which have a tri-state enable pin (including various DDR I/O primitives).
I have generally allowed Quartus to infer the tri-state buffers on 'normal' I/O pins, and used the low-level primitives when timing is critical and I want to force use of the I/O ring flip-flops, use the DDR I/O features, etc.
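For illustration, here is a minimal sketch of the inference style; the entity and port names are made up, and the active-high output enable is an assumption:

library ieee;
use ieee.std_logic_1164.all;

entity tristate_pin is
    port (
        data_in : in  std_logic;  -- value to drive when the buffer is enabled
        oe      : in  std_logic;  -- output enable, assumed active high
        pin     : out std_logic   -- top-level pin that gets the tri-state buffer
    );
end entity;

architecture rtl of tristate_pin is
begin
    -- Driving 'Z' (upper-case) on an output port lets Quartus infer a tri-state buffer
    pin <= data_in when oe = '1' else 'Z';
end architecture;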

Related

How can I convert the serial signal from an ADC to an N-bit parallel signal?

My project goal is to design a heart-rate module using a ZedBoard and a PPG sensor.
I'm going to use a Pmod ADC to convert the analog signal from the PPG sensor to a digital signal so that the ZedBoard can process it.
There is a problem at this point:
my module takes a 12-bit signal as input,
but I found out that the Pmod provides its digital output over the serial peripheral interface (SPI) protocol.
The input of my module is 12 bits wide, but the output of the Pmod (which will be connected to the module as its input) is only 1 bit wide.
Their bit widths differ, which they shouldn't.
How can I solve this problem?
Assuming that I have understood your problem correctly, you need to design a Deserialiser module. The most common way of doing this is by creating a Shift Register.
The Shift Register operates by shifting serial data in, 1 bit at a time. When enough bits have been shifted in (determined by your application) you can shift the contents of the register out in a parallel shift. You now have parallel data.
But wait, it may not be that easy for you. You mentioned that the device you are using communicates via an SPI bus. Unless you have an SPI module that is helpfully outputting serial data (and telling your register when to shift), you also need to design some SPI-compliant logic. Don't forget to pay attention to the timing requirements of the SPI port.
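To make that concrete, here is a rough sketch of a 12-bit shift-register deserialiser. The port names, the MSB-first bit order, and the idea that a shift_en signal (derived from the SPI chip select) gates the shifting are all assumptions; check your Pmod's datasheet for the real framing and clock edge.

library ieee;
use ieee.std_logic_1164.all;

entity spi_deserialiser is
    port (
        sclk       : in  std_logic;                      -- SPI clock from the Pmod ADC (assumed name)
        sdata      : in  std_logic;                      -- serial data line from the ADC
        shift_en   : in  std_logic;                      -- high while a 12-bit word is being clocked in
        sample     : out std_logic_vector(11 downto 0);  -- parallel 12-bit result for your module
        sample_rdy : out std_logic                       -- pulses for one sclk cycle when 12 bits are captured
    );
end entity;

architecture rtl of spi_deserialiser is
    signal shreg : std_logic_vector(11 downto 0) := (others => '0');
    signal count : integer range 0 to 11 := 0;
begin
    process (sclk)
    begin
        if rising_edge(sclk) then            -- sampling edge is an assumption, check the ADC datasheet
            sample_rdy <= '0';
            if shift_en = '1' then
                shreg <= shreg(10 downto 0) & sdata;      -- shift in one bit, MSB first
                if count = 11 then                        -- twelfth bit just arrived
                    sample     <= shreg(10 downto 0) & sdata;
                    sample_rdy <= '1';
                    count      <= 0;
                else
                    count <= count + 1;
                end if;
            end if;
        end if;
    end process;
end architecture;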

How to use an oscilloscope with an FPGA using VHDL

Do any of you have any material about this?
I want to show an std_logic_vector(0 to 29) on the oscilloscope.
That's 30 bits ... you don't want to probe 30 pins.
I'd use 2 spare pins and roll a simple serial interface off a suitable (e.g. 1 MHz) clock and a /32 counter.
One pin shifts out each bit according to the count, the other is set when you send the first bit, as a convenient triggering signal.
Either let it free run, or tell it to start (inside the FPGA) every time you update that signal.
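If it helps, a minimal sketch of that serialiser is below; the entity name, the free-running behaviour and the assumption that clk is already the slow (e.g. 1 MHz) serial clock are all mine:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity scope_serialiser is
    port (
        clk      : in  std_logic;                  -- e.g. a 1 MHz serial clock
        data     : in  std_logic_vector(0 to 29);  -- the vector you want to observe
        ser_out  : out std_logic;                  -- scope channel 1: serial data
        trig_out : out std_logic                   -- scope channel 2: trigger at the first bit
    );
end entity;

architecture rtl of scope_serialiser is
    signal count : unsigned(4 downto 0) := (others => '0');  -- free-running /32 counter
begin
    process (clk)
    begin
        if rising_edge(clk) then
            count <= count + 1;                          -- wraps every 32 cycles
            if to_integer(count) <= 29 then
                ser_out <= data(to_integer(count));      -- one bit per clock
            else
                ser_out <= '0';                          -- pad the two spare slots of the frame
            end if;
            if count = 0 then
                trig_out <= '1';                         -- asserted while the first bit is sent
            else
                trig_out <= '0';
            end if;
        end if;
    end process;
end architecture;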
Most FPGA vendors provide some kind of in-system debugger (like ChipScope for Xilinx ISE designs). These provide a very powerful debugging perspective for your FPGA design and allow you to record waveforms on hundreds of signals.

What is the advantage of using GPIO as IRQ?

I know that we can configure a GPIO as an IRQ, but I want to understand what the advantage of doing so is.
If we need an interrupt, why can't we have a dedicated interrupt line in the first place and use it directly as an interrupt?
If I get your question, you are asking why even bother having a GPIO? The other answers show that someone may not even want the IRQ feature of an interrupt. Typical GPIO controllers can configure an I/O as either an input or an output.
Many GPIO pads have the flexibility to be open drain. With an open-drain configuration, you may have a bi-directional bus and data can be both sent and received. Here you need to change from an input to an output. You can imagine this if you bit-bash I2C communications. This type of use may be fine if the I2C is only used to initialize some other interface at boot.
Even if the interface is not bi-directional, you might wish to capture on each edge. Various peripherals use zero crossings and a timer to decode a signal. For example, a laser bar-code reader, a magnetic stripe reader, or a bit-bashed UART might look at the time between zero crossings. Is the time double a bit width? Is the line high or low? Then shift the previous value and add two bits. In these cases you have to look at the signal to see whether the line is high or low. This is needed even when polarity shouldn't matter, as short noise pulses can cause confusion.
So even for the case where you have only the input as an interrupt, the current level of the signal is often very useful. If this GPIO interrupt happens to be connected to an Ethernet controller and active high means data is ready, then you don't need to have the 'I/O' feature. However, this case is using the GPIO interrupt feature as glue logic. Often this signalling will be integrated into a dedicated module. The case where you only need the interrupt is typically some custom hardware to detect a signal (case open, power disconnect, etc) which is not industry standard.
The ARM SoC vendor has no idea which of the cases above the OEM might use. The SoC vendor gives lots of flexibility, as the transistors on the die are cheap compared to the wire bonds/pins on the package. It means that you, who only use the interrupt feature, get economies of scale (and a cheaper part) because others might be using these features, and the ARM SoC vendor gets to spread the NRE cost across more people.
In a perfect world, there is maybe no need for this. Not so long ago, when transistors were more expensive, some lines did behave only as interrupts (some M68k CPUs have this). Historically, ARM had only a single interrupt line with one common routine (the Cortex-M parts are different), so the interrupt source has to be determined by reading another register. As the hardware needs to capture the state of the line on the ARM anyway, it is almost free to add the 'input controller' portion.
Also, for this reason, all of the ARM Linux GPIO drivers have a macro to convert from a GPIO pin to an interrupt number as they are usually one-to-one mapped. There is usually a single 'GIC' interrupt for the GPIO controller. There is a 'GPIO' interrupt controller which forms a tree of interrupt controllers with the GIC as the root. Typically, the GPIO irq numbers are Max GIC IRQ + port *32 + pin; so the GPIO irq numbers are just appended to the 'GIC' irq numbers.
If you were designing a bespoke ASIC for one specific system you could indeed do precisely that - only implement exactly what you need.
However, most processors/SoCs are produced as commodity products, so more flexibility allows them to be integrated in a wider variety of systems (and thus sell more). Given modern silicon processes, chip size tends to be constrained by the physical packaging, so pin count is at an absolute premium. Therefore, allowing pins to double up as either I/O or interrupt sources depending on the needs of the user offers more functionality in a given space, or the same functionality in less space, depending on which way you look at it.
It is not about "converting" anything - on a typical processor or microcontroller, a number of peripherals are connected to an interrupt controller; GPIO is just one of those peripherals. It is also by no means universally true; different devices have different capabilities, but in any case you are simply configuring a GPIO pin to generate an interrupt - that's a normal function of the GPIO not a "conversion".
Prior to ARM Cortex, ARM did not define an interrupt controller, and the core itself had only two interrupt sources (IRQ and FIQ). A vendor-defined interrupt controller was required to multiplex the single IRQ over multiple peripherals. ARM Cortex defines an interrupt controller and a more flexible interrupt architecture, and it is possible to achieve zero-latency interrupts from a GPIO, so there is no real advantage to a dedicated interrupt input. Using one might even mean adding external signal-conditioning circuitry that is often already incorporated in the GPIO on the die.

Interrupt handling with an FPGA in VHDL

I am writing interrupt handling for an FPGA and a DSP that need to interact through a shared dual-port memory (DPRAM) controller in VHDL.
I have external I/Os coming from an SPI bus on one side into the FPGA, which must be communicated to the DSP, and on the other side a camera connected to the DSP.
My interrupt scheme has a FIFO that is reset every time an FSM reads and writes the interrupts with the DSP.
Now my problem is:
I want to enable some particular interrupts at a time and disable the others.
When I do the masking with a logical XOR function, the other interrupts coming from the UART time out.
When this is done, the camera gets the signal but cannot be controlled.
I use the following algorithm to deal with all the asynchronous inputs:
In event2reg_array_proc: save all inputs to parallel buffers (“fifo_data_input_array”); each input (flag) is put into a separate buffer.
In reg_array2fifo_proc2: read each buffer serially and save the values in a FIFO (“fifo320x32”).
In the main FSM, read the output from the FIFO and do the appropriate processing; each cycle reads out only one value, which should be one flag.
If some flags remain in the registers even after processing, the reason can be:
In event2reg_array_proc and reg_array2fifo_proc2, once a flag (in a buffer) has been written into the FIFO, it should be cleared from the buffer. I use “fifo_cnt” to control this. You can use simulation to check.
The line camera sends the FRAME_VALID signal the same way as the LINE_VALID signal, so you can get a lot of CAM2DSP_FRAME_SYNC_FLAG events with a line camera.
So can anyone suggest an algorithm to enable particular interrupts while the camera is still communicating with the DSP?
Your question is not clearly worded enough to enable a proper answer.
But one point is clear : XOR is not a good choice for an interrupt mask!
Either AND or OR would be a better choice depending on the logic of the interrupt handler.
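For illustration, a minimal sketch of an AND-based mask (the width and the signal names are hypothetical):

library ieee;
use ieee.std_logic_1164.all;

entity irq_mask is
    port (
        irq_flags   : in  std_logic_vector(7 downto 0);  -- raw interrupt flags
        irq_enable  : in  std_logic_vector(7 downto 0);  -- '1' = that interrupt is enabled
        irq_pending : out std_logic_vector(7 downto 0);  -- flags that are both set and enabled
        irq_any     : out std_logic                      -- asserted when any enabled flag is set
    );
end entity;

architecture rtl of irq_mask is
    signal masked : std_logic_vector(7 downto 0);
begin
    -- AND keeps a flag only when its enable bit is set; XOR would invert bits
    -- instead, so disabled sources (e.g. the UART) would still leak through.
    masked      <= irq_flags and irq_enable;
    irq_pending <= masked;
    irq_any     <= '1' when masked /= "00000000" else '0';
end architecture;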

How to convert 24 MHz and 12 MHz clocks to an 8 MHz clock using VHDL?

I am writing code in VHDL to convert 24 MHz and 12 MHz clocks to an 8 MHz clock. Can anyone please help me with this? Thanks in advance.
Is this for an FPGA? Or something else? Are you really dividing a clock, or just a signal? For a divide by three counter, try this link:
http://www.asic-world.com/examples/vhdl/divide_by_3.html
And for a 2/3:
http://www.edaboard.com/thread42620.html
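If all you actually need is an 8 MHz rate for logic inside the FPGA (rather than a true 8 MHz clock), a divide-by-3 clock enable on the 24 MHz domain sidesteps the problems with logic-generated clocks mentioned in the other answers. A minimal sketch with made-up names:

library ieee;
use ieee.std_logic_1164.all;

entity div3_enable is
    port (
        clk_24mhz : in  std_logic;
        en_8mhz   : out std_logic   -- one-clock-wide pulse at an 8 MHz rate
    );
end entity;

architecture rtl of div3_enable is
    signal count : integer range 0 to 2 := 0;
begin
    process (clk_24mhz)
    begin
        if rising_edge(clk_24mhz) then
            if count = 2 then
                count   <= 0;
                en_8mhz <= '1';   -- asserted for one 24 MHz cycle, every third cycle
            else
                count   <= count + 1;
                en_8mhz <= '0';
            end if;
        end if;
    end process;
end architecture;

Registers that should run at 8 MHz then keep clk_24mhz as their clock and use en_8mhz as a clock enable.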
As Martin has already said, use a clock management device, as Xilinx recommends, in order to divide your clock down to a lower rate.
While you might be tempted to implement a clock divider using logic and a counter, you will not obtain good synthesis results.
Here are some tips:
Be sure to closely read and follow recommendations for the clock management hardware for your device. There can be quite a few "gotchas" related to power-up, reset, loss of clock lock, etc.
Make sure that you are operating the clock management device within its specifications. See your device's datasheet for more information (in this case for the S3-A).
Use FPGA Editor to verify correct placement and configuration of your clock management units (i.e. did it end up in the right spot on the chip)
Adhere to recommended practices for feedback clocks, and clock buffering.
Use a DCM or PLL (depending on the family of FPGA) - there's examples in the documentation. If you tell us which family, I might be able to point you more directly.
EDIT:
As you say it's a Spartan-3A DSP, you need to either:
Use the Core Generator Clocking Wizard to create a VHDL or Verilog file with the components you need in it, and hope you never need to understand what's going on, or
Read the libraries guide and the DCM section of the user guide for that chip, instantiate a DCM yourself, and apply the correct generics/parameters to it.
Don't forget to apply a reset pulse to the DCM after configuration has finished, and make sure that pulse lasts long enough. The minimum pulse length is different for each family; I don't recall off the top of my head what it is for that chip, so check the datasheet.
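To give a flavour of the second option, here is a hedged sketch of a DCM_SP instantiation that synthesises 8 MHz from 24 MHz (CLKFX = CLKIN * CLKFX_MULTIPLY / CLKFX_DIVIDE). The generic values, the "NONE" feedback setting (valid when only CLKFX is used) and the reset handling are assumptions; check them against the Spartan-3A libraries guide and the DCM user guide before relying on them.

library ieee;
use ieee.std_logic_1164.all;

library unisim;
use unisim.vcomponents.all;

entity clk_8mhz_gen is
    port (
        clk_24mhz : in  std_logic;
        rst       : in  std_logic;   -- hold high long enough after configuration (see datasheet)
        clk_8mhz  : out std_logic;
        locked    : out std_logic
    );
end entity;

architecture rtl of clk_8mhz_gen is
    signal clkfx_unbuf : std_logic;
begin
    dcm_inst : DCM_SP
        generic map (
            CLKIN_PERIOD   => 41.666,   -- 24 MHz input
            CLKFX_MULTIPLY => 2,
            CLKFX_DIVIDE   => 6,        -- 24 MHz * 2 / 6 = 8 MHz
            CLK_FEEDBACK   => "NONE"    -- only CLKFX is used, so no feedback clock
        )
        port map (
            CLKIN    => clk_24mhz,
            CLKFB    => '0',
            RST      => rst,
            DSSEN    => '0',
            PSCLK    => '0',
            PSEN     => '0',
            PSINCDEC => '0',
            CLKFX    => clkfx_unbuf,
            LOCKED   => locked
        );

    -- Route the synthesised clock onto a global clock buffer before use
    bufg_inst : BUFG
        port map (I => clkfx_unbuf, O => clk_8mhz);
end architecture;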
