How can I convert the serial signal from an ADC to an N-bit parallel signal? - fpga

My project goal is to design a heart-rate module using a ZedBoard and a PPG sensor.
I'm going to use a Pmod ADC to convert the analog signal from the PPG sensor into a digital signal that the ZedBoard can process.
There is a problem at this point.
My module takes a 12-bit signal as input,
but I found out that the Pmod delivers its digital output over the Serial Peripheral Interface (SPI) protocol.
So the input of my module is 12 bits wide, but the output of the Pmod (which will be connected to the module as its input) is only 1 bit wide.
Their bit widths differ, which they shouldn't.
How can I solve this problem?

Assuming that I have understood your problem correctly, you need to design a deserialiser module. The most common way of doing this is by creating a shift register.
The shift register operates by shifting serial data in, one bit at a time. When enough bits have been shifted in (twelve, in your case), you can read the contents of the register out in parallel. You now have parallel data.
But wait, it may not be that easy for you. You mentioned that the device you are using communicates via an SPI bus. Unless you have an SPI module that is helpfully outputting serial data (and telling your register when to shift), you will also need to design some SPI-compliant logic. Don't forget to pay attention to the timing requirements of the SPI port.
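As a starting point, here is a minimal VHDL sketch of such a deserialiser, assuming a 12-bit word framed by an active-low chip select and shifted in MSB first. The entity and signal names are placeholders rather than anything from a specific Pmod datasheet, so check your ADC's timing diagram for which SCLK edge to sample on.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity spi_deserialiser is
    port (
        sclk       : in  std_logic;                      -- serial clock of the SPI link
        cs_n       : in  std_logic;                      -- active-low chip select frames one word
        sdata      : in  std_logic;                      -- serial data from the ADC, MSB first (assumed)
        data_out   : out std_logic_vector(11 downto 0);  -- parallel 12-bit sample
        data_valid : out std_logic                       -- high for one sclk cycle per sample
    );
end entity;

architecture rtl of spi_deserialiser is
    signal shreg : std_logic_vector(11 downto 0) := (others => '0');
    signal count : unsigned(3 downto 0)          := (others => '0');
begin
    process(sclk)
    begin
        if rising_edge(sclk) then          -- your ADC may present data on the falling edge instead
            data_valid <= '0';
            if cs_n = '1' then
                count <= (others => '0');  -- bus idle: reset the bit counter
            else
                shreg <= shreg(10 downto 0) & sdata;           -- shift one more bit in
                if count = 11 then                             -- twelfth bit has just arrived
                    data_out   <= shreg(10 downto 0) & sdata;  -- present the parallel word
                    data_valid <= '1';
                    count      <= (others => '0');
                else
                    count <= count + 1;
                end if;
            end if;
        end if;
    end process;
end architecture;

Two caveats: many SPI ADCs pad the 12 data bits with leading zeros inside a 16-clock frame, so the counter must match your part's actual framing; and in a real design you would normally generate SCLK and CS from the FPGA's system clock and sample everything in that single clock domain rather than clocking logic from SCLK directly.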

Related

Arduino Yun - UART Through USB Host to Nexys 4 DDR FPGA Board

I'm working on a project that requires reading and writing data to an SD card, and I want to implement the algorithms that evaluate the incoming data in hardware. I found out that the only way to read/write an SD card through my FPGA board would be to implement a processor and then write software, which would defeat the purpose of using the FPGA board in the first place.
So I've decided that I'm going to use my Arduino Yun to read information from a micro SD card, encode it into binary, and send each piece of data to the FPGA over UART.
(The data consists of base pairs in DNA, so I'm making each base pair take up four bits, treating it as more of a logic map than any kind of ASCII mapping - the details of this choice aren't very important, but they make UART and binary valid for this application.)
However, the only thing that can carry the UART signal for the FPGA board is the micro USB slot, so I have to plug a cable into that, and the other end of it into the USB Host of the Arduino.
I've come to understand that the Yun's USB Host is connected solely to the on-board Linux processor, and while I understand that this means that I need to install various packages onto the micro SD card that I've used to extend the Yun's memory, I'm not sure what packages to use, or even how to go about using them.
Could anyone point me in the direction of libraries/packages that I should install, and what sort of code I should write to implement this functionality?
Any help would be greatly appreciated.
Please let me know if I've left out any details!

What is the simplest way to transmit a signal over MGT of Xilinx FPGA?

I want to send signals (doesn't matter what type of signal, just random binary) over the MGT lanes of a Xilinx FPGA. This is for testing the MGT traces on the PCB. What is the simplest way I can achieve this? For a regular I/O I would simply use an output buffer (OBUF) and send the signal out to the output pins. What would be the equivalent of this (or the simplest equivalent) for MGT bank pins?
EDIT:
I want to stay away from IP cores as much as possible. I'm looking for a really simple solution to somehow buffer signals out to the MGT pins.
If you have both TX and RX lanes, then I would suggest performing a loopback test: the FPGA produces data on the TX link, receives it on RX, and compares the results.
To do so, you can connect the TX lanes to the RX lanes on the PCB connector and use the Xilinx IBERT core, which will automatically create the transmit, receive, and compare circuits and produce nice results for each lane.
For 7 series, here is the link to the IBERT core:
http://www.xilinx.com/products/intellectual-property/ibert_7series_gtx.html
IBERT is also available for other families.

What is the advantage of using GPIO as IRQ?

I know that we can configure a GPIO as an IRQ, but I want to understand the advantage of doing so.
If we need an interrupt, why can't we have a dedicated interrupt line in the first place and use it directly as an interrupt?
If I understand your question, you are asking why even bother having a GPIO? The other answers show that someone may not even want the IRQ feature of an interrupt pin. Typical GPIO controllers can configure an I/O as either an input or an output.
Many GPIO pads have the flexibility to be open drain. With an open-drain configuration, you may have a bidirectional 'bus', and data can be both sent and received; here you need to change from an input to an output. You can imagine this if you bit-bash I2C communications. This type of use may be fine if the I2C is only used to initialize some other interface at boot.
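As a hedged illustration in HDL terms (the names are invented for this sketch, not taken from any particular GPIO controller), an open-drain pin is one that is either pulled low or released, never driven high; an external pull-up supplies the '1' level, which is what lets several devices share one wire:

library ieee;
use ieee.std_logic_1164.all;

entity open_drain_pin is
    port (
        drive_low : in    std_logic;  -- request to pull the line low
        pin       : inout std_logic;  -- the shared bus wire
        level_in  : out   std_logic   -- current level seen on the wire
    );
end entity;

architecture rtl of open_drain_pin is
begin
    pin      <= '0' when drive_low = '1' else 'Z';  -- drive low or release, never drive high
    level_in <= pin;                                -- reading back works in both directions
end architecture;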
Even if the interface is not bidirectional, you might wish to capture on each edge. Various peripherals use zero crossings and a timer to decode a signal. For example, a laser bar-code reader, a magnetic-stripe reader, or a bit-bashed UART might look at the time between zero crossings: is the time double a bit width? Is the line high or low? Then shift the previous value and add two bits. In these cases you have to look at the signal to see whether the line is high or low. This matters even where polarity shouldn't, since short noise pulses can cause confusion.
So even in the case where you use the input only as an interrupt, the current level of the signal is often very useful. If this GPIO interrupt happens to be connected to an Ethernet controller where active high means data is ready, then you don't need the 'I/O' feature. However, that case is using the GPIO interrupt feature as glue logic, and such signalling is usually integrated into a dedicated module. The case where you only need the interrupt is typically some custom hardware that detects a signal (case open, power disconnect, etc.) and is not industry standard.
The ARM SoC vendor has no idea which of the cases above an OEM might use. The SoC vendor provides lots of flexibility because transistors on the die are cheap compared to the wire bonds/pins on the package. This means that you, who only use the interrupt feature, get economies of scale (and a cheaper part) because others may be using the other features, and the SoC vendor gets to distribute the NRE cost across more customers.
In a perfect world, there would perhaps be no need for this. Not so long ago, when transistors were more expensive, some lines did behave only as interrupts (some M68k CPUs have this). Historically, the ARM core had only a single interrupt line with one common routine (the Cortex-M is different), so the interrupt source has to be determined by reading another register. Since the hardware needs to capture the state of the line anyway, it is almost free to add the 'input controller' portion.
Also, for this reason, all of the ARM Linux GPIO drivers have a macro to convert from a GPIO pin to an interrupt number, as they are usually mapped one-to-one. There is usually a single GIC interrupt for the whole GPIO controller, and a 'GPIO' interrupt controller that forms a tree of interrupt controllers with the GIC at the root. Typically the GPIO IRQ numbers are max GIC IRQ + port*32 + pin, so the GPIO IRQ numbers are simply appended after the GIC IRQ numbers; for example, with 160 GIC interrupts, port 2 pin 5 would map to IRQ 160 + 2*32 + 5 = 229.
If you were designing a bespoke ASIC for one specific system you could indeed do precisely that - only implement exactly what you need.
However, most processors/SoCs are produced as commodity products, so more flexibility allows them to be integrated in a wider variety of systems (and thus sell more). Given modern silicon processes, chip size tends to be constrained by the physical packaging, so pin count is at an absolute premium. Therefore, allowing pins to double up as either I/O or interrupt sources depending on the needs of the user offers more functionality in a given space, or the same functionality in less space, depending on which way you look at it.
It is not about "converting" anything - on a typical processor or microcontroller, a number of peripherals are connected to an interrupt controller, and GPIO is just one of those peripherals. Nor is it universally true; different devices have different capabilities. In any case you are simply configuring a GPIO pin to generate an interrupt - that is a normal function of the GPIO, not a "conversion".
Prior to ARM Cortex, ARM did not define an interrupt controller, and the core itself had only two interrupt sources (IRQ and FIQ); a vendor-defined interrupt controller was required to multiplex the single IRQ across multiple peripherals. ARM Cortex defines an interrupt controller and a more flexible interrupt architecture, and it is possible to achieve zero-latency interrupts from a GPIO, so there is no real advantage in having a dedicated interrupt pin. Using one might even require external signal-conditioning circuitry that is often already incorporated in the on-die GPIO.

Interrupt handling with an FPGA in VHDL

I am writing interrupt logic in VHDL for an FPGA and a DSP that need to interact through a shared dual-port RAM (DPRAM) controller.
I have external I/Os coming in over an SPI bus on one side of the FPGA, to be communicated to the DSP, and on the other side a camera connected to the DSP.
My interrupt scheme has a FIFO that is reset every time an FSM reads and writes the interrupts with the DSP.
Now my problem is:
I want to enable some particular interrupts at a time and disable the others.
When I do the masking with a logical XOR function, the other interrupts coming from the UART go to a timeout.
When this is done, the camera gets the signal but can't be controlled.
I use the following algorithm to deal with all asynchronous inputs (a sketch of the capture-and-clear idea follows after this list):
In event2reg_array_proc: save all inputs to parallel buffers, 'fifo_data_input_array'; each input (flag) goes into its own buffer.
In reg_array2fifo_proc2: read each buffer serially and save the flags in a FIFO, 'fifo320x32'.
In the main FSM: read the output from the FIFO and do the appropriate processing; each cycle, read out only one value, i.e. one flag.
If some flags remain in the register even after processing, the reason can be:
In event2reg_array_proc and reg_array2fifo_proc2: once a flag (in a buffer) has been written into the FIFO, it must be cleared from the buffer. I use 'fifo_cnt' to control this. You can check this in simulation.
The line camera sends the FRAME_VALID signal the same way as the LINE_VALID signal, so you can get a lot of CAM2DSP_FRAME_SYNC_FLAG events with a line camera.
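As a rough sketch of that capture-and-clear idea (all names here are illustrative, not the poster's actual code), each event sets a bit in a flag register, and a bit is cleared only in the cycle its index is pushed into the FIFO, so no event is lost while the FIFO is busy:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity flag_capture is
    port (
        clk       : in  std_logic;
        event_in  : in  std_logic_vector(7 downto 0);  -- synchronised one-cycle event pulses
        fifo_full : in  std_logic;
        fifo_wr   : out std_logic;                     -- write strobe into the FIFO
        fifo_din  : out std_logic_vector(2 downto 0)   -- index of the flag being queued
    );
end entity;

architecture rtl of flag_capture is
    signal flags : std_logic_vector(7 downto 0) := (others => '0');
begin
    process(clk)
        variable v_flags : std_logic_vector(7 downto 0);
        variable pushed  : boolean;
    begin
        if rising_edge(clk) then
            v_flags := flags or event_in;              -- capture newly arrived events
            fifo_wr <= '0';
            pushed  := false;
            for i in v_flags'reverse_range loop        -- scan lowest index first
                if not pushed and v_flags(i) = '1' and fifo_full = '0' then
                    fifo_din   <= std_logic_vector(to_unsigned(i, fifo_din'length));
                    fifo_wr    <= '1';
                    v_flags(i) := '0';                 -- clear only once it has been queued
                    pushed     := true;
                end if;
            end loop;
            flags <= v_flags;                          -- uncleared flags are retained
        end if;
    end process;
end architecture;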
So can anyone suggest an algorithm to enable particular interrupts while the camera is still communicating with the DSP?
Your question is not worded clearly enough to enable a proper answer.
But one point is clear: XOR is not a good choice for an interrupt mask!
Either AND or OR would be a better choice, depending on the logic of the interrupt handler.
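To make that concrete, here is a hedged sketch (the signal names are invented, not taken from the question) of active-high flags gated by an enable mask using AND:

library ieee;
use ieee.std_logic_1164.all;

entity irq_mask is
    port (
        pending : in  std_logic_vector(7 downto 0);  -- raw interrupt flags
        mask    : in  std_logic_vector(7 downto 0);  -- '1' = interrupt enabled
        active  : out std_logic_vector(7 downto 0)   -- flags seen by the handler
    );
end entity;

architecture rtl of irq_mask is
begin
    -- AND passes a flag through only where the mask enables it.
    -- XOR would instead invert every pending bit wherever mask = '1',
    -- creating spurious interrupts and hiding real ones, which is
    -- consistent with the UART timeouts described in the question.
    active <= pending and mask;
end architecture;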

Connecting an AVR ATmega32 to a shift register using the USART?

I want to connect an ATmega32 microcontroller to a shift register using the USART via the TXD pin; the shift register then performs serial-to-parallel conversion on the received data. But as you know, the shift register needs a clock, and this clock is fed from the microcontroller at the baud-rate frequency via the XCK pin (here the USART acts as a master synchronous clock generator).
My problem is that I don't know how to get this clock signal out of XCK. How can I do that?
Thanks
RS-232 is self-clocking (fixed baud rate): a UART typically runs from a x16 clock and syncs to the edge of the incoming start bit. Rather than use a bare shift register, you would probably be better off just using a simple UART chip as your serial-to-parallel converter; it would save a lot of effort.
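To make the x16 mechanism concrete, here is a hedged VHDL sketch of such a receiver (8 data bits, no parity, 1 stop bit; all names are illustrative, and rx is assumed already synchronised to clk16). It waits for the falling edge of the start bit, re-checks the line half a bit period later, then samples each bit in the middle of its period:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity uart_rx_x16 is
    port (
        clk16 : in  std_logic;                     -- clock at 16x the baud rate
        rx    : in  std_logic;                     -- serial input, idles high
        data  : out std_logic_vector(7 downto 0);  -- received byte
        valid : out std_logic                      -- one-cycle pulse when data is good
    );
end entity;

architecture rtl of uart_rx_x16 is
    type state_t is (IDLE, START, DATA_BITS, STOP);
    signal state  : state_t := IDLE;
    signal phase  : unsigned(3 downto 0) := (others => '0');  -- position within one bit time
    signal bitcnt : unsigned(2 downto 0) := (others => '0');
    signal shreg  : std_logic_vector(7 downto 0) := (others => '0');
begin
    process(clk16)
    begin
        if rising_edge(clk16) then
            valid <= '0';
            case state is
                when IDLE =>
                    if rx = '0' then                 -- falling edge: possible start bit
                        phase <= (others => '0');
                        state <= START;
                    end if;
                when START =>
                    if phase = 7 then                -- middle of the start bit
                        if rx = '0' then             -- still low, so a genuine start bit
                            phase  <= (others => '0');
                            bitcnt <= (others => '0');
                            state  <= DATA_BITS;
                        else
                            state <= IDLE;           -- just a glitch, go back to idle
                        end if;
                    else
                        phase <= phase + 1;
                    end if;
                when DATA_BITS =>
                    if phase = 15 then               -- 16 clocks later: middle of a data bit
                        phase <= (others => '0');
                        shreg <= rx & shreg(7 downto 1);  -- LSB arrives first
                        if bitcnt = 7 then
                            state <= STOP;
                        else
                            bitcnt <= bitcnt + 1;
                        end if;
                    else
                        phase <= phase + 1;
                    end if;
                when STOP =>
                    if phase = 15 then               -- middle of the stop bit
                        if rx = '1' then             -- stop bit looks valid
                            data  <= shreg;
                            valid <= '1';
                        end if;
                        state <= IDLE;
                    else
                        phase <= phase + 1;
                    end if;
            end case;
        end if;
    end process;
end architecture;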
I don't think using the UART to drive the shift register is a good idea. It would be better to use SPI, since it also provides the clock you need for the shift register. If you do want to use the UART, you would have to provide a separate clock in sync with the baud rate you've selected for the UART, which I think would be very hard and inaccurate.
