Is it possible to configure a GPIO as both input and output? - avr

I am planning to implement a GPIO-based I2C on an ATmega16.
It needs two pins, SCL and SDA. The SDA pin has to be bidirectional, but as far as I know a pin can only be configured as either an input or an output at any one time.
How can this be implemented?

The SDA pin has to be bidirectional, but as far as I know a pin can only be configured as either an input or an output at any one time.
This is true, but the I2C master "knows" when to expect incoming data. Since this is a synchronous bus, the master can switch the pin between driving it as an output and tri-stating it as an input right before clocking data out or in.
This application note from Atmel may be useful to you: Atmel AVR156: TWI Master Bit Bang Driver
The example from Atmel uses a polling approach, which limits speed. If one of your GPIO pins has pin-change interrupt support, you could use that to gain some speed if required.
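For illustration, here is a minimal sketch of that direction switching on an AVR, assuming SDA sits on PC1 of the ATmega16 (the pin choice is just for the example). Because I2C is open-drain, the pin is only ever driven low or released and read, never driven high:

```c
#include <avr/io.h>

/* Hypothetical pin assignment: SDA on PC1. The bus is open-drain, so the
 * code never drives SDA high; it either pulls it low or releases it and
 * lets the external pull-up resistor bring the line high. */
#define SDA_BIT PC1

static inline void sda_low(void)
{
    PORTC &= ~(1 << SDA_BIT);   /* output register low */
    DDRC  |=  (1 << SDA_BIT);   /* pin becomes an output, actively pulling low */
}

static inline void sda_release(void)
{
    DDRC  &= ~(1 << SDA_BIT);   /* pin becomes an input (tri-state) */
    PORTC &= ~(1 << SDA_BIT);   /* keep the internal pull-up off */
}

static inline uint8_t sda_read(void)
{
    return (PINC >> SDA_BIT) & 1;   /* sample the line while it is released */
}
```

SCL can be handled the same way; the master releases SDA before any clock pulse in which it expects the slave to drive the line (ACK bits and read data).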

Related

How can I convert the serial signal from an ADC to an N-bit parallel signal?

My project goal is to design a heart-rate module using a ZedBoard and a PPG sensor.
I'm going to use a Pmod ADC to convert the analog signal from the PPG sensor into a digital signal so that the ZedBoard can process it.
There is a problem at this point: my module expects a 12-bit signal as input, but I found out that the Pmod delivers its digital output over a serial peripheral interface (SPI).
So the input of my module is 12 bits wide, while the output of the Pmod (which will be connected to that input) is only 1 bit wide. The widths don't match, and they should.
How can I solve this problem?
Assuming that I have understood your problem correctly, you need to design a deserialiser module. The most common way of doing this is with a shift register.
The shift register works by shifting serial data in, one bit at a time. When enough bits have been shifted in (determined by your application), you can read the contents of the register out in parallel. You now have parallel data.
But wait, it may not be that easy for you. You mentioned that the device you are using communicates over an SPI bus. Unless you have an SPI module that is helpfully outputting serial data (and telling your register when to shift), you will also need to design some SPI-compliant logic. Don't forget to pay attention to the timing requirements of the SPI port.
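To make the shift-register idea concrete, here is a small C model of the behaviour your FPGA logic would implement (the type and function names are made up for the example; the real design would be written in your HDL and clocked by SCK):

```c
#include <stdbool.h>
#include <stdint.h>

/* Software model of a 12-bit deserialiser: call it once per SCK edge with the
 * bit sampled from the serial data line; it returns true when a full word has
 * been assembled. */
typedef struct {
    uint16_t shift;   /* bits collected so far, MSB first */
    uint8_t  count;   /* number of bits shifted in */
} deser12_t;

static bool deser12_push(deser12_t *d, uint8_t bit)
{
    d->shift = (uint16_t)((d->shift << 1) | (bit & 1u));
    if (++d->count == 12) {
        d->count = 0;
        return true;              /* d->shift now holds the 12-bit parallel word */
    }
    return false;
}

int main(void)
{
    deser12_t d = {0, 0};
    const uint8_t bits[12] = {1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0};

    for (int i = 0; i < 12; i++) {
        if (deser12_push(&d, bits[i])) {
            /* d.shift == 0xAAA here: twelve serial bits became one parallel word */
        }
    }
    return 0;
}
```

In hardware the same thing is a 12-bit register whose input is the ADC's serial data line, shifted on the SPI clock, plus a counter that raises a "word ready" flag every 12 bits.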

What is the simplest way to transmit a signal over MGT of Xilinx FPGA?

I want to send signals (doesn't matter what type of signal, just random binary) over MGT lanes of a Xilinx FPGA. This is for testing the MGT traces on the PCB. What is the simplest way I can achieve this? For a regular IO I would simply use an output buffer (OBUF) and send out the signal to the output pins. What would be the equivalent of this (or the simplest equivalent of this) for MGT bank pins?
EDIT:
I want to stay away from IP cores as much as possible. I'm looking for a really simple solution to somehow buffer signals to the MGT pins.
If you have both TX and RX lanes, then I would suggest performing a loopback test. The FPGA produces data on the TX link, receives it on RX, and compares the results.
To do so, you can connect the TX lanes to the RX lanes on the PCB connector and use the Xilinx IBERT core, which will automatically create the transmit, receive, and compare circuits and produce nice results for each lane.
For 7 series, here is the link to the IBERT core:
http://www.xilinx.com/products/intellectual-property/ibert_7series_gtx.html
IBERT is also available for other device families.

What is the advantage of using a GPIO as an IRQ?

I know that we can configure a GPIO as an IRQ, but I want to understand the advantage of doing so.
If we need an interrupt, why can't we have a dedicated interrupt line in the first place and use it directly as an interrupt?
What is the advantage of using a GPIO as an IRQ?
If I understand your question, you are asking why even bother having a GPIO? The other answers show that someone may not even want the IRQ feature of the pin. Typical GPIO controllers can configure an I/O as either an input or an output.
Many GPIO pads have the flexibility to be open drain. With an open-drain configuration, you may have a bidirectional bus on which data can be both sent and received. Here you need to change the pin between input and output. You can imagine this if you bit-bang I2C communications. This type of use may be fine if the I2C is only used to initialize some other interface at boot.
Even if the interface is not bidirectional, you might wish to capture on each edge. Various peripherals use zero crossings and a timer to decode a signal. For example, a laser bar-code reader, a magnetic-stripe reader, or a bit-banged UART might look at the time between zero crossings: is the interval double a bit width? Is the line high or low? Then shift the previous value and add two bits. In these cases you have to look at the signal to see whether the line is currently high or low. This matters even where polarity shouldn't, because short noise pulses can cause confusion.
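As a rough illustration of that edge-timing idea, here is a hedged AVR-flavoured sketch (an ATmega16/32-class part is assumed, with the signal on the INT0 pin and Timer1 free-running; the pin and clock choices are assumptions, not a prescription):

```c
#include <avr/interrupt.h>
#include <avr/io.h>
#include <stdint.h>

volatile uint16_t edge_interval;   /* timer ticks between the last two edges */
volatile uint8_t  line_level;      /* level sampled just after the edge */

ISR(INT0_vect)
{
    static uint16_t last_count;
    uint16_t now = TCNT1;

    edge_interval = now - last_count;   /* unsigned subtraction handles wrap-around */
    last_count    = now;
    line_level    = (PIND >> PD2) & 1;  /* INT0 is on PD2 on these parts */
}

int main(void)
{
    TCCR1B = (1 << CS11);               /* Timer1 free-running at clk/8 */
    MCUCR |= (1 << ISC00);              /* INT0 fires on any logical change */
    GICR  |= (1 << INT0);               /* enable the INT0 interrupt */
    sei();

    for (;;) {
        /* decode edge_interval and line_level here, e.g. one vs. two bit times */
    }
}
```

The decoder then only has to compare edge_interval against the expected bit width and look at line_level to know which symbol to shift in.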
So even for the case where you only use the input as an interrupt, the current level of the signal is often very useful. If this GPIO interrupt happens to be connected to an Ethernet controller and active high means data is ready, then you don't need the 'I/O' feature. However, this case is using the GPIO interrupt feature as glue logic; often this signalling will be integrated into a dedicated module. The case where you only need the interrupt is typically some custom hardware detecting a signal (case open, power disconnect, etc.) that is not industry standard.
The ARM SoC vendor has no idea which of the cases above the OEM might use. The SoC vendor provides lots of flexibility, as transistors on the die are cheap compared to the wire bonds/pins on the package. It means that you, who only use the interrupt feature, get economies of scale (and a cheaper part) because others might be using these features, and the ARM SoC vendor gets to spread the NRE cost across more customers.
In a perfect world, there is maybe no need for this. Not so long ago, when transistors were more expensive, some lines behaved only as interrupts (some M68k CPUs have this). Historically, ARM cores had only a single interrupt line with one common routine (the Cortex-M parts are different), so the interrupt source has to be determined by reading another register. As the hardware needs to capture the state of the line anyway, it is almost free to add the 'input controller' portion.
Also, for this reason, all of the ARM Linux GPIO drivers have a macro to convert from a GPIO pin to an interrupt number, as they are usually mapped one to one. There is usually a single GIC interrupt for the whole GPIO controller, and a 'GPIO' interrupt controller which forms a tree of interrupt controllers with the GIC as the root. Typically the GPIO IRQ numbers are max GIC IRQ + port * 32 + pin, so the GPIO IRQ numbers are simply appended after the GIC IRQ numbers.
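Purely to illustrate that mapping (the names and numbers below are invented for the example, not any real kernel's layout):

```c
/* Hypothetical numbers: the root GIC owns IRQs 0..159, and each GPIO port
 * contributes 32 pins appended after them, one IRQ per pin. */
#define GIC_IRQ_COUNT       160
#define GPIO_PINS_PER_PORT  32

#define example_gpio_to_irq(port, pin) \
    (GIC_IRQ_COUNT + (port) * GPIO_PINS_PER_PORT + (pin))

/* e.g. example_gpio_to_irq(2, 5) == 160 + 2*32 + 5 == 229 */
```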
If you were designing a bespoke ASIC for one specific system you could indeed do precisely that - only implement exactly what you need.
However, most processors/SoCs are produced as commodity products, so more flexibility allows them to be integrated in a wider variety of systems (and thus sell more). Given modern silicon processes, chip size tends to be constrained by the physical packaging, so pin count is at an absolute premium. Therefore, allowing pins to double up as either I/O or interrupt sources depending on the needs of the user offers more functionality in a given space, or the same functionality in less space, depending on which way you look at it.
It is not about "converting" anything - on a typical processor or microcontroller, a number of peripherals are connected to an interrupt controller; GPIO is just one of those peripherals. It is also by no means universally true; different devices have different capabilities, but in any case you are simply configuring a GPIO pin to generate an interrupt - that's a normal function of the GPIO not a "conversion".
Prior to ARM Cortex, ARM did not define an interrupt controller, and the core itself had only two interrupt sources (IRQ and FIQ). A vendor-defined interrupt controller was required to multiplex the single IRQ over multiple peripherals. ARM Cortex defines an interrupt controller and a more flexible interrupt architecture; it is possible to achieve zero-latency interrupts from a GPIO, so there is no real advantage to a dedicated interrupt pin. Going that route might even mean adding external signal-conditioning circuitry that is often already incorporated in the GPIO block on the die.

Assigning internal pull-ups to an input port

On the ATmega128, what is the difference between enabling the internal pull-ups and not enabling them when a port is used as an input? I don't see the point of enabling a pull-up when using a port as an input.
Sometimes your input won't have an output connected to it. By enabling the internal pull-up, you guarantee the input will read as high in that condition. Without the pull-up, the input would just be "floating".
On the ATmega128, what is the difference between enabling the internal pull-ups and not enabling them when a port is used as an input? I don't see the point of enabling a pull-up when using a port as an input.
If there is a component connected to the input that is always actively driving the line to either low or high, you won't require a pull-up/down. You use pull-up/down resistors to ensure a well-defined logical level under all conditions.
The simplest example is an unconnected input pin of a microcontroller. It would be "floating" without a pull-up/down weakly driving it to a specific level.
Consider this circuit:
Let's assume that C is the input to your microcontroller and Vin is controlled by a mechanical switch. If Vin is 0 V/open, the transistor is switched off. If you didn't use the pull-up resistor Rc (which could be the internal pull-up of your controller), the input C would be floating. Rc also serves as a current limiter when the transistor is switched on.
You need either a pull-up or a pull-down. If a pull-up suits your circuit, the internal ones are already there and you can spare yourself the external resistors.
If you need a pull-down, you have to connect resistors externally, since the AVR only provides internal pull-ups.
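A minimal sketch of using the internal pull-up on an ATmega128-class AVR, assuming a switch to ground on PA0 (the pin choice is just for the example): the pin is an input, the pull-up holds it high while the switch is open, and closing the switch reads as 0.

```c
#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    DDRA  &= ~(1 << PA0);    /* PA0 configured as an input */
    PORTA |=  (1 << PA0);    /* writing 1 to PORT enables its internal pull-up */

    for (;;) {
        uint8_t pressed = !((PINA >> PA0) & 1);  /* reads 0 when the switch pulls the pin low */
        (void)pressed;        /* act on the value here */
    }
}
```

Without the pull-up (and with no external resistor), the same pin would float when the switch is open and could read back either value.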

Connecting an AVR ATmega32 to a shift register using the USART?

I want to connect an ATmega32 microcontroller to a shift register using the USART via the TXD pin; the shift register then performs serial-to-parallel conversion on the received data. But as you know, the shift register needs a clock, and this clock is fed from the microcontroller at the baud-rate frequency via the XCK pin (here the USART acts as a master synchronous clock generator).
My problem is that I don't know how to get this clock signal out of XCK. How do I do that?
Thanks.
Plain asynchronous serial (RS-232 style) carries no separate clock - it runs at a fixed baud rate, and the receiving UART typically uses a 16x clock and synchronizes to the incoming start-bit edge. Rather than use a bare shift register, you would probably be better off just using a simple UART chip as your serial-to-parallel converter - it would save a lot of effort.
I don't think using the UART for the shift register would be a good idea. It would be better to use SPI, as it also provides the clock you require for the shift register. But if you want to use the UART in its ordinary asynchronous mode, you would have to provide a separate clock in sync with the baud rate you've selected, which I think would be very hard and inaccurate.
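If you do go the SPI route this answer suggests, a hedged sketch for the ATmega32 might look like the following. It assumes a latched shift register such as a 74HC595 wired with its serial input on MOSI, its shift clock on SCK, and its latch on PB4 (the latch pin choice is an assumption):

```c
#include <avr/io.h>
#include <stdint.h>

#define LATCH_BIT PB4   /* assumed latch/storage-clock pin of the shift register */

static void spi_master_init(void)
{
    /* MOSI (PB5), SCK (PB7) and the latch pin as outputs */
    DDRB |= (1 << PB5) | (1 << PB7) | (1 << LATCH_BIT);
    /* SPI enabled, master mode, clock = F_CPU/16 */
    SPCR  = (1 << SPE) | (1 << MSTR) | (1 << SPR0);
}

static void shiftreg_write(uint8_t value)
{
    SPDR = value;                       /* start shifting the byte out; the clock appears on SCK */
    while (!(SPSR & (1 << SPIF)))       /* wait until all 8 bits have been sent */
        ;
    PORTB |=  (1 << LATCH_BIT);         /* pulse the latch so the parallel outputs update */
    PORTB &= ~(1 << LATCH_BIT);
}

int main(void)
{
    spi_master_init();
    shiftreg_write(0x55);               /* example: put 0101 0101 on the parallel outputs */
    for (;;)
        ;
}
```

The USART in master synchronous mode can do something similar with XCK as the clock, but SPI gives you the clock and data timing for free with less setup.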
