Trying to send 10-bit data over SPI as slave when receiving clock and slave select as input (with buffer mode) - AVR

I'm trying to emulate data from the 10-bit AEAT-6010 rotary encoder and send it out on the MISO pin of the SPI peripheral on the ATtiny3217. The ATtiny acts as a slave and receives a CLK and SS signal as input, responding by shifting out the data (the encoder value). The encoder follows the SSI protocol to send data as shown below:
The problem arises when trying to send 10 bits over the 8-bit SPI peripheral on the ATtiny. The master of the SPI bus is a TMS320F2808, and from it I receive 11 clock pulses and the SS signal. The measured signals and data are shown below:
Here the data I try to send is 0b10, just for testing. I can see the correct data on the MISO line, but there are three extra 1s in the middle of the signal. This is with BUFEN=1 and BUFWR=1 in the SPI settings of the ATtiny, as can be seen in the configuration below (without buffer mode the three 1s come on the last 3 bits, and the first bit (MSB) is also read as a 1):
int8_t SPI_0_init()
{
    SPI0.CTRLA = 0 << SPI_CLK2X_bp    /* Enable Double Speed: disabled */
               | 0 << SPI_DORD_bp     /* Data Order Setting: disabled (MSB first) */
               | 1 << SPI_ENABLE_bp   /* Enable Module: enabled */
               | 0 << SPI_MASTER_bp   /* SPI module in slave mode */
               | SPI_PRESC_DIV4_gc;   /* System Clock / 4 */
    SPI0.CTRLB = 1 << SPI_BUFEN_bp    /* Buffer Mode Enable: enabled */
               | 1 << SPI_BUFWR_bp    /* Buffer Write Mode: enabled */
               | SPI_MODE_0_gc        /* SPI Mode 0 */
               | 0 << SPI_SSD_bp;     /* Slave Select Disable: disabled */
    SPI0.INTCTRL = 0 << SPI_DREIE_bp  /* Data Register Empty Interrupt Enable: disabled */
                 | 1 << SPI_IE_bp     /* Interrupt Enable: enabled */
                 | 0 << SPI_RXCIE_bp  /* Receive Complete Interrupt Enable: disabled */
                 | 0 << SPI_SSIE_bp   /* Slave Select Trigger Interrupt Enable: disabled */
                 | 0 << SPI_TXCIE_bp; /* Transfer Complete Interrupt Enable: disabled */
    return 0;
}
I have confirmed that the data format of the SPI modules of the TMS320F2808 and the ATtiny is the same (read on the falling clock edge). Is there something I am missing about how the buffers of the SPI work, or the interrupts of the SPI? I'm not sure what to do in the ISR when enabling the different interrupts (other than clearing the flags). This is my main function (the ISR is empty for now):
int main(void)
{
    /* Initializes MCU, drivers and middleware */
    atmel_start_init();
    sei(); // Enable global interrupts

    /* Replace with your application code */
    while (1) {
        SPI_transmit(enc_data_L);
    }
}
Where SPI_transmit() is as follows:
void SPI_transmit(uint16_t enc_data)
{
    // Start the transmission by assigning the data to the SPI data register
    SPI0.DATA = enc_data;
    //SPI0.DATA = enc_data_H;

    // Wait for the transmission to complete by periodically checking the SPI status register.
    // SPI_IF is the only interrupt flag with a function in non-buffered mode.
    while (!(SPI0.INTFLAGS & SPI_RXCIF_bm));
    SPI0.DATA; // Dummy read to clear the flag
}
I have also tried splitting the 16-bit data into two bytes, as suggested here, but the problem remains even for 8-bit data. I hope this is enough background info. I'm thankful for any ideas!

In SPI_transmit, after writing to the buffer you wait until the transmission has completed. At that point the transmit buffer has been fully shifted out and is empty:
while(!(SPI0.INTFLAGS & (SPI_RXCIF_bm))); // waits until data is transmitted
RXCIF is set when there is data in the receive buffer, i.e. when the transmission of the output buffer has also completed.
Therefore, when the next byte is clocked out by the master, there is no new data in the output buffer to transmit yet. Most likely the SPI module just repeats the transmission of the previous byte.
So, instead of waiting for the RXCIF flag, wait for the DREIF flag, which is set when the transmit buffer is empty and ready for the next byte.
You can just ignore the incoming data, since you're not using it anyway:
void SPI_transmit(uint16_t enc_data)
{
    // Wait until the buffer is ready to accept the next data byte
    while (!(SPI0.INTFLAGS & SPI_DREIF_bm));
    // Then start the transmission by assigning the data to the SPI data register
    SPI0.DATA = enc_data;
    // Don't care about the read buffer
}
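Applying the same idea to the original 10-bit problem, here is a minimal sketch of a hypothetical SPI_transmit_10bit() that queues the reading as two bytes, MSB first, assuming the 10-bit value sits right-aligned in a uint16_t (how the master's 11 clock pulses map onto 16 slave bits is a separate framing question):
void SPI_transmit_10bit(uint16_t enc_data)
{
    // Wait until the transmit buffer can accept data, then queue the
    // high byte (the upper bits of the 10-bit value). With BUFEN=1
    // both bytes can be queued back to back.
    while (!(SPI0.INTFLAGS & SPI_DREIF_bm));
    SPI0.DATA = (uint8_t)(enc_data >> 8);

    // Queue the low byte
    while (!(SPI0.INTFLAGS & SPI_DREIF_bm));
    SPI0.DATA = (uint8_t)(enc_data & 0xFF);
}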

Related

TWI on ATmega2560 waits in an infinite loop

I get some errors when I try to initialise the connection from the TWI master to the bus. The start condition is sent, but the processor then waits in an infinite loop before sending the slave address to the bus.
I have also analysed the signals on the bus; one result is that the clock is running, but no data is sent on the bus.
The processor waits in the line with the marked arrow.
We use the following code to start and initialise the bus ...
void i2c_master_init() {
    TWBR = (uint8_t)TWBR_val;
}

void i2c_master_stop() {
    TWCR = (1<<TWINT) | (1<<TWEN) | (1<<TWSTO);
}

uint8_t i2c_master_start(uint8_t address) {
    TWCR = 0;
    TWCR |= (1<<TWSTA);
    TWCR |= (1<<TWEN);
    TWCR |= (1<<TWINT);
    while( !(TWCR & (1<<TWINT)) ); // <--
    [...]
}
Currently I don't know what's going wrong with the code. Or am I doing something else wrong? Can anyone help me?
Thank you in anticipation.
My best guess, without the hardware on my bench, is that you should set all flags in the TWI control register at once with TWCR = (1<<TWINT)|(1<<TWSTA)|(1<<TWEN). You currently set them one by one in three separate operations (multiple clock cycles apart), while the datasheet implies the flags must be set together; see also the datasheet examples.
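A minimal sketch of how i2c_master_start() could look with that change; the address phase is an illustrative completion of the elided part, following the standard ATmega TWI sequence, not the original code:
uint8_t i2c_master_start(uint8_t address) {
    // Send START: all control bits written in a single operation
    TWCR = (1<<TWINT) | (1<<TWSTA) | (1<<TWEN);
    while (!(TWCR & (1<<TWINT)));  // wait until START has been transmitted

    // Load the slave address and clock it out (TWSTA is no longer set here)
    TWDR = address;
    TWCR = (1<<TWINT) | (1<<TWEN);
    while (!(TWCR & (1<<TWINT)));  // wait until address + ACK/NACK completes

    return TWSR & 0xF8;            // status code for the caller to check
}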

I²C Master Write with PIC18F45K50: keeps SCL low

I'm writing my own I²C Master Write function according to Microchip's datasheet. I'm using MPLAB X. I generated the configuration with the Code Configurator, but here are the interesting bits:
// R_nW write_noTX; P stopbit_notdetected; S startbit_notdetected; BF RCinprocess_TXcomplete; SMP Standard Speed; UA dontupdate; CKE disabled; D_nA lastbyte_address;
SSP1STAT = 0x80;
// SSPEN enabled; WCOL no_collision; CKP Idle:Low, Active:High; SSPM FOSC/4_SSPxADD_I2C; SSPOV no_overflow;
SSP1CON1 = 0x28;
// SBCDE disabled; BOEN disabled; SCIE disabled; PCIE disabled; DHEN disabled; SDAHT 100ns; AHEN disabled;
SSP1CON3 = 0x00;
// Baud Rate Generator Value: SSP1ADD 80;
SSP1ADD = 0x50;
// clear the master interrupt flag
PIR1bits.SSP1IF = 0;
// enable the master interrupt
PIE1bits.SSP1IE = 1;
So: Standard Speed, 100 ns hold time, Master Mode, clock frequency of about 50 kHz.
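(For reference, assuming FOSC = 16 MHz as _XTAL_FREQ suggests below, the MSSP baud-rate formula gives FSCL = FOSC / (4 × (SSP1ADD + 1)) = 16 MHz / (4 × 81) ≈ 49.4 kHz.)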
I tried to follow the procedure described on p. 238 of the datasheet:
http://ww1.microchip.com/downloads/en/DeviceDoc/30000684B.pdf
Here's my code:
#include "mcc_generated_files/mcc.h"
#include <stdio.h>
#define _XTAL_FREQ 16000000
#define RTS_PIN PORTDbits.RD3
#define CTS_PIN PORTDbits.RD2
#define LED_PIN PORTAbits.RA1
#define RX_FLAG PORTAbits.RA2
uint8_t c;
// Define putch() for printf())
void putch(char c)
{
EUSART1_Write(c);
}
void main(void)
{
// Initialize the device
SYSTEM_Initialize();
while (1)
{
// Generate a START condition by setting Start Enable bit
SSP1CON2bits.SEN = 1;
// Wait for START to be completed
while(!PIR1bits.SSPIF);
// Clear flag
PIR1bits.SSPIF = 0;
// Load the address + RW byte in SSP1BUF
// Address = 85 ; request type = WRITE (0)
SSP1BUF = 0b10101010;
// Wait for ack
while (SSP1CON2bits.ACKSTAT);
// Wait for MSSP interrupt
while (!PIR1bits.SSPIF);
// Load data (0x11) in SSP1BUF
SSP1BUF = 0x11;
// Wait for ack
while (SSP1CON2bits.ACKSTAT);
// Generate a STOP condition
SSP1CON2bits.PEN = 1;
// Wait for STOP to be completed
while(!PIR1bits.SSPIF);
// Clear flag
PIR1bits.SSPIF = 0;
// Wait for 1s before sending the next byte
__delay_ms(1000);
}
}
The slave device is an Arduino which I have tested with another Arduino (Master) to make sure it's working correctly.
My problem is: analysing the SDA/SCL signals with a logic analyser, when I start the PIC I get 2 correct messages, that is, with the correct address sent and the byte transmitted, but at the end of the second one SCL is held LOW, which breaks all further writes (there can't be a proper START condition while SCL is held LOW). BTW, at the end of the first transmission, SCL is held LOW for about 3 ms, but then goes HIGH again for no apparent reason.
Can anyone here point out what I'm doing wrong? Did I forget something?
Thanx in advance.
Best regards.
Eric
PS: when testing the slave with another Arduino as the Master, SCL is set HIGH as soon as the transmission is over.
One thing I'm noticing is that after sending the slave address you wait for the ACK (ACKSTAT) and then for the SSPIF interrupt flag, but you are not checking SSPIF after the data byte; you are only checking ACKSTAT. Maybe try waiting for and clearing SSPIF before setting PEN to assert the stop condition?
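In code, that suggestion would amount to something like this just before the stop condition (an untested sketch):
// Wait for the MSSP interrupt that follows the data byte, then clear it,
// before generating the STOP condition
while (!PIR1bits.SSPIF);
PIR1bits.SSPIF = 0;
SSP1CON2bits.PEN = 1;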
Have you checked the state of the SSPCON and SSPSTAT registers when this behaviour occurs? That might help narrow down where the problem lies.
Thanx a lot for your answer !
I cleared SSP1IF after loading the data byte, and now it's working fine !
I think I understand now what was happening: the datasheet indicates that ACKSTAT is the only status bit that reacts on the rising edge of SCL, instead of the falling edge like the other bits. So in my code I generated the STOP condition too early, which may have made it inoperative. Thus no STOP condition was generated, SCL was stuck LOW, and the next transmission could not be started.
Furthermore, when I wait for the STOP condition to be completed, the SSP1IF flag is still set from before, so the code doesn't actually wait and jumps directly to the delay() function. I don't know if that matters here, since it waits anyway, but it could matter if I ever tried to send packets one right after the other.
So here's the function I wrote, which is working (BTW it can take up to 255 data bytes):
void MasterWrite(char _size, char* _data)
{
    // Generate a START condition by setting the Start Enable bit
    SSP1CON2bits.SEN = 1;
    // Wait for START to be completed
    while(!PIR1bits.SSPIF);
    // Clear flag
    PIR1bits.SSPIF = 0;

    // Load the address + RW byte in SSP1BUF
    // Address = 85 ; request type = WRITE (0)
    SSP1BUF = 0b10101010;
    // Wait for ack
    while (SSP1CON2bits.ACKSTAT);
    // Wait for MSSP interrupt
    while (!PIR1bits.SSPIF);
    // Clear flag
    PIR1bits.SSPIF = 0;

    for (int i=0; i<_size; i++)
    {
        // Load data in SSP1BUF
        SSP1BUF = *(_data+i);
        // Wait for ack
        while (SSP1CON2bits.ACKSTAT);
        // Wait for MSSP interrupt
        while (!PIR1bits.SSPIF);
        // Clear flag
        PIR1bits.SSPIF = 0;
    }

    // Generate a STOP condition
    SSP1CON2bits.PEN = 1;
    // Wait for STOP to be completed
    while(!PIR1bits.SSPIF);
    // Clear flag
    PIR1bits.SSPIF = 0;
}
Thanx a lot again for your help !
Best regards.
Eric

Raspberry Pi to AVR ATtiny26 via SPI - three-wire mode, garbled output

I have a Raspberry Pi 3 connected via SPI to an AVR ATtiny26, which in turn has an LCD connected to it. I am trying to get the SPI link running.
Now, the issue is that when I set up the AVR for two-wire mode and don't configure the pull-up on PB1 (the MISO line commented out):
USICR = (1<<USIOIE)|(1<<USIWM1)|(1<<USICS1); // Enable USI interrupt - USIOIE=1
                                             // Three wire mode: USIWM1=0, USIWM0=1
                                             // Two wire mode:   USIWM1=1, USIWM0=0
                                             // External clock:  USICS1=1
//PORTB |= (1<<SPI_MISO); // Enable pull-ups on SPI port

DDRB = 0b01001010; /* Set PORTB bits: 7-4 as input
                      -- PB7 - Pushbutton (KEY1)/RESET
                      -- PB6 - Pushbutton (KEY2)/INT0
                      -- PB5 - ADC8 (T2)
                      -- PB4 - ADC7 (T1)
                      -- PB3 - PUMP
                      -- PB2 - SCK - 0 = external clock (input)
                      -- PB1 - MISO (output)
                      -- PB0 - MOSI (input) */

ISR(USI_OVF_vect) {
    disp[counter++] = USIDR;
    if (counter == 16)
        counter = 0;
    USISR |= (1<<USIOIF);
}
I get the string transferred and printed on the LCD.
However, when I change the AVR to work in three-wire mode and/or enable the PB1 pull-up, all I get is garbage: neither do the received characters match the ones sent, nor does their count.
The Raspberry Pi is the master here, providing all the clocking; its setup is always the same (default, three-wire mode) and the clock is reasonably slow.
int main(int argc, char **argv) {
    int res = bcm2835_init();
    printf("BCM2835 Init() = %d\n", res);
    res = bcm2835_spi_begin();
    printf("BCM2835 Begin() = %d\n", res);
    bcm2835_spi_setClockDivider(BCM2835_SPI_CLOCK_DIVIDER_65536);

    char data[16];
    sprintf(data, "%s", "<--Some data-->");
    int len = strlen(data);
    printf("Sent: %s\n", data);
    bcm2835_spi_writenb(data, len);
    exit(0);
}
I get the same results with the spidev_test program using ioctl, so this does not seem to be related to the library or the Pi-side program.
On top of that, what puzzles me is that when I disconnect the wire from PB1 (MISO), I immediately start receiving garbage from the Pi, as if the Pi's SPI immediately starts clocking when PB1/MISO floats.
What am I missing here?
Regretfully, I have to say that this one goes into the RTFM category.
After some research I found that the Pi's GPIO works at +3.3 V, while the ATtiny was set up to use +5 V. After rewiring the AVR to run from 3.3 V as well, everything seems to be working.
The reason it worked in two-wire mode is the absence of the AVR's pull-up resistors (two-wire mode requires external ones), which allowed the Pi to use its own pull-ups and drive the AVR pins in a range acceptable to both the Pi and the AVR. Enabling the pull-ups on the AVR would drive the Pi's GPIO over its limit. Apparently no damage was done, only unpredictable and hard-to-explain behaviour.
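For reference, the three-wire setup the question contrasts against differs only in the wire-mode bits; a minimal sketch with the same pin mapping as above (in three-wire mode DO on PB1 is driven push-pull, so no pull-up is involved):
// Three-wire (SPI-like) mode: USIWM1=0, USIWM0=1, external clock (USICS1=1)
USICR = (1<<USIOIE) | (1<<USIWM0) | (1<<USICS1);
DDRB |= (1<<PB1);  // DO (MISO) as push-pull output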

Why is this DMA I2C transfer locking up execution?

I'm trying to extend a library for the STM32F407 to use DMA transfers for I2C. I'm using it to drive an OLED screen. In its original form it works without problems. In the comments, somebody added DMA but also ported the library to the STM32F10, and I'm trying to port it back to the F407.
My problem is that after enabling the DMA transfer, the debugger stops working (at exactly that line): the debugger activity LED turns off and the debugger stays at the next statement.
After some more testing (blinking an LED at certain events to see if they happen) I found out that the code actually continues up to a certain point, specifically the next time a DMA transfer is needed, in the second call to update the screen. After that, the program doesn't continue (an LED set after that statement never turns on).
The weird thing is, I know the transfer is working because the screen gets a few characters written on it. That only happens when I don't debug step by step, because the CPU writes new data into the screen buffer in the meantime and changes its content before it is entirely sent to the screen by DMA (I will figure out how to fix that later, probably with a double buffer, but it shouldn't interfere with the DMA transfer anyway). However, if I debug step by step, the DMA finishes before the CPU writes new content into the screen buffer, and the screen is black (as it should be, since the buffer is cleared first). For testing, I removed the first DMA call (after the clearing of the buffer) and let the program write the intended text into the buffer. It displays without any anomalies, so the DMA must have finished, but something happened afterwards. I simply can't explain why the debugger stops working once the DMA finishes the transfer.
I tried blinking an LED in the DMA transfer-complete interrupt handler, but it never blinks, which means the interrupt never fires. I would appreciate any help, as I'm at a loss (I've been debugging this for a few days now).
Thank you!
Here is the relevant part of the code (I have omitted the rest because there is a lot of it, but I can post it if required). The code works without DMA (with ordinary I2C transfers); it only breaks with DMA.
// TM_STM32F4_I2C.h
typedef struct DMA_Data
{
    DMA_Stream_TypeDef* DMAy_Streamx;
    uint32_t feif;
    uint32_t dmeif;
    uint32_t teif;
    uint32_t htif;
    uint32_t tcif;
} DMA_Data;
//...
// TM_STM32F4_I2C.c
void TM_I2C_Init(I2C_TypeDef* I2Cx, uint32_t clockSpeed) {
    I2C_InitTypeDef I2C_InitStruct;

    /* Enable clock */
    RCC->APB1ENR |= RCC_APB1ENR_I2C3EN;

    /* Enable pins */
    TM_GPIO_InitAlternate(GPIOA, GPIO_PIN_8, TM_GPIO_OType_OD, TM_GPIO_PuPd_UP, TM_GPIO_Speed_Medium, GPIO_AF_I2C3);
    TM_GPIO_InitAlternate(GPIOC, GPIO_PIN_9, TM_GPIO_OType_OD, TM_GPIO_PuPd_UP, TM_GPIO_Speed_Medium, GPIO_AF_I2C3);

    /* Check clock, set the lowest clock your devices support on the same I2C bus */
    if (clockSpeed < TM_I2C_INT_Clocks[2]) {
        TM_I2C_INT_Clocks[2] = clockSpeed;
    }

    /* Set values */
    I2C_InitStruct.I2C_ClockSpeed = TM_I2C_INT_Clocks[2];
    I2C_InitStruct.I2C_AcknowledgedAddress = TM_I2C3_ACKNOWLEDGED_ADDRESS;
    I2C_InitStruct.I2C_Mode = TM_I2C3_MODE;
    I2C_InitStruct.I2C_OwnAddress1 = TM_I2C3_OWN_ADDRESS;
    I2C_InitStruct.I2C_Ack = TM_I2C3_ACK;
    I2C_InitStruct.I2C_DutyCycle = TM_I2C3_DUTY_CYCLE;

    /* Disable I2C first */
    I2Cx->CR1 &= ~I2C_CR1_PE;
    /* Initialize I2C */
    I2C_Init(I2Cx, &I2C_InitStruct);
    /* Enable I2C */
    I2Cx->CR1 |= I2C_CR1_PE;
}
int16_t TM_I2C_WriteMultiDMA(DMA_Data* dmaData, I2C_TypeDef* I2Cx, uint8_t address, uint8_t reg, uint16_t len)
{
    int16_t ok = 0;

    // If DMA is already enabled, wait for it to complete first.
    // The interrupt will disable this after the transmission is complete.
    TM_I2C_Timeout = 10000000;
    // TODO: Is this I2C check ok?
    while (I2C_GetFlagStatus(I2Cx, I2C_FLAG_BUSY) && !I2C_GetFlagStatus(I2Cx, I2C_FLAG_TXE) && DMA_GetCmdStatus(dmaData->DMAy_Streamx) && TM_I2C_Timeout)
    {
        if (--TM_I2C_Timeout == 0)
        {
            return -1;
        }
    }

    // Set the amount of bytes to transfer
    DMA_Cmd(dmaData->DMAy_Streamx, DISABLE); // should already be disabled at this point
    DMA_SetCurrDataCounter(dmaData->DMAy_Streamx, len);
    DMA_ClearFlag(dmaData->DMAy_Streamx, dmaData->feif | dmaData->dmeif | dmaData->teif | dmaData->htif | dmaData->tcif); // Clear DMA flags
    DMA_Cmd(dmaData->DMAy_Streamx, ENABLE); // enable DMA

    // Send I2C start
    ok = TM_I2C_Start(I2Cx, address, I2C_TRANSMITTER_MODE, I2C_ACK_DISABLE);
    // Send register to write to
    TM_I2C_WriteData(I2Cx, reg);
    // Start DMA transmission, the interrupt will handle transmit complete.
    I2C_DMACmd(I2Cx, ENABLE);

    return ok;
}
//...
// TM_STM32F4_SSD1306.h
#define SSD1306_I2C I2C3
#define SSD1306_I2Cx 3
#define SSD1306_DMA_STREAM DMA1_Stream4
#define SSD1306_DMA_FEIF DMA_FLAG_FEIF4
#define SSD1306_DMA_DMEIF DMA_FLAG_DMEIF4
#define SSD1306_DMA_TEIF DMA_FLAG_TEIF4
#define SSD1306_DMA_HTIF DMA_FLAG_HTIF4
#define SSD1306_DMA_TCIF DMA_FLAG_TCIF4
static DMA_Data ssd1306_dma_data = { SSD1306_DMA_STREAM, SSD1306_DMA_FEIF, SSD1306_DMA_DMEIF, SSD1306_DMA_TEIF, SSD1306_DMA_HTIF, SSD1306_DMA_TCIF };
#define SSD1306_I2C_ADDR 0x78
//...
// TM_STM32F4_SSD1306.c
void TM_SSD1306_initDMA(void)
{
    DMA_InitTypeDef DMA_InitStructure;
    NVIC_InitTypeDef NVIC_InitStructure;

    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_DMA1, ENABLE);
    DMA_DeInit(DMA1_Stream4);
    DMA_Cmd(DMA1_Stream4, DISABLE);

    // Configure DMA controller channel 3, I2C TX channel.
    DMA_StructInit(&DMA_InitStructure); // Load defaults
    DMA_InitStructure.DMA_Channel = DMA_Channel_3;
    DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)(&(I2C3->DR)); // I2C3 data register address
    DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)SSD1306_Buffer;   // Display buffer address
    DMA_InitStructure.DMA_DIR = DMA_DIR_MemoryToPeripheral;             // DMA from mem to periph
    DMA_InitStructure.DMA_BufferSize = 1024;                            // Is set later in transmit function
    DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;    // Do not increment peripheral address
    DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;             // Do increment memory address
    DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_Byte;
    DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_Byte;
    DMA_InitStructure.DMA_Mode = DMA_Mode_Normal;                       // DMA one shot, no circular.
    DMA_InitStructure.DMA_Priority = DMA_Priority_Medium;               // Tweak if interfering with other DMA actions
    DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
    DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_HalfFull;
    DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
    DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
    DMA_Init(DMA1_Stream4, &DMA_InitStructure);

    DMA_ITConfig(DMA1_Stream4, DMA_IT_TC, ENABLE); // Enable transfer complete interrupt
    DMA_ClearITPendingBit(DMA1_Stream4, DMA_IT_TC);

    // Set interrupt controller for DMA
    NVIC_InitStructure.NVIC_IRQChannel = DMA1_Stream4_IRQn; // I2C3 TX connects to stream 4 of DMA1
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0x05;
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0x05;
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);

    // Set interrupt controller for I2C
    NVIC_InitStructure.NVIC_IRQChannel = I2C3_EV_IRQn;
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 1;
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);

    I2C_ITConfig(I2C3, I2C_IT_BTF, ENABLE);
}
extern void DMA1_Channel3_IRQHandler(void)
{
    // I2C3 DMA transmit completed
    if (DMA_GetITStatus(DMA1_Stream4, DMA_IT_TC) != RESET)
    {
        // Stop DMA, clear interrupt
        DMA_Cmd(DMA1_Stream4, DISABLE);
        DMA_ClearITPendingBit(DMA1_Stream4, DMA_IT_TC);
        I2C_DMACmd(SSD1306_I2C, DISABLE);
    }
}

// Sending the stop condition to I2C in a separate handler is necessary
// because the DMA can finish before the I2C peripheral finishes
// transmitting, and the last byte would not be sent
extern void I2C3_EV_IRQHandler(void)
{
    if (I2C_GetITStatus(I2C3, I2C_IT_BTF) != RESET)
    {
        TM_I2C_Stop(SSD1306_I2C); // send I2C stop
        I2C_ClearITPendingBit(I2C3, I2C_IT_BTF);
    }
}
// ...
void TM_SSD1306_UpdateScreen(void) {
    TM_I2C_WriteMultiDMA(&ssd1306_dma_data, SSD1306_I2C, SSD1306_I2C_ADDR, 0x40, 1024); // Use DMA
}
Edit: I noticed the wrong condition check when initialising a new transfer, but fixing it doesn't fix the main problem:
while ((I2C_GetFlagStatus(I2Cx, I2C_FLAG_BUSY) || !I2C_GetFlagStatus(I2Cx, I2C_FLAG_TXE) || DMA_GetCmdStatus(dmaData->DMAy_Streamx)) && TM_I2C_Timeout)

Configuring I2C and I2S in ALSA ASoC

I am working with a BeagleBoard running Linux 3.0.63, and I am trying to get the I2C and I2S interfaces to work, with the end goal of playing a .wav file on the BeagleBoard with I2C and I2S set up correctly.
I am currently stuck on setting the BeagleBoard to be the master clock for the I2S line (the slave side could also work). In any case, I have no idea where the I2S settings are made in the kernel code. I assumed arch/arm/mach-omap3/board-omap3beagle.c, but I cannot find them there.
By the way, is there hidden documentation on how to do this that I do not know about?
Have a look at the files sound/soc/omap/omap3beagle.c and include/sound/soc-dai.h.
The first one has this function:
static int omap3beagle_hw_params(struct snd_pcm_substream *substream,
                                 struct snd_pcm_hw_params *params)
{
    /* couple of lines */
    switch (params_channels(params)) {
    case 2: /* Stereo I2S mode */
        fmt = SND_SOC_DAIFMT_I2S |
              SND_SOC_DAIFMT_NB_NF |
              SND_SOC_DAIFMT_CBM_CFM;
        break;
    case 4: /* Four channel TDM mode */
        fmt = SND_SOC_DAIFMT_DSP_A |
              SND_SOC_DAIFMT_IB_NF |
              SND_SOC_DAIFMT_CBM_CFM;
        break;
    default:
        return -EINVAL;
    }
    /* some stuff */
}
And the second one has these macro definitions:
/*
* DAI hardware clock masters.
*
* This is wrt the codec, the inverse is true for the interface
* i.e. if the codec is clk and FRM master then the interface is
* clk and frame slave.
*/
#define SND_SOC_DAIFMT_CBM_CFM (1 << 12) /* codec clk & FRM master */
#define SND_SOC_DAIFMT_CBS_CFM (2 << 12) /* codec clk slave & FRM master */
#define SND_SOC_DAIFMT_CBM_CFS (3 << 12) /* codec clk master & frame slave */
#define SND_SOC_DAIFMT_CBS_CFS (4 << 12) /* codec clk & FRM slave */
Using these you can adjust the I2S clocking for "Stereo I2S mode" as you need.
There are a lot of other options, but I guess these are exactly the ones you need.
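For example, to make the BeagleBoard side the bit-clock and frame master, the codec must be configured as clock and frame slave; a sketch of the stereo case in omap3beagle_hw_params under that assumption:
case 2: /* Stereo I2S mode, interface (BeagleBoard) as clock & frame master */
    fmt = SND_SOC_DAIFMT_I2S |
          SND_SOC_DAIFMT_NB_NF |
          SND_SOC_DAIFMT_CBS_CFS; /* codec clk & FRM slave */
    break;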
Some documentation can be found at Documentation/sound/alsa/soc.
