Transmission using SPI in AVR

When will the SPIF bit in SPSR be reset after transmission of data?
Suppose:
void SPITransmit(uint8_t data)
{
    SPDR = data;
    while (!(SPSR & (1 << SPIF)));
}
After transmission SPIF is set; how do I reset this bit for reception?

With SPI, you don't get to choose whether you are sending or receiving: you do both at the same time. So there is no need to "reset SPIF for reception". I believe the received data is available in the SPDR register after your loop terminates, but you should read the datasheet for your particular AVR to make sure. (On the ATmega devices, SPIF is cleared by hardware when you first read SPSR with SPIF set and then access SPDR, which is exactly what the loop and the read below do.)
Here is a function you could use to transmit and receive at the same time:
uint8_t SPITransmit(uint8_t data)
{
    SPDR = data;
    while (!(SPSR & (1 << SPIF)));
    return SPDR;
}
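A short usage sketch, assuming the slave-select line is on PB4 and the SPI module has already been initialised as master (the pin choice and the 0x05 command byte are illustrative assumptions, not from the question):

uint8_t readStatusRegister(void)
{
    uint8_t status;
    PORTB &= ~(1 << PB4);          // pull SS low to select the slave
    SPITransmit(0x05);             // clock out a (hypothetical) "read status" command
    status = SPITransmit(0x00);    // clock out a dummy byte and keep the reply
    PORTB |= (1 << PB4);           // release the slave
    return status;
}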

Related

SPI implementation stuck on “while(!spi_is_tx_empty(WINC1500_SPI));”

I'm currently implementing a driver for the WINC1500 to be used with an ATMEGA32 MCU, and it gets stuck on the line "while(!spi_is_tx_empty(WINC1500_SPI));". The code builds and runs, but the flag checked inside this function never clears, so execution cannot proceed and boot up the Wi-Fi module. I've been stuck on this problem for weeks now with no progress and don't know how to clear it.
static inline bool spi_is_tx_empty(volatile avr32_spi_t *spi)
{
    // 1 = All transmissions complete
    // 0 = Transmissions not complete
    return (spi->sr & AVR32_SPI_SR_TXEMPTY_MASK) != 0;
}
Here is my implementation of the SPI Tx/Rx function
void m2mStub_SpiTxRx(uint8_t *p_txBuf,
                     uint16_t txLen,
                     uint8_t *p_rxBuf,
                     uint16_t rxLen)
{
    uint16_t byteCount;
    uint16_t i;
    uint16_t data;

    // Calculate the number of clock cycles necessary, this implies a full-duplex SPI.
    byteCount = (txLen >= rxLen) ? txLen : rxLen;

    // Read / Transmit.
    for (i = 0; i < byteCount; ++i)
    {
        // Wait for transmitter to be ready.
        while (!spi_is_tx_ready(WINC1500_SPI));

        // Transmit.
        if (txLen > 0)
        {
            // Send data from the transmit buffer.
            spi_put(WINC1500_SPI, *p_txBuf++);
            --txLen;
        }
        else
        {
            // No more Tx data to send, just send something to keep the clock active.
            // Here we clock out a don't-care byte.
            spi_put(WINC1500_SPI, 0x00U);
            // Not reading it back, not being cleared 16/1/2020
        }

        // Reference http://asf.atmel.com/docs/latest/avr32.components.memory.sdmmc.spi.example.evk1101/html/avr32_drivers_spi_quick_start.html
        // Wait for transfer to finish, stuck on here.
        // Need to clear the buffer for it to be able to continue.
        while (!spi_is_tx_empty(WINC1500_SPI));

        // Wait for transmitter to be ready again.
        while (!spi_is_tx_ready(WINC1500_SPI));

        // Send dummy data to the slave, so we can read something from it.
        spi_put(WINC1500_SPI, 0x00U); // Change dummy data from 0x00U to 0xFF idea

        // Wait for a complete transmission.
        while (!spi_is_tx_empty(WINC1500_SPI));

        // Read or throw away data from the slave as required.
        if (rxLen > 0)
        {
            *p_rxBuf++ = spi_get(WINC1500_SPI);
            --rxLen;
        }
        else
        {
            spi_get(WINC1500_SPI);
        }
    }
}
Debug output log
Disable SPI
Init SPI module as master
Configure SPI and Clock settings
spi_enable(WINC1500_SPI)
InitStateMachine()
INIT_START_STATE
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
m2mStub_PinSet_CE
m2mStub_PinSet_RESET
m2mStub_GetOneMsTimer();
SetChipHardwareResetState (CHIP_HARDWARE_RESET_FIRST_DELAY_1MS)
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
if(m2m_get_elapsed_time(startTime) >= 2)
m2mStub_PinSet_CE(M2M_WIFI_PIN_HIGH)
startTime = m2mStub_GetOneMsTimer();
SetChipHardwareResetState(CHIP_HARDWARE_RESET_SECOND_DELAY_5_MS);
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
m2m_get_elapsed_time(startTime) >= 6
m2mStub_PinSet_RESET(M2M_WIFI_PIN_HIGH)
startTime = m2mStub_GetOneMsTimer();
SetChipHardwareResetState(CHIP_HARDWARE_RESET_FINAL_DELAY);
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
m2m_get_elapsed_time(startTime) >= 10
SetChipHardwareResetState(CHIP_HARDWARE_RESET_COMPLETE)
retVal = true // State machine has completed successfully
g_scanInProgress = false
nm_spi_init();
reg = spi_read_reg(NMI_SPI_PROTOCOL_CONFIG)
Wait for a complete transmission
Wait for transmitter to be ready
SPI_PUT(WINC1500_SPI, *p_txBuf++);
--txLen;
Wait for transfer to finish, stuck on here
Wait for transfer to finish, stuck on here
The ATmega32 is an 8-bit AVR, but you seem to be using code for the AVR32, a family of 32-bit AVRs. You are probably simply using the wrong code: consult the datasheet of the ATmega32 and look at its SPI section, or search for SPI examples for the AVR ATmega family.
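For reference, a minimal full-duplex byte-exchange helper for the ATmega32's hardware SPI, plus a hypothetical replacement for m2mStub_SpiTxRx built on top of it (register names are from the ATmega32 datasheet; master-mode initialisation and chip-select handling are assumed to happen elsewhere):

#include <avr/io.h>

// Exchange one byte on the ATmega32 hardware SPI (master mode assumed).
static uint8_t spi_transfer(uint8_t data)
{
    SPDR = data;                     // start the transfer
    while (!(SPSR & (1 << SPIF)));   // wait until the byte has been clocked out
    return SPDR;                     // received byte; reading SPDR also clears SPIF
}

// Hypothetical drop-in for m2mStub_SpiTxRx using the helper above.
void m2mStub_SpiTxRx(uint8_t *p_txBuf, uint16_t txLen,
                     uint8_t *p_rxBuf, uint16_t rxLen)
{
    uint16_t byteCount = (txLen >= rxLen) ? txLen : rxLen;
    for (uint16_t i = 0; i < byteCount; ++i)
    {
        // Clock out real data while it lasts, then dummy bytes to keep the clock running.
        uint8_t rx = spi_transfer(txLen ? *p_txBuf++ : 0x00U);
        if (txLen) --txLen;
        // Keep the received byte only while the caller still wants data.
        if (rxLen) { *p_rxBuf++ = rx; --rxLen; }
    }
}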

UART program for PIC18f65k40

I'm trying to program my MCU in mikroC, but I still do not get any output.
Is there any difference between the PIC18F65K40 and the PIC18F65K22 in terms of initiating a UART transmission? Is there any need to initialise or disable any special registers on the PIC18F65K40?
The mikroC software comes with a UART library, so I just copied the program from the mikroE website and added my transmitter and receiver pins (RX4PPS = 0x11; and TX4PPS = 0x10;) to the program, configuring PORTC as output, but my circuit does not work.
char i;

void main()
{
    TRISC = 0b00000000;           // making the port an output
    RX4PPS = 0x11;
    TX4PPS = 0x10;
    UART4_Init(9600);             // Initialize USART module
                                  // (8 bit, 9600 baud rate, no parity bit...)
    delay_ms(500);

    UART4_Write_Text("Hello world!");
    UART4_Write(13);              // Start a new line
    UART4_Write(10);
    UART4_Write_Text("PIC18F65K40 UART example");
    UART4_Write(13);              // Start a new line
    UART4_Write(10);

    while (1) {
        if (UART4_Data_Ready()) { // If data has been received
            i = UART4_Read();     // read it
            UART4_Write(i);       // and send it back
        }
    }
}
The PIC18F65K22 has only 2 UARTs and you are working with UART4, so I guess the library doesn't support this controller. It is better to write the code on your own; there are some examples in the datasheet.

TWI on ATmega2560 waits in an infinite loop

I get some odd errors when I try to initialise the connection from the TWI master to the bus. The start condition is sent, but the processor waits in the infinite loop before starting to send the slave address to the bus.
I have also analysed the signals on the bus: the clock is running, but no data is sent on the bus.
The processor waits at the line with the marked arrow.
We use the following code to start and initialise the bus:
void i2c_master_init() {
    TWBR = (uint8_t)TWBR_val;
}

void i2c_master_stop() {
    TWCR = (1<<TWINT) | (1<<TWEN) | (1<<TWSTO);
}

uint8_t i2c_master_start(uint8_t address) {
    TWCR = 0;
    TWCR |= (1<<TWSTA);
    TWCR |= (1<<TWEN);
    TWCR |= (1<<TWINT);
    while( !(TWCR & (1<<TWINT)) );   // <--
    [...]
}
Currently I don't know what's going wrong with the code, or whether I'm doing something else wrong. Can anyone help me?
Thank you in advance.
My best guess, without hardware on my bench, is that you should write all flags to the TWI control register at once: TWCR = (1<<TWINT)|(1<<TWSTA)|(1<<TWEN). At the moment you set them one by one in three separate operations (multiple clock cycles), while the datasheet implicitly says the flags must be set together; see also the datasheet examples.
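A minimal sketch of a start routine with the flags written in a single store, following the TWI example in the ATmega2560 datasheet (TWBR_val and the rest of the driver are assumed to exist as in the question):

#include <avr/io.h>
#include <util/twi.h>   // TW_START status code

uint8_t i2c_master_start(uint8_t address) {
    // Send START: TWINT, TWSTA and TWEN written in one operation.
    TWCR = (1<<TWINT) | (1<<TWSTA) | (1<<TWEN);
    while (!(TWCR & (1<<TWINT)));              // wait until START has been transmitted
    if ((TWSR & 0xF8) != TW_START) return 1;   // check the status register (prescaler bits masked)

    // Load the slave address (including the R/W bit) and clock it out.
    TWDR = address;
    TWCR = (1<<TWINT) | (1<<TWEN);
    while (!(TWCR & (1<<TWINT)));              // wait until the address has been transmitted
    return 0;
}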

Why is this DMA I2C transfer locking up execution?

I'm trying to change a library for the STM32F407 to include DMA transfers when using I2C. I'm using it to drive an OLED screen. In its original form it works without problems. In the comments, somebody added DMA but also ported it to the STM32F10x, and I'm trying to port it back to the F407.
My problem is that after enabling the DMA transfer, the debugger stops working (at exactly that line): the debugger activity LED turns off and the debugger stays at the next statement.
After some more testing (blinking an LED at certain events to see if they happen), I found out that the code actually continues to a certain point, specifically the next time a DMA transfer is needed, in the second call to update the screen. After that the program doesn't continue (the LED doesn't turn on if it is set on after that statement).
The weird thing is, I know the transfer is working because the screen gets a few characters written on it. That only happens if I don't debug step by step, because the CPU writes new data to the screen buffer in the meantime and changes its content before it is entirely sent to the screen by DMA (I will figure out how to fix that later, probably with a double buffer, but it shouldn't interfere with the DMA transfer anyway). However, if I debug step by step, the DMA finishes before the CPU writes new content to the screen buffer and the screen stays black (as it should, since the buffer is cleared first). For testing, I removed the first call to DMA (after the clearing of the buffer) and let the program write the intended text into the buffer. It displays without any anomalies, so the DMA must have finished, but something happened afterwards. I simply can't explain why the debugger stops working if the DMA finishes the transfer.
I tried blinking an LED in the transfer-complete interrupt handler of the DMA, but it never blinks, which means the interrupt is never fired. I would appreciate any help, as I'm at a loss (I've been debugging this for a few days now).
Thank you!
Here is the relevant part of the code (I have omitted the rest because there is a lot of it, but I can post it if required). The code works without DMA (with ordinary I2C transfers); it only breaks with DMA.
// TM_STM32F4_I2C.h
typedef struct DMA_Data
{
    DMA_Stream_TypeDef* DMAy_Streamx;
    uint32_t feif;
    uint32_t dmeif;
    uint32_t teif;
    uint32_t htif;
    uint32_t tcif;
} DMA_Data;
//...
// TM_STM32F4_I2C.c
void TM_I2C_Init(I2C_TypeDef* I2Cx, uint32_t clockSpeed) {
    I2C_InitTypeDef I2C_InitStruct;

    /* Enable clock */
    RCC->APB1ENR |= RCC_APB1ENR_I2C3EN;

    /* Enable pins */
    TM_GPIO_InitAlternate(GPIOA, GPIO_PIN_8, TM_GPIO_OType_OD, TM_GPIO_PuPd_UP, TM_GPIO_Speed_Medium, GPIO_AF_I2C3);
    TM_GPIO_InitAlternate(GPIOC, GPIO_PIN_9, TM_GPIO_OType_OD, TM_GPIO_PuPd_UP, TM_GPIO_Speed_Medium, GPIO_AF_I2C3);

    /* Check clock, set the lowest clock your devices support on the same I2C bus */
    if (clockSpeed < TM_I2C_INT_Clocks[2]) {
        TM_I2C_INT_Clocks[2] = clockSpeed;
    }

    /* Set values */
    I2C_InitStruct.I2C_ClockSpeed = TM_I2C_INT_Clocks[2];
    I2C_InitStruct.I2C_AcknowledgedAddress = TM_I2C3_ACKNOWLEDGED_ADDRESS;
    I2C_InitStruct.I2C_Mode = TM_I2C3_MODE;
    I2C_InitStruct.I2C_OwnAddress1 = TM_I2C3_OWN_ADDRESS;
    I2C_InitStruct.I2C_Ack = TM_I2C3_ACK;
    I2C_InitStruct.I2C_DutyCycle = TM_I2C3_DUTY_CYCLE;

    /* Disable I2C first */
    I2Cx->CR1 &= ~I2C_CR1_PE;

    /* Initialize I2C */
    I2C_Init(I2Cx, &I2C_InitStruct);

    /* Enable I2C */
    I2Cx->CR1 |= I2C_CR1_PE;
}
int16_t TM_I2C_WriteMultiDMA(DMA_Data* dmaData, I2C_TypeDef* I2Cx, uint8_t address, uint8_t reg, uint16_t len)
{
    int16_t ok = 0;

    // If DMA is already enabled, wait for it to complete first.
    // Interrupt will disable this after transmission is complete.
    TM_I2C_Timeout = 10000000;
    // TODO: Is this I2C check ok?
    while (I2C_GetFlagStatus(I2Cx, I2C_FLAG_BUSY) && !I2C_GetFlagStatus(I2Cx, I2C_FLAG_TXE) && DMA_GetCmdStatus(dmaData->DMAy_Streamx) && TM_I2C_Timeout)
    {
        if (--TM_I2C_Timeout == 0)
        {
            return -1;
        }
    }

    // Set amount of bytes to transfer
    DMA_Cmd(dmaData->DMAy_Streamx, DISABLE); // should already be disabled at this point
    DMA_SetCurrDataCounter(dmaData->DMAy_Streamx, len);
    DMA_ClearFlag(dmaData->DMAy_Streamx, dmaData->feif | dmaData->dmeif | dmaData->teif | dmaData->htif | dmaData->tcif); // Clear DMA flags
    DMA_Cmd(dmaData->DMAy_Streamx, ENABLE); // enable DMA

    // Send I2C start
    ok = TM_I2C_Start(I2Cx, address, I2C_TRANSMITTER_MODE, I2C_ACK_DISABLE);

    // Send register to write to
    TM_I2C_WriteData(I2Cx, reg);

    // Start DMA transmission, interrupt will handle transmit complete.
    I2C_DMACmd(I2Cx, ENABLE);

    return ok;
}
//...
// TM_STM32F4_SSD1306.h
#define SSD1306_I2C I2C3
#define SSD1306_I2Cx 3
#define SSD1306_DMA_STREAM DMA1_Stream4
#define SSD1306_DMA_FEIF DMA_FLAG_FEIF4
#define SSD1306_DMA_DMEIF DMA_FLAG_DMEIF4
#define SSD1306_DMA_TEIF DMA_FLAG_TEIF4
#define SSD1306_DMA_HTIF DMA_FLAG_HTIF4
#define SSD1306_DMA_TCIF DMA_FLAG_TCIF4
static DMA_Data ssd1306_dma_data = { SSD1306_DMA_STREAM, SSD1306_DMA_FEIF, SSD1306_DMA_DMEIF, SSD1306_DMA_TEIF, SSD1306_DMA_HTIF, SSD1306_DMA_TCIF };
#define SSD1306_I2C_ADDR 0x78
//...
// TM_STM32F4_SSD1306.c
void TM_SSD1306_initDMA(void)
{
    DMA_InitTypeDef DMA_InitStructure;
    NVIC_InitTypeDef NVIC_InitStructure;

    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_DMA1, ENABLE);
    DMA_DeInit(DMA1_Stream4);
    DMA_Cmd(DMA1_Stream4, DISABLE);

    // Configure DMA controller channel 3, I2C TX channel.
    DMA_StructInit(&DMA_InitStructure); // Load defaults
    DMA_InitStructure.DMA_Channel = DMA_Channel_3;
    DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)(&(I2C3->DR)); // I2C3 data register address
    DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)SSD1306_Buffer; // Display buffer address
    DMA_InitStructure.DMA_DIR = DMA_DIR_MemoryToPeripheral; // DMA from mem to periph
    DMA_InitStructure.DMA_BufferSize = 1024; // Is set later in transmit function
    DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable; // Do not increment peripheral address
    DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable; // Do increment memory address
    DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_Byte;
    DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_Byte;
    DMA_InitStructure.DMA_Mode = DMA_Mode_Normal; // DMA one shot, no circular.
    DMA_InitStructure.DMA_Priority = DMA_Priority_Medium; // Tweak if interfering with other DMA actions
    DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
    DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_HalfFull;
    DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
    DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
    DMA_Init(DMA1_Stream4, &DMA_InitStructure);

    DMA_ITConfig(DMA1_Stream4, DMA_IT_TC, ENABLE); // Enable transmit complete interrupt
    DMA_ClearITPendingBit(DMA1_Stream4, DMA_IT_TC);

    // Set interrupt controller for DMA
    NVIC_InitStructure.NVIC_IRQChannel = DMA1_Stream4_IRQn; // I2C3 TX connects to stream 4 of DMA1
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0x05;
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0x05;
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);

    // Set interrupt controller for I2C
    NVIC_InitStructure.NVIC_IRQChannel = I2C3_EV_IRQn;
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 1;
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);

    I2C_ITConfig(I2C3, I2C_IT_BTF, ENABLE);
}
extern void DMA1_Channel3_IRQHandler(void)
{
    // I2C3 DMA transmit completed
    if (DMA_GetITStatus(DMA1_Stream4, DMA_IT_TC) != RESET)
    {
        // Stop DMA, clear interrupt
        DMA_Cmd(DMA1_Stream4, DISABLE);
        DMA_ClearITPendingBit(DMA1_Stream4, DMA_IT_TC);
        I2C_DMACmd(SSD1306_I2C, DISABLE);
    }
}

// Sending the stop condition to I2C in a separate handler is necessary
// because DMA can finish before I2C finishes
// transmitting and the last byte is not sent
extern void I2C3_EV_IRQHandler(void)
{
    if (I2C_GetITStatus(I2C3, I2C_IT_BTF) != RESET)
    {
        TM_I2C_Stop(SSD1306_I2C); // send I2C stop
        I2C_ClearITPendingBit(I2C3, I2C_IT_BTF);
    }
}

// ...

void TM_SSD1306_UpdateScreen(void) {
    TM_I2C_WriteMultiDMA(&ssd1306_dma_data, SSD1306_I2C, SSD1306_I2C_ADDR, 0x40, 1024); // Use DMA
}
Edit: I noticed the wrong condition check when initialising a new transfer, but fixing it doesn't solve the main problem:
while ((I2C_GetFlagStatus(I2Cx, I2C_FLAG_BUSY) || !I2C_GetFlagStatus(I2Cx, I2C_FLAG_TXE) || DMA_GetCmdStatus(dmaData->DMAy_Streamx)) && TM_I2C_Timeout)

c: socketCAN connection: read() not fast enough

Hello,
I use the socket() connection for my CAN communication.
fd = socket(PF_CAN, SOCK_RAW, CAN_RAW);
I'm using 2 threads: one periodic 1ms RT thread to send data and one
thread to read the incoming messages. The read function looks like:
void readCan0Socket(void) {
    int receivedBytes = 0;

    do
    {
        // set GPIO pin low
        receivedBytes = read(fd,
                             &receiveCanFrame[recvBufferWritePosition],
                             sizeof(struct can_frame));
        // reset GPIO pin high

        if (receivedBytes != 0)
        {
            if (receivedBytes == sizeof(struct can_frame))
            {
                recvBufferWritePosition++;
                if (recvBufferWritePosition == CAN_MAX_RECEIVE_BUFFER_LENGTH)
                {
                    recvBufferWritePosition = 0;
                }
            }
            receivedBytes = 0;
        }
    } while (1);
}
The socket is configured in blocking mode, so the read function blocks until a message arrives. The current implementation is working, but when I measure the time between reading a message and the next waiting state of the read function (see the set/reset GPIO comments), the time varies between 30 us (the mean value) and more than 200 us. A value greater than 200 us means (CAN has a baud rate of 1000 kBit/s) that frames are not recognized while read() handles the previous message. The read() function must be ready within 134 us.
How can I accelerate my implementation? I tried to use two threads which are separated with mutexes (lock before the read() function and unlock after a message reception), but this didn't solve my problem.
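For completeness, a minimal sketch of how the raw CAN socket used above is typically created and bound before entering the read loop (the interface name "can0" and the omitted error handling are assumptions, not from the post):

#include <string.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int openCan0Socket(void)
{
    struct ifreq ifr;
    struct sockaddr_can addr;

    int fd = socket(PF_CAN, SOCK_RAW, CAN_RAW);   // raw CAN socket, blocking by default
    if (fd < 0)
        return -1;

    strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
    ifr.ifr_name[IFNAMSIZ - 1] = '\0';
    ioctl(fd, SIOCGIFINDEX, &ifr);                // resolve the interface index of can0

    memset(&addr, 0, sizeof(addr));
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    return fd;
}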
