PIC SPI Beginner Problems (XC8 MCC)

I recently got I2C working in no time with some MCC-generated functions, which are well documented in the .h file. SPI, however, gives me nothing and is causing me frustration, not helped by the fact that I'm new to these serial protocols.
I'm simply trying to read a register on an MCP23S17, then write to it, then read it back again to verify it has changed.
I'm not even sure I'm going about this the right way, but I've included my code below with comments. For some reason I seem to need to add some dummy writes to get the first read to work, and even then it only works from the second loop onward.
#include "mcc_generated_files/mcc.h"

uint8_t receiveData;    /* Data that will be received */
uint8_t OpCodeW = 0x40;
uint8_t OpCodeR = 0x41;

void main(void)
{
    SYSTEM_Initialize();
    INTERRUPT_GlobalInterruptEnable();
    INTERRUPT_PeripheralInterruptEnable();

    Reset1_GPIO_SetLow();
    __delay_ms(200);
    Reset1_GPIO_SetHigh();
    __delay_ms(200);
    printf("Initalised \r\n");

    while (1)
    {
        SPI1_Open(SPI1_DEFAULT);
        CS1_GPIO_SetLow();

        // Read IODIRA register 0x00
        SPI1_ExchangeByte(OpCodeR);             // Opcode: address + read
        SPI1_ExchangeByte(0x00);                // ??? -- when I add this in it works on the 2nd loop -- ???
        receiveData = SPI1_ExchangeByte(0x00);  // Returns 0x00 on the 1st loop, then 0xFF after ...
                                                // ... but only with the duplicate byte above ???
        printf("Read IODIRA: 0x%02x \r\n", receiveData);

        // Try writing to the IODIRA register
        // Not sure what SPI1_WriteByte actually does!
        // I thought it might be the same as ExchangeByte but without anything returned
        // No idea!
        SPI1_WriteByte(OpCodeW);                // Opcode: address + write
        SPI1_WriteByte(0x00);                   // Register address IODIRA (Port A)
        SPI1_WriteByte(0xF0);                   // Data to be written

        // Read back the changed IODIRA register - same routine as above
        SPI1_ExchangeByte(OpCodeR);             // Opcode: address + read
        SPI1_ExchangeByte(0x00);                // Register address IODIRA (Port A) ...
                                                // ... but this always prints 0x00
        receiveData = SPI1_ExchangeByte(0x00);  // Dummy byte to clock the data out
        printf("Wrote to IODIRA and read back: 0x%02x \r\n", receiveData);
        printf(" ----- \r\n\n");

        CS1_GPIO_SetHigh();
        SPI1_Close();
        __delay_ms(5000);
    }
}
The actual printed output looks like this:
Initalised
Read IODIRA: 0x00 // Should be 0xFF
Wrote to IODIRA and read back: 0x00 // Should be 0xF0
-----
Read IODIRA: 0xff // This is right now!
Wrote to IODIRA and read back: 0x00 // but this hasn't changed
-----
Read IODIRA: 0xff
Wrote to IODIRA and read back: 0x00
Read IODIRA: 0xff
Wrote to IODIRA and read back: 0x00
My main questions are: am I approaching this the right way?
Why do I need the dummy writes, and why does it only start working on the second loop? Clearly something is wrong.
What's the difference between SPI1_ExchangeByte and SPI1_WriteByte?
Is there some documentation or guide to using these functions that I'm missing?
Any help greatly appreciated.

Turns out the answer was that Chip Select needs to be pulled low at the start of each SPI transaction and raised again at the end, to tell the device when a message has started and completed. I was using it more like an enable (keeping the device's CS low the whole time) since I only wanted to talk to one device, but that was incorrect.


Is using structs with virtual inheritance a bad idea when reading from EEPROM? (Arduino)

I've run into a strange problem: if I load my data structure from EEPROM, it casts incorrectly. But if the program calls both the function responsible for saving the data structure and the function responsible for reading it, the data is cast into my structure successfully.
I've played around with the problem a bit and noticed that the data saved to and read from the EEPROM is always correct if you read it as uint8_t. But for some reason the cast into my data structure (BareKeyboardKey2) fails when the save function is not used in the program and we only read the data from the EEPROM.
The following structure is what I believe causes the problem:
// testStruct.h
struct IKey2
{
    int pin;
};

struct BareKeyboardKey2 : virtual IKey2
{
    int keyCode;

    BareKeyboardKey2() {}
    BareKeyboardKey2(int _pin, int _keyCode)
    {
        pin = _pin;
        keyCode = _keyCode;
    }
};
Example output when I first call SaveStruct and then LoadStruct (the data I save to EEPROM is a BareKeyboardKey2(2, 4)):
Reading: 0x68 0x1 0x4 0x0 0x2 0x0
pin: 2, keycode: 4
As you can see, it casts the BareKeyboardKey2 correctly, but...
Here is an example output of when ONLY LoadStruct is called (and the same data from the previous example is stored on the EEPROM):
Reading: 0x68 0x1 0x4 0x0 0x2 0x0
pin: -18248, keycode: 4
As you can see, the data read from the EEPROM (i.e. 0x68 0x1 0x4 0x0 0x2 0x0) is the same in both examples, but in the latter it fails to properly cast the data into a BareKeyboardKey2 (pin: -18248, keycode: 4).
I have found that if I change the structure to the following, it works regardless of whether I call SaveStruct and LoadStruct consecutively or only LoadStruct:
// testStruct.h
struct IKey2
{
};

struct BareKeyboardKey2 : virtual IKey2
{
    int pin;
    int keyCode;

    BareKeyboardKey2() {}
    BareKeyboardKey2(int _pin, int _keyCode)
    {
        pin = _pin;
        keyCode = _keyCode;
    }
};
I also found that if I move both variables into the IKey2 like this...:
struct IKey2
{
    int pin;
    int keyCode;
};

struct BareKeyboardKey2 : virtual IKey2
{
    BareKeyboardKey2() {}
    BareKeyboardKey2(int _pin, int _keyCode)
    {
        pin = _pin;
        keyCode = _keyCode;
    }
};
... this causes the program to cast both variables incorrectly. Example output: pin: -18248, keycode: -18248
What is causing this behavior and what can I do to make it consistent?
I found a solution to my problem. I'm not 100% sure it's right, but this is my theory...
When you save a non-POD struct to EEPROM as raw bytes, the link between the main struct and its virtual base is not saved in any usable form. The bytes look the same, but if you inspect the loaded object with a debugger (in my case gdb) you will see that the virtual pointer connecting the main struct to its virtual base is invalid, even though the rest of the data originally present in the struct is intact.
So, to solve the problem, I converted my struct into a POD type by removing the virtual inheritance. Instead I used a "has-a" relationship, and so far my data reads correctly from the EEPROM in all of the cases above.

Getting DTR and RTS pin of serial port in C on Windows platform

How can I get the DTR and RTS status of a serial port on the Windows platform? I want to read the current state (ON or OFF) of these two pins.
I can set pins with :
EscapeCommFunction(hSerial,SETRTS);
But I don't know how to read the pin status.
Since it can be done on Linux with the following code, I assume it is technically feasible:
int status=0;
ioctl(fd, TIOCMGET, &status);
return status & TIOCM_RTS;
Using the ntddser.h API (inc\api\ntddser.h) together with winioctl.h, you can access the DTR and RTS status. Call DeviceIoControl with the second parameter set to IOCTL_SERIAL_GET_DTRRTS:
DeviceIoControl(
    handle,                   // handle returned by CreateFile
    IOCTL_SERIAL_GET_DTRRTS,
    NULL,
    0,
    &Status,                  // pointer to a DWORD variable
    sizeof(Status),
    &unused,                  // pointer to a DWORD variable
    pOverlapped               // optional pointer to an OVERLAPPED structure (may be NULL)
);
See the DeviceIoControl documentation for details.
Unless you are actively changing the signal lines, the values set in the DCB are what is in effect.
Otherwise, you control the signal lines yourself, so you can simply remember the state each time you change it.
As long as you have the serial port open, you have full control and nothing else will change them.
(Hardly anybody uses handshake or toggle mode these days.)
SetDefaultCommConfigW function

BOOL SetDefaultCommConfigW(
    LPCWSTR      lpszName,
    LPCOMMCONFIG lpCC,
    DWORD        dwSize
);

SetCommConfig function

BOOL SetCommConfig(
    HANDLE       hCommDev,
    LPCOMMCONFIG lpCC,
    DWORD        dwSize
);

GetCommConfig function

BOOL GetCommConfig(
    HANDLE       hCommDev,
    LPCOMMCONFIG lpCC,
    LPDWORD      lpdwSize
);

COMMCONFIG structure

typedef struct _COMMCONFIG {
    ...
    DCB dcb;
    ...
} COMMCONFIG, *LPCOMMCONFIG;

DCB structure

typedef struct _DCB {
    DWORD DCBlength;
    ...
    DWORD fDtrControl : 2;
    ...
    DWORD fRtsControl : 2;
    ...
} DCB, *LPDCB;

DTR_CONTROL_DISABLE   0x00
DTR_CONTROL_ENABLE    0x01
DTR_CONTROL_HANDSHAKE 0x02

RTS_CONTROL_DISABLE   0x00
RTS_CONTROL_ENABLE    0x01
RTS_CONTROL_HANDSHAKE 0x02
RTS_CONTROL_TOGGLE    0x03
If you still want to do so, use DeviceIoControl() with IOCTL_SERIAL_GET_DTRRTS, as Hans Passant suggested in the comments.
However, there is no guarantee that every driver supports it properly, since it is rarely used.
Device Input and Output Control (IOCTL)
DeviceIoControl function
The following is a sample DeviceIoControl call for a DISK drive, but you can call it by changing each of these parameters to those related to IOCTL_SERIAL_GET_DTRRTS for the serial port.
Calling DeviceIoControl
Serial Device Control Requests
IOCTL_SERIAL_GET_DTRRTS IOCTL

EEPROM in AVR doesn't work

I'm a beginner in the C language. I'm trying to use the EEPROM memory of my ATmega8 and ATtiny2313.
Based on this tutorial, I've created the following two programs:
1) writes a number to address 5 of the uC's EEPROM
#define F_CPU 1000000UL
#include <avr/eeprom.h>

int main(void)
{
    uint8_t number = 5;
    eeprom_update_byte((uint8_t *)5, number);

    while (1)
    {
    }
}
2) blinks the LED n times, where n is the number read from address 5 of the EEPROM
#define F_CPU 1000000UL
#include <avr/io.h>
#include <util/delay.h>
#include <avr/eeprom.h>

int main(void)
{
    DDRB = 0xFF;
    _delay_ms(1000);

    int number;
    number = eeprom_read_byte((uint8_t *)5);

    for (int i = 0; i < number; i++)    // blink 'number' times
    {
        PORTB |= (1 << PB3);
        _delay_ms(100);
        PORTB &= ~(1 << PB3);           // clear only PB3 (was PORTB &= (0<<PB3), which clears all of PORTB)
        _delay_ms(400);
    }

    while (1)
    {
    }
}
The second program blinks the LED many times, but never the number of times that is supposed to be in the EEPROM. What's the problem? This happens on both the ATmega8 and the ATtiny2313.
EDIT:
Console results after compilation of the first program:
18:01:55 **** Incremental Build of configuration Release for project eeprom ****
make all
Invoking: Print Size
avr-size --format=avr --mcu=attiny2313 eeprom.elf
AVR Memory Usage
Device: attiny2313
Program: 102 bytes (5.0% Full)
(.text + .data + .bootloader)
Data: 0 bytes (0.0% Full)
(.data + .bss + .noinit)
Finished building: sizedummy
18:01:56 Build Finished (took 189ms)
That is one of the classic beginner failures :-)
If you compile simply with avr-gcc <source> -o <out> you will get the wrong results here, because you need optimization: the EEPROM write procedure MUST be compiled with optimization to meet its timing requirements. So please use -Os or -O3 when compiling with avr-gcc!
If you have no idea whether your problem comes from reading or writing the EEPROM, read back your EEPROM data with avarice/avrdude or a similar tool.
The next pitfall can be that you erase the EEPROM section when you program the flash. Have a look at what your programmer really does: a full chip erase erases the EEPROM as well.
Next pitfall: which fuses have you set? Are you running at the expected clock rate? Maybe you have programmed the internal clock and your external crystal is running at the wrong speed?
Another one: look at the fuses again! Are the JTAG pins switched off? Maybe you are only seeing JTAG flickering :-)
Please add your compiler and programming commands to the question!

USART problems with ATmega16

I have an ATmega16 and have looped back Rx and Tx (just connected Rx to Tx) to send and receive one char in a loop. But I only seem to receive 0x00 instead of the char I send.
I have the CPU configured to 1 MHz.
My thought is that since Rx and Tx are just looped back, it shouldn't matter what speed I set, since both ends run at the same rate?
So basically, I'm trying to get an LED on PORTC to flash when the correct char is received.
Here is the code:
#ifndef F_CPU
#define F_CPU 10000000
#endif

#define BAUD 9600
#define BAUDRATE ((F_CPU)/(BAUD*16)-1)

#include <avr/io.h>
#include <util/delay.h>

void uart_init(void)
{
    UBRRH = (BAUDRATE>>8);
    UBRRL = BAUDRATE;
    UCSRB = (1<<TXEN) | (1<<RXEN);
    UCSRC = (1<<URSEL) | (1<<UCSZ0) | (1<<UCSZ1);
}

void uart_transmit(unsigned char data)
{
    while (!(UCSRA & (1<<UDRE)));
    UDR = data;
}

unsigned char uart_recive(void)
{
    while (!(UCSRA) & (1<<RXC));
    return UDR;
}

int main(void)
{
    uart_init();
    unsigned char c;
    PORTC = 0xff;
    DDRC = 0xff;

    while (1)
    {
        _delay_ms(200);
        uart_transmit(0x2B);
        c = uart_recive();
        if (c == 0x2B) {
            PORTC = PORTC ^ 0xff;
        }
    }
}
Any thoughts on what I am doing wrong?
The code seems right.
Things you may have to check:
whether your baud rate is the one you should have
whether you are sending the char you think; a 0x2B is a '+', not a 'p'
whether your terminal's port configuration matches your UART configuration
I think the last one is the problem.
You can try this code from the ATmega manual:
/* Set frame format: 8 data bits, 2 stop bits */
UCSRC = (1<<URSEL)|(1<<USBS)|(3<<UCSZ0);
After building your program, go to your port configuration and make sure it is set to an 8-bit data format and 2 stop bits. Then test it on your microcontroller and see what happens. Please come back with the result.
Consider the real baud rate accuracy. See e.g. http://www.wormfood.net/avrbaudcalc.php?postbitrate=9600&postclock=1 — the AVR gives about a 7.5% error at 9600 baud with a 1 MHz clock, which is rather high. What you see depends on what you are sending and receiving: "normally" you would see garbage; if you permanently receive 0x00s it looks like another problem.
Your F_CPU is set to 10 MHz, but you said the chip is configured for 1 MHz.
Also check your fuses to see whether you really activated the crystal.
If you are just using the internal oscillator: it has a relatively large error, so your UART timing may be broken (though I never had problems using the internal oscillator for debugging).
Another source of error may be your F_CPU definition. Often this preprocessor constant is already defined (possibly also wrongly) in the Makefile or in the IDE project settings, in which case the #define in your code has no effect because of the #ifndef guard.
The PORTC pins (TDI, TDO, TMS, TCK) always read high because these are the JTAG pins, and JTAG is enabled by default. If you want to use PORTC in your application, you have to disable JTAG via the JTAGEN fuse: for the ATmega16, set it to 1 (unprogrammed); 0 means programmed. One more thing: if you run at more than 8 MHz you have to set the appropriate fuse bits as well, otherwise your program will give unexpected or wrong results. Thanks.

AVR inline assembly: registers to variables?

I'm currently trying to write some code that checks the value of SRAM at a certain address, and then executes some C code if it matches. This is running on an atmega32u4 AVR chip. Here is what I have so far:
volatile char a = 0;

void setup() {
}

void loop() {
    asm(
        "LDI r16,77\n"      // load value 77 into r16
        "STS 0x0160,r16\n"  // copy r16 into RAM location 0x0160
        "LDS r17,0x0160\n"  // copy value of RAM location 0x0160 into register r17
        // some code to copy r17 into char a?
    );
    if (a == 77) {
        // do something
    }
}
I'm having trouble figuring out the part where I transition from assembly back to C. How do I get the value inside register r17 and put it into a variable in the C code?
I did find this code snippet, however I don't quite understand how that works, or if that is the best way to approach this.
__asm__ __volatile__ (
    " ldi __tmp_reg__, 77" "\n\t"
    " sts 0x0160, __tmp_reg__" "\n\t"
    " lds %0, 0x0160" "\n\t"
    : "=r" (a)
    :
);
See the GCC extended inline assembly documentation for how to pass operands in and out of asm blocks. Unless you have a very specific reason in mind, you should let the compiler take care of the variables for you. Even though you declared a in your code to be volatile, it could very well be bound to any of the 32 registers in the GP register file of the AVR core, which essentially means the variable is never stored in RAM. If you really want to know what your compiler is doing, disassemble the final object file with avr-objdump -S and study it.
