EEPROM in AVR doesn't work - avr

I'm a beginner in C. I'm trying to use the EEPROM memory of my ATmega8 and ATtiny2313.
Based on this tutorial I've created the following programs:
1) writes a number to address 5 in the uC's EEPROM
#define F_CPU 1000000UL
#include <avr/eeprom.h>

int main(void)
{
    uint8_t number = 5;
    eeprom_update_byte((uint8_t *)5, number);  /* write 'number' to EEPROM address 5 */
    while (1)
    {
    }
}
2) blinks the LED n times, where n is the number read from address 5 in EEPROM
#define F_CPU 1000000UL
#include <avr/io.h>
#include <util/delay.h>
#include <avr/eeprom.h>

int main(void)
{
    DDRB = 0xFF;
    _delay_ms(1000);
    int number;
    number = eeprom_read_byte((uint8_t *)5);
    for (int i = 0; i < number; i++)  // blinking 'number' times
    {
        PORTB |= (1 << PB3);
        _delay_ms(100);
        PORTB &= (0 << PB3);
        _delay_ms(400);
    }
    while (1)
    {
    }
}
The second program blinks the LED many times, but never the number of times that is supposed to be stored in the EEPROM. What's the problem? This happens on both the ATmega8 and the ATtiny2313.
EDIT:
Console results after compilation of the first program:
18:01:55 **** Incremental Build of configuration Release for project eeprom ****
make all
Invoking: Print Size
avr-size --format=avr --mcu=attiny2313 eeprom.elf
AVR Memory Usage
Device: attiny2313
Program: 102 bytes (5.0% Full)
(.text + .data + .bootloader)
Data: 0 bytes (0.0% Full)
(.data + .bss + .noinit)
Finished building: sizedummy
18:01:56 Build Finished (took 189ms)

That is one of the classic beginner pitfalls :-)
If you compile simply with avr-gcc <source> -o <out> you will get wrong results here, because you need optimization! The EEPROM write routines MUST be compiled with optimization to perform the write access correctly! So please use '-Os' or '-O3' when compiling with avr-gcc.
If you don't know whether your problem is in reading or writing the EEPROM, dump the EEPROM contents with avarice/avrdude or a similar tool.
The next pitfall: you may be erasing the EEPROM section when you program the flash. Have a look at what your programmer really does! A full chip erase wipes the EEPROM as well (unless the EESAVE fuse is programmed).
Next pitfall: which fuses have you set? Are you running at the expected clock rate? Maybe you have selected the internal oscillator while you expect your external crystal, so everything runs at the wrong speed.
Another one: look at the fuses again! Are the JTAG pins switched off? Maybe you are only seeing the JTAG lines flickering :-)
Please add the compiler and programming commands to your question!
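To separate chip-side write problems from read problems, a minimal write-then-verify test can help. The following is only a sketch built from the same avr-libc calls used above (the LED on PB3 is an assumption taken from the question), and it must be built with -Os as described:

#include <avr/io.h>
#include <avr/eeprom.h>

int main(void)
{
    DDRB = (1 << PB3);                    /* LED pin as output (PB3 assumed, as in the question) */

    eeprom_update_byte((uint8_t *)5, 5);  /* write 5 to EEPROM address 5 */
    eeprom_busy_wait();                   /* wait until the EEPROM write has completed */

    uint8_t readback = eeprom_read_byte((uint8_t *)5);
    if (readback == 5)                    /* LED stays on only if write and read both worked */
        PORTB |= (1 << PB3);

    while (1)
    {
    }
}

If the LED stays off, look at the write/erase side (optimization flags, chip erase, EESAVE); if it stays on but the blink program still misbehaves, look at the clock and fuse settings instead.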

Related

How to apply Unified Memory to existing aligned host memory

I'm involved in an effort to integrate CUDA into some existing software. The software I'm integrating into is pseudo real-time, so it has a memory manager library that manually hands out pointers from a single large memory allocation made up front. CUDA's Unified Memory is attractive to us, since in theory we'd be able to change this large memory chunk to Unified Memory, keep the existing CPU code working, and add GPU kernels with very few changes to the existing data I/O stream.
Parts of our existing CPU processing code require memory to be aligned to a certain boundary. cudaMallocManaged() does not allow me to specify the alignment, and I feel like having to copy between "managed" and strictly-CPU buffers for these CPU sections almost defeats the purpose of UM. Is there a known way to address this issue that I'm missing?
I found this link on Stack Overflow that seems to solve it in theory, but I've been unable to produce good results with this method. Using CUDA 9.1, Tesla M40 (24GB):
#include <stdio.h>
#include <malloc.h>
#include <cuda.h>

#define USE_HOST_REGISTER 1

int main (int argc, char **argv)
{
    int num_float = 10;
    int num_bytes = num_float * sizeof(float);
    float *f_data = NULL;

    #if (USE_HOST_REGISTER > 0)
    printf("%s: Using memalign + cudaHostRegister..\n", argv[0]);
    f_data = (float *) memalign(32, num_bytes);
    cudaHostRegister((void *) f_data, num_bytes, cudaHostRegisterDefault);
    #else
    printf("%s: Using cudaMallocManaged..\n", argv[0]);
    cudaMallocManaged((void **) &f_data, num_bytes);
    #endif

    struct cudaPointerAttributes att;
    cudaPointerGetAttributes(&att, f_data);
    printf("%s: ptr is managed: %i\n", argv[0], att.isManaged);
    fflush(stdout);

    return 0;
}
When using memalign() + cudaHostRegister() (USE_HOST_REGISTER == 1), the last print statement prints 0. Device accesses via kernel launches in larger files unsurprisingly report illegal accesses.
When using cudaMallocManaged() (USE_HOST_REGISTER == 0), the last print statement prints 1 as expected.
edit: cudaHostRegister() and cudaMallocManaged() do return success codes for me. I left the error checking out of the sample I shared, but I did check them during my initial integration work. I've just added the checking code again, and both still return cudaSuccess.
Thanks for your insights and suggestions.
There is no method currently available in CUDA to take an existing host memory allocation and convert it into a managed memory allocation.
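Since the large pool in the question is allocated up front anyway, one direction worth exploring (this is not part of the answer above, and the helper names are purely illustrative) is to let cudaMallocManaged provide the pool itself and have the existing sub-allocator enforce the alignment, instead of trying to convert a host allocation afterwards. In practice the pointer returned by cudaMallocManaged is aligned far more strictly than the 32 bytes required here, so aligning the sub-allocation offsets is enough:

#include <stdio.h>
#include <cuda_runtime.h>

/* Illustrative sketch only: a trivial bump allocator that carves aligned
   sub-blocks out of one managed pool. Names and scheme are assumptions. */
static char  *pool      = NULL;
static size_t pool_size = 0;
static size_t pool_used = 0;

int pool_init(size_t bytes)
{
    pool_size = bytes;
    pool_used = 0;
    return (cudaMallocManaged((void **)&pool, bytes) == cudaSuccess) ? 0 : -1;
}

void *pool_alloc_aligned(size_t bytes, size_t alignment)
{
    size_t start = (pool_used + alignment - 1) / alignment * alignment;  /* round offset up */
    if (start + bytes > pool_size)
        return NULL;
    pool_used = start + bytes;
    return pool + start;   /* managed memory, usable from host code and kernels alike */
}

int main(void)
{
    if (pool_init(1 << 20) != 0)
        return 1;
    float *f_data = (float *)pool_alloc_aligned(10 * sizeof(float), 32);
    printf("aligned managed pointer: %p\n", (void *)f_data);
    cudaFree(pool);
    return 0;
}

Whether this fits depends on how deeply the existing memory manager is tied to its current allocation call, but it avoids the copies between managed and CPU-only buffers that the question wants to eliminate.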

Why is this loop not executed?

I compile with GCC 5.3 (2016q1) for an STM32 microcontroller.
Right at the beginning of main I placed a small routine that fills the stack with a pattern. Later I search for the highest address that still holds this pattern to find out the stack usage; you surely know the technique. Here is my routine:
uint32_t* Stack_ptr = 0;
uint32_t Stack_bot;
uint32_t n = 0;

asm volatile ("str sp, [%0]" :: "r" (&Stack_ptr));
Stack_bot = (uint32_t)(&_estack - &_Min_Stack_Size);

//*
n = 0;
while ((uint32_t)(Stack_ptr) > Stack_bot)
{
    Stack_ptr--;
    n++;
    *Stack_ptr = 0xAA55A55A;
} // */
After that I initialize the hardware, including a UART, and print out the values of Stack_ptr, Stack_bot and n, and then the stack contents.
The results are 0x20007FD8 0x20007C00 0
Stack_bot is the expected value, because I have 0x400 bytes of stack at the top of 32k of RAM starting at 0x20000000. But I would expect Stack_ptr to be 0x20008000 and n somewhat under 0x400 after the loop has finished. Also, the stack contents show no entries of 0xAA55A55A. This means the loop is not executed.
I could only get it to execute by moving the routine into a small separate function and disabling optimization for that function.
Does anybody know why that is? And the strangest thing about it is that I could swear it worked a few days ago; I saw a lot of 0xAA55A55A in the stack dump.
Thanks a lot
Martin
The problem is probably the inline assembly: the str sp, [%0] statement writes Stack_ptr through memory behind the compiler's back, and because it declares no output operand or "memory" clobber, the optimizer may still assume Stack_ptr holds its initial value 0, so the loop condition is never true. In my code I use this:
// defined by linker script, pointing to end of static allocation
extern unsigned _heap_start;

void fill_heap(unsigned fill = 0x55555555)
{
    unsigned *dst = &_heap_start;
    register unsigned *msp_reg;
    __asm__("mrs %0, msp\n" : "=r" (msp_reg));
    while (dst < msp_reg) {
        *dst++ = fill;
    }
}
It will fill the memory between _heap_start and the current SP.
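For completeness, the measurement step described in the question could then look like the sketch below. It assumes the same _heap_start symbol and fill pattern as fill_heap(); the function name is purely illustrative.

// Count how many fill words at the low end of the region were never
// overwritten, i.e. the headroom between static data/heap and the deepest
// stack the program has reached so far.
extern unsigned _heap_start;

unsigned untouched_words(void)
{
    unsigned *p = &_heap_start;
    unsigned n = 0;
    register unsigned *msp_reg;
    __asm__("mrs %0, msp\n" : "=r" (msp_reg));
    while (p < msp_reg && *p == 0x55555555) {  // stop at the first overwritten word
        ++p;
        ++n;
    }
    return n;
}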

Control relay from PIC18 microchip

I have a PIC18F24K20 microcontroller and want to control a relay. It works fine from my RasPi over GPIO, but I can't get it working through the microcontroller.
My test program is this:
#include <xc.h>

#define R1      LATBbits.LATB0
#define R1_TRIS TRISBbits.RB0
#define R2      LATBbits.LATB1
#define R2_TRIS TRISBbits.RB1

void main(void) {
    R1_TRIS = 0;
    R2_TRIS = 0;
    R1 = 1;
    R2 = 0;
    return;
}
What am I doing wrong?
Replace the return;
with:
while(1)
{
    ClrWdt();
}
According to the datasheet, RB0 and RB1 have several modules connected to these pins, so you should verify they are turned off:
Analog,
ECCP,
Comparator.
BTW, why use two pins to control one relay?
You may also need a driver (e.g. a transistor) in order to operate the relay.
According to the datasheet, add the following initialization code:
CCP1CON=0;
CCP2CON=0;
ADCON0=0;
CM1CON0=0;
CM2CON0=0;
Also, the PBADEN configuration bit should be zero.
The main function should never return on embedded PIC processors. In some implementations, returning causes a software reset, which puts your pins back into high-impedance mode. Try adding while (1); at the end of your main.
Check whether the pins you use have other functions. The typical gotcha is that the pins double as analog inputs and are enabled as such by default.
Look up which AN pins they correspond to in the datasheet and disable them with code like
ANSEL.ANS0 = 0;
ANSEL.ANS1 = 0;
If you have the watchdog enabled you also might want to add a
ClrWdt();
call to the main while loop (which was a good suggestion from Mathieu).
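Putting the answers above together, a minimal test program might look like the sketch below. The peripheral registers are the ones named in the answers; ANSEL/ANSELH and the relevant configuration bits should still be checked against the PIC18F24K20 datasheet, and the relay is assumed to be driven through a transistor on RB0.

#include <xc.h>

void main(void)
{
    ADCON0  = 0;            // ADC off
    CM1CON0 = 0;            // comparators off
    CM2CON0 = 0;
    CCP1CON = 0;            // ECCP/CCP modules off
    CCP2CON = 0;
    ANSEL   = 0;            // all pins digital (register names assumed for this family)
    ANSELH  = 0;

    TRISBbits.TRISB0 = 0;   // RB0 as output, driving the relay transistor
    LATBbits.LATB0   = 1;   // energize the relay

    while (1) {
        ClrWdt();           // keep the watchdog cleared in case it is enabled
    }
}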

Use of Xil_Out32 in Xilinx SDK

In Vivado I successfully made a simple block diagram to control the LEDs of my Zybo board. I can see that the offset address for my LEDs is 0x4120_0000 and the high address is 0x4120_FFFF. Now when I go to the SDK:
#include <xil_printf.h>
#include <xil_types.h>
#include "platform.h"
#include "xgpio_l.h"

volatile u32 *LED_DATA = (u32 *) 0x41200000;

int main()
{
    init_platform();
    xil_printf(" Writing to LEDs: \n\r");
    Xil_Out32((&LED_DATA) + (0x00), 0xFFFFFFFF);  // All LEDs ON
    cleanup_platform();
    return 0;
}
I programmed the FPGA and ran the above code, but still no success whatsoever.
Could someone point out my errors?
Thanks in advance
Your mistake is using &LED_DATA, which is the address of the pointer variable LED_DATA itself, not 0x41200000 as I think you expect.
Try
Xil_Out32(0x41200000, 0xFFFFFFFF);
or
*LED_DATA = 0xFFFFFFFF;
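For reference, a corrected version of the whole program could look like the sketch below, assuming the AXI GPIO really is mapped at 0x41200000 and the LEDs sit on its first channel (data register at offset 0x0). If the GPIO channel was not configured as all-outputs in the block design, the tri-state register at offset 0x4 would also need to be cleared.

#include "xil_printf.h"
#include "xil_io.h"
#include "platform.h"

#define LED_BASEADDR 0x41200000U   // base address from the Vivado address editor

int main(void)
{
    init_platform();
    xil_printf("Writing to LEDs:\n\r");

    Xil_Out32(LED_BASEADDR + 0x0, 0xFFFFFFFF);   // channel 1 data register: all LEDs on

    cleanup_platform();
    return 0;
}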
Try
#define ADDR 0x41200000 // place this before the main() function
Then add the following line inside the main function:
Xil_Out32(ADDR + 0x00000000, 0xFFFFFFFF); // All LEDs ON
This should work.
This works:
#define ADDRESS_GPIO_0 0x41200000 // vivado block diagram address editor
XGpioPs Gpio1;                    // driver instance (needs "xgpiops.h")
XGpioPs_Config *ConfigPtr1 = XGpioPs_LookupConfig(XPAR_PS7_GPIO_0_DEVICE_ID);
XGpioPs_CfgInitialize(&Gpio1, ConfigPtr1, ADDRESS_GPIO_0);
XGpioPs_SetDirection(&Gpio1, XGPIOPS_BANK0, 0x0F);
XGpioPs_Write(&Gpio1, XGPIOPS_BANK0, 0x0F);
Thank you for this post. It helped me resolve a compile issue in the SDK. The issue was that the line below would not compile:
xil_printf("Wrote: 0x%08x \n\r", *(baseaddr_p+0));
I added this and it worked:
#include "xil_printf.h"
Thanks so much
Rajat Sewal

USART problems with ATmega16

I have an ATmega16 and have looped Rx and Tx back (just connected Rx to Tx) to send and receive one char in a loop, but I only seem to be receiving 0x00 instead of the char I send.
I have the CPU configured to 1MHz.
My thought is that since Rx and Tx are just looped back, it shouldn't matter what speed I set, since both ends use the same clock?
So basically, I'm trying to get an LED on PORTC to flash when the correct char is received.
Here is the code:
#ifndef F_CPU
#define F_CPU 10000000
#endif

#define BAUD 9600
#define BAUDRATE ((F_CPU)/(BAUD*16)-1)

#include <avr/io.h>
#include <util/delay.h>

void uart_init(void){
    UBRRH = (BAUDRATE>>8);
    UBRRL = BAUDRATE;
    UCSRB = (1<<TXEN) | (1<<RXEN);
    UCSRC = (1<<URSEL) | (1<<UCSZ0) | (1<<UCSZ1);
}

void uart_transmit (unsigned char data){
    while (!(UCSRA & (1<<UDRE)));
    UDR = data;
}

unsigned char uart_recive(void){
    while(!(UCSRA) & (1<<RXC));
    return UDR;
}

int main(void)
{
    uart_init();
    unsigned char c;

    PORTC = 0xff;
    DDRC = 0xff;

    while(1)
    {
        _delay_ms(200);
        uart_transmit(0x2B);
        c = uart_recive();
        if(c==0x2B){
            PORTC = PORTC ^ 0xff;
        }
    }
}
Any thoughts on what I am doing wrong?
The code seems right.
Things you may have to check:
whether your baud rate is really the one you intend to have
whether you meant to send a char like 'p'; right now you are sending a '+'
whether your terminal's port configuration matches your UART configuration
I think the last one is the problem.
You can try this code from the ATmega manual:
/* Set frame format: 8 data bits, 2 stop bits */
UCSRC = (1<<URSEL)|(1<<USBS)|(3<<UCSZ0);
After building your program, go to your terminal's port configuration and make sure it is set to 8-bit data and 2 stop bits. Then test it on your microcontroller and see what happens. Please come back with the result.
Consider the real baud rate accuracy. See e.g. http://www.wormfood.net/avrbaudcalc.php?postbitrate=9600&postclock=1; the AVR gives roughly a 7.5% error for 9600 baud at a 1MHz clock, which is a rather high error. Whether that matters depends on what you are sending and receiving. "Normally" you would see garbage; if you permanently receive 0x00 it looks like another problem.
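To put a number on that error (plain arithmetic on the UBRR formula, nothing in the question's code changed):

UBRR = F_CPU/(16*BAUD) - 1 = 1000000/(16*9600) - 1 ≈ 5.5   -> must be 5 or 6
UBRR = 5:  real baud = 1000000/(16*6) ≈ 10417  (about +8.5% off 9600)
UBRR = 6:  real baud = 1000000/(16*7) ≈  8929  (about -7.0% off 9600)

Either way the deviation is well beyond the few percent an asynchronous UART link typically tolerates.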
Your F_CPU is set to 10MHz, but you said the chip is configured for 1MHz.
Also check your fuses to see whether you really activated the crystal.
If you just use the internal oscillator: it has a relatively large error, so your UART timing may be off (though I never had problems using the internal oscillator for debugging).
Another source of error may be your F_CPU definition. Often this preprocessor constant is already defined (possibly also wrongly) in the Makefile or in the IDE project settings, so because of the #ifndef your #define in the code has no effect.
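Worked through with the question's own macro, the mismatch looks like this:

BAUDRATE = F_CPU/(BAUD*16) - 1 = 10000000/(9600*16) - 1 = 64      (computed assuming 10MHz)
real baud at the actual 1MHz clock = 1000000/(16*(64+1)) ≈ 962 baud, not 9600

On a pure Rx-Tx loopback both ends share the same (wrong) rate, so frames still get through on-chip, but anything else on that line, and every _delay_ms() value, will be off by roughly a factor of ten.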
The PORTC pins TDI, TMS, TCK and TDO are always high because these pins are used for JTAG, and JTAG is enabled by default. If you want to use PORTC in your application you have to disable JTAG via the JTAGEN fuse; for the ATmega16, set JTAGEN to 1 (unprogrammed). Remember that a fuse bit of 0 means programmed and 1 means unprogrammed. One more thing: if you run at more than 8MHz you have to set the clock fuses accordingly, otherwise your program will give unexpected or wrong results. Thanks.
