I am using an Arduino Uno and I am running out of SRAM.
I came across the F() macro, and it saved memory with Serial.println() and with a SoftwareSerial object.
But most of my code uses an AltSoftSerial library object, and there I do not see any change in SRAM usage when I use the F() macro in print.
Please let me know whether the F() macro can be used with AltSoftSerial or not.
Thanks in advance.
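For reference, the pattern in question looks roughly like the following (a minimal sketch; the baud rates are arbitrary, and on an Uno AltSoftSerial uses fixed pins 8/9):

```cpp
#include <AltSoftSerial.h>

AltSoftSerial altSerial;  // fixed pins: RX 8, TX 9 on an Uno

void setup() {
  Serial.begin(9600);
  altSerial.begin(9600);
  // F() keeps the literal in flash with the hardware Serial object...
  Serial.println(F("stored in flash"));
  // ...but does the same call save SRAM with AltSoftSerial?
  altSerial.println(F("stored in flash?"));
}

void loop() {}
```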
I am programming on Arduino boards that have several serial ports (say Serial, Serial1 and Serial3 for now). Each port is a separate object. To get a port working, one first needs to initialize it with the begin() method. The problem is that the corresponding objects are all available in the Arduino IDE by default, even if you never declare or initialize them in your sketch, so calling the constructor or initializing a serial port is not required just to reference it without a compiler error. As a consequence, the following kind of code compiles fine even though it contains a typo:
byte crrt_char;

void setup() {
  Serial.begin(115200);
  delay(100);
  Serial.println("booted");
  Serial3.begin(57600);
  // Serial1.begin(9600);
  delay(100);
}

void loop() {
  if (Serial3.available() > 0) {
    crrt_char = Serial1.read();
    Serial.println(crrt_char, HEX);
    delayMicroseconds(5);
  }
}
(it should be Serial3 instead of Serial1 in the loop).
I have been bitten by this kind of bug several times and lost a lot of time debugging (in more complex code, of course), and I find it sad that the compiler does not save me (to me, checking for this kind of typo looks like a job for a compiler, doesn't it?). Is there any way I could get some compiler help for detecting this kind of error?
The Arduino core is available here:
https://github.com/arduino/ArduinoCore-avr
Would a possibility be to write my own Arduino core / variants without the serial ports pre-declared, so that I would need to declare them myself before I can use them?
While it may seem unfair, what the compiler is doing is correct. The compiler must compile the code the way you have written it.
People often confuse the job of code assistance with the job of the compiler: it is your job to ensure that the code is written correctly, and the compiler's job to confirm that the code follows proper syntax.
As for making a board variant and including it in an Arduino core, you would have to change the HardwareSerial.h file to ensure that unnecessary serial objects are not declared.
An easier solution would be to make a macro hold the Serial object you want to use, like so:
#define CONTROL_PORT Serial
#define COMMUNICATION_PORT Serial3
And in your code use CONTROL_PORT and COMMUNICATION_PORT in the following manner:
CONTROL_PORT.begin(9600);
COMMUNICATION_PORT.begin(9600);
With this, a mistyped alias name fails to compile, and you can swap, say, Serial3 for Serial1 by editing a single line.
I hope this helps.
What does the construct (p is on the GPU)
#pragma acc host_data use_device(p)
{...}
exactly do?
"A host_data construct makes the address of device data available
on the host." (The OpenACC API). And use_device "directs the compiler to use the device address of any entry in list, for instance, when passing a variable to a procedure" (OpenACC Programming and Best Practices Guide). Does it mean that, for example, if I have the variables
int A=1;
int B=2;
#pragma acc declare device_resident(A,B)
...
allocated on the device, I can write from the host
#pragma acc host_data use_device(A,B)
{
memcpy(&A,&B,sizeof(int));
}
I suppose this is wrong. Please explain this to me.
The OpenACC "host_data" directive is used when you need to get the device address for a variable for use within host code. It's mostly used for interoperability with CUDA or CUDA aware MPI when you want to pass in the device address of a variable.
In your example, this would most likely cause an error, since passing a device address to the system "memcpy" would give a segfault. If you change "memcpy" to "cudaMemcpy" or another routine that expects device addresses, then it would be fine.
This blog post may be helpful: https://devblogs.nvidia.com/parallelforall/3-versatile-openacc-interoperability-techniques/
I want to write a program that reads the status of a GPIO pin (whether it is high or not), specifically using C++. I know that I have to export it by writing a value under /sys/class/gpio and then set its direction to "in". Now I am confused about how to catch the interrupt generated on a GPIO pin and perform some action in my code in response to that input. I don't want to use any custom-made library functions.
Thank you.
I was able to control GPIO using the mmap system call to drive an LED directly from user space. Now I want to implement the driver in kernel space.
I am trying to write my first kernel-space device driver, for a 16x2 LCD, in Linux on an ARM controller (Raspberry Pi).
Now I need to access the GPIO for this purpose.
On AVR I used to access a port like this:
#define PORTA *(volatile unsigned char*)0x30
I was reading LDD3; it says to use the inb() and outb() functions to access I/O ports:
http://www.makelinux.net/ldd3/chp-9-sect-2
1> Can we not use a #define of the port address to access the GPIO?
2> What are the advantages of using the inb() and outb() functions for controlling the GPIO?
Please suggest.
On AVR I used to access a port like this:
#define PORTA *(volatile unsigned char*)0x30
That's an improper definition that overloads the symbol PORTA.
Besides defining the port address as 0x30, you are also dereferencing that location.
So a statement like x = PORTA; is actually a read operation, but there's no indication of that in the name; you have effectively defined a macro that behaves like READ_PORTA (or, on the left of an assignment, like WRITE_PORTA) while being named like an address.
1> Can we not use a #define of the port address to access the GPIO?
Of course you can (and should).
#define PORTA ((volatile unsigned char *)0x30)
You'll find similar statements in header files for device registers in the Linux source tree. When developing a new device driver, I look for a header file of #defines for all of the device's registers and command codes, and start writing one if no file is already available.
2> What are the advantages of using the inb() and outb() functions for controlling the GPIO?
The code is then an unambiguous statement that I/O is being performed, regardless of whether the architecture uses I/O ports or memory-mapped I/O.
Anyone reading the following should be able to deduce what is going on:
x = inb(PORTA);
versus the confusion when using your macro:
x = PORTA;
The above statement using an overloaded macro would not pass a code review conducted by competent coders.
You should also get familiar with and use the Linux kernel coding style.
1) The use of defines often simplifies your task. You could, of course, skip the define for your port and use this construction literally everywhere you need to access the port. But then you would have to replace 0x30 everywhere with another address if you change the design of your device, for example if you decide to connect your LED to port B. It would also make your code less readable. Alternatively, you could declare a function that accesses your port; if such a simple function is declared inline (where your compiler supports it), there is no difference in performance.
2) The advantage of using inb() and outb() is the portability of your program. If that is not a concern, then it is fine to access your port directly.
Well, I'm using Code::Blocks as the IDE and WinAVR as the compiler.
F_CPU is set to 8000000UL.
I'm writing code for an ATmega32.
But when I run the compiled code (the .hex file) in the Proteus Design Suite (ISIS), _delay_ms(1000) doesn't give a delay of 1 s. I don't know whether this is right or wrong, but I've selected the CKSEL fuses as (0100), Int. RC 8 MHz, in Edit Component.
What's wrong?
Have you tried setting the compiler optimization to something other than -O0? From the avr-libc docs regarding the _delay_* functions:
In order for these functions to work as intended, compiler
optimizations must be enabled, and the delay time must be an
expression that is a known constant at compile-time.
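In practice that means F_CPU must match the real clock and be defined before the header is included, and the argument must be a literal constant. An AVR-specific fragment (not meant to compile off-target):

```cpp
#define F_CPU 8000000UL   // must match the real clock, *before* the include
#include <util/delay.h>

int main(void) {
    for (;;) {
        _delay_ms(1000);  // OK: compile-time constant, optimizations on
        // _delay_ms(t);  // a runtime variable t defeats the cycle math
    }
}
```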
Using PWM for servo control, I figured out that even with the Internal 8 MHz setting, Proteus actually simulates with a clock of 1 MHz. If you change F_CPU to 1000000UL you will see that the delay works just fine.
It's just a Proteus simulation lag. On the real device your delay function will work properly. To simulate time delays accurately, a good choice is the AVR Studio simulator.