How to programmatically identify which microcontroller is connected to my computer - avr

I recently bought a NodeMCU ESP8266 and started playing with it. Even though almost all the scripts I've written for Arduino microcontrollers work fine on the ESP8266, there are some differences, for example when reading from the EEPROM or using the internal VREF on my ESP8266.
I know that one can identify which Arduino board is connected using the following code:
#if defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
//Code in here will only be compiled if an Arduino Mega is used.
#elif defined(__AVR_ATmega328P__) || defined(__AVR_ATmega168__)
//Code in here will only be compiled if an Arduino Uno (or older) is used.
#elif defined(__AVR_ATmega32U4__) || defined(__AVR_ATmega16U4__)
//Code in here will only be compiled if an Arduino Leonardo is used.
#endif
However, this only works for AVR-based Arduino microcontrollers. How can I do the same for an ESP8266 microcontroller?

Like you mentioned in your question, the #if defined(__xxxxx__) statements are not actually running on the microcontroller. They're preprocessor directives: they decide which code is passed to the actual compiler and which is omitted.
What you can do is write your code to read from the EEPROM as usual, and for the sections of code that differ between microcontrollers (I'd recommend a separate function for each) choose the right one at compile time.
For example
#ifdef AVR_MICROCONTROLLER
void read_from_eeprom(...)
{
    // code for the AVR chip
}
#else // assuming there are no other options besides AVR and ESP
void read_from_eeprom(...)
{
    // code for the ESP chip
}
#endif
Then when compiling, use a -D flag to specify that you are building for AVR, or omit the flag for ESP:
gcc ... -D AVR_MICROCONTROLLER ...
I sense the reason you asked this question might stem from confusion about where the __AVR_ATmega1280__, etc. macros come from.
Basically, they aren't keywords the compiler uses to decide which chip to compile for. They're created by the person(s) who wrote the source file, and they're used for portability, so the same file can be used with many different platforms/processors.
In my answer I used a command-line flag to define the AVR_MICROCONTROLLER macro.
Other projects (e.g. the Marlin firmware running on Arduinos) also have config files full of #define statements that control exactly how the code is compiled. Long story short: yes, the same can be done for other microcontrollers, and you do it by writing your own preprocessor #if statements and then choosing which macros to define at compile time, depending on the chip you want to run the code on.
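For illustration, a minimal config header in that style might look like this (the file name config.h is just an assumed example; AVR_MICROCONTROLLER matches the macro used above):
// config.h - hypothetical project-wide configuration, included by every source file
// Leave this defined when building for AVR; comment it out (or drop the -D flag) for ESP.
#define AVR_MICROCONTROLLER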

#if defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
//Code in here will only be compiled if an Arduino Mega is used.
Is it possible to do the same for other types of microcontrollers? For example, the one in the ESP8266? What should I look for?
The mentioned macros are built-in macros provided by avr-gcc. They are used to determine which device is being compiled for, for example by avr-libc. Actually, these macros are no longer built into the compiler / preprocessor today; they are provided by the device-specs file device-specs/specs-<device>, which injects the respective -D__AVR_<DEVICE>__ into the preprocessor's command line according to -mmcu=<device>.
What you can use for AVR is
#ifdef __AVR__
which is defined when compiling for AVR and is still a built-in macro from avr-gcc / avr-g++.
ESP8266 is a completely different architecture; you would use xtensa-g++ to compile code for that µC, and that incarnation of GCC defines __xtensa__ and __XTENSA__ as built-in macros (and definitely not __AVR__).
However, whereas device support in the AVR tools is very sophisticated and hundreds of different -mmcu=<device> options are recognized by avr-gcc, this is not the case for xtensa: you will have to define your own macros if you want to distinguish between different xtensa derivatives.
As avr and xtensa are very different architectures, you can also put the architecture-specific stuff into modules of their own, like an eeprom-avr.cpp that provides read_from_eeprom (or whatever) for AVR and is only included in the build when building for AVR with avr-g++, and a similar xtensa-only module eeprom-xtensa.cpp that's only included when building with xtensa-g++.
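A minimal sketch of that layout, assuming the Arduino EEPROM library on both cores (the file names, the header eeprom.h and the function signature are illustrative; the #ifdef guards use the built-in macros mentioned above, so nothing breaks even if both files end up in the same build):
// eeprom.h - shared declaration used by the rest of the program
#ifndef EEPROM_PORTABLE_H
#define EEPROM_PORTABLE_H
#include <stdint.h>
uint8_t read_from_eeprom(int address);
#endif

// eeprom-avr.cpp - only takes effect when compiling with avr-g++
#ifdef __AVR__
#include "eeprom.h"
#include <EEPROM.h>

uint8_t read_from_eeprom(int address)
{
    return EEPROM.read(address);   // the AVR core's real EEPROM can be read directly
}
#endif

// eeprom-xtensa.cpp - only takes effect when compiling with xtensa-g++
#ifdef __XTENSA__
#include "eeprom.h"
#include <EEPROM.h>

uint8_t read_from_eeprom(int address)
{
    // The ESP8266 core emulates EEPROM in flash; EEPROM.begin(size) must have
    // been called once (e.g. in setup()) before reads return valid data.
    return EEPROM.read(address);
}
#endif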
How to programmatically identify whether the code is being compiled for AVR or ESP8266?
#if defined (__AVR__)
/* Code for AVR. */
#elif defined (__XTENSA__)
/* Code for ESP8266. */
#else
#error Compiling for unsupported target.
#endif

Related

automatically detecting errors in use of Serial ports in Arduino IDE

I am programming on Arduino boards that have several serial ports (let us say, for now, Serial, Serial1 and Serial3). Each port is a separate object. To use a port, one first needs to initialize it with the begin() method (by "need" I mean what it takes to get it working correctly). The problem is that the corresponding objects are all available in the Arduino IDE by default, even if you do not declare or initialize them in your sketch, so one is not required to call the constructor and/or initialize a serial port before using it (by "required" I mean what must be done to avoid a compiler error). As a consequence, the following kind of code compiles fine even though it contains a typo:
byte crrt_char;

void setup() {
    Serial.begin(115200);
    delay(100);
    Serial.println("booted");
    Serial3.begin(57600);
    // Serial1.begin(9600);
    delay(100);
}

void loop() {
    if (Serial3.available() > 0) {
        crrt_char = Serial1.read();
        Serial.println(crrt_char, HEX);
        delayMicroseconds(5);
    }
}
(it should be Serial3 instead of Serial1 in the loop).
I have been bitten by this kind of bug and lost a lot of time debugging it (in more complex code, of course) several times, and I find it sad that the compiler does not save me (to me, checking for this kind of typo looks like a job for the compiler, isn't it?). Is there any way I could get some compiler help for detecting this kind of error?
The Arduino core is available here:
https://github.com/arduino/ArduinoCore-avr
Would one possibility be to write my own Arduino core / variant without the serial ports pre-declared, so that I would need to declare them myself before I can use them?
While it may seem unfair, what the compiler is doing is correct: it must compile the code the way you have written it.
People often confuse the job of code assistance with the job of the compiler. It's your job to ensure that the code is written correctly; it's the compiler's job to confirm that the code follows proper syntax.
As for making a board variant and including it in an Arduino core, you would have to change the HardwareSerial.h file to ensure that any unnecessary Serial objects are not declared.
An easier solution would be to make a macro hold the Serial object you want to use, like so:
#define CONTROL_PORT Serial
#define COMMUNICATION_PORT Serial3
And in your code use CONTROL_PORT and COMMUNICATION_PORT in the following manner
CONTROL_PORT.begin(9600);
COMMUNICATION_PORT.begin(9600);
With this, that kind of typo becomes much harder to make, and you can switch from Serial1 to Serial3 (or any other port) in a single place whenever you want.
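Applied to the sketch from the question, that might look like this (a sketch assuming the baud rates from the original code):
#define CONTROL_PORT Serial
#define COMMUNICATION_PORT Serial3

byte crrt_char;

void setup() {
    CONTROL_PORT.begin(115200);
    delay(100);
    CONTROL_PORT.println("booted");
    COMMUNICATION_PORT.begin(57600);
    delay(100);
}

void loop() {
    // Only one name refers to the external port, so it cannot be confused with another SerialN.
    if (COMMUNICATION_PORT.available() > 0) {
        crrt_char = COMMUNICATION_PORT.read();
        CONTROL_PORT.println(crrt_char, HEX);
        delayMicroseconds(5);
    }
}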
I hope this helps.

Implementation of putc in Versatile ARM LATEST Kernel-4.6

I am trying to understand how Linux prints the message
"Uncompressing Linux....... done, booting the kernel"
even before it has uncompressed itself on the ARM Versatile board.
In this file, the function decompress_kernel writes the message through putstr(), which in turn calls a putc() function that writes to a UART hardware register.
putc is implemented in this file; it writes directly to the AMBA_UART_DR register, and these registers differ across architectures and even across different chips.
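From memory, that old machine-specific implementation looked roughly like this (a paraphrased sketch; the register offsets, flag bit and UART base address are shown for illustration and may not match the original exactly):
// Old-style machine-specific uncompressor putc, paraphrased
#define AMBA_UART_DR(base)  (*(volatile unsigned char *)((base) + 0x00))  // data register
#define AMBA_UART_FR(base)  (*(volatile unsigned char *)((base) + 0x18))  // flag register

static inline void putc(int c)
{
    unsigned long base = 0x101F1000;        // UART0 physical base on the Versatile board

    while (AMBA_UART_FR(base) & (1 << 5))   // spin while the TX FIFO is full
        ;                                   // (the original uses barrier() here)

    AMBA_UART_DR(base) = c;                 // write the character to the data register
}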
But in the latest kernel 4.6 this was deprecated.
When I checked the putc implementation for the ARM Versatile board in the latest kernel, it has been removed, so
how is it implemented in the latest kernel 4.6, whereas the rest of the machine-specific code still exists?
How does the kernel print the banner in the latest kernel?
Versatile board support code was converted to the multi-platform kernel model (ARCH_MULTIPLATFORM). Just like every other board support code of the same kind, it now takes its putc() prototype from arch/arm/include/debug/uncompress.h.
The actual implementation of putc() is instead a generic assembly function coded in arch/arm/boot/compressed/debug.S.
Being generic, debug.S refers to a few macros (addruart, waituart, senduart, busyuart) to get information about the actual UART hardware. These macros are defined in an include file selected by CONFIG_DEBUG_LL_INCLUDE (search arch/arm/Kconfig.debug for it). In the case of the Versatile board, CONFIG_DEBUG_LL_INCLUDE is defined as arch/arm/include/debug/pl01x.S, where in fact you find those macros.

gcc; Aarch64; Armv8; enable crypto; -mcpu=cortex-a53+crypto

I am trying to optimize for an ARM processor (Cortex-A53) with the ARMv8 architecture for crypto purposes.
The problem is that although the compiler accepts -mcpu=cortex-a53+crypto etc., it doesn't change the output (I checked the assembly output).
Changing mfpu or mcpu to add features like crypto or simd makes no difference; it is completely ignored.
To enable Neon code -ftree-vectorize is needed; how do I make use of crypto?
(I checked the -O1/-O2/-O3 flags; they don't help.)
Edit: I realized I made a mistake by thinking the crypto flag works like an optimization flag handled by the compiler. My bad.
You had two questions...
Why does -mcpu=cortex-a53+crypto not change code output?
The crypto extensions are an optional feature under the AArch64 state of ARMv8-A. The +crypto feature flag indicates to the compiler that these instructions are available for use. From a practical perspective, in GCC 4.8/4.9/5.1, this defines the macro __ARM_FEATURE_CRYPTO and controls whether or not you can use the crypto intrinsics defined in ACLE, for example:
uint8x16_t vaeseq_u8 (uint8x16_t data, uint8x16_t key)
There is no optimisation in current GCC which will automatically convert a sequence of C code to use the cryptography instructions. If you want to make this transformation, you have to do it by hand (and guard it by the appropriate feature macro).
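For example, a hand-written use of the intrinsic above, guarded by the feature macro, might look like this (the function name is illustrative):
#include <arm_neon.h>

#if defined(__ARM_FEATURE_CRYPTO)
// One AES round step via the ACLE intrinsic; compiles to an AESE instruction.
uint8x16_t aes_round_step(uint8x16_t data, uint8x16_t key)
{
    return vaeseq_u8(data, key);
}
#else
#error "Build with -mcpu=cortex-a53+crypto (or similar) to enable the crypto intrinsics."
#endif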
Why do the +fp and +simd flags not change code output?
For -mcpu=cortex-a53 the +fp and +simd flags are implied by default (for some configurations of GCC +crypto may also be implied by default). Adding these feature flags will therefore not change code generation.

change instruction set in GCC

I want to test some architecture changes on an already existing architecture (x86) using simulators. However, to properly test them and run benchmarks, I might have to make some changes to the instruction set. Is there a way to add these changes to GCC or any other compiler?
Simple solution:
One common approach is to add inline assembly, and encode the instruction bytes directly.
For example:
int main()
{
    asm __volatile__ (".byte 0x90\n");
    return 0;
}
compiles (gcc -O3) into:
00000000004005a0 <main>:
4005a0: 90 nop
4005a1: 31 c0 xor %eax,%eax
4005a3: c3 retq
So just replace 0x90 with your instruction bytes. Of course you won't see the actual instruction in a regular objdump, and the program would likely not run on your system (unless you use one of the NOP combinations), but the simulator should recognize it if it's properly implemented there.
Note that you can't expect the compiler to optimize well around an instruction it doesn't know, and you should take care to use the inline assembly clobber/input/output options if the instruction changes state (registers, memory), to ensure correctness. Use optimizations only if you must.
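A minimal sketch of what that looks like, assuming a hypothetical new instruction that reads one register and writes another (the emitted bytes below are just a NOP plus a mov, so the example still runs on real hardware):
#include <cstdio>

int main()
{
    unsigned long in = 42, out;

    // The .byte stands in for the new opcode; the mov makes the example behave
    // sensibly on a real CPU. The constraints tell GCC which registers the
    // "instruction" reads and writes, and the clobber covers memory side effects.
    asm __volatile__ (".byte 0x90\n\t"
                      "mov %%rax, %%rdx\n\t"
                      : "=d"(out)     // output in %rdx
                      : "a"(in)       // input in %rax
                      : "memory");

    printf("out = %lu\n", out);
    return 0;
}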
Complicated solution:
The alternative approach is to implement this in your compiler. It can be done in GCC, but as stated in the comments, LLVM is probably one of the best ones to play with, as it's designed as a compiler development platform. It's still very complicated, though, because LLVM is best suited for IR optimization stages and is somewhat less friendly when you try to modify the target-specific backends.
Still, it's doable, and you will have to do it if you also plan to have your compiler decide when to issue this instruction. I'd suggest starting with the first option, though, to see whether your simulator even works with this addition, and only then spend time on the compiler side.
If and when you do decide to implement this in LLVM, your best bet is to define it as an intrinsic function; there's relatively more documentation about this here: http://llvm.org/docs/ExtendingLLVM.html
You can add new instructions, or change existing ones, by modifying a group of files in GCC called the "machine description": instruction patterns in the <target>.md file, some code in the <target>.c file, predicates, constraints and so on. All of these live in the $GCCHOME/gcc/config/<target>/ folder and are used at the step that generates assembly code from RTL. You can also change where instructions are emitted by modifying other, more general GCC source files (SSA tree generation, RTL generation), but all of that is a bit more complicated.
A simple explanation of what happens:
https://www.cse.iitb.ac.in/grc/slides/cgotut-gcc/topic5-md-intro.pdf
It's doable, and I've done it, but it's tedious. It is basically the process of porting the compiler to a new platform, using an existing platform as a model. Somewhere in GCC there is a file that defines the instruction set, and it goes through various processes during compilation that generate further code and data. It's been 20+ years since I did it, so I have forgotten all the details, sorry.

How can I get a list of legal ARM opcodes from gcc (or elsewhere)?

I'd like to generate pseudo-random ARM instructions. Via assembler directives, I can tell gcc what mode I'm in, and it will complain if I try a set of opcodes and operands that's not legal in that mode, so it must have some internal listing of what can be done in which mode. Where does that live? Would it be easier to extract that info from LLVM?
Is this question "not even wrong"? Should I try a different approach entirely?
To answer my own question: this is actually really easy to do from arm.md and constraints.md in gcc/config/arm/. I probably spent more time asking this question and answering comments on it than I did figuring this out. Turns out I just need to look for 'TARGET_THUMB1', until I get around to implementing Thumb-2.
For the ARM family the buck stops at the ARM ARM (ARM Architecture Reference Manual). There is an ARM instruction set section and a Thumb instruction set section. Within both, each instruction tells you which architecture generation it belongs to (ARMvX, where X is some number like 4 (ARM7) or 5 (ARM9 time frame), etc.). Since the opcode and pseudo-code are listed for each instruction, you should be able to figure out which are real instructions and which, if any, are just syntax to save typing for another one (push and pop, for example).
With the Cortex-M3 and Thumb-2 in particular you also need to look at the TRM (Technical Reference Manual) as well. ARM has (I forget the name) a universal syntax they are trying to use that should work for both Thumb and ARM. For example, on ARM you have three-register instructions:
add r1,r1,r2
In Thumb there are only two-register operations:
add r1,r2
The desire is basically to meet in the middle, or, more accurately, to encourage ARM assemblers to parse Thumb instructions and encode them as the equivalent ARM instructions without complaining. This may have started with Thumb rather than Thumb-2; I have always kept the two syntaxes separate in my code until recently (and I still generally use ARM syntax for ARM and Thumb for Thumb).
And then, yes, you have to see what the specific assembler tool implements, in your case binutils. And it sounds like you have found the binutils/GNU secret decoder ring.
