How to make printf work on STM32F103? - makefile

I am new to the world of STM32F103. I have demo code for the STM32F103 and I am using arm-none-eabi to compile it.
I tried everything I could find on Google, but nothing has worked so far. I have already spent three days on the problem.
Can anyone give me demo code for printf that works well?
Part of my makefile:
CFLAG = -mcpu=$(CPU) -mthumb -Wall -fdump-rtl-expand -specs=nano.specs --specs=rdimon.specs -Wl,--start-group -lgcc -lc -lm -lrdimon -Wl,--end-group
LDFLAG = -mcpu=$(CPU) -T ./stm32_flash.ld -specs=nano.specs --specs=rdimon.specs -Wl,--start-group -lgcc -lc -lm -lrdimon -Wl,--end-group

By including the following linker flags:
LDFLAGS += --specs=rdimon.specs -lc -lrdimon
it looks like you are trying to use what is called semihosting. You are telling the linker to include system call libraries.
Semihosting is a mechanism that enables code running on an ARM target to communicate and use the Input/Output facilities on a host computer that is running a debugger.
Examples of these facilities include keyboard input, screen output, and disk I/O. For example, you can use this mechanism to enable functions in the C library, such as printf() and scanf(), to use the screen and keyboard of the host instead of having a screen and keyboard on the target system.
Since you are using open-source tools for your STM32 development (Makefile and arm-none-eabi), I am assuming you are also using OpenOCD to program your microcontroller. OpenOCD also requires you to enable semihosting, using the following command:
arm semihosting enable
You can add the command to your OpenOCD script, making sure you terminate the configuration stage and enter the run stage with the 'init' command. Below is an example of an OpenOCD script (adapted for the STM32F103):
source [find target/stm32f1x.cfg]
init
arm semihosting enable
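In addition, when you link with rdimon.specs, the semihosting channel usually has to be initialised from the application before the first printf call. A minimal sketch, assuming the initialise_monitor_handles() routine provided by librdimon:
#include <stdio.h>

/* Provided by librdimon; sets up the semihosting file handles. */
extern void initialise_monitor_handles(void);

int main(void)
{
    initialise_monitor_handles();
    printf("Hello over semihosting\n");

    for (;;)
        ;   /* don't fall off the end of main on bare metal */
}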
Other solutions mentioned here, where you retarget the fputc() function to a UART interface, will also work. Semihosting works on all recent ARM Cortex-M cores, but it requires some compiler and debugger configuration (see above). Retargeting fputc() to a UART works with any compiler, but you will have to check your pin configuration for every board.

Writing your own printf implementation is an option, and probably the one I would recommend. Get some inspiration from the standard library implementation and write your own version that caters only to your requirements. In general, you first retarget a putc function to send characters through your serial interface, then build printf on top of that custom putc. A very simple approach is to send the string character by character with repeated calls to the putc function.
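As a concrete illustration, here is a minimal sketch of that idea for the STM32F103. The register and bit names follow the stm32f10x.h CMSIS device header, but the helper names are illustrative and the sketch assumes USART1, its clock and its TX pin have already been configured elsewhere:
#include <stdint.h>
#include "stm32f10x.h"   /* CMSIS device header; adjust to your project */

/* Retargeted putc: blocking transmit of one character over USART1. */
static void my_putc(char c)
{
    while (!(USART1->SR & USART_SR_TXE))   /* wait until the TX data register is empty */
        ;
    USART1->DR = (uint8_t)c;
}

/* "printf" reduced to its simplest form: send a string character by character. */
void my_puts(const char *s)
{
    while (*s)
        my_putc(*s++);
}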
Last but not least, you can find some lightweight printf implementations. The code size and the feature set of these lightweight implementations lie somewhere between a custom-written printf function and the stock standard printf function (aka the beast). I recently tried this Tiny Printf and was very pleased with its performance on an ARM core, in terms of both memory footprint and the number of execution cycles required.
-PS
Copied from my own writings some time back.

Link: How to retarget printf() on an STM32F10x?
Try hijacking the _write function like so:
#define STDOUT_FILENO 1
#define STDERR_FILENO 2

int _write(int file, char *ptr, int len)
{
    switch (file)
    {
    case STDOUT_FILENO: /* stdout */
        // Send the string somewhere
        break;
    case STDERR_FILENO: /* stderr */
        // Send the string somewhere
        break;
    default:
        return -1;
    }
    return len;
}
The original printf will go through this function (depending on what libs you use of course).
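To make the skeleton concrete, "send the string somewhere" might look like this, assuming a blocking uart_putc() helper (the name is a placeholder for whatever transmit routine your project has). If output seems to be held back, printf may be buffering; calling setvbuf(stdout, NULL, _IONBF, 0) once at startup typically makes it appear immediately.
/* Placeholder: blocking transmit of one byte over your UART. */
extern void uart_putc(char c);

int _write(int file, char *ptr, int len)
{
    if (file != 1 /* stdout */ && file != 2 /* stderr */)
        return -1;

    for (int i = 0; i < len; i++)
        uart_putc(ptr[i]);

    return len;
}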

Look there: that is printf from glibc. But you have a microcontroller, so you should write your own printf-style function, where vsprintf formats the result into a buffer and you then send the buffer contents to the UART. Something like this:
#include <stdarg.h>
#include <stdio.h>    /* vsnprintf */

extern void send_via_USART1(const char *s);   /* your UART transmit routine */

/* Named my_printf to avoid clashing with the printf declared in <stdio.h>. */
void my_printf(const char *format, ...)
{
    char buffer[256];
    va_list args;

    va_start(args, format);
    vsnprintf(buffer, sizeof buffer, format, args);  /* bounded, unlike vsprintf */
    va_end(args);

    send_via_USART1(buffer);
}
You can also write your own vsprintf. The standard vsprintf is very heavy, and usually only a small part of its features is needed.
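For example, a stripped-down formatter that understands only %c, %s and %d might look like the sketch below. The function name is illustrative and there is no bounds checking, so size the output buffer generously:
#include <stdarg.h>
#include <stddef.h>

/* Minimal sketch of a cut-down vsprintf replacement: %c, %s, %d only. */
size_t mini_vsprintf(char *out, const char *fmt, va_list args)
{
    char *p = out;

    while (*fmt) {
        if (*fmt != '%') {
            *p++ = *fmt++;
            continue;
        }
        fmt++;                              /* skip the '%' */
        if (*fmt == '\0')
            break;                          /* stray '%' at end of format */
        switch (*fmt++) {
        case 'c':
            *p++ = (char)va_arg(args, int); /* char promotes to int */
            break;
        case 's': {
            const char *s = va_arg(args, const char *);
            while (*s)
                *p++ = *s++;
            break;
        }
        case 'd': {
            int v = va_arg(args, int);
            unsigned u = (v < 0) ? 0u - (unsigned)v : (unsigned)v;
            char tmp[10];
            int i = 0;
            if (v < 0)
                *p++ = '-';
            do {                            /* emit digits in reverse... */
                tmp[i++] = (char)('0' + u % 10);
                u /= 10;
            } while (u);
            while (i--)                     /* ...then flip them */
                *p++ = tmp[i];
            break;
        }
        default:                            /* unknown specifier: copy it */
            *p++ = fmt[-1];
            break;
        }
    }
    *p = '\0';
    return (size_t)(p - out);
}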

Related

How to prevent GCC from inserting memset during link-time optimization?

While developing bare-metal firmware in C for an RV32IM target (RISC-V), I encountered a linking error when LTO is enabled:
/home/duranda/riscv/lib/gcc/riscv64-unknown-elf/10.2.0/../../../../riscv64-unknown-elf/bin/ld: /tmp/firmware.elf.5cZNyC.ltrans0.ltrans.o: in function `.L0 ':
/home/duranda/whatever/firmware.c:493: undefined reference to `memset'
There is, however, no call to memset in my firmware. The memset is inserted by GCC during optimization, as described here. The build is optimized for size using the GCC -Os and -flto -fuse-linker-plugin flags. In addition, the -fno-builtin-memset -nostdinc -fno-tree-loop-distribute-patterns -nostdlib -ffreestanding flags are used to prevent the use of memset during optimization and to avoid pulling in the standard libraries.
How can I prevent memset insertion during LTO? Note that the firmware should not be linked against libc. I also tried providing a custom implementation of memset, but the linker does not use it for the memset inserted during optimization (it still throws an undefined reference).
I hit a similar issue several years ago and tried to fix it, but it turned out I had misunderstood the meaning of -fno-builtin[1]: -fno-builtin does not guarantee that GCC won't call memcpy, memmove or memset implicitly.
I guess the simplest solution is: do NOT compile your libc.c with -flto, or in other words, compile libc.c with -fno-lto.
This is my guess about what happens; I don't know how to reproduce what you see, so it might be incorrect:
During the first phase of LTO, the compiler collects every symbol used in the program,
then asks the linker to provide the corresponding files and discards any unused symbol.
It then reads those files back into GCC and optimizes again; at this point GCC may use built-in functions for optimization or code generation that it did not pull in before.
The reference to memset is created at the LTO stage, which is too late to pull in new symbols in the current GCC LTO flow, so in this case memset was already discarded at an earlier stage.
So you might ask why compiling libc.c with -fno-lto works: because the file is not involved in the LTO flow, it will not be discarded by the LTO flow.
Here is a sample program showing that GCC will call memset even when you compile with -fno-builtin; both aarch64 GCC and RISC-V GCC generate a function call to memset.
// $ riscv64-unknown-elf-gcc x.c -o - -O3 -S -fno-builtin
struct bar {
    int a[100];
};

struct bar y;

void foo() {
    struct bar x = {{0}};   /* zero-initialising the large struct is lowered to a memset call */
    y = x;
}
Here is the corresponding gcc source code[2] for this case.
[1] https://gcc.gnu.org/pipermail/gcc-patches/2014-August/397382.html
[2] https://github.com/riscv/riscv-gcc/blob/riscv-gcc-10.2.0/gcc/expr.c#L3143
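One concrete way to apply the -fno-lto suggestion is to keep a minimal memset in its own translation unit and build just that file without LTO (a sketch; the file name and implementation are illustrative):
/* memset.c - compile this one file with -fno-lto (and without -flto)
   so the symbol survives the LTO pruning described above, then link
   the resulting object into the firmware as usual. */
#include <stddef.h>

void *memset(void *dest, int c, size_t n)
{
    unsigned char *d = dest;

    while (n--)
        *d++ = (unsigned char)c;
    return dest;
}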
I'm not sure -fno-builtin-* does what you think it does. If you use those flags, GCC will try to call an external function. If you don't use those flags, GCC will just insert inline code rather than relying on the library.
So it would appear to me you should simply not use any -fno-builtin flags.

How can I specify the target platform when using libclang to analyse C code?

I am working on a source code analysis tool that uses Clang (version 6.0.1). The source code I want to analyse was written for an ARM processor and is compiled with arm-none-eabi-gcc. My tool runs on Linux or Windows. How can I tell libclang to analyze this code for the target platform rather than the host platform?
When calling clang_indexSourceFile(...) to analyze the source code, I give it the same -D and -I options that I use for arm-none-eabi-gcc, including options that are implicitly added by arm-none-eabi-gcc. These can be obtained by running the following command:
arm-none-eabi-gcc -v -dM -E - </dev/null
I am also passing these ARM-specific flags to both arm-none-eabi-gcc and clang_indexSourceFile(...): -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16 -mfloat-abi=hard
Still, libclang uses built-in type sizes matching the host platform (Linux or Windows) instead of the target platform (ARM). One way to test this is the following, admittedly somewhat contrived, code:
int c = 1;
switch (c) {
case sizeof(long double): return 1;
case 16: return 2;
default: return 0;
}
When I analyze this code with libclang, I get a diagnostic "Duplicate case value '16'" proving that it assumes that a long double is 16 bytes. With arm-none-eabi-gcc, I do not get this error (but I get a similar error when I replace 16 by 8).
Use -target arm-none-eabi as an option in the clang_indexSourceFile(...) call. Thank you Ivan Kosarev for suggesting this solution.
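For illustration, the extra arguments can simply be prepended to the argument array already being passed to libclang (the exact list below is an assumption based on the flags mentioned in the question):
/* -target makes libclang use the ARM EABI type sizes instead of the host's. */
const char *args[] = {
    "-target", "arm-none-eabi",
    "-mcpu=cortex-m4", "-mthumb", "-mfpu=fpv4-sp-d16", "-mfloat-abi=hard",
};
const int num_args = sizeof(args) / sizeof(args[0]);

/* Hand args and num_args to clang_indexSourceFile(...) as its command-line
   argument array and count, alongside the -D and -I options collected from
   arm-none-eabi-gcc. */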

Linking to a bootloader callback function from firmware

I'm trying to achieve something similar to this question. I'm compiling a firmware file written in C, and the code needs to call a function in the bootloader.
My firmware file looks like this:
void callback(void);

int main(void)
{
    __asm__("nop; ");
    callback();
    __asm__("nop; ");
    return 0;
}
The firmware compiles without error using gcc firmware.c, but the function body only contains the two nop instructions with nothing in between them (which makes sense, since the function is undefined).
I made a script that runs the bootloader and prints out the address &callback, which I can use in the firmware to define a function pointer in my main():
void (*call_this)(void) = (void (*)(void )) 0x555555554abd;
call_this();
That makes the callback work, but I don't want to have to run the bootloader to compile the firmware.
I've tried fumbling around with linker scripts, but I'm new to those.
I tried supplying
PROVIDE(callback = 0x0000000000000969);
or
PROVIDE(callback = 0x555555554abd);
to the linker by compiling the firmware with:
gcc -Xlinker -T linkerscript firmware.c
The first address is from nm firmware.out | grep callback, the other from running the bootloader in gdb. Compiling with the linker script gives this error:
/usr/bin/ld: firmware.out: Not enough room for program headers, try linking with -N
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
After some more reading, I think I should use the -R flag of ld to accomplish this.
Read symbol names and their addresses from filename, but do not relocate it or include it in the output. This allows your output file to refer symbolically to absolute locations of memory defined in other programs. You may use this option more than once.
Just haven't made it work quite right yet.
Use the --no-dynamic-linker linker option, as done by U-Boot, to solve this issue. Note that if you invoke the linker through gcc, the option must be passed as -Wl,--no-dynamic-linker.
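Combined with the PROVIDE(callback = ...) line already in the linker script, the invocation from the question might then become (a sketch; script name and source file as in the question):
gcc -Xlinker -T linkerscript -Wl,--no-dynamic-linker firmware.c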

ld fails to find the entry symbol main when linking

I am writing a simple "hello world" bootloader in C with inline assembly, using this article. Nothing fancy, no kernel loading or other advanced topics. Just a plain old "hello world" message.
Here are my files:
boot.c
/* generate 16-bit code */
__asm__(".code16\n");
/* jump boot code entry */
__asm__("jmpl $0x0000, $main\n");

/* user defined function to print series of characters terminated by null
   character */
void printString(const char* pStr) {
    while (*pStr) {
        __asm__ __volatile__ (
            "int $0x10" : : "a"(0x0e00 | *pStr), "b"(0x0007)
        );
        ++pStr;
    }
}

void main() {
    /* calling the printString function passing string as an argument */
    printString("Hello, world!");
}
boot.ld
ENTRY(main);
SECTIONS
{
. = 0x7C00;
.text : AT(0x7C00)
{
*(.text);
}
.sig : AT(0x7DFE)
{
SHORT(0xaa55);
}
}
I then ran the following commands (different from the first article; adapted from another StackOverflow article, as the commands in the first article would not work for me):
gcc -std=c99 -c -g -Os -march=i686 -m32 -ffreestanding -Wall -Werror boot.c -o boot.o
ld -static -T boot.ld -m elf_i386 -nostdlib --nmagic -o boot.elf boot.o
The first line compiles successfully, but I get errors upon executing the second line:
ld: warning: cannot find entry symbol main; defaulting to 0000000000007c00
boot.o:boot.c:(.text+0x2): undefined reference to 'main'
boot.o: In function 'main':
C:(...)/boot.c:16: undefined reference to '__main'
C:(...)/boot.c:16:(.text.startup+0xe): relocation truncated to fit: DISP16 against undefined symbol '__main'
What's wrong? I use Windows 10 x64 with the gcc compiler that comes with Dev-C++.
I'd suggest an i686-elf cross-compiler rather than a native Windows compiler and toolchain. I think part of your problem is down to peculiarities of the Windows i386pe format.
The .sig section is likely not being written at all, since that unknown section probably isn't marked as allocatable data; as a result the signature never makes it into the final binary file. It is also possible that the virtual memory address (VMA) is not being set in boot.ld, so it may not advance the boot signature into the last 2 bytes of the 512-byte sector. As well, with the Windows format, read-only data is placed in sections starting with .rdata. You'll want to make sure those are included after the data section and before the boot signature; failing to do this will have the linker script place unprocessed input sections at the end, beyond the boot signature.
Assuming you have made the changes you mentioned in the comments about the extra underscores, your files may work this way:
boot.ld:
ENTRY(__main);
SECTIONS
{
. = 0x7C00;
.text : AT(0x7C00)
{
*(.text);
}
.data :
{
*(.data);
*(.rdata*);
}
.sig 0x7DFE : AT(0x7DFE) SUBALIGN(0)
{
SHORT(0xaa55);
}
}
The commands to compile/link and adjust the .sig section to be a regular readonly allocated data section would look like:
gcc.exe -std=c99 -c -g -Os -march=i686 -m32 -ffreestanding -Wall -Werror boot.c -o boot.o
ld.exe -mi386pe -static -T boot.ld -nostdlib --nmagic -o boot.elf boot.o
# This adjusts the .sig section attributes and updates boot.elf
objcopy --set-section-flags .sig=alloc,contents,load,data,readonly boot.elf boot.elf
# Convert to binary
objcopy -O binary boot.elf boot.bin
Other Observations
Your use of __asm__(".code16\n"); will not generate usable code for a bootloader. You'll want to use the experimental pseudo 16-bit code generation, which makes the assembler encode the 32-bit code emitted by the compiler so that it is usable in 16-bit real mode. You can do this by putting __asm__(".code16gcc\n"); at the top of each C/C++ file.
This tutorial has some bad advice. The global-level basic assembly statement that does the JMP to main may be relocated to somewhere other than the beginning of the bootloader (some optimization levels can cause this). The startup code also doesn't set ES, DS and CS to 0x0000, nor does it set up the SS:SP stack segment and pointer. This can cause problems; see the sketch of a start-up stub below.
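A minimal sketch of such a stub, assuming an i686-elf-style toolchain (with the Windows i386pe toolchain the main symbol would need a leading underscore, and keeping the stub at the very start of the binary is still the linker script's job):
__asm__(".code16gcc\n"
        "jmpl $0x0000, $start16\n"   /* far jump forces CS = 0x0000 */
        "start16:\n"
        "    xorw %ax, %ax\n"
        "    movw %ax, %ds\n"        /* zero the data segments */
        "    movw %ax, %es\n"
        "    movw %ax, %ss\n"        /* stack grows down from just below the bootloader */
        "    movw $0x7C00, %sp\n"
        "    cld\n"                  /* string instructions count upward */
        "    calll main\n"
        "halt:\n"
        "    hlt\n"
        "    jmp halt\n");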
If trying to run from a USB drive on real hardware you may find you'll need a Boot Parameter Block. This Stackoverflow Answer I wrote discusses this issue and a possible work around under Real Hardware / USB / Laptop Issues
Note: The only useful code that GCC currently generates is 32-bit code that can run in 16-bit real mode. This means that you can't expect this code to run on a processor earlier than a 386 like the 80186/80286/8086 etc.
My general recommendation is not to create bootloaders with GCC unless you really know what you are doing and understand all the nuances involved. Writing it in assembly is probably a much better idea.
If you want a C/C++ compiler that generates true 16-bit code you may wish to look at OpenWatcom

strdup error on g++ with c++0x

I have some C++0x code; I was able to reproduce the problem below. The code below works fine without -std=c++0x, however I need that flag for my real code.
How do I get strdup declared under C++0x with gcc 4.5.2?
Note: I am using MinGW. I tried including cstdlib, cstring and string.h, and tried using std::. No luck.
>g++ -std=c++0x a.cpp
a.cpp: In function 'int main()':
a.cpp:4:11: error: 'strdup' was not declared in this scope
code:
#include <string.h>
int main()
{
strdup("");
return 0;
}
-std=gnu++0x (instead of -std=c++0x) does the trick for me; -D_GNU_SOURCE didn't work (I tried with a cross-compiler, but perhaps it works with other kinds of g++).
It appears that the default (no -std=... passed) is "GNU C++" and not "strict standard C++", so the flag for "don't change anything except for upgrading to C++11" is -std=gnu++0x, not -std=c++0x; the latter means "upgrade to C++11 and be stricter than by default".
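In other words, keeping everything else the same, the command from the question just needs the GNU dialect flag:
>g++ -std=gnu++0x a.cpp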
strdup may not be included in the library you are linking against (you mentioned MinGW). I'm not sure whether it's in C++0x or not; I know it's not in earlier versions of the C/C++ standards.
It's a very simple function, and you could just include it in your program (though it's not strictly legal to call it "strdup", since all names beginning with "str" followed by a lowercase letter are reserved for implementation extensions):
#include <stdlib.h>   /* malloc */
#include <string.h>   /* strlen, memcpy */

char *my_strdup(const char *str) {
    size_t len = strlen(str);
    char *x = (char *)malloc(len + 1);  /* +1 for the null terminator */
    if (!x) return NULL;                /* malloc could not allocate memory */
    memcpy(x, str, len + 1);            /* copy the string into the new buffer */
    return x;
}
This page explains that strdup conforms, among others, to the POSIX and BSD standards, and that the GNU extensions implement it. Maybe it works if you compile your code with -D_GNU_SOURCE?
EDIT: just to expand a bit, you probably do not need anything other than including cstring on a POSIX system. But you are using GCC on Windows, which is not POSIX, so you need the extra definition to enable strdup.
Add the preprocessor symbol _CRT_NONSTDC_NO_DEPRECATE under Project Properties -> C/C++ Build -> GCC C++ Compiler -> Preprocessor -> Tool Settings.
Don't forget to check Preprocessor Only (-E).
This worked for me on Windows with mingw32.
