Override GCC linker script symbols in C code using a weak declaration

I am building an ELF target. I have a linker script where I set some of the symbol locations (these symbols are defined in different locations, such as ROM, whose addresses are given below):
A = 0x12345678;
B = 0x1234567c;
D = 0x1234568c;
In the C code I can use these symbols A and B without defining them, which is expected.
I want to know whether I can override the symbol D, i.e., whether my executable can provide its own definition of D; in that case the linker should ignore the script's value. Is there a way to declare the symbols in the linker script as 'weak', so that the linker uses the input symbols only if they are not defined in any of the linked objects?

Use the PROVIDE directive:
PROVIDE(D = 0x1234568c);
From the ld documentation:
In some cases, it is desirable for a linker script to define a symbol only if it is referenced and is not defined by any object included in the link.
…
If, on the other hand, the program defines … the linker will silently use the definition in the program.

Related

Is there a way to have the compiler warn/error if an attribute section is not defined in the linker script?

I am using the arm-none-eabi-gcc toolchain for ARM microcontrollers, and am defining a specific section in FLASH where this foo variable should live.
Let's say I have the example definition:
int foo __attribute__((section(".bar"))) = 5;
What I have observed is that if .bar is not assigned in the linker script, then the build will succeed and foo will live in RAM instead, as initialized data (the constant initial value will of course add to the FLASH size as well). The annoying part is that the linker does not complain when the section does not exist, so data expected to reside in FLASH may silently live at a non-fixed location in RAM. Is there a compile/link option to force a failure if this occurs?
According to the GNU ld documentation, ld can be told to treat orphan linker sections as errors using the --orphan-handling=error command-line option.
Assuming orphan.c contains the following code:
int foo __attribute__((section(".bar"))) = 5;
int main(void)
{
    return 0;
}
The following command succeeds:
aarch64-elf-gcc --specs=rdimon.specs -o orphan orphan.c
But this one fails:
aarch64-elf-gcc --specs=rdimon.specs -Wl,--orphan-handling=error -o orphan orphan.c
c:/git/cortex-baremetal/opt/gcc-linaro-7.3.1-2018.05-i686-mingw32_aarch64-elf/bin/../lib/gcc/aarch64-elf/7.3.1/../../../../aarch64-elf/bin/ld.exe: error: unplaced orphan section `.tm_clone_table' from `c:/git/cortex-baremetal/opt/gcc-linaro-7.3.1-2018.05-i686-mingw32_aarch64-elf/bin/../lib/gcc/aarch64-elf/7.3.1/crtbegin.o'.
c:/git/cortex-baremetal/opt/gcc-linaro-7.3.1-2018.05-i686-mingw32_aarch64-elf/bin/../lib/gcc/aarch64-elf/7.3.1/../../../../aarch64-elf/bin/ld.exe: error: unplaced orphan section `.bar' from `C:\Users\user\AppData\Local\Temp\cc6aRct8.o'.
c:/git/cortex-baremetal/opt/gcc-linaro-7.3.1-2018.05-i686-mingw32_aarch64-elf/bin/../lib/gcc/aarch64-elf/7.3.1/../../../../aarch64-elf/bin/ld.exe: error: unplaced orphan section `.tm_clone_table' from `c:/git/cortex-baremetal/opt/gcc-linaro-7.3.1-2018.05-i686-mingw32_aarch64-elf/bin/../lib/gcc/aarch64-elf/7.3.1/crtend.o'.
It seems the default linker script I used for this example is missing another section, '.tm_clone_table'. That would have to be fixed so that no error is triggered once the '.bar' section is properly defined.
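To clear those errors, each orphaned input section needs a home in the linker script. A minimal sketch (the FLASH region name and the placement are assumptions that must match your actual memory map):

```
SECTIONS
{
  /* ... existing output sections ... */

  /* Keep and place the custom section from the example. */
  .bar : { KEEP(*(.bar)) } > FLASH

  /* Emitted by the toolchain's crtbegin.o/crtend.o. */
  .tm_clone_table : { *(.tm_clone_table) } > FLASH
}
```

In a real script you would add these output sections to the existing SECTIONS command rather than introducing a second one; KEEP also protects .bar from --gc-sections.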

libdenpli.so: undefined reference to symbol 'Tcl_InitStubs'

I am getting "libdenpli.so: undefined reference to symbol 'Tcl_InitStubs'" while creating an executable.
When I check with nm, I am getting this output:
nm libdenpli.so | grep Tcl_InitStubs
U denaliTcl_InitStubs
I looked at another machine with a different platform, where it worked fine. There I saw the output with 't':
nm libdenpli.so | grep Tcl_InitStubs
<address> t denaliTcl_InitStubs
What is the difference?
As you can see in nm's manpage:
"t" The symbol is in the text (code) section.
"U" The symbol is undefined.
In other words, your libdenpli.so uses the symbol but does not define it -- you need to link against the library that defines that symbol as well.
Since the other system's library seems to define it, maybe there are differences in how the libraries are supposed to be linked, due to version, platform, build options, etc. Take a look at the library's documentation to see how you are supposed to link to it.

Calling external module from Chapel

I am trying to use my NumSuch module in another program. My Makefile includes
NUMSUCH_HOME=/home/buddha314/numsuch/src
MODULES=-M$(NUMSUCH_HOME)
yummly: yummlyAnalysis.chpl
$(CC) $(FLAGS) $(MODULES) -o yummlyAnalysis $<
#$(CC) $(MODULES) -o yummlyAnalysis $<
Within the code, I don't want to write 'use NumSuch;' because I don't want to pollute the namespace. I thought I could:
var g = NumSuch.buildFromSparseMatrix(A, weighted=false, directed=false);
But during compilation, I get
yummlyAnalysis.chpl:72: error: 'NumSuch' undeclared (first use this function)
Makefile:12: recipe for target 'yummly' failed
The problem with this program is that Chapel doesn't know that NumSuch is the name of a module, as opposed to a record, class, or type. As a result, it doesn't go looking for it in your module search path. The fix is to let Chapel know that there is a module named NumSuch:
One way to do this is via a use statement (this asserts that there is a module with the given name, and will cause the compiler to go looking for it if it hasn't already found it). You can avoid namespace pollution as you'd hoped by using filters that cause no symbols to be made visible within the scope of the use statement:
use NumSuch only ; // only make this (empty) list of symbols available
or:
use NumSuch except *; // make all symbols available except for `*` (all of them)
After either of these statements, your call should work:
NumSuch.buildFromSparseMatrix(...);
while an unqualified call should not work, since no symbols were made available via the use:
buildFromSparseMatrix(...);
You could even put the use statement into some other scope which would cause the compiler to go looking for the module, find it, know that there's a module with that name, and limit the namespace pollution to that scope (though I consider this stylistically worse compared to the previous, more idiomatic, approaches):
{
use NumSuch; // causes the compiler to go looking for module NumSuch; limits namespace pollution to this scope...
}
NumSuch.buildFromSparseMatrix(...);
A second way to do this is to list the NumSuch.chpl source file explicitly on the chpl command-line. By default, all source files named on the command line are parsed and their modules made known to the compiler.

Dynamically load code on embedded target

I have an application which runs on a bare-metal target and has the following structure:
main.c
service.c/.h
It's compiled to an ELF executable (system.elf) using the standard gcc -c, ld sequence. I use the linker to generate a map file showing the addresses of all symbols.
Now, without re-flashing my system, I need to add extra functionality with a custom run-time loader. Remember, this is bare metal with no OS.
I'd like to
compile extra.c which uses APIs defined in service.h (and somehow link against existing service.o/system.elf)
copy the resulting executable to my SDRAM at runtime and jump to it
loaded code should be able to run and accesses the exported symbols from service.c as expected
I thought I'd be able to reuse the map file to link extra.o against system.elf, but this didn't work:
ld -o extraExe extra.o system.map
Does gcc or ld have some mode for this late-linking procedure? If not, how can I achieve the dynamic code loading outlined above?
You can use the '-R filename' or '--just-symbols=filename' command-line options of ld. They read symbol names and their addresses from filename, but do not relocate it or include it in the output. This allows your output file to refer symbolically to absolute locations of memory defined in your system.elf program.
(refer to ftp://ftp.gnu.org/old-gnu/Manuals/ld-2.9.1/html_node/ld_3.html).
So here filename will be 'system.elf'. You can compile extra.c with GCC normally, including service.h, but without linking, to generate 'extra.o'; then call ld as below:
ld -R"system.elf" -o"extra.out" extra.o
The resulting 'extra.out' will have your symbols resolved. You can use objdump to compare the contents of 'extra.out' and 'extra.o'.
Note that you can always pass the start address of your program to ld (e.g. --defsym=_TEXT_START_ADDR=0xAAAA0123) as well as the start addresses of other memory sections like bss and data (i.e. -Tbss, -Tdata).
Be careful to use a valid address that does not conflict with your 'system.elf', as ld will not generate an error for that. You can define new areas for the loaded code+data+bss in your original linker script, re-compile system.elf, and then point the start addresses to those areas while linking 'extra.o'.

Xcode ld: detect duplicate symbols in static libraries

This question has been asked previously for gcc, but Darwin's ld (clang?) appears to handle this differently.
Say I have a main() function defined in two files, main1.cc and main2.cc. If I attempt to compile these both together I'll get (the desired) duplicate symbol error:
$ g++ -o main1.o -c main1.cc
$ g++ -o main2.o -c main2.cc
$ g++ -o main main1.o main2.o
duplicate symbol _main in:
main1.o
main2.o
ld: 1 duplicate symbol for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
But if I instead stick one of these into a static library, when I go to link the application I won't get an error:
$ ar rcs libmain1.a main1.o
$ g++ -o main libmain1.a main2.o
(no error)
With gcc you can wrap the lib with --whole-archive, and then GNU ld will produce an error. This option is not available with the ld that ships with Xcode.
Is it possible to get ld to print an error?
I'm sure you know that you're not supposed to put an object file containing a main function in a static library. In case any of our readers doesn't: a library is for containing functions that may be reused by many programs. A program can contain only one main function, and the likelihood is negligible that the main function of one program will be reusable as the main function of another. So main functions don't go in libraries. (There are a few odd exceptions to this rule.)
On then to the problem you're worried about. For simplicity,
I'll exclude linkage of shared/dynamic libraries from consideration in the rest of this.
Your linker detects a duplicate symbol error (a.k.a. multiple definition error) in the linkage when the competing definitions are in different input object files, but doesn't detect it when one definition is in an input object file and the other is in an input static library. In that scenario, the GNU linker can detect the multiply defined symbol if it is passed the --whole-archive option before the static library. But your linker, the Darwin Mach-O linker, doesn't have that option.
Note that while your linker doesn't support --whole-archive, it has an
equivalent option -all_load. But don't run away with that, because the worry is groundless anyhow. For both linkers:
There really is a multiple definition error in the linkage in the [foo.o ...
bar.o] case.
There really is not a multiple definition error in the linkage in the [foo.o ... libbar.a] case.
And in addition for the GNU linker:
There really is a multiple definition error in the linkage in the
[foo.o ... --whole-archive libbar.a] case.
In no case does either linker allow multiple definitions of a symbol to
get into your program undetected and arbitrarily use one of them.
What's the difference between linking foo.o and linking libfoo.a?
The linker will only add object files to your program.
More precisely, when it meets an input file foo.o, it adds to your program
all the symbol references and symbol definitions from foo.o. (For starters
at least: it may finally discard unused definitions if you've requested that,
and if it can do so without collaterally discarding any used ones).
A static library is just a bag of object files. When the linker meets an input file
libfoo.a, by default it won't add any of the object files in the bag to
your program.
It will only inspect the contents of the bag if it has to, at that point in the linkage.
It will have to inspect the contents of the bag if it has already added
some symbol references to your program that don't have definitions. Those
unresolved symbols might be defined in some of the object files in the bag.
If it has to look in the bag, it will inspect the object files to see whether any of them define unresolved symbols already in the program. If there are any such object files, it adds them to the program and considers afresh whether it needs to keep looking in the bag. It stops looking either when no object file left in the bag is needed by the program, or when all symbols referenced by the program have definitions, whichever comes first.
If any object files in the bag are needed, this adds at least one more symbol
definition to your program, and possibly more unresolved symbols. Then the linker carries on.
Once it has met libfoo.a and considered which, if any, object files in that bag it needs for your program,
it won't consider it again, unless it meets it again, later in the linkage
sequence.
So...
Case 1. The input files contain [foo.o ... bar.o]. Both foo.o and bar.o
define symbol A. Both object files must be linked, so both definitions of A must
be added to the program and that is a multiple definition error. Both linkers detect it.
Case 2 The input files contain [foo.o ... libbar.a].
libbar.a contains object files a.o and b.o.
foo.o defines symbol A and references, but does not define, symbol B.
a.o also defines A but does not define B, and defines no other symbols
that are referenced by foo.o.
b.o defines symbol B.
Then:-
At foo.o, the object file must be linked. The linker adds the
definition of A and an unresolved reference to B to the program.
At libbar.a, the linker needs a definition for unresolved reference B so it looks in the bag.
a.o does not define B or any other unresolved symbol. It is not linked. The second definition of A is not added.
b.o defines B, so it is linked. The definition of B is added to the program.
The linker carries on.
No two object files that both define A are needed in the program. There is no
multiple definition error.
Case 3 The input files contain [foo.o ... libbar.a].
libbar.a contains object files a.o and b.o.
foo.o defines symbol A. It references but does not define, symbols B and C.
a.o also defines A and it defines B, and defines no other symbols
that are referenced by foo.o.
b.o defines symbol C.
Then:-
At foo.o, the object file is linked. The linker adds to the program the definition of A and unresolved references to B and C.
At libbar.a, the linker needs definitions for the unresolved references B and C, so it looks in the bag.
a.o does not define C. But it does define B. So a.o is linked. That adds the required definition of B, plus the not-required, surplus definition of A.
That is a multiple definition error. Both linkers detect it. Linkage ends.
There is a multiple definition error if and only if two definitions
of some symbol are contained in object files that are linked in the program. Object files from a static library are linked only to provide definitions of symbols that the program references. If there is
a multiple definition error, then both linkers detect it.
So why does the GNU linker option --whole-archive give different outcomes?
Suppose that libbar.a contains a.o and b.o. Then:
foo.o --whole-archive -lbar
tells the linker to link all the object files in libbar.a whether
they are needed or not. So this fragment of the linkage command is simply equivalent
to:
foo.o a.o b.o
Thus in case 2 above, the addition of --whole-archive is a way of creating a multiple definition error where there is none without it, not a way of detecting a multiple definition error that went undetected without it.
And if --whole-archive is mistakenly used as a way of "detecting" fictitious multiple definition errors, then in those cases where the linkage nevertheless succeeds, it is also a way of adding an unlimited amount of redundant code to the program. The same goes for the -all_load option of the Mach-O linker.
Not satisfied?
Even when all that is clear, maybe you still hanker for some way to make it
an error when an input object file in your linkage defines a symbol that
is also defined in another object file that is not needed by the linkage but
happens to be contained in some input static library.
Well, that might be a situation that you want to know about, but it just
isn't any kind of linkage error, multiple-definition or otherwise. The purpose
of static libraries in linkage is to provide default definitions of symbols
that you don't define in the input object files. Provide your own definition
in an object file and the library default is ignored.
If you don't want linkage to work like that - the way it is intended to work -
but:-
You still want to use a static library
You don't want any definition from an input object file ever to prevail over
one that's in a member of the static library
You don't want to link any redundant object files.
then the simplest solution (though not necessarily the least time-consuming at build time)
is this:
In your project build extract all the members of the static library as a
prerequisite of the link step in a manner that also gives you the list of
their names, e.g.:
$ LIBFOOBAR_OBJS=`ar xv libfoobar.a | sed 's/x - //g'`
$ echo $LIBFOOBAR_OBJS
foo.o bar.o
(But extract them someplace where they cannot clobber any object files you build.) Then, again before the link step, run a preliminary throw-away linkage in which $LIBFOOBAR_OBJS replaces libfoobar.a. E.g.,
instead of
cc -o prog x.o y.o z.o ... -lfoobar ...
run
cc -o deleteme x.o y.o z.o ... $LIBFOOBAR_OBJS ...
If the preliminary linkage fails - with a multiple definition error or
anything else - then stop there. Otherwise go ahead with the real linkage.
You won't link any redundant object files into prog. The price is performing a linkage of deleteme that is redundant unless it fails with a multiple definition error.[1]
In professional practice, nobody runs builds like that to head off the
remote possibility that a programmer has defined a function in
one of x.o y.o z.o that knocks out a function defined in a member of
libfoobar.a without meaning to. Competence and code-review are
counted on to avoid that, in the same way they are counted on to avoid
a programmer defining a function in x.o y.o z.o to do anything that
should be done using library resources.
[1] Rather than extracting all the object files from the static
library for use in the throw-away linkage, you might consider a
throwaway linkage using --whole-archive, with the GNU linker,
or -all_load, with the Mach-O linker. But there are potential pitfalls with
this approach I won't delve into here.