I have my main application and two libraries: foo and bar.
foo uses bar in some of its methods, and bar is listed in foo's LDADD.
My main application uses foo, and indirectly bar, so its Makefile has LDADD = foo.
In this case, if I don't also add the bar library to my main application's LDADD, I get a link error: undefined reference, saying that the .so files from foo require the .so files from bar.
I don't understand this.
Once I compile foo (non-static) with LDADD = bar, why do I need bar again when compiling an app that uses foo?
You don't specify whether foo and/or bar are libtool libraries built as part of the source tree. If they are, libtool should take care of the linking, i.e., since foo requires bar, record that dependency in the library itself:
libfoo_la_LIBADD = ../bar/libbar.la # in: foo/Makefile.am
and,
prog_LDADD = ../foo/libfoo.la # in: app/Makefile.am
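For illustration, a minimal sketch of how the pieces could fit together, assuming a tree with bar/, foo/ and app/ subdirectories (the file names and layout are placeholders, not taken from the question):
# bar/Makefile.am
lib_LTLIBRARIES = libbar.la
libbar_la_SOURCES = bar.c
# foo/Makefile.am
lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c
libfoo_la_LIBADD = ../bar/libbar.la   # inter-library dependency recorded in libfoo.la
# app/Makefile.am
bin_PROGRAMS = prog
prog_SOURCES = main.c
prog_LDADD = ../foo/libfoo.la         # libtool pulls in libbar.la transitively
With plain (non-libtool) shared libraries there is no .la file recording that libfoo needs libbar, so the application's link line typically has to name bar as well.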
Current situation
In order to find the binary foo I use the following CMake code:
find_program(
  FOO
  NAMES foo
  HINTS /Applications/foo-company/foo-subdir/foo/foo.app
  PATH_SUFFIXES Contents/MacOS/bin
)
This works so far but it is rather ugly.
Desired situation
I want CMake to actually search for me instead of me having to point its nose directly at it.
What I tried so far
It felt like find_package could do the job, but the following code failed to find anything:
find_package(
  FOO
  NAMES foo
)
find_package(
  FOO
  NAMES foo
  HINTS /Applications/foo-company/foo-subdir
)
find_package(
  FOO
  NAMES foo
  HINTS /Applications/foo-company/foo-subdir/foo
)
Update 1, 2020-07-12 18:00
Corrected find_program to find_package in "What I tried so far".
I did not explicitly mention it but using find_package I hoped to find the app bundle first and then specify the program I need from it.
find_package fails for the reasons squareskittles explains: I'd need a Find<PackageName>.cmake or <PackageName>Config.cmake file for that.
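For completeness, a minimal FindFOO.cmake along those lines could simply wrap the find_program call from above; this is only a sketch using the question's placeholder paths, and the FOO_EXECUTABLE variable name is an arbitrary choice:
# FindFOO.cmake, placed in a directory listed in CMAKE_MODULE_PATH
find_program(FOO_EXECUTABLE
  NAMES foo
  HINTS /Applications/foo-company/foo-subdir/foo/foo.app
  PATH_SUFFIXES Contents/MacOS/bin
)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(FOO REQUIRED_VARS FOO_EXECUTABLE)
It would then be used from CMakeLists.txt with list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake") followed by find_package(FOO), which sets FOO_FOUND and FOO_EXECUTABLE (the cmake/ directory is just an example location).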
I'm using autotools to build a library which will later be loaded by another program. This library has to have a .so extension to it regardless of the platform I'm on (this is a requirement imposed by the program loading it), and I'd also like it to not have a version specifier. How can I set the output name of such a library?
This is what Makefile.am looks like
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = mylib.c
mylib_la_CPPFLAGS = $(LTDLINCL)
mylib_la_CFLAGS = $(CFLAGS) $(LIBFFI_CFLAGS)
LDADD = $(LIBLTDL) -dlopen self
Reading through the libtool manpage, it seems I need to set -install_name, but I don't see it referenced in the generated Makefile anywhere.
I also need to be able to reference this library's output directory elsewhere in the Makefiles (for testing purposes), is there a variable containing its basename or full path?
How can I set the output name of such a library?
The output name of the library will be the name given in lib_LTLIBRARIES without the .la suffix. By default it will generate a shared library (.so), so you do not need to specify anything else.
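That said, since the question also asks for no version specifier (and uses a name that does not start with lib), the usual libtool approach for a dlopened plugin is to add -module and -avoid-version to the library's LDFLAGS; -module also makes automake accept a library name without the lib prefix. A sketch based on the Makefile.am from the question:
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = mylib.c
mylib_la_CPPFLAGS = $(LTDLINCL)
mylib_la_CFLAGS = $(CFLAGS) $(LIBFFI_CFLAGS)
mylib_la_LDFLAGS = -module -avoid-version -shared   # loadable module: no lib prefix requirement, no version suffix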
I also need to be able to reference this library's output directory elsewhere in the Makefiles (for testing purposes), is there a variable containing its basename or full path?
The variable $(libdir) in the Makefile (written @libdir@ before configure substitutes it) will point to the full path of the directory where the library gets installed.
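For pointing at the freshly built, not yet installed library from another Makefile.am in the same tree (e.g. for tests), a common pattern is to go through $(top_builddir); the following sketch assumes the library's Makefile.am sits in a src/ subdirectory, so the path is an assumption to adapt:
# tests/Makefile.am (sketch)
check_PROGRAMS = test_mylib
test_mylib_SOURCES = test_mylib.c
test_mylib_LDADD = $(top_builddir)/src/mylib.la   # link against the uninstalled library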
I am working with two Fortran modules. The first one contains a subroutine foo:
module fmod1
contains
subroutine foo(i)
implicit none
integer, intent(inout) :: i
i=i+1
end subroutine foo
end module fmod1
The second one also contains a subroutine called foo which calls the foo of the first module, renamed as foo_first:
module fmod2
use fmod1, only : foo_first => foo
contains
subroutine foo(i)
implicit none
integer, intent(inout) :: i
i=i+2
call foo_first(i)
end subroutine foo
end module fmod2
When I compile these with gfortran to get two object files and then look inside them with nm, I see the expected result:
fmod1.o:
0000000000000020 s EH_frame1
0000000000000000 T ___fmod1_MOD_foo
fmod2.o:
0000000000000030 s EH_frame1
U ___fmod1_MOD_foo
0000000000000000 T ___fmod2_MOD_foo
I then have no problem in writing a Fortran program which loads the second module and calls the foo within it (___fmod2_MOD_foo, which itself calls ___fmod1_MOD_foo).
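For reference, a driver along these lines (the program name and initial value are arbitrary) builds and prints 3:
program test_foo
  use fmod2, only : foo
  implicit none
  integer :: i
  i = 0
  call foo(i)   ! fmod2's foo adds 2, then calls fmod1's foo which adds 1
  print *, i    ! prints 3
end program test_foo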
My problem comes when I try to do the same thing from a C program using iso_c_binding. I change the second module by adding bind(c) to the subroutine:
module fmod2
use iso_c_binding
use fmod1, only : foo_first => foo
contains
subroutine foo(i) bind(c)
implicit none
integer, intent(inout) :: i
i=i+2
call foo_first(i)
end subroutine foo
end module fmod2
Running nm again on the object files now gives this:
fmod1.o:
0000000000000020 s EH_frame1
0000000000000000 T ___fmod1_MOD_foo
fmod2.o:
0000000000000030 s EH_frame1
0000000000000000 T _foo
i.e., the second module no longer seems to require the first module. When I try experimenting with calling foo from the second module from a C program it becomes apparent that the subroutine, instead of calling foo from the first module, is calling itself in an infinite loop.
Is this a bug, or am I doing something which I shouldn't be doing?
When you add BIND(C) to the procedure, you are specifying (indirectly) the binding name instead of the compiler applying its own rules (that include the module name).
It's not that "the second module no longer seems to require the first module" but that you've changed the binding name of the routine in the second module. You haven't touched the binding name of foo in the first module (which is not its local name due to the rename.)
That said, the compiler ought to know the binding name of foo in the first module, referenced through its local name, and put out the correct name in the object for the call. From what other commenters have said, the version of gfortran you're using may have a bug here. Try a newer one.
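If all that is wanted is control over the exported C name while keeping the internal call intact, an explicit binding label is a standard-conforming way to spell that out, and it also avoids any clash between the binding name and the use-associated foo; the label c_foo below is just an example:
module fmod2
  use iso_c_binding
  use fmod1, only : foo_first => foo
contains
  subroutine foo(i) bind(c, name='c_foo')   ! exported to C as c_foo, not foo
    implicit none
    integer, intent(inout) :: i
    i = i + 2
    call foo_first(i)
  end subroutine foo
end module fmod2
From C the routine would then be declared as void c_foo(int *i); and called with a pointer, since Fortran passes the argument by reference.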
This is now GCC Bug 79485. I have already reported very similar and very probably related bugs before (ICE with binding-name equal to the name of a use-associated procedure and Wrong subroutine called, clash of specific procedure name and binding-name). Unfortunately the gfortran developers are only a few and very busy and did not fix this problem yet. If they see other people encountering it they might consider it with a slightly higher priority.
I have two targets, foo and bar. Neither depends on the other, but if bar has to be rebuilt, it has to be rebuilt before foo. They are what gnu-make calls phony targets: their recipes always have to be executed when they are specified.
Currently, we express a main target which depends on both like this:
# user level targets
all: bar
	@$(MAKE) foo
	@echo all
alt: foo
	@echo alt
# internal targets
foo:
	@echo foo
bar: qux
	@echo bar
qux:
	@echo qux
	@touch qux
and we have the required behavior: if qux is not up to date, make all outputs qux bar foo all (in that order) and make alt outputs foo alt; if qux is up to date, make all outputs bar foo all and make alt outputs foo alt.
This is increasingly uncomfortable, as foo has to be handled specially (every target which depends on both has to be handled that way, foo can't be put in a variable describing dependencies if bar is also there, the submake is itself an issue, and its command line has to be maintained to pass additional variables). We now have another target which has to be handled in the same way, and I'm looking for other, more convenient ways to handle this structure.
Note 1: In practice I'm currently using only gnu-make, but the only known dependency on a gnu-make extension over POSIX is the ability to include files (which is quite widely available). I'd prefer something which keeps the current state (i.e. widely supported constructs), but if that is not possible, the use of a gnu-make-only extension is acceptable.
Note 2: gnu-make has a notion of order-only-prerequisites, but it apparently doesn't provide what we need. With
# user level targets
all: bar foo
	@echo all
alt: foo
	@echo alt
# internal targets
foo: | bar
	@echo foo
bar:
	@echo bar
make alt also builds bar (if a file bar exists, its date doesn't influence the decision to rebuild foo, which is the documented behavior).
Note 3: The more I think about it, the less I think it is possible to solve this problem with make without using a recursive call. It seems to me that it needs two passes over the dependency graph, one to determine what has to be built and one to determine the ordering, and I know of nothing in make's behavior which can't be done with a one-pass algorithm.
Hmmm, how about this hack (for a hack it undoubtedly is :-)).
Basically, you could run make -d -n plus your command arguments. The output will contain several lines like Must remake target 'clean'. This information tells you whether this run of make will attempt to build both foo and bar. If this turns out to be the case, just add a rule to cause the serialisation you want.
A sketch:
this := $(lastword ${MAKEFILE_LIST})
ifndef DONTRECURSE
targets-that-will-get-remade := $(patsubst %',%,$(shell ${MAKE} -f ${this} ${MAKECMDGOALS} --debug=b -n DONTRECURSE=nosiree | grep -Po "Must remake target '\K.*'"))
endif
ifeq (bar foo,$(sort $(filter bar foo,${targets-that-will-get-remade})))
foo: bar
endif
.PHONY: foo bar
foo bar:
	sleep 3
	: $@
So, you run make. DONTRECURSE is not set so the $(shell …) runs. That runs make a second time with the same makefile and goals, but adds the -d (debug) and -n (don't actually run the recipes) flags. DONTRECURSE is set to prevent a third copy of make running.
The expansion of all that is a list of the targets this run of make will attempt to build on this run. (Extracting the target names is pretty tiresome—there is probably a cleaner way.)
If this list of targets includes both foo and bar, simply add a foo: bar dependency. Job done. The sleep 3 lines show this serialisation working when you use -j4 (say).
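To see the effect with the sketch above (assuming it is saved as the only makefile in the directory):
make -j4 foo bar   # probe sees both targets in the remake list, adds foo: bar, so bar's recipe finishes before foo's starts
make -j4 foo       # only foo is scheduled; bar is neither built nor waited for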
I am new to OCaml and trying to write a small example application. I am using ocamlc version 3.11.2 under Linux Ubuntu 10.04. I want to compile two files:
a.ml
b.ml
File b.ml uses definitions from a.ml. As far as I understand, I can use ocamlc -c to perform compilation only. I can call ocamlc one final time when I have all the .cmo files to link them to an executable. Also, when compiling a file that uses definitions from another file, I have to tell the compiler in which .cmi file to find the external definitions.
So my idea was to use:
ocamlc -i -c a.ml > a.mli
ocamlc -c a.mli b.ml
ocamlc -o b a.cmo b.cmo
The first step works and produces files a.mli and a.cmo, but when running the second step I get
File "b.ml", line 1, characters 28-31:
Error: Unbound value foo
where foo is a function that is defined in a.ml and called in b.ml.
So my question is: how can I compile each source file separately and specify the interfaces to be imported on the command line? I have been looking in the documentation and as far as I can understand I have to specify the .mli files to be included, but I do not know how.
EDIT
Here some more details. File a.ml contains the definition
let foo = 5;;
File b.ml contains the expression
print_string (string_of_int foo) ^ "\n";;
The real example is bigger but with these files I already have the error I reported above.
EDIT 2
I have edited file b.ml and replaced foo with A.foo and this works (foo is visible in b.ml even though I have another compilation error which is not important for this question). I guess it is cleaner to write my own .mli files explicitly, as suggested by
It would be clearer if you showed the code that's not working. As Kristopher points out, though, the most likely problem is that you're not specifying which module foo is in. You can specify the module explicitly, as A.foo. Or you can open A and just use the name foo.
For a small example it doesn't matter, but for a big project you should be careful not to use open too freely. You want the freedom to use good names in your modules, and if you open too many of them, the good names can conflict with each other.
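Concretely, with the files from the question, the qualified version of b.ml would be (moving "\n" inside the parentheses also fixes the unrelated type error the question mentions):
(* b.ml *)
print_string (string_of_int A.foo ^ "\n");;
The open A variant would instead start with open A and keep using the bare name foo.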
First fix the unbound value issue, as explained by Jeffrey's answer.
This is a comment about the commands you're using.
Decomposing compilation in several steps is a good way to understand what's going on.
If you want to write your own a.mli, most likely to hide some values of module A, then your command ocamlc -i -c a.ml > a.mli is a good way to get a first version of this file and then edit it. But if you're not touching a.mli, then you don't need to generate it: you can also directly enter
ocamlc -o foo a.ml b.ml
which will produce a.cmo, b.cmo and the executable foo.
(It will also generate a.cmi, the compiled interface that you would otherwise get by issuing ocamlc -c a.mli. Likewise it will also generate b.cmi.)
Note that order matters: you need to provide a.ml before b.ml on the command line. This way, when compiling b.ml, the compiler has already seen a.ml and knows where to find the module A.
Some more comments:
You're right in your "As far as I understand" paragraph.
you don't really include a separate file; it's more like import in Python: the values of module A are available, but under names like A.foo. The contents of a.ml have not been copy-pasted into b.ml; rather, the values of module A, defined in a.ml and its compiled version a.cmo, are accessed.
if you're using this module A in b.ml, you can pass any of the following on the command line before b.ml (see the sketch after this list):
a.mli, which will get compiled into a.cmi
a.cmi if you've already compiled a.mli into a.cmi
a.ml or its compiled version a.cmo if you don't need to write your own a.mli, i.e. if the default interface of module A suits you. (This interface is simply every value of a.ml).
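Put together, the fully separate-compilation variant of the question's commands, without a hand-written a.mli and with b.ml referring to A.foo as above, would look roughly like this:
ocamlc -c a.ml            # produces a.cmi and a.cmo
ocamlc -c b.ml            # finds module A's interface in a.cmi (same directory)
ocamlc -o b a.cmo b.cmo   # links the executable b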