I'm using Bazel to build a shared library, but the output .so file is much bigger than expected. For comparison, I also built a binary with the same deps and srcs in the BUILD file, and the binary output is much smaller. Code sample:
cc_binary(
    name = "server",
    srcs = ["server.cc"],
    deps = [...],
)  # the binary

cc_binary(
    name = "libs.so",
    srcs = ["server.cc"],
    deps = [...],
    linkshared = 1,
)  # the shared library
libs.so is about 5 times larger than server. It seems the linkshared option packs all symbols from deps into the shared library, including unused functions and variables (nm shows that libs.so contains far more symbols than server). How can I link only the needed symbols into my shared library?
Well, if you're compiling a shared library, there is no such thing as "unused functions". You're creating a library with functions that an executable can use.
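That said, if you only want a small public API exported, one common approach is a GNU ld version script combined with section garbage collection. A minimal sketch, assuming a GNU toolchain; the target name, the exported.map file, and the my_api_entry symbol are hypothetical, not from the question above:
cc_binary(
    name = "libs_trimmed.so",
    srcs = ["server.cc"],
    deps = [...],  # same deps as in the question
    copts = [
        "-ffunction-sections",  # give every function its own section...
        "-fdata-sections",      # ...so the linker can discard unreferenced ones
    ],
    linkopts = [
        "-Wl,--gc-sections",  # drop sections nothing references
        # exported.map is a GNU ld version script, e.g.
        #   { global: my_api_entry; local: *; };
        "-Wl,--version-script=$(location exported.map)",
    ],
    additional_linker_inputs = ["exported.map"],
    linkshared = 1,
)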
I have a C++ project using autotools to build, and Catch2 for unit testing. The details of Catch2 are probably not relevant: it's just another program I have to build and run.
I have Makefile.am set up like this (simplified):
AUTOMAKE_OPTIONS = subdir-objects
check_PROGRAMS = catch2
bin_PROGRAMS = lpsdr
common_sources = applicationcontroller.cc flowgraph.cc [...]
lpsdr_SOURCES = $(common_sources) main.cc
catch2_SOURCES = $(common_sources) test.cc
This mostly works, except that it compiles everything twice, creating lpsdr-applicationcontroller.o and catch2-applicationcontroller.o, and so on for each file in common_sources.
Of course this doubles the build time. I'd prefer to link both catch2 and lpsdr with the same object files: it will be faster to build and also ensure I'm testing exactly the same compiled code as I'm running.
Is there any way around this behavior?
I don't know if there's a way to avoid building distinct object files for each program, but the same effect can be had by building an intermediate static library, and then linking lpsdr and catch2 against that.
Something like this:
AUTOMAKE_OPTIONS = subdir-objects
noinst_LIBRARIES = liblpsdr.a
check_PROGRAMS = catch2
bin_PROGRAMS = lpsdr
liblpsdr_a_SOURCES = applicationcontroller.cc dispatcher_sink.cc [...]
lpsdr_SOURCES = main.cc
lpsdr_LDADD = liblpsdr.a
catch2_SOURCES = test.cc
catch2_LDADD = liblpsdr.a
Is it possible to hard-code dependencies into libraries built with Bazel? The reason is that if I build somelib I can use it in the workspace, but as soon as I copy the lib somewhere else I lose all the dependencies (Bazel cache). Which creates a problem when I want to deploy the libraries into the system or install them.
some_folder
|_ thirdparty
   |_ WORKSPACE
   |_ somelib
   |  |_ src
   |  |  |_ a.c
   |  |_ BUILD
   |  |_ include
   |     |_ a.h
   |_ include
      |_ b.h
It sounds like you want to build a fully statically linked library. This can be done in Bazel by building the library using cc_binary with the linkshared attribute set to True. According to the documentation you also have to name your library libfoo.so or similar.
What enables the static linking here is the behavior of cc_binary's linkstatic attribute. When True, which is the default, all dependencies that can be linked statically into the binary will be. Note that linkstatic does NOT behave the same on cc_library; see the documentation.
So, basically you want something like this in your BUILD file:
cc_binary(
    name = "libfoo.so",
    srcs = [...],  # note: cc_binary has no hdrs attribute; list headers in srcs
    linkshared = 1,
    # linkstatic = 1 is the default, so you don't need to add it
)
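If another Bazel project later needs to link against the deployed file, it can wrap the prebuilt artifact with cc_import. A sketch; the header name foo.h is hypothetical:
cc_import(
    name = "foo",
    hdrs = ["foo.h"],              # public header shipped alongside the library
    shared_library = "libfoo.so",  # the prebuilt artifact produced above
)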
Good luck!
I'm currently working with a third party library whose headers are included using angle brackets, like a standard library:
#include <header.h>
However, these headers are installed in a non standard place, something like /opt/company/software/version/part_software/include
With a more traditional build system like make, I can just use CXXFLAGS to tell g++ to look in this folder too, which ultimately comes down to passing a -I/opt/company/software/version/part_software/include option to g++.
When trying to do the same thing in bazel, using copts = [ "-I/opt/company/software/version/part_software/include" ], I get a "path outside of the execution root" error.
It's my understanding that Bazel doesn't like the place where the lib is installed because the build needs to be reproducible, and including a library located outside the execution root violates this constraint.
An ugly hack I've come up with is to create symbolic links to the headers in /usr/local/include and use copts = [ "-I/usr/local/include" ] in the Bazel build. However, I find this approach very hacky, and I'd like to find a more Bazely approach to the problem.
Note: I can't install the program during the Bazel build, as it uses a closed installer over which I have no control. This installer can't be run in Bazel's sandboxed environment, as it needs to write to certain paths not accessible within the environment.
So, it turns out that the Bazelesque way of including a third party library is simply to create a package encapsulating the library.
Thanks to this useful discussion, I've managed to create a package with my third party library.
First, we need a BUILD file, here named package_name.BUILD:
package(
    default_visibility = ["//visibility:public"],
)

cc_library(
    name = "third_party_lib_name",  # name used to reference the third party library in other BUILD files
    srcs = [
        "external/soft/lib/some_lib.so",  # .so files to include in the lib
        "software/lib/os/arch/lib_some_plugin.so",
    ],
    hdrs = glob([  # the glob takes all the headers needed
        "software/include/**/*.h",
        "software/include/**/*.hpp",
    ]),
    includes = ["software/include/"],  # directories added to the include search path of anything depending on this library
)
Now we need to reference the lib as an external repository in the WORKSPACE file:
new_local_repository(
    name = "package_name",
    path = "/opt/company/software/version",
    # build_file: path to the BUILD file, here in the same directory as the main WORKSPACE file
    build_file = __workspace_dir__ + "/package_name.BUILD",
)
Now, instead of using copts to reference the needed headers, I just add a line to the deps of the cc_* rule when needed, e.g.:
cc_library(
    name = "some_internal_lib",
    srcs = ["some_internal_lib.cc"],
    deps = [
        "@package_name//:third_party_lib_name",  # referencing the third party lib
    ],
)
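With that deps entry in place, the angle-bracket include from the question should resolve through the includes = ["software/include/"] attribute, with no copts needed in the consuming rule:
// some_internal_lib.cc
#include <header.h>  // found via the include path exported by the package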
I have a project where there are only a handful of logical groupings for generating static libraries. However, for convenience I want the library's source code to be managed in more granular folders.
Currently the only way I know to do this in CMake, without having a library for each folder, is to just list the files with their relative paths:
add_library(SystemAbstraction STATIC "Some/Path/File.cpp")
However I can see this getting unwieldy as the project grows in size with all the different paths.
I tried to see if I could have a CMakeLists.txt in each folder and just use a variable in the base CMakeLists.txt when adding library dependencies. But it seems that add_subdirectory doesn't export variables back to the parent?
To expand the scope of a variable set inside a subdirectory, use the PARENT_SCOPE option of set. For example, if you have
# CMakeLists.txt
set(SRCS main.c)
add_subdirectory(foo)
message(${SRCS})
in the root directory and
# foo/CMakeLists.txt
set(SRCS ${SRCS} foo.c PARENT_SCOPE)
in a subdirectory, then it will print main.c foo.c, i.e., the variable is correctly propagated back to the base CMakeLists.txt.
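To tie this back to the question, the accumulated list can then be handed straight to add_library in the root CMakeLists.txt (a sketch reusing the question's target name):
# CMakeLists.txt (root)
set(SRCS main.c)
add_subdirectory(foo)   # foo appends foo.c to SRCS via PARENT_SCOPE
add_library(SystemAbstraction STATIC ${SRCS})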
An option would be to use the object library feature of CMake. You still can, but no longer need to, organise your CMake script into subdirectories:
add_library(lib1 OBJECT <srcs>)
add_library(lib2 OBJECT <srcs>)
...
add_library(mainlib $<TARGET_OBJECTS:lib1> $<TARGET_OBJECTS:lib2>)
You can set different compile flags for each object library:
target_include_directories(lib1 PRIVATE incl-dir-for-lib1)
target_compile_definitions(lib2 PRIVATE def-for-lib2)
You still need to set link libraries on your main library:
target_link_libraries(mainlib PRIVATE deps-of-lib1 deps-of-lib2)
Related documentation: Object Libraries
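Combining the two answers, a minimal sketch with one CMakeLists.txt per folder, reusing the question's names (the target name abstraction_objs is hypothetical):
# CMakeLists.txt (root)
add_subdirectory(Some/Path)   # defines the abstraction_objs object library
add_library(SystemAbstraction STATIC $<TARGET_OBJECTS:abstraction_objs>)

# Some/Path/CMakeLists.txt
add_library(abstraction_objs OBJECT File.cpp)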
I'm trying to link my kernel module with an external static lib, like this:
obj-m += my_prog.o
my_prog-objs := some/path/lib.a
# all the standard targets...
For some reason, the above Makefile doesn't compile my_prog.c at all, and the resulting module doesn't contain its code. Certainly, if I remove the my_prog-objs line, my_prog.c gets compiled.
What's wrong with such an approach in a Makefile?
You must create a synthetic module name, distinct from the source file and its object name. You cannot use my_prog.o directly, as there are built-in rules to make it from the source. Here is a sample:
obj-m += full.o
full-src := my_prog.c
full-objs := $(full-src:.c=.o) lib.o # yes, make it an object.
Libraries are only supported from some special directories. Your object should be named lib.o_shipped and placed in the same directory, so you need to take the external library and provide it locally as a shipped version. You need two object files: one is your compiled C driver code, and the other is that object linked together with the library.
The above is relevant to the 2.6.36 kbuild infrastructure. The current documentation is in modules.rst, section 3.3 "Binary Blobs". I think the technique above will still work for libraries as opposed to just objects.
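A sketch of the module directory this implies, assuming the external library has been repackaged as a shipped object (the directory and file names are hypothetical):
my_module/
|_ Makefile         # contains the obj-m / full-src / full-objs lines above
|_ my_prog.c
|_ lib.o_shipped    # prebuilt external code; kbuild copies it to lib.o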
When you create the my_prog-objs list, you tell kbuild to use only the object files in that list. kbuild will no longer compile my_prog.c, and including my_prog.o in my_prog-objs results in a circular dependency. Instead, you need to create a unique obj-m and include both my_prog.o and /path/lib.a in its objs list. For example:
obj-m += foo.o
foo-objs += my_prog.o /path/lib.a
Took me about 2 hours to figure out why my module was doing nothing!
You're overriding the default my_prog-objs, which is just my_prog.o. Instead of replacing the contents with the library, add the library to the default:
my_prog-objs := my_prog.o some/path/lib.a
Hopefully you're not trying to link against a general userspace library... that won't work at all in kernelspace.