Include <headers.h> installed in a non-standard location - C++11

I'm currently working with a third-party library whose headers are included using angle brackets, like a standard library:
#include <header.h>
However, these headers are installed in a non-standard location, something like /opt/company/software/version/part_software/include.
With a more traditional build system like Make, I can just use CXXFLAGS to tell g++ to also search this folder for headers, which ultimately comes down to passing a -I/opt/company/software/version/part_software/include option to g++.
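In a Makefile, that might be as simple as (illustrative):
# add the vendor include directory to the compiler flags
CXXFLAGS += -I/opt/company/software/version/part_software/include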
When trying to do the same thing in Bazel, using copts = [ "-I/opt/company/software/version/part_software/include" ], I get a "path outside of the execution root" error.
It's my understanding that Bazel doesn't like where the library is installed because the build needs to be reproducible, and including a library located outside the execution root violates this constraint.
An ugly hack I've come up with is to create symbolic links to the headers in /usr/local/include and use copts = [ "-I/usr/local/include" ] in the Bazel build. However, I find this approach very hacky, and I'd like to find a more Bazel-idiomatic approach to the problem.
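For the record, the hack amounts to something like this (paths illustrative):
# expose a vendor header through a standard include path
ln -s /opt/company/software/version/part_software/include/header.h /usr/local/include/header.h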
Note: I can't install the software during the Bazel build, as it uses a closed installer that I have no control over. This installer can't be run in Bazel's sandboxed environment, as it needs to write to certain paths that aren't accessible within the sandbox.

So, it turns out that the Bazel-esque way of including a third-party library is simply to create a package encapsulating it.
Thanks to this useful discussion, I've managed to create a package with my third-party library.
First we need a BUILD file, here named package_name.BUILD
package(
    default_visibility = ["//visibility:public"],
)

cc_library(
    name = "third_party_lib_name",  # name used to reference the third-party library in other BUILD files
    srcs = [
        "external/soft/lib/some_lib.so",  # .so files to include in the lib
        "software/lib/os/arch/lib_some_plugin.so",
    ],
    hdrs = glob([  # the glob picks up all the headers needed
        "software/include/**/*.h",
        "software/include/**/*.hpp",
    ]),
    includes = ["software/include/"],  # include paths propagated to targets that depend on this library
)
Now we need to reference the library as an external repository in the WORKSPACE file:
new_local_repository(
    name = "package_name",
    path = "/opt/company/software/version",
    # build_file: path to the BUILD file, here in the same directory as the main WORKSPACE file
    build_file = __workspace_dir__ + "/package_name.BUILD",
)
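On recent Bazel versions, build_file can also be given as a label, which avoids the __workspace_dir__ concatenation (a sketch, assuming package_name.BUILD sits next to the main WORKSPACE file):
new_local_repository(
    name = "package_name",
    path = "/opt/company/software/version",
    build_file = "//:package_name.BUILD",  # label into the root package of the main workspace
)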
Now, instead of using copts to reference the needed headers, I just add a line to the deps of the relevant cc_* rule when needed, e.g.:
cc_library(
    name = "some_internal_lib",
    srcs = ["some_internal_lib.cc"],
    deps = [
        "@package_name//:third_party_lib_name",  # referencing the third-party lib
    ],
)

Related

`go mod tidy` complains that the bazel-generated protobuf package is missing

I have a .proto protobuf definition file in a directory and I'm building a Go library from it with Bazel like so (the BUILD.bazel file below was generated using gazelle):
load("#rules_proto//proto:defs.bzl", "proto_library")
load("#io_bazel_rules_go//go:def.bzl", "go_library")
load("#io_bazel_rules_go//proto:def.bzl", "go_proto_library")
proto_library(
name = "events_proto",
srcs = ["events.proto"],
visibility = ["//visibility:public"],
deps = ["#com_google_protobuf//:timestamp_proto"],
)
go_proto_library(
name = "proto_go_proto",
importpath = "github.com/acme/icoyote/proto",
proto = ":events_proto",
visibility = ["//visibility:public"],
)
go_library(
name = "proto",
embed = [":proto_go_proto"],
importpath = "github.com/acme/icoyote/proto",
visibility = ["//visibility:public"],
)
Some other code depends on //icoyote/proto:proto, and when I run go mod tidy in my module, it complains that it can't find the package github.com/acme/icoyote/proto:
go: finding module for package github.com/acme/icoyote/proto
github.com/acme/icoyote/cmd/icoyote imports
github.com/acme/icoyote/proto: no matching versions for query "latest"
Any IDE that doesn't have Bazel integration (e.g. VSCode, GoLand/IntelliJ without the Bazel plugin) complains as well
What do I do?
This is happening, of course, because Bazel does generate .go files using protoc under the covers for the go_proto_library rule in the BUILD file, but it only writes them out to a directory under bazel-bin for use by the go_library rule, and go mod tidy doesn't seem to look into bazel-bin (probably because it's a symlink; but even if it did, the path of those files relative to the location of go.mod would be all wrong).
One option is to generate the .go files manually by calling protoc on your own, remove the proto_library and go_proto_library rules from the BUILD file, and change the go_library rule to build the generated files. This is suboptimal because you have to rerun protoc every time you change the .proto file (and if you put the invocation into a //go:generate directive, you still have to rerun go generate).
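For reference, the manual invocation might look roughly like this (a sketch, assuming the protoc-gen-go plugin is installed and on PATH):
# generates events.pb.go next to the .proto file
protoc --go_out=. --go_opt=paths=source_relative events.proto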
Instead, we can do the following:
Add a file empty.go to the dir that contains the .proto file. It should look like this:
package proto
Then tell gazelle to ignore empty.go (so it doesn't try to add a go_library rule to the BUILD file when you run gazelle --fix). We do that by adding the following to the BUILD file:
# gazelle:exclude empty.go
That's enough to make go mod tidy shut up.
This will also make the IDE stop complaining about the import, although you'll still get errors when referring to anything that's supposed to be in that package. If you don't want to abandon your IDE for GoLand or IntelliJ IDEA with the (excellent) Bazel plugin, you might have to resort to the manual protoc method. Perhaps there's a way to create a symlink to wherever Bazel writes out the generated .go files under bazel-bin and force go mod tidy to follow it, but I haven't tried that. If you do and it works, do share!

How do I call a function in a different Move module / smart contract?

I know there is a Move module (smart contract) on chain with a function that looks like this:
public entry fun do_nothing() {}
I know it is deployed at 6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7::other::do_nothing, you can see the module in the explorer here.
I have a Move module of my own that looks like this.
Move.toml:
[package]
name = 'mine'
version = '1.0.0'
[dependencies.AptosFramework]
git = 'https://github.com/aptos-labs/aptos-core.git'
rev = 'main'
subdir = 'aptos-move/framework/aptos-framework'
[addresses]
my_addr = "81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"
other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7"
sources/mine.move:
module my_addr::mine {
use other_addr::other::do_nothing;
public entry fun do_stuff() {
do_nothing();
}
}
As you can see, I'm telling the compiler where the other module is by setting other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7". However, when I try to compile my Move module, it fails, saying "unbound module", meaning it doesn't know what the "other" module is.
$ aptos move compile --named-addresses my_addr="`yq .profiles.default.account < .aptos/config.yaml`"
Compiling, may take a little while to download git dependencies...
INCLUDING DEPENDENCY AptosFramework
INCLUDING DEPENDENCY AptosStdlib
INCLUDING DEPENDENCY MoveStdlib
BUILDING mine
error[E03002]: unbound module
┌─ /Users/dport/github/move-examples/call_other_module/mine/sources/mine.move:2:9
│
2 │ use other_addr::other::do_nothing;
│ ^^^^^^^^^^^^^^^^^ Invalid 'use'. Unbound module: '(other_addr=0x6286DFD5E2778EC069D5906CD774EFDBA93AB2BEC71550FA69363482FBD814E7)::other'
error[E03005]: unbound unscoped name
┌─ /Users/dport/github/move-examples/call_other_module/mine/sources/mine.move:5:9
│
5 │ do_nothing();
│ ^^^^^^^^^^ Unbound function 'do_nothing' in current scope
{
"Error": "Move compilation failed: Compilation error"
}
Why is compilation failing? Why can't the compiler figure it out for me based on the ABIs of the Move modules it finds at other_addr on chain?
The problem
In order to publish a Move module that calls a function in another Move module, you need its source code. This is true of all Move modules, not just your own. You'll notice in Move.toml there is already a dependency on AptosFramework. This is what allows you to call all the framework functions, e.g. those related to coins, tokens, signer, timestamps, etc.
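For instance, it's the AptosFramework dependency that makes something like this compile (an illustrative module, not part of the question):
module my_addr::example {
    use aptos_framework::timestamp;

    public entry fun do_stuff() {
        // allowed because the framework source is a declared dependency
        let _now = timestamp::now_seconds();
    }
}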
So to make this work, you need to have access to the source.
Source: Git Dependency
If you have access to the source in another git repository, you can tell the compiler where to find the other module by adding this to your Move.toml:
[dependencies.other]
git = 'https://github.com/banool/move-examples.git'
rev = 'main'
subdir = 'call_other_module/other'
This is telling the compiler, "the source code for other can be found in the call_other_module/other/ directory at that git repo".
Source: Local
If you have the source code locally, you can do this instead:
[dependencies.other]
local = "../other"
Where the argument for local is the path to the source code.
Source: I don't have it?
If you don't have the source, you can try to download it. By default, when someone publishes a Move module, they include the source code alongside it.
First try to download the code:
cd /tmp
aptos move download --account 6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7 --package other
If the source code was indeed deployed on chain, you should see this:
Saved package with 1 module(s) to `/tmp/other`
{
"Result": "Download succeeded"
}
Inside /tmp/other you'll find the full source, including Move.toml and sources/.
From here, you can just follow the steps for Source: Local above.
Note: The value for --package should match the name field in Move.toml of the deployed code. More to come on how to determine this based on on-chain data.
Source: The download failed?
If you ran aptos move download and saw this:
module without code: other
Saved package with 1 module(s) to `/private/tmp/other_code/other`
{
"Result": "Download succeeded"
}
You'll find that sources/other.move is empty.
This means the author published the code with this CLI argument set:
--included-artifacts none
Meaning they purposely chose not to include the source on chain.
Unfortunately, at this point you're out of luck. It is a hard requirement of compilation that if you want to call a function in another Move module, you must have the source for that module. There is work in the pipeline that should enable decompilation of Move bytecode, but that's not ready yet.
I hope this helps, happy coding!!
The code used in this answer can be found here: https://github.com/banool/move-examples/tree/main/call_other_module.

CMake: Use variables from existing Makefile of 3rdparty library

I'm facing the following scenario:
Existing project which uses CMake
External 3rd-party library which only comes with Makefiles
The difference between my situation and existing questions is that I don't need CMake to build the 3rd-party library via its Makefile. Instead, the library provides a library.mk Makefile which has variables like LIB_SRCS and LIB_INCS containing all source and header files required to compile it.
My idea is to include the library.mk in the project's CMakeLists.txt and then add those $(LIB_SRCS) and $(LIB_INCS) to target_sources().
My question: How can I include library.mk into the existing CMakeLists.txt to get access to the $(LIB_SRCS) and $(LIB_INCS) for adding them to target_sources()? I'm looking for something like this:
include("/path/to/library.mk") # Somehow include the library's `library.mk` to expose variables to cmake.
add_executable(my_app)
target_sources(
my_app
PRIVATE
main.c
$(LIB_SRCS) # Add 3rd-party library source files
$(LIB_INCS) # Add 3rd-party library header files
)
Using include() does not work as the library.mk is not a CMake list/file.
Since you can't be sure that your target system will even have Make on it, the only option is to parse the strings out of the .mk file, which might be easy if the variables are set directly as a list of filenames, or really hard if they are set with expansions of other variables, conditionals, etc. Do this with file(STRINGS) (see the CMake documentation).
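A minimal sketch of that parsing, assuming library.mk sets the variable on a single line like LIB_SRCS := foo.c bar.c:
# read only the line that defines LIB_SRCS
file(STRINGS "/path/to/library.mk" _lib_srcs_line REGEX "^LIB_SRCS")
# strip the "LIB_SRCS := " prefix, leaving just the filenames
string(REGEX REPLACE "^LIB_SRCS[ \t]*:?=[ \t]*" "" _lib_srcs "${_lib_srcs_line}")
# convert the space-separated string into a CMake semicolon list
separate_arguments(_lib_srcs)
target_sources(my_app PRIVATE ${_lib_srcs})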
Your plan will only work if the Makefiles are trivial, and do not set important compiler flags, define preprocessor variables, modify the include directory, etc. And if they really are trivial, skip the parsing, and just do something like aux_source_directory(<dir> <variable>) to collect all the sources from the library directory.
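For example (directory name illustrative):
# collect every source file in the library directory into LIB_SRCS
aux_source_directory(third_party/lib LIB_SRCS)
target_sources(my_app PRIVATE ${LIB_SRCS})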
You might also consider building and maintaining a CMakeLists.txt for this third-party library. Do the conversion once, and store it as a branch off of the "vendor" main branch in your version control system. Whenever you update, update the vendor branch from upstream, and merge or rebase your modifications. Or just store it in your existing project, referring to the source directory of the 3rd-party stuff.
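One possible shape for that workflow, with illustrative branch names:
git checkout vendor && git pull upstream main   # refresh the pristine vendor branch
git checkout vendor-cmake                       # the branch carrying your CMakeLists.txt
git rebase vendor                               # re-apply the CMake conversion on top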

Bazel environment variables in build rules

I want to refer to a DirectX SDK in the BUILD file. The problem is that (as far as I understand) Bazel supports passing environment variables only through the --action_env=DXSDK_DIR argument, and it is meant to be used in actions, which must be defined in a plugin (.bzl file).
Is there any easier way to refer to the environment variable by using it as a Make variable (includes = [ "$(DXSDK_DIR)/Include" ]), or do I need to write a plugin?
In principle you need a cc_library rule whose hdrs attribute globs the DirectX headers. For that you need to pretend that the DX SDK is part of your source tree. Bazel offers "repository rules" for that purpose.
1. Create a repository rule for the DirectX SDK
Depending on whether the SDK's location is known or needs to be discovered, you have two options.
a. Fixed SDK location
You can use this approach if you don't need to read any environment variables, run any binaries, or query the registry to find where the SDK is. This is the case if everyone who builds your rules will install the SDK to the same location.
Just add a new_local_repository rule to your WORKSPACE file, point the rule's path at the SDK's directory and write a simple build_file_content for it.
Example:
new_local_repository(
    name = "directx_sdk",
    path = "c:/program files/directx/sdk/includes",
    build_file_content = """
cc_library(
    name = "sdk",
    hdrs = glob(["**/*.h"]),
    visibility = ["//visibility:public"],
)
""",
)
This rule creates the @directx_sdk repository with one rule in its root package, @directx_sdk//:sdk.
b. SDK discovery
You need to follow this approach if you need to read environment variables, run binaries, or query the registry to find where the SDK is.
Instead of using a new_local_repository rule, you need to implement your own. More info and examples are here.
Key points (see the sketch after this list):
if your repository rule needs to read environment variables, add them to the list repository_rule(environ), e.g. repository_rule(..., environ = ["DXSDK_DIR"])
if you need to run some binaries that tell you where the SDK is, use repository_ctx.execute. You can use repository_ctx.which to find binaries on the PATH.
if you need to do registry queries, use repository_ctx.execute with reg.exe /query <args>
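A minimal sketch of such a discovery rule, assuming the SDK's location is published in the DXSDK_DIR environment variable (all names are illustrative):
def _dxsdk_impl(repository_ctx):
    sdk_dir = repository_ctx.os.environ.get("DXSDK_DIR")
    if not sdk_dir:
        fail("DXSDK_DIR is not set; cannot locate the DirectX SDK")
    # mirror the SDK into this external repository
    repository_ctx.symlink(sdk_dir, "dxsdk")
    repository_ctx.file("BUILD", """
cc_library(
    name = "sdk",
    hdrs = glob(["dxsdk/Include/**/*.h"]),
    visibility = ["//visibility:public"],
)
""")

dxsdk_repository = repository_rule(
    implementation = _dxsdk_impl,
    environ = ["DXSDK_DIR"],  # re-run the rule when this variable changes
)
You would then instantiate it in the WORKSPACE file with dxsdk_repository(name = "directx_sdk").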
2. Depend on the SDK's cc_library
In your project, just depend on the SDK's library as if it was an ordinary cc_library:
cc_library(
    name = "render",
    ...
    deps = [
        ...
        "@directx_sdk//:sdk",
    ],
)

CMake Hierarchical Project Management Without Abusing Libraries

I have a project where there are only a handful of logical groupings for generating static libraries. However, for convenience, I want the libraries' source code to be managed in more granular folders.
Currently the only way I know to do this in CMake without having a library for each folder is to just list the files as you would normally, with their relative paths:
add_library(SystemAbstraction STATIC "Some/Path/File.cpp")
However I can see this getting unwieldy as the project grows in size with all the different paths.
I tried to see if I could have a CMakeLists.txt in each folder and just use a variable in the base CMakeLists.txt when adding library dependencies. But it seems that add_subdirectory doesn't also import variables?
To expand the scope of a variable set inside a subdirectory, use the PARENT_SCOPE option of set(). For example, you can verify that if you have
# CMakeLists.txt
set(SRCS main.c)
add_subdirectory(foo)
message(${SRCS})
in the root directory and
# foo/CMakeLists.txt
set(SRCS ${SRCS} foo.c PARENT_SCOPE)
in a subdirectory then it will print main.c foo.c, i.e., the variable is correctly imported into the base CMakeLists.txt.
Another option would be to use the object library feature of CMake. You still can, but don't need to, organise your CMake scripts into subdirectories:
add_library(lib1 OBJECT <srcs>)
add_library(lib2 OBJECT <srcs>)
...
add_library(mainlib $<TARGET_OBJECTS:lib1> $<TARGET_OBJECTS:lib2>)
You can set different compile flags for each object library:
target_include_directories(lib1 PRIVATE incl-dir-for-lib1)
target_compile_definitions(lib2 PRIVATE def-for-lib2)
You still need to set link libraries on your main library:
target_link_libraries(mainlib PRIVATE deps-of-lib1 deps-of-lib2)
Related documentation: Object Libraries
