How do I call a function in a different Move module / smart contract? - move-lang

I know there is a Move module (smart contract) on chain with a function that looks like this:
public entry fun do_nothing() {}
I know it is deployed at 6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7::other::do_nothing, and you can see the module in the explorer.
I have a Move module of my own that looks like this.
Move.toml:
[package]
name = 'mine'
version = '1.0.0'
[dependencies.AptosFramework]
git = 'https://github.com/aptos-labs/aptos-core.git'
rev = 'main'
subdir = 'aptos-move/framework/aptos-framework'
[addresses]
my_addr = "81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"
other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7"
sources/mine.move:
module my_addr::mine {
    use other_addr::other::do_nothing;

    public entry fun do_stuff() {
        do_nothing();
    }
}
As you can see, I'm telling the compiler where the other module is by setting other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7". However, when I try to compile my Move module, it fails, saying "unbound module", meaning it doesn't know what the "other" module is.
$ aptos move compile --named-addresses my_addr="`yq .profiles.default.account < .aptos/config.yaml`"
Compiling, may take a little while to download git dependencies...
INCLUDING DEPENDENCY AptosFramework
INCLUDING DEPENDENCY AptosStdlib
INCLUDING DEPENDENCY MoveStdlib
BUILDING mine
error[E03002]: unbound module
┌─ /Users/dport/github/move-examples/call_other_module/mine/sources/mine.move:2:9
│
2 │ use other_addr::other::do_nothing;
│ ^^^^^^^^^^^^^^^^^ Invalid 'use'. Unbound module: '(other_addr=0x6286DFD5E2778EC069D5906CD774EFDBA93AB2BEC71550FA69363482FBD814E7)::other'
error[E03005]: unbound unscoped name
┌─ /Users/dport/github/move-examples/call_other_module/mine/sources/mine.move:5:9
│
5 │ do_nothing();
│ ^^^^^^^^^^ Unbound function 'do_nothing' in current scope
{
"Error": "Move compilation failed: Compilation error"
}
Why is compilation failing? Why can't the compiler figure it out for me based on the ABIs of the Move modules it finds at other_addr on chain?

The problem
In order to publish a Move module that calls a function in another Move module, you need its source code. This is true of all Move modules, not just your own. You'll notice in Move.toml there is already a dependency on AptosFramework. This is what allows you to call all the framework functions, e.g. those related to coins, tokens, signer, timestamps, etc.
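For example, it is only because of that dependency that a module like this compiles (a minimal sketch; the module and function names are made up, but aptos_framework::timestamp is a real framework module):
module my_addr::uses_framework {
    use aptos_framework::timestamp;

    public entry fun check_time() {
        // read the current on-chain time from the framework
        let _now = timestamp::now_seconds();
    }
}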
So to make this work, you need to have access to the source.
Source: Git Dependency
If you have access to the source in another git repository, you can tell the compiler where to find the other module by adding this to your Move.toml:
[dependencies.other]
git = 'https://github.com/banool/move-examples.git'
rev = 'main'
subdir = 'call_other_module/other'
This is telling the compiler, "the source code for other can be found in the call_other_module/other/ directory at that git repo".
Source: Local
If you have the source code locally, you can do this instead:
[dependencies.other]
local = "../other"
Where the argument for local is the path to the other package's directory (the one containing its Move.toml).
Source: I don't have it?
If you don't have the source, you can try to download it. By default, when someone publishes a Move module, they include the source code alongside it.
First try to download the code:
cd /tmp
aptos move download --account 6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7 --package other
If the source code was indeed deployed on chain, you should see this:
Saved package with 1 module(s) to `/tmp/other`
{
"Result": "Download succeeded"
}
Inside /tmp/other you'll find the full source, including Move.toml and sources/.
From here, you can just follow the steps for Source: Local above.
Note: The value for --package should match the name field in Move.toml of the deployed code. More to come on how to determine this based on on-chain data.
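In the meantime, one way to see which package names an account has published is to read its 0x1::code::PackageRegistry resource from a fullnode REST API (a sketch, assuming the account lives on devnet; swap the node URL for whichever network you're on):
curl https://fullnode.devnet.aptoslabs.com/v1/accounts/0x6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7/resource/0x1::code::PackageRegistry
The packages field of the response lists each published package, including its name.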
Source: The download failed?
If you ran aptos move download and saw this:
module without code: other
Saved package with 1 module(s) to `/private/tmp/other_code/other`
{
"Result": "Download succeeded"
}
You'll find that sources/other.move is empty.
This means the author published the code with this CLI argument set:
--included-artifacts none
Meaning they purposely chose not to include the source on chain.
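In other words, they published with something along the lines of (a sketch):
aptos move publish --included-artifacts none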
Unfortunately, you're out of luck for now. It is a hard requirement of compilation that if you want to call a function in another Move module, you must have the source for that module. There is work in the pipeline that should enable decompilation of Move bytecode, but that's not ready yet.
I hope this helps, happy coding!!
The code used in this answer can be found here: https://github.com/banool/move-examples/tree/main/call_other_module.

Related

VSCode look for Go packages in different directory

I successfully used rules_go to build a gRPC service:
go_proto_library(
    name = "processor_go_proto",
    compilers = ["@io_bazel_rules_go//proto:go_grpc"],
    importpath = "/path/to/proto/package",
    proto = ":processor_proto",
    deps = ["//services/shared/proto/common:common_go_proto"],
)
However, I'm not sure how to import the resulting file in VSCode. The generated file is nested under bazel_bin and under the original proto file path; so to import this, it seems like I would need to write out the entire path (including the bazel_bin part) to the generated Go file. To my understanding, there doesn't seem to be a way to instruct VSCode to look under certain folders that only contain Go packages/files; everything seems to need a go.mod file. This makes it quite difficult to develop in.
For clarity, my directory structure looks something like this:
WORKSPACE
bazel-bin
- path
- to
- generated_Go_file.go
go.mod
go.sum
proto
- path
- to
- gRPC_proto.proto
main.go
main.go should use the generated_Go_file.go.
Is there a way around this?
I don't use Bazel and so cannot help with the Bazel configuration. It's likely there is a way to specify the generated code location so that you can revise this to reflect your preference.
The layout you describe for the generated code is workable, though, and a common pattern. Often the generated proto|gRPC code is placed in a module's gen subdirectory.
This is somewhat similar to vendoring, where your code incorporates what is often a 3rd-party's stubs (client|server). The stubs must reflect the proto(s) package(s) and, when these are 3rd-party, using gen or bazel-bin provides a way to keep potentially multiple namespaces discrete.
You're correct that the import for main.go could (!) be prefixed with the module name from go.mod (first line) followed by the folder path to the generated code. This is standard go packaging and treats the generated code in a similar way to vendored modules.
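For example, if the first line of go.mod were module github.com/acme/myservice (a hypothetical name), the import in main.go would look like:
import pb "github.com/acme/myservice/bazel-bin/path/to"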
Another approach is to use|place the generated code in a different module.
For code generated from 3rd-party protos, this may be preferable and the generated code may be provided by the 3rd-party in a module that you can go get or add to your go.mod.
An example of this approach is Google Well-Known Types. The proto (sources) are bundled with protoc (lib directory) and, when protoc compiles sources that references any of these, the Go code that is generated includes imports that reference a Google-hosted location of the generated code (!) for these types (google.golang.org/protobuf/types/known).
Alternatively, you can replicate this behavior without having to use an external repo. The bazel-bin folder must be outside of the current module. Each distinct module in bazel-bin would need its own go.mod file. You would then reference the modules' (one or more) locations from a require block in your code's go.mod file. You don't need to publish the modules to an external repo; a replace directive (name => path/to/module) alongside the require entry provides a local reference.
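A hypothetical sketch of the consuming go.mod under this approach (module paths invented for illustration; the replace path points at the generated module's directory, which contains its own go.mod):
module github.com/acme/myservice

go 1.20

require example.com/generated v0.0.0

replace example.com/generated => ../bazel-bin/path/to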

VS Code throwing error 'No package for import' but code compiles and runs fine

As a work requirement, I'm learning gRPC through an online course.
I have a project defined in folder greet (outside of GOPATH) with three packages called:
greet_client
greet_server
greetpb
In the go.mod file at the root of my project, I've specified the following:
module example.com/myuser/myproject
go 1.14
The code in greet_server/server.go references the greetpb package.
I'm able to run server.go successfully, and it returns the expected result.
My question is about the red squiggly lines VSCode shows saying it could not import greetpb (greetpb is an auto-generated package).
How can I get rid of this warning message?
Is it something I've not set up properly?
Update:
When I try to ctrl+click to view the greetpb module from server.go, I note that it's pointing to the URL pkg.go.dev.
How can I make it do a "local" lookup?
Why would you use example.com? You don't have your package defined there.
Initialize go modules first, i.e.:
go mod init github.com/$USER/$REPO
Then vendor and tidy:
go mod tidy && go mod vendor
That's it.
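After re-initializing, the module line in go.mod and the import in server.go should agree. For example, with a hypothetical user and repo name:
module github.com/myuser/greet
and in greet_server/server.go:
import "github.com/myuser/greet/greetpb"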

golang modules and local packages

I'm trying to understand how to organize my golang project using go1.11 modules. I tried several options, but none of them worked.
I have some code in the main package under the application folder and a local package that the main package uses.
$GOPATH
+ src
    + application/
        + main/
            + main.go
            + otherFileUnderMainPackage.go
        + aLocalPackage/
            + someCode.go
            + someCode_test.go
            + someMoreCode.go
            + someMoreCode_test.go
Files in the main package import ../aLocalPackage. When I compile with go build main/*.go, it works.
Then, I ran go mod init application: V.0.9.9 and got the go.mod file, but the build always fails. I always get an error about not finding the local package: build application:V0.9.9/main: cannot find module for path _/.../src/application/aLocalPackage. I also tried placing the local package right under src/, placing it under main/, etc., but none of these methods worked for me.
What is the way to use modules and local packages?
Thanks.
Relative import paths are not supported in module mode. You will need to update your import statements to use a full (absolute) import path.
You should also choose a module name other than application. Your module path should generally begin with a URL prefix that you control — either your own domain name, or a unique path such as github.com/$USER/$REPO.
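For example, with a hypothetical module path:
go mod init github.com/myuser/application
and then in main.go the local package is imported by its full path:
import "github.com/myuser/application/aLocalPackage"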
I had some problems working with local packages myself.
There are two tricks to make it work:
you run "go build" in the package directory
This compiles the package and places it in the build cache.
This link about code organisation in go explains more.
You can identify where the cache is using:
>go env GOCACHE
/home/<user>/.cache/go-build
2. Import using a path relative to the project.
I puzzled loads over what the correct import path was and finally discovered that go doc or go list will tell you.
>go doc
package docs // import "tools/src/hello/docs"
>go list
tools/src/hello/docs
For example, I have a hello world API project and was using swaggo to generate documentation, which it does in a docs sub-directory.
To use it I add an import:
_ "tools/src/hello/docs"
In my case the _ is important, as docs is not used directly but we need its init() function to be invoked.
Now in hello/main.go I can add "tools/src/hello/docs" and it will import the correct package.
The path is relative to the location of go.mod if you have one.
I have tools/ here as I have a go.mod declaring "module tools".
Modules are a different kettle of fish - see https://github.com/golang/go/wiki/Modules.
Recent versions of go (1.11 and later) can create a go.mod file which you may use to fix the version of a module that is used and avoid go's crazy default behaviour of just downloading the latest version of any package you import.
I have written a blogpost on how to start your first Go project using modules.
https://marcofranssen.nl/start-on-your-first-golang-project/
In general it boils down to just creating a new folder somewhere on your system (it doesn't have to be in GOPATH).
mkdir my-project
cd my-project
go mod init github.com/you-user/my-project
This will create the go.mod file. Now you can simply create your project layout and start building whatever you like.
Maybe one of my other blogs can inspire you a bit more on how to do things.
https://marcofranssen.nl/categories/software-development/golang/

Include <headers.h> installed in non standard location

I'm currently working with a third party library, which has headers declared using angle brackets, like a standard library:
#include <header.h>
However, these headers are installed in a non-standard place, something like /opt/company/software/version/part_software/include
With a more traditional build system like Make, I can just use CXXFLAGS to tell g++ to also look in this folder for headers, which ultimately comes down to passing a -I/opt/company/software/version/part_software/include option to g++.
When trying to do the same thing in bazel, using copts = [ "-I/opt/company/software/version/part_software/include" ], I get a "path outside of the execution root" error.
It's my understanding that bazel doesn't like the place where the lib is installed because the build needs to be reproducible, and including a library located outside the execution root violates this constraint.
An ugly hack I've come up with is to create symbolic links to the headers in /usr/local/include, and use copts = [ "-I/usr/local/include" ] in the bazel build. However, I find this approach very hacky, and I'd like to find a more bazely approach to the problem.
Note: I can't install the program during the bazel build, as it uses a closed installer over which I have no control. This installer can't be run in bazel's sandboxed environment, as it needs to write to certain paths that aren't accessible within the environment.
So, it turns out that the bazelesque way of including a third-party library is simply to create a package encapsulating the library.
Thanks to this useful discussion, I've managed to create a package with my third party library.
First we need a BUILD file, here named package_name.BUILD
package(
    default_visibility = ["//visibility:public"]
)

cc_library(
    name = "third_party_lib_name",  # name used to reference the third party library in other BUILD files
    srcs = [
        "external/soft/lib/some_lib.so",  # .so files to include in the lib
        "software/lib/os/arch/lib_some_plugin.so",
    ],
    hdrs = glob([  # the glob takes all the headers needed
        "software/include/**/*.h",
        "software/include/**/*.hpp",
    ]),
    includes = ["software/include/"],  # include path added for this library and for everything that depends on it
)
Now we need to reference the lib as an external repository in the WORKSPACE file:
new_local_repository(
    name = "package_name",
    path = "/opt/company/software/version",
    # build_file: path to the BUILD file, here in the same directory as the main WORKSPACE file
    build_file = __workspace_dir__ + "/package_name.BUILD",
)
Now, instead of using copts to reference the needed headers, I just add a line to the deps of the cc_* rule when needed, e.g.:
cc_library(
    name = "some_internal_lib",
    srcs = ["some_internal_lib.cc"],
    deps = [
        "@package_name//:third_party_lib_name",  # referencing the third party lib
    ],
)
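Because the library's includes attribute exports software/include/, a file like some_internal_lib.cc can keep using the angle-bracket include from the question unchanged, e.g.:
#include <header.h>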

Linking with a Windows library outside the build folder

Is there a way to link with a library that's not in the current package path?
This link suggests placing everything under the local directory. Our packages are installed in some repository elsewhere. I just want to specify the libpath to it on windows.
authors = ["Me"]
links = "CDbax"
[target.x86_64-pc-windows-gnu.CDbax]
rustc-link-lib = ["CDbax"]
rustc-link-search = ["Z:/Somepath//CPP/CDbax/x64/Debug/"]
root = "Z:/Somepath//CPP/CDbax/x64/Debug/"
But trying cargo build -v gives me
package `hello v0.1.0 (file:///H:/Users/Mushfaque.Cradle/Documents/Rustc/hello)` specifies that it links to `CDbax` but does not have a custom build script
From the cargo build script support guide, it seems to suggest that this should work. But I can see that it hasn't added the path. Moving the lib into the local bin\x86_64-pc-windows-gnu\ path works however.
Update
Thanks to the answer below, I thought I'd update this to give the final results of what worked on my machine so others find it useful.
In the Cargo.toml add
links = "CDbax"
build = "build.rs"
Even though there is no build.rs file, it seems to be required (?), otherwise cargo complains with:
package `xxx v0.1.0` specifies that it links to `CDbax` but does not have a custom build script
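For what it's worth, if cargo does insist on the build script file actually existing, an empty one appears to be enough to satisfy the links requirement (a sketch):
// build.rs
fn main() {}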
Then, following Vaelden's answer, create a 'config' file in .cargo.
If this is a sub crate, you don't need to put the links= tag in the parent crate, even though it's a dll; this holds even with a 'cargo run'. I assume it adds the dll path to the execution environment.
I think the issue is that you are confusing the manifest of your project with the cargo configuration.
The manifest is the Cargo.toml file at the root of your project. It describes your project itself.
The cargo configuration describes particular settings for cargo, and allows, for example, overriding dependencies or, in your case, overriding build scripts. The cargo configuration files have a hierarchical structure:
Cargo allows to have local configuration for a particular project or
global configuration (like git). Cargo also extends this ability to a
hierarchical strategy. If, for example, cargo were invoked in
/home/foo/bar/baz, then the following configuration files would be
probed for:
/home/foo/bar/baz/.cargo/config
/home/foo/bar/.cargo/config
/home/foo/.cargo/config
/home/.cargo/config
/.cargo/config
With this structure you can specify local configuration per-project,
and even possibly check it into version control. You can also specify
personal default with a configuration file in your home directory.
So if you move the relevant part:
[target.x86_64-pc-windows-gnu.CDbax]
rustc-link-lib = ["CDbax"]
rustc-link-search = ["Z:/Somepath//CPP/CDbax/x64/Debug/"]
root = "Z:/Somepath//CPP/CDbax/x64/Debug/"
to any correct location for a cargo configuration file, it should work.
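For example, keeping it local to the project (a sketch; any of the probed locations listed above would also work):
# <project root>/.cargo/config
[target.x86_64-pc-windows-gnu.CDbax]
rustc-link-lib = ["CDbax"]
rustc-link-search = ["Z:/Somepath//CPP/CDbax/x64/Debug/"]
root = "Z:/Somepath//CPP/CDbax/x64/Debug/"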
