I want to create a .wasm file which still has the function names exported when compiled.
package main
import (
"fmt"
)
func main() {
fmt.Println("Main")
}
func MyFunc() {
fmt.Println("MyFunc")
}
I'm building with
GOOS=js GOARCH=wasm go build -o main.wasm
This produces the wasm file (and it's awesome that Go targets wasm natively).
But dumping the object with wabt shows only these exports:
Export[4]:
- func[958] <wasm_export_run> -> "run"
- func[959] <wasm_export_resume> -> "resume"
- func[961] <wasm_export_getsp> -> "getsp"
- memory[0] -> "mem"
I'm expecting to see something like
func[137] <MyFunc> -> "MyFunc"
Does anyone know how to export functions in Go WASM?
In Rust, adding #[no_mangle] and pub extern "C" keeps the function available in the output with wasm-pack. I'm looking for something similar with Go.
If you plan to write a lot of WASM in Go, you might want to consider compiling with TinyGo, a Go compiler for embedded systems and WASM.
TinyGo supports an //export <name> (or the alias //go:export <name>) comment directive that does what you're looking for.
I'm copy-pasting the very first example from the TinyGo WASM docs:
package main
// This calls a JS function from Go.
func main() {
println("adding two numbers:", add(2, 3)) // expecting 5
}
// ...omitted
// This function is exported to JavaScript, so can be called using
// exports.multiply() in JavaScript.
//export multiply
func multiply(x, y int) int {
return x * y
}
And you build it with: tinygo build -o wasm.wasm -target wasm ./main.go.
The standard Go compiler has an ongoing open discussion about replicating the TinyGo feature. The tl;dr seems to be that, for now, you can achieve the same effect by setting funcs on the JS global namespace with js.Global().Set(...).
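For example, with the standard compiler you can register MyFunc on the JS global object via syscall/js; it then becomes callable as MyFunc() from JavaScript (this is registration on globalThis rather than a true entry in the wasm export section). A minimal sketch:
//go:build js && wasm

package main

import "syscall/js"

func main() {
    // Register MyFunc on the JS global object; callable as MyFunc() from JS.
    js.Global().Set("MyFunc", js.FuncOf(func(this js.Value, args []js.Value) any {
        println("MyFunc")
        return nil
    }))
    // Block forever so the Go runtime stays alive and MyFunc remains callable.
    select {}
}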
I am attempting to create named loggers automatically for HTTP handlers that I'm writing, where I am passed a function (pointer).
I'm using the code mentioned in this question to get the name of a function:
package utils
import (
"reflect"
"runtime"
)
func GetFunctionName(fn interface{}) string {
value := reflect.ValueOf(fn)
ptr := value.Pointer()
ffp := runtime.FuncForPC(ptr)
return ffp.Name()
}
I'm using this in my main function to try it out like so:
package main
import (
"github.com/naftulikay/golang-webapp/experiments/functionname/long"
"github.com/naftulikay/golang-webapp/experiments/functionname/long/nested/path"
"github.com/naftulikay/golang-webapp/experiments/functionname/utils"
"log"
)
type Empty struct{}
func main() {
a := long.HandlerA
b := path.HandlerB
c := path.HandlerC
log.Printf("long.HandlerA: %s", utils.GetFunctionName(a))
log.Printf("long.nested.path.HandlerB: %s", utils.GetFunctionName(b))
log.Printf("long.nested.path.HandlerC: %s", utils.GetFunctionName(c))
}
I see output like this:
github.com/naftulikay/golang-webapp/experiments/functionname/long.HandlerA
This is okay but I'd like an output such as long.HandlerA, long.nested.path.HandlerB, etc.
If I could get the Go module name (github.com/naftulikay/golang-webapp/experiments/functionname), I can then use strings.Replace to remove the module name to arrive at long/nested/path.HandlerB, then strings.Replace to replace / with . to finally get to my desired value, which is long.nested.path.HandlerB.
The first question is: can I do better than runtime.FuncForPC(reflect.ValueOf(fn).Pointer()) for getting the qualified path to a function?
If the answer is no, is there a way to get the current Go module name using runtime or reflect so that I can transform the output of runtime.FuncForPC into what I need?
Once again, I'm getting values like:
github.com/naftulikay/golang-webapp/experiments/functionname/long.HandlerA
github.com/naftulikay/golang-webapp/experiments/functionname/long/nested/path.HandlerB
github.com/naftulikay/golang-webapp/experiments/functionname/long/nested/path.HandlerC
And I'd like to get values like:
long.HandlerA
long.nested.path.HandlerB
long.nested.path.HandlerC
EDIT: It appears that Go does not have a runtime representation of modules, and that's okay, if I can do it at compile time that would be fine too. I've seen the codegen documentation and I'm having a hard time figuring out how to write my own custom codegen that can be used from go generate.
The module info is included in the executable binary and can be acquired using the debug.ReadBuildInfo() function (the only requirement is that the executable must be built with module support enabled, which is the default in current Go versions and will likely be the only mode in future versions).
BuildInfo.Path is the current module's path.
Let's say you have the following go.mod file:
module example.com/foo
Example reading the build info:
bi, ok := debug.ReadBuildInfo()
if !ok {
log.Printf("Failed to read build info")
return
}
fmt.Println(bi.Main.Path)
// or
fmt.Println(bi.Path)
This will output (try it on the Go Playground):
example.com/foo
example.com/foo
See related: Golang - How to display modules version from inside of code
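Putting that together with the GetFunctionName helper from the question, a minimal sketch of the trimming the asker described might look like this (it assumes the handlers live below the main module's path):
package utils

import (
    "reflect"
    "runtime"
    "runtime/debug"
    "strings"
)

// ShortFunctionName strips the module path from a function's qualified name
// and replaces slashes with dots, e.g.
// "github.com/naftulikay/golang-webapp/experiments/functionname/long/nested/path.HandlerB"
// becomes "long.nested.path.HandlerB".
func ShortFunctionName(fn interface{}) string {
    name := runtime.FuncForPC(reflect.ValueOf(fn).Pointer()).Name()
    if bi, ok := debug.ReadBuildInfo(); ok {
        name = strings.TrimPrefix(name, bi.Main.Path+"/")
    }
    return strings.ReplaceAll(name, "/", ".")
}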
If your goal is to just have the name of the module available in your program, and if you are okay with setting this value at link time, then you may use the -ldflags build option.
You can get the name of the module with go list -m from within the module directory.
You can place everything in a Makefile or in a shell script:
MOD_NAME=$(go list -m)
go build -ldflags="-X 'main.MODNAME=$MOD_NAME'" -o main ./...
With main.go looking like:
package main
import "fmt"
var MODNAME string
func main() {
fmt.Println(MODNAME) // example.com
}
With the "golang.org/x/mod/modfile" package, an example might look like this:
package main
import (
"fmt"
"golang.org/x/mod/modfile"
_ "embed"
)
//go:embed go.mod
var gomod []byte
func main() {
f, err := modfile.Parse("go.mod", gomod, nil)
if err != nil {
panic(err)
}
fmt.Println(f.Module.Mod.Path) // example.com
}
However, embedding the entire go.mod file seems like overkill for your use case. Of course you could also open the file at runtime (sketched below), but that means you have to deploy go.mod along with your executable. Setting the module name with -ldflags is more straightforward IMO.
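For completeness, the read-at-runtime variant might look like this (it assumes go.mod sits in the working directory next to the executable):
package main

import (
    "fmt"
    "os"

    "golang.org/x/mod/modfile"
)

func main() {
    // Read go.mod from disk instead of embedding it.
    data, err := os.ReadFile("go.mod")
    if err != nil {
        panic(err)
    }
    f, err := modfile.Parse("go.mod", data, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println(f.Module.Mod.Path) // example.com
}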
In Go, I have a compile error due to incompatible types which I cannot explain. I'm using cgo (the "C" pseudo-package). A minimal example consists of the following two files:
package module
import "C"
type T struct {
X C.int
}
and a main program
package main
import (
"fmt"
"sandbox/module"
)
import "C"
func f() *module.T {
var x C.int = 42
return &module.T{X: x}
}
func main() {
fmt.Printf("value: %d", f().X)
}
This fails to compile with the message
./main.go:12: cannot use x (type C.int) as type module.C.int in field value.
For some reason the compiler thinks that C.int is not equal to module.C.int.
It must have something to do with cgo and the fact that the code is spread across two packages, because it suddenly works if I switch from C.int to plain int.
Why does this piece of code not compile? What would be the proper solution to make it compile without squashing all the code together into one package?
I'm using the latest Go compiler 1.9.2 on Ubuntu 16.04.
From the Command cgo documentation, under "Go references to C":
Cgo translates C types into equivalent unexported Go types. Because
the translations are unexported, a Go package should not expose C
types in its exported API: a C type used in one Go package is
different from the same C type used in another.
You say, "For some reason the compiler thinks that C.int is not equal to module.C.int." As the cgo command documentation explains, unexported Go types in different packages are not equal.
Don't expose C types in an exported API. For example,
module.go:
package module
type T struct {
X int
}
main.go:
package main
import (
"fmt"
"sandbox/module"
)
import "C"
func f() *module.T {
var x C.int = 42
return &module.T{X: int(x)}
}
func main() {
fmt.Printf("value: %d\n", f().X)
}
Output:
value: 42
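If the module package really does need to store the C value internally, a variation that still keeps C types out of the exported API is to make the field unexported and expose plain-Go accessors. A rough sketch (not part of the original answer):
package module

import "C"

// T holds the cgo value internally but never exposes C.int to other packages.
type T struct {
    x C.int
}

// New converts from a plain Go int at the package boundary.
func New(x int) *T {
    return &T{x: C.int(x)}
}

// X converts back to a plain Go int for callers.
func (t *T) X() int {
    return int(t.x)
}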
When I try to pass a C.int from package main to a function in a helper package called common, I get the following error:
main.go:24: cannot use argc (type C.int) as type common.C.int in argument to common.GoStrings
From common.go:
/*
...
*/
import "C"
...
func GoStrings(argc C.int, argv **C.char) (args []string) {
// do stuff
}
From main.go:
/*
#cgo LDFLAGS: -lpam -fPIC
#define PAM_SM_AUTH
#include <security/pam_appl.h>
*/
import "C"
...
func pam_sm_authenticate(pamh *C.pam_handle_t, flags, argc C.int, argv **C.char) C.int {
args := common.GoStrings(argc, argv)
...
}
Is there any way to pass these objects back and forth? I've tried type casting to e.g. common.C.int, but that doesn't seem to be valid syntax. I'd like to be able to call GoStrings from multiple different main programs, and it seems like that should be allowable.
Unfortunately you can't pass C types between packages. You'll need to perform any necessary type conversions within the package that is importing the C types. As per the documentation:
Cgo translates C types into equivalent unexported Go types. Because
the translations are unexported, a Go package should not expose C
types in its exported API: a C type used in one Go package is
different from the same C type used in another.
If you have common C translation methods that you use, consider using go generate with a helper script to create these in each package where they are required, from a master source file. Not as nice a solution as having a common library, but much better than manually updating files in multiple packages.
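Alternatively, you can keep the C types out of the shared package entirely by doing the conversion inside the package that imports "C". A rough sketch (unsafe.Slice requires Go 1.17+; the import path and common.Handle are hypothetical stand-ins for your shared code):
package main

/*
#cgo LDFLAGS: -lpam -fPIC
#define PAM_SM_AUTH
#include <security/pam_appl.h>
*/
import "C"

import (
    "unsafe"

    "example.com/yourproject/common" // hypothetical import path
)

// All cgo types stay in this package; common only ever sees Go types.
func pam_sm_authenticate(pamh *C.pam_handle_t, flags, argc C.int, argv **C.char) C.int {
    // View argv as a Go slice of C string pointers, then copy each into a Go string.
    ptrs := unsafe.Slice(argv, int(argc))
    args := make([]string, 0, len(ptrs))
    for _, p := range ptrs {
        args = append(args, C.GoString(p))
    }
    // common.Handle is a hypothetical function taking []string instead of C types.
    return C.int(common.Handle(args))
}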
I have an application that needs to use different packages depending on the target operating system and then generate an executable. The core package has an interface that needs to be populated depending on the package that is used.
I've recently found out the best way to achieve this is by using build tags. But what I'm struggling with is getting the interface populated by the loaded package with the correct build tag(s). Or perhaps there is a better alternative approach.
Here is a visual of how I imagined this would look:
Whichever build constraints you choose, you can achieve this with an interface and per-platform New() constructors that implement it. Each of the platform-specific files can then import the special packages you need, on a per-file basis. This approach also enforces good decoupling by forcing you to break off only the raw parts you need to implement for each architecture.
I am personally a fan of file suffixes instead of build tags, as it makes it extremely easy to know which file binds to which architecture just by looking at the filename. A big plus is you don't have to mess with any build tags and it will JustWork™. So my examples below will use file suffixes. Specifically, the format is:
*_GOOS
*_GOARCH
*_GOOS_GOARCH
For example, renderer_windows_amd64.go, renderer_windows_amd64_test.go, renderer_linux.go, renderer_linux_test.go, etc. You can find all the GOOS and GOARCH that Go supports here.
EDIT: Validated code on kiddo's laptop (tweaking a build error). ;) Note though, you can't call go run main.go, since the platform-specific file wouldn't be compiled along with it. You'll have to go build && ./mybinary to test it locally.
main.go
package main
import (
"fmt"
"os"
)
func main() {
r, err := NewRenderer()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
// call the Render() method for the specific goarch-goos.
if err := r.Render(); err != nil {
fmt.Println(err)
}
}
renderer.go
This is a simple file that only defines the interface. And maybe some common enums.
package main
// Renderer performs the platform-specific rendering.
type Renderer interface {
Render() error
}
// alternatively, you could define a global renderer struct
// here to use in each of the files below if they are always
// the same. often not though, as you want to keep states of
// of specific architectures within each struct.
// type renderer struct {
// ...
// }
//
// func (r *renderer) Render() error {
// ...
// }
renderer_windows.go
Includes 32 and 64 bit builds. If you want to target, say, 64 bit only (for specific 64-bit compiled DLLs), you can be more specific with renderer_windows_amd64.go.
package main
import (
"fmt"
// "WindowsDLLPackage" specific package to import for Windows
)
// renderer implements Renderer interface.
type renderer struct {
// you can include some stateful info here for Windows versions,
// to keep it out of the global heap.
GOOS string
WindowsRules bool
}
// NewRenderer instantiates a Windows version.
func NewRenderer() (Renderer, error) {
return &renderer{
GOOS: "Windows",
WindowsRules: true,
}, nil
}
// Render renders the Windows version.
func (r *renderer) Render() error {
// use WindowsDLLPackage.NewSomething()
fmt.Println(r.GOOS, r.WindowsRules)
return nil
}
renderer_linux.go
Linux does not include Android (nor darwin, aka macOS) builds.
package main
import (
"fmt"
// "LinuxPackage" specific package to import for Linux
)
// renderer implements Renderer interface.
type renderer struct {
// you can include some stateful info here for Linux versions,
// to keep it out of the global heap.
GOOS string
LinuxRules bool
}
// NewRenderer instantiates a Linux version.
func NewRenderer() (Renderer, error) {
return &renderer{
GOOS: "Linux",
LinuxRules: true,
}, nil
}
// Render renders the Linux version.
func (r *renderer) Render() error {
// use LinuxPackage.NewSomething()
fmt.Println(r.GOOS, r.LinuxRules)
return nil
}
renderer_android.go
Android only specific version.
package main
import (
"fmt"
// "AndroidPackage" specific package to import for Android
)
// renderer implements Renderer interface.
type renderer struct {
// you can include some stateful info here for Android versions,
// to keep it out of the global heap.
GOOS string
AndroidRules bool
}
// NewRenderer instantiates an Android version.
func NewRenderer() (Renderer, error) {
return &renderer{
GOOS: "Linux",
AndroidRules: true,
}, nil
}
// Render renders the Android version.
func (r *renderer) Render() error {
// use AndroidPackage.NewSomething()
fmt.Println(r.GOOS, r.AndroidRules)
return nil
}
generate different binaries
All that's left is to cross-compile:
$ GOOS=windows GOARCH=amd64 go build -o mybinary.exe
$ GOOS=linux GOARCH=amd64 go build -o mybinary_linux
$ GOOS=darwin GOARCH=amd64 go build -o mybinary_macos
# and whatever you do to get iOS/Android builds...
Notice how all of the files above are part of the single package main and they all exist in the same directory? This works because the compiler only picks the one file whose suffix matches the target GOOS (windows, linux or android - you can do darwin, freebsd and a lot more). During compilation, only that one file's NewRenderer() is compiled in. This is also how you can use specific packages per file.
Also notice how func NewRenderer() (Renderer, error) returns the Renderer interface, not the renderer struct type.
The type renderer struct is completely opaque to the rest of the package and can hold whatever state each architecture needs.
Also note that there aren't any global variables here. I've often used this pattern with goroutines and channels for highly concurrent applications - with no mutex locking bottlenecks. Keeping state local instead of in shared globals is what avoids the need for mutex locking. You could easily do go r.Render() and let it spawn a goroutine. Or call it a few million times.
Finally, notice how all of the filenames above make it easy to see which platform they target?
Don't fight the tooling with build tags, let the tools work for you.
Coding tips above:
I exported the interface, Renderer, as all of this could be moved to a package outside of main quite easily. You don't want to export the struct versions. But, you may want to export the NewRenderer() init method.
Renderer follows the Effective Go guidelines in using a simple interface with a single function: Render. That function's name, with an "er" suffix, becomes the name of the interface - yes, even if the name already ends in 'er', we still add 'er', giving type Renderer interface. IOW: it shouldn't be called RenderEngine; it should be called Renderer, named after the single method you are exposing: Render(). This keeps a single, clear focus for the tooling and the code. Aka, "the Go way."
Create two files somewhat like these:
// +build myBuildFlag

package mypackage
import "package1"
var important = package1.Foo
Other one:
// +build !myBuildFlag

package mypackage
import "package2"
var important = package2.Foo
Now whenever you use important, it refers to something different depending on your build flag.
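To select the first file, pass the tag at build time; without it, the !myBuildFlag file is compiled instead:
go build -tags myBuildFlag ./...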
Take a look at Dave Cheney's post which explains how to deal with building for specific platforms/architectures quite clearly: https://dave.cheney.net/2013/10/12/how-to-use-conditional-compilation-with-the-go-build-tool
I want the init function to execute only one of those two lines:
func init() {
log.SetPrefix(">>>>>>>>>>>>> ")
log.SetOutput(ioutil.Discard)
}
How can I achieve something similar to C macros defined in a makefile?
I build my Go program with go build my_project and I don't have any custom makefile. Can I pass a flag to the build command and read it inside the code?
Create a string variable that you can test. Then set that string variable using -X passed to the linker. For example:
package main
var MODE string
func main() {
if MODE == "PROD" {
println("I'm in prod mode")
} else {
println("I'm NOT in prod mode")
}
}
Now, if you build this with just go build, MODE will be "". But you can also build it this way:
go build -ldflags "-X main.MODE=PROD"
And now MODE will be "PROD". You can use that to modify your logic based on build settings. I generally use this technique to set version numbers in the binary, but it can be used for all kinds of build-time configuration. Note that it can only set strings, so you couldn't make MODE a bool or an integer.
You can also achieve this with tags, which are slightly more complicated to set up, but much more powerful. Create three files:
main.go
package main
func main() {
doThing()
}
main_prod.go
// +build prod
package main
func doThing() {
println("I'm in prod mode")
}
main_devel.go
// +build !prod
package main
func doThing() {
println("I'm NOT in prod mode")
}
Now when you build with go build, you'll get your main_devel.go version (the filename doesn't matter; just the // +build !prod line at the top). But you can get a production build by building with go build -tags prod.
See the section on Build Constraints in the build documentation.