Effective way of distributing a Go executable

I have a Go app which relies heavily on static resources like images and jars. I want to install that Go executable on different platforms like Linux, macOS, and Windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since the files (~100 of them) come to about 20 MB, it takes a really long time to build the executable. I thought having a single executable would make it easy for people to download and run, but it seems that is not an effective approach here.
I then thought of writing an installation package for each platform, like a .rpm or .deb package. These packages would contain all the resources, put them into some platform-specific predefined location, and the Go executable could reference them from there. The only downside is that I would have to handle this in the Go code: if it is Windows, load the files from, say, c:\go-installs, and if it is Linux, load them from, say, /usr/local/share/go-installs. I want the Go code to be as platform agnostic as it can be.
Or is there some other strategy for this?
Thanks

Possibly does not qualify as a real answer, but still…
As to your point №2, one way to handle this is to exploit Go's support for conditional compilation: you might create a set of files like res_linux.go, res_windows.go, etc., and declare the same set of variables in each, pointing to different locations, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go and so on. Then in the rest of the program just reference the res.InstallsPath variable and use the path/filepath package to construct full pathnames of actual resources.
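For illustration, a minimal sketch of how the rest of the program might consume that variable; the import path and the resource name are assumptions made up for the example:

package main

import (
    "fmt"
    "path/filepath"

    "example.com/myapp/res" // hypothetical import path for the res package
)

func main() {
    // Construct a full pathname from the platform-specific root.
    logo := filepath.Join(res.InstallsPath, "images", "logo.png")
    fmt.Println("would load resource from:", logo)
}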
Of course, another way to go is to do a runtime switch on the runtime.GOOS variable, possibly in an init() function in one of the source files.
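A minimal sketch of that runtime variant, assuming the same res package layout; the exact paths are placeholders:

package res

import "runtime"

// InstallsPath is resolved once at startup based on the host OS.
var InstallsPath string

func init() {
    switch runtime.GOOS {
    case "windows":
        InstallsPath = `C:\go-installs`
    case "darwin":
        InstallsPath = "/usr/local/share/myapp"
    default: // linux, the BSDs and other POSIX systems
        InstallsPath = "/usr/share/myapp"
    }
}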
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files—almost "xcopy deployment".
Note that while on Windows you could just have your executable extract the directory from its own pathname (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux, *BSD, etc.) the resource file should still be located under /usr/share/myapp or a similar place dictated by the FHS (or a particular distro's rules), so some logic to locate that file will still be required.
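A rough sketch of the zip variant, assuming a resources.zip placed next to the executable (fine on Windows; on POSIX you would first try the FHS location, as noted). It uses os.Executable, available since Go 1.8, instead of os.Args[0]; the archive and entry names are made up:

package main

import (
    "archive/zip"
    "fmt"
    "io"
    "os"
    "path/filepath"
)

func main() {
    exe, err := os.Executable()
    if err != nil {
        panic(err)
    }
    // Open the archive sitting next to the binary.
    r, err := zip.OpenReader(filepath.Join(filepath.Dir(exe), "resources.zip"))
    if err != nil {
        panic(err)
    }
    defer r.Close()

    // Read one resource out of the archive.
    for _, f := range r.File {
        if f.Name != "images/logo.png" {
            continue
        }
        rc, err := f.Open()
        if err != nil {
            panic(err)
        }
        data, err := io.ReadAll(rc)
        rc.Close()
        if err != nil {
            panic(err)
        }
        fmt.Println("read", len(data), "bytes")
    }
}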
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant to let the downstream packagers tweak the pathnames. If this is a proprietary (or just niche) software the second idea appears to be rather OK as you'll play the role of downstream packagers yourself.

Related

Recommended tool to automate complicated build procedure

I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I have become able to compile Java applications to bytecode(-ish) form, flash it onto, for instance, an ATmega1284P, and run it.
Now I've added support for C applications: I compile and process them using several tools, and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
Roughly, a C application goes through the following consecutive manual steps:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) Convert this .class file to a Jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) Manually edit this Jasmin file in a text editor, replacing/adjusting some strings
(4) Convert the modified Jasmin file back to a .class file, again using Jasmin
(5) Put this .class file in a folder where the rest of my makefiles (the ones that already build and deploy the OS and the class files from the Java apps) can take over.
The current option seems to be to just keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles, and this would further extend that chain). I've also read a bit about SCons. In essence, I'm wondering which tools or approaches are recommended for complicated builds like this.
Hopefully this helps a bit, but the question as such could probably be the subject of a heated discussion without many helpful results.
As pointed out in the comments by others, you really need to automate the steps from your .c file up to the point where you can integrate it with the rest of your system.
There is generally nothing wrong with make, and you would not win much by switching to SCons. You'd get more ways to express what you want to do; among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though if that were a concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there, and with it the need to write the necessary automation for those .c-to-integration steps.
If you really wanted to look into alternative options, Bazel might be of interest to you. The downside is that the initial effort to write the necessary rules to fit your needs could be costly, and depending on the size of your project, it might just be too much. On the other hand, once that is done, it'd be very easy to use (applying those rules to a growing code base), and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to get the tools and bits you need for your build, all with a single system for build description.

How to reliably refer to a static file in a Go application?

I am writing a Go command-line tool that generates some files based on templates.
The templates are located in the Git repository alongside the code for the command-line tool itself.
I wanted to allow the following:
The binary, wherever it is called from, should always find the templates directory.
The templates directory can be overridden by the user if need be.
Since this is a Go application, I went with something like:
templateRoot := filepath.Join(
    os.Getenv("GOPATH"),
    "src/github.com/myuser/myproject/templates",
)
But being rather new to Go, I wonder if this approach is reliable enough: is it guaranteed that my application's templates will always be accessible at that path?
What if someone vendors my application into their own project? Does that even make sense for a command-line tool?
Because of 2., I obviously can't/won't use go-bindata, since I want to allow the templates to be overridden if need be.
In summary: what is a good strategy for reliably referring to non-Go static files in a Go command-line tool?
GOPATH is used for building the application. While you could look up GOPATH and check locations relative to each GOPATH entry at runtime, you can't be sure it will exist (unless, of course, you make it a prerequisite for running your application).
go get itself is a convenience for developers to fetch and build a Go package. It relies on having a GOPATH (though there's a default now in Go 1.8) and GOBIN in your PATH. Many programs require extra steps not covered by the simple go tool and have scripts or Makefiles to do the build. If you're targeting users who aren't developers, you need to provide a way to install into standard system paths anyway.
Do what any regular program would do, and use some well-known path to locate the template files. You can certainly add logic in your program to check a hierarchy of locations: relative to $GOPATH, relative to the binary, the working directory, $HOME, etc.; just document for the user the list of locations where your program will look for templates, as sketched below.
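A sketch of that lookup hierarchy; every path below is illustrative, and the environment variable name is made up for the example:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// findTemplates returns the first existing directory from an ordered
// list of candidate locations.
func findTemplates() (string, error) {
    var candidates []string
    // 1. Explicit user override via an environment variable.
    if env := os.Getenv("MYPROG_TEMPLATES"); env != "" {
        candidates = append(candidates, env)
    }
    // 2. Relative to the binary itself.
    if exe, err := os.Executable(); err == nil {
        candidates = append(candidates, filepath.Join(filepath.Dir(exe), "templates"))
    }
    // 3. Working directory, $HOME, then system-wide locations.
    candidates = append(candidates,
        "templates",
        filepath.Join(os.Getenv("HOME"), ".myprog", "templates"),
        "/usr/local/share/myprog/templates",
        "/usr/share/myprog/templates",
    )
    for _, dir := range candidates {
        if info, err := os.Stat(dir); err == nil && info.IsDir() {
            return dir, nil
        }
    }
    return "", fmt.Errorf("no templates directory found in %v", candidates)
}

func main() {
    dir, err := findTemplates()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("using templates from", dir)
}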

Does 'make'-ing something from source make it self-contained?

Forgive me before I start, as I'm not a C/C++ etc. programmer, a mere PHP one :) but I've been working on projects that use code sourced from online open-source repos, such as SVN and Git. For some of these projects, I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then redistribute (according to licenses etc.).
My question is this: when I build these binaries on my build machine, with all the prerequisites that I need in order to build them in the first place ("build-essential", "cmake", "gcc", etc.), are they self-contained once built (in /usr/lib, for example), to the point that I can merely copy those /usr/lib binary files and place them in the same folder on the virtual machines that I would distribute, without those machines having all the build components installed on them?
With all the dependencies that I needed to build the source in the first place, would the finally built binary contain them all in itself, or would I have to install them on the distributed servers as well?
Would that work? Or is the question a little too general, and does it all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects upon them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with unnecessary installed software that I don't actually need, because the compiled executables will be placed on the distributed VMs in, for example, /usr/local/bin ...
That depends on how you link your program to the libraries it depends on. In most cases, the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
Theoretically, you can link everything statically, which means that the library code is compiled into the executable. Thus, the executable would really be self-contained, but linking statically is not always possible. This depends on the actual libraries you are using and will probably require playing with ./configure args when building them.
Finally, there are some libraries that are always linked dynamically, such as libc. The good thing is that the machine you are distributing to will surely have this library. The bad thing is that versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go this way. If not, read about AppImage and Docker.
The distribution of built libraries and headers (binary distribution) is a possible way and should work. (I always do it in my projects.)
It is not necessary that all of the libraries you built be installed into /usr/lib. To keep your target machine clean, you can install them in another folder too, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy removal
All files within one subfolder, no files missing, no mixing with other libs
Disadvantage:
The loader must know the extra search path (e.g. via LD_LIBRARY_PATH or an rpath entry set at link time)

Clean installation of a MATLAB library on Windows

I would like to create an installer for a library (DLL) that can be used by multiple systems, including MATLAB.
For MATLAB, I have additional *.m and *.mex files to make the DLL functions easily accessible from it.
I also have an installer that modifies the PATH environment variable to make my DLL visible to all potential calling systems.
My problem is that MATLAB does not make use of the system PATH environment variable. Thus, I am looking for a fix that would allow users of my library to run the installer and have my library accessible from MATLAB "out of the box" (possibly after a restart or reopening the session).
I currently see two ways to do it, neither of which I like:
Write a MATLAB script that uses the addpath()/savepath() functions. I don't like this because:
a. MATLAB may not always be installed.
b. This would mess with the user's own MATLAB path variable.
c. Upon installing a new version of my library, I would have to mess even more with MATLAB's path to delete the path to the older library before adding the path to the newer one.
Look through the system PATH, search for a ...\MATLAB\RXXXXn\bin path, and use that to install my *.m and *.mex files in the appropriate folder inside MATLAB. I don't like this because:
a. It would mess with MATLAB's own installation.
b. Once again, installing multiple successive versions of the library may cause issues (currently, multiple versions may be installed in separate directories, and the PATH redirects to the last one installed; expert users can modify the PATH according to their needs).
Currently, I am leaning towards option 2, but I am looking for a better, cleaner solution to this installation procedure.
Can anyone give me some MATLAB expert advice?
Thanks in advance for your help!

Where should resources be kept in golang

My application uses JSON configuration files and other resources. Where should I place them in my project hierarchy?
I could not find the answer in http://golang.org/doc/code.html (How to Write Go Code)
Upd:
The question is not about automatic distribution of resources with the application, but much simpler: where should I keep my resources in the project hierarchy? Is there some standard place where anyone would expect them to be?
There is no single correct answer, nor are there any strong conventions assumed or enforced by any Go tooling at this time.
Typically I start by assuming that the files I need are located in the same directory from where the program will be run. For instance, suppose I need conf.json for myprog.go; then both of those files live together in the same directory and it works to just run something like
go build -o myprog && ./myprog
When I deploy the code, the myprog binary and conf.json live together on the server. The run/supervisor script needs to cd to that directory and then run the program.
This also works when you have a lot of resources; for instance, if you have a webserver with JS, CSS, and images, you just assume they're relative to cwd in the code and deploy the resource directories along with the server binary.
Another alternative to assuming a cwd is to have a -conf flag which the user can use to specify a configuration file. I typically use this for distributing command-line tools and open-source server applications that require a single configuration file. You could even use an -assets flag or something to point to a whole tree of resource files, if you wanted.
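For instance, a minimal sketch of the flag approach, defaulting to a cwd-relative conf.json; the flag name follows the answer, everything else is illustrative:

package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    // The default is relative to the current working directory;
    // the user can point anywhere else with -conf.
    conf := flag.String("conf", "conf.json", "path to the configuration file")
    flag.Parse()

    data, err := os.ReadFile(*conf)
    if err != nil {
        fmt.Fprintln(os.Stderr, "cannot read config:", err)
        os.Exit(1)
    }
    fmt.Printf("loaded %d bytes of configuration from %s\n", len(data), *conf)
}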
Finally, one more approach is to not have any resource files. go-bindata is a useful tool that I've used for this purpose -- it just encodes some data as bytes into a Go source file. That way it's all baked into your binary. I think this method is most useful when the resource data will rarely or never change, and is pretty small. (Otherwise you're going to be shipping around huge binaries.) One (kind of silly) example of when I've used go-bindata in the past was for baking a favicon into a really simple server which didn't otherwise require any extra files besides the server binary.
For static resources, it might be most convenient to include them in the binary, similar to resources in Java. Newer Go versions (1.16 and later) provide the //go:embed directive to include content:
import (
    _ "embed"
)

//go:embed myfile.txt
var myfile string
You can now use myfile in your code. IntelliJ, for example, also provides support for this directive.
There are also other options for including content, e.g. as raw bytes ([]byte) or dynamically via an embed.FS; see the embed package documentation.
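As a sketch of the dynamic option, a whole directory can be embedded as an embed.FS; the templates directory and file name are made up for the example:

package main

import (
    "embed"
    "fmt"
)

//go:embed templates/*
var templates embed.FS

func main() {
    // Files keep their path inside the embedded file system.
    data, err := templates.ReadFile("templates/greeting.tmpl")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(data))
}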
