I want to take the GCC Compiler that is on my machine and all its dependencies and zip them up in a deployment package that I can send off to AWS Lambda (That way I can use a Lambda to compile C code). Is there an easy way to package the whole thing in one go so I can deploy and use it from AWS Lambda?
This is what I have right now
However when I invoke the function I get
"gcc: error trying to exec 'cc1': execvp: No such file or directory\n"
as the response. The way I currently got gcc and the dependencies you see in the left panel was by spinning up an Amazon Linux Docker container, installing gcc, and then zipping up gcc together with the dependencies I found with the ldd command.
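Roughly, the container workflow I used looks like this (the image tag and paths shown here are illustrative):
# Install gcc in an Amazon Linux container and copy out the binary plus its ldd-visible libraries
docker run --rm -v "$PWD/bundle":/bundle amazonlinux:2 bash -c '
  yum install -y gcc findutils >/dev/null
  mkdir -p /bundle/bin /bundle/lib
  GCC=$(command -v gcc)
  cp "$GCC" /bundle/bin/
  ldd "$GCC" | awk "/=>/ {print \$3}" | grep "^/" | xargs -r -I{} cp {} /bundle/lib/
'
# Zip the result as the deployment package
(cd bundle && zip -r ../gcc-deploy.zip .)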
The AWS Lambda runtime is described here. Basically, it's Amazon Linux. If I were you, I would try to grab the specified AMI and create an EC2 instance from it, or just create an Amazon Linux 2 EC2 instance. Then I would log in to that instance and compile the binaries you need. Finally, I would export them in a ZIP file and ship that with Lambda. This way the chances are high that the binaries will work on Lambda.
Great question!
As you say, just packaging the binary doesn't help because you're missing the shared objects (.so files) and other dependencies. You can find those dependencies by running something like ldd, and this question helps. Projects like yumda try to simplify this and are definitely worth a shot.
But under the hood, Lambda uses Amazon Linux, and there's really no reason it can't be done. The high-level steps are:
Build the binary in an Amazon Linux container
Determine the binary and its dependencies
Copy the binary and dependencies out of the container into a lambci container
Test out the lambci container (you'd typically need to set some env vars for this to work, e.g. $LD_LIBRARY_PATH)
Once it runs, package it as a zip and load it into your Lambda, remembering to set the right env vars (see the sketch after this list)
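A minimal sketch of the test-and-package points above, assuming the gcc binary and its shared libraries have already been copied into ./task/bin and ./task/lib (the paths and the lambci image tag are illustrative):
# Smoke-test the bundle in a Lambda-like environment before zipping it
docker run --rm -v "$PWD/task":/var/task \
  -e LD_LIBRARY_PATH=/var/task/lib \
  --entrypoint /var/task/bin/gcc \
  lambci/lambda:python3.8 --version
# If that works, package it for the function (or for a layer)
(cd task && zip -r ../gcc-layer.zip .)
Note that the "cc1 ... No such file or directory" error in the question is exactly the kind of thing this test catches: cc1 is a helper program that gcc execs (see gcc -print-prog-name=cc1), not a shared library, so ldd alone won't pull it into the bundle.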
As an option, I'd package gcc as a layer, so you can share it.
When I searched around, it looks like someone has done exactly this here. Hopefully it's exactly what you're looking for.
Configure, build and install gcc into a specific directory, specified by --prefix option to configure.
After installing, change the gcc spec file so that it hardcodes -rpath into executables and shared libraries; that way you do not need to tinker with LD_LIBRARY_PATH (which is the wrong solution most of the time) to make the executables find the right libstdc++.so, libgcc_s.so and friends. See the sketch below.
rsync the directory onto another machine into the same place in filesystem.
Or archive the install directory and unpack it on your target machine.
However, the target should have the same libc and system libraries that gcc was built with, otherwise this may not work as intended.
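A sketch of the spec-file tweak, assuming gcc was installed with --prefix=/opt/gcc-13 (the version and paths are illustrative):
GCCDIR=/opt/gcc-13
# gcc reads a file literally named "specs" from the directory where libgcc lives
SPECDIR=$(dirname "$("$GCCDIR/bin/gcc" -print-libgcc-file-name)")
"$GCCDIR/bin/gcc" -dumpspecs > "$SPECDIR/specs"
# Now edit $SPECDIR/specs and append to the *link: rule:
#   -rpath /opt/gcc-13/lib64 -rpath /opt/gcc-13/lib
# so binaries built by this gcc find its libstdc++.so/libgcc_s.so without LD_LIBRARY_PATH.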
Alternatively, build locally and deploy your executables with all their dependencies using Exodus.
Related
I'm trying to install a library inside a package; however, I don't understand where it compiles to.
Structure is like so:
package/cmd/library
I can install other executable targets fine with go install, and my paths are set correctly. However, now I want to build my shared library target and deploy it somewhere (this deployment step can be done manually). I'm running into two different issues.
Issue one, I can't seem to install it at all:
go install -buildmode=c-shared package/cmd/library@latest
Returns with:
go install: no install location for directory /home/tpm/go/pkg/mod/package/cmd/library outside GOPATH
For more details see: 'go help gopath'
which tells me that it installs somewhere other than my GOPATH; I'm just not sure where that might be.
Issue 2: using the -o flag doesn't work with go install, so I can't alter the output location to place it inside the GOPATH (I did try setting GOBIN to within my GOPATH, but since other commands work fine I don't think this is causing any issue).
Quoting Ian from https://github.com/golang/go/issues/24253
Note that it doesn't make a great deal of sense to use go install -buildmode=c-shared. The expectation is that people will use go build -buildmode=c-shared -o foo.so. The only point of using -buildmode=c-shared is to use the shared library somewhere, and using go install without -o will put the shared library in a relatively unpredictable place.
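So for the layout in the question, the build step would look something like this (liblibrary.so is an arbitrary output name, and the clone URL is a placeholder):
git clone https://example.com/you/package && cd package
# Build the c-shared library with an explicit, predictable output path
go build -buildmode=c-shared -o liblibrary.so ./cmd/library
# go also writes a C header (liblibrary.h) next to the .so; deploy both as needed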
This question may not make much sense if my understanding of both the pkg-config and environment modules is somewhat incorrect, but I'll ask anyways as I could not find anything specific on this topic. There might be an entirely better solution available, if that is the case, I am all ears!
A while back I started using modules to easily load my development environment as needed (i.e. using commands like module load foo etc.). More recently, I have adopted the meson build system for my projects. In meson, libraries are treated as dependencies, which are found using pkg-config in the background. So now I have two ways of discovering libraries and setting up their lib and include directories.
As an example, I have the following (simplified) module script for library foo (I am using lmod which is based on lua):
prepend_path("LD_LIBRARY_PATH", "/opt/foo/lib")
prepend_path("CPATH", "/opt/foo/include")
I could also have a pkg-config file (*.pc) doing something similar, like this (that is, if my understanding of pkg-config is correct):
prefix=/opt/foo
exec_prefix=${prefix}
includedir=${prefix}/include
libdir=${exec_prefix}/lib

Name: foo
Description: The foo library
Version: 0.1
Cflags: -I${includedir}
Libs: -L${libdir} -lfoo
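For what it's worth, once that file sits in a directory on my PKG_CONFIG_PATH, I can check what pkg-config (and hence meson's dependency('foo')) would report:
pkg-config --cflags --libs foo
# which should print something like: -I/opt/foo/include -L/opt/foo/lib -lfoo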
Now both seem to be doing pretty much the same thing in terms of setting up my environment, but module files alone will not let meson find my dependencies, so I still have to use pkg-config. That basically means creating two files (either manually or dynamically), which sounds like a maintenance burden and also not very clean. Equally, I could create the pkg-config file and add its location to PKG_CONFIG_PATH, i.e. something like
prepend_path("LD_LIBRARY_PATH", "/opt/foo/lib")
prepend_path("CPATH", "/opt/foo/include")
prepend_path("PKG_CONFIG_PATH", /path/to/*.pc/file)
but again this requires maintaining two files (the pkg-config file and the modulefile). I rather like the module environment and don't want to ditch it, so is there a better / cleaner way of doing things, where I just load a modulefile and pkg-config (and thus meson in turn) then knows about the dependency?
As of today, there is no bridge between the environment modules and pkg-config tools. The best approach I can think of that keeps the module system is to have a script that queries every pkg-config file available and creates the corresponding modulefile, and to run that script regularly to keep things in sync.
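A rough sketch of such a sync script, assuming Lmod-style Lua modulefiles under /opt/modulefiles (the location, the one-file-per-package layout, and the assumption that every .pc file defines libdir/includedir are all mine):
#!/usr/bin/env bash
# Generate one modulefile per package that pkg-config currently knows about
MODULEDIR=/opt/modulefiles
pkg-config --list-all | awk '{print $1}' | while read -r pkg; do
  libdir=$(pkg-config --variable=libdir "$pkg")
  includedir=$(pkg-config --variable=includedir "$pkg")
  mkdir -p "$MODULEDIR/$pkg"
  cat > "$MODULEDIR/$pkg/default.lua" <<EOF
prepend_path("LD_LIBRARY_PATH", "$libdir")
prepend_path("CPATH", "$includedir")
EOF
done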
I am getting stuck while trying to create a Terraform provider. I have been following the advice given on https://www.terraform.io/docs/extend/writing-custom-providers.html, but when I go to build my binary via go build -o terraform-provider-example I get several missing-package errors.
So I then work my way down the list of missing packages and use go get ... to get those packages installed in my Go libraries.
I get an error indicating that github.com/hashicorp/hcl/v2 cannot be found. I navigate to the location and sure enough it doesn't exist.
(Screenshots from the question: "Package not available at install time" and "Package not available with go get".)
So I am getting stuck and unable to build these providers. I have looked for a while now trying to find something which describes how to setup the environment for creating providers but have been unsuccessful so far. Can anyone help get me going?
Terraform Core and Terraform provider development requires using the Go toolchain in the new "modules mode", which in current versions of Go is not the default.
The easiest way to ensure you're working in modules mode is to clone the repository you want to work on outside the $GOPATH/src directory. Development outside of GOPATH is only supported in Modules mode, and so the Go toolchain assumes that you intend to use modules mode if you are working in that way.
One reason why Terraform development requires modules mode (though not the only one) is that it has a dependency on github.com/hashicorp/hcl/v2, a module path with a major-version suffix that is not supported in the old GOPATH mode, because previously the Go toolchain was only able to install from the master branch of a remote Git repository. The module path github.com/hashicorp/hcl/v2 is the Go modules way of specifying the second major version of HCL, whereas github.com/hashicorp/hcl is the first major version.
In modules mode, it should be sufficient to just run go build -o terraform-provider-example (or, if you prefer, go install) and it will automatically fetch the dependencies to the local modules cache and use them from there. In modules mode, go get is for changing the dependencies recorded in go.mod rather than for installing existing dependencies.
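Concretely, that workflow looks something like this (the clone URL and paths are placeholders):
# Clone somewhere outside $GOPATH/src so the toolchain uses modules mode
git clone https://example.com/you/terraform-provider-example ~/dev/terraform-provider-example
cd ~/dev/terraform-provider-example
# go.mod drives dependency resolution; hashicorp/hcl/v2 and friends land in the module cache
go build -o terraform-provider-example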
How can I put my Go binary into a Debian package? Since Go is statically linked, I just have a single executable--I don't need a lot of complicated project metadata information. Is there a simple way to package the executable and resource files without going through the trauma of debuild?
I've looked all over for existing questions; however, all of my research turns up questions/answers about a .deb file containing the golang development environment (i.e., what you would get if you do sudo apt-get install golang-go).
Well. I think the only "trauma" of debuild is that it runs lintian after building the package, and it's lintian who tries to spot problems with your package.
So there are two ways to combat the situation:
Do not use debuild: this tool merely calls dpkg-buildpackage which really does the necessary powerlifting. The usual call to build a binary package is dpkg-buildpackage -us -uc -b. You still might call debuild for other purposes, like debuild clean for instance.
Add the so-called "lintian override", which can be used to make lintian turn a blind eye to selected problems with your package which, you insist, are not problems. An example override file follows this list.
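For the second option, an override is just a file shipped with the packaging, e.g. debian/mypackage.lintian-overrides (the package name and tag here are illustrative; use whatever tags lintian actually reports for your package):
# debian/mypackage.lintian-overrides
mypackage: statically-linked-binary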
Both approaches imply that you do not attempt to build your application with the packaging tools but rather treat it as a blob which is just wrapped into a package. This requires deviating slightly from the normal way debian/rules works (so that it does not attempt to build anything).
Another solution which might be possible (and is really way more Debian-ish) is to try to use gcc-go (plus gold for linking): since it's a GCC front-end, this tool produces a dynamically-linked application (which links against libgo or something like this). I, personally, have no experience with it yet, and would only consider using it if you intend to try to push your package into the Debian proper.
Regarding the general question of packaging Go programs for Debian, you might find the following resources useful:
This thread on go-nuts, started by one of the Go-for-Debian packagers.
In particular, the first post in that thread links to this discussion on debian-devel.
The second thread on debian-devel regarding that same problem (it's a logical continuation of the former thread).
Update on 2015-10-15.
(Since this post appears to still be searched, found and studied by people, I've decided to update it to better reflect the current state of affairs.)
Since then the situation with packaging Go apps and packages has improved dramatically, and it's possible to build a Debian package using "classic" Go (the so-called gc suite originating from Google) rather than gcc-go.
And good infrastructure for such packages now exists as well.
The key tool to use when debianizing a Go program now is dh-golang described here.
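With dh-golang, the debian/rules file boils down to something like this sketch (the DH_GOPKG import path and tool name are placeholders; newer packages set XS-Go-Import-Path in debian/control instead):
#!/usr/bin/make -f
# debian/rules: let dh-golang drive the whole build (the dh line must be tab-indented, as in any makefile)
export DH_GOPKG := github.com/you/yourtool

%:
	dh $@ --buildsystem=golang --with=golang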
I've just been looking into this myself, and I'm basically there.
Synopsis
By 'borrowing' from the 'package' branch from one of Canonical's existing Go projects, you can build your package with dpkg-buildpackage.
install dependencies and grab a 'package' branch from another repo.
# I think this list of packages is enough. May need dpkg-dev as well.
sudo apt-get install bzr debhelper build-essential golang-go
bzr branch lp:~niemeyer/cobzr/package mypackage-build
cd mypackage-build
Edit the metadata.
edit debian/control file (name, version, source). You may need to change the golang-stable dependency to golang-go.
The debian/control file is the manifest. Note the 'build dependencies' (Build-Depends: debhelper (>= 7.0.50~), golang-stable) and the 3 architectures. Using Ubuntu (without the gophers ppa), I had to change golang-stable to golang-go.
edit debian/rules file (put your package name in place of cobzr).
The debian/rules file is basically a 'make' file, and it shows how the package is built. In this case they are relying heavily on debhelper. Here they set up GOPATH, and invoke 'go install'.
Here's the magic 'go install' line:
cd $(GOPATH)/src && find * -name '*.go' -exec dirname {} \; | xargs -n1 go install
Also update the copyright file, readme, licence, etc.
Put your source inside the src folder. e.g.
git clone https://github.com/yourgithubusername/yourpackagename src/github.com/yourgithubusername/yourpackagename
or, as a second example:
cp .../yourpackage/ src/
build the package
# -us -uc skips package signing.
dpkg-buildpackage -us -uc
This should produce a binary .deb file for your architecture, plus the 'source deb' (.tgz) and the source deb description file (.dsc).
More details
So, I realised that Canonical (the Ubuntu people) are using Go, and building .deb packages for some of their Go projects. Ubuntu is based on Debian, so for the most part the same approach should apply to both distributions (dependency names may vary slightly).
You'll find a few Go-based packages in Ubuntu's Launchpad repositories. So far I've found cobzr (git-style branching for bzr) and juju-core (a devops project, being ported from Python).
Both of these projects have both a 'trunk' and a 'package' branch, and you can see the debian/ folder inside the package branch. The 2 most important files here are debian/control and debian/rules - I have linked to 'browse source'.
Finally
Something I haven't covered is cross-compiling your package (to the other 2 architectures of the 3, 386/arm/amd64). Cross-compiling isn't too tricky in go (you need to build the toolchain for each target platform, and then set some ENV vars during 'go build'), and I've been working on a cross-compiler utility myself. Eventually I'll hopefully add .deb support into my utility, but first I need to crystallize this task.
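For what it's worth, with a recent Go toolchain the per-target builds are mostly a matter of environment variables (the output names are placeholders):
GOOS=linux GOARCH=386 go build -o yourpackage_i386 .
GOOS=linux GOARCH=amd64 go build -o yourpackage_amd64 .
GOOS=linux GOARCH=arm GOARM=6 go build -o yourpackage_armel .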
Good luck. If you make any progress then please update my answer or add a comment. Thanks
Building deb or rpm packages from Go applications is also very easy with fpm.
Grab it from rubygems:
gem install fpm
After building your binary, e.g. foobar, you can package it like this:
fpm -s dir -t deb -n foobar -v 0.0.1 foobar=/usr/bin/
fpm supports all sorts of advanced packaging options.
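For example, a slightly fuller invocation that also sets maintainer and description metadata (all values are illustrative):
fpm -s dir -t deb -n foobar -v 0.0.1 \
    -m "you@example.com" --description "The foobar service" \
    foobar=/usr/bin/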
There is an official Debian policy document describing the packaging procedure for Go: https://go-team.pages.debian.net/packaging.html
For libraries: Use dh-make-golang to create a package skeleton. Name your package with a name derived from the import path, with a -dev suffix, e.g. golang-github-lib-pq-dev. Specify the dependencies on the Depends: line. (These are source dependencies for building, not binary dependencies for running, since Go statically links all source.)
Installing the library package will install its source code into /usr/share/golang/src (possibly, the compiled libraries could go into .../pkg). Building Go packages that depend on it will then use the artifacts from those system-wide locations.
For executables: Use dh-golang to create the package. Specify dependencies in Build-Depends: line (see above regarding packaging the dependencies).
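A sketch of the library flow, using github.com/lib/pq (which is where the example package name above comes from; the exact skeleton layout depends on your dh-make-golang version):
# Generate the packaging skeleton for the library
dh-make-golang github.com/lib/pq
# Then, inside the generated source directory, adjust debian/control and build:
dpkg-buildpackage -us -uc -b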
I recently discovered https://packager.io/ - I'm quite happy with what they're doing. Maybe open up one of the packages to see what they're doing?
I've read the entire ATLAS installation guide, and it says all you need to build shared (.so) libraries is to pass the --shared flag to the configure script. However, when I build, the only .so files that appear in my lib folder are libsatlas.so and libtatlas.so, though the guide says that there should be six others:
libatlas.so, libcblas.so, libf77blas.so, liblapack.so, libptcblas.so, libptf77blas.so
After installation some of the tests fail because these libraries are missing. Furthermore, FFPACK wants these libraries during installation.
Has anyone encountered this? What am I doing incorrectly?
In my experience, it's a lot more complex than that; see our EasyBuild implementation of the ATLAS build procedure at https://github.com/hpcugent/easybuild-easyblocks/blob/master/easybuild/easyblocks/a/atlas.py.
We needed to:
enable the -fPIC compiler option
run 'make shared cshared ptshared cptshared' in the 'lib' directory (see the sketch after this list)
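Roughly, the sequence looks like this (the prefix and source paths are illustrative; ATLAS requires an out-of-tree build):
mkdir atlas-build && cd atlas-build
# "-Fa alg <flags>" appends flags to all compilers, which is how -fPIC gets in
../ATLAS/configure --prefix=/opt/atlas -Fa alg -fPIC
make build
cd lib && make shared cshared ptshared cptshared
cd .. && make install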
We're not even using --shared for configure, probably because it doesn't do much.
If you want to build ATLAS (and whatever you will be linking it with) without headaches, look into EasyBuild.
(disclaimer: I'm a developer for EasyBuild)
First, if you have incorrectly specified the --force-tids flag for configure, then the parallel libs won't build. To check this you can run make ptcheck. I have a question regarding the specification of this flag here.
Then, if I examine my resulting ATLAS Makefile, it says "... only when atlas is built to one lib", and indeed only two "fat" libs are constructed: libsatlas.so and libtatlas.so.
I guess you can either link FFPACK against those libs or change the resulting ATLAS Makefile to contain the targets you need (which shouldn't be too hard since the static libs are available).
I had to manually create links to the .so.3 files.
So the versioned library files existed, but not the files cmake was looking for.
Running
sudo ln -s libatlas.so.3 libatlas.so
sudo ln -s libcblas.so.3 libcblas.so
sudo ln -s liblapack_atlas.so.3 liblapack_atlas.so
(I didn't build cblas, atlas or lapack myself but installed them with apt-get. I wonder why the links were not created automatically.)