How to build for Docker (rush monorepo) for production?

So I have a monorepo using rush (pnpm).
I have multiple applications and multiple libraries. Applications depend on libraries, and libraries depend on libraries.
I know pnpm creates a node_modules folder using symlinks.
I know Docker's COPY directive won't follow symlinks.
How can I build an application using rush (pnpm) without symlinks (only for production deployment)?
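One approach that sidesteps the symlink problem is Rush's deploy feature, which copies a project plus its workspace dependencies into a flat, symlink-free folder that Docker can COPY. A minimal sketch, assuming a recent Rush version that provides rush deploy; the project name my-app and the entry-point path are placeholders:

    # one-time setup: creates common/config/rush/deploy.json for my-app
    rush init-deploy --project my-app

    rush install && rush build
    # copies my-app plus everything it depends on into common/deploy/,
    # replacing workspace symlinks with real files
    rush deploy

A Dockerfile could then copy only that flattened folder:

    FROM node:18-alpine
    WORKDIR /app
    COPY common/deploy/ ./
    # placeholder path; point this at my-app's actual built output
    CMD ["node", "apps/my-app/dist/main.js"]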

Related

Google Container Builder: How to install govendor dependencies during build step?

I am trying to use Google Cloud Container Builder to automate the building of my containers using GCP Build Triggers
My code is in Go, and I have a vendor folder in my project root which contains all of my Go dependencies (I use govendor). However, this vendor folder is NOT checked in to source control.
I have a cloudbuild.yaml file where I first build my Go source into a main executable, and then build a Docker image using this executable. Container Builder ensures these build steps have access to my master branch.
The problem is that the Go compilation step fails, because the vendor folder is not checked in to source control, so none of my dependencies are available for any build step.
Is there a way to create a build step that uses govendor to install all dependencies in the vendor folder? If so, how? Or is the only option to check in my vendor directory into source control (which seems unnecessary to me)?
As per @JimB's and @Peter's comments on my question, an easy solution is to add my vendor directory to Git so I don't have to download all my dependencies during the build steps.
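If checking vendor/ into source control is undesirable, a build step can regenerate it from vendor/vendor.json before compiling. A hedged sketch of such a cloudbuild.yaml; the image tag, the import path github.com/example/myproject, and the output name main are assumptions:

    steps:
    # restore the vendor tree from vendor/vendor.json, then compile
    - name: 'golang:1.10'
      entrypoint: 'bash'
      args:
        - '-c'
        - |
          # govendor requires the code to live inside GOPATH
          mkdir -p /go/src/github.com/example
          ln -s /workspace /go/src/github.com/example/myproject
          cd /go/src/github.com/example/myproject
          go get -u github.com/kardianos/govendor
          govendor sync
          go build -o main .
    # build the Docker image around the compiled binary
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/myproject', '.']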

golang compiling the same package from two different file locations

The way I have my projects structured is similar to the following
/workspace
    /src
        /package1
    /vendor
        /src
            /somepackage
            /anotherpackage
            /package1
My GOPATH is set to /workspace;/workspace/vendor
Note this is not using the go 1.5 vendor option.
So far everything has been compiling and working fine within our build / development workflow.
I'm now in a situation where I would like to import a library into the vendor directory (workspace/vendor/src/package1) but write some unit tests in the workspace/src/package1 directory.
When the tests run, they cannot find methods from package1 in the vendor dir.
Is there a way to get the vendor package code recognised into the same namespace like this?
Are you asking to essentially "split" the code for a package between two folders in two different gopaths? The go tool cannot do this; it uses the first matching folder it finds among the entries in your GOPATH. If you are actively working on a project, why would it go in the vendor gopath and not in the src one?
It is because of distinctions like this that I generally recommend one gopath for everything. If you want to vendor dependencies I recommend doing that for each individual main package you have.
As captncraig said: The go tool cannot do this.
But you are free to call the go compiler itself on any set of files you want: go tool compile <file.go>...
Of course this would reintroduce a Makefile-style build system. It is doable, but all the heavy lifting done by go build or go install is lost and will have to live in your Makefiles.
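For illustration, a manual invocation might look like the following (a sketch only; the -p import path and the file layout come from the question, and the archive name package1.a is made up). All the files compiled together must declare the same package name:

    # compile the files from both locations as a single package archive
    go tool compile -p package1 -o package1.a \
        workspace/vendor/src/package1/*.go \
        workspace/src/package1/*.go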

How are golang projects packaged for deployment?

Coming from a JVM background I would like to know how to deploy a golang project to production. Is there an equivalent of a JAR file?
Is there a standalone package manager that can be installed on the server, and a dependency manifest file which can be run to bring down all dependencies on the server? I specifically do not want to have to build the project on the server, as we can't have any compilers etc. on production boxes.
thanks.
If you run go install <pkg>, the binary will be placed in $GOPATH/bin. You can copy that binary to another machine that has the same OS and architecture.
You can also change into the directory that includes the main package and just run go build. The binary will be placed in the current directory.
There are no dependencies in a Go binary for you to track. It is statically linked. (Some system libraries may be dynamically linked, but if you are running on the same OS, this shouldn't be a problem.)
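For example, you can cross-compile on a development machine and ship just the binary; a sketch (the binary name and server address are placeholders):

    # build a Linux/amd64 binary regardless of the development OS
    GOOS=linux GOARCH=amd64 go build -o myapp .
    # copy the single artifact to the server; no compiler or runtime needed there
    scp myapp user@prod-server:/usr/local/bin/myapp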

How to package up a leiningen project for recompilation with all the libraries included? [for users without an internet connection]

I'm giving a Clojure workshop and I want people to be able to modify and recompile the Clojure project. The challenge is that they won't have internet connections - so I need to give them the project and the libraries all at once.
How can I package up a leiningen project for recompilation with all the libraries included?
Assumptions
They have leiningen installed on their machine prior to the workshop.
EDIT
This is almost the same question as How to package up a maven project for recompilation with all the libraries included? [without an internet connection]
Move your ~/.m2 directory aside. Run all the lein commands you expect your users to run, and also build and test your project (test, install, jar, uberjar, etc.). This will download a lot of dependencies, for Leiningen itself as well as for your project. $HOME/.m2 is where you'll find all the jar files that were pulled down by the Maven dependency resolver.
Once you've done this, add :offline? true to the project.clj. According to the documentation, this will prevent Leiningen from checking the network for dependencies.
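As a sketch, the relevant part of project.clj might look like this (the project name and dependency vector are placeholders):

    ;; everything must already be cached under ~/.m2 for offline mode to work
    (defproject workshop "0.1.0-SNAPSHOT"
      :dependencies [[org.clojure/clojure "1.8.0"]]
      :offline? true)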
See Maven - alternative .m2 directory for an alternative to having to move your .m2 directory aside.
To make using it easy for your students, it may be best to create a self-contained zip archive with the entire .m2 directory, your project, and Leiningen itself, along with a basic installer (bash script or batch file) that moves or symlinks the .m2 directory into the proper place and adds the lein script to the path. This approach should satisfy the offline needs; I think it covers all of the dependencies you would need.
I have assumed that your students will have java installed and have it on their PATH. Pre-running all of the lein commands you expect to use is important, as some of them have their own dependencies that are only resolved when they are first run.
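The installer mentioned above could be as simple as the following sketch (all paths inside the bundle are hypothetical):

    #!/usr/bin/env bash
    # hypothetical installer shipped at the root of the workshop zip;
    # run as: source ./install.sh  (so the PATH change persists in your shell)
    BUNDLE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    # put the pre-populated Maven cache where Leiningen expects it
    [ -e "$HOME/.m2" ] || ln -s "$BUNDLE/m2" "$HOME/.m2"
    # make the bundled lein script available
    export PATH="$BUNDLE/bin:$PATH"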

rpmbuild - How to mark some package as conflicting at build time

I have to use a different compiler (gcc) from the one packaged with CentOS. It is also gcc, just a repacked newer version which has been installed under a different path.
I am using mock for the build, which has in its basic setup
config_opts['chroot_setup_cmd'] = 'groupinstall build'
The build group in my case contains the CentOS stock gcc. I cannot change anything in the mock environment.
Is there a way to remove the gcc package before the build actually proceeds?
The problem is that some programs compiled by my repacked gcc tend to use the system /usr/include/ instead of the correct includes from the repacked gcc, so I am looking for a way to localize the problem.
You can try to use the instructions provided by Fedora to complete this task; they describe doing a build when the rpm you need to install isn't part of a repo.
If that doesn't work I would look at setting up a custom environment, or asking your admin to do so if you cannot change it as you state in your question. The configuration files are stored under /etc/mock/*.cfg. I would suggest copying one of these that matches your needs and naming it something unique. Then you need to add an additional repo line (either local or remote depending on where your custom copy of GCC lives).
This will configure the environment to pick up that version of GCC, provided it really is just marked as a newer release. In the event there is some unique naming convention, or it's not being picked up for some reason, you should look at modifying the chroot_setup_cmd to simply install all the build packages individually. When I review a
yum groupinstall buildsys-build
I see a list of all the associated packages. You'll obviously want to check 'build'. You can then modify your config_opts['chroot_setup_cmd'] to use 'install' instead of 'groupinstall', and install all the associated build packages as well as your custom GCC.
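As a sketch of that config change (the file name, repo path, the repacked compiler's package name, and the package list are all made up; the list would need to match the real 'build' group on your system):

    # /etc/mock/centos-6-x86_64-customgcc.cfg -- a copy of the stock config, edited:
    # 1) an extra repo section appended inside the existing
    #    config_opts['yum.conf'] = """...""" block:
    #    [repacked-gcc]
    #    name=repacked-gcc
    #    baseurl=file:///srv/repos/repacked-gcc/
    #    gpgcheck=0
    # 2) install the group's packages individually, substituting the compiler:
    config_opts['chroot_setup_cmd'] = 'install make rpm-build redhat-rpm-config repacked-gcc'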
If that still doesn't work, you can always copy the build packages to your own personal repo where GCC lives, ensure that's the only one available to pull from, configure the repo so it supports the 'build' group, and then build the package. While not extremely helpful due to age, the Mock docs have some useful information for configuring your environment with local repos.