Quick Go project build with Docker without checking in vendor libraries

Currently, we have all vendored libraries in src/vendor, which makes docker-compose build quite fast. However, adding vendored libraries to source control has the disadvantage that the libraries don't get updated, and it also heavily pollutes the diff of pull requests.
Is there a way in between, maybe with caching?

Is there a way in between, maybe with caching?
Yes, several. But don't fight the system/preferred method.
Use $GOPATH/src/MyProject/vendor like you are already doing.
adding vendored libraries to source control has the disadvantage of libraries not being updated...
That all depends on your team's management of your repo. If everyone ignores the vendor directory, then yes, it will get stale.
Personally, I make it a "1st of the month" habit to go through and refresh all dependencies, run our test suites, and, if there are no errors, update for QA integration testing on the dev server and keep an eye on the error logs after release. Tools like godep and gostatus greatly help keep your GOPATH in sync with the latest versions, so you can update your vendor folder(s) quickly.
Just make sure it is a dedicated commit, so it can be reverted in a hurry if an issue creeps up.
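A minimal sketch of that monthly refresh, assuming godep and a vendor/ folder (the test and commit steps are illustrative):

    # pull the latest upstream versions into the GOPATH
    go get -u ./...
    # run the full test suite before touching vendor/
    go test ./...
    # rewrite vendor/ and Godeps/Godeps.json from the refreshed GOPATH
    godep save ./...
    # dedicated commit, so it can be reverted in a hurry if an issue creeps up
    git add vendor Godeps
    git commit -m "Refresh vendored dependencies"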
also heavily polluting the diff of pull requests
First of all, that's just a process task. I enforce rebasing on all pull requests and reject all merges in all repos. This keeps a very clean git history; but, more to the point, rebasing moves your local commits until after the vendor updates. You shouldn't ever get a conflict unless someone added the same package, and at that point it's easy: just take the latest one and be done.
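A minimal sketch of that rebase flow (branch names are illustrative):

    # update local refs, then replay the feature branch on top of master,
    # which already contains the vendor refresh commit
    git fetch origin
    git rebase origin/master
    # the PR now shows only your own commits, placed after the vendor update
    git push --force-with-lease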
It sounds like there are process issues to work out here, more than anything to worry about in /vendor management.

Related

Should I commit vendor directory with go mod?

I am using go modules on go1.12 to handle my Go dependencies. Is it best practice to also commit the vendor/ directory into version control?
This is somewhat related to Is it best-practice to commit the `vendor` directory? which asks this question in the case of using dep. With dep, committing vendor/ is the only way to get truly reproducible builds. What about for go modules?
I'd like to give some arguments in favour of committing vendor, go.mod and go.sum.
I agree with the accepted answer's arguments that it's technically unnecessary and bloats the repo.
But here is a list of contra-arguments:
Building the project doesn't depend on some code being available on GitHub/GitLab/... or the Go proxy servers. Open source projects may disappear because of censorship, authors' incentives, licensing changes, or other reasons I can't currently think of. This has happened on npm, the JavaScript package manager, and it broke many projects. Not in your repo, not your code.
We may have used internal or 3rd party Go modules (private) which may also disappear or become inaccessible, but if they are committed in vendor, they are part of our project. Nothing breaks unexpectedly.
Private Go modules may not follow semantic versioning, which means the Go tools will rely on the latest commit hash when fetching them on-the-fly. Repo history may be rewritten (e.g. rebase) and you, a colleague or your CI job may end up with different code for the dependencies they use.
Committing vendor can improve your code review process. Typically we always commit dependency changes in a separate commit, so they can be easily viewed if you're curious.
Here's an interesting observation related to bloating the repo. If I'm doing a code review and a team member has included a new dependency with 300 files (or updated one with 300 files changed), I would be pretty curious to take a deep dive into that and start a discussion about code quality, the need for this change, or alternative Go modules. This may actually lead to a decrease in your binary size and overall complexity.
If I just see a single new line in go.mod in a new Merge Request, chances are I won't even think about it.
CI/CD jobs which perform compilation and build steps need not waste time and network downloading the dependencies every time the CI job is executed. All needed dependencies are local and present (go build -mod=vendor; see the sketch after this list).
These are off the top of my head; if I remember something else, I'll add it here.
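A small illustration of the last two points, with a made-up private module path and commit hash in go.mod:

    # go.mod pins a non-semver private module via a pseudo-version built from a commit hash;
    # if that commit is rewritten upstream, the pin may no longer resolve:
    #   require internal.example.com/billing v0.0.0-20190301103000-abcdef123456

    # with vendor/ committed, CI builds and tests without touching the network
    go build -mod=vendor ./...
    go test -mod=vendor ./...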
Unless you need to modify the vendored packages, you shouldn't commit them. Go modules already give you reproducible builds, as the go.mod file records the exact versions and commit hashes of your dependencies, which the go tool will respect and follow.
The vendor directory can be recreated by running the go mod vendor command, and it's even ignored by default by go build unless you ask it to use it with the -mod=vendor flag.
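A minimal sketch of that flow:

    # record exact versions and checksums in go.mod and go.sum
    go mod tidy
    # recreate vendor/ from go.mod at any time
    go mod vendor
    # build against the vendored copies instead of downloading modules
    go build -mod=vendor ./...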
Read more details:
Go wiki: How do I use vendoring with modules? Is vendoring going away?
Command go: Modules and vendoring
Command go: Make vendored copies of dependencies

How to version products inside monorepo?

I have been educating myself about monorepos as I believe it is a great solution for my team and the current state of our projects. We have multiple web products (Client portal, Internal Portal, API, Core shared code).
Where I am struggling to find the answer I want is versioning.
What is the versioning strategy when all of your projects and products are inside a monorepo?
1 version fits all?
Git sub-modules with independent versioning (which kind of defeats the point of having a monorepo)?
Other strategy?
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change made to a dependency/shared module?
What is the versioning strategy when all of your projects and products are inside a monorepo?
I would suggest that one version fits all for the following reasons:
When releasing your products you can tag the entire branch as release-x.x.x, for example (a short sketch follows this list). If bugs come up, you wouldn't need to check "which version of XXX was YYY using?"
It also makes it easier to force that version x.x.x of XXX uses version x.x.x of YYY. In essence, keeping your projects in sync. How you go about this of course depends on what technology your projects are written in.
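A minimal sketch of the tagging idea from the first point (the tag name is illustrative):

    # one annotated tag covers every product in the monorepo at that commit
    git tag -a release-1.4.0 -m "Release 1.4.0"
    git push origin release-1.4.0
    # later, reproduce exactly what shipped, for all projects at once
    git checkout release-1.4.0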
And from a CI perspective, when you commit something in project A, should you launch the whole suite of tests in all of the projects to make sure that nothing broke, even though there was not necessarily a change made to a dependency/shared module?
If the tests don't take particularly long to execute, no harm can come from running everything, and I would definitely recommend it. The more often your tests run, the sooner you can uncover time-dependent or environment-dependent bugs.
If you do not want to run tests all the time for whatever reason, you could query your VCS and write a script which conditionally triggers tests depending on what has changed. This relies heavily on integration between your VCS and your CI server.
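A minimal sketch of such a script, assuming each product lives in its own top-level directory (directory names and the test command are illustrative):

    # list the paths touched since the branch diverged from master
    changed=$(git diff --name-only origin/master...HEAD)

    # run a project's tests only if its own directory or the shared core changed
    for project in client-portal internal-portal api core; do
        if echo "$changed" | grep -q "^$project/\|^core/"; then
            echo "Running tests for $project"
            (cd "$project" && make test)
        fi
    done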

Userfrosting best practice for helper functions

What would be the best practice to have custom code (library of functions) in a project using userfrosting?
As of now, I modify existing userfrosting controllers, which bloats the nice concise code.
I guess there is a nice way to keep custom functions in a place which will not interfere with UserFrosting's code and thereby not be affected much during UserFrosting upgrades.
At the moment, I'd like to have some custom functions for notifications, barcodes, etc.
I guess using a vendor folder under Composer would be ideal? If so, how would I go about it?
Does UserFrosting have any extensibility mechanism like Symfony?
Any help / pointer is appreciated!
Thanks!
As of version 0.3.1, there is no clean way to separate the core shipped code from developer-implemented code. For minor updates within a version (so, hotfixes to 0.3.1), the best way to keep up-to-date is by using git to make your project a fork of the UserFrosting repository.
So for example, you might have spurgeon/brood-crm (your project repo) as a fork of userfrosting/UserFrosting. You can then set userfrosting/UserFrosting as an upstream remote for your repo. Whenever a hotfix is released for userfrosting/UserFrosting, you can sync your fork with the upstream. This will pull changes to the main repo into your project, and give you a chance to resolve any merge conflicts (hopefully, there won't be any).
For people who are not familiar with the distinction between git and GitHub, I should point out that you can do all of this locally, without publishing your fork on GitHub.
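A minimal sketch of that fork-and-sync workflow, using the example repos above (the branch name is illustrative):

    # one-time setup: add the main UserFrosting repo as an upstream remote
    git remote add upstream https://github.com/userfrosting/UserFrosting.git

    # whenever a hotfix is released upstream, pull it into your project
    git fetch upstream
    git merge upstream/master    # resolve any merge conflicts with your own code here
    git push origin master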
UserFrosting 4 will (finally) have a modular, fully extendable design. Rather than having to directly modify the shipped code, you will be able to override the core routes, templates, schema, assets, etc. in a separate directory. Upgrading from version 0.3.x to version 4, however, will probably need to be done manually.

What is the most effective way to lock down external dependency "versions" in Golang?

By default, Go pulls imported dependencies by grabbing the latest version in master (GitHub) or default (Mercurial) if it cannot find the dependency on your GOPATH. And while this workflow is quite simple to grasp, it has become somewhat difficult to tightly control. Because any software change incurs some risk, I'd like to reduce the risk of this potential change in a manageable and repeatable way and avoid inadvertently picking up changes of a dependency, especially when running clean builds via a CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a 3rd party folder and managing updates (à la Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the go 1.5 vendor/ experiment to learn about how go might deal with the problem in future versions.
You might find the way Camlistore does it interesting.
See the third party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third party components can be updated under the control of the Camlistore developers.
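A very rough sketch of the general idea, not Camlistore's actual scripts (the dependency path, revision, and import prefix are illustrative):

    # snapshot an upstream repo into third_party/ at a known revision, dropping its history
    git clone https://github.com/some/dependency third_party/github.com/some/dependency
    (cd third_party/github.com/some/dependency && git checkout a1b2c3d && rm -rf .git)

    # rewrite imports so the project builds against the checked-in copy
    grep -rl '"github.com/some/dependency' --include='*.go' . \
        | xargs sed -i 's|"github.com/some/dependency|"myproject/third_party/github.com/some/dependency|g'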
There is a project to help you manage your dependencies. Check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it met the concerns I mentioned in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing mechanism of go get and introduces the ability to pin (godep save) and restore (godep restore) based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
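A minimal sketch of the godep workflow described above (run from the project root):

    # snapshot the exact revisions currently in the GOPATH into Godeps/Godeps.json
    # (newer godep versions also copy the source into vendor/)
    godep save ./...

    # on another machine or CI server, check the pinned revisions back out into the GOPATH
    godep restore

    # build and test against the pinned dependencies
    godep go build ./...
    godep go test ./...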
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The 3rd party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't do 'go get -u', nothing touches the 3rd party repositories already sitting on your hard disk.
Effectively, your external, locally cloned, dependencies are always locked down by default.
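A minimal sketch of pinning a dependency that go get has already cloned (the package paths and revision are illustrative):

    # the clone created by 'go get' is an ordinary git repository under GOPATH
    cd "$GOPATH/src/github.com/some/dependency"

    # pin it to a known-good revision; nothing moves it unless you run 'go get -u'
    git checkout a1b2c3d

    # rebuild your project against the pinned revision
    cd "$GOPATH/src/myproject"
    go build ./...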

Maintaining upstream vendor source with Xcode and SVN

Question: What is the best way to maintain a project based on another OSS project, through Xcode and version managed by SVN?
I'd like to start a fork (?) of a reasonably popular open source project (it's allowed). Mostly, I want to build my own user interface written in Cocoa/ObjC for it and throw in a few custom features of my own as well.
Now, this OSS project isn't exactly small. The project itself has over 3000 files, and the build process is pretty intense, consisting of multiple stages and steps, which need to compile build tools, run those, then compile the results.
All this is fine and dandy in Xcode, since it's easy enough to set up build phases and rules to handle everything.
What I'm not clear on is how best to manage patches from upstream. They are constantly working on the project, and I'd like to be able to keep up to date with those patches as easily as possible, as many of the diff files sometimes affect up to a hundred (!) files at once.
So maintaining a pristine unmodified copy of that source tree so I can apply patches to it seems like a smart thing to do, because I really don't want to be sorting through hundreds of files every few weeks merging patches by hand.
What I'm thinking of doing in this regard is:
1) Set up an "upstream" SVN repo to hold a copy of the upstream source, plus the bare minimum required to compile it in Xcode (so an xcproject, a few xcconfigs, some prefix header files and that's it)
2) Set up my own "downstream" SVN repo where I do all my work and apply my own modifications.
Whenever upstream releases a patch, I can apply it to #1 then synchronize across to #2, and deal with any issues created by my own modifications.
What I'm not clear about is whether this is a sane way of handling things, or if there's some better practice I should be following.
Is this the best way to handle things, or should I be looking at doing this some other way?
In the SVN world this was named "Vendor Branches" a long time ago and has been used intensively by many teams (you can additionally Google this phrase).
Technically it's
one SVN repo
at least one special branch (special in terms of usage, nothing more) which, via svn:externals, is linked to the 3rd-party repo of upstream code
your place for changes (trunk or any other location; I prefer trunk), initially created as a copy of the vanilla code, where you perform all of your code hacks
If (or rather "when") the vendor branch gets updates from upstream, you just merge the branch into your place, integrate the changes, and continue to work.
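A rough sketch of the classic vendor-branch flow in svn commands (here the vanilla code is imported directly rather than linked with svn:externals; URLs and paths are illustrative):

    # vendor branch: a vanilla, unmodified copy of the upstream code
    svn import upstream-1.0/ https://svn.example.org/repo/vendor/upstream -m "Import upstream 1.0"

    # your place for changes: trunk starts as a copy of the vanilla code
    svn copy https://svn.example.org/repo/vendor/upstream https://svn.example.org/repo/trunk/upstream \
        -m "Seed trunk from the vendor branch"

    # when upstream releases an update, refresh the vendor branch, then merge it into your working copy of trunk
    svn merge https://svn.example.org/repo/vendor/upstream trunk/upstream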
