how to use different config files on build? - go

I have a config file for my go program, and clearly it has different values when I run locally vs on my server.
My boss recently asked me to switch deployment methods so that I run "go build" locally, and then deploy that build. Thus, I need to switch out my global config file each time I build.
Unfortunately, I have a piss-poor memory, and I'll almost certainly forget to switch these files out. I would love it if there were a way (maybe in a build script file, or anything) to use a different config file when building, as opposed to running "go run main.go" as I usually do when working locally on development.
Does anyone have any ideas or best practices here?
Note: I already gitignore this file, but since building basically solidifies the whole program (including these set variables) into a single file, just having another file on the server wouldn't do.
Thanks all!

Use build constraints.
Here's an example:
config.go
//go:build !live

package main
// Add your local config here.

config_live.go
//go:build live

package main
// Add your server config here.
Use go build to build the local version. Use go build -tags live to build the server version.
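For concreteness, here is a fuller sketch of the two files plus a consumer, using a hypothetical ServerAddr constant (the name and values are illustrative, not from the question):

config.go

//go:build !live

package main

// Local development value, compiled in by a plain `go build` or `go run .`.
const ServerAddr = "localhost:8080"

config_live.go

//go:build live

package main

// Server value, compiled in by `go build -tags live`.
const ServerAddr = "12.34.56.78:80"

main.go

package main

import "fmt"

func main() {
	// Whichever file matches the build tags supplies ServerAddr.
	fmt.Println("connecting to", ServerAddr)
}

Because the tag decides which file is compiled in, there is no config file to remember to swap before a build.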

Related

Deployment strategy for golang app, how to run golang app on production

I have a golang web app, and I need to deploy it. I am trying to figure out the best practice for running a golang app on production. The way I am doing it now is very simple: just upload the built binary to production, without having the source code on prod at all.
However, I found an issue: my source code reads a config/<local/prod>.yml config file from the source tree. If I just upload the binary without the source code, the app can't run because it is missing the config. So I wonder what the best practice is here.
I thought about a couple solutions:
Upload the source code and the binary, or build from source on the server.
Only upload the binary and the config file.
Move the YAML config to environment variables; but I think with this solution the configuration will be less structured, because if you have lots of settings, env variables are hard to manage.
Thanks in advance.
Good practice for deployment is to have a reproducible build process that runs in a clean room (e.g. a Docker image) and produces the artifacts to deploy (binaries, configs, assets); ideally it also runs some tests that prove nothing was broken since the last time.
It is a good idea to package the service - the binary and all the files it needs (configs; auxiliary files such as systemd, nginx, or logrotate configs; etc.) - into some sort of package, be it a package native to your target environment's Linux distribution (DEB, RPM), a virtual machine image, a Docker image, etc. That way you (or someone else tasked with deployment) won't forget any files. Once you have a package, you can easily verify it and deploy it to the production environment using the native tools for that packaging format (apt, yum, docker...).
For configuration and other files, I recommend making the software read them from well-known locations, or at least having an option to pass the paths as command-line arguments. If you deploy to Linux I recommend following the FHS (tl;dr: configuration to /etc/yourapp/, binaries to /usr/bin/).
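For instance, a Go service can default to a well-known path and still accept an override on the command line. A minimal sketch, assuming a hypothetical config.yml under the /etc/yourapp/ location mentioned above:

package main

import (
	"flag"
	"log"
	"os"
)

func main() {
	// Default to the well-known location; let -config override it.
	configPath := flag.String("config", "/etc/yourapp/config.yml", "path to the config file")
	flag.Parse()

	data, err := os.ReadFile(*configPath)
	if err != nil {
		log.Fatalf("reading config: %v", err)
	}
	log.Printf("loaded %d bytes of config from %s", len(data), *configPath)
}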
It is not recommended to build the software from source in the production environment, as the build requires tools that are normally unnecessary there (e.g. go, git, dependencies, etc.). Installing and running these requires more maintenance and might introduce security and performance risks. Generally you want to keep your production environment as minimal as is required to run the application.
I think the most common deployment strategy for an app is trying to comply with the 12-factor-app methodology.
So, in this case, if your YAML file is the configuration file, it would be better to put the configuration in environment variables (ENV vars). That way, when you deploy your app in a container, it is easier to configure the running instance from ENV vars than to copy a config file into the container.
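In Go, reading such settings takes only a few lines; a minimal sketch, where the DB_HOST and DB_PORT variable names are hypothetical:

package main

import (
	"log"
	"os"
)

// getenv returns the value of the named variable, or def when it is unset or empty.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	dbHost := getenv("DB_HOST", "localhost")
	dbPort := getenv("DB_PORT", "5432")
	log.Printf("connecting to %s:%s", dbHost, dbPort)
}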
However, when writing system software, it is better to comply with the file system hierarchy defined by the OS you are using. If you are using a Unix-like system, you can read about the hierarchy by typing man hier in the terminal. Usually, I install the compiled binary into the /usr/local/bin directory and put the configuration inside /usr/local/etc.
For deployment to production, I create a simple Makefile that does the building and installation. If the deployment environment is a bare-metal server or a VM, I commonly use Ansible and run the deployment with ansible-playbook. The playbook fetches the source code from the repository, then builds, compiles, and installs the software by running the make command.
If the app will be deployed in containers, I suggest creating an image with a multi-stage build, so that the source code and the other tools needed while building the binary do not end up in the production environment, and the image stays smaller. But, as I mentioned before, it is common practice to read the app configuration from ENV vars instead of a config file. If the app has a lot of things to configure, the file could be copied into the image while building it.
While we wait for the proposal cmd/go: support embedding static assets (files) in binaries to be implemented (see the current proposal), you can use one of the static-asset embedding tools listed in that proposal.
The idea is to include your static file in your executable.
That way, you can distribute your program without being dependent on your sources.
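Note: the proposal has since been implemented; as of Go 1.16 the standard library's embed package does this directly. A minimal sketch, assuming a config.yml next to the source file:

package main

import (
	_ "embed" // enables the //go:embed directive (Go 1.16+)
	"fmt"
)

// The contents of config.yml are compiled into the binary at build time.
//
//go:embed config.yml
var configData []byte

func main() {
	fmt.Print(string(configData))
}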

Share Git repository directory across multiple build definitions

When a private agent build starts in VSTS, it gets assigned a directory, e.g. C:\vstsagent_work\1\s
Is there a way to set this to a different path? On other CI servers, like Jenkins, I can define a custom workspace for a job. I'm dealing with a huge monorepo and have dozens of build definitions around the same repository. It makes sense (to me anyway) to share a single directory on the build agent computer.
The benefit to me is that my builds can use pre-built components from upstream repositories, if they have already been built.
Thanks for any help
VSTS build always creates a working directory per build definition. This leaves you two options:
Create a single build definition and use conditions on steps to skip those that aren't needed for a given run. This allows you to use the standard steps and may require a PowerShell script to figure out which steps to run and which to skip. Set variables from PowerShell using the special logging commands.
Disable the get sources step and add a step that manually fetches sources. You'll need to clean the working directory, check out the right commit, and basically replicate the actions of the get sources step manually. It may require some fiddling to get the behavior right for normal builds, pull request builds, etc. That way you can take full control over the location where sources are checked out.
I'd also recommend you investigate the 2017 project formats that use the new <PackageReference> element in the project files to fetch packages. The new system supports configuring a version range, which can always fetch the latest available version of a package. It's a better long-term solution.
No, it isn't available in the VSTS build system.
You can change the agent's working directory (C:\vstsagent_work) by re-configuring the agent and specifying another working folder, but it won't use the same source folder for different build definitions; the subfolders will still be 1, 2, 3, and so on.

How to run go generate on Heroku

I am using http://github.com/tmthrgd/go-bindata to embed static files and templates within the Go executable. It requires running go generate, which runs Go code that reads each file and writes its binary representation out as a standard Go file. go generate has to be run before the build process.
Is there a chance to configure Heroku to handle this?
go generate should be run locally while developing, not on Heroku. If you run it on Heroku it will lead to very hard-to-debug issues: if go generate has unexpected results, you won't be able to inspect them easily.
You could run go generate with a tool like modd or with a git hook.
Having the results of go generate tracked by git also means that you can track which changes affected generated code.
In a language like Ruby it might be customary to run bundle install on the server and omit dependencies from git. For Go programs this is not so: dependencies should be vendored and tracked by git, and the same goes for generated code.
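For reference, the generate directive itself lives in a Go source file, and its output gets committed like any other source. A minimal sketch, assuming hypothetical templates/ and static/ directories:

package main

// Running `go generate ./...` locally executes the directive below and
// writes bindata.go; commit that file to git along with the rest of the
// source so that no generation step is needed on Heroku.
//
//go:generate go-bindata -o bindata.go -pkg main templates/... static/...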
The rest is not at all advised for this case, and I would never do something like this, but you could:
fork the Go Heroku buildpack
add a line to run go generate (around this line)
use your modified Go Heroku buildpack
deploy your app
(more on buildpacks)

Teamcity build for live after successful CI build?

I have an environmental build system which currently has the following environments:
dev
ci
uat
live
Just to be clear: by environmental build I mean there is a set of properties files for each environment, and during the build these properties are used to template project files, so a database server may be "localhost" in the dev environment but "12.34.56.78" on CI. When starting the build, you can give it an environment property and it will build for something other than dev (the default environment).
Now the CI build works fine and spits out the artifacts correctly; however, as the build is CI, all of it is configured to work in that environment, and I am looking at being able to trigger a build for live or uat when a CI build succeeds. This would then run the same build but with a different build argument.
I noticed there are a few mechanisms for this. One seems to be an automatic trigger on completion, which could trigger another build, but this seems to require two separate build configurations that are essentially identical other than the build argument being "environment=live" rather than "environment=ci". Another is adding a second build step that is the same as the first but takes a different argument and outputs the live artifacts elsewhere, but this would always run, much like the first option.
The final option I could see was to trigger a manual build once I have a live candidate, but it is unclear how to set a build argument; I could make a build parameter, but it doesn't seem to get pulled into the build script the way a command-line build argument would.
I will see if there is a better answer, but after writing this I found that using Build Parameters seems the best option; these can then be embedded anywhere within your build configuration using %environment% (or %your_parameter_here%).
These can then be set up to present a form element for manual builds, so you can easily create a build for a different environment.

How to add some prebuild steps to jenkins?

I am a Jenkins newbie and need a little hand-holding, because we only maintain parts of our app in SVN. I have a basic Jenkins install set up.
This is what I do to get a local DEV environment set up, and I need that translated to Jenkins in order to make a build:
Do an SVN checkout (and get the 2 folders that are under SVN)
Delete the folders
Copy over the full app from the FTP location
Do an SVN restore
Download the SQL file
Import it into MySQL
How would I get the above-mentioned steps into Jenkins? I know there are some post-build steps that I can use. I'm just not sure how to put it all together. Any help will be much appreciated.
Tell Jenkins about the SVN repository and it will check it out automatically when a new build is started. That should take care of step 1. Steps 2-6 would be build steps (i.e. execute shell commands). Basically, you can set up Jenkins to do exactly what you do on the command line, except that the first step is taken care of automatically once Jenkins knows about the repository.
Rather than trying to do these sort of things in Jenkins, you'll likely save yourself some trouble if you make use of something like Ant or NAnt to handle the complexities for your build.
I've found that doing my builds this way gives me added flexibility (i.e., if it can be done via the command line, I can use it in my build, rather than needing a Jenkins plugin to support it), and makes maintenance easier as well (since my NAnt scripts become part of the project and are checked into the VCS system, I can go back if I make a change that doesn't work out).
Jenkins has some build-history plugins, but over time I've found it easier to keep the majority of my 'build' logic and complexity outside of the CI environment and just call into it instead.
