I'm very new to Go as well as Iris (a Go web framework). Right now I'm playing with them and trying to understand whether they fit my needs. As I understand it, after we complete an Iris project, what we have is a bunch of .go files. Then we compile them and get one executable. How should we deploy this output? Simply put it somewhere in the file system and run it (probably as a service on Windows or a background job on Linux)? Is it that simple?
Go allows very simple deployment with a standalone binary you can push to all servers without worrying about available libraries:
Compile your code for the targeted operating system
Push the executable to your server
Run it with whatever you want: a service, supervisord, ...
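For example, a minimal sketch of the first two steps, cross-compiling from a Unix-like development machine for a Linux server (the binary name, server address, and target path are illustrative assumptions):

GOOS=linux GOARCH=amd64 go build -o myapp .
scp myapp user@yourserver:/usr/local/bin/myapp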
A good read: Go in production.
Depending on your project's root layout, the build is a bit different. I prefer to structure my project like this:
AppName
    Models
    Services
        service1
        service2
    Helpers
    Controllers
    Web
        Views
        Assets
        Locals
        main.go
To build, go to the root directory.
On Windows, for development, you should execute:
go run web\main.go
For production, when the project is compiled:
set GOOS=linux (if your server is Linux)
go build web\main.go
After that you will see a binary file.
When you put the file on a Linux server, it's better to define a service for the binary, so it runs on startup and restarts on any error.
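For example, a minimal sketch of a systemd unit for this (the app name and install path are illustrative assumptions), saved as /etc/systemd/system/appname.service:

[Unit]
Description=AppName web service
After=network.target

[Service]
ExecStart=/usr/local/bin/appname
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable and start it with: systemctl enable --now appname.service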
I have already installed the gin framework in a different folder on my desktop named Gingo. I am learning how to build a RESTful web API with the Gin framework, starting with the implementation of the backend code needed to support our Go Music.
But I have created another folder on my desktop for this Go Music named backend, so do I have to install the gin framework in this folder as well?
The project can be found at https://github.com/gin-gonic/gin.
I think you must install it in every project, in my opinion, because a framework in Golang is just a third-party library. But if you want to install it on your system you can try this. Maybe it can be a new journey in your programming.
The way you use external libraries and frameworks in Go is by using Go modules. Initialise your project by running go mod init name-of-project in the backend folder (or whatever the root folder is for your Go code).
Now, if you want to add gin to your project, you can run go get github.com/gin-gonic/gin, which adds gin to the dependencies of your project (you can see all dependencies in the go.mod file in the project root).
The gin code will be downloaded and placed in the module cache (the pkg/mod folder in your GOPATH, often ~/go). This way the code has to be downloaded only once, and every time you import it, the already-present code is used. You do have to add it to the dependencies of each project, though.
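For example, a minimal sketch of the whole flow, assuming your project folder is named backend:

go mod init backend
go get github.com/gin-gonic/gin

Then, in main.go:

package main

import "github.com/gin-gonic/gin"

func main() {
	// A minimal gin server; Run() listens on :8080 by default.
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(200, gin.H{"message": "pong"})
	})
	r.Run()
}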
For more information about Go modules: https://zetcode.com/golang/module/
Currently I am trying to run an e2e application in an Nx monorepo.
I am using a shared plugin file and support folder, pulling in a config file depending on the environment and setting the baseUrl in the environment file. It appears to run the first stage of the tests, where it talks to an API to create a user, but then crashes the actual test when trying to access the main site.
In this case, a development URL. The aim is to allow testing on a local environment once it has been created with an Nx application, then @nrwl/cypress:cypress.
I changed it over to a node command that runs cypress open --project with a different env file per environment, for each project that covers a different area of the site.
Now it appears that it either fails trying to mkdir in a different hard-disk location, or it runs partway through the test, then closes the test and reports that there are no tests, with different projects selected!
Any help would be much appreciated.
I have a config file for my go program, and clearly it has different values when I run locally vs on my server.
My boss recently asked me to switch deployment methods so that I run "go build" locally, and then deploy that build. Thus, I need to switch out my global config file each time I build.
Unfortunately, I have a piss-poor memory, and I'll almost certainly forget to switch these files out. I would love it if there were a way (maybe in a build script file, or anything) to use a different config file when building, as opposed to when running "go run main.go" as I usually do while working locally on development.
Does anyone have any ideas or best practices here?
Note: I already gitignore this file, but since building basically solidifies the whole program (including these set variables) into a single file, just having another config file on the server wouldn't do.
Thanks all!
Use build constraints.
Here's an example:
config.go

//go:build !live

package main

// Add your local config here.

config_live.go

//go:build live

package main

// Add your server config here.
Use go build to build the local version. Use go build -tags live to build the server version.
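For illustration, suppose each file defines the same constant with a different value (the configURL name and its values here are hypothetical, not from the original answer):

// In config.go (built by default):
const configURL = "http://localhost:8080"

// In config_live.go (built with -tags live):
const configURL = "https://api.example.com"

Exactly one of the two files is compiled into any given binary, so the rest of the program can reference configURL without any runtime switching.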
I am trying to use Specflow with Playwright in order to do BDD on a portal app we developed, but I am facing a small problem.
The Specflow project is a separate project from the ASP.NET Core server that hosts the API of the portal app (the front end is in Vue). Since the tests point to a specific URL (currently localhost), before running the tests I need to run the ASP.NET Core & Vue project locally. Otherwise, Specflow & Playwright will not be able to run the tests (as they will not find the localhost).
Is there any way I can force the Web Server project to run? I tried to run it from outside Visual Studio with the dotnet build and then dotnet run commands, but somehow they are missing parameters (that exist when running from inside VS), and apart from that, these commands would somehow have to be triggered when the tests are run.
I have seen solutions like creating a Docker image from a Docker Compose file in order to pack a .NET project & server into it before running the Specflow tests, and then, in the BeforeTestRun hook, using FluentDocker to spin up the server, but I am not quite sure that is the easiest (or best) solution.
Does anyone know how I can trigger running the .NET Core project (with the Vue pages)?
This is actually a pretty big question with a pretty big answer; however, this is well-trodden ground. The issue isn't so much a Specflow issue as a general automated testing issue. Development practices like continuous integration and continuous delivery can help. Each one is too big for a single question, but I can answer this in more general terms.
In its simplest form, running automated tests locally involves these steps:
Build the application
Deploy the application to a real web server
Run tests
I'm going to assume you are developing in a Windows environment, however every operating system has some sort of command line scripting solution available. The scripting language might change, but the overall idea will not.
Configure a web server. In Windows, this would be Internet Information Services (IIS).
Add a new "application" (or "IIS app" as some people call it) to your localhost web server. Point the physical directory to the root directory for the web project. Repeat this for each web site or web app your system requires.
Write a PowerShell script that gives you an easy way to build and deploy the applications to your local web server.
This script should use publish profiles set up in Visual Studio, which allows you to publish directly from Visual Studio before invoking tests manually through Test Explorer.
Write a PowerShell script used as a "harness" script to coordinate building, deploying locally, and then invoking dotnet test.
Running tests locally just requires a single line of PowerShell to invoke your test harness script:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create
# Skip deploying in case web apps haven't changed:
.\Scripts\Run-Tests.ps1 -solutionDir . -tags BlogPosts,Create -deploy:False
I have a golang web app, and I need to deploy it. I am trying to figure out the best practice for running a Go app in production. The way I am doing it now is very simple: just upload the built binary to production, without actually having the source code on prod at all.
However, I found an issue: my source code actually reads a config/<local/prod>.yml config file from the source tree. If I just upload the binary without the source code, the app can't run because it is missing its config. So I wonder what the best practice is here.
I thought about a couple solutions:
Upload the source code along with the binary, or build from the source on the server.
Only upload the binary and the config file.
Move the YAML config to environment variables; but I think with this solution the code will be less structured, because if you have lots of config values, env variables will be hard to manage.
Thanks in advance.
Good practice for deployment is to have a reproducible build process that runs in a clean room (e.g. a Docker image) and produces artifacts (binaries, configs, assets) to deploy; ideally it also runs some tests that prove nothing was broken since the last time.
It is a good idea to package the service - the binary and all the files it needs (configs, auxiliary files such as systemd, nginx, or logrotate configs, etc.) - into some sort of package, be it a package native to your target environment's Linux distribution (DPKG, RPM), a virtual machine image, a Docker image, etc. That way you (or someone else tasked with deployment) won't forget any files. Once you have a package, you can easily verify it and deploy it to the production environment using the native tools for that packaging format (apt, yum, docker, ...).
For configuration and other files, I recommend making the software read them from well-known locations, or at least having an option to pass the paths as command-line arguments. If you deploy to Linux, I recommend following the FHS (Filesystem Hierarchy Standard; tl;dr: configuration in /etc/yourapp/, binaries in /usr/bin/).
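A minimal sketch of that pattern in Go (the default path here is a hypothetical example):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// Default to a well-known location; allow overriding it on the command line.
	configPath := flag.String("config", "/etc/yourapp/config.yml", "path to the config file")
	flag.Parse()

	data, err := os.ReadFile(*configPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read config:", err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d bytes of config from %s\n", len(data), *configPath)
}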
It is not recommended to build the software from source in the production environment, as the build requires tools that are normally unnecessary there (e.g. go, git, dependencies, etc.). Installing and running these requires more maintenance and can introduce security and performance risks. Generally you want to keep your production environment as minimal as is required to run the application.
I think the most common deployment strategy for an app is trying to comply with the 12-factor-app methodology.
So, in this case, if your YAML file is the configuration file, then it would be better to put the configuration into environment variables (ENV vars). That way, when you deploy your app in a container, it is easier to configure the running instance from the ENV vars than by copying a config file into the container.
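A minimal sketch of reading configuration from ENV vars in Go (the variable names and default values are hypothetical):

package main

import (
	"fmt"
	"os"
)

// getenv returns the environment variable's value, or a fallback if it is unset.
func getenv(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

func main() {
	// In a 12-factor app, these values come from the deployment environment.
	dbURL := getenv("DATABASE_URL", "postgres://localhost:5432/app")
	port := getenv("PORT", "8080")
	fmt.Println("db:", dbURL, "port:", port)
}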
However, when writing system software, it is better to comply with the file system hierarchy structure defined by the OS you are using. If you are using a Unix-like system, you can read about the hierarchy structure by typing man hier in the terminal. Usually, I install the compiled binary into the /usr/local/bin directory and put the configuration inside /usr/local/etc.
For the deployment on the production, I created a simple Makefile that will do the building and installation process. If the deployment environment is a bare metal server or a VM, I commonly use Ansible to do the deployment using ansible-playbook. The playbook will fetch the source code from the code repository, then build, compile, and install the software by running the make command.
If the app will be deployed in containers, I suggest that you create an image and use multi-stage builds, so that the source code and the other tools needed while building the binary are not present in the production environment and the image size is smaller. But, as I mentioned before, it is common practice to read the app configuration from ENV vars instead of a config file. If the app has a lot of things to configure, the file could be copied into the image while building it.
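For example, a minimal sketch of such a multi-stage Dockerfile (the image tags and paths are illustrative assumptions):

FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

FROM alpine:3.19
COPY --from=build /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]

Only the second stage ships to production, so the Go toolchain and the source code never leave the build environment.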
While we wait for the proposal cmd/go: support embedding static assets (files) in binaries to be implemented (see the current proposal), you can use one of the static-asset embedding tools listed in that proposal.
The idea is to include your static file in your executable.
That way, you can distribute your program without being dependent on your sources.
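(That proposal has since landed as the embed package in Go 1.16.) A minimal sketch of the idea using go:embed, assuming a config/prod.yml file exists next to the source at build time (a hypothetical path echoing the earlier question):

package main

import (
	_ "embed"
	"fmt"
)

//go:embed config/prod.yml
var configYAML string

func main() {
	// The file's contents are baked into the binary at build time,
	// so nothing needs to be shipped alongside the executable.
	fmt.Printf("embedded %d bytes of config\n", len(configYAML))
}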