go coverage sometimes fails to write file - go

Our Jenkins build has a line like
go test -p=4 -count=1 -timeout=3m -v -coverprofile=coverage.out ./...
Usually this is fine but sometimes it fails with an error like
vet: $WORK/b1704/cache.cover.go:1:1: expected 'package', found 'EOF'
The build slave machine has about 5 GB available at the start of the build, and it didn't appear to drop below 4 GB during the build, although it's possible I missed a transient spike in usage.
This is the only command executing at the time, there is no parallelisation in the build script, only what is done by go test -p itself.
Any ideas what could cause an intermittent failure like that?
go version go1.14 linux/arm64

Where does `~/Library/Application\ Support/go/env` come from?

I want to run a program on my Macbook.
First I run it with go run main.go. It compiles successfully but reports permission denied when running.
So I switch to sudo go run main.go, but it fails at the compile stage with compile: version "go1.17.8" does not match go tool version "go1.18".
Then, after some digging, I found a file named ~/Library/Application\ Support/go/env which sets GOROOT=~/sdk/go1.17.8.
I cannot figure out what leads to this.

Incremental builds not working with go build, only with go install

I'm running a project using Go Modules with 1.11.4 on Ubuntu, running in WSL.
My problem is that I'm having trouble getting incremental builds to work as I expect. Perhaps this is due to me misunderstanding how it's supposed to work, but I'd be glad if someone could clarify whether that is the case.
Just as an example, if I do go build ./... then everything is built, as expected.
If I now do go build ./... again without any changes, my expectation was that due to the incremental builds, this time nothing would be built. But it builds everything again. I tried doing go build -i ./... (even though my understanding is that -i isn't needed anymore from 1.10), but the result is the same. This has been puzzling me for some time, as after reading the documentation I indeed expected the go build command to produce incremental builds.
The other day I realized that if instead I do go install ./... first, and then go install ./... again a second time, the second time around nothing is built, as I would expect. If I change a single module and run go install ./... again, then only that module is rebuilt, again what I would expect. So this gives me incremental builds.
So my questions are
1) Did I misunderstand go build ./... and how it handles incremental builds? Do I need to use go install instead?
2) Typically, we build the modules one by one, using the -o flag to specify an output path. Using go install instead, there is no -o option to specify an output path. Is there anything I can do to achieve something similar to -o using go install?
Thanks!

How to build some package in tensorflow with debug mode

I ran into a problem I wanted to debug, so I tried to build a debug version of tensorflow using the following command:
bazel build --compilation_mode=dbg -s //tensorflow/tools/pip_package:build_pip_package
but it triggers a long-running link step in protobuf that has been going for almost a day and still hasn't finished.
My intention is to build some other package used by tensorflow in debug mode; could I configure the bazel build file to get a debug build of just that package separately?
To understand the issue better, try running the never-ending action manually:
1) Start the debug build and wait for it to get stuck in the protobuf linking action.
2) Interrupt the build (Ctrl+C).
3) Run the build again with the -s flag, so Bazel shows the command lines it executes (you could have run step 1 with the -s flag, but then there's a lot more output and it's harder to find the right information).
4) Interrupt the build again.
5) cd into the directory shown by the command and set the environment variables it lists.
6) Try running the command that got stuck (you may need to change the output paths, because they are sometimes not user-writable) and see if it still never finishes.
What you just did is run the same command Bazel was running and getting stuck on. If the command hangs in this manual run too, then the problem might be with the linker (I doubt that's the case, though). But if it succeeds, then the problem is with Bazel.

Golang - go run takes long to execute

I have a little problem: every time I run go run <filename> after making changes to my program, it takes many seconds to start executing.
I tried it with a simple program like this:
package main

import "fmt"

func main() {
	fmt.Println("Output")
}
And it took about 18 seconds to print out the result.
Any ideas on what could cause this problem?
I am on windows by the way.
Thanks in Advance
Found this while experiencing an identical issue with freshly-compiled Go binaries on macOS Catalina.
In short, the OS now scans a new (to it) binary for malware. It usually does this on first start, but I found that it was doing it 3-5 times in a row, after which the binary got whitelisted and started as normal. Obviously, once you make changes to the code and re-compile, the scans start happening again. The scan would take upwards of 20 seconds, which ruined Go's fast iteration cycle.
Solution for me was as follows:
sudo spctl developer-mode enable-terminal
Then go to Preferences -> Security & Privacy -> Privacy
In the list to the left you will now have "Developer Tools" section, which will have OSX built-in Terminal listed. Check the box to enable it, and/or add anything else you may be using to develop (iTerm, VS Code, etc)
When running binaries from those applications the scans stop and things go back to normal.
$ go run always compiles the code into a temporary binary file and then executes it, every time it is run.
To get around this you could do $ go build -i main.go, which compiles dependencies separately as .a files (I'm guessing this is the part that takes the longest, since it takes time to build the dependencies), and then execute the result with $ ./main; each execution should be faster than $ go run.
You could also run $ go get -u ./... to update all your deps; building with the -x flag will show you whether the toolchain is finding incompatible versions.
$ go install builds the command in a temporary directory, then moves it to $GOPATH/bin, so you can execute it by name, as $ main.
The last two commands require a rebuild/reinstall whenever the code changes.
If anyone is still facing this issue on Windows, you can fix it in one of two ways.
Open Virus & threat protection under Windows Security in Settings.
way 1:
turn off real-time protection
way 2:
In Virus & threat protection settings (click on Manage settings), go to Exclusions and add the following three folders:
C:\Users\anves\AppData\Local\go-build
C:\Users\anves\AppData\Local\Temp
The folder where go is installed.

`go build` rebuilds unnecessarily

go build and go run are very slow on a tiny program I have (cgo invocations in particular). I'd like go to cache the binary so that it only rebuilds when the source is newer. I would use a simple Makefile with a % rule, but the language designers claim that go's build support doesn't need Makefiles.
Is there another alternative I've overlooked? Does the go community prefer another build system, maybe hash-based instead, for caching and reusing build products?
go build and go install will soon (Go 1.10, Q1 2018) be much faster: see this thread and this draft document.
The go command now maintains a cache of built packages and other small metadata (CL 68116 and CL 75473). The cache defaults to the operating system-defined user cache directory but can be moved by setting $GOCACHE.
Run "go env GOCACHE" to see the current effective setting. Right now the go command never deletes anything from the cache. If the cache gets too big, run "go clean -cache" instead of deleting the directory. That command will preserve the cache's log.txt file. In a few weeks I'll ask people to post their log.txt files to a Github issue so that we can evaluate cache size management approaches.
The main effect of the build cache is that commands like "go test" and "go build" run fast and do incremental builds always, reusing past build steps as aggressively as possible.
You do not have to use "go test -i" or "go build -i" or "go install" just to get fast incremental builds. We will not have to teach new users those workarounds anymore. Everything will just be fast.
Note that go install won't install dependencies of the named packages: see "What does go build build?".
I wrote a tool that happens to solve this as a side effect. go build alone will not check whether the executable it produces is already up to date. go install does, and if you tweak it to install to a location of your choice, you'll get the desired result, similar to go build.
You can see the behaviour you describe by doing something like this:
$ go get -d github.com/anacrolix/missinggo/cmd/nop
$ time go run "$GOPATH"/src/github.com/anacrolix/missinggo/cmd/nop/*.go
real 0m0.176s
user 0m0.142s
sys 0m0.048s
That's on a warm run. go run will link on every invocation, just as go build would. Note that github.com/anacrolix/missinggo/cmd/nop is a program that does absolutely nothing.
Here's invoking the same package, using my tool, godo:
$ time godo github.com/anacrolix/missinggo/cmd/nop
real 0m0.073s
user 0m0.029s
sys 0m0.033s
For larger programs, the difference should be more pronounced.
So in summary, your standard tooling option is to use go install, or an alternative like godo.