With Bazel, why do I keep fetching wrong build artifacts from the remote cache despite using --define or --action_env?

From what I've read (the documentation seems quite sparse), you can use the --define and --action_env arguments to let Bazel build artifacts with a different 'configuration', and thus (as I would expect) not take artifacts from a configured remote cache.
Is this correct?
I'd expect this command to take artifacts from cache if executed with identical values:
bazel build \
--remote_cache=<remote-cache-details> \
--define FOO=foo \
--action_env BAR=bar \
<target>
And I'd expect a re-build to be forced if one of the variables/values provided with --define or --action_env changed.
Is that still correct?
I'm currently facing the following situation: I somehow managed to 'poison' the remote cache with artifacts built against an incompatible version of a library (glibc in my case), and now I'm getting errors when building with a configured remote cache:
...
bazel-out/k8-opt-exec-2B5CBBC6/bin/external/bzlws/generators/cpp/cpp: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by bazel-out/k8-opt-exec-2B5CBBC6/bin/external/bzlws/generators/cpp/cpp)
And I don't get this error when building without a remote cache, or when building on a system with a matching version of glibc, which is why I suspect this is a caching issue.
I know there are better ways to provide Bazel with details about the toolchain, but my question is about how Bazel decides what to look for in the cache and how that can be influenced.
I'm creating an execution log file with --execution_log_json_file, which shows that the variables I provide using --action_env actually show up, so they should be taken into account.
Also, changing the values provided with --action_env results in longer builds.
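For reference, a sketch of how two such runs can be logged and compared (file names here are placeholders):
bazel build --remote_cache=<remote-cache-details> --action_env BAR=bar \
  --execution_log_json_file=/tmp/exec_bar.json <target>
bazel build --remote_cache=<remote-cache-details> --action_env BAR=baz \
  --execution_log_json_file=/tmp/exec_baz.json <target>
diff /tmp/exec_bar.json /tmp/exec_baz.json   # the environment/inputs recorded per action should differ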
Yet I keep getting this linker error when using the cache.
Is it possible that --action_env does retrigger a build, but when it comes to linking, Bazel takes libraries (e.g. glibc) from the cache even though it didn't build them, i.e. they were built on another machine, so changing the build environment doesn't affect this problem?

--remote_instance_name is passed to the remote cache, and most implementations will use that to separate the cache keys. The REST API includes it in the URL; the gRPC API includes it in the request body.
For example, bazel-remote says this:
If the --enable_ac_key_instance_mangling flag is specified and the instance name is not empty, then action cache keys are hashed along with the instance name to produce the action cache lookup key.
--action_env and --define only affect some actions; if you're fetching source files, or the outputs of some other action those flags don't affect, the cache key stays the same.
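For example (a sketch only; the instance name value is arbitrary and the exact behavior depends on the cache implementation), you could partition the cache per glibc or base-image version:
bazel build \
  --remote_cache=<remote-cache-details> \
  --remote_instance_name=glibc-2.34 \
  --define FOO=foo \
  --action_env BAR=bar \
  <target>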

Related

go build -buildvcs does not insert vcs.revision buildinfo

This question poses problems that are distinct from those discussed here: How do you read debug VCS version info from a Go 1.18 binary?
I am having trouble getting go build -buildvcs=true to insert correct version information when building executables out of a git repo that has the following structure:
go version go1.18.4 linux/amd64
in project:
go.mod
cmd/exe1/main.go
cmd/exe2/main.go
pkg/pkg1/...
pkg/pkg2/...
(1) If I cd project;go build -buildvcs=true -o /tmp/exe1 cmd/exe1/main.go then the BuildInfo included in the exe includes Deps entries for all the dependencies of all the packages, but there is no embedded Setting with key vcs.revision, and the Dep entry for the module named in go.mod is (devel). I guess this latter issue is related to how to specify versions for modules, which I have not yet looked into, and therefore I assume it's using a default value.
(2) If I cd project/cmd/exe1; go build -buildvcs=true -o /tmp/exe1 (leaving out any relative path specifying what to build), then the BuildInfo included in the exe does NOT include Deps entries but DOES include vcs.revision.
Questions:
Is there any way to get both Deps and vcs.revision into BuildInfo?
Is this directory structure OK? The documentation for this stuff is not in "reference" format, and many important details are spread throughout the tutorials and how-tos. It's quite frustrating to get to the bottom of these behaviors.
It seems to me go should embed a vcs.revision whenever generating an executable, but before I open a bug, I wanted to get community feedback on whether this is expected behavior when specifying a relative target on the command line. I've seen that that can be an issue in general with go build.
Any pointers to the right place to read a comprehensive guide about this would be great.
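One way to see exactly what ended up embedded in each case is to inspect the binary after the build; a sketch, assuming the layout above (whether building the package path instead of main.go yields both Deps and vcs.revision may depend on the Go version):
cd project
go build -buildvcs=true -o /tmp/exe1 ./cmd/exe1   # build the package path rather than a single .go file
go version -m /tmp/exe1                           # prints the main module, Deps and any vcs.* settings actually embedded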

How can I enable Gradle Build Cache when running Gradle build with Coverity?

I have a simple Gradle project that has org.gradle.caching=true set in gradle.properties in order to enable the local build cache.
When I run the build directly (./gradlew clean build) I can see that the local build cache is being used: https://scans.gradle.com/s/ykywrv3lzik3s/performance/build-cache
However, when I run the build with Coverity (bin/cov-build --dir cov-int ./gradlew clean build) I see the build cache is disabled for the same build: https://scans.gradle.com/s/j2pvoyhgzvvxk/performance/build-cache
How is Coverity causing the build cache to be disabled, and is there a way to run a build with Coverity and the Gradle Build Cache?
You can't use the build cache with Coverity, or at least you don't want to.
The Gradle Build Cache causes compilation to be skipped:
The Gradle build cache is a cache mechanism that aims to save time by reusing outputs produced by other builds. The build cache works by storing (locally or remotely) build outputs and allowing builds to fetch these outputs from the cache when it is determined that inputs have not changed, avoiding the expensive work of regenerating them.
Were that mechanism to be used with Coverity, it would prevent cov-build from seeing the compilation steps, and hence it would be unable to perform its own compilation of the source code, which is a necessary prerequisite to performing its static analysis.
I don't know precisely how Coverity is disabling the cache (or if that is even intentional on Coverity's part), but if it didn't do so, then you would have to do so yourself, as described in the Synopsys article Cov-build using gradle shows "No files were emitted" error message, the key step of which is:
Use "clean" and "cleanBuildCache" task to remove all saved cache data which prevent full compilation.
before running cov-build.
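A sketch of what that can look like on the command line (--no-build-cache is a standard Gradle flag that overrides org.gradle.caching=true for a single invocation; the cleanBuildCache task only exists if a plugin such as the Android Gradle Plugin provides it):
./gradlew clean cleanBuildCache                                      # per the Synopsys article, if the task exists
bin/cov-build --dir cov-int ./gradlew --no-build-cache clean build   # or simply disable the cache for the cov-build run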

Artifactory GO remote not downloading zips

I'm using Artifactory 7.10.6.
go version 1.15.6 (also tested with older versions)
I am not using the jfrog cli, and would prefer not to.
I'm trying to sort out what I'm doing wrong here. I've used Artifactory to pull down content from remote connections to be stored in a local repository for other package types, but this doesn't seem to be fully working for me with Go. Disclaimer: I'm not super versed in Go...
Here is what I have setup.
a local go repo called "go-ext-release"
a remote of gocenter called "go-gocenter"
a virtual called "go-virtual" that contains only "go-ext-release"
a virtual called "go-virtual-dev" that contains "go-virtual" followed by "go-gocenter"
The idea here, of course: run a build with my GOPROXY set to "go-virtual-dev", then copy the downloaded files from go-gocenter-cache to "go-ext-release". That should get me all the files I need, so I can reset my environment, point GOPROXY to "go-virtual", and run a build.
My build pointing to "go-virtual-dev" works fine. The build succeeds and content is pulled down (mostly .mod and .info files).
I move that content to the local repo (go-ext-release) and build in a clean environment using "go-virtual", and the build fails. It says it can't access .zip files, i.e. a 404 on /github.com/gorilla/mux/@v/v1.7.4.zip.
Of course when I look for that zip, it doesn't exist.
If I take the URL it's trying to access, change the path from "go-virtual" to "go-virtual-dev", and punch it into a web browser, the correct zip file gets downloaded to the "go-gocenter-cache" repo (as expected).
I did this process for the 4 or 5 zip files the build needed (it's a small test build), and then moved the zips from the cached location over to the "go-ext-release" repo. After that, the build works using the "go-virtual" repository (i.e. the repo that just sees into our local repo).
So what am I doing wrong here? My expectation was that the initial build would have pulled all the files, zips included, to the cache as well. I know the build pulled them down because I can see them in my GOCACHE folder. It's as though it isn't using my GOPROXY to pull the zips down.
Any help would be appreciated.
Is there any command-line switch to force go to show me the exact URL it is using for pulls? I've experimented with using go get -v, but it doesn't give the full URL.
Can you try running the build against go-virtual-dev using an empty GOPATH? I believe the Go client will not trigger the module zip download if you already have it locally, which prevents Artifactory from caching it from the remote repo.
BTW, running go get -x should show you all the URLs being fetched.
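A sketch of that suggestion (the Artifactory URL is a hypothetical example; the empty GOPATH forces the zips to be fetched through the proxy instead of a local module cache, and -x prints the proxy URLs as they are requested):
export GOPROXY="https://artifactory.example.com/artifactory/api/go/go-virtual-dev"
export GOPATH="$(mktemp -d)"   # empty GOPATH, so nothing is satisfied from a local module cache
go mod download -x             # or: go get -x ./... ; both print the URLs fetched from the proxy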

Building custom Go Plugin

I'm in the process of creating a custom transformer for kustomize. However, I'm running into issues creating even the most basic Go plugin. I'm trying to follow the steps here: https://github.com/kubernetes-sigs/kustomize/blob/master/docs/plugins/goPluginGuidedExample.md
I'm using one of the plugins in mainline kustomize, i.e. secretsfromdatabase [1].
According to the documentation, the instructions I'm following are:
tmpGoPath=$(mktemp -d)
GOPATH=$tmpGoPath go install sigs.k8s.io/kustomize/kustomize
GOPATH=$tmpGoPath go build -buildmode plugin -o SecretsFromDatabase.so SecretsFromDatabase.go
cp SecretsFromDatabase.so ~/.config/kustomize/plugin/mygenerators/sopsencodedsecrets/SopsEncodedSecrets
Now when I run kustomize, I get the following error:
Error: accumulating resources: recursed accumulation [...] fails to load: plugin.Open("$HOME/.config/kustomize/plugin/mygenerators/sopsencodedsecrets/SopsEncodedSecrets"): plugin was built with a different version of package internal/cpu
What is strange is that I'm using the same git tag as the version that is installed on my system.
kustomize version tags/kustomize/v3.5.4^0
{Version:3.5.4 GitCommit:3af514fa9f85430f0c1557c4a0291e62112ab026 BuildDate:2020-01-17T14:23:25+00:00 GoOs:darwin GoArch:amd64}
[1] https://github.com/kubernetes-sigs/kustomize/tree/master/plugin/someteam.example.com/v1/secretsfromdatabase
As of now, Go plugins are very difficult to write and support, because the build environment must be identical, and in practice only the original build system can reliably build the plugins. As a result, a lot of people like you end up finding small differences in their build environments. I think the design is a bad idea, and I strongly recommend getting acquainted with the Reddit discussion here.
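If you do want to keep debugging the mismatch rather than abandon plugins, a hedged sketch for comparing the two environments: go version -m prints the toolchain and module versions baked into a Go binary, which you can compare against the toolchain you build the plugin with:
go version -m "$(command -v kustomize)"   # toolchain and dependency versions of the installed kustomize binary
go version                                # the toolchain you are using to build SecretsFromDatabase.so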

Forcing authentication on Maven release

I have a problem getting Maven to release to a Nexus server. Seemingly, it refuses to use my provided username and password (but there might be other problems as well).
When I first type 'mvn release:perform', I get a 'not authorized' error. However, some files are created on the Nexus server, namely a pom with checksums etc. When I try a second time (without changing anything), I get a different error: '400 bad request'.
When I delete the files and try again, I get the first error once again.
I have run this with the -X flag to see if I can make sense of what is happening, and I have discovered that the first time I run the command, Maven omits my username and password provided in settings.xml:
[INFO] [DEBUG] Using connector WagonRepositoryConnector with priority 0 for http://nexus.example.com/content/repositories/releases
When I run it the second time, it includes my credentials:
[INFO] [DEBUG] Using connector WagonRepositoryConnector with priority 0 for http://nexus.example.com/content/repositories/releases/ as developers
Notice it says 'as developers'
Of course, I don't know whether the fact that it prints this differently actually means anything, but it seems that way.
When I allow redeploy for the releases repository in Nexus, I always get the first variant (not authorized).
If anyone can tell me how I might force Maven to use my credentials (if that is indeed what it is not doing), or what else might be wrong, I would be very happy.
I have got it working now, by specifying in the Maven Release Plugin that it should only run deploy, and not deploy plus the site deployment as is the default.
mvn site:deploy fails with the error: Wagon protocol 'http' does not support directory copying.
Of course, my original error message did not refer very much to site at all.
Way to produce useful error messages, Maven!
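For reference, a sketch of that workaround from the command line (goals is the maven-release-plugin user property for the goals run during release:perform; check the exact property name against your plugin version):
mvn release:perform -Dgoals=deploy   # run only deploy, skipping the default site deployment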
I found a way to force preemptive authentication here: http://maven.apache.org/guides/mini/guide-http-settings.html (it didn't solve my problem, but it is an answer to the title.)
