Share compiled dependencies between dev and release builds - performance

I can compile my Rust dependencies in optimized mode while compiling my own code in debug mode:
# Cargo.toml
# Build all dependency packages ("*") optimized and without debuginfo,
# while the workspace's own crates keep the default dev settings.
[profile.dev.package."*"]
opt-level = 3
debug = false
But this still compiles all my dependencies twice: once for debug mode and once for release mode, which seems wasteful (even with sccache):
$ cargo clean
$ cargo build
... all the dependencies ...
Finished dev [unoptimized + debuginfo] target(s) in 1m 44s
$ cargo build --release
... all the dependencies ...
Finished release [optimized] target(s) in 2m 08s
Theoretically it's only once, but in reality there are multiple developers, clean builds, CI/CD pipelines...
How can I make dev mode and release mode share compiled targets?
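For reference, the sccache mentioned above gets wired in as Cargo's compiler wrapper. A minimal sketch, assuming sccache is installed and on PATH; whether dev and release actually share cache hits depends on the dependency flags matching exactly, which the profile override above helps with:
# .cargo/config.toml
[build]
# Route every rustc invocation through sccache, so a dependency compiled
# with identical flags (opt-level 3, no debuginfo) in one profile can be
# replayed from the cache when the other profile compiles it.
rustc-wrapper = "sccache"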

Related

How to effectively cache cargo/Rust projects in an Azure Build Pipeline

I have a set of Azure Build Pipelines that compile rust projects and currently use blob storage to store the .cargo and target folders as a cache.
When compiling locally, once a binary has been compiled the first time, subsequent runs of cargo build don't recompile the dependent libraries/crates, just the local binary. With my current pipeline system, however, even after downloading the cache and building into the correct target folder, the pipeline still downloads and builds crates.
This is my config.toml for the cache and any pipeline builds.
[build]
target-dir = "./target"
dep-info-basedir = "."
incremental = true
It has reduced compilation times in some cases, but not nearly as much as I expected.
Can I cache more folders to increase speed? Is there some cache identifier that cargo checks which is invalidating the cache?
The pipelines run a custom xtask binary which performs many tasks, including running cargo build --release. Could this be causing issues?
You need to cache target and ~/.cargo/registry as mentioned by Caesar in the comments above.
The following worked for me (docs):
- task: Cache@2
  inputs:
    key: '"cargo" | "$(Agent.OS)" | Cargo.lock'
    path: $(Build.SourcesDirectory)\target
  displayName: cache cargo build
- task: Cache@2
  inputs:
    key: '"cargo-registry" | "$(Agent.OS)" | Cargo.lock'
    path: $(UserProfile)\.cargo\registry
  displayName: cache cargo registry

How can I enable Gradle Build Cache when running Gradle build with Coverity?

I have a simple Gradle project that has org.gradle.caching=true set in gradle.properties in order to enable the local build cache.
When I run the build directly (./gradlew clean build) I can see that the local build cache is being used: https://scans.gradle.com/s/ykywrv3lzik3s/performance/build-cache
However, when I run the build with Coverity (bin/cov-build --dir cov-int ./gradlew clean build) I see the build cache is disabled for the same build: https://scans.gradle.com/s/j2pvoyhgzvvxk/performance/build-cache
How is Coverity causing the build cache to be disabled, and is there a way to run a build with Coverity and the Gradle Build Cache?
You can't use the build cache with Coverity, or at least you don't want to.
The Gradle Build Cache causes compilation to be skipped:
The Gradle build cache is a cache mechanism that aims to save time by reusing outputs produced by other builds. The build cache works by storing (locally or remotely) build outputs and allowing builds to fetch these outputs from the cache when it is determined that inputs have not changed, avoiding the expensive work of regenerating them.
Were that mechanism to be used with Coverity, it would prevent cov-build from seeing the compilation steps, and hence it would be unable to perform its own compilation of the source code, which is a necessary prerequisite to performing its static analysis.
I don't know precisely how Coverity disables the cache (or whether that is even intentional on Coverity's part), but if it didn't, you would have to disable it yourself, as described in the Synopsys article Cov-build using gradle shows "No files were emitted" error message, the key step of which is:
Use "clean" and "cleanBuildCache" task to remove all saved cache data which prevent full compilation.
before running cov-build.
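In practice that looks something like the sketch below (untested; the cleanBuildCache task is provided by some plugins such as the Android Gradle plugin, while --no-build-cache is Gradle's generic flag for turning the build cache off for a single invocation):
# Remove previous outputs and cached build data so every compilation really runs...
./gradlew clean cleanBuildCache
# ...then let cov-build observe the full, uncached compilation.
bin/cov-build --dir cov-int ./gradlew build --no-build-cache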

Is it possible to use Bazel without compiling protobuf compiler?

I have some projects using Bazel, C++ and protobuf. I also use gitlab CI/CD to build, test, check coverage, etc.
The problem is that the first time the project compiles, it also compiles the protobuf compiler, which adds about 15 minutes to each step (the step itself takes 1-5 min).
I was using a setup example from this documentation:
https://blog.bazel.build/2017/02/27/protocol-buffers.html
Here I created a simple hello world example with protobuf.
When I use protoc to generate the *.pb.cc and *.pb.h files, it takes about 5 seconds.
When I use bazel build ... it takes 15 minutes, because it builds protobuf compiler.
Build log: https://gitlab.com/mvfwd/issue-bazel-protobuf-compile/-/jobs/1532045913
Main Question
Is there any other way to set up Bazel to use an already precompiled protoc and skip the 15 min on every step?
Update 2021-08-27
Added overrides for proto_compiler and proto_toolchain_for_cc as described in Implicit Dependencies and Proto Toolchains.
Building :person_proto now works fine:
$ bazel build :person_proto
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
INFO: Analyzed target //:person_proto (19 packages loaded, 61 targets configured).
INFO: Found 1 target...
Target //:person_proto up-to-date:
bazel-bin/person_proto-descriptor-set.proto.bin
INFO: Elapsed time: 0.428s, Critical Path: 0.08s
INFO: 5 processes: 4 internal, 1 linux-sandbox.
INFO: Build completed successfully, 5 total actions
but building :person_cc_proto fails
$ bazel build :person_cc_proto
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
ERROR: /home/m/Synology/drive/prog/2021/b/issue-bazel-protobuf-compile/BUILD:2:14: in :aspect_cc_proto_toolchain attribute of BazelCcProtoAspect aspect on proto_library rule //:person_proto: '@local_config_cc//:toolchain' does not have mandatory providers: ProtoLangToolchainProvider
ERROR: Analysis of target '//:person_cc_proto' failed; build aborted: Analysis of target '//:person_proto' failed
INFO: Elapsed time: 0.124s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 25 targets configured)
From https://blog.bazel.build/2017/02/27/protocol-buffers.html#implicit-dependencies-and-proto-toolchains
The proto_library rule implicitly depends on @com_google_protobuf//:protoc, which is the protocol buffer compiler. It must be a binary rule (in protobuf, it's a cc_binary). The rule can be overridden using the --proto_compiler command-line flag.
Hence, you could (not tested; a rough sketch follows below):
- add the precompiled binary to your workspace,
- define a cc_import for this target,
- pass the --proto_compiler command-line flag.
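A hypothetical sketch of those steps (untested; the paths and target names are made up, and I use bazel_skylib's native_binary rather than cc_import here, since --proto_compiler needs an executable target and cc_import produces a library):
# third_party/protoc/BUILD
load("@bazel_skylib//rules:native_binary.bzl", "native_binary")

# Wrap the checked-in, precompiled protoc binary in an executable rule
# so the --proto_compiler flag can point at it.
native_binary(
    name = "protoc",
    src = "protoc",       # the precompiled binary committed to the repo
    out = "protoc_bin",   # name of the copied output binary
)
Then point the flag at it, e.g. in .bazelrc:
build --proto_compiler=//third_party/protoc:protoc
As the update above shows, the C++ proto toolchain (proto_toolchain_for_cc) still has to be overridden separately for :person_cc_proto to build.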

Reduce Travis-CI building time for Gradle

I've started a new Spring Boot and Kotlin project and I wanted to use Travis-CI as my CI server.
I also wanted to use codecov to collect the reports about my code coverage.
Everything seems to work perfectly besides one thing: my project is currently an empty Spring Boot project (with no tests), and the build itself takes up to 2m (mostly due to the time it takes to install Gradle).
I checked their site and saw some optimizations for the build, but they looked too early for this stage of the project (e.g. parallel test execution).
Am I missing something? Is 2m the baseline for Travis-CI build time?
My current configuration for Travis:
# This enables the 'defaults' to test java applications:
language: java
# We can specify a list of JDKs to be used for testing
# A list of available JDKs in Trusty can be seen in:
# https://docs.travis-ci.com/user/reference/xenial/#jvm-clojure-groovy-java-scala-support
jdk:
  - openjdk11
before_script:
  # makes sure that gradle commands can be executed on build
  - chmod +x gradlew
script:
  # Makes sure that gradle can be executed.
  - ./gradlew check
  # Generates the reports for codecov
  - ./gradlew jacocoTestReport
# This is to enable CodeCov's coverage
# If a build is successful, the code is submitted for coverage analysis
after_success:
  - bash <(curl -s https://codecov.io/bash)
You'll want to enable caching to improve the speed of your builds on Travis. Gradle has a dedicated guide on building on Travis: https://guides.gradle.org/executing-gradle-builds-on-travisci/
For caching, scroll down to Enable caching of downloaded artifacts
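The caching stanza from that guide looks roughly like this (a sketch; the before_cache cleanup removes Gradle's volatile lock and plugin-resolution files so they don't invalidate the cache on every run):
before_cache:
  # Delete files that change on every build and would defeat the cache
  - rm -f  $HOME/.gradle/caches/modules-2/modules-2.lock
  - rm -fr $HOME/.gradle/caches/*/plugin-resolution/
cache:
  directories:
    - $HOME/.gradle/caches/
    - $HOME/.gradle/wrapper/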

Why does Gradle re-run up-to-date tests on GitLab CI when I am caching the build directory?

I have the following build configuration for a multi-project Gradle build:
stages:
  - test

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches
    - build

test:
  dependencies: []
  image: openjdk:x
  stage: test
  script:
    - ./gradlew test --debug
On GitLab, between builds with no changes to source files, I get:
Up-to-date check for task ':x:compileJava' took 1.117 secs. It is not up-to-date because:
No history is available.
I'm not sure why it says this, as I would have expected the task history to be restored from the cache. I see this in the logs between runs:
Creating cache default...
.gradle/wrapper: found 207 matching files
.gradle/caches: found 5058 matching files
build: found 2743 matching files
When I re-run on my local machine, I can see the tests are not being re-run:
> Skipping task ':x:compileJava' as it is up-to-date (took 0.008 secs).
More confusing still, dependencies are cached perfectly; it just keeps re-running tests when I have made no code changes.
As far as I know, the task history that Gradle is missing is also stored in the .gradle folder, but not in the caches or wrapper subfolders. If you tell GitLab to cache the complete .gradle folder, the problem should go away.
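A minimal sketch of that change to the cache block above (keeping GRADLE_USER_HOME pointed at ./.gradle):
cache:
  paths:
    - .gradle   # the whole folder, so task history is cached too, not just wrapper/ and caches/
    - build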
See also this example:
https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Gradle.gitlab-ci.yml
