I want to conditionally enable run-time checks and logging, independently from each other and from debug and release mode. So I've started adding two features to my project, one called "invariant-checking" and one called "logging". Ultimately I want to use them through macros I define in a crate which is visible project-wide.
I had assumed that if I filled out the [features] section the same way in all of the crates, then activating the feature while compiling the bin crate would also enable it in all the lib crates, but this is not the case! How can I enable and disable features across multiple crates? Hopefully this can be done by changing only one thing, such as the command-line arguments to cargo.
To clarify exactly what I want, here's an example, which I will also reproduce below:
There are three crates: the main (bin) crate and two lib crates, called "middle" and "common". Here are the relevant parts of the relevant files:
main.rs
extern crate common;
extern crate middle;
fn main() {
    common::check!();
    middle::run();
    println!("done");
}
the main Cargo.toml
[dependencies]
[dependencies.common]
path = "libs/common"
[dependencies.middle]
path = "libs/middle"
[features]
default = []
invariant-checking = []
logging = []
middle's lib.rs
extern crate common;
pub fn run() {
    common::check!();
    common::run();
}
middle's Cargo.toml
[dependencies]
[dependencies.common]
path = "../common"
[features]
default = []
invariant-checking = []
logging = []
common's lib.rs
#[macro_export]
macro_rules! check {
    () => {{
        if cfg!(feature = "invariant-checking") {
            println!("invariant-checking {}:{}", file!(), line!());
        }
        if cfg!(feature = "logging") {
            println!("logging {}:{}", file!(), line!());
        }
    }};
}

pub fn run() {
    check!()
}
and finally common's Cargo.toml
[dependencies]
[features]
default = []
invariant-checking = []
logging = []
When I run cargo run --features "invariant-checking,logging" I get the following output:
invariant-checking src\main.rs:5
logging src\main.rs:5
done
but I want it to log in middle and common as well. How can I transform this project so that it does that, while still allowing me to get only "done" as output by changing just one place?
How can I enable and disable features across multiple crates?
A crate's Cargo.toml can declare features that transitively enable other features, including features of its dependencies.
For example, in the Cargo.toml of a crate which depends on crates foo and bar:
[dependencies]
foo = "0.1"
bar = "0.1"
[features]
default = []
invariant-checking = [ "foo/invariant-checking", "bar/invariant-checking" ]
logging = [ "foo/logging", "bar/logging" ]
This crate adds the invariant-checking and logging features. Enabling them transitively enables the respective features of the crates foo and bar, so that
cargo build --features=logging,invariant-checking
will enable the logging and invariant-checking features in this crate and also in its dependencies foo and bar.
In your particular case, you probably want main to transitively enable the features of middle and common, and for middle to transitively enable the features of common.
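For example (a sketch based on the manifests shown in the question), the feature sections could forward like this:
the main Cargo.toml
[features]
default = []
invariant-checking = ["middle/invariant-checking", "common/invariant-checking"]
logging = ["middle/logging", "common/logging"]
middle's Cargo.toml
[features]
default = []
invariant-checking = ["common/invariant-checking"]
logging = ["common/logging"]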
The macro definitions in their current form have a problem: The code inside the macro gets inlined whenever the macro is used, and then compiled in the context where it got inlined. Since you use runtime feature checks like
if cfg!(feature = "invariant-checking")
this means that you need to define the features in every crate where the macro is used. In the common crate itself, on the other hand, the feature is never queried and is thus redundant.
This seems completely backwards to me. The feature flag should only be queried in the common crate, and using the macro should not require first defining a feature flag in the crate that uses it. For this reason, I suggest using compile-time checks to select which macro to define:
// The helper macros are exported as well (but hidden from the docs) so that the
// expansion of `check!` can refer to them from other crates via `$crate::`.
#[cfg(feature = "invariant-checking")]
#[macro_export]
#[doc(hidden)]
macro_rules! check_invariant {
    () => ( println!("invariant-checking {}:{}", file!(), line!()); )
}

#[cfg(not(feature = "invariant-checking"))]
#[macro_export]
#[doc(hidden)]
macro_rules! check_invariant {
    () => ()
}

#[cfg(feature = "logging")]
#[macro_export]
#[doc(hidden)]
macro_rules! logging {
    () => ( println!("logging {}:{}", file!(), line!()); )
}

#[cfg(not(feature = "logging"))]
#[macro_export]
#[doc(hidden)]
macro_rules! logging {
    () => ()
}

#[macro_export]
macro_rules! check {
    () => ( $crate::check_invariant!(); $crate::logging!(); )
}
This way, you only need to define the features in the common crate, as it should be. As long as you use only a single version of that crate, switching the flags on and off has a global effect.
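For instance (assuming Cargo's dep-name/feature-name syntax for the --features flag, with the crate names from the question), toggling everything then comes down to a single command-line change:
# checks and logging enabled everywhere the macro is used
cargo run --features "common/invariant-checking,common/logging"
# prints only "done"
cargo run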
Related
I'm attempting to start my first project in Rust using VS Code. I added image = "0.24.5" to the dependencies section of the Cargo.toml file and built the project with cargo build. However, whenever I try to run the file I get an error that says:
error[E0433]: failed to resolve: maybe a missing crate `image`?
--> main.rs:1:5
|
1 | use image::io::Reader;
| ^^^^^ maybe a missing crate `image`?
|
= help: consider adding `extern crate image` to use the `image` crate
This is the main file:
use image::io::Reader;
fn main() {
println!("Hello, world!");
}
Cargo.toml
[package]
name = "imaging"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
image = "0.24.5"
I'm not quite sure why I'm getting this error. My guess is that VS Code is unable to find the dependency.
I have a Gradle script in which I configure a plugin (in my case ospackage, but I guess the same would apply to any other plugin) using a variable, as per:
ospackage {
...
version project.ext.my_version
...
}
This variable is first initialized and then updated by a task that I call first in my build script:
ext {
...
my_version = "XXX"
...
}
task init{
group 'ho'
description 'get HO Version'
doLast {
...
project.ext.my_version = getParameter("VERSION")
...
}
}
The problem is that the plugin (in my case ospackage) always considers the initial value "XXX" and not the correct one that was set by executing the init task.
I know it has something to do with the configuration and execution phases, but I still cannot find a workaround to do what I want.
For info, I also tried to create a task like the one below, but it also fails, as it seems that the buildDeb task does not overwrite the ospackage version parameter:
buildDeb {
doLast {
...
version project.ext.my_version
link('/usr/bin/aa', '/usr/bin/aa.sh')
...
}
}
I also tried to put at the end of my file something like:
ospackage.dependsOn("init")
but the problem is that ospackage is not recognized as a task.
Thank you in advance for your help.
It looks to me like the essence of your question revolves around on-demand values. My understanding is that you would like to set a version number during the configuration phase and use that value during the execution phase to set a package version using the ospackage plugin.
The issue is that the ospackage documentation only provides examples (to date) that set up the package constants during the configuration phase. That won't work here, because that is the same phase in which you are computing your version. You have the right idea with doLast. I found that some ospackage settings, like packageName (if you have more than one task of the same package type), cannot go in "doLast" blocks, so I put only the things that require on-demand evaluation in that block (the version, because its evaluation needs to be delayed until the execution phase).
My solution was to create a variable that holds the function that resolves the version.
def versionForRpm = { -> project.version }
Create a configuration block
configurations.ext {
    version = versionForRpm
    ...
}
This is an example of an on-demand value (aka lazily-evaluated value).
task someRpmBuild(type: Rpm) {
    // all package configs that require evaluation during the execution phase (lazy)
    doLast {
        version = configurations.ext.version
        requires("someotherpackageinthisbuild", configurations.ext.version(), 0)
    }
    // all package configs that may be evaluated during the configuration phase
    release = configurations.ext.release
    packageGroup = configurations.ext.packageGroup
    license = configurations.ext.license
    packager = configurations.ext.packager
    user = configurations.ext.user
    distribution = configurations.ext.distribution
    vendor = configurations.ext.vendor
    url = configurations.ext.url
    os = configurations.ext.os
    buildHost = configurations.ext.buildHost
    epoch = configurations.ext.epoch
    arch = configurations.ext.arch
}
Note that configurations.ext.version will be "called" automatically in the execution phase. I needed to explicitly call it when used as an argument in requires, however.
According to the documentation, the task type is Deb:
task fooDeb(type: Deb) {
    packageName // Default to project.name
    packageDescription // Defaults to project.description
    version // Version field, defaults to project.version
    arch // Architecture, defaults to "all". E.g. "amd64", "all"
    multiArch // Configure multi-arch behavior: NONE (default), SAME, FOREIGN, ALLOWED (see: https://wiki.ubuntu.com/MultiarchSpec )
    release // DEB Release
    epoch // Epoch, defaults to 0
    user // Default user to permission files to
    permissionGroup // Default group to permission files to, "group" is used by Gradle for the display of tasks
    packageGroup
    buildHost
    license
    packager
    distribution
    vendor
    url
    signingKeyId
    signingKeyPassphrase
    signingKeyRingFile
    sourcePackage
    provides
    uid // Default uid of files
    gid // Default gid of files
    createDirectoryEntry // [Boolean]
    maintainer // Defaults to packager
    uploaders // Defaults to packager
    priority
    summary
    conflicts
    recommends
    suggests
    enhances
    preDepends
    breaks
    replaces
}
where:
version // Version field, defaults to project.version
You might also give the RPM plugin a try.
I was able to solve the issue I had, setting the ospackage copy destination to a calculated value, by using:
configurations.ext {
    mydestdir = ""
    rpmVersion = "1"
    releaseNumber = "1"
}

task init {
    group 'ho'
    description 'get HO Version'
    doLast {
        ...
        configurations.ext.mydestdir = "/store/tmp/" + getSubDir()
        configurations.ext.rpmVersion = "123"
        configurations.ext.releaseNumber = "456"
        ...
    }
}
task fooRpm(type: Rpm) {
    dependsOn init
    ...
    doLast {
        version = configurations.rpmVersion
        release = configurations.releaseNumber
    }
    from(project.tempDir) {
        into configurations.mydestdir
        fileMode = 0644
        user = "nobody"
        permissionGroup = "nobody"
    }
}
I think you'll have to use type Deb and make some changes, but this should speed up your build, and you can verify the results by adding --scan before and after making these changes.
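As a rough sketch (not tested; it simply mirrors the fooRpm task above with the task type swapped to Deb), that might look like:
task fooDeb(type: Deb) {
    dependsOn init
    doLast {
        // resolved during the execution phase, after init has run
        version = configurations.rpmVersion
        release = configurations.releaseNumber
    }
    from(project.tempDir) {
        into configurations.mydestdir
        fileMode = 0644
        user = "nobody"
        permissionGroup = "nobody"
    }
}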
I want the outputs of one task to be available to an identical task in another submodule.
I'm trying to make yet another plugin for compilation (of C/C++, .hs, .coffee, .js, et al.) and source code generation.
So, I'm making a plugin and task/s that (so far) generate CMakeLists.txt, Android.mk, .vcxproj or whatever for each module to build the source code.
I have a multi-module build for this.
I can reach around and find the tasks from "other" submodules, but I can't seem to enforce any execution order.
So, with ...
root project: RootModule
sub project: NativeCommandLine (requires SharedModule)
sub project: NativeGUI (requires SharedModule)
sub project: SharedModule
... I find that the NativeGUI tasks are executed before SharedModule which means that the SharedModule results aren't ready.
Bad.
Since the dependencies { ... } stuff happens after plugins are applied (AFAIK), I'm guessing that the dependencies are connected afterwards.
I need my tasks executed in order based on the dependency relations ... right? How can I do that?
I have created a (Scala) TaskBag that lazily registers a collection of all participating Task instances.
I add instances of my task to this, along with a handler for when a new task appears.
During configuration, any task can include logic in the lambda to filter and act on other tasks, and it will be executed as soon as both tasks are participating. A small usage sketch follows the code below.
package peterlavalle

import java.util

import org.gradle.api.Task

import scala.collection.JavaConversions._ // needed so that foreach works on the java.util.LinkedList

object TaskBag {

  class AnchorExtension extends util.LinkedList[(Task, Task => Unit)]()

  /**
   * connect to the group of tasks
   */
  def apply(task: Task)(react: Task => Unit): Unit =
    synchronized {
      // lazily create the central anchor ... thing ...
      val anchor: AnchorExtension =
        task.getProject.getRootProject.getExtensions.findByType(classOf[AnchorExtension]) match {
          case null =>
            task.getProject.getRootProject.getExtensions.create(classOf[AnchorExtension].getName, classOf[AnchorExtension])
          case anchor: AnchorExtension =>
            anchor
        }

      // show us off to the old ones
      anchor.foreach {
        case (otherTask, otherReact) =>
          require(otherTask != task, "Don't double register a task!")
          otherReact(task)
          react(otherTask)
      }

      // add us to the list
      anchor.add(task -> react)
    }
}
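As a hypothetical usage sketch (the task and project names are made up), a generator task could register itself and order itself after its peers from the modules it depends on:
// inside the plugin's or task's configuration code
TaskBag(generateBuildFiles) { other =>
  // run after the equivalent task registered by the SharedModule submodule
  if (other.getProject.getName == "SharedModule")
    generateBuildFiles.dependsOn(other)
}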
Is there a better way to declare multiple packages (e.g. 'appcompat-v7') from the same group (e.g. 'com.android.support') with the same version (e.g. '23.4.0')?
Actual declaration:
...
def androidSupport = '23.4.0'
def rxBinding = '0.4.0'
dependencies {
    ...
    // android support
    compile "com.android.support:appcompat-v7:$androidSupport"
    compile "com.android.support:design:$androidSupport"
    compile "com.android.support:percent:$androidSupport"
    // rxJava
    compile "com.jakewharton.rxbinding:rxbinding:$rxBinding"
    compile "com.jakewharton.rxbinding:rxbinding-support-v4:$rxBinding"
    compile "com.jakewharton.rxbinding:rxbinding-appcompat-v7:$rxBinding"
    compile "com.jakewharton.rxbinding:rxbinding-design:$rxBinding"
    compile "com.jakewharton.rxbinding:rxbinding-recyclerview-v7:$rxBinding"
    ...
}
Is it possible to do something like this?
...
compile(group: 'com.android.support', version: '23.4.0') {
modules: "appcompat-v7", "design", "percent"
}
...
Once you realize that a Gradle build script is just a Groovy script, which means that each of those "compile" lines is just a method call, you start to see many possibilities.
For instance, specify an "inline" list of artifact names, call "each()" on it, and pass a closure that specifies compile "$group:$it:$version" (or something like that).
def multiArtifactDependency = { String group, String version, List<String> artifacts ->
    artifacts.each { compile "${group}:${it}:${version}" }
}
multiArtifactDependency('com.android.support', '23.3.0', ['appcompat-v7', 'design'])
multiArtifactDependency('com.squareup.retrofit2', '2.0.2', ['retrofit', 'converter-gson', 'adapter-rxjava'])
The following program is based on the example in the v8 Getting Started page. I have made three changes to demonstrate a problem I am encountering:
I create an empty array and put it into the global context.
The script being run references the zeroth element in the array, which should return undefined.
I run the compiled script twice.
The first run works fine. The second fails: v8 calls V8_Fatal() in Deoptimizer::DoComputeCompiledStubFrame() because descriptor->register_param_count_ == -1.
Am I doing something wrong here? How can I fix it?
Isolate* isolate = Isolate::New();
Isolate::Scope isolate_scope(isolate);
HandleScope handle_scope(isolate);
Local<Context> context = Context::New(isolate);
Context::Scope context_scope(context);
Local<Array> a = Array::New(isolate);
context->Global()->Set(String::NewFromUtf8(isolate, "a"), a);
Local<String> source = String::NewFromUtf8(isolate, "a[0];");
Local<Script> script = Script::Compile(source);
Local<Value> result = script->Run();
Local<Value> result2 = script->Run();
return 0;
NOTES:
This is the entire body of main().
Other fragments of JavaScript code run twice without a problem. Somehow this relates to the out-of-bound array reference, which is perhaps triggering deoptimization.
I do not want to recompile the script from scratch each time because I am typically running these scripts thousands of times, and sometimes millions of times.
I have tried compiling the script as an UnboundScript and then binding it for each execution, but the result is the same.
I have reported this as a v8 issue, but nobody has responded so I'm hoping that the StackOverflow community can help.
I am seeing this on VS2012 Update 4, but I also see it on VS2008, and in both x64 and x86 and in both Debug and Release builds.
OK, found it. The problem is an uninitialized code stub for dictionary loads: your use case triggers the failure because the stub isn't initialized through other means, e.g. compilation.
Below is a patch against v8 trunk revision 22629 that fixes the problem for me, tested on Windows with VS 2010 and Linux with g++ 4.9. Please let me know how you go with this:
Index: src/code-stubs.cc
===================================================================
--- src/code-stubs.cc (revision 22629)
+++ src/code-stubs.cc (working copy)
@@ -236,6 +236,8 @@
CODE_STUB_LIST(DEF_CASE)
#undef DEF_CASE
case UninitializedMajorKey: return "<UninitializedMajorKey>Stub";
+ case NoCache:
+ return "<NoCache>Stub";
default:
if (!allow_unknown_keys) {
UNREACHABLE();
@@ -939,6 +941,13 @@
// static
+void KeyedLoadDictionaryElementStub::InstallDescriptors(Isolate* isolate) {
+ KeyedLoadDictionaryElementStub stub(isolate);
+ InstallDescriptor(isolate, &stub);
+}
+
+
+// static
void KeyedLoadGenericElementStub::InstallDescriptors(Isolate* isolate) {
KeyedLoadGenericElementStub stub(isolate);
InstallDescriptor(isolate, &stub);
Index: src/code-stubs.h
===================================================================
--- src/code-stubs.h (revision 22629)
+++ src/code-stubs.h (working copy)
@@ -1862,6 +1862,8 @@
virtual void InitializeInterfaceDescriptor(
CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE;
+ static void InstallDescriptors(Isolate* isolate);
+
private:
Major MajorKey() const { return KeyedLoadElement; }
int NotMissMinorKey() const { return DICTIONARY_ELEMENTS; }
Index: src/isolate.cc
===================================================================
--- src/isolate.cc (revision 22629)
+++ src/isolate.cc (working copy)
@@ -2000,6 +2000,7 @@
NumberToStringStub::InstallDescriptors(this);
StringAddStub::InstallDescriptors(this);
RegExpConstructResultStub::InstallDescriptors(this);
+ KeyedLoadDictionaryElementStub::InstallDescriptors(this);
KeyedLoadGenericElementStub::InstallDescriptors(this);
}
As a workaround, if you don't want to compile your own V8 for now, you could execute some code on each Isolate that uses the KeyedLoadDictionaryElementStub directly, prior to running your own code; this should initialize the stub. Something like the following works for me:
Isolate* isolate = Isolate::New();
Isolate::Scope isolate_scope(isolate);
HandleScope handle_scope(isolate);
Local<Context> context = Context::New(isolate);
Context::Scope context_scope(context);
Local<Array> a = Array::New(isolate);
context->Global()->Set(String::NewFromUtf8(isolate, "a"), a);
// Workaround code for initializing KeyedLoadDictionaryElementStub
Local<String> workaround_source = String::NewFromUtf8(isolate, "Math.random()");
Local<Script> workaround_script = Script::Compile(workaround_source);
Local<Value> workaround_value = workaround_script->Run();
// End workaround
Local<String> source = String::NewFromUtf8(isolate, "a[0]");
Local<Script> script = Script::Compile(source);
// ...and so on