expected mutable reference `&mut egui::ui::Ui` found mutable reference `&mut Ui` should not be happening - image

Here is the repo I'm using as a reference for adding an image:
https://github.com/emilk/egui/blob/master/examples/retained_image/src/main.rs
I'm getting this error when trying to draw an image to the screen, on this line of code:
date_backdrop.show(ui);
"mismatched types
expected mutable reference &mut egui::ui::Ui
found mutable reference &mut Ui
perhaps two different versions of crate egui are being used?"
I don't understand how to fix this issue. I'm still new to Rust.
Here is my code:
use eframe::egui;
use egui_extras::RetainedImage;

struct InitView;

impl eframe::epi::App for InitView {
    fn name(&self) -> &str {
        "CheckIt"
    }

    fn update(&mut self, ctx: &eframe::egui::CtxRef, frame: &mut eframe::epi::Frame<'_>) {
        let date_backdrop = RetainedImage::from_image_bytes(
            "date_backdrop.png",
            include_bytes!("date_backdrop.png"),
        )
        .unwrap();

        // background color
        let frame = egui::containers::Frame {
            fill: egui::Color32::from_rgb(241, 233, 218),
            ..Default::default()
        };

        // main window
        egui::CentralPanel::default().frame(frame).show(ctx, |ui| {
            ui.heading("This is an image:");
            date_backdrop.show(ui);
        });
    }
}

fn main() {
    let app: InitView = InitView;
    let win_options = eframe::NativeOptions {
        initial_window_size: Some(egui::Vec2::new(386.0, 636.0)),
        always_on_top: true,
        resizable: false,
        ..Default::default()
    };
    eframe::run_native(Box::new(app), win_options);
}
Here is my Cargo.toml:
[package]
name = "checkit"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
eframe = "0.14.0"
image = { version = "0.24", default-features = false, features = ["png"] }
egui_extras = { version = "0.20.0", features = ["image"] }

As the error says, you have two incompatible versions of egui in the dependency tree: eframe 0.14 pulls in egui 0.14, while egui_extras 0.20 pulls in egui 0.20. When the major version is 0, different minor versions are treated as incompatible, so the types from them are not interchangeable.
As far as I can see, both eframe and egui_extras depend on the same minor version of egui as their own, so you have to synchronize these two dependencies to the same minor version. The easiest way is probably to bump eframe to 0.20.
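For reference, the synchronized dependency section would look roughly like this (versions are an assumption; use whichever eframe/egui_extras pair currently shares the same egui minor version):
[dependencies]
eframe = "0.20.0"
image = { version = "0.24", default-features = false, features = ["png"] }
egui_extras = { version = "0.20.0", features = ["image"] }
Be aware that eframe's API changed between 0.14 and 0.20 (for example, the epi::App trait and the run_native signature), so the code will need some adjustments after the bump as well.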

Related

How is it possible that I can't use this function?

Using this GitHub repo as a reference: https://github.com/emilk/egui/blob/master/examples/retained_image/src/main.rs
I'm trying to load an image into my frame using egui_extras::RetainedImage, but I'm getting an error that the function RetainedImage::from_image_bytes cannot be found in RetainedImage.
I have also checked image.rs to make sure that the function is even there, which it is.
Here is my code:
use eframe::{run_native, epi::App, egui, NativeOptions};
use egui_extras::RetainedImage;

struct InitView {
    image: RetainedImage,
    tint: egui::Color32,
}

impl Default for InitView {
    fn default() -> Self {
        Self {
            image: RetainedImage::from_image_bytes(
                "date_backdrop.png",
                include_bytes!("date_backdrop.png"),
            )
            .unwrap(),
            tint: egui::Color32::from_rgb(255, 0, 255),
        }
    }
}

impl App for InitView {
    fn name(&self) -> &str {
        "CheckIt"
    }

    fn update(&mut self, ctx: &eframe::egui::CtxRef, frame: &mut eframe::epi::Frame<'_>) {
        // background color
        let frame = egui::containers::Frame {
            fill: egui::Color32::from_rgb(241, 233, 218),
            ..Default::default()
        };

        // main window
        egui::CentralPanel::default().frame(frame).show(ctx, |ui| {
            ui.label("test");
        });
    }
}

fn main() {
    let app: InitView = InitView { ..Default::default() };
    let win_options = eframe::NativeOptions {
        initial_window_size: Some(egui::Vec2::new(386.0, 636.0)),
        always_on_top: true,
        resizable: false,
        ..Default::default()
    };
    run_native(Box::new(app), win_options);
}
What am I doing wrong? I'm still new to Rust.
You need to add the image feature.
Edit your Cargo.toml and replace egui_extras with egui_extras = { version = "0.20.0", features = ["image"] } or run cargo add egui_extras -F "image" in your project root directory.
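Once the feature is enabled (and the eframe/egui_extras versions agree, as discussed in the previous answer), drawing the image in update would look roughly like this sketch, reusing the image field from your Default impl:
egui::CentralPanel::default().frame(frame).show(ctx, |ui| {
    ui.heading("This is an image:");
    // `self.image` is the RetainedImage built in Default::default()
    self.image.show(ui);
});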

Rust gpu_allocator: bufferDeviceAddress must be enabled

I am using ash and gpu_allocator to try to port some C++ code to Rust.
I am running into a validation error that the C++ code never runs into:
"Validation Error: [ VUID-VkMemoryAllocateInfo-flags-03331 ] Object 0: handle = 0x56443b02d140, name = Logical device from NVIDIA GeForce GTX 1070, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xf972dfbf | If VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT is set, bufferDeviceAddress must be enabled. The Vulkan spec states: If VkMemoryAllocateFlagsInfo::flags includes VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT, the bufferDeviceAddress feature must be enabled (https://vulkan.lunarg.com/doc/view/1.3.211.0/linux/1.3-extensions/vkspec.html#VUID-VkMemoryAllocateInfo-flags-03331)"', src/vulkan/hardware_interface.rs:709:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
It seems, from the message, that I need to enable an extension.
Confusion #1: it seems this should always be enabled, according to the docs:
https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_KHR_buffer_device_address.html
Because it was moved to be a core extension? But maybe I misunderstand what core means, so I tried enabling it manually.
I have a list of extension names that I pass to the device upon creation:
const DEVICE_EXTENSIONS: &'static [&str] = &[
    "VK_KHR_swapchain",
    "VK_KHR_dynamic_rendering",
    "VK_EXT_buffer_device_address",
    "VK_EXT_extended_dynamic_state",
    "VK_EXT_conservative_rasterization",
];

let extensions: Vec<CString> = DEVICE_EXTENSIONS
    .iter()
    .map(|extension_name| CString::new(extension_name.to_string()).unwrap())
    .collect();
let raw_extensions: Vec<*const i8> = extensions
    .iter()
    .map(|extension| extension.as_ptr())
    .collect();

let mut dynamic_rendering = vk::PhysicalDeviceDynamicRenderingFeaturesKHR {
    dynamic_rendering: true as u32,
    ..Default::default()
};

type PDDRF = vk::PhysicalDeviceDynamicRenderingFeaturesKHR;

let indexing_features = vk::PhysicalDeviceDescriptorIndexingFeatures {
    p_next: (&mut dynamic_rendering) as *mut PDDRF as *mut c_void,
    shader_sampled_image_array_non_uniform_indexing: true as u32,
    ..Default::default()
};

// The enabledLayerCount and ppEnabledLayerNames parameters are deprecated,
// they should always be 0 and nullptr.
// https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkDeviceCreateInfo.html
let create_info = vk::DeviceCreateInfo {
    p_next: &indexing_features as *const _ as *const c_void,
    queue_create_info_count: 1,
    p_queue_create_infos: &queue_create_info,
    enabled_extension_count: requested_device_extensions.len() as u32,
    pp_enabled_extension_names: raw_extensions.as_ptr(),
    p_enabled_features: &device_features,
    ..Default::default()
};
A little convoluted, I know, but so far the swapchain extension seems to load just fine, so I don't understand why the device address extension is never enabled.
Someone mentioned I should enable the feature alongside the extension:
let buffer_address = vk::PhysicalDeviceBufferAddressFeaturesEXT {
    buffer_device_address: vk::TRUE,
    ..Default::default()
};

let dynamic_rendering = vk::PhysicalDeviceDynamicRenderingFeaturesKHR {
    p_next: (&buffer_address) as *const _ as *mut c_void,
    dynamic_rendering: true as u32,
    ..Default::default()
};

type PDDRF = vk::PhysicalDeviceDynamicRenderingFeaturesKHR;

let indexing_features = vk::PhysicalDeviceDescriptorIndexingFeatures {
    p_next: (&dynamic_rendering) as *const PDDRF as *mut c_void,
    shader_sampled_image_array_non_uniform_indexing: true as u32,
    ..Default::default()
};

// The enabledLayerCount and ppEnabledLayerNames parameters are deprecated,
// they should always be 0 and nullptr.
// https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkDeviceCreateInfo.html
let create_info = vk::DeviceCreateInfo {
    p_next: &indexing_features as *const _ as *const c_void,
    queue_create_info_count: 1,
    p_queue_create_infos: &queue_create_info,
    enabled_extension_count: requested_device_extensions.len() as u32,
    pp_enabled_extension_names: raw_extensions.as_ptr(),
    p_enabled_features: &device_features,
    ..Default::default()
};
But I still get the same error even with this.
You already mentioned on the Khronos Vulkan Discord that you got it working with the piece of code I posted using ash's builder pattern and push_next, but I think I see why your original code triggers the validation layer.
Enabling the feature
You're enabling VK_EXT_buffer_device_address and using its structure VkPhysicalDeviceBufferDeviceAddressFeaturesEXT,
but VK_EXT_buffer_device_address was deprecated by VK_KHR_buffer_device_address; it was promoted to a KHR extension.
Worth noting here is that VkPhysicalDeviceBufferDeviceAddressFeaturesKHR is not an alias for VkPhysicalDeviceBufferDeviceAddressFeaturesEXT; it's a different structure (... with identical layout).
They therefore have different sType values:
// Provided by VK_EXT_buffer_device_address
VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES_EXT = 1000244000,
// Provided by VK_VERSION_1_2
VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES = 1000257000,
// Provided by VK_KHR_buffer_device_address
VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES_KHR = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES,
Now let's check the relevant piece of code for the validation layer, where it checks for the VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT flag for vkAllocateMemory:
VkBool32 buffer_device_address = false;
// --snip--
const auto *bda_features = LvlFindInChain<VkPhysicalDeviceBufferDeviceAddressFeatures>(device_createinfo_pnext);
if (bda_features) {
    capture_replay = bda_features->bufferDeviceAddressCaptureReplay;
    buffer_device_address = bda_features->bufferDeviceAddress;
}
It tries to find a VkPhysicalDeviceBufferDeviceAddressFeatures in the pNext chain, which is an alias for VkPhysicalDeviceBufferDeviceAddressFeaturesKHR, not the EXT one, as can be seen from the source code for LvlFindInChain:
// Find an entry of the given type in the const pNext chain
template <typename T> const T *LvlFindInChain(const void *next) {
    // --snip--
    if (LvlTypeMap<T>::kSType == current->sType) {

// Map type VkPhysicalDeviceBufferDeviceAddressFeatures to id VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES
template <> struct LvlTypeMap<VkPhysicalDeviceBufferDeviceAddressFeatures> {
    static const VkStructureType kSType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES;
};
That is, the validation layer looks for a structure with
sType == VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES == 1000257000,
not one with VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES_EXT == 1000244000,
therefore it concludes that the bufferDeviceAddress feature is not enabled.
You needed to use the promoted KHR one to satisfy the validation layer, even though your Vulkan driver probably supported the deprecated EXT one as well.
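For reference, here is a minimal sketch of the device-creation side using ash's builder pattern and push_next, along the lines of what you ended up with. This assumes an ash version that still ships the builder API; queue_create_info, device_features and raw_extensions are the variables from your snippet, and the extension list should name "VK_KHR_buffer_device_address" rather than the EXT variant:
let mut bda_features = vk::PhysicalDeviceBufferDeviceAddressFeatures::builder()
    .buffer_device_address(true);
let mut dynamic_rendering = vk::PhysicalDeviceDynamicRenderingFeaturesKHR::builder()
    .dynamic_rendering(true);
let mut indexing_features = vk::PhysicalDeviceDescriptorIndexingFeatures::builder()
    .shader_sampled_image_array_non_uniform_indexing(true);

let queue_create_infos = [queue_create_info];
let create_info = vk::DeviceCreateInfo::builder()
    .queue_create_infos(&queue_create_infos)
    .enabled_extension_names(&raw_extensions)
    .enabled_features(&device_features)
    // push_next builds the pNext chain and fills in the correct sType for each
    // structure, so the validation layer finds the KHR/core BDA features struct.
    .push_next(&mut bda_features)
    .push_next(&mut dynamic_rendering)
    .push_next(&mut indexing_features);
Because the builders set sType for you, this avoids exactly the EXT-vs-KHR mix-up described above.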
gpu-allocator
As for why using the gpu-allocator crate seemingly necessitated enabling buffer device address,
I'd suggest checking whether you accidentally set gpu_allocator::vulkan::AllocatorCreateDesc::buffer_device_address to true.
This is what prompts the library to use VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT when calling vkAllocateMemory:
let allocation_flags = vk::MemoryAllocateFlags::DEVICE_ADDRESS;
let mut flags_info = vk::MemoryAllocateFlagsInfo::builder().flags(allocation_flags);
// TODO(manon): Test this based on if the device has this feature enabled or not
let alloc_info = if buffer_device_address {
    alloc_info.push_next(&mut flags_info)
} else {
    alloc_info
};
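So if you don't actually need device addresses, the quickest fix on the gpu-allocator side is to leave that flag off when creating the allocator. A rough sketch (field names follow gpu-allocator's vulkan::AllocatorCreateDesc as of the 0.1x releases; the exact field set may differ in other versions):
use gpu_allocator::vulkan::{Allocator, AllocatorCreateDesc};

let allocator = Allocator::new(&AllocatorCreateDesc {
    instance: instance.clone(),
    device: device.clone(),
    physical_device,
    debug_settings: Default::default(),
    // Only set this to true if the bufferDeviceAddress feature was enabled at
    // device creation; when true, allocations are made with
    // VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT, which is what trips the error.
    buffer_device_address: false,
})
.expect("failed to create allocator");
Conversely, if you do want buffer device addresses, keep it set to true and enable the feature as shown above.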

How could I configure the reward for a substrate Aura validator?

Now that PoA is running with multiple Aura validators in my substrate-node-template, how can I configure the reward amount or value for my Aura validators?
I got it working: PoA will reward the validator who created the block, using the tip as the sample value for the fees. Here are the steps:
Install the pallet_authorship
pallet-authorship = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.23" }
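Depending on your node template, the pallet also has to be declared in construct_runtime! and its std feature forwarded, otherwise the Authorship path used below won't resolve. A sketch (the exact part list depends on your Substrate branch):
// runtime/src/lib.rs, inside construct_runtime!
Authorship: pallet_authorship::{Pallet, Call, Storage},

# runtime/Cargo.toml
std = [
    # ...
    "pallet-authorship/std",
]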
Configure the pallet to get the current author of the block
pub struct AuraAccountAdapter;

impl frame_support::traits::FindAuthor<AccountId> for AuraAccountAdapter {
    fn find_author<'a, I>(digests: I) -> Option<AccountId>
    where
        I: 'a + IntoIterator<Item = (frame_support::ConsensusEngineId, &'a [u8])>,
    {
        pallet_aura::AuraAuthorId::<Runtime>::find_author(digests)
            .and_then(|k| AccountId::try_from(k.as_ref()).ok())
    }
}

impl pallet_authorship::Config for Runtime {
    type FindAuthor = AuraAccountAdapter;
    type UncleGenerations = ();
    type FilterUncle = ();
    type EventHandler = ();
}
Create OnUnbalanced implementations for Author and DealWithFees
use crate::{AccountId, Authorship, Balances};
use frame_support::traits::{Currency, Imbalance, OnUnbalanced};

type NegativeImbalance = <Balances as Currency<AccountId>>::NegativeImbalance;

pub struct Author;

impl OnUnbalanced<NegativeImbalance> for Author {
    fn on_nonzero_unbalanced(amount: NegativeImbalance) {
        if let Some(author) = Authorship::author() {
            Balances::resolve_creating(&author, amount);
        }
    }
}

pub struct DealWithFees;

impl OnUnbalanced<NegativeImbalance> for DealWithFees {
    fn on_unbalanceds<B>(mut fees_then_tips: impl Iterator<Item = NegativeImbalance>) {
        if let Some(fees) = fees_then_tips.next() {
            let mut split = fees.ration(80, 20);
            if let Some(tips) = fees_then_tips.next() {
                // for tips, if any, 80% to treasury, 20% to block author (though this can be anything)
                tips.ration_merge_into(80, 20, &mut split);
            }
            // Treasury::on_unbalanced(split.0);
            Author::on_unbalanced(split.1);
        }
    }
}
Plug the implementation into pallet_transaction_payment via the OnChargeTransaction CurrencyAdapter
impl pallet_transaction_payment::Config for Runtime {
    type OnChargeTransaction = CurrencyAdapter<Balances, crate::impls::DealWithFees>;
    type OperationalFeeMultiplier = ConstU8<5>;
    type WeightToFee = IdentityFee<Balance>;
    type LengthToFee = IdentityFee<Balance>;
    type FeeMultiplierUpdate = ();
}
This is also written up on my blog: https://hgminerva.wordpress.com/2022/06/21/how-to-pay-the-block-author-validator-on-a-proof-of-authority-poa-consensus-in-substrate/

How to export Substrate native interface to runtime?

I'm trying to link the Substrate tutorial (https://github.com/substrate-developer-hub/substrate-node-template) to a C library. I wrapped the C library in a crate called cfun, which can be called by the node layer.
I figured out I can use the macro runtime_interface to export the native interface for my pallets to call. However, although it compiles successfully, the wasm runtime cannot instantiate.
Any idea what is happening?
// node/cfun/src/lib.rs
#![cfg_attr(not(feature = "std"), no_std)]

#[link(name = "test", kind = "static")]
extern "C" {
    pub fn double_int(input: i32) -> i32;
    pub fn double_uint(input: u32) -> u32;
}
# node/cint/Cargo.toml
[package]
edition = '2018'
name = 'cint'
version = '2.0.1'

[dependencies]
# local dependencies
sp-runtime-interface = { version = "2.0.1", default-features = false }
cfun = { path = '../cfun', default-features = false, version = '2.0.1', optional = true }

[features]
default = ['std']
std = [
    'sp-runtime-interface/std',
    'cfun',
]
// node/cint/src/lib.rs
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(feature = "std")]
extern crate cfun;

use sp_runtime_interface::runtime_interface;

#[runtime_interface]
pub trait CFunctions {
    fn c_double_uint(input: u32) -> u32 {
        unsafe { cfun::double_uint(input) }
    }
}
// pallets/template/src/lib.rs
// import module at the beginning
extern crate cint;
// ...
// modify the original `do_something` function
// from:
//     Something::put(something);
// to:
let smt = cint::c_functions::c_double_uint(something);
Something::put(smt);
Edit 1
# pallets/template/Cargo.toml and runtime/Cargo.toml
# (paths differ slightly, but both include all the interfaces)
[dependencies]
# ...
sp-runtime-interface = { version = "2.0.1", default-features = false }
cfun = { path = '../../node/cfun', default-features = false, version = '2.0.1', optional = true }
cint = { path = '../../node/cint', default-features = false, version = '2.0.1' }

[features]
default = ['std']
std = [
    # ...
    'sp-runtime/std',
    'cfun',
    'cint/std',
]
Repo:
https://github.com/killalau/substrate-node-template/tree/franky

Requesting image with Swift 3 results in "Creating an image format with an unknown type is an error"

I have a controller in which I just fetch images from the user's gallery and show them. It used to work with Xcode 7.3, but after upgrading to Xcode 8.0 and updating the code for Swift 3.0 compatibility, it gives me a strange and very generic error:
Creating an image format with an unknown type is an error
I am not able to figure out what is not working here. My code is the following:
let imgManager = PHImageManager.default()

let requestOptions = PHImageRequestOptions()
requestOptions.isSynchronous = false
requestOptions.deliveryMode = PHImageRequestOptionsDeliveryMode.fastFormat

let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]

let fetchResult = PHAsset.fetchAssets(with: PHAssetMediaType.image, options: fetchOptions)

if fetchResult.count > 0 {
    for i in 0...(fetchResult.count - 1) {
        imgManager.requestImage(for: fetchResult.object(at: i),
                                targetSize: view.frame.size,
                                contentMode: PHImageContentMode.aspectFit,
                                options: requestOptions,
                                resultHandler: { (image, _) in
            // do some stuff
            self.progressView.isHidden = true
        })
    }
} else {
    self.progressView.isHidden = true
}
In this code I removed the image visualization code for readability. The instruction that triggers the error is the "requestImage" one. Any help would be much appreciated.
