Debugging Rust with CLion: cannot watch the value of a variable - debugging

My Rust code is as below:
#[test]
fn test2() {
    let a = String::from("hello world");
    println!("{}", a);
}
When I debug the test case, I cannot see the value of any variable, such as the variable a.


print!() fails inside execute_query whereas it works outside of it

I am writing some Rust tests for the Elrond blockchain.
When I print a token identifier outside of execute_query, the token prints fine, whereas it throws an error when I try to print it inside execute_query.
#[test]
fn test_foo() {
    let mut setup = utils::setup(equip_penguin::contract_obj);

    let token = TokenIdentifier::<DebugApi>::from_esdt_bytes(b"ITEM-a1a1a1");

    // works
    println!("{:x?}", token);

    let b_wrapper = &mut setup.blockchain_wrapper;

    let _ = b_wrapper.execute_query(&setup.cf_wrapper, |sc| {
        // throws an error
        println!("{:x?}", token);
    });
}
The error is
thread 'build_url_with_one_item' panicked at 'called `Option::unwrap()` on a `None` value', /home/username/.cargo/registry/src/github.com-1ecc6299db9ec823/elrond-wasm-debug-0.27.4/src/tx_mock/tx_managed_types.rs:38:31
The utils::setup used in the snippet above comes from this doc: https://docs.elrond.com/developers/developer-reference/rust-testing-framework/
How does this error happen?
Okay, managed types must be declared inside execute_query.
The following code works:
#[test]
fn test_foo() {
    let mut setup = utils::setup(equip_penguin::contract_obj);
    let b_wrapper = &mut setup.blockchain_wrapper;

    let _ = b_wrapper.execute_query(&setup.cf_wrapper, |sc| {
        let token = TokenIdentifier::<DebugApi>::from_esdt_bytes(b"ITEM-a1a1a1");
        println!("{:x?}", token);
    });
}
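The mechanism behind this rule can be sketched without the elrond-wasm crates (all names below are made up for illustration; the real TxManagedTypes is more involved). In this simplified model, a managed type owns no data, only a handle into a per-query context, so a handle minted inside one execute_query resolves to nothing once that context is gone:

```rust
use std::cell::RefCell;

thread_local! {
    // Stand-in for the SDK's per-query managed-type store (hypothetical).
    static CONTEXT: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

// A "managed" token: it owns no data, only an index into the active context.
struct ManagedToken {
    handle: usize,
}

impl ManagedToken {
    fn from_bytes(bytes: &str) -> Self {
        CONTEXT.with(|c| {
            let mut ctx = c.borrow_mut();
            // Panics if no context is active -- the analogue of the
            // `Option::unwrap()` on a `None` value in tx_managed_types.rs.
            let store = ctx.as_mut().expect("no active execution context");
            store.push(bytes.to_string());
            ManagedToken { handle: store.len() - 1 }
        })
    }

    fn resolve(&self) -> Option<String> {
        CONTEXT.with(|c| c.borrow().as_ref().and_then(|s| s.get(self.handle).cloned()))
    }
}

// Stand-in for execute_query: each call installs a fresh, empty context.
fn execute_query<R>(f: impl FnOnce() -> R) -> R {
    CONTEXT.with(|c| *c.borrow_mut() = Some(Vec::new()));
    let result = f();
    CONTEXT.with(|c| *c.borrow_mut() = None);
    result
}

fn main() {
    let token = execute_query(|| {
        // Created inside the query: the handle points into the live context.
        let token = ManagedToken::from_bytes("ITEM-a1a1a1");
        assert_eq!(token.resolve(), Some("ITEM-a1a1a1".to_string()));
        token
    });
    // Outside the query the context is gone, so the handle resolves to nothing.
    assert_eq!(token.resolve(), None);
}
```

This is why creating the TokenIdentifier inside the closure fixes the panic: the handle then belongs to the context that execute_query installs.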

"The collection is an inconsistent state" for multiple tests using Ubuntu 20.04.1 LTS but not Windows 10

I have an issue running tests for the payments-contract demo (https://github.com/near-apps/payments-contract) on Ubuntu; the error messages are as follows:
running 3 tests
test tests::make_deposit ... ok
test tests::make_deposit_and_payment ... thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: HostError(GuestPanic { panic_msg: "The collection is an inconsistent state. Did previous smart contract execution terminate unexpectedly?" })', /home/simon/.cargo/registry/src/github.com-1ecc6299db9ec823/near-sdk-2.0.1/src/environment/mocked_blockchain.rs:165:54
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
FAILED
test tests::make_deposit_and_withdrawal ... thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: HostError(GuestPanic { panic_msg: "The collection is an inconsistent state. Did previous smart contract execution terminate unexpectedly?" })', /home/simon/.cargo/registry/src/github.com-1ecc6299db9ec823/near-sdk-2.0.1/src/environment/mocked_blockchain.rs:165:54
FAILED
Whichever test runs first completes successfully, but any subsequent tests always fail.
When I run the same repo on Windows 10, all 3 tests pass, and I am using the same lib.rs file. Do tests need to be cleared after each one completes?
I noticed that a similar question was answered by revising how the TreeMap is given an id in the contract. The only map I have is:
#[derive(BorshDeserialize, BorshSerialize)]
pub struct PaymentContract {
    pub owner_id: AccountId,
    pub deposits: UnorderedMap<AccountId, Vec<Deposit>>,
}

impl Default for PaymentContract {
    fn default() -> Self {
        panic!("Should be initialized before usage")
    }
}

#[near_bindgen]
impl PaymentContract {
    #[init]
    pub fn new(owner_id: AccountId) -> Self {
        assert!(env::is_valid_account_id(owner_id.as_bytes()), "Invalid owner account");
        assert!(!env::state_exists(), "Already initialized");
        Self {
            owner_id,
            deposits: UnorderedMap::new(b"deposits".to_vec()),
        }
    }
}
The tests look like this:
// use the attribute below for unit tests
#[cfg(test)]
mod tests {
    use super::*;
    use std::convert::TryFrom;
    use near_sdk::MockedBlockchain;
    use near_sdk::{testing_env, VMContext};

    fn ntoy(near_amount: u128) -> U128 {
        U128(near_amount * 10u128.pow(24))
    }

    fn get_context() -> VMContext {
        VMContext {
            predecessor_account_id: "alice.testnet".to_string(),
            current_account_id: "alice.testnet".to_string(),
            signer_account_id: "bob.testnet".to_string(),
            signer_account_pk: vec![0],
            input: vec![],
            block_index: 0,
            block_timestamp: 0,
            account_balance: 0,
            account_locked_balance: 0,
            attached_deposit: 0,
            prepaid_gas: 10u64.pow(18),
            random_seed: vec![0, 1, 2],
            is_view: false,
            output_data_receivers: vec![],
            epoch_height: 19,
            storage_usage: 1000,
        }
    }

    #[test]
    fn make_deposit() {
        let mut context = get_context();
        context.attached_deposit = ntoy(1).into();
        testing_env!(context.clone());
        let mut contract = PaymentContract::new(context.current_account_id.clone());
        contract.deposit("take my money".to_string());
        let deposits = contract.get_deposits(context.signer_account_id.clone());
        assert_eq!(deposits.get(0).unwrap().memo, "take my money".to_string());
    }

    #[test]
    fn make_deposit_and_payment() {
        let mut context = get_context();
        context.attached_deposit = ntoy(1).into();
        testing_env!(context.clone());
        let mut contract = PaymentContract::new(context.current_account_id.clone());
        contract.deposit("take my money".to_string());
        contract.make_payment(0);
        let deposits = contract.get_deposits(context.signer_account_id.clone());
        assert_eq!(deposits[0].paid, true);
    }

    #[test]
    fn make_deposit_and_withdrawal() {
        let mut context = get_context();
        context.attached_deposit = ntoy(1).into();
        testing_env!(context.clone());
        let mut contract = PaymentContract::new(context.current_account_id.clone());
        contract.deposit("take my money".to_string());
        contract.withdraw(0);
        let deposits = contract.get_deposits(context.signer_account_id.clone());
        assert_eq!(deposits.len(), 0);
    }
}
Any help would be appreciated.
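One general point relevant to the closing question: cargo test runs every #[test] in a single process, so any process-global state that one test mutates is visible to later tests unless each test resets it (which is what setting up testing_env! per test is meant to do). The sketch below only illustrates that hazard in plain Rust; it is not a claim about near-sdk's internals:

```rust
use std::sync::Mutex;

// A process-global store, standing in for any state shared across tests
// (names here are made up for illustration).
static STORE: Mutex<Vec<String>> = Mutex::new(Vec::new());

fn deposit(memo: &str) {
    STORE.lock().unwrap().push(memo.to_string());
}

fn reset() {
    STORE.lock().unwrap().clear();
}

fn main() {
    // First "test": leaves one entry behind.
    deposit("take my money");
    assert_eq!(STORE.lock().unwrap().len(), 1);

    // Second "test" without a reset: it observes the first one's leftovers.
    deposit("take my money");
    assert_eq!(STORE.lock().unwrap().len(), 2);

    // Resetting shared state between tests restores independence.
    reset();
    deposit("take my money");
    assert_eq!(STORE.lock().unwrap().len(), 1);
}
```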

How can I use the same default implementation for this Rust trait

I want to implement a trait that allows assigning values parsed from strings to generic types. So far I have tested this for the u32 and String types:
trait Test {
    fn test(&self, input: &str) -> Self;
}

impl Test for String {
    fn test(&self, input: &str) -> Self {
        input.parse().unwrap()
    }
}

impl Test for u32 {
    fn test(&self, input: &str) -> Self {
        input.parse().unwrap()
    }
}

fn main() {
    let mut var = 0u32;
    let mut st = String::default();
    var = var.test("12345678");
    st = st.test("Text");
    println!("{}, {}", var, st);
}
I know this code is not perfect, and I should be returning a Result instead of unwrapping, but please set that aside, as this is a quick example. The implementations for u32 and String are exactly the same, so I would like to use a default implementation for both instead of copying and pasting the code. I tried writing one, but since the returned Self type differs between the two implementations, the compiler cannot determine the type's size and reports an error.
How could I write a default implementation in this case?
Default implementation
The following bounds on Self are required for the default implementation:
Self: Sized because Self is returned from the function and will be placed in the caller's stack
Self: FromStr because you're calling parse() on input and expecting it to produce a value of type Self
<Self as FromStr>::Err: Debug because when you unwrap a potential error and the program panics Rust wants to be able to print the error message, which requires the error type to implement Debug
Full implementation:
use std::fmt::Debug;
use std::str::FromStr;

trait Test {
    fn test(&self, input: &str) -> Self
    where
        Self: Sized + FromStr,
        <Self as FromStr>::Err: Debug,
    {
        input.parse().unwrap()
    }
}

impl Test for String {}
impl Test for u32 {}

fn main() {
    let mut var = 0u32;
    let mut st = String::default();
    var = var.test("12345678");
    st = st.test("Text");
    println!("{}, {}", var, st);
}
Generic implementation
A generic blanket implementation is also possible, where you automatically provide an implementation of Test for all types which satisfy the trait bounds:
use std::fmt::Debug;
use std::str::FromStr;

trait Test {
    fn test(&self, input: &str) -> Self;
}

impl<T> Test for T
where
    T: Sized + FromStr,
    <T as FromStr>::Err: Debug,
{
    fn test(&self, input: &str) -> Self {
        input.parse().unwrap()
    }
}

fn main() {
    let mut var = 0u32;
    let mut st = String::default();
    var = var.test("12345678");
    st = st.test("Text");
    println!("{}, {}", var, st);
}
Macro implementation
This implementation, similar to the default implementation, allows you to pick which types get the implementation, but, like the generic implementation, it doesn't require you to modify the trait method's signature with any additional trait bounds:
trait Test {
    fn test(&self, input: &str) -> Self;
}

macro_rules! impl_Test_for {
    ($t:ty) => {
        impl Test for $t {
            fn test(&self, input: &str) -> Self {
                input.parse().unwrap()
            }
        }
    };
}

impl_Test_for!(u32);
impl_Test_for!(String);

fn main() {
    let mut var = 0u32;
    let mut st = String::default();
    var = var.test("12345678");
    st = st.test("Text");
    println!("{}, {}", var, st);
}
Key differences
The key differences between the 3 approaches:
The default implementation makes the trait bounds inherent to the method's signature, so all types which impl Test must be sized, and have a FromStr impl with a debuggable error type.
The default implementation allows you to selectively pick which types get Test implementations.
The generic implementation doesn't add any trait bounds to the trait method's signature so a greater variety of types could potentially implement the trait.
The generic implementation automatically implements the trait for all types which satisfy the bounds; you cannot selectively "opt out" of it if there are some types for which you'd prefer not to implement the trait.
The macro implementation does not require modifying the trait method signature with additional trait bounds and allows you to selectively pick which types get the implementation.
The macro implementation suffers all the downsides of being a macro: it's harder to read, write, and maintain, it increases compile times, and macros are essentially opaque to static code analyzers, which makes type-checking your code harder.
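As the question itself notes, unwrap() is only there for brevity. For completeness, the blanket-implementation approach adapts readily to a fallible API; the sketch below uses a hypothetical TryTest trait (not part of the original question) that surfaces parse errors instead of panicking:

```rust
use std::fmt::Display;
use std::str::FromStr;

// Fallible counterpart of Test: parse failures are returned, not unwrapped.
trait TryTest: Sized {
    fn try_test(&self, input: &str) -> Result<Self, String>;
}

// Blanket implementation for every type that can be parsed from a string
// and whose parse error can be turned into a message.
impl<T: FromStr> TryTest for T
where
    <T as FromStr>::Err: Display,
{
    fn try_test(&self, input: &str) -> Result<Self, String> {
        input.parse().map_err(|e: <T as FromStr>::Err| e.to_string())
    }
}

fn main() {
    // Successful parses yield Ok values.
    assert_eq!(0u32.try_test("12345678"), Ok(12345678u32));
    assert_eq!(String::new().try_test("Text"), Ok("Text".to_string()));

    // A failed parse is reported to the caller instead of panicking.
    assert!(0u32.try_test("Text").is_err());
}
```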

String attribute set in init method always returns empty string

I have the following struct with impl:
#[near_bindgen]
#[derive(Default, Serialize, Deserialize, BorshDeserialize, BorshSerialize, Debug)]
pub struct MyStruct {
    owner: String
}

#[near_bindgen(init => new)]
impl MyStruct {
    fn new() -> Self {
        Self {
            owner: "bob".to_string()
        }
    }

    fn get_owner(&self) -> String {
        return self.owner.clone();
    }
}
Then I deploy the contract using near deploy my_contract --masterAccount myAccount.
If I call get_owner using near-shell (near call my_contract get_owner --accountId=myAccount), it always returns "" instead of the expected "bob".
It seems like the new method might not get called on deployment.
The initializer doesn't automatically get called on deploy. deploy just deploys the code and doesn't call anything on the contract. We should probably add a new command to shell that does deploy_and_call, but for now, just call new manually.
The reason we don't initialize automatically is that the initializer might take additional arguments; for example, you can pass an owner to the new method. Here is an example of how to use an initializer with custom arguments, as well as how to make sure a contract can't be called without initialization:
#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct FunToken {
    /// AccountID -> Account details.
    pub accounts: Map<AccountId, Account>,
    /// Total supply of all tokens.
    pub total_supply: Balance,
}

impl Default for FunToken {
    fn default() -> Self {
        env::panic(b"Not initialized");
        unreachable!();
    }
}

#[near_bindgen(init => new)]
impl FunToken {
    pub fn new(owner_id: AccountId, total_supply: Balance) -> Self {
        let mut ft = Self { accounts: Map::new(b"a".to_vec()), total_supply };
        let mut account = ft.get_account(&owner_id);
        account.balance = total_supply;
        ft.accounts.insert(&owner_id, &account);
        ft
    }
}
From here: https://github.com/nearprotocol/near-bindgen/blob/master/examples/fun-token/src/lib.rs#L52-L77
Basically, it panics during the Default call, so a non-initialized contract can't be called.
Initializer functions are usually used when you need to parametrize the initialization of the contract. If there are no parameters, then just implement the Default trait:
impl Default for MyStruct {
    fn default() -> Self {
        Self {
            owner: "bob".to_string()
        }
    }
}
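The panic-in-Default guard described above doesn't depend on near-sdk specifics; a plain-Rust sketch (with env::panic replaced by an ordinary panic!, and hypothetical field names) shows the behavior:

```rust
use std::panic;

// Sketch of the "panic in Default" guard: an uninitialized contract
// refuses to be used rather than handing out zeroed state.
struct FunToken {
    total_supply: u128,
}

impl Default for FunToken {
    fn default() -> Self {
        // Falling back to Default (e.g. when no state exists yet) aborts.
        panic!("Not initialized");
    }
}

impl FunToken {
    fn new(total_supply: u128) -> Self {
        FunToken { total_supply }
    }
}

fn main() {
    // Explicit initialization works as usual.
    let ft = FunToken::new(1_000_000);
    assert_eq!(ft.total_supply, 1_000_000);

    // Using the contract without initialization trips the guard.
    panic::set_hook(Box::new(|_| {})); // silence the expected panic message
    let uninitialized = panic::catch_unwind(FunToken::default);
    assert!(uninitialized.is_err());
}
```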

Can I store methods in an array?

Consider the following code:
class Test {
    func func1(arg1: Int) -> Void {
        println(arg1)
    }

    var funcArr: Array<(Int) -> Void> = [func1] // (!) 'Int' is not a subtype of 'Test'
}
I'm trying to store the method func1 in an array, but as you can see, this doesn't seem to work, because func1 supposedly only takes an argument of type Test. I assume this has something to do with methods needing to be associated with an object.
For some more clarification, have a look at the following code, where I let Swift infer the type of the array:
class Test {
    func func1(arg1: Int) -> Void {
        println(arg1)
    }

    var funcArr = [func1]
}

Test().funcArr[0](Test()) // Compiles, but nothing gets printed.
Test().funcArr[0](1)      // (!) Type 'Test' does not conform to protocol 'IntegerLiteralConvertible'
Test().func1(1)           // Prints '1'
A possible workaround for this problem is moving func1 outside of the class, like so:
func func1(arg1: Int) -> Void {
    println(arg1)
}

class Test {
    var funcArr = [func1]
}
Test().funcArr[0](1) // Prints '1'
This works fine for this simple example, but it is less than ideal when I actually need to operate on an object of type Test in the function. I can of course add another parameter to the function to pass an instance of Test to it, but this seems clunky.
Is there any way I can store methods in an array?
Ultimately, what I want to be able to do is testObject.funcArr[someInt](someParam) and have that function work as a method belonging to testObject. Any clever workarounds are of course also welcome.
Instance methods in Swift are just curried functions, and the first argument is implicitly an instance of the class (i.e. self). That's why these two are equivalent:
Test().func1(0)
Test.func1(Test())(0)
So when you try to put that function in the array, you're revealing its real nature: the method func1 on Test is actually this class method:
class func func1(self_: Test)(arg1: Int)
So when you refer simply to func1 (without an instance context), it has type Test -> Int -> Void instead of the expected Int -> Void, and that's why you get the error:
'Int' is not a subtype of 'Test'
So the real issue is that when you store the methods in funcArr, the instance is not known yet (or, if you will, you're referring to the function at the class level). You can work around this using a computed property:
var funcArr: [Int -> Void] { return [func1] } // use a computed property
Or another valuable option could be simply to acknowledge the real type of func1 and explicitly passing the instance. E.g.
...
var funcArr = [func1]
...
let test = Test()
let func1 = test.funcArr[0]
func1(test)(0) // prints 0
Update
Freely inspired by this other Q&A (Make self weak in methods in Swift), I came up with a similar solution that stores the method references:
func weakRef<T: AnyObject, A, B>(weakSelf: T, method: T -> A -> B) -> A -> B {
    return { [unowned weakSelf] in { a in method(weakSelf)(a) } }()
}

class Test {
    var methodRefs: [Int -> Void] = []

    init() {
        methodRefs.append(weakRef(self, Test.func1))
    }

    func func1(arg1: Int) {
        println(arg1)
    }
}
In order to store a method, you should remember that the method is invoked on a class instance. What's actually stored in the array is a curried function:
(Test) -> (Int) -> Void
The first function takes a class instance and returns another function, the actual (Int) -> Void method, which is then invoked on that instance.
To make it more explicit, the array type is:
var funcArr: [(Test) -> (Int) -> Void] = [Test.func1]
Then you can invoke the method with code like this:
var test = Test()
var method = test.funcArr[0]
method(test)(1)
Suggested reading: Curried Functions
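As an aside, this "a method is a function whose first parameter is the instance" model is not unique to Swift; Rust exposes it directly, which makes for a compact comparison (the type and method names below are illustrative only):

```rust
// A method path like `Test::func1` is an ordinary function whose first
// parameter is the receiver, so it can be stored in an array and later
// applied to any instance -- the same currying idea described above.
struct Test;

impl Test {
    fn func1(&self, arg1: i32) -> i32 {
        arg1
    }

    fn func2(&self, arg1: i32) -> i32 {
        arg1 * 2
    }
}

fn main() {
    // Each element has type fn(&Test, i32) -> i32: instance first, then args.
    let func_arr: [fn(&Test, i32) -> i32; 2] = [Test::func1, Test::func2];

    let test = Test;
    assert_eq!(func_arr[0](&test, 1), 1);
    assert_eq!(func_arr[1](&test, 1), 2);
}
```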
