Balances AccountStore definition in Substrate runtime

In the balances pallet, the Config trait has one item defined as type AccountStore: StoredMap<Self::AccountId, AccountData<Self::Balance>>;. This was a bit strange to me, as I was expecting a plain storage map from AccountId to AccountData, but after looking at the docs for StoredMap I realised it's a trait that is also implemented on storage maps. That makes more sense, so I went on to look at how the runtime defines this item, and to my surprise I found this in runtime/src/lib.rs: type AccountStore = System;. Now, I've never seen a runtime definition like this before because, if I'm correct, System is supposed to represent the frame_system pallet. So I looked at frame_system::Config for Runtime and found this definition:
type AccountData = pallet_balances::AccountData<Balance>;
Now I don't know how these definitions get into pallet_balances' Config impl, but I can see that System contains both ingredients, namely: an AccountData type and an AccountId. So in the end, my two remaining questions are:
What are the reasons for such a convoluted design?
How does type AccountStore = System; figure out the concrete types?

Storing account balances in the system pallet keeps them next to other frame_system information about the account that may be important for a certain runtime configuration. In particular, keeping the consumers, providers and sufficients counters consistent becomes quite crucial in a runtime with multiple pallets, and potentially when interacting with other runtimes.
AccountStore defines where this balance is going to be stored, which in this case is frame_system::Pallet<Runtime>. If we follow the lead and check the configuration of frame_system, we will see that the type for AccountData is defined as
type AccountData = pallet_balances::AccountData<Balance>;
Good, now we know that the AccountData stored in frame_system will be the one defined in pallet_balances.
So the information living in system concerning accounts will end up looking like the following:
/// Information of an account.
#[derive(Clone, Eq, PartialEq, Default, RuntimeDebug, Encode, Decode, TypeInfo)]
pub struct AccountInfo<Index, AccountData> {
    /// The number of transactions this account has sent.
    pub nonce: Index,
    /// The number of other modules that currently depend on this account's existence. The account
    /// cannot be reaped until this is zero.
    pub consumers: RefCount,
    /// The number of other modules that allow this account to exist. The account may not be reaped
    /// until this and `sufficients` are both zero.
    pub providers: RefCount,
    /// The number of modules that allow this account to exist for their own purposes only. The
    /// account may not be reaped until this and `providers` are both zero.
    pub sufficients: RefCount,
    /// The additional data that belongs to this account. Used to store the balance(s) in a lot of
    /// chains.
    pub data: AccountData,
}
Where AccountData fits the previously mentioned definition in pallet_balances.
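To see how the runtime can plug System in as the store, here is a minimal plain-Rust sketch (no Substrate crates; every name is a simplified stand-in for the real thing) of a StoredMap-style trait implemented on a System type, with a generic "pallet" function that only cares about the trait bound, exactly as the balances pallet only requires some StoredMap:

```rust
use std::collections::HashMap;

// Simplified stand-in for the StoredMap trait.
trait StoredMap<K, V> {
    fn get(&self, k: &K) -> V;
    fn insert(&mut self, k: K, v: V);
}

#[derive(Clone, Default, PartialEq, Debug)]
struct AccountData {
    free: u128,
}

// Stand-in for frame_system::Pallet<Runtime>: it holds AccountData
// alongside its own bookkeeping, and implements the trait itself.
#[derive(Default)]
struct System {
    accounts: HashMap<u64, AccountData>,
}

impl StoredMap<u64, AccountData> for System {
    fn get(&self, k: &u64) -> AccountData {
        self.accounts.get(k).cloned().unwrap_or_default()
    }
    fn insert(&mut self, k: u64, v: AccountData) {
        self.accounts.insert(k, v);
    }
}

// A balances-pallet-like function: generic over any StoredMap,
// so `type AccountStore = System;` is just picking this implementor.
fn deposit<S: StoredMap<u64, AccountData>>(store: &mut S, who: u64, amount: u128) {
    let mut data = store.get(&who);
    data.free += amount;
    store.insert(who, data);
}

fn main() {
    let mut system = System::default();
    deposit(&mut system, 1, 100);
    println!("{}", system.get(&1).free); // 100
}
```

The concrete types are figured out by ordinary associated-type resolution: the compiler checks that System satisfies the StoredMap bound with the runtime's AccountId and Balance, and everything lines up because frame_system's own AccountData is set to pallet_balances' type.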
Please check this commit too for further information on how this may be tweaked: https://github.com/paritytech/substrate/commit/31d90c202d6df9ce3837ee55587b604619a912ba

Related

Is it possible to transfer tokens to an account declared in remaining accounts?

I'm trying to transfer tokens to one of the accounts declared in the remaining_accounts list.
Here's the way I create CpiContext:
let cpi_context = CpiContext::new(
    ctx.accounts.token_program.to_account_info(),
    Transfer {
        from: ctx.accounts.account_x.to_account_info(),
        to: ctx.remaining_accounts.first().unwrap().to_account_info(),
        authority: ctx.accounts.owner.to_account_info().clone(),
    },
);
I get an error related to a CpiContext lifetime mismatch. Exact log error: lifetime mismatch ... but data from ctx flows into ctx here.
Why do I want to use remaining accounts to transfer tokens? This transfer is optional, depending on whether the user decides to pass the account (referral links/affiliation). Other methods of implementing the optional transfer than passing the account as a remaining account would also be highly appreciated.
Make sure that you help out Rust's lifetime inference in your instruction:
pub fn your_instruction<'info>(ctx: Context<'_, '_, '_, 'info, YourContext<'info>>, ...)
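To illustrate why the explicit 'info annotation helps, here is a minimal plain-Rust sketch (the Context and Transfer types are simplified stand-ins, not the Anchor ones): by naming the lifetime on the function, the reference taken from remaining_accounts is tied to 'info rather than to the shorter borrow of ctx itself, so it can flow into the struct that outlives the local borrow:

```rust
// Simplified stand-in for Anchor's Context: remaining accounts borrowed for 'info.
struct Context<'info> {
    remaining_accounts: Vec<&'info str>,
}

// Simplified stand-in for the CPI accounts struct.
struct Transfer<'info> {
    to: &'info str,
}

// Naming 'info on the function lets the element borrowed from
// remaining_accounts (which lives for 'info) flow into Transfer<'info>,
// even though `ctx` itself is consumed here.
fn your_instruction<'info>(ctx: Context<'info>) -> Transfer<'info> {
    Transfer {
        to: ctx.remaining_accounts.first().copied().unwrap(),
    }
}

fn main() {
    let name = String::from("referral");
    let ctx = Context { remaining_accounts: vec![&name] };
    let transfer = your_instruction(ctx);
    println!("{}", transfer.to); // referral
}
```

In Anchor the same idea is spelled Context<'_, '_, '_, 'info, YourContext<'info>>: the fourth lifetime parameter is the one remaining_accounts is borrowed for, and repeating it on YourContext unifies the two.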

Calling get_esdt_token_data for account that does not have the esdt

Considering that
get_esdt_token_data(address: &ManagedAddress, token_id: &TokenIdentifier, nonce: u64) -> EsdtTokenData<Self::Api>
always returns an EsdtTokenData rather than an Option, what will this object look like if the address does not own the specified token?
The execution will fail as the VM will not return anything to the smart contract if it doesn't find the token.
The typical usage for this function is to get the data for the payment tokens the smart contract receives from the caller. If you're trying to use it freely, you might get into this situation, so this type of "free" usage is not really advised.

How to store the updates of state in an offchain database?

I want to store all the blockchain data in offchain database.
The RPC has a method called EXPERIMENTAL_changes; I was told that I can do this by HTTP polling of that method, but I am unable to find out how to use it.
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=EXPERIMENTAL_changes params:='{"changes_type": "data_changes", "account_ids": ["guest-book.testnet"], "key_prefix_base64": "", "block_id": 19450732}'
For example here the results give:
"change": { "account_id": "guest-book.testnet", "key_base64": "bTo6Mzk=", "value_base64": "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ==" }
What is key_base64?
Decoding it to string gives m::39
What is m::39?
For example, I have the following state data in the rust structure.
pub struct Demo {
    user_profile_map: TreeMap<u128, User>,
    user_products_map: TreeMap<u128, UnorderedSet<u128>>,   // (user_id, set<product_id>)
    product_reviews_map: TreeMap<u128, UnorderedSet<u128>>, // (product_id, set<review_id>)
    product_check_bounty: LookupMap<u128, Vector<u64>>,
}
How can I know when anything changes in these variables?
Will I have to check every block from the point the contract is deployed to find where there is a change?
I want to store all the blockchain data in offchain database.
If so, I recommend you take a look at the Indexer Framework, which allows you to get a stream of blocks and handle them. We use it to build Indexer for Wallet (which keeps track of every added and deleted access key and stores those in Postgres) and Indexer for Explorer (which keeps track of every block, chunk, transaction, receipt, execution outcome, state change, account, and access key, and stores all of that in Postgres).
What is m::39?
Contracts in NEAR Protocol have access to the key-value storage (state), so at the lowest-level, you operate with key-value operations (NEAR SDK for AssemblyScript defines Storage class with get and set operations, and NEAR SDK for Rust has storage_read and storage_write calls to preserve data).
Guest Book example uses a high-level abstraction called PersistentVector, which automatically reads and writes its records from/to NEAR key-value storage (state). As you can see:
export const messages = new PersistentVector<PostedMessage>("m");
Guest Book defines the messages to be stored in the storage with the m prefix, hence you see m::39, which basically means it is messages[39] stored in the key-value storage.
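To make the key construction concrete, here is a tiny Rust sketch (a hypothetical simplification, not the SDK's actual code) of how a prefixed collection could derive the storage key for one of its elements:

```rust
// Hypothetical helper: a prefixed collection stores element `index`
// under the key "<prefix>::<index>" in the contract's key-value state.
fn element_key(prefix: &str, index: u32) -> String {
    format!("{}::{}", prefix, index)
}

fn main() {
    // The PersistentVector created with prefix "m" stores messages[39]
    // under this key, which is what the RPC returns base64-encoded.
    println!("{}", element_key("m", 39)); // m::39
}
```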
What is key_base64?
As key-value storage implies, the data is stored and accessed by keys, and a key can be binary, so base64 encoding is used to give JSON-RPC API users a way to query those binary keys as well (there is no way to pass a raw binary blob in JSON).
How to know anything gets changed in these variables? Will I have to check every block id for the point the contract is deployed, to know where there is the change?
Correct, you need to follow every block and check the changes. That is why we built the Indexer Framework: to enable the community to build services on top of it (we chose to build the applications Indexer for Wallet and Indexer for Explorer, but others may decide to build a GraphQL service like TheGraph).

How long before a block hash is invalidated?

The documentation says "Block hash is hash of the block from the current blockchain on top of which this transaction can be applied. It’s used to guard against blockchain forks and rollbacks."
If I try to sign and send a transaction with a block hash that is "a little out of date" then I get the error InvalidTxError::Expired
Is there some specific definition of this expiration timeout that I can use to predict whether it will happen and therefore need to refresh the block hash that I plan to use?
Does it happen after a period of time or if the block hash is Nth from the top of the chain or something?
There is a system-wide parameter transaction_validity_period that defines for how many blocks a transaction remains valid, counted from the block whose hash it is based on.
After a little more digging based on #berryguy's accepted answer above, it looks like transaction_validity_period is an incoming parameter to ChainGenesis (pressing the blockchain start button, I guess), where the validity period is measured as a BlockIndex ("down from the top" or "back from the tip" of the chain, depending on the animation playing out in your head).
snip from nearcore source
pub struct ChainGenesis {
    pub time: DateTime<Utc>,
    pub gas_limit: Gas,
    pub gas_price: Balance,
    pub total_supply: Balance,
    pub max_inflation_rate: u8,
    pub gas_price_adjustment_rate: u8,
    pub transaction_validity_period: BlockIndex, // <- here
    pub epoch_length: BlockIndex,
}
It gets populated from the genesis configuration file genesis.json, loaded from ~/.near/genesis.json (on my local machine), maybe by a call to start_with_config.
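Putting the two answers together, the expiry check boils down to comparing block heights. Here is a rough Rust sketch (a hypothetical simplification, not nearcore's actual code) you could use to decide when to refresh the block hash:

```rust
// Hypothetical simplification: a transaction based on a block at height
// `base_height` stays valid while the chain head is within
// `validity_period` blocks of it; past that, it fails with Expired.
fn is_transaction_valid(base_height: u64, head_height: u64, validity_period: u64) -> bool {
    head_height.saturating_sub(base_height) <= validity_period
}

fn main() {
    let validity_period = 100; // taken from the chain's genesis config
    println!("{}", is_transaction_valid(50, 120, validity_period)); // true
    println!("{}", is_transaction_valid(50, 151, validity_period)); // false: refresh the hash
}
```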

do all NEAR blockchain transactions require a receiver account?

Reading through some documentation here, I saw that part of the definition of a transaction is that all actions are performed "on top of the receiver's account", and also that the receiver account is "the account towards which the transaction will be routed".
Also, in the nearlib SDK, the transactions interface includes a method called signTransaction that requires receiverId as a parameter:
async function signTransaction(receiverId: string, nonce: number, actions: Action[], blockHash: Uint8Array, signer: Signer, accountId?: string, networkId?: string): Promise<[Uint8Array, SignedTransaction]> {
But looking over the list of transactions supported by nearcore, I wonder why some of these transactions require a receiver.
Why would any transaction require a "receiver" except for maybe Transfer, AddKey, DeleteKey, and DeleteAccount?
Am I thinking of the idea of "receiver" too literally, as in "they receive the outcome or impact of the transaction", when that's not the right way to think about it?
Or is receiverId optional in some cases, and the interface just requires a value to avoid validation cruft?
Here's what I believe to be the full list of supported transactions:
pub enum Action {
    CreateAccount(CreateAccountAction),
    DeployContract(DeployContractAction),
    FunctionCall(FunctionCallAction),
    Transfer(TransferAction),
    Stake(StakeAction),
    AddKey(AddKeyAction),
    DeleteKey(DeleteKeyAction),
    DeleteAccount(DeleteAccountAction),
}
Conceptually every transaction always has a sender and a receiver, even though sometimes they might be the same. Because we always convert a transaction to a receipt that is sent to the receiver, it doesn't matter conceptually whether they are the same, even though in implementation there might be a difference.
Unfortunately, we don't have a good name to denote what we call "receiver". In some places in our code we also call it "actor", because it actually refers to the account on which the action is performed, as opposed to the account which issued the action to perform (a.k.a. the "sender").
DeployContract, Stake, AddKey, and DeleteKey require receiver == sender; in other words, only an account itself can add/delete keys, stake, or deploy a contract on itself, and no other account can do it for it.
DeleteAccount is the same: it requires receiver == sender, with one exception: if an account is about to run out of balance due to storage rent and is below a certain system-defined threshold, any other account can delete it and claim the remaining balance.
CreateAccount, FunctionCall, and Transfer do not require receiver == sender. In the case of CreateAccount, the receiver must not exist at the moment of execution and will actually be created.
See the code that implements this logic: https://github.com/nearprotocol/nearcore/blob/ed43018851f8ec44f0a26b49fc9a863b71a1428f/runtime/runtime/src/actions.rs#L360
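The rules above can be sketched as a simple predicate in Rust (a hypothetical simplification for illustration, not nearcore's actual implementation):

```rust
// The action list from the question, with payloads elided for brevity.
#[allow(dead_code)]
enum Action {
    CreateAccount,
    DeployContract,
    FunctionCall,
    Transfer,
    Stake,
    AddKey,
    DeleteKey,
    DeleteAccount,
}

// Which actions require receiver == sender, per the answer above.
// DeleteAccount is omitted here because it has the storage-rent
// exception and is not a pure receiver == sender rule.
fn requires_self(action: &Action) -> bool {
    matches!(
        action,
        Action::DeployContract | Action::Stake | Action::AddKey | Action::DeleteKey
    )
}

fn main() {
    println!("{}", requires_self(&Action::Stake));    // true
    println!("{}", requires_self(&Action::Transfer)); // false
}
```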
