How to detect which network your contract is on

Is it possible to detect which network (testnet, mainnet, etc.) your contract is on via, for example, env in Rust? I have a contract that will be deployed to both testnet and mainnet, and I have logic that depends on the network.

There are two things I can think of that you could do in this case.
You could store which network the contract lives on in its state and pass that in through the initialization function (see the sketch after the example below).
If you wanted to use env, you could right-split env::current_account_id() and check for .testnet at the end of the account ID.
Note: This method won't work if your contract is deployed to an implicit account.
An example of the code is:
// Get this with env::current_account_id().to_string();
let account_id = "benji.testnet.fayyr.testnet".to_string();
// Split once from the right on ".testnet"
let split_check = account_id.rsplit_once(".testnet");
// Default the network to mainnet
let mut network = "mainnet";
// If ".testnet" was found, make sure it was at the very end of the account ID
if let Some(split) = split_check {
    if split.1.is_empty() {
        network = "testnet";
    }
}
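For the first option, a minimal initialization sketch could look something like the following (near-sdk 4.x style; the struct, field, and method names are illustrative, not something your contract must use):
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{near_bindgen, PanicOnDefault};

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, PanicOnDefault)]
pub struct Contract {
    // "testnet" or "mainnet", recorded once when the contract is initialized
    network: String,
}

#[near_bindgen]
impl Contract {
    #[init]
    pub fn new(network: String) -> Self {
        Self { network }
    }

    // Branch on self.network wherever the logic differs per network
    pub fn get_network(&self) -> String {
        self.network.clone()
    }
}
You would then call new with "testnet" or "mainnet" as part of each deployment.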

Related

In a Rust test, how can I check the state (account balance) of an account using NEAR protocol?

In a Rust test, how can I check the state (account balance) of an account?
E.g. I have this helper function:
fn set_context(account_index: usize, is_view: bool, deposit: Amount) {
    let context = VMContextBuilder::new()
        .signer_account_id(accounts(account_index))
        .is_view(is_view)
        .attached_deposit(deposit)
        .build();
    testing_env!(context);
}
And then my test contains:
...
let mut contract = Contract::new();
set_context(1, false, near_string_to_yocto("0.3".to_string()));
let recipient = accounts(0);
let _matcher1_offer_result = contract.offer_matching_funds(&recipient);
set_context(2, false, near_string_to_yocto("0.1".to_string()));
let _matcher2_offer_result = contract.offer_matching_funds(&recipient);
// TODO: Assert that this (escrow) contract now contains the correct amount of funds. Assert that the matchers' account balances have decreased appropriately.
I haven't been able to find an example in any docs or repo.
E.g. https://docs.rs/near-sdk/latest/src/near_sdk/test_utils/context.rs.html#10-14
Can't directly comment on Vlad's post due to low reputation, but the method you'd need to get account details such as the account balance is account.view_account(). You can find all the related account methods here as well: https://docs.rs/workspaces/0.4.0/workspaces/struct.Account.html
The Workspaces docs aren't good yet
Any feedback as to how we can improve the docs?
Given that the balance is changed by the NEAR protocol itself and a contract method cannot change the balance directly, the only thing contract developers can check in unit tests is whether a promise was returned that transfers the tokens. The current balance in the unit-test environment is available through near_sdk::env::account_balance()
If there is a need to do end-to-end testing, I recommend using https://github.com/near/workspaces-rs or https://github.com/near/workspaces-js
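To make that concrete, here is a rough sketch against the 0.4-era workspaces-rs API linked above (a sandbox test; exact signatures vary between workspaces releases, so double-check against the version you use):
#[tokio::test]
async fn view_balance() -> anyhow::Result<()> {
    let worker = workspaces::sandbox().await?;
    let account = worker.dev_create_account().await?;
    // view_account returns the on-chain account details, including the balance in yoctoNEAR
    let details = account.view_account(&worker).await?;
    println!("balance: {}", details.balance);
    Ok(())
}
From there you can deploy and call your contract through the same worker and assert on the before/after balances.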

Substrate: How to validate the originator of unsigned extrinsics?

I need to be able to identify the source of an unsigned extrinsic for spam prevention purposes. Assume there is a set of known authorities from whom I am willing to accept an unsigned extrinsic. I want to check that the sender of the unsigned extrinsic is a member of this authority set (and that they are who they say they are).
From what I can tell there are a couple of different approaches and I would like to better understand the differences between each and the trade-offs involved.
Add a signature as a call parameter and define ValidateUnsigned.
This approach is used in the ImOnline pallet. Roughly speaking:
decl_module!(
    // ...
    fn my_unsigned_call(
        _origin,
        args: MyArgs<T>,
        authority: T::Authority,
        signature: T::Signature,
    ) {
        // Handle the call
        todo!()
    }
)
impl<T: Trait> frame_support::unsigned::ValidateUnsigned for Module<T> {
    // ...
    fn validate_unsigned(
        _source: TransactionSource,
        call: &Self::Call,
    ) -> TransactionValidity {
        if let Call::my_unsigned_call(args, authority, signature) = call {
            // Check the sender is in the approved authority set and verify sig
            todo!();
        }
    }
}
Implement SignedExtension for some metadata associated with the pallet trait. This is touched upon in the docs and seems to be implemented in the TransactionPayment pallet. Implementation would be something like this:
struct SenderInfo<T> {
    authority: T::Authority,
    signature: T::Signature,
}

impl<T: Config + Send + Sync> SignedExtension for SenderInfo<T>
where
    <T as frame_system::Config>::Call: IsSubType<Call<T>>,
{
    // ...
    fn validate_unsigned(
        call: &Self::Call,
        info: &DispatchInfoOf<Self::Call>,
        len: usize,
    ) -> TransactionValidity {
        // validate self.authority and self.signature
    }
}
This SignedExtension then needs to be aggregated into the SignedExtra of the runtime. (Right?)
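I assume the aggregation would look roughly like this in the runtime's lib.rs, appending the custom extension to whatever extensions the template runtime already declares (my_pallet is a placeholder for wherever SenderInfo lives):
pub type SignedExtra = (
    frame_system::CheckSpecVersion<Runtime>,
    frame_system::CheckTxVersion<Runtime>,
    frame_system::CheckGenesis<Runtime>,
    frame_system::CheckEra<Runtime>,
    frame_system::CheckNonce<Runtime>,
    frame_system::CheckWeight<Runtime>,
    pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
    // The custom extension sketched above
    my_pallet::SenderInfo<Runtime>,
);

pub type UncheckedExtrinsic =
    generic::UncheckedExtrinsic<Address, Call, Signature, SignedExtra>;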
I am tending towards using the second option since it seems cleaner: it doesn't require me to pollute my method signatures with extra information that is not even used in the method call. But would this mean that any transaction submitted to the runtime, signed or unsigned, would need to add this customised SignedExtra?
Are there any other considerations I should be aware of?
I'm working on a very similar thing.
I was able to do so with your approach 1.
Basically, I check two things there:
Whether the payload was signed properly. When you think about it, this only tells you who the user is; it doesn't check whether that user is one of your authorities.
Whether that account is on my authorities list.
My working example is available here https://github.com/korzewski/jackblock/blob/master/pallets/jackblock/src/lib.rs#L401
Although this is not perfect, because I need to keep a second list of Authorities and add them manually.
Currently I'm trying to refactor it so that my authority accounts are the same as my validators (the Aura pallet). I'm still looking for a solution, so maybe you know how to solve it: basically, how to reuse the Aura pallet's authorities in my own pallet.
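For reference, the body of validate_unsigned from the first snippet ends up looking roughly like this (a sketch, not the exact code from the repository linked above; it assumes T::Authority is a RuntimeAppPublic key, Self::authorities() is a hypothetical storage getter for the known authority set, and the payload must be encoded exactly the way the submitter signed it):
fn validate_unsigned(
    _source: TransactionSource,
    call: &Self::Call,
) -> TransactionValidity {
    if let Call::my_unsigned_call(args, authority, signature) = call {
        // 1. The signature must be valid over the encoded payload...
        let signature_valid = (args, authority)
            .using_encoded(|payload| authority.verify(&payload, signature));
        if !signature_valid {
            return InvalidTransaction::BadProof.into();
        }
        // 2. ...and the signer must be in the known authority set.
        if !Self::authorities().contains(authority) {
            return InvalidTransaction::Custom(0).into();
        }
        ValidTransaction::with_tag_prefix("MyPallet")
            .and_provides((authority, args))
            .propagate(true)
            .build()
    } else {
        InvalidTransaction::Call.into()
    }
}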

What is the best way to test account balance changes in NEAR smart contracts?

I am trying to test that an account's NEAR balance increases and decreases.
env::account_balance() doesn’t seem to change even with an attached_deposit.
#[test]
fn takes_account_deposit() {
    let mut context = get_context();
    context.attached_deposit = 10000000000000000;
    testing_env!(context.clone());
    println!("Account balance before {}", env::account_balance());
    let mut contract = Contract::default();
    contract.take_deposit();
    println!("Account balance after {}", env::account_balance());
}
Cross-contract calls in NEAR are asynchronous, so you need to set up a callback for take_deposit (is my understanding correct that Contract is some other contract?). Learn more about promises and cross-contract calls in the docs.
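A rough shape of that pattern with near-sdk 4.x, assuming the Contract from your test and made-up method and account names, would be:
use near_sdk::{env, near_bindgen, Promise, PromiseError};

#[near_bindgen]
impl Contract {
    pub fn take_deposit(&mut self) -> Promise {
        // Forward the attached deposit somewhere else...
        Promise::new("receiver.testnet".parse().unwrap())
            .transfer(env::attached_deposit())
            // ...and resolve the outcome in a callback on this contract.
            .then(Self::ext(env::current_account_id()).on_deposit_forwarded())
    }

    #[private]
    pub fn on_deposit_forwarded(
        &mut self,
        #[callback_result] result: Result<(), PromiseError>,
    ) {
        if result.is_err() {
            env::log_str("transfer failed");
        }
    }
}
The balance effects of the transfer only show up when the promise actually executes, which is why a plain unit test that calls take_deposit directly won't observe them; workspaces-based end-to-end tests (see the previous question) will.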

How to send protobuf::DynamicMessage with GRPC?

I have been playing around lately with GRPC and Protocol Buffers in order to get familiar with both frameworks in C++.
I wanted to experiment with the reflection functionality, so I have set up a very simple service where the (reflection-enabled) server exposes the following interface file:
syntax = "proto3";
package helloworld;
service Server {
    rpc Add (AddRequest) returns (AddReply) {}
}

message AddRequest {
    int32 arg1 = 1;
    int32 arg2 = 2;
}

message AddReply {
    int32 sum = 1;
}
On the client side I have visibility of the previous method thanks to the grpc::ProtoReflectionDescriptorDatabase. Therefore, I am able to create a message by means of a DynamicMessageFactory. However, I haven't been able to actually send the message to the server, nor find any specific details in the documentation. Maybe it's too obvious and I'm completely lost...
Any hints will be deeply appreciated!
using namespace google::protobuf;

void demo()
{
    std::shared_ptr<grpc::Channel> channel =
        grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());

    // Inspect exposed method
    grpc::ProtoReflectionDescriptorDatabase reflection_database(channel);
    std::vector<std::string> output;
    reflection_database.GetServices(&output);
    DescriptorPool reflection_database_pool(&reflection_database);
    const ServiceDescriptor* service = reflection_database_pool.FindServiceByName(output[0]);
    const MethodDescriptor* method = service->method(0);

    // Create request message
    const Descriptor* input_descriptor = method->input_type();
    FileDescriptorProto input_proto;
    input_descriptor->file()->CopyTo(&input_proto);
    DescriptorPool pool;
    const FileDescriptor* input_file_descriptor = pool.BuildFile(input_proto);
    const Descriptor* input_message_descriptor = input_file_descriptor->FindMessageTypeByName(input_descriptor->name());
    DynamicMessageFactory factory;
    Message* request = factory.GetPrototype(input_message_descriptor)->New();

    // Fill request message (sum 1 plus 2)
    const Reflection* reflection = request->GetReflection();
    const FieldDescriptor* field1 = input_descriptor->field(0);
    reflection->SetInt32(request, field1, 1);
    const FieldDescriptor* field2 = input_descriptor->field(1);
    reflection->SetInt32(request, field2, 2);

    // Create response message
    const Descriptor* output_descriptor = method->output_type();
    FileDescriptorProto output_proto;
    output_descriptor->file()->CopyTo(&output_proto);
    const FileDescriptor* output_file_descriptor = pool.BuildFile(output_proto);
    const Descriptor* output_message_descriptor = output_file_descriptor->FindMessageTypeByName(output_descriptor->name());
    Message* response = factory.GetPrototype(output_message_descriptor)->New();

    // How to create a call...?
    // ...is grpc::BlockingUnaryCall the way to proceed?
}
It's been a few years, but since you didn't get an answer, I'll make an attempt. You also didn't tag your question with a specific language, but it looks like you are using C++. I can't provide a solution for C++, but I can for JVM languages.
First of all, the following is taken from an open-source library I'm developing called okgrpc. It's a first-of-its-kind attempt to create a dynamic gRPC client/CLI in Java.
Here are the general steps to make a call using DynamicMessage:
1. Get all the DescriptorProtos.FileDescriptorProto for the service you want to call using gRPC reflection.
2. Create indices for all types and methods in that service.
3. Find the Descriptors.MethodDescriptor corresponding to the method you want to call.
4. Convert your input to DynamicMessage. How to do this will depend on the input, of course. If it's a JSON string, you can use the JsonFormat class.
5. Build an io.grpc.MethodDescriptor with the method name, type (unary etc.), and request and response marshallers. You'll need to write your own DynamicMessage marshaller.
6. Use the ClientCalls API to execute the RPC.
Obviously, the devil is in the details. If using Java, you can use my library, and let it be my problem. If using another language, good luck.

How to FIFO Order a Map during Unmarshalling

I know from reading around that maps are intentionally unordered in Go, but they offer a lot of benefits that I would like to use for this problem I'm working on. My question is: how might I order a map FIFO style? Is it even worth trying to make this happen? Specifically, I am looking to make it so that I can unmarshal into a set of structures, hopefully off of an interface.
I have:
type Package struct {
    Account   string
    Jobs      []*Jobs
    Libraries map[string]string
}

type Jobs struct {
    // Name of the job
    JobName string `mapstructure:"name" json:"name" yaml:"name" toml:"name"`
    // Type of the job. Should be one of the strings outlined in the job struct (below)
    Job *Job `mapstructure:"job" json:"job" yaml:"job" toml:"job"`
    // Not marshalled
    JobResult string
    // For multiple values
    JobVars []*Variable
}
type Job struct {
    // Sets/Resets the primary account to use
    Account *Account `mapstructure:"account" json:"account" yaml:"account" toml:"account"`
    // Set an arbitrary value
    Set *Set `mapstructure:"set" json:"set" yaml:"set" toml:"set"`
    // Contract compile and send to the chain functions
    Deploy *Deploy `mapstructure:"deploy" json:"deploy" yaml:"deploy" toml:"deploy"`
    // Send tokens from one account to another
    Send *Send `mapstructure:"send" json:"send" yaml:"send" toml:"send"`
    // Utilize eris:db's native name registry to register a name
    RegisterName *RegisterName `mapstructure:"register" json:"register" yaml:"register" toml:"register"`
    // Sends a transaction which will update the permissions of an account. Must be sent from an account which
    // has root permissions on the blockchain (as set by either the genesis.json or in a subsequent transaction)
    Permission *Permission `mapstructure:"permission" json:"permission" yaml:"permission" toml:"permission"`
    // Sends a bond transaction
    Bond *Bond `mapstructure:"bond" json:"bond" yaml:"bond" toml:"bond"`
    // Sends an unbond transaction
    Unbond *Unbond `mapstructure:"unbond" json:"unbond" yaml:"unbond" toml:"unbond"`
    // Sends a rebond transaction
    Rebond *Rebond `mapstructure:"rebond" json:"rebond" yaml:"rebond" toml:"rebond"`
    // Sends a transaction to a contract. Will utilize eris-abi under the hood to perform all of the heavy lifting
    Call *Call `mapstructure:"call" json:"call" yaml:"call" toml:"call"`
    // Wrapper for mintdump dump. WIP
    DumpState *DumpState `mapstructure:"dump-state" json:"dump-state" yaml:"dump-state" toml:"dump-state"`
    // Wrapper for mintdump restore. WIP
    RestoreState *RestoreState `mapstructure:"restore-state" json:"restore-state" yaml:"restore-state" toml:"restore-state"`
    // Sends a "simulated call" to a contract. Predominantly used for accessor functions ("Getters" within contracts)
    QueryContract *QueryContract `mapstructure:"query-contract" json:"query-contract" yaml:"query-contract" toml:"query-contract"`
    // Queries information from an account.
    QueryAccount *QueryAccount `mapstructure:"query-account" json:"query-account" yaml:"query-account" toml:"query-account"`
    // Queries information about a name registered with eris:db's native name registry
    QueryName *QueryName `mapstructure:"query-name" json:"query-name" yaml:"query-name" toml:"query-name"`
    // Queries information about the validator set
    QueryVals *QueryVals `mapstructure:"query-vals" json:"query-vals" yaml:"query-vals" toml:"query-vals"`
    // Makes an assertion (useful for testing purposes)
    Assert *Assert `mapstructure:"assert" json:"assert" yaml:"assert" toml:"assert"`
}
What I would like to do is have Jobs contain a map of string to Job and eliminate the Job field, while maintaining the order in which the jobs were placed in the config file. (Currently using viper.) Any and all suggestions for how to achieve this are welcome.
You would need to hold the keys in a separate slice and work with that.
type fifoJob struct {
    m      map[string]*Job
    order  []string
    result []string
    // Not sure where JobVars will go.
}

func (str *fifoJob) Enqueue(key string, val *Job) {
    str.m[key] = val
    str.order = append(str.order, key)
}

func (str *fifoJob) Dequeue() {
    if len(str.order) > 0 {
        delete(str.m, str.order[0])
        str.order = str.order[1:]
    }
}
Anyway, if you're using viper you can use something like the fifoJob struct defined above. Also note that I'm making a few assumptions here.
type Package struct {
    Account   string
    Jobs      *fifoJob
    Libraries map[string]string
}

var config Package
config.Jobs = &fifoJob{}
config.Jobs.m = map[string]*Job{}

// Your config file would need to store the order in an array.
// Would've been easy if viper had a getSlice method returning []interface{}
config.Jobs.order = viper.GetStringSlice("package.jobs.order")
for k, v := range viper.GetStringMap("package.jobs.jobmap") {
    if job, ok := v.(Job); ok {
        config.Jobs.m[k] = &job
    }
}
PS: You're giving too many irrelevant details in your question. I was asking for an MCVE.
Maps are by nature unordered but you can fill up a slice instead with your keys. Then you can range over your slice and sort it however you like. You can pull out specific elements in your slice with [i].
Check out pages 170, 203, and 204 of Programming in Go for some great examples of this.
