How to read a hash value and get the corresponding AccountId from Substrate using RPC call? - substrate

I have a Substrate node up and running with a storage item declared as value(Hash): Option<AccountId>. My aim is to provide the hash value (say, 0x0000000000000000000000000000000000000000000000000000000000000001) and get the corresponding AccountId in return.
When I do this through the UI, I get the following:
I want to perform the same task via RPC calls. After going through this blog post, I realized that my case would be reading a StorageMap, so I began by running a few queries. If I am not wrong, the module is Substratekitties and the storage item is value; the mapping would be value to AccountId.
I ran the first two calls:
util_crypto.xxhashAsHex("Substratekitties", 128)
"0xe4a154b5ba85d6072b187ee66e4fef05"
util_crypto.xxhashAsHex("Value", 128)
"0x6b2f21989c43cc4e06ac1ad3e2027000"
But I am confused about the third call: encoding the key itself (in the blog post's example, the file's sha256 hash). How do I do that?
Running util_crypto.blake2AsHex("0000000000000000000000000000000000000000000000000000000000000001", 256)
"0x16821de47d8b3b0fa4ca43e5db1028d75207cbd1c379a4738144972b105355aa"
doesn't work either.
By "not working" I mean I get null when executing the state_getStorage query.
This is the storage struct:
use frame_support::{decl_module, decl_storage, dispatch::result::Result, ensure, StorageMap};
use frame_system::ensure_signed;
use sp_runtime::DispatchError;

// pub trait Trait: balances::Trait {}
pub trait Trait: pallet_balances::Trait {}

decl_storage! {
    trait Store for Module<T: Trait> as KittyStorage {
        // Value: map T::Hash => Option<T::AccountId>;
        // TODO: check whether this is the appropriate datatype (hash).
        Value: map hasher(blake2_256) T::Hash => Option<T::AccountId>;
        // Balances: map hasher(blake2_256) (T::AssetId, T::AccountId) => T::Balance;
    }
}

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        fn set_value(origin, value: T::Hash) -> Result<(), DispatchError> {
            let sender = ensure_signed(origin)?;
            ensure!(!<Value<T>>::contains_key(value), "key already exists");
            <Value<T>>::insert(value, sender);
            Ok(())
        }
    }
}
Update:
My literal query:
curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "state_getStorage", "params": ["0x3fd011a1ea758d2e1b46ed3cec43fc86b2f21989c43cc4e06ac1ad3e2027000d3585436436a2253c5163fa0cfe54a648fa533ef32ea10dbd966ac438af77b71"]}' http://localhost:9933/
Hex queries:
util_crypto.xxhashAsHex("KittyStorage", 128)
"0xe3fd011a1ea758d2e1b46ed3cec43fc8"
util_crypto.xxhashAsHex("Value", 128)
"0x6b2f21989c43cc4e06ac1ad3e2027000"
util_crypto.blake2AsHex("0x0000000000000000000000000000000000000000000000000000000000001234")
"0xd3585436436a2253c5163fa0cfe54a648fa533ef32ea10dbd966ac438af77b71"

I think the issue here is that you are taking the hash of the string "0000...0001", not of the bytes. Try adding 0x to the front of the string.
EDIT: Try like this:
util_crypto.blake2AsHex("0x0000000000000000000000000000000000000000000000000000000000000001")
> "0x33e423980c9b37d048bd5fadbd4a2aeb95146922045405accc2f468d0ef96988"
EDIT 2: The other issue here is that you are using the wrong key.
Your example shows you hashing "Substratekitties", but your storage prefix is KittyStorage, as specified by the decl_storage! macro (as KittyStorage).
So your first hash should be: 0xe3fd011a1ea758d2e1b46ed3cec43fc8
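Putting the pieces together, the full storage key is the concatenation twox128(module prefix) ++ twox128(item name) ++ blake2_256(raw key bytes). Here is a minimal sketch in Python, assuming blake2AsHex corresponds to a 32-byte BLAKE2b digest; the two twox128 prefixes are copied from the util_crypto results above, since xxhash is not in the Python standard library:

```python
import hashlib

# twox128 prefixes computed above with util_crypto.xxhashAsHex(..., 128)
PREFIX_KITTYSTORAGE = "e3fd011a1ea758d2e1b46ed3cec43fc8"  # twox128("KittyStorage")
PREFIX_VALUE = "6b2f21989c43cc4e06ac1ad3e2027000"         # twox128("Value")

def blake2_256(data: bytes) -> str:
    """Substrate's blake2_256: a 32-byte BLAKE2b digest, hex-encoded."""
    return hashlib.blake2b(data, digest_size=32).hexdigest()

def storage_key(map_key_hex: str) -> str:
    """Build the state_getStorage key for Value(map_key).

    Note: we hash the raw 32 bytes of the key, not the ASCII string.
    """
    raw = bytes.fromhex(map_key_hex.removeprefix("0x"))
    return "0x" + PREFIX_KITTYSTORAGE + PREFIX_VALUE + blake2_256(raw)

key = storage_key("0x" + "00" * 31 + "01")
print(key)
```

The resulting hex string is what goes into the params array of the state_getStorage RPC call.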

Related

Msearch Elasticsearch API - Rust

By this point, I feel like I am the only other person on earth using multi-search in Rust, other than the person who wrote it.
There is zero documentation on this other than the rather confusing https://docs.rs/elasticsearch/7.14.0-alpha.1/elasticsearch/struct.Msearch.html
I figured I had to pass MsearchParts as an argument to client.msearch(...), and luckily for me there is a piece of documentation for how that is supposed to look, but it is so sparse that I have no clue what to do, because I did not write the API.
I have no clue how to pass my JSON:
{"index":"cat_food"}
{"query":{"term":{"name":{"term":"Whiskers"}}}}
{"index":"cat_food"}
{"query":{"term":{"name":{"term":"Chicken"}}}}
{"index":"cat_food"}
{"query":{"term":{"name":{"term":"Turkey"}}}}
(not shown in the code: the extra empty line required by Elasticsearch multi-searches)
and get a 200 response.
As a side note, my JSON is well formatted into a string that can be sent in a normal reqwest request; the issue is more about how to turn that JSON string into MsearchParts.
The body needs to conform to the structure specified for the msearch API
The multi search API executes several searches from a single API
request. The format of the request is similar to the bulk API format
and makes use of the newline delimited JSON (NDJSON) format.
The structure is as follows:
header\n
body\n
header\n
body\n
let client = Elasticsearch::default();
let msearch_response = client
    .msearch(MsearchParts::None)
    .body::<JsonBody<Value>>(vec![
        json!({"index":"cat_food"}).into(),
        json!({"query":{"term":{"name":{"term":"Whiskers"}}}}).into(),
        json!({"index":"cat_food"}).into(),
        json!({"query":{"term":{"name":{"term":"Chicken"}}}}).into(),
        json!({"index":"cat_food"}).into(),
        json!({"query":{"term":{"name":{"term":"Turkey"}}}}).into(),
    ])
    .send()
    .await?;

let json: Value = msearch_response.json().await?;

// enumerate over the response objects in the response
for (idx, response) in json["responses"].as_array().unwrap().iter().enumerate() {
    println!();
    println!("response {}", idx);
    println!();
    // print the name of each matching document
    for hit in response["hits"]["hits"].as_array().unwrap() {
        println!("{}", hit["_source"]["name"]);
    }
}
The above example uses MsearchParts::None, but because all the search requests target the same index, the index can be specified with MsearchParts::Index(...) and doesn't need to be repeated in the header of each search request:
let client = Elasticsearch::default();
let msearch_response = client
    .msearch(MsearchParts::Index(&["cat_food"]))
    .body::<JsonBody<Value>>(vec![
        json!({}).into(),
        json!({"query":{"term":{"name":{"term":"Whiskers"}}}}).into(),
        json!({}).into(),
        json!({"query":{"term":{"name":{"term":"Chicken"}}}}).into(),
        json!({}).into(),
        json!({"query":{"term":{"name":{"term":"Turkey"}}}}).into(),
    ])
    .send()
    .await?;

let json: Value = msearch_response.json().await?;

// enumerate over the response objects in the response
for (idx, response) in json["responses"].as_array().unwrap().iter().enumerate() {
    println!();
    println!("response {}", idx);
    println!();
    // print the name of each matching document
    for hit in response["hits"]["hits"].as_array().unwrap() {
        println!("{}", hit["_source"]["name"]);
    }
}
Because msearch's body() fn takes a Vec<T> where T implements the Body trait, and Body is implemented for &str, you can pass a string literal directly:
let msearch_response = client
    .msearch(MsearchParts::None)
    .body(vec![r#"{"index":"cat_food"}
{"query":{"term":{"name":{"term":"Whiskers"}}}}
{"index":"cat_food"}
{"query":{"term":{"name":{"term":"Chicken"}}}}
{"index":"cat_food"}
{"query":{"term":{"name":{"term":"Turkey"}}}}
"#])
    .send()
    .await?;

let json: Value = msearch_response.json().await?;
Per the docs of MsearchParts, it looks like an array of &str is needed to construct the Index (or IndexType) variant of the MsearchParts enum. So please give the following a try and see if it works:
let parts = MsearchParts::Index(&[
    r#"{"index":"cat_food"}"#,
    r#"{"query":{"term":{"name":{"term":"Whiskers"}}}}"#,
    r#"{"index":"cat_food"}"#,
    r#"{"query":{"term":{"name":{"term":"Chicken"}}}}"#,
    r#"{"index":"cat_food"}"#,
    r#"{"query":{"term":{"name":{"term":"Turkey"}}}}"#,
]);
After hours of investigating I took another approach, using a body vector and the msearch API.
I think the JSON doesn't go into MsearchParts but into a vector of bodies
(see https://docs.rs/elasticsearch/7.14.0-alpha.1/elasticsearch/#request-bodies).
It runs, but the response gives me a 400 error, and I don't know why.
I assume it's the missing empty (JSON) bodies, as required in the Elastic console.
What do you think?
let mut body: Vec<JsonBody<_>> = Vec::with_capacity(4);
body.push(json!(
    {"query": {
        "match": {"title": "bee"}
    }}
).into());
body.push(json!(
    {"query": {
        "multi_match": {
            "query": "tree",
            "fields": ["title", "info"]
        }
    }, "from": 0, "size": 2}
).into());

let search_response = client
    .msearch(MsearchParts::Index(&["nature_index"]))
    .body(body)
    .send()
    .await?;

Go: JSON marshalling of a complex nested structure

I wanted to clarify how to set values for
type ElkBulkInsert struct {
    Index []struct {
        _Index string `json:"_index"`
        _Id    string `json:"_id"`
    } `json:"index"`
}
so that json.Marshal works.
There were no problems with a flat structure:
package main

import (
    "encoding/json"
    "fmt"
)

type ElkBulkInsert struct {
    Index []struct {
        _Index string `json:"_index"`
        _Id    string `json:"_id"`
    } `json:"index"`
}

type ElkBulIsertUrl struct {
    Url string `json:"url"`
}

func main() {
    sf := ElkBulIsertUrl{
        Url: "http://mail.ru",
    }
    dd, _ := json.Marshal(sf)
    fmt.Println(string(dd))
}
It's really unclear what you're asking here. Are you asking why the JSON output doesn't match what you expect? Are you unsure how to initialise/set values on the Index field of type []struct{...}?
Because it's quite unclear, I'll attempt to explain why your JSON output may appear to have missing fields (or why not all fields are getting populated), how you can initialise your fields, and how you may be able to improve the types you have.
General answer
If you want to marshal/unmarshal into a struct/type you made, there's a simple rule to keep in mind:
json.Marshal and json.Unmarshal can only access exported fields. An exported field has a capitalised identifier. Your Index field in ElkBulkInsert is a slice of an anonymous struct which has no exported fields (_Index and _Id start with an underscore).
Because you're using the json:"_index" tags anyway, the field name itself doesn't even have to resemble the fields of the JSON. It's obviously preferable that they do in most cases, but it's not required. As an aside: you have a field called Url. It's generally considered better form to follow the standards with regard to initialisms and rename that field to URL:
Words in names that are initialisms or acronyms (e.g. "URL" or "NATO") have a consistent case. For example, "URL" should appear as "URL" or "url" (as in "urlPony", or "URLPony"), never as "Url". Here's an example: ServeHTTP not ServeHttp.
This rule also applies to "ID" when it is short for "identifier," so write "appID" instead of "appId".
Code generated by the protocol buffer compiler is exempt from this rule. Human-written code is held to a higher standard than machine-written code.
With that being said, simply changing the types to this will work:
type ElkBulkInsert struct {
    Index []struct {
        Index string `json:"_index"`
        ID    string `json:"_id"`
    } `json:"index"`
}

type ElkBulIsertUrl struct {
    URL string `json:"url"`
}
Of course, this implies the data for ElkBulkInsert looks something like:
{
    "index": [
        {
            "_index": "foo",
            "_id": "bar"
        },
        {
            "_index": "fizz",
            "_id": "buzz"
        }
    ]
}
When you want to set values for a structure like this, I generally find it easier to shy away from using anonymous struct fields like the one you have in your Index slice, and use something like:
type ElkInsertIndex struct {
    ID    string `json:"_id"`
    Index string `json:"_index"`
}

type ElkBulkInsert struct {
    Index []ElkInsertIndex `json:"index"`
}
This makes it a lot easier to populate the slice:
bulk := ElkBulkInsert{
    Index: make([]ElkInsertIndex, 0, 10), // as an example
}

for i := 0; i < 10; i++ {
    bulk.Index = append(bulk.Index, ElkInsertIndex{
        ID:    fmt.Sprintf("%d", i),
        Index: fmt.Sprintf("Idx#%d", i), // wherever these values come from
    })
}
Even easier (for instance when writing fixtures or unit tests) is to create a pre-populated literal:
data := ElkBulkInsert{
    Index: []ElkInsertIndex{
        {
            ID:    "foo",
            Index: "bar",
        },
        {
            ID:    "fizz",
            Index: "buzz",
        },
    },
}
With your current type, using the anonymous struct type, you can still do the same thing, but it looks messier, and requires more maintenance: you have to repeat the type:
data := ElkBulkInsert{
    Index: []struct {
        ID    string `json:"_id"`
        Index string `json:"_index"`
    }{
        {
            ID:    "foo",
            Index: "bar",
        },
        // you can omit the field names if you know the order and initialise all of them
        {
            "fizz",
            "buzz",
        },
    },
}
Omitting field names when initialising is possible in both cases, but I'd advise against it. As fields get added, renamed, or moved around over time, maintaining this mess becomes a nightmare.
That's also why I'd strongly suggest you move away from the anonymous struct here. Anonymous structs can be useful in places, but when you're representing a known data format that comes from, or is sent to, an external party (as JSON tends to do), I find it better to have all the relevant types named, labelled, and documented somewhere. At the very least, you can add comments to each type detailing what values it represents, and you can generate a nice, informative godoc from it all.

Retrieve a struct/list of structs as views from a smart contract

I am trying to get the data of a single struct and the data of a list of this struct in view methods in a smart contract.
The struct would be something like:
#[derive(NestedEncode, NestedDecode, TopEncode, TopDecode, TypeAbi, Clone)]
pub struct Stream<M: ManagedTypeApi> {
    pub id: u64,
    pub payment: BigUint<M>,
    pub enddate: u64,
    pub receiver: ManagedAddress<M>,
}
A single view would be like:
#[view(getStream)]
fn get_stream(&self, id: u64) -> Stream<Self::Api> {
    let payment = self.payment(id.clone()).get().clone();
    let enddate = self.enddate(id.clone()).get().clone();
    let receiver = self.receiver(id.clone()).get().clone();
    Stream {
        id,
        payment,
        enddate,
        receiver,
    }
}
in the mandos tests I would expect something like:
"expect": {
    "out": [
        "u64:1",
        "100,000,000,000",
        "u64:200,000",
        "address:my_address"
    ]
},
but in the test I always get an un-encoded byte result like:
Want: ["u64:1", "100,000,000,000", "u64:200,000", "address:my_address"]. Have: [0x000000000000000100000005174876e8000000000000030d406d795f616464726573735f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f5f]
I also tried different return types such as ManagedMultiResultVec or MultiResult with a ManagedVec in general, but all seem to produce this output for me.
I also could not find out how I can retrieve and decode such a result in a dApp in TypeScript with the erdjs lib.
Can someone tell me what I have missed?
In mandos, you should expect this as out:
["u64:1|biguint:100,000,000,000|u64:200,000|address:my_address"]
Or
{
    "0id": "u64:1",
    "1payment": "biguint:100,000,000,000",
    "2enddate": "u64:200,000",
    "3receiver": "address:my_address"
}
I think that should be right.
And in a Dapp, you need the ABI file of the contract and need to do something like:
const result = ...; // do transaction here
const abi = await SmartContractAbi.fromAbiPath('...abi.json');
result.setEndpointDefinition(abi.getEndpoint('get_stream'));
console.log(result.unpackOutput());
From there you can figure out how to convert the result.

Passing nested JSON as variable in Machinebox GraphQL mutation using golang

Hi there Golang experts,
I am using the Machinebox "github.com/machinebox/graphql" library in golang as client for my GraphQL server.
Mutations with single-layer JSON variables work just fine.
I am, however, at a loss as to how to pass nested JSON as a variable.
With single-layer JSON I simply create a map[string]string and pass it into the Var method. This in turn populates my GraphQL $data variable.
The machinebox (graphql.Request).Var method takes an interface{} as its value, so the map[string]string works fine, but embedded JSON simply throws an error.
func Mutate(data map[string]string, mutation string) interface{} {
    client := GQLClient()
    graphqlRequest := graphql.NewRequest(mutation)
    graphqlRequest.Var("data", data)
    var graphqlResponse interface{}
    if err := client.Run(context.Background(), graphqlRequest, &graphqlResponse); err != nil {
        panic(err)
    }
    return graphqlResponse
}
Mutation:
mutation createWfLog($data: WfLogCreateInput) {
    createWfLog(data: $data) {
        taskGUID {
            id
            status
            notes
        }
        event
        log
        createdBy
    }
}
data variable shape:
{
    "data": {
        "event": "Task Create",
        "taskGUID": {
            "connect": { "id": "606f46cdbbe767001a3b4707" }
        },
        "log": "my log and information",
        "createdBy": "calvin cheng"
    }
}
As mentioned, the embedded JSON (the value of taskGUID) presents the problem. If the value were a simple string, there would be no issue.
I have tried using a struct to define every nesting and passing in the struct.
I have also tried unmarshaling a struct to JSON. Same error.
Any help appreciated,
Calvin
I have figured it out, and it is a case of my noobness with Golang.
I didn't need to do any of this conversion of data or any such crazy type conversions. For some reason I got it into my head that everything HAD to be a map for the machinebox Var(key, value) to work.
Thanks to xarantolus's referenced site I was able to construct a proper struct. I populated the struct with my variable data (which was nested JSON) and the mutation ran perfectly!
thanks!

How to pass GraphQLEnumType in mutation as a string value

I have following GraphQLEnumType
const PackagingUnitType = new GraphQLEnumType({
    name: 'PackagingUnit',
    description: '',
    values: {
        Carton: { value: 'Carton' },
        Stack: { value: 'Stack' },
    },
});
On a mutation query, if I pass the PackagingUnit value as Carton (without quotes) it works. But if I pass it as the string 'Carton' it throws the following error:
In field "packagingUnit": Expected type "PackagingUnit", found "Carton"
Is there a way to pass the enum as a string from the client side?
EDIT:
I have a form in my front end where I collect the PackagingUnit value from the user along with other fields. The PackagingUnit is represented as a string in the front end (not as the GraphQL enum type). Since I am not using Apollo Client or Relay, I had to construct the GraphQL query string myself.
Right now I collect the form data as JSON, do JSON.stringify(), and then remove the double quotes on the properties to get the final GraphQL-compatible query.
e.g. my form has two fields: packagingUnitType (a GraphQLEnumType) and noOfUnits (a GraphQLFloat).
my json structure is
{
    packagingUnitType: "Carton",
    noOfUnits: 10
}
Convert this to a string using JSON.stringify():
'{"packagingUnitType":"Carton","noOfUnits":10}'
And then remove the double quotes on the properties:
{packagingUnitType:"Carton",noOfUnits:10}
Now this can be passed to the graphQL server like
newStackMutation(input: {packagingUnitType:"Carton", noOfUnits:10}) {
...
}
This works only if the enum value does not have any quotes. like below
newStackMutation(input: {packagingUnitType:Carton, noOfUnits:10}) {
...
}
Thanks
GraphQL queries can accept variables. This will be easier for you, as you will not have to do any tricky string concatenation.
I suppose you use GraphQLHttp or similar. To send your variables along with the query, send a JSON body with a query key and a variables key:
// JSON body
{
    "query": "query MyQuery { ... }",
    "variables": {
        "variable1": ...,
    }
}
The query syntax is:
mutation MyMutation($input: NewStackMutationInput) {
    newStackMutation(input: $input) {
        ...
    }
}
And then, you can pass your variable as:
{
    "input": {
        "packagingUnitType": "Carton",
        "noOfUnits": 10
    }
}
GraphQL will understand that packagingUnitType is an enum type and will do the conversion for you.
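The point generalises: build the request body as a plain data structure and let a JSON encoder produce the final string, rather than stringifying and stripping quotes by hand. A small sketch in Python (the operation and field names are taken from the question; the transport/HTTP call is omitted, and the selected field is illustrative):

```python
import json

# The mutation references a variable instead of inlining values into the query string.
query = """
mutation MyMutation($input: NewStackMutationInput) {
  newStackMutation(input: $input) {
    id
  }
}
"""

# Enum values travel as plain strings inside the variables JSON;
# the server coerces "Carton" to the PackagingUnit enum.
body = {
    "query": query,
    "variables": {
        "input": {
            "packagingUnitType": "Carton",
            "noOfUnits": 10,
        }
    },
}

payload = json.dumps(body)  # this is the JSON body POSTed to the GraphQL endpoint
print(payload)
```

No quote-stripping step is needed: the enum stays a quoted string in the variables, and only the query text mentions the variable by name.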