near-sdk-rs view account keys - nearprotocol

I want to obtain, in my smart contract, the list of access keys for a specific account. Over RPC, I would do it like this:
curl --location --request POST 'https://rpc.mainnet.near.org/' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"id": "dontcare",
"method": "query",
"params": {
"request_type": "view_access_key_list",
"finality": "final",
"account_id": "account.near"
}
}'
What is the near-sdk-rs equivalent of this?
I know that if I wanted to create keys, I'd use Promise::new(...).add_access_key(...). But I can't find any read operations there (and it wouldn't really make sense to use a Promise for a read anyway).
But surely it must be accessible to smart contracts, if it's accessible to the protocol. What am I missing?

Related

Receipt is not in any Chunks from RPC

I have the following transaction: 7gjmhdoPZcS5GQ5tbquEmzdoNXaVBmpTts6GidZZLzKM, which has two receipts, the first of which is 4QJkVxdoc85VxnucLASBZykzygS3EiUyZ98eS1cAHa7L.
Both the transaction and the receipt were executed in block 65694172. However, when fetching that block's chunks via the RPC chunk method, the transaction is in one of the chunks (chunk 0), but the receipt isn't. It also isn't in any of the other chunks, or in any of the other blocks. Why is this?
Curl request that demonstrates the issue:
curl --location --request POST 'https://archival-rpc.mainnet.near.org' \
--header 'Content-Type: application/json' \
--data-raw '{
"method": "chunk",
"params": {
"block_id": 65694172,
"shard_id": 0
},
"jsonrpc": "2.0",
"id": 0
}'
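Not an answer, just a sketch for double-checking the observation above: the same chunk call can be repeated for every shard of the block to list the receipt ids each chunk contains. It assumes mainnet had four shards (0-3) at that height and that jq is available; neither detail comes from the original post.
for shard in 0 1 2 3; do
  curl -s --location --request POST 'https://archival-rpc.mainnet.near.org' \
  --header 'Content-Type: application/json' \
  --data-raw "{
    \"method\": \"chunk\",
    \"params\": { \"block_id\": 65694172, \"shard_id\": $shard },
    \"jsonrpc\": \"2.0\",
    \"id\": 0
  }" | jq "{ shard: $shard, receipt_ids: [.result.receipts[].receipt_id] }"
done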

How to send JSON body with curl to Postgrest

I'm trying to insert data into my PostgreSQL DB, but I'm failing.
So basically, I can select the data with the request below:
curl 10.127.18.18:3001/mytable
I can get all rows with this request.
I tried pretty much every combination of the commands below, but all of them failed. How can I insert basic data with PostgREST?
The "mytable" table has two columns, "user_id" and "username":
curl POST 10.127.18.18:3001/mytable { "user_id": 3333, "username": testuser }
curl -X POST 10.127.18.18:3001/mytable HTTP/1.1 { "user_id": "3333", "username": "testuser" }
Thanks!
For those who encounter the same problem, this request worked for me:
curl --location --request POST '10.127.18.18:3001/mytable' --header 'Content-Type: application/json' --data-raw '{ "user_id": "11", "username": "test" }'
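A small follow-up sketch (not part of the original answer): PostgREST can also return the row it just inserted if you add a Prefer header, which makes it easy to confirm the insert actually happened. The values below are just placeholders in the style of the question:
curl --location --request POST '10.127.18.18:3001/mytable' \
--header 'Content-Type: application/json' \
--header 'Prefer: return=representation' \
--data-raw '{ "user_id": "12", "username": "test2" }'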

Elasticsearch 6 create new field requires data type but "Indices created in 6.x only allow a single-type per index"

Creating a new field in Elasticsearch 6.6.2 gives the following error:
{
"error": {
"root_cause": [
{
"type": "action_request_validation_exception",
"reason": "Validation Failed: 1: mapping type is missing;"
}
],
"type": "action_request_validation_exception",
"reason": "Validation Failed: 1: mapping type is missing;"
},
"status": 400
}
The request:
curl --request PUT http://10.1.3.81:9200/netswitch_message/_mapping -H "Content-Type: application/json" -d \
'{
"properties": {
"amount": {"type": "integer"}
}
}'
gives an error no matter what data type I use. The index already has fields of types integer, text/keyword, text, and date. These are the other variants I tried:
curl --request PUT http://10.1.3.81:9200/netswitch_message/_mapping -H "Content-Type: application/json" -d "{\"properties\": {\"amount\": {\"type\": \"integer\"}}}"
curl --request PUT http://10.1.3.81:9200/netswitch_message/_mapping -H "Content-Type: application/json" -d "{\"properties\": {\"amount\": {\"type\": \"text\"}}}"
curl --request PUT http://10.1.3.81:9200/netswitch_message/_mapping/data -H "Content-Type: application/json" -d "{\"properties\": {\"amount\": {}}}"
curl --request PUT http://10.1.3.81:9200/netswitch_message/_mapping -H "Content-Type: application/json" -d "{\"properties\": {\"amount\": {}}}"
I expected this to create a new field.
Instead, I got this validation error:
{"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: mapping type is missing;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: mapping type is missing;"},"status":400}
You are right that 6.x restricts you to a single _type, but you do still need to supply the name of that type (in 7.x, it defaults to _doc).
Change your request to specify the _type in the URL path, like the following, which sets it to "my-type":
curl --request PUT http://10.1.3.81:9200/netswitch_message/_mapping/my-type -H "Content-Type: application/json" -d \
'{
"properties": {
"amount": {"type": "integer"}
}
}'
See: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/indices-put-mapping.html#indices-put-mapping
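One caveat worth spelling out (not stated in the original answer): since the index already has fields, it already has exactly one mapping type, and the name in the PUT URL has to match that existing type, otherwise 6.x rejects the update for introducing a second type. The existing type name can be looked up first:
curl --request GET http://10.1.3.81:9200/netswitch_message/_mapping?pretty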

Unable to create visualization using curl command in Elasticsearch

I am trying to create a visualization using a curl command. I am using Elasticsearch 6.2.3. I am able to create the same one in Elasticsearch 5.6.8.
I am using this command:
curl -XPUT http://localhost:9200/.kibana/visualization/vis1 -H 'Content-Type: application/json' -d @vis1.json
It shows this error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [visualization, doc]"}],"type":"illegal_argument_exception","reason":"Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [visualization, doc]"},"status":400}
Contents of vis1.json:
{
"title": "vis1",
"visState": "{\"title\":\"vis1\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"split\",\"params\":{\"field\":\"UsageEndDate\",\"interval\":\"M\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"row\":false}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"ProductName.keyword\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"4eb9f840-3969-11e8-ae19-552e148747c3\",\"filter\":[],\"query\":{\"language\":\"lucene\",\"query\":\"\"}}"
}
}
This works fine in Elasticsearch 5.6.8 but not in 6.2.3.
Thanks in advance.
In Kibana 6, the mapping of the .kibana index has changed in order to satisfy the upcoming "single type per index" breaking change.
You can try this way instead:
curl -XPUT http://localhost:9200/.kibana/doc/visualization:vis1 -H 'Content-Type: application/json' -d @vis1.json
Also the vis1.json file needs to be changed a little bit (the content needs to be moved to the visualization sub-section), like this:
{
"type": "visualization",
"updated_at": "2018-04-10T10:00:00.000Z",
"visualization": {
"title": "vis1",
"visState": "{\"title\":\"vis1\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"split\",\"params\":{\"field\":\"UsageEndDate\",\"interval\":\"M\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{},\"row\":false}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"ProductName.keyword\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}",
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"4eb9f840-3969-11e8-ae19-552e148747c3\",\"filter\":[],\"query\":{\"language\":\"lucene\",\"query\":\"\"}}"
}
}
}
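As a quick sanity check (a sketch assuming the default .kibana index and the id format shown above), the saved object can be fetched back after the PUT:
curl -XGET 'http://localhost:9200/.kibana/doc/visualization:vis1?pretty'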

Upserting batches into an Elasticsearch store with the bulk API

I have a huge set of documents with the same index and the same type but obviously different ids. I want to either update existing ones or insert new ones in batches. How can I achieve this using the bulk indexing API? I want to do something like the request below, but it throws an error. Basically, I want to upsert multiple docs in batches that share the same index and type.
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/_bulk -d'
{ "index": {"_type": "sometype", "_index": "someindex"}}
{ "_id": "existing_id", "field1": "test1"}
{ "_id": "existing_id2", "field2": "test2"}
'
You need to do it like this:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/someindex/sometype/_bulk -d'
{ "index": {"_id": "existing_id"}}
{ "field1": "test1"}
{ "index": {"_id": "existing_id2"}}
{ "field2": "test2"}
'
Since all documents are in the same index/type, move that to the URL and only specify the _id for each document you want to index or update in your bulk request.
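If the goal is a partial update rather than replacing whole documents, a variant of the same request can use the update action with doc_as_upsert, which updates the document if it exists and inserts it otherwise. This is only a sketch reusing the field names from the question:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/someindex/sometype/_bulk -d'
{ "update": {"_id": "existing_id"}}
{ "doc": { "field1": "test1"}, "doc_as_upsert": true}
{ "update": {"_id": "existing_id2"}}
{ "doc": { "field2": "test2"}, "doc_as_upsert": true}
'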
