Given a key, how do I read the value from a table via the API? - move

Imagine I have a Move module that looks like this.
Move.toml
[package]
name = 'friends'
version = '1.0.0'
[dependencies.AptosFramework]
git = 'https://github.com/aptos-labs/aptos-core.git'
rev = 'testnet'
subdir = 'aptos-move/framework/aptos-framework'
[addresses]
friends = "81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"
sources/nicknames.move
module friends::nicknames {
    use std::error;
    use std::signer;
    use std::string::String;
    use aptos_std::table::{Self, Table};

    const ENOT_INITIALIZED: u64 = 0;

    struct Nicknames has key {
        // A map of friends' nicknames to wallet addresses.
        nickname_to_addr: Table<String, address>
    }

    /// Initialize Nicknames to the caller's account.
    public entry fun initialize(account: &signer) {
        let nicknames = Nicknames {
            nickname_to_addr: table::new(),
        };
        move_to(account, nicknames);
    }

    /// Add a nickname -> address entry to the caller's Nicknames table.
    public entry fun add(account: &signer, nickname: String, friend_addr: address) acquires Nicknames {
        let signer_addr = signer::address_of(account);
        assert!(exists<Nicknames>(signer_addr), error::not_found(ENOT_INITIALIZED));
        let nickname_to_addr = &mut borrow_global_mut<Nicknames>(signer_addr).nickname_to_addr;
        table::add(nickname_to_addr, nickname, friend_addr);
    }
}
I then published the module (to testnet), initialized Nicknames to my account, and then added an entry:
aptos move publish
aptos move run --function-id 81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::initialize
aptos move run --function-id 81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::add --args string:dport address:81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e
Now that my table is on-chain with some data, how would I go about reading the value of the dport key? I think I can use the API for this.

You're right that you can use the API for this! First, let's get some information about your table.
Let's look at the resource the above Move module created in your account. To start, we construct the struct tag (aka resource ID), which looks like this:
<account_address>::<module>::<struct_name>
In your case:
0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::Nicknames
Because an account can hold only one instance of each resource on the Aptos blockchain, this uniquely identifies the resource in your account. Using it we can then get the table handle. A table handle is a globally unique ID (not just unique within the bounds of your account) that points to that specific table. We need it to make any further queries, so let's get that first:
$ curl https://fullnode.testnet.aptoslabs.com/v1/accounts/0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e/resource/0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::Nicknames | jq .
{
  "type": "0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::Nicknames",
  "data": {
    "nickname_to_addr": {
      "handle": "0x64fa842ed2c9da130f0419875e6c101aeea263882fadee3257b13f1bb4d7d41d"
    }
  }
}
Explaining the above:
Hitting https://fullnode.testnet.aptoslabs.com/v1/accounts/0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e/resource/<name> lets us get a specific resource on an account.
We then request specifically 0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::Nicknames. The two addresses happen to be the same here, but that's only because the module lives on the same account; another example could be 0x1::aptos_coin::AptosCoin.
We can see the handle there, 0x64fa842ed2c9da130f0419875e6c101aeea263882fadee3257b13f1bb4d7d41d (a small script that does this lookup is sketched below).
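For reference, here is a minimal sketch of the same resource lookup done from code. It is written in Python and assumes the requests package is installed; the node URL and addresses are the ones used above.
import requests

NODE = "https://fullnode.testnet.aptoslabs.com/v1"
ADDR = "0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"

# Fetch the Nicknames resource from the account and pull out the table handle.
resource = requests.get(f"{NODE}/accounts/{ADDR}/resource/{ADDR}::nicknames::Nicknames").json()
handle = resource["data"]["nickname_to_addr"]["handle"]
print(handle)  # 0x64fa842ed2c9da130f0419875e6c101aeea263882fadee3257b13f1bb4d7d41d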
Using this handle, we can now query the API:
$ cat query.json
{
  "key_type": "0x1::string::String",
  "value_type": "address",
  "key": "dport"
}
$ curl -H 'Content-Type: application/json' --data-binary "@query.json" https://fullnode.testnet.aptoslabs.com/v1/tables/0x64fa842ed2c9da130f0419875e6c101aeea263882fadee3257b13f1bb4d7d41d/item
"0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"
Explaining the above:
The request to get an item from a table via the API is a POST request. The data comes from this query.json file.
key_type is the type of the key in the table. You can see from the original declaration of nickname_to_addr that this was 0x1::string::String.
value_type is the type of the value. Its type was address, a special type that isn't deployed at any particular module (hence the lack of an <addr>::<module>:: prefix).
key is the key that we're querying in the table (the same request is sketched in Python just below).
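Continuing the hypothetical Python sketch from above (same assumed NODE constant, plus the handle we just found), the equivalent of the curl POST looks like this:
import requests

NODE = "https://fullnode.testnet.aptoslabs.com/v1"
HANDLE = "0x64fa842ed2c9da130f0419875e6c101aeea263882fadee3257b13f1bb4d7d41d"

# POST the key/value types and the key itself to the table item endpoint.
value = requests.post(
    f"{NODE}/tables/{HANDLE}/item",
    json={
        "key_type": "0x1::string::String",
        "value_type": "address",
        "key": "dport",
    },
).json()
print(value)  # the address stored under the "dport" key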
If, hypothetically, the key type of the table were something more complex, say a struct rather than a single value like a string, you would set key_type to that struct's type and represent the key itself as JSON in the request, e.g. (using a made-up FullName struct as the key type):
{
  "key_type": "0x81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e::nicknames::FullName",
  "value_type": "address",
  "key": {
    "first_name": "Ash",
    "last_name": "Ketchum"
  }
}
This is the extent of what you can do with the API right now: reading values when you know the key ahead of time. If you want to do either of the following, you need to query an indexer instead:
Iterate through all the keys in a table.
Get the entire table, both keys and values.
I'll write an answer on how to do this later.

Related

How can I mint multiple NFT copies based on the NEP-171 standard?

I'm currently minting tokens like this:
self.tokens.mint(token_id.clone(), account.clone(), Some(token_metadata.clone()));
These are the params that I use to mint new tokens:
'{"token_id":next_tokenid_counter,"account": "'dokxo.testnet'", "token_metadata": { "title": "Some titile", "description": "Some description", "media": "","extra":"","copies":copies_number}}'
With this I can only mint one token with the metadata info, so only one token ever exists.
I'm looking for a method in NEAR/Rust, like Solidity's method for minting n copies, e.g.:
_mintBatch(address to, uint256[] ids, uint256[] amounts, bytes data)
Any suggestions or examples for this?
The easiest implementation is probably to pre-mint all the tokens in the series.
I don't know Rust, so my example will be in AssemblyScript, and I will call the method nft_mint_series (you can call it whatever you want), following the NEP-171 standard (and NEP-177 for metadata). We can do something like the following example implementation. I will assume that you have a mint function, which it looks like you already have.
nft_mint_series will do one thing, which is to call the nft_mint function for all copies. You MUST change the id of each token, but everything else you do is up to the implementation and logic you want. I also change the title of each token in the method. Though this example is in AssemblyScript, I think it shouldn't be too difficult to find an equivalent in Rust, as it's a simple for loop.
@nearBindgen
export class Contract {
  // Example implementation of how we can mint multiple copies of an NFT.
  // The return type is optional; I added it to make it easier to test the function.
  nft_mint_series(
    to: string,
    id: string,
    copies: i32,
    metadata: TokenMetadata
  ): Token[] {
    const seriesName = metadata.title; // Store the title so we can change the name in each token's metadata.
    const tokens: Token[] = [];
    for (let i = 0; i < copies; i++) {
      const token_id = id + ':' + (i + 1).toString(); // format -> id:edition
      // (optional) Change the title (and other properties) in the metadata
      const title = seriesName + ' #' + (i + 1).toString(); // format -> Title #Edition
      metadata.title = title;
      tokens.push(this.nft_mint(token_id, to, metadata));
    }
    return tokens;
  }
}
The example above is just one simple and straightforward approach, and is not necessarily how it should be done in a real application.

Passing default/static values from server to client

I have an input type with two fields used for filtering a query on the client.
I want to pass the default values (rentIntervalLow + rentIntervalHigh) from server to the client, but don't know how to do it.
Below is my current code. I've come up with two naïve solutions:
Letting the client introspect the whole schema.
Having a global config object, and creating a queryable Config type with a resolver that returns the config object's values.
Any better suggestions than the above for how to make default/config values on the server accessible to the client?
// schema.js
const typeDefs = gql`
  input FilteringOptions {
    rentIntervalLow: Int = 4000
    rentIntervalHigh: Int = 10000
  }

  type Home {
    id: Int
    roomCount: Int
    rent: Int
  }

  type Query {
    allHomes(first: Int, cursor: Int, input: FilteringOptions): [Home]
  }
`

export default typeDefs
I'm using Apollo Server 2.8.1 and Apollo React 3.0.
It's unnecessary to introspect the whole schema to get information about a particular type. You can just write a query like:
query {
  __type(name: "FilteringOptions") {
    inputFields {
      name
      description
      defaultValue
    }
  }
}
Default values are values that will be used when a particular input value is omitted from the query. So to utilize the defaults, the client would pass an empty object to the input argument of the allHomes field. You could also give input a default value of {}, which would allow the client not to provide the input argument at all, while still relaying the min and max default values to the resolver.
If, however, your intent is to provide the minimum and maximum values to your client in order to drive some client-specific logic (like validation, drop down menu values, etc.), then you should not utilize default values for this. Instead, this information should be queried directly by the client, using, for example, a Config type like you suggested.
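As an illustration, here is a minimal Python sketch of how a client could fetch those default values over HTTP, assuming the requests package and an Apollo Server listening at http://localhost:4000/graphql (adjust the URL to your setup):
import requests

INTROSPECTION_QUERY = """
query {
  __type(name: "FilteringOptions") {
    inputFields {
      name
      defaultValue
    }
  }
}
"""

# Standard GraphQL-over-HTTP POST: the query string goes in the JSON body under "query".
response = requests.post(
    "http://localhost:4000/graphql",  # assumed endpoint
    json={"query": INTROSPECTION_QUERY},
)
for field in response.json()["data"]["__type"]["inputFields"]:
    print(field["name"], field["defaultValue"])  # e.g. rentIntervalLow 4000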

Azure Function Parameter from Settings

Referring to the following example:
public static void Run([CosmosDBTrigger(
    databaseName: "ToDoItems",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    LeaseCollectionName = "leases",
    CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
    ILogger log)
I understand that ConnectionStringSetting isn't the connection string to use; rather, it's the name of the setting to look up that contains the connection string.
Will this also work for collectionName and databaseName as well? I understand I can experiment and figure it out, but I am confused as to how this is even resolved at build/deployment time.
I see several properties being assigned literal values while others take them from configuration. Is it the underlying constructor for CosmosDBTrigger that takes care of using the appropriate value?
Binding to a function is a way of declaratively connecting another resource to the function; bindings may be connected as input bindings, output bindings, or both. Data from bindings is provided to the function as parameters.
Here is a small sample of an Azure Function using the Cosmos DB trigger, which is invoked when there are inserts or updates in the specified database and collection.
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

namespace CosmosDBSamplesV2
{
    public static class CosmosTrigger
    {
        [FunctionName("CosmosTrigger")]
        public static void Run([CosmosDBTrigger(
            databaseName: "ToDoItems",
            collectionName: "Items",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
            ILogger log)
        {
            if (documents != null && documents.Count > 0)
            {
                log.LogInformation($"Documents modified: {documents.Count}");
                log.LogInformation($"First document Id: {documents[0].Id}");
            }
        }
    }
}
And here is the binding information of the same Azure Function, which is used to pass the parameter values to the function.
Cosmos DB trigger binding in a function.json file
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "leaseCollectionName": "leases",
  "connectionStringSetting": "<connection-app-setting>",
  "databaseName": "Tasks",
  "collectionName": "Items",
  "createLeaseCollectionIfNotExists": true
}
To answer your question about how this is resolved at build/deployment time: the binding information lives in the function.json file (generated from the attributes for a compiled C# function), while the actual values, such as the connection string, come from app settings, locally the local.settings.json file.
That's how it binds the information internally: by looking up the setting by its name.
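For example, a minimal local.settings.json might look like the sketch below (the connection string is a placeholder); at runtime the name "CosmosDBConnection" from ConnectionStringSetting is looked up in these values locally, or in the app settings once deployed to Azure:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "CosmosDBConnection": "AccountEndpoint=https://<your-account>.documents.azure.com:443/;AccountKey=<your-key>;"
  }
}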
Hope it helps.

Spring REST and PATCH method

I'm using SpringBoot and Spring REST.
I would like to understand the HTTP PATCH method to update properties of my Model
Is there any good tutorial explaining how to make it work?
The HTTP PATCH method and the body to be sent
Controller method and how to manage the update operation
I've noticed that many of the provided answers cover JSON Patch or are incomplete. Below is a full explanation and example of what you need, with functioning real-world code.
First, PATCH is a selective PUT. You use it to update any number of fields for an object or list of objects. In a PUT you typically send the entire object with whatever updates.
PATCH /object/7
{
  "objId": 7,
  "objName": "New name"
}

PUT /object/7
{
  "objId": 7,
  "objName": "New name",
  "objectUpdates": true,
  "objectStatus": "ongoing",
  "scoring": null,
  "objectChildren": [
    {
      "childId": 1
    },
    ............
  ]
}
This allows you to update records without a huge number of endpoints. For example, with the above, to update scoring you would otherwise need object/{id}/scoring, and to update the name you would need object/{id}/name: literally one endpoint for every property, or else you require the front end to post the entire object for every update. If you have a huge object, that can cost a lot of unnecessary network time or mobile data. PATCH lets you have one endpoint that accepts the minimal set of object properties, which is what a mobile platform should use.
Here is an example of a real-world use of PATCH:
@ApiOperation(value = "Patch an existing claim with partial update")
@RequestMapping(value = CLAIMS_V1 + "/{claimId}", method = RequestMethod.PATCH)
ResponseEntity<Claim> patchClaim(@PathVariable Long claimId, @RequestBody Map<String, Object> fields) {
    // Sanitize and validate the data
    if (claimId <= 0 || fields == null || fields.isEmpty() || !fields.get("claimId").equals(claimId)) {
        return new ResponseEntity<>(HttpStatus.BAD_REQUEST); // 400 Invalid claim object received or invalid id or id does not match object
    }
    Claim claim = claimService.get(claimId);
    // Does the object exist?
    if (claim == null) {
        return new ResponseEntity<>(HttpStatus.NOT_FOUND); // 404 Claim object does not exist
    }
    // Remove id from request, we don't ever want to change the id.
    // This is not necessary, you can just do it to save time on the reflection
    // loop used below since we checked the id above
    fields.remove("claimId");
    fields.forEach((k, v) -> {
        // use reflection to get field k on object and set it to value v
        // Change Claim.class to whatever your object is: Object.class
        Field field = ReflectionUtils.findField(Claim.class, k); // find field in the object class
        field.setAccessible(true);
        ReflectionUtils.setField(field, claim, v); // set given field for defined object to value v
    });
    claimService.saveOrUpdate(claim);
    return new ResponseEntity<>(claim, HttpStatus.OK);
}
The above can be confusing for some people, as newer devs don't normally deal with reflection like that. Basically, whatever you pass this function in the body, it will find the associated claim using the given ID, then ONLY update the fields you pass in as key-value pairs.
Example body:
PATCH /claims/7
{
  "claimId": 7,
  "claimTypeId": 1,
  "claimStatus": null
}
The above will update claimTypeId and claimStatus to the given values for claim 7, leaving all other values untouched.
So the return would be something like:
{
  "claimId": 7,
  "claimSrcAcctId": 12345678,
  "claimTypeId": 1,
  "claimDescription": "The vehicle is damaged beyond repair",
  "claimDateSubmitted": "2019-01-11 17:43:43",
  "claimStatus": null,
  "claimDateUpdated": "2019-04-09 13:43:07",
  "claimAcctAddress": "123 Sesame St, Charlotte, NC 28282",
  "claimContactName": "Steve Smith",
  "claimContactPhone": "777-555-1111",
  "claimContactEmail": "steve.smith@domain.com",
  "claimWitness": true,
  "claimWitnessFirstName": "Stan",
  "claimWitnessLastName": "Smith",
  "claimWitnessPhone": "777-777-7777",
  "claimDate": "2019-01-11 17:43:43",
  "claimDateEnd": "2019-01-11 12:43:43",
  "claimInvestigation": null,
  "scoring": null
}
As you can see, the full object comes back without changing any data other than what you wanted to change. I know there is a bit of repetition in the explanation here; I just wanted to outline it clearly.
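To make the client side concrete, here is a minimal sketch of how a consumer might send that partial update using Python's requests package (the host and port are placeholders; the path matches the example above):
import requests

# Only the fields we want to change (plus the claimId the endpoint validates) are sent.
partial_update = {
    "claimId": 7,
    "claimTypeId": 1,
    "claimStatus": None,  # serialized as JSON null
}

response = requests.patch(
    "http://localhost:8080/claims/7",  # assumed host/port; path from the example above
    json=partial_update,
)
print(response.status_code)  # 200 on success
print(response.json())       # the full, updated Claim comes back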
As far as Spring is concerned, there is nothing inherently different about the PATCH method compared to PUT and POST. The challenge is what you pass in your PATCH request and how you map the data in the controller. If you map to your value bean using @RequestBody, you'll have to figure out what is actually set and what null values mean. Other options would be to limit PATCH requests to one property and specify it in the URL, or to map the values to a Map.
See also Spring MVC PATCH method: partial updates
Create a RestTemplate using the Apache HttpComponents request factory (the default request factory does not support the PATCH verb):
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
RestTemplate rest = new RestTemplate(new HttpComponentsClientHttpRequestFactory());
Now make the PATCH call, where api is the endpoint URL, request is an HttpEntity wrapping the partial body, and responseType describes the expected response:
ResponseEntity<Map<String, Object>> response = rest.exchange(api, HttpMethod.PATCH, request, responseType);

How to save and retrieve lists in PhoneApplicationService.Current.State?

I need to store and retrieve lists in PhoneApplicationService.Current.State[] but this is not a list of strings or integers:
public class searchResults
{
    public string title { get; set; }
    public string description { get; set; }
}

public List<searchResults> resultData = new List<searchResults>()
{
    //
};
The values of the result are fetched from the internet, and when the application is switched away this data needs to be saved in isolated storage for multitasking. How do I save this list and retrieve it again?
If the question really is about how to save the data then you just do
PhoneApplicationService.Current.State["SearchResultList"] = resultData;
and to retrieve again you do
List<searchResults> loadedResultData = (List<searchResults>)PhoneApplicationService.Current.State["SearchResultList"];
Here is a complete working sample:
// your list for results
List<searchResults> resultData = new List<searchResults>();

// add some example data to save
resultData.Add(new searchResults() { description = "A description", title = "A title" });
resultData.Add(new searchResults() { description = "Another description", title = "Another title" });

// save list of search results to app state
PhoneApplicationService.Current.State["SearchResultList"] = resultData;

// --------------------->
// your app could now be tombstoned
// <---------------------

// load from app state
List<searchResults> loadedResultData = (List<searchResults>)PhoneApplicationService.Current.State["SearchResultList"];

// check if loading from app state succeeded
foreach (searchResults result in loadedResultData)
{
    System.Diagnostics.Debug.WriteLine(result.title);
}
(This might stop working when your data structure gets more complex or contains certain types.)
Sounds like you just want to employ standard serialisation for your list object; see the MSDN docs:
http://msdn.microsoft.com/en-us/library/ms973893.aspx
Or also XML serialisation if you want something that can be edited outside of the application (you can also use the Isolated Storage explorer to grab the file and edit it later):
http://msdn.microsoft.com/en-us/library/182eeyhh(v=vs.71).aspx
Alternatively, I would also suggest trying out the Tombstone Helper project by Matt Lacey, which can simplify this for you greatly:
http://tombstonehelper.codeplex.com/
The answer by Heinrich already summarizes the main idea here: you can use PhoneApplicationService.State with lists just as with any other objects. Check out the MSDN docs on preserving application state: How to: Preserve and Restore Application State for Windows Phone. There's one important point to notice there:
Any data that you store in the State dictionary must be serializable,
either directly or by using data contracts.
Directly here means that the classes are marked as [Serializable]. Regarding your List<searchResults>, it is serializable if searchResults is serializable. For that, either searchResults and all types referenced by it must be marked with the [Serializable] attribute, OR it must be a suitable data contract; see Using Data Contracts and Serializable Types. In short, make sure the class is declared as public and that it has a public, parameterless constructor.

Resources