Should I pay for every read from NEAR protocol?
How do I view the value stored in NEAR protocol smart contract? (e.g. staking pool fees)
What is the difference between view and change methods?
Should I pay for every read from NEAR protocol?
TL;DR: No, you should not.
In NEAR protocol there are two ways to interact with smart contracts:
Submit a transaction with a FunctionCall action, which gets the specified method executed on the chunk-producing nodes, with the result provable through the blockchain (in near-api-js terms, these are "change methods")
Call the query(call_function) JSON RPC method, which gets the specified method executed on the RPC node itself in a read-only environment; the call is never recorded/proved through the blockchain (in near-api-js terms, these are "view methods")
You can change the state or perform chained operations (e.g. cross-contract calls, token transfers, or access-key addition/deletion) only through the first approach, since the blockchain expects the user to cover the execution costs: the user must sign their transaction, and they will get charged for the execution.
Sometimes you don't need to change the state; you only want to read a value stored on the chain, and paying for that is suboptimal (though if you need to prove that the operation was performed, it might still be desirable). In this case, you would prefer the second approach. Calling a method through JSON RPC is free of charge and provides a limited context during contract execution, but that is enough in many scenarios (e.g. when you want to check the staking pool fee, or who the owner of the contract is, etc).
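For illustration, here is a minimal TypeScript sketch of such a free view call made directly against the JSON RPC endpoint. The mainnet RPC URL is real; the staking-pool account name is a placeholder, and get_reward_fee_fraction is assumed to follow the core staking-pool contract interface:

    // A free view call via the query(call_function) JSON RPC method:
    // no transaction, no signature, no fees.
    async function viewMethod(contractId: string, methodName: string, args: object) {
      const response = await fetch("https://rpc.mainnet.near.org", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          jsonrpc: "2.0",
          id: "dontcare",
          method: "query",
          params: {
            request_type: "call_function",
            finality: "final",
            account_id: contractId,
            method_name: methodName,
            args_base64: Buffer.from(JSON.stringify(args)).toString("base64"),
          },
        }),
      });
      const { result } = await response.json();
      // result.result is the return value as an array of bytes (usually JSON)
      return JSON.parse(Buffer.from(result.result).toString());
    }

    // e.g. reading a staking pool fee for free (account name is hypothetical):
    // const fee = await viewMethod("mystaking.poolv1.near", "get_reward_fee_fraction", {});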
I am writing a set of interacting smart contracts for the NEAR blockchain. Let's imagine the following scenario:
User sends a token to an exchange smart contract
Token smart contract calls exchange smart contract
Exchange smart contract calls fee smart contract
Exchange smart contract calls another token contract to send back the other token in the trade
Unlike single-shard Ethereum, NEAR does cross-contract calls with promises. Whereas a single tripped require() automatically rolls back the whole Ethereum transaction, in NEAR's sharded design the smart contracts themselves are responsible for rolling back state changes if a promise they triggered does not complete successfully.
My question is how to safely handle failures in the chain of promises between NEAR smart contracts:
What are the failure modes (smart contract function panics, target account does not contain code, out of gas)
How to catch the different errors above and deal with different error modes
Is there already a pattern that allows writing promise chains safely in an easy manner, similar to try {} catch {} in JavaScript await/async model
How can I track, across different promises, what was the original initiating user transaction that caused the chain of promises to trigger
How smart contracts are forwarding gas and ensuring there is enough gas for the whole chain of promises to complete
Generally, you can only tell whether a promise has succeeded, without knowing what went wrong in the case of an error. An example of such a check can be found here: https://github.com/near/core-contracts/blob/4f245101d7d029ffb3450c560770db244fc7b3ce/lockup/src/utils.rs#L7. What is the use case you have in mind for reacting differently to different errors?
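To make that concrete, here is a hypothetical TypeScript-flavoured sketch of the callback pattern. Real NEAR contracts would implement this in Rust via env::promise_results_count / env::promise_result, as in the linked utils.rs; the types and helper names below are illustrative only, not a real SDK API:

    // Hypothetical model of what a callback can observe about a promise it spawned.
    type PromiseResult =
      | { kind: "Successful"; value: Uint8Array }
      | { kind: "Failed" }       // the contract cannot see *why* it failed
      | { kind: "NotReady" };

    // Mirrors the linked is_promise_success(): exactly one result, and it succeeded.
    function isPromiseSuccess(results: PromiseResult[]): boolean {
      return results.length === 1 && results[0].kind === "Successful";
    }

    function onTransferComplete(results: PromiseResult[], state: { refund: () => void }) {
      if (!isPromiseSuccess(results)) {
        // All failure modes (panic, no code at target, out of gas) look identical
        // here, so the callback can only roll back / refund, not branch on cause.
        state.refund();
      }
    }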
I'm trying to build a mental model of the role of off-chain workers in Substrate. The bigger picture seems to be that they move logic that was otherwise done by oracles inside the Substrate node, triggering on predefined transactions. There are two use cases I was thinking of specifically:
1: Validating file formats: an incoming transaction proposes a file accessible via URL or IPFS hash, and its format needs to be validated. An off-chain worker fetches the file, asserts the format (size, encoding, content, whatever) and, if correct, submits another transaction saying it's valid.
2: Key generation: let's assume there is a separate service distributed with the Substrate node, which manages keys for each instance. Node A runs a key-sharing algorithm (like Shamir's secret sharing) via this external service between participants A, B and C, then makes a transaction creating a group (A, B, C) on-chain. This transaction triggers all nodes that are in this group to run off-chain workers, which call into their local key stores to verify they hold the key. They can all mark it on-chain afterwards.
As far as I understand, off-chain workers are triggered in every node after block execution. In the former use case, this would result in lots of transactions validating just one file, and nothing guarantees the correctness of these. What is a good way of reaching consensus on the validity of the file? Is it also possible without economic incentives like staking? That would be problematic with tokens having no value in the network, e.g. in enterprise settings. Is this even the right use case for off-chain workers? The second example should not suffer from this issue; we just need all parties to verify that they hold the key.
Where does the thought process above go wrong, and why?
As far as I understand, off-chain workers are triggered in every node after block execution.
Yes and no. There is a CLI flag for it, and at the time of this writing it says:
--offchain-worker <ENABLED>
    Should execute offchain workers on every block.
    By default it's only enabled for nodes that are authoring new blocks.
    [default: WhenValidating] [possible values: Always, Never, WhenValidating]
In the former use case, this would result in lots of transactions validating just one file, and nothing guarantees the correctness of these.
I think it is the responsibility of the receiving function (aka the Call) to handle and incentivise this. For example, there could be a reward opportunity for validating a file. But if it has already been submitted by another transaction, you will get slashed (or, even if not, you pay some transaction fee for nothing). In such cases, you can assume that not all participants will submit a transaction. They will only do it when there is a chance of improvement, which should be captured by your reward/slash scheme.
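As a toy illustration of that decision (Substrate off-chain workers are written in Rust; this TypeScript sketch with hypothetical inputs only shows the incentive shape):

    // Submit only when the expected gain beats the cost of a redundant transaction.
    // All parameters are hypothetical quantities your runtime would define.
    function shouldSubmit(
      alreadyValidatedOnChain: boolean,
      reward: number,    // payout for a useful validation
      fee: number,       // transaction fee paid either way
      slashRisk: number, // expected loss from submitting a duplicate
    ): boolean {
      return !alreadyValidatedOnChain && reward - fee - slashRisk > 0;
    }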
Is this even the right use case for off-chain workers?
I am no expert here, but I think the validation example, at least, is a good fit. It is just a matter of finding a good incentive plus anti-spam slashing.
I am less familiar with the second example, so no comments on that.
Let's say I have a contract function that expects a certain amount of NEAR to be sent with a certain transaction. The function is called create_order, and it takes a couple of arguments.
I have my contract set up in the frontend under the name myContract.
I want to call myContract.create_order({...}) but the transaction fails because this method call doesn't have the right amount of NEAR tokens attached.
How do I assign a certain value of deposit to a transaction?
It's possible to use account.functionCall directly (without the sugar for RPCs) to either attach an amount or specify a gas allowance for the call.
See Account#functionCall in nearlib.
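As a sketch against the current near-api-js API (the successor to nearlib; option shapes and amount types have varied across versions, and all account/contract names here are placeholders):

    import { connect, keyStores, utils } from "near-api-js";

    // Assumes a full-access key is available in the local key store; see the
    // next answer for why a regular app access key cannot attach a deposit.
    async function createOrderWithDeposit() {
      const near = await connect({
        networkId: "testnet",
        nodeUrl: "https://rpc.testnet.near.org",
        keyStore: new keyStores.UnencryptedFileSystemKeyStore(
          `${process.env.HOME}/.near-credentials`
        ),
      });
      const account = await near.account("buyer.testnet");
      return account.functionCall({
        contractId: "market.testnet",
        methodName: "create_order",
        args: { item_id: "abc" },
        gas: BigInt("30000000000000"), // 30 TGas
        // the deposit is denominated in yoctoNEAR; parseNearAmount converts from NEAR
        attachedDeposit: BigInt(utils.format.parseNearAmount("0.1")!),
      });
    }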
Nearlib supports it using account.functionCall(..., amount), but it might not work because of the design of access keys with function calls. The default access keys authorized towards applications only allow function calls without attached token deposits (only prepaid gas). It's done this way to prevent apps from automatically spending your balance without your explicit approval. Details on access keys are here: https://github.com/nearprotocol/NEPs/blob/master/text/0005-access-keys.md
Attaching a deposit to the transaction requires explicit approval from the wallet. The app should create a request for the wallet and redirect to the wallet for approval (or go through a popup). Once the user approves the transaction, it's signed with the full access key from the wallet directly and broadcast. But I'm afraid we don't have this API on the wallet yet. Issue for this: https://github.com/nearprotocol/near-wallet/issues/56
AFAIK it is not supported at the moment. It will be available after this NEP https://github.com/nearprotocol/NEPs/pull/13 lands.
I have an endpoint in my API that supports writes. The resource in question is collaborative, so it is reasonable to expect parallel write requests arriving concurrently.
If the number of writes is small, then this is relatively straightforward to do with a simple lambda: read the current state, compute the new state, compare and swap, spin until the swap succeeds or until we give up. In either case, we compute the appropriate HTTP response and return it to the caller.
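Roughly this shape, with a hypothetical versioned store standing in for whatever the database offers (e.g. a conditional write in DynamoDB would play the compare-and-swap role):

    interface Versioned<S> { state: S; version: number; }
    interface Store<S> {
      read(key: string): Promise<Versioned<S>>;
      // Writes only if the stored version still matches; returns false otherwise.
      compareAndSwap(key: string, next: S, expectedVersion: number): Promise<boolean>;
    }

    async function applyWrite<S>(
      store: Store<S>, key: string, compute: (s: S) => S, maxAttempts = 5
    ): Promise<S | undefined> {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const { state, version } = await store.read(key);
        const next = compute(state);
        if (await store.compareAndSwap(key, next, version)) return next; // success
      }
      return undefined; // gave up; the caller maps this to an HTTP 409/503
    }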
If the API is successful, then eventually the waste of conflicting writes becomes expensive enough to address.
It looks as though the natural response is to copy the requests into a queue, with a function that consumes batches; within each batch, we process the requests in sequence, storing the new write, and computing the appropriate response to the request.
What are the options for getting those computed responses copied into the HTTP responses, and what are the trade-offs to be considered?
My sense is that in handling the HTTP request, after (synchronously) enqueuing the message, I need to block/poll on something that will eventually be populated with the response to the request.
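i.e. something along these lines, with hypothetical queue and response-store interfaces:

    interface Queue { enqueue(correlationId: string, payload: string): Promise<void>; }
    interface ResponseStore { get(correlationId: string): Promise<string | undefined>; }

    // Enqueue the request under a correlation id, then poll until a consumer
    // has written the computed response (or a deadline passes).
    async function handleWrite(
      queue: Queue, responses: ResponseStore, payload: string, timeoutMs = 5000
    ): Promise<string> {
      const correlationId = crypto.randomUUID();
      await queue.enqueue(correlationId, payload);
      const deadline = Date.now() + timeoutMs;
      while (Date.now() < deadline) {
        const response = await responses.get(correlationId);
        if (response !== undefined) return response;  // consumer has processed us
        await new Promise((r) => setTimeout(r, 100)); // back off briefly
      }
      throw new Error("timed out waiting for queued write"); // maps to HTTP 504
    }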
I'm not sure if this will count as an answer, but I do not agree that the natural response is to copy/queue/block; that feels like you're just trading optimistic concurrency control for a kind of pessimistic one (and you'd probably have an easier time just implementing a lock using e.g. Redis - not to mention there are other issues with Lambda itself that would make the approach you describe even more difficult).
Users probably do not want an API like this as it would have high latency.
In my opinion, an API that is well designed for collaborative modification of some shared state has higher-order constructs that make the API successful. Thinking of a conversation as an example: you would decompose the chat into individual messages, where each message is in reply to some other message; concurrent modification of the conversation is append-only for the most part (you might allow a user to edit an individual message, but that's not a point of resource contention), and you might do things like count the number of messages within the conversation asynchronously, such that it is eventually consistent.
You can look at the domain of your API and see if there's a way to expose modification to it in such a way that reduces contention by making modifications target sub-entities (even if the API represents this as a single resource, the storage engine does not have to).
Another option is looking into a model like event sourcing, where the changes themselves are literally appended and you derive the state from some snapshot plus recent changes.
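A tiny sketch of that last idea, with hypothetical types: the state is only ever derived, and appends commute, so concurrent writers stop contending on a single record:

    interface Message { author: string; text: string; }
    type ConversationEvent = { kind: "MessageAppended"; message: Message };
    interface Snapshot { messages: Message[]; }

    // Current state = snapshot + recent changes, computed on demand.
    function currentState(snapshot: Snapshot, recentEvents: ConversationEvent[]): Message[] {
      return recentEvents.reduce(
        (messages, e) => [...messages, e.message],
        snapshot.messages
      );
    }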
Let's assume we host two microservices: RealEstate and Candidate.
The RealEstate service is responsible for managing rental properties, landlords and so forth.
The Candidate service provides commands to apply for a rental property.
There would be a CandidateForRentalProperty command which requires the RentalPropertyId and all necessary Candidate information.
Now the crucial point: different types of RentalProperty require a different set of Candidate information.
Therefore the commands and aggregates were split up:
Commands: CandidateForParkingLot, CandidateForFlat, and so forth.
Aggregates: ParkingLotCandidature, FlatCandidature, and so forth.
The UI asks the read model to decide which command has to be called.
It's reasonable for me to validate the Candidate information, and all the business logic involved with that, in the Candidate domain layer, but to leave out validating whether the correct command was called based on the given RentalPropertyId. Reason: multiple aggregates are involved in this validation.
The microservice should be autonomous, and its read model consumes events from the RealEstate domain, hence it's not guaranteed to be up to date. We don't want to reject candidates based on that, but rather use eventual consistency.
Yes, this could lead to inept Candidate information used for a certain kind of RentalProperty. Someone could just call the CandidateForFlat command with a parking lot rental property id.
But how do we handle the cases in which this happens?
The RealEstate domain does not know anything about Candidates.
Would there be an event handler which checks if there is something wrong and executes an appropriate command to compensate?
On the other hand, this "mapping" is domain logic and I'd like to accommodate it in the domain layer. But I don't know who's accountable for this kind of compensating measure. Would the Candidate aggregate be informed, with something like IneptApplicationTypeUsed?
As an aside - commands are usually imperative verbs. ApplyForFlat might be a better spelling than CandidateForFlat.
The pattern you are probably looking for here is that of an exception report; when the candidate service matches a CandidateForFlat message with a ParkingLot identifier, then the candidate service emits as an output a message saying "hey, we've got a problem here".
If a follow-up message fixes the problem -- the candidate service gets an updated message that fixes the identifier in the CandidateForFlat message, or the candidate service gets an update from real estate announcing that the identifier actually points to a Flat -- then the candidate service can emit another message: "never mind, the problem has been fixed".
I tend to find in this pattern that the input commands to the service are really all just variations of handle(Event); the user submitted, the http request arrived; the only question is whether or not the microservice chooses to track that event. In other words, the "command" stream is just another logical event source that the microservice is subscribed to.
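A hypothetical sketch of that exception-report flow (all message, type, and function names are illustrative):

    type PropertyKind = "Flat" | "ParkingLot";
    type OutputMessage =
      | { kind: "ProblemDetected"; propertyId: string; detail: string }
      | { kind: "ProblemResolved"; propertyId: string };

    // Handle a CandidateForFlat message against locally known property kinds.
    function onCandidateForFlat(
      propertyId: string,
      localPropertyKinds: Map<string, PropertyKind>
    ): OutputMessage | undefined {
      if (localPropertyKinds.get(propertyId) === "ParkingLot") {
        return {
          kind: "ProblemDetected",
          propertyId,
          detail: "flat candidature targets a parking lot",
        };
      }
      return undefined; // nothing to report
    }

    // A later update from real estate can fix the problem retroactively.
    function onPropertyReclassified(propertyId: string, kind: PropertyKind): OutputMessage | undefined {
      return kind === "Flat" ? { kind: "ProblemResolved", propertyId } : undefined;
    }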
As you said, validation of commands should be performed at the point of command generation - on the client side - where read models are available.
Command processing is performed by the aggregate, so it cannot and should not check the validity or existence of other aggregates. It should trust the command issuer.
If commands come from an untrusted environment like a public API, then your API gateway becomes the client, and it should have the necessary read models to validate references.
If you want to accept a command fast and check it later, then log events like ClientAppliedForParkingLot, and have a Saga / Process Manager handle the further workflow by keeping its internal state and issuing commands like AcceptApplication or RejectApplication.
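A sketch of such a process manager, reusing the message names above (everything else is illustrative):

    type SagaEvent =
      | { kind: "ClientAppliedForParkingLot"; applicationId: string; propertyId: string }
      | { kind: "PropertyKindResolved"; propertyId: string; isParkingLot: boolean };

    type Command =
      | { kind: "AcceptApplication"; applicationId: string }
      | { kind: "RejectApplication"; applicationId: string };

    class ApplicationSaga {
      // Internal state: applications awaiting confirmation of the property kind.
      private pending = new Map<string, string>(); // applicationId -> propertyId

      handle(event: SagaEvent): Command[] {
        if (event.kind === "ClientAppliedForParkingLot") {
          this.pending.set(event.applicationId, event.propertyId); // accept fast, decide later
          return [];
        }
        const commands: Command[] = [];
        for (const [applicationId, propertyId] of this.pending) {
          if (propertyId !== event.propertyId) continue;
          commands.push(
            event.isParkingLot
              ? { kind: "AcceptApplication", applicationId }
              : { kind: "RejectApplication", applicationId }
          );
          this.pending.delete(applicationId);
        }
        return commands;
      }
    }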
I understand the need for validation but I don't think the example you gave calls for cross-Aggregate (or cross-microservice for that matter) compensating measures as stated in the Q title.
Verifications like checking that the ID the client gave along with the flat rental command matches a flat and not a parking lot, that the client has permission to do that, and so forth, are legitimate. But letting the client create such commands in the wild and waiting for an external actor to come around and enforce these rules seems subpar because the rules could be made intrinsic properties of the object originating the process.
So what I'd recommend is to change the entry point into the operation - to create the Candidature Aggregate Root as part of another Aggregate Root's behavior. If that other Aggregate (RentalProperty in our case) lives in another Bounded Context/microservice, you can maintain a list of RentalProperties in the Candidate Bounded Context with just the amount of info needed, and initiate the Candidature from there.
So you would have
FlatCandidatureHandler ==loads==> RentalProperty ==creates==> FlatCandidature
or
FlatCandidatureHandler ==checks existence==> local RentalProperty data ==creates==> FlatCandidature
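A sketch of that second flow, with all type and method names illustrative:

    type PropertyKind = "Flat" | "ParkingLot";

    class FlatCandidature {
      constructor(readonly rentalPropertyId: string, readonly candidateInfo: object) {}
    }

    class FlatCandidatureHandler {
      // Locally replicated RentalProperty data, kept current from RealEstate events.
      constructor(private localProperties: Map<string, PropertyKind>) {}

      handle(rentalPropertyId: string, candidateInfo: object): FlatCandidature {
        // ==checks existence==> local RentalProperty data
        if (this.localProperties.get(rentalPropertyId) !== "Flat") {
          throw new Error(`${rentalPropertyId} is not a known flat`);
        }
        // ==creates==> FlatCandidature
        return new FlatCandidature(rentalPropertyId, candidateInfo);
      }
    }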
As a side note, what could actually necessitate compensating actions are factors extrinsic to the root object of the process: for instance, if the property becomes unavailable in the meantime. Then whatever Aggregate holds that information should emit an event when that happens, and the compensation should be initiated.