do all NEAR blockchain transactions require a receiver account? - nearprotocol

Reading through some documentation here, I saw that part of the definition of a transaction is that all actions are performed "on top of the receiver's account", and also that the receiver account is "the account towards which the transaction will be routed."
Also, in the nearlib SDK, the transactions interface includes a method called signTransaction that requires receiverId as a parameter:
async function signTransaction(
  receiverId: string,
  nonce: number,
  actions: Action[],
  blockHash: Uint8Array,
  signer: Signer,
  accountId?: string,
  networkId?: string
): Promise<[Uint8Array, SignedTransaction]>
but looking over the list of transactions supported by nearcore, I wonder why some of these transactions require a receiver.
why would any transactions require a "receiver" except for maybe Transfer, AddKey, DeleteKey, and DeleteAccount?
Am I thinking of the idea of "receiver" too literally, as in "they receive the outcome or impact of the transaction", when that's not the right way to think about it?
or is receiverId optional in some cases but the interface just requires a value to avoid validation cruft?
here's what I believe to be the full list of supported transactions:
pub enum Action {
    CreateAccount(CreateAccountAction),
    DeployContract(DeployContractAction),
    FunctionCall(FunctionCallAction),
    Transfer(TransferAction),
    Stake(StakeAction),
    AddKey(AddKeyAction),
    DeleteKey(DeleteKeyAction),
    DeleteAccount(DeleteAccountAction),
}

Conceptually every transaction always has a sender and a receiver, even though sometimes they might be the same. Because we always convert a transaction to a receipt that is sent to the receiver, it doesn't matter conceptually whether they are the same, even though in implementation there might be a difference.

Unfortunately, we don't have a good name to denote what we call "receiver". In some places in our code we also call it "actor", because it actually refers to the account on which the action is performed, as opposed to the account which issued the action to perform (a.k.a. the "sender").
DeployContract, Stake, AddKey, and DeleteKey require receiver == sender; in other words, only an account itself can add/delete its keys, stake, or deploy a contract on itself. No other account can do it on its behalf.
DeleteAccount is the same, it requires receiver == sender, with one exception: if an account is about to run out of balance due to storage rent and is below a certain system-defined threshold, any other account can delete it and claim the remaining balance.
CreateAccount, FunctionCall, and Transfer do not require receiver == sender. In the case of CreateAccount, the receiver must not exist at the moment of execution and will actually be created.
See the code that implements this logic: https://github.com/nearprotocol/nearcore/blob/ed43018851f8ec44f0a26b49fc9a863b71a1428f/runtime/runtime/src/actions.rs#L360
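To make the rules above concrete, here is a hedged sketch (using near-api-js, the successor of nearlib; account ids and values are placeholders) of two transactions built for the signTransaction signature quoted in the question:

import { transactions, utils } from "near-api-js";
import BN from "bn.js";

// Transfer may target any receiver: e.g. "alice" sends 1 NEAR to "bob".
const transferAction = transactions.transfer(new BN("1000000000000000000000000")); // 1 NEAR in yocto
// -> signTransaction("bob.near", nonce, [transferAction], blockHash, signer, "alice.near")

// AddKey is only valid when the transaction is addressed back to the signer's
// own account (receiver == sender):
const keyPair = utils.KeyPair.fromRandom("ed25519");
const addKeyAction = transactions.addKey(keyPair.getPublicKey(), transactions.fullAccessKey());
// -> signTransaction("alice.near", nonce, [addKeyAction], blockHash, signer, "alice.near")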

Related

Calling get_esdt_token_data for account that does not have the esdt

Considering that
get_esdt_token_data(address: &ManagedAddress, token_id: &TokenIdentifier, nonce: u64) -> EsdtTokenData<Self::Api>
always returns an EsdtTokenData rather than an Option, what will this object look like if the address does not own the specified token?
The execution will fail as the VM will not return anything to the smart contract if it doesn't find the token.
The typical usage for this function is to get the data for the payment tokens the smart contract receives from the caller. If you're trying to use it freely, you might get into this situation, so this type of "free" usage is not really advised.

How many Transaction types in NEAR?

Can anyone please help me with the transaction/operation types in NEAR where value/NEAR is involved? I have seen multiple operation types like transfer, draw, etc.
There are only 8 native action kinds in NEAR Protocol (they match the Action enum quoted in the first question above):
Transfer (the deposit gets transferred from the signer to the receiver account)
CreateAccount
DeleteAccount (the remaining funds on the account are transferred to the beneficiary account id)
FunctionCall (tokens can be deposited [attached] to the function call, e.g. a draw method may expect some tokens attached; see the sketch below)
AddKey
DeleteKey
DeployContract
Stake
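To illustrate the deposit-attaching part, here is a hedged near-api-js sketch (the game.testnet contract and its draw method are hypothetical):

import { connect, keyStores, utils } from "near-api-js";
import BN from "bn.js";

async function callDrawWithDeposit() {
  const near = await connect({
    networkId: "testnet",
    nodeUrl: "https://rpc.testnet.near.org",
    keyStore: new keyStores.InMemoryKeyStore(), // assumes a signing key was added elsewhere
  });
  const account = await near.account("alice.testnet");
  // The attached deposit travels with the FunctionCall action to the receiver.
  return account.functionCall({
    contractId: "game.testnet",
    methodName: "draw",
    args: {},
    gas: new BN("30000000000000"), // 30 TGas
    attachedDeposit: new BN(utils.format.parseNearAmount("0.1")!), // 0.1 NEAR
  });
}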

How to store the updates of state in an offchain database?

I want to store all the blockchain data in an offchain database.
The RPC has a method called EXPERIMENTAL_changes. I was told that I can do this by HTTP polling of this method, but I am unable to figure out how to use it.
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=EXPERIMENTAL_changes \
  params:='{"changes_type": "data_changes", "account_ids": ["guest-book.testnet"], "key_prefix_base64": "", "block_id": 19450732}'
For example, the result here includes:
"change": {
  "account_id": "guest-book.testnet",
  "key_base64": "bTo6Mzk=",
  "value_base64": "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ=="
}
What is key_base64?
Decoding it to string gives m::39
What is m::39?
For example, I have the following state data in a Rust structure:
pub struct Demo {
    user_profile_map: TreeMap<u128, User>,
    user_products_map: TreeMap<u128, UnorderedSet<u128>>,   // (user_id, set<product_id>)
    product_reviews_map: TreeMap<u128, UnorderedSet<u128>>, // (product_id, set<review_id>)
    product_check_bounty: LookupMap<u128, Vector<u64>>,
}
How do I know when anything changes in these variables?
Will I have to check every block id from the point the contract was deployed to find where the change happened?
I want to store all the blockchain data in an offchain database.
If so, I recommend you take a look at the Indexer Framework, which allows you to get a stream of blocks and handle them. We use it to build Indexer for Wallet (keeps track of every added and deleted access key, and stores those into Postgres) and Indexer for Explorer (keeps track of every block, chunk, transaction, receipt, execution outcome, state changes, accounts, and access keys, and stores all of that in Postgres)
What is m::39?
Contracts in NEAR Protocol have access to the key-value storage (state), so at the lowest-level, you operate with key-value operations (NEAR SDK for AssemblyScript defines Storage class with get and set operations, and NEAR SDK for Rust has storage_read and storage_write calls to preserve data).
Guest Book example uses a high-level abstraction called PersistentVector, which automatically reads and writes its records from/to NEAR key-value storage (state). As you can see:
export const messages = new PersistentVector<PostedMessage>("m");
Guest Book defines the messages to be stored in the storage with the m prefix, hence you see m::39, which basically means it is messages[39] stored in the key-value storage.
What is key_base64?
As "key-value storage" implies, the data is stored and accessed by keys, and a key can be binary, so base64 encoding is used to give JSON-RPC API users a way to query those binary keys as well (there is no way to pass a raw binary blob in JSON).
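For example, decoding the change record shown above (Node.js):

const key = Buffer.from("bTo6Mzk=", "base64").toString("utf8");
console.log(key);   // "m::39"

const value = Buffer.from(
  "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ==",
  "base64"
).toString("utf8");
console.log(value); // {"premium":false,"sender":"fhr.testnet","text":"Hi"}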
How do I know when anything changes in these variables? Will I have to check every block id from the point the contract was deployed?
Correct, you need to follow every block and check the changes. That is why we have built the Indexer Framework: to enable the community to build services on top of it (we chose to build the applications Indexer for Wallet and Indexer for Explorer, but others may decide to build a GraphQL service like TheGraph).
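For completeness, a minimal polling sketch (TypeScript, assuming Node 18+ with a global fetch; the parameters mirror the httpie call above). It asks EXPERIMENTAL_changes for the data changes on one contract at a given block and returns the rows you would persist offchain:

async function fetchChanges(blockId: number) {
  const res = await fetch("https://rpc.testnet.near.org", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "EXPERIMENTAL_changes",
      params: {
        changes_type: "data_changes",
        account_ids: ["guest-book.testnet"],
        key_prefix_base64: "",
        block_id: blockId,
      },
    }),
  });
  const { result, error } = await res.json();
  if (error) return []; // e.g. the block was not found or garbage-collected
  return result.changes; // rows to store in the offchain database
}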

Relation between command handlers, aggregates, the repository and the event store in CQRS

I'd like to understand some details of the relations between command handlers, aggregates, the repository and the event store in CQRS-based systems.
What I've understood so far:
Command handlers receive commands from the bus. They are responsible for loading the appropriate aggregate from the repository and call the domain logic on the aggregate. Once finished, they remove the command from the bus.
An aggregate provides behavior and internal state. State is never public. The only way to change state is by using the behavior. The methods that model this behavior create events from the command's properties and apply these events to the aggregate, which in turn calls an event handler that sets the internal state accordingly.
The repository simply allows loading aggregates by a given ID and adding new aggregates. Basically, the repository connects the domain to the event store.
The event store, last but not least, is responsible for storing events to a database (or whatever storage is used), and reloading these events as a so-called event stream.
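As a rough TypeScript sketch (the names are mine, not from any particular framework), those four pieces might line up like this:

type DomainEvent = { type: string };

interface EventStore {
  loadStream(id: string): { version: number; events: DomainEvent[] };
  appendToStream(id: string, expectedVersion: number, events: DomainEvent[]): void;
}

interface Repository<T> {
  loadById(id: string): T; // replays the stored events into a fresh aggregate
  add(aggregate: T): void; // registers a new aggregate
}

interface CommandHandler<C> {
  handle(command: C): void; // load the aggregate, invoke behavior, persist new events
}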
So far, so good.
Now there are some issues that I did not yet get:
If a command handler is to call behavior on an already existing aggregate, everything is quite easy. The command handler gets a reference to the repository, calls its loadById method, and the aggregate is returned. But what does the command handler do when there is no aggregate yet, but one should be created? From my understanding the aggregate should later on be rebuilt using the events. This means that creation of the aggregate is done in reply to a fooCreated event. But to be able to store any event (including the fooCreated one), I need an aggregate. So this looks to me like a chicken-and-egg problem: I cannot create the aggregate without the event, but the only component that should create events is the aggregate. So basically it comes down to: how do I create new aggregates, and who does what?
When an aggregate triggers an event, an internal event handler responds to it (typically by being called via an apply method) and changes the aggregate's state. How is this event handed over to the repository? Who originates the "please send the new events to the repository / event store" action? The aggregate itself? The repository, by watching the aggregate? Someone else who is subscribed to the internal events?
Last but not least I have a problem understanding the concept of an event stream correctly: In my imagination, it's simply something like an ordered list of events. What's of importance is that it's "ordered". Is this right?
The following is based on my own experience and my experiments with various frameworks like Lokad.CQRS, NCQRS, etc. I'm sure there are multiple ways to handle this. I'll post what makes most sense to me.
1. Aggregate Creation:
Every time a command handler needs an aggregate, it uses a repository. The repository retrieves the respective list of events from the event store and calls an overloaded constructor, injecting the events
var stream = eventStore.LoadStream(id)
var user = new User(stream)
If the aggregate didn't exist before, the stream will be empty and the newly created object will be in its initial state. You might want to make sure that in this state only a few commands are allowed to bring the aggregate to life, e.g. User.Create().
2. Storage of new Events
Command handling happens inside a Unit of Work. During command execution every resulting event will be added to a list inside the aggregate (User.Changes). Once execution is finished, the changes will be appended to the event store. In the example below this happens in the following line:
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
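As a sketch of what AppendToStream checks behind the scenes (TypeScript, with an in-memory Map standing in for the real store): if another command appended events in the meantime, the expected version no longer matches and the append is rejected, which is exactly the situation the retry loop around ConcurrentWriteException in a later answer handles.

type DomainEvent = { type: string };
class ConcurrencyError extends Error {}

function appendToStream(
  streams: Map<string, DomainEvent[]>, // in-memory stand-in for the event store
  id: string,
  expectedVersion: number,
  changes: DomainEvent[]
): void {
  const existing = streams.get(id) ?? [];
  if (existing.length !== expectedVersion) {
    // someone else appended since we loaded the stream
    throw new ConcurrencyError(`stream ${id} is past version ${expectedVersion}`);
  }
  streams.set(id, [...existing, ...changes]);
}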
3. Order of Events
Just imagine what would happen if two subsequent CustomerMoved events were replayed in the wrong order: the rehydrated aggregate would end up with the customer's old address as the current one.
An Example
I'll try to illustrate this with a piece of pseudo-code (I deliberately left repository concerns inside the command handler to show what happens behind the scenes):
Application Service:
UserCommandHandler
    Handle(CreateUser cmd)
        stream = store.LoadStream(cmd.UserId)
        user = new User(stream.Events)
        user.Create(cmd.UserName, ...)
        store.AppendToStream(cmd.UserId, stream.Version, user.Changes)

    Handle(BlockUser cmd)
        stream = store.LoadStream(cmd.UserId)
        user = new User(stream.Events)
        user.Block(cmd.Reason)
        store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Aggregate:
User
    created = false
    blocked = false
    Changes = new List<Event>

    ctor(eventStream)
        isNewEvent = false
        foreach (event in eventStream)
            this.Apply(event, isNewEvent)

    Create(userName, ...)
        if (this.created) throw "User already exists"
        isNewEvent = true
        this.Apply(new UserCreated(...), isNewEvent)

    Block(reason)
        if (!this.created) throw "No such user"
        if (this.blocked) throw "User is already blocked"
        isNewEvent = true
        this.Apply(new UserBlocked(...), isNewEvent)

    Apply(userCreatedEvent, isNewEvent)
        this.created = true
        if (isNewEvent) this.Changes.Add(userCreatedEvent)

    Apply(userBlockedEvent, isNewEvent)
        this.blocked = true
        if (isNewEvent) this.Changes.Add(userBlockedEvent)
Update:
As a side note: Yves' answer reminded me of an interesting article by Udi Dahan from a couple of years ago:
Don’t Create Aggregate Roots
A small variation on Dennis' excellent answer:
When dealing with "creational" use cases (i.e. that should spin off new aggregates), try to find another aggregate or factory you can move that responsibility to. This does not conflict with having a ctor that takes events to hydrate (or any other mechanism to rehydrate for that matter). Sometimes the factory is just a static method (good for "context"/"intent" capturing), sometimes it's an instance method of another aggregate (good place for "data" inheritance), sometimes it's an explicit factory object (good place for "complex" creation logic).
I like to provide an explicit GetChanges() method on my aggregate that returns the internal list as an array. If my aggregate is to stay in memory beyond one execution, I also add an AcceptChanges() method to indicate the internal list should be cleared (typically called after things were flushed to the event store). You can use either a pull (GetChanges/Changes) or push (think .net event or IObservable) based model here. Much depends on the transactional semantics, tech, needs, etc ...
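To make the pull-based variant concrete, here is a minimal TypeScript sketch; the method names mirror GetChanges/AcceptChanges from above, everything else is illustrative:

type DomainEvent = { type: string };

abstract class AggregateRoot {
  private changes: DomainEvent[] = [];

  // Behavior methods call record(): it both mutates state (via apply) and
  // tracks the event as a pending change.
  protected record(event: DomainEvent): void {
    this.apply(event);
    this.changes.push(event);
  }

  protected abstract apply(event: DomainEvent): void;

  getChanges(): DomainEvent[] {
    return [...this.changes]; // defensive copy of the internal list
  }

  acceptChanges(): void {
    this.changes = []; // called after the changes were flushed to the event store
  }
}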
Your event stream is like a linked list: each revision (event/changeset) points to the previous one (a.k.a. the parent). Your event stream is a sequence of events/changes that happened to a specific aggregate. The order only needs to be guaranteed within the aggregate boundary.
I almost agree with yves-reynhout and dennis-traub but I want to show you how I do this. I want to strip my aggregates of the responsibility to apply the events on themselves or to re-hydrate themselves; otherwise there is a lot of code duplication: every aggregate constructor will look the same:
UserAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)

OrderAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)

ProfileAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)
Those responsibilities could be left to the command dispatcher. The command is handled directly by the aggregate.
Command dispatcher class
    dispatchCommand(command) method:
        newEvents = ConcurrentProofFunctionCaller.executeFunctionUntilSucceeds(
            () => this.tryToDispatchCommand(command))
        EventDispatcher.dispatchEvents(newEvents)

    tryToDispatchCommand(command) method:
        aggregateClass = CommandSubscriber.getAggregateClassForCommand(command)
        aggregate = AggregateRepository.loadAggregate(aggregateClass, command.getAggregateId())
        newEvents = CommandApplier.applyCommandOnAggregate(aggregate, command)
        AggregateRepository.saveAggregate(command.getAggregateId(), aggregate, newEvents)
        return newEvents
ConcurrentProofFunctionCaller class
    executeFunctionUntilSucceeds(pureFunction) method:
        do this n times
            try
                result = pureFunction()
                return result
            catch (ConcurrentWriteException)
                continue
        throw TooManyRetries
AggregateRepository class
    loadAggregate(aggregateClass, aggregateId) method:
        aggregate = new aggregateClass
        priorEvents = EventStore.loadEvents(aggregateId)
        aggregate.version = priorEvents.version
        this.applyEventsOnAggregate(aggregate, priorEvents)
        return aggregate

    saveAggregate(aggregateId, aggregate, newEvents) method:
        this.applyEventsOnAggregate(aggregate, newEvents)
        EventStore.saveEventsForAggregate(aggregateId, newEvents, aggregate.version)
SomeAggregate class
    handleCommand1(command1) method:
        return new SomeEvent or throw someException BUT don't change state!

    applySomeEvent(SomeEvent) method:
        changeStateSomehow(), don't throw any exception, and don't return anything!
Keep in mind that this is pseudo-code projected from a PHP application; the real code should have dependencies injected and other responsibilities refactored out into other classes. The idea is to keep aggregates as clean as possible and avoid code duplication.
Some important aspects about aggregates:
command handlers should not change state; they yield events or throw exceptions
event appliers should not throw any exceptions and should not return anything; they only change internal state
An open-source PHP implementation of this could be found here.

"Synchronized" transactions for bidding system

I tried to implement a bidding system with the following "naïve" implementation of a BidService, using Grails 2.1 (so Hibernate and Spring).
But it seems to fail to prevent race conditions, which results in "duplicate" bids from different concurrent users.
A couple of notes:
- BidService is transactional by default,
- the Item and Bid models use "version: false" (pessimistic locking)
class BidService {

    BidResult processBid(BidRequest bidRequest, Item item) throws BidException {
        // 1. Validation: throws BidException if bidRequest does not comply
        //    with the bidding rules (price too low, invalid user, ...)
        validateBid(bidRequest, item)

        // 2. Process bid (we have some complex rules to process the bids too,
        //    but in the end we only place the bid)
        Bid bid = placeBid(bidRequest, item)
        return bid
    }

    Bid placeBid(BidRequest bidRequest, Item item) {
        // 1. Place bid: create a bid with the bidRequest values
        Bid bid = new Bid(bidRequest)
        bid.save(flush: true, failOnError: true)

        // 2. Update item price
        item.price = bid.value
        item.save(flush: true, failOnError: true)

        return bid
    }
}
But as stated in http://grails.org/doc/latest/guide/services.html 9.2 Scoped Services:
By default, access to service methods is not synchronised, so nothing prevents concurrent execution of those methods. In fact, because the service is a singleton and may be used concurrently, you should be very careful about storing state in a service. Or take the easy (and better) road and never store state in a service.
I thought of using "synchronized" on the whole processBid() method, but that sounds rather crude and could cause liveness issues or deadlocks.
On the other hand, processing bids asynchronously prevents sending direct user feedback about winning/losing the auction.
Any advice or best practice to use in this case?
PS: I already asked on the grails ML but it's a rather wide Java concurrency question.
Your service is stateless, so there is no need to synchronize it; synchronization is needed when there is shared state.
You also don't need any locking since, again, you don't change existing state, you only add new rows. Moreover, I'm not a GORM expert, but version: false should switch off optimistic locking, as its name suggests; it does not mean pessimistic locking is activated.
From your question I don't fully understand what the problem is, but unique constraints are what prevents duplication in a database.
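Not from the answer above, but one common lock-free alternative is to let the database arbitrate between concurrent bids with an atomic conditional UPDATE. Below is a TypeScript sketch with a hypothetical Db.execute helper that returns the number of affected rows; the same idea works from GORM via executeUpdate, or via a unique index on (item_id, value).

interface Db {
  execute(sql: string, params: unknown[]): Promise<number>;
}

async function placeBid(db: Db, itemId: number, bidValue: number): Promise<boolean> {
  // Succeeds only while the bid is still strictly higher than the current
  // price, so two concurrent identical bids cannot both get through.
  const updated = await db.execute(
    "UPDATE item SET price = ? WHERE id = ? AND price < ?",
    [bidValue, itemId, bidValue]
  );
  if (updated === 0) return false; // lost the race, or the bid was too low
  await db.execute("INSERT INTO bid (item_id, value) VALUES (?, ?)", [itemId, bidValue]);
  return true;
}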
