Understanding fee payments and asset flow - nearprotocol

Consider the following example transaction 24AVRBgWnEWQK1yPnfgfzkSugsWbLNxhnHeDQr9tG7Mf.
Receipt Ef93JLMy6aeAKDQA4p74dLxEQEPPJpLjfwPz6bJE3tBy burned 21027712500000000000 tokens as a fee. As one can see, this receipt is a sub-receipt of GrvohsC2eHLCZDKB7r2yZmSaxqqzqh97keCEFt3QeyQP, which is itself a sub-receipt of another one. There are different senders and receivers in this receipt chain.
There are a few things I don't understand here:
Did the sender of the transaction (rgv250cc.near in this case) cover all the fees for executing all receipts?
In receipt Ef93JLMy6aeAKDQA4p74dLxEQEPPJpLjfwPz6bJE3tBy both predecessor and receiver are aurora.pool.near. Does this mean that in this case aurora.pool.near covered the fee for this receipt, using gas that was earlier passed along from the transaction sender, as indicated by the amount of gas ("gas": 250000000000000) attached to the function call in the transaction?
What does the value of tokens_burnt in receipt 6okihcHrqjr4KWY6CeRzynvC5GxA6wrNyZNMYyKphqJP represent? Is it the fee for executing this receipt only, or the cumulative value of executing all receipts in this receipt chain?
EDIT:
Regarding question 3: after summing all the tokens_burnt values, the total matched the explorer, so each receipt shows only the tokens burned for executing itself.

Did the sender of the transaction (rgv250cc.near in this case) cover all the fees for executing all receipts?
Correct.
In receipt Ef93JLMy6aeAKDQA4p74dLxEQEPPJpLjfwPz6bJE3tBy both predecessor and receiver are aurora.pool.near. Does this mean that in this case aurora.pool.near covered the fee for this receipt, using gas that was earlier passed along from the transaction sender, as indicated by the amount of gas ("gas": 250000000000000) attached to the function call in the transaction?
rgv250cc.near covers all the fees even for the cross-contract calls.
What does the value of tokens_burnt in receipt 6okihcHrqjr4KWY6CeRzynvC5GxA6wrNyZNMYyKphqJP represent? Is it the fee for executing this receipt only, or the cumulative value of executing all receipts in this receipt chain?
That only covers the tokens that were burnt during this particular receipt. You need to sum up all the tokens_burnt values to get the total.
I believe you might benefit from reading this article about balance changes.
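To illustrate the last point, here is a minimal sketch of summing tokens_burnt over the receipts of a transaction. The first value is the one quoted in the question; the second is a made-up placeholder for a sibling receipt, not the real amount from the example transaction.

import java.math.BigInteger;
import java.util.List;

public class TotalFee {
    public static void main(String[] args) {
        // tokens_burnt values taken from each receipt outcome in the RPC
        // transaction-status response (illustrative values only)
        List<BigInteger> tokensBurntPerReceipt = List.of(
                new BigInteger("21027712500000000000"),
                new BigInteger("242837506875000000000"));
        // The total fee paid by the signer is the sum over all receipts
        // (plus the tokens burnt by the transaction itself).
        BigInteger total = tokensBurntPerReceipt.stream()
                .reduce(BigInteger.ZERO, BigInteger::add);
        System.out.println("Total tokens burnt: " + total + " yoctoNEAR");
    }
}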

Related

How to calculate storage rent?

I send transactions programmatically and I need to know exactly how much the fee is going to be. I managed to figure out how to calculate fees for an ordinary transaction ((transfer cost + receipt creation cost) * 2), but now I'm struggling with a case where I need to move all my funds out of the account without deleting it. As I understand it, in this case there must be a storage rent amount left on the account. However, I can't really figure out how to calculate that rent.
There is a value returned from the 'EXPERIMENTAL_protocol_config' method that seems to be connected to rent - 'storage_amount_per_byte' - which implies that each byte costs 10000000000000000000 yocto. I can also get 'storage_usage' from the 'query' method with request type 'view_account', which supposedly indicates how many bytes my account uses (182 in my case). But whenever I try to send a transaction, I get a 'NotEnoughBalance' error stating that the transaction cost is higher than the balance - but only by 669547687500000000 yocto. Whatever I do, I can't understand where this number comes from. No combination of fees from the aforementioned 'EXPERIMENTAL_protocol_config' method yields this number.
There seems to be little to no decent documentation on transaction fee calculation, except for some 'fixed' values for most used actions. If you have any info on fee/storage rent calculation - I'll be thankful for it.
Through random chance, I managed to find out what the number '6695476875' is called: 'Reserved for transactions' (in gas, not tokens), as shown in the official wallet (wallet.near.org). God knows why it is reserved; neither docs.near.org, nomicon.io nor wiki.near.org has any info regarding this 'reservation', and this number is never mentioned in any RPC API method. It is also never mentioned in the 'near-api-js' lib, so I really have no idea whether the devs are even aware of it.
Anyway, since the title of this problem is 'How to calculate storage rent', the answer is something like this:
You get account info from the 'query' method of the RPC API (here's the doc) and take the "storage_usage" value (this is the number of bytes that your account takes up on the blockchain).
You get protocol info from the 'EXPERIMENTAL_protocol_config' method of the RPC API (here's the doc) and take the "storage_amount_per_byte" value.
You multiply the number of bytes by storage_amount_per_byte and add the magic number 669547687500000000 to it.
The resulting number is the minimum amount of tokens that you must have in your account at all times; a small worked example follows.
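Putting those steps together, a minimal sketch of the calculation (the RPC calls themselves are omitted; the values are the ones mentioned above):

import java.math.BigInteger;

public class StorageRent {
    public static void main(String[] args) {
        // storage_usage from the `query` (view_account) RPC method, in bytes
        BigInteger storageUsage = BigInteger.valueOf(182);
        // storage_amount_per_byte from EXPERIMENTAL_protocol_config, in yocto
        BigInteger storageAmountPerByte = new BigInteger("10000000000000000000");
        // the "magic" reserved-for-transactions amount mentioned above, in yocto
        BigInteger reservedForTransactions = new BigInteger("669547687500000000");

        BigInteger minBalance = storageUsage.multiply(storageAmountPerByte)
                .add(reservedForTransactions);
        // For 182 bytes: 1820000000000000000000 + 669547687500000000
        //              = 1820669547687500000000 yocto
        System.out.println("Minimum balance to keep: " + minBalance + " yocto");
    }
}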
I don't know why it is common practice in the blockchain industry to make developers' lives harder, but this is a good example of it.

Batch operation over thousands of aggregates in CQRS system, do people do that?

I'm working on an application for managing bank credit cards.
CQRS and Event Sourcing architecture was chosen for the app.
The most important aggregate in the app is CreditCard which controls the credit card lifecycle.
It looks something like:
class CreditCard {
    private int status;

    public void activate() { ... }
    public void deactivate() { ... }
    ...
}
Its activate and deactivate methods protect credit card invariants and publish CardActivatedEvent and CardDeactivatedEvent, respectively, if the invocation succeeds.
We store these events in the event store for later aggregate reconstruction on the command side.
We apply these events to various views.
We use these events to notify other third party systems.
All good for now.
Recently, we got a new requirement to charge all active credit cards on a monthly basis.
My first instinct was: OK, we can add a charge method to the same CreditCard aggregate.
This method can check some invariants relevant to charging, like: is the card in the correct status for charging, was it charged already, etc.
On successful invocation, this method can publish CardChargedEvent.
Then we can create some process manager which will, once per month, query the view side for active credit cards to get their IDs.
Having these IDs, the process manager can issue multiple charge commands (one per credit card aggregate) to the command side.
For each charge command received, the command side will reconstruct the CreditCard aggregate object and call its charge method.
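In code, the intended process manager would look roughly like this (the type names ActiveCardView, CommandBus and ChargeCommand are hypothetical placeholders, not part of the actual design):

import java.time.YearMonth;
import java.util.List;

interface ActiveCardView { List<String> findActiveCardIds(); } // read-model query
interface CommandBus { void send(Object command); }            // command-side entry point
record ChargeCommand(String cardId, YearMonth billingPeriod) {}

class MonthlyChargeProcessManager {
    private final ActiveCardView activeCardView;
    private final CommandBus commandBus;

    MonthlyChargeProcessManager(ActiveCardView view, CommandBus bus) {
        this.activeCardView = view;
        this.commandBus = bus;
    }

    // Runs once a month: one query against the view side, then one command
    // (and therefore one aggregate load and one event append) per credit card.
    void chargeAllActiveCards(YearMonth billingPeriod) {
        for (String cardId : activeCardView.findActiveCardIds()) {
            commandBus.send(new ChargeCommand(cardId, billingPeriod));
        }
    }
}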
The only problem is that this approach looks quite inefficient, especially regarding database roundtrips on the command side (one read and one write per aggregate instance).
If we take into account that we can easily have 100k-plus credit cards in our app, this roundtrip overhead starts to look like a problem to me.
Does anyone have any experience with batch operations on CQRS/ES systems?
Is my concern valid?
What to do in such cases?
How do you implement batches in CQRS systems?
One alternative that comes to mind is that, for the charging use case, I ditch CQRS/ES/DDD principles and implement the whole thing using stored procedures on one of our view databases. This procedure can search for suitable credit cards in the credit card view table and populate a "to be charged" queue table with the records found. Then I can have some external process that reads this second table and does whatever it needs to do.
Recently, we got a new requirement to charge all active credit cards on a monthly basis. My first instinct was: OK, we can add a charge method to the same CreditCard aggregate.
I think this is where the design flaw happens.
Your CreditCard aggregate was designed with a specific use case in mind:
The most important aggregate in the app is CreditCard which controls the credit card lifecycle.
Charging the credit card is not part of the credit card lifecycle. Whether it happens or not depends on the credit card state, but (successfully) charging the credit card will not change the state of your domain object. It should not interact with the CreditCard domain aggregate, as the purpose of your aggregate is to enforce business rules when changing your state. You should ask yourself: what aggregate is changed when charging a credit card?
The answer to this question depends on the rest of your domain model and business cases, but it has more to do with things like account balance or credit authorization than with the card itself. You could implement it like this (a rough sketch of the AccountBalance part follows the list):
A batch process working monthly would query your CreditCard aggregate for active cards, then try to charge all customers for their monthly fees by sending a command to the AccountBalance aggregate;
The AccountBalance aggregate would raise a BalanceChangedEvent if the customer has enough money, or a CreditAuthorizationRequiredEvent if not, temporarily freezing the account until the credit is authorized or rejected;
A CreditAuthorization aggregate could either allow or deny the credit, based on credit allowance business rules, raising events accordingly;
The AccountBalance aggregate would unfreeze the account and, depending on the outcome, either change the balance and raise the BalanceChangedEvent or leave it unchanged;
The CreditCard aggregate would subscribe to the CreditDeniedEvent to deactivate the credit card because the customer was not able to pay the fees;
... and so on ...
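Here is the rough sketch of the AccountBalance side of that flow. The event names come from the list above; the fields, method signature and fee handling are illustrative assumptions, and the event-sourcing plumbing (applying and persisting events) is omitted.

import java.math.BigDecimal;

class AccountBalance {
    interface DomainEvent {}
    record BalanceChangedEvent(BigDecimal delta) implements DomainEvent {}
    record CreditAuthorizationRequiredEvent(BigDecimal requestedAmount) implements DomainEvent {}

    private BigDecimal balance = BigDecimal.ZERO;
    private boolean frozen = false;

    // Handles the monthly-fee command sent by the batch process.
    DomainEvent chargeMonthlyFee(BigDecimal fee) {
        if (frozen) {
            throw new IllegalStateException("account is frozen pending credit authorization");
        }
        if (balance.compareTo(fee) >= 0) {
            balance = balance.subtract(fee);
            return new BalanceChangedEvent(fee.negate());
        }
        // Not enough funds: freeze the account and ask for a credit authorization.
        frozen = true;
        return new CreditAuthorizationRequiredEvent(fee);
    }
}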

Transfer to recipient which doesn't exist

When a sender issues a Transfer action, the runtime subtracts the deposit, issues a receipt, and the node routes it to another shard.
What happens if the recipient account doesn't exist? Will the tokens be refunded back to the sender?
Yes. The runtime generates a refund receipt (with the system account as the sender) containing a single Transfer action for the total amount of the deposit.

What is the role of the "IS CLOSED" column in Magento's transactions area?

I can see that the capture has been processed successfully.
But then on the "transactions" screen, the "Is Closed" column says "No" next to the capture. I think I just don't understand the role of this column. Can someone help explain it to me?
A little background on the credit card payment transaction process flow helps make sense of this. These are the basic flow actions of the transaction lifecycle:
Authorization
Capture
Settlement
These flow actions are broken down into more specific operations that can be called against the payment gateway. Here are some basic ones that are relevant:
Authorize (AUTH_ONLY):
Run the card for a given amount and obtain a unique authorization code. The amount will be put on hold and you are guaranteed these funds as long as you use the authorization code in a Capture transaction within 30 days. (How long before an authorization code expires varies by company. Check with your payment gateway)
Customers don't see the authorization as a charge on their statement, but they will see their available funds decrease by the amount you ran the authorization for.
If you don't use the authorization code in a follow-up Capture transaction, the authorization is "dropped", the funds are returned to the customer's balance, and you can no longer use it.
Capture (PRIOR_AUTH_CAPTURE):
Use a previously obtained authorization code to complete the transaction.
The amount captured can be lower than the originally obtained authorization amount (this is useful in cases like our example where you don't know the total order amount ahead of time).
Source: http://www.softwareprojects.com/resources/conversion-traffic-to-cash/t-processing-payments-authorize-vs-capture-vs-settle-2030.html
Settlement: This is the process merchants must complete ... to be paid for their transactions.
The product or service must be delivered or performed before settlement can take place. In the case of mail order/telephone order, this specifically means the goods must be shipped before the settlement process is performed.
Source: http://www.shift4.com/insight/glossary/
In Magento, the is_closed flag signifies that a transaction is settled and no other operations may be performed against it. The reason a transaction would be left open until settlement is so that you can do partial shipments of goods (multiple captures), as well as void or refund the transaction.
To use Magento’s Mage_Authorizenet_Model_Directpost as an example, the capture() operation leaves the current transaction open, while void() and _refund() operations close it.
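To make the open/closed distinction concrete, here is a toy model of the lifecycle described above. This is not Magento code; the class, fields and methods are illustrative assumptions only.

import java.math.BigDecimal;

// A transaction stays open after authorization and (partial) captures, so
// further operations are still possible; it is only closed at settlement or
// when it is voided.
public class PaymentTransaction {
    private BigDecimal authorized = BigDecimal.ZERO;
    private BigDecimal captured = BigDecimal.ZERO;
    private boolean isClosed = false;

    public void authorize(BigDecimal amount) {               // AUTH_ONLY
        requireOpen();
        authorized = amount;
    }

    public void capture(BigDecimal amount) {                  // PRIOR_AUTH_CAPTURE
        requireOpen();
        if (captured.add(amount).compareTo(authorized) > 0)
            throw new IllegalArgumentException("cannot capture more than authorized");
        captured = captured.add(amount);                       // stays open: partial shipments allowed
    }

    public void voidTransaction() { requireOpen(); isClosed = true; }
    public void settle()          { requireOpen(); isClosed = true; } // after settlement nothing else may run

    private void requireOpen() {
        if (isClosed) throw new IllegalStateException("transaction is closed");
    }
}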

Dequeuing events during discrete event simulation

I have a question regarding the dequeue mechanism during discrete event simulation.
Most of the implementations use some kind of priority queue which can be used to quickly retrieve the event with the earliest timestamp. What happens when such an event cannot be scheduled because, say, it needs a resource to be able to run?
There may be another event in the queue whose timestamp is greater than the timestamp of the event that is blocked on a resource.
For example, let us assume we are modelling a grocery store with separate checkout lines and a cashier per line. A shopper entering a checkout line is an event. We enqueue this event based on the time the shopper entered the checkout line. However, the order in which our simulation should execute two such events is not necessarily the time order in which they entered the checkout line, because the cashiers might free up in a different order.
In such a scenario how does using a priority queue solely based on timestamp --- and independent of resource availability --- work out?
You need a queue for each cashier, or at least a count of waiting customers if customer identity is not important in your simulation (e.g. I would join a queue of three people with one item each over a queue with one person with a full trolley, so just a queue length may not capture the information needed to incorporate that heuristic).
When a customer joins the queue, the number of queuing customers is incremented or the customer is pushed onto the cashier's queue.
When the cashier is ready to serve, the first customer is popped off the cashier's queue. So the customer-service event depends not on the time the customer arrives, but on when the cashier is ready.
These queues or counters are independent of the scheduling mechanism for events - the events scheduled manipulate these queues, they aren't dependent on them for scheduling.
As Pete Kirkham pointed out, it's important to be aware that the lines (queues) that customers wait in are completely separate things from the priority queue that's used to determine event ordering.
In discrete-event simulation an event is a point in time at which the system state changes. When an event occurs you figure out what to do next based on the state. Joining the line of customers is an event, but so is becoming eligible for service. Once a customer becomes eligible for service, the logic of that event has to check whether service is possible or not. If so, schedule a new event for when the service will end. If there are resource constraints, then nothing gets scheduled and that customer is on hold. However, at some point in the future the required resource will become available. That's an event too, and that event's logic should check to see if there are customers on hold due to lack of the resource. If not, there's no need to schedule anything, but if so, you can now schedule the actual service for the customer. You can see that customer delays in the queue will increase with resource constraints.
For a much fuller explanation of how discrete-event simulations work, please look at this introductory tutorial paper.
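To tie the two answers together, here is a minimal single-cashier sketch: the priority queue orders events by time, while the cashier's FIFO line and busy flag are plain state that the event logic inspects. The arrival and service times are illustrative assumptions.

import java.util.ArrayDeque;
import java.util.PriorityQueue;
import java.util.Queue;

public class GrocerySim {
    enum Type { ARRIVAL, SERVICE_END }
    record Event(double time, Type type) {}

    public static void main(String[] args) {
        PriorityQueue<Event> eventList =
                new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
        Queue<Double> line = new ArrayDeque<>();   // arrival times of waiting customers
        boolean cashierBusy = false;
        final double serviceTime = 4.0;

        // Three customers arrive at fixed (illustrative) times.
        for (double t : new double[] {1.0, 2.0, 3.0}) eventList.add(new Event(t, Type.ARRIVAL));

        while (!eventList.isEmpty()) {
            Event e = eventList.poll();
            switch (e.type()) {
                case ARRIVAL -> {
                    if (cashierBusy) {
                        line.add(e.time());               // resource busy: customer waits, nothing scheduled yet
                    } else {
                        cashierBusy = true;               // resource free: start service immediately
                        eventList.add(new Event(e.time() + serviceTime, Type.SERVICE_END));
                    }
                }
                case SERVICE_END -> {
                    if (line.isEmpty()) {
                        cashierBusy = false;              // no one waiting: cashier goes idle
                    } else {
                        double arrivedAt = line.poll();   // pull the next waiting customer
                        System.out.printf("t=%.1f: start serving customer who arrived at %.1f%n",
                                e.time(), arrivedAt);
                        eventList.add(new Event(e.time() + serviceTime, Type.SERVICE_END));
                    }
                }
            }
        }
    }
}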
