I'm trying to create a function or an extrinsic that has no transaction fee for the origin, i.e. is totally free. I thought that maybe a weight of 0 would solve it, but it still costs tokens:
#[weight = 0]
Then I tried to adjust the state with an RPC call, which performed some calculations but did not modify the state.
How can I create a function/extrinsic that is free, without any transaction fee? And is it possible for RPC calls to adjust the state?
This is actually very easy with Substrate.
You simply include Pays::No in the weight of the function.
Like so:
#[weight = (100_000, DispatchClass::Normal, Pays::No)]
Here the tuple describes:
The weight of the function. You should put a real value here that represents how computationally complex this function is for your blockchain.
The DispatchClass of this function. The default choice is Normal.
The Pays option which determines if the caller will pay a fee or not.
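For context, here is a minimal sketch of what such a feeless dispatchable might look like in a decl_module!-style (FRAME v1) pallet, matching the attribute syntax above; the function name and body are hypothetical:

use frame_support::{decl_module, dispatch::DispatchResult, weights::{DispatchClass, Pays}};
use frame_system::ensure_signed;

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        // Hypothetical feeless extrinsic: the Pays::No component of the
        // weight tuple is what waives the transaction fee for the caller.
        #[weight = (100_000, DispatchClass::Normal, Pays::No)]
        pub fn do_something_free(origin) -> DispatchResult {
            let _who = ensure_signed(origin)?;
            // ... your pallet logic here ...
            Ok(())
        }
    }
}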
Note that if you create an extrinsic for which the user does not pay any fees, your blockchain is immediately vulnerable to DoS attacks, as any user could spam this function at no cost.
You will need to build other layers of verification into your blockchain to make sure only valid calls to this function are propagated to other nodes.
Take a look here:
https://github.com/paritytech/polkadot/blob/master/runtime/common/src/claims.rs#L386
In this case, there is a statement which we verify was correctly signed by the user making the call before the call is propagated to other nodes:
https://github.com/paritytech/polkadot/blob/master/runtime/common/src/claims.rs#L592
So you must do the same if you want your blockchain to be safe with a free function like this.
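For reference, the claims pallet does this by implementing ValidateUnsigned, which lets the pallet reject invalid calls at the transaction-pool level, before they are propagated or included in a block. A rough sketch of the pattern, reusing the hypothetical call from above (here assumed to carry a payload and a signature; the signature-check helper is made up, and the exact trait signature varies between Substrate versions):

use sp_runtime::{
    traits::ValidateUnsigned,
    transaction_validity::{
        InvalidTransaction, TransactionSource, TransactionValidity, ValidTransaction,
    },
};

impl<T: Trait> ValidateUnsigned for Module<T> {
    type Call = Call<T>;

    fn validate_unsigned(_source: TransactionSource, call: &Self::Call) -> TransactionValidity {
        if let Call::do_something_free(payload, signature) = call {
            // Hypothetical helper: only correctly signed payloads make it
            // into the pool; everything else is dropped at no cost to us.
            if !verify_payload_signature(payload, signature) {
                return InvalidTransaction::BadProof.into();
            }
            ValidTransaction::with_tag_prefix("FreeCall")
                .and_provides(payload)
                .build()
        } else {
            InvalidTransaction::Call.into()
        }
    }
}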
I send transactions programmatically and I need to know exactly how much the fee is going to be. I managed to figure out how to calculate fees for an ordinary transaction ((transfer cost + receipt creation cost) * 2), but now I'm struggling with a case where I need to get all my funds out of the account without deleting it. As I understand it, in this case there must be a storage rent left on the account; however, I can't figure out how to calculate that rent.
There is a value returned from the 'EXPERIMENTAL_protocol_config' method that seems to be connected to rent, 'storage_amount_per_byte', which implies that each byte costs 10000000000000000000 yocto. I can also get 'storage_usage' from the 'query' method with request type 'view_account', which supposedly indicates how many bytes my account uses (182 in my case). But whenever I try to send a transaction, I get a 'NotEnoughBalance' error stating that the transaction cost is higher than the balance, by exactly 669547687500000000 yocto. Whatever I do, I can't figure out where this number comes from; no combination of fees from the aforementioned 'EXPERIMENTAL_protocol_config' method yields it.
There seems to be little to no decent documentation on transaction fee calculation, apart from some 'fixed' values for the most used actions. If you have any info on fee/storage rent calculation, I'll be thankful for it.
By random chance, I managed to find out what the number '6695476875' is called: 'Reserved for transactions' (in gas, not tokens), as displayed in the official wallet (wallet.near.org). God knows why it is reserved; neither docs.near.org, nomicon.io nor wiki.near.org has any info regarding this 'reservation', and the number is never mentioned in any RPC API method. It is also never mentioned in the 'near-api-js' lib, so I really have no idea if the devs are even aware of it.
Anyway, since the title of this problem is 'How to calculate storage rent', the answer is something like this:
You get account info from 'query' method of RPC API (here's the doc) and take the "storage_usage" value (this is the amount of bytes that your account takes up on the blockchain).
You get protocol info from 'EXPERIMENTAL_protocol_config' method of RPC API (here's the doc) and take the "storage_amount_per_byte" value.
You multiply the number of bytes by storage_amount_per_byte and add the magic 669547687500000000 number to it.
The resulting number is the least amount of tokens that you must have in your account at any time.
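A minimal sketch of that arithmetic in Rust, assuming the two values were already fetched from the RPC (all names here are mine):

// `storage_usage` comes from the query/view_account RPC call and
// `storage_amount_per_byte` from EXPERIMENTAL_protocol_config.
fn min_required_balance(storage_usage: u64, storage_amount_per_byte: u128) -> u128 {
    // The magic "Reserved for transactions" amount from the question,
    // expressed in yoctoNEAR.
    const RESERVED_YOCTO: u128 = 669_547_687_500_000_000;
    storage_usage as u128 * storage_amount_per_byte + RESERVED_YOCTO
}

fn main() {
    // Values from the question: 182 bytes at 10^19 yoctoNEAR per byte.
    let min = min_required_balance(182, 10_000_000_000_000_000_000);
    println!("minimum balance: {} yoctoNEAR", min);
}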
I don't know why making developers' lives harder is common practice in the blockchain industry, but this is a good example of it.
Say you are creating multiple functions (call them prime functions) for an app, and they share some common code. If you extract that shared code into its own function (a sub-function) and use it, are you charged for these sub-functions when you call the prime functions?
For example, suppose I send emails to users on different occasions, so I make a new function that can only be called from inside a prime function, which is itself triggered by, say, an HTTP request. The email function is never exposed to the HTTP request directly. When I use a function that in turn uses this email function, am I charged for just one invocation, or for both the HTTP request function and the email function?
I know that they incur costs for the compute they use, but my question is in terms of invocations.
If the function that is supposed to send the email is called from the prime function, then yes: both functions have been invoked, and you will be charged for each Lambda's duration separately.
One easy way to find your costs is to look at the CloudWatch logs of these Lambdas.
Every log line of type REPORT corresponds to one invocation, and it also mentions the configured memory and the billed duration.
These are the values AWS uses to bill you.
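For reference, a REPORT line looks roughly like this (all values below are made up):

REPORT RequestId: 3f8a1c2d-0000-0000-0000-000000000000  Duration: 12.34 ms  Billed Duration: 13 ms  Memory Size: 128 MB  Max Memory Used: 45 MB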
Should I pay for every read from NEAR protocol?
How do I view the value stored in NEAR protocol smart contract? (e.g. staking pool fees)
What is the difference between view and change methods?
Should I pay for every read from NEAR protocol?
TL;DR: No, you should not.
In NEAR protocol there are two ways to interact with smart contracts:
Submit a transaction with a FunctionCall action, which will get the specified method executed on the chunk producing nodes and the result will be provable through the blockchain (in terms of near-api-js these are "change methods")
Call query(call_function) JSON RPC method, which will get the specified method executed on the RPC node itself in a read-only environment, and the call will never be recorded/proved through the blockchain (in terms of near-api-js these are "view methods")
You can change the state or perform chained operations (e.g. cross-contract calls, token transfers, or access key addition/deletion) only through the first approach, since the blockchain expects the user to cover the execution costs: the user signs their transaction and gets charged for the execution.
Sometimes you don't need to change the state; instead, you only want to read a value stored on the chain, and paying for it is suboptimal (though if you need to prove that the operation has been made, it might still be desirable). In this case, you would prefer the second approach. Calling a method through JSON RPC is free of charge and provides a limited context during the contract execution, but it is enough in some scenarios (e.g. when you want to check what the staking pool fee is, or who the owner of the contract is, etc.).
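The distinction is visible in the contract code itself. In a Rust contract built with near-sdk, a method taking &self can be served by an RPC node as a free view call, while a method taking &mut self changes state and has to arrive as a signed (paid) FunctionCall transaction. A minimal sketch (the contract and its fields are my own example):

use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::near_bindgen;

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, Default)]
pub struct Counter {
    value: u64,
}

#[near_bindgen]
impl Counter {
    // View method: read-only, callable for free via query(call_function).
    pub fn get_value(&self) -> u64 {
        self.value
    }

    // Change method: mutates state, so it requires a signed transaction
    // and the caller pays for the execution.
    pub fn increment(&mut self) {
        self.value += 1;
    }
}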
I'm trying to build a mental model of the role of off-chain workers in Substrate. The bigger picture seems to be that they move logic into the Substrate node that would otherwise be handled by oracles triggering on predefined transactions. There are two use cases I was thinking of specifically:
1: Validating file formats: an incoming transaction proposes a file accessible via URL or IPFS hash, and its format needs to be validated. An off-chain worker fetches the file, asserts the format (size, encoding, content, whatever) and, if correct, submits another transaction saying it's valid.
2: Key generation: let's assume there is a separate service distributed with the Substrate node, which manages keys for each instance. Node A runs a key-sharing algorithm (like Shamir's secret sharing) via this external service between participants A, B and C, then makes a transaction creating a group (A, B, C) on-chain. This transaction triggers all nodes in this group to run off-chain workers, which call into their local key stores to verify they hold the key. They can all mark this on-chain afterwards.
As far as I understand, off-chain workers are triggered in every node after block execution. In the former use case, this would result in lots of transactions validating just one file, and nothing guarantees their correctness. What is a good way of reaching consensus on the validity of the file? Is it also possible without economic incentives like staking? Those would be problematic in networks whose tokens have no value, e.g. in enterprise settings. Is this even the right use case for off-chain workers? The second example should not suffer from such an issue; we just need all parties to verify they hold the key.
Where does the thought process above go wrong, and why?
As far as I understand it correctly, off-chain workers are triggered in every node after block execution.
Yes and no. There is a CLI flag for it. And at the time of this writing it says:
--offchain-worker <ENABLED>
    Should execute offchain workers on every block.
    By default it's only enabled for nodes that are authoring new blocks.
    [default: WhenValidating] [possible values: Always, Never, WhenValidating]
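So, for example, starting a node like this (the binary name is just a placeholder) makes it run off-chain workers on every imported block:

./your-node --offchain-worker Always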
In the former use case, this would result in lots of transactions validating just one file, and nothing guarantees the correctness of these.
I think it is the responsibility of the receiving function (a.k.a. the Call) to handle and incentivise this. For example, there could be a reward opportunity for validating an address, but if it has already been submitted by another transaction, you get slashed (or, even if not, you still pay some transaction fee for nothing). In such cases, you can assume that not all participants will submit a transaction; they will only do it when there is a chance of improvement, which should be shaped by your reward/slash scheme.
Is this even the right use case for off-chain workers?
I am no expert here, but I think the validation scenario, at least, is a good fit. It is just a matter of finding a good incentive plus anti-spam slashing.
I am less familiar with the second example, so no comments on that.
Let's say I've got a domain class with functions that are to be called in a sequence. Each function does its job, but if the previous step in the sequence is not done yet, it throws an error. The alternative is for each function to first complete the step required for it to run and then execute its own logic. I feel that this is not good practice, since I am adding multiple responsibilities, and the caller won't know which operations will happen when they invoke a method.
My question is: how do I handle dependent scenarios in DDD? Is it the responsibility of the caller to invoke the methods in the right sequence, or do we make each method handle the dependent operations before its own logic?
Is it the responsibility of the caller to invoke the methods in the right sequence?
It's OK if those methods have a business meaning. For example, the client may book a flight and then book a hotel room. Both of those are things the client understands, and it is the client's logic to call them in this sequence. On the other hand, inserting the reservation into the database and then committing (or whatever) is technical; the client should not have to deal with that at all. The same goes for "initializing" an object, then calling other methods, then calling "close".
Requiring a sequence of technical calls is a form of temporal coupling, it is considered a bad practice, and is not directly related to DDD.
The solution is to model the problem better. There is probably a higher level use-case the caller wants achieved with this call sequence. So instead of publishing the individual "steps" required, just support the higher use-case as a whole.
In general you should always design with the goal to get any sequence of valid calls to actually mean something (as far as the language allows).
Update: a possible model for the mentioned "File" domain, in which the type system itself enforces the legal sequence (you can only convert or download a file that has been uploaded):
public interface LocalFile {
    // The only operation on a local file is uploading it, so "upload
    // first" is enforced by the types rather than by runtime checks.
    RemoteFile upload();
}

public interface RemoteFile {
    // Conversion happens remotely and yields another remote file.
    RemoteFile convert(...);
    // Downloading brings the file back into the local domain.
    LocalFile download();
}
From my point of view, what you are describing is the orchestration of domain model operations. That is the job of the application layer, the layer above the domain model. You should have an application service that calls the domain model methods in the right sequence; it should also take into account whether some step has left a task undone and, in that case, tell the next step to perform it.
TL;DR: scroll to the bottom for the answer, but the backstory will give some good context.
If the caller into your domain must know the order in which to call things, then you have missed an opportunity to encapsulate business logic in your domain, which is a symptom of an anemic domain.
@RobertBräutigam made a very good point:
Requiring a sequence of technical calls is a form of temporal coupling, it is considered a bad practice, and is not directly related to DDD.
This is true, but it is worse when you do it in your domain model, because non-domain concerns get intermixed with domain concerns and intent becomes lost in a sea of non-business logic. If you can, look for a higher-order aggregate that encapsulates the ordering. To borrow Robert's example, rather than booking a flight and then a hotel room, and forcing that order on the client, you could have a Vacation aggregate take both and validate them.
I know that sounds wrong in your case, and I suspect you're right. There's a clear dependency that can't happen all at once, so that can't be the end of the story. When you have a clear dependency with intermediate transactions that must occur before the "final" state, we have... orchestration (think sagas, distributed transactions, domain events and all that goodness).
What you describe with file operations spans transactions. The manipulation (state change) of the domain is transactional at each point in a distributed transaction, but is not transactional overall. So when @choquero70 says
you are describing is the orchestration of domain model operations. That's the job of the application layer, the layer upon domain model.
that is also correct. Orchestration is key. Each step must manipulate the state of the domain once, and once only, and leave it in a valid state, but it is OK for there to be multiple steps.
Each of those individual points along the timeline is a valid moment in the state of your domain.
So, back to your model. If you expose a single interface with calls to all possible steps, you leave yourself open to things being called out of order. Make this impossible, or at least improbable. Orchestration is not just about what to do, but about what to prevent from happening. Create smaller interfaces/classes to avoid accidentally increasing the "surface area" of what could be misused.
In this way, you are guiding the caller on what to do next by feeding them valid intermediate states. But, and this is the important part, the burden of knowing what to call in what order is not on the caller. Sure, the caller could know what to do, but why force it?
Your basic algorithm is the same: upload, transform, download.
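To make that concrete, here is a rough Rust rendering of the same idea as the Java model above, just to show how the types alone can carry the sequencing; all names are mine:

struct LocalFile { path: String }
struct RemoteFile { url: String }

impl LocalFile {
    // Consumes the local file: the only way to obtain a RemoteFile
    // is to upload first.
    fn upload(self) -> RemoteFile {
        RemoteFile { url: format!("https://example.invalid/{}", self.path) }
    }
}

impl RemoteFile {
    // Placeholder body: a real implementation would call the converter.
    fn convert(self, _target_format: &str) -> RemoteFile {
        self
    }

    fn download(self) -> LocalFile {
        LocalFile { path: self.url }
    }
}

fn main() {
    let local = LocalFile { path: "report.odt".into() };
    // The types only permit: upload -> convert -> download.
    let converted = local.upload().convert("pdf");
    let _downloaded = converted.download();
}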
Is it the responsibility of the caller to invoke the methods in the right sequence?
Not exactly. It is the responsibility of the caller to choose from the legitimate choices given the state of your domain. It is "your" responsibility to present those choices via business methods on a correctly modeled moment/interval aggregate suitable for the caller to use.
Or do we make the methods handle the dependent operations before it's own logic?
If you've set up orchestration correctly, this won't be necessary. But it does make sense to validate anyway.
On a side note, each step of the orchestration should be very linear in nature. I tell my developers to be suspicious of an orchestration step that has an if statement in it. If there's an if, that logic likely belongs in another orchestration step, or encapsulated in business logic.