How to calculate storage rent?

I send transactions programmatically and I need to know exactly how much the fee is going to be. I managed to figure out how to calculate the fee for an ordinary transaction ((transfer cost + receipt creation cost) * 2), but now I'm struggling with a case where I need to move all my funds out of an account without deleting it. As I understand it, in this case some storage rent must be left on the account, but I can't figure out how to calculate that rent.

There is a value returned from the 'EXPERIMENTAL_protocol_config' method that seems to be connected to rent, 'storage_amount_per_byte', which implies that each byte costs 10000000000000000000 yocto. I can also get 'storage_usage' from the 'query' method with request type 'view_account', which supposedly indicates how many bytes my account uses (182 in my case). But whenever I try to send a transaction, I get a 'NotEnoughBalance' error stating that the transaction cost is higher than the balance, but only by 669547687500000000 yocto. Whatever I do, I can't work out where this number comes from; no combination of fees from the aforementioned 'EXPERIMENTAL_protocol_config' method yields it.
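For reference, here is that fee arithmetic as a TypeScript sketch. The transaction_costs field names are my assumption about the shape of the EXPERIMENTAL_protocol_config response, so verify them against your own RPC output:

// Sketch (not authoritative): estimate the fee for a plain transfer,
// assuming the runtime_config field names below.
function transferFeeYocto(protocolConfig: any, gasPriceYocto: bigint): bigint {
  const costs = protocolConfig.runtime_config.transaction_costs;
  const transfer = costs.action_creation_config.transfer_cost;
  const receipt = costs.action_receipt_creation_config;
  // Each fee has a "send" and an "execution" half; for a transfer the
  // halves are equal, which is where the informal "* 2" comes from.
  const gas =
    BigInt(transfer.send_sir) + BigInt(transfer.execution) +
    BigInt(receipt.send_sir) + BigInt(receipt.execution);
  return gas * gasPriceYocto; // gas price in yoctoNEAR per unit of gas
}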
There seems to be little to no decent documentation on transaction fee calculation, apart from some 'fixed' values for the most-used actions. If you have any info on fee/storage rent calculation, I'll be thankful for it.

By random chance, I managed to find out what the number '6695476875' is called: 'Reserved for transactions' (in gas, not tokens), as shown in the official wallet (wallet.near.org). God knows why it is reserved; neither docs.near.org, nomicon.io nor wiki.near.org has any info regarding this 'reservation', and the number is never mentioned in any RPC API method. It is also never mentioned in the 'near-api-js' lib, so I really have no idea if the devs are even aware of it.
Anyway, since the title of this problem is 'How to calculate storage rent', the answer is something like this:
You get account info from the 'query' method of the RPC API (here's the doc) and take the "storage_usage" value (this is the number of bytes your account takes up on the blockchain).
You get protocol info from the 'EXPERIMENTAL_protocol_config' method of the RPC API (here's the doc) and take the "storage_amount_per_byte" value.
You multiply the number of bytes by storage_amount_per_byte and add the magic number 669547687500000000 to the result.
The resulting number is the minimum amount of tokens that your account must hold at any time.
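In code, the recipe looks roughly like this. Incidentally, 669547687500000000 = 6695476875 * 10^8, i.e. the 'reserved' gas figure priced at what appears to be the default minimum gas price of 10^8 yoctoNEAR per unit of gas, which would explain where the magic number comes from:

// Sketch of the recipe above; RESERVED_FOR_TRANSACTIONS is the
// empirical value from this post (6695476875 gas * 10^8 yoctoNEAR/gas).
const RESERVED_FOR_TRANSACTIONS = 669547687500000000n;

async function minBalanceYocto(rpcUrl: string, accountId: string): Promise<bigint> {
  const call = async (method: string, params: unknown) => {
    const res = await fetch(rpcUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ jsonrpc: '2.0', id: 'dontcare', method, params }),
    });
    return (await res.json()).result;
  };

  const account = await call('query', {
    request_type: 'view_account', finality: 'final', account_id: accountId,
  });
  const config = await call('EXPERIMENTAL_protocol_config', { finality: 'final' });

  return BigInt(account.storage_usage)
       * BigInt(config.runtime_config.storage_amount_per_byte)
       + RESERVED_FOR_TRANSACTIONS;
}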
I don't know why it is common practice in the blockchain industry to make developers' lives harder, but this is a good example of such practice.

Related

What is "sf_max_daily_api_calls"?

Does someone know what the "sf_max_daily_api_calls" parameter in Heroku mappings does? I don't want to assume it is a daily limit for write operations per object, and I cannot find an explanation.
I tried to open a ticket with Heroku, but in their support ticket form the "Which application?" drop-down is required, and none of the support categories have anything to choose from there; the only option is "Please choose..."
I tried to find any reference to this field and can't; I can only see it used in Heroku's Quick Start guide, without an explanation. I'm working on a very busy object, read/write, and want to understand any limitations I need to account for.
Salesforce orgs have a rolling 24-hour limit on daily API calls. The limit is generally very generous in test orgs (sandboxes), 5M calls, because you can make stupid mistakes there. In production it's lower. A bit counterintuitive, but it protects their resources and forces you to write optimised code/integrations...
You can see your limit in Setup -> Company Information. There's a formula in the documentation; roughly speaking, you gain more of that limit with every user license you purchase (more for "real" internal users, less for community users), same as with data storage limits.
Also, every API call is supposed to return the current usage (in a special tag in the SOAP API, in a header in the REST API), so I'm not sure why you'd have to hardcode anything...
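For example, a quick sketch of reading that header (the Sforce-Limit-Info header name comes from the Salesforce REST API docs; the endpoint and auth handling below are placeholders):

// Sketch: every Salesforce REST API response carries a
// "Sforce-Limit-Info: api-usage=used/max" header.
async function apiUsage(instanceUrl: string, accessToken: string) {
  const res = await fetch(`${instanceUrl}/services/data/v56.0/limits`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const info = res.headers.get('Sforce-Limit-Info') ?? ''; // e.g. "api-usage=123/100000"
  const match = info.match(/api-usage=(\d+)\/(\d+)/);
  return match ? { used: Number(match[1]), max: Number(match[2]) } : null;
}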
If you design your operations right, the limit can be very generous. No idea how Heroku Connect works; ideally you'd spot some mention of "Bulk API 2.0" in its documentation, or try to find synchronous vs. asynchronous in there.
A normal old-school synchronous update via the SOAP API lets you process 200 records at a time, spending 1 API call. The REST Bulk API accepts CSV/JSON/XML of up to 10K records and processes them asynchronously; you poll for an "is it done yet" result... So starting a job, uploading files, committing the job and then checking, say, once a minute can easily be 4 API calls, and you can process millions of records before hitting the limit.
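Sketched out, that flow looks roughly like this (endpoint shapes taken from Salesforce's Bulk API 2.0 docs; error handling and the polling loop omitted):

// Sketch of a Bulk API 2.0 ingest: create job, upload CSV, close job.
// Roughly 4 API calls regardless of how many records the CSV holds.
async function bulkUpdate(base: string, token: string, csv: string) {
  const hdrs = { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' };
  const job = await (await fetch(`${base}/services/data/v56.0/jobs/ingest`, {
    method: 'POST', headers: hdrs,
    body: JSON.stringify({ object: 'Account', operation: 'update' }),
  })).json();
  await fetch(`${base}/services/data/v56.0/jobs/ingest/${job.id}/batches`, {
    method: 'PUT', headers: { ...hdrs, 'Content-Type': 'text/csv' }, body: csv,
  });
  await fetch(`${base}/services/data/v56.0/jobs/ingest/${job.id}`, {
    method: 'PATCH', headers: hdrs, body: JSON.stringify({ state: 'UploadComplete' }),
  });
  // ...then poll GET .../jobs/ingest/{id} until the job state is JobComplete.
}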
When all else fails (you've exhausted your options, can't optimise any further, can't purchase more user licenses), I think they sell "packets" of additional API call limit; contact your account representative. But there are lots of things you can try before that, not the least of them being setting up a warning for when you hit, say, a 30% threshold.

D365 Same Tracking Token was assigned to Email/Case in Customer Service

One customer had a problem where an incorrect email (from another customer) was assigned to a case. The incorrectly assigned email was a response to a case that had been deleted; however, the current case has the same tracking token as the deleted one. It seems that the CRM system reuses a tracking token as soon as it becomes available again. This should not happen! From our point of view, this is a real programming error on Microsoft's side. The only solution we see is to increase the number of digits to the maximum so that it takes longer until all tracking tokens are used up, but in the end you still reach the limit.
Is there another possibility, or has Microsoft really made a big mistake in the way emails are allocated?
We also activated Smart Matching, but that didn't help in this case either, because the allocation was made via the tracking token first.
Thanks
The structure of the tracking token can be configured and is set to 3 digits by default. This means that as soon as 999 emails are reached, the tracking token starts again at 1, which is basically a design flaw on Microsoft's part.
If you have "Automatic replies" enabled, that point is reached very quickly. We therefore had to increase the number to 9 digits, which is also not a 100% solution: at some point that number of emails is reached too, and then emails are once again assigned to cases that do not belong together. Microsoft has to come up with another solution.

Pentesting: Data boundaries for Strings

We had a pen test conducted by a third-party vendor. One of the observations is that there are no data boundaries. We have millions of fields in our applications, and there are no validations on String-type fields apart from occasional business-logic constraints on a few fields.
The pen test revealed issues with a lot of String fields where one can insert numeric or negative values and the app processes them without any issue.
My question is: is this a valid test? Why would someone disallow a numeric value such as "100" in a String field unless specifically asked to by the business?
If this is a valid issue, it means every single String attribute has to be tested for non-numeric values, which is quite insane. What should be the right approach?
This answer is a bit late, but hopefully it's still helpful to you or someone else down the line.
This finding has to do with input validation, and it is absolutely a valid test. Input validation is one of the controls recommended by the OWASP Foundation.
The risk here typically comes when this unsanitized value is passed to a downstream component, such as a database, user interface, or vulnerable function.
Here's a hasty example:
async function chargeUser(userId: number, currency: string) {
  // Look up the user and their outstanding balance.
  const user = await User.getUser(userId);
  // Note: `currency` is passed along without any validation.
  return await api.chargeUser(user.id, user.amountDue + currency);
}
The above TypeScript function takes a user ID and a currency, then makes a call to an API to charge the user for their current subscriptions. Let's make a few assumptions:
The API expects the amount in the format of "100USD" or "100".
The API allows negative charges for refunds.
If the currency is not included, the API will assume USD.
If the currency is unknown, the API call will fail with an error.
The user currently has $100 due.
Under normal conditions, an employee would make a request from the UI, specifying the currency to charge the user in. They have no control over how much the user is charged:
{"userId": 1, "currency": "USD"}
This would result in the user being charged 100 USD, as expected.
However, an attacker could leverage JavaScript's loose typing and the lack of input validation to charge the user more than the amount due:
{"userId": 1, "currency": 10000}
Since 100 + 10,000 is 10,100, and the API assumes the currency if it is omitted, this would result in the user being charged 10,100 USD.
Similarly, the attacker could instead do something such as:
{"userId": 1, "currency": -100100}
Since 100 + (-100,100) is -100,000 and the API supports negative amounts, this would result in the user having 100,000 USD deposited into their bank account.
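The underlying mechanics are just JavaScript's + operator applied to whatever type actually arrives at runtime:

// What "amountDue + currency" evaluates to, depending on what the
// attacker puts in the "currency" field of the JSON body:
const amountDue = 100;
console.log(amountDue + 'USD');   // "100USD"  (string concatenation)
console.log(amountDue + 10000);   // 10100     (numeric addition)
console.log(amountDue + -100100); // -100000   (negative charge)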
Since penetration tests are time-boxed, and penetration testers generally don't have access to your backend code or the same in-depth knowledge of your application as your team, it's not usually possible to pinpoint every instance where this could occur; hence the broad recommendation to ensure proper input validation is implemented for all fields.
Any input from the user, for all fields, for all requests, should always be validated to the fullest extent possible.
Since you have a large number of fields that share similar validation logic, you may wish to implement a single shared function responsible for validation, possibly driven by a schema (see the sketch after this list). For possible solutions, see:
OpenAPI
GraphQL
Spring Framework Documentation
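As a concrete illustration of the schema approach, here is a minimal sketch using the zod library (one option among many; the field rules are invented for this example):

import { z } from 'zod';

// One shared schema: reject anything that isn't the exact shape,
// type, and range we expect, before it reaches business logic.
const ChargeRequest = z.object({
  userId: z.number().int().positive(),
  currency: z.enum(['USD', 'EUR', 'GBP']), // whitelist, not "any string"
});

function handleCharge(rawBody: unknown) {
  const parsed = ChargeRequest.parse(rawBody); // throws on malformed input
  // parsed.userId and parsed.currency are now guaranteed well-formed.
  return parsed;
}

With this in place, the {"userId": 1, "currency": 10000} payload from the example above is rejected before it ever reaches the charge call.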

Indexing staking/rewards events for NEAR blockchain

I want to create an app that shows detailed info about historical stake and reward changes for each block. Can I track every delegation event that contains any stake balance change of a delegator/validator? Including information like:
delegator address
validator address
amount of tokens that got delegated, undelegated or received as rewards
I found this contract, and then I tried to decode the transaction's actions and receipts, but I still cannot find info about the amount.
For example, this transaction contains an unstake_all method call. I tried using the NEAR REST API and the Postgres DB:
postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer
But it does not include info about the amount, while the explorer does:
#ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
So can I somehow get these logs using the REST API or Postgres, and are these logs a reliable source? Or is there any other method to find staking/reward amount info?
First of all
But it does not include info about the amount, while the explorer does:
#ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
Explorer queries the RPC and shows you the logs from ExecutionOutcome.
In the PostgreSQL database for Indexer for Explorer, we don't store logs, so you can't find them there.
To have detailed info about historical stake and reward changes for each block, I think you should index the blockchain yourself, so you can be sure everything is calculated the way you expect.
In order to do this, you'd need to build an indexer. Happily, we're releasing an MVP (yet a completely working solution) of NEAR Lake Framework, a microframework that makes building indexers even easier than before.
Please have a look at the example project https://github.com/near/near-lake-raw-printer, which basically prints the data from each block. Refer to this comment for an example of the structure you receive for each block (StreamerMessage): https://github.com/near/near-lake/issues/1#issuecomment-1035285658
So the main idea is to start indexing from the block where rewards became available (Phase 2) and analyze each block, transaction, and receipt related to staking/unstaking, so you can perform your calculations and record the info about historical stake and reward changes.
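For illustration, a minimal sketch of such an indexer using the near-lake-framework JS package (config shape and types as in its README; the log-matching rule is an assumption based on the staking-pool log format quoted above):

import { startStream, types } from 'near-lake-framework';

const lakeConfig: types.LakeConfig = {
  s3BucketName: 'near-lake-data-mainnet',
  s3RegionName: 'eu-central-1',
  startBlockHeight: 63804051, // start wherever your history should begin
};

async function handleStreamerMessage(msg: types.StreamerMessage): Promise<void> {
  for (const shard of msg.shards) {
    for (const outcome of shard.receiptExecutionOutcomes) {
      for (const log of outcome.executionOutcome.outcome.logs) {
        // Staking pools log lines like "@account unstaking <amount>. ..."
        if (log.includes('staking')) {
          console.log(msg.block.header.height, outcome.receipt.receiverId, log);
        }
      }
    }
  }
}

(async () => {
  await startStream(lakeConfig, handleStreamerMessage);
})();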

Slashing/Validator reward /Treasury Reward in NEAR

I want to capture all the balance-changing operations for any NEAR address provided.
I have the info on all the action types and the archival APIs to pull out transactions. Can anyone help with the APIs to get slashing and reward distribution? Rewards are distributed to the validator, and some part goes to the treasury.
Please help me with the blocks and APIs I can use to validate the theoretical concept, so that I can capture all the balance-changing operations.
Thanks
We already collect this information here; there is read-only access to the DB. You need the account_changes table.
Rewards are marked with account_changes.update_reason = 'VALIDATOR_ACCOUNTS_UPDATE'.
Slashing is turned off for now, so you will not see anything about it.
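As an illustration, here is a query sketch against that table (the column names are my assumption from the Indexer for Explorer schema; verify them against the actual DB):

import { Client } from 'pg';

// Sketch: pull validator-reward balance changes for one account.
async function rewardChanges(accountId: string) {
  const client = new Client({
    connectionString:
      'postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer',
  });
  await client.connect();
  const { rows } = await client.query(
    `SELECT affected_account_id,
            affected_account_staked_balance,
            affected_account_nonstaked_balance,
            changed_in_block_timestamp
       FROM account_changes
      WHERE affected_account_id = $1
        AND update_reason = 'VALIDATOR_ACCOUNTS_UPDATE'
      ORDER BY changed_in_block_timestamp DESC
      LIMIT 100`,
    [accountId],
  );
  await client.end();
  return rows;
}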
