I want to create an app that has detailed info about historical stake and reward changes for each block. Can I track every delegation event that contains any stake balance change for a delegator/validator? Including information like:
delegator address
validator address
amount of tokens that were delegated, undelegated, or received as rewards
I found this contract and then tried to decode the transaction's actions and receipts, but I still cannot find the info about the amount.
For example, this transaction contains an unstake_all method call. I tried using the NEAR RPC API and the Postgres DB:
postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer
But it does not include info about the amount, while the Explorer does:
@ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
So, can I somehow get these logs using the RPC API or Postgres, and are these logs a reliable source? Or is there any other method to find info about staking/reward amounts?
First of all, regarding this part of the question:
But it does not include info about the amount, while the Explorer does:
@ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
Explorer queries the RPC and shows you the logs from ExecutionOutcome.
In the PostgreSQL database for Indexer for Explorer, we don't store logs, so you can't find them there.
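If you just need the logs for specific transactions, you can pull them straight from an archival RPC node. Here's a minimal sketch with near-api-js (the transaction hash and signer account are placeholders; depending on your near-api-js version, the provider constructor may take a plain URL string instead of an object):

```javascript
// Sketch: fetch ExecutionOutcome logs for a known transaction via RPC.
const { providers } = require("near-api-js");

const provider = new providers.JsonRpcProvider({
  url: "https://archival-rpc.mainnet.near.org",
});

async function printReceiptLogs(txHash, signerId) {
  // txStatus returns the FinalExecutionOutcome, which includes the
  // ExecutionOutcome (and logs) of every receipt the transaction produced.
  const result = await provider.txStatus(txHash, signerId);
  for (const receiptOutcome of result.receipts_outcome) {
    for (const log of receiptOutcome.outcome.logs) {
      console.log(receiptOutcome.id, log);
    }
  }
}

printReceiptLogs("<TX_HASH>", "<SIGNER_ACCOUNT_ID>").catch(console.error);
```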
To get detailed info about historical stake and reward changes for each block, I think you should index the blockchain yourself, so you can be sure everything is calculated as you expect.
In order to do this, you'd need to build an indexer. Happily, we're releasing an MVP (yet a completely working solution) of NEAR Lake Framework, a microframework that makes building indexers even easier than it was before.
Please have a look at the example project https://github.com/near/near-lake-raw-printer which basically prints the data from each block. Refer to this comment for an example of the structure (StreamerMessage) you receive for each block: https://github.com/near/near-lake/issues/1#issuecomment-1035285658
So the main idea is to start indexing from the block where rewards became available (Phase 2) and analyze each block, transaction, and receipt related to staking/unstaking so you can perform your calculations and record the info about historical stake and reward changes.
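For illustration, here is a minimal sketch of such an indexer using the JS flavor of the framework (the near-lake-framework npm package; the Rust crate is analogous). The start height, the AWS credentials it implicitly needs, and the log-matching heuristic are all assumptions for the example; check the field names against the StreamerMessage structure linked above:

```javascript
// Sketch: stream blocks with NEAR Lake Framework (JS) and pick out
// staking-pool logs. Start height and filtering are assumptions.
const { startStream } = require("near-lake-framework");

const lakeConfig = {
  s3BucketName: "near-lake-data-mainnet",
  s3RegionName: "eu-central-1",
  startBlockHeight: 69130938, // placeholder: the height you want to start from
};

async function handleStreamerMessage(streamerMessage) {
  for (const shard of streamerMessage.shards) {
    for (const outcome of shard.receiptExecutionOutcomes) {
      const executorId = outcome.executionOutcome.outcome.executorId;
      // Staking pools log lines like "@alice.near unstaking <amount>. ..."
      for (const log of outcome.executionOutcome.outcome.logs) {
        if (log.includes("staking")) {
          console.log(streamerMessage.block.header.height, executorId, log);
        }
      }
    }
  }
}

startStream(lakeConfig, handleStreamerMessage).catch(console.error);
```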
Does someone know what the "sf_max_daily_api_calls" parameter in Heroku Connect mappings does? I do not want to assume it is a daily limit for write operations per object, and I cannot find an explanation.
I tried to open a ticket with Heroku, but in their support ticket form the "Which application?" drop-down is required, yet none of the support categories have anything to choose from there; the only option is "Please choose...".
I have tried to find any reference to this field and can't; I can only see it used in Heroku's Quick Start guide, but without an explanation. I have a very busy object I'm working on, read/write, and want to understand any limitations I need to account for.
Salesforce orgs have a rolling 24-hour limit on daily API calls. Generally the limit is very generous in test orgs (sandboxes), 5M calls, because you can make stupid mistakes there. In production it's lower. A bit counterintuitive, but it protects their resources and forces you to write optimised code/integrations...
You can see your limit in Setup -> Company Information. There's a formula in the documentation; roughly speaking, you gain more of that limit with every user license you purchase (more for "real" internal users, less for community users), same as with data storage limits.
Also, every API call is supposed to return the current usage (in a special tag for SOAP API, in a header for REST API), so I'm not sure why you'd have to hardcode anything...
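For example, with the REST API the usage comes back in the Sforce-Limit-Info response header on every call, so you can read it off a request you already make. A sketch (instance URL, API version, and token are placeholders):

```javascript
// Sketch: read current API usage from the Sforce-Limit-Info header
// that Salesforce returns on REST calls. URL and token are placeholders.
const INSTANCE = "https://yourInstance.my.salesforce.com";
const TOKEN = "<ACCESS_TOKEN>";

async function checkApiUsage() {
  const res = await fetch(`${INSTANCE}/services/data/v56.0/limits`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  // Header looks like: "api-usage=1234/5000000"
  const limitInfo = res.headers.get("Sforce-Limit-Info");
  const [used, max] = limitInfo.split("=")[1].split("/").map(Number);
  if (used / max > 0.3) {
    console.warn(`API usage at ${((used / max) * 100).toFixed(1)}%`);
  }
  return { used, max };
}

checkApiUsage().then(console.log).catch(console.error);
```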
If you write your operations right, the limit can be very generous. I have no idea how Heroku Connect works under the hood; ideally you'd spot some mention of "Bulk API 2.0" in its documentation, or try to find out whether it works synchronously or asynchronously.
A normal, old-school synchronous update via the SOAP API lets you process 200 records at a time, spending 1 API call. The REST Bulk API accepts CSV/JSON/XML of up to 10K records and processes them asynchronously; you poll for an "is it done yet" result. So starting the job, uploading the file, committing the job, and then checking only, say, once a minute can easily be 4 API calls, and you can process millions of records before hitting the limit.
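To make that call pattern concrete, here is a rough sketch of the Bulk API 2.0 flow (endpoints as documented by Salesforce; the instance URL, API version, object, and token are placeholders):

```javascript
// Sketch: Bulk API 2.0 ingest — create job, upload CSV, close, poll.
// Roughly 3 API calls plus one poll per minute, regardless of row count.
const BASE = "https://yourInstance.my.salesforce.com/services/data/v56.0";
const HEADERS = {
  Authorization: "Bearer <ACCESS_TOKEN>",
  "Content-Type": "application/json",
};

async function bulkUpdate(csvData) {
  // 1) Create the ingest job.
  let res = await fetch(`${BASE}/jobs/ingest`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ object: "Account", operation: "update", contentType: "CSV" }),
  });
  const job = await res.json();

  // 2) Upload the CSV payload.
  await fetch(`${BASE}/jobs/ingest/${job.id}/batches`, {
    method: "PUT",
    headers: { ...HEADERS, "Content-Type": "text/csv" },
    body: csvData,
  });

  // 3) Mark the upload complete so Salesforce starts processing.
  await fetch(`${BASE}/jobs/ingest/${job.id}`, {
    method: "PATCH",
    headers: HEADERS,
    body: JSON.stringify({ state: "UploadComplete" }),
  });

  // 4) Poll once a minute until the job finishes.
  let state;
  do {
    await new Promise((r) => setTimeout(r, 60_000));
    res = await fetch(`${BASE}/jobs/ingest/${job.id}`, { headers: HEADERS });
    ({ state } = await res.json());
  } while (state !== "JobComplete" && state !== "Failed");
  return state;
}
```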
When all else fails (you've exhausted your options, can't optimise it any more, can't purchase more user licenses), I believe they sell "packets" of additional API call limit; contact your account representative. But there are lots of things you can try before that, not the least of them being setting up a warning for when you hit, say, a 30% threshold.
I send transactions programmatically and I need to know exactly how much the fee is going to be. I managed to figure out how to calculate fees for an ordinary transaction ((transfer cost + receipt creation cost) * 2), but now I'm struggling with a case where I need all my funds out of the account without deleting it. As I understand it, in this case some storage rent must be left on the account. However, I can't figure out how to calculate that rent.

There is a value returned from the 'EXPERIMENTAL_protocol_config' method that seems to be connected to rent, 'storage_amount_per_byte', which implies that each byte costs 10000000000000000000 yocto. I can also get 'storage_usage' from the 'query' method with request type 'view_account', which supposedly indicates how many bytes my account uses (182 in my case). But whenever I try to send a transaction, I get a 'NotEnoughBalance' error stating that the transaction cost is higher than the balance, but only by 669547687500000000 yocto. Whatever I do, I can't understand where this number comes from. No combination of fees from the aforementioned 'EXPERIMENTAL_protocol_config' method yields this number.
There seems to be little to no decent documentation on transaction fee calculation, except for some 'fixed' values for the most-used actions. If you have any info on fee/storage rent calculation, I'll be thankful for it.
Through random chance, I managed to find out what the number '6695476875' is called: 'Reserved for transactions' (in gas, not tokens), as shown in the official wallet (wallet.near.org). Multiplied by the minimum gas price of 100,000,000 yocto per unit of gas, it works out to exactly the 669547687500000000 yocto from the error. God knows why it is reserved; neither docs.near.org, nomicon.io nor wiki.near.org has any info regarding this 'reservation', and the number is never mentioned in any RPC API method. It is also never mentioned in the 'near-api-js' lib, so I really have no idea if the devs are even aware of it.
Anyway, since the title of this question is 'How to calculate storage rent', the answer is something like this:
1) Get the account info from the 'query' method of the RPC API (here's the doc) and take the "storage_usage" value (this is the number of bytes your account takes up on the blockchain).
2) Get the protocol info from the 'EXPERIMENTAL_protocol_config' method of the RPC API (here's the doc) and take the "storage_amount_per_byte" value.
3) Multiply the number of bytes by storage_amount_per_byte and add the magic 669547687500000000 number to it.
The resulting number is the least amount of tokens that you must have in your account at any time.
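In code, the whole calculation looks roughly like this (plain JSON-RPC calls against a mainnet node; the account id is a placeholder, and the constant is the undocumented reservation discussed above):

```javascript
// Sketch: minimum balance an account must keep = bytes used
// * storage_amount_per_byte + the undocumented "reserved" amount.
const RPC = "https://rpc.mainnet.near.org";
const RESERVED = 669547687500000000n; // 6695476875 gas * 100,000,000 yocto/gas

async function rpc(method, params) {
  const res = await fetch(RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: "dontcare", method, params }),
  });
  return (await res.json()).result;
}

async function minBalance(accountId) {
  const account = await rpc("query", {
    request_type: "view_account",
    finality: "final",
    account_id: accountId,
  });
  const config = await rpc("EXPERIMENTAL_protocol_config", { finality: "final" });
  const perByte = BigInt(config.runtime_config.storage_amount_per_byte);
  return BigInt(account.storage_usage) * perByte + RESERVED;
}

minBalance("example.near").then((v) => console.log(v.toString()));
```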
I don't know why it is common practice in the blockchain industry to make developers' lives harder, but this is a good example of such a practice.
I want to capture all the balance-changing operations for any given NEAR address.
I have got info on all the action types and the archival APIs to pull out transactions. Can anyone help with the APIs to get slashing and reward distribution info? Rewards are distributed to the validator, and some part goes to the treasury.
Please help me with the blocks and APIs I can use to validate the theoretical concept, so that I can capture all the balance-changing operations.
Thanks
We already collect this information here; there is read-only access to the DB. You need the account_changes table.
Rewards are marked with account_changes.update_reason = 'VALIDATOR_ACCOUNTS_UPDATE'.
Slashing is turned off for now, so you will not see anything about it.
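For example, with the read-only credentials you can pull reward-related balance changes for an account like this (a sketch using node-postgres; the column names are written from memory, so verify them against the live schema with \d account_changes):

```javascript
// Sketch: query the Indexer for Explorer DB for validator reward updates.
// Column names are from memory — verify with \d account_changes in psql.
const { Client } = require("pg");

const client = new Client({
  connectionString:
    "postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer",
});

async function rewardChanges(accountId) {
  await client.connect();
  const { rows } = await client.query(
    `SELECT changed_in_block_timestamp,
            affected_account_nonstaked_balance,
            affected_account_staked_balance
       FROM account_changes
      WHERE affected_account_id = $1
        AND update_reason = 'VALIDATOR_ACCOUNTS_UPDATE'
      ORDER BY changed_in_block_timestamp
      LIMIT 100`,
    [accountId]
  );
  await client.end();
  return rows;
}

rewardChanges("example.pool.near").then(console.log).catch(console.error);
```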
One of our cafes has a coffee station where the barista needs to see orders (payments) as they come in from the registers. We are not able to use the Webhooks feature because it does not allow us to filter based on location and register, and our volume is too high. So I am developing an iOS app which will periodically call the Payments API for that specific location to get the latest transactions. It will use the begin_time parameter to get txns since the last query. The app will be deployed on only one device, and we would like to make the call at intervals of every 5-10 seconds. It will probably pull down 1-3 txns per call. Is there a minimum interval that is recommended or enforced?
Thanks,
Mike
The only minimum interval you would have to worry about would be running into rate limits, but at that rate you won't have any problems. You can read more about error codes (including rate limiting) in Square's official documentation.
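As a rough sketch of that polling loop (using the v2 ListPayments endpoint; the token, location id, Square-Version date, and 10-second interval are placeholders/assumptions):

```javascript
// Sketch: poll Square's ListPayments endpoint for new payments at one
// location, advancing begin_time past what we've already seen.
const TOKEN = "<SQUARE_ACCESS_TOKEN>";
const LOCATION_ID = "<LOCATION_ID>";
let beginTime = new Date().toISOString();

async function poll() {
  const url =
    `https://connect.squareup.com/v2/payments` +
    `?location_id=${LOCATION_ID}&begin_time=${encodeURIComponent(beginTime)}`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Square-Version": "2023-10-18",
    },
  });
  if (res.status === 429) return; // rate limited — skip this cycle
  const { payments = [] } = await res.json();
  for (const p of payments) {
    console.log(p.id, p.amount_money, p.created_at);
    // Track the newest created_at so the next poll starts from it.
    // (Equal timestamps can repeat, so dedupe by payment id in practice.)
    if (p.created_at > beginTime) beginTime = p.created_at;
  }
}

setInterval(() => poll().catch(console.error), 10_000);
```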
I'm developing a basic messaging system on Parse.com at the moment, and I have noticed in the Events Analytics screen that I'm hitting 30,000+ requests per day. This is a shock considering I'm the only person using the system at the moment. Obviously, with a few users I would blow through my API request limit straight away.
I'm pretty experienced with Parse.com these days, so I'm lean with queries and careful not to put finds, saves, retrieves, etc. in for loops. I also understand that saveAll() on an array of ParseObjects doesn't always limit the request count to 1 (depending on relationships inside those objects).
So how does one track down where the excessive calls are coming from?
I can see the Analytics > Performance > Served Requests data, but how do I drill down to see whether Cloud Code or iOS is the culprit?
My current solution is to effectively unit test each block of Parse code and look at the results in the above screen.
For the benefit of others who may happen upon this thread with the same questions, I found some techniques to hunt down where excessive requests are coming from.
1) Parse's documentation on the APIs themselves is really good, but there isn't a lot of information or guidance for the admin interfaces. Under Analytics -> Explorer -> Make a table there is a capability to download all the requests for a specific day (to import into a spreadsheet). The data isn't very detailed though, and the dates are epoch timestamps, so it's hard to follow. At least you can see [Request Type, Class, Installation ID], e.g. ["find", "MyParseClass", "Cloud Code"].
2) My other technique was to add custom Analytics events to the code. So in Cloud Code, for example, I added the following line to each beforeSave and afterSave event:
Parse.Analytics.track('MyClass_beforeSave', null);
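In context, that looks something like this in (Parse.com-era) Cloud Code, with 'MyClass' as a placeholder class name:

```javascript
// Sketch: tag each Cloud Code hook with a custom Analytics event so the
// Analytics screen shows which hooks fire most. 'MyClass' is a placeholder.
Parse.Cloud.beforeSave("MyClass", function (request, response) {
  Parse.Analytics.track("MyClass_beforeSave", null);
  response.success();
});

Parse.Cloud.afterSave("MyClass", function (request) {
  Parse.Analytics.track("MyClass_afterSave", null);
});
```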
3) Obviously, Parse logs these calls in the Logs window, but given you can only see the most recent transactions and can't clear them, I found it mostly unhelpful in tracking down the excessive calls.