I am trying to find out how rewards are sent to a validator's account/delegation contract. Where is that information stored, and how can it be retrieved from the RPC nodes?
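For reference, the closest thing I've found so far is the getInflationReward RPC method; something like this sketch (untested, using @solana/web3.js; the address and epoch are placeholders) is the direction I had in mind:

    import { Connection, PublicKey } from "@solana/web3.js";

    const connection = new Connection("https://api.mainnet-beta.solana.com");

    // Placeholder: use the validator's vote account (or a stake account) here.
    const addresses = [new PublicKey("Vote111111111111111111111111111111111111111")];

    // Per address: the epoch, the reward amount in lamports, the effective
    // slot, and the post-reward balance; null if no reward was paid that epoch.
    const rewards = await connection.getInflationReward(addresses, 400);
    for (const reward of rewards) {
      console.log(reward?.epoch, reward?.amount, reward?.postBalance);
    }

Is this the right mechanism, or is the reward distribution recorded somewhere else?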
I'm trying to retrieve all the transfers and sales data on Solana.
I used ethereum-etl to get Ethereum data, it gets logs for each transaction and extracts the data for ERC-721 Transfer and ERC-1155 TransferSingle events.
How would I go about getting the same data for NFT transfers and sales on Solana?
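In case it helps clarify what I'm after, here's the rough shape of what I've tried so far (a sketch assuming @solana/web3.js; address handling and pagination are simplified):

    import { Connection, PublicKey } from "@solana/web3.js";

    const connection = new Connection("https://api.mainnet-beta.solana.com");

    // Note: plain SPL transfers reference token accounts rather than the mint
    // itself, so in practice you'd walk token accounts or marketplace program
    // addresses instead of just the mint.
    async function tokenBalanceChanges(address: PublicKey) {
      // One page of signatures for transactions that touched this address.
      const sigs = await connection.getSignaturesForAddress(address, { limit: 100 });
      for (const { signature } of sigs) {
        const tx = await connection.getParsedTransaction(signature, {
          maxSupportedTransactionVersion: 0,
        });
        // Diffing pre/post token balances shows which owner gained or lost tokens.
        const pre = tx?.meta?.preTokenBalances ?? [];
        const post = tx?.meta?.postTokenBalances ?? [];
        console.log(signature, { pre, post });
      }
    }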
Thank you
I'm new to Solana. Currently, I'm working on an app that lets users track their wallet's historical balance and transactions.
For example, given an account and a time period, the app will calculate the opening and closing balances and how much SOL was sent and received during that range. Since the RPC does not support such features, I fetch all the historical transactions of an account and, instead of using the preBalances and postBalances returned directly by the RPC, I try to reconstruct the historical balance from every transaction. (I use the absolute value of the difference between the pre-balance and post-balance to get the transfer amount in each transaction, so that I can separate the sent value from the received value.) I found that in Solana, rent does not show up in the transaction, which causes errors in the balance calculation.
I'd like to know if there is any way in Solana to track how much rent was paid, given an account address and a timestamp. I tried googling it and didn't find a solution.
Any comments and suggestions will be appreciated.
Unless I'm misunderstanding the question, the rent-exempt balances are included in transactions. For example, here's a transaction creating a USDC account: https://explorer.solana.com/tx/32oAkYzp47zF7DiPRFwMKLcknt6rhu43JW2yAfkEc2KgZpX35BoVeDBUs4kkiLWJ4wqoEFspndvGdUcB215jY931?cluster=testnet
There, you'll see that the new token account 2XBTsdaRTYdmsqLXRjjXonbVHCwvvGfHjBRfTXPcgnsS received 0.00203928 SOL, and the funding account 4SnSuUtJGKvk2GYpBwmEsWG53zTurVM8yXGsoiZQyMJn lost 0.00204428 SOL; the 0.000005 SOL difference is the transaction fee it also paid.
Roughly speaking, if you go through all of a wallet's transactions, you can tell that a payment was for rent-exemption if the destination account had 0 SOL to start and the wallet paid for it. Note that this heuristic isn't perfect, since a lot of balances can move in a single transaction!
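As a sketch of that heuristic (assuming @solana/web3.js; preBalances and postBalances line up index-for-index with the message's account keys):

    import { ParsedTransactionWithMeta } from "@solana/web3.js";

    // Flag accounts that look rent-exemption-funded in this transaction:
    // they started at 0 lamports and ended above 0.
    function rentExemptionCandidates(tx: ParsedTransactionWithMeta) {
      const keys = tx.transaction.message.accountKeys;
      const pre = tx.meta?.preBalances ?? [];
      const post = tx.meta?.postBalances ?? [];
      return keys
        .map((key, i) => ({ address: key.pubkey.toBase58(), pre: pre[i], post: post[i] }))
        .filter((a) => a.pre === 0 && a.post > 0);
    }

You'd still want to cross-check that your wallet was the account whose balance dropped in the same transaction, for the reason above.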
Background information
We sell an API to users, that analyzes and presents corporate financial-portfolio data derived from public records.
We have an "analytical data warehouse" that contains all the raw data used to calculate the financial portfolios. This data warehouse is fed by an ETL pipeline, and so isn't "owned" by our API server per se. (E.g. the API server only has read-only permissions to the analytical data warehouse; the schema migrations for the data in the data warehouse live alongside the ETL pipeline rather than alongside the API server; etc.)
We also have a small document store (actually a Redis instance with persistence configured) that is owned by the API layer. The API layer runs various jobs to write into this store, and then queries data back as needed. You can think of this store as a shared persistent cache of various bits of the API layer's in-memory state. The API layer stores things like API-key blacklists in here.
Problem statement
All our input data is denominated in USD, and our calculations occur in USD. However, we give our customers the query-time option to convert the response just-in-time to another currency. We do this by having the API layer run a background job to scrape exchange-rate data, and then cache it in the document store. Individual API-layer nodes then do (in-memory-cached-with-TTL) fetches from this exchange-rates key in the store, whenever a query result needs to be translated into a specific currency.
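For concreteness, the read path on each API node looks roughly like this (a sketch using ioredis; the key name and data shapes are ours, not anything standard):

    import Redis from "ioredis";

    const redis = new Redis(); // the document store
    const TTL_MS = 60_000;     // in-process cache TTL (illustrative)

    let cached: { rates: Record<string, number>; fetchedAt: number } | null = null;

    // Each API node keeps a short-lived in-memory copy of the exchange-rates
    // key; the background scraper job is what actually writes it to Redis.
    async function getRates(): Promise<Record<string, number>> {
      if (cached && Date.now() - cached.fetchedAt < TTL_MS) return cached.rates;
      const raw = await redis.get("exchange-rates"); // hypothetical key name
      const rates: Record<string, number> = raw ? JSON.parse(raw) : {};
      cached = { rates, fetchedAt: Date.now() };
      return rates;
    }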
At first, we thought that this unit conversion wasn't really "about" our data, just about the API's UX, and so we thought this was entirely an API-layer concern, where it made sense to store the exchange-rates data into our document store.
(Also, we noticed that, by not pre-converting our DB results into a specific currency on the DB side, the calculated results of a query for a particular portfolio became more cache-friendly; the way we're doing things, we can cache and reuse the portfolio query results between queries, even if the queries want the results in different currencies.)
But recently we've been expanding to also allow partner clients to execute complex data-science/business-intelligence (BI) queries directly against our analytical data warehouse. And it turns out that they will also often need to do final exchange-rate conversions in their BI queries, even though no API layer is involved there.
It seems like, to serve the needs of BI querying, the exchange-rate data "should" actually live in the analytical data warehouse alongside the financial data; and the ETL pipeline "should" be responsible for doing the API scraping required to fetch and feed in the exchange-rate data.
But this feels wrong: the exchange-rate data has a different lifecycle and different integrity constraints than our financial data. The exchange rates are dirty, ephemeral point-in-time samples obtained by scraping, whereas the financial data is a reliable historical event stream. The exchange rates are constantly updated and overwritten, while the financial data is append-only. Etc.
What is the best practice for serving the needs of analytical queries that need to access backend "application state" for "query result presentation" needs like this? Or am I wrong in thinking of this exchange-rate data as "application state" in the first place?
What I find interesting about your scenario is when the exchange-rate data is applicable.
In the case of the API, it's all about the real-time value in the other currency, and it makes sense to have the most recent value in your API app's scope (Redis).
However, I assume your analytical data warehouse has tables with purchases that were made at a certain time. In those cases, the current exchange rate is not really relevant to the value of the transaction.
This might mean that you want to store the exchange rate history in your warehouse or expand the "purchases" table to store the values in all the currencies at that moment.
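If you go the history-table route, the lookup is essentially "the latest rate at or before the purchase timestamp"; a minimal sketch of that lookup (names are illustrative, not from your schema):

    interface RateSample { currency: string; rate: number; sampledAt: Date; }

    // Given rate samples sorted ascending by sampledAt, return the rate in
    // effect when the purchase happened (latest sample at or before `at`).
    function rateAt(samples: RateSample[], currency: string, at: Date): number | null {
      let result: number | null = null;
      for (const s of samples) {
        if (s.currency !== currency || s.sampledAt > at) continue;
        result = s.rate; // keeps advancing to the latest qualifying sample
      }
      return result;
    }

Note that this makes the rate history append-only, point-in-time data, which fits the warehouse's integrity model better than the constantly overwritten cache entry does.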
This answer about upgradability suggests that at some point you should delete access keys to the account containing a smart contract: How do you upgrade NEAR smart contracts?.
It makes sense that a smart contract should be "frozen" at some point, and you want to give its users confidence that it will not be changed. But what about contract rewards and other funds belonging to the contract account? How would the original owner get access to that if keys are deleted?
But what about contract rewards and other funds belonging to the contract account? How would the original owner get access to that if keys are deleted?
The contract should be implemented in a way that allows certain operations.
Let's take a lockup contract as an example. This contract has a single owner, the funds are locked for a certain amount of time, and the contract only exposes certain methods, each guarded by specific logic:
As the owner, I can delegate (stake) my tokens to staking pools, while I still cannot arbitrarily transfer the tokens.
As the owner, I can withdraw the rewards from the staking pool through the lockup contract and transfer them to an arbitrary account.
Once the lockup time is over, as the owner, I can call the add_full_access_key function and thus gain full access to the account, and even delete it after that (transferring all the tokens to some other account).
All of that is explicitly implemented at the contract level and is easy to review; given there is no other AccessKey on the lockup contract's account, we can be sure that there is no other way to interfere with the contract logic.
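For illustration, once the lockup period ends, the owner's call looks roughly like this (a sketch with near-api-js; the account IDs and key are placeholders, and gas/key-store setup is elided):

    import { connect, keyStores } from "near-api-js";

    const near = await connect({
      networkId: "mainnet",
      nodeUrl: "https://rpc.mainnet.near.org",
      keyStore: new keyStores.InMemoryKeyStore(), // load the owner's key here
    });

    const owner = await near.account("owner.near"); // placeholder account ID

    // add_full_access_key is the owner-only method the lockup contract
    // exposes; after the lockup period it attaches a full-access key,
    // giving the owner direct control of the account.
    await owner.functionCall({
      contractId: "lockup.owner.near",             // placeholder lockup account
      methodName: "add_full_access_key",
      args: { new_public_key: "ed25519:<owner public key>" },
    });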
I'm currently trying to build an application that handles personal finances. I'm struggling with the Lagom way of doing things because I can't find any example of a "real" application built with Lagom. I have to guess at best practices, and I'm constantly afraid of falling into pitfalls.
My case is the following: I have Users, Accounts, and Transactions. Accounts belong to users but can be "shared" between them (with some sort of authorization system: one user is admin, and others can read or edit the account). Transactions have an optional "debit" account, an optional "credit" account, and an amount which is always positive.
The scenarios I was considering are the following:
I consider that transactions belong to accounts and are part of the account entity as a list of entries. In that scenario, a transfer transaction must have a "sister" entry in the other account. This seems easy to implement, but I'm concerned about:
the potential size of the entity (and of its snapshots). What happens if I have accounts that contain thousands or tens of thousands of transactions?
the duplication of the transaction across several accounts.
I consider that transactions have their own service. In that case, I can use Kafka to publish events when transactions are recorded, so the Account entity can "update" its balance. Does it then make sense to have a "balance" property in the entity, or should a read-side event listener for transaction events update the read database?
I can have two persistent entities in the same service, but in that case I'm struggling with the read side. Let's say I have a transaction: I want to insert into the "transactions" table and update the "accounts" table. Should I have multiple read-side processors that listen to different events but write to the same DB?
What do you think?
I think you shouldn't have a separate 'Transactions' entity, because it is tightly coupled to the account entity; in fact, the transactions of an account are no more than that account's event log. So I recommend persisting the balance change with a unique transaction ID (plus the ID of the other account when it is a transfer transaction), and having the read-side processor listen to the account-change events to store them in the read model.
Doing this, a transfer is just a message between the two accounts that results in a modification of each balance, which is later persisted as part of the event log of each account. This seems more natural, and you don't have to manage a separate aggregate root that is, in addition, tightly coupled to the account entities.
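To sketch the shape of that idea (this is not Lagom API code, just the event-sourcing concept in TypeScript; the event names are illustrative):

    // Events persisted per account; a transfer shows up as one event in
    // each account's log, correlated by a shared transactionId.
    type AccountEvent =
      | { kind: "Deposited"; transactionId: string; amount: number }
      | { kind: "Withdrawn"; transactionId: string; amount: number }
      | { kind: "TransferSent"; transactionId: string; amount: number; to: string }
      | { kind: "TransferReceived"; transactionId: string; amount: number; from: string };

    // The balance is a pure fold over the account's event log, so the
    // "transactions list" read model is just a projection of these events.
    function balance(events: AccountEvent[]): number {
      return events.reduce((acc, e) => {
        switch (e.kind) {
          case "Deposited":
          case "TransferReceived":
            return acc + e.amount;
          case "Withdrawn":
          case "TransferSent":
            return acc - e.amount;
        }
      }, 0);
    }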