Is there any way to track how much rent was paid, given an account address and timestamp, in Solana? - solana

I'm new to Solana. Currently, I'm working on an app that lets users track their wallets' historical balances and transactions.
For example, given an account and a time range, the app will calculate the opening and closing balances and how much SOL was sent and received during that period. Since the RPC does not support such features, I fetch all the historical transactions of an account, and instead of using the preBalance and postBalance values returned by the RPC directly, I try to reconstruct the historical balance from each transaction. (I use the absolute value of the difference between preBalance and postBalance to get the transfer amount in every transaction, so that I can derive the sent and received values.) I found that in Solana, rent does not show up in transactions, which causes errors in the balance calculation.
I'd like to know if there is any way to track how much rent was paid, given an account address and timestamp, in Solana. I tried googling it and didn't find a solution.
Any comments and suggestions will be appreciated.

Unless I'm misunderstanding the question, the rent-exempt balances are included in transactions. For example, here's a transaction creating a USDC account: https://explorer.solana.com/tx/32oAkYzp47zF7DiPRFwMKLcknt6rhu43JW2yAfkEc2KgZpX35BoVeDBUs4kkiLWJ4wqoEFspndvGdUcB215jY931?cluster=testnet
There, you'll see that the new token account 2XBTsdaRTYdmsqLXRjjXonbVHCwvvGfHjBRfTXPcgnsS received 0.00203928 SOL (the rent-exempt minimum for a token account), and the funding account 4SnSuUtJGKvk2GYpBwmEsWG53zTurVM8yXGsoiZQyMJn lost 0.00204428 SOL, slightly more because it also paid the 0.000005 SOL transaction fee.
Roughly speaking, if you go through all of a wallet's transactions, you can treat a payment as rent-exempt funding when the destination account started at 0 SOL and the wallet paid for it. Note that this isn't perfect, since a lot of balances can move in a single transaction!
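If it helps, here's a rough sketch of that heuristic with @solana/web3.js. The RPC endpoint and signature limit are placeholders, and the "started at 0 lamports" check is only the approximation described above, not an exact rent tracker:

```typescript
import { Connection, PublicKey, LAMPORTS_PER_SOL } from "@solana/web3.js";

async function findLikelyRentFunding(wallet: PublicKey): Promise<void> {
  const conn = new Connection("https://api.mainnet-beta.solana.com");
  const sigs = await conn.getSignaturesForAddress(wallet, { limit: 100 });
  for (const { signature, blockTime } of sigs) {
    const tx = await conn.getParsedTransaction(signature, {
      maxSupportedTransactionVersion: 0,
    });
    if (!tx?.meta) continue;
    const { preBalances, postBalances } = tx.meta;
    const keys = tx.transaction.message.accountKeys;
    preBalances.forEach((pre, i) => {
      // Heuristic from above: an account funded from exactly 0 lamports in a
      // transaction this wallet signed is likely receiving its rent-exempt minimum.
      if (pre === 0 && postBalances[i] > 0) {
        console.log(
          `${signature} @ ${blockTime}: ${keys[i].pubkey.toBase58()} funded with ` +
            `${postBalances[i] / LAMPORTS_PER_SOL} SOL (possible rent exemption)`,
        );
      }
    });
  }
}
```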

Related

Event Sourcing and concurrent, contradictory event creation

I am having a hard time figuring this one out. Maybe you can help me.
Problem statement:
Imagine there is a system that records the financial transactions of an account (like a wallet service). Transactions are stored in a database, and each Transaction denotes an increase or decrease of the balance by a given amount.
On the application side, when the user wants to make a purchase, all Transactions for their account are pulled from the DB and the current balance is calculated. Based on the result, the customer either has or does not have sufficient funds for the purchase (the balance can never go below zero).
Transactions example:
ID, userId, amount, currency, otherData
Transaction(12345, 54321, 180, USD, ...)
Transaction(12346, 54321, -50, USD, ...)
Transaction(12347, 54321, 20, USD, ...)
Those three above would mean the user has a balance of 150 USD.
Concurrent access:
Now, imagine there are 2 or more instances of such an application, and that the user has a balance of 100 USD and buys two items worth 100 USD each at the same time. The requests for these purchases go to two different instances, which both read all Transactions from the DB and reduce them into currentBalance. In both replicas, at the same moment, the balance equals 100 USD. Both services allow the purchase and add a new Transaction(12348, 54321, -100, USD, ...) which decreases the balance by 100.
With two such contradictory Transactions inserted into the DB, the balance is incorrect: -100 USD.
Question:
How should I deal with such a situation?
I know that usually optimistic or pessimistic concurrency control is used. So here are my doubts about both:
Optimistic concurrency
It's about keeping a version of the resource and comparing it before the actual update, like a CAS operation. Since Transactions are a form of events (immutable entities), there is no resource whose version I could grab. I do not update anything; I only insert new changes to the balance, which have to be consistent with all other existing Transactions.
Pessimistic concurrency
It's about locking the table/page/row for modification, for when conflicts happen more often in the system. Blocking a table/page for each insert is off the table, I think (scalability and high-load concerns). And locking rows: well, which rows do I lock? Again, I do not modify anything in the DB state.
Open ideas
My feeling is that this kind of problem has to be solved at the application-code level. Some still-vague ideas that come to mind:
A distributed cache which holds a lock for a given user, so that only one Transaction can be processed at a time (purchase, deposit, withdrawal, refund, anything).
Each Transaction having a field such as previousTransactionId (a pointer to the last committed Transaction), with a unique index on that field: exactly one Transaction can point to exactly one Transaction in the past, the first Transaction ever having a null value. This way I'd get a constraint-violation error when trying to insert a duplicate.
Asynchronous processing with a queueing system and a topic per user: exactly one instance processing Transactions for a given user, one by one. Nice idea, but unfortunately I need to be synchronous with the purchase in order to reply to a 3rd-party system.
One thing to note is that typically there's a per-entity offset (a monotonically increasing number; e.g. Account|12345|6789 could be the 6789th event for account #12345) associated with each event. Thus, assuming the DB in which you're storing events supports it, you can get optimistic concurrency control by remembering the highest offset seen when reconstructing the state of that entity and conditioning the insertion of new events on there being no events for account #12345 with offsets greater than 6789.
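To make that concrete, here's a minimal sketch (the EventStore interface and isUniqueViolation helper are illustrative), assuming the store enforces a unique constraint on (accountId, offset); the retry loop is the usual "loser recalculates and tries again" behavior:

```typescript
interface EventRecord { accountId: number; offset: number; amount: number; }

interface EventStore {
  load(accountId: number): Promise<EventRecord[]>;   // all events, ordered by offset
  append(e: EventRecord): Promise<void>;             // unique index on (accountId, offset)
}

declare function isUniqueViolation(e: unknown): boolean; // driver-specific check

async function purchase(store: EventStore, accountId: number, price: number) {
  for (let attempt = 0; attempt < 3; attempt++) {
    const events = await store.load(accountId);
    const balance = events.reduce((sum, e) => sum + e.amount, 0);
    const lastOffset = events.length > 0 ? events[events.length - 1].offset : 0;
    if (balance < price) throw new Error("insufficient funds");
    try {
      // CAS-like step: if a concurrent writer already claimed offset
      // lastOffset + 1, the unique index rejects this insert.
      await store.append({ accountId, offset: lastOffset + 1, amount: -price });
      return;
    } catch (e) {
      if (!isUniqueViolation(e)) throw e;
      // Lost the race: fall through, reload, recompute, and retry (or give up).
    }
  }
  throw new Error("too much contention; aborting");
}
```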
There are datastores which support the idea of "fencing": only one instance is allowed to publish events to a particular stream, which is another way to get optimistic concurrency control.
There are approaches which move pessimistic concurrency control into the application/framework/toolkit code. Akka/Akka.Net (disclaimer: I am employed by Lightbend, which maintains and sells commercial support for one of those two projects) has cluster sharding, which allows multiple instances of an application to coordinate ownership of entities between themselves. For example, instance A might have account 12345 and instance B might have account 23456. If instance B receives a request for account 12345, it (massively simplifying) effectively forwards the request to instance A, which enforces that only one request for account 12345 is being processed at a time. This approach can in some ways be thought of as a combination of your ideas 1 (of note: this distributed cache is not only providing concurrency control, but actually caching the application state (e.g. the account balance and any other data useful for deciding whether a transaction can be accepted) too) and 3 (even though it presents a synchronous API to the outside world).
Additionally, it is often possible to design the events such that they form a conflict-free replicated data type (CRDT), which effectively allows forks in the event log as long as there's a guarantee that they can be reconciled. One could squint and perhaps see bank accounts allowing overdrafts (where the reconciliation is allowing a negative balance and charging a substantial fee) as an example of a CRDT.
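For illustration, a PN-counter is the classic CRDT shape for a balance-like value: each replica only increments its own slots, merge takes per-replica maxima, and the merged value may well be negative, which is exactly the "overdraft plus fee" style of reconciliation mentioned above. A hedged sketch:

```typescript
// PN-counter: one grow-only counter per replica for increments, one for decrements.
type PNCounter = { incr: Record<string, number>; decr: Record<string, number> };

function merge(a: PNCounter, b: PNCounter): PNCounter {
  const mergeSide = (x: Record<string, number>, y: Record<string, number>) => {
    const out: Record<string, number> = { ...x };
    for (const [replica, n] of Object.entries(y)) {
      // Per-replica max makes merge commutative, associative, and idempotent.
      out[replica] = Math.max(out[replica] ?? 0, n);
    }
    return out;
  };
  return { incr: mergeSide(a.incr, b.incr), decr: mergeSide(a.decr, b.decr) };
}

function value(c: PNCounter): number {
  const sum = (m: Record<string, number>) => Object.values(m).reduce((s, n) => s + n, 0);
  return sum(c.incr) - sum(c.decr); // may be negative: the "overdraft" case
}
```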
How should I deal with such a situation?
The general term for the problem you are describing is set validation. If there is some property that must hold for the set taken as a whole, then you need to have some form of lock to prevent conflicting writes.
Optimistic/pessimistic are just two different locking implementations.
In the event that you have concurrent writes, the usual general mechanism is that the first writer wins. The losers of the race follow the "concurrent modification" branch, and either retry (recalculating to ensure that the desired properties still hold) or abort.
In a case like you describe, if your insertion code is responsible for confirming that the user balance is not negative, then that code needs to be able to lock the entire transaction history for the user.
Now: notice the "if" in the previous paragraph, because it's really important. One of the things you need to understand in your domain is whether or not your system is the authority for transactions.
If your system is the authority, then maintaining the invariant is reasonable, because your system can say "no, that one isn't a permitted transaction", and everyone else has to go along with it.
If your system is NOT the authority (you are getting copies of transactions from "somewhere else"), then your system doesn't have veto power, and shouldn't be trying to skip transactions just because the balance doesn't work out.
So we might need a concept like "overdrawn" in our system, rather than trying to state absolutely that the balance will always satisfy some invariant.
Fundamentally, collaborative/competitive domains with lots of authorities working in parallel require a different understanding of properties and constraints than the simpler models we can use with a single authority.
In terms of implementation, the usual approach is for the set to have a data representation that can be locked as a whole. One common approach is to keep an append-only list of changes to the set (sometimes referred to as the set's history, or "event stream").
In relational databases, one successful approach I've seen is to implement a stored procedure that takes the necessary arguments and then acquires the appropriate locks (i.e., applying "tell, don't ask" to the relational data store); that insulates the application code from the details of the data store.
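As a sketch of that "tell, don't ask" shape (the procedure name record_purchase and its contract are purely illustrative, shown here with the node-postgres client):

```typescript
import { Client } from "pg";

// The procedure itself (not shown) locks the user's transaction history,
// checks the non-negative-balance invariant, and inserts the new row,
// raising an error on violation; the application never sees the locking.
async function recordPurchase(db: Client, userId: number, amount: number) {
  await db.query("CALL record_purchase($1, $2)", [userId, amount]);
}
```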

Migrating critical info to the blockchain

We have a website with a traditional database (MongoDB), which powers our user voting system.
Each user can create many votes, and we want to migrate those votes to be stored on the blockchain in a secure way, so they can't be changed or deleted once created. There are 300,000 votes in our DB at the moment.
Can we use NEAR to store such an amount of data, and how can it be implemented?
And what would be the price for storage?
Yes, you can use NEAR to support your voting application.
We actually have an example of that here:
https://github.com/near-examples/voting-app
On top of that, we have some useful resources here that discuss storage staking:
https://docs.near.org/docs/concepts/storage-staking
That last link will give you an overview of what storage staking is on NEAR. For an accurate, up-to-date cost of storage per byte in yoctoNEAR (1 NEAR = 10^24 yoctoNEAR), you can query our RPC using this guide:
https://docs.near.org/docs/api/rpc/protocol#protocol-config
There are also alternative storage solutions you can use in conjunction with NEAR:
https://docs.near.org/docs/concepts/storage-solutions
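As a hedged sketch of that RPC query (the method and field names follow the linked guide as I understand it; verify them against the docs before relying on this):

```typescript
// Query the protocol config and read the per-byte storage cost.
async function storageCostPerByte(rpcUrl = "https://rpc.mainnet.near.org"): Promise<bigint> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "EXPERIMENTAL_protocol_config",
      params: { finality: "final" },
    }),
  });
  const json = await res.json();
  // storage_amount_per_byte comes back as a decimal string of yoctoNEAR.
  return BigInt(json.result.runtime_config.storage_amount_per_byte);
}
```

For a rough sense of scale: at the commonly cited staking rate of about 1 NEAR per 100 KB, 300,000 votes at roughly 100 bytes each (~30 MB) would need on the order of 300 NEAR staked, which is one reason people often store only a hash or minimal record per item on chain.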

Scalable and efficient location updates in Laravel

For a delivery-service application based on Laravel, I want to keep the customer updated on the current location of the driver. For this purpose, I have lat and long columns in my order table. The driver has the website open and posts his HTML5 geolocation to the server every, let's say, 30 seconds. The row gets updated with the new position, and here comes the question.
Will it be more efficient to
- have an AJAX request from the customer client every 30 seconds that looks up the current order by customer ID and retrieves the current location to update the map,
or to
- create a private channel with Pusher, subscribe to it from the customer client, and fire locationUpdated events once the driver submits his location?
My thought is to use Pusher, so that I don't have to run two queries (update and retrieve) for each location update, periodically and for possibly hundreds of users at the same time.
The disadvantage I expect to cause trouble is the number of channels the server has to maintain to make sure every client has access to updated information.
Unfortunately, I have no clue which would put more load on the server. Any argument for why either of the two solutions is better than the other, or further improvements, is welcome.
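For what it's worth, the push-based option on the customer side can be as small as this pusher-js sketch (channel and event names like private-order-12345 and locationUpdated are illustrative, and the auth endpoint that private channels require is omitted):

```typescript
import Pusher from "pusher-js";

declare function updateDriverMarker(lat: number, lng: number): void; // your map code

const pusher = new Pusher("YOUR_APP_KEY", { cluster: "eu" });

// One private channel per order; the server authorizes the subscription so
// only that order's customer can listen, and no client ever has to poll.
const channel = pusher.subscribe("private-order-12345");
channel.bind("locationUpdated", (data: { lat: number; lng: number }) => {
  updateDriverMarker(data.lat, data.lng); // move the marker in place
});
```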

How does Oracle handle concurrency in a clustered environment?

I have to implement a database solution wherein contention is handled in a clustered environment. There is a scenario in which multiple users try to access a bank account at the same time and deposit money into it if the balance is less than $100. How can I make sure that no extra money is deposited? Basically, this query is supposed to fire:
update acct set balance=balance+25 where acct_no=x;
Since the database is clustered, the account ends up being credited multiple times.
I am looking for a purely Oracle-based solution.
Clustering doesn't matter to the mechanism that prevents the scenario you're fearing/seeing: locking.
Consider the scenario of user A and then user B trying to do an update, based on a check (less than 100 dollars in the account):
If both the check and the update are done in the same transaction, locking will prevent user B from performing the check UNTIL user A has done both the check and the actual update. In other words, user B will find the check failing, and will not perform the requested action.
When a user says "at the same time", you should know that the computer has no such concept, as all transactions are sequenced, even when their timestamps agree to the millisecond. Look at the change number kept in the redo logs (the SCN): there's only one counter. Transactions X and Y happen before or after each other, never at the same time.
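A hedged sketch of the check-and-update-in-one-transaction idea using node-oracledb (table and column names come from the question; credentials are placeholders). The guarded UPDATE does the check and the change in one statement, so the row lock it takes serializes concurrent sessions, including across RAC nodes:

```typescript
import oracledb from "oracledb";

async function depositIfUnder100(acctNo: number): Promise<boolean> {
  const conn = await oracledb.getConnection({
    user: "app", password: "secret", connectString: "dbhost/orcl",
  });
  try {
    // The WHERE clause is the balance check; a concurrent session blocks on
    // the row lock until this transaction commits, then re-evaluates it.
    const result = await conn.execute(
      `UPDATE acct
          SET balance = balance + 25
        WHERE acct_no = :id AND balance < 100`,
      { id: acctNo },
    );
    await conn.commit();
    // rowsAffected tells us whether the deposit actually happened.
    return (result.rowsAffected ?? 0) === 1;
  } finally {
    await conn.close();
  }
}
```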
That doesn't sound right... When Oracle locks a row for update, the lock applies across all nodes. What version of Oracle are you using, and can you provide a step-by-step example of what you're doing?
Oracle 11 doc here:
http://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT020

Scaling message queues with lots of API calls

I have an application where some of my users' actions must be retrieved via a 3rd-party API.
For example, let's say I have a user that can receive tons of phone calls. These call records should be updated often because my users want to see their call history, so I should do this "almost in real time". The way I do this now is to retrieve, every 10 minutes, the list of all my logged-in users and, for each user, enqueue a task that fetches the call records from the timestamp of the latest saved record up to the current timestamp and saves them to my database.
This doesn't seem to scale well: the more users I have, the more connected users I'll have, and the more tasks I'll enqueue.
Is there any other approach to achieve this?
This seems straightforward with a background queue of jobs. It is unlikely that all users use the system at the same rate, so queue jobs based on their usage, with a fallback to daily.
You will likely at some point need more workers taking jobs from the queue, and then multiple queues, so that if you had a thousand users, the ones with a later queue slot are not always the ones waiting.
It also depends on how fast you need this updated and on the API's rate limits.
There will be some sort of limit, so I suggest you start by committing to updates with a 4-hour or 1-hour delay, to give yourself headroom, and then work on improving this to a level you can sustain.
Make sure your users are reading your own stored, cached copy of the API data rather than live API calls, in case the API goes away.
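As a sketch of that usage-based scheduling (all names here, such as fetchCallsSince and saveRecords, are illustrative stand-ins for the 3rd-party API client and your persistence layer):

```typescript
interface User { id: string; lastActivityMs: number; lastSyncedMs: number; }

declare function fetchCallsSince(userId: string, sinceMs: number): Promise<unknown[]>;
declare function saveRecords(userId: string, records: unknown[]): Promise<void>;

// Poll active users often, idle users hourly, dormant users daily.
function nextDelayMs(user: User): number {
  const idle = Date.now() - user.lastActivityMs;
  if (idle < 15 * 60_000) return 10 * 60_000;   // active: every 10 minutes
  if (idle < 86_400_000) return 3_600_000;      // idle today: hourly
  return 86_400_000;                            // dormant: daily fallback
}

async function syncLoop(user: User): Promise<void> {
  const records = await fetchCallsSince(user.id, user.lastSyncedMs);
  await saveRecords(user.id, records);          // clients read this, never the live API
  user.lastSyncedMs = Date.now();
  setTimeout(() => void syncLoop(user), nextDelayMs(user));
}
```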
