Square Connect V2 API: Will ListTransactions for a time period guarantee all refunds for that time period are returned? - square-connect

I am trying to make sure I get all transactions and refunds for a location for a given time period. Is it enough to query the ListTransactions endpoint? Or could there be refunds that are returned by ListRefunds that would not be contained in the transactions response?
Much appreciated.

ListTransactions will have all the transactions.

Related

Is there any way to track how much rent was paid, given an account address and timestamp, in Solana?

I'm new to Solana. Currently, I'm working on an app that lets users track their wallet's historical balance and transactions.
For example, given an account and a time range, the app calculates the opening and closing balances and how much SOL was sent and received during that range. Since the RPC does not support such features, I fetch all of the account's historical transactions, and instead of directly using the preBalances and postBalances returned by the RPC, I try to calculate the historical balance from every transaction. (I use the absolute value of the difference between preBalances and postBalances to get the transfer amount in each transaction, so that I can derive the sent and received values.) I found that in Solana, rent does not show up in the transaction, which causes the balance calculation to be wrong.
I'd like to know if there is any way to track how much rent was paid, given an account address and timestamp, in Solana. I tried googling it and didn't find a solution.
Any comments and suggestions will be appreciated.
Unless I'm misunderstanding the question, the rent-exempt balances are included in transactions. For example, here's a transaction creating a USDC account: https://explorer.solana.com/tx/32oAkYzp47zF7DiPRFwMKLcknt6rhu43JW2yAfkEc2KgZpX35BoVeDBUs4kkiLWJ4wqoEFspndvGdUcB215jY931?cluster=testnet
There, you'll see that the new token account 2XBTsdaRTYdmsqLXRjjXonbVHCwvvGfHjBRfTXPcgnsS received 0.00203928 SOL, and the funding account 4SnSuUtJGKvk2GYpBwmEsWG53zTurVM8yXGsoiZQyMJn lost 0.00204428 SOL, which is higher since it paid for the transaction.
Roughly speaking, if you go through all of a wallet's transactions, you can tell that a payment was for rent exemption if the destination account started with 0 SOL and the wallet paid for it. Note that this isn't perfect, since a lot of balances can move in a single transaction!
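If it helps, here's a minimal sketch of that heuristic in Java, assuming you have already fetched a transaction's account keys and its meta.preBalances/meta.postBalances arrays (in lamports, indexed in the same order as the account keys) from the getTransaction RPC call; the class and method names are hypothetical.

import java.util.ArrayList;
import java.util.List;

public class RentHeuristic {

    // Flags accounts that look like they were just funded for rent exemption:
    // the account held 0 lamports before the transaction and a positive
    // balance after it, while the wallet we're tracking paid out SOL in the
    // same transaction.
    static List<String> likelyRentFundedAccounts(List<String> accountKeys,
                                                 long[] preBalances,
                                                 long[] postBalances,
                                                 String trackedWallet) {
        int walletIndex = accountKeys.indexOf(trackedWallet);
        // Did our wallet's balance go down in this transaction?
        boolean walletPaid = walletIndex >= 0
                && postBalances[walletIndex] < preBalances[walletIndex];

        List<String> funded = new ArrayList<>();
        if (!walletPaid) {
            return funded;
        }
        for (int i = 0; i < accountKeys.size(); i++) {
            // Newly funded account: 0 lamports before, positive after.
            if (preBalances[i] == 0 && postBalances[i] > 0) {
                funded.add(accountKeys.get(i));
            }
        }
        return funded;
    }
}

As the caveat above says, treat the output as a heuristic: a transaction can move many balances at once, so some flagged accounts may not actually be rent-exemption payments made by your wallet.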

Why am I getting back only 100 transactions, even in production, in Plaid?

We got our production access yesterday, so I set the environment to production and plugged in the new production secret key. The access token I get back is of the form access-production-XXXXXXX-XXXXXX-XXXX-XXXXXXX. When I request transactions, though, the total_transactions field reports a big number, like 745 in the example in front of me, while the number of transactions actually returned in the transactions array remains limited to 100.
Why? Why am I not seeing all 745?
/transactions/get takes a count parameter that indicates how many transactions to request. By default, this is 100. To get more than 100 transactions, you need to raise count, and to get more than 500 transactions (the maximum count), you need to make multiple requests, paginating with the offset parameter.
More info:
https://plaid.com/docs/api/products/#transactions-get-request-options-count
https://plaid.com/docs/transactions/pagination/
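As a rough illustration, here's a sketch of that pagination in Java using raw HTTP rather than the official client library; the credentials, dates, and the parseTotalTransactions helper are placeholders, so treat the details as assumptions and verify them against the docs linked above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PlaidPager {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        int count = 500;  // maximum page size
        int offset = 0;

        while (true) {
            // Request body per the /transactions/get docs; credentials are placeholders.
            String body = String.format(
                "{\"client_id\":\"%s\",\"secret\":\"%s\",\"access_token\":\"%s\","
              + "\"start_date\":\"2021-01-01\",\"end_date\":\"2021-12-31\","
              + "\"options\":{\"count\":%d,\"offset\":%d}}",
                "CLIENT_ID", "SECRET", "ACCESS_TOKEN", count, offset);

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://production.plaid.com/transactions/get"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

            // Parse response.body() with your JSON library of choice, append the
            // transactions array to your results, and read total_transactions to
            // decide whether another page is needed.
            int totalTransactions = parseTotalTransactions(response.body());
            offset += count;
            if (offset >= totalTransactions) {
                break;
            }
        }
    }

    // Hypothetical placeholder; use a real JSON parser in practice.
    static int parseTotalTransactions(String json) {
        return 0;
    }
}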

Spring @Transactional + Isolation.REPEATABLE_READ for Rate Limiting

We are trying out a scenario of rate limiting the total number of JSON records requested in a month to 10,000 for an API.
We store the total count of records in a table against client_id and a Timestamp (which is the primary key).
Per request, we fetch the record from the table for that client whose Timestamp falls within that month.
From this record we get the current count, increment it by the number of records in the current request, and update the DB.
Using the Spring Transaction, the pseudocode is as below
@Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.REPEATABLE_READ)
public void updateLimitData(String clientId, long currentRecordCount) {
    // step 1: establish the current month's window
    Timestamp startOfMonthTimestamp = getStartOfMonth();
    Timestamp endOfMonthTimestamp = getEndOfMonth();
    // step 2: read this client's row for the month from the DB
    LimitDetails latestLimitDetails = fetchFromDB(startOfMonthTimestamp, endOfMonthTimestamp, clientId);
    latestLimitDetails.count += currentRecordCount;
    // step 3: write the incremented count back
    saveToDB(latestLimitDetails);
}
We want to make sure that when multiple threads access the updateLimitData() method, each thread gets the up-to-date count for a clientId for the month and does not overwrite the count incorrectly.
In the above scenario, if multiple threads call updateLimitData() and reach step 3, the first thread updates count in the DB, and then the second thread updates count in the DB based on a read that may no longer reflect the latest value.
I understand from Isolation.REPEATABLE_READ that a write lock is placed on the rows only when the update is issued at step 3 (by which time the other thread already has stale data). How can I ensure that threads always get the latest count from the table in a multithreaded scenario?
One solution that came to mind is synchronizing this block, but that will not work well in a multi-server scenario.
Please provide a solution.
A transaction will not help you here unless you lock the table/row while doing this operation (don't do that, as it will hurt performance).
You can migrate this into the database, doing the increment within the database itself using a stored procedure, a function call, or a single atomic UPDATE statement. This keeps the operation ACID and transactionally safe, because that safety is built into the database.
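For instance, a conditional UPDATE collapses the read-increment-write into one atomic statement, so concurrent threads (and servers) cannot interleave between the read and the write. A minimal Spring sketch, assuming a hypothetical rate_limit table with client_id, period_start, and record_count columns:

import java.sql.Timestamp;
import org.springframework.jdbc.core.JdbcTemplate;

public class LimitRepository {

    private final JdbcTemplate jdbcTemplate;

    public LimitRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Atomically adds to this month's count and reports whether the quota was
    // still available. The database serializes concurrent increments on the
    // row, so no synchronized block or explicit lock is needed, and it works
    // across multiple servers.
    public boolean tryConsume(String clientId, Timestamp periodStart,
                              int recordsRequested, int monthlyLimit) {
        int updated = jdbcTemplate.update(
            "UPDATE rate_limit "
          + "SET record_count = record_count + ? "
          + "WHERE client_id = ? AND period_start = ? "
          + "AND record_count + ? <= ?",
            recordsRequested, clientId, periodStart, recordsRequested, monthlyLimit);
        return updated == 1;  // 0 rows updated means the quota is exhausted (or the row is missing)
    }
}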
I recommend doing this with the standard Spring Actuator to produce a count of API calls; however, this will mean rewriting your service to use the Actuator endpoint rather than the database. You can link this to your gateway/firewall/load balancer to deny access to the API once the quota is reached. This keeps your API endpoint pure, since the rate-limiting logic is removed from the API call itself, and all new APIs you develop will automatically get this functionality.

Facebook ads insights - adding a day dimension breakdown

We are using the https://developers.facebook.com/docs/marketing-api/reference/ad-account/insights/ endpoint to get insights for a Facebook ad account.
To get data for a range of days with a per-day breakdown (for analytical purposes), we are creating one report per day, which multiplies our calls and pushes us into rate limits.
Is it possible to create a single report with a day breakdown?
Thanks!
What you're looking for is the time_increment parameter, documented on the insights endpoint page linked above.
To get insights broken down into single days, use time_increment=1.
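For example, here's a sketch of a single Graph API call covering a whole month and returning one row per day; the API version, field list, and placeholder IDs are assumptions to adapt, not the exact request from the answer above.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DailyInsights {
    public static void main(String[] args) throws Exception {
        // The JSON time_range must be URL-encoded when passed as a query parameter.
        String timeRange = URLEncoder.encode(
            "{\"since\":\"2024-01-01\",\"until\":\"2024-01-31\"}",
            StandardCharsets.UTF_8);

        // One call for the whole range; time_increment=1 yields one row per day.
        String url = "https://graph.facebook.com/v19.0/act_YOUR_AD_ACCOUNT_ID/insights"
            + "?time_increment=1"
            + "&time_range=" + timeRange
            + "&fields=spend,impressions,clicks"
            + "&access_token=YOUR_ACCESS_TOKEN";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder().uri(URI.create(url)).GET().build(),
            HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());  // data array: one entry per day
    }
}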

Google big query API returns "too many free query bytes scanned for this project"

I am using Google's BigQuery API to retrieve results from their n-gram dataset, so I send multiple queries of the form SELECT ngram FROM trigram_dataset WHERE ngram = 'natural language processing'.
I'm basically using the same code posted here (https://developers.google.com/bigquery/bigquery-api-quickstart) replaced with my query statement.
On every program run, I have to get a new authorization code and type it into the console, which authorizes my program to send queries to Google BigQuery under my project ID. However, after sending 5 queries, it just returns "message" : "Exceeded quota: too many free query bytes scanned for this project".
According to Google BigQuery policy, the free quota is 100 GB/month, and I don't think I've come anywhere close to that. Someone suggested in a previous thread that I should enable billing to use the free quota, which I did, but it's still giving me the same error. Is there any way to check the leftover quota, or to resolve this problem? Thank you very much!
The query you've mentioned scans 1.12 GB of data, so you should be able to run it 89 times in a month.
The way the quota works is that you start out with 100 GB of monthly quota; if you use it up, you don't have to wait an entire month, but you get about 3.3 GB more quota each day (100 GB spread over roughly 30 days).
My guess (please confirm) is that you ran a bunch of queries and used up your 100 GB monthly free quota, then waited a day, and were only able to run a few queries before hitting the quota cap again. If this is not the case, please let me know and provide your project ID, and I can take a look in the logs.
Also, note that this isn't the most efficient use of BigQuery; one option would be to batch multiple lookups into a single request. In this case you could do something like:
SELECT ngram
FROM trigram_dataset
WHERE ngram IN (
'natural language processing',
'some other trigram',
'three more words')
