Why would I be getting back only 100 transactions in Plaid, even in production?

We got our production access yesterday, so I set the environment to production and plugged in the new production secret key. The access token I get back is of the form access-production-XXXXXXX-XXXXXX-XXXX-XXXXXXX. When I request transactions, though, the total_transactions field reports a large number, like 745 in the example in front of me, while the number of transactions actually returned in the transactions array remains limited to 100.
Why am I not seeing the whole 745?

/transactions/get takes a count option that indicates how many transactions to request. By default, this is 100. To get more than 100 transactions you need to raise count, and since count maxes out at 500, to get more than 500 transactions you need to make multiple requests, advancing the offset option each time (see the sketch after the links below).
More info:
https://plaid.com/docs/api/products/#transactions-get-request-options-count
https://plaid.com/docs/transactions/pagination/
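For illustration, a rough pagination loop with the Plaid Node client might look like this (the transactionsGet call assumes a recent plaid-node version; older client versions expose client.getTransactions instead, and accessToken, startDate, and endDate are placeholders):

async function fetchAllTransactions(client: any, accessToken: string, startDate: string, endDate: string) {
  let transactions: any[] = [];
  let total = Infinity;
  while (transactions.length < total) {
    const response = await client.transactionsGet({
      access_token: accessToken,
      start_date: startDate,   // e.g. "2023-01-01"
      end_date: endDate,       // e.g. "2023-12-31"
      options: { count: 500, offset: transactions.length }, // 500 is the maximum count per request
    });
    total = response.data.total_transactions;               // e.g. 745
    transactions = transactions.concat(response.data.transactions);
  }
  return transactions; // all 745, gathered across multiple requests
}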

Related

Browser: Network - response time shows 2 seconds when it displays after ~15

I wanted to check how quickly my web application will display the results of a query: SELECT * FROM orders.
The query returns about 20k records on one page and takes about 15 seconds to display.
Why does every browser report the response time as about two seconds? Is it because the browser has trouble displaying that many records on one page? At 70k records it runs out of memory.
Database: MySQL on a hosting provider.
[Screenshot: problem]
[Screenshot: correct response time]
If you want to check how long the web app takes to process the request, you can add logging before and after running the query.
You could also log the current time when the request is received and again just before returning the response, as in the sketch below.
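A minimal sketch of that idea, assuming a Node/Express handler with the mysql2 driver (the endpoint, credentials, and table name are placeholders; the same pattern applies to any stack):

import express from "express";
import mysql from "mysql2/promise";

const app = express();
const pool = mysql.createPool({ host: "localhost", user: "app", database: "shop" }); // placeholder credentials

app.get("/orders", async (req, res) => {
  const received = Date.now();                              // request received
  const [rows] = await pool.query("SELECT * FROM orders");
  const queried = Date.now();                               // query finished
  res.json(rows);                                           // serialize and start sending
  console.log(`query took ${queried - received} ms, handler total ${Date.now() - received} ms`);
});

app.listen(3000);

Comparing these numbers with what the browser's Network tab reports shows whether the time is spent in the database, in the server, or in the browser rendering the rows.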
As for why the browser reports the request finishing after two seconds, I don't think we have enough information to say; it could come from the default configuration of the web server you use.
In my opinion, displaying 20k records is probably not an efficient approach. Besides the query time and the response time, you also need to consider the looping that happens on the front end to render all those rows.
Personally, I would recommend paging with a smaller page size (see the sketch below), and if you need to display all the data at once, you might consider lazy loading as an option.
I know this is a very generic answer, but hopefully it helps you out.
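Continuing the sketch above, a paged endpoint might look like this (page size of 50, ordering by an assumed id column; it reuses the app and pool from the previous sketch):

// Serve one page at a time instead of all 20k rows.
app.get("/orders/page", async (req, res) => {
  const page = Number(req.query.page ?? 0);
  const pageSize = 50;
  const [rows] = await pool.query(
    "SELECT * FROM orders ORDER BY id LIMIT ? OFFSET ?",
    [pageSize, page * pageSize]
  );
  res.json(rows);
});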

How to stop ABP from querying the empty AbpUserOrganizationUnits table

We use ABP with an MSSQL database hosted on Azure.
As part of cost optimization, we need to make as few requests to the DB as possible.
During the investigation I found out that ABP makes around 50 million requests per month to the AbpUserOrganizationUnits table, and we don't use this table at all.
I would like to disable all calls to this table.
Making 50 million requests to the database just to receive an empty set is not what a normal product should do.
I would like to know if there is a way to stop these requests, redirect them to a Redis cache, or even stop them inside the API.

Apollo Client v3 Delete cache entries after given time period

I am wondering if there is a way to expire cached items after a certain time period, e.g., 24 hours.
I know that Apollo Client v3 provides methods such as cache.evict and cache.gc, which are a good start and which I am already using; however, I want a way to delete cache items after a given time period.
What I am doing at the minute is adding a TimeToLive field to every object in my Apollo schema, and when the backend returns an object, the field is populated with the current time + 24 hours (i.e. the time in 24 hours' time). Then when I query the data in the front end, I check whether the TimeToLive field of the returned data is in the future; if not, that means the data was definitely retrieved from the cache, in which case I call the refetch function, which forces the query to fetch the data from the server. However, this doesn't seem like the best way to do things, mainly because I have to iterate over every result in the returned data and check if any of the returned objects have expired; and if so, everything is refetched.
Another solution I thought of was to use something like React Native Queue and have a background task that periodically checks the cache and deletes items that have expired. But again, I am not totally sold on this solution.
For a little bit of context here: I am building a cooking/recipes app, and recipes/posts are cached on the device; however, my concern is that a user could delete a post, but everyone else who has that post cached would still be able to see it. By expiring the cached item, at least they would only be able to see it for a number of hours before it is removed. However, there might be a better way to do this altogether, i.e. have the server contact clients that have the item cached (though I couldn't think of any low-lift solutions at the time of writing this).
apollo-invalidation-policies replaces the Apollo Client InMemoryCache with InvalidationPolicyCache, and within the type policies you can specify a timeToLive field. If an object is accessed beyond its TTL, it is evicted and no data is returned. See the sketch below.
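A minimal sketch of what that setup might look like, assuming the configuration shape documented for the library (the package is published on npm as @nerdwallet/apollo-cache-policies at the time of writing; Recipe and the GraphQL endpoint are placeholders taken from the question's recipe-app context):

import { ApolloClient, HttpLink } from "@apollo/client";
import { InvalidationPolicyCache } from "@nerdwallet/apollo-cache-policies";

const cache = new InvalidationPolicyCache({
  invalidationPolicies: {
    types: {
      // Evict any cached Recipe older than 24 hours when it is accessed.
      Recipe: { timeToLive: 24 * 60 * 60 * 1000 }, // milliseconds
    },
  },
});

const client = new ApolloClient({
  link: new HttpLink({ uri: "https://example.com/graphql" }), // placeholder endpoint
  cache,
});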

Jmeter for concurrent users

I have been using the JMeter plugin Ultimate Thread Group for concurrent requests.
But now I'm finding it difficult to use, because the scenario is:
Each request has a tracking number (the tracking numbers are already generated in the system when a form is submitted, so I have to use the generated tracking numbers from the DB), which is passed as a POST parameter in the HTTP request. These tracking numbers are unique, and I have configured a CSV config for passing them. Once a tracking number is used, it can't be used again (it would give me an error message). So can someone please suggest how to stress test this scenario, where I have to hit a particular URL (with a unique tracking number from the CSV file) for approximately 60/30 minutes (with a varying number of threads) until I find the crash point of the system?
1st way:
You can pass the tracking numbers via a CSV file (see the example after this answer):
Allocate all the tracking numbers to specific users (this can be done with a database query).
Copy those tracking numbers into a CSV file.
Pass those tracking numbers as a parameter via a CSV Data Set Config.
2nd way:
Fill the form, and the generated tracking number can be fetched via a Regular Expression Extractor.
Set the allocation logic to a specific user each time (disable other users).
Log in with this user and pass the fetched tracking number.
Hope this will be helpful to you.
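As an illustration of the 1st way, the CSV file and the sampler reference might look roughly like this (the file name, variable name, and parameter name are assumptions; ${...} is JMeter's standard variable syntax):

trackingnumbers.csv:
TRK-0001
TRK-0002
TRK-0003

CSV Data Set Config: Filename = trackingnumbers.csv, Variable Names = trackingNumber, Recycle on EOF = False, Stop thread on EOF = True (so each number is used at most once and threads stop when the file is exhausted).
HTTP Request POST parameter: trackingnumber = ${trackingNumber}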

REST Api for Infinite scrolled query results

I'm building an internal server which contains a database of customer events. The webpage which allows access to the events is going to use an infinite scroll / dynamic loading scheme both for displaying live events and for browsing the results of queries to the database. So, you might query the database and get maybe 200k results. The webpage would display the 'first' 50 and allow you to scroll and scroll and scroll to see more and more results (loading perhaps 50 more at a time).
I'm supposed to be using a REST API for the database access (a C# server). I'm unsure what the API should look like so that it remains RESTful. I've come up with 3 options. The question is: are any of them RESTful, and which is the most RESTful (if there is such a thing; if not, I'll pick one of the RESTful ones)?
Option 1:
GET /events?query=asdfasdf&first=1&last=50
This simply does the query and specifies the range of results to return. The server, unable to keep state, would have to requery the database each time the infinite scroll occurs (though perhaps using the first/last hints to stop early). This seems bad, and there isn't any feedback about how many results are forthcoming.
Option 2:
GET /events/?query=asdfasdf
GET /events/details?id1=asdf&id2=qwer&id3=zxcv&id4=tyui&...&id50=vbnm
This option first does a query, which returns the list of event ids but no further details. The webpage simply has the list of all the ids (so at least it knows the count). The webpage holds onto the event id list and, as infinite scroll/dynamic load is needed, makes another query for the event details of the specified ids. Each id would nominally be a GUID, so about 36 characters per id (plus &id##= for 41 characters). At 50 ids per request, the URL would be quite long, 2000+ characters. The URL limit mentioned elsewhere on SO is around 2k, so maybe if I limit it to 40 ids per query this would be fine. It'd be nice to simply have a comma-separated list instead of all the query parameters. Can you make a query parameter like ?ids=qwer,asdf,zxcv,wert,sdfg,rtyu,gfhj, ... ,vbnm ?
Option 3:
POST /events/?query=asdfasdf
GET /events/results/{id}?first=1&last=50
This would post the query to the server and cause it to create a results resource. The ID of the results resource would be returned and would then be used to get blocks of the query results, which in turn contain the event details needed for the webpage. The XML returned from the POST could contain the number of records and other useful information besides the ID. Either the webpage would have to delete the resource later, when the query page is closed, or the server would have to clean them up once they expire (days or weeks later).
My concern with Option 1 is that, while RESTful, it is horrible for the server. I'm not sure requesting so many simultaneous resources, like the second GET in Option 2, is really RESTful or practical (it seems like there has to be a better way). I'm not sure Option 3 is RESTful at all, or if it is, whether it's sort of cheating the REST thing by creating state via a POST (or should that be a PUT?).
Option 3 worked out fine. It required the server to maintain the query results, and there was a bit of debate about how many queries (from various users) should be kept alive simultaneously, since there is no way to know when a user is actually done with a query.
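For illustration, the Option 3 flow from the client's side might look roughly like this (the endpoint paths follow the question; the id and total field names are assumptions, and the response is shown as JSON for brevity even though the question mentions XML):

// Sketch of the Option 3 flow: create a results resource, then page through it 50 at a time.
async function runQuery(query: string) {
  // POST the query; the server creates a results resource and returns its id and record count.
  const created = await fetch(`/events/?query=${encodeURIComponent(query)}`, { method: "POST" });
  const { id, total } = await created.json();

  // Fetch the first block; each scroll event advances first/last by 50.
  const firstPage = await fetch(`/events/results/${id}?first=1&last=50`);
  const events = await firstPage.json();
  return { id, total, events };
}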
