Is there a way to query for the account balances of a NEAR account over time or other advanced datasets? - nearprotocol

What services or APIs are available for advanced queries of account state on NEAR Protocol? The most basic example is a time series of an account's balance over time, e.g.:
# for account foo.near
20220731 1.234
20220730 1.567
...
I know Flipside has indexed the chain (docs) and stats.gallery does account-level stuff with charts, but I can't find any dashboards on awesomenear.com that display this, nor any good APIs. Any others?
Ideally there are more advanced queries possible as well, of course.

https://pikespeak.ai/ is nice, and an API is on their roadmap. It doesn't give you the balance at every block, but there is CSV export, so the data can be pulled into a spreadsheet.

Yes! Enhanced APIs are available for getting all kinds of balances for NFTs, FTs and NEAR. Access them via the Pagoda Console:
https://console.pagoda.co/
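If you'd rather build the time series yourself, one option is to ask an archival RPC node for the account state at a series of block heights. A minimal TypeScript sketch (assuming Node 18+ for the global fetch; foo.near and the block heights are placeholders, and picking one height per day would reproduce the table in the question):
// Minimal sketch: query an archival RPC node for an account's balance at
// specific block heights. Assumptions: Node 18+; the account and heights below
// are placeholders.
const RPC = "https://archival-rpc.mainnet.near.org";

async function balanceAt(accountId: string, blockHeight: number): Promise<string> {
  const res = await fetch(RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "query",
      params: { request_type: "view_account", block_id: blockHeight, account_id: accountId },
    }),
  });
  const json = await res.json();
  // `amount` is the liquid balance in yoctoNEAR (10^-24 NEAR), returned as a string.
  return json.result.amount;
}

async function main() {
  for (const height of [70_000_000, 70_086_400]) {
    const yocto = await balanceAt("foo.near", height);
    console.log(height, Number(yocto) / 1e24, "NEAR");
  }
}

main();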

Related

What is "sf_max_daily_api_calls"?

Does anyone know what the "sf_max_daily_api_calls" parameter in Heroku mappings does? I don't want to just assume it is a daily limit on write operations per object, and I cannot find an explanation.
I tried to open a ticket with Heroku, but their support ticket form requires a "Which application?" drop-down, and none of the support categories have anything to choose from there; the only option is "Please choose..."
I tried to find any reference to this field and can't; I can only see it used in Heroku's Quick Start guide, without an explanation. I have a very busy object I'm working on, read/write, and I want to understand any limitations I need to account for.
Salesforce orgs have a rolling 24-hour limit on daily API calls. Generally the limit is very generous in test orgs (sandboxes), 5M calls, because you can make stupid mistakes there. In production it's lower. That's a bit counterintuitive, but it protects their resources and forces you to write optimised code and integrations...
You can see your limit under Setup -> Company Information. There's a formula in the documentation; roughly speaking, you gain more of that limit with every user license you purchase (more for "real" internal users, less for community users), just as with data storage limits.
Also, every API call is supposed to return the current usage (in a special element for the SOAP API, in a header for the REST API), so I'm not sure why you'd have to hardcode anything...
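For example, a minimal sketch of reading that header off a REST response (TypeScript, Node 18+ for the global fetch; the instance URL, token and API version are placeholders):
// Sketch: every Salesforce REST response carries a Sforce-Limit-Info header,
// e.g. "api-usage=123/15000" (calls used / rolling 24h allowance).
// Assumptions: instanceUrl and accessToken are placeholders; Node 18+.
async function checkApiUsage(instanceUrl: string, accessToken: string) {
  const resp = await fetch(`${instanceUrl}/services/data/v57.0/limits`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  console.log("Sforce-Limit-Info:", resp.headers.get("Sforce-Limit-Info"));
  // The /limits resource body also reports DailyApiRequests Max/Remaining.
  const limits = await resp.json();
  console.log(limits.DailyApiRequests);
}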
If you design your operations right, the limit can be very generous. I have no idea how Heroku Connect works internally; ideally you'd spot some mention of "Bulk API 2.0" in its documentation, or at least work out whether it syncs synchronously or asynchronously.
A normal, old-school synchronous update via the SOAP API lets you process 200 records at a time, spending 1 API call. The REST Bulk API accepts CSV/JSON/XML of up to 10K records per batch and processes them asynchronously; you poll for an "is it done yet" result... So creating a job, uploading the files, closing the job and then checking only, say, once a minute can easily come to 4 API calls, and you can process millions of records before hitting the limit.
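To make the call count concrete, here is a rough TypeScript sketch of that Bulk API 2.0 job lifecycle (endpoints as I recall them from the REST docs; the instance URL, token, object and CSV payload are placeholders):
// Rough sketch of the Bulk API 2.0 lifecycle: create a job, upload CSV, close
// the job, then poll until done. Assumptions: Node 18+, a valid access token;
// everything passed in is a placeholder.
const ingest = (instanceUrl: string, path: string) =>
  `${instanceUrl}/services/data/v57.0/jobs/ingest${path}`;

async function bulkUpdate(instanceUrl: string, token: string, csv: string) {
  const auth = { Authorization: `Bearer ${token}` };

  // 1) Create the ingest job (1 API call).
  const job = await (await fetch(ingest(instanceUrl, ""), {
    method: "POST",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ object: "Account", operation: "update", contentType: "CSV" }),
  })).json();

  // 2) Upload the CSV data for the job (1 API call).
  await fetch(ingest(instanceUrl, `/${job.id}/batches`), {
    method: "PUT",
    headers: { ...auth, "Content-Type": "text/csv" },
    body: csv,
  });

  // 3) Mark the upload complete so processing starts asynchronously (1 API call).
  await fetch(ingest(instanceUrl, `/${job.id}`), {
    method: "PATCH",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ state: "UploadComplete" }),
  });

  // 4) Poll about once a minute until the job reports JobComplete or Failed.
  let state = "InProgress";
  while (state !== "JobComplete" && state !== "Failed") {
    await new Promise((resolve) => setTimeout(resolve, 60_000));
    const info = await (await fetch(ingest(instanceUrl, `/${job.id}`), { headers: auth })).json();
    state = info.state;
  }
  return state;
}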
When all else fails, you've exhausted your options, can't optimise any further and can't purchase more user licenses... I think they sell "packets" of additional API call capacity; contact your account representative. But there are lots of things you can try before that, not least setting up a warning for when you hit, say, a 30% usage threshold.

Slashing / Validator Reward / Treasury Reward in NEAR

I want to capture all the balance-changing operations for any NEAR address provided.
I have the info on all the action types and the archival APIs to pull out the transactions. Can anyone help with the APIs to get slashing and reward distributions? Rewards are distributed to the validator, and some part goes to the treasury.
Please help me with the blocks and APIs I can use to validate the theory, so that I can capture all the balance-changing operations.
Thanks
We have already collected this information here; there is read-only access to the DB. You need the account_changes table.
Rewards are marked with account_changes.update_reason = 'VALIDATOR_ACCOUNTS_UPDATE'.
Slashing is turned off for now, so you will not see anything about it.
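As a sketch of what that query could look like (TypeScript with the pg client; the connection string is a placeholder, and the column names follow the indexer-for-explorer schema as I recall it, so verify them against the actual DB):
// Sketch: pull balance-changing rows for one account from account_changes,
// including validator reward rows (update_reason = 'VALIDATOR_ACCOUNTS_UPDATE').
// Assumptions: `pg` is installed, DATABASE_URL points at the public read-only
// replica, and the column names match the indexer-for-explorer schema.
import { Client } from "pg";

async function balanceChanges(accountId: string) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const { rows } = await client.query(
    `SELECT changed_in_block_timestamp,
            update_reason,
            affected_account_nonstaked_balance,
            affected_account_staked_balance
       FROM account_changes
      WHERE affected_account_id = $1
      ORDER BY changed_in_block_timestamp`,
    [accountId]
  );
  await client.end();
  return rows;
}

balanceChanges("foo.near").then((rows) => console.table(rows.slice(0, 20)));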

Microservices: model sharing between bounded contexts

I am currently building a microservices-based application developed with the MEAN stack, and I am running into several situations where I need to share models between bounded contexts.
As an example, I have a User service that handles the registration process as well as login (generating a JWT), logout, etc. I also have a File service which handles the uploading of profile pics and other images the user happens to upload. Additionally, I have a Friends service that keeps track of the associations between members.
Currently, I am adding the GUID of the user from the user table used by the User service, as well as the first, middle and last name fields, to the File table and the Friend table. This way I can query these fields whenever I need them in the other services (Friend and File) without needing to make any REST calls to fetch the information every time it is queried.
Here is the caveat:
The downside seems to be that I have to notify the File and Friend tables (I chose Seneca with RabbitMQ) whenever a user updates their information in the User table.
1) Should I be worried about the services getting too chatty?
2) Could this lead to any performance issues if a lot of updates take place over an hour, let's say?
3) In trying to isolate boundaries, I just am not seeing another way of pulling this off. What is the recommended approach to solving this issue, and am I on the right track?
It's a trade-off. I would personally not store the user details alongside the user identifier in the dependent services. But neither would I query the User service to get this information. What you probably need is some kind of read-model for the system as a whole, which can store this data in a way that is optimized for your particular needs (reporting, displaying together on a web page, etc.).
The read-model is a pattern which is popular in the event-driven architecture space. There is a really good article that talks about these kinds of questions (in two parts):
https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-1-richardson
https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-2-richardson
Many common questions about microservices seem to be largely around the decomposition of a domain model, and how to overcome situations where requirements such as querying resist that decomposition. This article spells the options out clearly. Definitely worth the time to read.
In your specific case, it would mean that the File and Friends services would only need to store the primary key for the user. However, all services should publish state changes which can then be aggregated into a read-model.
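A rough sketch of what "publish state changes, aggregate into a read-model" could look like with the asker's RabbitMQ setup (TypeScript with amqplib; the exchange name, event shape and in-memory store are invented for illustration):
// Sketch: the User service publishes a state-change event; a read-model
// consumer maintains a denormalized view that File/Friends screens can query.
// Assumptions: amqplib installed, RabbitMQ on localhost; names are invented.
import amqp from "amqplib";

interface UserUpdated {
  userId: string;
  firstName: string;
  middleName?: string;
  lastName: string;
}

const EXCHANGE = "user-events";

// Publisher side (User service), called after a profile change is committed.
export async function publishUserUpdated(event: UserUpdated) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange(EXCHANGE, "fanout", { durable: true });
  ch.publish(EXCHANGE, "", Buffer.from(JSON.stringify(event)));
  await ch.close();
  await conn.close();
}

// Consumer side (read-model service): File/Friends store only the user id;
// display fields are looked up in this view instead.
const userView = new Map<string, UserUpdated>();

export async function startReadModelConsumer() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange(EXCHANGE, "fanout", { durable: true });
  const { queue } = await ch.assertQueue("read-model.users", { durable: true });
  await ch.bindQueue(queue, EXCHANGE, "");
  await ch.consume(queue, (msg) => {
    if (!msg) return;
    const event: UserUpdated = JSON.parse(msg.content.toString());
    userView.set(event.userId, event); // upsert into the denormalized view
    ch.ack(msg);
  });
}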
If you are worried about a high volume of messages and high TPS (for example, 100,000 TPS for producing and consuming events), I suggest that instead of RabbitMQ you use Apache Kafka or NATS (the Go version; NATS also has a Ruby version) in order to support a high volume of messages per second.
Also, regarding database design, you should design each microservice around business capabilities and a bounded context according to domain-driven design (DDD). Because, unlike in SOA, each microservice is supposed to have its own database, you should not worry about normalization: you may have to repeat many structures, fields, tables and features across microservices in order to keep them decoupled from each other and let them work independently, which improves availability and scalability.
You can also use event sourcing + CQRS, or transaction log tailing, to avoid 2PC (two-phase commit), which is not recommended when implementing microservices, in order to exchange events between your microservices and update state with eventual consistency, in line with the CAP theorem.

How to detect relationships using Microsoft Cognitive services?

Microsoft Cognitive Services offers a wide variety of capabilities to extract information from natural language. However, I am not able to find out how to use them to detect "relationships" in which two (or more) specific "entities" are involved.
For example, detecting company acquisitions / merging.
These could be expressed in News articles as
"Company 1" has announced to acquire "Company2".
Certainly, there are several approaches to address that need, some that include entity detection first (e.g. Company1 and Company2 being companies) and then the relation (e.g. acquire ...).
Other approaches involve identifying first the "action" ( acquire ) and then through grammatical analysis find which is the "actor" and which the "object" of the action.
Machine learning approaches to semantic relation extraction have also been developed, in order to avoid having humans craft formal relation rules.
I would like to know if / how this use case can be performed with the Microsoft Cognitive Services.
Thank you
It depends on the tech used to examine the response from the API: https://dev.projectoxford.ai/docs/services
I use jQuery to parse the JSON response (via a WebClient in ASP.NET code-behind) from the LUIS/Cognitive Services API (I am not using the Bot Framework). I have a rules engine that I can configure for clients and save, so that when the page loads, rules fire functions based on the parsed JSON response. The rules engine includes various condition functions like contains, begins with, is, etc., so I can test the user's query for specific entities or virtually anything else in it. It really comes down to && and || JavaScript conditions...
For example, if intent=product in the JSON response, I show a shopping cart widget. Or if entity=coffee black OR entity=double double, it triggers a widget to inject into the chat window (SHOW Shopping Cart). In short, you handle the AND/OR logic either via the Bot Framework or via your tech of choice.
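As an illustration only, here is a tiny TypeScript sketch of that kind of rule check over a parsed response (the shape loosely follows a LUIS v2 result, and the widget functions are stand-ins for real UI code):
// Sketch: apply simple AND/OR rules to a parsed intent/entity response.
// Assumptions: the response shape and the rules below are illustrative only.
interface LuisResponse {
  query: string;
  topScoringIntent: { intent: string; score: number };
  entities: { entity: string; type: string }[];
}

function showShoppingCart() { console.log("SHOW Shopping Cart"); }
function injectChatWidget(name: string) { console.log("Inject widget:", name); }

function applyRules(res: LuisResponse) {
  const hasEntity = (value: string) =>
    res.entities.some((e) => e.entity.toLowerCase() === value);

  // Rule 1: intent=product -> show the shopping cart widget.
  if (res.topScoringIntent.intent === "product") {
    showShoppingCart();
  }
  // Rule 2: entity=coffee black OR entity=double double -> inject a chat widget.
  if (hasEntity("coffee black") || hasEntity("double double")) {
    injectChatWidget("order-suggestion");
  }
}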

Best strategy for Foursquare venue/search caching

We are looking to use Foursquare as the location database for our application. Their API states that an application can make up to 5,000 userless requests per hour to venues/* endpoints. In order to help reduce the amount of requests, they recommend that you utilize caching to avoid making repetitive calls to the Foursquare API when different users are requesting the same information.
For our application, we are looking to use the venues/search endpoint to get checkin data around a location. What is the best way to go about caching this data to allow for the least amount of calls to the Foursquare API?
The current idea we have is to cache kilometre-by-kilometre "boxes" that represent areas on the earth. When a user requests nearby venues, we would make a call to Foursquare from the centre point of the box they are currently in, and cache the results for that box. When another user comes along, if they are also in that box, we can return the cached results for that box that are closest to them. If a user is close to the edge of a box, we would return the close results from the box they are currently in, as well as the close results from the adjacent box.
Is this a good way to go about things to limit the requests? We fear this technique may use way too much memory. How do you go about it in your applications? Any insights would be great, thanks!
This sounds like a good strategy for caching venue searches. However, just to be super clear on Foursquare policies, they state that "Server-side caching of venue details is generally required for apps requesting an increase." We don't make caching of search results an explicit requirement before granting rate-limit increases, only caching of venue details calls.
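For what it's worth, a minimal TypeScript sketch of the grid-box idea from the question (the cell size, TTL and in-memory map are arbitrary; a shared store such as Redis would replace the map in practice, and Foursquare's caching policy still applies):
// Sketch: bucket searches into ~1 km grid cells and reuse cached results for
// any user whose location falls in the same cell. Cell size, TTL and the
// injected fetch function are illustrative choices, not Foursquare's API.
type Venue = { id: string; name: string; lat: number; lng: number };

const CELL_DEG = 0.009; // roughly 1 km of latitude; longitude cells narrow toward the poles
const TTL_MS = 10 * 60 * 1000;
const cache = new Map<string, { at: number; venues: Venue[] }>();

const cellKey = (lat: number, lng: number) =>
  `${Math.floor(lat / CELL_DEG)}:${Math.floor(lng / CELL_DEG)}`;

async function nearbyVenues(
  lat: number,
  lng: number,
  fetchFromFoursquare: (lat: number, lng: number) => Promise<Venue[]>
): Promise<Venue[]> {
  const key = cellKey(lat, lng);
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) return hit.venues; // cache hit: no API call

  // Cache miss: call venues/search once from the cell's centre and store the result.
  const centreLat = (Math.floor(lat / CELL_DEG) + 0.5) * CELL_DEG;
  const centreLng = (Math.floor(lng / CELL_DEG) + 0.5) * CELL_DEG;
  const venues = await fetchFromFoursquare(centreLat, centreLng);
  cache.set(key, { at: Date.now(), venues });
  return venues;
}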
