Slashing / Validator Reward / Treasury Reward in NEAR - nearprotocol

I want to capture all the balance-changing operations for any given NEAR address.
I have the info on all the action types and the archival APIs to pull out the transactions. Can anyone help with the APIs to get the slashing and reward distributions? Rewards are distributed to the validator, and some part goes to the treasury.
Please help me with the blocks and APIs with which I can validate the theoretical concept, so that I can capture all the balance-changing operations.
Thanks

We have already collected this information here; there is read-only access to the DB. You need the account_changes table.
Rewards are marked with account_changes.update_reason = 'VALIDATOR_ACCOUNTS_UPDATE'.
Slashing is turned off for now, so you will not see anything about it.
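For example, a minimal JDBC sketch of such a query against the public read-only database might look like this (the column names are assumed from the Indexer for Explorer schema, and the account ID is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RewardChanges {
    public static void main(String[] args) throws Exception {
        // Public read-only credentials for the explorer indexer database.
        String url = "jdbc:postgresql://mainnet.db.explorer.indexer.near.dev/mainnet_explorer";
        try (Connection conn = DriverManager.getConnection(url, "public_readonly", "nearprotocol");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT affected_account_id, affected_account_staked_balance, " +
                     "       affected_account_nonstaked_balance, changed_in_block_timestamp " +
                     "FROM account_changes " +
                     "WHERE update_reason = 'VALIDATOR_ACCOUNTS_UPDATE' " +
                     "  AND affected_account_id = ? " +
                     "ORDER BY changed_in_block_timestamp")) {
            stmt.setString(1, "example.near"); // placeholder account
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // Balances are in yoctoNEAR; read them as strings to avoid overflow.
                    System.out.printf("%s staked=%s nonstaked=%s at %s%n",
                            rs.getString(1), rs.getString(2), rs.getString(3), rs.getString(4));
                }
            }
        }
    }
}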


Indexing staking/rewards events for NEAR blockchain

I want to create an app that has detailed info about historical stake and reward changes for each block. Can I track every delegation event that contains any stake balance change of a delegator/validator? Including information like:
delegator address
validator address
amount of tokens that got delegated, undelegated, or received as rewards
I found this contract, and then I tried to decode the transaction's actions and receipts, but I still cannot find info about the amount.
For example, this transaction contains an unstake_all method. I tried using the NEAR REST API or the Postgres DB like
postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer
But it does not include info about the amount, while the explorer does:
@ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
So, can I somehow get these logs using the REST API or Postgres, and are these logs a reliable source? Or is there any other method to find staking/reward amount info?
First of all:
But it does not include info about the amount, while the explorer does:
@ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
Explorer queries the RPC and shows you the logs from the ExecutionOutcome.
In the PostgreSQL database for Indexer for Explorer, we don't store logs, so you can't find them there.
To have detailed info about historical stake and reward changes for each block, I think you should index the blockchain yourself to be sure everything is calculated as you expect.
In order to do this, you'd need to build an indexer. Happily, we're releasing an MVP (yet a completely working solution) of the NEAR Lake Framework, a microframework that makes building indexers even easier than before.
Please have a look at the example project https://github.com/near/near-lake-raw-printer, which basically prints the data from each block. Refer to this comment for an example of the structure you can receive for each block (StreamerMessage): https://github.com/near/near-lake/issues/1#issuecomment-1035285658
So the main idea is to start indexing from the block where rewards became available (Phase 2) and analyze each block, transaction, and receipt related to staking/unstaking so you can perform your calculations and record the info about historical stake and reward changes.
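If you do rely on the ExecutionOutcome logs in the meantime, here is a rough sketch of pulling the amounts out of the staking-pool log line quoted above; the regex is derived from that single sample and may not cover every contract version:

import java.math.BigInteger;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StakeLogParser {
    // Matches lines like: "@account unstaking <amount>. Spent <shares> staking shares."
    private static final Pattern UNSTAKE = Pattern.compile(
            "@(\\S+) unstaking (\\d+)\\. Spent (\\d+) staking shares\\.");

    public static void main(String[] args) {
        String log = "@ojosdepez.near unstaking 211362599667478202066742666. "
                + "Spent 186315320307823908119982990 staking shares. "
                + "Total 211362599667478202066742667 unstaked balance and 0 staking shares";
        Matcher m = UNSTAKE.matcher(log);
        if (m.find()) {
            String account = m.group(1);
            BigInteger amount = new BigInteger(m.group(2)); // amount in yoctoNEAR
            BigInteger shares = new BigInteger(m.group(3));
            System.out.println(account + " unstaked " + amount + " (shares spent: " + shares + ")");
        }
    }
}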

How to deal with "Foreign Key" in microservice architecture?

Quick question on foreign keys in microservices. I already tried looking for an answer, but nothing gave me exactly what I was looking for.
Use case: every blog post will have many comments. A traditional monolith would have a comments table with a foreign key to the blog post. However, with microservices, we will have two services.
Service 1: Post microservice with these table fields (PostID, Name, Content)
Service 2: Comments microservice with these table fields (CommentID, PostID, Comment)
The question is: do we need "PostID" in Service 2 (the Comments microservice)? I guess the answer is yes, as we need to know which comment belongs to which post. But then, won't that create tight coupling? I mean, if I delete Service 1 (the blog post service), will it impact Service 2 (the comments service)?
I'm going to use another example I'm more familiar with to explain how I believe most people would do this.
Consider an Order Management System (OMS) and an Inventory Management System (IMS).
When a customer places an order on the company website, we ask the OMS to create an order entry in the backend (e.g. via an HTTP endpoint).
The OMS then broadcasts an event, e.g. OrderPlaced, containing all the details of the customer order. We may have a pub/sub (e.g. Redis), a queue (e.g. RabbitMQ), or an event stream (e.g. Kafka) where we place the event (although this can be done in many other ways).
The thing is that we have one or more subscribers interested in this event. One of those could be the IMS, which has the responsibility of assigning the best inventory available every time an order is placed.
We can expect that the IMS will keep a copy of the relevant order information it received when it processed the OrderPlaced event, such that it does not have to ask the OMS for every little detail of the order all the time. So, if the IMS needed a join with the order, instead of calling an endpoint in the Order API, it would probably just do a join with its local copy of the orders table.
Say now that our customer called to cancel her order. A customer service representative then cancels it in the OMS web user interface. At that point an OrderCanceled event is broadcast. Guess who is listening for that event? Correct: the IMS receives the notification and acts accordingly, reversing the inventory assignment and probably even deleting the order record because it is no longer necessary in this domain.
So, as you can see, the best way to do this is by using events and making copies of the relevant details on the other domain.
Since events need time to get broadcast and processed by interested parties, we say that the order data in the IMS is eventually consistent.
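To make the flow concrete, here is a minimal in-process sketch of the pattern just described. The OrderPlaced event name comes from the text above; the classes and the in-memory bus are illustrative stand-ins for Kafka/RabbitMQ/Redis:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

record OrderPlaced(String orderId, String customerId, String shippingAddress) {}

class InMemoryBus {
    private final List<Consumer<OrderPlaced>> subscribers = new ArrayList<>();
    void subscribe(Consumer<OrderPlaced> s) { subscribers.add(s); }
    void publish(OrderPlaced e) { subscribers.forEach(s -> s.accept(e)); }
}

class InventoryManagementSystem {
    // Local, eventually consistent copy of the order details the IMS needs,
    // so it never has to call back into the OMS for a "join".
    private final Map<String, OrderPlaced> localOrders = new ConcurrentHashMap<>();
    void onOrderPlaced(OrderPlaced e) {
        localOrders.put(e.orderId(), e);
        // ... assign the best inventory based on e.shippingAddress() ...
    }
}

class Demo {
    public static void main(String[] args) {
        InMemoryBus bus = new InMemoryBus();
        InventoryManagementSystem ims = new InventoryManagementSystem();
        bus.subscribe(ims::onOrderPlaced);
        bus.publish(new OrderPlaced("SO333", "CUST111", "1 Example St")); // the OMS side
    }
}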
Followup Questions
Q: So, if I understood right, in microservices we prefer to duplicate data and get better performance? Is that the concept? I mean, I know the concept is scaling and flexibility, but when we must share data, we just duplicate it?
Not really. That's definitely not what I meant, although it may have sounded like that due to my poor choice of words in the original explanation. It appears to me that at the heart of your question lies a lack of sufficient understanding of the concept of a bounded context.
In my explanation I meant to indicate that the OMS has a domain concept known as the order, but so does the IMS. Therefore, they both have an entity within their domain that represents it. There is a good chance that the order entity in the OMS is much richer than the corresponding representation of the same concept in the IMS.
For example, if the system I was describing was not for retail, but for wholesale, then the same concept of a "sales order" in our system corresponds to the concept of a "purchase order" in that of our customers. So you see, the same data, mapped under a different name, simply because under a different bounded context the data may have a different perspective and meaning.
So, this is the realization that a given concept from our model may be represented in multiple bounded contexts, perhaps from a different perspective and under different names in each context's ubiquitous language.
Just to give another example: the OMS needs to know about the customer, but the representation of the idea of a customer in the OMS is probably different from the representation of that concept or entity in the CRM. In the OMS, the customer's name, email, and shipping and billing addresses are probably enough of a representation of this idea, but for the CRM the customer encompasses much more.
Another example: the IMS needs to know the shipping address of the customer to choose the best inventory (e.g. the one in a facility closest to its final destination), but probably does not care much about the billing address. On the other hand, the billing address is fundamental for the Payment Management System (PMS). So, both the IMS and PMS may have a concept of an "order"; it is just that it is not exactly the same concept, nor does it have the same meaning or perspective, even if we store some of the same data.
One final example: the accounting system cares about the inventory for accounting purposes, to be able to tell how much we own, but perhaps accounting does not care about the specific location of the inventory within the warehouse; that's a detail only the IMS cares about.
In conclusion, I would not say this is about "copying data"; this is about appropriately representing a fundamental concept within your bounded context, and the realization that some concepts from the model may overlap between systems and have different representations, sometimes even under different names and levels of detail. That's why I suggested that you investigate the idea of context mapping some more.
In other words, from my perspective, it would be a mistake to assume that the concept of an "order" only exists in the OMS. I could probably say that the OMS is the master of record for orders, and that if something happens to an order, we should let other interested systems know about those events, since they care about some of that data; those other systems could have concepts mapped to orders, and when reacting to changes in the master of record, they probably want to change their data as well.
From this point of view, copying some data is a side effect of having a proper design for the bounded context and not a goal in itself.
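As a purely illustrative sketch, the same "order" concept might be modelled like this in three different bounded contexts (the field choices follow the shipping/billing examples above):

import java.math.BigDecimal;
import java.util.List;

// The OMS, as the master of record, holds the rich version of the order.
record OmsOrder(String orderId, String customerId, List<String> orderLines,
                String shippingAddress, String billingAddress, String status) {}

// The IMS keeps only what it needs to assign inventory.
record ImsOrder(String orderId, String shippingAddress) {}

// The PMS cares about billing instead.
record PmsOrder(String orderId, String billingAddress, BigDecimal amountDue) {}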
I hope that answers your question.

Attribute Based Access Control (ABAC) in a microservices architecture for lists of resources

I am investigating options to build a system to provide "Entity Access Control" across a microservices based architecture to restrict access to certain data based on the requesting user. A full Role Based Access Control (RBAC) system has already been implemented to restrict certain actions (based on API endpoints), however nothing has been implemented to restrict those actions against one data entity over another. Hence a desire for an Attribute Based Access Control (ABAC) system.
Given the requirements for the system to be fit for purpose, and my own priority of following best practice by keeping security logic in a single location, I decided to create an externalised "Entity Access Control" (EAC) API.
The end result of my design was something similar to the following image I have seen floating around (I think from axiomatics.com)
The problem is that the whole thing falls over the moment you start talking about an API that responds with a list of results.
E.g. a /api/customers endpoint on a Customers API that takes parameters such as a query filter, sort, order, and limit/offset values to facilitate pagination, and returns a list of customers to a front end. How do you then also provide ABAC on each of these entities in a microservices landscape?
Terrible solutions to the above problem tested so far:
Get the first page of results, send all of those to the EAC API, get the responses, drop the rejected ones from the response, get more customers from the DB, check those... and repeat until you either fill a page of results or run out of customers in the DB. Testing showed that for 14,000 records (absolutely within reason in my situation) it would take 30 seconds to get an API response for someone who had zero permission to view any customers.
On every request to the all-customers endpoint, a request would be sent to the EAC API for every customer available to the original requesting user. Testing showed that for 14,000 records the response payload would be over half a megabyte for someone who had permission to view all customers. I could split it into multiple requests, but then you are just trading payload size against request spam, and the performance penalty doesn't go anywhere.
Give up on the ability to view multiple records in a list. This totally breaks the API's usefulness for customer needs.
Store all the data and logic required to perform the ABAC controls in each API. This is fraught with danger and basically guaranteed to fail in a way that is beyond my risk appetite, considering the domain I am working within.
Note: I tested with 14,000 records just because it's the benchmark of our current state of data. It is entirely feasible that a single API could serve 100,000 or 1m records, so anything that involves iterating over the whole data set or transferring the whole data set over the wire is entirely unsustainable.
So, here lies the question: how do you implement an externalised ABAC system in a microservices architecture (as per the diagram) whilst also being able to service requests that respond with multiple entities, with a query filter, sort, order, and limit/offset values to facilitate pagination?
After dozens of hours of research, it was decided that this is an entirely unsolvable problem and is simply a side effect of microservices (and, more importantly, of segregated entity storage).
If you want the benefits of a maintainable (as in a single piece of externalised infrastructure) entity-level attribute access control system, a monolithic approach to entity storage is required. You cannot simultaneously reap the benefits of microservices.

Is there any way to replay events in a date range?

I am implementing an example with Spring Boot and Axon. I have two events (deposit and withdraw account balance). I want to know: is there any way to get the state of the Account aggregate at a given date?
I want to get not just the final state, but to replay the events in a range of dates.
I think I can help with this.
In the context of Axon Framework, you can start a replay of events by telling a given TrackingEventProcessor to 'reset' its Tokens. By the way, the current description of this in the Reference Guide can be found here.
These TrackingTokens are the objects which know how far a given TrackingEventProcessor is in terms of handling events from the Event Stream. Thus, resetting/adjusting these TrackingTokens is what will issue a replay of events.
Knowing all this, the second step is to look at the methods the TrackingEventProcessor provides to 'reset tokens', of which there are three:
TrackingEventProcessor#resetTokens()
TrackingEventProcessor#resetTokens(Function<StreamableMessageSource, TrackingToken>)
TrackingEventProcessor#resetTokens(TrackingToken)
Option one will reset your tokens to the beginning of the event stream, which will thus replay everything.
Options two and three, however, give you the opportunity to provide a TrackingToken.
Thus, you could provide a TrackingToken starting from several points on the Event Stream. So, how do you go about creating such a TrackingToken at a specific point in time? To that end, you should take a look at the StreamableMessageSource interface, which has the following operations:
StreamableMessageSource#createTailToken()
StreamableMessageSource#createHeadToken()
StreamableMessageSource#createTokenAt(Instant)
StreamableMessageSource#createTokenSince(Duration)
Option 1 is what's used to create a token at the start (tail) of the stream, whilst option 2 will create a token at the head of the stream.
Options 3 and 4, however, allow you to create a token at a specific point in time, thus allowing you to replay all the events from the defined instant up to now.
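Putting the two together, a sketch of a point-in-time replay might look like the following; the processor name "account-projection" is hypothetical, the processor must be stopped before its tokens can be reset, and cutting the replay off at the end of a date range would be up to your event handlers (e.g. ignoring events newer than the cutoff):

import java.time.Instant;
import org.axonframework.config.Configuration;
import org.axonframework.eventhandling.TrackingEventProcessor;

class Replayer {
    static void replayFrom(Configuration configuration, Instant from) {
        configuration.eventProcessingConfiguration()
                .eventProcessor("account-projection", TrackingEventProcessor.class)
                .ifPresent(processor -> {
                    processor.shutDown();              // must be stopped before a reset
                    processor.resetTokens(source -> source.createTokenAt(from));
                    processor.start();                 // replays events from 'from' onwards
                });
    }
}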
There is one caveat in this scenario, however: you're asking to replay an Aggregate. From Axon's perspective, by default the Aggregate is the Command Model in a CQRS set-up, thus dealing with commands going into your system. In the majority of applications, you want commands (i.e. the requests to change something) to operate on the current state of the application. As such, the Repository provided to retrieve an Aggregate does not allow specifying a point in time.
The above-described solution regarding replays is thus solely tied to Query Model creation, as the TrackingEventProcessor is part of the event handling side of your application, most often used to create views. This also ties in with your question: you want to know the "state of the Account Aggregate" at a given point in time. That's not a command but a query, as you have 'a request for data' instead of 'a request to change state'.
Hope this helps you out, @Safe!

Validate Command in CQRS that related to other domain

I am learning to develop microservices using DDD, CQRS, and ES. It is an HTTP RESTful service. The microservices are about an online shop. There are several domains, like products, orders, suppliers, and customers. The domains are built as separate services. How do you do the validation if the command payload relates to other domains?
For example, here is the addOrderItemCommand payload in the order service (command-side).
{
  "customerId": "CUST111",
  "productId": "SKU222",
  "orderId": "SO333"
}
How do we validate the command above? How do we know that the customer really exists in the database (query-side customer service) and is still active? How do we know that the product exists in the database and that its status is published? How do we know whether the customer is eligible for the promo price on the related product?
Is it OK to call an API directly (point-to-point / ajax / request-promise) to validate this payload in the order command-side service? But I think the performance will get worse if the API is called directly just for validation. (We have already developed an event processor outside the command-side service that listens for events and applies them to the materialized view.)
Thank you.
As there is more than one bounded context that needs to be queried for the validation to pass, you need to consider eventual consistency. That being said, there is always a chance that the process as a whole is in an invalid state for a "small" amount of time. For example, the user could be deactivated after the command is accepted and before the order is shipped. An online shop is a complex system, and exceptions can appear in any of its subsystems. However, being implemented as an event-driven system helps: every time the ordering process enters an invalid state, you can take compensatory actions/commands. For example, if the user is deactivated in the meantime, you can cancel all their standing orders, release the reserved products, notify the potential customers who have those products on their wishlists that they are no longer available, and so on.
There are many kinds of validation in DDD, but I follow the general rule that validation should be done as early as possible without compromising data consistency. So, in order to be early, you can query the read model to reject the commands that couldn't possibly be valid, and in order for the system to be consistent, you need to make another check just before the order is shipped.
Now let's talk about your specific questions:
How do we know that the customer really exists in the database (query-side customer service) and is still active?
You can query the read model to verify that the user exists and is still active. You should do this, as a command that comes from an invalid user is a strong indication of some kind of attack, and you don't want those kinds of commands passing through your system. However, even if a command passes this check, it does not necessarily mean that the order will be shipped, as other exceptions could be raised in between.
How do we know that the product exists in the database and that its status is published?
Again, you can query the read model in order to notify the user that the product is not available at the moment. Or, depending on your business, you could allow the command to pass if you know those products will be available in less than 24 hours based on previous statistics (for example, you know that TV sets arrive daily in your stock). Or you could let the customer choose whether to wait or not. In this case, if the products are not in stock at the final phase of the ordering (the shipping), you notify the customer that the products are no longer in stock.
How do we know whether the customer is eligible for the promo price on the related product?
You will probably have to query another bounded context, like a Promotions BC, to check this. It depends on how promotions are validated/used.
Is it OK to call an API directly (point-to-point / ajax / request-promise) to validate this payload in the order command-side service? But I think the performance will get worse if the API is called directly just for validation.
This depends on how resilient you want your system to be and how fast you want to reject invalid commands.
Synchronous calls are simpler to implement, but they lead to a less resilient system (you should be aware of cascading failures and use techniques like circuit breakers to stop them).
Asynchronous calls (i.e. using events) are harder to implement but make your system more resilient. In order to have async calls, the ordering system can subscribe to other systems' events and maintain private state that can be queried for validation purposes as commands arrive. In this way, the ordering system continues to work even if the link to the inventory or customer management systems is down.
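As a hedged sketch of the "validate against a local read model" idea, the command handler below rejects commands early using locally maintained views; all class and method names here are invented for illustration, and the views would be populated by subscribing to the other services' events:

record AddOrderItemCommand(String customerId, String productId, String orderId) {}

interface CustomerView { boolean isActive(String customerId); }   // fed by customer events
interface ProductView { boolean isPublished(String productId); }  // fed by product events

class AddOrderItemHandler {
    private final CustomerView customers;
    private final ProductView products;

    AddOrderItemHandler(CustomerView customers, ProductView products) {
        this.customers = customers;
        this.products = products;
    }

    void handle(AddOrderItemCommand cmd) {
        // Early, cheap rejection using the eventually consistent read model.
        if (!customers.isActive(cmd.customerId()))
            throw new IllegalArgumentException("Unknown or inactive customer: " + cmd.customerId());
        if (!products.isPublished(cmd.productId()))
            throw new IllegalArgumentException("Product not published: " + cmd.productId());
        // Accept the command; a final consistency check still happens before shipping.
        // ... emit an OrderItemAdded event ...
    }
}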
In any case, it really depends on your business, and none of us can tell you exactly what to do.
As always, everything depends on the specifics of the domain, but as a general principle, cross-domain validation should be done via the read model.
In this case, I would maintain a read model within each microservice for use in validation. Of course, that brings with it the question of eventual consistency.
How you handle that should come from your understanding of the domain. Factors such as the length of the eventual-consistency window compared to the frequency of updates should be considered, as should the cost to the business of getting it wrong compared to the cost of development to minimise the problem. In many cases, just recording the fact that there has been a problem is more than adequate for the business.
I have a blog post dedicated to validation which you can find here: How To Validate Commands in a CQRS Application
