In Hyperledger Composer, what does a "Participant" represent? - hyperledger-composer

In a B2B blockchain network, is "Participant" meant to represent the business or the person that acts on behalf of the business (e.g. an employee) or both?
In most of the examples that I have seen, "Participant" seems to represent the business. But once you start thinking about security and Participant-Identity mapping, "Participant" as a person makes more sense.
Regards,
Naveen

In Hyperledger Composer, your 'active' Participant (or 'a' participant) operates at a transactional level, in that it uses an identity to perform actions (adding assets, submitting transactions that update those assets, executing queries, etc.) - and it needs at least one identity associated with it to execute such transactions. At the Hyperledger Composer level, that Participant can be a person, or (if so implemented by the author's use case) it can be represented as a participant entity, e.g. Payroll Administrator (and still have an identity or identities mapped to it). Think also, for example, of access control - who can see what at a Participant level? In Hyperledger Composer, one participant can have multiple identities, but only one is submitted (as an identity signature) with a transaction at a time.
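To make the participant-identity mapping concrete, here is a minimal sketch using the composer-client issueIdentity API to map two identities onto one participant. The card name, namespace, and PayrollAdministrator participant type are hypothetical, not from the question:

// Sketch: mapping more than one identity to a single participant.
// composer-client ships without TypeScript typings, hence the require.
const { BusinessNetworkConnection } = require('composer-client');

async function issueTwoIdentities(): Promise<void> {
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@payroll-network'); // connect with an admin business network card

  const participant = 'org.example.PayrollAdministrator#PA_0001'; // fully-qualified participant id
  for (const name of ['paula-id-1', 'paula-id-2']) {
    // Each issueIdentity call maps one more identity onto the same participant.
    const identity = await connection.issueIdentity(participant, name);
    console.log(`issued ${identity.userID} with secret ${identity.userSecret}`);
  }

  await connection.disconnect();
}

issueTwoIdentities().catch(console.error);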
Now, from a 'business' perspective (irrespective of any Composer context), a business (organisation) can also 'participate' in a business network as a 'party' - so, a complex network of companies working together to accomplish certain goals and having a business relationship with each other, narrowed down to specific business flows. To implement this on a blockchain, the parties will want to reduce this down to a level where identities and participants (and all the other elements of the modeled network) are defined in the network they join. So really, it's just a question of context, and I think you'll now see what modeling and using a Participant in Hyperledger Composer means, and the outer context.

Participant: Participants represent the organizations or people who take part in the digital business network. Participants are defined in the business network model.
https://hyperledger.github.io/composer/latest/reference/glossary.html
And: https://hyperledger.github.io/composer/latest/managing/participantsandidentities.html

Related

Data Dependency Among Microservices

In my microservices architecture, I have a bunch of services A, B, C, D etc.
For example: Service A is responsible for managing students. Service B is responsible for managing the assessments that students take.
Service B stores the studentId in its tables for reference. However, when I have to query for all the assessments taken in a given time period, Service B has to call Service A to get the student name, because the client app wants the name, not the id.
I see a lot of network calls among services because of this. So I was thinking Service A could raise an event whenever a new student registers. Service B would consume the event and store the student info in its own db (the same for student name updates as well).
Questions:
Is this a bad practice? What are the pros and cons of this approach?
Feel free to suggest any alternatives.
It is good to allow some data duplication across the services, and you can do it in many different ways.
One option is to have Service A publish an event when a new student is registered.
One alternative (that might be simpler) is that when you create a new assessment against Service B, you provide the username as part of the CreateAssessment command. In this way you don't need to publish any events between the two services when a new user is created.
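As a sketch of the event-driven option, here is what the consuming side in Service B could look like. The event shapes and the in-memory map (standing in for Service B's database table) are illustrative assumptions:

// Sketch: Service B keeps a local, eventually consistent copy of student names.
interface StudentRegistered {
  studentId: string;
  name: string;
}

interface StudentNameUpdated {
  studentId: string;
  name: string;
}

// Minimal local store standing in for Service B's database table.
const studentNames = new Map<string, string>();

function onStudentRegistered(event: StudentRegistered): void {
  studentNames.set(event.studentId, event.name); // store the local copy
}

function onStudentNameUpdated(event: StudentNameUpdated): void {
  studentNames.set(event.studentId, event.name); // last write wins
}

// When querying assessments, Service B joins names locally,
// with no network call to Service A.
function assessmentWithName(assessment: { studentId: string; score: number }) {
  return { ...assessment, studentName: studentNames.get(assessment.studentId) ?? 'unknown' };
}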
Publishing events and replicating data into each service's database is a totally reasonable approach to minimizing network calls. I think you might find my answer to a similar question helpful as well (option 1 is the same as what you described):
https://stackoverflow.com/a/57791951/1563240

Monolithic Web API to microservice design

We have a monolithic Web API layer in our application with a hundred end points. I am trying to break it into microservices using Azure Service Fabric.
When we break them into multiple services, we may end up having duplicate code.
Example: Let's say we have Account Services to create an account, and there is a Payment Service to apply payments to transactions.
In this case, both services need the Customer class/domain. Account Services probably needs an exhaustive customer with full details, but the Payment Service might need a lightweight one.
The question is do we need to copy several domain entities, and other layers like this? Doesn't that create more maintenance issues?
If we don't, we end up sharing the code across the different services and effectively creating one monolithic service, the same as the existing Web API.
Any thoughts on this?
Secondly, we have some cases where transactions are used today. If we separate them, is there any good design to record failures and roll back without trying too hard to maintain distributed transactions?
Breaking a monolith up into proper microservices with appropriate boundaries for your domain is certainly more of an art than a science. The prerequisite to taking on such a task is a thorough understanding of your domain and the interactions within it, and you won't get it right the first time. One of the points that Evans makes in his book on Domain-Driven Design is that for any sufficiently complex domain, the domain model continually evolves because your understanding of the domain is continually evolving; you will understand it a little better tomorrow than you do today. That said, don't be afraid to start when you have an understanding that is "good enough", and be willing to adapt/evolve your model.
I don't know your domain, but it sounds to me like you need to first figure out in which bounded context Customer primarily belongs. Yes, you want to minimize duplication of domain logic, and though it may not fit completely and neatly into a single service, to the extent that you make one service take primary responsibility for accessing, persisting, manipulating, validating, and ensuring the integrity of a Customer, the better off you'll be.
From your question, I see two possibilities:
The Account Services bounded context is the primary stakeholder in Customer, and Customer has non-trivial ties to other Account Services entities and services. It's difficult to draw clear boundaries around a Customer in isolation. In this case, Customer belongs in the Account Services bounded context.
Customer is an independent enough concept to merit its own microservice. A Customer can stand alone. In this case, Customer belongs in its own bounded context.
In either case, great care should be taken to ensure that the Customer-specific domain logic stays centralized in the Customer microservice behind strong boundaries. Other services might use Customer, or perhaps a light-weight (even read-only) CustomerView, but their interactions should go through the Customer service to the extent that they can.
In your question, you indicate that the Payments bounded context will need access to Customer, but it might just need a light-weight version. It should communicate with the Customer service to get that light-weight object. If, during Payments processing you need to update the Customer's billing address for example, Payments should call into the Customer microservice telling it to update its billing address. Payments need not know anything about how to update a Customer's billing address other than the single API call; any domain logic, validation, firing of domain events, etc... that need to happen as part of that operation are contained within the Customer microservice.
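As a sketch of that boundary (the endpoint paths, CustomerView shape, and service URL are assumptions, not a prescribed API), the Payments side might look like this:

// Sketch: Payments sees only a read-only CustomerView and delegates any
// mutation to the Customer microservice. Requires Node 18+ for global fetch.
interface CustomerView {
  readonly customerId: string;
  readonly displayName: string;
  readonly billingAddress: string;
}

const CUSTOMER_SERVICE_URL = 'http://customer-service'; // hypothetical

// Fetch the light-weight view the Payments context needs.
async function getCustomerView(customerId: string): Promise<CustomerView> {
  const res = await fetch(`${CUSTOMER_SERVICE_URL}/customers/${customerId}/view`);
  if (!res.ok) throw new Error(`customer lookup failed: ${res.status}`);
  return res.json() as Promise<CustomerView>;
}

// Payments never edits customer state directly; it asks the Customer
// microservice to do it, keeping the domain logic behind one boundary.
async function updateBillingAddress(customerId: string, address: string): Promise<void> {
  const res = await fetch(`${CUSTOMER_SERVICE_URL}/customers/${customerId}/billing-address`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ address }),
  });
  if (!res.ok) throw new Error(`billing address update failed: ${res.status}`);
}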
Regarding your second question: it's true that atomic transactions become more complex/difficult in a distributed architecture. Do some reading on the Saga pattern: https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/. Also, Jimmy Bogard is currently in the midst of a blog series called Life Beyond Distributed Transactions: An Apostate's Implementation that may offer some good insights.
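To illustrate the core idea behind the Saga pattern, here is a minimal orchestration-style sketch, assuming each step carries its own compensating action; it is only a toy, not a substitute for the linked articles:

// Sketch: an orchestration-style saga. Each step has a compensating action
// that undoes it if a later step fails, replacing a distributed transaction.
interface SagaStep {
  name: string;
  action: () => Promise<void>;
  compensate: () => Promise<void>;
}

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  try {
    for (const step of steps) {
      await step.action();      // run each local transaction in order
      completed.push(step);
    }
  } catch (err) {
    // Roll back in reverse order via compensating actions.
    for (const step of completed.reverse()) {
      await step.compensate();
    }
    throw err;
  }
}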
Hope this helps!

Validate Command in CQRS that related to other domain

I am learning to develop microservices using DDD, CQRS, and ES. It is an HTTP RESTful service. The microservices are for an online shop. There are several domains, like products, orders, suppliers, customers, and so on. The domains are built as separate services. How do I do validation if the command payload relates to other domains?
For example, here is the addOrderItemCommand payload in the order service (command-side).
{
  "customerId": "CUST111",
  "productId": "SKU222",
  "orderId": "SO333"
}
How to validate the command above? How to know that the customer really exists in the database (query-side customer service) and is still active? How to know that the product exists in the database and that the status of the product is published? How to know whether the customer is eligible to get the promo price for the related product?
Is it ok to call an API directly (point-to-point / ajax / request promise) to validate this payload in the order command-side service? I think the performance will get worse if the API is called directly just for validation, because we have developed an event processor outside the command service that listens for events and applies them to the materialized view.
Thank you.
As there is more than one bounded context that needs to be queried for the validation to pass, you need to consider eventual consistency. That being said, there is always a chance that the process as a whole is in an invalid state for a "small" amount of time. For example, the user could be deactivated after the command is accepted and before the order is shipped. An online shop is a complex system, and exceptions can appear in any of its subsystems. However, being implemented as an event-driven system helps; every time the ordering process enters an invalid state you can take compensatory actions/commands. For example, if the user is deactivated in the meantime you can cancel all their standing orders, release the reserved products, announce to the potential customers who have those products on their wishlist that they are no longer available, and so on.
There are many kinds of validation in DDD, but I follow the general rule that validation should be done as early as possible without compromising data consistency. So, in order to be early you can query the read model to reject commands that couldn't possibly be valid, and in order for the system to be consistent you need to make another check just before the order is shipped.
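As a sketch of that two-phase validation (the read-model interface and command shape are assumptions):

// Sketch: early, read-model-based rejection plus a final consistency check.
interface AddOrderItemCommand {
  customerId: string;
  productId: string;
  orderId: string;
}

interface ReadModel {
  isCustomerActive(customerId: string): Promise<boolean>;
  isProductPublished(productId: string): Promise<boolean>;
}

// Early check: cheap and eventually consistent; filters out commands that
// couldn't possibly be valid before they enter the write side.
async function validateEarly(cmd: AddOrderItemCommand, rm: ReadModel): Promise<void> {
  if (!(await rm.isCustomerActive(cmd.customerId))) {
    throw new Error(`customer ${cmd.customerId} unknown or inactive`);
  }
  if (!(await rm.isProductPublished(cmd.productId))) {
    throw new Error(`product ${cmd.productId} not published`);
  }
}

// Final check just before shipping: state may have drifted since the command
// was accepted, so re-verify and trigger a compensating action on failure.
async function verifyBeforeShipping(cmd: AddOrderItemCommand, rm: ReadModel): Promise<boolean> {
  return (await rm.isCustomerActive(cmd.customerId)) &&
         (await rm.isProductPublished(cmd.productId));
}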
Now let's talk about your specific questions:
How to know that the customer really exists in the database (query-side customer service) and is still active?
You can query the read model to verify that the user exists and is still active. You should do this, as a command that comes from an invalid user is a strong indication of some kind of attack, and you don't want those kinds of commands passing through your system. However, even if a command passes this check, it does not necessarily mean that the order will be shipped, as other exceptions could be raised in between.
How to know that the product exists in the database and that the status of the product is published?
Again, you can query the read model in order to notify the user that the product is not available at the moment. Or, depending on your business, you could allow the command to pass if you know that those products will be available in less than 24 hours, based on some previous statistics (for example, you know that TV sets arrive daily in your stock). Or you could let the customer choose whether to wait or not. In this case, if the products are not in stock at the final phase of the ordering (the shipping), you notify the customer that the products are not in stock anymore.
How to know whether the customer is eligible to get the promo price for the related product?
You will probably have to query another bounded context, like a Promotions BC, to check this. This depends on how promotions are validated/used.
Is it ok to call an API directly (point-to-point / ajax / request promise) to validate this payload in the order command-side service? But I think the performance will get worse if the API is called directly just for validation.
This depends on how resilient you want your system to be and how fast you want to reject invalid commands.
Synchronous calls are simpler to implement but lead to a less resilient system (you should be aware of cascading failures and use techniques like circuit breakers to stop them).
Asynchronous (i.e. event-based) calls are harder to implement but make your system more resilient. In order to have async calls, the ordering system can subscribe to other systems' events and maintain a private state that can be queried for validation purposes as commands arrive. In this way, the ordering system continues to work even if the link to the inventory or customer management systems is down.
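A minimal sketch of that private, event-maintained state, assuming hypothetical CustomerActivated/CustomerDeactivated events:

// Sketch: the ordering service keeps its own projection of customer status,
// so validation works even when the customer management system is down.
type CustomerEvent =
  | { type: 'CustomerActivated'; customerId: string }
  | { type: 'CustomerDeactivated'; customerId: string };

const activeCustomers = new Set<string>();

// Subscribed handler: keep the private state current as events arrive.
function onCustomerEvent(event: CustomerEvent): void {
  if (event.type === 'CustomerActivated') activeCustomers.add(event.customerId);
  else activeCustomers.delete(event.customerId);
}

// Command validation queries only local state: no synchronous cross-service call.
function validateOrderCommand(customerId: string): void {
  if (!activeCustomers.has(customerId)) {
    throw new Error(`customer ${customerId} is not active (as of the last known event)`);
  }
}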
In any case, it really depends on your business, and none of us can tell you exactly what to do.
As always, everything depends on the specifics of the domain, but as a general principle, cross-domain validation should be done via the read model.
In this case, I would maintain a read model within each microservice for use in validation. Of course, that brings with it the question of eventual consistency.
How you handle that should come from your understanding of the domain. Factors such as the length of the eventual consistency window compared to the frequency of updates should be considered, along with the cost to the business of getting it wrong compared to the cost of development to minimise the problem. In many cases, just recording the fact that there has been a problem is more than adequate for the business.
I have a blog post dedicated to validation which you can find here: How To Validate Commands in a CQRS Application

Microservices: model sharing between bounded contexts

I am currently building a microservices-based application developed with the MEAN stack, and I am running into several situations where I need to share models between bounded contexts.
As an example, I have a User service that handles the registration process as well as login (generating a JWT), logout, etc. I also have a File service which handles the uploading of profile pics and other images the user happens to upload. Additionally, I have a Friends service that keeps track of the associations between members.
Currently, I am adding the guid of the user from the user table used by the User service, as well as the first, middle and last name fields, to the File table and the Friend table. This way I can query these fields whenever I need them in the other services (Friend and File) without needing to make any REST calls to get the information every time it is queried.
Here is the caveat:
The downside seems to be that I have to notify the File and Friend services (I chose Seneca with RabbitMQ) whenever a user updates their information in the User table.
1) Should I be worried about the services getting too chatty?
2) Could this lead to any performance issues if a lot of updates take place over, let's say, an hour?
3) In trying to isolate boundaries, I just am not seeing another way of pulling this off. What is the recommended approach to solving this issue, and am I on the right track?
It's a trade off. I would personally not store the user details alongside the user identifier in the dependent services. But neither would I query the users service to get this information. What you probably need is some kind of read-model for the system as a whole, which can store this data in a way which is optimized for your particular needs (reporting, displaying together on a webpage etc).
The read-model is a pattern which is popular in the event-driven architecture space. There is a really good article that talks about these kinds of questions (in two parts):
https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-1-richardson
https://www.infoq.com/articles/microservices-aggregates-events-cqrs-part-2-richardson
Many common questions about microservices seem to be largely around the decomposition of a domain model, and how to overcome situations where requirements such as querying resist that decomposition. This article spells the options out clearly. Definitely worth the time to read.
In your specific case, it would mean that the File and Friends services would only need to store the primary key for the user. However, all services should publish state changes which can then be aggregated into a read-model.
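As a sketch of that split (all event and row shapes are assumptions), the dependent services keep only the key, and a separate read model built from published events does the joining:

// Sketch: File and Friends store only the user's id; a read model aggregates
// published user events into a shape optimized for display.
interface UserUpdated {
  userId: string;        // the only field dependent services persist
  firstName: string;
  middleName: string;
  lastName: string;
}

// Denormalized read model built from events, optimized for page rendering.
const userDisplayNames = new Map<string, string>();

function onUserUpdated(e: UserUpdated): void {
  userDisplayNames.set(e.userId, `${e.firstName} ${e.middleName} ${e.lastName}`.trim());
}

// A Friends service row: just the association plus the key, no name columns.
interface FriendRow {
  userId: string;
  friendUserId: string;
}

// The page composer joins the association with the read model at query time.
function renderFriend(row: FriendRow): string {
  return userDisplayNames.get(row.friendUserId) ?? row.friendUserId;
}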
If you are worried about a high volume of messages and a high TPS, for example 100,000 TPS for producing and consuming events, I suggest that instead of RabbitMQ you use Apache Kafka or NATS (the Go version, because NATS also has a Ruby version) in order to support a high volume of messages per second.
Also, regarding database design, you should design each microservice based on business capabilities and bounded contexts, according to domain-driven design (DDD). Because, unlike SOA, it is suggested that each microservice should have its own database, you should not be worried about normalization; you may have to repeat many structures, fields, tables, and features for each microservice in order to keep them decoupled from each other and let them work independently, raising availability and gaining scalability.
You can also use the Event Sourcing + CQRS technique, or Transaction Log Tailing, to circumvent 2PC (two-phase commit) - which is not recommended when implementing microservices - in order to exchange events between your microservices and manipulate state with eventual consistency, per the CAP theorem.

CQRS + Microservices: How to handle relations / validation?

Scenario:
I have 2 Microservices (which both use CQRS + Event Sourcing internally)
Microservice 1 manages Contacts (= Aggregate Root)
Microservice 2 manages Invoices (= Aggregate Root)
The recipient of an invoice must be a valid contact.
CreateInvoiceCommand:
{
  "content": "my invoice content",
  "recipient": "42"
}
I have now read lots of times that the write side (= the command handler) shouldn't call the read side.
Taking this into account, the Invoices Microservice must listen to all ContactCreated and ContactDeleted events in order to know if the given recipient id is valid.
Then I'd have thousands of Contacts within the Invoices Microservice, even if I know that only a few of them will ever receive an Invoice.
Is there any best practice to handle those scenarios?
The recipient of an invoice must be a valid contact.
So the first thing you need to be aware of - if two entities are part of different aggregates, you can't really implement "apply a change to this entity only if that entity satisfies a specification", because that entity could change between the moment you evaluate the specification and the moment you perform the write.
In other words - you can only get eventual consistency across an aggregate boundary.
The aggregate is the authority for its own state, but for everything else (for example, the contents of the command message) it pretty much has to accept that some external authority has checked the data.
There are a couple of approaches you can take here:
1) You can blindly accept that the recipient specified in the command is valid.
2) You can try to verify the validity of the recipient from some external authority (aka: a read model of some other aggregate) between receiving it from the untrusted source and submitting it to the domain model.
3) You can blindly accept the command as described, but treat the invoice as provisional until the validity of the recipient is confirmed. That means there is a second command to run on the invoice that certifies the recipient.
Note - from the point of view of the model, these different commands are equivalent, but at the application layer they don't need to be -- you can restrict access to the command to trusted sources (don't make it part of the public api, require authorization that is only available to trusted sources, etc).
Approach #3 is the most microservicy, as the two commands can be separated in time -- you can accept the CreateInvoice command as soon as it arrives, and certify the recipient asynchronously.
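A minimal sketch of approach #3, with a hypothetical in-memory store and command handlers; the point is the provisional status plus the second, trusted command:

// Sketch: accept the invoice immediately as provisional, certify later.
interface Invoice {
  invoiceId: string;
  recipient: string;
  content: string;
  status: 'provisional' | 'certified' | 'rejected';
}

const invoices = new Map<string, Invoice>();

// Public command: accepted blindly, the invoice starts out provisional.
function createInvoice(invoiceId: string, recipient: string, content: string): void {
  invoices.set(invoiceId, { invoiceId, recipient, content, status: 'provisional' });
}

// Trusted command, not part of the public API: run asynchronously once the
// recipient has been checked against the Contacts read model.
function certifyRecipient(invoiceId: string, recipientIsValid: boolean): void {
  const invoice = invoices.get(invoiceId);
  if (!invoice || invoice.status !== 'provisional') return; // ignore duplicates
  invoice.status = recipientIsValid ? 'certified' : 'rejected';
}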
Where would you put approach 4), where the Invoices Microservice has its own Contacts Store which gets updated whenever there's a ContactCreated or ContactDeleted event? Then both entities are part of the same service and boundary. Now it should be possible to make things consistent, right?
No. You've made the two entities part of the same service, but the problem was never that they were in different services, but that they are in separate aggregates -- meaning we can be changing the entity states concurrently, which means that we can't ensure that they are immediately synchronized.
If you wanted immediate consistency, you need a model that draws your boundaries differently.
For instance, if the invoice entities were modeled as part of the Contacts aggregate, then the aggregate can ensure the invariant that new invoices require a valid recipient -- the domain model uses the copy of the state in memory to confirm that the recipient was valid when we loaded, and the write into the book of record verifies that the book of record hadn't changed since the load happened.
The write of the aggregate state is a compare-and-swap in the book of record; if some concurrent process had invalidated the recipient, the CAS operation would fail.
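A sketch of that compare-and-swap write, assuming a simple in-memory book of record keyed by id and version:

// Sketch: CAS against the book of record via an expected version
// (optimistic concurrency).
interface VersionedRecord<T> {
  version: number;
  state: T;
}

class BookOfRecord<T> {
  private records = new Map<string, VersionedRecord<T>>();

  load(id: string): VersionedRecord<T> | undefined {
    return this.records.get(id);
  }

  // Succeeds only if nothing changed since we loaded; a concurrent change
  // (e.g. the recipient being invalidated) makes the CAS fail, so the caller
  // can re-load, re-check the invariant, and retry.
  compareAndSwap(id: string, expectedVersion: number, newState: T): boolean {
    const current = this.records.get(id);
    const currentVersion = current ? current.version : 0;
    if (currentVersion !== expectedVersion) return false;
    this.records.set(id, { version: currentVersion + 1, state: newState });
    return true;
  }
}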
The trade off, of course, is that any change to the Contact aggregate would also cause the invoice to fail; concurrent editing of different invoices with the same recipient goes out the window.
Aggregates are all or nothing; they aren't separable.
Now, one out might be that your Invoice aggregate has a part that must be immediately consistent with the recipient, and another part where eventual consistency, or even inconsistency, is acceptable. In that case your goal is to refactor the model.
The recipient of an invoice must be a valid contact.
This is a business rule. The question should be asked, what does this business rule mean for my application? Who should take responsibility for implementing this rule, or can the responsibility be shared?
One possibility is that, yes, the business rule is about invoices so it should be the responsibility of the Invoices Service to implement it.
However, the business rule is really about the creation of invoices. And the owner of invoice creation in your architecture is, strangely, not the Invoices Service. The reason for this is that the name of the command is CreateInvoiceCommand.
Let's think about this - the Invoices Service will never just create an invoice on its own. It just provides the capability. In this architecture, the actual owner of invoice creation is the sender of the command.
Using this line of reasoning, if the business rule is saying that invoice creation cannot happen against an invalid recipient, then it becomes the responsibility of the command sender to ensure this business rule is implemented.
This would be a very different scenario if, rather than receiving a command, the Invoices Service subscribed to events. As an example, an event called WidgetSold. In this scenario, the owner of invoice creation clearly would be the Invoicing service, and so the business rule would be implemented there instead.
If the user clicks the create invoice for contact 42 button, it's the user's responsibility to take care that contact 42 exists
Yes, that is correct. The user's intention is to create an invoice. The business rules around invoice creation should, therefore, be enforced at this point. How this happens (or whether this happens at all) is a different question.
But what if the user doesn't care? Then it would create an invoice with an invalid recipient id.
Also correct. As you say, there are side-effects to this approach, one of which is that you can end up with inconsistent data across your system. That is one of the realities of SOA.
Isn't this somehow similar to this: The Invoice has a currencyCode property, it's a String.
I don't know if I agree or not. Is asking "is this a valid ISO currency?" different from asking "is entity 42 valid according to another system?"? I would think so.
Isn't it kinda the same as given recipient is not null and is valid according to my Contacts Database?
I agree that in reality you could implement this validation in the service. I am just saying that I don't think it's the right place for it. If you wanted to do this, you would have to either call out to the other service or store all contacts locally, as you framed your question originally. I think it's simpler to just do it outside of the service.
I think that the answer depends on how resilient you want the system to be, that is, how to handle the situation in which the Contacts Microservice is down (not responding or very slow).
1. You want to be very resilient
If the Contacts Microservice is down, you want to be able to emit invoices for some (maybe most) of the contacts. In this case you listen to the ContactCreated and ContactDeleted events and maintain an (eventually consistent) local list of valid contacts; they should be named according to the ubiquitous language in this bounded context, like Payers (or something like that). Then, in the application layer, when building the CreateInvoiceCommand, you check that the Payer is valid and create the command (see the sketch after this answer).
2. You don't need to be resilient
If the Contacts Microservice is down, you refuse to generate invoices. In this case, when building the command you make a request to the Contacts Microservice API endpoint and verify that the Payer is valid.
In any case, you check for contact's validity before the command is dispatched.
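Here is a sketch of option 1; the event names follow the question, while the command shape and in-memory set are assumptions:

// Sketch: an eventually consistent local list of valid Payers, maintained
// from contact events and checked before the command is dispatched.
type ContactEvent =
  | { type: 'ContactCreated'; contactId: string }
  | { type: 'ContactDeleted'; contactId: string };

const validPayers = new Set<string>();

function onContactEvent(event: ContactEvent): void {
  if (event.type === 'ContactCreated') validPayers.add(event.contactId);
  else validPayers.delete(event.contactId);
}

// Application layer: check the Payer locally before building the command,
// so invoicing keeps working even if the Contacts Microservice is down.
function buildCreateInvoiceCommand(content: string, recipient: string) {
  if (!validPayers.has(recipient)) {
    throw new Error(`recipient ${recipient} is not a known Payer`);
  }
  return { content, recipient };
}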
