I am trying to implement a microservice architecture in my backend application. One of my services receives data from another service and stores it in a MySQL database. A cron job reads this data from the DB and sends multiple requests to other services to complete jobs such as account creation, billing-info creation, and so on; all of these requests apply to an existing record in the DB. My problem is handling failures: any of these requests can fail for any reason. How can I design the DB to handle failures and retries?
Should I create multiple columns to handle states?
Something like ACCOUNT_REQUESTED, ACCOUNT_CREATED_FAILED, account_create_failed_count, and so on.
You should have a single column (not multiple columns) that maintains the state, because an entity can have only one state at a time.
And as usman mentioned, you can add retry logic depending on your business logic. If the workflow is really complex, there are open-source workflow solutions for that as well, but I don't think your case is that complicated. You can also use a messaging solution like Kafka to make things asynchronous.
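A minimal sketch of that single-state-column idea, assuming a state machine plus a retry counter per row (the ProvisioningState values, the retry limit, and the backoff are all illustrative, not taken from the question):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: one "state" column plus a retry counter per row,
// instead of a separate flag column per step.
enum ProvisioningState {
    PENDING,          // received from the other service, nothing done yet
    ACCOUNT_CREATED,  // account step finished, billing step still pending
    COMPLETED,        // every downstream request succeeded
    FAILED            // last attempt failed; eligible for retry
}

class ProvisioningRecord {
    long id;                  // primary key of the existing DB row
    ProvisioningState state;  // single state column
    int retryCount;           // how many times the cron job has retried
    String lastError;         // optional: reason of the last failure
    Instant nextAttemptAt;    // optional: supports backoff between retries

    ProvisioningRecord(long id) {
        this.id = id;
        this.state = ProvisioningState.PENDING;
        this.nextAttemptAt = Instant.now();
    }
}

public class ProvisioningCron {
    static final int MAX_RETRIES = 5;

    // Stand-in for: SELECT * FROM provisioning WHERE state IN ('PENDING','FAILED')
    //               AND retry_count < 5 AND next_attempt_at <= NOW()
    static List<ProvisioningRecord> pickWork(List<ProvisioningRecord> table) {
        List<ProvisioningRecord> work = new ArrayList<>();
        for (ProvisioningRecord r : table) {
            boolean eligible = (r.state == ProvisioningState.PENDING
                    || r.state == ProvisioningState.FAILED)
                    && r.retryCount < MAX_RETRIES
                    && !r.nextAttemptAt.isAfter(Instant.now());
            if (eligible) work.add(r);
        }
        return work;
    }

    static void process(ProvisioningRecord r) {
        try {
            // ...call the account service, then the billing service, etc....
            r.state = ProvisioningState.COMPLETED;
        } catch (RuntimeException e) {
            r.state = ProvisioningState.FAILED;
            r.retryCount++;
            r.lastError = e.getMessage();
            // simple backoff: wait retryCount minutes before the next attempt
            r.nextAttemptAt = Instant.now().plusSeconds(60L * r.retryCount);
        }
    }

    public static void main(String[] args) {
        List<ProvisioningRecord> table =
                List.of(new ProvisioningRecord(1), new ProvisioningRecord(2));
        for (ProvisioningRecord r : pickWork(table)) process(r);
        table.forEach(r -> System.out.println(r.id + " -> " + r.state));
    }
}
```

Rows that exceed MAX_RETRIES simply stay in FAILED, so you can inspect them manually instead of retrying forever.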
In the effort to redesign an asynchronous, flow-based functional service into an event-driven one, we have come up with changes to different parts of the system. The service receives various statuses from external services through an API, performs computations, and persists the results into the data store. The core logic has now been moved out of the API by introducing a queue (Kafka). Similarly, the query functionality is provided through another interface (API) fronted by a web UI. With this, the command and query sides are separated. See the diagram below.
I have a few questions about the approach:
Is it right to have the query API (read) service & the event-complete-handler (write) operate on the same database with both dependent on the DB schema? Or is it better to have the query-api read from the replica DB?
The core-business-logic, at the end of the computation, results in a write only to the database, not to the DB plus Kafka in a single transaction; persisting to the database is handled by the event-complete-handler. Is this approach better?
Say, in the future, the core-business-logic needs to query the database to do the computation on every event; can it read directly from the database? Again, does that not create a DB schema dependency between the services?
Is it right to have the query API (read) service & the event-complete-handler (write) operate on the same database with both dependent on the DB schema? Or is it better to have the query-api read from the replica DB?
"Right" is a loaded term. The idea behind CQRS is that the pattern can allow you to separate commands and queries so that your system can be distributed and scaled out. Typically they would be using different databases in a SOA/Microservice architecture. One service would process the command which produces an event on the service bus. Query handlers would listen to this event to change their data for querying.
For example:
A service which processes the CreateWidgetCommand would produce an event onto the bus with the properties of the command.
Any query services which are interested in widgets for producing their data views would subscribe to this event type.
When the event is produced, the subscribed query handlers will consume the event and update their respective databases.
When the query is invoked, they interrogate their own database.
This means you could, in theory, make the command handler as simple as throwing the event onto the bus.
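A minimal, in-process sketch of that flow. The "bus" here is just a list of subscribers, and WidgetCreated, the handler names, and the in-memory read database are illustrative, not a specific framework:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative event published when the command side creates a widget.
record WidgetCreated(String widgetId, String name) {}

// A toy service bus: query handlers subscribe, the command side publishes.
class Bus {
    private final List<Consumer<WidgetCreated>> subscribers = new ArrayList<>();
    void subscribe(Consumer<WidgetCreated> handler) { subscribers.add(handler); }
    void publish(WidgetCreated event) { subscribers.forEach(s -> s.accept(event)); }
}

// Command side: validates the command and throws the event onto the bus.
class CreateWidgetHandler {
    private final Bus bus;
    CreateWidgetHandler(Bus bus) { this.bus = bus; }
    void handle(String widgetId, String name) {
        // ...domain validation would happen here...
        bus.publish(new WidgetCreated(widgetId, name));
    }
}

// Query side: consumes the event and updates its own database (a map here).
class WidgetQueryService {
    private final Map<String, String> readDb = new HashMap<>();
    WidgetQueryService(Bus bus) { bus.subscribe(e -> readDb.put(e.widgetId(), e.name())); }
    String findName(String widgetId) { return readDb.get(widgetId); } // interrogate own DB
}

public class CqrsSketch {
    public static void main(String[] args) {
        Bus bus = new Bus();
        WidgetQueryService queries = new WidgetQueryService(bus);
        new CreateWidgetHandler(bus).handle("w-1", "Left-handed widget");
        System.out.println(queries.findName("w-1")); // -> Left-handed widget
    }
}
```

In a distributed setup the Bus would be a real broker and each query service would persist to its own database, but the shape of the code stays the same.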
The core-business-logic, at the end of the computation, results in a write only to the database, not to the DB plus Kafka in a single transaction; persisting to the database is handled by the event-complete-handler. Is this approach better?
No. If your question is about the transactionality of distributed systems, you cannot rely on traditional transactions, since a command may affect any number of distributed data stores. The way transactionality is handled in distributed systems is often with a compensating transaction, where you code the steps to reverse the mutations made from consuming the bus messages.
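A rough sketch of that compensation idea, with made-up step names: each step applies its local change, and if a later step fails, the already-applied steps are reversed by explicit compensating actions rather than by a database rollback.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CompensationSketch {

    // One step of a distributed operation: a forward action plus the action
    // that undoes it if a later step fails (both names are illustrative).
    record Step(String name, Runnable action, Runnable compensation) {}

    static void run(Step... steps) {
        Deque<Step> done = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.action().run();
                done.push(step);
            }
        } catch (RuntimeException failure) {
            // No cross-service transaction to roll back: instead, replay the
            // compensations of every step that already succeeded, in reverse order.
            System.out.println("Step failed: " + failure.getMessage());
            while (!done.isEmpty()) {
                Step step = done.pop();
                System.out.println("Compensating: " + step.name());
                step.compensation().run();
            }
        }
    }

    public static void main(String[] args) {
        run(
            new Step("reserve credit",
                     () -> System.out.println("credit reserved"),
                     () -> System.out.println("credit released")),
            new Step("create billing record",
                     () -> { throw new RuntimeException("billing service unavailable"); },
                     () -> System.out.println("billing record removed"))
        );
    }
}
```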
Say in the future, if the core-business-logic needs to query the database to do the computation on every event, can it directly read from the database? Again, does it not create DB schema dependency between the services?
If you follow the advice in the first response, the approach here should be obvious. All distinct queries are served from their own databases, which are kept "eventually consistent" by consuming events from the bus.
Typically these architectures have major complexity downsides, especially if you are concerned with consistency and transactionality.
People don't generally implement this type of architecture unless there is a specific need.
You can, however, design your code around CQRS and DDD so that, in the future, transitioning to this type of architecture can be relatively painless.
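One low-cost way to prepare for that, sketched below with made-up order types: keep the write path and the read path behind separate interfaces even while both are backed by the same database, so that splitting them into separate services and stores later does not ripple through the callers.

```java
import java.util.Optional;

// Write side: expresses intent, returns nothing that is meant for display.
interface OrderCommands {
    void placeOrder(String orderId, String productId, int quantity);
    void cancelOrder(String orderId);
}

// Read side: returns view models only, never exposes write operations.
interface OrderQueries {
    Optional<OrderSummary> findSummary(String orderId);
}

// A view model shaped for the screen, not for the write model's tables.
record OrderSummary(String orderId, String status, int quantity) {}
```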
The topic of DDD is too dense for this answer. I encourage you to do some independent learning.
If each instance of a service has a separate database in a microservices architecture, how can we keep the data synced? For instance, if instance#1 serves a request and stores data in its database db#1, and another request on instance#2 wants the data that was inserted into db#1 through instance#1, how can the database db#2 of instance#2 get the data from the database db#1 of instance#1? I think z-scaling is the solution here!
The microservice architecture uses a pattern called 'eventual consistency'. As you described, newly inserted data won't be immediately available in all databases. You can read more about it here.
That being said, the CQRS pattern is a popular way to solve the data distribution / eventual consistency problem.
By using a message broker / bus, you can publish so-called 'events' on a queue.
Microservices interested in changes to certain entities can subscribe to those events and save the data in their own database.
This enables loosely coupled microservices, and the data a service needs for certain entities is stored in its own database. Data duplication is OK, since we use eventual consistency to make sure everything (eventually) gets in sync across all microservices.
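A hedged sketch of such a subscriber using the plain Kafka consumer API. The topic name, consumer group, event format, and the in-memory "database" are assumptions for illustration; in a real service the upsert would go into that service's own database:

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CustomerEventsSubscriber {

    // Stand-in for this microservice's own copy of the customer data.
    private static final Map<String, String> localCopy = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "billing-service");          // hypothetical consumer group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("customer-events")); // hypothetical topic

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Upsert the event payload into this service's own store;
                    // the copy becomes consistent with the owner "eventually".
                    localCopy.put(record.key(), record.value());
                }
            }
        }
    }
}
```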
More information about the CQRS pattern using microservices can be found here
Here's a more practical example of something I'm working on right now. It's in Dutch, but the flow should be self-explanatory:
Hope this helps!
I suggest reading up on the following topics: CQRS, microservices, eventual consistency, and message brokers (RabbitMQ, Kafka, etc.).
Let's say we want to create an app with microservices.
We have a page where we display some items (products).
These products have multiple joins (categories, tags, users, and so on).
If the users and categories data live in other services, how can we manage and filter the results?
For example, in SQL you create 3 or 4 joins and get the result in one query.
With microservices I have to filter the categories, then filter the tags, and then the products; this could be 10 times slower than the SQL query.
Also, if I have a table "products_categories" which sets the categories for each product, which service is responsible for it? The Product service or the Category service?
Thank you
In a microservices architecture there are two ways to deal with this.
The API composition pattern— This is the simplest approach and should be used whenever possible. It works by making clients of the services that own the data responsible for invoking the services and combining the results.
The Command query responsibility segregation (CQRS) pattern— This is more powerful than the API composition pattern, but it’s also more complex. It maintains one or more view databases whose sole purpose is to support queries.
I would prefer to use CQRS: define a view database, which is a read-only replica built specifically to support that query. The replica is kept up to date by subscribing to the (create, update, insert) events published by the data-owner services.
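For contrast, here is what the API composition option mentioned above could look like: a composer calls each owning service over HTTP and does the "join" in memory. The service URLs, query parameters, and JSON handling are placeholders; a real composer would parse the responses and merge/filter them.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProductPageComposer {

    private static final HttpClient http = HttpClient.newHttpClient();

    // Hypothetical endpoints owned by three different services.
    static String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // Each call crosses a service boundary; the join happens here,
        // in memory, instead of inside one SQL query.
        String products   = fetch("http://product-service/products?page=1");
        String categories = fetch("http://category-service/categories?ids=...");
        String tags       = fetch("http://tag-service/tags?productIds=...");

        // A real composer would parse the three JSON payloads and merge them
        // into the view the page needs; printing stands in for that step.
        System.out.println(products + categories + tags);
    }
}
```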
This is a very standard problem whenever a microservice is built. People often feel microservices are the solution for everything, which is not true.
The solution to this problem is better design: design so that there is a balance between performance and redundancy of data. Higher performance (lower latency numbers) means more duplication of data across the different microservice databases. You should not aim for performance as good as SQL joins, but also do not duplicate data too much; a balance is needed.
Most importantly, dividing the requirements into the right set of microservices is needed.
I assume you created a "microservice" per database table. Those are not microservices, those are just HTTP-based CRUD interfaces to your database.
First, know why you need microservices. (Is there an actual reason?) Second, you have to create microservices that encompass at least one full (business) functionality for your software, meaning it doesn't need other services to do its job.
If you need a table that needs data from multiple microservices, you have by definition made the wrong microservices. If a microservice can't provide its own UI without the help of other services, it doesn't fully contain its own functionality.
What's stopping you from having multiple services for reading / writing to the same database / table? For example:
One service to write to categories
One service to write to tags
One service to write to products
You could then write another service to read from all three of these; however, this might not be at the HTTP level. Instead, you could read from the same database within your read service and leverage the power of SQL.
The service that reads could encompass your join logic, which would mean you wouldn't need to consume the other services around it.
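A small sketch of what that read service's query could look like with plain JDBC, assuming the shared schema has products, categories and products_categories tables (the table, column, and connection details are guesses made for illustration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ProductReadService {

    // Table and column names below are assumptions made for this sketch.
    private static final String QUERY = """
            SELECT p.id, p.name, c.name AS category
            FROM products p
            JOIN products_categories pc ON pc.product_id = p.id
            JOIN categories c           ON c.id = pc.category_id
            WHERE c.name = ?
            """;

    public static void main(String[] args) throws Exception {
        // The read service connects to the same database the write services use,
        // so it can let SQL do the join instead of calling the other services.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/shop", "reader", "secret");
             PreparedStatement stmt = conn.prepareStatement(QUERY)) {
            stmt.setString(1, "books");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name")
                            + " [" + rs.getString("category") + "]");
                }
            }
        }
    }
}
```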
I have some questions about event sourcing and CQRS in a microservices architecture.
I understand that after a command is sent, some microservice executes it and emits an event. The event store subscribes to it and saves it inside its database. Also, some ReadModel, based on this event, generates and saves optimized data inside a read database.
My first question is: can a microservice have its own database and store data inside it too? Or, in the event-sourcing approach, do microservices not have their own databases, with everything stored only inside the event store?
My second question is: when I execute a command in a microservice and need some data for validation purposes, do I need to call the ReadModel, or what? Assuming microservices don't have their own databases, do I have no other choice?
Can a microservice have its own database and store data inside it too?
Definitely, a microservice can have its own database. But let's use the terms from ES/CQRS: a database can represent an Event Store (an append-only log of immutable events) or a Read Model (some database used to answer queries, which is populated by processing events).
So, a microservice can have its own Read Model, populated from events from other microservices.
Or a microservice can process commands and save events to a shared Event Store.
Or a microservice can process commands and save events to its own Event Store.
The choice is yours, and it depends on the degree of separation you want to achieve among the microservices.
I would put all events that are usually consumed together into the same Event Store, which means I should be able to query for these events and get a single ordered stream as a result.
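A toy illustration of that idea: one append-only store holding events from related streams, with a global position so a single ordered stream can be read back. The event shape and stream names are made up for this sketch.

```java
import java.util.ArrayList;
import java.util.List;

public class EventStoreSketch {

    // An immutable, appended-once event; the global position gives total order.
    record StoredEvent(long position, String stream, String type, String payload) {}

    static class EventStore {
        private final List<StoredEvent> log = new ArrayList<>(); // append-only

        void append(String stream, String type, String payload) {
            log.add(new StoredEvent(log.size(), stream, type, payload));
        }

        // Read events of several related streams back as one ordered stream.
        List<StoredEvent> readAll() { return List.copyOf(log); }
    }

    public static void main(String[] args) {
        EventStore store = new EventStore();
        store.append("user-123", "UserRegistered", "{\"name\":\"somename\"}");
        store.append("order-7",  "OrderPlaced",    "{\"userId\":123}");
        store.append("user-123", "UserRenamed",    "{\"name\":\"othername\"}");
        store.readAll().forEach(e ->
                System.out.println(e.position() + " " + e.stream() + " " + e.type()));
    }
}
```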
when I execute a command in a microservice and need some data for validation purposes, do I need to call the ReadModel, or what?
A command is executed by an Aggregate, which has its own state. This state is built by processing all events for this aggregate, and it should be used to validate the command.
You cannot/should not talk to Read Models in the command handler, primarily because those read models are not consistent with aggregate state. Aggregate state is consistent.
You can query a Read Model before sending a command (to make sure it can be sent). But in the command handler you need to rely on the aggregate state only.
There is a famous case of registering a user with the requirement of a unique name. As a primary validation, your UI code can query the read model and tell the user that the entered name is taken. If the name is not taken, the UI lets the user issue a command. I'm assuming your Aggregate root is the user.
But when processing this command ({id:123, type:CREATE_USER, name:somename}) you cannot check that "somename" is taken, because the aggregate state for user 123 does not contain a list of taken names. You can potentially query some AllUsernames read model, but it can be milliseconds old, and some other user could have taken "somename" already. So in this scenario, you will find the duplication while adding names to the read model. At that point you can take some compensating action: usually, issue a command to suspend the user with the duplicated name and ask them to re-register or change their name somehow.
It may seem strange, but if you have a really distributed system with several replicas of the user list, you'll have the same problem, so why not just embrace the fact that data is never fully consistent and deal with it?
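A compressed sketch of that duplicate-name scenario: the command handler validates only against the aggregate's own state, and the projector that maintains the AllUsernames read model detects the collision afterwards and emits a compensating command. The class and command names are made up for illustration.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

public class UniqueNameSketch {

    record UserRegistered(long userId, String name) {}
    record SuspendUser(long userId, String reason) {}

    // Projector that builds the AllUsernames read model from events.
    static class AllUsernamesProjector {
        private final Set<String> takenNames = new HashSet<>();
        private final Consumer<SuspendUser> commandBus;

        AllUsernamesProjector(Consumer<SuspendUser> commandBus) { this.commandBus = commandBus; }

        void on(UserRegistered event) {
            boolean added = takenNames.add(event.name());
            if (!added) {
                // Duplicate discovered only now: compensate instead of rolling back.
                commandBus.accept(new SuspendUser(event.userId(),
                        "name '" + event.name() + "' already taken, please re-register"));
            }
        }
    }

    public static void main(String[] args) {
        AllUsernamesProjector projector =
                new AllUsernamesProjector(cmd -> System.out.println("compensating: " + cmd));
        // Two users registered the same name before the read model caught up.
        projector.on(new UserRegistered(123, "somename"));
        projector.on(new UserRegistered(456, "somename")); // triggers SuspendUser(456, ...)
    }
}
```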
The company I work for is investigating moving from our current monolithic API to microservices. Our current API is heavily dependent on spring and we use SQL server for most persistence. Our microservice investigation is leaning toward spring-cloud, spring-cloud-stream, kafka, and polyglot persistence (isolated database per microservice).
I have a question about how messaging via Kafka is typically done in a microservice architecture. We're planning to have a coordination layer between the set of microservices and our client applications, which will coordinate activities across different microservices and isolate clients from changes to microservice APIs. Most of the stuff we've read about using spring-cloud-stream and Kafka indicates that we should use streams at the coordination layer (source) for resource change operations (inserts, updates, deletes), with the microservice being one consumer of the messages.
Where I've been having trouble with this is inserts. We make heavy use of database-assigned identifiers (identity columns/auto-increment columns/sequences/surrogate keys), and they're usually assigned as part of a post request and returned to the caller. The coordination layer may be saving multiple things using different microservices and often needs the assigned identifier from one insert before it can move on to the next operation. Using messaging between the coordination layer and microservices for inserts makes it so the coordination layer can't get a response from the insert operation, so it can't get the assigned identifier that it needs. Additionally, other consumers on the stream (i.e. consumers that publish the data to a data warehouse) really need the message to contain the assigned identifier.
How are people dealing with this problem? Are database-assigned identifiers an anti-pattern in microservices? Should we expose separate microservice endpoints that return database-assigned identifiers so that the coordination layer can make a synchronous call to get an identifier before calling the asynchronous insert? We could use UUIDs but our DBAs hate those as primary keys, and they couldn't be used as an order number or other user-facing generated ids.
If you can programmatically create the identifier earlier while receiving from the message source, you can embed the identifier as part of the message header and subsequently use the message header information during database inserts and in any other consumers.
But this approach requires a separate verification by the other consumers against the database to process only the committed transactions (if you are concerned about processing only the inserts).
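A small sketch of that header idea with the plain Kafka producer API: the coordination layer generates the identifier up front (a UUID here, though any pre-allocated id would do) and attaches it as a header, so the owning microservice and any other consumers all see the same id. The topic, header name, and payload are made up.

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CreateAccountPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        // Identifier created *before* the insert, so the coordination layer
        // already knows it and can use it for the next operation immediately.
        String accountId = UUID.randomUUID().toString();

        ProducerRecord<String, String> record = new ProducerRecord<>(
                "account-commands",                                  // hypothetical topic
                accountId,                                           // key keeps ordering per account
                "{\"type\":\"CREATE_ACCOUNT\",\"name\":\"ACME\"}");  // illustrative payload
        record.headers().add("entity-id", accountId.getBytes(StandardCharsets.UTF_8));

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(record);
        }
    }
}
```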
At our company, we built a dedicated service responsible for unique id generation, and every other service grabs the ids it needs from there.
These generated ids couldn't be used as an order number, but I think they shouldn't be used for that job anyway. If you need to sort by creation date, it's better to have a created_date field.
One more thing that used to bug me about this approach is that the primary resource might be persisted after another resource that references it by id. For example, an insert-user payload and an insert-user-address payload are sent asynchronously. The insert-user payload contains a generated unique id, and the user-address payload contains that id as a foreign reference back to the user. The insert-user-address request might be processed before the insert-user request, but that's totally fine. I think it's called eventual consistency.
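One common way to cope with that ordering problem, sketched below with invented types and in-memory "tables": the address consumer checks whether the referenced user has arrived yet, and if not, it parks the message and retries it once the parent shows up instead of failing.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class OutOfOrderSketch {

    record InsertUser(String userId, String name) {}
    record InsertUserAddress(String userId, String street) {}

    static final Map<String, InsertUser> users = new HashMap<>();            // user "table"
    static final Map<String, InsertUserAddress> addresses = new HashMap<>(); // address "table"
    static final Deque<InsertUserAddress> parked = new ArrayDeque<>();       // waiting for parent

    static void onUser(InsertUser msg) {
        users.put(msg.userId(), msg);
        // A parent arrived: retry any addresses that were parked for it.
        parked.removeIf(a -> {
            if (a.userId().equals(msg.userId())) { addresses.put(a.userId(), a); return true; }
            return false;
        });
    }

    static void onAddress(InsertUserAddress msg) {
        if (users.containsKey(msg.userId())) {
            addresses.put(msg.userId(), msg);
        } else {
            parked.add(msg); // parent not persisted yet; handle it later
        }
    }

    public static void main(String[] args) {
        // Address arrives before the user it references: eventually consistent.
        onAddress(new InsertUserAddress("u-1", "Main Street 1"));
        onUser(new InsertUser("u-1", "Alice"));
        System.out.println(addresses.get("u-1")); // present after the retry
    }
}
```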