Design of state machine logic in REST application - spring

I am trying to finalize the design for my use case of a REST application.
It is like an online order application: it will accept the order details, process them,
and once processing is finished it will update the status in the database.
During fulfillment there can be multiple tasks that will be invoked. There will be another REST endpoint which will be used to get the status of an order.
So there will be state transitions like below:
Received --> Fulfilling --> Fulfilled
I stumbled upon the spring-statemachine framework and it looks interesting. Considering the above use case,
is spring-statemachine the right choice for it? Also, is there any example project to understand it
in more detail?

Considering the above use case, is spring-statemachine the right choice for it?
Yes, Spring Statemachine is a good choice for this use case.
Also, is there any example project to understand it in more detail?
Yes, there are a lot of example projects and in fact, there's one for order shipping/processing:
official order shipping recipe documentation
official order shipping github repo
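To give a feel for it, here is a minimal sketch of how the Received --> Fulfilling --> Fulfilled flow could be configured with spring-statemachine. The enum names, event names, and class name are made up for this example:

```java
import java.util.EnumSet;

import org.springframework.context.annotation.Configuration;
import org.springframework.statemachine.config.EnableStateMachine;
import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;

// Hypothetical states and events for the order flow described above.
enum OrderStates { RECEIVED, FULFILLING, FULFILLED }
enum OrderEvents { START_FULFILLMENT, COMPLETE_FULFILLMENT }

@Configuration
@EnableStateMachine
class OrderStateMachineConfig
        extends EnumStateMachineConfigurerAdapter<OrderStates, OrderEvents> {

    @Override
    public void configure(StateMachineStateConfigurer<OrderStates, OrderEvents> states)
            throws Exception {
        states.withStates()
                .initial(OrderStates.RECEIVED)       // every new order starts here
                .end(OrderStates.FULFILLED)          // terminal state
                .states(EnumSet.allOf(OrderStates.class));
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<OrderStates, OrderEvents> transitions)
            throws Exception {
        transitions
                .withExternal()
                    .source(OrderStates.RECEIVED).target(OrderStates.FULFILLING)
                    .event(OrderEvents.START_FULFILLMENT)
                .and()
                .withExternal()
                    .source(OrderStates.FULFILLING).target(OrderStates.FULFILLED)
                    .event(OrderEvents.COMPLETE_FULFILLMENT);
    }
}
```

Your order-processing code would then send events into the machine as fulfillment tasks complete, and the status endpoint would simply read the machine's current state (persisted to your database, e.g. via the StateMachinePersist support, if you need it to survive restarts).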

Related

Use an event on a message bus to trigger a suspended activity

newbie here.
Reading the docs I understand we can use an incoming HTTP request as a trigger to wake up a suspended activity.
In my case, the business trigger is the arrival of a message on a bus (from another system).
I thought of building a dedicated hosted service that just listens to messages arriving on the bus and invokes/triggers the respective activities.
Would I be following the suggested patterns if I do that? It feels wrong, as I'd be writing some custom external code rather than relying on the declarative approach usually described in the Elsa docs.
Any thoughts welcome.
This is a great question. Both patterns are valid, and in fact the declarative approach depends on supporting infrastructure (such as hosted services).
For example, let's take the HttpEndpoint and AzureServiceBusMessageReceived activities.
Both of them require supporting infrastructure:
HttpEndpoint depends on ASP.NET Core middleware to trigger workflows as HTTP requests come in
AzureServiceBusMessageReceived depends on a hosted service that contains message workers to trigger the appropriate workflows.
For your case, you don't have to write your own hosted service if you can use one of the existing messaging activities, since it's already done for you.
At the same time, it's perfectly OK to have your own hosted service that consumes messages and triggers workflows yourself. You could even make it a bit fancier by having your hosted service trigger business-specific activities.
For example, rather than triggering some low-level "message received" activity, you could trigger an "order created" activity if that is what the message is all about.
More details about implementing these types of activities can be found at https://elsa-workflows.github.io/elsa-core/docs/guides/guides-blocking-activities.
As you already discovered, there are also examples in the repository https://github.com/elsa-workflows/elsa-core/tree/master/src/samples.
I was only considering the Elsa Guides, but just discovered a whole list of additional samples in the Elsa-Core project itself. In particular, there are several examples that seem to handle my use case (for example, Elsa.Samples.RabbitMqWorker).

Spring HATEOAS: Practicable for a microservice architecture?

I know this question was already asked but I could not find a satisfying answer.
I started to dive deeper into building a real RESTful API and I like its constraint of using links for decoupling. So I built my first service (with Java/Spring) and it works well (although I struggled a bit with finding the right format, but that's another question). After this first step I thought about my real-world use case: microservices. Highly decoupled individual services. So I mapped this onto my previous scenario and ran into some problems and doubts.
SCENARIO:
My setup consists of a reverse proxy (Traefik, which works as service discovery and API gateway) and two microservices. In addition, there is an OpenID Connect security layer. My services are a Player service and a Team service.
So after auth I have an access token with the userId, and I am able to call player/userId to get the player information and teams?playerId=userId to get all the teams of the player.
In my opinion, both responses should link to the opposite service: player/userId would link to teams?playerId=userId and vice versa.
QUESTION:
I haven't found a solution besides linking via a hardcoded URL. But this comes with so many downsides that I can't imagine it is a solution used in real-world applications. I mean, just imagine your API is a bit more advanced and you have to link to 10 resources. If something changes, you have to refactor and redeploy them all.
Besides the synchronization problem, how do you handle state in such a case? I mean, REST is all about state transfer. So I won't offer the link from the player to the teams service if the player is in no team. Of course I can add the team IDs as an attribute of the player to decide whether to include the link or not, but this again increases coupling between the services.
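For concreteness, here is a sketch of what that conditional link could look like with Spring HATEOAS; the controller, service, and types are hypothetical:

```java
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.Link;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical domain type and lookup service for this sketch.
record Player(String id, boolean hasTeams) {}

interface PlayerService {
    Player find(String userId);
}

@RestController
class PlayerController {

    private final PlayerService playerService;

    PlayerController(PlayerService playerService) {
        this.playerService = playerService;
    }

    @GetMapping("/player/{userId}")
    public EntityModel<Player> player(@PathVariable String userId) {
        Player player = playerService.find(userId);
        EntityModel<Player> model = EntityModel.of(player,
                linkTo(methodOn(PlayerController.class).player(userId)).withSelfRel());
        // Only advertise the transition to the team service when it is valid,
        // i.e. when the player actually belongs to at least one team.
        if (player.hasTeams()) {
            model.add(Link.of("/teams?playerId=" + userId, "teams"));
        }
        return model;
    }
}
```

This works, but it illustrates exactly the trade-off described above: the player service has to know something about team membership in order to decide whether to render the link.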
The more I dive in, the more obstacles I find, and I'm about to just stay with my Spring REST Docs and neglect the core of REST, which is a pity to me.
Practicable for a microservice architecture?
Fielding, 2000
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
Fielding, 2008
REST is intended for long-lived network-based applications that span multiple organizations.
It is not immediately clear to me that "microservices" are going to fall into the sweet spot of "the web". We're not, as a rule, trying to communicate with a microservice that is controlled by another company; we often don't get a lot of benefit out of caching, or code on demand, or the other REST architectural constraints. How important is it to us that we can use general-purpose components to exchange information between different microservices within our solution? And so on.
If something changes, you have to refactor and redeploy them all.
Yes; and if that's going to be a problem for us, then we need to invest more work up front to define a stable interface between the two. (The fact that we are using "links" isn't special in that regard - if these two things are going to talk to each other, then they are going to need to speak a common language; if that common language needs to evolve over time (likely) then you need to build those capabilities into it).
If you want change over time, then you have to plan for it.
If you want backwards/forwards compatibility, then you have to plan for it.
Your identifiers don't need to be static - there are lots of possible ways of deferring the definition of an identifier; the most obvious being that you can use another identifier to look up the identifier you want, or the formula for calculating it, or whatever.
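As a hedged sketch of that kind of indirection: with Spring HATEOAS's Traverson client, only the entry point is hardcoded, and the actual URLs are discovered by relation name at runtime (the relation names and gateway URL here are made up):

```java
import java.net.URI;

import org.springframework.hateoas.Link;
import org.springframework.hateoas.MediaTypes;
import org.springframework.hateoas.client.Traverson;

class TeamsLinkResolver {

    // Only the entry point is fixed; the "player" and "teams" relation names
    // form the stable contract, while the URLs behind them are free to change.
    Link resolveTeamsLink(String gatewayRoot) {
        Traverson traverson = new Traverson(URI.create(gatewayRoot), MediaTypes.HAL_JSON);
        return traverson.follow("player", "teams").asLink();
    }
}
```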
Think about how Google works - the links they use change all the time, but it doesn't matter, because the protocol (refresh your bookmarked search form, enter your text in "the" one field, click the button) hasn't changed in 20 years. The interface is stable (even though the underlying spellings of the identifiers are not) and that's enough.

Is there a way to enable trace to display the sequence of decisions executed for a DMN in Kogito?

I finally got my sample dmn-quarkus example running. Is there a property that enables the trace, such that it prints the sequence of decisions executed?
I noticed that when I provide an incorrect JSON for my DMN model, Kogito responds with a detailed response, telling me which decision failed.
This is awesome! Is there a property to turn on to get the details in each response?
Kogito is based on a domain-model-first approach to code generation:
Kogito ergo domain
Kogito adapts to your business domain rather than the other way around [...]
This means the automatically code-generated API will always take the "shape" of the input/output context of the DMN model, and no longer follows the v7.x kie-server approach of a generic API.
The information you obtain on error is meant to provide an analogue of a stack trace.
You can always leverage the Kogito API programming model to build the REST service yourself, in the way that best fits your specific business requirement, should that be to provide a list of DMNDecisionResult(s). For instance, a pragmatic approach could be to inspect the automatically generated code and then code a bespoke service based on it.
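For illustration only, such a bespoke service might look like the following sketch. It assumes Kogito's DecisionModels API; the path, namespace, and model name are placeholders you would replace with the values from your .dmn file:

```java
import java.util.List;
import java.util.Map;

import javax.inject.Inject;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.kie.dmn.api.core.DMNDecisionResult;
import org.kie.dmn.api.core.DMNResult;
import org.kie.kogito.decision.DecisionModel;
import org.kie.kogito.decision.DecisionModels;

@Path("/my-model/decision-results")
public class DecisionResultsResource {

    @Inject
    DecisionModels decisionModels; // provided by the Kogito extension

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public List<DMNDecisionResult> evaluate(Map<String, Object> payload) {
        // Placeholder namespace/name: use the values from your .dmn model.
        DecisionModel model = decisionModels.getDecisionModel(
                "https://example.org/dmn", "MyModel");
        DMNResult result = model.evaluateAll(model.newContext(payload));
        // One entry per decision that was evaluated, with status and value.
        return result.getDecisionResults();
    }
}
```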
We are looking into audit functional requirements, but that is not provided out of the box yet. We always welcome community feedback, especially in these very early versions! Don't hesitate to join the community on our mailing list or raise a JIRA ticket to take part in the conversation; the team will be glad to look further into it considering community feedback and suggestions!

Designing microservices in practice

Yet another question on how to or how not to split up a microservice :-D
The scenario:
What do we need?
Sending emails at different points in time within the workflow of an e-commerce order process. These mails will contain order information.
What do we have?
1 x persistence service which retrieves order information
Several services which subscribe to order events and processes the relevant use case (e.g. Confirmation, delivery, invoice)
1 x service which can be triggered to send a mail
What's the next step?
Designing the architectural component which transforms the order information so that it fits the data structure of the email rendering service.
The current options are
1. Have each processing service transform the already existing order information for the mail template and send it to the mail rendering service.
2. Have each processing service call a new service which would aggregate and transform the order information and call the mail rendering service (a sketch of this option follows below).
Currently we're not sure yet if the data structures for the mail templates will be mostly common or if there will be differences.
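To make option 2 more concrete, here is a minimal sketch of the seam such an aggregating service could expose; all names are hypothetical:

```java
import java.util.Map;

// Hypothetical seam for option 2: one service owns the mapping from order
// data to whatever structure the mail rendering service expects.
interface OrderMailComposer {

    // Aggregates order information for the given event type (confirmation,
    // delivery, invoice, ...) and adapts it to the rendering service's input.
    MailRenderRequest compose(String orderId, String eventType);
}

// Illustrative payload for the mail rendering service.
record MailRenderRequest(String templateId, Map<String, Object> templateData) {}
```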
So what do you think of these options in terms of cohesion, coupling and separation of concerns?
Do you need any more information? Any constructive thoughts are welcome!
Your software architecture should reflect your organizational structure; see Conway's law.
Do you have multiple teams, and do you want to minimize dependencies between the teams?
Are "services" large and complex enough to justify them being separated into modules?
Does the size of the product justify having advanced devops in place to orchestrate the microservices?
Do you need the flexibility in terms of deployment and replaceability of individual "services"?
If you can answer yes to most of these questions, it would make sense to go for microservices. Otherwise, you are just making your life complicated.
Frankly, microservices require a lot of coordination overhead, which makes sense only if the product is large enough. Most (small) projects are just fine with a monolithic MVC architecture.
This is how I propose to proceed; it's how one of my projects handles all its SMTP-related work.
1. The API receives an HTTP request.
2. It persists the needed data to the database.
3. It offloads the long-running and memory-intensive processing to the mail builder.
4. Optionally, the mail builder builds attachment files (XLSX, PDF, etc.).
5. The mail builder uploads them to the file server.
6. The mail builder offloads the generic SMTP sending to the SMTP service.
I suggested this format because it allows you to scale the instances of each piece (the mail builder will have tons of instances) depending on the bottlenecks in your processing pipeline.
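To sketch steps 3 and 6 of that pipeline, assuming a broker such as RabbitMQ via Spring AMQP with a JSON message converter configured; the queue names and payload types are made up for this example:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

// Hypothetical message payloads for this sketch.
record OrderMailRequest(String orderId, String recipient, String templateId) {}
record SmtpMessage(String to, String subject, String body) {}

@Component
class MailBuilderWorker {

    private final RabbitTemplate rabbit;

    MailBuilderWorker(RabbitTemplate rabbit) {
        this.rabbit = rabbit;
    }

    // The API publishes build requests to "mail.build" and returns immediately;
    // this worker does the heavy lifting and can be scaled out independently.
    @RabbitListener(queues = "mail.build")
    public void onBuildRequest(OrderMailRequest request) {
        // Build attachments / render the template here (omitted), then hand
        // the generic SMTP send over to the SMTP service via its queue.
        SmtpMessage message = new SmtpMessage(
                request.recipient(),
                "Your order " + request.orderId(),
                "Rendered body for template " + request.templateId());
        rabbit.convertAndSend("mail.send", message);
    }
}
```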
Given that you have asked this question under microservices, I am assuming you are asking it in reference to cloud-native patterns.
I suggest you start by looking at microservice patterns. An excellent site for the patterns is https://microservices.io/patterns/microservices.html.
Your question does not have the necessary details to allow educated advice on which patterns are suitable and which are not. So, I suggest you look at these few patterns:
https://microservices.io/patterns/data/shared-database.html
https://microservices.io/patterns/data/database-per-service.html
Also take a look at the event sourcing pattern:
https://microservices.io/patterns/data/event-sourcing.html
Hope this helps.

Which workflow engine should I choose for implementing a dynamic reconfiguration of workflows?

I want to be able to interrupt a running workflow instance, say when a new activity is about to be invoked, and extract information both about the structure of the workflow and the data in the particular instance. Then I will consult with an external system and according to its response I will possibly alter the behaviour of the workflow. The options I would like to have are addition/removal of activities and altering parameters for the activities to be invoked.
I am currently struggling with which engine is best to go with. I have looked at WWF, Apache ODE, Oracle Workflow and Active BPEL, and as far as I understand they can all provide me with the options I need. I would really appreciate any recommendations on which one will be the easiest to work with for my purpose, and any restrictions any of the above might have that would prevent me from reaching my goal.
Thanks
I am sorry not to directly answer your question, but you may be interested in a state machine framework called Stateless, created by Nicholas Blumhardt (of Autofac). I have used this instead of Windows Workflow where I needed to quickly configure my steps for a workflow. I have one configuration file that I alter, and I can introduce new steps into the workflow quite easily. See my SO answer here for more details.
Essentially you define a state as State<T>, and this allows you to persist your state in a database easily.
