I have, for example, 2 flows that should end in the same transitions to the same states, e.g.,
Flow 1 ends in either go to state A or B.
Flow 2 ends in either go to state A or B.
Right now, I seem to need to define the same end-state for A and B in flow1.xml and flow2.xml.
Is there any way they can all share the same states, A and B?
I've tried creating something like flowState and defining two end states in it, and trying to refer to them in flows 1 and 2 like
flowState#stateA and flowstate#stateB
but no luck. Any help??
Refactor the common states into a subflow, and call that subflow from the different main flows where you want to reuse them.
You can even pass parameters to the subflow to configure it using the Spring Expression Language if needed.
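For example (a minimal sketch; the flow ids, state names, input parameter and routing logic are made up for illustration), the shared outcomes could live in their own flow definition:

<!-- shared-outcome.xml: the reusable subflow -->
<flow xmlns="http://www.springframework.org/schema/webflow"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/webflow
                          http://www.springframework.org/schema/webflow/spring-webflow.xsd">

    <!-- optional parameter passed in from the calling flow -->
    <input name="outcome" />

    <!-- common logic deciding which shared end state applies -->
    <decision-state id="route">
        <if test="outcome == 'A'" then="stateA" else="stateB" />
    </decision-state>

    <end-state id="stateA" />
    <end-state id="stateB" />
</flow>

flow1.xml and flow2.xml would then call it through a subflow-state instead of redefining the logic:

<subflow-state id="finish" subflow="shared-outcome">
    <input name="outcome" value="flowScope.outcome" />
    <!-- the id of the subflow's end state becomes the event the parent transitions on -->
    <transition on="stateA" to="endA" />
    <transition on="stateB" to="endB" />
</subflow-state>

<end-state id="endA" />
<end-state id="endB" />

Each calling flow still needs its own end states to terminate, but the common logic, views and transition rules live once in the subflow, and the inputs (or SpEL expressions) let each caller configure it.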
I am attempting to accomplish something along these lines with Quarkus and Narayana:
client calls service to start a process that takes a while: /lra/start
This call sets off an LRA, and returns an LRA id used to track the status of the action
client can keep polling some endpoint to determine status
service eventually finishes and marks the action done through the coordinator
client sees that the action has completed, is given the result or makes another request to get that result
Is this a valid use case? Am I visualizing the correct way this tool can work? Based on how the linked guide reads, it seems that the endpoints are more of a passthrough to the coordinator, notifying it that we start and end an LRA. Is there a more programmatic way to interact with the coordinator?
Yes, it might be a valid use case, but in any case please read the MicroProfile LRA specification - https://github.com/eclipse/microprofile-lra.
What you describe is more or less one LRA participant executing in a new LRA while the client polls the status of that execution. This is not exactly what LRA is intended for, but it can certainly be used this way.
The main idea of LRA is the composition of distributed transactions based on the saga pattern. Basically, the point is to coordinate multiple services to achieve consistent results with an eventual consistency guarantee. So the main benefit arises when you can propagate the LRA through different services that either all complete their actions or, in case of failure, have their compensation callbacks called (and, of course, only for the services that executed their actions in the first place). Here is also an example with LRA propagation: https://github.com/xstefank/quarkus-lra-trip-example.
EDIT: Sorry, I forgot to add the programmatic API that allows the same interactions as the annotations - https://github.com/jbosstm/narayana/blob/master/rts/lra/client/src/main/java/io/narayana/lra/client/NarayanaLRAClient.java. Note, however, that it is not part of the specification and is specific to Narayana.
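For illustration, here is a minimal sketch of an annotation-based participant resource (the paths, method names, and the background work are placeholders, and depending on your Quarkus version the imports may be jakarta.ws.rs instead of javax.ws.rs):

import java.net.URI;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

@Path("/lra")
public class PolicyResource {

    // Starts a new LRA; end = false keeps it running after this call returns,
    // so the client gets the LRA id back and can poll while the work continues.
    @POST
    @Path("/start")
    @LRA(value = LRA.Type.REQUIRES_NEW, end = false)
    public Response start(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // kick off the long-running work and remember lraId (details are up to you)
        return Response.accepted(lraId.toASCIIString()).build();
    }

    // Called when the work is done; an active LRA is mandatory here,
    // and end = true (the default) closes the LRA with the coordinator.
    @PUT
    @Path("/finish")
    @LRA(value = LRA.Type.MANDATORY)
    public Response finish(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok().build();
    }

    // Invoked by the coordinator if the LRA is cancelled; undo the work here.
    @PUT
    @Path("/compensate")
    @Compensate
    public Response compensate(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok().build();
    }
}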
I am trying to understand which of the following two options is the right approach and why.
Say we have a GetHotelInfo(hotel_id) API that is invoked from the Web down to the Controller.
The logic of the GetHotelInfo is:
Invoke GetHotelPropertyData() (Location, facilities…)
Invoke GetHotelPrice(hotel_id, dates…)
Invoke GetHotelReviews(hotel_id)
Once all results come back, process and merge the data and return 1 object that contains all relevant data of the hotel.
Option 1:
Create 3 different repositories (HotelPropertyRepo, HotelPriceRepo, HotelReviewsRepo)
Create a GetHotelInfo usecase that will use these 3 repositories and return the final result.
Option 2:
Create 3 different repositories (HotelPropertyRepo, HotelPriceRepo, HotelReviewsRepo)
Create 3 different usecases (GetHotelPropertyDataUseCase, GetHotelPriceUseCase, GetHotelReviewsUseCase)
Create a GetHotelInfoUseCase that will orchestrate the previous 3 usecases. (It can also be a controller, but that's a different topic.)
Let’s say that right now only GetHotelInfo is being exposed to the Web but maybe in the future, I will expose some of the inner requests as well.
And would the answer be different if the actual logic of GetHotelInfo is not a combination of 3 endpoints but rather 10?
You can see a similar method (called Get()) in "Clean Architecture with Go" by Manato Kuroda.
Manato points out that:
following the Acyclic Dependencies Principle (ADP), dependencies only point inward in the circle, never outward, and there are no cycles.
the Controller and Presenter depend on the Use Case Input Port and Output Port, which are defined as interfaces rather than as specific logic (the details). This is possible (without knowing the details in the outer layer) thanks to the Dependency Inversion Principle (DIP).
That is why, in the example repository manakuro/golang-clean-architecture, Manato creates three directories for the Use Cases layer:
repository,
presenter: in charge of the Output Port,
interactor: in charge of the Input Port, with a set of methods implementing specific application business rules, depending on the repository and presenter interfaces.
You can adapt that example to your case, with GetHotelInfo declared first in the hotel_interactor.go file, depending on the specific business methods declared in hotel_repository and the responses defined in hotel_presenter.
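As a rough sketch of that structure (the linked repository is in Go; the Java types and names below are invented purely for illustration):

import java.util.List;

// Hypothetical data types just to make the sketch self-contained.
record PropertyData(String location, List<String> facilities) {}
record PriceData(double nightlyRate, String currency) {}
record Review(String author, int rating) {}
record DateRange(String checkIn, String checkOut) {}
record HotelInfo(PropertyData property, PriceData price, List<Review> reviews) {}

// Repository interfaces the use case depends on (declared in the inner layer,
// implemented in the outer layer - Dependency Inversion).
interface HotelPropertyRepo { PropertyData propertyData(String hotelId); }
interface HotelPriceRepo    { PriceData price(String hotelId, DateRange dates); }
interface HotelReviewsRepo  { List<Review> reviews(String hotelId); }

// The interactor: orchestrates the three calls and merges the result.
final class GetHotelInfoUseCase {
    private final HotelPropertyRepo properties;
    private final HotelPriceRepo prices;
    private final HotelReviewsRepo reviews;

    GetHotelInfoUseCase(HotelPropertyRepo properties, HotelPriceRepo prices, HotelReviewsRepo reviews) {
        this.properties = properties;
        this.prices = prices;
        this.reviews = reviews;
    }

    HotelInfo getHotelInfo(String hotelId, DateRange dates) {
        // Each of these calls could equally be a dedicated use case (Option 2);
        // the orchestration in this method stays the same either way.
        return new HotelInfo(
                properties.propertyData(hotelId),
                prices.price(hotelId, dates),
                reviews.reviews(hotelId));
    }
}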
It is expected that interactors (use case classes) call other interactors, so both approaches follow Clean Architecture principles.
But the "maybe in the future" phrase goes against good design and architecture practices.
We can and should think in the most abstract way so that we favor reuse, but always keep things simple and avoid unnecessary complexity.
And would the answer be different if the actual logic of GetHotelInfo is not a combination of 3 endpoints but rather 10?
No, it would be the same. However, as you are designing APIs, if you ever need to combine dozens of endpoints, you should start considering putting a GraphQL layer in front instead of adding complexity to the project.
Clean is not a well-defined term. Rather, you should be aiming to minimise the impact of change (adding or removing a service). And by "impact" I mean not only the cost and time factors but also the risk of introducing a regression (breaking a different part of the system that you're not meant to be touching).
To minimise the "impact of change" you would split these into separate services/bounded contexts and allow interaction only through events. The 'controller' would raise an event (on a shared bus) like 'hotel info request', and each separate service (property, price, and reviews) would respond independently and asynchronously (maybe on the same bus), leaving the controller to aggregate the results and return them to the client, which could be done after some period of time. If you code the result aggregator appropriately it would be possible to add new 'features' or remove existing ones completely independently of the others.
To improve on this you would then separate the read and write functionality of each context into its own context, each responding to appropriate events. This will allow you to optimise and scale the write function independently of the read function. We call this CQRS.
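A very rough sketch of the aggregation side, assuming some bus abstraction exists (the Bus interface, topic names and timeout below are invented for illustration):

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical bus abstraction: publish an event, await a correlated response per topic.
interface Bus {
    void publish(String topic, String correlationId, Object payload);
    CompletableFuture<Object> awaitResponse(String topic, String correlationId);
}

final class HotelInfoAggregator {
    private final Bus bus;

    HotelInfoAggregator(Bus bus) { this.bus = bus; }

    CompletableFuture<List<Object>> requestHotelInfo(String hotelId, String correlationId) {
        // Raise a single 'hotel info requested' event; the property, price and reviews
        // services each pick it up and reply independently on their own topics.
        bus.publish("hotel-info-requested", correlationId, hotelId);

        CompletableFuture<Object> property = bus.awaitResponse("hotel-property", correlationId);
        CompletableFuture<Object> price    = bus.awaitResponse("hotel-price", correlationId);
        CompletableFuture<Object> reviews  = bus.awaitResponse("hotel-reviews", correlationId);

        // Adding or removing a feature only means adding or removing one line here;
        // the other services are untouched.
        return CompletableFuture.allOf(property, price, reviews)
                .orTimeout(5, TimeUnit.SECONDS)
                .thenApply(v -> List.of(property.join(), price.join(), reviews.join()));
    }
}

The producing services never know about each other; they only subscribe to the request event and publish their own responses.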
We have multiple (50+) NiFi flows that all do basically the same thing: pull some data out of a DB, append some columns, convert to Parquet and upload to HDFS. They differ only in details such as the SQL query to run or the location in HDFS where they land.
The question is how to factor these common NiFi flows out such that any change made to the common flow automatically applies to all derived flows. E.g. if I want to add an extra step to also publish the data to Kafka, I want to make that change once and have it automatically apply to all 50 flows.
We've tried to get this working with NiFi Registry, however it seems like an imperfect fit. Essentially the issue is that NiFi Registry seems to work well for updating a flow in one environment (say UAT) and then automatically updating it in another environment (say prod). It seems less suited for updating multiple flows in the same environment, with one specific example being that it resets the name of each flow to the template name every time we redeploy, meaning that all flows end up with the same name!
Does anyone know how one is supposed to manage a situation like ours, as I guess it must be pretty common?
Apache NiFi has Process Groups. As the name suggests, a process group is there to group together a set of processors and their pipeline that perform a similar task.
So for your case, you can refactor the flow by moving the common, reusable part into a separate process group with an input port. Connect each outside flow that depends on this reusable flow to the input port of the reusable process group. Depending on your requirements, you can also create an output port in this process group and connect it to the outside flow.
Attaching a sample:
For the sake of explanation, I have made a mock flow, so ignore the processor types used and look instead at the names I have given those processors.
The following screenshots show that I read from two different sources and individually connect each of them to a processor that makes the source-specific changes.
Then I connect these two flows to the input port of a process group that has the reusable flow inside. So ultimately the two different flows shown in the above screenshot work with a common reusable flow.
Showing what's inside the reusable flow:
Finally, the output port output to outside connects the reusable flow to the outside component Write to somewhere.
I hope this helps you with refactoring your complex flows. Feel free to get back, if you have any queries.
I'm a beginner at Camunda/BPMN and I want to use it to control what is going on in nodejs, most likely using a REST API, at least for now. (Unless folks have a better idea for how nodejs should talk to Camunda.) My goal is to deliver systems where non-programmers can update the business logic in very practical ways.
I'd like to trigger the start of perhaps more-than-one process by sending a REST message, say to reflect that "a new insurance policy has been sold", and that might trigger the instantiation of, say, 2 processes on Monday, but perhaps on Tuesday we add a third, and now the same REST API call should trigger more activity on Wednesday. (I figure it is better for nodejs to know about events but not about the process definitions. After all, my goal is to use Camunda as a sort of business logic server for my application. The less the nodejs code needs to know, the better.)
Which REST API should I be using to express the message that, say "a new insurance policy has been sold"? When I look at:
https://docs.camunda.org/manual/develop/reference/rest/signal/post-signal/
I find it very confusing. What should "name" match in the biz process definitions? I assume I don't need an executionId? I assume I can leave out tenantId?
Would some string in the message match the ID of a start event in one or more process definitions (or what has to match what)?
When I look at a process, is there an easy way to tell what variables I need to supply to start that process running?
Should I perhaps avoid using this event-oriented style of kicking off processes and just use the POST /process-definition/key/{key}/start? It would seem to me to be better form to trigger activity with events or signals or something like that rather than to have my nodejs code know about the specific process definition by name.
Should I be using events or signals in this case?
I gather that the start event should not be a "None Start Event" but I'm not clear on what type of start event TO use if I want automatic triggering based on events or signals or something? Would a "Non-interrupting - Message Start Event" be the right sort? I'm finding this confusing.
Once I have triggered the process to start, what does nodejs need to send to step the process forward from one task in that instance to the next?
Thanks!
In order to instantiate a new workflow instance you have the following possibilities:
Start exactly one instance:
Start a workflow instance by its known "key": https://docs.camunda.org/manual/develop/reference/rest/process-definition/post-start-process-instance/
Start a workflow by a message start event: https://docs.camunda.org/manual/develop/reference/rest/message/post-message/. A message can start only one specific workflow; the relationship between message name and process definition must be unique. The message start event is the one you have to use in your BPMN process model. See also https://docs.camunda.org/manual/develop/reference/bpmn20/events/message-events/. This might indeed be the better approach to make your client independent of the process definition key.
Start multiple instances:
Start a workflow instance by a BPMN signal event: https://docs.camunda.org/manual/develop/reference/rest/signal/post-signal/. The signal name can start many instances at once.
The name of the message or the name of the signal is configured in the BPMN model. Both could work for your use case.
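For example, with a message start event named NewPolicySold in the BPMN model (the message name, business key and variables below are only placeholders), the body for POST /message would look roughly like this:

{
  "messageName": "NewPolicySold",
  "businessKey": "policy-4711",
  "processVariables": {
    "policyId": { "value": "4711", "type": "String" }
  }
}

and the equivalent for POST /signal:

{
  "name": "NewPolicySold",
  "variables": {
    "policyId": { "value": "4711", "type": "String" }
  }
}

Camunda then starts every process definition whose start event subscribes to that message (exactly one) or to that signal (possibly several).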
Once a process instance is started, it will automatically execute the next steps.
Following this example (https://blog.bernd-ruecker.com/use-camunda-without-touching-java-and-get-an-easy-to-use-rest-based-orchestration-and-workflow-7bdf25ac198e) step by step will probably give you a better idea.
Hi, wise folks of SO. This is an SOS.
I'm in deep trouble. In my web application there is an object (say it is a request for something). A user submits their request. After that it goes to the people who can approve/disapprove the request. During the period from submission to approval/disapproval, many actions can be taken on the request. I have to present the user with an actions panel (a collection of links) with which they can modify the state of the request.
Now, based on which stage of processing the request is in, some actions are not allowed. Also, if some action has already been taken, it excludes the possibility of other actions.
Overall this creates a pretty complex matrix of allowed/forbidden actions that my tiny head is not able to keep track of.
I've created some static classes/methods which return arrays of allowed actions based on the state of the request. There are about 20 states that a request can be in. Based on the state, I have taken care to remove/disable links for actions that are not possible in that state.
Now the problem is this: suppose the request is in state X.
If in the past action l has been taken on the request, we may no longer allow l, or, depending on it, some arbitrary actions m, n, o.
After writing all the methods to get the arrays of links for the 20 states, I have to filter the arrays based on the past history of actions (which is stored in an SQL database), which is a very, very big task.
Please suggest some pattern which is easier to implement and efficient. This is getting on my nerves.
As I understand you have a real-world workflow scenario. In this case I would:
Model the entire state as a single entity if possible (a single row with a fixed number of fields). I would not model it as a set of actions.
Model each action as some change in the row. This is quite obvious when the user enters some data, but I would also model each acceptance as either a boolean field or a state field, depending on whether the acceptances are done by independent departments or form a cascade of acceptances within a single department.
There may also be a situation where an acceptance is given for some particular parameter and the parameter may change in the future, requiring a new acceptance. In this case I would model that scenario as two fields: one for the parameter value and one for the accepted value. I would decide whether an acceptance is still needed based on the difference between these two fields. This also allows for implementing some thresholds.
Having a state modeled as a single row I would implement independent predicates for action allowance.
I think that point 4 is the most important one. If you are able to implement independent predicates for enabling actions, then you will be able to modify them easily in the future (a rough sketch follows below).
With points 1-3 properly implemented you will also be able to easily implement acceptance revoking, which may be required, and in that case may make the overall code size smaller.
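As an illustration of point 4 (the fields, actions and threshold below are invented), each action gets its own small predicate over the single-row state, and the actions panel is just a filter over them:

import java.util.EnumSet;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// The whole request state modeled as one row / one object.
record RequestState(String status, boolean managerAccepted, boolean financeAccepted,
                    double amount, double acceptedAmount) {}

enum Action {
    EDIT(s -> !s.managerAccepted()),
    ACCEPT_MANAGER(s -> !s.managerAccepted()),
    ACCEPT_FINANCE(s -> s.managerAccepted() && !s.financeAccepted()),
    // Re-acceptance is needed only if the amount drifted from what was accepted.
    REACCEPT(s -> s.financeAccepted() && Math.abs(s.amount() - s.acceptedAmount()) > 0.01),
    CANCEL(s -> !"CLOSED".equals(s.status()));

    private final Predicate<RequestState> allowed;

    Action(Predicate<RequestState> allowed) { this.allowed = allowed; }

    boolean isAllowedOn(RequestState s) { return allowed.test(s); }

    // The actions panel: every action whose predicate passes for the current row.
    static Set<Action> allowedFor(RequestState s) {
        return EnumSet.allOf(Action.class).stream()
                .filter(a -> a.isAllowedOn(s))
                .collect(Collectors.toSet());
    }
}

Because each predicate is independent and only looks at the current row, changing one rule never requires touching the others.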
Sounds like a job for a state machine workflow, or a few giant nested switches (whichever you prefer).
The first thing that came to my mind: a state machine. Each state is some kind of object. All states have a method "processRequest" that transitions the execution into the next state.
The second thing that came to my mind: these states have to be organized like a tree or graph. The graph represents the history of requests. You start in the initial state. You get request A, you proceed to state A. After that, you get request B, you proceed to AB. Whether state AB is equal to BA is not clear from your description.
That way you get far more states than the 20 states you have now, but each state includes the history. I'd suggest a naming convention based on the path you took to get there (like AB above). And perhaps you can reuse states A and B in AB, to minimize coding.
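A tiny sketch of that idea (all names invented): each state object decides which actions it accepts and which state follows, so the history is encoded in whichever object you currently hold:

// Each state knows which actions it accepts and which state follows them.
interface State {
    State processRequest(String action);
}

final class InitialState implements State {
    public State processRequest(String action) {
        return switch (action) {
            case "A" -> new StateA();
            case "B" -> new StateB();
            default -> throw new IllegalStateException("Action not allowed here: " + action);
        };
    }
}

final class StateA implements State {
    public State processRequest(String action) {
        // After A, action B leads to the combined history state AB.
        if ("B".equals(action)) return new StateAB();
        throw new IllegalStateException("Action not allowed here: " + action);
    }
}

final class StateB implements State {
    public State processRequest(String action) {
        if ("A".equals(action)) return new StateAB(); // or a distinct BA, if order matters
        throw new IllegalStateException("Action not allowed here: " + action);
    }
}

final class StateAB implements State {
    public State processRequest(String action) {
        throw new IllegalStateException("No further actions allowed");
    }
}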