Is it possible to stop flow execution in Spring Integration (SI) based on a header/message value?
Thanks.
You can use a Control Bus to start and stop an inbound-adapter.
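For example, a minimal sketch using the Java DSL (the channel name controlBusChannel and the adapter bean id myInboundAdapter are made up; on Spring Integration 6+ the factory is IntegrationFlow.from(...) rather than IntegrationFlows):

```java
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Configuration
public class ControlBusConfig {

    // Messages sent to "controlBusChannel" are treated as SpEL commands
    // evaluated against the application's beans.
    @Bean
    public IntegrationFlow controlBusFlow() {
        return IntegrationFlows.from("controlBusChannel")
                .controlBus()
                .get();
    }
}

@Component
class FlowController {

    private final MessageChannel controlBusChannel;

    FlowController(@Qualifier("controlBusChannel") MessageChannel controlBusChannel) {
        this.controlBusChannel = controlBusChannel;
    }

    // Call this (e.g. from a service activator that has inspected the header)
    // to stop the inbound adapter whose bean id is "myInboundAdapter".
    void stopAdapter() {
        controlBusChannel.send(
                MessageBuilder.withPayload("@myInboundAdapter.stop()").build());
    }
}
```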
If you want to stop an existing flow mid-execution, I'm not aware of any standard ESB component that will enable you to do that. You could perhaps use a Channel Interceptor and lock the thread execution manually, but this approach would only be as granular as your message endpoints.
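If "stopping" can mean dropping the message at a particular channel, a rough sketch of that interceptor idea, keyed off a made-up stopFlow header (register it on the channel you want to guard, e.g. via addInterceptor):

```java
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

// Aborts the send when a (hypothetical) "stopFlow" header is set to true,
// so the message never reaches the next endpoint in the flow.
public class StopFlowInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        Boolean stop = message.getHeaders().get("stopFlow", Boolean.class);
        if (Boolean.TRUE.equals(stop)) {
            return null; // returning null from preSend cancels the send
        }
        return message;
    }
}
```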
Also, if you find a way to interrupt the execution, be careful of any timeout values you set in your flow configuration. Otherwise you may find the flow will fail when you eventually resume it!
I am attempting to accomplish something along these lines with Quarkus and Narayana:
client calls service to start a process that takes a while: /lra/start
This call sets off an LRA, and returns an LRA id used to track the status of the action
client can keep polling some endpoint to determine status
service eventually finishes and marks the action done through the coordinator
client sees that the action has completed, is given the result or makes another request to get that result
Is this a valid use case? Am I visualizing the correct way this tool can work? Based on how the linked guide reads, it seems that the endpoints are more of a passthrough to the coordinator, notifying it that we start and end an LRA. Is there a more programmatic way to interact with the coordinator?
Yes, it might be a valid use case, but in any case please read the MicroProfile LRA specification - https://github.com/eclipse/microprofile-lra.
The idea you describe is more or less one LRA participant executing in a new LRA while you poll the status of that execution. This is not exactly what LRA is intended for, but it can certainly be used this way.
The main idea of LRA is the composition of distributed transactions based on the saga pattern. Basically, the point is to coordinate multiple services to achieve consistent results with an eventual consistency guarantee. So you see that the main benefit arises when you can propagate LRA through different services that either all complete their actions or all of their compensation callbacks will be called in case of failures (and, of course, only for the services that executed their actions in the first place). Here is also an example with the LRA propagation https://github.com/xstefank/quarkus-lra-trip-example.
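For reference, a rough sketch of what annotation-driven participation can look like on a JAX-RS resource (the class, paths and endpoints are made up; depending on your MicroProfile/Quarkus version the JAX-RS packages may be jakarta.ws.rs instead of javax.ws.rs):

```java
import java.net.URI;

import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;

import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

@Path("/order")
public class OrderResource {

    // Starts a new LRA; the coordinator passes the LRA id in as a header.
    @POST
    @Path("/start")
    @LRA(value = LRA.Type.REQUIRES_NEW, end = false) // keep the LRA open after this call returns
    public Response start(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // kick off the long-running work and hand the LRA id back for the client to poll
        return Response.accepted(lraId.toString()).build();
    }

    // Invoked by the coordinator if the LRA is cancelled.
    @PUT
    @Path("/compensate")
    @Compensate
    public Response compensate(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // undo whatever was done for lraId
        return Response.ok().build();
    }

    // Invoked by the coordinator when the LRA closes successfully.
    @PUT
    @Path("/complete")
    @Complete
    public Response complete(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // finalize whatever was done for lraId
        return Response.ok().build();
    }
}
```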
EDIT: Sorry, I forgot to add the programmatic API that allows the same interactions as the annotations - https://github.com/jbosstm/narayana/blob/master/rts/lra/client/src/main/java/io/narayana/lra/client/NarayanaLRAClient.java. However, note that it is not in the specification and is specific to Narayana.
I use the REST API in my program. I made a processor group to convert a MongoDB collection to a JSON file.
I want the schedule to run only one time, so I set the "Run schedule" to 10000 sec, and I will stop the group once the data flow has run once. I made a Notify processor and added a DistributedMapCacheService, but the DistributedMapCacheClientService of the Notify processor only communicates with the DistributedMapCacheService inside NiFi itself; it never notifies my program.
I tried to use my own socket server, but I only receive the message "nifi" and nothing more.
My question is: if I only want the schedule to run once and then stop, how do I know when to stop it? Or is there some other way to achieve my purpose, like detecting whether the JSON file exists or using incremental data (if the schedule runs twice, the data will be duplicated)?
As @daggett said, you can do it in a synchronous way: use HandleHttpRequest as the trigger and HandleHttpResponse to manage the response.
For an asynchronous way you have several options for the notification, like PutTCP, PostHTTP, GetHTTP, FTP, the file system, XMPP, or whatever.
Whether a second run duplicates elements depends on the processors you use; some of them keep state and some do not. But if you are facing problems with repeated elements, you can use the DetectDuplicate processor.
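If you decide to drive this from your own program, one asynchronous option along these lines is to wait for the exported file and then stop the process group through NiFi's REST API. A rough sketch, with placeholder file path, NiFi URL and process-group id (verify the exact endpoint and payload against the REST API docs for your NiFi version):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class StopGroupWhenDone {

    public static void main(String[] args) throws Exception {
        Path output = Path.of("/data/export/collection.json");   // placeholder output path
        String nifiUrl = "http://localhost:8080/nifi-api";        // placeholder NiFi URL
        String groupId = "your-process-group-id";                 // placeholder group id

        // Poll until the exported JSON file shows up.
        while (!Files.exists(output)) {
            Thread.sleep(5_000);
        }

        // Ask NiFi to stop every processor in the group.
        String body = "{\"id\":\"" + groupId + "\",\"state\":\"STOPPED\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(nifiUrl + "/flow/process-groups/" + groupId))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("NiFi responded: " + response.statusCode());
    }
}
```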
I have a MassTransit system that will consume 2 message types, one for a batch process, the other for CRUD operations on a single entity. Whilst the batch process is running, the CRUD operations should not be de-queued.
Is this possible to achieve using MassTransit? It seems the exchange binding -> type name convention would potentially make this behavior difficult.
A solution would be to use one message type to denote both operations and then interrogate the message contents to discern between single and batch but this feels like a code smell. Also, this would require concurrency configuration to ensure only one consumer is ever active.
Can anyone help with an alternative solution here? Essentially, we need to pause all message consumption whilst an event-driven process is running.
Thanks in advance.
By pause, do you mean that you want the CRUD operations to be able to occur without being blocked by the batch process? Because if it's only a matter of not having the two separate messages get in the way of each other, the most logical solution is using two separate queues, one receive endpoint for the batch process and another for the CRUD operations.
Now, if you truly need to separate the batch process such that it doesn't happen during the CRUD operations, that will require more work. And what if you receive a CRUD operation while the batch process is already running?
I think separate queues are your best solution, however.
I have a need where I want to group messages received from a system based on certain criterion. For performance reasons, I want to avoid persisting these individual messages before I can group them. I've seen that JMS implementations provide transaction batching over a set of messages as given in
Document 1
Document 2
But I also want the acknowledgement of the batch to be controlled by my code, so that if there is some issue in the grouping I can roll back the batch I am reading and process the messages again on the next try.
From the above links, since the transaction is managed by the container over a set of onMessage calls, I would not control the transaction commit and rollback.
Can someone let me know if I am misreading this, and what would be the way to achieve it?
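For illustration of the kind of control being asked for here, a sketch of a locally transacted JMS session where the application code, not the container, decides when to commit or roll back the whole batch (the connection factory, queue name, grouping logic and batch size are placeholders; this assumes you drive the receive loop yourself rather than using a container-managed onMessage):

```java
import java.util.ArrayList;
import java.util.List;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class BatchReceiver {

    private static final int BATCH_SIZE = 50; // placeholder batch size

    public void receiveBatch(ConnectionFactory factory) throws Exception {
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // transacted = true: nothing is acknowledged until session.commit()
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("grouping.queue")); // placeholder queue

            // Read up to BATCH_SIZE messages without acknowledging any of them yet.
            List<Message> batch = new ArrayList<>();
            Message message;
            while (batch.size() < BATCH_SIZE && (message = consumer.receive(1_000)) != null) {
                batch.add(message);
            }

            try {
                groupMessages(batch);   // your grouping logic (placeholder)
                session.commit();       // acknowledge the whole batch at once
            } catch (Exception e) {
                session.rollback();     // the whole batch is redelivered on the next try
            }
        }
    }

    private void groupMessages(List<Message> batch) {
        // group by your criterion here
    }
}
```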
I have a process deployed on a self-hosted StreamInsight (MSSI) server. Bound to this process I have a simple pass-through query.
Some events get dropped here: "cep:/Server/Application/Erp/Entity/Event_Events_Process1/Query/StreamableBinding_1/Operator/Stream_1_CleanseInput".
I can see the dropped-event counter going up, and I cannot find the reason why events are being dropped.
Does anyone know how to debug that?
You can use the StreamInsight Event Flow Debugger. Make sure your application exposes the StreamInsight Management Service so you can hook up with the debugger. Then you can record the events which you can debug/step-through in the debugger.
Chances are your events are being dropped because of CTI violations. You might be enqueueing events that, based on their start time, occurred before the last CTI event.
That's absolutely a CTI violation. You'll see this behavior when you are issuing CTIs declaratively (for example, by specifying AdvanceTimeSettings.IncreasingStartTime or StrictlyIncreasingStartTime). There are a couple of ways that you can handle this:
1) Enqueue your CTIs programmatically. But you'll have to be careful of violations! (They'll cause an exception).
2) Tweak your AdvanceTimeSettings to include a Delay. You won't be able to use IncreasingStartTime or StrictlyIncreasingStartTime, but you will be able to specify the CTI span duration or event count and a delay. Keep the delay small enough to keep your stream lively but large enough not to drop events. I can't tell you what that value is; it'll depend on your events.