I have a Power Automate flow that triggers when there is a message in an Azure queue and processes the message. Power Automate triggers perfectly when there is an item in the queue, but after the message is processed successfully it returns to the queue and triggers Power Automate again. Is there any setting I have to change?
Can someone help?
After reproducing this on my end, I faced the same issue. One way to resolve it is to add another action called Delete message (V2) at the end, specifying the required properties. Below is the flow of my logic app.
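For context on why the extra action is needed: a message received from an Azure Storage queue is only hidden for its visibility timeout and becomes visible again unless it is explicitly deleted, which is what Delete message (V2) does. If you were doing the same thing in code, the receive-then-delete pattern with the azure-storage-queue Java SDK would look roughly like the sketch below (the queue name and connection-string variable are placeholders, and this assumes the trigger is bound to a Storage queue rather than Service Bus):

import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;
import com.azure.storage.queue.models.QueueMessageItem;

public class ProcessAndDelete {

    public static void main(String[] args) {
        // Placeholder connection string and queue name.
        QueueClient queue = new QueueClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                .queueName("incoming-items")
                .buildClient();

        // Receiving a message only hides it for its visibility timeout...
        for (QueueMessageItem msg : queue.receiveMessages(10)) {
            process(msg);
            // ...so it must be deleted explicitly, otherwise it reappears and is picked up again.
            queue.deleteMessage(msg.getMessageId(), msg.getPopReceipt());
        }
    }

    private static void process(QueueMessageItem msg) {
        System.out.println("Processed message " + msg.getMessageId());
    }
}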
I have a simple integration flow that polls data from a database based on a cron job, publishes it on a DirectChannel, then does splitting and transformations, publishes on another executor service channel, does some operations, and finally publishes to an output channel; it is written in the DSL style.
I also have an endpoint where I might receive an HTTP request to trigger this flow; at that point I send the messages to one of the channels mentioned above to trigger the flow.
I want to make sure that the manual trigger doesn’t happen if the flow is already running due to either the cron job or another request.
I have used the isRunning method of the StandardIntegrationFlow, but it seems that it's not thread-safe.
I also tried using .wireTap(myService) and .handle(myService) where this service has an AtomicBoolean flag, but it gets set for every message, which is not a solution.
I want to know whether the flow is running without much intervention on my side, and if this is not supported, how I can apply the AtomicBoolean logic to the overall flow rather than to every message.
How can I simulate the race condition in a test in order to make sure my implementation prevents it?
The IntegrationFlow is just a logical container for the configuration phase. It does have those lifecycle methods, but only for internal framework logic. Even though they are there, they don't help, because the endpoints are always running so that they can react to an event or an input message.
It is hard to control all of that, since it is asynchronous, as you explain. Even if we stop a SourcePollingChannelAdapter at the beginning of that flow to let your manual call do something, it doesn't mean that messages in other threads are no longer in process. The AtomicBoolean cannot help here for the same reason: even if you set it to true in MessageSourceMutator.beforeReceive() and reset it back to false in afterReceive() when the message is null, it still doesn't mean that the messages pushed downstream on other threads have already been processed.
You might consider using an aggregator to reset the AtomicBoolean at the end of a batch: since you mention that you pull data from a DB, there is perhaps a number of records per poll that you can track downstream. This way your manual call could be skipped until the aggregator collects the results for that batch.
You also need to think about stopping the SourcePollingChannelAdapter at the moment when the manual action is permitted, so there won't be any further race conditions with the cron job.
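As a minimal sketch of the MessageSourceMutator idea (the class name and the busy flag are illustrative, and as explained above this only covers the receive step, not messages already handed off to other threads):

import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.integration.aop.MessageSourceMutator;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.Message;

public class PollInProgressAdvice implements MessageSourceMutator {

    private final AtomicBoolean polling = new AtomicBoolean(false);

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        polling.set(true);   // a poll is about to fetch data
        return true;         // proceed with the receive
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            polling.set(false); // nothing was fetched in this poll cycle
        }
        return result;
    }

    public boolean isPolling() {
        return polling.get();
    }
}

The advice can be attached through the poller spec, for example Pollers.cron("0 * * * * *").advice(new PollInProgressAdvice()), and the HTTP endpoint could consult isPolling() before triggering the flow manually; the aggregator-based batch tracking described above is still needed for messages already in flight on other threads.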
I've created two services.
One of them (scheduler) only sends requests to the other (backoffice) to perform some "large" operations.
When backoffice receives a request:
it first creates a mark (a key in Redis) to indicate that the process has started.
Each time a request arrives:
backoffice checks whether the mark exists.
If it exists, it means the previous process has not yet finished, so the request is skipped.
Otherwise, it performs the large process.
When the process is finished, the key in Redis is removed.
It would be something like this:
if (key exists)
return;
make long process... (1);
remove key;
The problem arises when the service is destroyed before the process has finished, so it never removes the mark in Redis. This means the process will never run again.
Is there any way to solve this kind of problem?
The way to solve this problem is to use an existing engine, as building a custom, scalable, and robust solution for reliable service orchestration is really hard.
I recommend looking at Uber Cadence Workflow, which would allow you to convert your pseudocode into a real production application with minor changes.
You can fire a background job that updates a timestamp under the key, e.g. every minute.
When the service attempts to start the process, it must verify the key's existence (as it does now) plus the timestamp stored under it. If the timestamp is more than a minute old, the previous attempt is stale and you can start over.
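A minimal sketch of that heartbeat idea, using the Jedis client (the key name, intervals, and connection details are illustrative, not prescriptive):

import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class LargeProcessRunner {

    private static final String MARK_KEY = "backoffice:large-process"; // hypothetical key name
    private static final long STALE_AFTER_SECONDS = 60;

    private final JedisPool pool = new JedisPool("localhost", 6379);
    private final ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();

    public void runIfNotAlreadyRunning() {
        try (Jedis redis = pool.getResource()) {
            String lastBeat = redis.get(MARK_KEY);
            if (lastBeat != null
                    && Instant.now().getEpochSecond() - Long.parseLong(lastBeat) < STALE_AFTER_SECONDS) {
                return; // a live run is still heartbeating: skip this request
            }
            // Take (or take over) the mark.
            redis.set(MARK_KEY, String.valueOf(Instant.now().getEpochSecond()));
        }

        // Refresh the timestamp periodically so the mark only goes stale if we crash.
        ScheduledFuture<?> ticker = heartbeat.scheduleAtFixedRate(this::touchMark, 30, 30, TimeUnit.SECONDS);
        try {
            makeLongProcess();
        } finally {
            ticker.cancel(false);
            try (Jedis redis = pool.getResource()) {
                redis.del(MARK_KEY); // best effort; a crash here is covered by the staleness check
            }
        }
    }

    private void touchMark() {
        try (Jedis redis = pool.getResource()) {
            redis.set(MARK_KEY, String.valueOf(Instant.now().getEpochSecond()));
        }
    }

    private void makeLongProcess() {
        // ... the "large" operation from the question goes here ...
    }
}

Note that the check-then-set at the start is not atomic; if two requests can race at exactly the same moment, a SET with the NX option or a small Lua script would be needed to make the takeover atomic.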
Sounds like you should be using a message queue to schedule tasks for the backoffice service. Queuing solutions like RabbitMQ allow you to manually acknowledge (or "ack") that the process is complete. Whenever a subscriber crashes, the queue detects that the connection dropped without acknowledgement and re-enqueues the same task, which will be picked up by the next available subscriber. Here's another thread talking about this problem, specifically focused on message queues:
What happens to fetched messages when RabbitMQ consumer crashes?
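For illustration, a minimal consumer with manual acknowledgement using the RabbitMQ Java client (the queue name and host are placeholders):

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class BackofficeWorker {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("large-operations", true, false, false, null);
        channel.basicQos(1); // hand this worker one task at a time

        DeliverCallback onDelivery = (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                performLargeOperation(new String(delivery.getBody(), StandardCharsets.UTF_8));
            } catch (Exception e) {
                // Reject and re-queue so another attempt can be made later.
                channel.basicNack(tag, false, true);
                return;
            }
            // Ack only after the work is done; if the worker dies before this point,
            // the broker re-queues the task for another consumer.
            channel.basicAck(tag, false);
        };

        // autoAck = false turns on manual acknowledgement.
        channel.basicConsume("large-operations", false, onDelivery, consumerTag -> { });
    }

    private static void performLargeOperation(String payload) {
        System.out.println("Working on: " + payload);
    }
}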
I'm using rhea in a Node.js application to send messages around over Azure Service Bus using AMQP. My problem is as follows:
Sometimes a message processing attempt can fail because of something that is out of our hands. For instance, a call to some API could fail because a service is down. At that point we unlock the message so it can be picked up at a later time or by another instance. After a certain number of retries (when the delivery count has hit a certain maximum) it just ends up in the DLQ.
What I want to achieve is that between each delivery attempt there is an increasing pause, so the X retries don't just occur in rapid succession until the max is hit. This way I can give whatever is causing the failure some time to come back up, if it's just a matter of waiting for some service to become available again. If that doesn't work, the message can go to the DLQ anyway.
Is there some setting in Azure Service Bus that will achieve this, or will I have to program it into my own application?
If you explicitly want to delay processing, you can enqueue a new message with ScheduledEnqueueTime set for later delivery (the message.Clone() function can help in creating the cloned message). You also have the ability to call message.Defer(), in which case the message will not be delivered again until you call Receive(SequenceId) for that specific message at a later time.
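The names above (Clone(), Defer(), Receive(SequenceId)) come from the .NET SDK; rhea itself does not expose them directly. As an illustration of the first approach, here is a sketch of the same "clone and schedule with a growing delay" idea using the azure-messaging-servicebus Java SDK, with a hypothetical retryAttempt application property and an arbitrary cap of 5 attempts:

import java.time.Duration;
import java.time.OffsetDateTime;

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class BackoffRequeue {

    private final ServiceBusSenderClient sender;
    private final ServiceBusReceiverClient receiver;

    public BackoffRequeue(String connectionString, String queueName) {
        ServiceBusClientBuilder builder = new ServiceBusClientBuilder().connectionString(connectionString);
        this.sender = builder.sender().queueName(queueName).buildClient();
        this.receiver = builder.receiver().queueName(queueName).buildClient();
    }

    public void handle(ServiceBusReceivedMessage message) {
        try {
            process(message);
            receiver.complete(message);
        } catch (Exception e) {
            // Hypothetical application property carrying our own retry counter (missing = 0).
            int attempt = ((Number) message.getApplicationProperties()
                    .getOrDefault("retryAttempt", 0)).intValue();

            if (attempt >= 5) {
                receiver.deadLetter(message); // give up, mirroring the normal DLQ behaviour
                return;
            }

            // Clone the payload into a new message and push it further into the future: 1, 2, 4, 8 ... minutes.
            ServiceBusMessage retry = new ServiceBusMessage(message.getBody());
            retry.getApplicationProperties().put("retryAttempt", attempt + 1);
            sender.scheduleMessage(retry, OffsetDateTime.now().plus(Duration.ofMinutes(1L << attempt)));

            // Complete the original so its delivery count does not climb on the broker side.
            receiver.complete(message);
        }
    }

    private void process(ServiceBusReceivedMessage message) {
        // ... the actual work that may fail ...
    }
}

Because the original delivery is completed in both branches, the broker's delivery count never reaches the queue's max-delivery limit; the dead-lettering decision is driven by the retryAttempt counter instead.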
I want to integrate Slack with SCOM. I have a PowerShell script that posts notifications to Slack, and I have found where to place it so that it gets executed when some error occurs. But I am unable to find a way to create a custom rule for error generation (e.g. a rule that triggers when any machine configured in SCOM goes down, or when its CPU utilization goes down), so that when this rule is broken, my PowerShell script for the Slack notification gets triggered. Is this possible in SCOM?
Create a new command channel in SCOM
Attach the script that will process the alert's output (the logic to both transform the data and send it to Slack)
Create a subscription to the monitors and rules that you want passed to Slack
Subscribe the command channel to the subscription
Reference: https://blogs.technet.microsoft.com/fesiro/2012/11/26/how-to-configure-command-notification-in-scom-2012-with-powershell-script/
If you have Orchestrator in your System Center infrastructure, you can make a runbook that is initialized by a SCOM alert. You will need to set up the SCOM connector if you haven't already, but this is a great way to make the process more easily managed. Then you can call your script inside the runbook.
I'm pretty new to MSMQ 4.0. I'm stuck with the scenario below:
Service A takes user details and returns a user ID.
Then service B takes billing details along with the user ID.
Now I have to queue these steps. I'm planning to use a transactional queue.
Could someone please help me with:
1) Getting the ID from the first message and including it in the second message.
2) If at least one step fails I have to roll back (the transactional queue does that for me), retry up to 5 times, and if it still fails, move the message to VerifyAdminQueue for verification by an admin. I don't like using the dead-letter queue, etc.
Thanks in advance.
Services built with MSMQ queues are truly one-way. This means that there is no built-in concept of a response. There are many ways you can implement a request-response communication pattern using MSMQ, but with all of them you will need to construct and send the response back to the caller yourself.
With one-way actions, rollback is very simple, and indeed MSMQ will roll back any failed steps in the transmission of a message. More complex operations such as request-response, however, have no concept of a transaction in MSMQ, so any rollback spanning more than one message transmission step will require you to write compensating code.