We have been using Camunda 7.4 in most of our projects, along with camunda-bpm-spring-boot-starter 1.1.0.
We have a project where the Camunda flow publishes a message to a message broker; that message is consumed by another system, which then publishes a new message back to the same broker. We then trigger a receiveTask to receive that message and continue processing. To listen for the incoming message we use org.springframework.amqp.core.MessageListener and perform the message correlation for the receiveTask inside the onMessage() method. But we get the error below in doing so:
org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'ReceiveAmsharedResponse': No process definition or execution matches the parameters
We are trying to figure out where the problem is: is it the Camunda version we are using, or is it our usage of the receiveTask? We tried all the approaches described in https://docs.camunda.org/manual/7.4/reference/bpmn20/tasks/receive-task/ but had no luck.
With createMessageCorrelation we get the error above, and with the other methods we get a NullPointerException because the EventSubscription/Execution objects are null.
Sample Camunda flow receiveTask Usage is as below:
<bpmn2:receiveTask id="ReceiveTask" name="Receive Task" messageRef="Message_06nl07f">
  <bpmn2:incoming>SequenceFlow_xyz</bpmn2:incoming>
  <bpmn2:outgoing>SequenceFlow_190m9nx</bpmn2:outgoing>
</bpmn2:receiveTask>
......
......
<bpmn2:message id="Message_06nl07f" name="Message" />
And sample message correlation code:
class XYZ implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // correlate the incoming AMQP message to the waiting receive task
        runtimeService.createMessageCorrelation("Message")
            .processInstanceBusinessKey(resourceId)
            .setVariable(ACTIVITY_RESULT, activityResult)
            .correlate();
    }
}
Any help would be appreciated.
Thanks,
Viswanath
Assuming your process looks something like this:
O -- ( M ) -- ( M ) -- O
send receive
If the response message is sent very quickly, it is possible that onMessage and the message correlation are executed before the message event subscription has been persisted to the database. Basically, the response arrives while the send task is still being executed.
One way to avoid this would be to create the message subscription in parallel with sending the event:
O -- + -- ( M ) -- + -- O
| send |
`----( M ) --´
receive
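One way to model that variant (a sketch only, using the Camunda BPMN model fluent builder; the activity IDs, delegate class and message name below are placeholders, and the same structure can of course be drawn directly in the Modeler / BPMN XML):

import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

// Fork so the receive task's message subscription is created in parallel with
// (rather than after) the send step.
BpmnModelInstance model = Bpmn.createExecutableProcess("sendAndReceive")
    .startEvent()
    .parallelGateway("fork")
        .serviceTask("sendRequest").camundaClass("com.example.SendRequestDelegate")
        .parallelGateway("join")
    .moveToNode("fork")
        .receiveTask("waitForResponse").message("ReceiveAmsharedResponse")
        .connectTo("join")
    .moveToNode("join")
    .endEvent()
    .done();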
Regarding the given exception message, which is:
org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'ReceiveAmsharedResponse': No process definition or execution matches the parameters
I assume that you correlate a message with the name ReceiveAmsharedResponse, but you defined a Message with a different name for your ReceiveTask.
Changing the definition of your Message to the following should work:
<bpmn2:message id="Message_06nl07f" name="ReceiveAmsharedResponse" />
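The correlation call in the listener then has to use that same name (a sketch reusing the variables from the question):

runtimeService.createMessageCorrelation("ReceiveAmsharedResponse")
    .processInstanceBusinessKey(resourceId)
    .setVariable(ACTIVITY_RESULT, activityResult)
    .correlate();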
Assuming your process looks something like this:
O -- ( M ) -- ( M ) -- O
send receive
This error means that the response was received "earlier" than the request was sent.
And by "sent", I mean the whole transaction of the send-request step.
In other words, because of the fast response there is no committed transaction in the Camunda database for the send-request step yet. Thus, when Camunda processes the received response, it is not possible to correlate the response with the send-request.
One suggested solution is to use the asynchronous continuations that Camunda offers, which are documented on their official website. More specifically, you can use Async Before at the send-request level:
( M )
send
Async Before will commit the process state to the Camunda database before the send-request activity begins. Therefore, it will be possible for the response to be correlated.
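For example (a sketch with the Camunda model fluent builder; the activity IDs and the delegate class are placeholders - in the BPMN XML this corresponds to camunda:asyncBefore="true" on the send task):

import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

// asyncBefore on the send task makes the engine commit and create a job before
// executing it, so the process state is already in the database.
BpmnModelInstance model = Bpmn.createExecutableProcess("sendAndReceive")
    .startEvent()
    .serviceTask("sendRequest")
        .camundaAsyncBefore()
        .camundaClass("com.example.SendRequestDelegate")
    .receiveTask("waitForResponse")
        .message("ReceiveAmsharedResponse")
    .endEvent()
    .done();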
Related
I have two programs running in separate sessions. I want to send an event from program A and catch this event in program B.
How can I do that?
Using class-based events is not really an option, as these cannot be used to communicate between user sessions.
There is a mechanism that you can use to send messages between sessions: ABAP Messaging Channels. You can send anything that is either a text string, a byte string, or serialisable into one of the above.
You will need to create such a message channel using the repository browser SE80 (Create > Connectivity > ABAP Messaging Channel) or with the Eclipse ADT (New > ABAP Messaging Channel Application).
In there, you will have to define:
The message type (text vs binary)
The ABAP programs that are authorised to access the message channel.
The scope of the messages (i.e. do you want to send messages between users? or just for the same user? what about between application servers?)
The message channels work through a publish/subscribe mechanism. You will have to use specialised classes to publish to the channel (inside report A) and to read from the channel (inside report B). In order to wait for a message to arrive once you have subscribed, you can use the statement WAIT FOR MESSAGING CHANNELS.
Example code:
" publishing a message
CAST if_amc_message_producer_text(
cl_amc_channel_manager=>create_message_producer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
i_suppress_echo = abap_true )
)->send( i_message = text_message ).
" subscribing to a channel
DATA(lo_receiver) = NEW message_receiver( ).
cl_amc_channel_manager=>create_message_consumer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
)->start_message_delivery( i_receiver = lo_receiver ).
" waiting for a message
WAIT FOR MESSAGING CHANNELS
UNTIL lo_receiver->text_message IS NOT INITIAL
UP TO time SECONDS.
If you want to avoid waiting inside your subscriber report B and do something else in the meantime, you can wrap the WAIT FOR... statement inside an RFC and call that RFC using the aRFC variant. This allows you to continue doing other work in report B while waiting for the event to happen. When the event happens, the aRFC callback method that you defined in your report when calling the RFC is executed.
Inside the RFC, you would simply have the subscription part and the WAIT statement plus an assignment of the message itself to an EXPORTING parameter. In your report, you could have something like:
CALL FUNCTION 'ZMY_AMC_WRAPPER' STARTING NEW TASK 'MY_TASK'
CALLING lo_listener->my_method ON END OF TASK.
" inside your 'listener' class implementation
METHOD my_method.
DATA lv_message TYPE my_message_type.
RECEIVE RESULTS FROM FUNCTION 'ZMY_AMC_WRAPPER'
IMPORTING ev_message = lv_message.
" do something with the lv_message
ENDMETHOD.
You could emulate it by checking in program B whether a parameter in SAP memory has changed; program A would set this parameter to send the event (i.e. SET/GET PARAMETER ...). In effect you're polling for the event in B.
There are a lot of unknowns in your description. For example, is the event a one-shot operation or can A send several events? If so, B will have to clear the parameter when it is done treating the event, so that A knows it's OK to send a new one (and A will have to wait for the parameter to clear after having set it)...
Edited: removed the part about having no messaging in ABAP, since Seban showed I was wrong.
I am investigating using sagas in MassTransit to orchestrate activities across several services. The lifetime of the saga is short - less than 2 seconds if all goes well.
For my use case, I would like to use the request/response approach, whereby the client sends a command, the saga handles that command, goes through some state changes as messages are received, and eventually responds to the first command that initiated the saga, at which point the client receives the response and can display the result of the saga.
From what I can see, by that point the context is no longer aware of the initial request. How can I reply to a message that was received in this way? Is there something I can persist in the saga data when handling the first event, and use that to reply later on?
Thanks Alexey. I have realised that I can store the ResponseAddress and RequestId from the original message on the saga, and then construct a Send() later on.
Getting the response details from the original request
MassTransit.EntityFrameworkIntegration.Saga.EntityFramework
SagaConsumeContext<TSagaData, TMessage> payload;
if (ctx.TryGetPayload(out payload))
{
    ResponseAddress = payload.ResponseAddress;
    RequestId = payload.RequestId;
}
Sending the response
var responseEndpoint = await ctx.GetSendEndpoint(responseAddress);
await responseEndpoint.Send(message, c => c.RequestId = requestId);
UPDATE: The documentation has been updated to include a more complete example.
Currently, the saga state machine can only do an immediate response, like this:
// client
var response = await client.Request(requestMessage);
// saga
During(SomeState,
When(RequestReceived)
.Then(...)
.Respond(c => MakeResponseMessage(c))
.TransitionTo(Whatever)
)
So you can respond when handling a request.
If you want to respond to something you received before, you will have to craft the request/response conversation yourself. I mean that you will have to have a decoupled response, so you need to send a message and have a full-blown consumer for the reply message. This will be a completely asynchronous exchange.
So I have request/response queues that I am putting messages on and reading messages from.
The problem is that I have multiple local instances reading from and feeding the same queues, and sometimes one instance reads another instance's reply message.
So is there a way I can configure my JMS consumers, using Spring, so that each instance reads only the replies to its own requests and not other instances' messages?
I have very little knowledge about JMS and related stuff. So if the above question needs more info then I can dig around and provide it.
Thanks
It's easy!
A JMS message has two properties you can use - JMSMessageID and JMSCorrelationID.
The JMSMessageID is supposed to be unique for each message, so you could do something like this:
Let the client send a request, then start to listen for responses where the correlation ID equals the sent message ID. The server side is then responsible for copying the message ID of the request into the correlation ID of the response. Something like: responseMsg.setJMSCorrelationID(requestMsg.getJMSMessageID());
Example client side code:
Session session = getSession();
Message msg = createRequest();
MessageProducer mp = session.createProducer(session.createQueue("REQUEST.QUEUE"));
mp.send(msg, DeliveryMode.NON_PERSISTENT, 0, TIMEOUT);
// If the session is transactional - commit now.
String msgId = msg.getJMSMessageID();
MessageConsumer mc = session.createConsumer(session.createQueue("REPLY.QUEUE"),
    "JMSCorrelationID='" + msgId + "'");
Message response = mc.receive(TIMEOUT);
A more performant solution would be to use a dedicated reply queue per instance. Simply set message.setJMSReplyTo(session.createQueue("REPLY.QUEUE." + getInstanceId())); and make sure the server side sends the response to requestMsg.getJMSReplyTo() and not to a hard-coded value.
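For completeness, a server-side sketch (the fallback queue name, connection setup and payload are placeholders) that copies the request's message ID into the reply's correlation ID and answers on the requested reply destination:

import javax.jms.*;

public class RequestResponder implements MessageListener {

    private final Session session;

    public RequestResponder(Session session) {
        this.session = session;
    }

    @Override
    public void onMessage(Message request) {
        try {
            TextMessage response = session.createTextMessage("result payload");
            // Copy the request's message ID so the client's selector
            // "JMSCorrelationID='<msgId>'" matches this reply.
            response.setJMSCorrelationID(request.getJMSMessageID());
            // Prefer the reply destination the client asked for, falling back to a fixed queue.
            Destination replyTo = request.getJMSReplyTo() != null
                    ? request.getJMSReplyTo()
                    : session.createQueue("REPLY.QUEUE");
            MessageProducer producer = session.createProducer(replyTo);
            producer.send(response);
            producer.close();
        } catch (JMSException e) {
            throw new RuntimeException("Failed to send reply", e);
        }
    }
}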
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once the message is received (save values in the database, etc.) and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing the message, the execution of the consumer is aborted abruptly, and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using a QueueBrowser to read them but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a Transacted session, where once the message is processed, I am calling:
session.commit();
This sends the acknowledgement.
I am implementing Spring's org.springframework.jms.listener.SessionAwareMessageListener to receive messages and then process them.
While processing the messages, I am using Spring Batch to insert some data into the database.
For a particular case, it tries to insert data too big for the target column.
This throws an exception and the transaction is aborted.
Now, I have fixed my producer and consumer not to have such data, so that this case should not happen again.
But my question is what about the 30 "in delivery" state messages that are in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a transacted (TX) session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // start delivery (on the Connection, not the Session)
Message msg = consumer.receive(); // blocks until a message arrives
session.rollback(); // this will make the messages be redelivered
if you are using non TX:
// session here is auto-ack
MessageConsumer cons = session.createConsumer(someQueue);
session.start();
// this means the message is ACKed as we receive, doing autoACK
Message msg = consumer.receive...
//however the consumer here could have a buffer from the server...
// if you are not using the consumer any longer.. close it
consumer.close(); // this will release messages on the client buffer
Alternatively you could also set consumerWindowSize=0 on the connectionFactory.
This is the 2.2.5 documentation, but it has not changed in the following releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
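To set it programmatically, a sketch (the JNDI name and the concrete factory class are assumptions based on a default setup; the same value can also be configured as consumer-window-size on the connection factory in the server configuration):

import javax.naming.InitialContext;
import org.hornetq.jms.client.HornetQConnectionFactory;

// Look up the connection factory and disable the client-side consumer buffer,
// so messages are not prefetched and held by an idle consumer.
InitialContext ctx = new InitialContext();
HornetQConnectionFactory cf =
    (HornetQConnectionFactory) ctx.lookup("java:/ConnectionFactory");
cf.setConsumerWindowSize(0);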
I"m covering all the possibilities I could think of since you're not being specific on how you are consuming. If you provide me more detail then I will be able to tell you more:
You can indeed read the messages in the queue using JMX (with, for example, jconsole).
In JBoss AS7 you can do it the following way:
MBeans>jboss.as>messaging>default>myJmsQueue>Operations
listMessagesAsJson
[edit]
Since 2.3.0 you have a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
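If you prefer doing this programmatically rather than through jconsole, a rough JMX sketch could look like the following. The service URL, the ObjectName and the exact operation signature are assumptions based on a default AS7 standalone setup, so verify them against the MBean tree shown above (jboss-client.jar is needed for the remoting-jmx protocol):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListQueueMessages {
    public static void main(String[] args) throws Exception {
        // Assumed management endpoint of a local JBoss AS7 instance.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Assumed ObjectName matching the jconsole path jboss.as > messaging > default > myJmsQueue.
            ObjectName queue = new ObjectName(
                "jboss.as:subsystem=messaging,hornetq-server=default,jms-queue=myJmsQueue");
            // Operation as shown in the jconsole Operations tab; the single parameter is
            // assumed to be a JMS selector, where null means "list everything".
            String json = (String) connection.invoke(queue, "listMessagesAsJson",
                new Object[] { null }, new String[] { String.class.getName() });
            System.out.println(json);
        }
    }
}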
I am building an integration between Salesforce and Twilio that sends/receives SMS using the TwilioForce REST API. The main issue is getting around the 10-callout API limit in Salesforce, as well as the prohibition on HTTP callouts from a trigger.
I am basing the design on Dan Appleman's Asynchronous Request processes, but whether I use Batch mode or RequestAsync(), ASync(), Sync(), repeat... I'm still hitting the limits.
I'd like to know how other developers have done this successfully; the integrations have been there for a while, but the examples are few and far between.
Are you sending unique messages for each record that has been updated? If not, then why not send one message to multiple recipients to save on your API limits?
Unfortunately, if you do actually need to send more than 10 unique messages, there is no way to send messages in bulk with the Twilio API. You could instead write a simple application that runs on Heroku or some other application platform, which you can call out to and which handles the SMS functionality for you.
I have it working now using the following structure (I apologize for the formatting - it's mostly pseudocode):
ASyncRequest object:
AsyncType (picklist: 'SMS to Twilio' is the only value for now),
Params (long text area: comma-separated list of Ids)
Message object:
To (phone), From (phone), Message (text), Sent (boolean), smsId (string), Error (text)
Message trigger: passes trigger details to CreateAsyncRequests() method.
CreateAsyncRequests: evaluate each new/updated Message__c; if Sent == false for any messages, we create an AsyncRequest, type=SMS to Twilio, Params += ',' + message.Id.
// Create a list to be inserted after all the Messages have been processed
List<ASyncRequest__c> requests = new List<ASyncRequest__c>();
Once we reach 5 message.Ids in a single AsyncRequest.Params list, add it to requests.
If all the messages have been processed and there's a request with < 5 Ids in Params, add it to requests as well.
If requests.size() > 0 {
insert requests;
AsyncProcessor.StartBatch();
}
AsyncProcessor implements Database.Batchable and Database.AllowsCallouts, and queries ASyncRequest__c for any requests that need to be processed, which in this case will be our Messages list.
The execute() method takes the list of ASyncRequests, splits each Params value into its component Message Ids, and then queries the Message object for those particular Messages.
StartBatch() calls execute() with 1 record at a time, so that each execute() process will still contain fewer than the maximum 10 callouts.
Each Message is processed in a try/catch block that calls SendMessage(), sets Message.smsId = Twilio.smsId and sets Message.Sent = true.
If no smsId is returned, then the message was not sent, and I set a boolean bSidIsNull = true indicating that (at least) one message was not sent.
** If any message failed, no smsIds are returned EVEN FOR MESSAGES THAT WERE SUCCESSFUL **
After each batch of messages is processed, I check bSidIsNull; if true, then I go back over the list of messages and put any that do not have an smsId into a map indexed by the Twilio number I'm trying to send them From.
Since I limited each ASyncRequest to 5 messages, I still have the use of a callout to retrieve all of the messages sent from that Twilio.From number for the current date, using
client.getAccount().getMessages('From' => fromNumber, 'DateSent' => currentDate)
Then I can update the Message.smsIds for all of the messages that were successful, and add an error message to Message.Error_on_Send__c for any that failed.