Trigger/handle events between programs in different ABAP sessions

I have two programs running in separate sessions. I want to send an event from program A and catch this event in program B.
How can I do that?

Using class-based events is not really an option, as these cannot be used to communicate between user sessions.
There is a mechanism that you can use to send messages between sessions: ABAP Messaging Channels. You can send anything that is either a text string, a byte string, or serialisable into one of those.
You will need to create such a message channel using the repository browser SE80 (Create > Connectivity > ABAP Messaging Channel) or with the Eclipse ADT (New > ABAP Messaging Channel Application).
In there, you will have to define:
The message type (text vs binary)
The ABAP programs that are authorised to access the message channel.
The scope of the messages (i.e. do you want to send messages between users? or just for the same user? what about between application servers?)
The message channels work through a publish/subscribe mechanism. You will have to use specialised classes to publish to the channel (inside report A) and to read from the channel (inside report B). To wait for a message to arrive once you have subscribed, you can use the statement WAIT FOR MESSAGING CHANNELS.
Example code:
" publishing a message
CAST if_amc_message_producer_text(
cl_amc_channel_manager=>create_message_producer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
i_suppress_echo = abap_true )
)->send( i_message = text_message ).
" subscribing to a channel
DATA(lo_receiver) = NEW message_receiver( ).
cl_amc_channel_manager=>create_message_consumer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
)->start_message_delivery( i_receiver = lo_receiver )
" waiting for a message
WAIT FOR MESSAGING CHANNELS
UNTIL lo_receiver->text_message IS NOT INITIAL
UP TO time SECONDS.
If you want to avoid waiting inside your subscriber report B and do something else in the meantime, you can wrap the WAIT FOR ... statement inside an RFC and call that RFC asynchronously (aRFC). This allows you to continue doing work inside report B while waiting for the event to happen. When the event happens, the aRFC callback method that you registered when calling the RFC is executed.
Inside the RFC, you would simply have the subscription part and the WAIT statement, plus an assignment of the message itself to an EXPORTING parameter. In your report, you could have something like:
CALL FUNCTION 'ZMY_AMC_WRAPPER'
  STARTING NEW TASK 'MY_TASK'
  CALLING lo_listener->my_method ON END OF TASK.

" inside your 'listener' class implementation
" (an aRFC callback method needs an IMPORTING parameter p_task TYPE clike)
METHOD my_method.
  DATA lv_message TYPE my_message_type.
  RECEIVE RESULTS FROM FUNCTION 'ZMY_AMC_WRAPPER'
    IMPORTING ev_message = lv_message.
  " do something with lv_message
ENDMETHOD.
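For completeness, the wrapper function module itself could look roughly like this. This is only a sketch: the function name, the receiver class lcl_message_receiver and the 60-second timeout are placeholders, and the channel must match the one you defined above.
FUNCTION zmy_amc_wrapper.
*"  EXPORTING
*"     VALUE(ev_message) TYPE string
  " subscribe first, then block until a message arrives (or we time out)
  DATA(lo_receiver) = NEW lcl_message_receiver( ).
  cl_amc_channel_manager=>create_message_consumer(
      i_application_id = 'DEMO_AMC'
      i_channel_id     = '/demo_text'
    )->start_message_delivery( i_receiver = lo_receiver ).
  WAIT FOR MESSAGING CHANNELS
    UNTIL lo_receiver->text_message IS NOT INITIAL
    UP TO 60 SECONDS.
  ev_message = lo_receiver->text_message.
ENDFUNCTION.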

You could emulate it by checking in program B whether a parameter in SAP memory has changed; program A sets this parameter to signal the event (i.e. SET/GET PARAMETER ...). In effect, B is polling for the event.
There are a lot of unknowns in your description. For example, is the event a one-shot operation, or can A send several events? If so, B will have to clear the parameter once it has handled the event, so that A knows it is OK to send a new one (and A will have to wait for the parameter to clear after having set it)...
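A minimal sketch of that polling loop, assuming a made-up parameter ID 'ZEV' (both sessions must belong to the same logon so that SAP memory is shared):
" program A - signal the event
SET PARAMETER ID 'ZEV' FIELD 'X'.

" program B - poll until the event arrives
DATA lv_flag TYPE c LENGTH 1.
DO.
  GET PARAMETER ID 'ZEV' FIELD lv_flag.
  IF lv_flag = 'X'.
    " acknowledge, so that A may signal the next event
    SET PARAMETER ID 'ZEV' FIELD ' '.
    EXIT.
  ENDIF.
  WAIT UP TO 1 SECONDS.
ENDDO.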
Edited: removed the part about having no messaging in ABAP, since Seban showed I was wrong.

Related

ReceiveTask in Camunda is not working as expected

We have been using Camunda 7.4 in most of our projects, along with camunda-bpm-spring-boot-starter 1.1.0.
We have a project where, in the Camunda flow, we publish a message to a message broker; it is consumed by another system, which re-publishes a new message to the same broker. We then trigger a receiveTask to receive that message and process further. To listen for the incoming message we use org.springframework.amqp.core.MessageListener and define the message correlation for the receiveTask within the onMessage() method. But we get the error below in doing so:
org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'ReceiveAmsharedResponse': No process definition or execution matches the parameters
We are trying to figure out where the problem lies. Is it the version of Camunda we are using, or the way we use receiveTask? We tried all the approaches described in https://docs.camunda.org/manual/7.4/reference/bpmn20/tasks/receive-task/ but with no luck.
For createMessageCorrelation we get the above error, and for the other methods we get an NPE, as the EventSubscription/Execution objects are null.
Sample receiveTask usage in the Camunda flow is as below:
<bpmn2:receiveTask id="ReceiveTask" name="Receive Task" messageRef="Message_06nl07f">
  <bpmn2:incoming>SequenceFlow_xyz</bpmn2:incoming>
  <bpmn2:outgoing>SequenceFlow_190m9nx</bpmn2:outgoing>
</bpmn2:receiveTask>
......
......
<bpmn2:message id="Message_06nl07f" name="Message" />
And sample message correlation code:
class XYZ implements MessageListener {
    @Override
    public void onMessage(Message message) {
        runtimeService.createMessageCorrelation("Message")
            .processInstanceBusinessKey(resourceId)
            .setVariable(ACTIVITY_RESULT, activityResult)
            .correlate();
    }
}
Any help would be appreciated.
Thanks,
Viswanath
Assuming your process looks something like this:
O -- ( M ) -- ( M ) -- O
      send     receive
If the response message is sent very fast, it is possible that onMessage and the message correlation are executed before the message event subscription has been persisted in the database. Basically, the message arrives while the send task is still being executed.
One way to avoid this would be to create the message subscription in parallel with sending the event:
O -- + -- ( M ) -- + -- O
     |    send     |
     `--- ( M ) ---´
          receive
Regarding the given exception message, which is:
org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'ReceiveAmsharedResponse': No process definition or execution matches the parameters
I assume that you correlate a message with the name ReceiveAmsharedResponse, but you defined a message with a different name for your receiveTask.
Changing the definition of your Message to the following should work:
<bpmn2:message id="Message_06nl07f" name="ReceiveAmsharedResponse" />
Assuming your process looks something like this:
O -- ( M ) -- ( M ) -- O
      send     receive
This error means that a response was received "earlier" than the request was sent.
And by sent, I mean the whole transaction of the send-request step.
In other words, because of the fast response, there is no committed transaction in the Camunda database for the send-request step yet. Thus, when Camunda processes the received response, it cannot correlate it with the send request.
One suggested solution is to use the Async Continuations that Camunda offers (described on their official website). More specifically, you can set Async Before on the send-request activity:
( M )
 send
Async Before persists the transaction in the Camunda database before the send-request step begins. Therefore, it becomes possible for the response to be correlated.
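In the BPMN XML this is a single attribute on the sending activity. A sketch (the element id and sequence-flow names are illustrative, and the camunda namespace must be declared on the definitions element):
<bpmn2:serviceTask id="SendTask" name="Send Request" camunda:asyncBefore="true">
  <bpmn2:incoming>SequenceFlow_in</bpmn2:incoming>
  <bpmn2:outgoing>SequenceFlow_out</bpmn2:outgoing>
</bpmn2:serviceTask>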

Two-way-binding for golang structs

TLDR: Can I register callback functions in golang to get notified if a struct member is changed?
I would like to create a simple two-way-binding between a go server and an angular client. The communication is done via websockets.
Example:
Go:
type SharedType struct {
    A int
    B string
}

sharedType := &SharedType{}
...
sharedType.A = 52
JavaScript:
var sharedType = {A: 0, B: ""};
...
sharedType.A = 52;
Idea:
In both cases, after modifying the values, I want to trigger a custom callback function, send a message via the websocket, and update the value on the client/server side accordingly.
The sent message should only state which value changed (the key / index) and what the new value is. It should also support nested types (structs, that contain other structs) without the need of transmitting everything.
On the client side (angular), I can detect changes of JavaScript objects by registering a callback function.
On the server side (golang), I could create my own map[] and slice[] implementations that trigger callbacks every time a member is modified (see the Cabinet class in this example: https://appliedgo.net/generics/).
Within these callback-functions, I could then send the modified data to the other side, so two-way binding would be possible for maps and slices.
My Question:
I would like to avoid things like
sharedType.A = 52
sharedType.MemberChanged("A")
// or:
sharedType.Set("A", 52) //.. which is equivalent to map[], just with a predifined set of allowed keys
Is there any way in golang to get informed if a struct member is modified? Or is there any other, generic way for easy two-way binding without huge amounts of boiler-plate code?
No, it's not possible.
But the real question is: how do you propose to wield all this magic in your Go program?
Suppose what you'd like to have were indeed possible. Now an innocent assignment
v.A = 42
would, among other things, trigger sending stuff over a websocket connection to the client. Now what happens if the connection is closed (client disconnected) and the sending fails? What happens if sending fails to complete before a deadline is reached?
OK, suppose you get it at least partially right, and the actual modification of the local field happens only if the sending succeeds. Still, how should sending errors be handled? Say, what should happen if the third assignment in
v.A = 42
v.B = "foo"
v.C = 1e10-23
fails?
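This is why the explicit route the question hopes to avoid is the usual answer: a setter gives the failure a place to surface. A minimal sketch (the callback field and all names here are made up for illustration):
package main

import "fmt"

type SharedType struct {
    a        int
    onChange func(field string, value interface{}) error
}

// SetA applies the change only if the callback (e.g. the websocket
// send) succeeds, so a failed sync is reported instead of hidden.
func (s *SharedType) SetA(v int) error {
    if s.onChange != nil {
        if err := s.onChange("A", v); err != nil {
            return err
        }
    }
    s.a = v
    return nil
}

func main() {
    s := &SharedType{
        onChange: func(field string, value interface{}) error {
            fmt.Printf("%s changed to %v\n", field, value)
            return nil // a real callback would write to the websocket here
        },
    }
    if err := s.SetA(52); err != nil {
        fmt.Println("sync failed:", err)
    }
}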
You could try using server-sent events (SSE) to push realtime data to the frontend, while sending a single POST request with your changes. That way you can monitor on the backend and send data every second.

Pubnub chat application with storage

I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in the database, and I also want to be able to send messages in the chat.
I found out that I can use Parse with PubNub to provide storage options, but I'm not sure how to set the two up so that the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options available to use with PubNub instead of Parse?
Sutha,
What you are seeking is not a trivial solution unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every chat channel that is active and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how tricky this gets to scale, with several server instances listening to channels in a non-overlapping manner, or with overlap but using a server queue implementation and de-duping messages).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
Implement a solution that allows many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up to it, this is not likely where you want to go.
There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels from which your server pulls history (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server - your Parse server in this case):
channel active: record "start chat" timetoken in your Parse db
channel inactive: record "end chat" timetoken in your Parse db
the inactive event kicks off a process that uses the start/end timetokens you recorded for that channel to pull its history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
you will need to iterate on this history call until you receive < 100 messages (100 is the max number you can retrieve at a time); see the sketch after this list
as you retrieve these messages you will save them to your Parse db
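A rough sketch of that iteration, assuming the v3 JavaScript SDK, where history returns [messages, startTimetoken, endTimetoken] (the function and variable names here are illustrative):
// page through everything recorded between startTT and endTT,
// 100 messages (the maximum) at a time
function fetchHistory(channelName, startTT, endTT, saveToParse) {
    pubnub.history({
        channel: channelName,
        start: startTT,
        end: endTT,
        count: 100,
        callback: function (result) {
            var msgs = result[0];
            msgs.forEach(saveToParse);
            if (msgs.length === 100) {
                // a full page means more may remain: continue from
                // the earliest timetoken of this page
                fetchHistory(channelName, result[1], endTT, saveToParse);
            }
        }
    });
}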
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
Finally, you could just save each message to the Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); but even if not, you can save the message to the Parse cloud db directly.
The code in your JavaScript client (in the browser) would look something like this (not complete, and there are likely errors; figure it out or ask PubNub support for details):
var pubnub = PUBNUB({
    publish_key: your_pub_key,
    subscribe_key: your_sub_key
});

var msg = ... // get the message from your UI text box or whatever

pubnub.publish({
    // this is some variable you set up when you enter a chat room
    channel: chat_channel,
    message: msg,
    callback: function(event) {
        // DISCLAIMER: code pulled from the Parse example, but some
        // object creation details are left out here and the msg
        // object is not fully fleshed out in this sample code
        var ChatMessage = Parse.Object.extend("ChatMessage");
        var chatMsg = new ChatMessage();
        chatMsg.set("message", msg);
        chatMsg.set("user", uuid);
        chatMsg.set("channel", chat_channel);
        chatMsg.set("timetoken", event[2]);
        // this ChatMessage object can be whatever you want it to be
        chatMsg.save();
    },
    error: function(error) {
        // handle the error here, e.g. retry until success
        console.log(JSON.stringify(error));
    }
});
You might even store the entire set of publishes (on both ends of the conversation) based on a time interval, number of publishes, or total data size, but be careful: either user could exit the chat and the browser without notice and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
I hope you find one of these techniques as a means to get started in the right direction. There are details left out so I expect you will have follow up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)

Implementing bulk-messaging from Salesforce to/from Twilio, hitting Salesforce API limits

I am building an integration between Salesforce and Twilio that sends/receives SMS using the TwilioForce REST API. The main issue is getting around the 10-callout-per-transaction limit in Salesforce, as well as the prohibition on HTTP callouts from a trigger.
I am basing the design on Dan Appleman's Asynchronous Request processes, but in either Batch mode or RequestAsync(), ASync(), Sync(), repeat... I'm still hitting the limits.
I'd like to know how other developers have done this successfully; the integrations have been there for a while, but the examples are few and far between.
Are you sending unique messages to each record that has been updated? If not, why not send one message to multiple recipients to save on your API limits?
Unfortunately, if you do actually need to send more than 10 unique messages, there is no way to send messages in bulk with the Twilio API. You could instead write a simple application that runs on Heroku or some other application platform, which you can call out to and which handles the SMS functionality for you.
I have it working now using the following structure (I apologize for the formatting - it's mostly pseudocode):
AsyncRequest object:
AsyncType (picklist: 'SMS to Twilio' is the only value for now),
Params (long text area: comma-separated list of Ids)
Message object:
To (phone), From (phone), Message (text), Sent (boolean), smsId (string), Error (text)
Message trigger: passes the trigger details to the CreateAsyncRequests() method.
CreateAsyncRequests: evaluates each new/updated Message__c; if Sent == false for any message, we create an AsyncRequest (type = 'SMS to Twilio') and append ',' + message.Id to its Params.
// Create a list to be inserted after all the Messages have been processed
List<AsyncRequest__c> requests = new List<AsyncRequest__c>();
Once we reach 5 message Ids in a single AsyncRequest.Params list, add it to requests.
If all the messages have been processed and there's a request with < 5 Ids in Params, add it to requests as well.
If requests.size() > 0 {
    insert requests;
    AsyncProcessor.StartBatch();
}
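Fleshed out, CreateAsyncRequests might look roughly like this. A sketch only: the custom object and field names (AsyncRequest__c, AsyncType__c, Params__c, Sent__c) are assumed from the description above:
// Sketch: chunk unsent messages into AsyncRequest__c records,
// five Message__c Ids per request
public static void createAsyncRequests(List<Message__c> newMessages) {
    List<AsyncRequest__c> requests = new List<AsyncRequest__c>();
    List<String> ids = new List<String>();
    for (Message__c m : newMessages) {
        if (!m.Sent__c) {
            ids.add(m.Id);
            if (ids.size() == 5) {
                requests.add(new AsyncRequest__c(
                    AsyncType__c = 'SMS to Twilio',
                    Params__c = String.join(ids, ',')));
                ids.clear();
            }
        }
    }
    if (!ids.isEmpty()) {
        requests.add(new AsyncRequest__c(
            AsyncType__c = 'SMS to Twilio',
            Params__c = String.join(ids, ',')));
    }
    if (!requests.isEmpty()) {
        insert requests;
        AsyncProcessor.StartBatch();
    }
}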
AsyncProcessor implements Database.Batchable and Database.AllowsCallouts, and queries AsyncRequest__c for any requests that need to be processed, which in this case will be our Messages list.
The execute() method takes the list of AsyncRequests, splits each Params value into its component Message Ids, and then queries the Message object for those particular Messages.
StartBatch() runs the batch with a scope of 1 record at a time, so that each execute() call still stays under the maximum of 10 callouts.
Each Message is processed in a try/catch block that calls SendMessage(), sets Message.smsId = Twilio.smsId and sets Message.Sent = true.
If no smsId is returned, then the message was not sent, and I set a boolean bSidIsNull = true indicating that (at least) one message was not sent.
** If any message failed, no smsIds are returned EVEN FOR MESSAGES THAT WERE SUCCESSFUL **
After each batch of messages is processed, I check bSidIsNull; if true, then I go back over the list of messages and put any that do not have an smsId into a map indexed by the Twilio number I'm trying to send them From.
Since I limited each ASyncRequest to 5 messages, I still have the use of a callout to retrieve all of the messages sent from that Twilio.From number for the current date, using
client.getAccount().getMessages('From' => fromNumber, 'DateSent' => currentDate)
Then I can update the Message.smsIds for all of the messages that were successful, and add an error message to Message.Error_on_Send__c for any that failed.

Publisher finishes before subscriber and messages are lost - why?

Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub starting before pub), the publisher finishes but the subscriber hangs, having not received all the messages - why?
I think the socket is being closed while messages are still queued. Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

y = 0
for x in xrange(5000):
    st = random.randrange(1, 10)
    data = []
    data.append(random.randrange(1, 100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0, 10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st, s)
    socket.send("%d %s" % (st, s))
    print "Messages sent: %d" % x
    y += 1

print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber:
import sys
import zmq
import tnetstring

# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

filter = ""  # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)

x = 0
while True:
    topic, data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic, data, x)
    x += 1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the client will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); the latter blocks until all queued messages have been sent. You also have the slow-joiner problem: you can add a sleep after the bind but before you start publishing. This works in a test case, but you will want to really understand what is a solution versus a band-aid.
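Concretely, the band-aid plus clean shutdown might look like this on the publisher side (a sketch in the question's Python 2 style; the one-second sleep is an arbitrary choice):
import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")
time.sleep(1)  # slow-joiner band-aid: give subscribers time to connect

for x in xrange(5000):
    socket.send("1 message %d" % x)

socket.close()  # close the socket ...
context.term()  # ... and block until queued messages are flushed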
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The publisher will continue to send even if no one is listening; the subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example would be a status message: it doesn't matter if you miss any of the previous statuses, because the latest one is correct and message loss doesn't matter. The benefit of this approach is that you end up with a more robust and performant system. The downside is that you can't always design your messages this way.
Your example includes a type of message that cannot tolerate loss. Another such type would be transactional messages: for example, if you sent only the deltas of what changed in your system, you could not afford to lose any of them. Database replication is often managed this way, which is why db replication is often so fragile. To provide guarantees, you need to do a couple of things. First, add a persistent cache: each message sent is logged in it, and each message is assigned a unique id (preferably a sequence) so that the clients can determine whether they are missing a message. Second, add a socket (ROUTER/REQ) for the clients to request the missing messages individually. Alternatively, you could use the secondary socket to request resending over PUB/SUB; the clients would then all receive the messages again (which works for the multicast version) and ignore the ones they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
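A minimal sketch of the sequencing/caching half of that idea (the cache here is an in-memory dict; a real implementation would persist it, and a second socket would serve the resend requests):
import zmq

context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5557")

sequence = 0
cache = {}  # seq -> message, consulted when a client requests a resend

def publish(topic, body):
    global sequence
    sequence += 1
    msg = "%s %d %s" % (topic, sequence, body)
    cache[sequence] = msg  # log before sending
    pub.send(msg)
Subscribers track the last sequence number seen; a gap tells them exactly which message ids to request back.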
An alternative approach is to create your own broker using ROUTER/DEALER sockets. When the ROUTER socket sees each DEALER connect, it stores its ID. When the ROUTER needs to send data, it iterates over all client IDs and publishes the message. Each message should contain a sequence number so that the client knows which missing messages to request. NOTE: this is a sort of reimplementation of Kafka from LinkedIn.
