How to process a message at a specific time? - spring

I'm using spring-integration to develop a service bus. I need to process some messages from the message store at a specific time. For example, if there is an executionTimestamp parameter in the payload of a message, it should be executed at the specified time; otherwise it should be executed as soon as the message is received.
What kind of channel and taskExecutor do I have to use?
Do I have to implement a custom Trigger, or is there a conventional way to implement this message-processing strategy?
Sincerely

See the Delayer.
The delay handler supports expression evaluation results that represent an interval in milliseconds (any Object whose toString() method produces a value that can be parsed into a Long) as well as java.util.Date instances representing an absolute time. In the first case, the milliseconds will be counted from the current time (e.g. a value of 5000 would delay the Message for at least 5 seconds from the time it is received by the Delayer). With a Date instance, the Message will not be released until the time represented by that Date object. In either case, a value that equates to a non-positive delay, or a Date in the past, will not result in any delay. Instead, it will be sent directly to the output channel on the original sender’s Thread. If the expression evaluation result is not a Date, and can not be parsed as a Long, the default delay (if any) will be applied.
You can add a MessageStore to hold the messages, so that messages that are currently delayed are not lost when the server crashes.
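For example, a minimal Java DSL sketch of such a flow (assuming Spring Integration 5.x; the input/output channel names and the payload.executionTimestamp property come from your description, and the MessageGroupStore bean is whatever persistent store you choose, e.g. a JdbcMessageStore):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.store.MessageGroupStore;

@Configuration
@EnableIntegration
public class DelayerConfig {

    @Bean
    public IntegrationFlow delayedFlow(MessageGroupStore messageStore) {
        return IntegrationFlows.from("input")
                // the expression is evaluated per message: a java.util.Date (or a
                // value parseable as Long millis) delays the message; a null or
                // unparseable result falls back to defaultDelay(0), i.e. the
                // message is passed on immediately
                .delay("delayer.messageGroupId", d -> d
                        .delayExpression("payload.executionTimestamp")
                        .defaultDelay(0)
                        .messageStore(messageStore))
                .channel("output")
                .get();
    }
}

With the message store configured, delayed messages are persisted and rescheduled on startup, so a crash does not lose them.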

Related

Kafka Stream - How to send an alert if no event has been received for a given key during some amount of time

I need to send an alert if no event has been received in a topic for a given key during some amount of time. What would be the best approach to solve this use case with Kafka Streams?
I tried:
1) a windowedBy together with a suppress operator:
stream
.groupByKey()
.windowedBy(TimeWindows.of(Duration.ofMillis(1000)).grace(Duration.ZERO))
.count()
.suppress(Suppressed.untilWindowCloses(unbounded()))
.filter((k, v) -> v == 0)
.toStream()
.map((windowId, count) -> KeyValue.pair(windowId.key(), AlarmEvent.builder().build()))
.to(ALARMS, Produced.with(Serdes.String(), AlarmEvent.serde()));
But it seems that the window will not close until an event occurs after the expiration, thus no alarm can be sent exactly after the defined timeout.
2) Using the processor API with a punctuator, it seems to work, but I only tested with a TopologyTestDriver and advanceWallClockTime(). I am not sure whether advanceWallClockTime() reflects real wall-clock advance, or whether it would only change upon event reception, thus falling back to the problem in 1).
3) If the punctuator works, I would like to use it in a ValueTransformer to benefit from the DSL topology. However, I am encountering the problem described in How to forward event downstream from a Punctuator instance in a ValueTransformer?. I cannot send events downstream from the punctuator instance.
4) Finally, I had the idea to inject some dummy events on a regular basis (e.g. every second) for every partition so as to artificially force the inner clock to advance. This would allow me to use the clean and simple DSL window and suppress operators.
2) Using the processor API with a punctuator, it seems to work, but I only tested with a TopologyTestDriver and advanceWallClockTime(). I am not sure whether advanceWallClockTime() reflects real wall-clock advance, or whether it would only change upon event reception, thus falling back to the problem in 1).
That is the right approach. As the name indicates, punctuations can be triggered based on wall-clock time (i.e., system time). TopologyTestDriver mocks wall-clock time for testing purposes, but KafkaStreams will use system time.
3) If the punctuator works, I would like to use it in a ValueTransformer to benefit from the DSL topology. However, I am encountering the problem described in How to forward event downstream from a Punctuator instance in a ValueTransformer?. I cannot send events downstream from the punctuator instance.
You need to use transform() instead. Emitting data via forward() is not allowed in punctuations of a ValueTransformer, because you could emit a record with any key, violating the contract that the key is not modified.
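A rough sketch of that approach, assuming the Kafka Streams 2.x Transformer API. The "last-seen-store" name, the 10-second timeout, and the String event values are invented for illustration; AlarmEvent is the type from your snippet:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class InactivityAlarmTransformer implements Transformer<String, String, KeyValue<String, AlarmEvent>> {

    private static final long TIMEOUT_MS = 10_000L; // invented inactivity timeout

    private ProcessorContext context;
    private KeyValueStore<String, Long> lastSeen;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.lastSeen = (KeyValueStore<String, Long>) context.getStateStore("last-seen-store");
        // WALL_CLOCK_TIME punctuations fire on system time, even when no events arrive
        context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, now -> {
            List<String> expired = new ArrayList<>();
            try (KeyValueIterator<String, Long> it = lastSeen.all()) {
                while (it.hasNext()) {
                    KeyValue<String, Long> entry = it.next();
                    if (now - entry.value > TIMEOUT_MS) {
                        expired.add(entry.key);
                    }
                }
            }
            for (String key : expired) {
                lastSeen.delete(key);
                // forwarding is allowed here because this is a Transformer, not a ValueTransformer
                context.forward(key, AlarmEvent.builder().build());
            }
        });
    }

    @Override
    public KeyValue<String, AlarmEvent> transform(String key, String value) {
        lastSeen.put(key, System.currentTimeMillis());
        return null; // alarms are emitted only from the punctuator
    }

    @Override
    public void close() { }
}

Register the store via StreamsBuilder#addStateStore(...) and attach the transformer with stream.transform(InactivityAlarmTransformer::new, "last-seen-store"); unlike transformValues(), transform() lets the punctuator forward alarm records with their own keys.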
4) Finally, I had the idea to inject some dummy events on a regular basis (e.g. every second) for every partition so as to artificially force the inner clock to advance. This would allow me to use the clean and simple DSL window and suppress operators.
That should work, too.
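For completeness, a minimal sketch of such a heartbeat producer (the topic name, broker address, and one-second tick are invented; your topology also needs a convention for recognizing and ignoring the sentinel records when aggregating):

import java.util.List;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class HeartbeatProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        List<PartitionInfo> partitions = producer.partitionsFor("events"); // "events" = input topic
        ScheduledExecutorService ticker = Executors.newSingleThreadScheduledExecutor();
        ticker.scheduleAtFixedRate(() -> {
            for (PartitionInfo p : partitions) {
                // one sentinel record per partition, so stream time advances everywhere
                producer.send(new ProducerRecord<>("events", p.partition(), "tick", "TICK"));
            }
        }, 0, 1, TimeUnit.SECONDS);
    }
}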

omnetpp: Avoid "sending while transmitting" error using sendDelayed()

I am implementing a PON in OMNeT++ and I am trying to avoid the runtime error that occurs when transmitting while another transmission is ongoing. The only way to avoid this is by using sendDelayed() (or scheduleAt() + send(), but I don't prefer that way).
Even though I have used sendDelayed(), I am still getting this runtime error. My question is: when exactly does the kernel check whether the channel is free if I'm using sendDelayed(msg, startTime, out)? Does it check at simTime() + startTime or at simTime()?
I read the Simulation Manual, but it is not clear about the case I'm asking about.
The busy state of the channel is checked only when you schedule the message (i.e. at simTime(), as you asked). At that point it is checked whether the message is scheduled to be delivered at a time after channel->getTransmissionFinishTime(), i.e. you can query when the currently ongoing transmission will finish, and you must schedule the message for that time or later. But please be aware that this check only catches the most common errors. If you schedule, for example, TWO messages for the same time using sendDelayed(), the kernel will only check that each starts after the currently transmitted message has finished; it will NOT detect that you have scheduled two or more messages for the same point in time after that.
Generally, when you transmit over a channel that has a non-zero datarate (i.e. it takes time to transmit the message), you always have to take care of what happens when messages arrive faster than the channel can transmit them. In that case you should either throw away the message or queue it. If you queue it, you obviously have to put it into a data structure (a queue) and then schedule a self-timer to fire at the time the channel becomes free (and the message is delivered at the other side). At that point, you take the next packet from the queue, put it on the channel, and schedule the next self-timer for the time when that message is delivered.
For this reason, using just sendDelayed() is NOT the correct solution, because you are trying to implicitly implement a queue by postponing the messages. The problem in this case is: once you have scheduled a message with sendDelayed(), what delay will you use if another packet arrives, and then another, within a short timeframe? As you can see, you are implicitly creating a queue here by postponing the events; you are just using the simulation's main event queue to store the packets, which is much more convoluted and error prone.
Long story short: create a queue and schedule self-events to manage the queue content properly, or drop packets if that suits your needs.

Kafka Streams approach to timed window with max count

I have a system where we process text messages. Each message gets split up into sentences, and each sentence gets processed individually and the results of each sentence get published to a topic. This all happens asynchronously.
I want to be able to aggregate the results for the sentences.
The problem is that I want the window to end when the total number of sentences has been reached, or when a total amount of time has passed. Basically tumbling time windows, but ones that can also end once a total number of results has been received.
Secondarily I want to be able to know when that window ends so that I can process the aggregation as an atomic event.
It's possible, but you have to implement a custom processor: your requirements are simply too specific for the high-level API to cater for.
Your processor would store messages into a state store and use punctuate to periodically check if the window expired. It would also keep a running counter and check if the max number of results have been received. If either condition is met, it does the aggregation, removes messages from the state store and sends the results downstream.
You'd have to think about what to do on restart (failover/re-balancing). When starting up, the processor should inspect its state store and calculate the current running count and the window expiry time.
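Not the answer's exact code, but a skeleton of such a processor might look like this (the store names, the 100-sentence cap, and the 60-second window are invented; results are modeled as plain strings to keep the sketch short):

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class CountOrTimeProcessor implements Processor<String, String> {

    private static final int MAX_RESULTS = 100;    // invented count limit
    private static final long WINDOW_MS = 60_000L; // invented window length

    private ProcessorContext context;
    private KeyValueStore<String, String> results; // messageId -> collected sentence results
    private KeyValueStore<String, Long> started;   // messageId -> window start (wall-clock)

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.results = (KeyValueStore<String, String>) context.getStateStore("results-store");
        this.started = (KeyValueStore<String, Long>) context.getStateStore("started-store");
        // periodically close windows whose time limit has expired
        context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, now -> {
            List<String> expired = new ArrayList<>();
            try (KeyValueIterator<String, Long> it = started.all()) {
                while (it.hasNext()) {
                    KeyValue<String, Long> entry = it.next();
                    if (now - entry.value >= WINDOW_MS) {
                        expired.add(entry.key);
                    }
                }
            }
            expired.forEach(this::closeWindow);
        });
    }

    @Override
    public void process(String messageId, String sentenceResult) {
        String batch = results.get(messageId);
        if (batch == null) {
            started.put(messageId, System.currentTimeMillis());
            batch = sentenceResult;
        } else {
            batch = batch + "\n" + sentenceResult;
        }
        results.put(messageId, batch);
        // close early if the count limit is reached
        if (batch.split("\n").length >= MAX_RESULTS) {
            closeWindow(messageId);
        }
    }

    // the window closes as one atomic event: aggregate, emit, clean up
    private void closeWindow(String messageId) {
        String batch = results.get(messageId);
        if (batch != null) {
            context.forward(messageId, batch); // real code would aggregate here
            results.delete(messageId);
            started.delete(messageId);
        }
    }

    @Override
    public void close() { }
}

Wired into a Topology with addStateStore() and addProcessor(), the forward() call sends the finished aggregate to a downstream sink; because the start times are in a persistent store, expiry also survives a restart, as the answer recommends.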
Now Apache Kafka offers you a way to wait for the window to close. Here is a piece of code:
suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
For more, check it out.

Proper way of using the time in MSG

I'm writing a logging feature that registers socket events. The problem I'm having is that even though I have the time of the event in the MSG structure that I get when I call PeekMessage, the subsequent call to DispatchMessage will end up being handled by WindowProc, which does not receive the time as a parameter.
The "solution" I'm using to log times consists in detecting socket events in the main loop of my Windows application where PeekMessage occurs.
Which would be the proper way to do this? I would rather prefer not having to add logging specific logic to an otherwise general routine.
Use GetMessageTime() in your socket message handler:
Retrieves the message time for the last message retrieved by the GetMessage() function. The time is a long integer that specifies the elapsed time, in milliseconds, from the time the system was started to the time the message was created (that is, placed in the thread's message queue).
Compared to the time field of the MSG structure:
The time at which the message was posted.

Async Request-Response Algorithm with response time limit

I am writing a Message Handler for an ebXML message-passing application. The messages follow the Request-Response pattern. The process is straightforward: the Sender sends a message, the Receiver receives the message and sends back a response. So far so good.
On receipt of a message, the Receiver has a set Time To Respond (TTR) to the message. This could be anywhere from seconds to hours/days.
My question is this: how should the Sender deal with the TTR? I need this to be an async process, as the TTR could be quite long (several days). How can I somehow count down the timer without tying up system resources for long periods of time? There could be large volumes of messages.
My initial idea is to have a "Waiting" Collection, to which the message Id is added, along with its TTR expiry time. I would then poll the collection on a regular basis. When the timer expires, the message Id would be moved to an "Expired" Collection and the message transaction would be terminated.
When the Sender receives a response, it can check the "Waiting" collection for its matching sent message, and confirm the response was received in time. The message would then be removed from the collection for the next stage of processing.
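To illustrate, here is roughly what I have in mind, sketched in Java (the language is irrelevant, as I said). Instead of polling on a timer, a DelayQueue does the countdown, so a single blocked thread surfaces expired entries exactly when their TTR elapses; the message IDs and TTR values are of course illustrative:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class PendingResponse implements Delayed {
    final String messageId;
    final long expiresAtMillis;

    PendingResponse(String messageId, long ttrMillis) {
        this.messageId = messageId;
        this.expiresAtMillis = System.currentTimeMillis() + ttrMillis;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(expiresAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

class TtrTracker {
    private final DelayQueue<PendingResponse> waiting = new DelayQueue<>();
    private final ConcurrentHashMap<String, PendingResponse> byId = new ConcurrentHashMap<>();

    void messageSent(String messageId, long ttrMillis) {
        PendingResponse p = new PendingResponse(messageId, ttrMillis);
        byId.put(messageId, p);
        waiting.put(p);
    }

    // returns true if the response arrived within its TTR
    boolean responseReceived(String messageId) {
        return byId.remove(messageId) != null;
    }

    // one background thread blocks here instead of polling; entries already
    // answered are skipped lazily when their stale queue element surfaces
    void runExpiryLoop() throws InterruptedException {
        while (true) {
            PendingResponse expired = waiting.take();
            if (byId.remove(expired.messageId) != null) {
                // TTR elapsed without a response -> move to "Expired" handling
            }
        }
    }
}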
Does this sound like a robust solution? I am sure this is a solved problem, but there is precious little information about this type of algorithm. I plan to implement it in C#, but the implementation language is kind of irrelevant at this stage, I think.
Thanks for your input
Depending on the number of clients, you can use persistent JMS queues: one queue per client ID. The message will stay in the queue until a client connects to retrieve it.
I'm not understanding the purpose of the TTR. Is it more of a client-side measure, meaning that if the response cannot be returned within a certain time then just don't bother sending it? Or is it to be used on the server to schedule the work: do what's required now, and push the requests with a later response time to be done later?
It's a broad question...
