I'm writing a logging feature that registers socket events. The problem is that even though I have the time of the event in the MSG structure I get when I call PeekMessage, the subsequent call to DispatchMessage ends up being handled by my WindowProc, which does not receive the time as a parameter.
The "solution" I'm using to log times consists in detecting socket events in the main loop of my Windows application where PeekMessage occurs.
Which would be the proper way to do this? I would rather prefer not having to add logging specific logic to an otherwise general routine.
Use GetMessageTime() in your socket message handler:
Retrieves the message time for the last message retrieved by the GetMessage() function. The time is a long integer that specifies the elapsed time, in milliseconds, from the time the system was started to the time the message was created (that is, placed in the thread's message queue).
Compared to the time field of the MSG structure:
The time at which the message was posted.
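For example, a minimal sketch assuming the events arrive via WSAAsyncSelect(); WM_SOCKET and LogSocketEvent are illustrative names, not from the question:

#include <winsock2.h>
#include <windows.h>

#define WM_SOCKET (WM_APP + 1)                          // illustrative custom message

void LogSocketEvent(SOCKET s, int event, LONG timeMs);  // hypothetical logger

LRESULT CALLBACK WindowProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_SOCKET:
        // GetMessageTime() returns the same value MSG.time held when the
        // message was retrieved: milliseconds from system start to the
        // moment the message was placed in the queue.
        LogSocketEvent((SOCKET)wParam, WSAGETSELECTEVENT(lParam), GetMessageTime());
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}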
There are a few ways I can post a message to the GUI thread:
PostMessage: according to the docs, these messages are processed first (before most other messages). If I use this too often, the GUI thread may get stuck processing only my messages and nothing else (it will not respond to keyboard/mouse, etc.). This method's priority is too high.
SetTimer: WM_TIMER messages are processed last, after everything else, so if there is any painting happening (e.g. if I move a window continuously), all the time will be spent processing common messages and WM_TIMER will fire too late. This method's priority is too low.
I need something in between, so that my custom message is processed ASAP but there is still room for the rest of the messages, keeping the GUI responsive.
What I'd like to try is to post a message that is processed in the same order as normal messages.
So here is the question, how can I do that?
Added:
I have one thread that prepares video frames, and it needs to notify the main (UI) thread that a new frame is ready and maybe display it. In a typical game loop it would be something like:
process messages until queue is empty
process 1 frame
repeat
But here I can't control the message loop, because it may be inside a modal popup or a menu.
Assume the answer is "no" (there is no other way to insert a message).
However, when processing the posted message I can monitor the elapsed time and, instead of doing the work immediately, re-signal the same work through WM_TIMER.
Update:
After some observation, it seems that no fixed time budget (1 ms? 5 ms?) guarantees that input will be processed. What works instead is explicitly checking the message queue for pending input messages:
case MY_MSG:
{
    MSG msg;
    // If any input message (keyboard/mouse) is waiting, yield to it:
    // defer the work by re-signalling through a zero-delay timer.
    if (PeekMessage(&msg, 0, 0, 0, PM_NOREMOVE | PM_QS_INPUT))
        SetTimer(hwnd, MY_TIMER, 0, 0);
    else
        DoWork();
}
return 0;
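The deferred work is then done in the matching WM_TIMER handler; a minimal sketch (assuming the same MY_TIMER and DoWork() as above) that would sit in the same window procedure's switch:

case WM_TIMER:
    if (wParam == MY_TIMER)
    {
        KillTimer(hwnd, MY_TIMER);  // one-shot: stop the timer before working
        DoWork();                   // pending input has been retrieved by now
        return 0;
    }
    break;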
I am implementing a PON in OMNeT++, and I am trying to avoid the runtime error that occurs when transmitting while another transmission is ongoing. The only way to avoid this is by using sendDelayed() (or scheduleAt() + send(), but I'd rather not do it that way).
Even though I have used sendDelayed(), I am still getting this runtime error. My question is: when exactly does the kernel check whether the channel is free if I'm using sendDelayed(msg, startTime, out)? Does it check at simTime() + startTime, or at simTime()?
I read the Simulation Manual, but it is not clear about the case I'm asking about.
The busy state of the channel is checked only when you schedule the message (i.e. at simTime(), as you asked). At that point, the kernel checks whether the message is scheduled to be delivered at a time after channel->getTransmissionFinishTime(), i.e. you can query when the currently ongoing transmission will finish, and you must schedule the message for that time or later. But please be aware that this check only catches the most common errors. If, for example, you schedule TWO messages for the same time using sendDelayed(), the kernel will check only that each starts after the currently transmitted message has finished, but it will NOT detect that you have scheduled two or more messages for the same point in time after that.
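In code, that check corresponds to something like this sketch (the gate name "out" and the variable names are assumptions; note that this alone is not a fix, for the reasons given below):

// Earliest time at which the channel will be free again.
simtime_t finish = gate("out")->getTransmissionChannel()->getTransmissionFinishTime();
// Delay relative to now; zero if the channel is already idle.
simtime_t delay = (finish > simTime()) ? finish - simTime() : SIMTIME_ZERO;
sendDelayed(pkt, delay, "out");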
Generally, when you transmit over a channel that has a non-zero datarate set (i.e. it takes time to transmit the message), you always have to take care of what happens when messages arrive faster than the rate of the channel. In this case you should either throw away the message or queue it. If you queue it, you obviously have to put it into a data structure (a queue) and then schedule a self-message for the time when the channel gets free (and the message is delivered at the other side). At that point, you should take the next packet from the queue, put it on the channel, and schedule the next self-message for the time when that message is delivered.
For this reason, using just sendDelayed() is NOT the correct solution: you are trying to implement a queue implicitly by postponing the messages. The problem is that once you have scheduled a message with sendDelayed(), what delay will you use if another packet arrives, and then another, within a short timeframe? As you can see, you are implicitly creating a queue here by postponing the events, using the simulation's main event queue to store the packets, which is much more convoluted and error-prone.
Long story short: create a queue and schedule self-messages to manage the queue content properly, or drop the packets if that suits your needs.
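A minimal sketch of that queue + self-message pattern (module, member, and gate names are assumptions, not from the question):

#include <omnetpp.h>
using namespace omnetpp;

class TxModule : public cSimpleModule
{
    cQueue txQueue;                    // packets waiting for the channel
    cMessage *endTxTimer = nullptr;    // fires when the channel becomes free

    virtual void initialize() override {
        endTxTimer = new cMessage("endTx");
    }
    virtual void handleMessage(cMessage *msg) override {
        if (msg == endTxTimer) {
            // Channel is free again: transmit the next queued packet, if any.
            if (!txQueue.isEmpty())
                startTransmission(check_and_cast<cPacket *>(txQueue.pop()));
        }
        else {
            cPacket *pkt = check_and_cast<cPacket *>(msg);
            if (gate("out")->getTransmissionChannel()->isBusy())
                txQueue.insert(pkt);   // channel busy: queue instead of colliding
            else
                startTransmission(pkt);
        }
    }
    void startTransmission(cPacket *pkt) {
        send(pkt, "out");
        // Wake up exactly when this transmission finishes.
        scheduleAt(gate("out")->getTransmissionChannel()->getTransmissionFinishTime(),
                   endTxTimer);
    }
    ~TxModule() { cancelAndDelete(endTxTimer); }
};

Define_Module(TxModule);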
I'm using Spring Integration to develop a service bus. I need to process some messages from the message store at a specific time. For example, if there is an executionTimestamp parameter in the payload of the message, it should be executed at the specified time; otherwise it should be executed as soon as the message is received.
What kind of channel and task executor do I have to use?
Do I have to implement a custom Trigger, or is there a conventional way to implement this message-processing strategy?
Sincerely
See the Delayer.
The delay handler supports expression evaluation results that represent an interval in milliseconds (any Object whose toString() method produces a value that can be parsed into a Long) as well as java.util.Date instances representing an absolute time. In the first case, the milliseconds will be counted from the current time (e.g. a value of 5000 would delay the Message for at least 5 seconds from the time it is received by the Delayer). With a Date instance, the Message will not be released until the time represented by that Date object. In either case, a value that equates to a non-positive delay, or a Date in the past, will not result in any delay. Instead, it will be sent directly to the output channel on the original sender’s Thread. If the expression evaluation result is not a Date, and can not be parsed as a Long, the default delay (if any) will be applied.
You can add a MessageStore to hold the messages, so that messages that are currently delayed are not lost if the server crashes.
I'd like to confirm what I think to be true. When I use Windows SendMessage(), the call is deterministic in that it executes immediately (it will not return until the message is processed), as opposed to PostMessage(), which is non-deterministic, since it can be preempted by any other message that happens to be in the queue at that moment (in fact, it will not be executed until the message loop gets to it).
Is this a fair assessment, or am I missing something?
That is essentially true for in-process calls. Cross-process SendMessage works similarly, but processing of the message doesn't begin until the receiving process calls GetMessage (or its kin).
Your UI thread has a message pump that looks something like:
MSG msg;
while (GetMessage(&msg, NULL, 0, 0))
    DispatchMessage(&msg);
PostMessage causes the message to be put onto the message queue. GetMessage removes the oldest message from the queue (FIFO*).
DispatchMessage causes the WndProc associated with the message's target window to be called with the message.
SendMessage basically bypasses this chain and calls the WndProc directly (more or less).
A lot of standard window messages result in a chain of SendMessage calls where sending one message sends another which sends another. That chain is often referred to as "the current dispatch". If you need your message to be processed inside the chain, use SendMessage. If you need it to be processed after the current dispatch has completed, use PostMessage.
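For instance, a hypothetical fragment inside a window procedure (WM_MY_SYNC and WM_MY_ASYNC are illustrative WM_APP-range messages):

#define WM_MY_SYNC  (WM_APP + 1)
#define WM_MY_ASYNC (WM_APP + 2)

case WM_COMMAND:
    // Calls the target window's WndProc directly, inside the current
    // dispatch; its handler runs before this call returns.
    SendMessage(hwnd, WM_MY_SYNC, 0, 0);
    // Only appends to the queue; its handler runs after the current
    // dispatch unwinds and the message loop retrieves the message.
    PostMessage(hwnd, WM_MY_ASYNC, 0, 0);
    return 0;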
You can use a tool like Spy++ to see Windows messaging in action, or to debug problems you're having with message order of operations.
[*] It's not strictly a FIFO queue, because certain kinds of messages (e.g. timer, mouse, and keyboard messages) are not actually posted to the queue but are generated on the fly. For the sake of simplicity, you can think of it as a FIFO.
I am writing a Message Handler for an ebXML message passing application. The messages follow the Request-Response pattern. The process is straightforward: the Sender sends a message, the Receiver receives the message and sends back a response. So far so good.
On receipt of a message, the Receiver has a set Time To Respond (TTR) to the message. This could be anywhere from seconds to hours/days.
My question is this: how should the Sender deal with the TTR? This needs to be an asynchronous process, as the TTR could be quite long (several days). How can I count down the timer without tying up system resources for long periods of time? There could be large volumes of messages.
My initial idea is to have a "Waiting" Collection, to which the message Id is added, along with its TTR expiry time. I would then poll the collection on a regular basis. When the timer expires, the message Id would be moved to an "Expired" Collection and the message transaction would be terminated.
When the Sender receives a response, it can check the "Waiting" collection for its matching sent message, and confirm the response was received in time. The message would then be removed from the collection for the next stage of processing.
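For illustration, a minimal sketch of such a "Waiting" collection, keyed by expiry time so each poll only has to inspect the earliest deadlines (sketched in C++ although the question mentions C#; all names are hypothetical):

#include <chrono>
#include <queue>
#include <string>
#include <unordered_set>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Pending {
    Clock::time_point expiry;
    std::string messageId;
    bool operator>(const Pending &other) const { return expiry > other.expiry; }
};

class WaitingCollection {
    // Min-heap: the entry with the earliest expiry is always on top.
    std::priority_queue<Pending, std::vector<Pending>, std::greater<Pending>> heap_;
    std::unordered_set<std::string> waiting_;   // IDs still awaiting a response
public:
    void add(const std::string &id, Clock::duration ttr) {
        heap_.push({Clock::now() + ttr, id});
        waiting_.insert(id);
    }
    // Called when a response arrives; true means it arrived before expiry.
    bool confirmResponse(const std::string &id) {
        return waiting_.erase(id) > 0;   // any stale heap entry expires harmlessly
    }
    // The periodic poll: moves timed-out IDs into the caller's "Expired" list.
    void collectExpired(std::vector<std::string> &expired) {
        const auto now = Clock::now();
        while (!heap_.empty() && heap_.top().expiry <= now) {
            if (waiting_.erase(heap_.top().messageId) > 0)
                expired.push_back(heap_.top().messageId);
            heap_.pop();
        }
    }
};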
Does this sound like a robust solution? I am sure this is a solved problem, but there is precious little information about this type of algorithm. I plan to implement it in C#, but the implementation language is kind of irrelevant at this stage, I think.
Thanks for your input
Depending on the number of clients, you can use persistent JMS queues, one queue per client ID. The message will stay in the queue until the client connects to retrieve it.
I'm not sure I understand the purpose of the TTR. Is it more of a client-side measure, meaning that if the response cannot be returned within a certain time, then just don't bother sending it? Or is it to be used on the server to schedule the work: do what's required now, and push the requests with later response times to be handled later?
It's a broad question...