Oracle AQ with ODP.NET: automatically dequeue on connect

I'm using Oracle ODP.NET for enqueue and dequeue.
Process A: Enqueue
Process B: Dequeue with the MessageAvailable event
If Process A and Process B are both running, there is no problem: on Process B, the event always fires.
But if Process B is off while Process A is on, then when Process B restarts, the messages enqueued during the downtime are lost.
Is there an option to fire the event for all messages enqueued in the past?
Many thanks

There seem to be two approaches to address this issue:
Call the Listen() method of the OracleAQQueue class (after registering for message notification) to pick up "orphaned" messages sitting in the queue. Note that Listen() blocks until a message is received or a timeout occurs, so you'd want to specify a (short) timeout to return to the processing thread in the event no messages are on the queue.
Call the Dequeue() method and trap Oracle error 25228 (no message available to dequeue). See the following thread from the Oracle support forums: https://forums.oracle.com/forums/thread.jspa?threadID=2186496.
I've been scratching my head on this topic. If you still have to "manually" test for new messages, what is the benefit of using the MessageAvailable event callback in the first place? One route I've pondered is to wrap the Listen() method in an async call so that the caller isn't blocking on the thread (until a message is received or a timeout occurs). I wrapped Listen() and Dequeue() in a custom Receive() method and created my own MessageReceived event handler to pass the message details to the calling thread. It seems somewhat redundant, since ODP.NET provides the out-of-box callback, but I don't have to deal with the issue you describe (or write code to "manually" test for "orphaned" messages).
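A minimal sketch of the "drain the queue first" part of that idea, assuming ODP.NET's OracleAQQueue API (the QueueReceiver, DrainExisting and MessageReceived names are my own, and 25228 is the ORA timeout error mentioned above):

```csharp
// Hedged sketch: drain messages already sitting on the queue by calling
// Dequeue() with a short wait and trapping ORA-25228 ("timeout or end-of-fetch
// during message dequeue") to detect an empty queue.
using System;
using Oracle.DataAccess.Client;

class QueueReceiver
{
    // Custom event, not part of ODP.NET: forwards dequeued messages to the caller.
    public event Action<OracleAQMessage> MessageReceived;

    public void DrainExisting(OracleAQQueue queue)
    {
        queue.DequeueOptions.Wait = 2;            // short wait, in seconds
        while (true)
        {
            try
            {
                OracleAQMessage msg = queue.Dequeue();
                MessageReceived?.Invoke(msg);     // hand the message to the caller
            }
            catch (OracleException ex) when (ex.Number == 25228)
            {
                break;                            // no more messages: stop draining
            }
        }
    }
}
```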
Any comments/thoughts on approach are welcomed.

I've been looking at this one too and have ended up doing something similar to Greg. I've not used the Listen() method, though, as I don't think it offers me anything over and above a simple Dequeue(). Listen() seems to be beneficial when you want to listen on behalf of multiple consumers, which in my case is not relevant (see the Oracle docs).
So, in my "Process B" I first register for notifications before initiating a polling process to check for any existing messages. It doesn't Listen(); it just calls Dequeue() within a controlled loop with a wait period of a couple of seconds set. If the polling process encounters an Oracle timeout, the wait period has expired and polling stops. I may need to consider dealing with timeouts that occur before the wait period has expired (though I'm not 100% sure this is likely to happen).
I've noticed that any messages enqueued whilst polling will trigger the message notification method, but by the time it connects and tries to retrieve the message, the polling process always seems to have taken it already. So inside the message notification method I capture and ignore any OracleExceptions with number 25263 (no message in queue <...> with message ID <...>).
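A sketch of what that notification handler might look like, assuming the ODP.NET MessageAvailable callback supplies the queue name and message IDs on its event args (the connection string is a placeholder and ProcessMessage is a hypothetical handler):

```csharp
// Hedged sketch: dequeue the specific notified message, and swallow ORA-25263
// ("no message in queue ... with message ID ...") if the polling loop beat us to it.
using System;
using Oracle.DataAccess.Client;

class NotificationConsumer
{
    readonly string connectionString = "User Id=...;Password=...;Data Source=..."; // placeholder

    // Wired up as: queue.MessageAvailable += OnMessageAvailable;
    void OnMessageAvailable(object sender, OracleAQMessageAvailableEventArgs args)
    {
        try
        {
            using (var con = new OracleConnection(connectionString))
            using (var queue = new OracleAQQueue(args.QueueName, con))
            {
                con.Open();
                // Target the notified message (event args assumed to carry its ID).
                queue.DequeueOptions.MessageId = args.MessageId[0];
                queue.DequeueOptions.Wait = 0;   // it should already be there
                ProcessMessage(queue.Dequeue());
            }
        }
        catch (OracleException ex)
        {
            if (ex.Number != 25263)
                throw;   // 25263 = the polling loop already took it; safe to ignore
        }
    }

    void ProcessMessage(OracleAQMessage msg) { /* application logic */ }
}
```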

Related

Shutting down Akka actor if AskTimeout reached

Say I am doing an Ask() on an actor with some timeout. If the ask times out, is there a way to get the underlying actor to stop processing? For example, I don't want the main thread/caller to continue while this actor is still processing the timed-out request.
The short answer is no, you cannot do it.
The long answer is: it depends.
You could move the actor's work to another execution context via a Future, for example. This will allow your actor to react to other messages, but the Future the actor has started cannot be cancelled once it has been picked up by the execution context rather than still sitting in the execution context's queue.
You could write some smart Future wrapper that checks whether the future was cancelled before starting the work. But once processing has started, the only thing you can do is call interrupt on the thread executing the future (meaning that you need to capture this thread somehow) and hope that the work will hit Object.wait or Thread.sleep, i.e. the only places where the interrupt exception can be received. But there is no guarantee of this ever happening.
No, you can't. The only thing you can do to an actor is send a message to it. You can't "get at" the actor in any other way to interrupt it. And since, under normal circumstances, messages are processed in order, any subsequent "stop processing" message will only be processed after the first message has already completed.
I think the solution to your problem will depend a bit on why you suspect the actor may "time out" in responding.
For example, where you might expect the actor to sometimes have a backlog of messages, I think the best solution may be to include the timeout as part of the message. In your receiveMessage handler you can check this "request expiry" time before doing the actual work, and if the timeout has already passed, just discard the message (see the sketch below).
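A minimal sketch of that idea, written here with Akka.NET for illustration (the original question is JVM Akka, but the pattern is identical; Work, Payload and DeadlineUtc are made-up names):

```csharp
// Hedged sketch: carry a deadline inside the message and discard stale requests
// before doing any expensive work.
using System;
using Akka.Actor;

public sealed class Work
{
    public Work(string payload, DateTime deadlineUtc)
    {
        Payload = payload;
        DeadlineUtc = deadlineUtc;
    }
    public string Payload { get; }
    public DateTime DeadlineUtc { get; }
}

public class Worker : ReceiveActor
{
    public Worker()
    {
        Receive<Work>(work =>
        {
            // The asker has already timed out: don't waste time on the work.
            if (DateTime.UtcNow > work.DeadlineUtc)
                return;

            Sender.Tell(DoExpensiveWork(work.Payload));
        });
    }

    private static string DoExpensiveWork(string payload) => payload.ToUpperInvariant();
}
```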

omnetpp: Avoid "sending while transmitting" error using sendDelayed()

I am implementing a PON in OMNeT++ and I am trying to avoid the runtime error that occurs when transmitting while another transmission is ongoing. The only way to avoid this is by using sendDelayed() (or scheduleAt() + send(), but I'd prefer not to do it that way).
Even though I have used sendDelayed(), I am still getting this runtime error. My question is: when exactly does the kernel check whether the channel is free if I'm using sendDelayed(msg, startTime, out)? Does it check at simTime() + startTime or at simTime()?
I read the Simulation Manual, but it is not clear about the case I'm asking about.
The busy state of the channel is checked only when you schedule the message (i.e. at simTime(), as you asked). At this point it is checked whether the message is scheduled to be delivered at a time after channel->getTransmissionFinishTime(), i.e. you can query when the currently ongoing transmission will finish, and you must schedule the message for that time or later. But please be aware that this check exists just to catch the most common errors. If you schedule, for example, TWO messages for the same time using sendDelayed(), the kernel will check only that each starts after the currently transmitted message has finished, but will NOT detect that you have scheduled two or more messages for the same time after that point.
Generally, when you transmit over a channel which has a non-zero datarate set (i.e. it takes time to transmit the message), you always have to take care of what happens when messages arrive faster than the rate of the channel. In this case you should either throw away the message or queue it. If you queue it, you obviously have to put it into a data structure (a queue) and then schedule a self-timer to be executed at the time when the channel gets free (and the message is delivered at the other side). At that point, you should take the next packet from the queue, put it on the channel, and schedule the next self-timer for the time when that message is delivered.
For this reason, using just sendDelayed() is NOT the correct solution, because you are trying to implicitly implement a queue by postponing the message. The problem in this case is: once you schedule a message with sendDelayed(), what delay will you use if another packet arrives, and then another, within a short timeframe? As you can see, you are implicitly creating a queue here by postponing the events. You are just using the simulation's main event queue to store the packets, which is much more convoluted and error-prone.
Long story short: create a queue and schedule a self-event to manage the queue's contents properly, or drop the packets if that suits your needs (see the sketch below).
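For illustration only, here is the shape of that bookkeeping written in C# rather than OMNeT++ API (in a real module the queue would be a cQueue, "transmit" would be send(), and the end-of-transmission wake-up would be a self-message scheduled with scheduleAt() at getTransmissionFinishTime()):

```csharp
// Hedged sketch of the queue-plus-self-timer pattern: packets wait in an
// explicit FIFO and go onto the channel only when the previous transmission ends.
using System;
using System.Collections.Generic;

class TransmitQueue
{
    private readonly Queue<string> pending = new Queue<string>();
    private bool channelBusy;

    // Upper layer hands us a packet (in OMNeT++: handleMessage() on a data packet).
    public void OnPacketArrived(string packet)
    {
        pending.Enqueue(packet);
        if (!channelBusy)
            StartNextTransmission();
    }

    // The "self timer" fires when the current transmission finishes
    // (in OMNeT++: handleMessage() on the end-of-transmission self-message).
    public void OnTransmissionFinished()
    {
        channelBusy = false;
        if (pending.Count > 0)
            StartNextTransmission();
    }

    private void StartNextTransmission()
    {
        string packet = pending.Dequeue();
        channelBusy = true;
        Console.WriteLine($"transmitting {packet}");
        // In OMNeT++ you would now send(packet, "out") and
        // scheduleAt(channel->getTransmissionFinishTime(), endTxTimer).
    }
}
```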

How is the "blocking" behavior of Win32 API GetMessage() implemented?

According to here, GetMessage() is a blocking call which won't return until there's a message that can be retrieved from the message queue.
So, how is this blocking behavior implemented?
Does GetMessage() use some kind of spin lock so that the UI thread just busy-waits for new messages to show up in the message queue? If so, I'd expect at least one of my CPU cores to show high usage whenever a UI application is running. But I don't see that happen. So how does it work?
ADD 1
Thanks for the hint in the comments. A spin lock is meant to avoid the cost of a thread context switch, so it shouldn't be used here. I also thought that maybe some event paradigm is used here. But if it is event-driven, how does this event model work?
My guess is like this:
An event for input checking is raised periodically, perhaps via some hardware timer interrupt. The timer interrupt handler then checks the various input device buffers for input events and puts them into the appropriate application's message queue based on the current desktop context, such as which window is active.
And I guess maybe some other things are also driven by the timer interrupt, such as thread context switching.
ADD 2
Based on the replies so far, there's some event object that the UI thread waits on. But since the UI thread is waiting on something, it is not active and cannot do anything by itself. And the event object is just some passive state information. So there has to be someone else to wake up the thread upon the event state change. I think it should be the thread scheduler, and the thread scheduler may be pulsed by the timer interrupt.
The thread scheduler will check the event state periodically, wake up the thread, and put messages into its queue as necessary.
Am I right about the whole picture?
ADD 3
And there's a remaining question: who modifies the state of an event object? Based on here, it seems events are just data structures that can be modified by any active party. I think the thread scheduler just uses the relations among threads and events to decide which threads to run.
And by the time a thread is scheduled to run, all of its requirements should already have been fulfilled, such as a message having been put into its queue before the event it waits on is raised. This is reasonable because otherwise it may be too late. (Thanks to RbMm's comments.)
ADD 4
In JDK, the LinkedBlockingDeque type also offers a similar blocking behavior with the take() method.
Retrieves and removes the head of the queue represented by this deque (in other words, the first element of this deque), waiting if necessary until an element becomes available.
And the .NET counterpart is the BlockingCollection<T> type. A thread to discuss it.
Here is a thread about how to implement a blocking queue in C#.
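For reference, a minimal example of that blocking behavior with .NET's BlockingCollection<T> (Take() parks the consumer thread without spinning until an element arrives):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var queue = new BlockingCollection<string>();

        var consumer = Task.Run(() =>
        {
            // Blocks here, consuming no CPU, until the producer adds an item.
            string item = queue.Take();
            Console.WriteLine($"got: {item}");
        });

        Task.Delay(500).Wait();   // producer is busy elsewhere for a while
        queue.Add("hello");       // wakes the blocked consumer
        consumer.Wait();
    }
}
```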
The exact implementation is internal and not documented.
Most of the window manager in Windows is implemented in kernel mode. GetMessage is no different. It waits on an event object in kernel mode. When a thread is waiting on an event (or any other kernel-based synchronization object), it does not use any CPU time, because it is not scheduled to run until the wait is satisfied.
When a message is posted to the waiting thread's message queue, when another thread sends a message to a window belonging to the waiting thread, when a timer in the waiting thread fires, or when a window in the waiting thread is ready to be painted, the event is signaled and the thread wakes up. GetMessage() then acts accordingly, whether that is to return a posted message to the caller, or to dispatch a sent message directly to the target window procedure and then go back to waiting on the event.
You can see the implementation leaking into the public API if you compare the maximum number of objects you can wait on in WaitForMultipleObjects and MsgWaitForMultipleObjects:
The maximum number of object handles is MAXIMUM_WAIT_OBJECTS.
vs
The maximum number of object handles is MAXIMUM_WAIT_OBJECTS minus one.

Do PostMessage() messages appear in order in Windows?

The general question is: if I post several messages to the Windows message pump from a separate worker thread, will they arrive at their destination in the order I sent them? I.e.:
::PostMessage(m_hUsers, WM_BULKPROCESS, 0, 0);
// ... some processing here ...
::PostMessage(m_hUsers, WM_BULKDONE, 0, 0);
m_hUsers is a handle (HWND) to the window I'm sending the messages to from my worker thread. So, will WM_BULKPROCESS always show up first in the window (and therefore be processed by the handler in that dialog class), or is it possible for them to arrive out of order, i.e. WM_BULKDONE gets processed before WM_BULKPROCESS, even though it was sent last?
There are a few exceptions (like WM_PAINT), but generally the order of messages is kept.
Imagine trying to make sense of mouse input if messages appeared in the wrong order!
Quote from GetMessage
During this call, the system delivers pending, nonqueued messages, that is, messages sent to windows owned by the calling thread using the SendMessage, SendMessageCallback, SendMessageTimeout, or SendNotifyMessage function. Then the first queued message that matches the specified filter is retrieved. The system may also process internal events. If no filter is specified, messages are processed in the following order:
Sent messages
Posted messages
Input (hardware) messages and system internal events
Sent messages (again)
WM_PAINT messages
WM_TIMER messages
Window messages are stored in a queue, so you can rely on the FIFO mechanism.
They should be, unless you have code in the message pump that dispatches the messages differently (either intentionally or not), e.g. somehow picks two messages and dispatches them out of order. Normally programmers call DispatchMessage for each message in the order it comes off the queue.
I suspect the issue is synchronization and not the message queue. If your code allows multiple invocations of the worker thread proc, you have to manage this more tightly to know which "instance" of the worker thread is posting the messages.
Have you checked to make sure that only one worker thread executes at a time, or that the m_hUsers window handle is protected from being changed between BULKPROCESS and BULKDONE?
SendMessage can be useful for managing the BULKDONE because it blocks until the message has been processed, allowing the code invoking the worker thread to synchronize the invocation of worker threads and truly know that one worker thread has finished before invoking another. PostMessage will not block, but remember that the time-sensitive part of your worker thread is the
// ... some processing here ...
part, not sending the Windows messages.
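To make the PostMessage/SendMessage distinction concrete, here is a hedged P/Invoke sketch in C# (WM_BULKPROCESS and WM_BULKDONE are the asker's custom messages, assumed here to live in the WM_APP range; hUsers stands in for m_hUsers):

```csharp
using System;
using System.Runtime.InteropServices;

static class BulkNotifier
{
    [DllImport("user32.dll")]
    static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    const uint WM_APP = 0x8000;
    const uint WM_BULKPROCESS = WM_APP + 1;   // assumed custom message IDs
    const uint WM_BULKDONE = WM_APP + 2;

    public static void Notify(IntPtr hUsers)
    {
        // Queued FIFO: retrieved by GetMessage before anything posted later.
        PostMessage(hUsers, WM_BULKPROCESS, IntPtr.Zero, IntPtr.Zero);

        // ... some processing here ...

        // Blocking alternative: returns only after the window procedure has run.
        SendMessage(hUsers, WM_BULKDONE, IntPtr.Zero, IntPtr.Zero);
    }
}
```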

What are alternatives to Win32 PulseEvent() function?

The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is "… unreliable and should not be used by new applications. Instead, use condition variables". However, condition variables cannot be used across process boundaries the way (named) events can.
I have a scenario that is cross-process and cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is set to the signaled state by the producer using this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know whether the event has any listeners, nor whether the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable – but the producer does not need to know whether that is the case. The scenario can be thought of as a "clock ticker": the producer provides a semi-regular signal for zero or more consumers to count, and all consumers must have the correct count over any given period of time. No polling by consumers is allowed (for performance reasons). The tick interval is just a few milliseconds (20 or so, but not perfectly regular).
Raymond Chen (The Old New Thing) has a blog post pointing out the "fundamentally flawed" nature of the PulseEvent() function, but I do not see an alternative for my scenario in Chen's post or the comments.
Can anyone please suggest one?
Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance, in that consumers must be able to act within 10 ms of each event.
I think you're going to need something a little more complex to hit your reliability target.
My understanding of your problem is that you have one producer and an unknown number of consumers, all of which are different processes. Each consumer can NEVER miss any events.
I'd like more clarification as to what missing an event means.
i) If a consumer started up and got to just before the point where it waits on your notification method, and an event occurred, should it process the event even though it wasn't quite ready at the point that the notification was sent? (i.e. when is a consumer considered to be active: when it starts, or when it processes its first event?)
ii) Likewise, if the consumer is processing an event and the code that waits on the next notification hasn't yet begun its wait (I'm assuming a Wait -> Process -> Loop to Wait code structure), should it know that another event occurred whilst it was looping around?
I'd assume that i) is a "not really", as it's a race between process start-up and being "ready", and ii) is a "yes"; that is, notifications are effectively queued per consumer once the consumer is present, and each consumer gets to consume all events that are produced whilst it's active and doesn't get to skip any.
So, what you're after is the ability to send a stream of notifications to a set of consumers where a consumer is guaranteed to act on all notifications in that stream from the point where it acts on the first to the point where it shuts down. i.e. if the producer produces the following stream of notifications
1 2 3 4 5 6 7 8 9 0
and consumer a) starts up and processes 3, it should also process 4-0
if consumer b) starts up and processes 5 but is shut down after 9 then it should have processed 5,6,7,8,9
if consumer c) was running when the notifications began it should have processed 1-0
etc.
Simply pulsing an event won't work. If a consumer is not actively waiting on the event when the event is pulsed, then it will miss the event, so we will fail if events are produced faster than we can loop around to wait on the event again.
Using a semaphore also won't work: if one consumer runs faster than another to such an extent that it can loop around to the semaphore call before the other completes processing, and if there's another notification within that time, then one consumer could process an event more than once and another could miss one. That is, you may well release 3 threads (if the producer knows there are 3 consumers), but you can't ensure that each consumer is released exactly once.
A ring buffer of events (tick counts) in shared memory, with each consumer knowing the value of the event it last processed and with consumers alerted via a pulsed event, should work at the expense of some consumers sometimes being out of sync with the ticks; that is, if they miss one they will catch up the next time they get pulsed. As long as the ring buffer is big enough that all consumers can process the events before the producer wraps around in the buffer, you should be OK. A sketch follows below.
With the example above, if consumer d) misses the pulse for event 4 because it wasn't waiting on its event at the time, and it then settles into a wait, it will be woken when event 5 is produced; since its last-processed count is 3, it will process 4 and 5 and then loop back to the event...
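Here's a rough single-process C# sketch of that ring-buffer-plus-catch-up idea (cross-process, the buffer and write cursor would live in shared memory such as a memory-mapped file, and the wake-up would be a named event per consumer or a pulsed manual-reset event; a single auto-reset event is used here only for brevity, and all names are made up):

```csharp
// Hedged sketch: the producer writes ticks into a ring and signals; each
// consumer remembers the last tick it processed and catches up whenever it
// wakes, so a missed wake-up is harmless.
using System;
using System.Threading;

class TickRing
{
    private readonly long[] ring = new long[1024];   // must outpace the slowest consumer
    private long writeCursor;                        // total ticks produced
    public readonly EventWaitHandle Signal =
        new EventWaitHandle(false, EventResetMode.AutoReset);

    public void ProduceTick(long tickValue)
    {
        ring[writeCursor % ring.Length] = tickValue;
        Interlocked.Increment(ref writeCursor);
        Signal.Set();                                // wake a waiting consumer
    }

    public void ConsumeLoop()
    {
        long readCursor = Interlocked.Read(ref writeCursor); // start from "now"
        while (true)
        {
            Signal.WaitOne();
            // Catch up on every tick we haven't seen, even if a wake-up was missed.
            while (readCursor < Interlocked.Read(ref writeCursor))
            {
                long tick = ring[readCursor % ring.Length];
                Console.WriteLine($"processed tick {tick}");
                readCursor++;
            }
        }
    }
}
```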
If this isn't good enough then I'd suggest something like PGM via sockets to give you a reliable multicast; the advantage of this would be that you could move your consumers off onto different machines...
The reason PulseEvent is "unreliable" is not so much because of anything wrong in the function itself, just that if your consumer doesn't happen to be waiting on the event at the exact moment that PulseEvent is called, it'll miss it.
In your scenario, I think the best solution is to keep the counter manually yourself. So the producer thread keeps a count of the current "clock tick", and when a consumer thread starts up, it reads the current value of that counter. Then, instead of using PulseEvent, increment the "clock ticks" counter and use SetEvent to wake all threads waiting on the tick. When a consumer thread wakes up, it checks its "clock tick" value against the producer's "clock ticks" and then knows how many ticks have elapsed. Just before it waits on the event again, it can check whether another tick has occurred.
I'm not sure if I described the above very well, but hopefully that gives you an idea :)
There are two inherent problems with PulseEvent:
if it's used with auto-reset events, it releases only one waiter;
threads might never be woken if they happen to have been removed from the waiting queue due to an APC at the moment of the PulseEvent call.
An alternative is to broadcast a window message and have every listener run a top-level message-only window that listens for this particular message.
The main advantage of this approach is that you don't have to block your thread explicitly. The disadvantage is that your listeners have to be STA (you can't have a message queue on an MTA thread).
The biggest problem with this approach is that the processing of the event by a listener will be delayed by the amount of time it takes the queue to get to that message.
You can also make sure you use manual-reset events (so that all waiting threads are woken) and do SetEvent/ResetEvent with some small delay (say 150 ms) to give a bigger chance for threads temporarily woken by an APC to pick up your event.
Of course, whether any of these alternative approaches will work for you depends on how often you need to fire your events and whether you need the listeners to process every event or just the last one they get.
If I understand your question correctly, it seems like you can simply use SetEvent. It will release one thread. Just make sure it is an auto-reset event.
If you need to allow multiple threads, you could use a named semaphore with CreateSemaphore. Each call to ReleaseSemaphore increases the count. If the count is 3, for example, and 3 threads wait on it, they will all run.
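A small .NET sketch of that (the semaphore name is made up; System.Threading.Semaphore wraps the Win32 named semaphore, and Release(n) corresponds to ReleaseSemaphore):

```csharp
using System.Threading;

static class TickSemaphore
{
    const string Name = @"Global\MyTickSemaphore";   // assumed well-known name

    // Producer process: create (or open) the semaphore and release all waiters.
    public static void Produce(int consumerCount)
    {
        using (var sem = new Semaphore(0, int.MaxValue, Name))
            sem.Release(consumerCount);   // e.g. 3 waiting consumers all run
    }

    // Consumer process: block until the producer releases.
    public static void Consume()
    {
        using (var sem = Semaphore.OpenExisting(Name))
            sem.WaitOne();
    }
}
```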
Events are more suitable for communication between threads inside one process (unnamed events). As you have described, you have zero or more clients that need to read something of interest, and I understand that the number of clients changes dynamically. In this case, the best choice will be a named pipe.
Named Pipe is King
If you just need to send data to multiple processes, it's better to use named pipes than events. Unlike with auto-reset events, you don't need to manage your own pipe for each of the client processes. Each named pipe has an associated server process and zero or more associated client processes. When there are many clients, many instances of the same named pipe are automatically created by the operating system, one for each client. All instances of a named pipe share the same pipe name, but each instance has its own buffers and handles and provides a separate conduit for client/server communication. The use of instances enables multiple pipe clients to use the same named pipe simultaneously. Any process can act as both a server for one pipe and a client for another, and vice versa, making peer-to-peer communication possible.
If you use a named pipe, there will be no need for events at all in your scenario, and the data will have guaranteed delivery no matter what happens with the processes – each of the processes may experience long delays (e.g. due to swapping), but the data will still be delivered as soon as possible without any special involvement on your part.
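A minimal .NET named-pipe sketch of that producer/consumer shape (the pipe name is made up; one server instance is spun up per client):

```csharp
using System.IO.Pipes;
using System.Text;

static class TickPipe
{
    const string Name = "TickPipe";   // assumed well-known pipe name

    // Producer: accept one consumer and push a notification to it.
    public static void ServeOne(string notification)
    {
        using (var server = new NamedPipeServerStream(Name, PipeDirection.Out))
        {
            server.WaitForConnection();
            byte[] msg = Encoding.UTF8.GetBytes(notification);
            server.Write(msg, 0, msg.Length);
        }
    }

    // Consumer (another process): connect and block until data arrives.
    public static string ReadOne()
    {
        using (var client = new NamedPipeClientStream(".", Name, PipeDirection.In))
        {
            client.Connect();
            var buf = new byte[256];
            int n = client.Read(buf, 0, buf.Length);
            return Encoding.UTF8.GetString(buf, 0, n);
        }
    }
}
```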
On The Events
If you are still interested in the events -- the auto-reset event is king! ☺
The CreateEvent function has a bManualReset argument. If this parameter is TRUE, the function creates a manual-reset event object, which requires the use of the ResetEvent function to set the event state to non-signaled. This is not what you need. If this parameter is FALSE, the function creates an auto-reset event object, and the system automatically resets the event state to non-signaled after a single waiting thread has been released.
These auto-reset events are very reliable and easy to use.
If you wait for an auto-reset event object with WaitForMultipleObjects or WaitForSingleObject, it is reliably reset upon exit from these wait functions.
So create events the following way:
EventHandle := CreateEvent(nil, FALSE, FALSE, nil);
Wait for the event from one thread and do SetEvent from another thread. This is very simple and very reliable.
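The same thing in .NET, with a name added so it works across processes (the name is made up):

```csharp
using System.Threading;

static class TickEvent
{
    // Auto-reset, initially non-signaled, shared across processes by name.
    static readonly EventWaitHandle Evt =
        new EventWaitHandle(false, EventResetMode.AutoReset, @"Global\MyAutoResetEvent");

    // Consumer side: blocks without burning CPU; the event resets automatically on wake-up.
    public static void Wait() => Evt.WaitOne();

    // Producer side (another thread or process opening the same name):
    public static void Signal() => Evt.Set();
}
```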
Don't ever call ResetEvent (since the event resets automatically) or PulseEvent (since it is not reliable and is deprecated). Even Microsoft has admitted that PulseEvent should not be used. See https://msdn.microsoft.com/en-us/library/windows/desktop/ms684914(v=vs.85).aspx
This function is unreliable and should not be used, because only those threads will be notified that are in the "wait" state at the moment PulseEvent is called. If they are in any other state, they will not be notified, and you may never know for sure what the thread state is. A thread waiting on a synchronization object can be momentarily removed from the wait state by a kernel-mode Asynchronous Procedure Call, and then returned to the wait state after the APC is complete. If the call to PulseEvent occurs during the time when the thread has been removed from the wait state, the thread will not be released because PulseEvent releases only those threads that are waiting at the moment it is called.
You can find out more about the kernel-mode Asynchronous Procedure Calls at the following links:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms681951(v=vs.85).aspx
http://www.drdobbs.com/inside-nts-asynchronous-procedure-call/184416590
http://www.osronline.com/article.cfm?id=75
We have never used PulseEvent in our applications. As for auto-reset events, we have been using them since Windows NT 3.51 (although they appeared in the first 32-bit version of NT, 3.1) and they work very well.
Your Inter-Process Scenario
Unfortunately, your case is a little bit more complicated. You have multiple threads in multiple processes waiting for an event, and you have to make sure that all of these threads do in fact receive the notification. There is no reliable way other than to create a separate event for each consumer, so you will need as many events as there are consumers. Besides that, you will need to keep a list of registered consumers, where each consumer has an associated event name. So, to notify all the consumers, you will have to call SetEvent in a loop over all the consumer events (see the sketch below). This is a very fast, reliable and cheap way.
Since you are using cross-process communication, the consumers will have to register and de-register their events via some other means of inter-process communication, like SendMessage. For example, when a consumer process registers itself with your main notifier process, it sends a SendMessage to your process to request a unique event name. You just increment a counter and return something like Event1, Event2, etc., creating events with those names so the consumers open the existing events. When a consumer de-registers, it closes the event handle that it opened before and sends another SendMessage to let you know that you should call CloseHandle too on your side to finally release the event object.
If a consumer process crashes, you will end up with an orphaned event, since you will not know that you should call CloseHandle, but this should not be a problem – events are very fast and very cheap, and there is virtually no limit on kernel objects – the per-process limit on kernel handles is 2^24. If you are still concerned, you may do the opposite: the clients create the events and you open them. If one won't open, that client has crashed and you just remove it from the list.
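A tiny sketch of that notify loop (registration plumbing omitted; consumerEvents would be populated from the SendMessage handshake described above):

```csharp
using System.Collections.Generic;
using System.Threading;

class Notifier
{
    // One named auto-reset event per registered consumer (names assigned at registration).
    readonly List<EventWaitHandle> consumerEvents = new List<EventWaitHandle>();

    public void NotifyAllConsumers()
    {
        foreach (EventWaitHandle evt in consumerEvents)
            evt.Set();   // each consumer owns its event, so none can be missed
    }
}
```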
