Oracle dbms_aq.dequeue

Good day, respected all!
Environment: Oracle 18c XE 64-bit under Windows.
I have a simple point-to-point queue. The queue is persistent, with a single consumer.
Can you tell me: at exactly what moment must a dequeued message be removed from the queue?
In my dequeuing procedure I don't do any commit in the main transaction. Instead, I process messages and commit in an autonomous transaction. Something like this:
procedure deq_main
as
  mytype sometype;  -- payload variable, declared in the outer procedure so both the dequeue and proc_deq can use it

  procedure proc_deq (p_payload in sometype)
  as
    pragma autonomous_transaction;
  begin
    -- ... do some work with p_payload
    commit;  -- commits only the autonomous transaction, not the dequeue
  end proc_deq;
begin
  dbms_aq.dequeue(... payload => mytype, ...);
  proc_deq(mytype);
  -- note: no commit here, so the dequeue itself is never committed in this transaction
end deq_main;
There is no commit of the dequeue in the deq_main procedure. However, the messages are removed from the queue.
What does this mean? Should I never bother with a commit after dbms_aq.dequeue, or does it depend on some conditions? If it depends, can you clarify under what circumstances I must commit explicitly?
Thanks in advance,
Andrew.

Your dequeue will immediately, internally, mark the message as in progress. You must either commit or roll back in that transaction to complete it. You are probably committing when you disconnect your session; otherwise you are rolling back and marking the message as failed (which will likely move it to the exception queue).
The autonomous transaction is taking away a lot of the implicit handling that would otherwise be done. I would scrap it completely; then, if your procedure completes successfully, the message is marked as processed immediately.
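To see this from the calling side, here is a minimal JDBC sketch, assuming the question's deq_main procedure with the autonomous transaction removed (connection string, credentials and class name are placeholders): the dequeue only becomes permanent when the surrounding transaction commits, and on rollback the message returns to the queue with its retry count incremented.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DequeueCaller {
    public static void main(String[] args) throws SQLException {
        // Connection string and credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger")) {
            conn.setAutoCommit(false);
            try (CallableStatement cs = conn.prepareCall("begin deq_main; end;")) {
                cs.execute();      // dequeue + processing happen in this transaction
                conn.commit();     // only now is the message permanently removed from the queue
            } catch (SQLException e) {
                conn.rollback();   // the message returns to the queue and its retry count goes up
                throw e;
            }
        }
    }
}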

Related

Spring @KafkaListener auto commit offset or manual: which is recommended?

As per what I read on the internet, a method annotated with Spring @KafkaListener will commit the offset every 5 seconds by default.
Suppose the offset is committed after 5 seconds but the processing is still going on, and in between the consumer crashes because of some issue. In that case, after rebalancing, the partition will be assigned to another consumer, and it will start processing from the next message because the previous message's offset was already committed.
This will result in loss of the message.
So, do I need to commit the offset manually after processing completes? What would be the recommended approach?
Again, if processing is done and the consumer crashes just before the commit, how do I avoid message duplication in this case?
Please suggest an approach that avoids both message loss and duplication. I am using a Spring @KafkaListener with the default configuration.
As usual, this depends on your use case and how you would like to deal with issues during your processing. The usage of auto-commit changes the delivery semantics of your application.
Enabling auto commit gives you more of an "at-most-once" semantics, as you read the data and commit the offset before you have actually processed it. If your processing fails, the message was already committed and you will not read it again; it is therefore "lost" for your application (for your particular consumer group, to be more precise).
Disabling auto commit gives you more of an "at-least-once" semantics, as you commit the offset only after processing the data. Imagine you fetch 100 messages from the topic, 50 of them are processed successfully, and your application fails while processing the 51st message. Because auto commit is disabled and you commit all or none of the messages only at the end of processing, you have not committed any of the 100 messages, so the next time your application will read the same 100 messages again. However, you have now created 50 duplicates, as they were already processed successfully the first time.
To conclude, you need to figure out whether your use case can better tolerate data loss or duplicates. Dealing with duplicates is safe if your application is idempotent.
You are asking "how to prevent data loss and duplicates", which means you are referring to exactly-once semantics. This is a big topic in distributed streaming systems; check the spring-kafka docs for whether it is supported, under which configuration, and depending on the output operation of your application.
Please also check the comment of GaryRussell on this post:
"the Spring team does not recommend using auto commit; the listener container AckMode (BATCH or RECORD) will commit the offsets in a deterministic manner; recent versions of the framework disable auto commit (unless specifically enabled)"
If the consumer takes 5+ seconds to process the message then you have a problem in the code that needs to be fixed.
Auto-commit is risky in production, as it can lead to problem scenarios (message loss, etc.).
It is better to go with manual commit to have better control.
Make the consumer idempotent so that duplicate messages and the consumer's WIP state are not a problem. For example, maintain a processing status in the consumer's DB so that if processing is half done, then on restart the consumer can clear the WIP state and process afresh. Similarly, if the processing status is Complete, then on restart it will see the Complete status and simply commit the duplicate message back to Kafka. A rough sketch of this idea follows.
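The sketch below is not from the original answer, just one way to realize it; it assumes an invented message_status table keyed by a business identifier carried in the message, and a connection with auto-commit disabled.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IdempotentProcessor {

    // Process a message only if it has not already been completed.
    // Assumes auto-commit is disabled on the connection.
    public void handle(Connection db, String messageKey, String payload) throws SQLException {
        if (isComplete(db, messageKey)) {
            return;  // duplicate delivery: skip the work and just let the offset be committed again
        }
        clearPartialWork(db, messageKey);  // processing may have been half done before a crash
        doBusinessWork(db, payload);
        markComplete(db, messageKey);
        db.commit();                       // persist the work and the Complete status together
    }

    private boolean isComplete(Connection db, String key) throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "select status from message_status where message_key = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && "COMPLETE".equals(rs.getString(1));
            }
        }
    }

    private void markComplete(Connection db, String key) throws SQLException {
        try (PreparedStatement del = db.prepareStatement(
                 "delete from message_status where message_key = ?");
             PreparedStatement ins = db.prepareStatement(
                 "insert into message_status (message_key, status) values (?, 'COMPLETE')")) {
            del.setString(1, key);
            del.executeUpdate();
            ins.setString(1, key);
            ins.executeUpdate();
        }
    }

    private void clearPartialWork(Connection db, String key) {
        // undo or ignore any half-finished side effects for this key (application-specific)
    }

    private void doBusinessWork(Connection db, String payload) {
        // actual processing goes here
    }
}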

TOraDataSet blocking my program even with NonBlocking set to true

I was trying to make a little splash screen so my program could open queries without blocking my application.
The code I wrote is this:
procedure TOpenThread.OpenTable;
begin
  FActiveTable.NonBlocking := true;
  FActiveTable.Open;
end;

procedure TOpenThread.AbrirTablas;
begin
  FActiveTable := FOwnerPlan.TablasEdicion.Tabla;
  Synchronize(OpenTable);
  while FActiveTable.Executing do
  begin
    if Terminated then CancelExecution;
    Sleep(10);
  end;
  FActiveTable.NonBlocking := false;
end;
This code is executing in a thread, and keeps doing so while the main thread gets stuck.
I'm using Delphi 2007.
This code is executing in a thread
Now, it does not. Your code is:
Synchronize(OpenTable);
This explicitly means the OpenTable procedure is executed within the main VCL thread, outside of your background auxiliary TOpenThread.
More details on Synchronize that you may try to learn from are at https://stackoverflow.com/a/44162039/976391
All in all, there are just no simple solutions to complex problems.
If you want to offload DB interactions into a separate thread, you will have to make that thread the exclusive owner and user of all DB components, starting from the DB connection itself and up to every transaction and every query.
Then you will have to provide a means to ASYNCHRONOUSLY post data requests from the main VCL thread to the DB helper thread, and to ASYNCHRONOUSLY receive data packets back from it. Something like OmniThreadLibrary does with data streams; read their tutorials to get a gist of the internal program structure when using multithreading.
You may TRY to modify your application along the following rules of thumb (a rough sketch of the resulting structure follows the list).
It would not be the fastest multithreading, but maybe the easiest.
All database components' work is done exclusively inside the TOpenThread.Execute context, and those components are local member variables of the TOpenThread class. Connecting and disconnecting are done only within TOpenThread.Execute; TOpenThread.Execute waits for commands from the main thread in an almost infinite (until the thread gets terminated), throttled loop.
Specific database requests are made as anonymous procedures and are added to some TThreadedQueue<T> public member of the TOpenThread object. The loop inside .Execute tries to fetch an action from that queue and execute it, if any exists, or throttles (Yield()) if the queue is empty. Neither Synchronize nor Queue wrappers are allowed around database operations. The main VCL thread only posts the requests, but NEVER waits for them to actually be executed.
After being executed, those anonymous procedures pass the database results back to the main thread, like http://www.uweraabe.de/Blog/2011/01/30/synchronize-and-queue-with-parameters/ or like "Sending data from TThread to main VCL Thread", or by any other back-into-main-thread way.
TOpenThread.Execute only exits the loop if the Terminated flag is set and the queue is empty; if Terminated is set, an immediate exit would lose the actions still waiting unprocessed in the queue.
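For what it is worth, here is a very rough sketch of that structure (shown in Java only to keep it compact; the Delphi version would use TThreadedQueue<T> and anonymous methods as described above, and all class and method names here are invented). One worker thread owns the connection and drains a queue of requests; posting a request never blocks the caller.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// One thread owns the (hypothetical) database connection; other threads only post work to it.
public class DbWorker extends Thread {
    private final BlockingQueue<Runnable> requests = new LinkedBlockingQueue<>();
    private final AtomicBoolean terminated = new AtomicBoolean(false);

    // Called from the main thread: enqueue the request and return immediately, never wait.
    public void post(Runnable dbRequest) {
        requests.add(dbRequest);
    }

    public void terminate() {
        terminated.set(true);
    }

    @Override
    public void run() {
        // openConnection(): the connection would be created here and live only inside this thread.
        // Exit only when termination was requested AND the queue is drained,
        // so no posted request is silently lost.
        while (!(terminated.get() && requests.isEmpty())) {
            Runnable task = requests.poll();
            if (task == null) {
                Thread.yield();   // throttle the idle loop
                continue;
            }
            try {
                task.run();       // the DB work runs on this thread
            } catch (Exception e) {
                // errors must also be reported back to the main thread asynchronously
            }
        }
        // closeConnection();
    }
}

Each posted request would capture its parameters, run the query on the worker thread, and hand the result back through whatever back-into-main-thread mechanism the UI toolkit offers.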
Seems boring and tedious, but easy? Not at all; add to that that you would have to intercept exceptions and process all the errors asynchronously as well, and you would "lose all hope entering this realm".
PS. Last but not least, about the "this code is executing in a thread, and keeps doing it while the main thread gets stuck" supposition: frankly, I guess that you are wrong here, and I think that BOTH your threads are stuck by one another.
Without fully understanding how thread-to-thread locking is designed to work in this specific component, and carpet-bombing the code with calls to Synchronize and other inter-thread locking tools, you have quite a chance of chasing all your threads into a state of mutual lock, a deadlock. See http://stackoverflow.com/questions/34512/

Duplicate Events in Message Broker

In Message Broker (v8.0.0.0) we are using the event monitoring framework to drive our flow-level auditing. We're looking at three types of audit - start, end and rollback - and the corresponding transaction.Start/End/Rollback events as defined by Message Broker are being used for this.
For rollback, within each flow we have a generic exception handler that catches the exception terminal from the input node, does some processing, and then throws an exception again. This means we get a rollback event from the broker and the original message is backed out to the DLQ.
However, for these cases we are getting four events instead of the two expected (i.e. Start and Rollback). There is an extra Start event and an End event being generated.
I looked around and there's a possible duplicate of this issue in the MQSeries forums, where somebody suggested that this is because the message is being backed out (link at the end of the post).
Can anybody suggest a mitigation or workaround? I looked at the event messages themselves and there's no way of distinguishing one from the other.
MQSeries Forum Thread
The extra Start and End events occur because it is actually the MQ input node that sends the message to the DLQ, not MQ itself.
Since the transaction Start event is raised before the node knows that it needs to DLQ the message, we also need a transaction End event to close the open transaction.
In fact, we do this work under its own MQ transaction so that the events correspond with the actual transaction boundaries; it is just that in your case the message never makes it into the flow on the last iteration.
It would be nice to be able to distinguish between a successful transaction and one where we performed a backout, like we do with rollback events, but IIB does not currently allow for this.
I would suggest raising it as a requirement at the following URI:
https://www.ibm.com/developerworks/rfe/?PROD_ID=532

Is calling conn.rollback redundant while doing transaction in jdbc?

While this particular question has been asked multiple times already, I am still unsure about it. My setup is something like this: I am using JDBC and have auto-commit set to false. Let's say I have 3 insert statements that I want to execute as a transaction, followed by conn.commit().
Sample code:
try {
    getConnection();
    conn.setAutoCommit(false);
    insertStatement(); // #1
    insertStatement(); // #2
    insertStatement(); // #3, could throw an error
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // why is it needed?
}
Say I have two scenarios:
Either there won't be any error, we call conn.commit(), and everything is updated.
Or the first two statements work fine but there is an error in the third one, so conn.commit() is not called and our database is left in a consistent state. So why do I need to call conn.rollback()?
I noticed that some people mentioned that rollback has an impact in the case of connection pooling. Could anyone explain how it affects that?
A rollback() is still necessary. Not committing or rolling back a transaction might keep resources in use on the database (transaction handles, logs, record versions, etc.). Explicitly committing or rolling back makes sure those resources are released.
Not doing an explicit rollback may also have bad effects when you continue to use the connection and then commit: changes successfully completed in the transaction (#1 and #2 in your example) will be persisted.
The Connection API doc, however, does say "If auto-commit mode has been disabled, the method commit must be called explicitly in order to commit changes; otherwise, database changes will not be saved.", which should be interpreted as: Connection.close() causes a rollback. However, I believe there have been JDBC driver implementations that would commit on connection close instead.
The impact on connection pooling should not exist for correct implementations. Closing the logical connection obtained from the connection pool should have the same effect as closing a physical connection: an open transaction should be rolled back. However, sometimes connection pools are not correctly implemented, have bugs, or take shortcuts for performance reasons, all of which could lead to a transaction already being open when you are handed a logical connection from the pool.
Therefore: be explicit and call rollback.
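A minimal sketch of that advice, assuming the connection comes from a pool via a DataSource and that the table and class names are made up: commit on success, roll back in the catch block, and restore auto-commit before the connection goes back to the pool.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class InsertAllExample {

    // Runs the three inserts as one transaction and always finishes it explicitly.
    public void insertAll(DataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection()) {
            boolean oldAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false);
            try {
                insert(conn, 1);  // #1
                insert(conn, 2);  // #2
                insert(conn, 3);  // #3, could throw an error
                conn.commit();    // success: make the changes permanent
            } catch (SQLException e) {
                conn.rollback();  // failure: release locks and leave no open transaction behind
                throw e;
            } finally {
                conn.setAutoCommit(oldAutoCommit);  // restore state before the pool reuses the connection
            }
        }
    }

    private void insert(Connection conn, int n) throws SQLException {
        // demo_table is an invented example table
        try (PreparedStatement ps = conn.prepareStatement("insert into demo_table (id) values (?)")) {
            ps.setInt(1, n);
            ps.executeUpdate();
        }
    }
}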

Oracle AQ with ODP.Net. Automatically Dequeue on connect

I'm using Oracle ODP.Net for enqueue and dequeue.
Process A: Enqueue
Process B: Dequeue with MessageAvailable event
If Process A and Process B are both running, there is no problem; on Process B, the event is always fired.
But if Process B is off and Process A is on, then when Process B restarts, the messages enqueued during the downtime are lost.
Is there an option to fire the event for all messages enqueued in the past?
Many Thanks
There seem to be two approaches to address this issue:
Call the Listen() method of the OracleAQQueue class (after registering for message notification) to pick up "orphaned" messages sitting in the queue. Note that Listen() blocks until a message is received or a timeout occurs. So you'd want to specify a (short) timeout to return back to the processing thread in the event no messages are on the queue.
Call the Dequeue() method and trap Oracle error 25228 (no message available to dequeue). See the following thread from the Oracle support forums: https://forums.oracle.com/forums/thread.jspa?threadID=2186496.
I've been scratching my head on this topic. If you still have to "manually" test for new messages, what is the benefit of using the MessageAvailable event callback in the first place? One route I've pondered is to wrap the Listen() method in an async call so that the caller isn't blocking on the thread (until a message is received or a timeout occurs). I wrapped Listen() and Dequeue() in a custom Receive() method and created my own MessageReceived event handler to pass the message details to the calling thread. It seems somewhat redundant, since ODP.NET provides the out-of-the-box callback, but I don't have to deal with the issue you describe (or write code to "manually" test for "orphaned" messages).
Any comments/thoughts on this approach are welcome.
I've been looking at this one too and have ended up doing something similar to Greg. I've not used the Listen() method though as I don't think it offers me anything over and above a simple Dequeue() - Listen() seems to be beneficial when you want to listen on behalf of multiple consumers, which in my instance is not relevant (see Oracle Docs).
So, in my 'Process B' I first register for notifications before initiating a polling process to check for any existing messages. It doesn't Listen(); it just calls Dequeue() within a controlled loop with a wait period of a couple of seconds set. If the polling process encounters an Oracle timeout, the wait period has expired and polling stops. I may need to consider dealing with timeouts where the wait period hasn't expired (though I'm not 100% sure this is likely to happen).
I've noticed that any messages enqueued whilst polling will trigger the message notification method, but by the time it connects and tries to retrieve the message, the polling process always seems to have taken it. So inside the message notification method I catch and ignore any OracleException with number 25263 (no message in queue <...> with message ID <...>).
