Oracle AQ leaves stale subscribers in the queue

We are using Oracle AQ (Advanced Queuing) in our project. Messages and subscribers keep accumulating in our AQ queue table. It is not clear to us why there are so many subscribers, and we eventually run into this error:
Caused by: oracle.jms.AQjmsException: ORA-24067: exceeded maximum number of subscribers for queue FUSION.FND_JMS_EXMR_QUEUE

Advanced Queuing subscribers do not go stale on their own: they are explicitly created with dbms_aqadm.add_subscriber and must be explicitly removed using dbms_aqadm.remove_subscriber.
You can check the subscribers for a particular queue with the following query (substitute your queue name):
select CONSUMER_NAME, ADDRESS, PROTOCOL from all_queue_subscribers where QUEUE_NAME = 'QUEUE_NAME';
CONSUMER_NAME   ADDRESS   PROTOCOL
S1                        0
S2                        0
Example of removing a subscriber:
DECLARE
  subscriber sys.aq$_agent;
BEGIN
  -- the agent name must match the CONSUMER_NAME reported by the query above
  subscriber := sys.aq$_agent('S1', NULL, NULL);
  dbms_aqadm.remove_subscriber(queue_name => 'queue_name', subscriber => subscriber);
END;
/
As documented, the maximum number of subscribers per queue is 1024:
ORA-24067: exceeded maximum number of subscribers for queue string
Cause: An attempt was made to add new subscribers to the specified queue, but the number of subscribers for this queue has exceeded the maximum number (1024) of subscribers allowed per queue.
Action: Remove existing subscribers before trying to add new subscribers.
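Since the error above is raised through oracle.jms (the AQ JMS layer), a common way subscribers pile up is durable JMS subscriptions that are created under ever-new names and never unsubscribed. Below is a minimal Java sketch of that lifecycle; the TopicConnectionFactory and Topic are assumed to come from your environment (JNDI lookup omitted), and the subscription name 'S1' simply mirrors the query output above:

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.jms.TopicSubscriber;

public class DurableSubscriberCleanup {

    public static void cleanUp(TopicConnectionFactory factory, Topic topic) throws JMSException {
        TopicConnection conn = factory.createTopicConnection();
        try {
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

            // Each distinct durable subscription name becomes a subscriber on the
            // underlying AQ queue table; generating a new name per run leaks subscribers.
            TopicSubscriber subscriber = session.createDurableSubscriber(topic, "S1");
            subscriber.close();

            // Closing is not enough: the durable subscription (and the AQ subscriber
            // behind it) persists until it is explicitly unsubscribed.
            session.unsubscribe("S1");
        } finally {
            conn.close();
        }
    }
}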

Related

Oracle BPEL receive message (Oracle SOA 12.2.1.4.0)

I would like to insert into a BPEL flow a sort of event listener that waits for a message.
I thought about implementing this with the receive/message component, but I did not understand how it should be configured to intercept
one and only one message, namely the one related to the current instance of the flow.
I defined a CorrelationId variable to store a unique identifier; next, on the receive-message component, I defined a correlation set, but I did not understand how to pass the CorrelationId to it.
Not sure how this composite gets called, but you could receive the message(s) in one composite and either put them on a JMS queue, with a second composite that dequeues and processes them, or put the messages into a table and have the second composite poll the table using the database adapter with maxTransactionSize=1.
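Outside the SOA adapters, the staging-table half of that suggestion looks roughly like the following Java sketch. The table and column names are hypothetical, and the one-row-per-transaction claim mimics what maxTransactionSize=1 gives you with the database adapter:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StagingTablePoller {

    // Claim and process at most one staged row per transaction.
    public static void pollOnce(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            stmt.setMaxRows(1);   // fetch, and therefore lock, a single row
            stmt.setFetchSize(1);
            // SKIP LOCKED lets several pollers run without blocking on each other's rows.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT id, payload FROM staged_messages "
                    + "WHERE processed = 'N' FOR UPDATE SKIP LOCKED")) {
                if (rs.next()) {
                    process(rs.getString("payload"));
                    try (PreparedStatement done = conn.prepareStatement(
                            "UPDATE staged_messages SET processed = 'Y' WHERE id = ?")) {
                        done.setLong(1, rs.getLong("id"));
                        done.executeUpdate();
                    }
                }
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }

    private static void process(String payload) {
        // hand the payload to the waiting part of the flow
    }
}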

Correct Number of Partitions/Replicas for @RetryableTopic Retry Topics

Hello Stack Overflow community and anyone familiar with spring-kafka!
I am currently working on a project which leverages the @RetryableTopic feature from spring-kafka to reattempt delivery of failed messages. The listener annotated with @RetryableTopic consumes from a topic that has 50 partitions and 3 replicas. When the app receives a lot of traffic, it may be autoscaled up to 50 instances of the app (consumers) reading from those partitions. I read in the spring-kafka documentation that, by default, the retry topics that @RetryableTopic auto-creates have one partition and one replica, but you can change these values with autoCreateTopicsWith() in the configuration. From this, I have a few questions:
With the autoscaling in mind, is it recommended to just create the retry topics with the same number of partitions and replicas (50 & 3) as the original topic?
Is there some benefit to having differing numbers of partitions/replicas for the retry topics considering their default values are just one?
The retry topics should have at least as many partitions as the original, because by default records are sent to the same partition; otherwise you have to customize the destination resolution to avoid the warning log (see "Destination resolver returned non-existent partition").
50 partitions might be overkill unless you get a lot of retried records.
It's up to you how many replicas you want, but in general, yes, I would use the same number of replicas as the original.
Only you can decide what the "correct" numbers are.
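For reference, a minimal sketch of overriding those defaults with autoCreateTopicsWith(), matching the main topic's 50 partitions and 3 replicas; the configuration class and the String/String KafkaTemplate bean are assumptions, not something from the original question:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.retrytopic.RetryTopicConfiguration;
import org.springframework.kafka.retrytopic.RetryTopicConfigurationBuilder;

@Configuration
public class RetryTopicConfig {

    // Create the retry topics with the same partition/replica counts as the main
    // topic, so the default same-partition destination resolution always succeeds.
    @Bean
    public RetryTopicConfiguration retryTopicConfiguration(KafkaTemplate<String, String> template) {
        return RetryTopicConfigurationBuilder
                .newInstance()
                .autoCreateTopicsWith(50, (short) 3)
                .create(template);
    }
}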

Spring Kafka Listener receiving duplicate messages

My Spring Kafka listener is receiving duplicate messages; I can see that the messages are polled from the same partition and offset with the same timestamp. In my code I track every incoming message and identify duplicates, but in this case I cannot even reject the duplicate from processing, because both messages, the original and the duplicate, arrive at almost the same time, and the first record has not yet been committed to the database tracking table.
1. Please suggest how I should avoid polling the duplicate messages; I do not understand why a message is polled twice, and only under load.
2. How can I handle this in the tracking table? If message 1's metadata is being processed and not yet committed to the tracking table, message 2 arrives, cannot find that record in the tracking table, and proceeds with processing the duplicate message again.
Listener config based on my use case:
config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
The two consumers need to have the same group.id property so that the partitions are distributed across them.
If they are in different consumer groups, they will both get all the records.
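In terms of the config shown above, that means both instances must set the same group id (the group name here is illustrative):

// Every instance of the app must use the same group id so that Kafka assigns
// each partition to exactly one consumer in the group.
config.put(ConsumerConfig.GROUP_ID_CONFIG, "my-listener-group");

With Spring Kafka the same thing can also be set per listener through the groupId attribute of @KafkaListener.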

Schedule creation of consumer in Kafka using kafka-go

I am new to Kafka and currently working on it. I am using kafka-go in Golang to create a producer and consumer. Currently I am able to create a producer, but I want a consumer to be created once, when a producer for a topic is first created, not every time; that is, for each topic, a consumer is created only once. Also, when more consumers are needed for a topic to balance load, they should be created. Is there any way to schedule that, either through goroutines or Faktory?
You should not couple your producer and consumer; Kafka lets you have totally decoupled producers and consumers.
You can run your consumer even if the topic does not exist (Kafka will create it; you will just get a leader-unavailable warning), and run your producer whenever you want.
Regarding scaling, the idea is to create as many partitions as the number of consumers you might want the topic to scale to.
Here is some reading about topic partition strategy:
https://blog.newrelic.com/engineering/effective-strategies-kafka-topic-partitioning/
There is a lot of reading about this on the web.
Yannick

Controlling JMS Server: too many MDBeans created (weblogic)

I have an application that performs a delayed operation. A user generates 1 million messages that are stored in the JMS queue, and then MDBs consume these messages, perform some action, and store data in the database. Since the JMS queue delivers messages very quickly, the server tries to create 1 million MDB instances, which in turn try to open 1 million database connections. No surprise that some of them time out, since the JDBC connection pool cannot serve 1 million connection requests.
What is the best way to control the number of MDBs created? It would be better for the 1 million messages to be processed by a number of MDBs that does not exceed the number of allowed connections in the JDBC pool.
You can limit the number of instances of your MDB by using the max-beans-in-free-pool element within the descriptor for your bean in weblogic-ejb-jar.xml:
<message-driven-descriptor>
  <pool>
    <max-beans-in-free-pool>100</max-beans-in-free-pool>
    <initial-beans-in-free-pool>50</initial-beans-in-free-pool>
  </pool>
  ...
</message-driven-descriptor>
If left unspecified, the number of instances created is bounded only by the number of available server threads. Either way, it is good practice to cap the number of MDB instances at or below the size of your database connection pool.
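For completeness, a minimal MDB sketch (class, queue, and data-source names are hypothetical) showing why the pool bound also bounds connection usage: each concurrently active instance holds at most one connection, so at most max-beans-in-free-pool connections are checked out at any time:

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.EJBException;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.sql.DataSource;

// With max-beans-in-free-pool set to 100, WebLogic creates at most 100 instances
// of this bean, so at most 100 connections are taken from the pool at once.
@MessageDriven(mappedName = "jms/exampleQueue") // hypothetical queue name
public class ExampleMDB implements MessageListener {

    @Resource(mappedName = "jdbc/exampleDataSource") // hypothetical data source
    private DataSource dataSource;

    @Override
    public void onMessage(Message message) {
        try (Connection conn = dataSource.getConnection()) {
            // perform the action and store data in the database
        } catch (SQLException e) {
            throw new EJBException(e);
        }
    }
}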
