Using ODP.NET OracleAQQueue.Listen on a Multiconsumer Queue

I have a client app that connects to an Oracle AQ multi-consumer queue. I want to use OracleAQQueue.Listen to listen for new messages on the queue. API docs show that the Listen method can be used for multi-consumer queues. My code for listening to the queue is shown below.
string consumerName = "APPINST1";

using (OracleConnection con = new OracleConnection(connectionString))
{
    con.Open();

    OracleAQQueue queue = new OracleAQQueue("MY_Q");
    queue.MessageType = OracleAQMessageType.Udt;
    queue.UdtTypeName = "MY_Q_MSG";
    queue.DequeueOptions.DeliveryMode = OracleAQMessageDeliveryMode.Persistent;
    queue.Connection = con;

    Console.WriteLine("Listening for messages...");
    queue.Listen(new string[] { consumerName });
}
The problem that I'm having is that on the line of code where I call queue.Listen(), I get the Oracle exception:
ORA-25295: Subscriber is not allowed to dequeue buffered messages
Googling for advice on this particular error hasn't been too helpful. I've removed and re-added my subscriber to the queue several times to no avail. My guess is that I'm not setting some property correctly before I make the call to Listen, but I can't figure out the issue.
Any ideas?

I ran across the following note in the Streams Advanced Queuing User's Guide, in Chapter 10 - Oracle Streams AQ Operations Using PL/SQL:
Note: Listening to multiconsumer queues is not supported in the Java API.
Although I can't find it explicitly stated anywhere, I'm guessing the same rule applies to the ODP.NET API.

You must set the visibility attribute to IMMEDIATE to use buffered messaging.
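For reference, a rough sketch of where that would go in the code above, assuming the Visibility property on OracleAQDequeueOptions (which defaults to OnCommit) is the setting in question:

// Buffered messaging requires IMMEDIATE visibility; OnCommit is the default.
queue.DequeueOptions.Visibility = OracleAQVisibilityMode.Immediate;
queue.DequeueOptions.DeliveryMode = OracleAQMessageDeliveryMode.Buffered;

// If only persistent messages are needed, the Persistent delivery mode from
// the question can instead be kept with the default OnCommit visibility.
queue.Listen(new string[] { consumerName });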

Related

querying artemis queue size fails

In a Spring Boot application using Artemis, we try to avoid queues containing too many messages. The intention is to only put in new messages if the number of messages currently in the queue falls below a certain limit, e.g. 100 messages. However, that does not seem to work, and we don't know why or what the "correct" way to implement that functionality would be. The number of messages as extracted by the code below is always 0, although in the GUI there are messages.
To reproduce the problem I installed apache-artemis-2.13.0 locally.
We are doing something like the following:
if (!jmsUtil.queueHasNotMoreElementsThan(QUEUE_ALMOST_EMPTY_MAX_AMOUNT,
        reprocessingMessagingProvider.getJmsTemplate())) {
    log.info("Queue has too many messages. Will not send more...");
    return;
}
jmsUtil is implemented like this:
public boolean queueHasNotMoreElementsThan(int max, JmsOperations jmsTemplate) {
    return Boolean.TRUE.equals(
            jmsTemplate.browse((session, queueBrowser) -> {
                Enumeration enumeration = queueBrowser.getEnumeration();
                return notMoreElemsThan(enumeration, max);
            }));
}

private Boolean notMoreElemsThan(Enumeration enumeration, int max) {
    for (int i = 0; i <= max; i++) {
        if (!enumeration.hasMoreElements()) {
            return true;
        }
        enumeration.nextElement();
    }
    return false;
}
As a check, I additionally used the following method to get the number of messages in the queue directly:
public int countPendingMessages(String destination, JmsOperations jmsTemplate) {
    Integer totalPendingMessages = jmsTemplate.browse(destination,
            (session, browser) -> Collections.list(browser.getEnumeration()).size());
    int messageCount = totalPendingMessages == null ? 0 : totalPendingMessages;
    log.info("Queue {} message count: {}", destination, messageCount);
    return messageCount;
}
That method of extracting the queue size seems to be used by others as well, and is based on the documentation of QueueBrowser: "The getEnumeration method returns a java.util.Enumeration that is used to scan the queue's messages."
Would the above be the correct way to obtain the queue size? If so, what could be the cause of the problem? If not, how should the queue size be queried? Does Spring offer any other possibility of accessing the queue?
Update: I read another post and the documentation, but I don't know how to obtain the ClientSession.
There are some caveats to using a QueueBrowser to count the number of messages in the queue. The first is noted in the QueueBrowser JavaDoc:
Messages may be arriving and expiring while the scan is done. The JMS API does not require the content of an enumeration to be a static snapshot of queue content. Whether these changes are visible or not depends on the JMS provider.
So already the count may not be 100% accurate.
Then there is the fact that there may be messages still technically in the queue which have been dispatched to a consumer but have not yet been acknowledged. These messages will not be counted by the QueueBrowser even though they may be cancelled back to the queue at any point if the related consumer closes its connection.
Simply put, the JMS API doesn't provide a truly reliable way to determine the number of messages in a queue. Furthermore, Spring JMS is tied to the JMS API; it doesn't have any other way to interact with a JMS broker. Given that, you'll need to use a provider-specific mechanism to determine the message count.
ActiveMQ Artemis has a rich management API that is accessible through, among other things, specially constructed JMS messages. You can see this in action in the "Management" example that ships with ActiveMQ Artemis in the examples/features/standard/management directory. It demonstrates how to use JMS resources and provider-specific helper classes to get the message count for a JMS queue. This is essentially the same solution as given in the other post you mentioned, but it uses the JMS API rather than the ActiveMQ Artemis "core" API.

IBM MQ syncpoint and dotnet

Using C# and the IBM MQ client (9.1.5), I want to use the syncpoint functionality.
var getMessageOptions = new MQGetMessageOptions();
getMessageOptions.Options |= MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT;
getMessageOptions.WaitInterval = 20000; // 20 seconds wait

Hashtable props = new Hashtable();
props.Add(MQC.HOST_NAME_PROPERTY, "localhost");
props.Add(MQC.CHANNEL_PROPERTY, "DOTNET.SVRCONN");
props.Add(MQC.PORT_PROPERTY, 3636);

MQQueueManager qm = new MQQueueManager("QM", props);
MQQueue queue = qm.AccessQueue("Q1", MQC.MQOO_INPUT_AS_Q_DEF);

try
{
    var message = new MQMessage();
    queue.Get(message, getMessageOptions);
    string messageStr = message.ReadString(message.DataLength);
    SaveTheMessageToAFile(messageStr);
    //qm.Commit();
}
catch (MQException e) when (e.Reason == 2033)
{
    // 2033 = MQRC_NO_MSG_AVAILABLE: no messages in the queue
    Log.Information("No messages in the queue");
}
catch (Exception ex)
{
    Log.Error($"Exception when trying to capture a message from the queue: {ex}");
}
I would have expected to see the same message each time if I didn't call Commit. Is there something that needs to be enabled on the queue manager?
In your example you are not issuing a Commit() or a Backout(), so at that point the message will just be in an uncommitted state. If you were to then kill off your process, the message would get rolled back to the queue. As you mentioned in the comments, if you call Disconnect(), in most cases this will implicitly commit uncommitted messages.
This is documented in a few pages of the IBM MQ Knowledge Center:
Reference>Developing applications reference>The IBM MQ .NET classes and interfaces>MQQueueManager.NET class
Disconnect();
...
Generally, any work performed as part of a unit of work is committed. However, if the unit of work is managed by .NET, the unit of work might be rolled back.
NOTE: "managed by .NET" means distributed transactions, not what you are doing.
Developing applications>Developing .NET applications
When you use the procedural interface, you disconnect from a queue manager by using the call MQDISC( Hconn, CompCode, Reason), where Hconn is a handle to the queue manager.
In the .NET interface, the queue manager is represented by an object of class MQQueueManager. You disconnect from the queue manager by calling the Disconnect() method on that class.
Developing applications>Developing MQI applications with IBM MQ>Writing a procedural application for queuing>Committing and backing out units of work>Syncpoint considerations in IBM MQ applications
Except on z/OS batch with RRS, if a program issues the MQDISC call while there are uncommitted requests, an implicit syncpoint occurs. If the program ends abnormally, an implicit backout occurs.
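Put together, an explicit get/commit/backout sketch along those lines (reusing the qm, queue, and getMessageOptions objects from the question) might look like this:

try
{
    var message = new MQMessage();
    queue.Get(message, getMessageOptions);   // the get is now under syncpoint
    SaveTheMessageToAFile(message.ReadString(message.DataLength));
    qm.Commit();                             // make the removal permanent
}
catch (MQException e) when (e.Reason == MQC.MQRC_NO_MSG_AVAILABLE)
{
    // 2033: no message was available, so there is no unit of work to resolve
}
catch (Exception)
{
    qm.Backout();                            // put the message back on the queue
    throw;
}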

Redis keyspace notifications with StackExchange.Redis For Delete operation

I've been searching for how to subscribe to keyspace notifications on Redis using the ServiceStack.Redis library, specifically for the removal of a key.
Checking the available tests on GitHub and other websites, I found that IRedisSubscription can be used to subscribe to specific Redis key events. For the set operation it works absolutely fine, but when it comes to the delete operation the action is not invoked.
Is it possible to take advantage of this Redis feature using ServiceStack.Redis and get an event on the delete operation too?
In the configuration file I have added this line:
notify-keyspace-events KEAg
I am using the following code.
var channels = new[] { "__keyevent@0__:set", "__keyevent@0__:del" };

using (var redisConsumer = new RedisClient("localhost:6379"))
using (var subscription = redisConsumer.CreateSubscription())
{
    subscription.OnMessage = onKeyChange;
    subscription.SubscribeToChannelsMatching(channels);
}
On the surface, it looks like what you have should work.
Try setting notify-keyspace-events to AKE; the g is redundant, as noted in the notifications configuration documentation:
A     Alias for g$lshztxe, so that the "AKE" string means all the events.
Try using SubscribeToChannels instead of SubscribeToChannelsMatching. The latter is for pattern subscription.
You can test how many subscribers you have with the PUBSUB NUMSUB __keyevent@0__:del command from redis-cli.
Try testing whether your events are being triggered with SUBSCRIBE __keyevent@0__:del from redis-cli. This will help you determine whether the problem is on the redis-server side or in the app code.
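Putting those suggestions together, a minimal ServiceStack.Redis sketch (the inline handler stands in for your onKeyChange, which receives the channel and the message):

var channels = new[] { "__keyevent@0__:set", "__keyevent@0__:del" };

using (var redisConsumer = new RedisClient("localhost:6379"))
using (var subscription = redisConsumer.CreateSubscription())
{
    subscription.OnMessage = (channel, message) =>
        Console.WriteLine($"{channel}: {message}");

    // Exact channel names, so SubscribeToChannels rather than ...Matching.
    // Note that this call blocks the calling thread while subscribed.
    subscription.SubscribeToChannels(channels);
}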
Please update the question with results if you can't get it to work after trying the above.

Azure, SubscriptionClient.OnMessage, and Sessions

Does the Azure Service Bus Subscription client support the ability to use OnMessage Action when the subscription requires a session?
I have a subscription, called "TestSubscription". It requires a sessionId and contains multipart data that is tied together by a SessionId.
if (!namespaceManager.SubscriptionExists("TestTopic", "Export"))
{
    var testRule = new RuleDescription
    {
        Filter = new SqlFilter(@"(Action='Export')"),
        Name = "Export"
    };

    var subDesc = new SubscriptionDescription("DataCollectionTopic", "Export")
    {
        RequiresSession = true
    };

    namespaceManager.CreateSubscription(subDesc, testRule);
}
In a separate project, I have a Service Bus Monitor and Worker Role, and in the Worker Role I have a SubscriptionClient called "testSubscriptionClient":
testSubscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, _topicName, CloudConfigurationManager.GetSetting("testSubscription"), ReceiveMode.PeekLock);
I would then like to have OnMessage triggered when new items are placed in the service bus queue:
testSubscriptionClient.OnMessage(PersistData);
However I get the following message when I run the code:
InvalidOperationException: It is not possible for an entity that requires sessions to create a non-sessionful message receiver
I am using Azure SDK v2.8.
Is what I am looking to do possible? Are there specific settings that I need to make in my service bus monitor, subscription client, or elsewhere that would let me retrieve messages from the subscription in this manner? As a side note, this approach works perfectly in other cases where I am not using sessioned data.
Can you try this code:
var messageSession=testSubscriptionClient.AcceptMessageSession();
messageSession.OnMessage(PersistData);
instead of this:
testSubscriptionClient.OnMessage(PersistData);
Edit:
Also, you can register a handler to handle sessions (RegisterSessionHandler). It will fire your handler for each message that arrives on a session.
I think this is more suitable for your problem.
Both ways are shown in this article. It's for queues, but I think you can apply it to topics as well.
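For the RegisterSessionHandler route with the Microsoft.ServiceBus.Messaging SDK, a rough sketch could look like the following (the handler class name and the option values are illustrative, not from the question):

// Illustrative handler; plug your PersistData logic into OnMessageAsync.
class ExportSessionHandler : IMessageSessionAsyncHandler
{
    public async Task OnMessageAsync(MessageSession session, BrokeredMessage message)
    {
        // persist the multipart data for this session here
        await message.CompleteAsync();
    }

    public Task OnCloseSessionAsync(MessageSession session)
    {
        return Task.FromResult(0);
    }

    public Task OnSessionLostAsync(Exception exception)
    {
        return Task.FromResult(0);
    }
}

// Registration, instead of testSubscriptionClient.OnMessage(PersistData):
testSubscriptionClient.RegisterSessionHandler(
    typeof(ExportSessionHandler),
    new SessionHandlerOptions { AutoComplete = false, MaxConcurrentSessions = 1 });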

Is it possible for a JMS topic to have multiple publishers

From what I've read so far, a JMS Topic is one-to-many, and I wonder if it's possible to support many-to-many using a topic. Consider a topic called "Reports" with multiple services spread out across an enterprise needing to publish scheduled reports. Having multiple publishers would alleviate the need to subscribe interested applications to a topic for each of the reporting services.
Note:
I'm going to use Spring and ActiveMQ in my solution.
@Mondain: yes, very much possible. A practical example would be a live stock market price feed provided by multiple sources, with those feeds consumed by multiple channels.
Yes, you can create many TopicPublishers from your TopicSession, and many applications can connect to the same Topic using a TopicPublisher or TopicSubscriber.
You can do something like this, and call CreateMessageProducer to create a new producer instance anywhere in your application:
public ActiveMqProducer(string activeMqServiceUrl)
{
    _activeMqServiceUrl = activeMqServiceUrl;
    IConnectionFactory factory = new ConnectionFactory(new Uri(_activeMqServiceUrl));
    _activeMqConnection = factory.CreateConnection();
    _activeMqSession = _activeMqConnection.CreateSession(AcknowledgementMode.Transactional);
    _activeMqConnection.Start();
}

private IMessageProducer CreateMessageProducer(string mqTopicName)
{
    ITopic destination = SessionUtil.GetTopic(_activeMqSession, mqTopicName);
    var producer = _activeMqSession.CreateProducer(destination);
    return producer;
}
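A short usage sketch to go with it (the topic name "Reports" and the payload are made up; since the session is transactional, the send only becomes visible to subscribers after Commit):

IMessageProducer producer = CreateMessageProducer("Reports");

ITextMessage report = _activeMqSession.CreateTextMessage("scheduled report payload");
producer.Send(report);

// The session was created with AcknowledgementMode.Transactional,
// so subscribers only see the message once the session is committed.
_activeMqSession.Commit();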
