I am trying to understand some of the IBM MQ put options:
I have used https://www.ibm.com/docs/en/ibm-mq/9.2?topic=interfaces-mqputmessageoptionsnet-class for documentation.
It seems that MQPMO_ASYNC_RESPONSE and MQPMO_SYNC_RESPONSE are in fact mutually exclusive, yet these options have two different IDs (bit 12 and bit 13). What does MQ do when both options are set, or when neither one is set?
It seems that MQPMO_SYNCPOINT and MQPMO_NO_SYNCPOINT are in fact mutually exclusive, yet these options have two different IDs (bit 2 and bit 3). What does MQ do when both options are set, or when neither one is set?
MQPMO_RESPONSE_AS_Q_DEF makes things even more confusing for me. As I understand from the documentation, this bit defers the choice between synchronous and asynchronous requests to the queue definition, thereby ignoring the options set with MQPMO_ASYNC_RESPONSE and MQPMO_SYNC_RESPONSE. But the documentation states the following:
For an MQDestination.put call, this option takes the put response type from DEFPRESP attribute of the queue.
For an MQQueueManager.put call, this option causes the call to be made synchronously.
And the documentation for DEFPRESP at https://www.ibm.com/docs/en/ibm-mq/9.1?topic=queues-defpresp-mqlong states this:
The default put response type (DEFPRESP) attribute defines the value used by applications when the PutResponseType within MQPMO has been set to MQPMO_RESPONSE_AS_Q_DEF. This attribute is valid for all queue types.
The value is one of the following:
SYNC
The put operation is issued synchronously returning a response.
ASYNC
The put operation is issued asynchronously, returning a subset of MQMD fields.
But the other documentation says setting this option makes the call synchronous.
So in short: What happens with the seemingly mutually exclusive options and what does the MQPMO_RESPONSE_AS_Q_DEF really do?
If you combine option flags that are not supposed to be combined, such as MQPMO_SYNCPOINT and MQPMO_NO_SYNCPOINT, the MQPUT call will fail with MQRC_OPTIONS_ERROR. You can see the documentation for this in IBM Docs here.
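That rejection can be sketched in a few lines of Python. This is a minimal illustrative model, not the queue manager's actual implementation: the constant values below mirror IBM's C header cmqc.h, and I'm assuming the same style of check applies to the response-type pair as to the syncpoint pair.

```python
# Constant values as they appear in IBM's cmqc.h (treat as assumptions):
MQPMO_SYNCPOINT      = 0x00000002
MQPMO_NO_SYNCPOINT   = 0x00000004
MQPMO_ASYNC_RESPONSE = 0x00010000
MQPMO_SYNC_RESPONSE  = 0x00020000
MQRC_OPTIONS_ERROR   = 2046

def validate_put_options(options):
    """Reject mutually exclusive MQPMO flag combinations, as MQPUT does."""
    if options & MQPMO_SYNCPOINT and options & MQPMO_NO_SYNCPOINT:
        return MQRC_OPTIONS_ERROR
    if options & MQPMO_ASYNC_RESPONSE and options & MQPMO_SYNC_RESPONSE:
        return MQRC_OPTIONS_ERROR
    return 0  # MQRC_NONE: the combination is acceptable

assert validate_put_options(MQPMO_SYNCPOINT | MQPMO_NO_SYNCPOINT) == MQRC_OPTIONS_ERROR
assert validate_put_options(MQPMO_NO_SYNCPOINT | MQPMO_ASYNC_RESPONSE) == 0
```

As for "neither one is set": in the C header MQPMO_RESPONSE_AS_Q_DEF appears to be defined as 0x00000000, so leaving both response flags unset is the same thing as asking for the queue-defined behaviour.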
You are correct, the use of MQPMO_RESPONSE_AS_Q_DEF tells the queue manager to take the value for Put Response from the queue attribute DEFPRESP. The queue manager will look up the queue definition, and if it is SYNC it will effectively use MQPMO_SYNC_RESPONSE and if it is ASYNC it will effectively use MQPMO_ASYNC_RESPONSE.
The documentation you pointed us to states the following:
MQC.MQPMO_RESPONSE_AS_Q_DEF
For an MQDestination.put call, this option takes the put response type from DEFPRESP attribute of the queue.
For an MQQueueManager.put call, this option causes the call to be made synchronously.
I don't know why it would be different depending on the class used, but I can tell you that if MQPMO_RESPONSE_AS_Q_DEF makes it to the queue manager, it will be resolved as described above. The documentation suggests that the MQQueueManager class is changing it itself, which is an odd decision.
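The lookup described above can be sketched as follows. Plain Python with string stand-ins for the MQ constants, purely illustrative:

```python
def effective_put_response(pmo_response, queue_defpresp):
    """Resolve which put-response type the queue manager actually uses.

    pmo_response:   "SYNC", "ASYNC" or "AS_Q_DEF" (stand-ins for the
                    MQPMO_*_RESPONSE put-message options)
    queue_defpresp: the queue's DEFPRESP attribute, "SYNC" or "ASYNC"
    """
    if pmo_response == "AS_Q_DEF":
        return queue_defpresp  # defer to the queue definition
    return pmo_response

assert effective_put_response("AS_Q_DEF", "ASYNC") == "ASYNC"
assert effective_put_response("SYNC", "ASYNC") == "SYNC"
```

The surprise reported in the quoted documentation is that MQQueueManager.put apparently forces the synchronous branch on the client side, so AS_Q_DEF never reaches this lookup at all.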
If you use the default constructor to construct a TPL BufferBlock, are the DataFlowBlockOptions unbounded? In other words, what is the BoundedCapacity of the BufferBlock?
As stated in this SO answer, it's not possible to query or change the values of the BufferBlock's options after construction.
You have two options to find this out: read the docs, or create the BufferBlock yourself.
From Introduction to TPL Dataflow:
The majority of the dataflow blocks included in System.Threading.Tasks.Dataflow.dll support the specification of a bounded capacity.
This is the limit on the number of items the block may be storing and have in flight at any one time. By default, this value is initialized to DataflowBlockOptions.Unbounded (-1), meaning that there is no limit.
However, a developer may explicitly specify an upper bound. If a block is already at its capacity when an additional message is offered to it, that message will be postponed.
Also, from MSDN:
DataflowBlockOptions is mutable and can be configured through its properties.
When specific configuration options are not set, the following defaults are used:
TaskScheduler: TaskScheduler.Default
MaxMessagesPerTask: DataflowBlockOptions.Unbounded (-1)
CancellationToken: CancellationToken.None
BoundedCapacity: DataflowBlockOptions.Unbounded (-1)
Dataflow blocks capture the state of the options at their construction.
Subsequent changes to the provided DataflowBlockOptions instance should not affect the behavior of a dataflow block.
You can always view private members from the debugger.
You may also try to get or set them via reflection, but this is really not recommended.
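As an analogy only (Python's standard library queue, not the actual .NET API): maxsize=0 plays the role of DataflowBlockOptions.Unbounded, and a bounded queue that is already at capacity refuses further offers, roughly as a bounded BufferBlock postpones them.

```python
import queue

unbounded = queue.Queue()          # maxsize=0: no capacity limit, like Unbounded (-1)
bounded = queue.Queue(maxsize=1)   # explicit upper bound, like BoundedCapacity = 1

bounded.put_nowait("first")
try:
    # Already at capacity: the offer is refused. (A dataflow block would
    # postpone the message rather than raise, but the capacity rule is the same.)
    bounded.put_nowait("second")
    overflowed = False
except queue.Full:
    overflowed = True

assert overflowed
assert unbounded.maxsize == 0  # here the "no limit" default is queryable,
                               # unlike BufferBlock's options after construction
```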
I am trying to write a consumer for an existing queue.
RabbitMQ is running in a separate instance and a queue named "org-queue" is already created and bound to an exchange. org-queue is a durable queue and it has some additional properties as well.
Now I need to receive messages from this queue.
I have used the code below to get an instance of the queue:
conn = Bunny.new
conn.start
ch = conn.create_channel
q = ch.queue("org-queue")
It throws an error about a mismatched durable property. It seems Bunny uses durable = false by default, so I added durable: true as a parameter. Now it reports mismatches in the other parameters. Do I need to specify all the parameters to connect to it? As RabbitMQ is maintained by a different environment, it is hard for me to get all the properties.
Is there a way to get the list of queues and listen to the required queue from the client, instead of connecting to a queue by specifying all of its parameters?
Have you tried the :passive => true parameter on queue()? With :passive, Bunny only checks that the queue exists rather than declaring it, so you don't have to match any of its other properties: ch.queue("org-queue", :passive => true). A real-world example is the rabbitmq plugin of Logstash.
Based on the documentation here http://reference.rubybunny.info/Bunny/Queue.html and
http://reference.rubybunny.info/Bunny/Channel.html
Using the ch.queues() method you could get a hash of all the queues on that channel. Once you find the instance of the queue you want to connect to, you could use the q.options() method to find out what options that RabbitMQ queue was declared with.
It seems like a roundabout way to do it, but it might work. I haven't tested this as I don't have a RabbitMQ server up at the moment.
Maybe there is a way to get this info with rabbitmqctl or the admin tool (I have forgotten its name). Even if so, I would not bother.
There are two possible solutions that come to my mind.
First solution:
In general, if you want to declare an already existing queue, it has to be declared with ALL the correct parameters. So what I do is have a helper function for declaring a specific queue (I'm using the C++ client, so the API may be different, but I'm sure the concept is the same). For example, if I have 10 subscribers that consume queue1, and each of them needs to declare the queue in the same way, I simply write a util that declares this queue, and that's that.
Before the second solution, a little something: maybe this is a case of a misconception that happens too often :)
You don't really need a specific queue to get the messages from that queue. What you need is a queue and the correct binding. When sending a message, you are not really sending it to the queue, but to the exchange, sometimes with a routing key, sometimes without one - let's say with. On the receiving end you need a queue to consume messages, so naturally you declare one and bind it to an exchange with a routing key. You don't even need to name the queue explicitly; the server will provide a generated name for you, which you can then use when binding.
Second solution:
This relies on the fact that
It is perfectly legal to bind multiple queues with the same binding key
(found here https://www.rabbitmq.com/tutorials/tutorial-four-java.html)
So each of your subscribers can declare a queue in whatever way they want, as long as they do the binding correctly. Of course these would be different queues with different names.
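That approach can be sketched with Python's pika client (Bunny's Ruby API is analogous). The method names follow pika's real API, but the channel is assumed to be an open AMQP channel, and the exchange and routing-key names are made up for illustration:

```python
def consume_via_own_queue(channel, exchange, routing_key, on_message):
    """Declare a fresh server-named queue and bind it, instead of trying
    to re-declare someone else's queue with exactly matching properties.

    `channel` is assumed to be an open channel (e.g. pika's BlockingChannel).
    """
    # queue="" asks the broker to generate a unique name; exclusive=True
    # makes the queue private to this connection and deleted with it.
    result = channel.queue_declare(queue="", exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(queue=queue_name, exchange=exchange,
                       routing_key=routing_key)
    channel.basic_consume(queue=queue_name,
                          on_message_callback=on_message,
                          auto_ack=True)
    return queue_name
```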
I would not recommend this, though: it implies that every message goes to more than one queue, while (I am assuming) the use case here needs each message to be processed only once, by a single subscriber.
What is the exact nature of the thread-unsafety of a JMS Session and its associated constructs (Message, Consumer, Producer, etc)? Is it just that access to them must be serialized, or is it that access is restricted to the creating thread only?
Or is it a hybrid case where creation can be distinguished from use, i.e. one thread can create them only and then another thread can be the only one to use them? This last possibility would seem to contradict the statement in this answer which says "In fact you must not use it from two different threads at different times either!"
But consider the "Server Side" example code from the ActiveMQ documentation.
The Server class has data members named session (of type Session) and replyProducer (of type MessageProducer) which are
created in one thread: whichever one invokes the Server() constructor and thereby invokes the setupMessageQueueConsumer() method with the actual creation calls; and
used in another thread: whichever one invokes the onMessage() asynchronous callback.
(In fact, the session member is used in both threads too: in one to create the replyProducer member, and in the other to create a message.)
Is this official example code working by accident or by design? Is it really possible to create such objects in one thread and then arrange for another thread to use them?
(Note: in other messaging infrastructures, such as Solace, it's possible to specify the thread on which callbacks occur, which could be exploited to get around this "thread affinity of objects" restriction, but no such API call is defined in JMS, as far as I know.)
The JMS specification says a Session object should not be used across threads, except for calling the Session.close() method. Technically speaking, if access to the Session object or its children (producers, consumers, etc.) is serialized, then the Session and its child objects can be accessed across threads. Having said that, since JMS is an API specification, its implementation differs from vendor to vendor. Some vendors might strictly enforce thread affinity while others may not. So it's always better to stick to the JMS specification and write code accordingly.
The official answer appears to be a footnote to section 4.4. "Session" on p.60 in the JMS 1.1 specification.
There are no restrictions on the number of threads that can use a Session object or those it creates. The restriction is that the resources of a Session should not be used concurrently by multiple threads. It is up to the user to insure that this concurrency restriction is met. The simplest way to do this is to use one thread. In the case of asynchronous delivery, use one thread for setup in stopped mode and then start asynchronous delivery. In more complex cases the user must provide explicit synchronization.
Whether a particular implementation abides by this is another matter, of course. In the case of the ActiveMQ example, the code is conforming because all inbound message handling is through a single asynchronous callback.
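The footnote's rule ("any thread may use the session, just never two at once") can be illustrated in Python terms. SerializedSession here is a hypothetical wrapper, not part of any JMS client; the lock provides the "explicit synchronization" the spec demands:

```python
import threading

class SerializedSession:
    """Wrap a non-thread-safe session so every call is serialized.

    Any thread may call through the wrapper, matching the JMS 1.1 rule:
    there is no thread affinity, only a ban on *concurrent* use.
    """
    def __init__(self, session):
        self._session = session
        self._lock = threading.Lock()  # explicit synchronization, per the spec

    def send(self, message):
        with self._lock:
            return self._session.send(message)

class _FakeSession:
    """Detects concurrent use the way a strict JMS provider might."""
    def __init__(self):
        self.inside = False
        self.count = 0
    def send(self, message):
        assert not self.inside, "concurrent use detected"
        self.inside = True
        self.count += 1
        self.inside = False

fake = _FakeSession()
safe = SerializedSession(fake)
threads = [threading.Thread(target=lambda: [safe.send("m") for _ in range(100)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert fake.count == 400  # all calls went through, none concurrently
```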
I've started looking at MassTransit and am writing the classes that will handle the messages. When I implement the interface from Consumes<T> I get four options: All, Selected, For<T> and Context. What is the difference between the four, and when should each be used?
All just gives you all the messages to consume. Context is like All, but you also get the Context<TMessage> if you need it. Selected allows you to accept or reject messages before they get to your consumer. For<T> is primarily for sagas; I don't think there's a good use case for it outside of that.
Starting off, just using All is likely the right answer.
I've been reading the MSDN documentation for IcmpSendEcho2 and it raises more questions than it answers.
I'm familiar with asynchronous callbacks from other Win32 APIs such as ReadFileEx... I provide a buffer which I guarantee will be reserved for the driver's use until the operation completes with any result other than IO_PENDING, I get my callback in case of either success or failure (and call GetCompletionStatus to find out which). Timeouts are my responsibility and I can call CancelIo to abort processing, but the buffer is still reserved until the driver cancels the operation and calls my completion routine with a status of CANCELLED. And there's an OVERLAPPED structure which uniquely identifies the request through all of this.
IcmpSendEcho2 doesn't use an OVERLAPPED context structure for asynchronous requests. And the documentation is excessively minimalist about what happens if the ping times out or fails (failure being a lack of network connection, a missing ARP entry for local peers, an ICMP destination-unreachable response from an intervening router for remote peers, etc.).
Does anyone know whether the callback occurs on timeout and/or failure? And especially, if no response comes, can I reuse the buffer for another call to IcmpSendEcho2 or is it forever reserved in case a reply comes in late?
I'm wanting to use this function from a Win32 service, which means I have to get the error-handling cases right and I can't just leak buffers (or if the API does leak buffers, I have to use a helper process so I have a way to abandon requests).
There's also an ugly incompatibility in the way the callback is made. It looks like the first parameter is consistent between the two signatures, so I should be able to use the newer PIO_APC_ROUTINE as long as I only use the second parameter if an OS version check returns Vista or newer? Although MSDN says "don't do a Windows version check", it seems like I need to, because the set of versions with the new argument isn't the same as the set of versions where the function exists in iphlpapi.dll.
Pointers to additional documentation or working code which uses this function and an APC would be much appreciated.
Please also let me know if this is completely the wrong approach -- i.e. if either using raw sockets or some combination of IcmpCreateFile+WriteFileEx+ReadFileEx would be more robust.
I use IcmpSendEcho2 with an event, not a callback, but I think the flow is the same in both cases. IcmpSendEcho2 uses NtDeviceIoControlFile internally. It detects some ICMP-related errors early on and returns them as error codes in the 12xx range. If (and only if) IcmpSendEcho2 returns ERROR_IO_PENDING, it will eventually call the callback and/or set the event, regardless of whether the ping succeeds, fails or times out. Any buffers you pass in must be preserved until then, but can be reused afterwards.
As for the version check, you can avoid it at a slight cost by using an event with RegisterWaitForSingleObject instead of an APC callback.