Script file to retrieve MQ messages from the queue manager - ibm-mq

I want to write a script file that will append MQ messages arriving on the queue manager to a log file. Please help.

If you want all messages arriving over a channel, you can use the LogIP exit from the BlockIP2 page of mrmq.dk. An API exit such as SupportPac MA0W can log all messages put. An API exit can catch messages from local applications as well as those arriving over channels.
If you want to script this, you can use a program such as Q (from SupportPac MA01) to remove the messages from the queue as they arrive and append them to a file.
For example,
#!/usr/bin/ksh
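# Destructively get every message from MY.QUEUE on MYQMGR and append it to the log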
q -IMYQMGR/MY.QUEUE >> logfile.txt
Typically, the script is set up as a triggered application, so new messages are appended to the file as they arrive. The problem with this is that it removes the messages destructively. If an application of record needs to consume those messages, it isn't a great solution. You could browse the queue instead, but there's no guarantee of seeing a message before the app of record gets it, and because a browse periodically restarts at the head of the queue, you might log the same message twice.
Another scripting option is the Perl MQSeries module, which exposes all the options of the WMQ API as well as object-oriented methods. If you need something quick and dirty, the Q program is delivered as an executable. If you want something powerful that exposes the full API to your script (and don't mind compiling it), the Perl MQSeries module is a great way to go. Here's a code snippet, taken from the module's samples, showing how to GET messages:
while (1) {
    $sync_flag = 0;
    undef $outcome;

    my $request_msg = MQSeries::Message::->new();

    my $status = $request_queue->
        Get('Message' => $request_msg,
            'GetMsgOpts' =>
            {
                'WaitInterval' => 5000, # 5 seconds
                'Options' => (MQSeries::MQGMO_WAIT |
                              MQSeries::MQGMO_SYNCPOINT_IF_PERSISTENT |
                              MQSeries::MQGMO_CONVERT |
                              MQSeries::MQGMO_FAIL_IF_QUIESCING),
            },
        );

    unless ($status) { # Error
        my $rc = $request_queue->Reason();
        die "Error on 'Get' from queue $qmgr_name/$request_qname:\n" .
            "\tReason: $rc (" . MQReasonToText($rc) . ")\n";
    }

    next if ($status < 0); # No message available

    # ... process $request_msg here, e.g. append $request_msg->Data() to the log file ...
}
One thing people have done in the past is to convert the queue to an alias over a topic. The app that uses the messages is redirected to GET from a new queue, and an administrative subscription routes the topic's publications to that new queue. At that point the real app still gets all the messages, and a second subscription can be added to feed a logging queue from the same topic.
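In MQSC terms, the setup might look something like this (MQ v7 or later; every object and topic name here is hypothetical):
DEFINE TOPIC(MY.TOPIC) TOPICSTR('APP/MY.QUEUE')
DEFINE QLOCAL(MY.QUEUE.APP)
DEFINE SUB(MY.QUEUE.APP.SUB) TOPICSTR('APP/MY.QUEUE') DEST(MY.QUEUE.APP)
DEFINE QLOCAL(MY.QUEUE.LOG)
DEFINE SUB(MY.QUEUE.LOG.SUB) TOPICSTR('APP/MY.QUEUE') DEST(MY.QUEUE.LOG)
* The original queue becomes an alias that publishes to the topic
DELETE QLOCAL(MY.QUEUE)
DEFINE QALIAS(MY.QUEUE) TARGET(MY.TOPIC) TARGTYPE(TOPIC)
The app of record is repointed at MY.QUEUE.APP, the logger drains MY.QUEUE.LOG, and putting applications keep using MY.QUEUE unchanged.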

Related

Ruby-kafka: read all messages from a topic and exit

I need to read all messages from a Kafka topic, process them, and exit (no need to run like a daemon forever). I have written code like the below. It serves the purpose if messages are available in the topic, but if the topic is empty (or there are no new messages for the given group_id), it waits until the next message arrives. I need to exit immediately if there is no message available to process. Please have a look at my code and suggest a better way to achieve this, if any.
I am using the ruby-kafka 1.3.0 gem.
require 'kafka'

khost = 'xxx.xxx.xxx.xxx'
kport = 'xxxx'

kafka = Kafka.new(["#{khost}:#{kport}"])
consumer = kafka.consumer(group_id: "my-consumer")
consumer.subscribe("my-topic")

consumer.each_batch do |batch|
  $msg = batch
  consumer.stop # stop after reading first batch
end

# Process messages here
$msg.messages.each do |message|
  puts message.value
end
I have also found the method kafka.fetch_messages; however, I did not find an option to maintain a group_id and track already-processed messages without adding additional code.
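For what it's worth, the same drain-and-exit pattern sketched in Python with the kafka-python client, whose consumer_timeout_ms option ends the consume loop once the topic goes idle (the broker address, topic, and group are placeholders mirroring the Ruby code):
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    'my-topic',
    bootstrap_servers='xxx.xxx.xxx.xxx:9092',
    group_id='my-consumer',
    auto_offset_reset='earliest',
    consumer_timeout_ms=5000,  # stop iterating after 5 s with no new message
)

# Iteration ends by itself once consumer_timeout_ms elapses with no
# message, so the script processes the backlog and then exits.
for message in consumer:
    print(message.value)

consumer.close()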

Trigger/Handle events between programs in different ABAP sessions

I have two programs running in separate sessions. I want to send an event from program A and catch this event in program B.
How can I do that?
Using class-based events is not really an option, as these cannot be used to communicate between user sessions.
There is a mechanism that you can use to send messages between sessions: ABAP Messaging Channels. You can send anything that is either a text string, a byte string, or something that can be serialised into one of those.
You will need to create such a message channel using the repository browser SE80 (Create > Connectivity > ABAP Messaging Channel) or with the Eclipse ADT (New > ABAP Messaging Channel Application).
In there, you will have to define:
The message type (text vs binary)
The ABAP programs that are authorised to access the message channel.
The scope of the messages (i.e. do you want to send messages between users? or just for the same user? what about between application servers?)
The message channels work through a publish-subscribe mechanism. You will have to use specialised classes to publish to the channel (inside report A) and to read from the channel (inside report B). In order to wait for a message to arrive once you have subscribed, you can use the statement WAIT FOR MESSAGING CHANNELS.
Example code:
" publishing a message
CAST if_amc_message_producer_text(
cl_amc_channel_manager=>create_message_producer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
i_suppress_echo = abap_true )
)->send( i_message = text_message ).
" subscribing to a channel
DATA(lo_receiver) = NEW message_receiver( ).
cl_amc_channel_manager=>create_message_consumer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
)->start_message_delivery( i_receiver = lo_receiver )
" waiting for a message
WAIT FOR MESSAGING CHANNELS
UNTIL lo_receiver->text_message IS NOT INITIAL
UP TO time SECONDS.
If you want to avoid waiting inside your subscriber report B and do something else in the meantime, you can wrap the WAIT FOR... statement inside an RFC and call this RFC using the aRFC variant. This allows report B to continue doing other work while waiting for the event. When the event happens, the aRFC callback method that you defined in your report when calling the RFC is executed.
Inside the RFC, you would simply have the subscription part and the WAIT statement plus an assignment of the message itself to an EXPORTING parameter. In your report, you could have something like:
CALL FUNCTION 'ZMY_AMC_WRAPPER'
  STARTING NEW TASK 'MY_TASK'
  CALLING lo_listener->my_method ON END OF TASK.

" inside your 'listener' class implementation
METHOD my_method.
  DATA lv_message TYPE my_message_type.
  RECEIVE RESULTS FROM FUNCTION 'ZMY_AMC_WRAPPER'
    IMPORTING ev_message = lv_message.
  " do something with lv_message
ENDMETHOD.
You could emulate it by having program B poll a parameter in SAP memory; program A sets this parameter to signal the event (i.e. SET/GET PARAMETER ...). In effect, B is polling for the event.
There are a lot of unknowns in your description. For example, is the event a one-shot operation, or can A send several events? If so, B will have to clear the parameter when it is done handling the event so that A knows it's OK to send a new one (and A will have to wait for the parameter to clear after having set it).
Edited: I removed the part about there being no messaging in ABAP, since Seban showed I was wrong.

Can I mark IBM MQ messages as dirty?

I do have the following (multi-threaded) process in place:
1. Browse MQ queue (with lock) and get the next available message
2. Do something with it which might or might not fail
3. a. If successful, remove message from queue and start over or b. if not successful, leave message on queue
My problem arises from the fact that my application could die unexpectedly between steps 2 and 3, in which case the application would produce a duplicate message upon restart.
Is there a way to mark a message as 'dirty' or 'processing' on the queue (while or after reading it) with the mark persisting even if the application restarts?
I have tried to use the marks provided by MQ, but they do not survive a restart. Another possibility would be to move the message to a 'processing' queue, remove it on success, or move it back to the source queue on failure, but this requires a second queue and the code is no longer trivial (a sketch of that approach follows below).
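For illustration, that second-queue variant might look like this with the Python pymqi client, doing the get and put in one unit of work so the move itself is atomic (connection details and queue names are hypothetical):
import pymqi

qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
source = pymqi.Queue(qmgr, 'SOURCE.QUEUE')
processing = pymqi.Queue(qmgr, 'PROCESSING.QUEUE')

gmo = pymqi.GMO(Options=pymqi.CMQC.MQGMO_SYNCPOINT)
pmo = pymqi.PMO(Options=pymqi.CMQC.MQPMO_SYNCPOINT)

# Get and put in one unit of work: after the commit the message is on
# PROCESSING.QUEUE exactly once, even if the application dies mid-move.
msg = source.get(None, pymqi.MD(), gmo)
processing.put(msg, pymqi.MD(), pmo)
qmgr.commit()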
Rough code example of my current approach:
MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.options = MQConstants.MQGMO_BROWSE_FIRST | MQConstants.MQGMO_LOCK;

MQMessage message = new MQMessage();
message.correlationId = MQC.MQCI_NONE;
message.messageId = MQC.MQMI_NONE;

queue.get(message, gmo);
boolean success = processMessage(message);

// Application gets killed here after successful message processing.
// Produces a duplicate after restart.
if (success) {
    MQGetMessageOptions gmo2 = new MQGetMessageOptions();
    gmo2.options = MQConstants.MQGMO_MSG_UNDER_CURSOR;
    queue.get(new MQMessage(), gmo2);
}
Basically, I'd like to achieve this:
get message non-destructively from queue (only if not marked as "processing")
mark message as "processing" on queue
process message (including sending to some destination)
if successful delete from queue, or remove "processing" state on queue otherwise
If the application dies right after a successful third step 'process message', the message would be marked as "processing" and would not be processed again (as it might have been already).
Note: I do not want this process to have any knowledge about the message processing (other than success).
Have you tried SYNCPOINT? A commit-or-backout kind of operation might help in this scenario.
Your solution is a horrible design. If you are updating a database then why are you not using 2 phase commits (i.e. XA transactions)?
Just have your MQ admin set up the queue manager to use the resource manager of the particular database you are using; then it is as simple as:
Start transaction (2 phase commit)
Get message (destructive get NOT browse) from the queue
Update database
Commit transaction
Hence, everything in the transaction, MQGET and database update, will either be committed together or backed out together.
If your application were to crash, then the resource manager will automatically back out everything in the transaction.
Let's say you don't want to use 2 phase commit, or you are not updating a database (you are updating a file, say); then you can use a single-phase UOW (Unit of Work), as sketched in the code after the following steps:
Use the MQGMO option MQGMO_SYNCPOINT
Get message (destructive get NOT browse) from the queue
Update whatever you are updating
Issue MQCMIT
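A minimal sketch of that single-phase UOW, here using the Python pymqi client rather than the C or Java API (the connection details, queue name, and process() step are all hypothetical):
import pymqi

qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
queue = pymqi.Queue(qmgr, 'MY.QUEUE')

# Destructive get under syncpoint: the message becomes invisible to other
# consumers, but it is not gone until we commit.
gmo = pymqi.GMO(Options=pymqi.CMQC.MQGMO_SYNCPOINT
                        | pymqi.CMQC.MQGMO_WAIT
                        | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING,
                WaitInterval=5000)

try:
    data = queue.get(None, pymqi.MD(), gmo)
    process(data)    # hypothetical update step
    qmgr.commit()    # MQCMIT: the get is now permanent
except Exception:
    qmgr.backout()   # MQBACK: the message goes back on the queue
    raise
finally:
    queue.close()
    qmgr.disconnect()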
Things to know about MQ:
If an application issues an MQDISC or ends normally, with current uncommitted operations, an implied MQCMIT is executed by IBM MQ, i.e. all operations done under SYNCPOINT are committed.
If an application ends abnormally, with current uncommitted operations, an implied MQBACK is executed by IBM MQ, i.e. all operations done under SYNCPOINT are rolled back.

Publisher finishes before subscriber and messages are lost - why?

Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub starting before pub), the publisher finishes but the subscriber hangs, having not received all the messages - why?
I think the socket is being closed before all the messages have been sent? Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

y = 0
for x in xrange(5000):
    st = random.randrange(1, 10)
    data = []
    data.append(random.randrange(1, 100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0, 10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st, s)
    socket.send("%d %s" % (st, s))
    print "Messages sent: %d" % x
    y += 1

print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber:
import sys
import zmq
import tnetstring

# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

filter = ""  # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)

x = 0
while True:
    topic, data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic, data, x)
    x += 1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the subscriber will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); term() will block until all the messages have been sent. You also have the problem of a slow joiner: the subscriber takes a moment to connect, and anything published before the connection completes is dropped. You can add a sleep after the bind, but before you start publishing. This works in a test case, but you will want to really understand what the solution is versus a band-aid.
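A rough illustration of that band-aid plus an orderly shutdown (a sketch only; the port, message content, and sleep duration are arbitrary, and the code is Python 2 style to match the question):
import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

time.sleep(1)  # slow-joiner band-aid: give subscribers time to connect

for i in range(5000):
    socket.send("%d message-%d" % (i % 10, i))

socket.close()  # with the default LINGER, queued messages are flushed first
context.term()  # blocks until all sockets in the context are closed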
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The Publisher will continue to send even if no one is listening. The subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example of this would be a status-type message: it doesn't matter if you missed any of the previous statuses, because the latest one is correct and message loss doesn't matter. The benefit of this approach is that you end up with a more robust and performant system. The downside is that you can't always design your messages this way.
Your example includes a type of message that cannot tolerate loss. Another example would be transactional messages: if you send only the deltas of what changed in your system, you cannot afford to lose any of them (database replication is often managed this way, which is why DB replication is often so fragile). To provide guarantees, you need to do a couple of things. First, add a persistent cache: each message sent is logged in the persistent cache and assigned a unique id (preferably a sequence) so that the clients can determine whether they are missing a message. Second, add a socket (ROUTER/REQ) for the clients to request the missing messages individually. Alternatively, you could use the secondary socket to request resending over the PUB/SUB socket; the clients would then all receive the messages again (which works for the multicast version) and ignore the messages they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
An alternative approach is to create your own broker using ROUTER/DEALER sockets. When the ROUTER socket sees each DEALER connect, it stores the client's identity. When the ROUTER needs to send data, it iterates over all stored client IDs and sends the message to each one. Each message should contain a sequence number so that the clients know which missing messages to request. NOTE: this is a sort of reimplementation of Kafka from LinkedIn.
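A rough pyzmq sketch of that broker idea, assuming clients are DEALER sockets that send at least one frame so the broker learns their identities (the port and framing are made up, Python 2 style as above):
import zmq

context = zmq.Context()
router = context.socket(zmq.ROUTER)
router.bind("tcp://*:5557")

clients = set()
seq = 0
while True:
    # Any frame a DEALER sends registers (or refreshes) its identity.
    frames = router.recv_multipart()
    identity, payload = frames[0], frames[-1]
    clients.add(identity)

    # Address each known client explicitly; the sequence number lets a
    # client detect gaps and request a resend of missed messages.
    seq += 1
    for client in clients:
        router.send_multipart([client, str(seq), payload])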

Posting large number of messages to AMQP queue

Using v0.7.1 of the Ruby amqp library and Ruby 1.8.7, I am trying to post a large number (millions) of short (~40 bytes) messages to a RabbitMQ server. My program's main loop (well, not really a loop, but still) looks like this:
AMQP.start(:host => '1.2.3.4',
           :username => 'foo',
           :password => 'bar') do |connection|
  channel  = AMQP::Channel.new(connection)
  exchange = channel.topic("foobar", {:durable => true})

  i = 0
  EM.add_periodic_timer(1) do
    print "\rPublished #{i} commits"
  end

  results = get_results # <- Returns an array

  processor = proc do
    if x = results.shift then
      exchange.publish(x, :persistent => true,
                          :routing_key => "test.#{i}")
      i += 1
      EM.next_tick processor
    end
  end

  EM.next_tick(processor)
  AMQP.stop { EM.stop }
end
The code starts processing the results array just fine, but after a while (usually after about 12k messages) it dies with the following error:
/Library/Ruby/Gems/1.8/gems/amqp-0.7.1/lib/amqp/channel.rb:807:in `send':
The channel 1 was closed, you can't use it anymore! (AMQP::ChannelClosedError)
No messages are stored on the queue. The error seems to be happening just when network activity from the program to the queue server starts.
What am I doing wrong?
First mistake is that you didn't post the RabbitMQ version that you are using. Lots of people are running the old, obsolete version 1.7.2 because that is what is in their OS package repositories, which is a bad move for anyone sending the volume of messages that you are. Get RabbitMQ 2.5.1 from the RabbitMQ site itself and get rid of your default system package.
Second mistake is that you did not tell us what is in the RabbitMQ logs.
Third mistake is that you said nothing about what is consuming the messages. Is there another process running somewhere that has declared a queue and bound it to the exchange? There is NO message queue unless somebody declares it to RabbitMQ and binds it to an exchange. Even then, messages will only flow if the binding key for the queue matches the routing key that you publish with.
Fourth mistake. You have routing keys and binding keys mixed up. The routing key is a string such as topic.test.json.echos, and the binding key (used to bind a queue to an exchange) is a pattern like topic.# or topic.*.json.
Updated after your clarifications
Regarding versions, I'm not sure when it was fixed, but there was a problem in 1.7.2 where large numbers of persistent messages caused RabbitMQ to crash when it rolled over its persistence log, and after crashing it was unable to restart until someone manually undid the rollover.
When you say that a connection is being opened and closed, I hope that it is not per message. That would be a strange way to use AMQP.
Let me repeat: producers do NOT write messages to queues. They write messages to exchanges, which then route the messages to queues based on the routing key (string) and the queue's binding key (pattern). In your example I misread the use of the # sign, but I see nothing which declares a queue and binds it to the exchange.
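To make that concrete, here is a consumer-side sketch using the Python pika client; it declares a queue and binds it to the exchange from the Ruby example so published messages actually have somewhere to go (the queue name and binding key are hypothetical):
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('1.2.3.4'))
ch = conn.channel()

# The exchange the producer publishes to (same name and type as the Ruby code).
ch.exchange_declare(exchange='foobar', exchange_type='topic', durable=True)

# Without a queue declared and bound, published messages are simply dropped.
ch.queue_declare(queue='test_messages', durable=True)

# Binding key 'test.#' matches routing keys like 'test.0', 'test.1', ...
ch.queue_bind(queue='test_messages', exchange='foobar', routing_key='test.#')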
