WebSphere MQ: Disable logging

http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.amqzag.doc%2Ffa12570_.htm talks about log configuration, but does not say if logging can be disabled altogether. Is this possible?

There is no performance impact to worry about from the log files MQ uses. These are not log files in the generic sense; they instead keep a record of the in-flight transactions, so they cannot be disabled.

Related

IBM MQ queues files rolling over policy

We are using IBM MQ and recently we faced an issue where some messages that were declared as sent to the MQ server by our client application were not consumed by our MQ consumer.
We were not logging the produced/consumed messages ourselves, so we tried to find the messages in the MQ server's log/data files.
We found that messages are stored under /var/mqm/qmgrs/MQ_MANAGER/queues/, but we could not find all of the messages in the queue file (old messages were missing).
What is the rollover policy of IBM MQ, and where do old queue files go?
That's not how the queue files work. They are not rollover logs. The same space is continually overwritten as needed to store messages, but messages may not be written there at all if they can be processed through memory caches etc.
PERSISTENT messages are usually logged in files under /var/mqm/log, but there are circumstances where even that can be avoided. Your qmgr's recovery logfile configuration (circular/linear etc) will determine whether historic information about PERSISTENT messages remains available.
NONPERSISTENT messages are never logged in those files.
In IBM MQ messages can be either persistent or non-persistent.
If a message is persistent it will normally be written to the transactional logs (usually under /var/mqm/log/MQ_MANAGER/active) before a commit completes or before the PUT completes if not done under a unit of work.
If a message is non-persistent it will not be written to the transactional logs.
At this point either type of message may reside only in memory and will only be written to the queue file (usually under /var/mqm/qmgrs/MQ_MANAGER/queues) if the queue manager needs to offload memory or if the message is persistent and a checkpoint is taken.
If the message is consumed in a timely manner it may never be written to the queue file.
The queue file will shrink in size when space taken up by messages that are no longer needed can be reclaimed; this happens automatically and, as far as I know, is not configurable or documented by IBM.
Non-persistent messages generally do not survive a queue manager restart.
Transactional logs can be configured as circular or linear. If circular the logs will be reused once they are no longer needed. If linear with automatic log management (introduced in 9.0.2) they will work similarly to circular. If linear without automatic log management, what happens to logs that are no longer needed would be based on your own log management.
If the message is still in the transactional log you may be able to view it as described in "Where's my message? Tool and instructions to use the MQ recovery log to find out what happened to your persistent MQ messages on distributed platforms".
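As a concrete illustration of the persistent/non-persistent distinction, here is a minimal JMS sketch. It assumes a connection factory for the queue manager is obtained elsewhere (from JNDI or built programmatically); the queue name Q1 and the message payloads are purely illustrative.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Minimal sketch: persistence is chosen per message in JMS.
// "connectionFactory" is assumed to be an IBM MQ connection factory supplied
// by the caller; the queue name "Q1" is illustrative.
public class PersistenceExample {

    public static void send(ConnectionFactory connectionFactory) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("Q1");
            MessageProducer producer = session.createProducer(queue);

            // PERSISTENT: hardened to the recovery log (e.g. /var/mqm/log/...),
            // so it can survive a queue manager restart.
            producer.send(session.createTextMessage("audited order"),
                    DeliveryMode.PERSISTENT, 4, 0);

            // NON_PERSISTENT: never written to the recovery log and generally
            // does not survive a queue manager restart.
            producer.send(session.createTextMessage("heartbeat"),
                    DeliveryMode.NON_PERSISTENT, 4, 0);
        } finally {
            connection.close();
        }
    }
}
```

Whether the PERSISTENT message remains recoverable after a restart then depends on the circular/linear recovery log configuration described above.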

ActiveMQ non-persistent delivery mode limitations?

I am using ActiveMQ and have the following requirements:
To have very fast consumers, as my producers are already very fast
Need to process at least 2K messages per second
No need to process/consume messages again in case of a server crash or other failures; I can trigger the whole process again
Needs to run on a very normal server configuration - 4 GiB RAM
I have configured ActiveMQ as given below
Using non-persistent delivery mode (vm://localhost)(http://activemq.apache.org/what-is-the-difference-between-persistent-and-non-persistent-delivery.html)
Using Spring Integration to put/fetch messages in/from a queue/channel.
Using max-concurrent-consumers with 10 threads
Assume all other configs are defaults for ActiveMQ and Spring Integration.
Problems/Questions
I am not sure how ActiveMQ stores messages in non-persistent delivery mode. Is it possible that my process will fail with out-of-memory errors once my queue size exceeds some limit? I am asking because the whole process is very difficult for me to test, so I need to be aware of the limitations before I trigger the process.
If non-persistent delivery mode is not sufficient for my requirements above, are there any performance tuning tips with which I can achieve my requirements with persistent delivery mode (tcp://)? I have already tested this mode, but the consumers seem very slow there. I have also tried using DUPS_OK_ACKNOWLEDGE to speed up my consumers with persistent delivery mode, but with no luck.
NOTE: I am using the latest ActiveMQ version, 5.14.
I am not sure how ActiveMQ stores messages in non-persistent delivery mode
ActiveMQ stores messages in memory at first, and it will also swap them to disk (there is a tmp_storage folder in ActiveMQ's data path).
is it possible that my process will fail with out-of-memory errors once my queue size exceeds some limit
I have never run into out-of-memory errors in ActiveMQ, even with about one million messages.
You can also protect yourself with producer flow control (http://activemq.apache.org/producer-flow-control.html).
It can make the producer block when there are too many unconsumed messages.
As for the performance of persistent delivery, I have no good suggestions either.
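For what it's worth, here is a minimal plain-JMS sketch of the kind of setup discussed above: asynchronous, non-persistent sends plus a DUPS_OK_ACKNOWLEDGE consumer. The broker URL, queue name, and payload are illustrative, and it assumes the activemq-client library is on the classpath; producer flow control itself is a broker-side setting (activemq.xml), so it is not shown here.

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch of a fast, loss-tolerant setup: async, non-persistent sends plus a
// DUPS_OK_ACKNOWLEDGE consumer. Broker URL, queue name and payload are illustrative.
public class FastNonPersistentExample {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.setUseAsyncSend(true); // don't block waiting for a broker ack on each send

        Connection connection = factory.createConnection();
        connection.start();

        // Producer side: non-persistent, so messages skip the persistent store
        // and will not survive a broker restart.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue("orders");
        MessageProducer producer = producerSession.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.send(producerSession.createTextMessage("payload"));

        // Consumer side: DUPS_OK trades strict once-only acknowledgement for fewer acks.
        Session consumerSession = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumerSession.createConsumer(consumerSession.createQueue("orders"))
                .setMessageListener(message -> {
                    // process the message; duplicates are possible with DUPS_OK
                });

        // ... keep the connection open while consuming; close it on shutdown.
    }
}
```

On the broker side, the memory, store and temp-store limits (the systemUsage section of activemq.xml) together with producer flow control decide when producers get throttled rather than the broker running out of memory.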

Oracle service bus with BigData

I do not have much experience with Oracle Service Bus; I am trying to design a logging solution with BigData.
As I understand it, the default log and report activities in OSB put the data into the domain's server log file or into the database configured when we set up the server domain. If I want to put all the logs into a separate BigData database, I will need one of these approaches:
Java callout: use JMS or some other technology to send the data to the BigData server.
Web service callout: create a separate web service to handle the logging.
Create custom report provider to replace the default one in OSB Reporting.
Something else
Please give me some ideas about which method I should use, and please provide your reasons if you can. Thank you so much.
Isn't the logging framework in WebLogic based on Log4j? That means you can use a JMSAppender (probably prudent to wrap it in an async Log4j appender if you can) and handle it however you want.
Or, if you're talking about the OSB Reporting framework, there's a few options:
Configure the default JMS reporting provider (which uses the underlying SOAINFRA database, which hopefully is set up to be something better than the default Derby instance), then write an MDB that pulls reports off the queue and inserts them into SAS BigData (a rough sketch of such an MDB follows this answer).
Turn the JMS provider off and use a custom provider, which can do anything you want. If you want, you can still do a two-step process, where the reporting provider itself puts reports on a JMS queue so it returns quickly, and a different MDB pulls messages off and persists them at its own pace.
I do not recommend a web service or database callout without an async step in the middle, because you need logging and reporting to be very quick and use as little resources for as short a period as possible.
You don't want logging to hog threads while you're experiencing load. I have seen entire buses brought down because of one hiccup, because the logging database suffered a performance blip, which caused a bunch of open threads trying to log to it, which caused thread starvation or timeouts, which caused more error logging...
If you have a buffer like a JMS queue, then you can handle peaks by planning ahead. You can say "actually I want a JMS queue of 10,000 messages, and if that overflows due to whatever reason, I want to (push the overflow to a separate queue over on this other box) or (filter out all the non-essential messages) or (throw new messages away) or (action of your choice). Oh yeah, and if the logging database fails then I will try 3 times to commit and if not, move it to this other queue". Or whatever you want.
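To make the two-step option above more concrete, here is a rough MDB sketch that drains report/log messages from a JMS queue and persists them at its own pace. The activation-config properties, the queue JNDI name jms/osbReportQueue, and the BigDataWriter class are placeholders rather than real OSB or WebLogic artifacts; how the destination is actually bound depends on your container and EJB version.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Rough sketch of an MDB that drains report/log messages from a JMS queue and
// hands them to a BigData writer at its own pace, so OSB pipelines return quickly.
// The activation-config values and JNDI name are placeholders; the exact way the
// destination is bound depends on the container and EJB version.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/osbReportQueue"),
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
})
public class ReportSinkMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String report = ((TextMessage) message).getText();
                BigDataWriter.write(report);
            }
        } catch (Exception e) {
            // Rethrow so the container redelivers or dead-letters the message,
            // depending on your redelivery settings.
            throw new RuntimeException(e);
        }
    }

    // Hypothetical placeholder for whatever BigData client you actually use.
    static final class BigDataWriter {
        static void write(String report) {
            // e.g. hand off to the HDFS/HBase/Cassandra client of your choice
        }
    }
}
```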
There are multiple ways to achieve this. You could use the report activity to push to JMS or use the log activity.
You can also write a small routine (either on OSB or outside it; a rough sketch of the producer side follows below) that can read anything you are logging (such as via the log activity, but also the additional metadata that is logged when you turn on monitoring of OSB components) and do with it whatever is needed (such as pushing it to a database or BigData store).
The key is to avoid writing an explicit service call in each pipeline/flow; the approach(es) above use standard OSB/ODL* loggers.
*Oracle Diagnostic Logging
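Here is a rough sketch of the producer side of such a routine: a static method that a Java callout (or any code with JNDI access) could invoke to drop a log/report payload onto a JMS queue and return immediately. The JNDI names are assumptions; use whatever your domain actually defines, and in real use cache the looked-up factory rather than resolving it on every call.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

// Producer-side sketch: something a Java callout (or any code with JNDI access)
// could call to hand a log/report payload to a JMS queue and return quickly.
// The JNDI names below are assumptions; use whatever your domain defines.
public class LogPublisher {

    public static void publish(String payload) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/LogConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/osbReportQueue");

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // Non-persistent is usually fine for diagnostics; switch to PERSISTENT
            // if log entries must survive a broker restart.
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            producer.send(session.createTextMessage(payload));
        } finally {
            connection.close();
        }
    }
}
```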

What are "log pages" and use in WMQ?

In logging, how are log pages useful and how are they used for log writes? I'd appreciate anyone's help!
Log pages are an internal feature of WMQ. They are used by the queue manager to record recoverable data that is needed after a queue manager restart to restore objects and messages that were modified or processed since the last checkpoint.
They are not directly usable by applications or users and, for the most part, can be ignored.
More information is available in the WMQ7 InfoCenter: http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp

Ensure durability with messages when using WCF

I am wondering how we can ensure message durability when using WebSphere MQ and WCF. I want my WCF process to pick messages off the queue, and if the application encounters an issue (a power outage, etc.), I don't want to lose the messages. I would also like to avoid using a transaction if at all possible, because I want to eliminate distributed transactions.
Thanks,
S
Well, there's transactions and there's distributed transactions. The "right" answer is to use the WMQ 1-phase commit here. That doesn't have the complexity of XA transactions but it does give you the ability to roll back a message without losing it. In fact, when using clients you really should be using at least 1-phase commit just to prevent loss of messages.
Short of that there is always the "browse-with-lock, delete-message-under-cursor" method. I'm pretty sure everything you need to do the browsing, locking and deleting is exposed under .NET, but perhaps Shashi will comment and confirm.
The WebSphere MQ WCF custom channel has a feature, "Assured Delivery", that guarantees that a service request or reply is actioned and not lost. This is the 1-phase commit (also known as SYNC_POINT) in WMQ.
"Assured Delivery" is a service contract attribute. Here are more details about the feature.
