In logging, how are log pages useful, and how are they used when writing the log?
I'd appreciate any help!
Log pages are an internal feature of WMQ. They are used by the queue manager to record recoverable data that is needed at queue manager restart to restore objects and messages that were modified or processed since the last checkpoint.
They are not directly usable by applications or users and can, for the most part, be ignored.
More information is available in the WMQ7 InfoCenter: http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp
Related
We are using IBM MQ and recently we faced an issue where some messages that were declared as sent to the MQ server by our client application were not consumed by our MQ consumer.
We lacked logging of produced/consumed messages, so we tried to check the messages in the MQ server's log/data files.
We found that messages are stored in /var/mqm/qmgrs/MQ_MANAGER/queues/, but we did not find all of the queue's messages in the queue file (old messages were missing).
What is the rollover policy of IBM MQ, and where do old queue files go?
That's not how the queue files work. They are not rollover logs. The same space is continually overwritten as needed to store messages, but messages may not be written there at all if they can be processed through memory caches etc.
PERSISTENT messages are usually logged in files under /var/mqm/log, but there are circumstances where even that can be avoided. Your qmgr's recovery logfile configuration (circular/linear etc) will determine whether historic information about PERSISTENT messages remains available.
NONPERSISTENT messages are never logged in those files.
In IBM MQ messages can be either persistent or non-persistent.
If a message is persistent it will normally be written to the transactional logs (usually under /var/mqm/log/MQ_MANAGER/active) before a commit completes or before the PUT completes if not done under a unit of work.
If a message is non-persistent it will not be written to the transactional logs.
At this point either type of message may reside only in memory and will only be written to the queue file (usually under /var/mqm/qmgrs/MQ_MANAGER/queues) if the queue manager needs to free memory or if the message is persistent and a checkpoint is taken.
If the message is consumed in a timely manner it may never be written to the queue file.
The queue file will shrink in size when space taken up by messages that are no longer needed can be reclaimed; this happens automatically and, as far as I know, is not configurable or documented by IBM.
Non-persistent messages generally do not survive a queue manager restart.
Transactional logs can be configured as circular or linear. If circular the logs will be reused once they are no longer needed. If linear with automatic log management (introduced in 9.0.2) they will work similarly to circular. If linear without automatic log management, what happens to logs that are no longer needed would be based on your own log management.
If the message is still in the transactional log you may be able to view it as described in "Where's my message? Tool and instructions to use the MQ recovery log to find out what happened to your persistent MQ messages on distributed platforms".
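To make the persistence behaviour above concrete, here is a minimal sketch using the IBM MQ classes for Java. The queue manager name QM1 and queue APP.TEST.QUEUE are placeholders; setting persistence explicitly on the MQMessage is what determines whether the PUT is hardened to the recovery log under /var/mqm/log.

    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class PersistentPut {
        public static void main(String[] args) throws Exception {
            // Connect in bindings mode to a local queue manager (name is a placeholder).
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("APP.TEST.QUEUE",
                    CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

            MQMessage msg = new MQMessage();
            // Force the message to be persistent rather than inheriting the queue default.
            // A persistent PUT outside a unit of work is written to the recovery log
            // before the call returns; MQPER_NOT_PERSISTENT messages are never logged.
            msg.persistence = CMQC.MQPER_PERSISTENT;
            msg.writeString("example payload");

            queue.put(msg, new MQPutMessageOptions());
            queue.close();
            qmgr.disconnect();
        }
    }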
We are using IBM MQ 8.0. Activity logs are getting written for the outgoing messages we send to an external system, but there is no log for the messages coming from the external system to our queue manager.
Is it a problem with the client channel configuration, or an MQ logging configuration issue?
IBM describes these "activity logs" as recovery logs in the Knowledge Center page "Making sure that messages are not lost (logging)":
IBM MQ records all significant changes to the persistent data controlled by the queue manager in a recovery log.
This includes creating and deleting objects, persistent message updates, transaction states, changes to object attributes, and channel activities. The log contains the information you need to recover all updates to message queues by:
Keeping records of queue manager changes
Keeping records of queue updates for use by the restart process
Enabling you to restore data after a hardware or software failure
Please note that non-persistent messages are not written to the recovery log.
Based on your question, it is likely that the messages you are sending to the external system are persistent and the messages you are receiving from it are non-persistent; this would explain why the latter do not appear in the recovery log files.
Persistence is determined at the time the message is first PUT.
IBM has a good Technote "Message persistence FAQs" about this subject.
Q3. What is the best way to be certain that messages are persistent?
A3. Set MQMD message persistence to persistent (MQPER_PERSISTENT), or nonpersistent (MQPER_NOT_PERSISTENT) and your message will always retain that value.
Note: MQPER_PERSISTENCE_AS_Q_DEF is the default setting for the persistence value in the MQMD. See the persistence values listed below.
...
Additional information
MQPER_PERSISTENCE_AS_Q_DEF can lead to unexpected results. If there is more than one definition in the queue-name resolution path, the default persistence attribute is taken from the first queue definition in the path at the time of the MQPUT or MQPUT1 call. This queue could be any of the following:
alias queue
local queue
local definition of a remote queue
queue-manager alias
transmission queue
cluster queue
The external system will need to make sure the messages they send you are set as persistent messages if you want them to be logged.
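If the external system's sender is a JMS application, it can force persistence on the producer rather than relying on MQPER_PERSISTENCE_AS_Q_DEF and the queue's DEFPSIST default. A minimal sketch, assuming a ConnectionFactory has already been obtained (for example from JNDI) and a placeholder queue name:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class PersistentJmsSend {
        // The ConnectionFactory is assumed to be looked up from JNDI or built elsewhere.
        public static void send(ConnectionFactory cf, String queueName, String body) throws Exception {
            // JMS 2.0: Connection is AutoCloseable, so try-with-resources works here.
            try (Connection conn = cf.createConnection()) {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue(queueName);
                MessageProducer producer = session.createProducer(queue);

                // Explicitly request persistent delivery so the message is written
                // to the queue manager's recovery log, regardless of the queue's
                // DEFPSIST attribute.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);

                TextMessage msg = session.createTextMessage(body);
                producer.send(msg);
            }
        }
    }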
I do not have much experience with Oracle Service Bus; I am trying to design a logging solution with BigData.
As I read it, the default log and report activities in OSB will put the data into the domain's server log file or into the database that was set up with the server domain. If I want to put all the logs into a separate BigData database, I will need one of these approaches:
Java callout, use JMS or some other technology to send data to the bigdata server.
Web service callout, create a separate web service to handle the logging.
Create custom report provider to replace the default one in OSB Reporting.
Something else
Please give me some ideas about which method I should use, and please provide your reasons if you can. Thank you so much.
Isn't the logging framework in WebLogic based on Log4j? That means you can use a JMSAppender (probably prudent to wrap it in an async Log4j appender if you can) and handle it however you want.
Or, if you're talking about the OSB Reporting framework, there's a few options:
Configure the default JMS reporting provider (which uses the underlying SOAINFRA database, hopefully set up to be something better than the default Derby instance), then write an MDB that pulls reports off the queue and inserts them into SAS BigData (a rough sketch of such an MDB follows after this answer).
Turn the JMS provider off and use a custom provider, which can do anything you want. If you want, you can still do a two-step process, where the reporting provider itself puts reports on a JMS queue so it returns quickly, and a different MDB pulls messages off and persists them at its own pace.
I do not recommend a web service or database callout without an async step in the middle, because you need logging and reporting to be very quick and use as little resources for as short a period as possible.
You don't want logging to hog threads while you're experiencing load. I have seen entire buses brought down because of one hiccup, because the logging database suffered a performance blip, which caused a bunch of open threads trying to log to it, which caused thread starvation or timeouts, which caused more error logging...
If you have a buffer like a JMS queue, then you can handle peaks by planning ahead. You can say "actually I want a JMS queue of 10,000 messages, and if that overflows for whatever reason, I want to (push the overflow to a separate queue on this other box) or (filter out all the non-essential messages) or (throw new messages away) or (action of your choice). Oh yeah, and if the logging database fails then I will try 3 times to commit and, if that fails, move the message to this other queue". Or whatever you want.
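To make the "MDB pulls reports off the queue" option a bit more concrete, here is a rough sketch. The class names and the BigDataClient helper are made up for illustration, and the actual queue binding and BigData API will depend on your environment:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Hypothetical MDB that drains the reporting/log queue asynchronously, so the
    // OSB pipelines never wait on the BigData store. The queue binding itself is
    // normally supplied in the deployment descriptor (e.g. weblogic-ejb-jar.xml)
    // or via a destinationLookup property on newer containers.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
    })
    public class ReportSinkMdb implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String payload = ((TextMessage) message).getText();
                    // Replace with your real BigData client call (HBase, Hive, HDFS, ...).
                    BigDataClient.insert(payload);
                }
            } catch (Exception e) {
                // Rethrowing forces redelivery; after N attempts the server can move
                // the message to an error/overflow destination, as described above.
                throw new RuntimeException("Failed to persist report", e);
            }
        }
    }

    // Placeholder for whatever client library reaches your BigData store.
    class BigDataClient {
        static void insert(String payload) { /* not implemented here */ }
    }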
There are multiple ways to achieve this. You could use the report activity to push to JMS or use the log activity.
You can also write a small routine such as this (either on OSB or outside it), that can read anything that you are logging (such as via the log activity but also additional metadata that is logged when you turn on monitoring of OSB components) and do with it whatever is needed (such as pushing it to a database or BigData store).
The key is to avoid writing an explicit service call in each pipeline/flow; the approaches above use the standard OSB/ODL* loggers.
*Oracle Diagnostic Logging
I am using circular logging. Because of human intervention, one of the queue files is corrupted.
Since circular logging does not have the ability to recover corrupted queue files, what steps will the queue manager take next?
Will the queue manager create an empty queue file for that queue and start enrolling messages in it? Or will it just show the pending messages in the queue but not allow applications to process them?
As you correctly note, MQ cannot recover from a damaged queue file when it is configured for circular logging.
Will the queue manager create an empty queue file for that queue and start enrolling messages in it? Or will it just show the pending messages in the queue but not allow applications to process them?
None of the above. The queue manager will return an error to any process attempting to access that queue.
When a queue file is damaged, it may or may not have had messages in it. There is no automatic recovery possible that would correctly reconcile the state of any messages that may have been enqueued, therefore no further processing is done on that queue and any access returns an error. Human intervention is required in that case and the fix is to delete and redefine the queue using runmqsc.
If additional queue recovery is required to make sure messages are not lost in such cases, linear logging is mandatory.
The queue manager is not going to create a new queue file automatically. If you truly have a corrupted queue then you may have to delete and recreate it. It would be helpful if you could provide more information about the error you see indicating the queue is corrupt. Also, what version of MQ are you using?
We currently use MQ Explorer to manage a WebSphere MQ V7 queue manager on z/OS.
A few days ago I deleted a queue by mistake.
Later I wanted to go back in history and look at some logs to see exactly when it happened.
My question is: where does MQ Explorer log all the activities that take place via MQ Explorer?
Thank you
MQ Explorer does not have such a log. If you want an audit trail of things done to your queue manager you should enable command events on the queue manager, that way you'll have an audit trail regardless of the tool used to do the deed.
Command events are enabled using ALTER QMGR CMDEV(ENABLED) or if you prefer not to clutter the audit trail with Display commands, use ALTER QMGR CMDEV(NODISPLAY).
You may also want to consider configuration events, which provide a snapshot of an object before and after any change. For example, in your case where a queue was deleted, the configuration event would contain a snapshot of what the queue looked like before it was deleted, allowing that information to be used to reinstate the queue as it was.
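Once command events are enabled, each command is recorded as a PCF message on the SYSTEM.ADMIN.COMMAND.EVENT queue. As a rough illustration (the queue manager name is a placeholder, and the exact parameters present depend on the command that was issued), the events can be read and dumped with the IBM MQ classes for Java:

    import java.util.Enumeration;
    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFParameter;

    public class CommandEventDump {
        public static void main(String[] args) throws Exception {
            MQQueueManager qmgr = new MQQueueManager("QM1"); // placeholder name
            MQQueue events = qmgr.accessQueue("SYSTEM.ADMIN.COMMAND.EVENT",
                    CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_CONVERT | CMQC.MQGMO_FAIL_IF_QUIESCING;
            gmo.waitInterval = 5000; // wait up to 5 seconds for an event

            MQMessage raw = new MQMessage();
            events.get(raw, gmo);

            // Command event messages are PCF; dump every parameter that was recorded.
            PCFMessage event = new PCFMessage(raw);
            Enumeration<?> params = event.getParameters();
            while (params.hasMoreElements()) {
                PCFParameter p = (PCFParameter) params.nextElement();
                System.out.println(p.getParameterName() + " = " + p.getValue());
            }

            events.close();
            qmgr.disconnect();
        }
    }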
Read more:
Command Events
Command Event reference
Configuration Events
Create object Event reference
Change object Event reference
Delete object Event reference