I have an ActiveMQ Artemis cluster (version 2.17.0) with shared-store HA (master-slave).
I noticed that all of my clusters (active servers only) sit between 10% and 20% CPU while completely idle (no one is using them), except one, which uses around 1% (totally normal). So I started investigating...
Long story short: only one cluster has completely normal CPU usage. The only difference I've managed to find is that if I connect to that normal cluster's master node and attempt telnet slave 61616, it connects. If I do the same in any other cluster (one with high CPU usage), the connection is refused.
In order to better understand what is happening, I enabled DEBUG logging in instance/etc/logging.properties. Here is what the master node is spamming:
2021-05-07 13:54:31,857 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Backup is not active, trying original connection configuration now.
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Trying reconnection attempt 0/1
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@6cf71172, connectorConfig=TransportConfiguration(name=slave-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?trustStorePassword=****&port=61616&keyStorePassword=****&sslEnabled=true&host=slave-com&trustStorePath=/path/to/ssl/truststore-jks&keyStorePath=/path/to/ssl/keystore-jks
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector] Connector NettyConnector [host=slave.com, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=true, useNio=true] using native epoll
2021-05-07 13:54:32,357 DEBUG [org.apache.activemq.artemis.core.client] AMQ211002: Started EPOLL Netty Connector version 4.1.51.Final to slave.com:61616
2021-05-07 13:54:32,358 DEBUG [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector] Remote destination: slave.com/123.123.123.123:61616
2021-05-07 13:54:32,358 DEBUG [org.apache.activemq.artemis.spi.core.remoting.ssl.SSLContextFactory] Creating SSL context with configuration
trustStorePassword=****
port=61616
keyStorePassword=****
sslEnabled=true
host=slave.com
trustStorePath=/path/to/ssl/truststore.jks
keyStorePath=/path/to/ssl/keystore.jks
2021-05-07 13:54:32,448 DEBUG [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector] Added ActiveMQClientChannelHandler to Channel with id = 77c078c2
2021-05-07 13:54:32,448 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] Connector towards NettyConnector [host=slave.com, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=true, useNio=true] failed
This is what the slave is spamming:
2021-05-07 14:06:53,177 DEBUG [org.apache.activemq.artemis.core.server.impl.FileLockNodeManager] trying to lock position: 1
2021-05-07 14:06:53,178 DEBUG [org.apache.activemq.artemis.core.server.impl.FileLockNodeManager] failed to lock position: 1
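For reference, the DEBUG output above was enabled by raising log levels in instance/etc/logging.properties. A minimal sketch of the relevant lines (key names follow the stock Artemis logging.properties; depending on your file, you may also need the client package listed in the `loggers=` entry):

```properties
# instance/etc/logging.properties (fragment) - raise these to DEBUG
logger.org.apache.activemq.artemis.core.server.level=DEBUG
logger.org.apache.activemq.artemis.core.client.level=DEBUG
```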
If I attempt to telnet from the master node to the slave node (and the same happens from slave to slave):
[root@master]# telnet slave.com 61616
Trying 123.123.123.123...
telnet: connect to address 123.123.123.123: Connection refused
However, if I attempt the same telnet in the only working cluster, I can successfully connect from master to slave...
Here is what I suspect:
The master acquires the lock on instance/data/journal/server.lock.
The master keeps trying to connect to the slave server.
The slave is unable to start because it cannot acquire the same server.lock on the shared storage.
The master burns CPU because it keeps retrying the connection to the slave, which is not running.
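A rough way to test this suspicion is to try taking the same lock from the slave host. This is only a sketch and only an approximation: Artemis acquires the lock through Java NIO FileLock, and lock semantics over NFS vary, so flock(1) may not conflict with it on every setup.

```shell
# Try to grab server.lock non-blocking from the slave host (path as above).
# If the master really holds the lock, this should fail immediately.
flock -n /path/to/artemis/instance/data/journal/server.lock -c 'echo lock acquired' \
  || echo 'lock is held elsewhere'
```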
What am I doing wrong?
EDIT: This is what my NFS mounts look like (taken from the mount command):
some_server:/some_dir on /path/to/artemis/instance/data type nfs4 (rw,relatime,sync,vers=4.1,rsize=65536,wsize=65536,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,soft,noac,proto=tcp,timeo=50,retrans=1,sec=sys,clientaddr=123.123.123.123,local_lock=none,addr=123.123.123.123)
It turns out the issue was in the broker.xml configuration. In static-connectors I had somehow decided to list only the "non-current" server (e.g. I have srv0 and srv1; in srv0's broker.xml I added only srv1's connector, and vice versa).
What it used to be (on 1st master node):
<cluster-connections>
<cluster-connection name="abc">
<connector-ref>srv0-connector</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>srv1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
How it is now (on 1st master node):
<cluster-connections>
<cluster-connection name="abc">
<connector-ref>srv0-connector</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>srv0-connector</connector-ref>
<connector-ref>srv1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
After listing all of the cluster's nodes, CPU usage normalized and is now only ~1% on the active node. In the end, the issue was not related to AMQ Artemis connection spamming or file locks at all.
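For completeness: the connector-refs in static-connectors must match <connector> entries defined elsewhere in broker.xml. A minimal sketch (connector names match the ones above; the hostnames are placeholders):

```xml
<connectors>
   <connector name="srv0-connector">tcp://srv0.example.com:61616</connector>
   <connector name="srv1-connector">tcp://srv1.example.com:61616</connector>
</connectors>
```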
We are using MQ version 9.0.0.1 on a Linux machine in an active/passive configuration.
We faced an issue on a production server: the queue manager ended unexpectedly around 10:30 AM IST. We restarted the queue manager manually, and after the restart everything was back to normal and the issue was resolved.
From the queue manager logs we observed the error stacks below. A request to all:
please provide your analysis, based on the queue manager logs below, of why the queue manager ended unexpectedly.
-------------------------------------------------------------------------------
03/07/2019 07:21:10 AM - Process(5082.15855) User(tsg) Program(amqzlaa0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ7234: 20000 messages from queue 'NEFT_SMS_INQUIRY' loaded on queue manager
'QMGR_NEFT'.
EXPLANATION:
20000 messages from queue NEFT_SMS_INQUIRY have been loaded on queue manager
QMGR_NEFT.
ACTION:
None.
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5003.1) User(tsg) Program(amqzxma0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5008: An essential IBM MQ process 5016 (zllCRIT) cannot be found and is
assumed to be terminated.
EXPLANATION:
1) A user has inadvertently terminated the process. 2) The system is low on
resources. Some operating systems terminate processes to free resources. If
your system is low on resources, it is possible it has terminated the process
so that a new process can be created.
ACTION:
IBM MQ will stop all MQ processes. Inform your systems administrator. When
the problem is rectified IBM MQ can be restarted.
----- amqzxmb0.c : 10095 ------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5080.1) User(tsg) Program(amqpcsea)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ8506: Command server MQGET failed with reason code 2009.
EXPLANATION:
An MQGET request by the command server, for the IBM MQ queue
SYSTEM.ADMIN.COMMAND.QUEUE , failed with reason code 2009.
ACTION:
None.
----- amqphrea.c : 86 ---------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5069.1) User(tsg) Program(amqzmgr0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5025: The command server has ended. ProcessId(5080).
EXPLANATION:
The command server process has ended.
ACTION:
None.
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5051.1) User(tsg) Program(amqrrmfa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9542: Queue manager is ending.
EXPLANATION:
The program will end because the queue manager is quiescing.
ACTION:
None.
----- amqrrmfa.c : 3011 -------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5051.1) User(tsg) Program(amqrrmfa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9411: Repository manager ended normally.
EXPLANATION:
The repository manager ended normally.
ACTION:
None.
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5047.9) User(tsg) Program(amqzmuf0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5976: 'IBM MQ Distributed Pub/Sub Command Task' has ended.
EXPLANATION:
'IBM MQ Distributed Pub/Sub Command Task' has ended.
ACTION:
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5047.10) User(tsg) Program(amqzmuf0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5976: 'IBM MQ Distributed Pub/Sub Publish Task' has ended.
EXPLANATION:
'IBM MQ Distributed Pub/Sub Publish Task' has ended.
ACTION:
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5047.8) User(tsg) Program(amqzmuf0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5976: 'IBM MQ Distributed Pub/Sub Fan Out Task' has ended.
EXPLANATION:
'IBM MQ Distributed Pub/Sub Fan Out Task' has ended.
ACTION:
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5047.7) User(tsg) Program(amqzmuf0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5976: 'IBM MQ Distributed Pub/Sub Controller' has ended.
EXPLANATION:
'IBM MQ Distributed Pub/Sub Controller' has ended.
ACTION:
-------------------------------------------------------------------------------
03/07/2019 10:30:15 AM - Process(31712.199) User(tsg) Program(amqrmppa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9508: Program cannot connect to the queue manager.
EXPLANATION:
The connection attempt to queue manager 'QMGR_NEFT' failed with reason code
2059.
ACTION:
Ensure that the queue manager is available and operational.
----- cmqxrmsa.c : 6146 -------------------------------------------------------
03/07/2019 10:30:15 AM - Process(31712.199) User(tsg) Program(amqrmppa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9772: MQCTL failed with MQRC=2009.
EXPLANATION:
The indicated IBM MQ API call failed for the specified reason code.
ACTION:
Refer to the Application Programming Reference manual for information about
Reason Code 2009.
----- cmqxrstf.c : 2663 -------------------------------------------------------
03/07/2019 10:30:15 AM - Process(31712.199) User(tsg) Program(amqrmppa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9999: Channel 'GPSSVRCONN' to host 'XXXXXX' ended abnormally.
EXPLANATION:
The channel program running under process ID 31712 for channel 'GPSSVRCONN'
ended abnormally. The host name is 'XXXXXX'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5081.5) User(tsg) Program(runmqlsr)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9508: Program cannot connect to the queue manager.
EXPLANATION:
The connection attempt to queue manager 'QMGR_NEFT' failed with reason code
2059.
ACTION:
Ensure that the queue manager is available and operational.
----- cmqxrmsa.c : 471 --------------------------------------------------------
03/07/2019 10:30:15 AM - Process(5081.5) User(tsg) Program(runmqlsr)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9999: Channel 'GPSSVRCONN' to host 'XXXXXX' ended abnormally.
EXPLANATION:
The channel program running under process ID 5081 for channel 'GPSSVRCONN'
ended abnormally. The host name is 'XXXXXX'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
03/07/2019 10:30:17 AM - Process(5078.1) User(tsg) Program(runmqchi)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9510: Messages cannot be retrieved from a queue.
EXPLANATION:
The attempt to get messages from queue 'SYSTEM.CHANNEL.INITQ' on queue manager
'QMGR_NEFT' failed with reason code 2009.
ACTION:
If the reason code indicates a conversion problem, for example
MQRC_SOURCE_CCSID_ERROR, remove the message(s) from the queue. Otherwise,
ensure that the required queue is available and operational.
----- amqrimna.c : 1085 -------------------------------------------------------
03/07/2019 10:30:18 AM - Process(5069.1) User(tsg) Program(amqzmgr0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5023: The channel initiator has ended. ProcessId(5078).
EXPLANATION:
The channel initiator process has ended.
ACTION:
None.
-------------------------------------------------------------------------------
03/07/2019 10:30:24 AM - Process(31712.192) User(tsg) Program(amqrmppa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9540: Commit failed.
EXPLANATION:
The program ended because return code 2009 was received when an attempt was
made to commit change to the resource managers. The commit ID was
'AMQRMRSASFMS.TO.IPAYNEFT XXXXXX E怜͇'.
ACTION:
Tell the systems administrator.
----- amqrmrca.c : 2977 -------------------------------------------------------
03/07/2019 10:30:24 AM - Process(31712.192) User(tsg) Program(amqrmppa)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ9999: Channel 'SFMS.TO.IPAYNEFT' to host 'XXXXXX' ended abnormally.
EXPLANATION:
The channel program running under process ID 31712 for channel
'SFMS.TO.IPAYNEFT' ended abnormally. The host name is 'XXXXXX'; in some
cases the host name cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
03/07/2019 10:30:24 AM - Process(5071.1) User(tsg) Program(amqfqpub)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5807: Queued Publish/Subscribe Daemon for queue manager QMGR_NEFT ended.
EXPLANATION:
The Queued Publish/Subscribe Daemon on queue manager QMGR_NEFT has ended.
ACTION:
None.
-------------------------------------------------------------------------------
03/07/2019 10:30:24 AM - Process(5003.1) User(tsg) Program(amqzxma0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5050: An essential IBM MQ process 5071 (amqfqpub) cannot be found and is
assumed to be terminated.
EXPLANATION:
1) A user has inadvertently terminated the process. 2) The system is low on
resources. Some operating systems terminate processes to free resources. If
your system is low on resources, it is possible it has terminated the process
so that a new process can be created. 3) MQ has encountered an unexpected
error. Check for possible errors reported in the MQ error logs and for any
FFSTs that have been generated.
ACTION:
IBM MQ will attempt to restart the terminated process.
----- amqzxmb0.c : 10095 ------------------------------------------------------
03/07/2019 10:30:24 AM - Process(5003.1) User(tsg) Program(amqzxma0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ8004: IBM MQ queue manager 'QMGR_NEFT' ended.
EXPLANATION:
IBM MQ queue manager 'QMGR_NEFT' ended.
ACTION:
None.
-------------------------------------------------------------------------------
03/07/2019 10:30:25 AM - Process(5069.1) User(tsg) Program(amqzmgr0)
Host(xxxxxxx) Installation(Installation1)
VRMF(9.0.0.1) QMgr(QMGR_NEFT)
AMQ5027: The listener 'LISTENER.TCP' has ended. ProcessId(5081).
EXPLANATION:
The listener process has ended.
ACTION:
None.
-------------------------------------------------------------------------------
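The AMQ5008 for process 5016 (zllCRIT) at 10:30:15 is the first failure in the stack above; when an essential process dies, MQ normally cuts an FFST (*.FDC) file recording why. A sketch of where to look on Linux (the default MQ error directory is assumed):

```shell
# FFST (FDC) files live under /var/mqm/errors on Linux; list newest last
ls -ltr /var/mqm/errors/*.FDC

# Find any FDC that mentions the critical process that died
grep -l 'zllCRIT' /var/mqm/errors/*.FDC
```

If FDC files exist around 10:30, their probe IDs are what IBM support will ask for.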
I have a cluster of 1 master node and 2 slaves, and I'm trying to compile my application with Mesos.
Basically, here is the command that I use:
mesos-execute --name=alc1 --command="ccmake -j myapp" --master=10.11.12.13:5050
Offers are made from the slave, but this compilation task keeps failing.
[root@master-node ~]# mesos-execute --name=alc1 --command="ccmake -j myapp" --master=10.11.12.13:5050
I0511 22:26:11.623016 11560 sched.cpp:222] Version: 0.28.0
I0511 22:26:11.625602 11564 sched.cpp:326] New master detected at master@10.11.12.13:5050
I0511 22:26:11.625952 11564 sched.cpp:336] No credentials provided. Attempting to register without authentication
I0511 22:26:11.627279 11564 sched.cpp:703] Framework registered with 70582e35-5d6e-4915-a919-cae61c904fd9-0139
Framework registered with 70582e35-5d6e-4915-a919-cae61c904fd9-0139
task alc1 submitted to slave 70582e35-5d6e-4915-a919-cae61c904fd9-S2
Received status update TASK_RUNNING for task alc1
Received status update TASK_FAILED for task alc1
I0511 22:26:11.759610 11567 sched.cpp:1903] Asked to stop the driver
I0511 22:26:11.759639 11567 sched.cpp:1143] Stopping framework '70582e35-5d6e-4915-a919-cae61c904fd9-0139'
On the slave node, here are the stderr logs from the sandbox:
I0511 22:26:13.781070 5037 exec.cpp:143] Version: 0.28.0
I0511 22:26:13.785001 5040 exec.cpp:217] Executor registered on slave 70582e35-5d6e-4915-a919-cae61c904fd9-S2
sh: ccmake: command not found
I0511 22:26:13.892653 5042 exec.cpp:390] Executor asked to shutdown
Just to mention that commands like the following work fine and give me the expected results:
[root@master-node ~]# mesos-execute --name=alc1 --command="find / -name a" --master=10.11.12.13:5050
I0511 22:26:03.733172 11550 sched.cpp:222] Version: 0.28.0
I0511 22:26:03.736112 11554 sched.cpp:326] New master detected at master@10.11.12.13:5050
I0511 22:26:03.736383 11554 sched.cpp:336] No credentials provided. Attempting to register without authentication
I0511 22:26:03.737730 11554 sched.cpp:703] Framework registered with 70582e35-5d6e-4915-a919-cae61c904fd9-0138
Framework registered with 70582e35-5d6e-4915-a919-cae61c904fd9-0138
task alc1 submitted to slave 70582e35-5d6e-4915-a919-cae61c904fd9-S2
Received status update TASK_RUNNING for task alc1
Received status update TASK_FINISHED for task alc1
I0511 22:26:04.184813 11553 sched.cpp:1903] Asked to stop the driver
I0511 22:26:04.184844 11553 sched.cpp:1143] Stopping framework '70582e35-5d6e-4915-a919-cae61c904fd9-0138'
I don't really get what is needed to even troubleshoot this issue.
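Since the stderr shows `sh: ccmake: command not found`, the immediate suspect is that the build toolchain is simply not installed on the agent that received the offer. One way to probe that through the same mechanism (a sketch; the master address and task-name convention follow the question):

```shell
# Run a probe task on whichever agent receives the offer:
# prints the path if ccmake exists there, otherwise reports it missing
mesos-execute --name=probe-ccmake \
  --command="command -v ccmake || echo 'ccmake missing on this agent'" \
  --master=10.11.12.13:5050
```

If it reports missing, installing cmake/ccmake on every agent (or shipping the toolchain via the fetcher or a container image) would be the next step.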
Immediately after I commit an offset using the golang client (https://github.com/Shopify/sarama), I run the offset checker:
./kafka-consumer-offset-checker.sh --zookeeper=localhost:2181 --topic=my-replicated-topic --group=ib --broker-info
Group Topic Pid Offset logSize Lag Owner
ib my-replicated-topic 0 12 12 0 none
BROKER INFO
1 -> localhost:9093
However, after several minutes, I run the same checker command.
./kafka-consumer-offset-checker.sh --zookeeper=localhost:2181 --topic=my-replicated-topic --group=ib --broker-info
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/ib/offsets/my-replicated-topic/0.
And when I check ZooKeeper, the node never exists at any point, even while the checker lists the offset correctly.
sarama commit: 23d523386ce0c886e56c9faf1b9c78b07e5b8c90
kafka 0.8.2.1
golang 1.3
kafka server config:
broker.id=1
port=9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs-1
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
It seems to me that the consumer group gets expired. How can I make the consumer group persist?
Sarama does not talk to ZooKeeper; I should use a high-level consumer group library instead.
https://github.com/Shopify/sarama/issues/452
After creating a new queue manager using MQ Explorer, it fails to start properly and gives the following message:
Command: "C:\Program Files (x86)\IBM\WebSphere MQ\bin\crtmqm" -sa test_qm
WebSphere MQ queue manager created.
Directory 'C:\Program Files (x86)\IBM\WebSphere MQ\qmgrs\test_qm'
created.
The queue manager is associated with installation 'WMQ75Install'.
exitvalue = 2059
I couldn't figure out how to solve it from the logs. I tried to start it manually from MQ Explorer and from the command-line shell as well, but without any success; it just does not start.
Here is my AMQERR01.LOG text:
21/01/2015 14:18:46 - Process(7960.3) User(johnsmith) Program(amqzmuc0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ6287: WebSphere MQ V7.5.0.2 (p750-002-131001_DE).
EXPLANATION:
WebSphere MQ system information:
Host Info :- Windows 7 Enterprise x64 Edition, Build 7601: SP1 (MQ
Windows 32-bit)
Installation :- C:\Program Files (x86)\IBM\WebSphere MQ (WMQ75Install)
Version :- 7.5.0.2 (p750-002-131001_DE)
ACTION:
None.
21/01/2015 14:18:46 - Process(7960.3) User(johnsmith) Program(amqzmuc0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5051: The queue manager task 'LOGGER-IO' has started.
EXPLANATION:
The critical utility task manager has started the LOGGER-IO task. This task has
now started 1 times.
ACTION:
None.
21/01/2015 14:18:46 - Process(7960.1) User(johnsmith) Program(amqzmuc0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5041: The queue manager task 'LOGGER-IO' has ended.
EXPLANATION:
The queue manager task LOGGER-IO has ended.
ACTION:
None.
21/01/2015 14:18:49 - Process(7528.3) User(johnsmith) Program(amqzmuc0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5051: The queue manager task 'LOGGER-IO' has started.
EXPLANATION:
The critical utility task manager has started the LOGGER-IO task. This task has
now started 1 times.
ACTION:
None.
21/01/2015 14:18:49 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ7229: 4 log records accessed on queue manager 'test_qm' during the log
replay phase.
EXPLANATION:
4 log records have been accessed so far on queue manager test_qm during the log
replay phase in order to bring the queue manager back to a previously known
state.
ACTION:
None.
21/01/2015 14:18:49 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ7230: Log replay for queue manager 'test_qm' complete.
EXPLANATION:
The log replay phase of the queue manager restart process has been completed
for queue manager test_qm.
ACTION:
None.
21/01/2015 14:18:49 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ7231: 0 log records accessed on queue manager 'test_qm' during the recovery
phase.
EXPLANATION:
0 log records have been accessed so far on queue manager test_qm during the
recovery phase of the transactions manager state.
ACTION:
None.
21/01/2015 14:18:49 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ7232: Transaction manager state recovered for queue manager 'test_qm'.
EXPLANATION:
The state of transactions at the time the queue manager ended has been
recovered for queue manager test_qm.
ACTION:
None.
21/01/2015 14:18:49 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ7233: 0 out of 0 in-flight transactions resolved for queue manager
'test_qm'.
EXPLANATION:
0 transactions out of 0 in-flight at the time queue manager test_qm ended have
been resolved.
ACTION:
None.
21/01/2015 14:18:49 - Process(7528.4) User(johnsmith) Program(amqzmuc0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5051: The queue manager task 'CHECKPOINT' has started.
EXPLANATION:
The critical utility task manager has started the CHECKPOINT task. This task
has now started 1 times.
ACTION:
None.
21/01/2015 14:18:51 - Process(9796.3) User(johnsmith) Program(amqzmur0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5037: The queue manager task 'ERROR-LOG' has started.
EXPLANATION:
The restartable utility task manager has started the ERROR-LOG task. This task
has now started 1 times.
ACTION:
None.
21/01/2015 14:18:51 - Process(9796.4) User(johnsmith) Program(amqzmur0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5037: The queue manager task 'APP-SIGNAL' has started.
EXPLANATION:
The restartable utility task manager has started the APP-SIGNAL task. This task
has now started 1 times.
ACTION:
None.
21/01/2015 14:18:51 - Process(9796.5) User(johnsmith) Program(amqzmur0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5037: The queue manager task 'APP-SIGNAL' has started.
EXPLANATION:
The restartable utility task manager has started the APP-SIGNAL task. This task
has now started 2 times.
ACTION:
None.
21/01/2015 14:18:51 - Process(9796.7) User(johnsmith) Program(amqzmur0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5037: The queue manager task 'APP-SIGNAL' has started.
EXPLANATION:
The restartable utility task manager has started the APP-SIGNAL task. This task
has now started 4 times.
ACTION:
None.
21/01/2015 14:18:51 - Process(9796.6) User(johnsmith) Program(amqzmur0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5037: The queue manager task 'APP-SIGNAL' has started.
EXPLANATION:
The restartable utility task manager has started the APP-SIGNAL task. This task
has now started 3 times.
ACTION:
None.
21/01/2015 14:18:52 - Process(10328.1) User(johnsmith) Program(amqzfuma.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ8077: Entity 'johnsmith@intranet' has insufficient authority to access object
'test_qm'.
EXPLANATION:
The specified entity is not authorized to access the required object. The
following requested permissions are unauthorized: connect/system
ACTION:
Ensure that the correct level of authority has been set for this entity against
the required object, or ensure that the entity is a member of a privileged
group.
----- amqzfubn.c : 515 --------------------------------------------------------
21/01/2015 14:18:52 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5525: The WebSphere MQ Object Authority Manager has failed.
EXPLANATION:
The Object Authority Manager has failed to complete an MQ request.
ACTION:
Check the queue manager error logs for messages explaining the failure and try
to correct the problem accordingly.
----- amqzxma0.c : 3825 -------------------------------------------------------
21/01/2015 14:18:52 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ8003: WebSphere MQ queue manager 'test_qm' started using V7.5.0.2.
EXPLANATION:
WebSphere MQ queue manager 'test_qm' started using V7.5.0.2.
ACTION:
None.
21/01/2015 14:18:52 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5008: An essential WebSphere MQ process 10328 (amqzfuma.exe) cannot be found
and is assumed to be terminated.
EXPLANATION:
1) A user has inadvertently terminated the process. 2) The system is low on
resources. Some operating systems terminate processes to free resources. If
your system is low on resources, it is possible it has terminated the process
so that a new process can be created.
ACTION:
WebSphere MQ will stop all MQ processes. Inform your systems administrator.
When the problem is rectified WebSphere MQ can be restarted.
----- amqzxmb0.c : 9956 -------------------------------------------------------
21/01/2015 14:18:52 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ5050: An essential WebSphere MQ process 9188 (zllPUBSUB) cannot be found and
is assumed to be terminated.
EXPLANATION:
1) A user has inadvertently terminated the process. 2) The system is low on
resources. Some operating systems terminate processes to free resources. If
your system is low on resources, it is possible it has terminated the process
so that a new process can be created. 3) MQ has encountered an unexpected
error. Check for possible errors reported in the MQ error logs and for any
FFSTs that have been generated.
ACTION:
WebSphere MQ will attempt to restart the terminated process.
----- amqzxmb0.c : 9679 -------------------------------------------------------
21/01/2015 14:18:53 - Process(9760.1) User(johnsmith) Program(amqzxma0.exe)
Host(NY0035546) Installation(WMQ75Install)
VRMF(7.5.0.2) QMgr(test_qm)
AMQ8004: WebSphere MQ queue manager 'test_qm' ended.
EXPLANATION:
WebSphere MQ queue manager 'test_qm' ended.
ACTION:
None.
At a guess, is johnsmith a domain ID? Is this a domain workstation (what is 'johnsmith@intranet')? My suspicion is that you have a machine in a domain, but you have not configured MQ to run on a machine in a domain. I think the issue is that at startup MQ is trying to determine what groups the userid 'johnsmith@intranet' is a member of, and fails. It's possible that adding johnsmith to the local mqm group may get you further, although my suspicion is that you need to do the domain configuration. See another answer for details of what to do:
Issue with permission grant to domain users in IBM web sphere queue manager
First, make sure your user ID is part of the 'mqm' group and second, reboot your PC (wonderful things happen when you reboot Windows!).
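If the local-group route applies in your setup, membership can be granted from an elevated Windows prompt; a sketch, using the user ID from the logs above:

```shell
net localgroup mqm johnsmith /add
```

As noted above, if the ID is actually a domain account, local mqm membership alone may not be enough and the domain configuration still needs doing.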