Error when using Kafka console Producer from Host outside of Cluster - hadoop

I have a Hortonworks Hadoop cluster and everything seems to work fine. I can use the Kafka producer and consumer from all of the hosts that reside inside the cluster.
But when I try to use the Kafka console producer from another host, I get the following error message:
ERROR Error when sending message to topic test with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1532 ms has passed since batch creation plus linger time
I can telnet to the host and port.
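For completeness, the producer is being invoked with the stock console script, along these lines (the broker host and port are placeholders for one of the cluster's brokers):

bin/kafka-console-producer.sh --broker-list broker-host:6667 --topic test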
How can I resolve this issue?

Related

Spring Boot RabbitMQ access refused for user

I've got two Spring Boot based applications: one is a producer which obtains data from external services and sends it to Rabbit; the second is a consumer which gets data from Rabbit and processes it.
I'm using my own virtual host named "events". I have two queues:
xevents (bound to the amq.topic exchange with routing key 'xevent')
yevents (bound to the amq.topic exchange with routing key 'yevent')
The user I created (xyuser) has permissions to both of them, defined as below:
virtual-host: 'events', configure-regexp: '', write-regexp: '^(amq\.topic|xevents|yevents).*', read-regexp: '^(amq\.topic|xevents|yevents).*'
Topic permissions are set as below:
virtual-host: 'events', exchange: 'amq.topic', write-regexp: '^(xevent|yevent).*', read-regexp: '^(xevent|yevent).*'
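For reference, these grants correspond to rabbitmqctl commands along these lines (a sketch assuming the vhost, user, and patterns above):

rabbitmqctl set_permissions -p events xyuser '' '^(amq\.topic|xevents|yevents).*' '^(amq\.topic|xevents|yevents).*'
rabbitmqctl set_topic_permissions -p events xyuser amq.topic '^(xevent|yevent).*' '^(xevent|yevent).*'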
Overall, both applications work fine, but with the current configuration I'm getting a few logs with the same error during startup of the CONSUMER:
ERROR o.s.a.r.c.CachingConnectionFactory - Shutdown Signal: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to exchange 'amq.topic' in vhost 'events' refused for user 'xyuser', class-id=40, method-id=10)
After that I'm getting the regular Spring Boot log:
Started ConsumerApplication in 20.991 seconds (JVM running for 21.932)
And everything seems to work okay.
I've tried to set all the permissions to
'.*' '.*' '.*'
and with this configuration I'm not getting any logs with ACCESS_REFUSED.
If I set write permissions to only the exchange: ^(amq\.topic).* I'm getting ACCESS_REFUSED to the QUEUE on the consumer application.
If I set write permissions to only the queues: ^(xevents|yevents).* I don't have errors on the consumer, but I do have errors on the producer with ACCESS_REFUSED to the EXCHANGE amq.topic.
I don't know why it's logging ACCESS_REFUSED with the first configuration, and why it does so only on startup; after that everything works fine, but I would like to avoid it. I thought it could be caused by the RabbitListener trying to get messages before all configuration beans are initialized, but if that were the case the errors would appear even with the '.*' permissions.
I would like xyuser to be able to:
Send messages on vhost events on amq.topic.xevent.* and amq.topic.yevent.*
Get messages on vhost events from queue xevents and yevents
Thanks in advance for help!

MQRC_UNKNOWN_ALIAS_BASE_Q when connecting with IBM MQ cluster using CCDT and Spring Boot JMSTemplate

I have a Spring Boot app using JMSListener + IBMConnectionFactory + CCDT for connecting to an IBM MQ cluster.
I set the following connection properties:
- URL pointing to a generated CCDT file
- username (password not required, since it is a test environment)
- queue manager name is NOT defined, since it's the cluster's task to decide; a few Google results, including several Stack Overflow ones, indicate that in my case the qmgr must be set to an empty string.
When my Spring Boot JMSListener tries to connect to the queue, the following MQRC_UNKNOWN_ALIAS_BASE_Q error occurs:
2019-01-29 11:05:00.329 WARN [thread:DefaultMessageListenerContainer-44][class:org.springframework.jms.listener.DefaultMessageListenerContainer:892] - Setup of JMS message listener invoker failed for destination 'MY.Q.ALIAS' - trying to recover. Cause: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2082' ('MQRC_UNKNOWN_ALIAS_BASE_Q').
com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:513)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
In the MQ error log I see the following:
01/29/2019 03:08:05 PM - Process(27185.478) User(mqm) Program(amqrmppa)
Host(myhost) Installation(Installation1)
VRMF(9.0.0.5) QMgr(MyQMGR)
AMQ9999: Channel 'MyCHL' to host 'MyIP' ended abnormally.
EXPLANATION:
The channel program running under process ID 27185 for channel 'MyCHL'
ended abnormally. The host name is 'MyIP'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
01/29/2019 03:15:14 PM - Process(27185.498) User(mqm) Program(amqrmppa)
Host(myhost) Installation(Installation1)
VRMF(9.0.0.5) QMgr(MyQMGR)
AMQ9209: Connection to host 'MyIP' for channel 'MyCHL' closed.
EXPLANATION:
An error occurred receiving data from 'MyIP' over TCP/IP. The connection
to the remote host has unexpectedly terminated.
The channel name is 'MyCHL'; in some cases it cannot be determined and so
is shown as '????'.
ACTION:
Tell the systems administrator.
Since the MQ error log contains QMgr(MyQMGR), a value I did not set in the connection properties, I assume the routing is fine: the MQ cluster figured out a qmgr to use.
The alias exists and points to an existing queue. Both the target queue and the alias are added to the cluster via the CLUSTER(clustname) attribute.
What can be wrong?
Short Answer
MQ Clustering is not used for a consumer application to find a queue to GET messages from.
MQ Clustering is used when a producer application PUTs messages to direct them to a destination.
Further reading
Clustering is used when messages are being sent, to provide load balancing across multiple instances of a clustered queue. In some cases people use this for hot/cold failover by having two instances of a queue and keeping only one PUT(ENABLED).
If an application is a producer putting messages to a clustered queue, it only needs to be connected to a queue manager in the cluster and have permission to put to that clustered queue. MQ, based on a number of different factors, will handle where to send the message.
Prior to v7.1 there were only two ways to provide access to remote clustered queues:
1. Using a QALIAS:
- Define a local QALIAS which has a TARGET set to the clustered queue name (note this QALIAS does not itself need to be clustered).
- Grant permission to put to the local QALIAS.
2. Provide permission to PUT to the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
The first option allows for granting granular access to an application for specific clustered queues in the cluster. The second option allows for the application to put to any clustered queue in the cluster or any queue on any clustered queue manager in the cluster.
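A sketch of both options in MQSC and setmqaut terms, with example queue, queue manager, and principal names:

DEFINE QALIAS('APP.ALIAS') TARGET('CLUSTERED.QUEUE')
setmqaut -m MyQMGR -t queue -n APP.ALIAS -p appuser +put

versus, for the second option:

setmqaut -m MyQMGR -t queue -n SYSTEM.CLUSTER.TRANSMIT.QUEUE -p appuser +put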
At v7.1 IBM added a new optional behavior, provided via the setting ClusterQueueAccessControl=RQMName in the Security stanza of qm.ini. If this is enabled (it is not the default), you can grant an application permission to PUT to remote clustered queues directly, without the need for a local QALIAS.
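In qm.ini that stanza would look like this:

Security:
   ClusterQueueAccessControl=RQMName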
What clustering is not for is consuming applications such as your example of a JMSListener.
An application that will consume from any QLOCAL (clustered or not) must be connected to the queue manager where the QLOCAL is defined.
If you have a situation where there are multiple instances of a clustered QLOCAL that are PUT(ENABLED), you need to ensure you have consumers connected directly to each queue manager that hosts an instance.
Based on your comment you have a CCDT with an entry such as:
CHANNEL('MyCHL') CHLTYPE(CLNTCONN) QMNAME('MyQMGR') CONNAME('node1url(port1),node2url(port2)')
If there are two different queue managers with different queue manager names listening on node1url(port1) and node2url(port2), then you have different ways to accomplish this from the app side.
When you specify the QMNAME to connect to, the app will expect the name to match the queue manager it connects to, unless one of the following applies:
- If you specify *MyQMGR, it will find the channel or channels with QMNAME('MyQMGR'), pick one, connect, and will not enforce that the remote queue manager name must match.
- If in your CCDT you have QMNAME('') (i.e. it is set to blank), then in your app you can specify an empty queue manager name or a single space, and it will find this entry in the CCDT and will not enforce that the remote queue manager name must match (see the sketch after this list).
- If in your app you specify the queue manager name as *, MQ will use any channel in the CCDT and will not enforce that the remote queue manager name must match.
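For example, for the second case the CCDT entry with a blank queue manager name would be generated from a definition like this (channel name and CONNAME are placeholders):

DEFINE CHANNEL('MyCHL') CHLTYPE(CLNTCONN) QMNAME('') CONNAME('node1url(port1)')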
One limitation of a CCDT is that the channel name must be unique within it; even if the QMNAME is different, you can't have a second entry with the same channel name.
When you connect you are hitting the entry with two CONNAMEs and getting connected to the first IP(port). You would only get to the second IP(port) if the first is not available at connect time (MQ will try the second), or if you are connected with RECONNECT enabled and the first goes down, in which case MQ will try to reconnect to the first and then the second.
If you want both clustered queue instances to be PUT(ENABLED) and receive traffic, then you need to be able to connect specifically to each of the two queue managers to read those queues.
I would suggest you add a new channel on each queue manager, with a queue-manager-specific name that is also different from the existing channel name, something like this:
CHANNEL('MyCHL1') CHLTYPE(CLNTCONN) QMNAME('MyQMGR1') CONNAME('node1url(port1)')
CHANNEL('MyCHL2') CHLTYPE(CLNTCONN) QMNAME('MyQMGR2') CONNAME('node2url(port2)')
This would be in addition to the existing entry.
For your putting components you can continue to use the channel that can connect to either queue manager.
For your getting components you can configure at least two of them, one connecting to each queue manager using the new queue-manager-specific CCDT entries; this way both queues are being consumed.

IBM MQ failed error 2058

I'm new to MQSeries and tried to start with the "Hello World" sample:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.5.0/com.ibm.mq.dev.doc/q030200_.htm
I execute it on Linux as follows:
helloworld pQueueName QueueName SYSTEM.DEF.SVRCONN/TCP/hostname\(1414\)
I get this error message: ImqQueueManager::connect failed with reason code 2058.
The API documentation says this error code is due to a wrong queue manager name:
http://www-01.ibm.com/support/docview.wss?uid=swg21166938
So: why do I get such a message, and what do they mean by "wrong queue manager name"?
No: a queue manager and its queues must be created explicitly before you can use them. The setName method only names the queue manager to connect to; it does not create one.
Watch this video from T.Rob on how to install and use MQ: https://www.youtube.com/watch?v=wSCHLBftjDw&pbjreload=10. The video uses Linux, which is fine here. You can skip the setup part (up to 2 minutes and 20 seconds or so) and start following from the crtmqm command.
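For reference, the minimal sequence to create a queue manager, start it, and define a local queue looks like this (names are examples):

crtmqm QM1
strmqm QM1
echo "DEFINE QLOCAL('QUEUE1')" | runmqsc QM1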

Kafka Spout fails to acknowledge message in storm while setting multiple workers

I have a Storm topology that subscribes to events from a Kafka queue. The topology works fine while the number of workers (config.setNumWorkers) is set to 1. When I increase the number of workers to 2 or more, the KafkaSpout fails to acknowledge messages, as seen in the Storm UI. I am not able to figure out the exact cause of the problem.
I have a 3 node cluster running one nimbus and 2 supervisors.
My problem got resolved. The reason Kafka was unable to acknowledge the spout messages was a hostname conflict: I had mistakenly put the same host name in the /etc/hostname and /etc/hosts files of both workers. When I checked the worker I saw the exception "unable to communicate with host", which is how I figured out the problem was the host name. I updated the host names in the /etc/hosts mapping and the /etc/hostname file, and the messages started to be acknowledged. Thank you.
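For illustration, each worker needs its own host name and a consistent mapping on every node, along these lines (addresses and names are hypothetical):

# /etc/hostname on worker1
worker1

# /etc/hosts on both workers
192.168.1.11 worker1
192.168.1.12 worker2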

Issue in putting a message into IBM Websphere MQ from "CMD"

We have IBM WebSphere MQ [v5.2] on an AIX platform, and my machine is Windows 7 Pro with MQ client v7.5.
I tried to connect to the server remotely but I received an authorization error message. This is because my local user account does not have rights to connect to the queue manager remotely.
So I created a new user account on my system with the same name as on the server, which has rights to put/get messages, and now I am able to connect in client mode [WebSphere MQ v7.5].
I don't know whether it is actually connecting to the server, but since it is not giving me an authorization error message I took it as a success. The issue is: when I try to put a message into the queue from cmd using amqsputc Queue_name mqm, the queue opens and takes a message. But when I try to get the message using amqsgetc Queue_name mqm, it says "NO MORE MESSAGES".
What could be the issue?
The fact that amqsputc doesn't give an error, and the fact that amqsgetc runs through to success, implies the functionality is working. However, it finds nothing on the queue, which makes me very suspicious that you have an application listening on that queue, which consumes the message as it arrives and hence before your amqsgetc runs. Check DISPLAY QSTATUS in runmqsc for IPPROCS on the queue (see the sketch below): is it 0 when amqsgetc isn't running?
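That check is a one-liner on the queue manager host (queue manager and queue names are placeholders):

echo "DISPLAY QSTATUS('Queue_name') IPPROCS" | runmqsc MyQMGR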
