Sensu and Graphite. Configure transmission through AMQP - ruby

I want to use Sensu as a monitoring system and Graphite as the backend for graphs.
I want the data to be delivered from RabbitMQ via the AMQP protocol, which is why I configured Carbon like this:
# vim /etc/carbon/carbon.conf
# Enable AMQP if you want to receive metrics using an amqp broker
ENABLE_AMQP = True
# Verbose means a line will be logged for every metric received
# useful for testing
AMQP_VERBOSE = True
AMQP_HOST = 10.0.3.16
AMQP_PORT = 5672
AMQP_VHOST = /sensu
AMQP_USER = sensu
AMQP_PASSWORD = kubuntu710
AMQP_EXCHANGE = metrics_my
AMQP_METRIC_NAME_IN_BODY = True
As I understand it, Carbon consumes data from RabbitMQ (via AMQP) with some frequency and saves it via Whisper.
On the other side, Sensu puts metrics into RabbitMQ; I configured it like this:
root@sensu_server:/etc/sensu/conf.d# vim graphite_handler_amqp.json
{
  "handlers": {
    "graphite_amqp": {
      "type": "transport",
      "pipe": {
        "type": "topic",
        "name": "metrics_my",
        "durable": "true"
      },
      "mutator": "only_check_output"
    }
  }
}
And of course I attached this handler to a check like this:
root@sensu_server:/etc/sensu/conf.d# cat metrics_cpu.json
{
  "checks": {
    "metrics_cpu": {
      "type": "metric",
      "command": "/opt/sensu/embedded/bin/metrics-cpu-pcnt-usage.rb",
      "interval": 10,
      "subscribers": ["MONGO"],
      "handlers": ["graphite_amqp"]
    }
  }
}
Everything seems fine, but Graphite can't draw the metrics. This is the log on the Graphite side:
13/06/2016 18:57:16 :: New AMQP connection made
And this is from RabbitMQ on the Sensu server side:
=INFO REPORT==== 13-Jun-2016::15:57:16 ===
accepting AMQP connection <0.25298.0> (10.0.3.95:43722 -> 10.0.3.16:5672)
=ERROR REPORT==== 13-Jun-2016::15:57:16 ===
Channel error on connection <0.25298.0> (10.0.3.95:43722 -> 10.0.3.16:5672, vhost: '/sensu', user: 'sensu'), channel 1:
operation exchange.declare caused a channel exception precondition_failed: "inequivalent arg 'durable' for exchange 'metrics_my' in vhost '/sensu': received 'true' but current is 'false'"
Why does RabbitMQ think that "durable" is set to "false" when Sensu should set it to true?
Can anybody share their experience with this kind of setup?
PS: A configuration with just a TCP handler works fine for me.

operation exchange.declare caused a channel exception precondition_failed: "inequivalent arg 'durable' for exchange 'metrics_my' in vhost '/sensu': received 'true' but current is 'false'"
The exchange metrics_my already exists and has the durable property set to false. Some other process is now trying to re-declare that same exchange with a different value for durable (true).
It looks like when the processes start up, they try to configure RabbitMQ using the configuration you have specified - making sure the required exchanges and queues exist.
However, RabbitMQ does not allow changing some properties of exchanges and queues after they have been created, so one of the processes starts up, tries to make sure the exchange exists, and fails because it specifies a different value for the durable property than the one the exchange already has.
My guess is that carbon and sensu have been configured to have a different value of durable for the metrics_my exchange.
Based on the snippets of configuration you provided, I don't see an option for changing the durable property for carbon, but you can for sensu.
You need to make everyone agree on what durable should be, delete the exchange (if durable will be different) and restart everything.
PS: The durable property specifies that the exchange should be persisted to disk and survive restarts of the RabbitMQ process.
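To illustrate the fix, here is a minimal sketch using the RabbitMQ Java client (the choice of client is an assumption; rabbitmqadmin or the management UI would do the same job) that deletes the mismatched exchange and re-declares it with the agreed durability. Host, vhost and credentials are the ones from the question:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class FixMetricsExchange {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("10.0.3.16");        // RabbitMQ host from the question
        factory.setVirtualHost("/sensu");    // vhost used by Sensu and Carbon
        factory.setUsername("sensu");
        factory.setPassword("kubuntu710");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Declaring "metrics_my" as durable=true while it already exists as
            // durable=false raises the PRECONDITION_FAILED error from the logs.
            // Delete the old exchange and re-declare it with the agreed durability:
            channel.exchangeDelete("metrics_my");
            channel.exchangeDeclare("metrics_my", "topic", true); // durable topic exchange
        }
    }
}
After that, restart Carbon and Sensu and make sure both declare metrics_my with the same durable value, otherwise the next startup will hit the same channel error.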

Related

How to display remote queue managers from client host using CLI

I am new to IBM MQ and wanted to know whether it is possible to display remote queue managers from a client host using the CLI. I can successfully see the remote queue managers in the web console of the client IBM MQ, but how can I check that from the CLI, PCF, or a REST call?
It is not currently possible to use remote MQ commands (such as PCF or MQSC) to display the queue managers on a machine.
However, it is possible using the MQ Console or the REST API. Do an HTTP GET on the following URL (changing the domain accordingly):
http://localhost:9080/ibmmq/rest/v1/admin/qmgr
This will show you output like the following:
{"qmgr": [
  {
    "name": "MQG1",
    "state": "running"
  },
  {
    "name": "MQG2",
    "state": "ended"
  }
]}
For more information on this particular REST API, see the reference page for HTTP GET on /admin/qmgr.
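For a quick programmatic check, here is a minimal sketch using Java 11's built-in HttpClient against the URL from the answer. The mqadmin/secret credentials are placeholders and assume your mqweb server uses basic authentication:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ListQueueManagers {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials - replace with a user configured in your mqweb server.
        String credentials = Base64.getEncoder().encodeToString("mqadmin:secret".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9080/ibmmq/rest/v1/admin/qmgr"))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Prints the JSON list of queue managers shown above.
        System.out.println(response.body());
    }
}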

Make Spring RabbitMQ fail on missing exchange

There is a nice property, spring.rabbitmq.listener.simple.missing-queues-fatal=true.
It makes the SimpleMessageListenerContainer and the whole application fail when I point it at a non-existent queue - which is what I want. I don't want an application running in an invalid state.
I can't find a similar solution for exchanges, something like
spring.rabbitmq.listener.simple.missing-exchanges-fatal
I get several of these in the logs:
ERROR 432430 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Shutdown Signal: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'some-non-existent-exchange' in vhost '/', class-id=40, method-id=30)
and the application starts. I would like it to fail. How can I do it?
How can I make Spring Boot/Rabbit fail when trying to bind to any non-existent queue or exchange?
The exchange has nothing to do with the listener. We need it, together with a routing key, when we produce data into AMQP. The listener has the option to fail because it is out of the end user's control and starts automatically when the application is ready. The exchange is used by the RabbitTemplate when you send data. See the publisherReturns option on the CachingConnectionFactory for a use case like yours, to handle such an error in case of a missing exchange:
https://docs.spring.io/spring-amqp/docs/current/reference/html/#cf-pub-conf-ret
https://www.rabbitmq.com/confirms.html
You can also add a ReturnsCallback to your RabbitTemplate to catch an unrouted message and its reason, and handle the error accordingly: in your case, stop the app, e.g. System.exit(1).
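A minimal sketch of that suggestion in plain Java configuration, assuming Spring AMQP 2.3+ (where ReturnsCallback is available); the host and the System.exit(1) handling are illustrative:
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ReturnsConfig {
    public RabbitTemplate rabbitTemplate() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setPublisherReturns(true); // ask the broker to return unroutable messages

        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setMandatory(true); // required so returned messages reach the callback
        template.setReturnsCallback(returned -> {
            // Called when a message could not be routed, e.g. a missing binding.
            System.err.println("Message returned: " + returned.getReplyText());
            System.exit(1); // fail fast, as the question asks
        });
        return template;
    }
}
Note that publishing to a completely missing exchange closes the channel with a 404 rather than returning the message, so enabling publisher confirms as well (covered by the first link above) is worth considering.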

Apache Artemis doesn't stop scanning for expires

I'm using Apache Artemis ActiveMQ 2.6.3 as an MQTT broker embedded in a Spring 5 application:
@Bean(initMethod = "start", destroyMethod = "stop")
fun embeddedActiveMQ(securityManager: ActiveMQJAASSecurityManager) =
    EmbeddedActiveMQ().apply {
        setConfiguration(getEmbeddedActiveMQConfiguration())
        setConfigResourcePath("activemq-broker.xml")
        setSecurityManager(securityManager)
    }

private fun getEmbeddedActiveMQConfiguration() =
    ConfigurationImpl().apply {
        addAcceptorConfiguration("netty", DefaultConnectionProperties.DEFAULT_BROKER_URL)
        addAcceptorConfiguration("mqtt", "tcp://$host:$mqttPort?protocols=MQTT")
        name = brokerName
        bindingsDirectory = "$dataDir${File.separator}bindings"
        journalDirectory = "$dataDir${File.separator}journal"
        pagingDirectory = "$dataDir${File.separator}paging"
        largeMessagesDirectory = "$dataDir${File.separator}largemessages"
        isPersistenceEnabled = persistence
        connectionTTLOverride = 60000
    }
Although I'm setting the connection TTL to 60 seconds in the Kotlin code above, as suggested in the documentation, and the client disconnected and terminated an hour ago, the log still shows the following entries:
2020-06-22 10:57:03,890 [Thread-29 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
2020-06-22 10:58:03,889 [Thread-35 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
Based on these log entries, I'm afraid that "dead" connection resources are never cleaned up by the server.
What should I do to actually remove the "dead" connections from the server to avoid leaking resources?
The broker will often create resources like addresses, queues, etc. to deal with clients. In the case of MQTT clients the broker will create queues which essentially represent the client's subscriptions.
In this particular case a queue named client1.some-topic has been created for an MQTT subscription and the broker is scanning that queue for expired messages. At this point it looks like the broker is working as designed.
When a client disconnects without unsubscribing what the broker does with the subscription depends on whether the client used a clean session or not.
If the client used a clean session then the broker will delete the subscription queue when the client disconnects (even in the event of a failure).
Otherwise the broker is obliged to hold on to the subscription queue and route messages to it. If the client never reconnects to unsubscribe then the subscription may fill up with lots of messages and trigger the broker's paging mode and eventually even limit message production altogether. In this case the client can either reconnect and unsubscribe or the subscription queue can be removed administratively.
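As a small illustration of the clean-session case, here is a sketch using the Eclipse Paho Java client (an assumption - the question does not say which MQTT client is used); the broker URL, client id and topic are illustrative:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class CleanSessionExample {
    public static void main(String[] args) throws Exception {
        // Illustrative URL and client id - adjust to your Artemis MQTT acceptor.
        MqttClient client = new MqttClient("tcp://localhost:1883", "client1", new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true); // broker drops the subscription queue on disconnect

        client.connect(options);
        client.subscribe("some-topic");
        // ... receive messages ...
        client.disconnect(); // with a clean session, the client1.some-topic queue is removed
        client.close();
    }
}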

How can I fix a new activemq-artemis install blocking issue?

I've been tasked with evaluating ActiveMQ Artemis for JMS clients. I have RabbitMQ experience, but none with ActiveMQ Artemis/JMS.
I installed Artemis on my local machine, created a new broker per the instructions, and set it up as a Windows service. The Windows service starts and stops just fine. I've made no changes to the broker.xml file.
For my first test I'm trying to perform a JMS queue produce/consume from a stand-alone Java program. I'm using the code from the Artemis User Manual in the Using JMS section (without using JNDI):
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName());
ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF,transportConfiguration);
Queue orderQueue = ActiveMQJMSClient.createQueue("OrderQueue");
Connection connection = cf.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(orderQueue);
MessageConsumer consumer = session.createConsumer(orderQueue);
connection.start();
TextMessage message = session.createTextMessage("This is an order");
producer.send(message);
TextMessage receivedMessage = (TextMessage)consumer.receive();
System.out.println("Got order: " + receivedMessage.getText());
When I run this code, I get the following error:
WARN: AMQ212054: Destination address=jms.queue.OrderQueue is blocked. If the system is configured to block make sure you consume messages on this configuration.
My research hasn't been conclusive on whether this is a server-side setting or a matter of having the producer send without blocking. I haven't been able to find a producer send method that takes a blocking boolean, only persistence. Any ideas on where to focus? Thanks.
Edit: I added a new address-setting element to broker.xml dedicated to this queue:
<address-setting match="jms.queue.OrderQueue">
   <max-size-bytes>104857600</max-size-bytes>
   <page-size-bytes>10485760</page-size-bytes>
   <address-full-policy>PAGE</address-full-policy>
</address-setting>
I found this on further research in the user manual:
max-disk-usage The max percentage of data we should use from disks.
The System will block while the disk is full. Default=100
and in the log after service startup with no messages published yet:
WARN [org.apache.activemq.artemis.core.server] AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers.
so I think that no matter what my address settings were, it would start to block. Looking at the max-disk-usage setting in broker.xml, it was set to 90. The documentation says the default is 100; I set it to that, the startup log warnings are gone, and my test pub/sub code now works.
This warning appears when the address policy is set to BLOCK and the memory limit is reached. Check the address policy set in broker.xml. If it is set to BLOCK, change it to PAGE, or consume the pending messages from OrderQueue.
By default the max-disk-usage value is set to 90 (%); if the remaining free space is less than 10%, this warning will be shown and no messages will be received until you adjust the parameter or free up space beyond 10%.
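For reference, a sketch of where that setting lives in broker.xml (a child of the core element); check the value already generated in your broker before changing it:
<core xmlns="urn:activemq:core">
   <!-- Maximum percentage of disk the broker may use before it blocks producers.
        The broker in the question had 90 here; raising it (or freeing disk space)
        stops the AMQ222210 warning. -->
   <max-disk-usage>100</max-disk-usage>
</core>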

Cannot obtain exclusive access to locked queue

I have an anonymous and exclusive queue defined like this:
@Bean
public SimpleMessageListenerContainer responseMessageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(simpleRoutingConnectionFactory());
    container.setQueues(responseAnonymousQueue());
    container.setMessageListener(rabbitTemplate());
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    container.setMessageConverter(jsonMessageConverter());
    return container;
}

@Bean
public Queue responseAnonymousQueue() {
    return new MyAnonymousQueue();
}
Sometimes I get this error in the RabbitMQ log:
=ERROR REPORT==== 12-Apr-2016::15:13:42 === Channel error on connection <0.6899.0> (XX.XXX.57.174:51716 -> 192.168.100.145:5671,
vhost: '/', user: 'XXXX_USER'), channel 1:
{amqp_error,resource_locked,
"cannot obtain exclusive access to locked queue ' XXXX_USER-broad-1457bb43-6487-4252-b21a-a5a92d19e0dc' in vhost '/'",
'queue.declare'}
So the client can’t declare the queue and it can’t receive the messages from the AMQP server.
It happens after this message:
=WARNING REPORT==== 12-Apr-2016::15:11:51 === closing AMQP connection <0.6810.0> (XX.XXX.57.174:17959 -> 192.168.100.145:5671):
connection_closed_abruptly
=INFO REPORT==== 12-Apr-2016::15:13:41 === accepting AMQP connection <0.6899.0> (XX.XXX.57.174:51716 -> 192.168.100.145:5671)
I can't reproduce it (I have tried closing the connection from RabbitMQ and unplugging the network cable, but the application reconnects fine), so I don't know exactly why this is happening.
Private and exclusive queues are supposed to be deleted when the connection is closed, so why is this happening? How can I catch this exception and recover from it?
Thanks
You are correct, exclusive queues are deleted when the connection that declared them is closed; this implies that the owning connection is still open and that the queue wasn't declared by the connection you see in the log.
When your system is in that condition, go to the admin UI where you can explore the queue and which connection owns it.
e.g. Exclusive owner 127.0.0.1:60113
If that shows the closed connection (XX.XXX.57.174:17959 in the example above), you should reach out to the RabbitMQ guys on the rabbitmq-users Google group; this does not appear to be a spring-amqp issue.
EDIT
FYI, if passive queue declaration fails for any reason, by default the consumer will try 3 times at 5 second intervals, then give up and stop the container.
There are two properties on the container that can be used to adjust this - declarationRetries and failedDeclarationRetryInterval (default 3 and 5000 respectively). If you are using <rabbit:listener-container /> configuration, there are equivalent attributes on the namespace.
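For illustration, the container bean from the question could set those two properties like this (the values shown are the defaults mentioned above, and the referenced beans come from the question's configuration):
@Bean
public SimpleMessageListenerContainer responseMessageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(simpleRoutingConnectionFactory());
    container.setQueues(responseAnonymousQueue());
    // Retries for a failed passive queue declaration before the container gives up and stops.
    container.setDeclarationRetries(3);                 // default 3 attempts
    container.setFailedDeclarationRetryInterval(5000);  // default 5000 ms between attempts
    return container;
}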
Check that the queue name is not already bound to an exchange, otherwise you will get this error. Or, I think, a queue with that name already exists.
