Currently I have an SNS topic for my Elastic Beanstalk environment. Deployments and health status transitions are posted to this topic, arn:aws:sns:us-east-1:309321511178:ElasticBeanstalkNotifications-Environment-Myapp.
A Lambda function subscribes to the topic and posts the messages to a Slack channel.
However, I'd like to filter these messages so that only deployments and transitions to the Severe health status get through.
I assume a filter policy on the SNS subscription would be the way to do this, but I'm not sure exactly what JSON is needed to get the results I want.
You can set up monitoring in Elastic Beanstalk with a threshold of "Maximum Environment Health" >= 20.
Below are the values for the different statuses:
0 (Ok), 1 (Info), 5 (Unknown), 10 (No data), 15 (Warning), 20 (Degraded), 25 (Severe)
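If the notifications turn out not to carry SNS message attributes that a subscription filter policy can match on, another option is to filter inside the subscribing Lambda itself. Below is a minimal sketch of that idea, assuming the aws-lambda-java-events library; the string matching on the message body is an assumption (check the exact wording of your notifications), and the Slack-posting helper is a hypothetical stub.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;

// Sketch: forward only deployment and Severe-transition notifications to Slack.
public class EbNotificationFilter implements RequestHandler<SNSEvent, Void> {

    @Override
    public Void handleRequest(SNSEvent event, Context context) {
        for (SNSEvent.SNSRecord record : event.getRecords()) {
            String body = record.getSNS().getMessage();
            // Elastic Beanstalk notifications are plain text, so match on the body.
            // The exact phrases below are assumptions; adjust to your notification text.
            boolean severeTransition = body.contains("to Severe");
            boolean deployment = body.contains("deployed");
            if (severeTransition || deployment) {
                postToSlack(body); // hypothetical helper that calls the Slack webhook
            }
        }
        return null;
    }

    private void postToSlack(String text) {
        // Left out: HTTP POST of {"text": ...} to the Slack incoming-webhook URL.
    }
}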
I need to list the messages that were posted to a NATS stream in order to know which ones were not acknowledged.
I have tried looking at the admin API that NATS suggests in its documentation, but it does not specify whether this can be done.
I have also looked at the JetStream library for Go; with it I can get general information about the streams and their consumers, but not the messages that were not acknowledged, and I don't see any functions that give me what I need.
Has anyone already done this, in any programming language?
Acknowledgements are tied to a specific consumer, not a stream.
You can derive the state of acknowledgements from the consumer info, specifically the acknowledgement floor:
nats consumer info
State:
Last Delivered Message: Consumer sequence: 8 Stream sequence: 158 Last delivery: 13m59s ago
Acknowledgment floor: Consumer sequence: 4 Stream sequence: 154 Last Ack: 13m59s ago
Outstanding Acks: 2 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 42
Waiting Pulls: 0 of maximum 512
This is available in the NATS CLI and most client libraries.
There is no way to directly see the list of acknowledged messages.
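Since any language is fine per the question, here is a minimal sketch using the nats.java client that reads the same numbers the CLI prints above; the server URL and the stream and consumer names (ORDERS, processor) are placeholders, and error handling is omitted.

import io.nats.client.Connection;
import io.nats.client.JetStreamManagement;
import io.nats.client.Nats;
import io.nats.client.api.ConsumerInfo;

public class AckState {
    public static void main(String[] args) throws Exception {
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            JetStreamManagement jsm = nc.jetStreamManagement();

            // Placeholder stream and consumer names.
            ConsumerInfo ci = jsm.getConsumerInfo("ORDERS", "processor");

            // Everything at or below the ack floor has been acknowledged.
            System.out.println("Ack floor (stream seq):      " + ci.getAckFloor().getStreamSequence());
            System.out.println("Last delivered (stream seq): " + ci.getDelivered().getStreamSequence());
            // Delivered but not yet acknowledged ("Outstanding Acks" in the CLI).
            System.out.println("Ack pending:                 " + ci.getNumAckPending());
            // Not yet delivered to this consumer ("Unprocessed Messages" in the CLI).
            System.out.println("Pending:                     " + ci.getNumPending());
        }
    }
}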
Summary:
I have created a small Spring Boot application which should consume messages from a Solace instance. In Solace, the publisher maintains a queue which is subscribed to different topics.
As the consumer, I am processing the messages delivered to the queue. Depending on the original topic that led to a message landing in this queue, I would like to react differently in my business logic.
That means I need to somehow extract the topic of a message delivered via the Solace queue.
I have already checked the JMS headers/properties, but found nothing related to the topic.
Does anyone have an idea, or has anyone had a similar use case?
A workaround that came to mind was to subscribe directly to all topics and create a consuming method per topic that reacts accordingly, but then we would lose the queue features, wouldn't we?
Actually, the Destination and JMSDestination headers should contain the topic that the message was published to.
For example, to test this quickly I created a StackOverflowQueue queue with a topic subscription to the this/is/a/topic topic. Upon publishing a message to this/is/a/topic, my consumer, which was listening to the queue, got the topic info in the header.
To quickly test I used the JMS sample here: https://github.com/SolaceSamples/solace-samples-jms/blob/master/src/main/java/com/solace/samples/QueueConsumer.java
Awaiting message...
Message received.
Message Content:
JMSDeliveryMode: 2
JMSDestination: Topic 'this/is/a/topic'
JMSExpiration: 0
JMSPriority: 0
JMSDeliveryCount: 1
JMSTimestamp: 1665667425784
JMSProperties: {JMS_Solace_DeliverToOne:false,JMS_Solace_DeadMsgQueueEligible:false,JMS_Solace_ElidingEligible:false,Solace_JMS_Prop_IS_Reply_Message:false,JMS_Solace_SenderId:Try-Me-Pub/solclientjs/chrome-105.0.0-OSX-10.15.7/3410903749/0001,JMSXDeliveryCount:1}
Destination: Topic 'this/is/a/topic'
SenderId: Try-Me-Pub/solclientjs/chrome-105.0.0-OSX-10.15.7/3410903749/0001
SendTimestamp: 1665667425784 (Thu. Oct. 13 2022 09:23:45.784)
Class Of Service: USER_COS_1
DeliveryMode: PERSISTENT
Message Id: 1
Replication Group Message ID: rmid1:18874-bc0e45b2aa1-00000000-00000001
Binary Attachment: len=12
48 65 6c 6c 6f 20 77 6f 72 6c 64 21 Hello.world!
Note that the sample code doesn't use Spring Boot, but that shouldn't make a difference.
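In a Spring Boot consumer the same information is available through the standard JMS API. Here is a minimal sketch, assuming a @JmsListener wired to the Solace connection factory; the queue name and topic string are placeholders, and on Spring Boot 3 the imports would be jakarta.jms instead of javax.jms.

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Topic;

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class QueueListener {

    // "StackOverflowQueue" is the queue from the example above; substitute your own.
    @JmsListener(destination = "StackOverflowQueue")
    public void onMessage(Message message) throws JMSException {
        // JMSDestination carries the topic the message was originally published to.
        Destination destination = message.getJMSDestination();
        String topicName = (destination instanceof Topic)
                ? ((Topic) destination).getTopicName()
                : destination.toString();

        switch (topicName) {
            case "this/is/a/topic":
                // business logic for this topic
                break;
            default:
                // fallback handling for other topics
                break;
        }
    }
}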
I have a single-instance cluster for AlertManager, and I see this warning in the AlertManager container:
level=warn ts=2021-11-03T08:50:44.528Z caller=delegate.go:272 component=cluster msg="dropping messages because too many are queued" current=4125 limit=4096
AlertManager version information:
Branch: HEAD
BuildDate: 20190708-14:31:49
BuildUser: root@868685ed3ed0
GoVersion: go1.12.6
Revision: 1ace0f76b7101cccc149d7298022df36039858ca
Version: 0.18.0
AlertManager metrics
# HELP alertmanager_cluster_members Number indicating current number of members in cluster.
# TYPE alertmanager_cluster_members gauge
alertmanager_cluster_members 1
# HELP alertmanager_cluster_messages_pruned_total Total number of cluster messages pruned.
# TYPE alertmanager_cluster_messages_pruned_total counter
alertmanager_cluster_messages_pruned_total 23020
# HELP alertmanager_cluster_messages_queued Number of cluster messages which are queued.
# TYPE alertmanager_cluster_messages_queued gauge
alertmanager_cluster_messages_queued 4125
How do we see those queued messages in AlertManager?
Do we lose alerts when messages are dropped because too many are queued?
Why are messages queued even though there is logic to prune messages at a regular interval, i.e. every 15 minutes?
Do we lose alerts when AlertManager prunes messages at that regular interval?
I am new to alerting. Could you please answer the above questions?
I have an EC2 service (Elastic Beanstalk) that my project runs on. Is there any way to see how many requests a specific API handles every day? I'm storing the error, access, and other logs in CloudWatch. Maybe the access logs could somehow be used to see how many requests each API handled each day, but I need to define a chart for it so that I can tell at a glance, for example, whether customers have started using the new api/user/allowance endpoint I made. So eventually what I need is something like this:
Api | Total number of requests | filter_start_date | filter_end_date
Actually, I have dug into the problem more and found a solution; at least it works for me. I went to CloudWatch and then the Logs Insights panel, where I could define a query to group my log messages by their request URL. The log messages I have look something like this:
0.0.0.0 (11.11.11.11) - - [18/Oct/2019:13:33:49 +0000] "GET api/user/allowance HTTP/1.1" 200 2575 "-" "okhttp/3.6.0"
Then I defined a query to parse out the request URL and, after grouping by it, count the messages in each group:
fields @message
| parse @message "* [*] * * *" as ipAddresses, requestTime, RequestAction, RequestUrl, RestOfTheLog
| stats count(*) by RequestUrl
This way you'll have a list of endpoints with the total number of requests for each.
As you said, your app is writing its logs to CloudWatch Logs. You can log a unique entry message from the REST API endpoint you want to trace, then create a custom metric for the CloudWatch log group with a filter matching that entry message.
See the official documentation for how to create custom metrics from CloudWatch log groups.
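For illustration, here is a minimal sketch of creating such a metric filter with the AWS SDK for Java v2; the log group name, filter pattern, namespace, and metric name are placeholders, and the same filter could just as well be created in the console or with the CLI.

import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.MetricTransformation;
import software.amazon.awssdk.services.cloudwatchlogs.model.PutMetricFilterRequest;

public class CreateApiMetricFilter {
    public static void main(String[] args) {
        try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
            // Count every access-log line that mentions the endpoint of interest.
            PutMetricFilterRequest request = PutMetricFilterRequest.builder()
                    .logGroupName("/aws/elasticbeanstalk/Myapp/var/log/nginx/access.log") // placeholder
                    .filterName("user-allowance-requests")
                    .filterPattern("\"api/user/allowance\"")
                    .metricTransformations(MetricTransformation.builder()
                            .metricNamespace("MyApp/Api")          // placeholder namespace
                            .metricName("UserAllowanceRequests")
                            .metricValue("1")                      // each matching line counts as 1
                            .build())
                    .build();
            logs.putMetricFilter(request);
        }
    }
}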
I have a RabbitMQ server set up with thousands of queues, of which only about 5 are persistent queues. Every now and then a queue backs up, with about 5-10 messages in a ready state. These messages do not appear to be in the persistent queues. I want to find out which queues had the messages in a ready state, but the only indication that it is happening is on the overview page of the web management console, which aggregates all queues.
Is there a way to query Rabbit to tell me the stat info for messages that were in a ready state for a period of minutes and which queue they were in?
I would use the HTTP API.
http://rabbit-broker:15672/api/queues
This will give you a list of the current queue states in JSON, so you'll have to keep polling it. Store the "messages_ready" value for a given queue "name" over the period you want to monitor; then you'll be able to see which queues have that backlog spike.
You can use simple curl, or whichever platform you prefer, with an HTTP client.
Please note: the user you connect with will need the monitoring tag to access all of the queue information.
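As a rough sketch of the polling approach, the snippet below fetches /api/queues and prints any queue with ready messages; the broker host, the guest credentials, and the Jackson dependency for JSON parsing are assumptions, and a real monitor would run this on a schedule and store the results.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ReadyMessagePoller {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes()); // placeholder credentials

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://rabbit-broker:15672/api/queues"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        // The response is a JSON array of queue objects; report any backlog.
        for (JsonNode queue : new ObjectMapper().readTree(response.body())) {
            long ready = queue.path("messages_ready").asLong(0);
            if (ready > 0) {
                System.out.println(queue.path("name").asText() + ": " + ready + " ready");
            }
        }
    }
}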
Out of the box there is no easy way, AFAIK; you'd have to manually click through the queues and look at their graphs in the UI for the last hour, which is tedious.
I had similar requirements and found a better way than polling. The docs say that you can get raw samples via the API if you use special parameters in the request.
For example, in your case, since you are interested in messages in the ready state, you can ask a queue for its history of queue lengths, e.g. the last 60 seconds with samples every 1 second (note that 15672 is the default port used by rabbitmq_management):
http://rabbitHost:15672/api/queues/vhost/queue?lengths_age=60&lengths_incr=1
For default vhost=/ it will be:
http://rabbitHost:15672/api/queues/%2F/queue?lengths_age=60&lengths_incr=1
Then in the resulting JSON there will be some additional _details objects like this:
"messages_ready_details": {
"avg": 8.524590163934427,
"avg_rate": 0.08333333333333333,
"samples": [{
"timestamp": 1532699694000,
"sample": 5
}, {
"timestamp": 1532699693000,
"sample": 11
},
<... more samples ...>
],
"rate": -6.0
},
"messages_ready": 5,
Then you can compute whatever statistics you need from this raw data.
Other raw data samples appear if you use different parameters in the request:
Sampling | Required parameters
Messages sent and received | msg_rates_age / msg_rates_incr
Bytes sent and received | data_rates_age / data_rates_incr
Queue lengths | lengths_age / lengths_incr
Node statistics (e.g. file descriptors, disk space free) | node_stats_age / node_stats_incr
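Building on the lengths_age/lengths_incr request above, here is a rough sketch of pulling the raw samples for one queue and reducing them; the host, vhost, queue name, and credentials are placeholders, and Jackson is again assumed for JSON parsing.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ReadyHistory {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes()); // placeholder credentials

        // Last 60 seconds of queue-length history, sampled every second (%2F is the default vhost "/").
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://rabbitHost:15672/api/queues/%2F/myqueue?lengths_age=60&lengths_incr=1"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // Walk the messages_ready_details.samples array and keep the highest value seen.
        long maxReady = 0;
        for (JsonNode sample : new ObjectMapper().readTree(body).path("messages_ready_details").path("samples")) {
            maxReady = Math.max(maxReady, sample.path("sample").asLong(0));
        }
        System.out.println("Peak messages_ready in the last 60s: " + maxReady);
    }
}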