I have an application which tries to subscribe to a lot of different topics.
The server side publishes a lot of messages through these topics, and as soon as the application starts subscribing, it receives so many messages that it cannot even reach the end of the subscription routine.
It seems that the onMessage listener is flooded (the listener is the class which is trying to subscribe itself to all the topics).
So basically, is there a way to stop the reception of messages until I have subscribed to all of the topics? Or am I missing something here?
The thread trying to subscribe to all of the topics never gets the processor again.
(If the server is down, the subscription works fine, since nothing is received and no processing power is lost.)
Thank you in advance.
Paul.
You could try lowering the prefetch limit of the consumers; this would prevent the broker from attempting to dispatch so many messages as soon as the consumers are created, which should help reduce the flooding issue you are seeing.
Here's some documentation that might help.
http://activemq.apache.org/what-is-the-prefetch-limit-for.html
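For instance, with the ActiveMQ Java client the prefetch can be lowered on the connection factory, or per destination via a destination option. A minimal sketch (the broker URL, topic name, and prefetch value are placeholders):

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LowPrefetchExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // Lower the topic prefetch so the broker dispatches only a few
        // messages per consumer before waiting for acknowledgements.
        factory.getPrefetchPolicy().setTopicPrefetch(10);

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Alternatively, set the prefetch for a single destination
        // using a destination option:
        MessageConsumer consumer = session.createConsumer(
                session.createTopic("SOME.TOPIC?consumer.prefetchSize=10"));
    }
}
```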
Tim -
www.fusesource.com
I have a question about some strange consumer behaviour.
Recently we had a strange situation in our production environment. Two consumers on two different microservices were stuck on some messages. The first one was holding 20 messages from a RabbitMQ queue and the second one 2 messages, and they weren't processing them. These messages were visible as Unacked in RabbitMQ for two days. They went back to the Ready state only when those two microservices were restarted. At the time the consumers took these messages, the whole program was processing thousands of messages per hour, so basically our Saga and all consumers were working. When these messages went back to the Ready state they were processed within a second, so I don't think the problem is with the messages themselves.
The messages are published by the Saga to an Exchange, and besides these two stuck consumers we also have an EventLogger consumer subscribed to all messages, and this EventLogger processed these 22 messages normally without any problems (from its own queue). We also have Application Insights connected to the consumers, and there is no record of these two consumers receiving these 22 messages (there are records of the EventLogger receiving them).
The other day we had the same issue with one message in the test environment.
Recently we updated the version of MassTransit in our project from 6.2.0 to 7.1.6, and before that we didn't notice any similar issues with consumers, but maybe it's just a coincidence. We also have retry, redelivery, circuit breaker, and in-memory outbox mechanisms, but I don't think the problem lies with them, because the consumer didn't even start to process these 22 messages.
Do you have any suggestions as to what could have happened to these consumers?
Usually when a consumer doesn't even start to consume the message once it has been delivered to MassTransit by RabbitMQ, it could be an issue resolving the consumer from the container, such as a dependency on another backing service (database, log server, file, network connection, device, etc.).
The message remains unacknowledged on the broker because the transport/delivery mechanism to the consumer is waiting for a resource to become available. If there isn't anything in the logs for that time period indicating an issue with a resource, it's hard to know what could have blocked those messages from being consumed. The fact that they were ultimately consumed once the services were restarted seems to indicate the message content itself was fine.
Monitoring the lack of message consumption (and the likely associated increase in queue depth) would give an indication that the situation has occurred. If it happens again, I'd increase the logging detail levels so the issue can be identified.
I am facing an issue when decoupling two systems with an event/message broker like Apache Kafka. The issue is related to a frontend triggering actions in a backend:
How does the producer (frontend service) know that the published event has been properly handled by all the backend services (as consumers), if the publisher knows neither the "identities" nor the number of the consuming backends?
To be precise: users can change, for example, their email address using a frontend UI. An associated service publishes that "change request" event to an appropriate topic within Kafka. The UI form is then "locked" to prevent subsequent change requests until the change event has been fully processed by every consumer. But it's unclear how to detect this state.
You can use another topic to publish handled jobs, so your frontend publishes to one topic and your backend publishes to another once it is done.
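A rough sketch of that pattern with the standard Kafka Java client; the topic names, group id, and correlation key ("req-42") are made up for illustration:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChangeRequestFlow {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // The frontend publishes the change request, keyed by a correlation id.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("email-change-requests",
                    "req-42", "{\"newEmail\":\"user@example.com\"}"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "frontend-service");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // ...and waits on a completion topic that the backends publish to
        // once they are done; the UI form is unlocked when the matching
        // correlation id comes back.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("email-change-completed"));
            while (true) {
                for (ConsumerRecord<String, String> record :
                        consumer.poll(Duration.ofSeconds(1))) {
                    if ("req-42".equals(record.key())) {
                        System.out.println("Change handled, unlock the form");
                        return;
                    }
                }
            }
        }
    }
}
```

Note that if several backends must each confirm, the frontend would have to count completion events per correlation id, which brings back the original problem of knowing how many consumers there are.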
In Kafka terms, neither the producer nor consumer are considered backend - they're both clients connecting to a broker, which is generally considered to be the backend.
A producer will know that it has produced a message successfully, by virtue of the acks setting. A consumer will read a message, and then at a later point, its offset will be updated to a point corresponding to the last message it read. However, there is generally no interaction between a producer and a consumer, and they are generally completely unaware of one another.
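To illustrate the producer side of that, a sketch with the Java client (the topic name is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AckedSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Wait for all in-sync replicas to acknowledge the write.
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("some-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // The broker did not acknowledge the write.
                            exception.printStackTrace();
                        } else {
                            // Acknowledged: the producer knows the record was
                            // stored, but learns nothing about consumers.
                            System.out.printf("Stored at %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(),
                                    metadata.offset());
                        }
                    });
        }
    }
}
```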
How can I get all messages from a JMS topic in Tibco?
I know that I can use a topic subscriber, but it wouldn't exactly fit my needs. I want to start a process only once a day that will read all messages from a topic and process them. I cannot have both a timer and a topic subscriber in the same process.
I tried with "Wait for JMS Topic Message", but it seems that it gets only one message, no matter how many there are in the topic.
I would try going a different direction. You could implement this using two separate processes.
One process is a topic subscriber (with a durable subscription) which receives all messages. This process starter should be disabled by default (so the listener is not active).
The second process is a timer, which activates the first process through Hawk (Engine Command). So every time the subscriber gets activated, it will start processing events.
The problematic part here is deactivating the topic subscriber once it is done. For that you need separate logic to decide when to deactivate the subscriber. This could be done by another timer, or by a Hawk rule which fires when the subscriber has no more messages.
I think the best solution will be to bridge the JMS topic to a queue and use the "JMS Queue Receiver" activity at the start of your process.
When the process instance starts each day, it will connect and process all the messages that have accumulated in the queue.
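Outside of the BW palette, the pattern the "JMS Queue Receiver" activity implements looks roughly like this in plain JMS (a sketch; the connection factory and queue name come from your environment):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class DailyQueueDrain {
    // Connect once a day and drain whatever has accumulated on the queue
    // that the topic is bridged to.
    public static void drain(ConnectionFactory factory, String queueName)
            throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue(queueName));

            // A short receive timeout acts as the "no more messages" signal.
            Message message;
            while ((message = consumer.receive(1000)) != null) {
                process(message);
            }
        } finally {
            connection.close();
        }
    }

    private static void process(Message message) {
        // Application-specific handling goes here.
    }
}
```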
A much more natural solution (if it can be implemented) is to just implement a Topic Subscriber (or a Queue Receiver if the Topic is bridged to a Queue) and let the BusinessWorks engine spawn job instances whenever a message gets published.
This spreads the workload much more evenly than pulling all the messages from either a Topic or a Queue in one daily run.
I'm designing a fairly complicated system and was wondering what the best way is to put a JMS consumer (ActiveMQ, VM protocol, non-persistent) inside a Netty handler.
Let me explain: I have several clients connecting to my Netty server using WebSockets. For every client connection I create a JMS consumer that listens for interesting messages on one or more topics. If an interesting message arrives, I need to do an extra step (additional filtering) before sending the message to the client over the WebSocket.
Is the following a good way to do this:
inside a SimpleChannelInboundHandler I declare a private non-static consumer
the consumer is initialized in channelActive
the consumer is destroyed in channelInactive
when a message is received by the consumer, I do the extra filtering and send the message using ctx.channel().write() (see the sketch below)
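A rough sketch of that first setup, assuming plain JMS and Netty 4 with text WebSocket frames; the topic name and the extra filter are placeholders:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;

public class JmsForwardingHandler extends SimpleChannelInboundHandler<TextWebSocketFrame> {

    private final Connection jmsConnection; // shared, started elsewhere
    private Session session;
    private MessageConsumer consumer;

    public JmsForwardingHandler(Connection jmsConnection) {
        this.jmsConnection = jmsConnection;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // One session/consumer per client connection.
        session = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        consumer = session.createConsumer(session.createTopic("interesting.topic"));
        consumer.setMessageListener(message -> {
            try {
                String text = ((TextMessage) message).getText();
                if (passesExtraFilter(text)) {
                    ctx.channel().writeAndFlush(new TextWebSocketFrame(text));
                }
            } catch (Exception e) {
                ctx.fireExceptionCaught(e);
            }
        });
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // Tear down the consumer when the client goes away.
        consumer.close();
        session.close();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) {
        // Inbound frames from the client, if any, are handled here.
    }

    private boolean passesExtraFilter(String text) {
        return true; // placeholder for the additional filtering
    }
}
```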
In this setup I'm a bit worried that the consumer might turn into a slow consumer and slow everything down, since the WebSocket goes over the internet.
I came up with a more complex alternative to decouple the "receiving of a message by the consumer" from the "sending of the message through a WebSocket".
inside a SimpleChannelInboundHandler I declare a private non-static consumer
the consumer is initialized in channelActive
the consumer is destroyed in channelInactive
when a message is received by the consumer, I put it in a blocking queue
every minute I let a thread (created for every client) look in the queue and send the messages it finds to the client using ctx.channel().write()
At this point I'm a bit worried about the extra thread per client.
Or is there maybe a better way to accomplish this task?
This is a classic slow-consumer problem, and the first step to resolving it is to determine the appropriate action when a slow consumer is detected. If it is acceptable for the slow consumer to miss messages, then the solution is some variation on dropping messages or unsubscribing it from the feed. For example, if it's acceptable for the client to miss messages then, when one is received from JMS, check whether the channel is writable. If it isn't, drop the message. If you want to give yourself a bit more of a buffer (although OS buffers are quite large), you can track the number of write-completion futures that haven't completed (i.e. the messages that haven't been written to the OS send buffer) and drop messages if there are too many outstanding write requests.
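A minimal sketch of that writability check, assuming the JMS message arrives for a given client's Netty Channel:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;
import io.netty.channel.Channel;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;

public final class DropWhenUnwritable {

    // Forward a JMS message to the client, dropping it when the channel's
    // outbound buffer is full (i.e. the remote peer is reading too slowly).
    public static void forwardOrDrop(Channel channel, Message message)
            throws JMSException {
        String text = ((TextMessage) message).getText();
        if (channel.isWritable()) {
            channel.writeAndFlush(new TextWebSocketFrame(text));
        }
        // else: slow consumer detected; this message is skipped for this client.
    }
}
```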
If the client may not miss messages, and is consistently slow, then the problem is more difficult. One option might be to divert messages to a JMS queue with a specific header value, then open a new consumer that reads messages from that queue using a JMS selector. This will put more load on the JMS server, but might be appropriate for temporary slowness, and hopefully it won't interfere with your main topic feeds. Alternatively, you might want to stash the messages in a different store, such as a database, so you can poll for messages when they can be sent. If you do this right, a single polling thread can cope with many clients (query for clients which have outstanding messages, then for each client load a batch of messages). However, this isn't as convenient as using JMS.
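The diversion idea might look roughly like this in plain JMS; the queue name and the clientId header are made up for illustration:

```java
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class SlowClientDiversion {

    // Divert a message for a slow client onto a shared overflow queue,
    // tagged with a per-client header so it can be selected later.
    static void divert(Session session, Message message, String clientId)
            throws Exception {
        Queue overflow = session.createQueue("slow.client.overflow");
        // Properties of a received message are read-only; clearing them
        // makes the message writable again (this wipes existing properties).
        message.clearProperties();
        message.setStringProperty("clientId", clientId);
        MessageProducer producer = session.createProducer(overflow);
        producer.send(message);
        producer.close();
    }

    // A dedicated consumer later drains only that client's backlog
    // using a JMS selector.
    static MessageConsumer backlogConsumer(Session session, String clientId)
            throws Exception {
        Queue overflow = session.createQueue("slow.client.overflow");
        return session.createConsumer(overflow, "clientId = '" + clientId + "'");
    }
}
```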
I wouldn't go with option 2, because the blocking queue only solves the problem temporarily, and you can achieve the same thing by tracking how many write operations are waiting to complete.
I am relatively new to JMS and have encountered a weird problem implementing my first real application. I'm desperate for any help or advice.
Background: I use ActiveMQ (Java) as the message broker with non-transacted, non-persistent queues.
The Design: I have a straightforward producer/consumer system based around a single queue. A number of nodes (currently 2) place messages onto / consume from the queue. Selectors are used to filter which messages a node receives.
The Problem: The producer successfully places its items onto the queue (I have verified they are there using the web interface), yet the consumers remain blocked and do not read them. Only when I close the JMS connection in the producer do the consumers jump into life and consume the messages as expected.
This behaviour seems very weird to me; surely you shouldn't have to completely hang up the producer connection for the consumers to be able to read from the queue. I must have made a mistake somewhere (possibly with sessions), but at the moment the number of things that could be wrong is too large, and I have no idea what would cause this behaviour.
Any hints as to a solution, the cause of the problem or just how to continue debugging would be greatly appreciated.
Thanks for your time,
P.S. If you require any additional information, I am happy to provide it.
Hard to say without seeing the code, but it sounds like the producer is transacted. You should not have to close the producer in order for the consumers to receive a message, but a transacted producer won't send its messages until you call commit. Other things to check are that the connection has been started and, if you have many consumers, the prefetch setting, to ensure that one consumer doesn't hog all the messages; setting prefetch to 1 might be needed, but it's hard to say without further insight into your use case.
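For reference, a transacted send in plain JMS looks like the sketch below; nothing becomes visible to consumers until commit() is called (the queue name is a placeholder):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class TransactedSend {
    public static void send(ConnectionFactory factory, String queueName,
                            String text) throws Exception {
        Connection connection = factory.createConnection();
        try {
            // The first argument 'true' makes the session transacted; the
            // acknowledgement mode is ignored for transacted sessions.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(
                    session.createQueue(queueName));
            producer.send(session.createTextMessage(text));
            // Until this commit, the sent message is not visible to consumers.
            session.commit();
        } finally {
            connection.close();
        }
    }
}
```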