Swap files in SwiftMQ - jms

We are using SwiftMQ as the JMS infrastructure in our product. In the routerconfig.xml file there is an entry like <swap path="./store/swap"/>. I wanted to understand when these swap files are created in store/swap. In the customer's environment we are seeing swap files under /store/swap with names like hostname-xxx.swap.
My assumption is that SwiftMQ uses some data structure to store messages to be sent. This data structure might fill up when it is unable to send those messages, e.g. because of a network issue. I presume that in this scenario it will write to swap files. Is my assumption correct?
Any information on this would be appreciated.

Swap is used to store non-persistent messages when the queue cache is full.
If you go to the sys$queuemanager swiftlet's queue properties you can see how many messages are configured to be held in the cache (the default is 500).
If the producer produces more than 500 non-persistent messages and the consumer has not consumed them, the overflow is written to the swap files. Persistent messages are always written to the store/db/page.db directory.
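To make the cache/swap decision concrete, here is a toy model in plain Java. This is an illustration of the mechanism described above, not SwiftMQ code: the class name, method names, and the in-memory representation are all invented for the sketch. Messages fit in the cache up to its configured size; anything beyond that would be swapped to disk.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy model (not SwiftMQ internals) of the queue-cache / swap decision. */
class QueueCacheModel {
    private final int cacheSize;
    private final Deque<String> cache = new ArrayDeque<>();
    private int swapped = 0;

    QueueCacheModel(int cacheSize) {
        this.cacheSize = cacheSize;
    }

    /** Returns "cache" if the message fits in memory, "swap" otherwise. */
    String offer(String msg) {
        if (cache.size() < cacheSize) {
            cache.add(msg);
            return "cache";
        }
        swapped++;               // would be written to store/swap on disk
        return "swap";
    }

    int swappedCount() {
        return swapped;
    }
}
```

With the default cache size of 500, producing 600 unconsumed non-persistent messages would leave 100 of them in swap files.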

Related

How to create unique messages to rabbitmq queue - spring-amqp

I am putting a message containing string data onto a RabbitMQ queue.
Message publishing is called as part of a service, and the service can be called with the same data multiple times, so duplicated data in the queue is very likely.
This is a problem for us because the consumer inserts this data into a table where it is the primary key. The consumer is called from 4 different nodes simultaneously, so different consumers can end up processing the same data (from different messages).
I want to know if rabbitMQ publishing has any way to avoid message duplication.
I read that defining a property "x-unique-message-code" and comparing it is an easy and simple way, but I don't know how to do it.
I am using spring-amqp
Any help is highly appreciated.
Thank you
There is a good article from RabbitMQ about reliability: https://www.rabbitmq.com/reliability.html
There is a note like:
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
For this purpose, the published message can be supplied with a messageId property.
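A sketch of the consumer side, assuming the publisher stamps each message with a unique messageId (on the spring-amqp side that would be done with a MessagePostProcessor, as in the comment below; the class and method names here are invented for illustration). The handler processes each id exactly once, which is the idempotent behavior the RabbitMQ note recommends. Note an in-memory set is only a demo; in a real system the dedup check would typically be the database constraint itself.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

/**
 * Idempotent message handler keyed by messageId (illustrative, not spring-amqp API).
 * On the publishing side with spring-amqp you would stamp the id, e.g.:
 *
 *   rabbitTemplate.convertAndSend(exchange, routingKey, payload, msg -> {
 *       msg.getMessageProperties().setMessageId(UUID.randomUUID().toString());
 *       return msg;
 *   });
 */
class IdempotentHandler {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();
    private final Consumer<String> delegate;

    IdempotentHandler(Consumer<String> delegate) {
        this.delegate = delegate;
    }

    /** Processes the payload only if this messageId has not been seen before. */
    boolean handle(String messageId, String payload) {
        if (!seen.add(messageId)) {
            return false;        // duplicate delivery: skip it
        }
        delegate.accept(payload);
        return true;
    }
}
```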

Does rabbitmq delete messages from physical storage

I have durable exchanges and queues in my application, and the messages are persistent too. With this configuration I am sure my messages get stored in physical storage. I want to know whether there is any expiry time after which RabbitMQ deletes messages from physical storage, i.e. the hard disk where it maintains the message store. Also, can I read the messages back from physical storage if I want to?
Durable queues + persistent messages means the messages will indeed be kept.
Exceptions to this statement, off the top of my head:
you have configured additional properties on your queues, for example size limits
you reach the limits of the underlying filesystem
you delete the queues (this deletes the messages stored in them too)
As to reading the messages stored in the queues, you can typically consume them.
If you want to read them without deleting them, you have a few options:
trick the broker (for example by reading all of them but never acknowledging them, which brings them back onto the queue)
republish them to the broker for storage after reading them
But if further retention is really desired, I'd seriously consider storing them somewhere else (a database of some kind), as it's clearly outside the purpose of a message broker.
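The "read but never acknowledge" trick above can be sketched with a toy model in plain Java. This is not the RabbitMQ client API (with the real client you would call basicGet with autoAck=false and then basicNack with requeue=true); the class and method names are invented to show the idea that unacknowledged messages return to the queue:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/**
 * Toy model (not com.rabbitmq.client) of reading without consuming:
 * fetching with autoAck=false hands messages out without removing them
 * for good, and nack-with-requeue puts them back on the queue.
 */
class PeekableQueue {
    private final Deque<String> ready = new ArrayDeque<>();
    private final List<String> unacked = new ArrayList<>();

    void publish(String msg) {
        ready.add(msg);
    }

    /** Read every message without acknowledging any of them. */
    List<String> peekAll() {
        List<String> out = new ArrayList<>();
        while (!ready.isEmpty()) {       // like basicGet(queue, /*autoAck=*/false)
            String m = ready.poll();
            unacked.add(m);
            out.add(m);
        }
        unacked.forEach(ready::add);     // like basicNack(tag, false, /*requeue=*/true)
        unacked.clear();
        return out;
    }

    int size() {
        return ready.size();
    }
}
```

After a full peek, every message is back on the queue, which is exactly why this is a trick rather than a feature: redelivered messages are flagged and ordering is not guaranteed by the real broker.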

ActiveMQ converting existing Queue to CompositeQueue

I'll try to explain this the best I can.
As I store my data that I receive from my ActiveMQ queue in several distinct locations, I have decided to build a composite Queue so I can process the data for each location individually.
The issue I am running into is that the queue is currently in a production environment. It seems that changing a queue named A into a composite queue also called A, with virtual destinations named B and C, causes me to lose all the data on the existing queue: on start-up it does not forward the previous messages. Currently I am creating a new CompositeQueue with a different name, say D, which forwards data to B and C. Then I have some clunky code that blocks all connections until I have both a) updated all the producers to send to D and b) pulled the data from A with a consumer and sent it to D with a producer.
It feels rather messy. Is there any way around this? Ideally I would be able to keep the same Queue name, have all its current data sent to the composite sub-queues, and have the Queue forward only in the end.
From the description given, the desired behavior is not possible: message routing on a composite queue applies to messages in flight, not to messages the queue had already stored before the broker configuration was changed. You need to consume the past messages from the initial queue (A, I guess) and send them on to the desired destinations.
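For reference, the broker-side composite queue from the question would be declared in activemq.xml roughly like this (a sketch using the names D, B, and C from the question; forwardOnly="true" means messages are only fanned out to B and C and are not also stored on D itself):

```xml
<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <compositeQueue name="D" forwardOnly="true">
        <forwardTo>
          <queue physicalName="B"/>
          <queue physicalName="C"/>
        </forwardTo>
      </compositeQueue>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>
```

This interception happens at send time, which is why messages already sitting on A before the configuration change are never routed.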

Activemq topic subscribers heap memory leak - why are messages increasing?

I have a console application which connects to ActiveMQ topics. About 10 messages per second are published on each topic. After some time I noticed that the application's memory use keeps increasing, and when all the memory is used the application crashes.
See the dump below. Why is ActiveMQTopicSubscriber using so much heap? Also, although it is not visible in the image, there are about 14,000 ListEntries (which means 14k messages).
http://imageshack.us/photo/my-images/404/amqmemoryproblem.png
A couple of things to check:
In your subscriber, are you positive that the messages from the topic are actually being consumed?
What is your prefetchLimit set to?
If holding messages in memory continues to be a problem, you should consider configuring ActiveMQ to use file cursors. The use of file cursors tells ActiveMQ to spool messages to disk instead of holding them in memory.
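A file-cursor setup for topic subscribers would look roughly like this in activemq.xml (a sketch of the pending-message policy described above; the catch-all topic=">" wildcard is an assumption, so narrow it to your destinations as needed):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">">
        <!-- Spool pending messages for slow subscribers to disk
             instead of holding them all in broker memory. -->
        <pendingSubscriberPolicy>
          <fileCursor/>
        </pendingSubscriberPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

Note this moves the memory pressure off the broker; if the heap growth in your dump is in the client, the prefetch limit and the consumption rate are the first things to fix.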

ActiveMQ: Slow processing consumers

Concerning ActiveMQ: I have a scenario where I have one producer which sends small (around 10KB) files to the consumers. Although the files are small, the consumers need around 10 seconds to analyze them and return the result to the producer. I've researched a lot, but I still cannot find answers to the following questions:
How do I make the broker store the files (completely) in a queue?
Should I use ObjectMessage (because the files are small) or blob messages?
Because the consumers are slow processing, should I lower their prefetchLimit or use a round-robin dispatch policy? Which one is better?
And finally, in the ActiveMQ FAQ, I read this - "If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer.". So my question here is, does ActiveMQ guarantee that only 1 consumer will process the message (and therefore there will be only 1 answer to the producer), or not? When does the consumer acknowledge a message (in the default, automatic acknowledge settings) - when receiving the message and storing it in a session, or when the onMessage handler finishes? And also, because the consumers are so slow in processing, should I change some "timeout limit" so the broker knows how much to wait before giving the work to another consumer (this is kind of related to my previous questions)?
Not sure about others, but here are some thoughts.
First: I am not sure what your exact concern is. ActiveMQ does store messages in a data store; all data need not reside in memory in any single place (either broker or client). So you should actually be good in that regard. Earlier versions did require that all message ids fit in memory (not sure if that has been resolved), but even that memory usage would be low unless you had tens of millions of in-queue messages.
As to ObjectMessage vs blob: a raw byte array (blob) should be the most compact representation, but since all of these get serialized for storage, it only affects memory usage on the client. Prefetch mostly helps with access latency, but given that your consumers are slow to process, you probably don't need any prefetching; so yes, either set it to 1 or 2, or disable it altogether.
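The prefetch point can be made concrete with a toy model in plain Java (not ActiveMQ code; the class and method names are invented). The broker eagerly pushes up to the prefetch limit to each consumer; with slow consumers and a high limit, the first consumer hoards a large backlog while the others idle, whereas a limit of 1 leaves undispatched messages on the broker for whichever consumer frees up first. In real code the limit can be set on the connection URL, e.g. tcp://broker:61616?jms.prefetchPolicy.queuePrefetch=1.

```java
/**
 * Toy model of the broker's initial prefetch push: each consumer is
 * eagerly handed up to 'prefetch' messages; whatever fits in no
 * consumer's prefetch window stays queued on the broker.
 */
class PrefetchModel {
    /** Returns consumers+1 slots: per-consumer backlog, then messages left on the broker. */
    static int[] initialPush(int messages, int consumers, int prefetch) {
        int[] pending = new int[consumers + 1];
        for (int m = 0; m < messages; m++) {
            boolean pushed = false;
            for (int c = 0; c < consumers; c++) {
                if (pending[c] < prefetch) {
                    pending[c]++;        // dispatched to this consumer's buffer
                    pushed = true;
                    break;
                }
            }
            if (!pushed) {
                pending[consumers]++;    // stays queued on the broker
            }
        }
        return pending;
    }
}
```

With 100 messages, 4 consumers, and a prefetch of 1000, one consumer gets everything; with a prefetch of 1, each consumer holds one message and the remaining 96 wait on the broker, which is what you want when each message takes ~10 seconds to process.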
As to guarantees: the best that distributed message queues can guarantee is either at-least-once (with possible duplicates) or at-most-once (no duplicates, but messages can be lost). It is usually better to take at-least-once and have clients de-duplicate using client-provided ids. How acknowledgement is sent is defined by the JMS specification, so you can read more about JMS; this is not ActiveMQ specific.
And yes, you should set the timeout high enough that a worker can typically finish its work, including all network latencies. This can slow down re-delivery of dropped messages (if a worker dies), but that is probably not a problem for you.
