Use case: I need to send huge files (multiples of 100 MB) over a queue from one server to another.
At present, I am using an ActiveMQ Artemis server to send the huge file as a BytesMessage with the help of an input stream over the TCP protocol, with the retry interval set to -1 for unlimited failover retries. The main problem is that the consumer endpoint's connection is mostly unstable, i.e. it disconnects from the network because of its mobile nature.
So while sending a file to the queue, if the connection is dropped and later reconnected, the broker should resume from where the transfer was interrupted. For example, while transferring a 300 MB file to the consumer queue server, assume that 100 MB has been transferred when the connection drops; after reconnecting a while later, the process should resume by transferring the remaining 200 MB, not the whole 300 MB again.
My question is: which protocol (TCP, STOMP, or OpenWire) and which practice (BlobMessage, or BytesMessage with an input stream) are best to achieve this in ActiveMQ Artemis?
Apache ActiveMQ Artemis supports "large" messages which are streamed over the network (which uses Netty TCP). This is covered in the documentation. Note: this functionality is only for "core" clients. It doesn't work with STOMP or OpenWire. It also doesn't support "resume" functionality where the message transfer will pick up where it left off in the case of a disconnection.
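For illustration, a minimal sketch of the streaming approach with the core JMS client, assuming an existing session and producer and using the JMS_AMQ_InputStream property described in the large-message documentation (the class name and file path are illustrative):

    import javax.jms.BytesMessage;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;

    public class LargeFileSender {
        // Streams the file body to the broker in fragments instead of loading it into memory.
        public static void send(Session session, MessageProducer producer, String path) throws Exception {
            BytesMessage message = session.createBytesMessage();
            BufferedInputStream input = new BufferedInputStream(new FileInputStream(path));
            message.setObjectProperty("JMS_AMQ_InputStream", input); // core client reads and streams this
            producer.send(message);
        }
    }

On the consuming side, the matching JMS_AMQ_SaveStream property can be set on the received message to write the body straight to an OutputStream. As noted above, though, a dropped connection still restarts the whole transfer.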
My recommendation would be to send the file in smaller chunks as individual messages that will be easier to deal with in the case of network slowness or disconnection. The messages can be grouped together with a correlation ID or similar, and then the end client can take the pieces of the message and assemble them together once they are all received.
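A rough sketch of that chunking idea on the producer side (the chunkIndex/chunkCount property names and the 1 MB chunk size are just illustrative choices, not an Artemis convention):

    import javax.jms.BytesMessage;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.UUID;

    public class ChunkedFileSender {
        static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per message (illustrative)

        public static void send(Session session, MessageProducer producer, File file) throws Exception {
            String fileId = UUID.randomUUID().toString();
            long chunkCount = (file.length() + CHUNK_SIZE - 1) / CHUNK_SIZE;
            try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
                byte[] buffer = new byte[CHUNK_SIZE];
                int read, index = 0;
                while ((read = in.read(buffer)) > 0) {
                    BytesMessage msg = session.createBytesMessage();
                    msg.setJMSCorrelationID(fileId);            // groups the chunks of one file
                    msg.setIntProperty("chunkIndex", index++);  // position of this chunk
                    msg.setLongProperty("chunkCount", chunkCount);
                    msg.writeBytes(buffer, 0, read);
                    producer.send(msg);                         // small messages survive reconnects better
                }
            }
        }
    }

The consumer collects messages sharing the same correlation ID, orders them by chunkIndex, and writes the file out once chunkCount pieces have arrived; chunks received before a disconnection never need to be resent.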
Related
My experience with setting up Tibco infrastructure is minimal, so please excuse any misuse of terminology, and correct me where I'm wrong.
I am a developer in an organization where I don't have access to how the backend is set up for Tibco. However, we have bandwidth issues between our regional centers, which I believe is due to how it's set up.
We have a producer that sends a message to multiple "regional" brokers. However, these won't always have a client that needs to subscribe to the messages.
I have 3 questions around this:
For destination bridges: https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-174DF38C-4FDA-445C-BF05-0C6E93B20189.html
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
It's not clear in the documentation, if a bridge exists to a destination where there is no client consuming a message, does the message still get sent to that destination? I.e., will this consume bandwidth even with no client wanting it?
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
A bridge can be used to send messages from one destination to multiple destinations (queues or topics).
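For reference, bridges are defined statically in the EMS server's bridges.conf; a rough sketch with illustrative destination names might look like this:

    [topic:regional.orders]
      queue=regional.orders.archive
      topic=regional.orders.emea

Each entry names a source destination and one or more target destinations, and a message selector can optionally be attached to a target so that only matching messages are forwarded.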
Alternatively, Topics can be used to send a message to multiple consumer applications. Topics are not the best solution if a high level of integrity is needed (no message loss, queuing, etc.).
It's not clear in the documentation, if a bridge exists to a destination where there is no client consuming a message, does the message still get sent to that destination? I.e., will this consume bandwidth even with no client wanting it?
If the bridge destination is a queue, messages will be put in the queue.
If the bridge destination is a Topic, messages will be distributed only if there are active consumer applications (or durable subscribers).
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
This applies only to Topics (when there is no durable subscriber).
An alternative approach would be to use routing between EMS servers. In this approach, topic messages are sent to remote EMS servers only when there is a consumer connected to the remote EMS server (or if there is a durable subscriber).
https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-FFAAE7C8-448F-4260-9E14-0ACA02F1ED5A.html
I have a master/slave AMQ broker setup for JMS messaging. I have two servers that I would like to set up as master/slave durable consumers using Apache Camel. We've been achieving this by having both servers attempt to connect with the same client ID. One node handles all of the work, but if it goes down the other node connects and picks right back up on the work. This has been working fine for having a single consumer at a time, but it makes noise in the disconnected server's log files with the message:
ERROR [org.apache.camel.component.jms.DefaultJmsMessageListenerContainer]
(Camel (spring-context) thread #0 - JmsConsumer[global.topic.event]) Could
not refresh JMS Connection for destination 'global.topic.event' - retrying
using FixedBackOff{interval=5000, currentAttempts=12,
maxAttempts=unlimited}. Cause: Broker: broker - Client: client already
connected from tcp://xxx.xx.xx.xxx:xxxx
Is there a proper way to achieve the functionality I'm looking for? I was considering having the slave server ping the master to coordinate which one is connected, but I'd like to keep the implementation as simple as possible.
Convert your usage of topics on the consumer side to Virtual Topics. Virtual Topics allow you to continue to have existing message flows produce and consume from the topic, but also have consumers listen on specially named queues.
Once you are consuming from a queue, you can implement all the consumer patterns: exclusive consumer (which allows that hot-standby backup consumer), message groups, parallel consumers, etc.
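As a hedged sketch, assuming the topic is renamed to fit ActiveMQ's default VirtualTopic.> naming convention (or the broker is configured to treat global.topic.event as a virtual topic), the Camel consumer simply switches from the topic to its per-subscriber queue:

    import org.apache.camel.builder.RouteBuilder;

    public class EventConsumerRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Producers keep publishing to the topic VirtualTopic.global.topic.event;
            // each logical subscriber gets its own queue named Consumer.<id>.VirtualTopic.<topic>.
            from("activemq:queue:Consumer.eventService.VirtualTopic.global.topic.event")
                .to("log:event-received");
        }
    }

With the consumers on a queue, ActiveMQ's exclusive-consumer destination option then gives you the hot-standby behaviour without both nodes fighting over a single client ID.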
I'm trying to build a Mosquitto cluster, because Mosquitto is single-threaded and it seems it cannot handle lots of QoS 2 messages.
MQTT server comparison: Benchmark of MQTT Servers
I found that Mosquitto can use bridges to build a cluster (Cluster forming with Mosquitto broker), but I'm wondering whether having every Mosquitto instance subscribe to all messages from all the other servers will cause high overhead from internal message forwarding.
For example, if I have 10 Mosquitto brokers and each of them serves 1,000 messages, that is originally 10,000 messages in total. But messages are shared between brokers, so each message is sent to the other 9 brokers; that's a total of 1,000 x 9 x 10 = 90,000 messages of internal traffic.
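For context, a typical full-mesh bridge entry in mosquitto.conf looks roughly like this (the connection name and broker address are illustrative); the topic # both line is what makes every broker forward every message to every peer:

    connection bridge-to-broker2
    address broker2.example.com:1883
    topic # both 2

With 10 brokers you would carry one such entry per peer on each broker, which is exactly the 9-fold internal fan-out estimated above.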
Is there some benchmark for Mosquitto clustering? Or what's the general solution for sending lots of QoS 2 messages?
Thanks
We once set up an MQTT service platform that used Mosquitto as the broker, with 8 brokers bridged together, about 20k clients subscribed to 20k topics, QoS=0, and an average publish rate of 1k messages/sec at 100-2k bytes each. The bridges subscribed to and published all the topics, which introduced a huge forwarding latency, sometimes more than 2 minutes.
So now we simply broadcast all the publishes to each of the brokers; this does work.
But a bridge is not the same thing as a cluster: it does not behave like one logical MQTT broker that supports clustered sessions, load balancing, removing the single point of failure, and so on.
So I have implemented an autonomous Mosquitto cluster and did some performance testing with Tsung. Generally speaking, with a scenario of 30k subscribers / 2.5k pubs/sec, payload length = 744 bytes, QoS=1, the average request response time is a bit higher than with the bridge (5.1 ms vs 2.32 ms), but no messages were lost and the load was balanced.
You can find the detailed test report under mosquitt-cluster-bridge-benchmark.
So far I haven't found a solution for reading segmented messages with IBM's JMS implementation (without message grouping). See also: is IBM MQ Message Segmentation possible using JMS?
Is there any workaround yet for a JMS client to receive segmented messages?
For example, is it possible to configure an "MQ server component" to reassemble segmented messages into one single message for the JMS client? Other ideas?
If the total reassembled message stays within 100MB (i.e. the maximum allowed message size), then you could have an interim queue with a non-JMS MQ API application getting and reassembling the messages and then putting the large reassembled message onto a queue that the JMS application gets from. This would retain the smaller messages while they traverse through the MQ network, and they only become large (read: inefficient) messages at the last point before the application retrieves them.
However, if the total reassembled message is larger than 100MB, which may be the case if segmentation is in use, then the above solution will not help.
In fact, if the total reassembled message is larger than 100MB then you can't send it over a client connection anyway, in which case you'll need to make the application local to the queue manager.
If you are local to a queue manager, then an API exit that changes the underlying MQGET call made by the JMS layer may also be a possibility. You can only use this if you have a local queue manager, because client-side API exits are only supported in the C client. You could cross the SVRCONN channel regardless of the type of client at the other end of the socket, but you cannot send a message greater than 100MB over the client channel, so if the total reassembled message is greater than the channel's MAXMSGL then it can't be sent.
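To make the interim-queue idea above concrete, here is a rough sketch of such a non-JMS reassembly application using the IBM MQ classes for Java with MQGMO_COMPLETE_MSG; the queue manager and queue names are illustrative, and it assumes a local bindings connection:

    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class SegmentReassembler {
        public static void main(String[] args) throws Exception {
            MQQueueManager qmgr = new MQQueueManager("QM1");          // local bindings connection (illustrative name)
            MQQueue in = qmgr.accessQueue("SEGMENTED.IN", CMQC.MQOO_INPUT_AS_Q_DEF);
            MQQueue out = qmgr.accessQueue("REASSEMBLED.OUT", CMQC.MQOO_OUTPUT);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_COMPLETE_MSG | CMQC.MQGMO_WAIT;  // only return logically complete messages
            gmo.waitInterval = 30000;

            MQMessage msg = new MQMessage();
            in.get(msg, gmo);                                         // queue manager reassembles the segments
            out.put(msg);                                             // single large message for the JMS app

            in.close();
            out.close();
            qmgr.disconnect();
        }
    }

The JMS application then simply gets the already-reassembled message from the second queue, subject to the 100MB limit discussed above.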
Related Reading
Writing API Exits
API Exit Reference
I have set up a uniform distributed queue with WebLogic Server 12c. I am trying to achieve ordered delivery and high availability with a JMS distributed queue. In my prototype testing deployment I have two managed servers in the cluster, let us say managed_server1 and managed_server2. Each of these managed servers hosts a JMS server, namely JMS server1 and JMS server2 respectively. I have configured the JMS servers with a JDBC persistent store. I have enabled server affinity.
I have a producer running, such as java queuproducer t3://managed_server1. I send out 4 messages. From the WebLogic monitoring console I see there are 4 messages in the queue, since there are no consumers on the queue yet.
Now I shut down managed_server1.
Bring up a consumer to listen on java queuconsumer t3://managed_server2. This consumer cannot consume the messages, since the producer sent all the messages to JMS server1, and it is down.
Bring up managed_server1 and start a consumer listening on t3://managed_server1; I can get all the messages.
Here is my problem: say managed_server1 went down and never came back up, do I lose all my messages? Also, if there is another producer sending messages via java queuproducer t3://managed_server2, then the order of messages by time between these producers is not guaranteed.
I am a little lost; am I missing something? Can unit of order help me overcome this? Or should I use a distributed topic instead of a distributed queue, where all the JMS servers will receive all the messages from the producers? But if the one JMS server my consumer is listening to fails (there is only one consumer in my application), when I switch over to the other JMS server I might start getting messages from the beginning, not from where I left off.
Any suggestions regarding the same will be helpful.
Good question!
" Here is my problem say if the managed_server1 went down then there it never came back up, do i loose all my messages. "
Ans - No, you do not lose all your messages; they are stored in the JDBC store configured for the JMS server deployed on managed_server1. If you want the messages sent to managed_server1 to be consumed from managed_server2, you need to configure JMS migration.
" Also if there is another producer sending messages to java queuproducer t3://managed_server2 then order of messages based on the time between these producers are not guanranteed. Can unit of order help me to overcome this."
Ans - If you want the messages to be consumed strictly in a certain order, then you will have to make use of unit of order (UOO). When messages are sent using UOO, they are all routed to one of the several UDQ member destinations; if that destination fails midway and migration is enabled, the pending messages are migrated to the next UDQ member and new UOO messages are also delivered to the new destination.
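A minimal producer-side sketch of setting the unit of order programmatically (the UOO name "orders-42" and the helper class are illustrative; UOO can also be configured administratively on the connection factory or destination):

    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import weblogic.jms.extensions.WLMessageProducer;

    public class OrderedProducer {
        public static void sendInOrder(Session session, Destination udq, String text) throws JMSException {
            MessageProducer producer = session.createProducer(udq);
            // All messages sharing this unit-of-order name are pinned to one UDQ member
            // and delivered to consumers strictly in the order they were produced.
            ((WLMessageProducer) producer).setUnitOfOrder("orders-42"); // illustrative UOO name
            producer.send(session.createTextMessage(text));
            producer.close();
        }
    }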
Useful links -
http://www.youtube.com/watch?v=B9J7q5NbXag
http://www.youtube.com/watch?v=_W3EJ8p35lI
Hope this helps.