ActiveMQ offline message transfer on database level - jdbc

We are running an in-house EAI system using ActiveMQ as the message broker with JDBC persistence.
We have a cold-standby failover solution in which each node has its own database schema (for several reasons).
Now, if the primary goes down and we start up the backup, we would like to transfer all undelivered messages from the one node to the other at the database level.
A look at the table "ACTIVEMQ_MSGS" made us unsure whether we can do this without drawbacks or side effects:
- There is a column "ID" without any DB sequence behind it - can the backup broker handle this?
- The column "MSGID_PROD" contains the host name of the primary server - is there a problem if the message is processed by a broker with a different name?
- There is a column "MSGID_SEQ" (which seems to be "1" all the time) - what does this mean? Can we keep it?
Thanks and kind regards,
Michael

I would raise a big red flag about this idea. Yes, in theory you could succeed with this, but you are not supposed to touch the JDBC data piece by piece.
ActiveMQ has a few different patterns for master/slave HA setups: either a shared store for both the master and the slave, or a replicated store (LevelDB+ZooKeeper).
Even a shared JDBC store could be replicated, but at the database level.
OK, so you want a setup other than the official ones; fine. There is a way, but not using raw SQL commands.
By "primary goes down", I assume the primary database is still alive to copy data from. Fine. Then have a spare installation of ActiveMQ ready (on a laptop, on the secondary server, or anywhere safe). Configure that instance to connect to the "primary database" and ship all messages over to the secondary node using a "network of brokers". From the "spare" broker, configure a network connection to the secondary broker and make sure you set the "staticBridge" option to true. That will make the "spare" broker hand over all unread messages to the secondary broker. Once the spare broker is done, it can be shut down and the secondary should have all messages. This way you reuse the logic of whatever ActiveMQ version you have, and you need not worry about ID sequences and so forth.

Related

Need to have two outbound queues on two different servers and queue managers act as primary and secondary

We need two outbound queues on two different servers, with queue managers acting as primary and secondary respectively. If sending to the primary fails, we want to connect to the secondary, but as soon as the primary comes back up, the application must send to the primary again. Can any Spring Boot configuration help? We are using WebSphere MQ.
For a Java JMS messaging application in Spring Boot, a connection name list allows setting multiple host(port) endpoints as comma-separated pairs.
The connection factory will then try these endpoints in turn to find an available queue manager. Unsetting WMQConstants.WMQ_QUEUE_MANAGER means the connection factory will connect to the available queue manager on a given host(port) endpoint regardless of its name.
You'll need to think about the lifecycle of the connection factory, as the connection will remain valid once it has been established. In the scenario where the connection factory is bound to the queue manager on hostB(1414) (the second in the list) and the queue manager on hostA(1414) (the first in the list) then becomes available again, nothing would change until the next connection attempt.
It's important to note that where the queue manager endpoints in the connection name list are unrelated, the queues and messages available will not be the same. Multi-instance queue managers allow a queue manager instance to fail over between two host(port) endpoints. For container deployments, IBM MQ Native HA ensures your messaging applications are routed to an active instance of the queue manager.
A CCDT allows for much more sophisticated connection-matching options with IBM MQ than the connection name list method outlined above. For JMS applications you'll need to set the connection factory to use the CCDT, e.g., cf.setStringProperty(WMQConstants.WMQ_CCDTURL, "") with the value set to the location of the CCDT JSON file.
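To make the connection name list concrete, here is a hedged sketch using the IBM MQ classes for JMS (the JmsFactoryFactory API); the host names and channel name are placeholders, and WMQ_QUEUE_MANAGER is deliberately left unset as described above:

    import javax.jms.JMSException;

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class FailoverConnectionFactory {
        public static JmsConnectionFactory create() throws JMSException {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();

            // Primary endpoint first, secondary second; tried in order on each connect.
            cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST,
                                 "hostA(1414),hostB(1414)");
            cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
            cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN"); // placeholder

            // WMQ_QUEUE_MANAGER deliberately left unset so the factory accepts
            // whichever queue manager answers on the endpoint it reaches.
            return cf;
        }
    }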
It's probably worth me putting up an answer rather than more comments, though most of the ground has been covered by Rich's answer, augmented by Rich's, Morag's and my comments.
Client Reconnection seems the most natural fit for the first part of the use-case, and I'd suggest using a connection name list rather than going to the complexity of using a CCDT for this job. This will ensure that the application connects to your primary queue manager if it's available and the secondary queue manager if the primary isn't available, and will also move applications to the secondary if the primary fails.
Once a connection has been made, in general it won't move. The two exceptions are that in a client reconnection configuration, a connection will be migrated to a different queue manager if communication to the first is broken, and in the context of uniform clusters, connections may be requested to migrate to another queue manager to balance load over the cluster. There is no automatic mechanism that I can think of which would force all the connections back from your secondary queue manager to the primary - you'd need to either do something with the application, or in a client reconnect setup you could bounce the secondary queue manager.
(You could use this forum to request such a feature, but I can't promise either that the request would be accepted or that it would be rapidly acted on.)
If you want to discuss this further, I'd suggest that more dialogue is probably needed to help us understand your scenario properly, so that we can make the most helpful recommendations. The MQ discussion forum may be a useful place for such dialogue, as this doesn't fit well (IMHO) with the StackOverflow model.
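As a follow-up to the connection-list sketch above, enabling client reconnection is, to the best of my knowledge, a matter of two more factory properties; the 30-minute timeout here is just an example:

    import javax.jms.JMSException;

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class ReconnectConfig {
        // Enables automatic client reconnection on the factory built earlier.
        static void enableReconnect(JmsConnectionFactory cf) throws JMSException {
            // Allow reconnection to any endpoint in the connection name list...
            cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS,
                              WMQConstants.WMQ_CLIENT_RECONNECT);
            // ...and keep retrying for up to 30 minutes (value in seconds).
            cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 1800);
        }
    }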

Data replication in Micro Services: restoring database backup

I am currently working with a legacy system that consists of several services which (among others) communicate through some kind of Enterprise Service Bus (ESB) to synchronize data.
I would like to gradually move this system toward a microservices architecture. I am planning to reduce the dependency on the ESB and use more of a message broker like RabbitMQ or Kafka. Due to some resource/existing-technology limitations, I don't think I will be able to completely avoid data replication between services, even though I should be able to clearly define a single service as the data owner.
What I am wondering now, how can I safely do a database backup restore for a single service when necessary? Doing so will cause the service to be out of sync with other services that hold the replicated data. Any experience/suggestion regarding this?
Have your primary database publish an event every time a database mutation occurs, and let the replicating services subscribe to these events and apply the same mutation to their replicated data.
You already use a message broker, so you can leverage your existing stack for broadcasting the events. By having replication done through events, a restore being applied to the primary database will be propagated to all other services.
Depending on the scale of the backup, there will be a short period where the data on the other services will be stale. This might or might not be acceptable for your use case. Think of the staleness as some sort of eventual consistency model.
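As a minimal sketch of that idea, assuming RabbitMQ (which the question mentions) and the standard RabbitMQ Java client; the exchange name and event format are made up for illustration:

    import java.nio.charset.StandardCharsets;

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class ChangeEventPublisher {
        private static final String EXCHANGE = "data-events"; // illustrative name

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // Fanout exchange: every subscribed service sees every mutation event.
                channel.exchangeDeclare(EXCHANGE, BuiltinExchangeType.FANOUT, true);
                String event = "{\"entity\":\"customer\",\"id\":42,\"op\":\"UPDATE\"}";
                channel.basicPublish(EXCHANGE, "", null,
                                     event.getBytes(StandardCharsets.UTF_8));
            }
        }
    }

Each replicating service binds its own queue to the exchange and applies the mutations idempotently, so that events replayed while (or after) a backup is restored converge to the same state.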

ØMQ N-to-M message queue

I am assessing whether we can replace our message-queue middleware with ØMQ.
I have two sets of servers.
The servers in the first set don't talk to other servers in the same set; they only append requests to a specific message queue.
The servers in the second set don't talk to other servers in the same set; they only receive requests from a specific message queue and handle them.
It looks like a producer-consumer model.
And I think it can be replaced by the ØMQ's freelance pattern http://zguide.zeromq.org/page:all#Brokerless-Reliability-Freelance-Pattern.
But the questions are:
How to support dynamic discovery for both server & clients?
How to support dynamic discovery for both server & clients?
There are probably a hundred ways you could implement that, and it greatly depends on your situation. If all the servers will always be on the same LAN, you could bootstrap using the broadcast address on the local network and ask all responders who they are. Quick and dirty.
I would personally implement a bootstrap service that everyone knows about. All nodes can ask this always-available service who is 'online' for the type of server they're after.
Another option: you could use pub-sub. This requires a central publisher. Newly connecting nodes notify the publisher, which notifies all other nodes of the join, possibly including the new node's ID, ip:port (if desired), etc. All nodes can still communicate if the publisher crashes, since it is only used for global notifications, and a backup publisher can be used to make the system failsafe. Each node can also send heartbeats to the publisher, with the publisher notifying all other nodes when a node leaves or crashes.
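A minimal sketch of the central-publisher variant, assuming the JeroMQ Java binding; the ports and message format are illustrative. Nodes announce themselves on the PULL socket, and the publisher rebroadcasts membership changes to all SUB sockets:

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class DiscoveryPublisher {
        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                // Broadcasts membership changes to all subscribed nodes.
                ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
                pub.bind("tcp://*:5556");

                // Nodes announce joins/leaves/heartbeats here.
                ZMQ.Socket announce = ctx.createSocket(SocketType.PULL);
                announce.bind("tcp://*:5557");

                while (!Thread.currentThread().isInterrupted()) {
                    // e.g. "JOIN worker-3 10.0.0.7:6001" from a newly started node
                    String msg = announce.recvStr();
                    pub.send("MEMBERSHIP " + msg); // rebroadcast to everyone
                }
            }
        }
    }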

Multiple WebSphere Application Servers attached to a single WebSphere MQ failing

Issue:
Having multiple consumer applications, each with an activation specification attached to a single queue, on distributed VM servers is causing a null payload in an MQ message.
Note: see the notes at the bottom. There is no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. One application is a publisher and the other two are consumers, all attached to a single queue manager and queue.
The two consumer applications pull off the messages and process them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to MQ: when I deploy the publisher on server 2 alongside consumer 2, then consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications, deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue on a queue manager installed on another machine.
Goal:
Distributed computing, scaling up against large number of messages.
I'm thinking this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use an MQ link, but I don't see why I would need to use a service integration bus.
Supporting Documentation:
[MQ Link][1]
UPDATE: I fixed the problem; it was related to a combination of a class loader issue and duplicate classes. See the solution notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified the question, and added the overall goal.
- Added reference notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on
separate server instances bound to one local queue?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening on the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each to the topic being published by your publisher.
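For illustration, a per-application EJB 3.0 MDB bound to its own queue might look like the sketch below; the destination JNDI name is hypothetical, and activation-spec binding details vary between application servers (on WebSphere the MDB is typically bound to an activation specification in the admin console):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // One MDB per application, each bound to its own queue that subscribes to the
    // publisher's topic. Two MDBs on the same queue would instead compete for
    // messages, with each message delivered to exactly one of them.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/App1Queue") // hypothetical name
    })
    public class App1Consumer implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String body = ((TextMessage) message).getText();
                    // process the message here
                }
            } catch (JMSException e) {
                throw new RuntimeException(e); // force redelivery when transacted
            }
        }
    }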
Addition:
I suspect that, for load balancing, the problem you may be facing is this: when your first application gets the message, it doesn't issue a commit. So there will be an uncommitted message in the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing, it issues a commit, but then it is again ready to pick up a message and hence issues another get.
In my architecture, we have implemented load balancing using multiple queue managers, like below:
- Create 3 queue managers, say GatewayQM, App1QM and App2QM.
- Keep the three queue managers in the same cluster.
- Create an alias queue (shared in the cluster) in GatewayQM and have your putting application put messages on the gateway queue.
- Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and serves as a proper load balancer.
This specific problem was caused by a code issue in combination with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster would fail; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration, binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However, points do go to nitgeek's solution referenced above if you are looking for an extremely high-volume solution. It's important to understand that a single queue can have a very high depth and it takes a lot to fully utilize one. My current configuration is a good starting point for quick setup and distributed processing using multiple MDBs.

Cache values in Java EE

I'm building a simple message delegation application. Messages are sent on both ends via JMS. I'm using an MDB to process incoming messages, transform them, and send them to a target queue. Unfortunately, the same message can be sent to the incoming queue more than once, but it is not allowed to forward duplicates.
So what is the best way to accomplish that?
Since there can be multiple MDBs listening on the incoming queue, I need a single cache where I can store the unique message UUIDs of incoming messages for at least an hour. How should this cache be accessed? Via a singleton/static class (I'm running Java EE 5 and thus don't have the @Singleton annotation)?
In addition, I think all operations must be synchronized, right? Does that harm performance too much?
@Ingo: are you OK with a database solution? You could use a full-fledged DB server or a simple Apache Derby database for this.
If so, you can have a simple table where you store each message's unique ID and check against it for uniqueness. This solution has the following benefits (a sketch follows after the list):
- Simple code.
- No need for a time-bound cache (1 hour); you can check a message's uniqueness forever.
- A persistent record of which messages came in.
- No need for expensive synchronization; you can rely on the DB isolation level for consistency.
- A centralized solution for your possibly many deployments of the application.
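A minimal sketch of that table-based check (table and column names are illustrative): the primary-key constraint makes the INSERT itself the atomic "have we seen this?" test, which is what removes the need for application-level synchronization. Most JDBC 4 drivers, Derby included, surface key violations as SQLIntegrityConstraintViolationException:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.SQLIntegrityConstraintViolationException;

    public class MessageDeduplicator {
        // Assumes a table like:
        //   CREATE TABLE processed_messages (
        //       msg_uuid    VARCHAR(64) PRIMARY KEY,
        //       received_at TIMESTAMP
        //   )
        // Returns true if this UUID has not been seen before.
        public static boolean firstTimeSeen(Connection conn, String msgUuid)
                throws SQLException {
            String sql = "INSERT INTO processed_messages (msg_uuid, received_at) "
                       + "VALUES (?, CURRENT_TIMESTAMP)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, msgUuid);
                ps.executeUpdate();
                return true;  // new message, safe to forward
            } catch (SQLIntegrityConstraintViolationException duplicate) {
                return false; // already processed, drop it
            }
        }
    }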
