I have a clustered WebLogic environment with 2 servers.
The source drops JMS messages into the queues on both servers.
My service, however, is designed to consume these messages only at a particular time of day, when it is activated by a "trigger.txt" file: a file adapter picks up the trigger file and activates the BPEL process, which then starts consuming JMS messages.
However, the problem is that if the adapter on server 1 picks up the trigger.txt file, only the JMS messages in server 1's queue are consumed; the messages on the other server are left untouched, and vice versa.
I want the messages to be consumed on both servers.
Is there any solution to this?
This isn't a WLS JMS issue.
So the solution will lie within your BPEL implementation and your approach of leaving a trigger.txt file behind.
I am assuming you are removing the trigger.txt once it's picked up by a BPEL instance.
You will have to change this logic to, say, include a timestamp in the file name (something like trigger_<timestamp>.txt) so that each BPEL instance picks up the file, marks internally that it has handled that particular file, and doesn't process it again.
Or create 2 files, one for each server, but this will get messy if you add, say, an extra server later on.
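For illustration, a minimal sketch of the timestamp idea, assuming each server keeps its own record of trigger files it has already handled (the class, path, and file names here are made up):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Each managed server remembers which trigger files it has already acted on,
// so a trigger like trigger_20240101-0200.txt can activate JMS consumption on
// every server once, instead of being deleted by the first server to see it.
public class TriggerTracker {

    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time this server sees the given trigger file. */
    public boolean shouldProcess(Path triggerFile) {
        return processed.add(triggerFile.getFileName().toString());
    }

    public static void main(String[] args) {
        TriggerTracker tracker = new TriggerTracker();
        Path trigger = Paths.get("/shared/triggers/trigger_20240101-0200.txt"); // illustrative path
        if (Files.exists(trigger) && tracker.shouldProcess(trigger)) {
            System.out.println("Activating JMS consumption on this server");
        }
    }
}
```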
The other option is for WLS to redirect the JMS messages to the server which has an active consumer; this would, however, affect your ability to process the JMS messages in parallel on both servers.
I'm running ActiveMQ 5.16.3, and I'm testing a Java-based order management application which runs across several JVMs. These JVMs each do work to push orders through a status pipeline. For example, one JVM creates orders, one schedules orders, and so forth. JDBC is used to store data, and JMS is used to pass work between the JVMs. One JVM will read work from the database and put up to 5000 messages into a JMS queue for another JVM to pick up and do its own work. I am running just one ActiveMQ server with many JMS queues. I have not changed how ActiveMQ stores messages, so the default is being used (which should be KahaDB). JDBC is used only by the Java application.
We support several JMS vendors other than ActiveMQ (such as WebLogic JMS and IBM MQ). However, only with ActiveMQ am I finding that, for longer-running or high-volume tests, the JMS queues start to back up and sometimes have hundreds of thousands of messages in them. There is nowhere near that much work in the system, so something else is going on. Via JMX I've confirmed that the ActiveMQ console is correctly reporting the numbers. This behavior seems random in that it's not all JVMs doing this (though all JVMs conceptually do the same thing), and if I stop the work generators (so that the JVMs only process what is already in the queue), the queues usually empty out quickly.
For example, if I want to make 10k orders, 10k messages go into the first JMS queue for the create-order JVM to process. This JVM then updates the JDBC database to create the orders and inserts records that the next JVM (in this case, schedule order) will pick up. The schedule JVM reads from the database and, based on what work it sees, puts messages into the next JMS queue (only up to 5k, then it waits for the queue to empty, then fetches 5k more) for that JVM to process. What I am seeing is that the schedule-order JMS queue fills up far past 10k messages.
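For illustration, the feeder loop is conceptually like this (a heavily simplified sketch rather than our real code; the QueueBrowser emptiness check stands in for however the depth is actually tested):

```java
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

// Puts up to BATCH_SIZE messages on the queue, waits for the consumer side
// to drain it, then fetches and sends the next batch.
public class BatchFeeder {

    private static final int BATCH_SIZE = 5000;

    public void feed(ConnectionFactory factory, String queueName, WorkSource work) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            MessageProducer producer = session.createProducer(queue);
            while (work.hasMore()) {
                for (int i = 0; i < BATCH_SIZE && work.hasMore(); i++) {
                    producer.send(session.createTextMessage(work.next()));
                }
                waitUntilEmpty(session, queue);
            }
        } finally {
            connection.close();
        }
    }

    private void waitUntilEmpty(Session session, Queue queue) throws Exception {
        while (true) {
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> pending = browser.getEnumeration();
            boolean empty = !pending.hasMoreElements();
            browser.close();
            if (empty) {
                return;
            }
            Thread.sleep(1000); // poll once per second
        }
    }
}

// Minimal stand-in for the database read so the sketch is self-contained.
interface WorkSource {
    boolean hasMore();
    String next();
}
```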
My studies have led me to the possibility of uncommitted reads and concurrency issues, but I've come to a dead end. Does anyone have any thoughts?
My application is a Spring Boot microservice listening to a RabbitMQ queue.
The queue receives messages from different sources.
The requirement is that when the application server is going down (this could happen for many reasons: maybe we brought the site down, or we are deploying updated software onto our application server), we would like the application to finish processing the current message. As of now, we lose the message that is currently being processed.
How can I achieve this?
The default shutdownTimeout is 5000ms; you can increase it.
You should not, however, lose any messages; the message should be requeued (unless you are using AcknowledgeMode.NONE, which is generally a bad idea).
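For example, with a manually configured listener container (a minimal sketch; the bean and queue names are just placeholders):

```java
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ListenerConfig {

    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("orders.queue");            // placeholder queue name
        container.setAcknowledgeMode(AcknowledgeMode.AUTO); // ack only after the listener returns
        container.setShutdownTimeout(30_000);               // allow up to 30s for the in-flight message (default 5000 ms)
        container.setMessageListener((MessageListener) message ->
                System.out.println("processing " + message));
        return container;
    }
}
```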
Question:
Is it possible to copy MQ Messages from one Queue Manager/Queue to a different Queue Manager/Queue?
Scenario:
I have a "PROD" Queue Manager and when it receives a Message on it's Queues I would like to "copy" the Message to a queue on a "TEST" Queue Manager.
Requirements:
The original message must be left on the PROD queue to be processed as normal.
This must be an automated process (there are lots of messages during a day); I could not intervene on a message-by-message basis.
If at all possible I would like this to be implemented by some native MQ functionality rather than an ad hoc program/script.
The copying must be as near to real time as possible.
Must work with MQ version 7.0.2.1(!). This cannot be changed.
Must run on Red Hat Enterprise Linux Server release 5.11 (Tikanga). Again, can't be changed.
I'm no MQ expert, so use small words please.
Thanks in advance
The only problem with the technote pointed out by gouda is that MQ will modify the MsgId and CorrelId of each replicated message.
If the MsgId and/or CorrelId fields are important then the only other option is an MQ API Exit that replicates the message. You may need a commercial product like MQ Message Replication.
The next question is how are you going to move the message from a PROD queue to a TEST queue? You definitely do NOT want to create channels between a PROD queue manager and a TEST queue manager.
There are lots of tools that can offload PROD messages to a file; you can then move the file to your TEST environment and load the messages into a TEST queue. Here is a list of MQ tools that can do it. The 2 tools you should try out are MQ Batch Toolkit and QLoad.
Personally, I would create a scheduled task (crontab) to run every night at midnight to offload the messages, and I would make the filename include the date and time. The last steps of the script would be to zip/compress the file and delete the original file (because the data you offload could be massive).
Hence, any time you want a particular day's PROD messages, just copy the file to your TEST server, unzip/uncompress it, and load it into the queue.
All you need is the mqadmin stuff and this technote.
Issue:
Having multiple consumer applications with activation specifications attached to a single MQ queue, on distributed VM servers, is causing a null payload in an MQ message.
Note: see the solution notes at the bottom. There was no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. 1 application is a publisher, and the other 2 applications are consumers attached to a single MQ queue manager and queue.
The 2 consumer applications are pulling off the messages and processing them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to MQ: confirmed by deploying the publisher on server 2 with consumer 2, at which point consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications, deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue on a queue manager installed on another machine.
Goal:
Distributed computing, scaling up against a large number of messages.
I'm thinking this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use an MQ link, but I don't see why I would need to use service integration bus messaging here.
Supporting Documentation:
MQ Link
UPDATE: I fixed the problem; it was related to a combination of a class loader issue and duplicate classes. See the solution notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified question and added overall goal.
- Referenced notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on separate server instances bound to one local MQ?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening on the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each to the topic being published by your publisher.
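For instance, on the publishing side (JMS 1.1 style; the topic name is just a placeholder), the producer publishes to a topic rather than a queue, and each consuming application's MDB then listens on its own subscription queue:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

// Publish to a topic instead of a queue so that every subscribing
// application receives its own copy of each message.
public class OrderPublisher {

    public static void publish(ConnectionFactory factory, String payload) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("ORDERS.TOPIC"); // placeholder topic name
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage(payload));
        } finally {
            connection.close();
        }
    }
}
```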
Addition:
I suspect that, for load balancing, the problem you may be facing is this: when your first application gets the message, it doesn't issue a commit. So there will be an uncommitted message in the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing it issues a commit, but then it is again ready to pick up a message, and hence it issues another get.
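In JMS terms, what I am describing is a transacted consume, roughly like this (the queue name is illustrative):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// A transacted consumer: between receive() and commit() the message counts
// as uncommitted work on the queue, which is the state described above.
public class TransactedConsumer {

    public static void consumeOne(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("APP.WORK.QUEUE"); // illustrative name
            MessageConsumer consumer = session.createConsumer(queue);
            Message message = consumer.receive(5000); // wait up to 5s
            if (message != null) {
                // ... process the message ...
                session.commit(); // only now is the message really gone from the queue
            }
        } finally {
            connection.close();
        }
    }
}
```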
In my architecture, we have implemented load balancing using multiple queue managers like below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and ask your putting app to put messages on the gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and serves as a perfect load balancer.
This specific problem was caused by a code issue combined with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster would fail; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration, binding multiple activation specifications to a single MQ queue in a cluster to provide distributed computing is a correct solution.
However, "points" do go to nitgeek's solution referenced above if you are looking for an extremely high-volume solution. It's important to understand that a single MQ queue can have a very high depth and it takes a lot to fully utilize one. My current configuration is a good starting point for quick configuration and distributed processing using multiple MDBs.
I'm a web developer who ended up doing some J2EE development (newbie). I sincerely need this theory confirmed.
I have been given the privilege of delivering a message from our system (producer) to the SOA enterprise service bus (consumer) when the user hits the save button. The information cannot be lost or go undelivered, and the delivery order must be kept.
Environment:
JBoss EAP 5.1 as the producer.
The JNDI server is the ESB (maybe standard).
JBoss ESB as the consumer.
My weapon of choice is JMS, point-to-point, due to its asynchronous nature.
When the producer is about to send the message, some problems can occur:
The ESB is down, causing a JNDI exception.
The queue manager is for some reason not awake or is wrongly configured. This should cause some JMS exception.
A network hiccup, causing a JMS error.
So I'm looking for some failover pattern. Here is my suggestion:
Add an internal JMS queue to which the message is initially added.
Add an MDB that listens to the internal queue and tries to send the message to the target queue (ESB).
If it fails in any way, log a fatal error and send an email to the cool support people.
This should give a reliable pattern where a message remains on the internal queue until processed by the MDB.
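To make that concrete, here is roughly what I picture for the MDB (just a sketch; the JNDI and queue names are made up, and I'm assuming container-managed transactions so that a failed send is rolled back and the message stays on the internal queue):

```java
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Listens on the internal queue and forwards each message to the ESB target
// queue. If the send fails, the runtime exception rolls the delivery back so
// the message stays on the internal queue for redelivery.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/internalOutbound") // made-up name
})
public class ForwardingBean implements MessageListener {

    @Resource(mappedName = "ConnectionFactory") // made-up JNDI names
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "queue/esbTarget")
    private Queue targetQueue;

    @Override
    public void onMessage(Message message) {
        try {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(targetQueue);
                producer.send(message);
            } finally {
                connection.close();
            }
        } catch (Exception e) {
            // Log fatal / alert support here, then rethrow so the container
            // rolls back and redelivers instead of losing the message.
            throw new RuntimeException("Forwarding to ESB failed", e);
        }
    }
}
```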
Please advise.
Best Regards
ds
Well, a 'temporary' queue is not a totally bad idea, but between taking the message off one queue and putting it on another you'll have a potential window of risk. Even though that window is close to nothing, what would happen if you got some failure right there and then? You'd have to put the message back on the first queue (and there you'd run into the problem of getting it back in the correct order: nasty stuff!) or hold on to it in some way until you could put it on the other queue (which in turn can be cumbersome if you, for example, ran into some failure situation).
A more stable solution would be to put the data in a database with a queue-order column. You can then select the data in the correct order, send it to the new queue, and finally flag it as 'done', or even (better?) remove the row from the database.
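A rough sketch of that approach (the outbox table, its columns, and the EsbSender helper are made-up names, not a specific API):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Drains pending rows in queue order, sends each to the target queue,
// and marks the row done only after a successful send.
public class OutboxForwarder {

    private final DataSource dataSource;
    private final EsbSender sender; // stand-in for whatever puts the message on the queue

    public OutboxForwarder(DataSource dataSource, EsbSender sender) {
        this.dataSource = dataSource;
        this.sender = sender;
    }

    public void drain() throws Exception {
        try (Connection db = dataSource.getConnection();
             PreparedStatement select = db.prepareStatement(
                 "SELECT id, payload FROM outbox WHERE done = 0 ORDER BY queue_order");
             PreparedStatement markDone = db.prepareStatement(
                 "UPDATE outbox SET done = 1 WHERE id = ?")) {
            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    sender.send(rs.getString("payload"));  // send to the ESB queue first
                    markDone.setLong(1, rs.getLong("id")); // then flag the row as done
                    markDone.executeUpdate();
                }
            }
        }
    }
}

// Minimal interface so the sketch is self-contained without a concrete JMS setup.
interface EsbSender {
    void send(String payload) throws Exception;
}
```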