I am currently working on a proof of concept for a new system, but it depends on a transport agent executing before journaling takes place.
My question is: will a transport agent that adds text to the subject of all messages run before journaling executes?
Thanks
Yes, the transport agent should execute before journaling occurs.
I realize there is a method on MQConnectionFactory to make a consumer or producer attempt to reconnect if its connection is broken. However, I'm wondering whether one can do something similar for an application that is starting up and setting up its consumers and producers. The code I have right now will not recover if the server is down when my client application comes up.
Is there a common/recommended practice here?
My recommendation would simply be to use the tools provided by the Java language itself. For example, you could write a loop with exception handling that retries the initial connection or JNDI lookup a configurable number of times. It's hard to give more specific recommendations when you haven't provided any client code of your own.
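For illustration, a minimal sketch of such a retry loop using the plain JMS interfaces; the attempt count and delay are parameters you would make configurable:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

// A minimal sketch, not production code: retries the initial connection
// a fixed number of times with a fixed delay between attempts.
public final class StartupConnector {

    public static Connection connectWithRetry(ConnectionFactory factory,
                                              int maxAttempts,
                                              long delayMillis) throws JMSException {
        JMSException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return factory.createConnection(); // succeeds once the broker is up
            } catch (JMSException e) {
                lastFailure = e; // remember the failure and back off
                try {
                    Thread.sleep(delayMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw lastFailure;
                }
            }
        }
        throw lastFailure; // still down after maxAttempts
    }
}
```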
I am thinking of using Spring State Machine for a TCP client. The protocol itself is given and based on proprietary TCP messages with message id and length field. The client sets up a TCP connection to the server, sends a message and always waits for the response before sending the next message. In each state, only certain responses are allowed. Multiple clients must run in parallel.
Now I have the following questions related to Spring State Machine.
1) During the initial transition from disconnected to connected, the client sets up a connection via java.net.Socket. How can I make this socket (or the DataOutputStream and BufferedReader objects obtained from the socket) available to the actions of the other transitions?
In this sense, the socket would be some kind of global resource of the state machine. The only way I have seen so far would be to put it in the message headers. But this does not look very natural.
2) Which runtime environment do I need for Spring State Machine?
Is a JVM enough or do I need Tomcat?
Is it thread-safe?
Thanks, Wolfgang
There's nothing wrong with using event headers, but those are not really global resources, as a header exists only for the duration of event processing. I'd add the needed objects to the machine's extended state, which is then available to all actions.
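For example, a minimal sketch of a connect action that stores the socket in extended state; the host, port, and the String state/event types are assumptions:

```java
import java.net.Socket;
import org.springframework.statemachine.StateContext;
import org.springframework.statemachine.action.Action;

// Runs on the disconnected -> connected transition and stashes the
// socket where every later action can reach it.
public class ConnectAction implements Action<String, String> {

    @Override
    public void execute(StateContext<String, String> context) {
        try {
            Socket socket = new Socket("example.host", 4000); // assumption: your server address
            // Store the socket in the machine's extended state.
            context.getExtendedState().getVariables().put("socket", socket);
        } catch (Exception e) {
            // real code should handle this, e.g. by firing an error event
            throw new RuntimeException(e);
        }
    }
}

// In a later transition's action, read it back:
// Socket socket = context.getExtendedState().get("socket", Socket.class);
```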
You just need a JVM. By default, machine execution is synchronous, so there should not be any threading issues. The docs have notes on replacing the underlying executor with an asynchronous one (this is usually done when multiple concurrent regions are used).
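If you do later need asynchronous execution (e.g. for concurrent regions), the executor swap looks roughly like this; a sketch assuming a pre-reactive 1.x/2.x version of Spring State Machine:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.statemachine.config.EnableStateMachine;
import org.springframework.statemachine.config.StateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineConfigurationConfigurer;

// Only needed if you want asynchronous machine execution; by default
// the machine runs synchronously on the caller's thread.
@Configuration
@EnableStateMachine
public class AsyncMachineConfig extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineConfigurationConfigurer<String, String> config)
            throws Exception {
        config
            .withConfiguration()
                .taskExecutor(new SimpleAsyncTaskExecutor()); // swap in an async executor
    }
}
```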
We are using WebLogic 10.3.6.0 and IBM MQ 7.5.
The application is designed to send messages to a dead letter queue (in WebLogic) on redelivery. The redelivery happens because the first delivery failed due to some network issue or a database data source failure.
My Client wants a way to browse the messages in the dead letter queue from the application GUI and pull them for processing when the network issue or data source issue has been resolved.
What is the best way to go about this?
I came across QueueBrowser coupled with ActiveMQ or some other implementation. Is QueueBrowser possible with WebLogic? Please suggest the best ways to achieve this requirement.
Kindly pardon if my question is too naive. I am only a PL/SQL programmer.
Valerie is referring to the SYSTEM DLQ, and an application should never, ever write to it. Applications should have their own DLQ.
i.e., if your application queue is called 'TEST.Q1', then your application DLQ should be called 'TEST.Q1.DLQ'.
There is a whole long list of MQ tools here to view messages and manage your MQ environment.
Is the application actually designed to write to the DLQ? If so, that is a very poor design. The DLQ is for the queue manager and MQ software to place messages which can not be delivered. The application should not be writing to the DLQ.
As for how to view messages on DLQ, that can be done with the MQ Explorer GUI. Or to write a script, use the DLQ handler (runmqdlq) with a rules table for processing messages.
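As for the QueueBrowser part of the question: QueueBrowser is part of the standard JMS API, so it works with WebLogic JMS as well. A minimal sketch, assuming the session and queue come from your usual JNDI lookups:

```java
import java.util.Enumeration;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

// Browsing inspects messages without consuming them, so they stay on
// the queue and can still be pulled later for reprocessing.
public class DlqViewer {

    public static void listMessages(Session session, Queue dlq) throws Exception {
        QueueBrowser browser = session.createBrowser(dlq);
        Enumeration<?> messages = browser.getEnumeration();
        while (messages.hasMoreElements()) {
            Message msg = (Message) messages.nextElement();
            System.out.println(msg.getJMSMessageID()); // or render in your GUI
        }
        browser.close();
    }
}
```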
I do not have much experience with Oracle Service Bus; I am trying to design a logging solution with BigData.
From what I have read, the default log and report activities in OSB put the data into the domain's server log file or into the database configured for the server domain. If I want to put all the logs into a separate BigData database, I will need one of these approaches:
Java callout: use JMS or some other technology to send data to the BigData server.
Web service callout: create a separate web service to handle the logging.
Create a custom report provider to replace the default one in OSB Reporting.
Something else
Please give me some ideas about which method I should use, and please provide your reasons if you can. Thank you so much.
Isn't the logging framework in WebLogic based on Log4j? If so, you can use a JMSAppender (probably prudent to wrap it in an async Log4j appender if you can) and handle it however you want.
Or, if you're talking about the OSB Reporting framework, there are a few options:
Configure the default JMS reporting provider (which uses the underlying SOAINFRA database, which hopefully is set up to be something better than the default Derby instance), then write an MDB that pulls reports off the queue and inserts them into SAS BigData (a sketch of such an MDB appears at the end of this answer).
Turn the JMS provider off and use a custom provider, which can do anything you want. If you want, you can still do a two-step process, where the reporting provider itself puts reports on a JMS queue so it returns quickly, and a different MDB pulls messages off and persists them at its own pace.
I do not recommend a web service or database callout without an async step in the middle, because you need logging and reporting to be very quick and use as little resources for as short a period as possible.
You don't want logging to hog threads while you're experiencing load. I have seen entire buses brought down because of one hiccup, because the logging database suffered a performance blip, which caused a bunch of open threads trying to log to it, which caused thread starvation or timeouts, which caused more error logging...
If you have a buffer like a JMS queue, then you can handle peaks by planning ahead. You can say: "Actually, I want a JMS queue of 10,000 messages, and if that overflows for whatever reason, I want to (push the overflow to a separate queue on this other box) or (filter out all the non-essential messages) or (throw new messages away) or (action of your choice). Oh yeah, and if the logging database fails, then I will try three times to commit and, if that fails, move the message to this other queue." Or whatever you want.
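As promised, a minimal sketch of the MDB from the first option; the queue name "jms/ReportingQueue" and the BigDataStore class are hypothetical placeholders:

```java
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Drains the reporting queue at its own pace, so the bus itself is
// never blocked by the persistence step.
@MessageDriven(mappedName = "jms/ReportingQueue")
public class ReportPersistenceMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String report = ((TextMessage) message).getText();
                BigDataStore.insert(report); // persist to your BigData store
            }
        } catch (Exception e) {
            // Rethrowing makes the container redeliver (and eventually
            // dead-letter) the message instead of silently losing it.
            throw new RuntimeException(e);
        }
    }
}

/** Hypothetical stand-in for whatever client API your BigData store provides. */
final class BigDataStore {
    static void insert(String report) {
        // real implementation writes to your store
    }
}
```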
There are multiple ways to achieve this. You could use the report activity to push to JMS or use the log activity.
You can also write a small routine (either in OSB or outside it) that reads anything you are logging (via the log activity, but also the additional metadata that is logged when you turn on monitoring of OSB components) and does whatever is needed with it (such as pushing it to a database or BigData store).
The key is to avoid writing an explicit service call in each pipeline/flow; the above approach(es) use standard OSB/ODL* loggers.
*Oracle Diagnostic Logging
I have to check the IBM MQ queue manager status before opening a queue.
I have to create a requester app that checks whether the QMgr is active and then puts or gets messages from MQ.
Is it possible to check the status?
Please share some code snippets.
Thanks
You should NEVER have to check the QMgr before opening a queue. As I responded to a similar question today, the design proposed is a very, VERY bad design. The effect is to turn async messaging back into synchronous messaging. This couples message producers to consumers, introduces location and resolution dependencies, breaks clustering, defeats WMQ's load distribution and balancing, embeds network topology into the application, and makes the whole system brittle. Please do not blame WMQ for not working correctly after intentionally defeating all its best features except the actual queue/dequeue operations.
If your requestor app is checking that the QMgr is active, you are much better off using a multi-instance connection name and a layer of two or more functionally equivalent QMgrs that can access the cluster. So long as one of the QMgrs is up, the app will cycle between them until it finds one at which to connect.
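For illustration, a minimal sketch of setting a connection name list on MQConnectionFactory; the hosts and channel name are placeholders for your own:

```java
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

// With a connection name list, the client simply tries each address in
// turn instead of checking QMgr status first.
public class RequestorFactory {

    public static MQConnectionFactory create() throws Exception {
        MQConnectionFactory factory = new MQConnectionFactory();
        factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Two functionally equivalent QMgrs; the client cycles between them.
        factory.setConnectionNameList("mqhost1(1414),mqhost2(1414)"); // assumption
        factory.setChannel("APP.SVRCONN");                            // assumption
        return factory;
    }
}
```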
If your responder app is checking that the QMgr is active, you are much better off just attempting to connect. Responder apps should never fail over to a different QMgr since doing so breaks transactionality and may leave queues unserviced. Instead just ensure that each queue has at least two input handles from local responder apps that do not fail over across QMgrs. (It is OK if the QMgr itself fails over using hardware clustering or multi-instance QMgr though).
If the intent is to check that there's an open input handle on the queue before putting messages there a better design is to have the requesting app not care to which queue instance the messages are routed and instead use the instrumentation built into WMQ to either restart responder apps that lose their input handle, or to disable the queue when nothing's listening.