Question about dbConn.executeCachedQuery(SQLStatement) on Mirth Connect Interface Engine - Oracle

Because the maximum number of processes and sessions in an Oracle DB is set too low, the following error sometimes occurs in Mirth:
DBConnection - java.sql.SQLException: Listener refused the connection with the following error:
ORA-12516, TNS:listener could not find available handler with matching protocol stack
triggered by dbConn.executeCachedQuery(SQLStatement) on the DatabaseConnection class in Mirth.
So these are my questions:
Is there any way to throw this response/exception in the channel?
Is all data of the SQL query "lost" if this error occurs, or is there an automatic retry?
And is there a best practice for handling this (e.g. checking the connection first with a getConnection() method)?

I'll answer your questions in order:
1) If you are using the JavaScript connector, you should wrap the connection setup in a try/catch. In the catch block, log the error (in Mirth's JavaScript context, something like logger.error(exception)).
If you are using the DB connector, the exception should be thrown and logged automatically. To make sure logging is enabled at the channel level, open the Mirth Connect Server Manager, click the Server tab, and ensure that Channel Log Level is set to at least Error.
2) The way Mirth Connect works, every time a message is initiated it hits certain points of the Mirth DB to save the state of the message at that point in time; that is how Mirth guarantees message delivery. With that being said, you can always retry the send manually. Otherwise, if you are using the DB connector, there is an option that handles this for you under the Database Reader Settings section. It lets you choose the number of retries as well as the retry interval in milliseconds. When I was working there, the default was 3 retries at 10-second intervals.
3) Use the default database connector; everything is already built in for you. Put any extra processing in the transformer to handle anything else. Not reinventing the wheel when everything is already built is the best practice.
If you insist on a code solution, make sure all of your code is in a try/catch, and make sure your catch block actually logs the exception.

Related

Azure Event Hub connection is closed error

I see something like the following error while using Azure Event Hub to send event messages. But in the Azure portal, the metrics show that the event messages are delivered to the event hub, so I'm puzzled about what this error message means.
As I read from the azure doc (https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-amqp-troubleshoot), it said "You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes."
The doc also said "You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link."
What should be done about this error message? Although the event messages can be sent, I worry that there may be some potential issue here.
" Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null} "
I found that if I call the close() method on EventHubProducerClient (following the sample code in https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send), this error no longer seems to appear. However, doing so means a new EventHubProducerClient has to be created every time an event needs to be sent. I'm not sure whether that creates another problem (such as the time required to create each new EventHubProducerClient, and memory consumption), since there can be many events to send.
In another search, How to configure Producer.close() in Eventhub, I found that it is recommended to close the producer client after using it.
However, if the above error message is actually not an error, whether to close or not may not matter.
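For illustration, one way to avoid creating a producer per event is to build a single EventHubProducerClient, reuse it for every send, and close it only at shutdown. A minimal sketch assuming the azure-messaging-eventhubs v5 SDK from the linked quickstart; the class name, connection string, and event hub name are placeholders:

```java
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

public class EventHubSender {

    public static void main(String[] args) {
        // Build one producer and reuse it for many sends instead of creating one per event.
        EventHubProducerClient producer = new EventHubClientBuilder()
                .connectionString("<connection-string>", "<event-hub-name>") // placeholders
                .buildProducerClient();
        try {
            EventDataBatch batch = producer.createBatch();
            batch.tryAdd(new EventData("hello"));  // add as many events as fit into the batch
            producer.send(batch);
            // ... keep calling createBatch()/send() on the same producer as new events arrive ...
        } finally {
            producer.close();                      // close once, when the application shuts down
        }
    }
}
```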

Java client starting up when IBM MQ server is down or unreachable

I realize there is a setting on MQConnectionFactory that makes a consumer or producer attempt to reconnect if its connection is broken. However, I'm wondering if one can do something similar for an application that is starting up and setting up its consumers and producers. The code I have right now will not recover if the server is down when my client application comes up.
Is there a common/recommended practice here?
My recommendation would simply be to use the tools that are provided in the Java language itself. For example, you could write a loop with exception handling to retry the initial connection or JNDI lookup a configurable number of times. It's hard to provide more specific recommendations when you haven't provided any client code of your own.
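For example, a minimal sketch of such a startup retry loop using plain JMS types; the class and method names, attempt count, and delay are illustrative, and with IBM MQ the ConnectionFactory would typically be an MQConnectionFactory or come from a JNDI lookup configured elsewhere:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class StartupConnector {

    // Retry the initial JMS connection a configurable number of times before giving up.
    // Assumes maxAttempts >= 1.
    public static Connection connectWithRetry(ConnectionFactory factory, int maxAttempts, long delayMs)
            throws JMSException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                Connection connection = factory.createConnection();
                connection.start();        // ready to create sessions, consumers and producers
                return connection;
            } catch (JMSException e) {
                if (attempt >= maxAttempts) {
                    throw e;               // queue manager still unreachable: fail startup
                }
                Thread.sleep(delayMs);     // wait before the next attempt
            }
        }
    }
}
```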

Spring Data Cassandra how to set finite number of connection retries?

I am currently implementing a Spring Boot microservice which persists data to a single Cassandra database node. I need to be able to set, in the microservice config file, the number of retries if the connection to the database is lost and the number of milliseconds between retries. I am using spring-boot version 1.5.6 and spring-data-cassandra version 1.5.6. I was able to set the number of milliseconds between retries by creating a cluster of type CassandraCqlClusterFactoryBean and passing a custom reconnection policy to the cluster.setReconnectionPolicy() method, but I am not able to set the number of retries with a custom retry policy. If I understood correctly, the retry policy only handles the case in which a query is made, but I need to limit the number of retries at all times, whether a query is made or not.

After a couple of days of research I was able to produce an ugly hack which basically uses a custom ReconnectionSchedule and stops the Spring Boot application after certain conditions are met in the nextDelayMs() method. Nevertheless, I continued to look through the source code in debug mode and saw that a NoHostAvailableException is thrown by the ControlConnection. So I checked the official DataStax documentation regarding the control connection, and all I found there was a placeholder: "Coming soon…"
So could someone please show me how to correctly stop the Cassandra driver from trying to reconnect to the node after a predefined number of retries?
Thanks in advance.
Look here at 9.3.1.
Maybe you can do something like trying to open a session every x seconds until a timeout expires or a session is successfully created.
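For illustration, a bounded connect loop along those lines with the DataStax Java driver 3.x (the driver line spring-data-cassandra 1.5.x is built on); the class name, contact point, attempt count, and delay are assumptions:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class BoundedCassandraConnect {

    // Try to open a session a fixed number of times, then give up instead of retrying forever.
    // Assumes maxAttempts >= 1.
    public static Session connectWithRetry(String contactPoint, int maxAttempts, long delayMs)
            throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            Cluster cluster = Cluster.builder().addContactPoint(contactPoint).build();
            try {
                return cluster.connect();  // throws NoHostAvailableException if the node is down
            } catch (NoHostAvailableException e) {
                cluster.close();           // release driver resources before the next attempt
                if (attempt >= maxAttempts) {
                    throw e;               // let the caller stop the Spring Boot application
                }
                Thread.sleep(delayMs);
            }
        }
    }
}
```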

How does Spring-JPA EntityManager handle "broken" connections?

I have an application that uses a Spring EntityManager (JPA), and I wonder what happens if the database becomes unavailable during the lifetime of the application.
I expect that in that situation it will throw an exception the first time I try to do anything on the database, right?
But say I wait 10 minutes and try again, and the DB happens to be back. Will it recover? Can I arrange it so that it does?
Thanks
Actually, neither Spring nor JPA has anything to do with it. Internally, all persistence frameworks simply call DataSource.getConnection() and expect to receive a (probably pooled) JDBC connection. Once they're done, they close() the connection, effectively returning it to the pool.
Now, when the DataSource is asked for a connection but the database is unavailable, it will throw an exception. That exception will propagate up and be handled somehow by whatever framework you use.
To answer your question: typically a DataSource implementation (like DBCP, c3p0, etc.) will discard a connection known to be broken and replace it with a fresh one. It really depends on the provider, but you can safely assume that once the database is available again, the DataSource will gradually get rid of sick connections and replace them with healthy ones.
Also, many DataSource implementations provide ways of testing a connection periodically and before it is returned to the client. This is important in pooled environments, where the DataSource holds a pool of connections and has no way of discovering on its own that the database has become unavailable. So some DataSources test a connection (by calling SELECT 1 or similar) before giving it back to the client, and do the same once in a while to get rid of broken connections, e.g. due to a broken underlying TCP connection.
TL;DR
Yes, you will get an exception and yes the system will work normally once the database is back. BTW you can easily test this!
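For example, a pooled DataSource such as Apache Commons DBCP2 can be configured to validate connections before handing them out and while they sit idle; a minimal sketch in which the class name, URL, credentials, and intervals are placeholders:

```java
import org.apache.commons.dbcp2.BasicDataSource;

public class ValidatedDataSourceConfig {

    // Build a pooled DataSource that tests connections so broken ones
    // are discarded once the database comes back.
    public static BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/appdb");  // placeholder URL
        ds.setUsername("app");                                // placeholder credentials
        ds.setPassword("secret");
        ds.setValidationQuery("SELECT 1");           // cheap query used to test connections
        ds.setTestOnBorrow(true);                    // validate before handing a connection to the client
        ds.setTestWhileIdle(true);                   // periodically validate idle connections
        ds.setTimeBetweenEvictionRunsMillis(30_000); // how often the idle-connection check runs
        return ds;
    }
}
```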

Searching for patterns to create a TCP Connection Pool for high performance messaging

I'm creating a new Client / Server application in C# and expect to have a fairly high rate of connections. That made me think of database connection pools which help mitigate the expense of creating and disposing connections between the client and database.
I would like to create a similar capability for my application and haven't been able to find any good examples of how to apply this pattern. Do I really need to spin up an instance of a TcpClient every time I want to send a message to the server and receive a receipt message? Each connection is expected to transport between 1 and 5 KB, with each receiving a 1 KB response message.
I realize this question is somewhat vague, but I am starting from scratch so I am open to suggestions. Even if that means my suppositions are all wrong.
Introducing a connection pool is a sort of optimization, and premature optimization can be bad.
I would recommend you start development without introducing a connection pool. When the client and server code are stable enough, you can create load tests and detect performance problems.
A connection pool is required if the time to create a connection is considerable compared to the rate at which data moves to/from the server (load tests should indicate that).
If data from the client is not sent that often, you may not even need a connection pool.
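If load tests do show that connection setup is the bottleneck, the core of the pattern is small: keep open connections in a thread-safe queue, borrow one per request, and return it afterwards. A minimal sketch of that idea (shown in Java rather than C#; the class name, host, port, and pool size are assumptions):

```java
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class SimpleTcpConnectionPool {

    private final BlockingQueue<Socket> idle;

    // Pre-open a fixed number of connections to the server.
    public SimpleTcpConnectionPool(String host, int port, int size) throws IOException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new Socket(host, port));
        }
    }

    // Borrow a connection, waiting up to the given timeout if none is free.
    public Socket borrow(long timeoutMs) throws InterruptedException {
        return idle.poll(timeoutMs, TimeUnit.MILLISECONDS); // null if the pool is exhausted
    }

    // Return a connection after use; callers should discard broken sockets instead.
    public void release(Socket socket) {
        if (socket != null && socket.isConnected() && !socket.isClosed()) {
            idle.offer(socket);
        }
    }
}
```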

Resources