Spring Data Cassandra how to set finite number of connection retries? - spring

I am currently implementing a Spring Boot microservice which persists data to a single Cassandra database node. I need to be able to set, in the microservice's config file, the number of retries if the connection to the database is lost and the number of milliseconds between those retries. I am using spring-boot version 1.5.6 and spring-data-cassandra version 1.5.6.
I was able to set the number of milliseconds between retries by creating a cluster of type CassandraCqlClusterFactoryBean and passing a custom reconnection policy to the cluster.setReconnectionPolicy() method (roughly as in the sketch after this question). But I am not able to limit the number of retries with a custom retry policy. If I understood correctly, the retry policy only handles the case in which a query is made, but in my case I need to cap the number of retries at all times, whether a query is made or not.
After a couple of days of research I was able to produce an ugly hack which basically uses a custom ReconnectionSchedule and stops the Spring Boot application after certain conditions are met in the nextDelayMs() method. Nevertheless, I continued to step through the source code in debug mode and saw that a NoHostAvailableException is thrown by the ControlConnection. So I checked the official DataStax documentation regarding the control connection, and all I found was:
Coming soon…
So could someone please show me how to correctly stop my Cassandra driver from trying to reconnect to the node after a predefined number of retries?
Thanks in advance.
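(For reference, here is a rough sketch of the configuration described above, assuming the class and setter names from spring-data-cassandra 1.5 and the DataStax ConstantReconnectionPolicy; the contact point, port and 5000 ms delay are placeholder values.)

import com.datastax.driver.core.policies.ConstantReconnectionPolicy;
import org.springframework.cassandra.config.CassandraCqlClusterFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraClusterConfig {

    @Bean
    public CassandraCqlClusterFactoryBean cluster() {
        CassandraCqlClusterFactoryBean cluster = new CassandraCqlClusterFactoryBean();
        cluster.setContactPoints("127.0.0.1");   // placeholder node address
        cluster.setPort(9042);
        // Constant delay between reconnection attempts; this controls only the delay,
        // not the number of attempts, which is exactly the problem described above.
        cluster.setReconnectionPolicy(new ConstantReconnectionPolicy(5000L));
        return cluster;
    }
}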

Look here at 9.3.1.
Maybe you can do something like trying to open a session every x seconds until a timeout expires or a session is successfully created.
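For example, a minimal sketch of such a bounded connect loop using the plain DataStax driver (the contact point, attempt count and delay are made-up values; adapt it to however you bootstrap your cluster):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class BoundedCassandraConnect {

    // Try to open a session a fixed number of times, waiting between attempts.
    public static Session connectWithRetries(int maxAttempts, long delayMs) throws InterruptedException {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return cluster.connect();          // success: stop retrying
            } catch (NoHostAvailableException e) {
                if (attempt == maxAttempts) {
                    cluster.close();
                    throw e;                       // give up after the last attempt
                }
                Thread.sleep(delayMs);             // wait before the next attempt
            }
        }
        throw new IllegalStateException("unreachable");
    }
}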

Related

Question about dbConn.executeCachedQuery(SQLStatement) on the Mirth Connect Interface Engine

Because the maximum limits for processes and sessions in an Oracle DB are set too low, the following error sometimes occurs in Mirth:
DBConnection - java.sql.SQLException: Listener refused the connection with the following error:
ORA-12516, TNS:listener could not find available handler with matching protocol stack
It is triggered by dbConn.executeCachedQuery(SQLStatement) calls using the DatabaseConnection class in Mirth.
So these are my questions:
Is there any way to throw this response/exception in the channel?
Is all of the data from the SQL query "lost" when this error occurs, or is there an automatic retry?
And is there a best practice for handling this (e.g. checking the connection first with a getConnection() method)?
I'll answer your questions in order:
1) If you are using the JavaScript connector, then you should have this in a try/catch when you initiate the connection. In the catch, just log the error with Logger.Error(exceptionGoesHere).
If you are using the DB connector, this should get logged automatically. To make sure you have logging enabled at the channel level, access the Mirth Connect Server Manager, click on the Server tab and ensure that Channel Log Level is set to at least Error.
2) The way that Mirth Connect works, every time the message is initiated it will hit certain points of the Mirth DB to save the state of the message at that point in time. It's how Mirth guarantees message delivery. With that being said, you can always 'retry' the send manually. Otherwise, if you are using the DB connector there's an option that handles this for you under the Database Reader Settings section. The retry gives you the option to select the number of retries as well as the Retry interval in ms. When I was working there, by default it was set to 3 retries after 10 seconds.
3) Use the default database connector. Everything's already built in for you. Put the extra processing in the transformer to handle anything else. Not re-inventing the wheel when everything is already built in is the best practice.
If you insist on using a code solution, then make sure all of your code is in a try/catch, and make sure your catch is actually logging the exception.
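As a generic illustration of that last point (plain Java JDBC rather than Mirth's JavaScript connector, with placeholder connection details), the pattern is simply to wrap the connection and query in a try/catch and log whatever SQLException comes back:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryWithLogging {

    // Generic JDBC version of "wrap the connection and query and log the exception".
    public static void runQuery(String url, String user, String password, String sql) {
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // process the row here
            }
        } catch (SQLException e) {
            // e.g. ORA-12516 surfaces here; log it instead of letting it disappear
            System.err.println("Query failed: " + e.getMessage());
        }
    }
}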

Java client starting up when IBM MQ server is down or unreachable

I realize there is a setting on MQConnectionFactory to make a consumer or producer attempt to reconnect if its connection is broken. However, I'm wondering if one can do something similar for an application that is starting up and setting up consumers and producers. The code I have right now will not recover if the server is down when my client application comes up.
Is there a common/recommended practice here?
My recommendation would simply be to use the tools that are provided in the Java language itself. For example, you could write a loop with exception handling to retry the initial connection or JNDI lookup a configurable number of times. It's hard to provide more specific recommendations when you haven't provided any client code of your own.
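A rough sketch of what such a startup loop could look like with plain JMS and JNDI (the JNDI name, attempt count and delay are placeholders, not IBM MQ specifics):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class StartupConnector {

    // Retry the JNDI lookup and initial connection a configurable number of times before giving up.
    public static Connection connect(int maxAttempts, long delayMs) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
                Connection connection = cf.createConnection();
                connection.start();
                return connection;                 // success: stop retrying
            } catch (NamingException | JMSException e) {
                if (attempt == maxAttempts) {
                    throw e;                       // out of attempts: fail the startup
                }
                Thread.sleep(delayMs);             // wait before trying again
            }
        }
        throw new IllegalStateException("unreachable");
    }
}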

One at a Time Processing for Oracle SOA JMS Queue

We have a requirement where we need to send only one message at a time to a backend process. The callback from this process takes around an hour, and only after the callback can we send another request to the process.
I am trying to achieve this by using a manager BPEL process that holds the messages if there is already something being processed in the backend, and then sends one once it realizes that the backend is free. This approach will work, but our architect wants a cleaner solution. He suggested using JMS queues. The idea is for the messages on the JMS queue to be read by a manager one at a time, only moving on to the next one once we receive the callback from the backend and we know that the composite and BPEL instance are finished. I've been scouring the internet for weeks, but I couldn't find a working JMS-based solution for my requirement.
I've tried the suggestions from this link, but turning on the unit-of-order and acknowledgement properties does nothing.
Try this approach!!
Use an event-driven BPEL process.
Use a database flag as your next trigger (the flag starts as TRUE).
The JMS adapter receives the first message from the queue. Use a delay in the adapter, since you are expecting the BPEL to be long-running, with the settings below:
<binding.jca config="MyServiceInboundQueue_jms.jca">
  <property name="minimumDelayBetweenMessages">10000</property>
  <property name="singleton">true</property>
</binding.jca>
If flag == TRUE in the DB, the DB adapter lets the BPEL process proceed;
otherwise skip the BPEL.
Mark flag == FALSE.
Call the backend system.
The callback is received after an hour.
Set flag == TRUE again.
Hi Jonar,
At my company we always use JMS queues for asynchronous messaging. You could make do with a delay timer built into your composite, set to 1 hour and 15 minutes for example, and it will work most of the time, but it's hella messy. The whole idea is for any asynchronous process to kick off when a message is put on its target queue (specified by the JMS queue). The JMS adapter in the composite of your project will pick up the message from the queue when it is free to process it. The goal for you would be to put the message on the queue and pick it up using the adapter. It will know which message to pick up because you specify which queues it listens to in the adapter.
The following blog post by John-Brown Evans explains the whole process from step one. It might be a bit tedious, but I found it very helpful. It uses SOA Suite 11g instead of the nowadays more commonly used 12c, but the fundamentals remain the same.
Awesome JMS queue tutorial
I hope this works for you!
Cheers,
Jesper

Retrieving data from a database using Spring Integration JDBC without polling

Currently learning Spring Integration, I want to retrieve information from a MySQL database to use inside an int:service-activator or an int:splitter.
Unfortunately, it would seem that most examples and documentation are based around the idea of using an int-jdbc:inbound-channel-adapter, which in itself requires a poller. I don't want to poll a database, but rather retrieve specific data based on the payload of an existing message originating from an int:gateway. This data would then be used to further modify the payload, or to assist in how the message is split.
I tried using int-jdbc:outbound-gateway, as the description states:
... jdbc.JdbcOutboundGateway' for updating a database in response to a message on the request channel, and/or for retrieving data from the database ...
This implies that it can be used for retrieval of data only and not just for updates, but as I implement it, it complains that at least one update statement is required.
And so I'm currently sitting with a faulty prototype (the flow diagram is not shown here), in which the circled piece is the non-functioning int-jdbc:outbound-gateway.
My end goal is, based on the payload coming from the incomingGateway, to retrieve some information from a MySQL database, and use that data to split the message in the analyzerSplitter, or perhaps to modify the payload using an int:service-activator. This should then all be linked up to an int-jdbc:message-store, which I believe could assist with performance. I do not wish to poll the database on a regular basis, and I do not wish to update anything in the database.
By testing using the polling int-jdbc:inbound-channel-adapter, I am confident that my datasource bean is set up correctly and the query can execute.
How would I go about correctly setting up such behaviour in spring integration?
If you want to proceed with the flow after updating the database, you can simply use a JdbcTemplate in a method invoked by a service activator, or, if it's the end of the flow, use an outbound channel adapter.
The outbound channel adapter is the inverse of the inbound: its role is to handle a message and use it to execute a SQL query. By default, the message payload and headers are available as input parameters to the query, as the following example shows:
...
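To make the first option more concrete, here is a minimal annotation-style sketch of a service activator whose method uses a JdbcTemplate to look up data for the incoming payload; the channel names, table and query are hypothetical:

import java.util.List;
import java.util.Map;

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.jdbc.core.JdbcTemplate;

public class PayloadEnricher {

    private final JdbcTemplate jdbcTemplate;

    public PayloadEnricher(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Invoked for each message arriving on "enrichChannel"; the return value becomes the new payload.
    @ServiceActivator(inputChannel = "enrichChannel", outputChannel = "splitterChannel")
    public List<Map<String, Object>> enrich(String orderId) {
        // Query on demand, driven by the message payload -- no poller involved.
        return jdbcTemplate.queryForList(
                "SELECT * FROM order_items WHERE order_id = ?", orderId);
    }
}

Register the class as a bean in the application context and the splitter downstream can then work on the returned rows.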

Handling long (> 5mins) requests from application deployed in WebLogic 12c

We have a problem at hand with a long service request that retrieves a huge amount of data and takes around 5 minutes to complete. We are using EJB and native JDBC to make the requests. Is there a way to extend the transaction timeout for this one particular request (that is, override the timeout configured in the domain's JTA), or do we have to increase the domain's JTA transaction timeout to 5 minutes? The latter seems unfavorable, since it might provoke database deadlocks. Are there any other solutions you could suggest that are more robust and safe? Could we perhaps set the transaction timeout at a level other than the domain level? Looking forward to your reply. Thanks.
The JTA timeout can be set at the EJB level. Read this documentation for details.
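If the bean uses bean-managed transactions, one way to override the timeout for just this request is to set it programmatically on the UserTransaction before starting the transaction; a sketch, with 360 seconds as a placeholder value (for container-managed transactions the per-EJB timeout would instead go into the WebLogic deployment descriptor):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)   // bean-managed, so the bean controls the timeout
public class LongRunningReportBean {

    @Resource
    private UserTransaction utx;

    public void fetchHugeDataSet() throws Exception {
        // Override the default JTA timeout for transactions started by this thread (value in seconds).
        utx.setTransactionTimeout(360);
        utx.begin();
        try {
            // ... long-running JDBC work goes here ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}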
