Currently, master-slave replication is set up, and SELECT and INSERT queries are split between servers using MaxScale's readwritesplit router.
I was using Commons DBCP and validating connections through the datasource configuration with the testOnBorrow and validationQuery options, but because queries pass through MaxScale, the validationQuery (SELECT 1) is only routed to the slaves, so the validity of connections to the master is never checked.
Because the master's connections are never validated, a DB connection error occurs when the WAS connects again after a long idle period.
To solve this problem I tried master_accepts_reads = true, but I don't want to use it because it sends more read traffic to the master.
As another option I tried persistpoolmax and persistmaxtime, but I got the same error message.
I am wondering whether there is a way in MaxScale or MariaDB to send a connection validation query such as validationQuery to both master and slaves, without distinction.
Thank you for reading this long question.
You can use the hint filter to route queries to the master.
https://mariadb.com/kb/en/mariadb-maxscale-24-hintfilter/
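As a rough sketch (the section and service names are placeholders, and other required service parameters are omitted), the hint filter is defined as a filter module in maxscale.cnf and attached to the readwritesplit service:

[Hint]
type=filter
module=hintfilter

[Read-Write-Service]
type=service
router=readwritesplit
filters=Hint

The validation query can then carry a routing hint comment so it always reaches the master, for example:

SELECT 1 -- maxscale route to master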
Why do I see "disconnected" when I run this tDBConnection?
I get the log below when I run tDBConnection.
Is it automatically getting disconnected?
[statistics] connecting to socket on port number 3024
[statistics] connected
[statistics] disconnected
There is no difference between a metadata DB connection and tDBConnection. Metadata is the connection you define once from a database schema and reuse, whereas tDBConnection is the one in which you manually provide the database details like schema, port, user, password, database, and host.
The above issue is likely because your database is in an inconsistent state and Talend is unable to connect to it; it may also be a latency issue. You can enable tDie and tLogCatcher to get detailed logs.
I am quite new to IBM MQ. Mine is a multi-instance queue manager, where one instance acts as a failover.
How can I connect to them even if one of the instances is down?
I am not sure whether my terminology is right.
I am currently trying to connect using the example below:
https://raw.githubusercontent.com/ibm-messaging/mq-dev-samples/master/gettingStarted/jms/JmsPutGet.java
Instead of populating WMQ_HOST_NAME and WMQ_PORT, populate WMQ_CONNECTION_NAME_LIST with a comma-separated list in the format host1(port1),host2(port2). IBM MQ will attempt to connect to host1 first, and if that fails it will attempt host2 during the initial connection attempt.
If you want the client to reconnect on failure, you will also need to enable MQ automatic reconnection like this:
cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
cf.setClientReconnectTimeout(1800); // how long in seconds to continue to attempt reconnection before failing
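Putting it together, a minimal fragment-style sketch in the spirit of the linked JmsPutGet.java sample (the host names, port, queue manager, and channel below are placeholders for your own setup):

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
JmsConnectionFactory cf = ff.createConnectionFactory();

// Both instances of the multi-instance queue manager, tried in order.
cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST, "host1(1414),host2(1414)");
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");
cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN");

// Reconnect automatically after a failover, for up to 30 minutes.
cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS, WMQConstants.WMQ_CLIENT_RECONNECT);
cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 1800);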
I have an Oracle DB RAC setup of 2 servers and config SCAN hostname pointing to both servers. My Websphere application server config with jdbc string like below and connection pool of 50:
jdbc:oracle:thin:@//scan-hostname:port/dbname
Everything works fine and both DB servers receive requests as expected, except that when either node is down (and the other node is healthy), my application gets all kinds of exceptions (connection reset / JDBC commit failed / connection is closed) within the first several minutes, and behaves normally afterwards.
My guess is that the pooled connections to the failing node do not retry or fail over, and simply throw exceptions. Is it expected behavior for Oracle RAC that failover only works for new connections, not existing ones, or am I missing something needed to enable failover?
We're testing failover behaviour using the MariaDB JDBC connector's Aurora-specific features.
We've set the JDBC URL as the documentation suggests:
jdbc:mysql:aurora://cluster.cluster-xxxx.us-east-1.rds.amazonaws.com/db
The problem is that as soon as we add the aurora: part to the URL scheme, we see an increase in connections to the database writer, to the point that we have to roll back the change (it even reaches 3,000 connections).
Versions:
MariaDB connector: 2.0.1
HikariCP connection pool: 2.6.1
Play-Slick: 2.1.0
Slick: 3.2.0
Configuration:
master {
  profile = "slick.jdbc.MySQLProfile$"
  db {
    driver = "org.mariadb.jdbc.Driver"
    url = "jdbc:mysql:aurora://cluster-name.cluster-xxx.us-east-1.rds.amazonaws.com/db_name?characterEncoding=utf8mb4&rewriteBatchedStatements=true&usePipelineAuth=false"
    user = "rw_user"
    password = "rw_user_pass"
    numThreads = 20
    queueSize = 1000000
  }
}
slaves = [
  {
    profile = "slick.jdbc.MySQLProfile$"
    db {
      driver = "org.mariadb.jdbc.Driver"
      url = "jdbc:mysql:aurora://cluster-name.cluster-ro-xxx.us-east-1.rds.amazonaws.com/db_name?characterEncoding=utf8mb4&usePipelineAuth=false"
      user = "ro_user"
      password = "ro_user_pass"
      numThreads = 20
      queueSize = 1000000
    }
  }
]
We had tried adding the aurora: part to the JDBC URL scheme again after upgrading the MariaDB connector version, but the number of connections to the reader started to increase again.
If we run SHOW PROCESSLIST on the read-only endpoint, we can see all the opened connections in the "cleaned up" state with the "Sleep" command.
We removed the aurora: part from the read-only endpoint just to stabilize its number of connections. Is it possible that the driver searches for the cluster master while opening connections? That would explain this kind of behaviour.
When using the "aurora" keyword, the driver, under the hood, creates two connections:
a connection to the primary server,
a connection to one of the replicas if any.
The goal is always to save resources on the main server. Generally, only one pool is configured, and the driver then uses the connection to the primary or the replica according to Connection.setReadOnly.
Since you have separate "write" and "read" pools, using the "failover" configuration will solve your issue: the driver will use only one real connection.
This way, there will be no "wasted" connections.
Failover will then be handled differently, but with the same result. For example, a query outside a transaction that was to be sent to a replica that has just crashed will not directly reuse the primary connection, as it would with the "aurora" configuration; instead, the driver will create a new connection to another replica before executing the query.
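As a sketch of that suggestion (keeping the question's jdbc:mysql scheme and endpoints, with failover substituted for aurora), only the url lines of the two pools would change:

# write pool
url = "jdbc:mysql:failover://cluster-name.cluster-xxx.us-east-1.rds.amazonaws.com/db_name?characterEncoding=utf8mb4&rewriteBatchedStatements=true&usePipelineAuth=false"

# read pool
url = "jdbc:mysql:failover://cluster-name.cluster-ro-xxx.us-east-1.rds.amazonaws.com/db_name?characterEncoding=utf8mb4&usePipelineAuth=false"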
Once you get past several dozen active connections, the database starts stumbling over itself. It is better to throttle connections in the client rather than assuming Aurora has infinite capacity to accept them.
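For instance (a sketch; maxConnections is the Slick setting that caps the underlying HikariCP pool, and 20 is an arbitrary cap chosen to match numThreads):

db {
  numThreads = 20
  maxConnections = 20  # cap the pool instead of letting it grow unbounded
  queueSize = 1000000
}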
We are using ADO to access Oracle 10g Release 2 through the OLEDB provider for Oracle 10g. We are facing an issue with connection pooling. The database resides on a remote machine and connection pooling occurs as it should. But if the remote machine goes down for some reason, a connection is still returned from the pool, and queries on that connection fail. When this connection is closed, it is returned to the pool instead of being invalidated. Subsequent connection-open requests succeed, but queries fail. This is strange behaviour: according to the OLEDB specification, the provider must support the DBPROP_CONNECTIONSTATUS property, so an invalid connection should not be returned to the pool.
Things get weird when the remote machine comes back up. The connections in the pool are still invalid, and although opening a connection succeeds, queries on it fail. The Oracle OLEDB provider is unable to connect to the server anymore, and we have to restart our application. This is undesirable, because our application is a critical one.
Any ideas on how to get over this?
Thanks
Mubashir
If you are doing this programmatically, use a try block, so that if something does happen, it won't fail. With a try block you can catch the exception and ignore it, so that the errors are silenced.
You could tell the pool not to accept invalid connections by marking the connection invalid before it is returned to the pool.
Connections are recovered after 10 minutes by default. The time can be set via the SPTimeout registry key under the OLEDB provider's root key.
Actually, the default connection pool timeout is 120 seconds, at least for this Oracle 11 32-bit OLEDB installation. You can find the registry settings at:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Oracle\KEY_orac
Where KEY_orac is the KEY_<oracle_home_name>
The key name is ORAMTS_CONN_POOL_TIMEOUT and the default value is 120.
It does not appear that you can set connection pool parameters at the connection string level.
In most connection pool implementations it is possible to check a connection before using it. For example, you define a check query like SELECT * FROM dual; whenever a connection is picked up from the pool, this query is executed first. If it fails, the connection is evicted from the pool and a new one is opened.
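A minimal sketch with Commons DBCP, the pool from the first question above (the URL and credentials are placeholders):

import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:mariadb://db-host:3306/db_name");
ds.setUsername("app_user");
ds.setPassword("app_pass");

// Run the check query on every borrow; a connection that fails it is
// dropped from the pool and replaced with a freshly opened one.
ds.setValidationQuery("SELECT 1");
ds.setTestOnBorrow(true);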