Cassandra DataStax driver update to 4.4 query timeout issues - Spring Boot

Our application was recently upgraded to Cassandra DataStax driver 4.4.x; earlier it was on version 3.3. After the upgrade we noticed quite a lot of timeout issues, for example:
init query timeout
session query timeout
control connection timeout
and other timeout parameters defaulted in reference.conf.
Earlier this used to work with the default parameters, but after the upgrade we need to raise these defaults to > 5 seconds.
Has anyone faced a similar issue after the upgrade?

You need to upgrade to at least Java driver 4.8 (better, 4.9) - previously the timeouts were too aggressive (0.5 seconds); in 4.8 they were increased to 5 seconds (by the fix for JAVA-2841).
Or you can just override the corresponding parameters in the application.conf file.
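If you stay on 4.4.x, a sketch of such an override in application.conf (the option paths follow the 4.x driver's reference.conf; the 5-second values are examples, tune them to your cluster):

```hocon
datastax-java-driver {
  # per-request timeout for regular queries
  basic.request.timeout = 5 seconds

  # timeouts that showed up in the question
  advanced.connection.init-query-timeout = 5 seconds
  advanced.connection.set-keyspace-timeout = 5 seconds
  advanced.control-connection.timeout = 5 seconds
}
```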

Related

setFetchSize with MariaDB JDBC driver version 3.0.4

We encounter an error when we move from the 2.7.2 to the 3.0.4 MariaDB JDBC driver with setFetchSize(Integer.MIN_VALUE):
java.sql.SQLSyntaxErrorException: (conn=27489500) invalid fetch size
So we switched to setFetchSize(1).
https://mariadb.com/kb/en/about-mariadb-connector-j/
Before version 1.4.0, the only accepted value for fetch size was
Statement.setFetchSize(Integer.MIN_VALUE) (equivalent to
Statement.setFetchSize(1)). This value is still accepted for
compatibility reasons, but rather use Statement.setFetchSize(1), since
according to JDBC the value must be >= 0.
And I found nothing in the release notes.
This is a connector or IDE bug; you need to downgrade.
I have the same problem in DataGrip; searching the forums, the recommended solution is to downgrade the driver version until the fix is released.
Ref: https://youtrack.jetbrains.com/issue/DBE-16376
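A minimal way to stay compatible with both driver generations is to normalize the fetch size before passing it to the statement. This is a hypothetical helper (not part of any driver API): it maps the old MySQL/MariaDB streaming hint Integer.MIN_VALUE to 1 and clamps everything else to the JDBC-required range >= 0.

```java
// Hypothetical helper: pick a fetch size accepted by MariaDB Connector/J 3.x,
// where Integer.MIN_VALUE (the old streaming hint) now throws
// "invalid fetch size" and JDBC requires a value >= 0.
public class FetchSize {
    public static int safeFetchSize(int requested) {
        if (requested == Integer.MIN_VALUE) {
            return 1; // old row-by-row streaming hint -> fetch one row at a time
        }
        return Math.max(requested, 0); // JDBC: fetch size must be >= 0
    }

    public static void main(String[] args) {
        System.out.println(safeFetchSize(Integer.MIN_VALUE)); // prints 1
        System.out.println(safeFetchSize(500));               // prints 500
    }
}
```

You would then call `stmt.setFetchSize(FetchSize.safeFetchSize(requested))` instead of passing the raw value through.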

User session timeout for SonarQube version 5.6

We have two SonarQube instances, versions 6.7.x and 5.6.x. We want to time out a user who is idle for 20 minutes. We configured this successfully in the 6.7.x version by adding sonar.web.sessionTimeoutInMinutes=20 in _/Sonar_Home/conf/sonar.properties_, but when I tried the same configuration in 5.6.x it did not work. Can someone help with how we can achieve this in 5.6.x?
The ability to configure the web session timeout arrived in the SonarQube 6.x series; see SONAR-8298.
There is no such capability in v5.6.x, which is in any case end-of-life since late 2017 (read: a perfect opportunity to upgrade to the v6.7 LTS!).
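For reference, the 6.x-only setting from the question goes into the server's properties file:

```properties
# /Sonar_Home/conf/sonar.properties -- supported from SonarQube 6.x onwards,
# silently ignored by 5.6.x
sonar.web.sessionTimeoutInMinutes=20
```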

How to set "pool remove abandoned" property in SonarQube

TL;DR: How can I configure SonarQube to remove abandoned DB connections?
We recently upgraded to SonarQube 5.1.2 and I'm now seeing connection errors from the Notification service (as described elsewhere on Stack Overflow).
While trying to diagnose, I noticed the "Pool Remove Abandoned" setting in the System Info.
DATABASE
Database MySQL
Database Version 5.6.26-log
....
Pool Remove Abandoned false
Pool Remove Abandoned Timeout (seconds) 300
I wondered if changing this to 'true' would have any impact on the errors, so I looked in conf/sonar.properties for this property, and I do not see it. All I see is
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
and I cannot tell if one of these is the property to change or perhaps just uncomment. There doesn't seem to be a "sonar.jdbc.removeAbandoned" property or similar.
sonar.jdbc.removeAbandoned=true
sonar.jdbc.removeAbandonedTimeout=60
The sonar.jdbc.* properties are passed through to org.apache.commons.dbcp.BasicDataSource.
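Putting the answer together, a sketch of the relevant conf/sonar.properties entries (the 60-second timeout is an example value; tune it to your workload):

```properties
# conf/sonar.properties -- passed through to org.apache.commons.dbcp.BasicDataSource
sonar.jdbc.removeAbandoned=true
sonar.jdbc.removeAbandonedTimeout=60

# The existing (commented-out) eviction settings control idle-connection
# cleanup, not abandoned-connection removal -- they are a different mechanism
#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000
```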

Neo4j Cypher queries really slow after upgrade to 2.1.3

This morning, with some struggles (see: Upgrading a neo4j database from 2.0.1 to 2.1.3 fails), I upgraded my database from version 2.0.1 to 2.1.3. My main goal with the upgrade was to gain performance on certain queries (see: Cypher SORT performance).
Everything seems to be working, except for the fact that all Cypher queries - without exception - have become much, much, much slower. Queries that used to take 75ms now take nearly 2000ms.
As I was running on an A1 (1xCPU, ~2GB RAM) VM in Azure, I thought that giving Neo4j some more RAM and an extra core would help, but after upgrading to an A2 VM I get more or less the same results.
I'm now wondering: did I lose my indexes by doing a backup and upgrading/using that db? I have perhaps 50K nodes in my db, so it's not that spectacular, right?
I'm still running on an A2 VM (2xCPU, ~4GB RAM), but had to downgrade to 2.0.1 again.
UPDATE: #1 2014-08-12
After reading Michael's first comment on how to inspect my indexes using the shell, I did the following:
With my 2.0.1 database service running (and performing well), I executed Neo4jShell.bat and then ran the schema command. This yielded the following response:
I uninstalled the 2.0.1 service using the Neo4jInstall.bat remove command.
I installed the 2.1.3 service using the Neo4jInstall install command.
With my 2.1.3 database service running, I again executed the Neo4jShell.bat and then executed the schema command. This yielded the following response:
I think it is safe to conclude that either the migration process (in 2.1.3) or the backup process (in 2.0.1) has removed the indexes from my database. This does explain why my backed up database is much smaller (~110MB) than the online database (~380MB). After migration to 2.1.3, my database became even smaller (~90MB).
The question now is: is it just a matter of recreating my indexes and being done with it?
UPDATE: #2 2014-08-12
I guess I have answered my own question. After recreating the constraints and indexes, my queries perform like they used to (some even faster, as expected).
Eventually, it turned out that in the process of backing up my database (in version 2.0.1) or during the migration process at startup (in version 2.1.3) I lost my indexes and constraints. The obvious solution is to manually recreate them (http://docs.neo4j.org/chunked/stable/cypher-schema.html) and be on your way.
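For anyone in the same spot, recreating schema objects in Neo4j 2.x Cypher looks like this (the :Person label and the name/email properties are placeholders; substitute your own):

```cypher
// Neo4j 2.x syntax -- replace label and property names with your own schema
CREATE INDEX ON :Person(name);
CREATE CONSTRAINT ON (p:Person) ASSERT p.email IS UNIQUE;
```

Afterwards, running the `schema` command in Neo4jShell again should list the recreated indexes and constraints.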

ORA-01013 - Weblogic setting error?

I am running a banking program, coded in Oracle PL/SQL. This program runs for 2-3 hours every day, as part of the End of Day processing.
Until yesterday, it was working fine. When I ran it today, after around 30 minutes the program terminated with the error ORA-01013: user requested cancel of current operation. I am not terminating the program manually.
I feel this could be a WebLogic (where the application is deployed) configuration problem. I am not fluent in WebLogic and am not sure what parameter can be changed to stop this error. Please help!
Oracle version: 11.2.0.3
Oracle WebLogic Server: 11g
This sounds like a JDBC timeout. From the WebLogic console go to Services->Data Sources and click the name of your data source to see its settings. Click the Connection Pool tab, and expand the Advanced section at the bottom of the page. Look for the Statement Timeout setting.
From the documentation:
When Statement Timeout is set to -1, (the default) statements do not timeout.
The behaviour you're seeing suggests the timeout is set to 1800 if it's timing out after 30 minutes.
However, this won't have changed on its own, and if it was already set then it was being ignored previously, which would need some investigation - has anything else changed?
Another possibility is that your code is making several calls within the 2-3 hour window and one of them is now exceeding the timeout on its own, which might be the case if the timeout is lower than 1800. Without seeing your code or the current timeout value I'm just guessing, obviously.
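Besides the console, the same setting lives in the data source's module descriptor under the domain's config/jdbc directory. A sketch, assuming a typical generic data source (the data source name and file are illustrative; only the statement-timeout element is the point here):

```xml
<!-- config/jdbc/MyDataSource-jdbc.xml (name is hypothetical) -->
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>MyDataSource</name>
  <jdbc-connection-pool-params>
    <!-- -1 (the default) disables the statement timeout entirely -->
    <statement-timeout>-1</statement-timeout>
  </jdbc-connection-pool-params>
</jdbc-data-source>
```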
