Cannot load large amounts of data with DataGrip or IntelliJ into PostgreSQL

I use DataGrip to move some data from a MySQL installation to a PostgreSQL database.
That worked like a charm for three other tables. The next one, over 500,000 rows, could not be imported.
I use the function "Copy Table To... (F5)".
This is the log.
16:28 Connected
16:30 user#localhost: tmp_post imported to forum_post: 1999 rows (1m 58s 206ms)
16:30 Can't save current transaction state. Check connection and database settings and try again.
For other errors, such as wrong data types or null data in not-null columns, a very helpful log is created. But not now.
The problem is also relevant when using the database plugin for IntelliJ-based IDEs, not only DataGrip.

The simplest way to solve the issue is to add "prepareThreshold=0" to your connection string, as in this answer:
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
Or, for example, if you are using several settings in the connection string:
jdbc:postgresql://hostmaster.com:6432,hostsecond.com:6432/dbName?targetServerType=master&prepareThreshold=0
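If you build the JDBC URL programmatically, the parameter can be appended the same way. A minimal sketch (the helper name and example URL are just for illustration):

```java
public class PgUrl {
    // Append prepareThreshold=0, keeping any query parameters
    // that are already present in the URL.
    static String disableServerPrepare(String url) {
        return url + (url.contains("?") ? "&" : "?") + "prepareThreshold=0";
    }

    public static void main(String[] args) {
        System.out.println(disableServerPrepare("jdbc:postgresql://ip:port/db_name"));
        // jdbc:postgresql://ip:port/db_name?prepareThreshold=0
    }
}
```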
It's a well-known problem when connecting to the PostgreSQL server via PgBouncer rather than a problem with IntelliJ itself. When loading massive amounts of data, IntelliJ splits the data into chunks and loads them sequentially, executing the query and committing each time. By default, the PostgreSQL JDBC driver switches to server-side prepared statements after five executions of a query.
The driver uses server-side prepared statements by default when the PreparedStatement API is used. In order to get to server-side prepare, you need to execute the query 5 times (that can be configured via the prepareThreshold connection property). An internal counter keeps track of how many times the statement has been executed, and when it reaches the threshold it will start to use server-side prepared statements.
Probably your PgBouncer runs with transaction pooling, and the latest version of PgBouncer doesn't support prepared statements with transaction pooling.
How to use prepared statements with transaction pooling?
To make prepared statements work in this mode, PgBouncer would need to keep track of them internally, which it does not do. So the only way to keep using PgBouncer in this mode is to disable prepared statements in the client.
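With the pgjdbc driver, connection settings can also be passed as a java.util.Properties object instead of in the URL, so the same client-side fix can be applied where the connection is created. A sketch (the class name, user, and password are placeholders; the returned props would be handed to DriverManager.getConnection(url, props)):

```java
import java.util.Properties;

public class NoServerPrepare {
    // prepareThreshold=0 disables server-side prepared statements,
    // which PgBouncer's transaction pooling mode cannot track.
    static Properties connectionProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("prepareThreshold", "0");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(connectionProps("app", "secret").getProperty("prepareThreshold"));
        // 0
    }
}
```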
You can verify that the issue is indeed caused by the incorrect use of prepared statements with PgBouncer by viewing the IntelliJ log files. Go to Help -> Show Log in Explorer and search for the "org.postgresql.util.PSQLException: ERROR: prepared statement" exception.
2022-04-08 12:32:56,484 [693272684] WARN - j.database.dbimport.ImportHead - ERROR: prepared statement "S_3649" does not exist
java.sql.SQLException: ERROR: prepared statement "S_3649" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:755)
at org.postgresql.jdbc.PgConnection.commit(PgConnection.java:777)

Related

Upgrade problem with H2 database when upgrading from 1.4.192 to 1.4.200: Scale must not be bigger than precision

Years ago I wrote an app to capture data into H2 datafiles for easy transport and archival purposes. The application was written with H2 1.4.192.
Recently, I have been revisiting some load code related to that application, and I have found that there are substantial gains to be had from some of the things I am doing in H2 1.4.200.
I would like to be able to load the data I had previously saved into the other databases. But I had some tables that used a now-invalid precision/scale specification. Here is an example:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5)
H2 databases created with 1.4.192 that contain tables like this will not load on 1.4.200; they fail with the following error:
Scale($"23") must not be bigger than precision({1}); SQL statement:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5) [90051-200]
My question is how can I go about correcting the invalid table schema? My application utilizes a connection to an H2 database and then loads the data it contains into another database. Ideally I'd like to have my application be able to detect this situation and repair it automatically so the app can simply utilize the older data files. But in H2 1.4.200 I get the error right up front upon connection.
Is there a secret/special mode that will allow me to connect 1.4.200 to the database to repair its schema? I hope so.
Outside of that, it seems like my only option is to have separate classloaders for the different versions of H2, with the remedial operations happening in one classloader and the load operations in another. Either that, or start another instance of the JVM to do the remedial operations.
I wanted to check for options before I did a bunch of work.
This problem is similar to this reported issue, but there were no specifics on how he performed his resolution.
This data type is not valid and was never supported by H2, but old H2, due to a bug, somehow accepted it.
You need to export your database to a script with 1.4.192 Beta using
SCRIPT TO 'source.sql'
You need to use the original database file, because if you opened a file from 1.4.192 Beta with 1.4.200, it may have been corrupted by it; such an automatic upgrade is not supported.
You need to replace DATETIME(23,3) with TIMESTAMP(3), or whatever you need, using a text editor. If the exported SQL is too large for regular text editors, you can use a stream editor, such as sed:
sed 's/DATETIME(23,3)/TIMESTAMP(3)/g' source.sql > fixed.sql
Now you can create a new database with 1.4.200 and import the edited script into it:
RUNSCRIPT FROM 'fixed.sql'
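If you don't have sed handy, the replacement step can also be scripted in Java against the exported file. A sketch (it reads the whole script into memory, so it assumes the export fits in the heap; the file names match the example above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FixScript {
    public static void main(String[] args) throws IOException {
        // Rewrite the invalid DATETIME(23,3) column type to TIMESTAMP(3)
        // in the script produced by SCRIPT TO 'source.sql'.
        String sql = Files.readString(Path.of("source.sql"));
        Files.writeString(Path.of("fixed.sql"),
                sql.replace("DATETIME(23,3)", "TIMESTAMP(3)"));
    }
}
```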

DB2 Exclusive Lock Not Released

In a particularly heavily accessed DB2 table, used by distributed Java desktop applications via JDBC, I'm seeing the following scenario several times a day:
Client A wants to INSERT new rows and gets an IX lock on the table and X locks on each new row;
Other clients want to perform a SELECT and are granted an IS lock on the table, but the application gets stuck;
Client A continues to work, but the INSERT and UPDATE queries are not committed, the locks are not released, and it keeps collecting X locks on each row;
Client A exits and its work is not committed. The other clients finally get their SELECT result set.
It used to work well, and it does most of the time, but the lock situations are getting more and more frequent.
Auto-commit is ON.
There are no exceptions thrown or errors detected in the logs.
DB2 9.5 / JDBC Driver 9.1 (JDBC 3 specification)
If the JDBC applications are not performing a COMMIT, then the locks will persist until a rollback or commit. If an application quits with uncommitted inserts, then a rollback will happen in all recent versions of Db2. This is expected behaviour for Db2 on Linux/Unix/Windows.
If the JDBC application is failing to commit, then it is broken or misconfigured, so you must get to the root cause of that if you seek a permanent solution.
If the other clients wish to ignore the insert row-locks, then they should choose the correct isolation level, and you can configure Db2 to skip uncommitted inserted rows. See the documentation for DB2_SKIPINSERTED at this link.
It turns out that sometimes, and I don't know why, auto-commit turns off for a random single instance of the application.
The following validation seems to solve the problem (but not the root of it):
if (!conn.getAutoCommit()) {
    conn.commit();
}
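That guard can be exercised without a real DB2 connection by standing in for java.sql.Connection with a dynamic proxy. A sketch (all names are made up for illustration) showing that the guard commits exactly when auto-commit is off:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class CommitGuard {
    // Commit only when auto-commit is off; calling commit() on an
    // auto-commit connection throws SQLException on most drivers.
    static void commitIfNeeded(Connection conn) throws SQLException {
        if (!conn.getAutoCommit()) {
            conn.commit();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> calls = new ArrayList<>();
        // Stand-in Connection that records every call and reports
        // auto-commit as off, mimicking the misbehaving instances.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            calls.add(method.getName());
            return method.getName().equals("getAutoCommit") ? Boolean.FALSE : null;
        };
        Connection fake = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class}, handler);
        commitIfNeeded(fake);
        System.out.println(calls); // [getAutoCommit, commit]
    }
}
```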

Sybase and JDBC: Could not commit JDBC transaction. Read timed out

After my app has been committing many transactions for several minutes, I get the following exception:
could not commit jdbc transaction; nested exception is java.sql.SQLException: JZ006: Caught IOException: java.net.SocketTimeoutException: Read timed out...
I'm using Sybase with the JDBC 4 driver with Spring JDBC, and I found this link: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc39001.0707/html/prjdbc0707/prjdbc070714.htm
Could I just use any of the following:
SESSION_TIMEOUT
DEFAULT_QUERY_TIMEOUT
INTERNAL_QUERY_TIMEOUT
One idea is to batch the transactions, but I have no time to develop that.
What options are there to avoid getting that error?
Check if your processes are blocking each other when they execute (or ask your DBA if you're not sure how to check). Depending upon the connection properties (specifically, autocommit being set to off), you may not actually be committing each transaction fully before the next one is attempted, and they may block each other if you're using a connection pool with multiple threads.
Talk to your DBA and check the table's locking scheme: for example, if it's set to allpages locking, you will hold locks at the page level rather than the row level. You can also check this yourself via sp_help. Some more info regarding the various types of locking scheme can be found at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/X25549.htm (an old version, but still valid on current versions).
You can check for locks via sp_who, sp_lock, or against the system tables directly (select spid, blocked from master..sysprocesses where blocked != 0 is a very simple one to get the process and the blocking process; you can add more columns as required).
You should also ask your DBA to check that the transactions are optimal, since, for example, a table scan on an UPDATE could well lock out the whole table to other transactions and would lead to the timeout issues you're seeing here.

Tibco JDBC Update dry run

Is it possible to do a dry run of the JDBC Update activities in TIBCO? Meaning that I want to run those activities, but not actually update the database.
Even running in test mode, if that's possible, would be good.
The only option I see is having a copy of the targeted tables in a separate schema, duplicating the data, and temporarily pointing the JDBC connection of your activity at this secondary, temporary/test database.
Since you can use global variables, no code is changed between test and delivery (a typical goal), and you can compare both databases' tables to see if the update WOULD HAVE run well...
I think I found a way. I haven't tested it yet, but theoretically it should work.
The solution is to install P6Spy and create a custom module that throws an exception when trying to execute an INSERT/UPDATE.
You could wrap the activity in a transaction group and roll back whenever you only want to test the statement. Otherwise, just exit the group normally so the data gets committed.

OBIEE SQL statement generation for JDBC

How does OBIEE generate the SQL statements that are then run against the target database? I have a report that generates one SQL statement when executed against an Oracle database and a completely different one when executed via a JDBC driver against Apache Drill. My problem is that in the second case the query is not even syntactically valid.
I've read this - http://gerardnico.com/wiki/dat/obiee/query_compiler - but still don't understand the mechanism through which OBIEE decides on the actual query to be executed based on the driver.
OBIEE uses a "common metadata model" known as the RPD. This has a logical model of your data, along with the physical data source(s) for it. When a user runs a report it is submitted as a "logical" query that the BI Server then compiles using the RPD to generate the necessary SQL query (or queries) against the data sources.
Whilst Hive and Impala definitely work with OBIEE, I've not heard of Drill being successfully used. If you've got the connectivity working, then to sort out the query syntax it generates you need to tweak the DBFeatures configuration, which OBIEE uses to understand what SQL statements are valid for a given database. So if Drill doesn't support, for example, INTERSECT, you simply untick INTERSECT_SUPPORTED (I'm paraphrasing the exact dialog terminology).
