Oracle insert with many bind variables over WAN is very slow

We have a problem with a slow INSERT statement that uses 40 bind variables for the column values. It runs for several seconds over a WAN link, and we were not able to nail down the problem until we used a network analyzer: every single execution of this prepared query required exchanging over 120 packets between client and server to complete. What can we do to execute it more efficiently?
When I run the same insert with actual parameters (without bind variables) from the same host, it completes in tens of milliseconds. There is nothing special about the parameters; they are only short varchars and numbers.
We are using Delphi 6 with ODAC. We tried various versions of ODAC and the Oracle client to no avail. On the server side we tried both Oracle 10 and 11.

TNS is not designed to work well over a WAN.
If it's possible, rewrite your application to use another network layer, such as HTTP, which is more efficient. You can do that using Oracle HTTP Server, for instance.
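A different approach, if changing the data-access layer is easier than changing the network layer: most Oracle APIs support batching / array binding, which sends many parameter sets per round trip instead of one exchange per execution. ODAC isn't shown here, so this is a minimal sketch with plain Oracle JDBC; the table, columns, host, and credentials are all made up for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
            conn.setAutoCommit(false);
            // Two columns stand in for the real 40-column insert.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO orders (id, customer) VALUES (?, ?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "customer-" + i);
                    ps.addBatch();     // queue the row client-side
                }
                ps.executeBatch();     // the driver ships the queued rows in far fewer round trips
                conn.commit();
            }
        }
    }
}

If ODAC offers an equivalent array-DML mode (its documentation would confirm), the same reduction in round trips should apply without changing the SQL, only how the parameters are bound.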

Have you looked at external tables? They replace the need for SQL*Loader.
They require Oracle 9i or above, though.
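For reference, an external table is just DDL that points Oracle at a flat file on the server; a minimal sketch (directory object, file name, and columns are all hypothetical), executed here over JDBC to stay consistent with the other examples:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExternalTableDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement st = conn.createStatement()) {
            // DATA_DIR must be an existing Oracle DIRECTORY object the user can read.
            st.execute(
                "CREATE TABLE orders_ext (" +
                "  id       NUMBER," +
                "  customer VARCHAR2(50)" +
                ") ORGANIZATION EXTERNAL (" +
                "  TYPE ORACLE_LOADER" +
                "  DEFAULT DIRECTORY data_dir" +
                "  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',')" +
                "  LOCATION ('orders.csv')" +
                ")");
            // The file can then be queried like any table, e.g.
            // INSERT INTO orders SELECT * FROM orders_ext;
        }
    }
}

Note that the flat file lives on the database server, so the WAN cost becomes a single file transfer rather than per-row round trips.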

Related

Best way to transfer data IN BATCHES/BULK from Oracle 19c to SQL Server 2016 using SSIS

We have a legacy process that runs on SSIS 2016 on Windows Server 2016, executes custom queries against databases on remote servers, pulls the results (thousands or sometimes millions of records) and stores them in a local SQL Server database. These other databases are on DB2 and Oracle 19c.
This process has always connected using an OLE DB driver and a data flow with OLE DB source and destination components. It also has always been slow.
Because of an article we read recently claiming that OLE DB transfers only one record at a time, while with ADO.NET the network transfer can be done in batches (is this even true?), we decided to try an ADO.NET driver to connect to DB2 and to replace the OLE DB source and destination components with ADO.NET components.
The transfer we used as a test case, which involved 46 million records, basically flew, and we could see it bring down around 10K records at a time. Something that used to run in 13 hours ran in 24 minutes with no other changes. Some small tweaks to the query brought that time down even further, to 11 minutes.
This is obviously major, and we want to be able to replicate it with our Oracle data sources. Network bandwidth seems to have been the main issue, so we want to transfer data from Oracle 19c to our SQL Server 2016 databases using SSIS in batches, but we want to ask the experts what the best/fastest way to do this is.
Is Microsoft Connector for Oracle the way to go as far as the driver goes? Since we're not on SQL Server 2019, this article says we also need to install the Oracle Client and Microsoft Connector Version 4.0 for Oracle by Attunity. What exactly is the Oracle Client? Is it one of these? If so, which one, based on our setup?
Also, should we use ADO.NET components in the data flow, just like we did with DB2? In other words, is the single-record vs. batched-records difference driven by the driver used to connect, by the type of components in the data flow, or do both need to go hand in hand for this to work?
Thanks in advance for your responses!
OLE DB connections are not slow by themselves - it's a matter of what features the driver has available. It sounds like the ADO.NET driver for DB2 allows bulk inserts and the OLE DB one does not.
Regarding Oracle, the Attunity driver is the way to go. You'll need to install the Oracle client as well. The links that you have look correct to me, but I don't have access to test.
Also, please note that data flows batch data by default in increments of the buffer size - 10K rows, for example.

Cannot load large amounts of data with DataGrip or IntelliJ to PostgreSQL

I use DataGrip to move some data from a MySQL installation to another PostgreSQL database.
That worked like a charm for three other tables. The next one, more than 500,000 rows, could not be imported.
I use the function "Copy Table To... (F5)".
This is the log:
16:28  Connected
16:30  user#localhost: tmp_post imported to forum_post: 1999 rows (1m 58s 206ms)
16:30  Can't save current transaction state. Check connection and database settings and try again.
For other errors, such as wrong data types or null data in NOT NULL columns, a very helpful log is created - but not in this case.
The problem is also relevant when using the database plugin for IntelliJ-based IDEs, not only DataGrip.
The simplest way to solve the issue is to add "prepareThreshold=0" to your connection string, as in this answer:
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
Or, for example, if you are using several settings in the connection string:
jdbc:postgresql://hostmaster.com:6432,hostsecond.com:6432/dbName?targetServerType=master&prepareThreshold=0
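If the connection is built in code rather than from a URL, the same setting can be passed as a driver property. A minimal sketch with the stock PostgreSQL JDBC driver (host, port, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NoServerSidePrepare {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "db_user");      // placeholder credentials
        props.setProperty("password", "secret");
        // 0 disables server-side prepared statements entirely,
        // which is what PgBouncer's transaction pooling requires.
        props.setProperty("prepareThreshold", "0");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:6432/db_name", props)) {
            System.out.println("connected without server-side prepares");
        }
    }
}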
It's a well-known problem when connecting to the PostgreSQL server via PgBouncer, rather than a problem with IntelliJ itself. When loading massive amounts of data into the database, IntelliJ splits the data into chunks and loads them sequentially, each time executing the query and committing the data. By default, the PostgreSQL JDBC driver starts using server-side prepared statements after five executions of a query.
The driver uses server-side prepared statements by default when the PreparedStatement API is used. In order to get to server-side prepare, you need to execute the query 5 times (that can be configured via the prepareThreshold connection property). An internal counter keeps track of how many times the statement has been executed, and when it reaches the threshold it will start to use server-side prepared statements.
Your PgBouncer probably runs with transaction pooling, and the latest version of PgBouncer doesn't support prepared statements with transaction pooling.
How to use prepared statements with transaction pooling?
To make prepared statements work in this mode, PgBouncer would need to keep track of them internally, which it does not do. So the only way to keep using PgBouncer in this mode is to disable prepared statements in the client.
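A minimal reproduction of that interaction, assuming a PgBouncer in transaction-pooling mode on port 6432 (host, database, and credentials are placeholders): after the fifth execution the driver switches to a named server-side statement, which the next pooled backend may not know about.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PgBouncerRepro {
    public static void main(String[] args) throws Exception {
        // Default prepareThreshold (5); adding ?prepareThreshold=0 makes the error go away.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:6432/db_name", "db_user", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement("SELECT ?")) {
                for (int i = 1; i <= 10; i++) {
                    ps.setInt(1, i);
                    ps.executeQuery().close();
                    conn.commit(); // each transaction may land on a different backend
                    // From the 6th execution on, the driver reuses a named statement
                    // (S_1, S_2, ...); a backend that never saw the PREPARE fails with
                    // "prepared statement \"S_n\" does not exist".
                }
            }
        }
    }
}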
You can verify that the issue is indeed caused by the incorrect use of prepared statements with PgBouncer by viewing the IntelliJ log files. Go to Help -> Show Log in Explorer and search for an "org.postgresql.util.PSQLException: ERROR: prepared statement" exception.
2022-04-08 12:32:56,484 [693272684] WARN - j.database.dbimport.ImportHead - ERROR: prepared statement "S_3649" does not exist
java.sql.SQLException: ERROR: prepared statement "S_3649" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:755)
at org.postgresql.jdbc.PgConnection.commit(PgConnection.java:777)

IBM WebSphere Application Server JDBC connection pool performance issue

I am using IBM WAS 8.5 on a Windows server. The database I am working with is DB2 9.7, also installed on a Windows server (on another machine).
I have a table for logs that contains more than 4,000,000 records, and the data is growing very fast.
When I run a count query on that table, the results are very confusing: with the WAS JDBC connection pool, the count takes more than 10 seconds, but with a simple JDBC connection (in the same application, or outside it using any DB tool) the result comes back in less than 0.2 seconds!
I've tried JMeter to run load tests and Tivoli to find the right settings, but no result.
I've tried DbPool too; the result was better, but not acceptable.
Any ideas?
I would start with http://www-01.ibm.com/support/docview.wss?uid=swg21247168 and open a PMR if you are unable to analyze the data. This could be any number of problems, and without data it is very difficult to hazard a guess.
Also, are you doing the necessary maintenance on the DB2 side with runstats/reorg?
Do you have Wireshark, and are you looking at the TCP traffic between the app server and the database? Are you seeing any lag there?
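One quick way to narrow it down is to time the exact same query over both paths from inside the application; a sketch of that comparison (the JNDI name, table, and JDBC URL are all placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PoolVsDirectTiming {
    // Time one COUNT query over a given connection, in milliseconds.
    static long countMillis(Connection conn) throws Exception {
        long start = System.nanoTime();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM log_table")) {
            rs.next();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Call this from a servlet or EJB so the JNDI lookup resolves inside WAS.
    public static void compare() throws Exception {
        // Path 1: through the WAS connection pool (hypothetical JNDI name).
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/logsDS");
        try (Connection pooled = ds.getConnection()) {
            System.out.println("pooled: " + countMillis(pooled) + " ms");
        }
        // Path 2: a raw driver connection that bypasses the pool (placeholder URL).
        try (Connection direct = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/LOGDB", "db2user", "secret")) {
            System.out.println("direct: " + countMillis(direct) + " ms");
        }
    }
}

If the pooled path is consistently slower for the identical statement, pool and data-source settings (statement cache, isolation level, shareable vs. unshareable connections) become the prime suspects rather than the database itself.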

How to allow more than one database connection per machine with JBoss 5 and Oracle

We are trying to reproduce an Oracle deadlock issue in our Grails / JBoss 5 / Windows Server 2003 application with The Grinder. We are simulating 800 concurrent users using 8 VM Grinder nodes, but we are only seeing one database connection per VM, so somewhere along the line there appears to be some sort of limit.
How can we lift this limit to allow more than one database connection per VM?
Are you trying to connect directly from the Grinder to Oracle? Normally you'd use the Grinder to apply load against your JBoss server, and let JBoss worry about the Oracle connections.
If you really want to go from The Grinder to Oracle, and you want to control exactly how many DB connections you open, you can do so by opening a separate connection for each Grinder thread. Instantiate a new connection in the __init__ method of your TestRunner class. You'll want to avoid ORM tools (Hibernate, iBATIS, ...), since they do connection pooling for you and won't give you direct control over the number of DB connections you open. Use the JDBC API (via Jython) instead.
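The Grinder script itself would be Jython, but the pattern the answer describes - one dedicated, unpooled connection per worker thread - looks like this as an illustrative plain-Java sketch (URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class PerThreadConnections {
    // Each worker thread lazily opens its own dedicated connection - no pooling.
    private static final ThreadLocal<Connection> CONN = ThreadLocal.withInitial(() -> {
        try {
            return DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    });

    public static void main(String[] args) {
        Runnable worker = () -> {
            try (Statement st = CONN.get().createStatement()) {
                st.execute("SELECT 1 FROM dual"); // each thread drives its own session
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        };
        // 8 threads open 8 distinct database connections (a real script
        // would also close them when each thread finishes).
        for (int i = 0; i < 8; i++) {
            new Thread(worker).start();
        }
    }
}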

Monitor and change SQL queries with SQL Server 2000

I have a database upgrade tool that is misbehaving. I would like to catch one of the queries it sends to the database and change it before it is executed.
The tool connects via ODBC.
The tool and the SQL Server are on the same Windows 2003 Server box.
Any ideas?
EDIT: (More info)
When the tool runs, it dies on step 12 of 100. It issues some bad SQL intended to create a view. I need to suppress the error message or correct the SQL before it is executed. I can't just create the view beforehand, because the first thing the tool does is drop it - and even then the tool's create would error, because the view would already exist.
Certainly - use SQL Profiler to intercept and record the query.
Very useful little tool, that...
