I am using IBM WAS 8.5 on a Windows server.
The database I am working with is DB2 9.7, also installed on a Windows server (on another machine).
I have a table for logs that contains more than 4,000,000 records.
The data is growing very fast.
When I run a count query on that table, the results are very confusing.
With the WAS JDBC connection pool, the count takes more than 10 seconds to return,
but with a simple JDBC connection (in the same application, or outside it using any DB tool) the result comes back in less than 0.2 seconds!
I've tried JMeter to run load tests and Tivoli to find the right settings, but no luck.
I've also tried DBPool; the result was better but still not acceptable.
Any ideas?
I would start with http://www-01.ibm.com/support/docview.wss?uid=swg21247168 and open a PMR if you are unable to analyze the data. This could be any number of problems, and without data it is very difficult to hazard a guess.
Also, are you doing the necessary work on the DB2 side with runstats/reorg?
Do you have Wireshark, and have you looked at the TCP traffic between the app server and the database? Are you seeing any lag there or not?
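Before opening a PMR it can also help to time the exact same statement through both paths from inside the application, so you know whether the pool/driver path or the database itself is slow. A rough sketch, assuming a hypothetical JNDI name jdbc/logdb and placeholder table/connection details; run it somewhere the JNDI lookup can resolve (e.g. a test servlet):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CountTiming {

    private static final String SQL = "SELECT COUNT(*) FROM LOGS"; // placeholder table name

    // Runs the count on an already-obtained connection and reports the elapsed time.
    static void timeCount(String label, Connection conn) throws Exception {
        long start = System.nanoTime();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(SQL)) {
            rs.next();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + ": " + rs.getLong(1) + " rows in " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) throws Exception {
        // Path 1: through the WAS connection pool (hypothetical JNDI name).
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/logdb");
        try (Connection pooled = ds.getConnection()) {
            timeCount("pooled", pooled);
        }

        // Path 2: a direct type-4 DB2 connection, bypassing the pool (placeholder details).
        try (Connection direct = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/LOGDB", "db2user", "db2password")) {
            timeCount("direct", direct);
        }
    }
}
```

If both paths are slow from inside the app server, the DB2 side (runstats/reorg) or the network is the more likely culprit; if only the pooled path is slow, the WAS datasource and driver configuration is the place to dig.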
Related
We have a legacy process that runs on SSIS 2016 on Windows Server 2016, executes custom queries against databases on remote servers, pulls the results (thousands or sometimes millions of records) and stores them in a local SQL Server database. These remote databases are DB2 and Oracle 19c.
This process has always connected using an OLE DB driver and a data flow with OLE DB source and destination components. It has also always been slow.
Because of an article we read recently claiming that OLE DB transfers only one record at a time, whereas with ADO.NET the network transfer can be done in batches (is this even true?), we decided to try an ADO.NET driver to connect to DB2 and to replace the OLE DB source and destination components with ADO.NET components.
The transfer we used as a test case, which involved 46 million records, basically flew, and we could see it bring down around 10K records at a time. Something that used to run in 13 hours ran in 24 minutes with no other changes. Some small tweaks to the query brought that down even further, to 11 minutes.
This is obviously major, and we want to be able to replicate it with our Oracle data sources. Network bandwidth seems to have been the main issue, so we want to transfer data from Oracle 19c to our SQL Server 2016 databases using SSIS in batches, but we want to ask the experts what the best/fastest way to do this is.
Is Microsoft Connector for Oracle the way to go as far as the driver? Since we're not on SQL Server 2019, this article says we also need to install the Oracle Client and Microsoft Connector Version 4.0 for Oracle by Attunity. What exactly is the Oracle Client? Is it one of these? If so, which one, based on our setup?
Also, should we use ADO.NET components in the data flow just like we did with DB2? In other words, is the single-record vs. batched-records difference driven by the driver used to connect, by the type of components in the data flow, or do both need to go hand in hand for this to work?
Thanks in advance for your responses!
OLE DB connections are not slow by themselves - it's a matter of what features the driver has available to it. It sounds like the ADO.NET driver for DB2 allows bulk insert and the OLE DB one does not.
Regarding Oracle, the Attunity driver is the way to go. You'll need to install the Oracle driver as well. The links that you have look correct to me, but I don't have access to test.
Also, please note that data flows will batch data by default in increments of the buffer size - 10K rows, for example.
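Leaving the SSIS/ADO.NET internals aside, the row-at-a-time vs. batched distinction itself is easiest to see in plain JDBC terms. A rough analogy only (not the SSIS or ADO.NET API; table and column names are placeholders), showing the insert side:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchVsRowByRow {

    // Row-by-row: one network round trip per record.
    static void insertRowByRow(Connection conn, List<String> names) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO target_table (name) VALUES (?)")) {
            for (String name : names) {
                ps.setString(1, name);
                ps.executeUpdate(); // each call goes to the server individually
            }
        }
    }

    // Batched: records are accumulated and shipped in groups,
    // drastically reducing the number of round trips.
    static void insertBatched(Connection conn, List<String> names) throws Exception {
        final int batchSize = 10_000;
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO target_table (name) VALUES (?)")) {
            int count = 0;
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();
                if (++count % batchSize == 0) {
                    ps.executeBatch(); // one round trip for the whole batch
                }
            }
            ps.executeBatch(); // flush the remainder
        }
    }
}
```

The driver has to support the batched path for it to help, which is essentially the difference the question observed between the OLE DB and ADO.NET components.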
I am developing Java applications which connect to Oracle databases. The applications load some amount of data on startup. Because of a slow connection to the TEST environment, application startup takes some time.
I am looking for a proxy/cache tool which would locally store the results of every query, so a result could be loaded from memory if the query has already been run, instead of calling the DB again. This could save a lot of time.
I guess ProxySQL does something similar, but it is targeted at MySQL. Is there something like that for Oracle DB?
Check out the Oracle Client Result Cache. It works with the JDBC OCI driver.
https://docs.oracle.com/database/121/TGDBA/tune_result_cache.htm#TGDBA626
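A minimal sketch of what this looks like from Java, assuming the OCI (thick) client is installed on the application machine and the DBA has enabled the cache via CLIENT_RESULT_CACHE_SIZE; the TNS alias, credentials, and table name are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ResultCacheExample {
    public static void main(String[] args) throws Exception {
        // The client result cache only works with the OCI ("thick") driver,
        // not the pure-Java thin driver, hence the jdbc:oracle:oci URL.
        // "TESTDB" is a placeholder TNS alias.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:oci:@TESTDB", "app_user", "app_password");
             Statement stmt = conn.createStatement()) {

            // The RESULT_CACHE hint marks the query as cacheable; repeated
            // executions of the same statement can then be served from the
            // client-side cache instead of another round trip to TEST.
            String sql = "SELECT /*+ RESULT_CACHE */ id, name FROM ref_data";
            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
```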
I have a simple Informatica (9.1) mapping (one to one) which loads data from a flat file to an RDBMS.
It takes 5 minutes to load into an Oracle DB and 20 minutes to load the same file into SQL Server 2008 R2.
Are there any sources/pointers for performance improvement?
A few things I can think of:
For both tests, is the file local to the app server running the mapping?
Is the connection/distance between the app server and the data servers comparable?
Is "Target load type" of the Target in the Session Properties set to "Bulk"?
Check the thread statistics in the session log to understand whether the issue is while writing to the DB or while reading from the file.
Is the PowerCenter (PC) server installed on the Oracle DB server? Is it the same case with SQL Server? Are SQL Server and the PC server on the same box?
Is the mapping using an ODBC or a native connection to the DB?
We are trying to reproduce an Oracle deadlock issue in our Grails / JBoss 5 / Windows Server 2003 application with The Grinder. We are simulating 800 concurrent users using 8 VM Grinder nodes, but we are only seeing one database connection per VM, so somewhere along the line there appears to be some sort of limit.
How can we lift this limit to allow more than one database connection per VM?
Are you trying to connect directly from the Grinder to Oracle? Normally you'd use the Grinder to apply load against your JBoss server, and let JBoss worry about the Oracle connections.
If you really want to go from The Grinder to Oracle, and you want to control exactly how many DB connections you open, it can be done by opening a separate connection for each Grinder thread. Instantiate a new connection in the __init__ method of your TestRunner class. You'll want to avoid using any ORM tools (Hibernate, iBATIS, ...) since they do connection pooling for you and won't let you directly control the number of DB connections you open. Use the JDBC API (via Jython) instead.
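In an actual Grinder script this would be written in Jython inside the TestRunner's __init__, but the JDBC calls are the same ones shown below. A minimal sketch of the one-connection-per-thread idea, in Java for consistency with the other examples here and with placeholder URL and credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Illustrative only: each worker thread builds and owns its own JDBC
// connection, with no pooling layer in between, so the number of threads
// directly determines the number of database connections.
public class PerThreadWorker implements Runnable {
    private static final String URL = "jdbc:oracle:thin:@dbhost:1521:ORCL"; // placeholder

    private final Connection connection;

    public PerThreadWorker() throws Exception {
        // One physical connection per thread, opened at construction time
        // (the equivalent of the Grinder TestRunner's __init__).
        this.connection = DriverManager.getConnection(URL, "load_user", "load_password");
    }

    @Override
    public void run() {
        try (PreparedStatement ps =
                 connection.prepareStatement("SELECT 1 FROM dual")) {
            ps.executeQuery();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        // 10 threads -> 10 connections, mirroring "one connection per Grinder thread".
        for (int i = 0; i < 10; i++) {
            new Thread(new PerThreadWorker()).start();
        }
    }
}
```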
We have a Java application running inside JBoss EAP 5.1, and until today we have always used the standard thin driver to connect to Oracle.
After upgrading all our clients to the Oracle 11.2.0.2 JDBC driver and downloading all the related files from the Oracle site, we found three possible configurations that could be used by JBoss:
<connection-url>jdbc:oracle:thin:@...</connection-url>
<driver-class>oracle.jdbc.driver.OracleDriver</driver-class>

<connection-url>jdbc:oracle:thin:@...</connection-url>
<driver-class>oracle.jdbc.pool.OracleDataSource</driver-class>

<connection-url>jdbc:oracle:thin:@...</connection-url>
<driver-class>oracle.ucp.UniversalConnectionPool</driver-class>
The last one requires copying ucp.jar into the JBoss lib directory.
The question is: has anybody tried the different configurations and found one better than the others in terms of performance and stability?
Regards
Massimo
It depends on what type of connection you want. Do you want to set up a pooled connection or not? Generally, in mid-tier environments you want to use pooled connections to limit the number of connections to your database and at the same time provide good service times.
1) Direct connection to the database
2) Pooled connection to the database
3) Pooled connection to the database, using the new UCP pool
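To make option 3 concrete, UCP can also be tried standalone before wiring it into JBoss. A minimal sketch showing what the pool itself does, assuming ucp.jar and the JDBC driver are on the classpath; the URL, credentials, and pool sizes are placeholders:

```java
import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpExample {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();

        // UCP wraps the plain driver/data source and adds the pooling layer on top.
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@dbhost:1521:ORCL"); // placeholder URL
        pds.setUser("app_user");
        pds.setPassword("app_password");
        pds.setInitialPoolSize(5);
        pds.setMinPoolSize(5);
        pds.setMaxPoolSize(20);

        // Connections are borrowed from the pool; close() returns them to it.
        try (Connection conn = pds.getConnection()) {
            System.out.println("Got pooled connection: " + conn.getMetaData().getURL());
        }
    }
}
```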
We got some answers from Red Hat.
Their suggestion was basically to continue using the first option and let JBoss manage the connection pool.
Option number 2 is not a suggested option, while option number 3 is too recent and Red Hat does not have experience using it.
Regards
Massimo