My org uses legacy components with JavaLite + ActiveJDBC as the ORM in our Java web application. I am creating a local Docker database (Oracle 12c) for development. When I start the local Jetty server pointed at my local database, startup takes more than an hour. The cause is that ActiveJDBC walks all the entity classes for all the tables and fetches metadata for each one in a loop. Looking at the ActiveJDBC registry class (org.javalite.activejdbc.Registry), it is doing this:
Connection c = ConnectionsAccess.getConnection(dbName);
java.sql.DatabaseMetaData databaseMetaData = c.getMetaData();
String[] tables = metaModels.getTableNames(dbName);
for (String table : tables) {
    ResultSet rs = databaseMetaData.getColumns(null, schema, table, null);
    ...
}
Each of these calls takes around 15-30 seconds, and there are hundreds of entity classes. When I point my local server at our test database it is much faster (but still very slow). Is there any way I can tune my local Docker database so these metadata calls are faster? Or is there any ActiveJDBC configuration I can set to make the initialization lazy? There has to be some reason these calls take so much longer on the local database vs our test database. I don't think it's because our test DB is a powerhouse; the test DB is actually pretty slow and has limited resources.
EDITS / Clarifications:
This seems to be less an ActiveJDBC question and more a question of why the metadata queries take so long on my local Docker database. The code below takes 16 seconds with the local DB URL and 356 ms when pointing at test. I can also see the local CPU spike to 100% in the Docker container.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DatabaseMetaDataTest {
    public static void main(String[] args) throws SQLException {
        // Register the driver
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        // Get the connection
        String url = "jdbc:oracle:thin:@localhost:1521/ORCLCDB.localdomain";
        //String url = "jdbc:oracle:thin:@test:1532:xe";
        Connection con = DriverManager.getConnection(url, "user", "pass");
        System.out.println("Connection established......");
        // Retrieve the metadata object
        DatabaseMetaData metaData = con.getMetaData();
        // Time the column metadata call
        long start = System.currentTimeMillis();
        ResultSet columns = metaData.getColumns(null, "SCHEMA", "TABLE", null);
        long end = System.currentTimeMillis();
        System.out.println("duration:" + (end - start));
        con.close();
    }
}
Further updates:
I decompiled the Oracle driver and found that this is the SQL taking forever:
SELECT NULL AS table_cat,
t.owner AS table_schem,
t.table_name AS table_name,
t.column_name AS column_name,
DECODE (t.data_type, 'CHAR', 1, 'VARCHAR2', 12, 'NUMBER', 3,
'LONG', -1, 'DATE', 93, 'RAW', -3, 'LONG RAW', -4,
'BLOB', 2004, 'CLOB', 2005, 'BFILE', -13, 'FLOAT', 6,
'TIMESTAMP(6)', 93, 'TIMESTAMP(6) WITH TIME ZONE', -101,
'TIMESTAMP(6) WITH LOCAL TIME ZONE', -102,
'INTERVAL YEAR(2) TO MONTH', -103,
'INTERVAL DAY(2) TO SECOND(6)', -104,
'BINARY_FLOAT', 100, 'BINARY_DOUBLE', 101,
'XMLTYPE', 2009,
1111)
AS data_type,
t.data_type AS type_name,
DECODE (t.data_precision, null, DECODE(t.data_type, 'NUMBER', DECODE(t.data_scale, null, 0 , 38), DECODE (t.data_type, 'CHAR', t.char_length, 'VARCHAR', t.char_length, 'VARCHAR2', t.char_length, 'NVARCHAR2', t.char_length, 'NCHAR', t.char_length, 'NUMBER', 0, t.data_length) ), t.data_precision)
AS column_size,
0 AS buffer_length,
DECODE (t.data_type, 'NUMBER', DECODE(t.data_precision, null, DECODE(t.data_scale, null, -127 , t.data_scale), t.data_scale), t.data_scale) AS decimal_digits,
10 AS num_prec_radix,
DECODE (t.nullable, 'N', 0, 1) AS nullable,
NULL AS remarks,
t.data_default AS column_def,
0 AS sql_data_type,
0 AS sql_datetime_sub,
t.data_length AS char_octet_length,
t.column_id AS ordinal_position,
DECODE (t.nullable, 'N', 'NO', 'YES') AS is_nullable,
null as SCOPE_CATALOG,
null as SCOPE_SCHEMA,
null as SCOPE_TABLE,
null as SOURCE_DATA_TYPE,
'NO' as IS_AUTOINCREMENT
FROM all_tab_columns t
WHERE t.owner LIKE 'SCHEMA' ESCAPE '/'
AND t.table_name LIKE 'TABLE' ESCAPE '/'
AND t.column_name LIKE '%' ESCAPE '/'
ORDER BY table_schem, table_name, ordinal_position
I can see that when I run this in Oracle SQL Developer it takes about 0.5 seconds for my sysdba user, but for the other users it takes 16 seconds. Still investigating what the difference is between these users.
Further updates...
This seems to be due to an Oracle bug in 12c. select * from all_tab_columns gets a bad execution plan when run as a regular user: the plan shows the obscure fixed table "X$KZSRO" being full-scanned and then taking forever to sort (the table has 2 rows!). When I connect as sysdba it runs fast. I'm guessing there is some issue with how regular users access this table. For now, since this is just a dev database, I'm going to grant the sysdba role to my user and figure out a SQL profile later. I know it's not a great solution, but it works around the Oracle performance bug. Startup time went from 1 hour to 1 minute.
First, if getting the metadata from your database takes 15 - 30 seconds per table, there must be something very wrong with that database. ActiveJDBC uses dynamic discovery in order to be in sync with the database on each start. This is the default behavior.
However, if you want, you can use the Static Metadata Generation.
Using this method, all database metadata will be collected during a build time and will be packaged into your jar as a file. ActiveJDBC will then start instantly on all other environments because it will read metadata from this file rather than a database.
Obviously, you must ensure that the database at the build time has exactly the same schema as your other databases. If not, you will experience some mapping issues.
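To make the idea concrete, here is a purely illustrative sketch of "collect the metadata at build time, read it from a file at runtime". This is not ActiveJDBC's actual generator or file format (see the JavaLite documentation for the real setup); the table names, schema, and output file are assumptions:
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MetadataSnapshot {
    public static void main(String[] args) throws Exception {
        String[] tables = {"STUDENTS", "COURSES"};   // hypothetical entity tables
        try (Connection c = DriverManager.getConnection(
                 "jdbc:oracle:thin:@buildhost:1521/ORCLCDB", "user", "pass");
             FileWriter out = new FileWriter("metadata.csv")) {
            DatabaseMetaData md = c.getMetaData();
            for (String table : tables) {
                try (ResultSet rs = md.getColumns(null, "SCHEMA", table, null)) {
                    while (rs.next()) {
                        // one line per column: table, column name, column type
                        out.write(table + "," + rs.getString("COLUMN_NAME")
                                + "," + rs.getString("TYPE_NAME") + "\n");
                    }
                }
            }
        }
    }
}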
While the Static Metadata Generation will solve your startup performance issue, you still have a problem with your database, and I strongly suggest that you investigate that.
Note: the first implementation of ActiveJDBC was in 2009 for Humana and we used an Oracle database as well. Our schema at the time was about 120 tables, and ActiveJDBC always started lightning fast.
This issue is not really an ActiveJDBC issue; it's an Oracle 12c issue. Reading from ALL_TAB_COLUMNS on 12c gets a bad query plan (and therefore bad performance) unless you log in as sysdba. It's not a good solution, but it works for a local Docker dev database, so I am just making my user sysdba. I will find a SQL profile someday as a real fix. sysdba can't be used for any prod environment, but it's fine for a localhost dev database.
grant sysdba to my_user;
jetty-env.xml:
<New class="oracle.jdbc.pool.OracleDataSource">
  <Set name="URL">jdbc:oracle:thin:@localhost:1521/ORCLCDB.localdomain</Set>
  <Set name="user">my_user as sysdba</Set>
  <Set name="password">my_password</Set>
</New>
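An alternative worth trying before (or instead of) the sysdba grant: slow data dictionary queries are often blamed on missing or stale optimizer statistics for the X$ fixed tables, and gathering fixed-object statistics sometimes repairs the plan. I have not verified this against this particular 12c bug, so treat it as an assumption; the connection details below are placeholders and the call needs a privileged account:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class GatherFixedObjectStats {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "sys");               // placeholder credentials
        props.setProperty("password", "pass");
        props.setProperty("internal_logon", "sysdba");  // log on AS SYSDBA over the thin driver
        try (Connection c = DriverManager.getConnection(
                 "jdbc:oracle:thin:@localhost:1521/ORCLCDB.localdomain", props);
             CallableStatement cs = c.prepareCall(
                 "BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; END;")) {
            cs.execute();  // gathers optimizer statistics for the X$ fixed tables
        }
    }
}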
Related
We have a C# job that queries an Oracle database (version 12.1.0.2) through an OdbcDataAdapter like this:
using (OdbcConnection OdbcConn = source.GetOdbcConnection())
{
    OdbcCommand cmd = new OdbcCommand(query, OdbcConn);
    cmd.CommandTimeout = 0;
    using (OdbcDataAdapter dAdapter = new OdbcDataAdapter(cmd))
    {
        dAdapter.Fill(fileLoaded);
    }
}
The query contains this in the select:
,TO_CHAR(DEL_DTTM, 'YYYY-MM-DD HH24:MI:SS') "Delivery Time2"
When the above code runs, '2018-10-21 02:24:00' is returned. However, when the same query is run in SQL Developer (Oracle's query editor), we get '2018-10-21 03:24:00'.
(Adding more details)
This column is actually a DATE type. As it turns out, the value is stored as '2018-10-21 07:24:00' and is being manipulated in a view by this expression:
,to_char(CAST((FROM_TZ(CAST(OB_DEL_BIRTH_DTTM AS TIMESTAMP),'GMT') AT TIME ZONE SESSIONTIMEZONE) AS DATE), 'YYYY-MM-DD HH24:MI:SS') as DEL_DTTM
I'm in the America/New_York time zone, so what I think is happening is that this conversion applies today's offset from the ODBC session, but today's offset is GMT-5:00, whereas the date being converted falls during Daylight Saving Time and so should get -4:00.
(More Details)
So when I run 'select sessiontimezone from dual' in SQL Developer, I get 'America/New_York', but through C#/ODBC I get -5:00. If I run 'select TZ_OFFSET(sessiontimezone) from dual', I get -5:00 through both.
Any ideas my friends?
I'm trying to understand what is causing this behaviour for one query on an Oracle 10 database.
AWR shows a very high number of parse calls for it (e.g. 15,000+ in a 1 hour period), but 0 executions.
How can the query be parsed 15,000 times but never executed?
Parse Calls : 15,000+
Executions : 0
SQL Text : select * from AVIEW
The * in the SQL would explain the repeated parsing. You should replace it with a list of field names.
Oracle 11, Java, JDBC driver 11.2.0.3.
The problem occurs when you retrieve a generated key (sequence value) from an insert like this:
PreparedStatement ps = connection.prepareStatement(QUERY, new String[] { "student_id" });
We found that the JDBC driver prepares a "SELECT * FROM <table>" statement before every insert. There is only a parse operation, without execution. From T4CConnection.doDescribeTable:
T4CStatement localT4CStatement = new T4CStatement(this, -1, -1);
localT4CStatement.open();
String str1 = paramAutoKeyInfo.getTableName();
String str2 = new StringBuilder().append("SELECT * FROM ").append(str1).toString();
localT4CStatement.sqlObject.initialize(str2);
The Oracle parser doesn't cache parsed queries containing "*", so there is an additional parse operation for every insert.
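One way to sidestep the hidden describe entirely (a sketch, not something from the original answer) is to stop asking the driver for generated keys, fetch the sequence value yourself, and bind it as a normal parameter. The sequence and table names below are hypothetical:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SequenceInsert {
    // Hypothetical STUDENT_SEQ sequence and STUDENTS table; because there is no
    // auto-generated-key request, there is no hidden "SELECT * FROM ..." describe/parse.
    static long insertStudent(Connection connection, String name) throws SQLException {
        long id;
        try (Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery("SELECT student_seq.NEXTVAL FROM dual")) {
            rs.next();
            id = rs.getLong(1);
        }
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO students (student_id, name) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
        return id;
    }
}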
Zero executions indicates that the query did not complete within the AWR snapshot
We have a similar issue, but the query was slightly different:
select col1, col2, col3 from table
Result was the same. A high parse rate but zero executions.
The reason was StatementCreatorUtils#setNull from spring-jdbc, version 4.2.7.
When executing:
insert into table (col1, col2, col3) values (val1, null, null)
there was an extra call to the database to look up the parameter type.
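A possible way to avoid that extra call (a sketch, not taken from the answer above): give JdbcTemplate the SQL types explicitly, so StatementCreatorUtils can set the nulls without asking the database for the parameter type. The table and column names are placeholders:
import java.sql.Types;

import org.springframework.jdbc.core.JdbcTemplate;

public class ExplicitTypesInsert {
    private final JdbcTemplate jdbcTemplate;

    public ExplicitTypesInsert(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void insert(String col1) {
        // With argTypes supplied, setNull(...) uses Types.VARCHAR directly instead of
        // calling the database to discover the type of the null parameters.
        jdbcTemplate.update(
            "insert into my_table (col1, col2, col3) values (?, ?, ?)",
            new Object[] { col1, null, null },
            new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR });
    }
}
Spring also has a system property, spring.jdbc.getParameterType.ignore, intended to suppress the parameter-type lookup; whether it applies depends on your exact Spring version, so check the StatementCreatorUtils source you are running.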
Using H2 in embedded mode, I am restoring an in-memory database from a script backup that was previously generated by H2 using the SCRIPT command.
I use this URL:
jdbc:h2:mem:main
I am doing it like this:
FileReader script = new FileReader("db.sql");
RunScript.execute(conn,script);
which, according to the doc, should be similar to this SQL:
RUNSCRIPT FROM 'db.sql'
And inside my app they do perform the same. But if I run the load through the web console (started via h2.bat), I get a different result.
After loading this data in my app, there are rows that I know are loaded but that are not accessible via a query. These queries demonstrate it:
select count(*) from MY_TABLE yields 96576
select count(*) from MY_TABLE where ID <> 3238396 yields 96575
select count(*) from MY_TABLE where ID = 3238396 yields 0
Loading the web console and using the same RUNSCRIPT command and file to load yields a database where I can find the row with that ID.
My first inclination was that I was dealing with some sort of locking issue. I have tried the following (with no change in results):
manually issuing a conn.commit() after the RunScript.execute()
appending ;LOCK_MODE=3 and then ;LOCK_MODE=0 to my URL
Any pointers in the right direction on how I can identify what is going on? I ended up inserting:
Server.createWebServer("-trace","-webPort","9083").start()
so that I could run these queries through the web console to sanity-check what was coming back through JDBC. The problem happens consistently in my app and consistently doesn't happen via the web console, so there must be something at work.
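For reference, a minimal sketch of that debugging setup, assuming the in-memory URL from above (the port is arbitrary). Because the web console runs in the same JVM, it can connect to the same named in-memory database:
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.h2.tools.RunScript;
import org.h2.tools.Server;

public class DebugConsole {
    public static void main(String[] args) throws Exception {
        // Keep this connection open so the named in-memory database stays alive.
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:main", "sa", "");
        try (FileReader script = new FileReader("db.sql")) {
            RunScript.execute(conn, script);
        }
        Server.createWebServer("-trace", "-webPort", "9083").start();
        System.out.println("Web console started; connect it to jdbc:h2:mem:main");
        Thread.sleep(Long.MAX_VALUE);   // keep the JVM (and the database) running
    }
}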
The table schema is not exotic. This is the schema column from
select * from INFORMATION_SCHEMA.TABLES where TABLE_NAME='MY_TABLE'
CREATE MEMORY TABLE PUBLIC.MY_TABLE(
ID INTEGER SELECTIVITY 100,
P_ID INTEGER SELECTIVITY 4,
TYPE VARCHAR(10) SELECTIVITY 1,
P_ORDER DECIMAL(8, 0) SELECTIVITY 11,
E_GROUP INTEGER SELECTIVITY 1,
P_USAGE VARCHAR(16) SELECTIVITY 1
)
Any push in the right direction would be really appreciated.
EDIT
So it seems that the database is corrupted in some way just after running the RunScript command to load it. As I was trying to debug to find out what is going on, I tried executing the following:
delete from MY_TABLE where ID <> 3238396
And I ended up with:
Row not found when trying to delete from index "PUBLIC.MY_TABLE_IX1: 95326", SQL statement:
delete from MY_TABLE where ID <> 3238396 [90112-178] 90112/90112 (Help)
I then tried dropping and recreating all my indexes from within the context, but it had no effect on the overall problem.
Help!
EDIT 2
More information: the problem occurs due to the creation of an index. (I believe I have found a bug in H2 and I am working on creating a minimal case that reproduces it.) The simple code below will reproduce the problem, if you have the right set of data.
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.h2.tools.RunScript;

public class H2IndexRepro
{
    public static void main(String[] args)
    {
        try
        {
            final String DB_H2URL = "jdbc:h2:mem:main;LOCK_MODE=3";
            Class.forName("org.h2.Driver");
            Connection c = DriverManager.getConnection(DB_H2URL, "sa", "");
            FileReader script = new FileReader("db.sql");
            RunScript.execute(c, script);
            script.close();
            Statement st = c.createStatement();
            ResultSet rs = st.executeQuery("select count(*) from MY_TABLE where P_ID = 3238396");
            rs.next();
            if (rs.getLong(1) == 0)
                System.err.println("It happened");
            else
                System.err.println("It didn't happen");
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
I have reduced the db.sql script to about 5000 rows and it still happens. When I went down to 2500 rows, it stopped happening. If I remove the last line of db.sql (which is the index creation), the problem also stops happening. The last line is this:
CREATE INDEX PUBLIC.MY_TABLE_IX1 ON PUBLIC.MY_TABLE(P_ID);
But the data is an important player in this. It still appears to only ever be the one row and the index somehow makes it inaccessible.
EDIT 3
I have identified a minimal data example that reproduces the problem. I stripped the table schema down to a single column, and I found that the values in that column don't seem to matter -- just the number of rows. Here are the contents (with the obvious bits snipped) of my db.sql generated via the SCRIPT command:
;
CREATE USER IF NOT EXISTS SA SALT '8eed806dbbd1ea59' HASH '6d55cf715c56f4ca392aca7389da216a97ae8c9785de5d071b49de5436b0c003' ADMIN;
CREATE MEMORY TABLE PUBLIC.MY_TABLE(
P_ID INTEGER SELECTIVITY 100
);
-- 5132 +/- SELECT COUNT(*) FROM PUBLIC.MY_TABLE;
INSERT INTO PUBLIC.MY_TABLE(P_ID) VALUES
(1),
(2),
(3),
... snipped you obviously have breaks in the bulk insert here ...
(5143),
(3238396);
CREATE INDEX PUBLIC.MY_TABLE_IX1 ON PUBLIC.MY_TABLE(P_ID);
That will recreate the problem. [Note that my numbering skips a number every time there was a break in the bulk insert, so there really are 5132 rows even though the values run up to (5143); select count(*) from MY_TABLE yields 5132.] Also, I can now recreate the problem directly in the web console by doing:
drop table MY_TABLE
runscript from 'db.sql'
select count(*) from MY_TABLE where P_ID = 3238396
You have recreated the problem if you get 0 back from the select when you know you have a row in there.
Oddly enough, I seem to be able to do
select * from MY_TABLE order by P_ID desc
and I can see the row at this point. But going directly for the row:
select * from MY_TABLE where P_ID = 3238396
Yields nothing.
I just realized that I should note that I am using h2-1.4.178.jar
The H2 folks have apparently already resolved this:
https://code.google.com/p/h2database/issues/detail?id=566
I just need to either get the code from version control or wait for the next release build. Thanks Thomas.
I am using a materialized view, and I can't set it to fast refresh because some of the tables come from a remote database that does not have materialized view logs.
Creating the materialized view takes about 20-30 seconds. However, refreshing it takes more than 2-3 hours, and the total number of records is only around 460,000.
Does anyone have any clue why this happens?
Thanks
The code looks like the following:
create materialized view MY_MV1
refresh force on demand
start with to_date('20-02-2013 22:00:00', 'dd-mm-yyyy hh24:mi:ss') next trunc(sysdate)+1+22/24
as
( SELECT Nvl(Cr.Sol_Chng_Num, ' ') AS Change_Request_Nbr,
Nvl(Sr.Sr_Num, ' ') AS Service_Request_Nbr,
Nvl(Sr.w_Org_Id, 0) AS Org_Id,
Fcr.rowid,
Cr.rowid,
Bsr.rowid,
Sr.rowid,
SYSDATE
FROM Dwadmin.f_S_Change#DateWarehouse.World Fcr
INNER JOIN Dwadmin.d_S_Change#DateWarehouse.World Cr
ON Fcr.w_Sol_Chng_Id = Cr.w_Sol_Chng_Id
INNER JOIN Dwadmin.b_S_Change_Obl#DateWarehouse.World Bsr
ON Fcr.w_Sol_Chng_Id = Bsr.w_Sol_Chng_Id
INNER JOIN Dwadmin.d_S_Rec#DateWarehouse.World Sr
ON Sr.w_Srv_Rec_Id = Bsr.w_Srv_Rec_Id
WHERE Sr.Sr_Num <> 'NS'
);
I have tried to use dbms_mview.refresh('MY_MATVIEW', 'C', atomic_refresh=>false), but it still took 141 minutes to run... vs 159 minutes without atomic_refresh=>false.
I would personally NOT use the scheduler built into the mat view CREATE statement (start with ... next clause).
The main reason (for me) is that you cannot declare the refresh non-ATOMIC this way (at least I haven't found the syntax for this at CREATE time). Depending on your refresh requirements and size, this can save A LOT of time.
I would use dbms_mview.refresh('MY_MATVIEW', 'C', atomic_refresh=>false). This would:
Truncate MY_MATVIEW snapshot table
Insert append into MY_MATVIEW table
If you use the next clause in the create statement, it will set up an atomic refresh, meaning it will:
Delete * from MY_MATVIEW
Insert into MY_MATVIEW
Commit
This will be slower (sometimes much slower), but others can still query MY_MATVIEW while the refresh is occurring. So it depends on your situation and needs.
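For completeness, a sketch of issuing that non-atomic complete refresh from JDBC; the materialized view name is from this thread, and the connection details are placeholders:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class RefreshMatView {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                 "jdbc:oracle:thin:@dbhost:1521/ORCL", "user", "pass");   // placeholders
             CallableStatement cs = c.prepareCall(
                 "BEGIN DBMS_MVIEW.REFRESH(list => 'MY_MATVIEW', method => 'C', atomic_refresh => FALSE); END;")) {
            cs.execute();   // complete refresh via truncate + insert append, done non-atomically
        }
    }
}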
You can test it. I ran this manually and it works for me, friend :)
BEGIN
DBMS_REFRESH.make(
name => 'DB_NAME.MINUTE_REFRESH',
list => '',
next_date => SYSDATE,
interval => '/*1:Mins*/ SYSDATE + 1/(60*24)',
implicit_destroy => FALSE,
lax => FALSE,
job => 0,
rollback_seg => NULL,
push_deferred_rpc => TRUE,
refresh_after_errors => TRUE,
purge_option => NULL,
parallelism => NULL,
heap_size => NULL);
END;
/
BEGIN
DBMS_REFRESH.add(
name => 'DB_NAME.MINUTE_REFRESH',
list => 'DB_NAME.MV_NAME',
lax => TRUE);
END;
/
And then you can destroy it with this:
BEGIN
DBMS_REFRESH.destroy(name => 'DB_NAME.MINUTE_REFRESH');
END;
/
You can also create a materialized view log:
CREATE MATERIALIZED VIEW LOG ON DB_NAME.TABLE_NAME
TABLESPACE users
WITH PRIMARY KEY
INCLUDING NEW VALUES;
I hope it can help you. :)
If it only takes 20-30 seconds to create, why not just drop and recreate the materialized view instead of refreshing it?
I am guessing:
CREATE TABLE doesn't need to write to the transaction log, since it is a new table. atomic_refresh => false means there is a truncate on the delete side (bypassing logging), but you still have the INSERT side to deal with, which likely means a lot of transaction logging.
I use the following statement prepared and bound in ODBC:
SELECT (CASE profile WHEN ? THEN 1 ELSE 2 END) AS profile_order
FROM engine_properties;
Executed over an ODBC 3.0 connection to an Oracle 10g database using the AL32UTF8 character set, it still gives the error ORA-12704: character set mismatch, even after binding to a wchar_t string using SQLBindParameter(SQL_C_WCHAR).
Why? I'm binding as wchar. Shouldn't a wchar be considered an NCHAR?
If I wrap the parameter with TO_NCHAR(), the query works without error. However, since these queries are used against multiple database backends, I don't want to add TO_NCHAR just for Oracle text bindings. Is there something I am missing? Another way to solve this without the TO_NCHAR hammer?
I haven't been able to find anything relevant via searches or in the manuals.
More details...
-- error
SELECT (CASE profile WHEN '_default' THEN 1 ELSE 2 END) AS profile_order
FROM engine_properties;
-- ok
SELECT (CASE profile WHEN TO_NCHAR('_default') THEN 1 ELSE 2 END) AS profile_order
FROM engine_properties;
SQL> describe engine_properties;
Name Null? Type
----------------------------------------- -------- ----------------------------
EID NOT NULL NVARCHAR2(22)
LID NOT NULL NUMBER(11)
PROFILE NOT NULL NVARCHAR2(32)
PKEY NOT NULL NVARCHAR2(50)
VALUE NOT NULL NVARCHAR2(64)
READONLY NOT NULL NUMBER(5)
This version without TO_NCHAR works fine in SQL Server and PostgreSQL (via ODBC) and SQLite (direct). However in Oracle it returns "ORA-12704: character set mismatch".
SQLPrepare(SELECT (CASE profile WHEN ? THEN 1 ELSE 2 END) AS profile_order
FROM engine_properties;) = SQL_SUCCESS
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR,
SQL_VARCHAR, 32, 0, "_default", 18, 16) = SQL_SUCCESS
SQLExecute() = SQL_ERROR
SQLGetDiagRec(1) = SQL_SUCCESS
[SQLSTATE: HY000, NATIVE: 12704, MESSAGE: [Oracle][ODBC]
[Ora]ORA-12704: character set mismatch]
SQLGetDiagRec(2) = SQL_NO_DATA
If I do use TO_NCHAR, it's okay (but won't work in SQL Server, Postgres, SQLite, etc).
SQLPrepare(SELECT (CASE profile WHEN TO_NCHAR(?) THEN 1 ELSE 2 END) AS profile_order
FROM engine_properties;) = SQL_SUCCESS
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR,
SQL_VARCHAR, 32, 0, "_default", 18, 16) = SQL_SUCCESS
SQLExecute() = SQL_SUCCESS
SQLNumResultCols() = SQL_SUCCESS (count = 1)
SQLFetch() = SQL_SUCCESS
If the Oracle database character set is AL32UTF8, why are the columns defined as NVARCHAR2? That means that you want those columns encoded using the national character set (normally AL16UTF16, but that may be different on your database). Unless you are primarily storing Asian language data (or other data that requires 3 bytes of storage in AL32UTF8), it is relatively uncommon to create NVARCHAR2 columns in an Oracle database when the database character set supports Unicode.
In general, you are far better served sticking with the database character set (CHAR and VARCHAR2 columns) rather than trying to work with the national character set (NCHAR and NVARCHAR2 columns) because there are far fewer hoops that need to be jumped through on the development/ configuration side of things. Since you aren't increasing the set of characters you can encode by choosing NVARCHAR2 data types, I'll wager that you'd be happier with VARCHAR2 data types.
Thanks Justin.
I still can't say that I understand exactly how to choose between VARCHAR2 and NVARCHAR2. I had tried using VARCHAR2 for my data (which does include a lot of different languages, both European and Asian) and it didn't work that time.
I have played around a bit more, though, and found that Justin's suggestion works in this combination:
AL32UTF8 database charset
VARCHAR2 column types
set NLS_LANG=.UTF8 before starting sqlplus.exe
data files using UTF-8 (i.e. the files with all the INSERT statements)
inserting and extracting strings from the database using SQL_C_WCHAR
I still don't find Oracle as fun to play with as (for instance) PostgreSQL though... :-)