NHibernate + Oracle: speed problems when querying data

We have a WCF application which uses NHibernate to query data from the database. After installing the application in a new test environment we're facing some performance problems with queries. Our old and new environments use different Oracle servers, but both databases contain the same data.
We have gone through our NHibernate logs and identified the problematic part:
2010-12-02 07:14:22,673 NHibernate.SQL - SELECT this_.CC...
2010-12-02 07:14:22,688 NHibernate.Loader.Loader - processing result set
2010-12-02 07:14:27,235 NHibernate.Loader.Loader - result set row: 0
In this case the query returned one row. But it seems that in our new environment the "processing result set" step takes much longer (5 seconds vs. 0.5 seconds) than in the old one. Is there some way to find out exactly what inside the "processing result set" is taking so long?
Note: executing the exact same query directly against the DB with Toad doesn't reproduce the problem. With Toad, both database servers are equally fast.
We are using DetachedCriteria to create the query and then it is executed like this:
Dim criteria As ICriteria = crit.GetExecutableCriteria(GetSession())
Return New Generic.List(Of T)(criteria.List(Of T))
The version of NHibernate is 2.1.2.4 and we're using ActiveRecord 2.1.0 to create the mappings. Both Oracle servers are version 10g.
So in our case we have two environments that run the same version of the application with identical configuration files and query identical databases, but which have different application and Oracle servers. In one environment querying through NHibernate takes around 5.5 seconds, and in the other 0.5 seconds. The results are consistent, and the same query has been executed around 50 times in both environments.
Is there something in the Oracle configuration which could cause it to misbehave with NHibernate? And is there a way to get more detailed logging out of NHibernate so that the exact problem inside the "processing result set" could be found?
Any advice is greatly appreciated.

We were able to fix our problem by switching the database driver from Microsoft's to Oracle's ODP.NET. Now both servers are equally fast, and even our previously fast server executes the queries much more rapidly. We don't know what setting in our new server made Microsoft's Oracle driver so slow.
It also seems that Microsoft nowadays recommends that everyone use something other than its own Oracle driver: http://blogs.msdn.com/b/adonet/archive/2009/06/15/system-data-oracleclient-update.aspx

Do a SQL trace in both environments by adding these statements to your session:
alter session set timed_statistics=true;
alter session set max_dump_file_size=unlimited;
alter session set events '10046 trace name context forever, level 8' ;
-- your query goes here
select * from mytable where x= 1;
alter session set events '10046 trace name context off';
Then use tkprof to examine the trace file (go to the user_dump_dest directory, usually one with "udump" in the name, and run tkprof inputtracefilename.trc outputfile.log). Type tkprof by itself to see the help screen and command options.
Also, check that both databases use the same INIT.ORA settings for things like CURSOR_SHARING.

Related

Batching of queries is not allowed by J2EE compliance

I am trying to run a standalone Java application which executes a batch of SELECT queries with PreparedStatement (using the addBatch() and executeBatch() methods of PreparedStatement) against DB2 v9.7.
I get this error message at executeBatch():
com.ibm.db2.jcc.c.lh: [ibm][db2][jcc][105][10840] Batching of queries is not allowed by J2EE compliance.
at com.ibm.db2.jcc.c.gg.c(gg.java:2566)
at com.ibm.db2.jcc.c.gg.b(gg.java:2536)
at com.ibm.db2.jcc.c.gg.executeBatch(gg.java:1421)
at
Anybody know about this error? Nothing shows up on SO or Google.
Seems pretty self-explanatory to me.
I've only ever seen INSERT/UPDATE used with addBatch.
Given that executeBatch() returns just an int[] of update counts, it seems obvious that it wouldn't be of much use for SELECT queries.
This should be possible if you move your logic for the multiple dynamic SQL statements into a stored procedure. You can then issue one JDBC call to the stored procedure.
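A minimal sketch of that single JDBC call, assuming a hypothetical DB2 procedure GET_ITEMS that returns its rows through a cursor declared WITH RETURN (the procedure name, parameter, and itemId variable are illustrative, not from the original question):
// Calls a hypothetical stored procedure GET_ITEMS(IN p_id INTEGER)
// that returns a result set via a cursor declared WITH RETURN.
try (CallableStatement cs = conn.prepareCall("{call GET_ITEMS(?)}")) {
    cs.setInt(1, itemId);           // illustrative input parameter
    if (cs.execute()) {             // true if the procedure returned a result set
        try (ResultSet rs = cs.getResultSet()) {
            while (rs.next()) {
                // read the columns of each returned row here
            }
        }
    }
}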

Query very slow after a few executions

I'm new to Oracle, and the following situation is driving me crazy. I'm working on an Oracle 11g database, and it often happens that a query I run in SQL Developer executes correctly in 5-6 seconds, while at other times the same query takes 300-400 seconds. Are there any tools to debug what is happening when the query takes 300-400 seconds?
Update 1
This is my SQL Developer screenshot; the problem seems to be "direct path read temp".
Update 2
report
Update 3
report2
Any suggestion?
Try setting a trace, with USER being whichever user is experiencing the delay.
As sys:
GRANT ALTER SESSION TO USER;
As the user executing the trace:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
ALTER SESSION SET TRACEFILE_IDENTIFIER = "MY_TEST_SESSION";
Produce the error/issue, then as the user testing:
ALTER SESSION SET EVENTS '10046 trace name context off';
As system find out where the trace files are kept:
show parameter background_dump_dest;
Go to that directory and look for .trc/.trm files containing MY_TEST_SESSION. For example ORCL_ora_29772_MY_TEST_SESSION.trc.
After that tkprof those files. In linux:
tkprof ORCL_ora_29772_MY_TEST_SESSION.trc ORCL_ora_29772_MY_TEST_SESSION.tkprof explain=user/password sys=no
Read the tkprof file and it will show you wait times on given statements.
For more info on TKPROF read this. For more info on enabling/disabling a trace read this.
The best tool is Real-Time SQL Monitoring. It does not require changing code or access to the operating system. The only downside is it requires licensing the Tuning Pack.
Compare this single line of code with the trace steps in the other answer. Also, the output looks much nicer.
select dbms_sqltune.report_sql_monitor(sql_id => 'your sql id', type => 'text') from dual;
There's almost never a need to use trace in 11g and beyond.
This behaviour can be caused by cardinality feedback bugs/issues in 11gR2; I had a similar issue. You can test whether this is the case by turning the feature off via the hidden parameter _optimizer_use_feedback (e.g. alter session set "_optimizer_use_feedback" = false;).
Also try applying the latest updates.

hibernate 4.3.0.Final causes an ORA-01000: maximum open cursors exceeded [duplicate]

I am getting an ORA-01000 SQL exception, so I have some queries related to it.
Are maximum open cursors exactly related to the number of JDBC connections, or are they also related to the Statement and ResultSet objects we have created for a single connection? (We are using a pool of connections.)
Is there a way to configure the number of statement/resultset objects in the database (like connections)?
Is it advisable to use instance-variable statement/resultset objects instead of method-local statement/resultset objects in a single-threaded environment?
Does executing a prepared statement in a loop cause this issue? (Of course, I could have used sqlBatch.) Note: pStmt is closed once the loop is over.
{ //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    pStmt = obj.getConnection().prepareStatement(sql);
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
    }
} //method/try ends
{ //finally starts
    pStmt.close();
} //finally ends
What will happen if conn.createStatement() and conn.prepareStatement(sql) are called multiple times on a single connection object?
Edit1:
6. Will the use of weak/soft reference Statement objects help in preventing the leakage?
Edit2:
1. Is there any way I can find all the missing statement.close() calls in my project? I understand it is not a memory leak, but I need to find the statement references (where close() was not called) that are eligible for garbage collection. Is any tool available, or do I have to analyze it manually?
Please help me understand it.
Solution
To find the opened cursors in the Oracle DB for username VELU:
Go to the Oracle machine and start sqlplus as sysdba.
[oracle@db01 ~]$ sqlplus / as sysdba
Then run
SELECT A.VALUE,
S.USERNAME,
S.SID,
S.SERIAL#
FROM V$SESSTAT A,
V$STATNAME B,
V$SESSION S
WHERE A.STATISTIC# = B.STATISTIC#
AND S.SID = A.SID
AND B.NAME = 'opened cursors current'
AND USERNAME = 'VELU';
If possible, please read my answer for a better understanding of my solution.
ORA-01000, the maximum-open-cursors error, is an extremely common error in Oracle database development. In the context of Java, it happens when the application attempts to open more ResultSets than there are configured cursors on a database instance.
Common causes are:
Configuration mistake
You have more threads in your application querying the database than cursors on the DB. One case is where you have a connection and thread pool larger than the number of cursors on the database.
You have many developers or applications connected to the same DB instance (which will probably include many schemas) and together you are using too many connections.
Solution:
Increasing the number of cursors on the database (if resources allow) or
Decreasing the number of threads in the application.
Cursor leak
The application is not closing ResultSets (in JDBC) or cursors (in stored procedures on the database).
Solution: Cursor leaks are bugs; increasing the number of cursors on the DB simply delays the inevitable failure. Leaks can be found using static code analysis, JDBC or application-level logging, and database monitoring.
Background
This section describes some of the theory behind cursors and how JDBC should be used. If you don't need to know the background, you can skip this and go straight to 'Eliminating Leaks'.
What is a cursor?
A cursor is a resource on the database that holds the state of a query, specifically the position where a reader is in a ResultSet. Each SELECT statement has a cursor, and PL/SQL stored procedures can open and use as many cursors as they require. You can find out more about cursors on Orafaq.
A database instance typically serves several different schemas and many different users, each with multiple sessions. To do this, it has a fixed number of cursors available for all schemas, users and sessions. When all cursors are open (in use) and a request comes in that requires a new cursor, the request fails with an ORA-01000 error.
Finding and setting the number of cursors
The number is normally configured by the DBA on installation. The number of cursors currently in use, the maximum number and the configuration can be accessed in the Administrator functions in Oracle SQL Developer. From SQL it can be set with:
ALTER SYSTEM SET OPEN_CURSORS=1337 SID='*' SCOPE=BOTH;
Relating JDBC in the JVM to cursors on the DB
The JDBC objects below are tightly coupled to the following database concepts:
A JDBC Connection is the client representation of a database session and provides database transactions. A connection can have only a single transaction open at any one time (but transactions can be nested).
A JDBC ResultSet is supported by a single cursor on the database. When close() is called on the ResultSet, the cursor is released.
A JDBC CallableStatement invokes a stored procedure on the database, often written in PL/SQL. The stored procedure can create zero or more cursors, and can return a cursor as a JDBC ResultSet.
JDBC is thread safe: It is quite OK to pass the various JDBC objects between threads.
For example, you can create the connection in one thread; another thread can use this connection to create a PreparedStatement and a third thread can process the result set. The single major restriction is that you cannot have more than one ResultSet open on a single PreparedStatement at any time. See Does Oracle DB support multiple (parallel) operations per connection?
Note that a database commit occurs on a Connection, and so all DML (INSERT, UPDATE and DELETE's) on that connection will commit together. Therefore, if you want to support multiple transactions at the same time, you must have at least one Connection for each concurrent Transaction.
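As a minimal sketch of that point (the table and column names here are illustrative, not from the original answer), all pending DML on a connection commits or rolls back together:
conn.setAutoCommit(false);                // take manual control of the transaction
try (PreparedStatement ps =
         conn.prepareStatement("UPDATE EMP SET FULL_NAME = ? WHERE ID = ?")) {
    ps.setString(1, "New Name");
    ps.setLong(2, 42L);
    ps.executeUpdate();
    conn.commit();                        // commits ALL pending DML on this connection
} catch (SQLException e) {
    conn.rollback();                      // rolls back ALL pending DML on this connection
    throw e;
}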
Closing JDBC objects
A typical example of executing a ResultSet is:
Statement stmt = conn.createStatement();
try {
    ResultSet rs = stmt.executeQuery( "SELECT FULL_NAME FROM EMP" );
    try {
        while ( rs.next() ) {
            System.out.println( "Name: " + rs.getString("FULL_NAME") );
        }
    } finally {
        try { rs.close(); } catch (Exception ignore) { }
    }
} finally {
    try { stmt.close(); } catch (Exception ignore) { }
}
Note how the finally clause ignores any exception raised by close():
If you simply closed the ResultSet without the try {} catch {}, it might fail and prevent the Statement from being closed.
We want to allow any exception raised in the body of the try to propagate to the caller.
If you have a loop over, for example, creating and executing Statements, remember to close each Statement within the loop.
In Java 7, Oracle introduced the AutoCloseable interface, which replaces most of the Java 6 boilerplate with some nice syntactic sugar.
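A try-with-resources version of the example above (same query; this is the standard Java 7 idiom rather than anything specific to this answer):
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT FULL_NAME FROM EMP")) {
    while (rs.next()) {
        System.out.println("Name: " + rs.getString("FULL_NAME"));
    }
} // rs and stmt are closed automatically here, even if an exception is thrown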
Holding JDBC objects
JDBC objects can be safely held in local variables, object instance members, and class members. It is generally better practice to:
Use object instance or class members to hold JDBC objects that are reused multiple times over a longer period, such as Connections and PreparedStatements
Use local variables for ResultSets since these are obtained, looped over and then closed typically within the scope of a single function.
There is, however, one exception: If you are using EJBs, or a Servlet/JSP container, you have to follow a strict threading model:
Only the Application Server creates threads (with which it handles incoming requests)
Only the Application Server creates connections (which you obtain from the connection pool)
When saving values (state) between calls, you have to be very careful. Never store values in your own caches or static members - this is not safe across clusters and other weird conditions, and the Application Server may do terrible things to your data. Instead use stateful beans or a database.
In particular, never hold JDBC objects (Connections, ResultSets, PreparedStatements, etc) over different remote invocations - let the Application Server manage this. The Application Server not only provides a connection pool, it also caches your PreparedStatements.
Eliminating leaks
There are a number of processes and tools available to help detect and eliminate JDBC leaks:
During development - catching bugs early is by far the best approach:
Development practices: Good development practices should reduce the number of bugs in your software before it leaves the developer's desk. Specific practices include:
Pair programming, to educate those without sufficient experience
Code reviews because many eyes are better than one
Unit testing, which means you can exercise any and all of your code base from a test tool, making it trivial to reproduce leaks
Use existing libraries for connection pooling rather than building your own
Static Code Analysis: Use a tool like the excellent FindBugs to perform static code analysis. This picks up many places where close() has not been correctly handled. FindBugs has a plugin for Eclipse, but it also runs standalone for one-offs and has integrations with Jenkins CI and other build tools.
At runtime:
Holdability and commit
If the ResultSet holdability is ResultSet.CLOSE_CURSORS_AT_COMMIT, then the ResultSet is closed when the Connection.commit() method is called. This can be set using Connection.setHoldability() or by using the overloaded Connection.createStatement() method.
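A sketch of both ways of setting it (note that holdability support is driver-specific; Oracle's JDBC driver, for instance, documents HOLD_CURSORS_OVER_COMMIT as the only supported mode, so treat this as a general JDBC illustration):
// Connection-wide default for all statements created afterwards
conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);

// Or per statement, via the overloaded factory method
Statement stmt = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY,
        ResultSet.CLOSE_CURSORS_AT_COMMIT);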
Logging at runtime.
Put good log statements in your code. These should be clear and understandable so the customer, support staff and teammates can understand without training. They should be terse and include printing the state/internal values of key variables and attributes so that you can trace processing logic. Good logging is fundamental to debugging applications, especially those that have been deployed.
You can add a debugging JDBC driver to your project (for debugging only - don't actually deploy it). One example (I have not used it) is log4jdbc. You then need to do some simple analysis on its log file to see which executes don't have a corresponding close. Counting the opens and closes should highlight whether there is a potential problem.
Monitoring the database. Monitor your running application using tools such as the SQL Developer 'Monitor SQL' function or Quest's TOAD. Monitoring is described in this article. During monitoring, you query the open cursors (e.g. from the table v$sesstat) and review their SQL. If the number of cursors is increasing, and (most importantly) becoming dominated by one identical SQL statement, you know you have a leak with that SQL. Search your code and review.
Other thoughts
Can you use WeakReferences to handle closing connections?
Weak and soft references are ways of allowing you to reference an object in a way that allows the JVM to garbage collect the referent at any time it deems fit (assuming there are no strong reference chains to that object).
If you pass a ReferenceQueue in the constructor to the soft or weak Reference, the object is placed in the ReferenceQueue when it is garbage collected (if that ever occurs). With this approach, you can interact with the object's finalization and close or finalize the object at that moment.
Phantom references are a bit weirder; their purpose is only to control finalization, but you can never get a reference to the original object, so it's going to be hard to call the close() method on it.
However, it is rarely a good idea to attempt to control when the GC is run (Weak, Soft and PhantomReferences let you know after the fact that the object is enqueued for GC). In fact, if the amount of memory in the JVM is large (eg -Xmx2000m) you might never GC the object, and you will still experience the ORA-01000. If the JVM memory is small relative to your program's requirements, you may find that the ResultSet and PreparedStatement objects are GCed immediately after creation (before you can read from them), which will likely fail your program.
TL;DR: The weak reference mechanism is not a good way to manage and close Statement and ResultSet objects.
I am adding a few more points of understanding.
A cursor relates only to a Statement object; it is neither the ResultSet nor the Connection object.
We still have to close the ResultSet to free some Oracle memory, but an unclosed ResultSet is not what is counted against the cursor limit.
Closing the Statement object automatically closes its ResultSet object too.
A cursor is created for every SELECT/INSERT/UPDATE/DELETE statement.
Each Oracle DB instance is identified by an Oracle SID; similarly, Oracle identifies each connection by a session SID. The two SIDs are different things.
So an Oracle session is nothing but a JDBC (TCP) connection, which is nothing but one SID.
If we set the maximum cursors to 500, that limit applies to a single JDBC session/connection/SID.
So we can have many JDBC connections, each with its own quota of cursors (statements).
Once the JVM terminates, all connections/cursors are closed; likewise, when a JDBC Connection is closed, all cursors belonging to that connection are closed.
Logging in as sysdba.
In PuTTY (Oracle login):
[oracle@db01 ~]$ sqlplus / as sysdba
In SQL*Plus:
UserName: sys as sysdba
Set the session_cached_cursors value to 0 so that it won't cache closed cursors:
alter session set session_cached_cursors=0;
select * from V$PARAMETER where name='session_cached_cursors';
Select the existing OPEN_CURSORS value set per connection in the DB:
SELECT max(a.value) AS highest_open_cur, p.value AS max_open_cur
FROM v$sesstat a, v$statname b, v$parameter p
WHERE a.statistic# = b.statistic#
AND b.name = 'opened cursors current'
AND p.name = 'open_cursors'
GROUP BY p.value;
Below is the query to find the SID/connections list with open cursor values.
SELECT a.value, s.username, s.sid, s.serial#
FROM v$sesstat a, v$statname b, v$session s
WHERE a.statistic# = b.statistic# AND s.sid=a.sid
AND b.name = 'opened cursors current' AND username = 'SCHEMA_NAME_IN_CAPS'
Use the query below to identify the SQL in the open cursors:
SELECT oc.sql_text, s.sid
FROM v$open_cursor oc, v$session s
WHERE OC.sid = S.sid
AND s.sid=1604
AND OC.USER_NAME ='SCHEMA_NAME_IN_CAPS'
Now debug the Code and Enjoy!!! :)
Correct your code like this:
try { //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    pStmt = obj.getConnection().prepareStatement(sql);
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
    }
} //method/try ends
finally { //finally starts
    if (pStmt != null) {
        pStmt.close();
    }
} //finally ends
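On Java 7 and later, a try-with-resources version (a sketch using the same names as above) removes the finally block entirely:
String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
try (PreparedStatement pStmt = obj.getConnection().prepareStatement(sql)) {
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
    }
} // pStmt is closed automatically, even if execute() throws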
Are you sure that you're really closing your pStatements, connections and results?
To analyze open objects you can implement a delegator pattern, which wraps code around your Statement, Connection and ResultSet objects. Then you'll see whether an object is successfully closed.
An example for: pStmt = obj.getConnection().prepareStatement(sql);
class Obj {
    public Connection getConnection() {
        return new ConnectionDelegator(/* ...create your real Connection object here and pass it in... */);
    }
}

class ConnectionDelegator implements Connection {
    Connection delegate;

    public ConnectionDelegator(Connection con) {
        this.delegate = con;
    }

    public PreparedStatement prepareStatement(String sql) throws SQLException {
        return delegate.prepareStatement(sql);
    }

    public void close() throws SQLException {
        try {
            delegate.close();
        } finally {
            log.debug(delegate.toString() + " was closed");
        }
    }

    // ...all remaining Connection methods delegate in the same way...
}
If your application is a Java EE application running on Oracle WebLogic as the application server, a possible cause for this issue is the Statement Cache Size setting in WebLogic.
If the Statement Cache Size setting for a particular data source is about equal to, or greater than, the Oracle database maximum open cursor count setting, then all of the open cursors can be consumed by cached SQL statements that are held open by WebLogic, resulting in the ORA-01000 error.
To address this, reduce the Statement Cache Size setting for each WebLogic datasource that points to the Oracle database to be significantly less than the maximum cursor count setting on the database.
In the WebLogic 10 Admin Console, the Statement Cache Size setting for each data source can be found at Services (left nav) > Data Sources > (individual data source) > Connection Pool tab.
I too faced this issue. The exception below used to come:
java.sql.SQLException: - ORA-01000: maximum open cursors exceeded
I was using the Spring Framework with Spring JDBC for the DAO layer.
My application leaked cursors somehow, and after a few minutes or so it used to give me this exception.
After a lot of thorough debugging and analysis, I found that the problem was with the indexing, primary key and unique constraints in one of the tables used in the query I was executing.
My application was trying to update columns which were mistakenly indexed. So, whenever my application hit the update query on the indexed columns, the database tried to re-index based on the updated values. It was leaking the cursors.
I was able to solve the problem by doing proper indexing on the columns used for searching in the query, and by applying appropriate constraints wherever required.
I faced the same problem (ORA-01000) today. I had a for loop in the try{} that executed a SELECT statement against an Oracle DB many times (each time changing a parameter), and in the finally{} I had my code to close the ResultSet, PreparedStatement and Connection as usual. But as soon as I reached a specific number of loops (1000) I got the Oracle error about too many open cursors.
Based on the post by Andrew Alcock above, I made changes so that inside the loop I closed each result set and each statement after getting the data, before looping again, and that solved the problem.
Additionally, the exact same problem occurred in another loop of INSERT statements against another Oracle DB (ORA-01000), this time after 300 statements. Again it was solved the same way, so either the PreparedStatement or the ResultSet or both count as open cursors until they are closed.
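A minimal sketch of that fix, closing the statement and result set inside every iteration (SELECT_SQL and ids are illustrative placeholders, not names from the original answer):
for (String id : ids) {
    // Open, use, and close the statement/result set per iteration,
    // so each cursor is released before the next one is opened.
    try (PreparedStatement ps = conn.prepareStatement(SELECT_SQL)) {
        ps.setString(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // consume the row here
            }
        }
    } // cursor released here, every iteration
}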
Did you set autocommit=true? If not try this:
try { //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    Connection conn = obj.getConnection();
    pStmt = conn.prepareStatement(sql);
    for (String language : additionalLangs) {
        pStmt.setLong(1, subscriberID);
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
        conn.commit();
    }
} //method/try ends
finally { //finally starts
    pStmt.close();
} //finally ends
Query to find the SQL statements that have open cursors:
SELECT s.machine, oc.user_name, oc.sql_text, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
and S.USERNAME='XXXX'
GROUP BY user_name, sql_text, machine
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
This problem mainly happens when you are using connection pooling, because when you close a connection it goes back to the pool, and the cursors associated with that connection are never closed since the connection to the database is still open.
So one alternative is to decrease the idle time of connections in the pool: whenever a connection sits idle for, say, 10 seconds, the underlying database connection is closed and a new one is created to put in the pool.
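For example, with HikariCP (my choice of pool here; the original answer names no specific pool, and the URL and credentials are placeholders), the idle timeout can be configured like this:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // illustrative URL
config.setUsername("scott");                               // illustrative credentials
config.setPassword("tiger");
config.setMinimumIdle(0);        // allow the pool to shrink to zero idle connections
config.setIdleTimeout(10_000);   // close connections idle for 10 s (HikariCP's minimum)
HikariDataSource ds = new HikariDataSource(config);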
Using batch processing will result in less overhead. See the following link for examples:
http://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm
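A short sketch of the batched version of the insert loop from the question (same names as the question's code):
String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
try (PreparedStatement pStmt = obj.getConnection().prepareStatement(sql)) {
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.addBatch();        // queue the row instead of one round trip per execute()
    }
    pStmt.executeBatch();        // one round trip for the whole batch
}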
In our case, we were using Hibernate and we had many variables referencing the same Hibernate mapped entity. We were creating and saving these references in a loop. Each reference opened a cursor and kept it open.
We discovered this by using a query to check the number of open cursors while running our code, stepping through with a debugger and selectively commenting things out.
As to why each new reference opened another cursor - the entity in question had collections of other entities mapped to it and I think this had something to do with it (perhaps not just this alone but in combination with how we had configured the fetch mode and cache settings). Hibernate itself has had bugs around failing to close open cursors, though it looks like these have been fixed in later versions.
Since we didn't really need so many duplicate references to the same entity anyway, the solution was to stop creating and holding onto all those redundant references. Once we did that, the problem went away.
I had this problem with my datasource in WildFly and Tomcat, connecting to an Oracle 10g database.
I found that under certain conditions the statement wasn't closed even when statement.close() was invoked.
The problem was with the Oracle driver we were using, ojdbc7.jar. This driver is intended for Oracle 12c and 11g, and it seems to have some issues when used with Oracle 10g, so I downgraded to ojdbc5.jar and now everything is running fine.
I faced the same issue because I was querying the DB for more than 1000 iterations. I used try and finally in my code, but was still getting the error.
To solve it I just logged into the Oracle DB and ran the query below:
ALTER SYSTEM SET open_cursors = 8000 SCOPE=BOTH;
And this solved my problem immediately.
I ran into this issue after setting the prepared statement cache size to a large value. Apparently, when prepared statements are kept in the cache, their cursors stay open.
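If you are using Oracle's JDBC driver directly, its implicit statement cache exposes this knob; a sketch (the size of 50 is an arbitrary example, and the cache size must stay well below the database's OPEN_CURSORS setting):
// Unwrap the vendor interface from a pooled java.sql.Connection
oracle.jdbc.OracleConnection oraConn = conn.unwrap(oracle.jdbc.OracleConnection.class);
oraConn.setImplicitCachingEnabled(true);  // turn on the implicit statement cache
oraConn.setStatementCacheSize(50);        // each cached statement can hold a cursor open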

Same stored procedure acts differently on two/(three) different IDEs

I just created a stored procedure in an MS SQL DB using TOAD.
What it does is accept an ID that some records are associated with, then insert those records into a table.
The next part of the stored procedure uses the ID input to search the table where the items were inserted, and then returns them as a result set to the user, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts data and returns information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends at that. It seems not to execute the second part of the stored procedure, which is a SELECT statement.
I have a feeling this is because of the JDBC adapter. Another reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would really make things easier if I could do two things at the same time. Pentaho Report Designer also uses JDBC adapters; maybe that's not a coincidence?
But if there are other things that I can tweak I'd really appreciate it.
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement is executed in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it runs in a separate session, then this might explain what is happening.

Sql vs Oracle - profiler [duplicate]

I work with SQL Server, but I must migrate to an application with an Oracle DB. To trace my application's queries, in SQL Server I use the wonderful Profiler tool. Is there an equivalent for Oracle?
I found an easy solution:
Step 1. Connect to the DB with an admin user, using PL/SQL Developer, SQL Developer, or any other query interface.
Step 2. Run the script below; in the S.SQL_FULLTEXT column you will see the executed queries.
SELECT
S.LAST_ACTIVE_TIME,
S.MODULE,
S.SQL_FULLTEXT,
S.SQL_PROFILE,
S.EXECUTIONS,
S.LAST_LOAD_TIME,
S.PARSING_USER_ID,
S.SERVICE
FROM
SYS.V_$SQL S,
SYS.ALL_USERS U
WHERE
S.PARSING_USER_ID=U.USER_ID
AND UPPER(U.USERNAME) IN ('oracle user name here')
ORDER BY TO_DATE(S.LAST_LOAD_TIME, 'YYYY-MM-DD/HH24:MI:SS') desc;
The only issue with this is that I can't find a way to show the input parameter values (for function calls), but at least we can see what is run in Oracle, and in what order, without using a special tool.
You can use Oracle Enterprise Manager to monitor the active sessions, with the query being executed, its execution plan, locks, some statistics and even a progress bar for longer tasks.
See: http://download.oracle.com/docs/cd/B10501_01/em.920/a96674/db_admin.htm#1013955
Go to Instance -> sessions and watch the SQL Tab of each session.
There are other ways. Enterprise Manager just presents, with pretty colors, what is already available in special views like those documented here:
http://www.oracle.com/pls/db92/db92.catalog_views?remark=homepage
And, of course, you can also use EXPLAIN PLAN FOR, the TRACE tool and tons of other ways of instrumentation. There are some reports in Enterprise Manager for the most expensive SQL queries. You can also search recent queries kept in the cache.
alter system set timed_statistics=true
--or
alter session set timed_statistics=true --if want to trace your own session
-- must be big enough:
select value from v$parameter p
where name='max_dump_file_size'
-- Find out sid and serial# of session you interested in:
select sid, serial# from v$session
where ...your_search_params...
--you can begin tracing with 10046 event, the fourth parameter sets the trace level(12 is the biggest):
begin
sys.dbms_system.set_ev(sid, serial#, 10046, 12, '');
end;
--turn off tracing with setting zero level:
begin
sys.dbms_system.set_ev(sid, serial#, 10046, 0, '');
end;
/*possible levels:
0 - turned off
1 - minimal level. Much like set sql_trace=true
4 - bind variables values are added to trace file
8 - waits are added
12 - both bind variable values and wait events are added
*/
--same if you want to trace your own session with bigger level:
alter session set events '10046 trace name context forever, level 12';
--turn off:
alter session set events '10046 trace name context off';
--file with raw trace information will be located:
select value from v$parameter p
where name='user_dump_dest'
--name of the file(*.trc) will contain spid:
select p.spid from v$session s, v$process p
where s.paddr=p.addr
and ...your_search_params...
--also you can set the name by yourself:
alter session set tracefile_identifier='UniqueString';
--finally, use TKPROF to make trace file more readable:
C:\ORACLE\admin\databaseSID\udump>
C:\ORACLE\admin\databaseSID\udump>tkprof my_trace_file.trc output=my_file.prf
TKPROF: Release 9.2.0.1.0 - Production on Wed Sep 22 18:05:00 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
C:\ORACLE\admin\databaseSID\udump>
--to view state of trace file use:
set serveroutput on size 30000;
declare
ALevel binary_integer;
begin
SYS.DBMS_SYSTEM.Read_Ev(10046, ALevel);
if ALevel = 0 then
DBMS_OUTPUT.Put_Line('sql_trace is off');
else
DBMS_OUTPUT.Put_Line('sql_trace is on');
end if;
end;
/
This is more or less a translation of http://www.sql.ru/faq/faq_topic.aspx?fid=389. The original is fuller, but this is still better than what others posted, IMHO.
GI Oracle Profiler v1.2
It's a tool for Oracle to capture executed queries, similar to the SQL Server Profiler.
An indispensable tool for maintaining applications that use this database server.
You can download it from the official site, iacosoft.com.
Try PL/SQL Developer; it has a nice, user-friendly GUI interface to the profiler. Give the trial a try - I swear by this tool when working on Oracle databases.
http://www.allroundautomations.com/plsqldev.html
Seeing as I've just voted a recent question as a duplicate and pointed in this direction . . .
A couple more: in SQL*Plus, SET AUTOTRACE ON will give the explain plan and statistics for each statement executed.
TOAD also allows for client side profiling.
The disadvantage of both of these is that they only tell you the execution plan for the statement, not how the optimiser arrived at that plan; for that you will need lower-level server-side tracing.
Another important one to understand is Statspack snapshots - they are a good way for looking at the performance of the database as a whole. Explain plan, etc, are good at finding individual SQL statements that are bottlenecks. Statspack is good at identifying the fact your problem is that a simple statement with a good execution plan is being called 1 million times in a minute.
The catch is to capture all SQL run between two points in time, the way SQL Server does.
There are situations where it is useful to capture the SQL that a particular user is running in the database. Usually you would simply enable session tracing for that user, but there are two potential problems with that approach.
The first is that many web based applications maintain a pool of persistent database connections which are shared amongst multiple users.
The second is that some applications connect, run some SQL and disconnect very quickly, making it tricky to enable session tracing at all (you could of course use a logon trigger to enable session tracing in this case).
A quick and dirty solution to the problem is to capture all SQL statements that are run between two points in time.
The following procedure will create two tables, each containing a snapshot of the database at a particular point. The tables will then be queried to produce a list of all SQL run during that period.
If possible, you should do this on a quiet development system - otherwise you risk getting way too much data back.
Take the first snapshot
Run the following sql to create the first snapshot:
create table sql_exec_before as
select executions,hash_value
from v$sqlarea
/
Get the user to perform their task within the application.
Take the second snapshot.
create table sql_exec_after as
select executions, hash_value
from v$sqlarea
/
Check the results
Now that you have captured the SQL it is time to query the results.
This first query will list all query hashes that have been executed:
select aft.hash_value
from sql_exec_after aft
left outer join sql_exec_before bef
on aft.hash_value = bef.hash_value
where aft.executions > bef.executions
or bef.executions is null;
This one will display the hash and the SQL itself:
set pages 999 lines 100
break on hash_value
select hash_value, sql_text
from v$sqltext
where hash_value in (
select aft.hash_value
from sql_exec_after aft
left outer join sql_exec_before bef
on aft.hash_value = bef.hash_value
where aft.executions > bef.executions
or bef.executions is null
)
order by
hash_value, piece
/
Tidy up. Don't forget to remove the snapshot tables once you've finished:
drop table sql_exec_before
/
drop table sql_exec_after
/
Oracle, along with other databases, analyzes a given query to create an execution plan. This plan is the most efficient way of retrieving the data.
Oracle provides the 'explain plan' statement which analyzes the query but doesn't run it, instead populating a special table that you can query (the plan table).
The syntax (simple version, there are other options such as to mark the rows in the plan table with a special ID, or use a different plan table) is:
explain plan for <sql query>
The analysis of that data is left for another question, or your further research.
There is a commercial tool, FlexTracer, which can be used to trace Oracle SQL queries.
This is an Oracle doc explaining how to trace SQL queries, including a couple of tools (SQL Trace and tkprof):
link
Apparently there is no small, simple, cheap utility that would help perform this task. There are, however, 101 ways to do it in a complicated and inconvenient manner.
The following article describes several; there are probably dozens more:
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm
