How can I generate a trace file from within SQL Developer in Oracle? I know how to generate an explain plan, but I want to know how to generate a trace (.trc) file.
There are several ways you can do it; one is
alter session set events '10046 trace name context forever, level 12';
End it with
alter session set events '10046 trace name context off';
Then grab the file. You can look up its path by checking the session's process in v$process; if you're on a modern version of Oracle you can read it directly from v$diag_trace_file_contents. Jonathan Lewis has a really useful view that you can set up as SYS to make this simple and safe: https://jonathanlewis.wordpress.com/2019/10/03/trace-files-2/
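On 11g and later you can also ask the database for your own session's trace-file path directly; for example:
-- Full path of the current session's trace file (11g+)
select value from v$diag_info where name = 'Default Trace File';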
Related
I need to collect all the SQL queries (SELECT, UPDATE, DELETE, INSERT) that the application issues when any order is processed through it.
If I can get all the SQL for at least 50 orders processed through the application, then I can check which SELECT, UPDATE, and DELETE statements are used frequently, and which tables the application touches most often.
From that I can conclude which tables are candidates for partitioning; if I get the whole SQL statements with their WHERE clauses, I can also work out which type of partitioning will suit each particular table.
It seems like a hectic exercise, as the application may use a great many SQL statements, but it will help me understand the application, and afterwards I will have a report on my application's behaviour against the database that later employees can use.
So far I have used the DBMS_ADVISOR package, which suggests some tables in my database to partition, and when I check the EXPLAIN PLAN of the SQL I fed to DBMS_ADVISOR, it turns out that the tables it told me to partition are the ones being full-table scanned in the plan.
The thing is, I cannot partition the tables based on this information alone, since it is application-level partitioning, and my manager will not be convinced by this little information, so I have come up with the above plan. :(
I need to do this to find the tables where I can apply partitioning and other performance-tuning measures, such as creating indexes; since I can get the WHERE clauses with their filters, it is effectively database tuning, and I want to do it because it will help me grow my career in database development.
Please help me out with this scenario.
Will this query give me the required information?
-- V$SQLTEXT_WITH_NEWLINES stores the text in SQL_TEXT, one piece per row
select st.sql_text
from V$SQLTEXT_WITH_NEWLINES st, SYS.V_$SQL s
where st.address = s.address
and st.hash_value = s.hash_value
and s.parsing_schema_name = 'NETSERVICOS2CM'
and s.module = 'JDBC THIN CLIENT'
order by st.address, st.piece;
Tracing for non-DBA users ----
GRANT SELECT ON SYS.V_$SESSION TO USER;
GRANT SELECT ON SYS.V_$MYSTAT TO USER;
To get the SID and SERIAL#:
SELECT sid, serial# FROM SYS.V_$SESSION
WHERE SID = (SELECT DISTINCT SID FROM SYS.V_$MYSTAT);
Then, as a DBA user, execute this --
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (sid=>3002, serial#=>31833,sql_trace=> true);
OR
as the non-DBA user I am using --
ALTER SESSION SET SQL_TRACE = TRUE;
OR
EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
Trigger to trace a session for a particular user ----
CREATE OR REPLACE TRIGGER ON_MY_SCHEMA_LOGIN
AFTER LOGON ON DATABASE
WHEN ( USER = 'NETSERVICOS1CM' )
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = "net1cm"';
EXECUTE IMMEDIATE 'alter session set statistics_level=ALL';
EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever, level 12''';
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
After that, to stop the trace, I am using
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SYSTEM SET EVENTS '10046 trace name context off';
As suggested by Derek.
After this you may have multiple trace files; to consolidate them into a single trace file we can use the TRCSESS utility --
trcsess output=net1cm_trcsess.trc module="JDBC Thin Client" *net1cm.trc
It will create a single trace file, net1cm_trcsess.trc, from all the trace files generated in my case (those with trace file identifier net1cm).
Now we can use the TKPROF utility to generate a human-readable report, using a command like the one below ---
tkprof net1cm_trcsess.trc net1cm_trcsess.txt EXPLAIN=netservicos1cm/netservicos1 SYS=NO
Thanks
So here is my advice.
You can use several different traces for application context actions, such as INSERT, DELETE, UPDATE, SELECT, or even all actions.
Say you have a PL/SQL program run by an application, or an OCI call to the database. You would put this Oracle code at the module/stored-procedure level:
dbms_application_info.set_module(<module_name>,'execute');
before you execute the entire code (right after the BEGIN in the code),
or
dbms_application_info.set_module(<module_name>,'UPDATE');
before you do an update SQL statement.
To turn off application context, you would use (before the END;):
dbms_application_info.set_module(NULL,NULL);
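Putting these calls together, a minimal sketch (the procedure and table names here are hypothetical):
-- Hypothetical procedure instrumented with DBMS_APPLICATION_INFO
CREATE OR REPLACE PROCEDURE process_order (p_order_id IN NUMBER) AS
BEGIN
  dbms_application_info.set_module('PROCESS_ORDER', 'execute');
  dbms_application_info.set_action('UPDATE');
  UPDATE orders SET status = 'PROCESSED' WHERE order_id = p_order_id;
  dbms_application_info.set_module(NULL, NULL);  -- clear the context before END
END;
/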
Then, when you want to trace the module, or an update statement within it, make sure you run the following before the module runs (and the matching disable call afterwards):
execute DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE( -
service_name => '<service_name>', -
module_name => '<module_name>', -
action_name => DBMS_MONITOR.ALL_ACTIONS, -
waits => TRUE, -
binds => TRUE);
All actions would be traced and you would know exactly where the statement ran and what action was executed.
To turn it off:
execute DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE( -
service_name => '<service_name>', -
module_name => '<module_name>', -
action_name => DBMS_MONITOR.ALL_ACTIONS);
To do this at the session level, you would do the following, where for example 9 is the SID and 190 is the serial number (check the syntax):
execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(9,190,TRUE);
To turn it off:
execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(9,190,FALSE);
At the database level, you have to be very careful, because this will generate a trace for the entire database and can fill up the diagnostic directory on your Oracle database server. Disclaimer: USE WITH CAUTION.
execute DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits=>TRUE, binds=>TRUE, instance_name=>'<Instance_name>');
execute DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name=>'<instance_name>');
You can leverage v$sqltext_with_newlines, V$SESSION, and v$session_longops. You can google these views and see if they are useful for your requirements.
I'm new to Oracle and I'm going crazy with the following situation. I'm working on an Oracle 11g database, and it often happens that I run a query with SQL Developer and it executes correctly in 5-6 seconds, while at other times the same query takes 300-400 seconds. Are there any tools to debug what is happening when the query takes 300-400 seconds?
Update 1
This is my SQL Developer screenshot; the problem seems to be "direct path read temp".
Update 2
[screenshot: report]
Update 3
[screenshot: report2]
Any suggestions?
Try setting a trace, with USER below being whatever user is experiencing the delay.
As sys:
GRANT ALTER SESSION TO USER;
As the user executing the trace:
ALTER SESSION SET TRACEFILE_IDENTIFIER = "MY_TEST_SESSION";
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
Produce the error/issue, then as the user testing:
ALTER SESSION SET EVENTS '10046 trace name context off';
As SYSTEM, find out where the trace files are kept:
show parameter user_dump_dest;
Go to that directory and look for .trc/.trm files containing MY_TEST_SESSION. For example ORCL_ora_29772_MY_TEST_SESSION.trc.
After that, run TKPROF on those files. On Linux:
tkprof ORCL_ora_29772_MY_TEST_SESSION.trc ORCL_ora_29772_MY_TEST_SESSION.tkprof explain=user/password sys=no
Read the TKPROF output file; it will show you wait times for the given statements.
For more info on TKPROF read this. For more info on enabling/disabling a trace read this.
The best tool is Real-Time SQL Monitoring. It does not require changing code or access to the operating system. The only downside is it requires licensing the Tuning Pack.
Compare this single line of code with the trace steps in the other answer. Also, the output looks much nicer.
select dbms_sqltune.report_sql_monitor(sql_id => 'your sql id', type => 'text') from dual;
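The sql_id placeholder has to be filled in first; one quick way to look it up (assuming you can recognise the statement's text) is v$sql:
-- Find the sql_id of the statement you want to monitor
select sql_id, sql_text from v$sql where sql_text like '%some distinctive fragment%';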
There's almost never a need to use trace in 11g and beyond.
This behaviour can be caused by cardinality feedback bugs/issues in 11gR2. I had a similar issue. You can test whether this is the case by turning the feature off with _optimizer_use_feedback=false.
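Note that hidden parameters must be double-quoted when you set them; for example, at session level:
-- Disable cardinality feedback for the current session only (hidden parameter, hence the quotes)
alter session set "_optimizer_use_feedback" = false;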
Also try applying the latest updates.
We have a WCF application that uses NHibernate to query data from the database. After installing the application in a new test environment, we're facing some performance problems with queries. Our old and new environments use different Oracle servers, but both databases contain the same data.
We have gone through our NHibernate logs and identified the problematic part:
2010-12-02 07:14:22,673 NHibernate.SQL - SELECT this_.CC...
2010-12-02 07:14:22,688 NHibernate.Loader.Loader - processing result set
2010-12-02 07:14:27,235 NHibernate.Loader.Loader - result set row: 0
In this case the query returned one row. But it seems that in our new environment the "processing result set" is taking much longer (5 seconds vs 0.5 seconds) than in our other environment. Is there some way to find out exactly what inside the "processing result set" is taking so long?
Note: executing the exact same query directly against the DB with Toad doesn't reproduce the problem. With Toad, both database servers are equally fast.
We are using DetachedCriteria to create the query and then it is executed like this:
Dim criteria As ICriteria = crit.GetExecutableCriteria(GetSession())
Return New Generic.List(Of T)(criteria.List(Of T))
The version of NHibernate is 2.1.2.4 and we're using ActiveRecord 2.1.0 to create the mappings. Oracle servers are of version 10g.
So in our case we have two environments that run the same version of the application with identical configuration files and query identical databases, but which have different application and Oracle servers. In one environment, querying through NHibernate takes around 5.5 seconds, and in the other, 0.5 seconds. The results are consistent, and the same query has been executed around 50 times against both environments.
Is there something in the Oracle configuration which could cause it to misbehave with NHibernate? And is there a way to get more detailed logging out of NHibernate, so that the exact problem inside the "processing result set" step can be found?
Any advice is greatly appreciated.
We were able to fix our problem by switching the database driver from Microsoft's to Oracle's ODP.NET. Now both servers are equally fast, and even our previously fast server executes the queries much more rapidly. We don't know what setting in our new server made Microsoft's Oracle driver so slow.
And it seems that Microsoft nowadays recommends that everyone use something other than their own Oracle driver: http://blogs.msdn.com/b/adonet/archive/2009/06/15/system-data-oracleclient-update.aspx
Do a SQL trace in both environments by adding these statements to your session:
alter session set timed_statistics=true;
alter session set max_dump_file_size=unlimited;
alter session set events '10046 trace name context forever, level 8';
-- your query goes here
select * from mytable where x= 1;
alter session set events '10046 trace name context off';
Then use TKPROF to examine the trace file (go to the user_dump_dest, usually a directory with udump in the name, and run: tkprof inputtracefilename.trc outputfile.log).
Type tkprof by itself to see the help screen and command options.
Also, check that you are using the same settings in the INIT.ORA in both databases for things like CURSOR_SHARING.
We're looking for a way to log any call to stored procedures in Oracle, and see what parameter values were used for the call.
We're using Oracle 10.2.0.1
We can log SQL statements and see the bound variables, but when we track stored procedures we see bind variables B1, B2, etc. but no values.
We'd like to see the same kind of information we've seen in MS SQL Server Profiler.
Thanks for any help
You could take a look at the DBMS_APPLICATION_INFO package. This allows you to "instrument" your PL/SQL code with whatever information you want - but it does entail adding calls to each procedure to be instrumented.
See also this AskTom thread on using DBMS_APPLICATION_INFO to monitor PL/SQL.
I think you are using the word "log" in a strange manner.
We can log SQL Statements...
Do you really mean to say you can TRACE SQL statements with bind variables? Tony's answer is directed to the ability to LOG what you are doing. This is always superior to tracing, because only you know what is important to you. Perhaps the execution of your process depends heavily on querying a value from a table. Since that value changes and it's not passed in as a parameter, you could lose that information.
But if you actually LOG what you are doing, you can include that value in your Log table and you'll know not only the variables you passed in but that key value as well.
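A minimal sketch of that approach (all names here are hypothetical): a log table plus an autonomous-transaction logger that each procedure calls on entry.
-- Hypothetical log table
CREATE TABLE proc_call_log (
  logged_at TIMESTAMP DEFAULT SYSTIMESTAMP,
  proc_name VARCHAR2(128),
  params    VARCHAR2(4000)
);
-- Logger that commits its row without touching the caller's transaction
CREATE OR REPLACE PROCEDURE log_call (p_proc IN VARCHAR2, p_params IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO proc_call_log (proc_name, params) VALUES (p_proc, p_params);
  COMMIT;
END;
/
-- Inside each instrumented procedure:
-- log_call('MY_PKG.MY_PROC', 'p_id=' || p_id || ', p_status=' || p_status);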
alter system set events '10046 trace name context forever, level 12';
Is that what you were using?
Yes, I think I should have used the term 'trace'
I'll try to describe what we've done:
Using Enterprise Manager (as dbo) we've gone to a session and started a trace:
start trace
Enable wait info, bind info
Run an operation on our application that hits the DB
Finish the trace, run this on the output:
tkprof .prc output2.txt sys=no record=record.txt explain=dbo#DBINST/PW
What we want to see is "these procedures were called with these parameters". What we're getting is:
Begin dbo.UPKG_PACKAGENAME.PROC(:v0, :v1, :v2 ...); End;
/
Begin dbo.UPKG_PACKAGENAME.PROC2(:v0, :v1, :v2 ...); End;
/
...
So we can trace the procedures that were called, but we don't get the actual parameter values, just the :v0, etc.
My understanding is that what we've done is the same as the alter system statement, but please let us know if that's not the case.
Thanks
Are you using 10g? Let's try this:
exec dbms_monitor.session_trace_enable(session_id=>xxx, serial_num=>xx, waits=>true, binds=>true);
You can get session_id (SID) and serial_num (SERIAL#) from v$session.
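For example (the username here is an assumption):
-- Identify the target session before enabling the trace
select sid, serial#, username, module from v$session where username = 'APP_USER';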
I have a stored procedure that consists of a single select query used to insert into another table based on some minor math that is done to the arguments in the procedure. Can I generate the plan used for this query by referencing the procedure somehow, or do I have to copy and paste the query and create bind variables for the input parameters?
Use SQL Trace and TKPROF. For example, open SQL*Plus, and then issue the following code:-
alter session set tracefile_identifier = 'something-unique';
alter session set sql_trace = true;
alter session set events '10046 trace name context forever, level 8';
select 'right-before-my-sp' from dual;
exec your_stored_procedure
alter session set events '10046 trace name context off';
alter session set sql_trace = false;
Once this has been done, go look in your database's UDUMP directory for a TRC file with "something-unique" in the filename. Format this TRC file with TKPROF, and then open the formatted file and search for the string "right-before-my-sp". The SQL command issued by your stored procedure should be shortly after this section, and immediately under that SQL statement will be the plan for the SQL statement.
Edit: For the purposes of full disclosure, I should thank all those who gave me answers on this thread last week that helped me learn how to do this.
From what I understand, this was done on purpose. The idea is that individual queries within the procedure are considered separately by the optimizer, so EXPLAIN PLAN doesn't make sense against a stored proc, which could contain multiple queries/statements.
The current answer is NO, you can't run it against a proc, and you must run it against the individual statements themselves. Tricky when you have variables and calculations, but that's the way it is.
Many tools, such as Toad or SQL Developer, will prompt you for the bind variable values when you execute an explain plan. You would have to do so manually in SQL*Plus or other tools.
You could also turn on SQL tracing and execute the stored procedure, then retrieve the explain plan from the trace file.
Be careful that you do not just retrieve the explain plan for the SELECT statement. The presence of the INSERT clause can change the optimizer goal from first rows to all rows.
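For example, a sketch with hypothetical table names that explains the whole INSERT rather than just its SELECT, with binds standing in for the procedure's parameters:
-- EXPLAIN PLAN accepts bind placeholders without values
explain plan for
insert into target_table (id, val)
select id, val * :p_factor from source_table where id > :p_min_id;
select * from table(dbms_xplan.display);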