What is the best way to find out how much time an Oracle SELECT statement takes? I have the following query for which I want to measure the time; however, since it returns about four thousand records and displaying those records on screen takes time, the elapsed time reported might not be accurate.
Is there a way I can wrap this in a cursor and then run it from SQL*Plus so that I get the correct execution time?
SELECT a.code, NVL(a.org, ' '), NVL(a.office_number, ' '), SUBSTR(a.code, 0, 2)
FROM PARTICIPANT a WHERE a.type_code = 'PRIME';
In SQL*Plus you can also use the simple TIMING option:
SQL> SET TIMING ON
SQL> SELECT bla FROM bla...
...
Elapsed: 00:00:00.01
SQL> SELECT bar FROM foo...
...
Elapsed: 00:00:23.41
SQL> SET TIMING OFF
This will report timing information for each statement individually.
Another option is to set up individual timers:
SQL> TIMING START mytimer
SQL> ... run all my scripts ...
SQL> TIMING STOP
timing for: mytimer
Elapsed: 00:00:08.32
You can even nest these individual timers - the TIMING STOP pops the most recent timer off a stack.
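For instance, a sketch of two nested timers (the script names and elapsed figures here are purely illustrative):
SQL> TIMING START whole_run
SQL> TIMING START step_one
SQL> @step_one.sql
SQL> TIMING STOP
timing for: step_one
Elapsed: 00:00:03.12
SQL> TIMING STOP
timing for: whole_run
Elapsed: 00:00:07.45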
There are a couple of ways I can think of.
I normally do this sort of thing by running it into a table with CREATE TABLE AS SELECT...., which means I often litter my schema with many tables named MIKE_TEMP_1.
The other option in SQL*Plus is SET AUTOTRACE TRACEONLY, which runs the whole query but suppresses the printing of the results.
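A sketch of that approach with the query from the question (note that the AUTOTRACE statistics need the PLUSTRACE role; the row count and timing shown are only illustrative):
SQL> SET TIMING ON
SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT a.code, NVL(a.org, ' '), NVL(a.office_number, ' '), SUBSTR(a.code, 0, 2) FROM PARTICIPANT a WHERE a.type_code = 'PRIME';
4000 rows selected.
Elapsed: 00:00:00.35
SQL> SET AUTOTRACE OFF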
Options that spring to mind:
a) use an outer select, which may not be entirely accurate if the optimizer mangles it but can give a good idea:
SELECT COUNT(*) from (
SELECT a.code, NVL(a.org, ' '), NVL(a.office_number, ' '), SUBSTR(a.code, 0, 2)
FROM PARTICIPANT a WHERE a.type_code = 'PRIME'
);
b) put it in a script, run it from the command line and redirect the output to a file.
c) turn spool on and termout off (not sure about that one).
d) set autotrace traceonly (which @MikeyByCrikey beat me to).
You can go to V$SQL where you have the following columns :
APPLICATION_WAIT_TIME
CONCURRENCY_WAIT_TIME
CLUSTER_WAIT_TIME
USER_IO_WAIT_TIME
PLSQL_EXEC_TIME
CPU_TIME
ELAPSED_TIME
but they are an aggregate for all executions of that SQL. You can do a before/after snapshot and work out the difference if no-one else is running the SQL.
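For example, a sketch of such a snapshot (ELAPSED_TIME and CPU_TIME are in microseconds and cumulative across executions; the LIKE pattern is just an assumption about how the statement text appears in the shared pool):
select executions,
       elapsed_time,
       cpu_time,
       user_io_wait_time,
       round(elapsed_time / nullif(executions, 0)) as avg_elapsed_us
from   v$sql
where  sql_text like 'SELECT a.code, NVL(a.org%';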
Just do not display query results
SET TERMOUT OFF
Parallel hints in normal DML SQL queries in Oracle can be used in the following fashion:
select /*+ PARALLEL (A,2) */ * from table A ;
In a similar fashion, can we use parallel hints in PL/SQL for SELECT INTO statements in Oracle?
select /*+ PARALLEL(A,2) */ A.* BULK COLLECT INTO g_table_a from Table A ;
If I use the above syntax, is there any way to verify whether the select statement is actually executed in parallel?
Edit: assume g_table_a is a PL/SQL collection of the table's %ROWTYPE.
If the statement takes a short elapsed time, you don't want to run it in parallel. Note that a query taking, say, 0.5 seconds in serial execution could take 2.5 seconds in parallel, because most of the overhead is in setting up the parallel execution.
So, if the query takes a long time, you have enough time to check V$SESSION (use gv$session on RAC) and see all sessions for the user running the query.
select * from gv$session where username = 'your_user'
For serial execution you see only one session; for parallel execution you see one coordinator plus additional sessions, up to twice the chosen parallel degree.
Alternatively, use v$px_session, which connects the parallel worker sessions to the query coordinator.
select SID, SERIAL#, DEGREE, REQ_DEGREE
from v$px_session
where qcsid = <SID of the session running the parallel statement>;
Here you also see the requested degree of parallelism and the actual DOP used.
You can easily check this from the explain plan of the query. In the case of PL/SQL you can also trace the procedure and check the TKPROF output.
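For example, a sketch of checking the plan (participant is just borrowed from the earlier question as a stand-in table name):
EXPLAIN PLAN FOR
SELECT /*+ PARALLEL(A,2) */ * FROM participant A;

SELECT * FROM TABLE(dbms_xplan.display);

-- a parallel plan shows operations such as PX COORDINATOR, PX SEND QC (RANDOM)
-- and PX BLOCK ITERATOR; a serial plan has none of the PX steps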
Today I heard that a query with <> will take more time to execute than one with not in.
I tried to test this and, with identical plans, got the following timings:
select * from test_table where test <> 'test'
0.063 seconds
select * from test_table where test not in ('test')
0.073 seconds
So the question is: what is the difference between <> and NOT IN for a single condition, and which is better to use?
Whether or not the column is indexed, I would expect both queries to perform a full scan on the table, i.e. the query plan is essentially the same. The small timing difference you noted is probably insignificant - run the same query more than once and you will get different timings.
Having said that, I would use <> because it is more natural.
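One way to confirm that the plans really are the same is to compare them directly, e.g. (a sketch using the table from the question):
EXPLAIN PLAN FOR SELECT * FROM test_table WHERE test <> 'test';
SELECT * FROM TABLE(dbms_xplan.display);

EXPLAIN PLAN FOR SELECT * FROM test_table WHERE test NOT IN ('test');
SELECT * FROM TABLE(dbms_xplan.display);

-- both should show the same TABLE ACCESS FULL of TEST_TABLE, and the optimizer
-- normally rewrites NOT IN with a single literal to the same "TEST"<>'test' filter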
I have a script that I use to load my data into my tables in Oracle (a list of INSERT statements). How can I get the execution time of the entire loading process? I have tried SET TIMING ON, but that gives me a duration for each insert statement and not for the entire process. The script is shown below:
spo load.log
prompt '**** load data ****'
set termout off
@@inserts.sql
commit;
set termout on
prompt '**** done ****'
spo off
exit;
Not sure why everybody is making it so complex. Simple as:
SQL> set timing on
SQL> select 1 from dual;
1
----------
1
1 row selected.
Elapsed: 00:00:00.00
SQL>
It's an old question, but I have found an easy way to measure the running time of a script in SQL*Plus. You just have to add this at the beginning:
timing start timing_name
And this at the end of the script:
timing stop
More information about this command can be found at Oracle's SQL*Plus® User's Guide and Reference: Collecting Timing Statistics
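Applied to the load script from the question, that would be roughly (a sketch, with load_all as a made-up timer name):
spo load.log
timing start load_all
prompt '**** load data ****'
set termout off
@@inserts.sql
commit;
set termout on
timing stop
prompt '**** done ****'
spo off
exit;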
Try this, add the following at the beginning and it remembers the current time:
set serveroutput on
variable n number
exec :n := dbms_utility.get_time
Add this at the end and it calculates the time elapsed:
exec :n := (dbms_utility.get_time - :n)/100
exec dbms_output.put_line(:n)
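Wired into the load script from the question, that might look like this (a sketch, untested):
set serveroutput on
variable n number
exec :n := dbms_utility.get_time
spo load.log
prompt '**** load data ****'
set termout off
@@inserts.sql
commit;
set termout on
prompt '**** done ****'
exec :n := (dbms_utility.get_time - :n)/100
exec dbms_output.put_line('elapsed: ' || :n || ' seconds')
spo off
exit;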
If you are on Unix, you can also do it like that:
time sqlplus @inserts.sql
It will print:
real 0m9.34s
user 0m2.03s
sys 0m1.02s
The first line gives the total execution time.
What you're describing is essentially a way to audit the script's execution. Whether it's an elapsed time, or specific start and end times you're capturing you want to log them properly to see if things went well (or if not, why not).
Here's a template similar to what we use for capturing and logging all database activity we are implementing. We use it via sqlplus.exe for all DDL updates (e.g. CREATE TABLE) and for inserts into setup tables.
--Beginning of all SQL scripts:
set serveroutput on feedback on echo on verify on sqlblanklines on timing on define on
col time new_v v_time
col name new_v v_name
col user new_v v_user
select name, USER, to_char(sysdate, 'YYYYMMDD-HH24MISS') time from v$database;
--Creates a new log file every time the script is run, and it's immediately
--obvious when it was run, which environment it ran in, and who ran it.
spool &v_time._&v_name._&v_user..log
--Run the select again so it appears in the log file itself
select name, USER, to_char(sysdate, 'YYYYMMDD-HH24MISS') time from v$database;
Place the body of your work here.
--End of all SQL scripts:
select name, USER, to_char(sysdate, 'YYYYMMDD-HH24MISS') time from v$database;
spool off
I would like to get the query execution time in Oracle. I don't want the time Oracle needs to print the results - just the execution time.
In MySQL it is easy to get the execution time from the shell.
How can I do this in SQL*Plus?
One can issue the SQL*Plus command SET TIMING ON to get wall-clock times, but one can't take, for example, fetch time out of that trivially.
The AUTOTRACE setting, when used as SET AUTOTRACE TRACEONLY, will suppress output but still perform all of the work to satisfy the query and send the results back to SQL*Plus, which then discards them.
Lastly, one can trace the SQL*Plus session, and manually calculate the time spent waiting on events which are client waits, such as "SQL*Net message to client", "SQL*Net message from client".
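A sketch of that last approach, assuming you are allowed to trace your own session and are on a release with DBMS_SESSION.SESSION_TRACE_ENABLE (the tracefile identifier and output file names are made up):
alter session set tracefile_identifier = 'timing_test';
exec dbms_session.session_trace_enable(waits => true, binds => false)

-- run the query you want to time here

exec dbms_session.session_trace_disable

-- then format the raw trace file on the server and read the elapsed and wait figures:
-- tkprof <tracefile>.trc timing_test.txt sys=no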
Use:
set serveroutput on
variable n number
exec :n := dbms_utility.get_time;
select ......
exec dbms_output.put_line( (dbms_utility.get_time-:n)/100 || ' seconds....' );
Or possibly:
SET TIMING ON;
-- do stuff
SET TIMING OFF;
...to get the hundredths of seconds that elapsed.
In either case, time elapsed can be impacted by server load/etc.
Reference:
ASKTOM - SET TIMING ON/OFF
select LAST_LOAD_TIME, ELAPSED_TIME, MODULE, SQL_TEXT from v$sql
order by LAST_LOAD_TIME desc
A more complicated example (don't forget to substitute or remove PATTERN):
select * from (
select LAST_LOAD_TIME, to_char(ELAPSED_TIME/1000, '999,999,999.000') || ' ms' as TIME,
MODULE, SQL_TEXT from SYS."V_$SQL"
where SQL_TEXT like '%PATTERN%'
order by LAST_LOAD_TIME desc
) where ROWNUM <= 5;
I'd recommend looking at consistent gets/logical reads as a better proxy for 'work' than run time. The run time can be skewed by what else is happening on the database server, how much stuff is in the cache etc.
But if you REALLY want SQL executing time, the V$SQL view has both CPU_TIME and ELAPSED_TIME.
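A sketch of checking those figures for your own session, before and after running the statement (the difference between the two snapshots is roughly the work the statement did):
select n.name, s.value
from   v$mystat s
       join v$statname n on n.statistic# = s.statistic#
where  n.name in ('consistent gets', 'session logical reads');
Alternatively, SET AUTOTRACE TRACEONLY STATISTICS reports consistent gets per statement without the manual snapshot.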
set timing on
spool /home/sss/somefile.txt
set termout off
select ...
set termout on
set timing off
spool off
Save it to a script.sql, and do this in terminal:
sqlcl user@host/dbname @/path/to/your/script.sql
How do I import a script with 3,954,275 lines of INSERT statements into Oracle 10g? I can do it with sqlplus user/pass @script.sql, but that is damn slow (even worse, the commit is at the end of this 900 MB file; I don't know if my Oracle configuration can handle that). Is there a better (faster) way to import the data?
By the way, the DB is empty before the import.
Use SQL*Loader.
It can parse even your INSERT commands if you don't have your data in another format.
SQL*Loader is a good alternative if your 900MB file contains insert statements to the same table. It will be cumbersome if it contains numerous tables. It is the fastest option however.
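If the data can be exported to a flat file instead of INSERT statements, a minimal sketch might look like this (the file, table and column names are made up, as is the choice of CSV):
-- load_data.ctl
LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, name)
invoked with something like:
sqlldr userid=user/pass control=load_data.ctl log=load_data.log direct=true
DIRECT=TRUE uses a direct-path load, which skips much of the conventional INSERT processing and is usually the fastest option when the table starts out empty.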
If for some reason a little improvement is good enough, then make sure your session's CURSOR_SHARING parameter is set to FORCE or SIMILAR. Each insert statement in your file will likely be the same except for the values. If CURSOR_SHARING is set to EXACT, then each of the insert statements needs to be hard parsed, because each one is unique. FORCE and SIMILAR automatically turn the literals in your VALUES clause into bind variables, removing the need for the hard parse over and over again.
You can use the script below to test this:
set echo on
alter system flush shared_pool
/
create table t
( id int
, name varchar2(30)
)
/
set echo off
set feedback off
set heading off
set termout off
spool sof11.txt
prompt begin
select 'insert into t (id,name) values (' || to_char(level) || ', ''name' || to_char(level) || ''');'
from dual
connect by level <= 10000
/
prompt end;;
prompt /
spool off
set termout on
set heading on
set feedback on
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = force
/
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = exact
/
set echo off
drop table t purge
/
The example executes 10,000 statements like "insert into t (id,name) values (1, 'name1');". The output on my laptop:
SQL> alter system flush shared_pool
2 /
System altered.
SQL> create table t
2 ( id int
3 , name varchar2(30)
4 )
5 /
Table created.
SQL> set echo off
PL/SQL procedure successfully completed.
Elapsed: 00:00:17.10
Session altered.
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.50
More than 3 times as fast with CURSOR_SHARING set to FORCE.
Hope this helps.
Regards,
Rob.
Agreed with the above: use SQL*Loader.
However, if that is not an option, you can adjust the size of the blocks that SQL*Plus brings in by putting the statement
SET arraysize 1000;
at the beginning of your script. This is just an example from my own scripts, and you may have to fine-tune it to your needs considering latency, etc. I think it defaults to 15, so you're getting a lot of overhead in your script.