I have a script that I use to load my data into my tables in Oracle (via a list of insert statements). How can I get the execution time of the entire loading process? I have tried set timing on, but that gives me a duration for each insert statement rather than for the entire process. The script is shown below:
spo load.log
prompt '**** load data ****'
set termout off
@@inserts.sql
commit;
set termout on
prompt '**** done ****'
spo off
exit;
Not sure why everybody is making it so complex. Simple as:
SQL> set timing on
SQL> select 1 from dual;
1
----------
1
1 row selected.
Elapsed: 00:00:00.00
SQL>
It's an old question, but I have found an easy way to measure the running time of a script in SQL*Plus. You just have to add this at the beginning:
timing start timing_name
And this at the end of the script:
timing stop
More information about this command can be found in Oracle's SQL*Plus® User's Guide and Reference: Collecting Timing Statistics.
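Applied to the loading script from the question, it might look roughly like this (a sketch only, keeping the original file names):
spo load.log
prompt '**** load data ****'
timing start load_all
set termout off
@@inserts.sql
commit;
set termout on
timing stop
prompt '**** done ****'
spo off
exit;
The elapsed time reported by timing stop covers everything between the two timing commands, i.e. the whole load plus the commit.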
Try this: add the following at the beginning so it remembers the current time:
set serveroutput on
variable n number
exec :n := dbms_utility.get_time
Add this at the end and it calculates the time elapsed:
exec :n := (dbms_utility.get_time - :n)/100
exec dbms_output.put_line(:n)
If you are on Unix, you can also do it like this:
time sqlplus @inserts.sql
It will print:
real 0m9.34s
user 0m2.03s
sys 0m1.02s
The first line gives the total execution time.
What you're describing is essentially a way to audit the script's execution. Whether it's elapsed time or specific start and end times you're capturing, you want to log them properly to see if things went well (or if not, why not).
Here's a template similar to what we use for capturing and logging all database activity we are implementing. We use it via sqlplus.exe for all DDL updates (e.g. CREATE TABLE) and for inserts into setup tables.
--Beginning of all SQL scripts:
set serveroutput on feedback on echo on verify on sqlblanklines on timing on define on
col time new_v v_time
col name new_v v_name
col user new_v v_user
select name, USER, to_char(sysdate, 'YYYYMMDD-HH24MISS') time from v$database;
--Creates a new log file every time the script is run, and it's immediately
--obvious when it was run, which environment it ran in, and who ran it.
spool &v_time._&v_name._&v_user..log
--Run the select again so it appears in the log file itself
select name, USER, to_char(sysdate, 'YYYYMMDD-HH24MISS') time from v$database;
--Place the body of your work here.
--End of all SQL scripts:
select name, USER, to_char(sysdate, 'YYYYMMDD-HH24MISS') time from v$database;
spool off
Related
How do you create an Oracle Automatic Workload Repository (AWR) report?
To generate an AWR report, follow the steps below:
Take the begin snap ID:
set serveroutput on;
DECLARE
v_snap_id number ;
begin
v_snap_id := DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
dbms_output.put_line(v_snap_id);
end;
/
Run your batch or the program you want to monitor.
Take the end snap ID:
set serveroutput on;
DECLARE
v_snap_id number ;
begin
v_snap_id := DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
dbms_output.put_line(v_snap_id);
end;
/
Go to the Oracle directory, e.g. in my case:
cd C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin
Go to the SQL*Plus prompt:
sqlplus dbusername/dbpassword@host:port/dbenv
Run the @awrrpt command.
It will ask for the format of the report; the default is html.
Provide the number of days if you don't remember your snap IDs.
Enter the begin snap ID.
Enter the end snap ID.
Give the report a name and press Enter.
Your report will be generated in the "admin" directory, e.g. in my case:
C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin
Use SQL*Plus to log into Oracle as a DBA user, run the report SQL, and answer the questions prompted by the report to narrow down the time period:
sqlplus / as sysdba
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
The script will ask you some questions so you get a report for the time period you are interested in.
You can use dbms_workload_repository package without the need to log into the server itself.
For a text report, use e.g.:
select output
from table(dbms_workload_repository.awr_report_text(1557521192, 1, 5390, 5392));
Or to get an HTML report, use awr_report_html() instead.
The first parameter is the DBID, which can be obtained using:
select dbid from v$database
The second one is the instance number. Only relevant for a RAC environment.
And the last two parameters are the IDs of the start and end snapshot. The available snapshots can be obtained using:
select snap_id,
       begin_interval_time,
       end_interval_time
from dba_hist_snapshot
order by begin_interval_time desc;
Especially for the HTML variant, which returns a CLOB, you must configure your SQL client to display the output properly. In SQL*Plus you would use SET LONG.
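For example, a sketch that spools the HTML variant straight to a file (reusing the DBID and snapshot IDs from the example above; replace them with your own) might look like this:
set long 1000000
set pagesize 0
set linesize 1500
set trimspool on
spool awr_report.html
select output
from table(dbms_workload_repository.awr_report_html(1557521192, 1, 5390, 5392));
spool off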
conn / as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
Specify the Report Type
AWR reports can be generated in the following formats. Please enter the
name of the format at the prompt. Default value is 'html'.
'html' HTML format (default)
'text' Text format
'active-html' Includes Performance Hub active report
Enter value for report_type:
old 1: select 'Type Specified: ',lower(nvl('&&report_type','html')) report_type from dual
new 1: select 'Type Specified: ',lower(nvl('','html')) report_type from dual
Type Specified: html
old 1: select '&&report_type' report_type_def from dual
new 1: select 'html' report_type_def from dual
old 1: select '&&view_loc' view_loc_def from dual
new 1: select 'AWR_PDB' view_loc_def from dual
Current Instance
You can also schedule the report to be sent by email alert.
I am fetching data from a cursor like this:
variable rc refcursor
spool file.txt
set timing on
exec :rc := someoraclepackage.somefunction(some,parameters,here)
set timing off
print rc
spool off
But I've identified a problem with TIMING. Some packages give me the real time of their execution, but others do not: they show a time like 120 milliseconds, while the real processing time of the package is, for example, 10 minutes.
I need the exact time of the package processing for performance testing.
All I have heard about as a solution to this problem is:
Inserting the data from the cursor into a temporary table (on the database server side) to measure the real package runtime.
Running a select * on the temporary table to fetch the data (on the Jenkins side).
But I have no idea where to start with putting the data from the cursor into a temporary table, and I don't know whether this is the real solution to the problem...
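The milliseconds most likely come from the fact that the exec only opens the cursor; the rows are not fetched until PRINT runs, which is outside the timed section. A rough sketch of the temporary-table idea, run entirely server-side, could look like the block below. Everything in it is hypothetical: the record shape (id, name) and the staging table rc_results have to be adapted to whatever someoraclepackage.somefunction actually returns.
set serveroutput on
set timing on
declare
  rc     sys_refcursor;
  -- assumed shape of the cursor rows; adjust to the real column list
  type t_row is record (id number, name varchar2(4000));
  type t_tab is table of t_row;
  l_rows t_tab;
begin
  rc := someoraclepackage.somefunction(some, parameters, here);
  loop
    fetch rc bulk collect into l_rows limit 1000;
    for i in 1 .. l_rows.count loop
      -- rc_results is a hypothetical staging table
      insert into rc_results (id, name) values (l_rows(i).id, l_rows(i).name);
    end loop;
    exit when rc%notfound;
  end loop;
  close rc;
  commit;
end;
/
set timing off
The timing now includes the full fetch, and Jenkins can read the staging table afterwards with a plain select *.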
What is the best way to find out how much time an Oracle select statement takes? I have the following query for which I want to measure the time; however, since this query returns four thousand records and it takes time to display those records on the screen, the elapsed time reported might not be correct.
Is there a way I can wrap this in a cursor and then run it from SQL*Plus so that I get the correct execution time?
SELECT a.code, NVL(a.org, ' '), NVL(a.office_number, ' '), SUBSTR(a.code, 0, 2)
FROM PARTICIPANT a WHERE a.type_code = 'PRIME';
In SQL*Plus you can also use the simple TIMING option:
SQL> SET TIMING ON
SQL> SELECT bla FROM bla...
...
Elapsed: 00:00:00.01
SQL> SELECT bar FROM foo...
...
Elapsed: 00:00:23.41
SQL> SET TIMING OFF
This will report timing information for each statement individually.
Another option is to set up individual timers:
SQL> TIMING START mytimer
SQL> ... run all my scripts ...
SQL> TIMING STOP
timing for: mytimer
Elapsed: 00:00:08.32
You can even nest these individual timers - the TIMING STOP pops the most recent timer off a stack.
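For instance, a quick sketch of nesting (script names are made up); the first TIMING STOP reports the inner timer and the second one reports the outer timer:
SQL> TIMING START whole_run
SQL> TIMING START first_part
SQL> @first_script.sql
SQL> TIMING STOP
SQL> @second_script.sql
SQL> TIMING STOP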
There are a couple of ways I can think of.
I normally do this sort of thing by running it into a table with CREATE TABLE AS SELECT...., which means I often litter my schema with many tables named MIKE_TEMP_1.
The other option in SQL*Plus is to use SET AUTOTRACE TRACEONLY, which should run the whole query but suppress the printing of the results.
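Sketched against the query from the question (table name MIKE_TEMP_1 borrowed from above; column aliases added because CTAS requires them), the two approaches would look roughly like this:
--CTAS: materialises the full result set without displaying it
create table mike_temp_1 as
SELECT a.code, NVL(a.org, ' ') org, NVL(a.office_number, ' ') office_number,
       SUBSTR(a.code, 0, 2) code_prefix
FROM PARTICIPANT a WHERE a.type_code = 'PRIME';
drop table mike_temp_1 purge;
--AUTOTRACE: runs the query and fetches every row, but suppresses the output
set timing on
set autotrace traceonly
SELECT a.code, NVL(a.org, ' '), NVL(a.office_number, ' '), SUBSTR(a.code, 0, 2)
FROM PARTICIPANT a WHERE a.type_code = 'PRIME';
set autotrace off
set timing off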
Options that spring to mind:
a) use an outer select, which may not be entirely accurate if the optimizer mangles it but can give a good idea:
SELECT COUNT(*) from (
SELECT a.code, NVL(a.org, ' '), NVL(a.office_number, ' '), SUBSTR(a.code, 0, 2)
FROM PARTICIPANT a WHERE a.type_code = 'PRIME'
);
b) put it in a script, run it from the command line and redirect the output to a file.
c) turn spool on and termout off (not sure about that one).
d) set autotrace traceonly (which @MikeyByCrikey beat me to).
You can go to V$SQL, where you have the following columns:
APPLICATION_WAIT_TIME
CONCURRENCY_WAIT_TIME
CLUSTER_WAIT_TIME
USER_IO_WAIT_TIME
PLSQL_EXEC_TIME
CPU_TIME
ELAPSED_TIME
but they are an aggregate for all executions of that SQL. You can do a before/after snapshot and work out the difference if no-one else is running the SQL.
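A rough before/after check is to run something like the query below immediately before and after your statement and subtract the values (the SQL_TEXT filter is only an illustration; matching on SQL_ID is more reliable):
select sql_id, executions, elapsed_time, cpu_time, user_io_wait_time
from v$sql
where sql_text like 'SELECT a.code, NVL(a.org%';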
Just do not display query results
SET TERMOUT OFF
I would like to get the query execution time in Oracle. I don't want the time Oracle needs to print the results - just the execution time.
In MySQL it is easy to get the execution time from the shell.
How can I do this in SQL*Plus?
One can issue the SQL*Plus command SET TIMING ON to get wall-clock times, but one can't take, for example, fetch time out of that trivially.
The AUTOTRACE setting, when used as SET AUTOTRACE TRACEONLY will suppress output, but still perform all of the work to satisfy the query and send the results back to SQL*Plus, which will suppress it.
Lastly, one can trace the SQL*Plus session, and manually calculate the time spent waiting on events which are client waits, such as "SQL*Net message to client", "SQL*Net message from client".
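One way to produce such a trace is sketched below; the resulting trace file then has to be analysed (for example with tkprof) to separate server work from the client wait events:
--enable extended SQL trace (including wait events) for the current session
exec dbms_monitor.session_trace_enable(waits => true, binds => false)
--run the query of interest here
exec dbms_monitor.session_trace_disable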
Use:
set serveroutput on
variable n number
exec :n := dbms_utility.get_time;
select ......
exec dbms_output.put_line( ((dbms_utility.get_time - :n)/100) || ' seconds....' );
Or possibly:
SET TIMING ON;
-- do stuff
SET TIMING OFF;
...to get the hundredths of seconds that elapsed.
In either case, time elapsed can be impacted by server load/etc.
Reference:
ASKTOM - SET TIMING ON/OFF
select LAST_LOAD_TIME, ELAPSED_TIME, MODULE, SQL_TEXT
from v$sql
order by LAST_LOAD_TIME desc;
A more complicated example (don't forget to delete or substitute PATTERN):
select * from (
select LAST_LOAD_TIME, to_char(ELAPSED_TIME/1000, '999,999,999.000') || ' ms' as TIME,
MODULE, SQL_TEXT from SYS."V_$SQL"
where SQL_TEXT like '%PATTERN%'
order by LAST_LOAD_TIME desc
) where ROWNUM <= 5;
I'd recommend looking at consistent gets/logical reads as a better proxy for 'work' than run time. The run time can be skewed by what else is happening on the database server, how much stuff is in the cache etc.
But if you REALLY want SQL executing time, the V$SQL view has both CPU_TIME and ELAPSED_TIME.
set timing on
spool /home/sss/somefile.txt
set termout off
select ...
set termout on
set timing off
spool off
Save it as script.sql and run this in a terminal:
sqlcl user@host/dbname @/path/to/your/script.sql
How do I import a script with 3954275 lines of insert statements into Oracle 10g? I can do it with sqlplus user/pass @script.sql, but this is damn slow (even worse, the commit is at the end of this 900MB file, and I don't know if my Oracle configuration can handle that). Is there a better (faster) way to import the data?
By the way, the DB is empty before the import.
Use SQL*Loader.
It can parse even your INSERT commands if you don't have your data in another format.
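If the data can be exported as delimited text instead of INSERT statements, a minimal control file sketch might look like this (file names, table name and columns are made up for illustration):
--load.ctl (hypothetical)
LOAD DATA
INFILE 'mydata.csv'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, name)
Run it with something like sqlldr user/password control=load.ctl log=load.log direct=true; a direct path load is typically much faster than running the inserts one by one.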
SQL*Loader is a good alternative if your 900MB file contains insert statements into the same table. It will be cumbersome if it contains numerous tables. It is the fastest option, however.
If for some reason a little improvement is good enough, then make sure your session's CURSOR_SHARING parameter is set to FORCE or SIMILAR. Each insert statement in your file will likely be the same except for the values. If CURSOR_SHARING is set to EXACT, then each of the insert statements needs to be hard parsed because it is unique. FORCE and SIMILAR automatically turn the literals in the VALUES clause into bind variables, removing the need to hard parse over and over again.
You can use the script below to test this:
set echo on
alter system flush shared_pool
/
create table t
( id int
, name varchar2(30)
)
/
set echo off
set feedback off
set heading off
set termout off
spool sof11.txt
prompt begin
select 'insert into t (id,name) values (' || to_char(level) || ', ''name' || to_char(level) || ''');'
from dual
connect by level <= 10000
/
prompt end;;
prompt /
spool off
set termout on
set heading on
set feedback on
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = force
/
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = exact
/
set echo off
drop table t purge
/
The example executes 10,000 statements like "insert into t (id,name) values (1, 'name1');". The output on my laptop:
SQL> alter system flush shared_pool
2 /
System altered.
SQL> create table t
2 ( id int
3 , name varchar2(30)
4 )
5 /
Table created.
SQL> set echo off
PL/SQL procedure successfully completed.
Elapsed: 00:00:17.10
Session altered.
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.50
More than 3 times as fast with CURSOR_SHARING set to FORCE.
Hope this helps.
Regards,
Rob.
Agreed with the above: use SQL*Loader.
However, if that is not an option, you can adjust the size of the blocks that SQL*Plus brings in by putting the statement
SET arraysize 1000;
at the beginning of your script. This is just an example from my own scripts, and you may have to fine tune it to your needs considering latency, etc. I think it defaults to like 15, so you're getting a lot of overhead in your script.