CTE With Insert In Oracle - performance

I am running a query in Oracle that uses a CTE.
The query works fine as a plain SELECT, but as soon as I wrap it in an INSERT it takes a very long time to execute. Any help? Here is the code:
INSERT INTO port_weeklydailypricesTest (co_code, start_dtm, end_dtm)
SELECT * FROM
(
    WITH CTE(co_code, start_dtm, end_dtm) AS
    (
        SELECT co_code,
               CAST(NEXT_DAY(MIN(dlyprice_date),'FRIDAY') - 6 AS DATE) start_dtm,
               CAST(NEXT_DAY(MIN(dlyprice_date),'FRIDAY') AS DATE) end_dtm
        FROM feed_dlyprice
        GROUP BY co_code
        UNION ALL
        SELECT co_code,
               CAST(TO_CHAR(end_dtm + INTERVAL '1' DAY,'DD-MON-YYYY') AS DATE),
               CAST(TO_CHAR(end_dtm + INTERVAL '7' DAY,'DD-MON-YYYY') AS DATE)
        FROM CTE
        WHERE CAST(end_dtm AS DATE) <= TO_CHAR(TO_DATE(SYSDATE+1,'DD-MON-YYYY'))
    )
    SELECT co_code, start_dtm, end_dtm
    FROM CTE
);

If, as you say, the performance of the SELECT on its own is satisfactory, the problem must lie with the INSERT part of the statement.
There are a number of things which might cause an insert to run slowly:
The most likely is the presence of a trigger on the target table which executes something very expensive.
Another possibility is that the insert is waiting on a locked resource: say some other process has an exclusive table-level lock on the target table, or on some other shared resource such as a code-control table.
It could be a storage allocation issue: chaining or row migration, too many indexes, or lots of derived columns.
Perhaps it is down to hardware: an underpowered network, dodgy interconnects, a bad disk.
This is by no means exhaustive. The items at the top are application issues which you should be able to investigate and resolve yourself. The further down the list you go, the more likely it is that you will need the assistance of an on-site DBA.
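To rule out the first two causes, the data dictionary is your friend. A quick sketch, assuming you have access to V$SESSION (the table name is taken from the question, upper-cased as Oracle stores it):

-- any triggers on the target table?
select trigger_name, triggering_event, status
from user_triggers
where table_name = 'PORT_WEEKLYDAILYPRICESTEST';

-- is this session currently blocked by another session's lock?
select blocking_session, event, seconds_in_wait
from v$session
where sid = sys_context('USERENV', 'SID');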

Related

How can you check query performance with small data set

All you Oracle experts out there,
I have an Oracle PL/SQL procedure, but only a very small data set for the query to run against. I suspect that when the data gets large, the query might start performing badly. Are there ways I can check the performance and take corrective measures even before the data builds up? If I wait for the data build-up, it might be too late.
Do you have any general and practical suggestions for me? Searching the internet did not get me anything convincing.
Better to build yourself some test data to get an idea of how things will perform. It's easy to get started, e.g.
create table MY_TEST as select * from all_objects;
gives you approx 50,000 rows typically. You can scale that easily with
create table MY_TEST as select a.* from all_objects a ,
( select 1 from dual connect by level <= 10);
Now you have 500,000 rows
create table MY_TEST as select a.* from all_objects a ,
( select 1 from dual connect by level <= 10000);
Now you have 500,000,000 rows!
If you want unique values per row, then add rownum, e.g.
create table MY_TEST as select rownum r, a.* from all_objects a ,
( select 1 from dual connect by level <= 10000);
If you want (say) 100,000 distinct values in a column, then use TRUNC or MOD on rownum. You can also use DBMS_RANDOM to generate random numbers, strings, etc.; see the sketch below.
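For example, a sketch combining both (the grp, amt, and label column names are made up for illustration):
create table MY_TEST as
select mod(rownum, 100000) grp,            -- roughly 100,000 distinct values
       dbms_random.value(1, 1000) amt,     -- random number between 1 and 1000
       dbms_random.string('x', 20) label,  -- random 20-character alphanumeric string
       a.*
from all_objects a,
     ( select 1 from dual connect by level <= 10 );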
Also check out Morten's test data generator
https://github.com/morten-egan/testdata_ninja
for some domain-specific data, and also the Oracle sample schemas on GitHub, which can be scaled using the techniques above.
https://github.com/oracle/db-sample-schemas

How to get the select statement which was used to create a table in Oracle

I created a table in Oracle like this:
CREATE TABLE suppliers AS (SELECT * FROM companies WHERE id > 1000);
I would like to know the complete select statement which was used to create this table.
I have already tried get_ddl, but it does not give the select statement. Can you please let me know how to get it?
If you're lucky, one of these statements will show the DDL used to generate the table:
select *
from gv$sql
where lower(sql_fulltext) like '%create table suppliers%';
select *
from dba_hist_sqltext
where lower(sql_text) like '%create table%';
I used the word lucky because GV$SQL will usually only have results for a few hours or days, until the data is purged from the shared pool. DBA_HIST_SQLTEXT will only help if you have AWR enabled, the statement was run in the last X days that AWR is configured to hold data (the default is 8), the statement was run after the last snapshot collection (by default it happens every hour), and the statement ran long enough for AWR to think it's worth saving.
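If you're not sure how AWR is configured on your system, you can check the snapshot interval and retention; a quick sketch (requires access to the DBA_HIST views):

select snap_interval, retention
from dba_hist_wr_control;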
Also, Oracle does not always store the full SQL. For security reasons, DDL statements are often truncated in the data dictionary, so don't be surprised if the text suddenly cuts off after the first N characters.
And depending on how the SQL was called, the case and whitespace may differ. Use LOWER and lots of wildcards to increase the chance of finding the statement.
Try this:
select distinct table_name
from all_tab_columns
where column_name in
(
    select column_name
    from all_tab_columns
    where table_name = 'SUPPLIERS'
);
This lists the tables that share column names with SUPPLIERS, which may help you find the table it was created from.

Stored procedure for Select and Inner select query in Oracle

I have a query like this:
select PROMOTER_DSMID,
PROMOTER_NAME,
PROMOTER_MSISDN,
RETAILER_DSMID,
RETAILER_MSISDN,
RETAILER_NAME ,
ATTENDANCE_FLAG,
ATTENDANCE_DATE
from PROMO_ATTENDANCE_DETAILS
where PROMOTER_DSMID not in
(SELECT PROMOTER_DSMID
FROM PROMO_ATTENDANCE_DETAILS
WHERE PROMOTERS_ASM_DSMID='ASM123'
AND ATTENDANCE_FLAG='TRUE'
AND TRUNC(ATTENDANCE_DATE) ='16-07-17')
and PROMOTERS_ASM_DSMID='ASM123'
AND ATTENDANCE_FLAG='FALSE'
AND TRUNC(ATTENDANCE_DATE) ='16-07-17';
This query takes too much time when I run it against the PROD database because of the large number of records.
I need to write a procedure for this, but I am not able to work out the correct approach. Could somebody please guide me?
"was thinking to write a proc in which inner select statement can put the data in some temporary table and then from that temporary table I can run the outer select statement"
No need for that. Use a WITH clause to select the data once and use it twice.
with cte as (
    select PROMOTER_DSMID,
           PROMOTER_NAME,
           PROMOTER_MSISDN,
           RETAILER_DSMID,
           RETAILER_MSISDN,
           RETAILER_NAME,
           ATTENDANCE_FLAG,
           ATTENDANCE_DATE
    from PROMO_ATTENDANCE_DETAILS
    where PROMOTERS_ASM_DSMID = 'ASM123'
    and TRUNC(ATTENDANCE_DATE) = '16-07-17'
)
select *
from cte
where ATTENDANCE_FLAG = 'FALSE'
and PROMOTER_DSMID not in
    ( select PROMOTER_DSMID
      from cte
      where ATTENDANCE_FLAG = 'TRUE' );
This will perform better than a temporary table, which involves a lot of disk I/O.
There are other possible performance improvements, depending on the usual tuning considerations: data volume and skew, indexes, and so on.
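For instance, if the predicate stays written as TRUNC(ATTENDANCE_DATE), a function-based index matching the filter might help; a sketch only, with a made-up index name:

create index promo_att_asm_dt_ix
    on PROMO_ATTENDANCE_DETAILS (PROMOTERS_ASM_DSMID, TRUNC(ATTENDANCE_DATE));

Whether the optimizer actually uses it depends on the selectivity of the ASM/date combinations in your data.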

Optimize Oracle 11g Procedure

I have a procedure to find the first, last, max and min prices for a series of transactions in a very large table which is organized by date, object name, and a code. I also need the sum of quantities transacted. There are about 3 billion rows in the table and this procedure takes many days to run. I would like to cut that time down as much as possible. I have an index on the distinct fields in the trans table, and looking at the explain plan on the select portion of the queries, the index is being used. I am open to suggestions on an alternate approach. I use Oracle 11g R2. Thank you.
declare
    cursor c_iter is select distinct dt, obj, cd from trans;
    r_iter c_iter%ROWTYPE;
    v_fir  number(15,8);
    v_las  number(15,8);
    v_max  number(15,8);
    v_min  number(15,8);
    v_tot  number;
begin
    open c_iter;
    loop
        fetch c_iter into r_iter;
        exit when c_iter%NOTFOUND;

        select max(fir), max(las) into v_fir, v_las
        from ( select first_value(prc) over (order by seq)      as "FIR",
                      first_value(prc) over (order by seq desc) as "LAS"
               from trans
               where dt = r_iter.DT and obj = r_iter.OBJ and cd = r_iter.CD );

        select max(prc), min(prc), sum(qty) into v_max, v_min, v_tot
        from trans
        where dt = r_iter.DT and obj = r_iter.OBJ and cd = r_iter.CD;

        insert into stats (obj, dt, cd, fir, las, max, min, tot)
        values (r_iter.OBJ, r_iter.DT, r_iter.CD, v_fir, v_las, v_max, v_min, v_tot);
        commit;
    end loop;
    close c_iter;
end;
alter session enable parallel dml;

insert /*+ append parallel(stats) */
into stats (obj, dt, cd, fir, las, max, min, tot)
select /*+ parallel(trans) */
       obj, dt, cd,
       max(prc) keep (dense_rank first order by seq)      fir,
       max(prc) keep (dense_rank first order by seq desc) las,
       max(prc) max, min(prc) min, sum(qty) tot
from trans
group by obj, dt, cd;
commit;
A single SQL statement is usually significantly faster than multiple SQL statements. They sometimes require more resources, like more temporary tablespace, but your distinct cursor is probably already sorting the entire table on disk anyway.
You may want to also enable parallel DML and parallel query, although depending on your object and system settings this may already be happening. (And it may not necessarily be a good thing, depending on your resources, but it usually helps large queries.)
Parallel write and APPEND should improve performance if the SQL writes a lot of data, but it also means that the newly inserted data will not be recoverable until the next backup. (Parallel DML will automatically use direct-path writes, but I usually include APPEND anyway in case the parallelism doesn't work correctly.)
There's a lot to consider, even for such a small query, but this is where I'd start.
Not the solid answer I'd like to give, but a few things to consider:
The first would be using a bulk collect. However, since you're using 11g, hopefully this is already being done for you automatically.
Do you really need to commit after every single iteration? I could be wrong, but I'm guessing this is one of your top time consumers. (A sketch addressing both points follows below.)
Finally, +1 for jonearles' answer. (I wasn't sure if I'd be able to write everything into a single SQL query, but I was going to suggest this as well.)
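A minimal sketch of those two points, reusing the cursor from the question (the batch size of 100 is arbitrary):

declare
    cursor c_iter is select distinct dt, obj, cd from trans;
    type t_iter_tab is table of c_iter%rowtype;
    l_batch t_iter_tab;
begin
    open c_iter;
    loop
        fetch c_iter bulk collect into l_batch limit 100;  -- array fetch instead of row-by-row
        exit when l_batch.count = 0;
        for i in 1 .. l_batch.count loop
            null;  -- the per-key selects and insert from the question would go here
        end loop;
    end loop;
    close c_iter;
    commit;  -- one commit at the end, not one per iteration
end;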
You could try to make the query run in parallel; there is a reasonable Oracle white paper on this here. This isn't an Oracle feature that I've ever had to use myself, so I've no first-hand experience to pass on. You will also need enough free resources on the Oracle server to run the parallel processes this will create.

Oracle command hangs when using view for "WHERE x IN..." subquery

I'm working on a web service that fetches data from an Oracle data source in chunks and passes it back to an indexing/search tool in XML format. I'm the C#/.NET guy, and I'm kind of fuzzy on parts of Oracle.
Our Oracle team gave us the following script to run, and it works well:
SELECT ROWID, [columns]
FROM [table]
WHERE ROWID IN (
    SELECT ROWID
    FROM (
        SELECT ROWID
        FROM [table]
        WHERE ROWID > '[previous_batch_last_rowid]'
        ORDER BY ROWID
    )
    WHERE ROWNUM <= 10000
)
ORDER BY ROWID
10,000 rows is an arbitrary but reasonable chunk size and ROWID is sufficiently unique for our purposes to use as a UID since each indexing run hits only one table at a time. Bracketed values are filled in programmatically by the web service.
Now we're going to start adding views to the indexing, each of which will union a few separate tables. Since ROWID would no longer function as a unique identifier, they added a column to the views (VIEW_UNIQUE_ID) that concatenates the ROWIDs from the component tables to construct a UID for each union.
But this script does not work, even though it follows the same form as the previous one:
SELECT VIEW_UNIQUE_ID, [columns]
FROM [view]
WHERE VIEW_UNIQUE_ID IN (
    SELECT VIEW_UNIQUE_ID
    FROM (
        SELECT VIEW_UNIQUE_ID
        FROM [view]
        WHERE VIEW_UNIQUE_ID > '[previous_batch_last_view_unique_id]'
        ORDER BY VIEW_UNIQUE_ID
    )
    WHERE ROWNUM <= 10000
)
ORDER BY VIEW_UNIQUE_ID
It hangs indefinitely with no response from the Oracle server. I've waited 20+ minutes and the SQLTools dialog box indicating a running query remains the same, with no progress or updates.
I've tested each subquery independently and each works fine and takes a very short amount of time (<= 1 second), so the view itself is sound. But as soon as the inner two SELECT queries are added with "WHERE VIEW_UNIQUE_ID IN...", it hangs.
Why doesn't this query work for views? In what important way are they not interchangeable here?
Updated: the architecture of the solution stipulates that it is to be stateless, so I shouldn't try to make the web service preserve any index state information between requests from consumers.
"they added a column to the views (VIEW_UNIQUE_ID) that concatenates the ROWIDs from the component tables to construct a UID for each union."
God, that is the most obscene idea I've seen in a long time.
Let's say the view is a simple one like
SELECT C.CUST_ID, C.CUST_NAME, O.ORDER_ID, C.ROWID||':'||O.ROWID VIEW_UNIQUE_ID
FROM CUSTOMER C JOIN ORDERS O ON C.CUST_ID = O.CUST_ID
Every time you run
SELECT VIEW_UNIQUE_ID
FROM [view]
WHERE VIEW_UNIQUE_ID > '[previous_batch_last_view_unique_id]'
ORDER BY VIEW_UNIQUE_ID
Oracle has to build that entire result set, apply the filter, and order it. For anything other than trivially sized tables, that will be a nightmare.
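You can see this for yourself in the execution plan; a quick sketch, keeping the question's [view] placeholder and using a bind variable for the last key:

EXPLAIN PLAN FOR
SELECT VIEW_UNIQUE_ID
FROM [view]
WHERE VIEW_UNIQUE_ID > :previous_batch_last_view_unique_id
ORDER BY VIEW_UNIQUE_ID;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Expect to see every table underneath the view scanned and the whole result sorted, which is exactly the work described above.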
Stop using the database to paginate/chunk the data here and do that in the client. Open the database connection, execute the query, fetch the first ten thousand rows, index them, then fetch the next ten thousand. Don't close and re-execute the query each time; only close it after you've processed every row. You'll be able to forget about ordering.
For stateless, you need to re-architect. The whole thing with concatenated ROWIDs will not fly.
Start by putting the records to be processed into a fresh table, then you can flag them/process them/delete them in chunks.
INSERT INTO pending_table
SELECT 'N' state_flag, v.* FROM view v;
<start looping here>
UPDATE pending_table
SET state_flag = 'P'
WHERE ROWNUM < 10000;
COMMIT;
SELECT * FROM pending_table
WHERE state_flag = 'P';
<client processing>
DELETE FROM pending_table
WHERE state_flag = 'P';
<go back to start of loop, and keep going until pending_table is empty>
