I am looking to insert blank records into a table to get a final count of 57,816,000 records. I am starting with a table of 550 unique identifiers, one row per distinct TMC name, and I want to add 105,119 records for each distinct value, so each TMC would have 105,120 records. (105,119 * 550 = 57,815,450, plus the original 550 = 57,816,000.)
However, the problem I am having is that TOAD fails with this error:
ORA-03113: end-of-file on communication channel
Process ID: 10272
Session ID: 247 Serial number: 1959
Script will end
It works on a small scale, but I suspect it's not the best way to go. Below is my script. Any advice?
declare
   i number(10) := 1;
begin
   for i in 1..105119
   loop
      insert into npmrds_dummy
      select distinct tmc,
             travel_time_passenger_vehicles,
             travel_time_freight_trucks,
             travel_time_all_vehicles,
             road_number,
             road_name,
             epoch,
             distance,
             date1,
             admin_level_1,
             admin_level_2,
             admin_level_3
      from npmrds_dummy;
   end loop;
end;
You could do this with one SQL statement:
insert into npmrds_dummy
select npmrds_dummy.*
from npmrds_dummy
cross join (select 1 from dual connect by level <= 105119);
The cross join with the subquery is a neat trick to multiply the number of records.
As your records are unique at the start, there is no need for the distinct, and you can just do select npmrds_dummy.*.
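After the insert, a quick sanity check should show the target count from the question (assuming the table really started with exactly 550 unique rows):
select count(*) from npmrds_dummy;
-- expected: 57,816,000 rows (the original 550 plus 550 * 105,119 copies)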
There is a table contact_history with 1,244,000,000 rows (from 04.03.2022 to 05.06.2022) and with fields contact_dt and contact_dttm. I tried to transfer all the data to a test table using contact_dt with this script:
DECLARE
   dat date;
BEGIN
   dat := TO_DATE('04.03.2022', 'dd.mm.yyyy');
   while dat <= TO_DATE('05.06.2022', 'dd.mm.yyyy') loop
      INSERT /*+ append enable_parallel_dml parallel(16) */
        INTO CONTACT_HISTORY_TEST ct
      SELECT -- + parallel(16)
             ch.sas_contact_id,
             ch.contact_source,
             ch.client_id,
             ch.contact_dttm,
             ch.contact_dt,
             ch.sas_contact_error_desc,
             ch.sas_contact_status
        FROM CONTACT_HISTORY ch
       WHERE ch.contact_dt = dat;
      commit;
      dat := dat + 1;
   end loop;
end;
The problem is that SELECT COUNT(*) FROM CONTACT_HISTORY_TEST shows only 1,200,000,000 rows in the test table, while the source table has 1,244,000,000.
And there is another strange thing: when checking
SELECT COUNT(*)
FROM CONTACT_HISTORY
WHERE CONTACT_DT >= TO_DATE('04.03.2021', 'dd.mm.yyyy')
  AND CONTACT_DT <= TO_DATE('05.06.2022', 'dd.mm.yyyy');

SELECT COUNT(*)
FROM CONTACT_HISTORY_TEST
WHERE CONTACT_DT >= TO_DATE('04.03.2021', 'dd.mm.yyyy')
  AND CONTACT_DT <= TO_DATE('05.06.2022', 'dd.mm.yyyy');
both counts are 1,200,000,000 rows. Please tell me where the remaining 44 million rows have gone, and how I can completely transfer the data from the table, or how to do it right?
I presume that the contact_dt column contains date values that have a time component; for example, not just 05.06.2022, but 05.06.2022 13:23:45.
The code you posted handles the start of the period correctly, as 04.03.2022 actually represents 04.03.2022 00:00:00.
However, the last day of the period isn't handled correctly: you're missing (almost) the whole last day, because you copied only rows whose contact_dt equals exactly 05.06.2022 00:00:00. What about, e.g., 05.06.2022 08:32:13?
Therefore, two changes are needed. If the contact_dt column is indexed, you shouldn't TRUNC it (that would prevent the index from being used), so the simplest option is to change this
while dat <= TO_DATE('05.06.2022', 'dd.mm.yyyy') loop
to
while dat < TO_DATE('06.06.2022', 'dd.mm.yyyy') loop
As @APC commented, the WHERE clause should then also be fixed to
where ch.contact_dt >= dat and ch.contact_dt < dat + 1
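Putting both fixes together, the loop from your question would look like this (a sketch, untested against your data):
DECLARE
   dat date;
BEGIN
   dat := TO_DATE('04.03.2022', 'dd.mm.yyyy');
   while dat < TO_DATE('06.06.2022', 'dd.mm.yyyy') loop
      INSERT /*+ append enable_parallel_dml parallel(16) */
        INTO CONTACT_HISTORY_TEST ct
      SELECT ch.sas_contact_id,
             ch.contact_source,
             ch.client_id,
             ch.contact_dttm,
             ch.contact_dt,
             ch.sas_contact_error_desc,
             ch.sas_contact_status
        FROM CONTACT_HISTORY ch
       WHERE ch.contact_dt >= dat
         AND ch.contact_dt < dat + 1;   -- catches all times within the day
      commit;
      dat := dat + 1;
   end loop;
end;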
To verify the number of rows and the date values, run the following code in both schemas and then post the result (by editing the question, not as a comment):
alter session set nls_date_format = 'dd.mm.yyyy hh24:mi:ss';
select min(contact_dt) min_dat, max(contact_dt) max_dat, count(*) cnt
from contact_history;
The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if table t has a row with key X:
    update t set mystuff... where mykey = X
else
    insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source allows us to use this command on a single row. Note that this is not protected against concurrent access.
create or replace procedure ups(xa number)
as
begin
   merge into mergetest m
   using dual
   on (m.a = xa)
   when not matched then insert (a, b) values (xa, 1)
   when matched then update set b = b + 1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
         A          B
---------- ----------
        10          2
        20          1
The DUAL example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client-side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" )
However, from a C# perspective, this proved to be slower than doing the update, checking whether the affected row count was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
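Note that this relies on mykey being covered by a unique constraint or unique index; without one, dup_val_on_index is never raised and the block simply inserts duplicate rows.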
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent accesses, as pointed out in Tim Sylvester's comment, and will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction isolation level SERIALIZABLE (which I don't recommend, by the way), you might instead run into ORA-08177: can't serialize access for this transaction exceptions.
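If you do use SERIALIZABLE anyway, the same retry loop can absorb that error as well; here is a sketch (the exception name is mine, bound to ORA-08177 via PRAGMA EXCEPTION_INIT):
PROCEDURE MyProc (
  ...
) IS
  cannot_serialize EXCEPTION;
  PRAGMA EXCEPTION_INIT(cannot_serialize, -8177);
BEGIN
  LOOP
    BEGIN
      MERGE INTO Employee USING dual ON ( "id"=2097153 )
        WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
        WHEN NOT MATCHED THEN INSERT ("id","last","name")
        VALUES ( 2097153,"smith", "john" );
      EXIT; -- success? -> exit loop
    EXCEPTION
      WHEN cannot_serialize THEN
        ROLLBACK; -- the failed serializable transaction must be rolled back before retrying
    END;
  END LOOP;
END;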
I like Grommit's answer, except it requires duplicated values. I found a solution where each value may appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3);
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert, so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the MERGE command, but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think MERGE is better when you have some processing to do, meaning taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this:
insert into b_building_property
(
  select 'AREA_IN_COMMON_USE_DOUBLE', 'Area in Common Use', 'DOUBLE', null, 9000, 9
  from dual
  minus
  select * from b_building_property where id = 9
);
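Note that MINUS compares entire rows, so the select list built from DUAL must match the table's columns in number, order, and datatype; the new row is inserted only when an identical row does not already exist among the rows with id = 9.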
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;
I have a test script that I'm beginning to play with, and I'm getting stuck on something that seems simple.
I want to fetch only the last row of the result set and use just that one.
procedure e_test_send
is
cursor get_rec is
select
id,
email_from,
email_to,
email_cc,
email_subject,
email_message
from test_email_tab;
begin
for rec_ in get_rec loop
ifsapp.send_email_api.send_html_email(rec_.email_to,rec_.email_from, rec_.email_subject, rec_.email_message);
end loop;
end e_test_send;
All I'm trying to do is send an email, with a message, to the person from the last row only. This is a sample table that will grow in records. At the minute I have 2 rows of data in it; if I execute this procedure, it will send 2 emails, which is not the desired action.
I hope this makes sense.
Thanks
Do you know which row is the last row? The one with the MAX(ID) value? If so, then you could base the cursor on a straightforward
SELECT id,
email_from,
email_to,
email_cc,
email_subject,
email_message
FROM test_email_tab
WHERE id = (SELECT MAX (id) FROM test_email_tab)
As it scans the same table twice, its performance will drop as the number of rows gets higher and higher. In that case, consider
WITH
temp
AS
(SELECT id,
email_from,
email_to,
email_cc,
email_subject,
email_message,
ROW_NUMBER () OVER (ORDER BY id DESC) rn
FROM test_email_tab)
SELECT t.id,
t.email_from,
t.email_to,
t.email_cc,
t.email_subject,
t.email_message
FROM temp t
WHERE t.rn = 1
which scans the table only once: it sorts the rows by ID in descending order and returns the one that ranks as the "highest" (i.e. the last).
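For completeness, here is how the procedure from the question might look with the first query plugged into the cursor (a sketch reusing the names from the question):
procedure e_test_send
is
   cursor get_rec is
      select id, email_from, email_to, email_cc, email_subject, email_message
      from test_email_tab
      where id = (select max(id) from test_email_tab);
begin
   -- the cursor now returns at most one row, so the loop sends a single email
   for rec_ in get_rec loop
      ifsapp.send_email_api.send_html_email(rec_.email_to, rec_.email_from, rec_.email_subject, rec_.email_message);
   end loop;
end e_test_send;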
I am receiving information in a CSV file from one department to compare with the same information from a different department, to check for discrepancies (about 3/4 of a million rows of data, with 44 columns in each row). After I have the data in a table, I have a program that takes the data and sends reports based on an HQ. I feel like the way I am going about this is not the most efficient. I am using Oracle for this comparison.
Here is what I have:
I have a vb.net program that parses the data and inserts it into an extract table
I run a procedure to do a full outer join on the two tables into a new table, with the fields from one department suffixed with '_c'
I run another procedure to compare the old/new data and update 2 different tables with detail and summary information. Here is code from inside the procedure:
DECLARE
CURSOR Cur_Comp IS SELECT * FROM T.AEC_CIS_COMP;
BEGIN
FOR compRow in Cur_Comp LOOP
--If service pipe exists in CIS but not in FM and the service pipe has status of retired in CIS, ignore the variance
If(compRow.pipe_num = '' AND cis_status_c = 'R')
continue
END IF
--If there is not a summary record for this HQ in the table for this run, create one
INSERT INTO t.AEC_CIS_SUM (HQ, RUN_DATE)
SELECT compRow.HQ, to_date(sysdate, 'DD/MM/YYYY') from dual WHERE NOT EXISTS
(SELECT null FROM t.AEC_CIS_SUM WHERE HQ = compRow.HQ AND RUN_DATE = to_date(sysdate, 'DD/MM/YYYY'))
-- Check fields and update the tables accordingly
If (compRow.cis_loop <> compRow.cis_loop_c) Then
--Insert information into the details table
INSERT INTO T.AEC_CIS_DET( Fac_id, Pipe_Num, Hq, Address, AutoUpdatedFl,
DateTime, Changed_Field, CIS_Value, FM_Value)
VALUES(compRow.Fac_ID, compRow.Pipe_Num, compRow.Hq, compRow.Street_Num || ' ' || compRow.Street_Name,
'Y', sysdate, 'Cis_Loop', compRow.cis_loop, compRow.cis_loop_c);
-- Update information into the summary table
UPDATE AEC_CIS_SUM
SET cis_loop = cis_loop + 1
WHERE Hq = compRow.Hq
AND Run_Date = to_date(sysdate, 'DD/MM/YYYY')
End If;
END LOOP;
END;
Any suggestions for an easier way of doing this, rather than an IF statement for all 44 columns of the table? (This runs once a week, if it matters.)
Update: Just to clarify, there are 88 columns of data (44 duplicated for comparison, one set suffixed with _c). One table lists each differing field in its own row, so one compared row can produce 30+ records in that table. The other table keeps a tally of the number of discrepancies for each week.
First of all, I believe that your task can (and actually should) be implemented with straight SQL. No fancy cursors, no loops; just selects, inserts and updates. I would start with unpivoting your source data (it is not clear whether you have a primary key to join the two sets on; I guess you do):
Col0_PK    Col1  Col2  Col3  Col4
---------------------------------
Row1_val   A     B     C     D
Row2_val   E     F     G     H
Above is your source data. Using UNPIVOT clause we convert it to:
Col0_PK    Col_Name  Col_Value
------------------------------
Row1_val   Col1      A
Row1_val   Col2      B
Row1_val   Col3      C
Row1_val   Col4      D
Row2_val   Col1      E
Row2_val   Col2      F
Row2_val   Col3      G
Row2_val   Col4      H
I think you get the idea. Say we have table1 with one set of data and the identically structured table2 with the second set of data. It is a good idea to use index-organized tables here.
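In case the syntax helps, the unpivoting step might look like this (a sketch: source_table and the column names come from the example above, not from your real tables, and all unpivoted columns must share one datatype, so you may need to cast them to VARCHAR2 first):
select col0_pk, col_name, col_value
from source_table
unpivot include nulls (col_value for col_name in (col1, col2, col3, col4));
INCLUDE NULLS keeps rows whose value is NULL, which matters here because the comparison below relies on NVL to catch NULL-versus-value differences.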
The next step is comparing rows to each other and storing the difference details. Something like:
insert into diff_details(some_service_info_columns_here)
select some_service_info_columns_here_along_with_data_difference
from table1 t1 inner join table2 t2
on t1.Col0_PK = t2.Col0_PK
and t1.Col_name = t2.Col_name
and nvl(t1.Col_value, 'Dummy1') <> nvl(t2.Col_value, 'Dummy2');
And in the last step we fill the difference summary table:
insert into diff_summary(summary_columns_here)
select diff_row_id, count(*) as diff_count
from diff_details
group by diff_row_id;
It's just a rough draft to show my approach; I'm sure there are many more details that should be taken into account. To summarize, I suggest two things:
UNPIVOT data
Use SQL statements instead of cursors
You have several issues in your code:
If(compRow.pipe_num = '' AND cis_status_c = 'R')
continue
END IF
"cis_status_c" is not declared. Is it a variable or a column in AEC_CIS_COMP?
In case it is a column, just put the condition into the cursor, i.e. SELECT * FROM T.AEC_CIS_COMP WHERE NOT (pipe_num = '' AND cis_status_c = 'R')
to_date(sysdate, 'DD/MM/YYYY')
That's nonsense: you convert a date into a date (via an implicit conversion to a string and back). Simply use TRUNC(SYSDATE).
Anyway, I think you can use three single statements instead of a cursor:
INSERT INTO t.AEC_CIS_SUM (HQ, RUN_DATE)
SELECT DISTINCT comp.HQ, trunc(sysdate)
from AEC_CIS_COMP comp
WHERE NOT EXISTS
(SELECT null FROM t.AEC_CIS_SUM WHERE HQ = comp.HQ AND RUN_DATE = trunc(sysdate));
INSERT INTO T.AEC_CIS_DET( Fac_id, Pipe_Num, Hq, Address, AutoUpdatedFl, DateTime, Changed_Field, CIS_Value, FM_Value)
select comp.Fac_ID, comp.Pipe_Num, comp.Hq, comp.Street_Num || ' ' || comp.Street_Name, 'Y', sysdate, 'Cis_Loop', comp.cis_loop, comp.cis_loop_c
from T.AEC_CIS_COMP comp
where comp.cis_loop <> comp.cis_loop_c;
UPDATE AEC_CIS_SUM
SET cis_loop = cis_loop + 1
WHERE Hq IN (Select Hq from T.AEC_CIS_COMP)
AND trunc(Run_Date) = trunc(sysdate);
They are not tested, but they should give you a hint of how to do it.
I asked a question yesterday which got answers, but they didn't address the main point: I wanted to reduce the amount of time it took to do a MINUS operation.
Now I'm thinking about doing the MINUS operation in blocks of 5000, appending each iteration's results to the cursor and finally returning the cursor.
I have the following:
DECLARE
   CV_1       SYS_REFCURSOR;
   V_CNT      NUMBER := 0;
   V_INTERVAL NUMBER := 5000;
BEGIN
   SELECT COUNT(1) INTO V_CNT FROM TABLE_1;
   WHILE (V_CNT > 0)
   LOOP
      OPEN CV_1 FOR
         SELECT A.HEAD, A.EFFECTIVE_DATE
         FROM TABLE_1 A
         WHERE A.TYPE_OF_ACTION = '6'
           AND A.EFFECTIVE_DATE >= ADD_MONTHS(SYSDATE, -15)
           AND ROWNUM <= V_INTERVAL
         MINUS
         SELECT B.HEAD, B.EFFECTIVE_DATE
         FROM TABLE_2 B
         WHERE ROWNUM <= V_INTERVAL;
      V_CNT := V_CNT - V_INTERVAL;
   END LOOP;
END;
However, as you see...in each iteration the cursor is overwritten. How can I change the code so that in each iteration it appends to cv_1 cursor rather than overwriting?
You haven't stated the requirement clearly, so I am assuming you want to do a MINUS on two tables, A and B; i.e. you want to find tuples in A that are not in B.
Assuming this, the logic that you have written is not completely correct, as you are doing a MINUS on corresponding (5000-row) batches of A and B.
E.g., your logic will return a tuple from the 4000th row of table A even though it is present in, say, the 6000th row of table B.
I suggest you use a left outer join to accomplish this (same as Peter Lang's post).
That should suffice for your performance requirements too, I think.
That's not how cursors work; you would have to store the values in some sort of collection.
Your current query gets you 5000 random rows from Table_1 and removes rows that also exist in 5000 random rows selected from Table_2.
Have you tried doing it without the MINUS?
As I understand the query, it should produce the same as this one:
Select a.head, a.effective_date
From table_1 a
Left Join table_2 b On (b.head = a.head And b.effective_date = a.effective_date )
Where a.type_of_action='6' And a.effective_date >= ADD_MONTHS(SYSDATE,-15)
And b.head Is Null;
Having a compound index on TABLE_1 (type_of_action, head, effective_date) and on TABLE_2 (head, effective_date) should help you with performance.
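If they don't exist yet, creating them might look like this (the index names are illustrative):
create index table_1_comp_ix on table_1 (type_of_action, head, effective_date);
create index table_2_comp_ix on table_2 (head, effective_date);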