How to convert column values into multiple insert rows with an Oracle cursor - oracle

I am trying to copy values from our old DB to a new DB where the table structure has changed.
Below is the structure of the old table:
Table1
Table1ID, WheelCount, BlindCount, OtherCount
For example, the values of Table1 look like this:
Table1ID, 1, 2, 5
Table1 has now been changed to TableNew with the following structure:
TableNewID, DisableID, Quantity
So the values should become:
TableNewID1, 1, 1 (here the quantity 1 = WheelCount from Table1)
TableNewID2, 2, 2 (here 2 = BlindCount)
TableNewID3, 3, 5 (here 5 = OtherCount)
How do I write a cursor to transform the Table1 values into the new TableNew structure?
Table1
Table1ID  WheelCount  BlindCount  OtherCount
       1           1           2           5
       2           8          10          15
A master table is defined to map DisableID:
DisableID  Type
        1  WheelCount
        2  BlindCount
        3  OtherCount
Expected structure:
ID  Table1ID  DisableID  Quantity
 1         1          1         1
 2         1          2         2
 3         1          3         5
 4         2          1         8
 5         2          2        10
 6         2          3        15

The simplest is a UNION ALL for each column you want to turn into a row.
insert into tablenew
select table1id, 1, wheelcount from table1
union all
select table1id, 2, blindcount from table1
union all
select table1id, 3, othercount from table1;
There are other, sleeker methods for avoiding multiple passes on the first table, in case it's huge.
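One sleeker, single-pass option, assuming Oracle 11g or later (where UNPIVOT is available) and a tablenew(table1id, disableid, quantity) target, is a sketch like this:
insert into tablenew (table1id, disableid, quantity)
select table1id, disableid, quantity
from table1
unpivot (quantity for disableid in (wheelcount as 1,
                                    blindcount as 2,
                                    othercount as 3));
This reads table1 once instead of three times.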

This is how I understood it.
Current table contents:
SQL> SELECT * FROM table1;
  TABLE1ID WHEELCOUNT BLINDCOUNT OTHERCOUNT
---------- ---------- ---------- ----------
         1          1          2          5
         2          8         10         15
Prepare new table:
SQL> CREATE TABLE tablenew
2 (
3 id NUMBER,
4 table1id NUMBER,
5 disableid NUMBER,
6 quantity NUMBER
7 );
Table created.
Sequence (will be used to populate tablenew.id column):
SQL> CREATE SEQUENCE seq_dis;
Sequence created.
Trigger (which actually populates tablenew.id):
SQL> CREATE OR REPLACE TRIGGER trg_bi_tn
2 BEFORE INSERT
3 ON tablenew
4 FOR EACH ROW
5 BEGIN
6 :new.id := seq_dis.NEXTVAL;
7 END;
8 /
Trigger created.
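(As an aside: on Oracle 12c and later you could skip the sequence and trigger and declare the column as an identity instead. A minimal sketch, assuming 12c syntax is available to you:
CREATE TABLE tablenew
(
  id        NUMBER GENERATED ALWAYS AS IDENTITY, -- populated automatically on insert
  table1id  NUMBER,
  disableid NUMBER,
  quantity  NUMBER
);
The database then fills id on every insert, with no trigger overhead.)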
Copy data:
SQL> INSERT INTO tablenew (table1id, disableid, quantity)
2 SELECT table1id, 1 disableid, wheelcount AS quantity FROM table1
3 UNION ALL
4 SELECT table1id, 2 disableid, blindcount AS quantity FROM table1
5 UNION ALL
6 SELECT table1id, 3 disableid, othercount AS quantity FROM table1;
6 rows created.
Result:
SQL> SELECT *
2 FROM tablenew
3 ORDER BY table1id, disableid;
        ID   TABLE1ID  DISABLEID   QUANTITY
---------- ---------- ---------- ----------
         1          1          1          1
         3          1          2          2
         5          1          3          5
         2          2          1          8
         4          2          2         10
         6          2          3         15
6 rows selected.

Oracle - how to insert if not exists?

Example DB: https://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=49af209811bce88aa67b42387f1bb5f6
I'd like to insert this line:
1002 9 1 UNKNOWN
because this line exists:
1002 5 1 JIM
I was thinking about something like
select codeclient from STATS_CLIENT_TEST where CODEAXESTAT=5
and then inserting codeclient, 9, 1, UNKNOWN,
but I'm not sure how to do it. Should it be a simple query or PL/SQL?
What's the best way to do it?
Thanks
Use an INSERT .. SELECT statement with a PARTITIONed outer join:
INSERT INTO stats_client_test (
  codeclient, codeaxestat, codeelementstat, valeuraxestatistiqueclient
)
SELECT cc.codeclient,
       s.codeaxestat,
       s.codeelementstat,
       'UNKNOWN'
FROM   (SELECT DISTINCT codeclient FROM stats_client_test) cc
       LEFT OUTER JOIN stats_client_test s
       PARTITION BY (s.codeaxestat, s.codeelementstat)
       ON (s.codeclient = cc.codeclient)
WHERE  s.rowid IS NULL;
or a MERGE statement:
MERGE INTO stats_client_test dst
USING (
  SELECT cc.codeclient,
         s.codeaxestat,
         s.codeelementstat,
         s.ROWID AS rid
  FROM   (SELECT DISTINCT codeclient FROM stats_client_test) cc
         LEFT OUTER JOIN stats_client_test s
         PARTITION BY (s.codeaxestat, s.codeelementstat)
         ON (s.codeclient = cc.codeclient)
) src
ON (dst.ROWID = src.rid)
WHEN NOT MATCHED THEN
  INSERT (codeclient, codeaxestat, codeelementstat, valeuraxestatistiqueclient)
  VALUES (src.codeclient, src.codeaxestat, src.codeelementstat, 'UNKNOWN');
Here's one option: using the MINUS set operator, find missing codeclient values and then insert appropriate row(s).
Before:
SQL> select * from stats_client_test order by codeaxestat, codeclient;
CODECLIENT           CODEAXESTAT CODEELEMENTSTAT VALEURAXESTATISTIQUECLIENT
-------------------- ----------- --------------- ----------------------------------------
1000                           5               1 JOHN
1001                           5               1 ALICE
1002                           5               1 JIM
1003                           5               1 BOB
1000                           9               1 MAN
1001                           9               1 WOMAN
1003                           9               1 MAN
7 rows selected.
Query:
SQL> insert into stats_client_test
  2    (codeclient, codeaxestat, codeelementstat, valeuraxestatistiqueclient)
  3  select x.codeclient, 9, 1, 'unknown'
  4  from (select codeclient from stats_client_test
  5        where codeaxestat = 5
  6        minus
  7        select codeclient from stats_client_test
  8        where codeaxestat = 9
  9       ) x;
1 row created.
After:
SQL> select * from stats_client_test order by codeaxestat, codeclient;
CODECLIENT           CODEAXESTAT CODEELEMENTSTAT VALEURAXESTATISTIQUECLIENT
-------------------- ----------- --------------- ----------------------------------------
1000                           5               1 JOHN
1001                           5               1 ALICE
1002                           5               1 JIM
1003                           5               1 BOB
1000                           9               1 MAN
1001                           9               1 WOMAN
1002                           9               1 unknown   --> here it is
1003                           9               1 MAN
8 rows selected.
SQL>
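As an alternative, a NOT EXISTS anti-join expresses the same "insert if missing" idea; a sketch, untested against the fiddle:
insert into stats_client_test
  (codeclient, codeaxestat, codeelementstat, valeuraxestatistiqueclient)
select s.codeclient, 9, 1, 'UNKNOWN'
from stats_client_test s
where s.codeaxestat = 5
  and not exists (select null
                  from stats_client_test t
                  where t.codeclient = s.codeclient
                    and t.codeaxestat = 9);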

Oracle update from random on another table

I have some fields in table1 to update with random values taken from fields in table2.
I need to pick a random row of table2 and update each row of table1 with that row's values.
Here is my SQL code, but it doesn't work:
update owner.table1 t1
set (t1.adress1, t1.zip_code, t1.town) =
    (select t2.adress, t2.zip_code, t2.town
     from owner.table2 t2
     where t2.id = trunc(dbms_random.value(1, 20000)))
Result: all rows are updated with the same values, as if no random row were picked from table2 for each row.
How about switching to the analytic ROW_NUMBER function? It doesn't really create a random value, but it might be good enough.
Here's an example: first, create test tables and insert some data:
SQL> create table t1 (id number,address varchar2(20), town varchar2(10));
Table created.
SQL> create table t2 (id number, address varchar2(20), town varchar2(10));
Table created.
SQL> insert into t1
2 select 1, 'Ilica 20', 'Zagreb' from dual union all
3 select 2, 'Petrinjska 30', 'Sisak' from dual union all
4 select 3, 'Stradun 12', 'Dubrovnik' from dual;
3 rows created.
SQL> insert into t2
2 select 1, 'Pavelinska 15', 'Koprivnica' from dual union all
3 select 2, 'Baščaršija 11', 'Sarajevo' from dual union all
4 select 3, 'Riva 22', 'Split' from dual;
3 rows created.
SQL> select * from t1 order by id;
        ID ADDRESS              TOWN
---------- -------------------- ----------
         1 Ilica 20             Zagreb
         2 Petrinjska 30        Sisak
         3 Stradun 12           Dubrovnik
SQL> select * from t2 order by id;
        ID ADDRESS              TOWN
---------- -------------------- ----------
         1 Pavelinska 15        Koprivnica
         2 Baščaršija 11        Sarajevo
         3 Riva 22              Split
Update t1 with rows from t2:
SQL> update t1 set
2 (t1.address, t1.town) =
3 (select x.address, x.town
4 from (select row_number() over (order by address) id, t2.address, t2.town
5 from t2
6 ) x
7 where x.id = t1.id);
3 rows updated.
SQL> select * from t1 order by id;
        ID ADDRESS              TOWN
---------- -------------------- ----------
         1 Baščaršija 11        Sarajevo
         2 Pavelinska 15        Koprivnica
         3 Riva 22              Split
SQL>
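If you want the mapping itself shuffled rather than matched by address order, one idea (an untested sketch) is to order the ROW_NUMBER by DBMS_RANDOM.VALUE instead, so a different ordering is produced each time the subquery runs:
update t1 set
  (t1.address, t1.town) =
  (select x.address, x.town
   from (select row_number() over (order by dbms_random.value) id,
                t2.address, t2.town
         from t2
        ) x
   where x.id = t1.id);
Note that the correlated subquery is re-evaluated per row of t1, so the same t2 row may end up assigned to more than one t1 row.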

What is the best way to perform a bulk insert in Oracle?

By this, I mean inserting millions of records into tables. I know how to insert data using loops, but that won't be a good approach for millions of rows.
I have two tables
CREATE TABLE test1
(
col1 NUMBER,
valu VARCHAR2(30),
created_Date DATE,
CONSTRAINT pk_test1 PRIMARY KEY (col1)
)
/
CREATE TABLE test2
(
col2 NUMBER,
fk_col1 NUMBER,
valu VARCHAR2(30),
modified_Date DATE,
CONSTRAINT pk_test2 PRIMARY KEY (col2),
FOREIGN KEY (fk_col1) REFERENCES test1(col1)
)
/
Please suggest a way to insert some dummy records, up to 1 million, without loops.
As a fairly simplistic approach, which may be enough for you based on your comments, you can generate dummy data using a hierarchical query. Here I'm using bind variables to control how many are created, and to make some of the logic slightly clearer, but you could use literals instead.
First, parent rows:
var parent_rows number;
var avg_children_per_parent number;
exec :parent_rows := 5;
exec :avg_children_per_parent := 3;
-- create dummy parent rows
insert into test1 (col1, valu, created_date)
select level,
       dbms_random.string('a', dbms_random.value(1, 30)),
       trunc(sysdate) - dbms_random.value(1, 365)
from   dual
connect by level <= :parent_rows;
which might generate rows like:
      COL1 VALU                           CREATED_DA
---------- ------------------------------ ----------
         1 rYzJBVI                        2016-11-14
         2 KmSWXfZJ                       2017-01-20
         3 dFSTvVsYrCqVm                  2016-07-19
         4 iaHNv                          2016-11-08
         5 AvAxDiWepPeONGNQYA             2017-01-20
Then the child rows, with a random fk_col1 in the range generated for the parents:
-- create dummy child rows
insert into test2 (col2, fk_col1, valu, modified_date)
select level,
       round(dbms_random.value(1, :parent_rows)),
       dbms_random.string('a', dbms_random.value(1, 30)),
       trunc(sysdate) - dbms_random.value(1, 365)
from   dual
connect by level <= :parent_rows * :avg_children_per_parent;
which might generate:
select * from test2;
      COL2    FK_COL1 VALU                           MODIFIED_D
---------- ---------- ------------------------------ ----------
         1          2 AqRUtekaopFQdCWBSA             2016-06-30
         2          4 QEczvejfTrwFw                  2016-09-23
         3          4 heWMjFshkPZNyNWVQG             2017-02-19
         4          4 EYybXtlaFHkAYeknhCBTBMusGAkx   2016-03-18
         5          4 ZNdJBQxKKARlnExluZWkHMgoKY     2016-06-21
         6          3 meASktCpcuyi                   2016-10-01
         7          4 FKgmf                          2016-09-13
         8          3 JZhk                           2016-06-01
         9          2 VCcKdlLnchrjctJrMXNb           2016-05-01
        10          5 ddL                            2016-11-27
        11          4 wbX                            2016-04-20
        12          1 bTfa                           2016-06-11
        13          4 QP                             2016-08-25
        14          3 RgmIahPL                       2016-03-04
        15          2 vhinLUmwLwZjczYdrPbQvJxU       2016-12-05
where the number of children varies for each parent:
select fk_col1, count(*) from test2 group by fk_col1 order by fk_col1;
   FK_COL1   COUNT(*)
---------- ----------
         1          1
         2          3
         3          3
         4          7
         5          1
To insert a million rows instead, just change the bind variables.
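One aside: a single CONNECT BY LEVEL generator for millions of rows can consume a lot of memory. A common workaround is to cross-join two smaller generators; a sketch of the parent insert at the million-row scale (1000 x 1000 rows, untested):
insert into test1 (col1, valu, created_date)
select rownum,
       dbms_random.string('a', dbms_random.value(1, 30)),
       trunc(sysdate) - dbms_random.value(1, 365)
from   (select level from dual connect by level <= 1000),
       (select level from dual connect by level <= 1000);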
If you needed more of a relationship between the children and parents, e.g. so the modified date is always after the created date, you can modify the query; for example:
insert into test2 (col2, fk_col1, valu, modified_date)
select *
from (
       select level,
              round(dbms_random.value(1, :parent_rows)) as fk_col1,
              dbms_random.string('a', dbms_random.value(1, 30)),
              trunc(sysdate) - dbms_random.value(1, 365) as modified_date
       from   dual
       connect by level <= :parent_rows * :avg_children_per_parent
     ) t2
where not exists (
        select null from test1 t1
        where t1.col1 = t2.fk_col1 and t1.created_date > t2.modified_date
      );
You may also want non-midnight times (I set everything to midnight via the trunc() call, based on the column name being 'date' not 'datetime'), or some column values null; so this might just be a starting point for you.
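For example, dropping the trunc() call keeps a random time of day; a sketch of the child insert with just that change:
insert into test2 (col2, fk_col1, valu, modified_date)
select level,
       round(dbms_random.value(1, :parent_rows)),
       dbms_random.string('a', dbms_random.value(1, 30)),
       sysdate - dbms_random.value(1, 365)  -- no trunc(), so times are not midnight
from   dual
connect by level <= :parent_rows * :avg_children_per_parent;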

How do you shift values down in a column in an Oracle table?

Given the following Oracle database table:
group  revision  comment
    1         1        1
    1         2        2
    1      null     null
    2         1        1
    2         2        2
    2         3        3
    2         4        4
    2      null     null
    3         1        1
    3         2        2
    3         3        3
    3      null     null
I want to shift the comment column one step down in relation to revision, within its group, so that I get the following table:
group  revision  comment
    1         1     null
    1         2        1
    1      null        2
    2         1     null
    2         2        1
    2         3        2
    2         4        3
    2      null        4
    3         1     null
    3         2        1
    3         3        2
    3      null        3
I have the following query:
MERGE INTO example_table t1
USING example_table t2
ON (
     (t1.revision = t2.revision + 1 OR
      (t2.revision = (SELECT MAX(t3.revision)
                      FROM example_table t3
                      WHERE t3.group = t1.group)
       AND t1.revision IS NULL))
     AND t1.group = t2.group
   )
WHEN MATCHED THEN UPDATE SET t1.comment = t2.comment;
That does most of this (still need a separate query to cover revision = 1), but it is very slow.
So my question is, how do I use Max here as efficiently as possible to pull out the highest revision for each group?
I would use LAG, not MAX. LAG(comments) returns NULL for the first row in each group, so the revision = 1 case is handled without a separate query.
create table example_table(group_id number, revision number, comments varchar2(40));
insert into example_table values (1,1,1);
insert into example_table values (1,2,2);
insert into example_table values (1,3,null);
insert into example_table values (2,1,1);
insert into example_table values (2,2,2);
insert into example_table values (2,3,3);
insert into example_table values (2,4,null);
select * from example_table;
merge into example_table e
using (select group_id, revision, comments,
              lag(comments, 1) over (partition by group_id
                                     order by revision nulls last) comments1
       from example_table) u
on (u.group_id = e.group_id and nvl(u.revision, 0) = nvl(e.revision, 0))
when matched then update set comments = u.comments1;
select * from example_table;

How to create id with AUTO_INCREMENT on a view in Oracle

Can anyone help me create an AUTO_INCREMENT column on a view in Oracle 11g?
Thanks
While it's not possible to return a single unique identity column for a view whose underlying data has no single unique identifier, it is possible to return composite values that uniquely identify each row. For example, given a table of CSV data with a unique ID on each row:
create table sample (id number primary key, csv varchar2(4000));
where the CSV column contains a string of comma-separated values:
insert into sample
select 1, 'a' from dual union all
select 2, 'b,c' from dual union all
select 3, 'd,"e",f' from dual union all
select 4, ',h,' from dual union all
select 5, 'j,"",l' from dual union all
select 6, 'm,,o' from dual;
The following query will unpivot the CSV data, and the composite values (ID, SEQ) will uniquely identify each VAL. The ID column identifies the record the data came from, and SEQ uniquely identifies the position within the CSV:
WITH pvt(id, seq, csv, val, nxt) as (
  SELECT id          -- parse out individual list items
       , 1           -- separated by commas and
       , csv         -- optionally enclosed by quotes
       , REGEXP_SUBSTR(csv, '(["]?)([^,]*)\1', 1, 1, null, 2)
       , REGEXP_INSTR(csv, ',', 1, 1)
  FROM   sample
  UNION ALL
  SELECT id
       , seq + 1
       , csv
       , REGEXP_SUBSTR(csv, '(["]?)([^,]*)\1', nxt + 1, 1, null, 2)
       , REGEXP_INSTR(csv, ',', nxt + 1, 1)
  FROM   pvt
  WHERE  nxt > 0
)
select * from pvt order by id, seq;
        ID        SEQ CSV        VAL               NXT
---------- ---------- ---------- ---------- ----------
         1          1 a          a                   0
         2          1 b,c        b                   2
         2          2 b,c        c                   0
         3          1 d,"e",f    d                   2
         3          2 d,"e",f    e                   6
         3          3 d,"e",f    f                   0
         4          1 ,h,        [NULL]              1
         4          2 ,h,        h                   3
         4          3 ,h,        [NULL]              0
         5          1 j,"",l     j                   2
         5          2 j,"",l     [NULL]              5
         5          3 j,"",l     l                   0
         6          1 m,,o       m                   2
         6          2 m,,o       [NULL]              3
         6          3 m,,o       o                   0
15 rows selected.
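If all that's needed is a sequential number on top of a view's own rows, ROW_NUMBER in the view definition can play that role, assuming the base data has a deterministic ordering column; a minimal sketch against the sample table above:
create or replace view sample_v as
select row_number() over (order by id) as rn, -- sequential "id" assigned at query time
       id,
       csv
from sample;
Keep in mind that rn is computed at query time, so values can shift as rows are added or removed; it is not a stored AUTO_INCREMENT.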
