Is there a way to insert rows into a table from another table, excluding one column in the middle, without specifying all the column names? I have 128 columns in the table.
I created a view to store the original records.
CREATE VIEW V_TXN_STG AS
SELECT * FROM TXN_STG;
In table TXN_STG, only one column BRN_CODE is changing.
Something like this doesn't work, because the column is not the last one but somewhere in the middle of the table structure.
INSERT INTO TXN_STG
SELECT v.*, 'BRN-001' AS BRN_CODE
FROM V_TXN_STG v;
I don't believe that this is possible without explicitly specifying the columns in your select.
First you have to get the columns:
SELECT listagg(column_name, ',') within group (order by column_name) columns
FROM all_tab_columns
WHERE table_name = 'AAA' -- table to insert into
and column_name <> 'B' -- column name you want to exclude
GROUP BY table_name;
Then use that result as the column list in your insert:
insert into aaa(A,C) -- A,C is the result from above; column B is excluded
select *
from (select 'a' A,'q' AMOUNT,'c' C from dual union all -- my sample data
select 'a','a','c' from dual union all
select 'a','b','c' from dual union all
select 'a','c','c' from dual union all
select 'a','d','c' from dual )
pivot
(
max(1)
for (AMOUNT) -- the column you want to remove from the sample data
IN ()
)
where 1=1
order by A;
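If you don't want to paste the 128-column list by hand, the same listagg result can also be fed into dynamic SQL. A minimal sketch, untested, assuming the TXN_STG/V_TXN_STG names from the question and that the concatenated column list fits in 4000 characters:
DECLARE
  l_cols VARCHAR2(4000);
BEGIN
  -- Build the column list, excluding the one column we want to override.
  SELECT listagg(column_name, ',') WITHIN GROUP (ORDER BY column_id)
    INTO l_cols
    FROM user_tab_columns
   WHERE table_name = 'TXN_STG'
     AND column_name <> 'BRN_CODE';

  -- Use the same list on both sides so the column positions line up.
  EXECUTE IMMEDIATE
    'INSERT INTO TXN_STG (' || l_cols || ', BRN_CODE) ' ||
    'SELECT ' || l_cols || ', ''BRN-001'' FROM V_TXN_STG';
END;
/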
I have a requirement to match a few attributes one by one. I'm looking to avoid multiple SELECT statements. Below is an example.
Table1
Col1|Price|Brand|size
-----------------------
A|10$|BRAND1|SIZE1
B|10$|BRAND1|SIZE1
C|30$|BRAND2|SIZE2
D|40$|BRAND2|SIZE4
Table2
Col1|Col2|Col3
--------------
B|XYZ|PQR
C|ZZZ|YYY
Table3
Col1|COL2|COL3|LIKECOL1|Price|brand|size
-----------------------------------------
B|XYZ|PQR|A|10$|BRAND1|SIZE1
C|ZZZ|YYY|D|NULL|BRAND2|NULL
In table3, I need to insert data from table2 by checking the conditions below.
Find a match for each table2 record on Brand, Size and Price.
If no match is found, try just Brand and Size.
If there is still no match, try Brand only.
In the above example, the first record in table2 found a match on all 3 attributes and so was inserted into table3; for the second record, record 'D' matches, but only on 'Brand'.
All I can think of is writing 3 different INSERT statements, like below, in an Oracle PL/SQL block.
insert into table3
select from tab2
where all 3 attributes are matching;
insert into table3
select from tab2
where brand and size are matching
and not exists in table3 (not exists is to avoid
inserting the same record which was already
inserted with all 3 attributes matched);
insert into table3
select from tab2
where Brand is matching and not exists in table3;
Can anyone please suggest a better way to achieve this, avoiding selecting from table2 multiple times?
This is a case for OUTER APPLY.
OUTER APPLY is a type of lateral join that allows you to join to inline views that refer to tables appearing earlier in your FROM clause. With that ability, you can define an inline view that finds all the matches, sorts them by the pecking order you've specified, and then uses FETCH FIRST 1 ROW ONLY to include only the first one in the results.
Using OUTER APPLY means that if there is no match, you will still get the table2 record -- just with all the match columns null. If you don't want that, you can change OUTER APPLY to CROSS APPLY.
Here is a working example (with step by step comments), shamelessly stealing the table creation scripts from Michael Piankov's answer:
create table Table1 (Col1,Price,Brand,size1)
as select 'A','10','BRAND1','SIZE1' from dual union all
   select 'B','10','BRAND1','SIZE1' from dual union all
   select 'C','30','BRAND2','SIZE2' from dual union all
   select 'D','40','BRAND2','SIZE4' from dual;

create table Table2(Col1,Col2,Col3)
as select 'B','XYZ','PQR' from dual union all
   select 'C','ZZZ','YYY' from dual;
-- INSERT INTO table3
SELECT t2.col1, t2.col2, t2.col3,
t1.col1 likecol1,
decode(t1.price,t1_template.price,t1_template.price, null) price,
decode(t1.brand,t1_template.brand,t1_template.brand, null) brand,
decode(t1.size1,t1_template.size1,t1_template.size1, null) size1
FROM
-- Start with table2
table2 t2
-- Get the row from table1 matching on col1... this is our search template
inner join table1 t1_template on
t1_template.col1 = t2.col1
-- Get the best match from table1 for our search
-- template, excluding the search template itself
outer apply (
SELECT * FROM table1 t1
WHERE 1=1
-- Exclude search template itself
and t1.col1 != t2.col1
-- All matches include BRAND
and t1.brand = t1_template.brand
-- order by match strength based on price and size
order by case when t1.price = t1_template.price and t1.size1 = t1_template.size1 THEN 1
when t1.size1 = t1_template.size1 THEN 2
else 3 END
-- Only get the best match for each row in T2
FETCH FIRST 1 ROW ONLY) t1;
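With the sample data above, this query should return the two rows shown in the question's expected table3 (A as the full match for B, and D as the brand-only match for C); uncomment the INSERT INTO table3 line to actually load them.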
Unfortunately, it is not clear what you mean by "match". What do you expect if there is more than one match?
Should only the first match be used, or should all available pairs be generated?
Regarding your question of how to avoid multiple inserts, there is more than one way:
You could use a multitable insert (INSERT FIRST) with conditions; see the sketch below.
You could join table1 to itself to get all pairs and filter the results in the WHERE condition.
You could use analytic functions.
I am sure there are other ways as well. But why would you want to avoid 3 simple inserts? They are easy to read and maintain.
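For the first option in the list above, a multitable INSERT FIRST might look like the following sketch (untested; the table3 column names likecol1, price, brand, size1 are assumed, since table3 isn't created in the scripts, and the price/size ranking mirrors the other answer):
-- A sketch only: INSERT FIRST routes each source row into the first WHEN
-- branch that matches, so unmatched attribute columns in table3 stay NULL.
INSERT FIRST
  WHEN price_ok = 1 AND size_ok = 1 THEN
    INTO table3 (col1, col2, col3, likecol1, price, brand, size1)
    VALUES (col1, col2, col3, likecol1, m_price, m_brand, m_size1)
  WHEN size_ok = 1 THEN
    INTO table3 (col1, col2, col3, likecol1, brand, size1)
    VALUES (col1, col2, col3, likecol1, m_brand, m_size1)
  ELSE
    INTO table3 (col1, col2, col3, likecol1, brand)
    VALUES (col1, col2, col3, likecol1, m_brand)
SELECT col1, col2, col3, likecol1, m_price, m_brand, m_size1, price_ok, size_ok
FROM (
  SELECT t2.col1, t2.col2, t2.col3,
         t1.col1  AS likecol1,
         t1.price AS m_price,
         t1.brand AS m_brand,
         t1.size1 AS m_size1,
         CASE WHEN t1.price = tpl.price THEN 1 ELSE 0 END AS price_ok,
         CASE WHEN t1.size1 = tpl.size1 THEN 1 ELSE 0 END AS size_ok,
         -- rank candidates per table2 row: full match, then size, then brand only
         ROW_NUMBER() OVER (
           PARTITION BY t2.col1
           ORDER BY CASE WHEN t1.price = tpl.price AND t1.size1 = tpl.size1 THEN 1
                         WHEN t1.size1 = tpl.size1 THEN 2
                         ELSE 3 END) AS rn
    FROM table2 t2
    JOIN table1 tpl ON tpl.col1 = t2.col1        -- the search template
    JOIN table1 t1  ON t1.brand = tpl.brand      -- brand must always match
                   AND t1.col1 <> t2.col1
)
WHERE rn = 1;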
Here is an example using analytic functions:
create table Table1 (Col1,Price,Brand,size1)
as select 'A','10','BRAND1','SIZE1' from dual union all
   select 'B','10','BRAND1','SIZE1' from dual union all
   select 'C','30','BRAND2','SIZE2' from dual union all
   select 'D','40','BRAND2','SIZE4' from dual;

create table Table2(Col1,Col2,Col3)
as select 'B','XYZ','PQR' from dual union all
   select 'C','ZZZ','YYY' from dual;
with s as (
select Col1,Price,Brand,size1,
count(*) over(partition by Price,Brand,size1 ) as match3,
count(*) over(partition by Price,Brand ) as match2,
count(*) over(partition by Brand ) as match1,
lead(Col1) over(partition by Price,Brand,size1 order by Col1) as like3,
lead(Col1) over(partition by Price,Brand order by Col1) as like2,
lead(Col1) over(partition by Brand order by Col1) as like1,
lag(Col1) over(partition by Price,Brand,size1 order by Col1) as like_desc3,
lag(Col1) over(partition by Price,Brand order by Col1) as like_desc2,
lag(Col1) over(partition by Brand order by Col1) as like_desc1
from Table1 t )
select t.Col1,t.Col2,t.Col3, coalesce(s.like3, s.like_desc3, s.like2, s.like_desc2, s.like1, s.like_desc1) as like_col,
case when match3 > 1 then size1 end as size1,
case when match1 > 1 then Brand end as Brand,
case when match2 > 1 then Price end as Price
from table2 t
left join s on s.Col1 = t.Col1
COL1 COL2 COL3 LIKE_COL SIZE1 BRAND PRICE
B XYZ PQR A SIZE1 BRAND1 10
C ZZZ YYY D - BRAND2 -
I am populating a dimension table named TIMES with data from an OLTP Table called SALES with the following code:
CREATE TABLE TIMES
(saleDay DATE PRIMARY KEY,
dayType VARCHAR(50) NOT NULL);
BEGIN
FOR rec IN
(SELECT saleDate, CASE WHEN h.hd IS NOT NULL THEN 'Holiday'
WHEN to_char(saleDate, 'd') IN (1,7) THEN 'Weekend'
ELSE 'Weekday' END dayType
FROM SALES s LEFT JOIN
(SELECT '01.01' hd FROM DUAL UNION ALL
SELECT '15.01' FROM DUAL UNION ALL
SELECT '19.01' FROM DUAL UNION ALL
SELECT '28.05' FROM DUAL UNION ALL
SELECT '04.07' FROM DUAL UNION ALL
SELECT '08.10' FROM DUAL UNION ALL
SELECT '11.11' FROM DUAL UNION ALL
SELECT '22.11' FROM DUAL UNION ALL
SELECT '25.12' FROM DUAL) h
ON h.hd = TO_CHAR(s.saleDate, 'dd.mm'))
LOOP
INSERT INTO TIMES VALUES rec;
END LOOP;
END;
/
When I run this, I get the errors ORA-00001 (unique constraint violated) and ORA-06512. I believe this is happening because the code tries to insert multiple dates (some of which are the same) into the PK of my TIMES dimension table (saleDay). How can I change this loop so that it populates only one instance of each saleDate into the saleDay PK and avoids the violation?
For instance, if there are three rows in the SALES table where the saleDate is 2015-10-10, the code should only populate ONE instance of 2015-10-10 into the saleDay PK. I'm thinking of using a WHILE clause, but I'm not 100% sure how that would work, since this code also uses CASE to determine whether the saleDay is a Holiday, Weekday, or Weekend and populates the result into the dayType column.
Adding DISTINCT as suggested in a Comment below your question is one way to solve the problem.
The following approach may be more efficient:
for rec in (select distinct saledate from sales)
loop
insert into times (saleday, daytype) values
(rec.saledate, CASE .......);
end loop;
That is: put the CASE expression in the INSERT statement, not in the definition of the (implicit) cursor. There is no reason to compute the CASE expression multiple times for the same date, which may appear many times in the SALES table. There is no reason for the CASE expression to be part of the cursor, either. The CASE expression can use an IN condition (case when to_char(rec.saledate, 'dd.mm') in ('01.01', '15.01', ....) then 'Holiday' when .......)
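Putting those pieces together, the full loop might look like this (a sketch, untested; the holiday list is copied from the question, and note that TO_CHAR(date, 'd') depends on NLS territory settings for which day is numbered 1):
BEGIN
  FOR rec IN (SELECT DISTINCT saleDate FROM SALES)
  LOOP
    INSERT INTO TIMES (saleDay, dayType)
    VALUES (rec.saleDate,
            CASE
              WHEN TO_CHAR(rec.saleDate, 'dd.mm') IN
                   ('01.01','15.01','19.01','28.05','04.07',
                    '08.10','11.11','22.11','25.12') THEN 'Holiday'
              WHEN TO_CHAR(rec.saleDate, 'd') IN ('1','7') THEN 'Weekend'
              ELSE 'Weekday'
            END);
  END LOOP;
END;
/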
Unless, of course, the homework problem specifically instructs you to use a left outer join....... :-(
Adding DISTINCT resolved this. I originally thought DISTINCT would negatively impact the CASE, but it doesn't. Thanks to I3rutt for pointing this out.
BEGIN
FOR rec IN
(SELECT DISTINCT saleDate, CASE WHEN h.hd IS NOT NULL THEN 'Holiday'
WHEN to_char(saleDate, 'd') IN (1,7) THEN 'Weekend'
ELSE 'Weekday' END dayType
FROM SALES s LEFT JOIN
(SELECT '01.01' hd FROM DUAL UNION ALL
SELECT '15.01' FROM DUAL UNION ALL
SELECT '19.01' FROM DUAL UNION ALL
SELECT '28.05' FROM DUAL UNION ALL
SELECT '04.07' FROM DUAL UNION ALL
SELECT '08.10' FROM DUAL UNION ALL
SELECT '11.11' FROM DUAL UNION ALL
SELECT '22.11' FROM DUAL UNION ALL
SELECT '25.12' FROM DUAL) h
ON h.hd = TO_CHAR(s.saleDate, 'dd.mm'))
LOOP
INSERT INTO TIMES VALUES rec;
END LOOP;
END;
/
I have a function which gets the greatest of three dates, one from each of three tables.
create or replace FUNCTION fn_max_date_val(
pi_user_id IN number)
RETURN DATE
IS
l_modified_dt DATE;
l_mod1_dt DATE;
l_mod2_dt DATE;
ret_user_id DATE;
BEGIN
SELECT MAX(last_modified_dt)
INTO l_modified_dt
FROM table1
WHERE id = pi_user_id;
-- this table contains a million records
SELECT nvl(MAX(last_modified_ts),sysdate-90)
INTO l_mod1_dt
FROM table2
WHERE table2_id=pi_user_id;
-- this table contains CLOB data and 800,000 records; table3 does not have user_id, so it has to be fetched via table2, as shown below
SELECT nvl(MAX(last_modified_dt),sysdate-90)
INTO l_mod2_dt
FROM table3
WHERE table2_id IN
(SELECT id FROM table2 WHERE table2_id=pi_user_id
);
execute immediate 'select greatest('''||l_modified_dt||''','''||l_mod1_dt||''','''||l_mod2_dt||''') from dual' into ret_user_id;
RETURN ret_user_id;
EXCEPTION
WHEN OTHERS THEN
return SYSDATE;
END;
This function works perfectly fine and executes within a second.
-- random user_id, just to test the functionality
SELECT fn_max_date_val(100) as max_date FROM DUAL
MAX_DATE
--------
27-02-14
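As a side note, the EXECUTE IMMEDIATE round trip through DUAL isn't needed here; GREATEST can be called directly on the DATE variables, which also avoids the implicit DATE-to-VARCHAR2 conversions that depend on NLS settings. A sketch (untested, same tables and columns as above):
create or replace FUNCTION fn_max_date_val(
  pi_user_id IN number)
RETURN DATE
IS
  l_modified_dt DATE;
  l_mod1_dt     DATE;
  l_mod2_dt     DATE;
BEGIN
  SELECT MAX(last_modified_dt)
    INTO l_modified_dt
    FROM table1
   WHERE id = pi_user_id;

  SELECT nvl(MAX(last_modified_ts), sysdate-90)
    INTO l_mod1_dt
    FROM table2
   WHERE table2_id = pi_user_id;

  SELECT nvl(MAX(last_modified_dt), sysdate-90)
    INTO l_mod2_dt
    FROM table3
   WHERE table2_id IN (SELECT id FROM table2 WHERE table2_id = pi_user_id);

  -- GREATEST returns NULL if any argument is NULL, which matches the
  -- behaviour of the original dynamic-SQL version.
  RETURN GREATEST(l_modified_dt, l_mod1_dt, l_mod2_dt);
END;
/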
For reference purposes I have used the table names table1, table2 and table3, but my business case is similar to what I have stated below.
I need to get the details of the table1 along with the highest modified date among the three tables.
I did something like this.
SELECT a.id,a.name,a.value,fn_max_date_val(id) as max_date
FROM table1 a where status_id ='Active';
The above query executes perfectly fine and returns results in milliseconds. But the problem came when I tried to use ORDER BY.
SELECT a.id,a.name,a.value,a.status_id,last_modified_dt,fn_max_date_val(id) as max_date
FROM table1 a where status_id ='Active'
order by status_id desc,last_modified_dt desc ;
-- It took almost 300 seconds to complete
I also tried adding indexes on the status_id and last_modified_dt columns, but no luck. Can this be done in the right way?
How about if your query is like this?
select a.*, fn_max_date_val(id) as max_date
from
(SELECT a.id,a.name,a.value,a.status_id,last_modified_dt
FROM table1 a where status_id ='Active'
order by status_id desc,last_modified_dt desc) a;
What if you don't use the function and do something like this:
SELECT a.id,a.name,a.value,a.status_id,a.last_modified_dt, x.max_date
FROM table1 a
cross apply
(
select max(max_date) as max_date
from (
SELECT MAX(last_modified_dt) as max_date
FROM table1 t1
WHERE t1.id = a.id
union
SELECT nvl(MAX(last_modified_ts),sysdate-90) as max_date
FROM table2 t2
WHERE t2.table2_id=a.id
...
) y
) x
where a.status_id ='Active'
order by status_id desc,last_modified_dt desc;
The syntax might contain errors, but it would be something like that, plus the third table in the derived table too.
I have two tables with the same columns in different databases. Both tables have records. I want to insert the records of table2 into table1, but I want to ignore those records which are already in table1. I also want to store all ignored records in a new table.
Example:
create table dest
(id number primary key,
col1 varchar2(10));
create table src
(id number,
col1 varchar2(10));
insert into src values(1,'ABC');
insert into src values(2,'GHB');
insert into src values(3,'DUP');
insert into src values(3,'DUP');
commit;
merge into dest
using
(select id,col1 from src) src on(dest.id=src.id)
when not matched then
insert values(src.id,src.col1)
when matched
then update set dest.col1=src.col1;
Error report -
SQL Error: ORA-00001: unique constraint (SCOTT.SYS_C0010807) violated
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
You can use INTERSECT and MINUS to determine the differences:
-- Test Data
-- table1#abc
with data1(id,
val) as
(select 1, 'val1'
from dual
union all
select 2, 'val2'
from dual
union all
select 3, 'val3'
from dual),
-- table2#xyz
data2(id,
val) as
(select 1, 'another val1'
from dual
union all
select 2, 'val2'
from dual
union all
select 4, 'val4'
from dual)
-- Intersection
select 'Intersection', intersection.*
from ((select * from data2) intersect (select * from data1)) intersection
union all
-- data2 not in data1
select 'data2 not in data1', d2.*
from ((select * from data2) minus (select * from data1)) d2
union all
-- data1 not in data2
select 'data1 not in data2', d1.*
from ((select * from data1) minus (select * from data2)) d1;
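If you then want to actually load dest from src and keep the skipped rows, a sketch along the same lines (untested; skipped_src is a hypothetical table name, and DISTINCT only collapses rows that are exact duplicates, such as the two (3,'DUP') rows in the question):
-- Keep the rows we are going to ignore (id already present in dest).
create table skipped_src as
select s.*
  from src s
 where exists (select 1 from dest d where d.id = s.id);

-- Load only the new ids; DISTINCT removes exact duplicates inside src
-- so the primary key on dest is not violated.
insert into dest (id, col1)
select distinct s.id, s.col1
  from src s
 where not exists (select 1 from dest d where d.id = s.id);

commit;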
I am trying to join two tables, but the key in one table is formatted like 'xxxxxx' and the key in the second table is formatted like '2222xxxxxx'. Is there a way I can either split the second column into two different columns to make the join, or join on only the last 6 digits of column 2?
Notes: the values are numeric, and the '2222' prefix is always the same 4 digits.
Try this way:
select *
from table1 a
INNER JOIN table2 b
on a.fieldname = substr(b.fieldname, -6)
Here is the documentation for the SUBSTR Function in Oracle.
Note that this is a very bad design for your second table; you should consider separating these values into different columns.
EDIT
To address this comment from @Boneist: "what if there were 7 numbers after the first 4, not just 6?"
In that case you should use substr(b.fieldname, 5), which takes everything after the 4-digit prefix.
To separate that column, you can either create a new column and update its values with the corresponding SUBSTR, or just add the SUBSTR expressions to your SELECT statement.
In the SELECT statement:
select fielda, fieldb,
substr(b.fieldname, 1, 4) firstPartOfField,
substr(b.fieldname, 5) secondPartOfField
from tableb b
To create another column instead, keep the '2222' prefix in the original column and move the suffix into the new one:
alter table tableb add newField number(6);
update tableb
set newField = substr(fieldname, 5),
    fieldname = substr(fieldname, 1, 4);
I would suggest a slightly different approach:
If your column has varchar/char data type:
drop table tab1;
drop table tab2;
create table tab1(col char(30));
create table tab2(col char(30));
insert into tab1
select lpad(to_char(level), 6, '0') as val
from dual connect by level <=100 order by level;
insert into tab2
select '2222' || lpad(to_char(level), 6, '0') as val
from dual connect by level <=100 order by level;
create unique index ux_tab1 on tab1(col);
create unique index ux_tab2 on tab2(col);
select tab1.col, tab2.col
from tab1
join tab2
on '2222' || tab1.col = tab2.col;
-- check execution plan
explain plan for
select tab1.col, tab2.col
from tab1
join tab2
on '2222' || tab1.col = tab2.col;
select * from table(dbms_xplan.display);
If your column has number data type:
drop table tab1;
drop table tab2;
create table tab1(col number);
create table tab2(col number);
insert into tab1
select 100000 + level as val
from dual connect by level <=100 order by level;
insert into tab2
select 2222100000 + level as val
from dual connect by level <=100 order by level;
create unique index ux_tab1 on tab1(col);
create unique index ux_tab2 on tab2(col);
select tab1.col, tab2.col
from tab1
join tab2
on 2222000000 + tab1.col = tab2.col;
-- check execution plan
explain plan for
select tab1.col, tab2.col
from tab1
join tab2
on 2222000000 + tab1.col = tab2.col;
select * from table(dbms_xplan.display);
Using this approach, Oracle will be able to use indexes for the join!
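Alternatively, if you stick with the substr(b.fieldname, -6) join from the other answer, a function-based index on that expression is another option (a sketch, assuming the table/column names from that answer and that fieldname is stored as VARCHAR2):
-- Index the derived 6-character suffix so a nested-loops join on
-- a.fieldname = substr(b.fieldname, -6) can use an index on table2.
create index ix_table2_suffix on table2(substr(fieldname, -6));

select *
  from table1 a
  join table2 b
    on a.fieldname = substr(b.fieldname, -6);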