Oracle insert if not exists statement

insert into OPT (email, campaign_id) values('mom#cox.net',100)
where not exists( select * from OPT where (email ="mom#cox.net" and campaign_id =100)) ;
Error report: SQL Error: ORA-00933: SQL command not properly ended
00933. 00000 - "SQL command not properly ended"
*Cause:
*Action:
How do I insert a new row if it doesn't already exist in Oracle?

insert into OPT (email, campaign_id)
select 'mom#cox.net',100
from dual
where not exists(select *
from OPT
where (email ='mom#cox.net' and campaign_id =100));

The correct way to insert a row (in Oracle) only when a matching record does not already exist is to use the MERGE statement.
Please note that this question has already been answered here on SO:
oracle insert if row not exists
insert if not exists oracle

MERGE INTO OPT
USING (SELECT 1 "one" FROM dual)
ON (OPT.email = 'mom#cox.net' AND OPT.campaign_id = 100)
WHEN NOT MATCHED THEN
  INSERT (email, campaign_id)
  VALUES ('mom#cox.net', 100);

insert into OPT (email, campaign_id)
select 'mom#cox.net' as email, 100 as campaign_id from dual
MINUS
select email, campaign_id from OPT;
If there is already a record with mom#cox.net/100 in OPT, the MINUS will subtract it from the select 'mom#cox.net' as email, 100 as campaign_id from dual row and nothing will be inserted. On the other hand, if there is no such record, the MINUS does not subtract anything and the values mom#cox.net/100 will be inserted.
As p.marino has already pointed out, MERGE is probably the better (and more correct) solution for your problem, as it is specifically designed for this task.

Another approach would be to leverage the INSERT ALL syntax from Oracle:
INSERT ALL
  INTO table1 (email, campaign_id) VALUES (email, campaign_id)
WITH source_data AS
  (SELECT 'mom#cox.net' email, 100 campaign_id FROM dual
   UNION ALL
   SELECT 'dad#cox.com' email, 200 campaign_id FROM dual)
SELECT email,
       campaign_id
FROM   source_data src
WHERE  NOT EXISTS (SELECT 1
                   FROM   table1 dest
                   WHERE  src.email = dest.email
                   AND    src.campaign_id = dest.campaign_id);
INSERT ALL also allows us to perform conditional inserts into multiple tables, based on a subquery as the source.
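For illustration, a minimal sketch of such a conditional multi-table insert (the tables opt_consumer and opt_business, and the split on campaign_id, are made up for this example):
INSERT ALL
  -- route each source row to a table based on a condition
  WHEN campaign_id < 500 THEN
    INTO opt_consumer (email, campaign_id) VALUES (email, campaign_id)
  WHEN campaign_id >= 500 THEN
    INTO opt_business (email, campaign_id) VALUES (email, campaign_id)
SELECT 'mom#cox.net' AS email, 100 AS campaign_id FROM dual
UNION ALL
SELECT 'dad#cox.com' AS email, 700 AS campaign_id FROM dual;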
There are some really clean and nice examples to refer to:
oracletutorial.com
oracle-base.com/

Related

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table:
create table test (
id number primary key,
not_clob varchar2(255),
this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
x CLOB := 'C';
BEGIN
FOR i in 1..32767
LOOP
x := x||'C';
END LOOP;
INSERT INTO test(id,not_clob,this_is_clob) values(3,'test3',x);
END;
/
commit;
Now let's do a select using non-clob columns
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with clob column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is, as I couldn't find anything on it?
The documentation says the argument for any analytic function can be any datatype but it seems unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome this problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOB as such; it's the fact that Oracle has a hard limit of 4000 characters for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!). See the Oracle documentation for details.
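For example, a rough sketch assuming extended data types have been enabled (MAX_STRING_SIZE = EXTENDED); SQL VARCHAR2 values can then be up to 32767 bytes, so a much larger chunk of the CLOB fits through LAG:
-- assumes the 12c extended data types feature is already enabled on this database
select id,
       lag(dbms_lob.substr(this_is_clob, 32767, 1)) over (order by id) as prev_chunk
from test;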
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find that documented anywhere.
If your values in the CLOB columns are always less than 4000 characters, you may use TO_CHAR
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent self join (which may not be as efficient as LAG):
SELECT a.id,
       b.this_is_clob AS lagging
FROM   test a
LEFT JOIN test b ON b.id < a.id
-- keep only the closest preceding row (or no row at all for the first id)
WHERE  b.id IS NULL
   OR  b.id = (SELECT MAX(c.id) FROM test c WHERE c.id < a.id);
I know this is an old question, but I think I found an answer which eliminates the need to restrict the CLOB length and wanted to share it. Utilizing CTE and recursive subqueries, we can replicate the lag functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works:
1. I create TEST_TABLE; this really is only for the example, as you should already have this table somewhere in your schema.
2. I create a CTE of the data I want to pull, plus a LAG function on the primary key (or a unique column) of the table, partitioned and ordered the same way as in my original query.
3. I create a recursive subquery using the initial row as the root and descending row by row, joining on the lagged column, and returning both the CLOB column from the current row and the CLOB column from its parent row.

Sybase insert into temp table with identity column

I'm trying to insert records for a couple of columns from a physical table into a temp table with a customized IDENTITY column. It creates the identity column (field name = idnum), but the values are 0 for all rows. I'm using the code below. If anyone can tell me what I'm doing wrong, it would be greatly appreciated.
Note: I'm trying this in Sybase ASE 15.7
SELECT
* INTO #achu_test
FROM (SELECT TOP 10
idnum = IDENTITY(8),
First_Name,
Last_Name
FROM Employees) myTable
My bad! I misplaced the IDENTITY: instead of using it before "* INTO", I used it inside the subquery.
SELECT idnum = IDENTITY(8),* INTO #achu_test
FROM (SELECT TOP 10 First_Name, Last_Name FROM Employees) myTable
A good sleep might have given the result for me :)

Oracle identity column and insert into select

Oracle 12 introduced a nice feature (which should have been there long ago, btw!): identity columns. So here's a script:
CREATE TABLE test (
a INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
b VARCHAR2(10)
);
-- Ok
INSERT INTO test (b) VALUES ('x');
-- Ok
INSERT INTO test (b)
SELECT 'y' FROM dual;
-- Fails
INSERT INTO test (b)
SELECT 'z' FROM dual UNION ALL SELECT 'zz' FROM DUAL;
The first two inserts run without issues, providing values of 1 and 2 for 'a'. But the third one fails with ORA-01400: cannot insert NULL into ("DEV"."TEST"."A"). Why did this happen? A bug? Nothing like this is mentioned in the documentation's section on identity column restrictions. Or am I just doing something wrong?
I believe the query below works, though I haven't tested it!
INSERT INTO Test (b)
SELECT * FROM
(
SELECT 'z' FROM dual
UNION ALL
SELECT 'zz' FROM dual
);
Not sure if it helps you in any way.
For GENERATED ALWAYS AS IDENTITY, Oracle internally just uses a sequence, and the options that apply to a general sequence apply to it as well.
NEXTVAL is used to fetch the next available sequence value, and it is a pseudocolumn.
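As a quick check (a sketch; the dictionary view is available from 12c on), you can see the sequence Oracle created behind the identity column:
-- shows the system-generated sequence (typically named ISEQ$$_...) backing the identity column
select table_name, column_name, sequence_name
from   user_tab_identity_cols
where  table_name = 'TEST';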
The below is from Oracle
You cannot use CURRVAL and NEXTVAL in the following constructs:
A subquery in a DELETE, SELECT, or UPDATE statement
A query of a view or of a materialized view
A SELECT statement with the DISTINCT operator
A SELECT statement with a GROUP BY clause or ORDER BY clause
A SELECT statement that is combined with another SELECT statement with the UNION, INTERSECT, or MINUS set operator
The WHERE clause of a SELECT statement
DEFAULT value of a column in a CREATE TABLE or ALTER TABLE statement
The condition of a CHECK constraint
The subquery and set-operation rules above should answer your question.
As for the reason for the NULL: when a pseudocolumn (e.g. NEXTVAL) is used with a set operation or in any of the other constructs mentioned above, the output is NULL, as Oracle cannot evaluate it when combining multiple selects.
Let us see the below query:
select rownum from dual
union all
select rownum from dual
the result is
ROWNUM
1
1

Hive multiple insert goes wrong with the DISTINCT select statement

I read this code from "Hadoop: The Definitive Guide":
FROM (SELECT a.ad_id, a.campaign_id, a.account_id, b.user_id
FROM dim_ads a JOIN impression_logs b ON (b.ad_id = a.ad_id)
WHERE b.dateid = '2008-12-01') x
INSERT OVERWRITE DIRECTORY 'results_gby_adid'
SELECT x.ad_id, count(1), count(DISTINCT x.user_id) GROUP BY x.ad_id
INSERT OVERWRITE DIRECTORY 'results_gby_campaignid'
SELECT x.campaign_id, count(1), count(DISTINCT x.user_id) GROUP BY x.campaign_id
INSERT OVERWRITE DIRECTORY 'results_gby_accountid'
SELECT x.account_id, count(1), count(DISTINCT x.user_id) GROUP BY x.account_id;
but in my test, using several DISTINCTs does not give the right results.
My HiveQL is as below:
CREATE TABLE IF NOT EXISTS a (logindate int, id int);
then
load a local file into this table...
CREATE TABLE IF NOT EXISTS user (id INT) PARTITIONED BY (logindate INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE;
then
if inserting into the table separately:
INSERT OVERWRITE TABLE user PARTITION(logindate=20130120) SELECT DISTINCT(id) FROM a WHERE logindate=20130120;
INSERT OVERWRITE TABLE user PARTITION(logindate=20130121) SELECT DISTINCT(id) FROM a WHERE logindate=20130121;
the results are correct;
but if I use the following multiple-insert HQL:
FROM a
INSERT OVERWRITE TABLE user PARTITION(logindate=20130120) SELECT DISTINCT(id) WHERE logindate=20130120
INSERT OVERWRITE TABLE user PARTITION(logindate=20130121) SELECT DISTINCT(id) WHERE logindate=20130121;
the results are not correct: both partitions have the same number of records, as if it had selected DISTINCT(id) WHERE logindate=20130120 OR logindate=20130121.
So is it a bug, or did I write the wrong syntax?
DISTINCT has a bit of an odd history in the code as an alias for GROUP BY.
If there is a bug, then the version of Hive you are using would be important to know, since bugs are addressed in each release.
This might work:
FROM a
INSERT OVERWRITE TABLE user PARTITION(logindate=20130120) SELECT id WHERE logindate=20130120 GROUP BY id
INSERT OVERWRITE TABLE user PARTITION(logindate=20130121) SELECT id WHERE logindate=20130121 GROUP BY id;
If that doesn't work, this will definitely work... even though it isn't the approach you were attempting to use:
FROM (select distinct id, logindate from a where logindate in ('20130120','20130121')) subq_a
INSERT OVERWRITE TABLE user PARTITION(logindate=20130120) SELECT id WHERE logindate=20130120
INSERT OVERWRITE TABLE user PARTITION(logindate=20130121) SELECT id WHERE logindate=20130121;

Select and Insert across dblink

I am having a bit of trouble with an INSERT ... SELECT across a dblink in Oracle 10. I am using the following statement:
INSERT INTO LOCAL.TABLE_1 ( COL1, COL2)
SELECT COL1, COL2
FROM REMOTE.TABLE1#dblink s
WHERE COL1 IN ( SELECT COL1 FROM WORKING_TABLE)
When I run the statement the following is what gets run against the remote server on the DB Link:
SELECT /*+ OPAQUE_TRANSFORM */ "COL1", "COL2"
FROM "REMOTE"."TABLE1" "S"
If I run the select only and do not do the insert into the following is run:
SELECT /*+ */ "A1"."COL1"
, "A1"."COL2"
FROM "REMOTE"."TABLE1" "A1"
WHERE "A1"."COL1" =
ANY ( SELECT "A2"."COL1"
FROM "LOCAL"."TABLE1"#! "A2")
The issue is that in the insert case the entire table is being pulled across the dblink and then filtered locally, which takes a fair bit of time given the table size. Is there any reason adding the insert would change the behavior in this manner?
You may want to use the driving_site hint. There is a good explanation here:
http://www.dba-oracle.com/t_sql_dblink_performance.htm
When it comes to DML, Oracle chooses to ignore any driving_site hint and executes the statement at the target site. So I doubt you would be able to change that (even using the WITH approach shown below). A possible workaround is to create a synonym for LOCAL.TABLE1 on the remote database and use that in your INSERT statement.
Leveraging the WITH clause could optimize the retrieval of your working set:
WITH remote_rows AS
(SELECT /*+DRIVING_SITE(s)*/COL1, COL2
FROM REMOTE.TABLE1#dblink s
WHERE COL1 IN ( SELECT COL1 FROM WORKING_TABLE))
INSERT INTO LOCAL.TABLE_1 ( COL1, COL2)
SELECT COL1, COL2
FROM remote_rows
Oracle will ignore the driving_site hint for INSERT statements, as DML is always executed locally. The way around this is to create a cursor with the driving_site hint, then loop through the cursor with a BULK COLLECT/FORALL and insert into the target local table.
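A rough sketch of that approach, reusing the schema, table, and dblink names from the question (written with @ for the link; the batch size of 1000 is arbitrary):
DECLARE
  -- the cursor is a plain query, so the driving_site hint is honoured and the
  -- filtering happens on the remote site
  CURSOR c_remote IS
    SELECT /*+ DRIVING_SITE(s) */ col1, col2
    FROM   remote.table1@dblink s
    WHERE  col1 IN (SELECT col1 FROM working_table);

  TYPE t_col1 IS TABLE OF local.table_1.col1%TYPE;
  TYPE t_col2 IS TABLE OF local.table_1.col2%TYPE;
  l_col1 t_col1;
  l_col2 t_col2;
BEGIN
  OPEN c_remote;
  LOOP
    FETCH c_remote BULK COLLECT INTO l_col1, l_col2 LIMIT 1000;
    EXIT WHEN l_col1.COUNT = 0;

    -- bulk insert the fetched batch into the local target table
    FORALL i IN 1 .. l_col1.COUNT
      INSERT INTO local.table_1 (col1, col2)
      VALUES (l_col1(i), l_col2(i));
  END LOOP;
  CLOSE c_remote;
  COMMIT;
END;
/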
How big is WORKING_TABLE?
If it is small enough, you could try selecting from WORKING_TABLE into a collection, and then passing the elements of that collection as elements in an IN list.
declare
  TYPE t_type IS TABLE OF VARCHAR2(60);
  v_coll t_type;
begin
  dbms_application_info.set_module('TEST','TEST');
  --
  select distinct object_type
  bulk collect into v_coll
  from user_objects;
  --
  IF v_coll.count > 20 THEN
    raise_application_error(-20001, 'You need '||v_coll.count||' elements in the IN list');
  ELSE
    v_coll.extend(20);
  END IF;
  --
  insert into abc (object_type, object_name)
  select object_type, object_name
  from user_objects#tmfprd
  where object_type in
        (v_coll(1),  v_coll(2),  v_coll(3),  v_coll(4),  v_coll(5),
         v_coll(6),  v_coll(7),  v_coll(8),  v_coll(9),  v_coll(10),
         v_coll(11), v_coll(12), v_coll(13), v_coll(14), v_coll(15),
         v_coll(16), v_coll(17), v_coll(18), v_coll(19), v_coll(20));
  --
  dbms_output.put_line(sql%rowcount);
end;
/
An insert with a cardinality hint seems to work in 11.2:
INSERT /*+ append */
INTO MIG_CGD30_TEST
SELECT /*+ cardinality(ZFD 400000) cardinality(CGD 60000000)*/
TRIM (CGD.NUMCPT) AS NUMCPT, TRIM (ZFD.NUMBDC_NEW) AS NUMBDC
FROM CGD30#DBL_MIG_THALER CGD,
ZFD10#DBL_MIG_THALER ZFD,
EVD01_ADS_DR3W2 EVD
