Oracle: MERGE INTO with invisible columns

I have a problem with ORA-00904: invalid identifier.
For example:
I have a table created like this:
CREATE TABLE TEST_TABLE
(
COL_1 VARCHAR2(5 CHAR) NOT NULL,
COL_2 VARCHAR2(30 CHAR),
COL_3 RAW(16) INVISIBLE DEFAULT SYS_GUID ()
);
CREATE UNIQUE INDEX TEST_TABLE_PK ON TEST_TABLE
(COL_1);
A second table on a remote db (DBLINK: testdb) looks like this:
CREATE TABLE TEST_TABLE
(
COL_1 VARCHAR2(5 CHAR) NOT NULL,
COL_2 VARCHAR2(30 CHAR)
);
CREATE UNIQUE INDEX TEST_TABLE_PK ON TEST_TABLE
(COL_1);
In the next step I want to merge the data between the local and remote db with a MERGE INTO statement like this:
MERGE INTO TEST_TABLE@testdb target
USING (SELECT * FROM TEST_TABLE
WHERE COL_3 = '3F47613050860B4EE0539D0A10AC10B7') source
ON (target.COL_1 = source.COL_1)
WHEN MATCHED
THEN
UPDATE SET target.COL_2 = source.COL_2
WHEN NOT MATCHED
THEN
INSERT (COL_1, COL_2)
VALUES (source.COL_1, source.COL_2);
The MERGE INTO statement does not work because of ORA-00904: "A5"."COL_3": invalid identifier. But the same MERGE INTO statement works fine if the COL_3 column is visible.
Where does the "A5" come from?
What's the problem here? Does anyone have the same issue?
Oracle versions: the local db is 12c SE and the remote db is 11g.

Name COL_1 and COL_2 explicitly in your source subquery; the key is to get rid of the SELECT *. (The "A5" in the error message is an internal alias Oracle generates when it decomposes the distributed statement; the problem appears to be in how the * expansion interacts with the invisible column during that transformation.)
MERGE INTO TEST_TABLE@testdb target
USING (SELECT COL_1,COL_2 FROM TEST_TABLE
WHERE COL_3 = '3F47613050860B4EE0539D0A10AC10B7') source
ON (target.COL_1 = source.COL_1)
WHEN MATCHED
THEN
UPDATE SET target.COL_2 = source.COL_2
WHEN NOT MATCHED
THEN
INSERT (COL_1, COL_2)
VALUES (source.COL_1, source.COL_2);
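For reference, a hedged illustration (not part of the original answer) of why naming the columns sidesteps the problem: invisible columns are excluded from SELECT * expansion, but can always be selected by name.
-- Invisible columns are omitted from SELECT * ...
SELECT * FROM TEST_TABLE;                    -- returns COL_1, COL_2 only
-- ... but can be referenced explicitly:
SELECT COL_1, COL_2, COL_3 FROM TEST_TABLE;  -- returns all three columns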

Related

Can we use an insert overwrite after using insert all?

In Snowflake I am trying to insert updated records into a table. Then I want to identify the records that were just inserted as the most recent records, and save that as the final table output in a new column called ACTIVE, which will be either true or false. I am having an issue incorporating some sort of updated-table segment into my current query. Everything needs to be contained in the same query rather than broken up into separate parts.
I have my table as follows:
CREATE TABLE IF NOT EXISTS MY_TABLE
(
LINK_ID BINARY NOT NULL,
LOAD TIMESTAMP NOT NULL,
SOURCE STRING NOT NULL,
SOURCE_DATE TIMESTAMP NOT NULL,
"ORDER" BIGINT NOT NULL, -- ORDER is reserved in Snowflake, so it must be quoted
ID BINARY NOT NULL,
ATTRIBUTE_ID BINARY NOT NULL
);
I have records being inserted in this way:
INSERT ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
SELECT *
FROM TEST_TABLE;
I would like my final table from this to be the output as
SELECT *, "ORDER" = MAX("ORDER") OVER (PARTITION BY ID) AS ACTIVE
FROM MY_TABLE;
This is so I can identify the most recent record per ID group as ACTIVE/TRUE and the previous records within that ID group as INACTIVE/FALSE.
I tried to use an insert overwrite method like this:
INSERT ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
INSERT OVERWRITE INTO MY_TABLE
SELECT *, RSRC_OFFSET != MAX(RSRC_OFFSET) OVER (PARTITION BY ID) AS ACTIVE
FROM L_OPTION_OPTION_ALLOCATION_TEST
SELECT *
FROM MY_TABLE;
However, it seems INSERT OVERWRITE doesn't work this way (also, I am not sure if I can just add a new column to the table like this?). Is there a way I can incorporate it into this query, or a different way to update the table with this new ACTIVE column within the query itself?
Also, I am using INSERT ALL here because I am actually inserting into multiple different tables at once, but this is the table I am currently trying to modify.
You can use the overwrite option with conditional multi-table inserts.
Starting with your current statement:
INSERT ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
SELECT *
FROM TEST_TABLE;
Add the overwrite option immediately after the insert command:
INSERT OVERWRITE ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
SELECT *
FROM TEST_TABLE;
Note that this will truncate and insert into ALL tables in the multi-table insert. There is no way to be selective about which tables get truncated and reloaded and which don't.
https://docs.snowflake.com/en/sql-reference/sql/insert-multi-table.html#optional-parameters
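If truncating every table in the multi-table insert is unacceptable, one alternative (a sketch under the assumption that ACTIVE only needs to be derivable rather than stored; the view name is hypothetical) is to expose ACTIVE through a view instead of materializing it:
-- Derives ACTIVE on read instead of storing it, so no second
-- insert/overwrite pass over MY_TABLE is needed.
CREATE OR REPLACE VIEW MY_TABLE_WITH_ACTIVE AS
SELECT t.*,
       "ORDER" = MAX("ORDER") OVER (PARTITION BY ID) AS ACTIVE
FROM MY_TABLE t;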

"ORA-00001: unique constraint (constraint_name) violated" Error even though I have NOT EXISTS check

I am trying to insert some values into the table through the application and get the error "ORA-00001: unique constraint (constraint_name) violated". I have the below table:
CREATE TABLE EMPLOYEE
(
EMP_ID VARCHAR2(32) NOT NULL,
NAME VARCHAR2(30) NOT NULL,
TIME TIMESTAMP(3),
PRIMARY KEY(EMP_ID)
);
I have the below statement in a procedure, and it is failing at this INSERT statement even though there is a NOT EXISTS check before inserting:
INSERT INTO EMPLOYEE
(EMP_ID, NAME, TIME)
( SELECT v_empid ,
sys_context('USERENV','SID'),
SYSTIMESTAMP+INTERVAL '10' SECOND
FROM DUAL
WHERE NOT EXISTS ( SELECT 1
FROM EMPLOYEE
WHERE EMP_ID = v_empid) );
The above procedure is getting invoked from multiple services at almost the same time. Is there some issue with the parallel transactions, like multiple sessions trying to insert into the same table at once?
Any help is appreciated; thanks in advance.
To me it looks like two sessions are trying to insert the same user at the same time.
Here is how I reproduced it:
Step 1. Session A:
INSERT INTO test_EMPLOYEE(EMP_ID, NAME, TIME)
(SELECT 123, sys_context('USERENV','SID'), SYSTIMESTAMP+INTERVAL '10' SECOND
FROM DUAL
WHERE NOT EXISTS ( SELECT 1 FROM test_EMPLOYEE WHERE EMP_ID = 123));
Step 2. Session B:
-- same query as Session A
INSERT INTO test_EMPLOYEE(EMP_ID, NAME, TIME)
(SELECT 123, sys_context('USERENV','SID'), SYSTIMESTAMP+INTERVAL '10' SECOND
FROM DUAL
WHERE NOT EXISTS ( SELECT 1 FROM test_EMPLOYEE WHERE EMP_ID = 123));
Step 3. Session A:
commit;
Voila: session B fails with a constraint violation.
So I believe it's users or something outside of the database causing the error.
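A common way to make the insert safe under this race (a hedged sketch, not part of the original answer; it assumes the EMPLOYEE table and v_empid variable from the question) is to attempt the insert and catch the duplicate-key error instead of checking NOT EXISTS first:
-- Race-safe insert: let the unique constraint arbitrate.
BEGIN
  INSERT INTO EMPLOYEE (EMP_ID, NAME, TIME)
  VALUES (v_empid, sys_context('USERENV','SID'), SYSTIMESTAMP + INTERVAL '10' SECOND);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    NULL; -- another session inserted the same EMP_ID first; ignore it
END;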

How to CREATE TABLE AS adding an identity?

This way I can create a table from another table, adding a number column:
drop table A_TEST
/
CREATE TABLE A_TEST AS
SELECT CAST( null as NUMBER ) as ROW_ID,
C_CODE,B_CODE
FROM A
However, I want to add the column as an identity. How do I do that? I tried the below, but it throws an error:
CREATE TABLE A_TEST AS
SELECT CAST( null as NUMBER GENERATED BY DEFAULT AS IDENTITY ) as ROW_ID,
C_CODE,B_CODE
FROM A
You cannot create a table with an IDENTITY column using CTAS.
But you can simply create the table without the identity column using CTAS and then ALTER the table to add the IDENTITY column, as follows:
CREATE TABLE A_TEST
AS
SELECT
C_CODE,
B_CODE
FROM
A;
ALTER TABLE A_TEST ADD ROW_ID NUMBER
GENERATED BY DEFAULT AS IDENTITY;
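As a quick check (a sketch; the literal values are made up), the rows copied by the CTAS receive generated identity values when the column is added, and new inserts continue the sequence:
-- New rows pick up the next identity value automatically:
INSERT INTO A_TEST (C_CODE, B_CODE) VALUES ('C1', 'B1');
SELECT ROW_ID, C_CODE, B_CODE FROM A_TEST;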
Cheers!!
Do it in multiple steps: create the table from the other table without data; then alter the table to add the identity column; and finally insert the data.
Oracle Setup:
CREATE TABLE A ( A_CODE, B_CODE, C_CODE ) AS
SELECT 999, 'BBB', SYSDATE FROM DUAL UNION ALL
SELECT 0, NULL, DATE '1970-01-01' FROM DUAL;
Create Table:
Create the table without the IDENTITY column and with no rows:
CREATE TABLE A_TEST AS
SELECT C_CODE, B_CODE
FROM A
WHERE 1 = 0;
Then alter the table to add the IDENTITY column:
ALTER TABLE A_TEST ADD (
ROW_ID NUMBER
GENERATED ALWAYS AS IDENTITY
CONSTRAINT A_TEST__ROW_ID__PK PRIMARY KEY
);
Then insert the rows:
INSERT INTO A_TEST ( C_CODE, B_CODE )
SELECT C_CODE, B_CODE FROM A;
(Or you can create the table and insert the rows in the first step, then alter the table to add the identity column without a NOT NULL/PRIMARY KEY constraint in the second step; if you want a NOT NULL/PRIMARY KEY constraint afterwards, it must be added in a separate, subsequent ALTER TABLE statement; see the sketch after the output below. db<>fiddle)
Output:
SELECT * FROM A_TEST;
C_CODE              | B_CODE | ROW_ID
------------------- | ------ | ------
2019-12-19 09:06:27 | BBB    |      1
1970-01-01 00:00:00 | null   |      2
db<>fiddle here
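A sketch of that alternative order (same table names as above; verify the behavior on your version):
-- Step 1: create the table and copy the rows in one go.
CREATE TABLE A_TEST AS
SELECT C_CODE, B_CODE FROM A;

-- Step 2: add the identity column; existing rows receive generated values.
ALTER TABLE A_TEST ADD (
  ROW_ID NUMBER GENERATED ALWAYS AS IDENTITY
);

-- Step 3: add the PRIMARY KEY constraint in a separate statement.
ALTER TABLE A_TEST ADD CONSTRAINT A_TEST__ROW_ID__PK PRIMARY KEY (ROW_ID);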

Insert data listing columns with partitioning field in Hive

First of all, let's set up a test environment:
CREATE TABLE IF NOT EXISTS source_table (
`col1` TIMESTAMP,
`col2` STRING
);
CREATE TABLE IF NOT EXISTS dest_table (
`col1` TIMESTAMP,
`col2` STRING,
`col3` STRING
)
PARTITIONED BY (day STRING)
STORED AS AVRO;
INSERT INTO TABLE source_table VALUES ('2018-03-21 17:08:04.401', 'test1'), ('2018-03-22 12:02:04.222', 'test2'), ('2018-03-22 07:21:04.111', 'test3');
How could I list the column names during insertion and put the partition value dynamically? The following command doesn't work:
INSERT INTO TABLE dest_table(col1, col2) PARTITION(day) SELECT col1, col2, date_format(col1, 'yyyy-MM-dd') FROM source_table;
By the way, without listing the columns of dest_table inside the INSERT INTO command, and with two tables having the same number of columns, everything works fine. What if my dest_table has more fields than source_table?
Thank you for helping me.
P.S.
OK, if I hardcode NULL, this works. I leave the question open because there might be better ways to achieve this.
INSERT INTO TABLE dest_table PARTITION(day) SELECT col1, col2, NULL, date_format(col1, 'yyyy-MM-dd') FROM source_table;
Anyway, is this method strictly bound to column order? In a real-life scenario, how could I handle lots of columns by specifying a mapping, to avoid mistakes?
The syntax for inserting into a partitioned table when you want to list the specific columns is shown below. You don't need to insert NULL into col3: Hive will use a default value of NULL because col3 is not in the column list of the insert.
INSERT INTO TABLE dest_table PARTITION (day)(col1, col2, day)
SELECT col1, col2, date_format(col1, 'yyyy-MM-dd') FROM source_table;
Result:
col1                    | col2  | col3 | day
2018-03-22 12:02:04.222 | test2 | NULL | 2018-03-22
2018-03-22 07:21:04.111 | test3 | NULL | 2018-03-22
2018-03-21 17:08:04.401 | test1 | NULL | 2018-03-21
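One caveat (an assumption about cluster defaults; verify for your environment): dynamic partition inserts like the one above usually require these session settings:
-- Enable dynamic partitioning; nonstrict lets every partition value
-- be determined at runtime rather than requiring a static one.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;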

Changing the data type of a column in Oracle

I created the following table:
CREATE TABLE PLACE(
POSTCODE VARCHAR(10) PRIMARY KEY,
STREET_NAME VARCHAR(10),
COUNTY VARCHAR(10),
CITY VARCHAR(10));
I want to change street_name, county and city from VARCHAR(10) to VARCHAR(20). How do I do that?
ALTER TABLE place
MODIFY( street_name VARCHAR2(20),
county VARCHAR2(20),
city VARCHAR2(20) )
Note that I am also changing the data type from VARCHAR to VARCHAR2 to be more conventional. There is no functional difference at present between the two though the behavior of VARCHAR may change in the future to match the SQL standard.
If you want to change only the type of a column, use:
ALTER TABLE <table_name> MODIFY (<column_name> <new_type>)
In your case:
ALTER TABLE place MODIFY (street_name VARCHAR2(20),
county VARCHAR2(20),
city VARCHAR2(20))
If your table already has data, you can do it in four steps (sketched after the rename syntax below):
1. Add a column with the new type to the table.
2. Copy the data from the old column to the new column.
3. Drop the old column.
4. Rename the new column to the old column's name.
To rename a column, use:
ALTER TABLE <table_name> rename column <column_name> to <new_column_name>
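A sketch of those four steps applied to this question's table (the temporary column name is made up):
-- 1. Add a column with the new type.
ALTER TABLE place ADD (street_name_new VARCHAR2(20));
-- 2. Copy the data across.
UPDATE place SET street_name_new = street_name;
-- 3. Drop the old column.
ALTER TABLE place DROP COLUMN street_name;
-- 4. Rename the new column to the old name.
ALTER TABLE place RENAME COLUMN street_name_new TO street_name;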
Oracle 10G and later
ALTER TABLE table_name
MODIFY column_name datatype;
Here is a very general example of doing the same:
Table:
CREATE TABLE TABLE_NAME(
ID NUMBER PRIMARY KEY,
COLUMN_NAME NUMBER NOT NULL, -- to be modified to varchar2(20) NOT NULL
.
.
.
);
Steps to modify the data type of COLUMN_NAME from NUMBER to VARCHAR2:
STEPS:
--Step 1: Add a temp column COLUMN_NAME_TEMP to TABLE_NAME to hold the data temporarily
ALTER TABLE TABLE_NAME
ADD( COLUMN_NAME_TEMP varchar2(20) );
--Step 2: Copy the old column COLUMN_NAME's data into the temp column COLUMN_NAME_TEMP
UPDATE TABLE_NAME
SET COLUMN_NAME_TEMP = COLUMN_NAME;
--Step 3: Remove the NOT NULL constraint from the old column COLUMN_NAME
ALTER TABLE TABLE_NAME MODIFY (COLUMN_NAME NULL);
--Step 4: Empty the old column COLUMN_NAME (it must contain no data before its type can change)
UPDATE TABLE_NAME SET COLUMN_NAME = NULL;
--Step 5: Alter the old column COLUMN_NAME to the new data type varchar2(20)
ALTER TABLE TABLE_NAME MODIFY COLUMN_NAME varchar2(20);
--Step 6: Copy the data back from the temp column COLUMN_NAME_TEMP into COLUMN_NAME
UPDATE TABLE_NAME
SET COLUMN_NAME = COLUMN_NAME_TEMP;
--Step 7: Re-add the NOT NULL constraint on the old column COLUMN_NAME
ALTER TABLE TABLE_NAME MODIFY (COLUMN_NAME NOT NULL);
--Step 8: Drop the temp column COLUMN_NAME_TEMP
ALTER TABLE TABLE_NAME DROP COLUMN COLUMN_NAME_TEMP;
If the NOT NULL constraint does not exist, omit steps 3 and 7.
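For context on why the temp-column dance is needed (a hedged note; verify on your version): a direct MODIFY across incompatible types fails while the column still holds data:
-- Fails with ORA-01439 (column to be modified must be empty to change datatype)
-- while COLUMN_NAME still contains data:
ALTER TABLE TABLE_NAME MODIFY COLUMN_NAME varchar2(20);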
ALTER TABLE place MODIFY (street_name VARCHAR2(20), city VARCHAR2(20));
You can't change a column to an incompatible data type while it still contains data; in that case you have to empty the column first (see the temp-column steps above). But for the simple widening asked about here, this works even with records present:
alter table place
modify ( street_name varchar2(20), county varchar2(20), city varchar2(20) );
It will definitely work!
