Alter column data type in production database - Oracle

I'm looking for the best way to change the data type of a column in a populated table. Oracle only allows changing the data type of a column whose values are all NULL.
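For reference, a direct ALTER on the populated column fails with ORA-01439 (a minimal illustration, using the table defined below):
ALTER TABLE my_table MODIFY my_value NUMBER;
-- ORA-01439: column to be modified must be empty to change datatype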
My solution so far is a PL/SQL block that stores the data of the column to be modified in a collection, alters the table, and then iterates over the collection, restoring the original data with the data type converted.
-- Before: my_table ( id NUMBER, my_value VARCHAR2(255))
-- After: my_table (id NUMBER, my_value NUMBER)
DECLARE
  TYPE record_type IS RECORD (id NUMBER, my_value VARCHAR2(255));
  TYPE nested_type IS TABLE OF record_type;
  foo nested_type;
BEGIN
  SELECT id, my_value BULK COLLECT INTO foo FROM my_table;
  UPDATE my_table SET my_value = NULL;
  EXECUTE IMMEDIATE 'ALTER TABLE my_table MODIFY my_value NUMBER';
  FOR i IN foo.FIRST .. foo.LAST
  LOOP
    UPDATE my_table
       SET my_value = TO_NUMBER(foo(i).my_value)
     WHERE my_table.id = foo(i).id;
  END LOOP;
END;
/
I'm looking for a more robust way to do this.

This solution is wrong: the ALTER TABLE statement does an implicit commit, so it has the following problems:
You cannot roll back after the ALTER TABLE statement, and if the database crashes right after it, you will lose data
Between the SELECT and the UPDATE, users can make changes to the data
Instead, you should have a look at Oracle online redefinition (the DBMS_REDEFINITION package).
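For the table in the question, a minimal sketch of online redefinition might look like the following (assuming the schema is MY_SCHEMA and the table has a primary key on id; the interim table name is made up, and a real run would also call COPY_TABLE_DEPENDENTS to carry over indexes, grants, and so on):
-- interim table with the target data type
CREATE TABLE my_table_interim (id NUMBER, my_value NUMBER);
BEGIN
  -- verify the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', DBMS_REDEFINITION.CONS_USE_PK);
  -- start redefinition; the column mapping performs the conversion
  DBMS_REDEFINITION.START_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_INTERIM',
                                      'id id, TO_NUMBER(my_value) my_value');
  -- swap the tables; my_table now has my_value as NUMBER
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_INTERIM');
END;
/
DROP TABLE my_table_interim;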

Your solution looks a bit dangerous to me. Loading the values into a collection and subsequently deleting them from the table means that these values are now only available in memory. If something goes wrong, they are lost.
The proper procedure is (see the SQL sketch after this list):
Add a column of the correct type to the table.
Copy the values to the new column.
Drop the old column.
Rename the new column to the old column's name.
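In SQL, that sequence looks roughly like this (the temporary column name my_value_new is made up):
ALTER TABLE my_table ADD (my_value_new NUMBER);
UPDATE my_table SET my_value_new = TO_NUMBER(my_value);
ALTER TABLE my_table DROP COLUMN my_value;
ALTER TABLE my_table RENAME COLUMN my_value_new TO my_value;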

Related

Inserting values into newly created table from a pre-existing table using a cursor and for loop - Snowflake SQL (classic web interface)

I'm trying to insert values into a new table in the classic Snowflake SQL web interface using data from a table that was already created, a cursor, and a for loop. My goal is to insert new information and information from the original table into the new table, but when I try to run my code, there is an error where I refer to a column of my original table (see the code below).
-- Creation and inserting values into table invoice_original
create temporary table invoice_original (id integer, price number(12,2));
insert into invoice_original (id, price) values
(1, 11.11),
(2, 22.22);
-- Creates final empty table invoice_final
create temporary table invoice_final (
study_number varchar,
price varchar,
price_type varchar);
execute immediate $$
declare
c1 cursor for select price from invoice_original;
begin
for record in c1 do
insert into invoice_final(study_number, price, price_type)
values('1', record.price, 'Dollars');
end for;
end;
$$;
My end goal is to have the resulting table invoice_final with 3 columns - study_number, price, and price_type where the price value comes from the invoice_original table. The error I'm currently getting is:
Uncaught exception of type 'STATEMENT_ERROR' on line 6 at position 8 : SQL compilation error: error line 2 at position 20 invalid identifier 'RECORD.PRICE'.
Does anyone know why the record.price is not capturing the price value from the invoice_original table?
There are a number of types of dynamic SQL that do not handle the cursor record name, and thus give this error. If you push the value into a single named temp variable, it will work:
for record in c1 do
let temp_price number := record.price;
insert into invoice_final(study_number, price, price_type)
values('1', temp_price, 'Dollars');
end for;
This SQL has not been run and could be in the wrong format, but that is the base issue.
Also, this really looks like a plain INSERT would work, but I assume that is just the nature of simplifying the question down.
See the following for details on working with variables:
https://docs.snowflake.com/en/developer-guide/snowflake-scripting/variables.html#working-with-variables
The revised code below functions as desired:
-- Creation and inserting values into table invoice_original
create
or replace temporary table invoice_original (id integer, price number(12, 2));
insert into
invoice_original (id, price)
values
(1, 11.11),
(2, 22.22);
-- Creates final empty table invoice_final
create
or replace temporary table invoice_final (
study_number varchar,
price number(12, 2),
price_type varchar
);
execute immediate $$
declare
new_price number(12,2);
c1 cursor for select price from invoice_original;
begin
for record in c1 do
new_price := record.price;
insert into invoice_final(study_number, price, price_type) values('1',:new_price, 'Dollars');
end for;
end;
$$;
Note that I changed the target table definition for price to NUMBER (12,2) instead of VARCHAR, and assigned the record.price to a local variable that was passed to the insert statement as :new_price.
That all said ... I would strongly recommend against this approach for loading tables, for performance reasons. You can replace all of this with an INSERT ... SELECT (see the sketch after the link below).
Always opt for set based processing over cursor / loop / row based processing with Snowflake.
https://docs.snowflake.com/en/sql-reference/sql/insert.html
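For this example, the whole scripting block collapses to one set-based statement (a sketch using the tables from the question):
insert into invoice_final (study_number, price, price_type)
select '1', price, 'Dollars' from invoice_original;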

PL/SQL reusable dynamic sql program for same type of task but different table and column

Thank you for the replies, guys. I kind of solved my problem.
I had been trying to update data with a ref cursor in dynamic SQL using WHERE CURRENT OF, but I now know that won't work.
Then I tried to use %ROWTYPE to store both 'id' and 'clob' in one variable for the later update, but it turns out a weak ref cursor can't use that type binding either.
After that I tried to use a record as the return type of a ref cursor, and that doesn't work with a weak cursor either.
In the end, I created another cursor to retrieve the 'id' separately, alongside the cursor retrieving the 'clob', and then updated the table by that id.
I'm now working on an Oracle data-cleaning task and have a requirement like the one below:
There are 38 tables (maybe more in the future), and every table has one or more columns of type CLOB. I need to find different keywords in those columns and, according to some logic, return a binary label for the column and store it in a new column.
For example, there is a table 'myTable1' which has two CLOB columns, 'clob1' and 'clob2'. I'd like to find the keyword 'sky' in those columns and store '0' (if not found) or '1' (if found) in two new columns, 'clob1Sky' and 'clob2Sky'.
I know that writing this statically would be more efficient, but then I would have to modify it for each of these very similar tasks. I want to save some time, so I'm trying to write it in a reusable way, not bound to a certain table.
But I ran into some problems while writing the program. My program looks like this:
create or replace PACKAGE BODY LABELTARGETKEYWORD
as
/**
#param varchar2 tableName: the name of the table I want to work on
#param varchar2 colName: the name of the CLOB column
#param varchar2 targetWord: the word I want to find in the column
#param varchar2 newColName: the name of the new column which stores the label of the CLOB
*/
PROCEDURE mainProc(tableName varchar2, colName varchar2, targetWord varchar2, newColName varchar2)
as
  type c_RecordCur is ref cursor;
  c_sRecordCur c_RecordCur;
  /* other variables */
begin
  /* (1) check whether a column named newColName exists
     (2) if not, ALTER TABLE to add the newColName column
     (3) open the cursor for retrieving the CLOB
     (4) loop over the cursor
     (5) UPDATE ... SET the value in newColName according to what labelWord returns
     (6) close the cursor and commit */
end mainProc;
function labelWord(sRecord VARCHAR2, targetWord varchar2) return boolean...
function ifColExist(tableName varchar2, newColName varchar2) return boolean...
END LABELTARGETKEYWORD;
Most of the DML and DDL is written as dynamic SQL.
The problem is that when I wrote part (5), I noticed the WHERE CURRENT OF clause cannot be used with a ref cursor or in a dynamic SQL statement, so I had to change the plan.
I tried to use a record (rowid, label) to store the results and update the table later (the table is only used by two people in my group, so there won't be problems with locks or concurrent data changes). But I found that, because I'm using dynamic SQL, I actually have to define the ref cursor with a return type of a certain %ROWTYPE, and basically all the other variables and %TYPEs inside the dynamic SQL statement, which makes me feel my method has something wrong with it.
My questions are:
Is there a way to define a %TYPE in dynamic SQL, i.e. to bind a type to a variable in a dynamic SQL statement?
Could anybody give me a hint on how to write part (5) in dynamic SQL?
Should I not design my program like that?
Is this not the way to use dynamic SQL or PL/SQL?
I'm very new to PL/SQL. Thank you very much.
Following Tom Kyte's advice to do it in one statement if it can be done in one statement, I'd try a single UPDATE statement first:
CREATE TABLE mytable1 (id NUMBER, clob1 CLOB,
clob2 CLOB, clob1sky NUMBER, clob2sky NUMBER )
LOB(clob1, clob2) STORE AS SECUREFILE (ENABLE STORAGE IN ROW);
INSERT INTO mytable1(id, clob1, clob2)
SELECT object_id, object_name, object_type FROM all_objects
WHERE rownum <= 10000;
CREATE OR REPLACE
PROCEDURE mainProc(tableName VARCHAR2, colName VARCHAR2, targetWord VARCHAR2, newColName VARCHAR2)
IS
stmt VARCHAR2(30000);
BEGIN
stmt := 'UPDATE '||tableName||' SET '||newColName||'=1 '||
'WHERE DBMS_LOB.INSTR('||colName||','''||targetWord||''')>0';
dbms_output.put_line(stmt);
EXECUTE IMMEDIATE stmt;
END mainProc;
/
So, calling it with mainProc('MYTABLE1', 'CLOB1', 'TAB', 'CLOB1SKY'); fires the statement
UPDATE MYTABLE1 SET CLOB1SKY=1 WHERE DBMS_LOB.INSTR(CLOB1,'TAB')>0
which seems to do the trick:
SELECT * FROM mytable1 WHERE clob1sky=1;
id clob1 clob2 clob1sky clob2sky
33 I_TAB1 INDEX 1
88 NTAB$ TABLE 1
89 I_NTAB1 INDEX 1
90 I_NTAB2 INDEX 1
...
I am not sure about your question.
If this job is supposed to run on a daily or hourly basis, running queries through it will be very costly. One thing you can do: put all your CLOB data in a file and save it on your server (I guess it must be Linux). Then you can create a shell script and schedule a job that runs a grep command, fetches your required value and, if found, updates your table.
I think you should approach the problem another way:
1. Find all the columns that you need:
CURSOR k_clobs IS
select table_name, column_name from dba_tab_cols where data_type in ('CLOB','NCLOB');
Or two cursors (you can extend the query if you have more than one CLOB per table):
CURSOR k_clobs_table IS
select DISTINCT table_name from dba_tab_cols where data_type in ('CLOB','NCLOB');
CURSOR k_clobs_columns(table_namee varchar2) is
select column_name from dba_tab_cols where data_type in ('CLOB','NCLOB') and table_name = table_namee;
Now you are 100% sure that the column you are checking is a CLOB, so you don't have to worry about the data type ;)
I'm not sure what you want to achieve, but I hope it may help you.
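Putting the two ideas together, a driver block might look like the following (a sketch only; it assumes the mainProc from the question, and the convention of appending 'SKY' to derive the new column name is just an example):
BEGIN
  FOR c IN (SELECT table_name, column_name
              FROM user_tab_cols
             WHERE data_type IN ('CLOB', 'NCLOB'))
  LOOP
    -- e.g. CLOB1 on MYTABLE1 gets its label stored in CLOB1SKY
    LABELTARGETKEYWORD.mainProc(c.table_name, c.column_name, 'sky', c.column_name || 'SKY');
  END LOOP;
END;
/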

Insert in Merge not working in Oracle

I am new to Oracle. I have a table in Oracle which has four columns: Period, Open_Flag, Creation_Dt, Updated_By.
The Period column is the primary key of the table. I have created a proc which checks the value of the period input parameter against the table: if it exists, the value of Open_Flag is updated; otherwise, a new record is inserted.
create or replace
PROCEDURE PROC_REF_SAP_PERIOD(
V_PERIOD IN NUMBER,V_OPEN_FLAG IN VARCHAR2,V_CREATION_DT IN DATE,V_UPDATED_BY IN VARCHAR2)
AS
BEGIN
MERGE INTO REF_SAP_PERIOD T
USING (SELECT * FROM REF_SAP_PERIOD WHERE PERIOD=V_PERIOD )S
ON (T.PERIOD=S.PERIOD )
WHEN MATCHED THEN UPDATE SET OPEN_FLAG = V_OPEN_FLAG --WHERE PERIOD=V_PERIOD AND CREATION_DT=V_CREATION_DT AND UPDATED_BY=V_UPDATED_BY
WHEN NOT MATCHED THEN INSERT (PERIOD,OPEN_FLAG,CREATION_DT,UPDATED_BY) VALUES (V_PERIOD,V_OPEN_FLAG,V_CREATION_DT,V_UPDATED_BY);
END;
The issue is that the UPDATE works well in this case; however, the INSERT does not. Please help.
You are merging the table with itself, filtered by period. Obviously, it will never find your non-existent values in itself.
Try this line instead of your USING line (note the unquoted column alias, so that S.PERIOD resolves in the ON clause):
using (select V_PERIOD period from dual) S
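With that change, the whole procedure becomes (a sketch of the corrected body):
create or replace
PROCEDURE PROC_REF_SAP_PERIOD(
V_PERIOD IN NUMBER,V_OPEN_FLAG IN VARCHAR2,V_CREATION_DT IN DATE,V_UPDATED_BY IN VARCHAR2)
AS
BEGIN
MERGE INTO REF_SAP_PERIOD T
USING (SELECT V_PERIOD PERIOD FROM DUAL) S
ON (T.PERIOD=S.PERIOD)
WHEN MATCHED THEN UPDATE SET OPEN_FLAG = V_OPEN_FLAG
WHEN NOT MATCHED THEN INSERT (PERIOD,OPEN_FLAG,CREATION_DT,UPDATED_BY) VALUES (V_PERIOD,V_OPEN_FLAG,V_CREATION_DT,V_UPDATED_BY);
END;
/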

ORA-02437: "primary key violated" - why can't I see duplicate ID in SQL Developer?

I would receive an error:
ORA-02437: cannot validate (%s.%s) - primary key violated
Cause: attempted to validate a primary key with duplicate values or null values
I found it was because I have a stored procedure that increments the ID, but it had failed to do so when it re-ran and hit an error related to one of my data types. I found I now had a duplicate ID in my database table. All this made sense, and I was able to rectify it easily with a DELETE FROM MyTable WHERE ID = x, where x was the offending duplicate ID. The problem is that the only way I was able to find the duplicated IDs in the first place was by running SELECT * FROM MyTable WHERE ID = x, where x was one greater than the last ID I could actually see; I found it just by an educated guess. So:
Why can't I see these duplicate IDs when I open the table in Oracle SQL Developer? It only shows the last row before the duplicates. I don't think it is because of my primary key constraint, since the first line in my stored procedure removes it (and puts it back at the end - probably where I got my error), and it was not present when I looked at my table.
Is there some way to make these last inserted IDs visible, so that in the future I wouldn't have to guess or assume that the duplicate IDs are "hiding" as one greater than the last ID in my table? There is a COMMIT in my stored procedure, so they should have appeared -- unless, of course, the procedure got hung up before it could run that line of code (highly probable).
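As an aside, a plain grouping query surfaces duplicate IDs directly, with no guessing (a standard pattern, shown here against the question's table):
SELECT ID, COUNT(*)
FROM MyTable
GROUP BY ID
HAVING COUNT(*) > 1;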
Stored procedure that runs:
create or replace
PROCEDURE PRC_MYTABLE_INTAKE(
EMPLOYEE_ID IN NVARCHAR2
, TITLE_POSITION IN NVARCHAR2
, CREATED_DATE IN DATE
, LAST_MODIFIED IN DATE
) AS
myid integer := 0;
appid integer := 0;
BEGIN
-- disable PK constraint so it can be updated
EXECUTE IMMEDIATE 'ALTER TABLE MYTABLE DROP CONSTRAINT MYTABLE_PK';
COMMIT;
-- assign ID to myid
SELECT ID INTO myid FROM MYTABLE WHERE ROWID IN (SELECT MAX(ROWID) FROM MYTABLE);
-- increment
myid := myid + 1;
-- assign APPLICATION_ID to appid
SELECT APPLICATION_ID INTO appid FROM MYTABLE WHERE ROWID IN (SELECT MAX(ROWID) FROM MYTABLE);
-- increment
appid := appid + 1;
-- use these ids to insert with
INSERT INTO MYTABLE (ID, APPLICATION_ID
, EMPLOYEE_ID
, TITLE_POSITION
, CREATED_DATE
, LAST_MODIFIED
) VALUES(myid, appid
, EMPLOYEE_ID
, TITLE_POSITION
, CREATED_DATE
, LAST_MODIFIED
);
COMMIT;
-- re-enable the PK constraint
EXECUTE IMMEDIATE 'ALTER TABLE MYTABLE ADD CONSTRAINT MYTABLE_PK PRIMARY KEY (ID)';
COMMIT;
END;
Here's one problem:
SELECT ID
INTO myid
FROM MYTABLE
WHERE ROWID IN (SELECT MAX(ROWID) FROM MYTABLE)
There is no correlation between ID and ROWID, so you're not getting the maximum current ID; you're just getting the ID that happens to be on the row whose ROWID sorts highest - i.e. the one furthest from the start of the highest-numbered datafile.
The code you need is:
SELECT COALESCE(MAX(ID),0)
FROM MYTABLE;
Or better yet, just use a sequence (see the sketch at the end of this answer).
No idea why you're dropping the PK either.
Furthermore, when you issue the query:
SELECT APPLICATION_ID INTO appid ...
... that could be for a different row than the one you already got the id for, because a change could have been committed to the table.
Of course another issue is that you can't run two instances of this procedure at the same time either.
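For completeness, the sequence approach might look like this (a sketch; the sequence names are made up, and it removes both the MAX(ROWID) lookups and the constraint juggling):
CREATE SEQUENCE mytable_id_seq;
CREATE SEQUENCE mytable_app_id_seq;
-- inside the procedure, the insert then becomes:
INSERT INTO MYTABLE (ID, APPLICATION_ID, EMPLOYEE_ID, TITLE_POSITION, CREATED_DATE, LAST_MODIFIED)
VALUES (mytable_id_seq.NEXTVAL, mytable_app_id_seq.NEXTVAL, EMPLOYEE_ID, TITLE_POSITION, CREATED_DATE, LAST_MODIFIED);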
For David Aldridge, since he wants to look at code instead of the real reason I posted my question, run this:
CREATE TABLE YOURSCHEMA.TESTING
(
TEST_ID NVARCHAR2(100) NOT NULL
, TEST_TYPE NVARCHAR2(100) NOT NULL
, CONSTRAINT TEST_PK PRIMARY KEY
(
TEST_ID
)
ENABLE
);
create or replace
PROCEDURE PRC_TESTING_INSERT(
TEST_TYPE IN NVARCHAR2
) AS
testid integer := 0;
BEGIN
-- disable PK constraint so it can be updated
EXECUTE IMMEDIATE 'ALTER TABLE TESTING DROP CONSTRAINT TEST_PK';
COMMIT;
-- assign TEST_ID to testid
SELECT TEST_ID INTO testid FROM TESTING WHERE ROWID IN (SELECT MAX(ROWID) FROM TESTING);
-- increment
testid := testid + 1;
-- use this id to insert with
INSERT INTO TESTING (TEST_ID, TEST_TYPE) VALUES(testid, TEST_TYPE);
COMMIT;
-- re-enable the PK constraint
EXECUTE IMMEDIATE 'ALTER TABLE TESTING ADD CONSTRAINT TEST_PK PRIMARY KEY (TEST_ID)';
COMMIT;
END;
SET serveroutput on;
DECLARE
test_type varchar(100);
BEGIN
test_type := 'dude';
YOURSCHEMA.PRC_TESTING_INSERT(test_type);
-- to verify the variable got set and procedure ran, could do:
--dbms_output.enable;
--dbms_output.put_line(test_type);
END;
Now, because there is no data in the table, the stored procedure will fail with ORA-01403: no data found. If you then try to run it again, you will get ORA-02443: cannot drop constraint - nonexistent constraint, because the EXECUTE IMMEDIATE 'ALTER TABLE TESTING DROP CONSTRAINT TEST_PK'; successfully dropped it, and the procedure never reached the command at the end that re-adds it. This is what made me think I needed the commits, but even without them, it still will not complete the whole procedure.
To prove that the procedure DOES run if given proper data, run this after creating the table, but before creating/running the stored procedure:
INSERT INTO TESTING (TEST_ID, TEST_TYPE)
VALUES ('1', 'hi');
And if you run the proc against a fresh table (not one whose constraint has been dropped), it will run fine.
Since mathguy didn't post this as the answer, I'll write it up myself, though I'll credit him for the information...
The answer to why I can't see the duplicates is that the COMMIT does not occur in the procedure when it fails due to a datatype mismatch (which we found was actually in the application code that sent the variables' values into this procedure, not in the stored procedure itself). (It's also why I'll mark down anyone who says you don't have to add so many COMMIT lines in this procedure.) The commands were run in the session of the user that starts it - in my case, another session of the same DB user I was logged in with, but started from my application instead of my SQL Developer session. It also explains why I could do a COMMIT myself and yet it did not affect the application's session - I could not commit actions run from another session. Had I run a COMMIT as an OracleCommand and done an .ExecuteNonQuery on my OracleConnection right after the failure, within the catch of my application, I would have seen the rows in SQL Developer without having to do a special query.
So, in short, the only way to see the rows was a direct query using WHERE ID =, taking the last visible ID, incrementing it, and putting that in the query.

How to duplicate all data in a table except for a single column that should be changed

I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works), is to iterate over all the tables in a stored procedure and do a:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
create global temporary table tb_customers$ as select * from tb_customers;
create global temporary table tb_suppliers$ as select * from tb_suppliers;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction...
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables, and just using an INSERT per table:
insert into tb_customers_2
select id, name, :new_archive_id from tb_customers;
insert into tb_suppliers_2
select id, name, contact, xxx, xxx, :new_archive_id from tb_suppliers;
Or, if you really need a single SQL statement for all of them, at least pre-create all the temp tables (as temp tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table (a procedure wrapping this is sketched after the snippet):
insert into ORIGINAL_TABLE_TEMP (SELECT * from ORIGINAL_TABLE);
UPDATE ORIGINAL_TABLE_TEMP SET archive_id=something;
INSERT INTO NEW_TABLE (select * from ORIGINAL_TABLE_TEMP);
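Wrapped in a procedure, that dynamic-SQL variant might look like this (a sketch; the procedure name and the $-suffix convention for the pre-created temp tables are assumptions):
CREATE OR REPLACE PROCEDURE archive_table(p_table VARCHAR2, p_new_archive_id NUMBER) AS
BEGIN
  EXECUTE IMMEDIATE 'INSERT INTO ' || p_table || '$ SELECT * FROM ' || p_table;
  EXECUTE IMMEDIATE 'UPDATE ' || p_table || '$ SET archive_id = :new_id' USING p_new_archive_id;
  EXECUTE IMMEDIATE 'INSERT INTO ' || p_table || ' SELECT * FROM ' || p_table || '$';
END;
/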
