ESU_1 is the source table:
create table ESU_1
(
emp_id NUMBER(10),
emp_name VARCHAR2(100)
);
I created a table ESU_2 from ESU_1:
create table ESU_2
as
select * from ESU_1 t
where t.emp_id>20;
When I used the below query to get the table definition:
select dbms_metadata.get_ddl('TABLE', 'ESU_2','SNAPREP') from dual;
I got this output:
CREATE TABLE ESU_2
( EMP_ID NUMBER(10),
EMP_NAME VARCHAR2(100)
);
But I want the exact statement that created the table, that is:
create table ESU_2
as
select * from ESU_1 t
where t.emp_id>20;
How can I get this?
When you run
create table ESU_2
as
select * from ESU_1 t
where t.emp_id>20;
internally it will check the definition of ESU_1 and create a table with the same structure:
create table ESU_2
(
emp_id NUMBER(10),
emp_name VARCHAR2(100)
);
Then it will insert all the matching rows into the table:
insert into ESU_2 select * from ESU_1 t where t.emp_id>20;
and perform a commit, since DDL statements are implicitly committed:
commit;
That is why the DDL you get back describes the table's structure only, not the query that originally populated it.
Why is the original DDL not saved? Because it would not make sense. If you later change the contents, or even the structure, of ESU_1, ESU_2 is not automatically updated, so the initial query could no longer recreate the table as it is now.
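You can see this for yourself (a small demonstration; DESCRIBE is SQL*Plus syntax):
-- change the source table after the copy was made
alter table ESU_1 add (dept_id number(5));
-- ESU_2 is unaffected: it still has only EMP_ID and EMP_NAME
describe ESU_2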
However, it makes total sense to store the DDL for views and materialized views (because a materialized view is in fact a combination of a view and a table); there it is stored, and you can always retrieve it.
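For example, the defining query of a materialized view does come back when you retrieve its DDL (a sketch; the name ESU_MV is made up here):
create materialized view ESU_MV as
select * from ESU_1 t
where t.emp_id > 20;
-- the generated DDL includes the full AS SELECT ... query
select dbms_metadata.get_ddl('MATERIALIZED_VIEW', 'ESU_MV', 'SNAPREP') from dual;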
I am trying to find out if there is a way to bulk collect into a multi-level type in Oracle. The example below should help in explaining the concept of what I am trying to do.
There is a source table with a denormalised list of counties and towns:
create table county_town (county varchar2(20), town varchar2(20));
insert into county_town values ('Surrey', 'Dorking');
insert into county_town values ('Surrey', 'Woking');
insert into county_town values ('Surrey', 'Guildford');
insert into county_town values ('Oxfordshire', 'Thame');
insert into county_town values ('Oxfordshire', 'Abingdon');
What I want to do is load this into a multilevel type that looks like this:
create type towns_typ as table of varchar2(20);
create type counties_typ as object (county varchar2(20), towns towns_typ);
create type nt_counties_typ as table of counties_typ;
l_county_data nt_counties_typ;
Is there some way that I can write a SELECT statement to BULK COLLECT this data into l_county_data from the table county_town? If BULK COLLECT can't be used, is there another way to do this simply?
Yes, like here:
declare
  l_county_data nt_counties_typ;
begin
  -- COLLECT aggregates each county's towns; CAST converts the result to towns_typ
  select counties_typ(county, cast(collect(town) as towns_typ))
    bulk collect into l_county_data
    from county_town
   group by county;
  dbms_output.put_line(l_county_data(2).county);
end;
/
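To walk the whole multi-level result afterwards, the nested collection is indexed like any other (a sketch using the same types as above):
declare
  l_county_data nt_counties_typ;
begin
  select counties_typ(county, cast(collect(town) as towns_typ))
    bulk collect into l_county_data
    from county_town
   group by county;
  -- outer loop over counties, inner loop over each county's towns
  for i in 1 .. l_county_data.count loop
    for j in 1 .. l_county_data(i).towns.count loop
      dbms_output.put_line(l_county_data(i).county || ': ' || l_county_data(i).towns(j));
    end loop;
  end loop;
end;
/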
I need to make the below amendment to this stored procedure
create or replace PROCEDURE "USP_IMPORT_FOBTPP_DATA"
AS
BEGIN
INSERT INTO FINIMP.FOBT_PARTPAYMENT
SELECT
PART_PAYMENT_ID,
ISSUING_SHOP,
TILL_NUMBER,
SLIP_NUMBER,
FOBT_NUMBER,
WHO_PAID,
WHEN_PAID,
AMOUNT_LEFT_TO_PAY,
FOBT_VALUE,
STATUS
FROM IMPORTDB.CLN_FOBTPP;
COMMIT;
END;
The amendment must skip any records that would result in a primary key violation, so that the data load process does not break.
Source Table
CREATE TABLE "FINIMP"."FOBT_PARTPAYMENT"
( "PART_PAYMENT_ID" NUMBER(*,0),
"ISSUING_SHOP" CHAR(4 BYTE),
"TILL_NUMBER" NUMBER(3,0),
"SLIP_NUMBER" NUMBER(*,0),
"FOBT_NUMBER" VARCHAR2(30 BYTE),
"WHO_PAID" CHAR(20 BYTE),
"WHEN_PAID" DATE,
"AMOUNT_LEFT_TO_PAY" NUMBER(19,4),
"FOBT_VALUE" NUMBER(19,4),
"STATUS" CHAR(2 BYTE)
);
ALTER TABLE "FINIMP"."FOBT_PARTPAYMENT" ADD CONSTRAINT "PK_FOBT_PP" PRIMARY KEY ("PART_PAYMENT_ID", "ISSUING_SHOP", "WHEN_PAID");
I am new to PL/SQL, how can I do this?
There are a number of ways to accomplish this, and the best method depends on your environment/requirements. Is the CLN_FOBTPP table considerably large? Is the USP_IMPORT_FOBTPP_DATA procedure called frequently, and does it need to meet certain performance criteria? These are all things you should consider.
One way to do this would be to start with the query that you use.
create or replace PROCEDURE "USP_IMPORT_FOBTPP_DATA"
AS
BEGIN
INSERT INTO FINIMP.FOBT_PARTPAYMENT
SELECT ...
FROM IMPORTDB.CLN_FOBTPP;
This will return all of the rows from IMPORTDB.CLN_FOBTPP and insert them into FINIMP.FOBT_PARTPAYMENT. Instead, you could control for duplicates by doing:
INSERT INTO FINIMP.FOBT_PARTPAYMENT
SELECT ...
FROM IMPORTDB.CLN_FOBTPP
WHERE PART_PAYMENT_ID NOT IN (SELECT PART_PAYMENT_ID FROM FINIMP.FOBT_PARTPAYMENT);
This checks whether a row's PART_PAYMENT_ID already exists in FOBT_PARTPAYMENT before doing the insert. However, this can be prohibitively expensive if the table is large or if you have performance requirements.
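Note that the primary key here is composite (PART_PAYMENT_ID, ISSUING_SHOP, WHEN_PAID), so to skip only genuine key violations you would have to match all three columns, for example with NOT EXISTS (a sketch, keeping the column list elided as above):
INSERT INTO FINIMP.FOBT_PARTPAYMENT
SELECT ...
FROM IMPORTDB.CLN_FOBTPP s
WHERE NOT EXISTS (
    SELECT 1
    FROM FINIMP.FOBT_PARTPAYMENT t
    WHERE t.PART_PAYMENT_ID = s.PART_PAYMENT_ID
      AND t.ISSUING_SHOP = s.ISSUING_SHOP
      AND t.WHEN_PAID = s.WHEN_PAID);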
Another way would be to use a temporary table, populate it each time the procedure is called, and add the new rows to the target table after validating the data. This would look something like:
create global temporary table temp_USP_table ("PART_PAYMENT_ID" NUMBER(*,0), "ISSUING_SHOP" CHAR(4 BYTE),...) on commit delete rows;
create or replace PROCEDURE "USP_IMPORT_FOBTPP_DATA"
AS
BEGIN
INSERT INTO temp_USP_table
SELECT ...
FROM IMPORTDB.CLN_FOBTPP;
From there, you can do a number of things. You could use the same procedure to add the new rows from the temp table into the FINIMP.FOBT_PARTPAYMENT table:
delete from temp_USP_table where PART_PAYMENT_ID in (select PART_PAYMENT_ID from FINIMP.FOBT_PARTPAYMENT);
insert into FINIMP.FOBT_PARTPAYMENT select * from temp_USP_table;
Or you could create a new procedure to load the new data from the temp_USP_table into the FINIMP.FOBT_PARTPAYMENT table, in case you'd like to do something additional to the new data before it's added to the table. Since you reference a data load, I would recommend going the temporary table route because it should allow you to load the data without issue. Once the data is loaded, you can worry about adding it to the proper table(s).
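Putting the temporary-table route together, the whole procedure might look like this (a sketch; it assumes temp_USP_table is the global temporary table defined above, with its ON COMMIT DELETE ROWS behaviour):
create or replace PROCEDURE "USP_IMPORT_FOBTPP_DATA"
AS
BEGIN
  -- stage the incoming rows
  INSERT INTO temp_USP_table
  SELECT * FROM IMPORTDB.CLN_FOBTPP;
  -- drop the rows that would violate the primary key
  DELETE FROM temp_USP_table
  WHERE PART_PAYMENT_ID IN (SELECT PART_PAYMENT_ID FROM FINIMP.FOBT_PARTPAYMENT);
  -- load the remaining rows; the commit also clears the temporary table
  INSERT INTO FINIMP.FOBT_PARTPAYMENT SELECT * FROM temp_USP_table;
  COMMIT;
END;
/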
I have defined a series of tables and objects. I have an object nested table that I am trying to insert values into. The values are in the form of a variable array, but I don't know how to insert them. My tables and code are as follows.
Table wu.classes
crn number(5)
department varchar2(8)
title varchar2(25)
Table wu.students
student_id char(11)
name varchar2(10)
dept varchar2(8)
advisor varchar2(10)
classes wu.classes_va
wu.classes_va varray(5) of number(5)
create type classes_ty as object(crn varchar2(5), department varchar2(8), coursetitle varchar2(25));
create table classes_ot of classes_ty;
insert into classes_ot select crn,department,title from wu.classes;
create or replace type classes_ref_ty as table of ref classes_ty;
create table student_plus(student# varchar2(11), student_name varchar2(10), major varchar2(8), advisor varchar2(10), enrolled classes_ref_ty) nested table enrolled store as classes_ref_ty_tab;
Problem here (I need to loop through to fill the table, but I just need to know how to do it for one value and I can figure the rest out):
begin
insert into student_plus values('700-123-948','Hooker','CS','VanScoy',classes_ref_ty());
insert into table(select enrolled from student_plus where student#='700-123-948')
select ref(c) from classes_ot c where ???
end;
/
I don't know how to access the variable array and use it with the classes_ref_ty.
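One way to fill in the ??? is to unnest the student's varray with TABLE() and use it to pick the matching object rows. This is only a sketch, not the original poster's code; note that the question declares classes_ot.crn as varchar2(5) while the varray wu.classes_va holds number(5), so an implicit conversion happens on the comparison:
begin
  insert into student_plus values('700-123-948','Hooker','CS','VanScoy',classes_ref_ty());
  insert into table(select enrolled from student_plus where student#='700-123-948')
  select ref(c)
    from classes_ot c
   where c.crn in (
         -- unnest the student's varray of CRNs
         select column_value
           from table(select s.classes from wu.students s
                      where s.student_id = '700-123-948'));
end;
/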
I created a table in Oracle 11g with a default value for one of the columns. The syntax is:
create table xyz(emp number,ename varchar2(100),salary number default 0);
This was created successfully. For some reason I need to create another table with the same structure and data as the old table. So I created a new table named abc as:
create table abc as select * from xyz;
Here "abc" created successfully with same structure and data as old table xyz. But for the column "salary" in old table "xyz" default value was set to "0". But in the newly created table "abc" the default value is not set.
This is all in Oracle 11g. Please tell me why the default value was not set, and how it can be set using a select statement.
You can specify the constraints and defaults in a CREATE TABLE AS SELECT, but the syntax is as follows:
create table t1 (id number default 1 not null);
insert into t1 (id) values (2);
create table t2 (id default 1 not null)
as select * from t1;
That is, it won't inherit the constraints from the source table/select. Only the data type (length/precision/scale) is determined by the select.
The reason is that CTAS (Create table as select) does not copy any metadata from the source to the target table, namely
no primary key
no foreign keys
no grants
no indexes
...
To achieve what you want, I'd either
use dbms_metadata.get_ddl to get the complete table structure, replace the table name with the new name, execute this statement, and do an INSERT afterward to copy the data (see the sketch after this list)
or keep using CTAS, extract the not null constraints for the source table from user_constraints and add them to the target table afterwards
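A minimal sketch of the first option (the plain REPLACE is simplistic: it assumes the string "XYZ" occurs only as the table name, and you may also want dbms_metadata.set_transform_param to strip storage clauses):
declare
  l_ddl clob;
begin
  -- fetch the complete DDL of the source table, including defaults and NOT NULL constraints
  l_ddl := dbms_metadata.get_ddl('TABLE', 'XYZ');
  -- crude rename of the table
  l_ddl := replace(l_ddl, '"XYZ"', '"ABC"');
  execute immediate l_ddl;
  -- copy the data afterwards
  execute immediate 'insert into abc select * from xyz';
end;
/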
You will need to run
alter table abc modify (salary default 0);
The new table inherits only NOT NULL constraints, and no other constraints.
Thus you can either alter the table after creating it with the "create table as" command,
or define all the constraints you need, as follows:
create table t1 (id number default 1 not null);
insert into t1 (id) values (2);
create table t2 as select * from t1;
This will create table t2 with the NOT NULL constraint.
But for any constraint other than NOT NULL you should use the following syntax:
create table t1 (id number default 1 unique);
insert into t1 (id) values (2);
create table t2 (id default 1 unique)
as select * from t1;
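To verify which constraints actually came across, you can query the data dictionary (SEARCH_CONDITION holds the text of NOT NULL and check constraints):
select constraint_name, constraint_type, search_condition
from user_constraints
where table_name = 'T2';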
I have a question regarding a unified insert query against tables with different data
structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works), is to iterate over all the tables in a stored procedure and do a:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
create global temporary table tb_customers$ as select * from tb_customers;
create global temporary table tb_suppliers$ as select * from tb_suppliers;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction...
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables, and just using a plain insert per table:
insert into tb_customers_2
select id, name, :new_archive_id from tb_customers;
insert into tb_suppliers_2
select id, name, contact, xxx, xxx, :new_archive_id from tb_suppliers;
Or, if you really need a single SQL statement for all of them, at least pre-create all the temp tables (as global temporary tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table, as sketched after the following outline:
insert into ORIGINAL_TABLE_TEMP (SELECT * from ORIGINAL_TABLE);
UPDATE ORIGINAL_TABLE_TEMP SET archive_id=something;
INSERT INTO NEW_TABLE (select * from ORIGINAL_TABLE_TEMP);
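A sketch of that dynamic SQL approach (the table list and the _TEMP naming convention are made up for illustration):
declare
  type t_names is table of varchar2(30);
  -- hypothetical list of base tables to archive
  l_tables t_names := t_names('TB_CUSTOMERS', 'TB_SUPPLIERS');
  l_new_archive_id number := 2;
begin
  for i in 1 .. l_tables.count loop
    execute immediate 'insert into ' || l_tables(i) || '_TEMP select * from ' || l_tables(i);
    execute immediate 'update ' || l_tables(i) || '_TEMP set archive_id = :1' using l_new_archive_id;
    execute immediate 'insert into ' || l_tables(i) || ' select * from ' || l_tables(i) || '_TEMP';
  end loop;
  commit; -- clears the global temporary tables
end;
/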