Loading 5 GB of data into Autonomous Database? - is there a file size limit?

I need to load 5 GB of data from object storage into Autonomous Database. Can I create one external table over this 5 GB file and load the data, or do I have to split the file into several parts first? Is there any restriction in object storage on the maximum file size that can be loaded into Autonomous Database?

On Autonomous Database on Shared Infrastructure you can use your 5 GB file (or larger; there is no practical file size limitation) to create an external table, or you can load it directly into a table using the DBMS_CLOUD package.
To create an external table over data in your object store, use the DBMS_CLOUD.CREATE_EXTERNAL_TABLE procedure.
E.g.
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'CHANNELS_EXT',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format          => json_object('delimiter' value ','),
    column_list     => 'CHANNEL_ID NUMBER, CHANNEL_DESC VARCHAR2(20), CHANNEL_CLASS VARCHAR2(20)'
  );
END;
/
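Once the external table exists you can query it like any other table. As a quick sanity check (a sketch, assuming the CHANNELS_EXT table created above), count the rows or run DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE, which checks that the file can be parsed with the declared format and columns:
SELECT COUNT(*) FROM CHANNELS_EXT;

BEGIN
  -- validates the source file against the external table definition
  DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(table_name => 'CHANNELS_EXT');
END;
/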
To load data into a table in your database, use the DBMS_CLOUD.COPY_DATA procedure.
E.g.
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/idthydc0kinr/mybucket/channels.txt',
    format          => json_object('delimiter' value ',')
  );
END;
/
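DBMS_CLOUD load operations are logged on Autonomous Database, so after COPY_DATA completes you can check the outcome (a quick check; column names vary by version, so SELECT * is used here):
-- shows status, rows loaded, and the log/bad file tables for each DBMS_CLOUD load
SELECT * FROM user_load_operations;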
For more details on the parameter options and the credentials required for your object store of choice, refer to the DBMS_CLOUD documentation.

Related

Creation of an Oracle temporary table with the same structure as an existing table

How can I create a global temporary table with the same structure as an existing table?
I know this is possible in SQL Server with something like "select * into #temp123 from abc", but I want to do the same in Oracle.
Create global temporary table mytemp
as
select * from myTable
where 1=2
Global temporary tables in Oracle are very different from temporary tables in SQL Server. They are permanent data structures; it is merely the data in them which is temporary (limited to the session or transaction, depending on how the table is defined).
Therefore, the correct way to use global temporary tables is very different from how we use temporary tables in SQL Server. The CREATE GLOBAL TEMPORARY TABLE statement is a one-off exercise (like any other table). Dropping and recreating tables on the fly is bad practice in Oracle, which doesn't stop people wanting to do it.
Given that the creation of a global temporary table should be a one-off exercise, there is no real benefit to using the CREATE TABLE ... AS SELECT syntax. The statement should be explicitly defined and the script stored in source control like any other DDL, as in the sketch below.
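For completeness, a minimal sketch of such an explicit, one-off definition (the column names are illustrative, since myTable's structure isn't shown; the ON COMMIT clause decides whether rows survive a commit):
-- One-off DDL, kept in source control like any other table script.
-- ON COMMIT PRESERVE ROWS keeps rows for the session;
-- ON COMMIT DELETE ROWS would limit them to the transaction.
CREATE GLOBAL TEMPORARY TABLE mytemp (
  id   NUMBER,
  name VARCHAR2(32)
) ON COMMIT PRESERVE ROWS;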
You have tagged your question [oracle18c]. If you are really using Oracle 18c, you have a new feature open to you: private temporary tables, which are closer to SQL Server temporary tables. These are tables which are genuinely in-memory and are dropped automatically at the end of the transaction or session (again, according to their definition). They are covered in the Oracle documentation, but here are the headlines.
Creating a private temporary table with a subset of the data from permanent table T23:
create table t23 (
id number primary key
, txt varchar2(24)
);
insert into t23
select 10, 'BLAH' from dual union all
select 20, 'MEH' from dual union all
select 140, 'HO HUM' from dual
/
create private temporary table ORA$PTT_t23
on commit preserve definition
as
select * from t23
where id > 100;
The ORA$PTT_ prefix is mandatory (although it can be changed by setting the init.ora parameter PRIVATE_TEMP_TABLE_PREFIX, but why bother?).
Thereafter we can execute any regular DML against the table:
select * from ORA$PTT_t23;
The big limitation is that we cannot use the table in static PL/SQL. The table doesn't exist in the data dictionary as such, and so the PL/SQL compiler hurls an error - even for anonymous blocks:
declare
rec t23%rowtype;
begin
select *
into rec
from ORA$PTT_t23;
dbms_output.put_line('id = ' || rec.id);
end;
/
ORA-06550: line 6, column 10: PL/SQL: ORA-00942: table or view does not exist
Any reference to a private temporary table in PL/SQL must be done with dynamic SQL:
declare
n pls_integer;
begin
execute immediate 'select id from ORA$PTT_t23' into n;
dbms_output.put_line('id = ' || n);
end;
/
Basically this restricts their usage to SQL*Plus (or SQLcl) scripts which run a series of pure SQL statements. So, if you have a use case which fits that, then you should check out private temporary tables. However, please consider that Oracle is different from SQL Server in many aspects, not least its multi-version consistency model: readers do not block writers. Consequently, there is much less need for temporary tables in Oracle.
In SQL Server's syntax the prefix "#" (hash) in the table name #temp123 means: create a temporary table that is only accessible from the current session ("##" means global).
To achieve exactly the same thing in Oracle you can use private temporary tables:
SQL> show parameter private_temp_table
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
private_temp_table_prefix string ORA$PTT_
create table mytab as
select 1 id, cast ('aaa' as varchar2 (32)) name from dual
;
create private temporary table ora$ptt_mytab on commit preserve definition as
select * from mytab where 1=0
;
Private TEMPORARY created.
Afterwards you can use these tables in SQL and PL/SQL blocks:
declare
r mytab%rowtype;
begin
insert into ora$ptt_mytab values (2, 'bbb');
select id + 1, name||'x' into r from ora$ptt_mytab where rownum = 1;
insert into ora$ptt_mytab values r;
end;
/
select * from mytab
union all
select * from ora$ptt_mytab;
ID NAME
---------- --------------------------------
1 aaa
2 bbb
3 bbbx
Some important restrictions on private temporary tables:
The name must always be prefixed with whatever is defined by the parameter PRIVATE_TEMP_TABLE_PREFIX. The default is ORA$PTT_.
You cannot reference a PTT in the static statements of named PL/SQL blocks, e.g. packages, functions, or triggers.
The %ROWTYPE attribute is not applicable to that table type.
You cannot define columns with default values.

Oracle: inner join between file and table

I'm new to Oracle and PL/SQL, so just bear with me.
I have a file TYPES.txt,
id,name,values
1,aaa,32
2,bbb,23
3,cvv,12
4,fff,54
I also have a table in my db, PARTS.ATTRIBUTES
id,name,props,crops
1,aaa,100,zzzz
2,bbb,200,yyyy
3,cvv,300,xxxx
4,fff,400,wwww
5,sasa,343,gfgg
6,uyuy,897,hhdf
I'd like to do an INNER JOIN between the file TYPES and the ATTRIBUTES table based on the column name. I have done this by first loading the file TYPES into a temp table and then doing an INNER JOIN between the temp table and the ATTRIBUTES table.
But I'd like to know whether it is possible to do an INNER JOIN between the TYPES file and the ATTRIBUTES table without making use of a temp table.
I understand that I can load the file and read its rows using the following script:
declare
  file utl_file.file_type;
  line varchar2(500);
begin
  file := utl_file.fopen('USER_DIR', 'TYPES.txt', 'r');
  loop
    utl_file.get_line(file, line);   -- raises NO_DATA_FOUND at end of file
    dbms_output.put_line(line);
  end loop;
exception
  when no_data_found then
    utl_file.fclose(file);           -- normal end of file
end;
/
Could someone be kind enough to explain how I can do the join between the file contents and the db table?
P.S. The file TYPES.txt is dynamically generated and can have a different number of columns at different times.
One cleaner approach is to use an EXTERNAL TABLE.
Use a create statement like this to create the TYPES_external table. SKIP 1 skips the header line in TYPES.txt, and the third column is renamed to vals because VALUES is a reserved word in Oracle.
CREATE TABLE TYPES_external (
  id   NUMBER(5),
  name VARCHAR2(50),
  vals VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY USER_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      id,
      name,
      vals
    )
  )
  LOCATION ('TYPES.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
Once created, you can use this external table (TYPES_external) just as you would use any database table for SELECT operations, as in the join below.
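With the external table in place, the INNER JOIN from the question becomes a plain query (a sketch, using the TYPES_external table above and the PARTS.ATTRIBUTES table from the question):
-- Join the file contents (exposed by the external table) to the database table on name.
SELECT a.id, a.name, a.props, a.crops, t.vals
FROM   PARTS.ATTRIBUTES a
JOIN   TYPES_external t
ON     t.name = a.name;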

Sqlplus parameters and variables with default values

Problem
I have sql scripts which may use different tablespaces for different database users.
In order to remain flexible with the table creation I'd like to keep only 1 script and apply it to the various users. For that purpose I have something like this:
Tablespaces:
CREATE TABLESPACE MY_TABLESPACE DATAFILE 'MY_TABLESPACE.dat' SIZE 40M ONLINE;
CREATE TABLESPACE MY_INDEXSPACE DATAFILE 'MY_INDEXSPACE.dat' SIZE 40M ONLINE;
And the table creation script:
define default_tablespace = 'MY_TABLESPACE';
define default_indexspace = 'MY_INDEXSPACE';
drop table test_table;
create table test_table ( id number ) tablespace &default_tablespace;
create index my_index on test_table( id) tablespace &default_indexspace;
I.e. I can't simply set a default tablespace for the user, because the index uses a different tablespace.
Question
Is it possible to override the definition of default_tablespace and default_indexspace depending on, e.g., an environment variable?
Something like:
define default_tablespace = isEnviromentVariableSet( 'OTHER_TABLESPACE') ? getEnvironmentVariable( OTHER_TABLESPACE) : 'MY_TABLESPACE';
That way I could use different tablespaces whenever I invoke the script externally by some utility and at the same time I could keep the default tablespace.
Thank you very much for the help!
In DDL operations (CREATE, DROP, etc.) you can't use bind or PL/SQL variables directly.
An easy way around this is to use a PL/SQL anonymous block like this:
declare
  my_table_space   varchar2(100) default 'my_some_tablespace';
  other_tablespace varchar2(100);
begin
  DBMS_SYSTEM.get_env('OTHER_TABLESPACE', other_tablespace);
  if other_tablespace is not null then
    my_table_space := other_tablespace;
  end if;
  -- note the trailing space after "tablespace " so the name isn't glued to the keyword
  execute immediate 'create table test_table ( id number ) tablespace ' || my_table_space;
end;
/
To read an environment variable you can use DBMS_SYSTEM.get_env('NAME_OF_VARIABLE', my_variable), but this package needs DBA rights (I think :-) ).
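The index from the original script can be handled the same way; a sketch, assuming a second, hypothetical environment variable OTHER_INDEXSPACE for the index tablespace:
declare
  my_index_space   varchar2(100) default 'MY_INDEXSPACE';
  other_indexspace varchar2(100);
begin
  -- OTHER_INDEXSPACE is hypothetical, mirroring OTHER_TABLESPACE above
  DBMS_SYSTEM.get_env('OTHER_INDEXSPACE', other_indexspace);
  if other_indexspace is not null then
    my_index_space := other_indexspace;
  end if;
  execute immediate 'create index my_index on test_table ( id ) tablespace ' || my_index_space;
end;
/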

How to compare two Oracle schemas to get the delta changes by altering the tables rather than dropping and recreating them

I've already tried a tool named TOYS. It is free, but unfortunately it didn't work.
Then I tried "RED-Gate Schema Compare for Oracle", but it drops and recreates the table, while I need to just alter the table with the newly added/dropped columns.
Any help is highly appreciated.
Thanks
Starting from Oracle 11g you can use the DBMS_METADATA_DIFF package, specifically its COMPARE_ALTER() function, to compare the metadata of two schema objects:
Schema #1 HR
create table tb_test(
col number
)
Schema #2 HR2
create table tb_test(
col_1 number
)
select dbms_metadata_diff.compare_alter( 'TABLE' -- schema object type
, 'TB_TEST' -- object name
, 'TB_TEST' -- object name
, 'HR' -- by default current schema
, 'HR2'
) as res
from dual;
Result:
RES
-------------------------------------------------
ALTER TABLE "HR"."TB_TEST" ADD ("COL_1" NUMBER);
ALTER TABLE "HR"."TB_TEST" DROP ("COL");

How to compare a local CLOB column against a CLOB column in a remote database instance

I want to verify that the data in 2 CLOB columns is the same on 2 different instances. If these were VARCHAR2 columns, I could use a MINUS or a join to determine if rows were in one instance or the other. Unfortunately, Oracle does not allow you to perform set operations on CLOB columns.
How do I compare 2 CLOB columns, one of which is in my local instance and one that is in a remote instance?
Example table structure:
CREATE TABLE X.TEXT_TABLE
( ID   VARCHAR2(30),    -- VARCHAR2 needs a length; the values here are illustrative
  NAME VARCHAR2(100),
  TEXT CLOB
);
You can use an Oracle global temporary table to pull the CLOBs over to your local instance temporarily. You can then use the DBMS_LOB.COMPARE function to compare the CLOB columns.
If this query returns any rows, the CLOBs are different (more or fewer characters, newlines, etc.), or the row exists in only one of the instances.
--Create temporary table to store the text in
CREATE GLOBAL TEMPORARY TABLE X.TEMP_TEXT_TABLE
ON COMMIT DELETE ROWS
AS
SELECT * FROM X.TEXT_TABLE@REMOTE_DB;
--Use this statement if you need to refresh the TEMP_TEXT_TABLE table
INSERT INTO X.TEMP_TEXT_TABLE
SELECT * FROM X.TEXT_TABLE@REMOTE_DB;
--Do the comparison
SELECT DISTINCT
TARGET.NAME TARGET_NAME
,SOURCE.NAME SOURCE_NAME
,DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) AS COMPARISON
FROM (SELECT ID, NAME, TEXT FROM X.TEMP_TEXT_TABLE) TARGET
FULL OUTER JOIN
(SELECT ID, NAME, TEXT FROM X.TEXT_TABLE) SOURCE
ON TARGET.ID = SOURCE.ID
WHERE DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) <> 0
OR DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) IS NULL;
You can use DBMS_SQLHASH to compare hashes of the relevant data. This should use significantly less I/O than moving and comparing the CLOBs. The query below will just tell you whether there are any differences in the entire table, but you can narrow it down (see the sketch after it).
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table'
,digest_type => 1/*MD4*/) from dual
minus
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table@remoteDB'
,digest_type => 1/*MD4*/) from dual;
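To narrow it down, you can hash ordered slices of the table and drill into the slices whose hashes differ. A sketch, assuming the ID values can be range-filtered; the ORDER BY keeps the row order, and therefore the hash, deterministic on both sides:
select sys.dbms_sqlhash.gethash(
         sqltext     => 'select id, text from text_table where id < ''M'' order by id',
         digest_type => 1 /*MD4*/) as local_hash
from dual
minus
select sys.dbms_sqlhash.gethash(
         sqltext     => 'select id, text from text_table@remoteDB where id < ''M'' order by id',
         digest_type => 1 /*MD4*/) as remote_hash
from dual;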
