How do I load XMLTYPE from a file? (Oracle)

I have a big (1 MB+) XML file stored in a local folder (for example: c:\temp\data.xml) that should be loaded into an XMLTYPE variable.
How can I do that?

The size limit of an Oracle XMLTYPE column is at least 4 GB, so you will not experience problems loading a 1 MB file.
You have to create an Oracle directory object (on the database server), put your XML file into the directory it points to, then execute your insert as follows:
oracle#server>mkdir yourdirectory
oracle#server>chown youroracleaccount.youroraclegroup yourdirectory
SQL> CREATE OR REPLACE DIRECTORY XMLDIR AS 'YOURDESIREDPATH';
SQL> GRANT read, write ON DIRECTORY XMLDIR TO <DESIREDORACLESCHEMA>;
SQL> INSERT INTO YOURTABLE VALUES (...., XMLType(bfilename('XMLDIR', 'yourfilename.xml'), nls_charset_id('YOURCHARSETID')));
SQL> commit;
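If the goal is just a PL/SQL variable, you can also construct the XMLTYPE straight from the BFILE, with no staging table at all. A minimal sketch, assuming the XMLDIR object above; the AL32UTF8 character set is an assumption, use your file's actual encoding:
DECLARE
  v_xml XMLTYPE;
BEGIN
  -- parse the file contents directly into the variable;
  -- AL32UTF8 is assumed here, substitute your file's character set
  v_xml := XMLTYPE(bfilename('XMLDIR', 'yourfilename.xml'),
                   nls_charset_id('AL32UTF8'));
END;
/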
Another option for putting your XML in a variable is to create an external table, for example as follows (you can adjust this sample as you need; note that with COLUMN TRANSFORMS ... FROM lobfile, the file named in LOCATION is a list of LOB file names, one per line, not the XML document itself):
CREATE TABLE YOURXMLTABLE (doc CLOB)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY xmlfile_dir
ACCESS PARAMETERS
(
FIELDS (lobfn CHAR TERMINATED BY ',')
COLUMN TRANSFORMS (doc FROM lobfile (lobfn))
)
LOCATION ('filelist.txt') -- a one-line list file whose content is: yourfilename.xml
)
REJECT LIMIT UNLIMITED;
and then execute
SELECT XMLType(doc) INTO your_xmltype_variable FROM YOURXMLTABLE;
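Inside PL/SQL, that might look like the following sketch (assuming the YOURXMLTABLE external table above contains exactly one row):
DECLARE
  v_doc XMLTYPE;
BEGIN
  SELECT XMLType(doc)
    INTO v_doc
    FROM YOURXMLTABLE;
  -- v_doc now holds the parsed document
  dbms_output.put_line(v_doc.getrootelement());
END;
/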

Related

Read data from a text file and, instead of storing it into a table, can we directly add the data into a cursor and fetch the cursor in a procedure?

Context: I have a text file that may contain data such as:
Employee Salary
name:start_salary:current_salary
emp1:30000:40000
emp2:35000:40000
.
.
Emp details
name:role:experience
emp1:Analyst:2
emp2:DBA:1
emp3:Developer:3
I want to read this text file from PL/SQL code; I know I can load the data into a table and then utilize that data in my PL/SQL code through a cursor.
But I want to skip the step of creating a table and use the data on the fly. Can we directly read the data into a cursor?
Can someone please help if that is possible?
You can do that using the UTL_FILE package, which allows PL/SQL to read and write operating system text files.
Once you open the file, you can read its contents line by line into a PL/SQL collection and then perform all the necessary processing directly on that data, without any table.
Note that you have to know how your file is formatted and the structure of the data you are reading.
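A minimal sketch of that idea, assuming a directory object DATA_DIR pointing at the folder holding the file, and the name:start_salary:current_salary layout from the question:
DECLARE
  f     utl_file.file_type;
  line  varchar2(4000);
  TYPE emp_rec IS RECORD (name varchar2(30), start_salary number, current_salary number);
  TYPE emp_tab IS TABLE OF emp_rec;
  emps  emp_tab := emp_tab();
BEGIN
  f := utl_file.fopen('DATA_DIR', 'emp.txt', 'R');
  BEGIN
    LOOP
      utl_file.get_line(f, line);
      -- keep only lines that look like name:number:number, skipping headers
      IF regexp_like(line, '^[^:]+:[0-9]+:[0-9]+$') THEN
        emps.extend;
        emps(emps.count).name           := regexp_substr(line, '[^:]+', 1, 1);
        emps(emps.count).start_salary   := to_number(regexp_substr(line, '[^:]+', 1, 2));
        emps(emps.count).current_salary := to_number(regexp_substr(line, '[^:]+', 1, 3));
      END IF;
    END LOOP;
  EXCEPTION
    WHEN no_data_found THEN NULL;  -- end of file reached
  END;
  utl_file.fclose(f);
  -- the collection now plays the role of the table + cursor
  FOR i IN 1 .. emps.count LOOP
    dbms_output.put_line(emps(i).name || ': ' || emps(i).current_salary);
  END LOOP;
END;
/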
Check out the inline (implicit) EXTERNAL TABLE syntax available in Oracle 18c and later, which lets you query a flat file directly from within a SELECT statement, e.g.
SQL> select * from external (
2 empno number(4),
3 ename varchar2(10),
4 ...
12 ( type oracle_loader
13 default directory TMP
14 access parameters
15 ( records delimited by newline
16 fields terminated by ','
17 missing field values are null
18 ( empno,ename,job,mgr,...)
19 )
20 location ('emp20161001.dat')
21 );
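Applied to the colon-separated emp.txt from the question, a complete statement might look like this sketch (the directory object TMP and the column names are assumptions):
select * from external (
  ( name           varchar2(30),
    start_salary   number,
    current_salary number )
  type oracle_loader
  default directory TMP
  access parameters
  ( records delimited by newline
    fields terminated by ':'
    missing field values are null
    ( name, start_salary, current_salary )
  )
  location ('emp.txt')
  reject limit unlimited
);
The reject limit unlimited clause lets the header lines from the sample file fall through as rejected records instead of failing the query.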

How to load an extracted ORACLE CLOB into only 1 TEXT column in Postgres?

I'm currently looking at migrating CLOB data from Oracle into Postgres via an external file. I have created my table in Postgres using the data type TEXT, which is the Postgres counterpart of Oracle's CLOB, and now I just need to get my data in.
So far I've extracted a CLOB column from Oracle into a file as per the below. It is only 1 CLOB from 1 column, so I'm trying to load the contents of this entire CLOB into 1 column in Postgres.
CREATE TABLE clob_test (
id number,
clob_col CLOB);
DECLARE
c CLOB;
CURSOR scur IS
SELECT text
FROM dba_source
WHERE rownum < 200001;
BEGIN
EXECUTE IMMEDIATE 'truncate table clob_test';
FOR srec IN scur LOOP
c := c || srec.text;
END LOOP;
INSERT INTO clob_test VALUES (1, c);
COMMIT;
END;
/
DECLARE
buf CLOB;
BEGIN
SELECT clob_col
INTO buf
FROM clob_test
WHERE id = 1;
dbms_advisor.create_file(buf, 'TEST_DIR', 'clob_1.txt');
END;
/
This works fine and generates the clob_1.txt file containing the entire contents of the Oracle CLOB column CLOB_COL. Below is an example of the file output; it seems to contain every possible character you can think of, including "~"...
/********** Types and subtypes, do not reorder **********/
type BOOLEAN is (FALSE, TRUE);
type DATE is DATE_BASE;
type NUMBER is NUMBER_BASE;
subtype FLOAT is NUMBER; -- NUMBER(126)
subtype REAL is FLOAT; -- FLOAT(63)
...
...
...
END;
/
My problem now is: how do I get the entire contents of this one file into one record in Postgres, so it simulates exactly how the data was originally stored in one record in Oracle?
Effectively what I'm trying to achieve is similar to this; it works, but the formatting is awful and doesn't really mirror how the data was originally stored.
POSTGRES> insert into clob_test select pg_read_file('/home/oracle/clob_1.txt');
I have tried using the COPY command but I'm having two issues. First, if there is a carriage return it will see that as another record and split the file up; second, I can't find a delimiter which isn't already used somewhere in the file. Is there some way I can bypass the delimiter and just tell Postgres to COPY everything from this file without delimiters, as it's only one column?
Any help would be great 😊
Note for other answerers: This is incomplete and will still put the data into multiple records; the question also wants all the data in a single field.
Use COPY ... FROM ... CSV DELIMITER e'\x01' QUOTE e'\x02'. The only thing this can't handle is actual binary data, which, as I understand it, is not permitted in a CLOB (I have never used Oracle myself). This only avoids the delimiter issue; it will still insert the data as one row per line of the input.
I'm not sure how to go about fixing that issue, but you should be aware that it's probably not possible to do this correctly in all cases: the largest field value PG supports is 1 GB, while a CLOB can hold up to 4 GB. If you need to correctly import CLOBs larger than 1 GB, the only route available is PG's large object interface.
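Spelled out against the single-column clob_test table the asker loads with pg_read_file, the suggestion might look like this (modern COPY option syntax; it still produces one row per input line):
COPY clob_test
FROM '/home/oracle/clob_1.txt'
WITH (FORMAT csv, DELIMITER E'\x01', QUOTE E'\x02');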

How to read data from a text file with comma-separated values and insert it into a temp table using a stored procedure

File name emp.txt - the text file contains data like this:
emp_no,emp_EXPIRY_DATE,STATUS
a123456,2020-07-12,A
a123457,2020-07-12,A
I want to insert data into a temp table using a stored procedure.
Which database do you use? SQL Developer suggests Oracle (of course), but the code you posted as a comment isn't Oracle.
Anyway, if it were Oracle, then doing what you plan to do would require the UTL_FILE package. The CSV file should be put into a directory (usually on the database server) that serves as the source for a DIRECTORY object in Oracle; the user that is supposed to load the data should have read (and write?) privileges on it.
Alternatively, you could use the CSV file as an external table. That option might be simpler, as it allows you to write ordinary SELECT statements against the file, i.e. read data from it and insert it into the target table that resides in the Oracle database. This option also requires the "directory" setup; see the external-table sketch after the SQL*Loader example below.
Or, if you want to do that locally, consider using SQL*Loader: create a control file and load the data. This option can be extremely fast, way faster than the previous options. You won't see any difference for small files, but for a lot of data this might be your choice.
A SQL*Loader example:
Test table:
SQL> create table test
2 (emp_no varchar2(10),
3 emp_expiry_date date,
4 status varchar2(1));
Table created.
Control file:
options (skip=1)
LOAD DATA
infile emp.txt
replace
INTO TABLE test
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
emp_no,
emp_expiry_date "to_date(:emp_expiry_date, 'yyyy-dd-mm')",
status
)
Loading session & the result:
SQL> alter session set nls_date_Format = 'yyyy-mm-dd';
Session altered.
SQL> $sqlldr scott/tiger control=test13.ctl log=test13.log
SQL*Loader: Release 11.2.0.2.0 - Production on Sri Pro 11 21:02:44 2019
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Commit point reached - logical record count 1
Commit point reached - logical record count 2
SQL> select * from test;
EMP_NO EMP_EXPIRY S
---------- ---------- -
a123456 2020-12-07 A
a123457 2020-12-07 A
SQL>
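The external table route mentioned above might look like this sketch (the DATA_DIR directory object is an assumption; the date format mirrors the SQL*Loader control file):
create table emp_ext
( emp_no          varchar2(10),
  emp_expiry_date varchar2(10),
  status          varchar2(1)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by newline
    skip 1
    fields terminated by ','
    missing field values are null
  )
  location ('emp.txt')
)
reject limit unlimited;

insert into test
  select emp_no,
         to_date(emp_expiry_date, 'yyyy-dd-mm'),
         status
  from emp_ext;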

Oracle columns were missing while fetching data to a CSV file using || in a select query

We are trying to fetch data from Oracle to a CSV file through Oracle spool.
But not all columns were fetched; some columns were missing. We are using the query below:
select colm1||'"~"'||colm2||'"~"'||...colm159||'"~"'|| from table;
It fetched only a few columns.
When the same select query lists the columns separated with ',', it fetches all columns,
eg: colm1,colm2,colm3...colm159 from table;
Please help me to sort this out.
Thanks
Many IDEs such as SQL Developer support CSV export with a right-click menu but you can do it in SQL*Plus like this:
SQL> set colsep '~'
SQL> set echo off
SQL> set pages 0
SQL> set line 5000 -- or whatever is enough up to 32767
SQL> spool your_table.tsv
SQL> select /* csv */ * from your_table;
SQL> spool off
I recommend using the .tsv file extension. If you use .csv, Excel assumes comma separation, whereas with .tsv you get a dialog box to specify the separator.
If your longest line is more than 32k characters, you'll need a different approach.

Oracle 10g: Can CLOB data lengths be less than 4,000?

We have three databases: dev, staging, and production. We do all our coding in the dev environment. We then push all our code and database changes to staging so the client can see how it works in a live environment. After they sign off, we do the final deployment to the production environment.
Now, about these CLOB columns: When using desc and/or querying the all_tab_columns view for the dev database, CLOBs show a data length of 4,000. However, in the staging and production databases, data lengths for dev-equivalent CLOB columns are odd numbers like 86. I've searched for every possible solution as to how this could have come about. I've even tried adding a new CLOB(86) column thinking it would work like it does for VARCHAR2, but Oracle just spits out an error.
Could the DBAs have botched something up? Is this even something to worry about? Nothing has ever seemed to break as a result of this, but I just like the metadata to be the same across all environments.
First of all, as a DBA, I feel sorry to see the lack of cooperation between you and the DBAs. We all need to cooperate to be successful. CLOB data lengths can be less than 4,000 bytes.
create table z ( a number, b clob);
Table created.
insert into z values (1, 'boe');
1 row created.
exec dbms_stats.gather_table_stats (ownname => 'ronr', tabname => 'z');
PL/SQL procedure successfully completed.
select owner, avg_row_len from dba_tables where table_name = 'Z'
SQL> /
OWNER AVG_ROW_LEN
------------------------------ -----------
RONR 109
select length(b) from z;
LENGTH(B)
----------
3
Where do you find that a CLOB length cannot be less than 4,000?
DATA_LENGTH stores the maximum number of bytes that a column will take up within the row. If the CLOB can be stored in row, then that maximum is 4,000 bytes; a LOB never takes up more than 4,000 bytes in the row itself. If in-row storage is disabled, then the row only stores the pointer information needed to locate the LOB data, which is much less than 4,000 bytes.
SQL> create table t (clob_in_table clob
2 , clob_out_of_table clob
3 ) lob (clob_out_of_table) store as (disable storage in row)
4 , lob (clob_in_table) store as (enable storage in row)
5 /
Table created.
SQL> select table_name, column_name, data_length
2 from user_tab_columns
3 where table_name = 'T'
4 /
TABLE_NAME COLUMN_NAME DATA_LENGTH
------------------------------ ------------------------------ -----------
T CLOB_IN_TABLE 4000
T CLOB_OUT_OF_TABLE 86
EDIT, adding info on *_LOBS view
Use the [DBA|ALL|USER]_LOBS views to look at the defined in-row/out-of-row storage settings:
SQL> select table_name
2 , cast(substr(column_name, 1, 30) as varchar2(30))
3 , in_row
4 from user_lobs
5 where table_name = 'T'
6 /
TABLE_NAME CAST(SUBSTR(COLUMN_NAME,1,30)A IN_
------------------------------ ------------------------------ ---
T CLOB_IN_TABLE YES
T CLOB_OUT_OF_TABLE NO
EDIT 2, some references
See LOB Storage in Oracle Database Application Developer's Guide - Large Objects for more information on defining LOB storage, especially the third note that talks about what can be changed:
Note:
Only some storage parameters can be modified. For example, you
can use the ALTER TABLE ... MODIFY LOB statement to change RETENTION,
PCTVERSION, CACHE or NO CACHE LOGGING or NO LOGGING, and the STORAGE
clause.
You can also change the TABLESPACE using the ALTER TABLE ...
MOVE statement.
However, once the table has been created, you cannot change the CHUNK
size, or the ENABLE or DISABLE STORAGE IN ROW settings.
Also, LOBs in Index Organized Tables says:
By default, all LOBs in an index organized table created without an overflow segment will be stored out of line. In other words, if an index organized table is created without an overflow segment, then the LOBs in this table have their default storage attributes as DISABLE STORAGE IN ROW. If you forcibly try to specify an ENABLE STORAGE IN ROW clause for such LOBs, then SQL will raise an error.
This explains why jonearles did not see 4,000 in the data_length column when he created the LOB in an index organized table.
CLOBs don't have a specified length. When you query ALL_TAB_COLUMNS, e.g.:
select table_name, column_name, data_length
from all_tab_columns
where data_type = 'CLOB';
You'll notice that data_length is always 4000, but this should be ignored.
The minimum size of a CLOB is zero (0), and the maximum is anything from 8 TB to 128 TB depending on the database block size.
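That cap works out to (2^32 - 1) times the database block size, so you can estimate it for your own instance with a query like this sketch (assumes SELECT access on v$parameter):
-- rough check of the theoretical LOB cap for this database
select value as block_size,
       round((power(2, 32) - 1) * to_number(value) / power(1024, 4)) as approx_max_lob_tb
  from v$parameter
 where name = 'db_block_size';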
As ik_zelf and Jeffrey Kemp pointed out, CLOBs can store less than 4000 bytes.
But why are CLOB data_lengths not always 4000? The number doesn't actually limit the CLOB, but you're probably right to worry about the metadata being different on your servers. You might want to run DBMS_METADATA.GET_DDL on the objects on all servers and compare the results.
I was able to create a low data_length by adding a CLOB to an index organized table.
create table test
(
column1 number,
column2 clob,
constraint test_pk primary key (column1)
)
organization index;
select data_length from user_tab_cols
where table_name = 'TEST' and column_name = 'COLUMN2';
On 10.2.0.1.0, the result is 116.
On 11.2.0.1.0, the result is 476.
Those numbers don't make any sense to me and I'd guess it's a bug. But I don't have a good understanding of the different storage options, maybe I'm just missing something.
Does anybody know what's really going on here?
