PL/SQL to insert history row with LONG RAW column in Oracle

I have a LONG RAW column in an Oracle table. Insert with select is not working because the LONG RAW column is part of my select statement as well. Basically I am trying to insert a history row with a couple of parameters changed, so I was thinking of using PL/SQL in Oracle. I have no experience in PL/SQL, and I haven't found anything after googling for a couple of days. Can anyone help me with a sample PL/SQL block for my problem? Thanks in advance!

LONG and LONG RAW datatypes are deprecated, and have been for many years now. You really are much better off getting away from them.
Having said that, if you're using PL/SQL you will be limited to 32,760 bytes of data, which is the maximum a LONG RAW PL/SQL variable can hold. The LONG RAW database datatype, however, can hold up to 2 GB of data. So, if any rows in your table contain data longer than 32,760 bytes, you will not be able to retrieve them using PL/SQL. This is a fundamental limitation of the LONG and LONG RAW datatypes, and one of the reasons Oracle has deprecated their use.
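To make that concrete, here is a minimal sketch of what the PL/SQL-side limit looks like (your_table, long_raw_col and id are placeholder names, not taken from the question):
DECLARE
  l_raw LONG RAW;  -- a PL/SQL LONG RAW variable holds at most 32,760 bytes
BEGIN
  SELECT long_raw_col
    INTO l_raw
    FROM your_table
   WHERE id = 1;
  -- if the stored value is longer than 32,760 bytes, this SELECT raises an error
END;
/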
In that case, the only options are Pro*C or OCI.
More information can be found here:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/datatypes.htm#CJAEGDEB
Hope that helps.

You can work with a LONG RAW column directly in PL/SQL if your data is limited to 32kB:
FOR cc IN (SELECT col1, col2... col_raw FROM your_table) LOOP
  INSERT INTO your_other_table (col1, col2... col_raw)
  VALUES (cc.col1, cc.col2... cc.col_raw);
END LOOP;
This will fail if any LONG RAW is larger than 32k.
In that case you will have to use another language. You could use java since it is included in the DB. I already answered a couple of questions on SO with LONG RAW and java:
Copying data from LOB Column to Long Raw Column (will work with LONG RAW to LONG RAW too, just replace the UPDATE with an INSERT)
Get the LENGTH of a LONG RAW
In any case as you have noticed it is a pain to work with this data type. If converting to LOB is not possible you will have to use a workaround.

Related

Oracle Temporary Table to convert a Long Raw to Blob

Questions have been asked in the past that seem to handle pieces of my full question, but I'm not finding a fully satisfying answer.
Here is the situation:
I'm importing data from an old, but operational and production, Oracle server.
One of the columns is created as LONG RAW.
I will not be able to convert the table to a BLOB.
I would like to use a global temporary table to pull out data each time I call to the server.
This feels like a good answer, from here: How to create a temporary table in Oracle
CREATE GLOBAL TEMPORARY TABLE newtable
ON COMMIT PRESERVE ROWS
AS SELECT
MYID, TO_LOB("LONGRAWDATA") "BLOBDATA"
FROM oldtable WHERE .....;
I do not want the table hanging around, and I'd only do a chunk of rows at a time, to pull out the old table in pieces, each time killing the table. Is it acceptable behavior to do the CREATE, then do the SELECT, then DROP?
Thanks..
--- EDIT ---
Just as a follow up, I decided to take an even different approach to this.
Branching the strong-oracle package, I was able to do what I originally hoped to do, which was to pull the data directly from the table without doing a conversion.
Here is the issue I've posted. If I am allowed to publish my code to a branch, I'll post a follow up here for completeness.
Oracle ODBC Driver Release 11.2.0.1.0 says that Prefetch for LONG RAW data types is supported, which is true.
One caveat is that LONG RAW can technically be up to 2GB in size. I had to set a hard max size of 10MB in the code, which is adequate for my use, so far at least. This probably could be a variable sent in to the connection.
This fix is a bit off original topic now however, but it might be useful to someone else.
With Oracle GTTs, it is not necessary to drop and create the table each time, and you don't need to worry about data "hanging around." In fact, it's inadvisable to drop and re-create it. The structure itself persists, but the data in it does not; the data only persists within each session. You can test this by opening two separate clients and loading data in one: you will notice it's not there in the second client.
In effect, each time you open a session, it's like you are reading a completely different table, which was just truncated.
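For example, the temporary table itself would be created just once as ordinary DDL, outside any load procedure (a minimal sketch; the column types are assumptions based on the question's MYID/BLOBDATA columns):
-- one-time DDL: the definition persists, the rows are private to each session
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  myid     NUMBER,
  blobdata BLOB
) ON COMMIT PRESERVE ROWS;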
If you want to empty the table within your stored procedure, you can always truncate it. Within a stored proc, you will need to use EXECUTE IMMEDIATE to do this.
This is really handy, but it also can make debugging a bear if you are implementing GTTs through code.
Out of curiosity, why a chunk at a time and not the entire dataset? What kind of data volumes are you talking about?
-- EDIT --
Per our comments conversation, this is very raw and untested, but I hope it will give you an idea what I mean:
CREATE OR REPLACE PROCEDURE LOAD_DATA
AS
  TOTAL_ROWS number;
  TABLE_ROWS number := 1;
  ROWS_AT_A_TIME number := 100;
BEGIN
  select count(*)
    into TOTAL_ROWS
    from oldtable;

  WHILE TABLE_ROWS <= TOTAL_ROWS
  LOOP
    execute immediate 'truncate table MY_TEMP_TABLE';

    -- ROWNUM is assigned before the outer WHERE filter is evaluated, so a plain
    -- "WHERE ROWNUM BETWEEN x AND y" returns no rows once x > 1; the batch window
    -- is taken from an inline view over ROWID instead.
    insert into MY_TEMP_TABLE
    SELECT MYID, TO_LOB(LONGRAWDATA) as BLOBDATA
      FROM oldtable o
     WHERE o.ROWID IN (
             SELECT RID
               FROM (SELECT ROWID RID, ROWNUM RN FROM oldtable)
              WHERE RN BETWEEN TABLE_ROWS AND TABLE_ROWS + ROWS_AT_A_TIME - 1
           );

    insert into MY_REMOTE_TABLE@MY_REMOTE_SERVER
    select * from MY_TEMP_TABLE;

    commit;

    TABLE_ROWS := TABLE_ROWS + ROWS_AT_A_TIME;
  END LOOP;

  commit;
end LOAD_DATA;

Import blob through SAS from ORACLE DB

Good day to everyone.
I ran into a big problem during my work last week.
Here is the deal:
I need to download an Excel file (a BLOB) from an Oracle database through SAS.
Here is what I am using. As a first step I need to get the data out of Oracle, so I used this construction (the BLOB is nearly 100 KB):
proc sql;
connect to oracle;
create table SASTBL as
select * from connection to oracle (
select dbms_lob.substr(myblobfield,1,32767) as blob_1,
dbms_lob.substr(myblobfield,32768,32767) as blob_2,
dbms_lob.substr(myblobfield,65535,32767) as blob_3,
dbms_lob.substr(myblobfield,97302,32767) as blob_4
from my_tbl;
);
quit;
And the result is:
blob_1 = 70020202020202...02
blob_2 = 02020202020...02
blob_3 = 02020202...02
I do not understand why the field consists of "02" for the whole file.
Also, the length of each variable in SAS is 1024 (instead of 32,767), with a $HEX2024 format.
If I instead take
dbms_lob.substr(my_blob_field,2000,900) from the same object, the result is much closer to the truth:
blob = "A234ABC4536AE7...."
The question is: how can I get binary data from a BLOB field correctly through SAS? What is my mistake?
Thank you.
EDIT 1:
I get the information now, but the maximum string length is 2,000.
Use the DBMAX_TEXT option on the CONNECT statement (or a LIBNAME statement) to get up to 32,767 characters. The default is probably 1024.
PROC SQL uses SQL to interact with SAS datasets (create tables, query tables, aggregate data, connect externally, etc.). The procedure mostly follows the ANSI standard with a few SAS-specific extensions. Each RDBMS extends ANSI in its own way, including Oracle with its XML handling, such as saving content in a BLOB column. Possibly, SAS cannot properly read the Oracle-specific (non-ANSI) binary large object type; typically SAS processes string, numeric, datetime, and a few other types.
As an alternative, consider saving XML content from Oracle externally as an .xml file and use SAS's XML engine to read content into SAS dataset:
** STORING XML CONTENT;
libname tempdata xml 'C:\Path\To\XML\File.xml';
** APPEND CONTENT TO SAS DATASET;
data Work.XMLData;
set tempdata.NodeName; /* CHANGE TO REPEAT PARENT NODE OF XML. */
run;
Adding as another answer as I can't comment yet... The issue you experienced is that the return of dbms_lob.substr is actually a VARCHAR2, so SAS limits it to 2,000 characters. To avoid this, you could wrap it in to_clob( ... ) and set the DBMAX_TEXT option as previously answered.
Another alternative is below...
The code below is an effective method for retrieving a single record with a large CLOB. Instead of calculating how many fields to split the CLOB into, which results in a very wide record, it splits the CLOB into multiple rows. See the expected output at the bottom.
Disclaimer: although effective, it may not be efficient, i.e. it may not scale well to many rows; the generally accepted approach in that case is pipelined PL/SQL. That said, the code below got me out of a pinch when creating a procedure wasn't an option...
PROC SQL;
connect to oracle (authdomain=YOUR_Auth path=devdb DBMAX_TEXT=32767 );
create table clob_chunks (compress=yes) as
select *
from connection to Oracle (
SELECT id
, key
, level clob_order
, regexp_substr(clob_value, '.{1,32767}', 1, level, 'n') clob_chunk
FROM (
SELECT id, key, clob_value
FROM schema.table
WHERE id = 123
)
CONNECT BY LEVEL <= regexp_count(clob_value, '.{1,32767}',1,'n')
)
order by id, key, clob_order;
disconnect from oracle;
QUIT;
Expected output:
ID KEY CLOB_ORDER CLOB_CHUNK
1 1 1 short_clob
2 2 1 long clob chunk1of3
2 2 2 long clob chunk2of3
2 2 3 long clob chunk3of3
3 3 1 another_short_one
Explanation:
DBMAX_TEXT tells SAS to adjust the default of 1024 for a clob field.
The regex .{1,32767} tells Oracle to match any character at least once but at most 32,767 times; this splits the input into chunks and also captures the final chunk, which is likely to be shorter than 32,767 characters.
regexp_substr pulls a chunk from the CLOB: the first argument is the CLOB, the second the pattern, the third the starting position (1, the start of the CLOB), the fourth the occurrence (LEVEL, i.e. the LEVEL-th chunk), and the 'n' match parameter treats the CLOB as one large string by letting '.' match newlines.
The CONNECT BY re-runs the regex via regexp_count to count the chunks, so LEVEL stops incrementing at the end of the CLOB.
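A small self-contained illustration of the same pattern, with the chunk size shrunk to 10 characters so the split is easy to see:
SELECT LEVEL AS chunk_no,
       REGEXP_SUBSTR('abcdefghijklmnopqrstuvwxyz', '.{1,10}', 1, LEVEL, 'n') AS chunk
  FROM dual
CONNECT BY LEVEL <= REGEXP_COUNT('abcdefghijklmnopqrstuvwxyz', '.{1,10}', 1, 'n');
-- returns three rows: 'abcdefghij', 'klmnopqrst', 'uvwxyz'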
References:
SAS KB article for DBMAX_TEXT
Oracle docs for REGEXP_COUNT
Oracle docs for REGEXP_SUBSTR
Oracle regex syntax
Stackoverflow example of regex splitting

Temporarily storing multiple values in Oracle [duplicate]

This question already has an answer here:
Alternate method to global temp tables for Oracle Stored Procedure
(1 answer)
Closed 8 years ago.
I need a way to temporarily store and use multiple values returned from an Oracle query. In SQL Server, I stored my values in a temp table, did my work, then dropped the table. I'm discovering the Oracle equivalent isn't as clear cut.
Here's a SQL Server example of what I'm trying to do:
select id into #temp from SomeTable where SomeColumn = 'Some Value'
:
(do whatever I need to do with #temp data)
:
drop table #temp
I can code my way around SQL Server pretty well, but am almost clueless when it comes to Oracle syntax. I've been reading various Oracle references, and they haven't been very helpful. I did read that Oracle temp tables work differently than SQL, and are often not recommended.
I'm looking into the temp table route, but if there's a better way to do this that doesn't use temp tables, I'm all ears. Anyone know a better way to do this in Oracle?
Thanks in advance.
As with many things, that depends. It depends on how much data you'll be retrieving and how you want to use it. If you don't have too much data to work with ("too much" meaning, say, more than a couple of thousand rows), you want to manipulate the data procedurally, i.e. in a PL/SQL procedure or script, and you don't want to access it with SQL, i.e. you don't want to say something like SELECT * FROM your_temp_data..., then loading the data into a PL/SQL collection, as @EgorSkriptunoff mentions above, might be a workable solution.
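To make the collection idea concrete, here is a rough, untested sketch using the table and column names from your SQL Server example:
DECLARE
  TYPE t_id_tab IS TABLE OF SomeTable.id%TYPE;
  l_ids t_id_tab;
BEGIN
  -- roughly the equivalent of: select id into #temp from SomeTable where SomeColumn = 'Some Value'
  SELECT id
    BULK COLLECT INTO l_ids
    FROM SomeTable
   WHERE SomeColumn = 'Some Value';

  FOR i IN 1 .. l_ids.COUNT LOOP
    NULL;  -- do whatever you need to do with l_ids(i)
  END LOOP;
END;  -- the collection simply goes out of scope; there is nothing to drop
/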
However, if the temp data is large (more than a couple of thousand rows) and/or you need to be able to do something like SELECT * FROM your_temp_data..., then your best bet is to use Oracle's global temporary tables. A GTT is a table which is used to hold data that should only last as long as either a single transaction or a complete session (i.e. as long as you're attached to the database). Documentation here and here, and another write-up on them here.
Share and enjoy.

Insert CLOB into Oracle database from SQL Developer

I want to insert the same long string into all cells in certain column, which is CLOB type.
It said I should use "bind variables" to do it, so I googled and found this:
variable xmlstuff CLOB;
exec :xmlstuff := '<?xml version="1.0"?> ... really long xml...';
UPDATE TABLE_NAME SET COLUMN_NAME = '&&xmlstuff';
Now it still says
The string literal is longer than 4000 characters.
What is the proper use of bind variable in this case?
If you are programming using C# or Java - just use the OracleCLOB object, and it will do all the necessary steps.
If you want to use a CLOB in SQL or PL/SQL, you need to allocate it and release it after use; search for DBMS_LOB for more information.
Regarding the 4000-byte limit: this is a VARCHAR2 limit within SQL.
To bypass it you can use PL/SQL, where a VARCHAR2 can hold up to 32K. That is nowhere near the 4 GB a CLOB can hold, but it is the limit for "automatic" creation of a CLOB.
If your string is longer than 32K, you'll have to use DBMS_LOB to load the data into the CLOB object, by appending to the CLOB piece by piece.
This is the quickest link I found about how to do it: http://geekswithblogs.net/robertphyatt/archive/2010/03/24/write-read-and-update-oracle-clobs-with-plsql.aspx
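To sketch the idea (rough and untested; TABLE_NAME and COLUMN_NAME are taken from your UPDATE, and the chunking is schematic):
DECLARE
  l_clob  CLOB;
  l_chunk VARCHAR2(32767);
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);

  l_chunk := '<?xml version="1.0"?><root>';           -- first piece of the XML
  DBMS_LOB.WRITEAPPEND(l_clob, LENGTH(l_chunk), l_chunk);

  l_chunk := '...the rest of the document...</root>'; -- repeat for each piece of up to 32K
  DBMS_LOB.WRITEAPPEND(l_clob, LENGTH(l_chunk), l_chunk);

  UPDATE TABLE_NAME SET COLUMN_NAME = l_clob;
  COMMIT;

  DBMS_LOB.FREETEMPORARY(l_clob);
END;
/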
I wanted to answer quickly, so please let me know if you cannot solve your issue with this information, and I'll try to explain it better.

Oracle 11g XMLType Experience

Hello community,
we are currently evaluating possibilities to store generic data structures. We found that, at least from a functional point of view, Oracle XMLType is a good alternative to the good old BLOB, because you can query and update single fields in the XML and also create indexes on XPath expressions.
We are a bit worried about the performance of XMLType, though; the SELECT performance is especially interesting. We have queries that select multiple data structures at once, and these need to be fast.
Such a query looks something like this:
SELECT d.DOC_VALUE.getClobVal() AS XML_VALUE FROM XML_TABLE d WHERE d.ID IN ('1','2',...);
Our XML documents are 7 to 8 KB in size. We are on Oracle 11g and create the XML column with type XMLTYPE.
Do you have experience with the performance of selects on XMLType columns? What are your overall experiences with XMLType? Is this a robust and fast Oracle feature, or is it rather something immature and experimental?
Regards, Mathias
XML DB is a strong and reliable feature that we have had since 9i.
Flexible-schema XMLType columns are implemented over hidden CLOB columns, while fixed-schema XMLType columns are decomposed into hidden tables and views; in the end, XMLType is as reliable as standard objects.
Performance will vary with your usage, but just reading the whole XML document from an XMLType is as fast as reading the same content from a classic CLOB, because it really just reads and returns the XML stored in the underlying CLOB as-is.
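For example, the storage model can be chosen explicitly when the table is created; a hedged sketch using the names from your query (the column types here are assumptions, and on 11g the default storage may differ by release):
CREATE TABLE XML_TABLE (
  ID        VARCHAR2(10) PRIMARY KEY,
  DOC_VALUE XMLTYPE
)
XMLTYPE DOC_VALUE STORE AS CLOB;  -- unstructured (CLOB) storage, as described above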
You can assign an XDB.XMLIndex to an XMLType, and performance will be good after that. We had a table with only 13,000 records, each containing an XML Clob that wasn't too big, maybe 40 pages each if it was printed on text, and running simple queries without an XMLIndex often took over 10 minutes, and running more complex queries often took 2x or 3x longer!
With an XMLIndex, the performance was on par with a typical table (On the order of milliseconds for full table scans). The XMLIndex can be as complex as you need it to be, I tried this naive one and it worked fine: (I say naive because I do not fully understand the inner workings of this index type!)
CREATE INDEX myschema.my_idx_name ON myschema.mytable
(SYS_MAKEXML(0,"SYS_SOMETHING$"))
INDEXTYPE IS XDB.XMLINDEX
NOPARALLEL;
You should understand the various XMLIndex options available to you, and choose according to your needs. Documentation:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28369/xdb_indexing.htm#CHDJECDA
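For reference, this is the kind of XPath-driven query such an index is meant to accelerate (the node and attribute names are made up for illustration):
SELECT x.ID, x.DOC_VALUE.getClobVal() AS XML_VALUE
  FROM XML_TABLE x
 WHERE XMLEXISTS('/document[status="ACTIVE"]' PASSING x.DOC_VALUE);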
