Oracle CLOB data type to Redshift data type

We are in the process of migrating Oracle tables to Redshift. We found that a few tables have CLOB columns. In Redshift we converted CLOB to VARCHAR(65535). While running the COPY command, we get:
The length of the data column investigation_process is longer than the length defined in the table. Table: 65000, Data: 90123.
Which data type should we use? Please share your suggestions.

Redshift isn't designed to store CLOB (or BLOB) data. Most databases that do store the CLOB separately from the table contents so that every query isn't burdened with the excess data; only a CLOB reference is kept in the table contents, and the reference is swapped for the actual CLOB when results are generated.
CLOBs should be stored in S3, with a reference to the appropriate CLOB (its S3 key) stored in the Redshift table. The issue is that, AFAIK, there isn't a prepackaged tool for doing this CLOB-for-reference replacement with Redshift. Your solution will need some retooling to perform the replacement for all data users. It's doable; it's just going to take a data layer that performs the needed replacement.
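A minimal sketch of what that could look like on the Redshift side, assuming a hypothetical investigation table, S3 bucket, and IAM role (all names below are illustrative, not from the original question):

-- Store only a pointer to the CLOB body; the body itself lives as an object in S3.
CREATE TABLE investigation (
    investigation_id      BIGINT,
    investigation_s3_key  VARCHAR(1024),   -- e.g. 'investigations/12345.txt' in an assumed bucket
    investigation_preview VARCHAR(65535)   -- optional truncated copy for in-warehouse searching
);

-- COPY then only has to load the key (and optional preview), never the 90 KB body:
COPY investigation
FROM 's3://my-bucket/exports/investigation_refs/'          -- assumed staging prefix
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS CSV;

The data layer (or consuming application) then fetches the object from S3 by key whenever the full text is actually needed.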

Related

I was migrating data from Oracle to S3 using AWS DMS; DMS is reading the LONG datatype as LOB and skipping the entire column and its data

I was migrating data from an Oracle DB to AWS S3 using DMS. In one of the tables in the Oracle DB, a column has the LONG datatype, but while reading and transferring to S3, DMS logs the message below (actual column and table names replaced).
Column 'sample_column' was removed from table definition 'sample_table: the column data type is LOB and the table has no primary key or unique index.
But I verified that the source datatype for sample_column was LONG.
How do I resolve this issue?
From the documentation: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html
Oracle data type: LONG
AWS DMS data type: CLOB
The LONG data type isn't supported in batch-optimized apply mode (TurboStream CDC mode). To use this data type with AWS DMS, you must enable the use of LOBs for a specific task. During CDC, AWS DMS supports LOB data types only in tables that have a primary key.
So the type conversion appears to be expected, and the real issue here is that you have no primary key on the table.
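If adding a key on the source side is an option, a sketch like the following (table and column names are just placeholders taken from the redacted log message) satisfies the CDC requirement:

-- DMS only replicates LOB-mapped columns during CDC when the table has a
-- primary key or unique index, so add one on the Oracle source.
ALTER TABLE sample_table
  ADD CONSTRAINT sample_table_pk PRIMARY KEY (sample_id);

You also still need to enable LOB support in the task settings (e.g. limited LOB mode with a large enough maximum LOB size) so the converted CLOB column is carried across.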

Edit RAW column in Oracle SQL Developer

I am using Oracle SQL Developer 18.3, but when I want to edit (or insert) a column with the RAW datatype, it shows the field as read-only and does not allow editing.
As you may know, Oracle SQL Developer shows the RAW datatype as a hex string, unlike the BLOB datatype, for which it does not show the value but lets you download and upload the BLOB data.
I know that I can update (or insert) the RAW data as a hex string like this:
CREATE TABLE t1(the_id NUMBER PRIMARY KEY, raw_col RAW(2000));
INSERT INTO t1(the_id, raw_col) VALUES(1, '1a234c');
But I want to do it through the Oracle SQL Developer GUI.
Sorry, we do not have a 'raw' editor like we have for BLOBs, so it's up to using SQL.
If you want a reason for that omission, it's partly due to the fact that RAW is not a commonly used data type in Oracle Database.
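Using the t1 table from the question, a plain UPDATE is enough; HEXTORAW simply makes the hex-to-RAW conversion explicit (the implicit conversion shown in the question's INSERT works too; this is just a sketch):

-- Update the RAW column by supplying the new value as hex.
UPDATE t1
   SET raw_col = HEXTORAW('DEADBEEF')
 WHERE the_id = 1;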
Related: if you're talking about LONG RAW
We (Oracle) recommend you stop using it and instead convert those columns to BLOBs.
The LONG RAW datatype is provided for backward compatibility with existing applications. For new applications, use the BLOB and BFILE datatypes for large amounts of binary data. Oracle also recommends that you convert existing LONG RAW columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG RAW functionality has been static for several releases.
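For reference, the conversion itself is a one-liner: Oracle allows changing a LONG RAW column to BLOB (and LONG to CLOB) in place with ALTER TABLE ... MODIFY. Table and column names below are illustrative:

-- Convert an existing LONG RAW column to BLOB in place.
ALTER TABLE legacy_docs MODIFY (payload BLOB);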

How to export data from tables with BLOBs as SQL inserts

I need to export data from one schema and import it into another. But in the second schema the tables have different names, different attribute names, etc., although they are suitable for the data from the first schema. So I export the data as SQL inserts and manually rewrite the names, etc., in these inserts.
My problem is with tables that have BLOB columns. PL/SQL Developer throws this error:
Table MySchema.ENT_NOTIFICATIONS contains one or more BLOB columns.
Cannot export in SQL format, use PL/SQL Developer format instead.
But when I use the PL/SQL Developer format (.pde), it is some kind of raw byte data and I can't change what I need.
Is there any solution for this?
Note: I use PL/SQL Developer 10.0.5.1710 and Oracle Database 12c.

Copy data from table with LONG RAW column from one database to another database

I need to create a JOB in Pentaho Kettle to automate copying data from one database to another. I am facing a problem while copying data from a table containing a LONG RAW column.
I have tried the things listed below:
I used the copy table wizard, but I get the error "ORA-01461: can bind a LONG value only for insert into a LONG column" while copying the table containing the LONG RAW column. The tables in both databases are exactly the same.
I tried creating an Oracle function to insert the LONG RAW data via PL/SQL by binding the LONG RAW column.
I am calling the Oracle function in the "Execute SQL script" step in Pentaho:
select function_name(parameter1, parameter2, long_raw_column, ...) from dual;
But I get the error "String literal too long".
Any suggestion on how to copy LONG RAW data of around 89330 bytes from one table to another?
Tom Kyte writes:
August 26, 2008 - 7pm UTC:
long raw, not going to happen over a database link
(https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1101164500346436784)
You can try to create a "temporary" staging table in the source DB where you convert the LONG RAW column to a BLOB using the TO_LOB function (TO_LOB turns LONG RAW into BLOB and LONG into CLOB).
Then you can transfer the data to the destination DB.
Then, if you absolutely need the LONG RAW data type, you can convert back using the method described here - Copying data from LOB Column to Long Raw Column
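A minimal sketch of that staging approach, assuming a source table t_source(id, long_raw_col) and a database link named source_link (both names are made up here):

-- On the source DB: TO_LOB is only allowed in a CREATE TABLE ... AS SELECT
-- (or INSERT ... SELECT), which is exactly what we need for staging.
CREATE TABLE t_source_stage AS
SELECT id,
       TO_LOB(long_raw_col) AS blob_col   -- LONG RAW becomes BLOB
FROM   t_source;

-- On the destination DB: BLOBs can be pulled across a database link in a CTAS,
-- which LONG RAW cannot.
CREATE TABLE t_dest_stage AS
SELECT id, blob_col
FROM   t_source_stage@source_link;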

Storing table data as blob in column in different table(Oracle)

Requirement: We have around 500 tables, of which around 10k rows in each table are of interest. We want to store this data as a BLOB in a table. All the data, when exported to a file, comes to about 250 MB. Now one option is to store this 250 MB file in a single BLOB (Oracle allows 4 GB), or to store each table's data as a BLOB in a BLOB column, i.e. we will have one row per table and the BLOB column will hold that table's data.
With respect to performance, which option is better? Also, this data needs to be fetched and inserted into a database.
Basically, this will be delivered to the customer, and our utility will read the data from the BLOB and insert it into the database.
Questions:
1) How do we insert table data as a BLOB into a BLOB column?
2) How do we read from that BLOB column and then prepare insert statements?
3) Is there any benefit from compressing the table that contains the BLOB data? If yes, how do we decompress it when reading?
4) Will this approach also work on MSSQL and DB2?
What are the other considerations when designing tables with BLOB columns?
Please suggest.
I have the impression you want to go from structured content to unstructured. I hope you know what you are trading off, but I do not get that impression from reading your question.
Going the BLOB route, you lose the relationships / constraints between values.
It could be faster to read one block of data, but when you need to write a minor change, you may need to rewrite a bigger "chunk" in the case of big BLOBs.
To insert a BLOB into the database you can use any available API (OCI, JDBC, even PL/SQL if you access it only on the server side).
For compression, you can use the BLOB storage options (e.g. SecureFiles compression). Also, you can DIY using some library (if you need to think about other RDBMS types).
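For question 1, a server-side PL/SQL sketch could look like the following, assuming a directory object EXPORTS_DIR and a staging table table_dumps(table_name, dump_blob); both names are hypothetical:

-- Load a per-table export file from the filesystem into a one-row-per-table BLOB.
DECLARE
  l_bfile BFILE := BFILENAME('EXPORTS_DIR', 'emp_data.dmp');
  l_blob  BLOB;
BEGIN
  INSERT INTO table_dumps (table_name, dump_blob)
  VALUES ('EMP_DATA', EMPTY_BLOB())
  RETURNING dump_blob INTO l_blob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/

Reading it back (question 2) is the reverse: fetch the BLOB and let the utility parse or stream its contents while generating the insert statements.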
Why do you want to store a table in a BLOB? For archiving or transfer you could export the tables using exp, or preferably expdp. These files you can compress and transfer, or store as a BLOB inside another Oracle database.
The max size of a LOB was 4 GB up to Oracle release 9, as far as I remember. Today the limit is 8 TB to 128 TB, depending on your DB block size.
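On question 3 (compression): if the BLOB column is stored as a SecureFiles LOB, Oracle can compress it transparently at the storage layer, so reads need no explicit decompression step (this requires the Advanced Compression option; the DDL below is only an illustrative sketch with made-up names):

-- Staging table with a transparently compressed SecureFiles BLOB column.
CREATE TABLE table_dumps (
  table_name VARCHAR2(128) PRIMARY KEY,
  dump_blob  BLOB
)
LOB (dump_blob) STORE AS SECUREFILE (COMPRESS MEDIUM);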
