Storing table data as a BLOB column in a different table (Oracle)

Requirement: We have around 500 tables, and roughly 10k rows in each table are of interest. We want to store this data as a BLOB in a table. All of the data, when exported to a file, comes to about 250 MB. One option is to store this 250 MB file in a single BLOB (Oracle allows 4 GB); the other is to store each table's data as its own BLOB, i.e. one row per source table, with the BLOB column holding that table's data.
Which option is better in terms of performance? Note that this data also needs to be fetched back and inserted into a database.
Basically, this will be delivered to a customer, and our utility will read the data from the BLOB and insert it into the customer's database.
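For illustration only (these are not our real names), the per-table variant we have in mind would be a staging table along these lines:

-- Hypothetical staging table: one row per exported source table.
-- Table and column names are placeholders, not our actual schema.
CREATE TABLE table_export_blobs (
    source_table_name  VARCHAR2(128) NOT NULL,
    export_date        DATE DEFAULT SYSDATE,
    table_data         BLOB,
    CONSTRAINT pk_table_export_blobs PRIMARY KEY (source_table_name)
);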
Questions:
1) How do we insert table data as a BLOB into a BLOB column?
2) How do we read from that BLOB column and then prepare the INSERT statements?
3) Is there any benefit from compressing the table that contains the BLOB data? If yes, how do we uncompress it when reading?
4) Will this approach also work on MSSQL and DB2?
What other considerations are there when designing tables that hold BLOBs?
Please suggest.

I have the impression that you want to go from structured content to unstructured content.
I hope you know what you are trading off, but reading your question I am not sure that you do.
Going with BLOBs, you lose the relationships and constraints between values.
It can be faster to read one block of data, but when you need to make a minor change you may have to rewrite a much bigger chunk if the BLOBs are large.
To insert a BLOB into the database you can use any available API (OCI, JDBC, or even PL/SQL if you only access it on the server side).
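For questions 1 and 2, a minimal PL/SQL sketch (it assumes a staging table like the hypothetical table_export_blobs in the question; real code would loop over the exported data instead of a hard-coded sample):

-- 1) Insert: create an empty LOB locator, then append the exported bytes to it.
DECLARE
    l_blob  BLOB;
    l_data  RAW(100) := UTL_RAW.CAST_TO_RAW('col1,col2,col3' || CHR(10));
BEGIN
    INSERT INTO table_export_blobs (source_table_name, table_data)
    VALUES ('EMPLOYEES', EMPTY_BLOB())
    RETURNING table_data INTO l_blob;

    DBMS_LOB.WRITEAPPEND(l_blob, UTL_RAW.LENGTH(l_data), l_data);
    COMMIT;
END;
/

-- 2) Read: fetch the locator and read it back in chunks, parsing each chunk
--    to build the INSERT statements for the target database.
DECLARE
    l_blob    BLOB;
    l_buffer  RAW(32767);
    l_amount  PLS_INTEGER;
    l_offset  INTEGER := 1;
    l_length  INTEGER;
BEGIN
    SELECT table_data INTO l_blob
      FROM table_export_blobs
     WHERE source_table_name = 'EMPLOYEES';

    l_length := DBMS_LOB.GETLENGTH(l_blob);
    WHILE l_offset <= l_length LOOP
        l_amount := 32767;
        DBMS_LOB.READ(l_blob, l_amount, l_offset, l_buffer);
        -- parse l_buffer here and emit the INSERT statements
        l_offset := l_offset + l_amount;
    END LOOP;
END;
/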
For compression, you can use Oracle's built-in LOB compression option. Alternatively, you can do the compression yourself with a library (useful if you need to cater for other RDBMS types).
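A sketch of the built-in route, assuming Oracle 11g or later with SecureFiles (note that COMPRESS on a SecureFile LOB needs the Advanced Compression option, so check your licensing); decompression on read is transparent to the client:

-- Compressed variant of the hypothetical staging table sketched in the question.
CREATE TABLE table_export_blobs (
    source_table_name  VARCHAR2(128) PRIMARY KEY,
    table_data         BLOB
)
LOB (table_data) STORE AS SECUREFILE (
    COMPRESS HIGH
);

For the DIY route, UTL_COMPRESS.LZ_COMPRESS and LZ_UNCOMPRESS can compress and uncompress a BLOB inside PL/SQL, but then the reading utility has to know to uncompress the data itself.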

Why do you want to store a table into a BLOB? For archiving or transfer you could export the tables using exp, or preferably expdp. You can then compress those dump files and transfer them, or store them as BLOBs inside another Oracle database.
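For example, once expdp has written the dump file to a directory object, loading that file into a BLOB column could look roughly like this (the directory object, file name, and dump_archive table are placeholders for the sketch):

DECLARE
    l_bfile  BFILE := BFILENAME('DATA_PUMP_DIR', 'export_tables.dmp');
    l_blob   BLOB;
    l_dest   INTEGER := 1;
    l_src    INTEGER := 1;
BEGIN
    -- dump_archive(file_name VARCHAR2, file_data BLOB) is a made-up archive table
    INSERT INTO dump_archive (file_name, file_data)
    VALUES ('export_tables.dmp', EMPTY_BLOB())
    RETURNING file_data INTO l_blob;

    DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
    DBMS_LOB.LOADBLOBFROMFILE(dest_lob    => l_blob,
                              src_bfile   => l_bfile,
                              amount      => DBMS_LOB.LOBMAXSIZE,
                              dest_offset => l_dest,
                              src_offset  => l_src);
    DBMS_LOB.CLOSE(l_bfile);
    COMMIT;
END;
/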
The maximum size of a LOB was 4 GB up to Oracle release 9, as far as I remember. Today the limit is 8 TB to 128 TB, depending on your DB block size.

Related

Oracle CLOB data type to Redshift data type

We are in the process of migrating Oracle tables to Redshift tables. We found that a few tables have a CLOB data type. In Redshift we converted CLOB to the Varchar(65535) type. While running the COPY command, we are getting:
The length of the data column investigation_process is longer than the length defined in the table. Table: 65000, Data: 90123.
Which data type should we use? Please share your suggestions.
Redshift isn't designed to store CLOB (or BLOB) data. Most databases that do support it store the CLOB separately from the table contents so as not to burden every query with the excess data: a CLOB reference is stored in the table contents, and the reference is swapped for the CLOB when the result set is generated.
CLOBs should be stored in S3, with a reference to the appropriate CLOB (its S3 key) stored in the Redshift table. The issue is that, AFAIK, there isn't a prepackaged tool for doing this CLOB-for-reference replacement with Redshift. Your solution will need some retooling to perform these replacement actions for all data users. It's doable; it's just going to take a data layer that performs the needed replacement.
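A rough sketch of that pattern (table, column names, and the key format are invented): the Redshift table keeps only a pointer to the object in S3, and the data layer resolves the pointer whenever the CLOB body is actually needed.

-- Store a reference to the CLOB's S3 object instead of the CLOB itself.
CREATE TABLE investigations (
    investigation_id          BIGINT,
    investigation_summary     VARCHAR(1000),
    investigation_process_s3  VARCHAR(1024)  -- e.g. 's3://my-bucket/clobs/investigations/12345.txt'
);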

Migration issue while using DMS. Incorrect junk data for empty columns

While migrating from MySQL to Oracle using the AWS DMS service: on the source side (MySQL DB instance), some huge column (mediumtext) values are empty for 75% of the rows in a table, whereas in the target (Oracle) they are migrated with some other value (not junk values). To me it looks like the column values are copied incorrectly between rows.
Wherever there are empty values in the source-side columns, DMS copied some other data. Around 75% of the table data for some of the CLOB columns that are empty on the source side is incorrectly mapped to some other data on the Oracle side. We used FULL LOB mode and 10000 KB as the chunk size.
Some questions or requests -
1. Could you share the table DDL from source and target?
2. Are you sure there is no workload running on the target that could change values in the table outside the DMS process?
3. Full LOB mode migrates LOBs in chunks. Why are we specifying such a high LOB chunk size? Also, do we not know the max LOB size, so that limited LOB mode could be used instead?
4. Could you paste the task ARN here? I work for AWS DMS and can look to see what is going on. Once I find the root cause, I will also make sure I post an analysis here for all Stack Overflow users.
Let me know.

SQL Server 2008 R2 Express 10 GB file size limit

I have reached the file size limit on my SQL Server 2008 R2 Express database, which I believe is 10 GB. I know this because I see Event ID 1101 in the event log:
Could not allocate a new page for database 'ExchangeBackup' because of insufficient disk space in filegroup 'PRIMARY'
I have removed some historic data to work around the problem for now, but it is only a temporary fix. One table (PP4_MailBackup) is much larger than the others, so when I created this database 12 months ago I converted it to a FILESTREAM table, with the data stored outside the filegroup in the file system. This appeared to be working successfully until I received the error and new data was no longer being added to my database.
When I run a report on table sizes, I see the Reserved (KB) column adds up to almost 10 GB.
The folder that holds my FILESTREAM data is 176 GB.
The database .mdf file is indeed 10 GB.
Does anyone have any idea why the table PP4_MailBackup is still using nearly 7GB?
Here is the "Standard Reports -> Disk Usage report" for this database:
Thanks in advance
David
Update
Here is some more info.
There are 868,520 rows in this table.
This command returns 1, so I'm assuming ANSI_PADDING is on. I have never changed this from the default.
SELECT SESSIONPROPERTY('ANSI_PADDING')
The columns are defined like this
Even if every record for every column filled the full record size, by my rough calculation the table would be around 4,125,470,000 bytes. I understand that the nvarchar columns only use the actual space required.
I'm still missing a lot of space.
Not really an answer but more of a conclusion.
I have given up on this problem and resigned myself to removing data to stay under the 10 GB primary file size limit. I figured out that the nvarchar columns store 2 bytes per character in order to handle Unicode characters, although they do only use the space required and don't pad the column out with spaces. So this accounts for some of the space I couldn't find.
I tried to convert my char(500) columns to varchar(500) by adding new columns of the correct type, copying the data into them, and then removing the old columns. This worked, but the table actually got bigger, because removing a column is only a metadata change and does not actually remove the data. To recover the space I would need to create a new table, copy the data across, and then remove the old table; of course I don't have enough space in the primary file to do that.
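For reference, the rebuild I had in mind would have looked something like this (SQL Server syntax; the real column list isn't shown here, so Id and MailBody are placeholders, with varchar(500) where the old table used char(500)):

-- Build a new table with the corrected types, copy the rows, then swap names.
-- Needs enough space for both copies to exist at the same time.
CREATE TABLE dbo.PP4_MailBackup_new (
    Id       INT          NOT NULL,
    MailBody VARCHAR(500) NULL
);

INSERT INTO dbo.PP4_MailBackup_new (Id, MailBody)
SELECT Id, MailBody
FROM   dbo.PP4_MailBackup;

DROP TABLE dbo.PP4_MailBackup;
EXEC sp_rename 'dbo.PP4_MailBackup_new', 'PP4_MailBackup';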
I thought about copying the table to tempdb, removing the original table, and then copying it back, but tempdb doesn't support FILESTREAM columns (at least to my knowledge), so I would need to hold all 170 GB within the tempdb table. This sounded like a dubious solution, and my test server didn't have enough space on the partition where tempdb was stored. I couldn't find anything on the file size limit of tempdb on SQL Server 2008 Express, but at this point it was all getting too hard.

Compress Oracle table

I need to compress a table. I used alter table tablename compress to compress the table. After doing this the table size remained the same.
How should I be compressing the table?
To compress the old blocks of the table use:
alter table table_name move compress;
This will reinsert the records into other blocks, compressed, and discard the old blocks, so you'll gain space. It also invalidates the indexes, so you will need to rebuild them.
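For example (table and index names are placeholders), the full sequence might be:

alter table my_table move compress;

-- the move leaves the table's indexes UNUSABLE, so check them and rebuild
select index_name, status from user_indexes where table_name = 'MY_TABLE';

alter index my_table_pk rebuild;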
Compress does not affect already stored rows. Please check the official documentation:
" You specify table compression with the COMPRESS clause of
the CREATE TABLE statement. You can enable compression for an existing
table by using this clause in an ALTER TABLE statement. In this case,
the only data that is compressed is the data inserted or updated after
compression is enabled..."
ALTER TABLE t MOVE COMPRESS is a valid answer. But if you use different, non-default options, especially with a big data volume, do regression tests before using ALTER TABLE ... MOVE.
Historically there were more problems (performance degradations and bugs) with it. If you have access, look in the Oracle bug database to see whether there are known problems for the features and version you use.
You are on the safer side if you: create a new table, insert the data from the original (old) table, drop the old table, and rename the new table to the old table name, as sketched below.
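Roughly (table names are placeholders), that safer path could be:

-- build a compressed copy, verify it, then swap it in for the original
create table my_table_new compress
as select * from my_table;

-- re-create indexes, constraints and grants on my_table_new before this point
drop table my_table;

alter table my_table_new rename to my_table;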

Poor performance with ODP.net parameter array inserts when record contains BLOBs

I use ODP.net parameter arrays to achieve batch inserts of records. This performs very well when the records don't contain a BLOB column - typically about 10,000 records can be inserted in one second.
If a record contains a BLOB column, performance is poor - about 1,000 records take 8 seconds.
Is there any method to batch insert records with a BLOB column efficiently?
I found that I was using ODP.net to insert records with a BLOB column incorrectly.
When inserting records with a BLOB column, I was using a byte array to hold the BLOB value; this approach performs poorly.
I switched to a different approach and used the OracleBlob type to hold the BLOB value.
With this approach, batch inserting records performs well.
