Is it possible to set a BLOB size when creating table? - oracle

I am new to Oracle SQL, but I have some experience with MSSQL. I was sent a script to create some tables, but because of the BLOB columns I am getting a couple of errors when I try to limit their size.
I tried asking my co-workers about this, but they aren't sure either. This was basically grabbed from somewhere else, so they're not sure how to fix this.
Essentially, the table looks something like this (table and column names were changed):
CREATE TABLE table1
(
ID CHAR(32),
NAME CHAR(50),
KEY $(BLOB)(64),
BUFFER $(BLOB)(20),
SORTNO NUMERIC(8) CONSTRAINT UK_WIU UNIQUE,
CONSTRAINT PK_ID PRIMARY KEY (ID)
)
;
When running this, I get the error "invalid character" because of the dollar sign ($). But if I change data type to BLOB(64), I get an error saying "missing right parenthesis." If I just do "BLOB," it runs fine. Is there any way to define the length for BLOB?
Thank you

A BLOB column's size can't be restricted. It lets you store up to (4 gigabytes - 1) * (database block size) of data, and that's it. If you need less than that, fine, no problem.
More about BLOB (and other Oracle 18c datatypes) here: https://docs.oracle.com/en/database/oracle/oracle-database/18/sqlrf/Data-Types.html#GUID-1A71C635-188E-4EC9-B821-1DBEC2B45451
The table you meant to create would, in Oracle, look like this:
SQL> create table table1
2 (id varchar2(32) constraint pk_id primary key,
3 name varchar2(50),
4 sortno number(8) constraint uk_wiu unique,
5 key blob,
6 buffer blob
7 );
Table created.
SQL>
Yet another objection regarding your code: don't use CHAR but VARCHAR2 (unless you have to); CHAR will right-pad values with spaces, up to the maximum length of the column.
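A quick sketch that shows the difference (a hypothetical demo table, not part of your script):
-- CHAR blank-pads to the declared length; VARCHAR2 stores only what you insert
create table char_demo (c char(10), v varchar2(10));
insert into char_demo (c, v) values ('abc', 'abc');

select length(c) as char_len, length(v) as varchar2_len from char_demo;
-- CHAR_LEN = 10 (right-padded with spaces), VARCHAR2_LEN = 3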

No, columns are just defined as a CLOB or BLOB.
There is no size associated with the column. You can put in something as big or small as you like, with a few restrictions.
"Maximum size: (4 GB - 1) * DB_BLOCK_SIZE initialization parameter (8 TB to 128 TB)"..as of 11g of Oracle.
To limit the size, you could, do so at the application layer - prevent folks from uploading files larger than X, although someone with DB access could do this.
You could write a stored procedure for handling INSERT/UPDATES, and have logic in there to prevent DML if the size is 'too large.'
You might be able to do a check constraint using a function call to get the length of the lob...an example here
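As a sketch of the stored procedure idea (hypothetical names, assuming the table from the previous answer and a 64-byte limit on KEY):
create or replace procedure insert_table1 (
  p_id     in table1.id%type,
  p_name   in table1.name%type,
  p_key    in blob,
  p_buffer in blob
) as
begin
  -- reject LOBs larger than the agreed limit before doing the DML
  if dbms_lob.getlength(p_key) > 64 then
    raise_application_error(-20001, 'KEY must not exceed 64 bytes');
  end if;
  insert into table1 (id, name, key, buffer)
  values (p_id, p_name, p_key, p_buffer);
end;
/
Grant execute on the procedure instead of insert/update privileges on the table, and the limit can't be bypassed quite so easily.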

Related

Best way to save large column data in a data warehouse

I have a table that stores the changes to a transaction; all the changes are captured into this table. One of the columns that comes as part of the transaction can contain many comma-separated values. The number of occurrences cannot be predicted, and the field is not mandatory, so it can hold null values as well.
The total number of transactions I have in the table is around 100M. Of those, the number of records for which the value is populated is 1M, and of that 1M the number of records whose value exceeds 4000 characters is ~37K.
I mention the length 4000 because the column in my Oracle table that would hold this value is defined as varchar2(4000).
I checked in a few places and found that if I have to save something of unknown length, I should define the column as a CLOB. But a CLOB is expensive for me, since only a very small amount of the data is longer than 4000. If I snowflake my star schema and create another table to store the values, then even transactions whose value is much smaller than 4000 characters would be saved in the CLOB column. This would be expensive both in terms of storage and performance.
Can someone suggest an approach to solve this problem?
Thanks
S
You could create master-detail tables to store the comma-separated values; then you have rows rather than all the comma-separated values in a single column. This can be managed with a foreign key, using a pseudo key between the master and detail tables, as sketched below.
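A minimal sketch (hypothetical table and column names):
create table txn_master (
  txn_id number primary key
  -- ... other transaction columns ...
);

create table txn_detail (
  txn_id    number not null references txn_master (txn_id),
  seq_no    number not null,
  txn_value varchar2(4000),
  constraint pk_txn_detail primary key (txn_id, seq_no)
);
-- one row per comma-separated value instead of one long delimited string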
Here's one option.
Create two columns, e.g.
create table storage
(id number primary key,
long_text_1 varchar2(4000),
long_text_2 varchar2(4000)
);
Store values like
insert into storage (id, long_text_1, long_text_2)
values (seq.nextval,
substr(input_value, 1, 4000),
substr(input_value, 4001, 4000)
);
When retrieving them from the table, concatenate them:
select id,
long_text_1 || long_text_2 as long_text
from storage
where ...
You might benefit from using inline SecureFile CLOBs. With inline CLOBs, up to about 4000 bytes of data can be stored in the row like a regular VARCHAR2, and only the larger values are stored in a separate CLOB segment. With SecureFiles, Oracle can significantly improve CLOB performance. (For example, import and export of SecureFiles is much faster than the old-fashioned BasicFile LOB format.)
Depending on your version, parameters, and table DDL, your database may already store CLOBs as inline SecureFiles. Ensure that your COMPATIBLE setting is 11.2 or higher, and that DB_SECUREFILE is one of "permitted", "always", or "preferred":
select name, value from v$parameter where name in ('compatible', 'db_securefile') order by 1;
Use a query like this to ensure that your tables were set up correctly and nobody overrode the system settings:
select dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE_NAME') from dual;
You should see something like this in the results:
... LOB ("CLOB_NAME") STORE AS SECUREFILE (... ENABLE STORAGE IN ROW ...) ...
One of the main problems with CLOBs is that they are stored in a separate segment, and a LOB index must be traversed to map each row in the table to a value in another segment. The demo below creates two tables to show that LOB segments do not need to be used when the data is small and stored inline.
--drop table clob_test_inline;
--drop table clob_test_not_in;
create table clob_test_inline(a number, b clob) lob(b) store as securefile (enable storage in row);
create table clob_test_not_in(a number, b clob) lob(b) store as (disable storage in row);
insert into clob_test_inline select level, lpad('A', 900, 'A') from dual connect by level <= 10000;
insert into clob_test_not_in select level, lpad('A', 900, 'A') from dual connect by level <= 10000;
commit;
The inline table segment is large, because it holds all the data. The out of line table segment is small, because all of its data is held elsewhere.
select segment_name, bytes/1024/1024 mb_inline
from dba_segments
where segment_name like 'CLOB_TEST%'
order by 1;
SEGMENT_NAME       MB_INLINE
----------------   ---------
CLOB_TEST_INLINE          27
CLOB_TEST_NOT_IN       0.625
Looking at the LOB segments, the sizes are reversed. The inline table doesn't store anything in the LOB segment.
select table_name, bytes/1024/1024 mb_out_of_line
from dba_segments
join dba_lobs
on dba_segments.owner = dba_lobs.owner
and dba_segments.segment_name = dba_lobs.segment_name
where dba_lobs.table_name like 'CLOB_TEST%'
order by 1;
TABLE_NAME         MB_OUT_OF_LINE
----------------   --------------
CLOB_TEST_INLINE            0.125
CLOB_TEST_NOT_IN          88.1875
Despite the above, I can't promise that CLOBs will still work for you. All I can say is that it's worth testing the data using CLOBs. You'll still need to look out for a few things. CLOBs store text slightly differently (UCS2 instead of UTF8), which may take up more space depending on your character sets. So check the segment sizes. But also beware that segment sizes can lie when they are small - there's a lot of auto-allocated overhead for sample data, so you'll want to use realistic sizes when testing.
Finally, as Raul pointed out, storing non-atomic values in a field is usually a terrible mistake. That said, there are rare times when data warehouses need to break the rules for performance, and data needs to be stored as compactly as possible. Before you store the data this way, ensure that you will never need to join based on those values, or query for individual values. If you think dealing with 100M rows is tough, just wait until you try to split 100M values and then join them to another table.

Change the datatype of a column in a partitioned table with billion rows

We have a table with 120 partitions on a date range, each of which is subpartitioned, again by range.
Each partition has around 200 million records; the conventional way of changing the datatype would make our production system unresponsive for hours. Is there a better way to change the datatype of such a huge table?
We have already tried the following options:
Exchange partition. This does not work.
Creating a new table with the same structure as the existing one (but with the altered column) and inserting the data using /*+ append */. It again takes hours.
Currently the column size is varchar2(30). We need to change it to:
ALTER TABLE ORDERS MODIFY (INFO VARCHAR2(50) );
Changing varchar2(30) to varchar2(50) should work instantly and should not cause any trouble.
You are modifying only metadata; the actual table data is not touched.

#Deleted for values of a linked Oracle table in MS Access

I've created a simple table in Oracle 12c like so:
CREATE TABLE TEST
(
ID NUMBER GENERATED ALWAYS AS IDENTITY,
TEXT VARCHAR2(2000 CHAR),
CONSTRAINT ID_PK PRIMARY KEY (ID)
);
Then I linked it in MS Access using the ODBC driver. The problem is that when I enter a value into TEXT and click away, both ID and TEXT show #Deleted. My value gets recorded in the database, but I have to requery in MS Access in order to see it.
I also noticed that if I change the datatype of the TEXT field to NUMBER, it works fine. After saving the record in MS Access, both the auto-generated ID and the value in the TEXT field are there. I don't have to requery anything.
This happens only when inserting. Updating works just fine.
Please advise.
So, it would appear you already found the solution, but this is more of an explanation as to why it works that way. Simply speaking, if the base table uses non-integer values as primary keys, Access rounds those values to the nearest whole number, and then (since the rounded value no longer matches what is stored) Access can no longer find the applicable records. So, changing the data type from TEXT to INTEGER in the table structure would give you your desired result.
Alternatively, if you're working through these records with a query and you cannot change the keys in the Oracle table, then changing the Access query type to a snapshot (in the query properties) will also bypass this problem. But from the sounds of it, this is not how you are using the data.
In my case, the Oracle ODBC driver (the rather old version 11.02.00.01, which otherwise works fine with Microsoft Access 2016 32-bit) seems to use the unique indexes rather than the primary key constraint to determine the primary key.
I had a field with NUMBER(11) as the PK with a unique index, then added a VARCHAR2 field with another unique index. The name of the index on the VARCHAR2 field was alphabetically before that of the NUMBER field.
Now the linked table in Microsoft Access showed the VARCHAR2 field as the primary key, and I had the problem with '#Deleted' appearing after entering and saving a record, as you describe.
After renaming the unique index on the NUMBER field in Oracle so that it sorted alphabetically before that of the VARCHAR2 field and re-linking the table in Microsoft Access, the NUMBER field was the primary key again in Microsoft Access and the '#Deleted' problem was solved.
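The rename itself is trivial; a sketch with hypothetical index names:
-- make the unique index on the NUMBER primary key column sort first alphabetically
alter index uk_my_varchar2_col rename to zz_uk_my_varchar2_col;
-- then re-link the table in Microsoft Access so it picks up the new order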

How to store unlimited characters in Oracle 11g?

We have a table in Oracle 11g with a varchar2 column. We use a proprietary programming language where this column is defined as string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters (in fact, unlimited characters). The DBAs don't like BLOB or LONG datatypes for maintenance reasons.
The solution I can think of is to remove this column from the original table, have a separate table for it, and then store the characters across multiple rows in order to get unlimited characters. This table would be joined with the original table for queries.
Is there any better solution to this problem?
UPDATE: The proprietary programming language allows defining variables of type string and blob; there is no CLOB option. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.
UPDATE 2: If the maximum I need is 8000 characters, can I just add 3 more columns so that I have 4 columns of 2000 characters each, giving 8000 characters? When the first column is full, values would spill over to the next column, and so on. Will this design have any bad side effects? Please suggest.
If a BLOB is what you need, convince your DBA that it's what you need. Those data types are there for a reason, and any roll-your-own implementation will be worse than the built-in type.
Also you might want to look at the CLOB type as it will meet your needs quite well.
You could follow the way Oracle stores stored procedure source in its data dictionary: one row per line of text. Define a text table like this:
CREATE TABLE MY_TEXT (
IDENTIFIER INT,
LINE INT,
TEXT VARCHAR2 (4000),
PRIMARY KEY (IDENTIFIER, LINE));
The IDENTIFIER column is the foreign key to the original table. LINE is a simple integer (not a sequence) that keeps the text rows in order. This allows keeping arbitrarily large chunks of data.
Yes, this is not as efficient as a BLOB, CLOB, or LONG (I would avoid LONG fields if at all possible). Yes, this requires more maintenance, but if your DBAs are dead set against managing CLOB fields in the database, this is option two.
EDIT:
My_Table below is where you currently have the VARCHAR column you are looking to expand. I would keep it in the table for the short text fields.
CREATE TABLE MY_TABLE (
IDENTIFIER INT,
OTHER_FIELD VARCHAR2(10),
REQUIRED_TEXT VARCHAR2(4000),
PRIMARY KEY (IDENTIFIER));
Then write the query that pulls the data by joining the two tables, ordering by LINE in MY_TEXT. Your application will need to split the string into 2000-character chunks and insert them in line order.
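For example, the retrieval query might look something like this (a sketch based on the tables above):
select t.identifier,
       t.other_field,
       t.required_text,
       x.line,
       x.text
from   my_table t
left join my_text x on x.identifier = t.identifier
order by t.identifier, x.line;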
I would do this in a PL/SQL procedure, for both insert and select. PL/SQL VARCHAR2 strings can be up to 32K characters, which may or may not be large enough for your needs.
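A sketch of the reassembly side (assuming the full text fits in PL/SQL's 32,767-byte VARCHAR2 limit):
create or replace function get_full_text (p_identifier in int)
  return varchar2
as
  l_text varchar2(32767);
begin
  -- stitch the 2000-character chunks back together in LINE order
  for r in (select text
            from   my_text
            where  identifier = p_identifier
            order by line)
  loop
    l_text := l_text || r.text;
  end loop;
  return l_text;
end;
/
Called from PL/SQL you get the full 32K; called from plain SQL in 11g, the usual 4000-byte limit still applies.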
But like every other person answering this question, I would strongly suggest making a case to the DBA to make the column a CLOB. From the program perspective this will be a BLOB and therefore simple to manage.
You said no BLOB or LONG... but what about CLOB? 4GB character data.
BLOB is the best solution. Anything else will be less convenient and a bigger maintenance annoyance.
Is BFILE a viable alternative datatype for your DBAs?
I don't get it. A CLOB is the appropriate database datatype. If your weird programming language can deal with strings of 8000 (or whatever) characters, what stops it writing those to a CLOB?
More specifically, what error do you get (from Oracle or your programming language) when you try to insert an 8000-character string into a column defined as a CLOB?
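A quick way to test exactly that (a sketch with a throwaway table):
create table clob_check (txt clob);

declare
  l_val varchar2(32767) := rpad('x', 8000, 'x');
begin
  insert into clob_check (txt) values (l_val);  -- implicit VARCHAR2-to-CLOB conversion
  commit;
end;
/

select dbms_lob.getlength(txt) as len from clob_check;  -- expect 8000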

Efficiency of storing an Oracle User Defined Type in an index organized table

Are there known issues with storing user defined types within index organized tables in Oracle 10G?
CREATE OR REPLACE TYPE MyList AS VARRAY(256) OF NUMBER(8,0);
CREATE TABLE myTable (
id NUMBER(10,0) NOT NULL,
my_list MyList NOT NULL,
CONSTRAINT pk_myTable_id PRIMARY KEY(id))
ORGANIZATION INDEX NOLOGGING;
With this type and table setup, I loaded ~2.4M records via INSERT /*+ APPEND */, and it took 20 GB of space, at which point I ran out of disk space. Looking at the size of the data types, this seemed to be a lot of space for what was being stored. I then changed the table to a regular (non-IOT) table and stored 6M+ records in ~7 GB of storage; adding the PK index took an additional 512 MB.
I've used IOT many times in the past, but not with a user defined type.
Why is it that the storage requirements when using a UDT and IOT are so high?
AFAICR, Oracle always stores VARRAYs out of row in IOTs.
I'll now try to find the references in the docs to confirm this.
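One way to check (a sketch, assuming the VARRAY is indeed stored as a LOB): if it is, the column shows up in USER_LOBS along with its segment, and you can compare the segment sizes.
select column_name, segment_name, in_row
from   user_lobs
where  table_name = 'MYTABLE';

select segment_name, bytes/1024/1024 as mb
from   user_segments
where  segment_name in (select segment_name from user_lobs where table_name = 'MYTABLE');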
