Why is the ORA_ARCHIVE_STATE column a varchar2 of 4000 chars?

Can someone explain why Oracle made the ORA_ARCHIVE_STATE column a varchar2 of 4000 chars? When using the in-database archiving feature of Oracle 12c, a record is visible while the column is 0 and hidden when it holds anything other than 0.
What's the purpose of having the extra 3999 chars when simply setting the column to 1 accomplishes the goal? I'm doubting Oracle is just wasting the space.

Because it allows you to mark "archived" rows differently: you can update ORA_ARCHIVE_STATE to different values, for example to_char(systimestamp,'yyyy-mm-dd hh24:mi:ssxff') to record the date of archiving. Later you can analyze archived records by this column.
"I'm doubting Oracle is just wasting the space."
VARCHAR2 doesn't waste space: it is a variable-length character string. That is, VARCHAR2(4000 BYTE) doesn't mean the column will consume 4000 bytes, nor does VARCHAR2(4000 CHAR) consume 4000 characters; that's just the maximum allowed column length.
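For illustration, here is a minimal sketch of that pattern, assuming a hypothetical orders table in Oracle 12c or later (ROW ARCHIVAL and ROW ARCHIVAL VISIBILITY are the documented in-database archiving syntax):
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  status   VARCHAR2(20)
) ROW ARCHIVAL;
-- archive a row, recording when it was archived instead of just '1'
UPDATE orders
   SET ora_archive_state = TO_CHAR(SYSTIMESTAMP, 'yyyy-mm-dd hh24:mi:ssxff')
 WHERE order_id = 42;
-- archived rows are hidden by default; expose them to analyze the column
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
SELECT order_id, ora_archive_state FROM orders WHERE ora_archive_state != '0';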

Related

Accommodate more data in an Oracle column without increasing size

I have a scenario where I would like to know if we can accommodate more characters in an Oracle column without increasing the column size.
I have an Oracle column bname of type varchar2(256). The column is updated via Java code. I would like to know if there is any way to accommodate more than 256 characters in this column without increasing its size?
I also wanted to know if there are any column compression techniques available to accomplish this.
Use a smaller font. Just kidding.
As far as I can tell, you can't do that. 256 is the limit you set, so the only option is to
alter table that_table modify bname varchar2(500);
Depending on database version, you can go up to 4000 characters (or 32767, if MAX_STRING_SIZE is set to extended). If that's not enough, CLOB is your choice.
If you want to store compressed data, then use the BLOB datatype (so you'd e.g. put a ZIP file into that column).
~ o ~
Or, perhaps you could alter the table and add another column:
alter table that_table add bname_1 varchar2(256);
and make your Java code "split" the value in two parts, storing the first 256 characters in bname and the rest in bname_1.
Other than that, no luck, I'm afraid.
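In SQL terms, the split could look like this sketch (the :full_value bind variable and the id key column are hypothetical):
UPDATE that_table
   SET bname   = SUBSTR(:full_value, 1, 256),
       bname_1 = SUBSTR(:full_value, 257)   -- assumes the whole value fits in 512 characters
 WHERE id = :id;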

ORA-12899: value too large for column

I am getting data from ERP systems in the form of feeds; in particular, one column in the feed has a length of only 15.
In the target table the corresponding column is also varchar2(15), but when I try to load the feed into the DB it shows an error like:
ORA-12899: value too large for column emp_name (actual: 16, maximum: 15)
I can't increase the column length since it is a base table in production.
Have a look at this blog; the problem was resolved for me by changing the column datatype from varchar2(100) to varchar2(100 char). In my case the data contained some umlaut characters.
http://gerardnico.com/wiki/database/oracle/byte_or_character
The usual reason for problems like this are non-ASCII characters that can be represented with one byte in the original database but require two (or more) bytes in the target database (due to different NLS settings).
To ensure your target column is large enough for 15 characters, you can modify it:
ALTER TABLE table_name MODIFY column_name VARCHAR2(15 CHAR)
(note the 15 CHAR - you can also use BYTE; if neither is present, the database uses the NLS_LENGTH_SEMANTICS setting as a default).
To check which values are larger than 15 bytes, you can:
1. create a staging table in the target database with the column length set to 15 CHAR,
2. insert the data from the source table into the staging table, and
3. find the offending rows with
SELECT * FROM staging WHERE lengthb(mycol) > 15
(note the use of LENGTHB as opposed to LENGTH - the former returns the length in bytes, whereas the latter returns the length in characters)
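As a quick illustration of the difference (the byte count assumes an AL32UTF8 database, where an umlaut occupies two bytes):
SELECT LENGTH('Müller')  AS char_count,   -- 6 characters
       LENGTHB('Müller') AS byte_count    -- 7 bytes
  FROM dual;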
I found AL32UTF8 to be the only valid setting. It varies from standard UTF8 in that a few characters have supplementary bytes, i.e. the characters are about 99% the same. I am guessing you have a character conversion problem going on. In other words, the data in table1 was written using one charset, and the new table has a slightly different charset.
If this is true, you have to find the source of the oddball charset. Because this will continue to happen.
Solution to:
ORA-12899: VALUE TOO LARGE FOR COLUMN(ACTUAL,MAXIMUM)
If you are trying to shrink a column of a table that already holds data longer than the new length, the simple script below will do the job.
ALTER TABLE TABLE_NAME ADD (NEW_COLUMN_NAME DATATYPE(DATASIZE));
UPDATE TABLE_NAME SET NEW_COLUMN_NAME = SUBSTR(OLD_COLUMN_NAME, 1, NEW_LENGTH);
ALTER TABLE TABLE_NAME DROP COLUMN OLD_COLUMN_NAME;
ALTER TABLE TABLE_NAME RENAME COLUMN NEW_COLUMN_NAME TO OLD_COLUMN_NAME;
What each statement does:
ALTER TABLE TABLE_NAME ADD (NEW_COLUMN_NAME DATATYPE(DATASIZE));
It creates a new column of the required new length in your existing table.
UPDATE TABLE_NAME SET NEW_COLUMN_NAME = SUBSTR(OLD_COLUMN_NAME, 1, NEW_LENGTH);
It trims every old value to the new length and stores the trimmed values in the new column.
ALTER TABLE TABLE_NAME DROP COLUMN OLD_COLUMN_NAME;
It removes the old column, which is now redundant because all the (trimmed) information has been copied into the new column.
ALTER TABLE TABLE_NAME RENAME COLUMN NEW_COLUMN_NAME TO OLD_COLUMN_NAME;
Renaming the new column to the old column's name restores the original table structure, except for the new column size, as you wished.
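As a concrete (hypothetical) instance, shrinking an emp_name column to 15 characters would look like:
ALTER TABLE employees ADD (emp_name_new VARCHAR2(15 CHAR));
UPDATE employees SET emp_name_new = SUBSTR(emp_name, 1, 15);
ALTER TABLE employees DROP COLUMN emp_name;
ALTER TABLE employees RENAME COLUMN emp_name_new TO emp_name;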
Certainly the cause of the error is that the value is too large for the column data type. However, sometimes it is not visible at first sight. Besides the "byte versus char" differences mentioned in other answers, there can also be a problem with line terminators.
I was trying to load a CSV file using SQL*Loader into a dockerized Oracle. The foo column of type char(1) was the last column. I got the ORA-12899: value too large for column foo (actual: 2, maximum: 1) error even though all values in the foo column had length 1. Later I noticed the CSV file had been edited in a Windows editor and accidentally saved with CRLF line terminators. Since Linux in the Docker container expects just LF, the CR was treated as part of the column data.
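When such data does make it into the database (say, into a wider staging column), Oracle's DUMP function can expose a stray carriage return; a sketch against a hypothetical staging table:
SELECT foo,
       DUMP(foo)          -- e.g. 'Typ=1 Len=2: 88,13' reveals a trailing CR (ASCII 13)
  FROM staging
 WHERE LENGTHB(foo) > 1;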
This error confused me a little bit.
VARCHAR2(x CHAR) means that the column will hold x characters, but not more than can fit into 4000 bytes. Internally, Oracle will set the byte length of the column (DBA_TAB_COLUMNS.DATA_LENGTH) to MIN(x * mchw, 4000), where mchw is the maximum byte width of a character in the database character set. This is 1 for US7ASCII or WE8MSWIN1252, 2 for JA16SJIS, 3 for UTF8, and 4 for AL32UTF8.
For example, a VARCHAR2(3000 CHAR) column in an AL32UTF8 database will be internally defined as having a width of 4000 bytes. It will hold up to 3000 characters from the ASCII range (the character limit), but only 1333 Chinese characters (the byte limit, 1333 * 3 bytes = 3999 bytes). A VARCHAR2(100 CHAR) column in an AL32UTF8 database will be internally defined as having a width of 400 bytes. It will hold up to any 100 Unicode characters.
Reference: https://community.oracle.com/tech/developers/discussion/421117/difference-between-varchar2-4000-byte-varchar2-4000-char
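This is easy to verify in the data dictionary; a sketch using a throwaway table in an AL32UTF8 database:
CREATE TABLE demo (txt VARCHAR2(3000 CHAR));
SELECT data_length            -- 4000, i.e. MIN(3000 * 4, 4000), not 12000
  FROM user_tab_columns
 WHERE table_name = 'DEMO' AND column_name = 'TXT';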

Oracle sqlload : split a source field in several columns?

I have a source file I want to load through SQL*Loader into my Oracle 10g database.
The problem is that one of the source fields can be larger than 4000 characters. Is it possible to tell Oracle to split a source field across several columns?
Let's say one column would get the first 4000 characters and the second one the next 4000.
Thanks
I'd load it into a CLOB and then do the splitting (if necessary) using DBMS_LOB.SUBSTR over on the database side. But is there a critical business reason to get it into multiple varchar2 columns, or could it just stay in the CLOB?
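If the split really is required, the database-side step could look like this sketch (the table and column names are hypothetical, and single-byte data is assumed so 4000 characters fit in a varchar2(4000)):
UPDATE target_table
   SET part1 = DBMS_LOB.SUBSTR(big_text, 4000, 1),     -- characters 1..4000
       part2 = DBMS_LOB.SUBSTR(big_text, 4000, 4001);  -- characters 4001..8000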

Why is it necessary to specify length of a column in a table

I always wonder why we should limit a column length in a database table to something other than the default limit.
E.g. I have a column short_name in my table People; the default length for the column is 255 characters, but I restrict it to 100 characters. What difference will it make?
The string will be truncated to the maximum length (in characters, usually).
The way it is actually implemented is up to the database engine you use.
For example:
CHAR(30) will always use up 30 characters in MySQL, and this allows MySQL to speed up access because it is able to predict the value length without parsing anything;
VARCHAR(30) in MySQL will reject over-long strings with an error when strict mode is on; otherwise they are silently truncated to 30 characters (with a warning);
in SQLite, you can store strings in any type of column, ignoring the declared type.
The reason many features of SQL are supported in those database engines even though they are not utilized, or are utilized in different ways, is to maintain compliance with the SQL standard.
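A small MySQL sketch of the VARCHAR behavior described above (the people table is hypothetical; the error message is stock MySQL):
CREATE TABLE people (short_name VARCHAR(100));
INSERT INTO people VALUES (REPEAT('x', 150));
-- strict mode:     ERROR 1406: Data too long for column 'short_name' at row 1
-- non-strict mode: value truncated to 100 characters, with a warning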

oracle: what is the oracle equivalent data type for COMMENT?

What datatype in Oracle should I be using to store comment boxes? I was going to use LONG, but a table only allows one LONG column. Or should I just use VARCHAR2 and set it really large?
What is the longest comment you want to be able to support?
If your comments are less than 4000 bytes in length, you can use a VARCHAR2(4000). If your comments are longer than 4000 bytes in length, you can use a CLOB. A CLOB can store any character data supported by your database character set.
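A minimal sketch (the comments table is illustrative, and the identity column needs 12c or later):
CREATE TABLE comments (
  comment_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  body       CLOB   -- holds comment text well past the 4000-byte varchar2 limit
);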
