NVARCHAR(max): 4000 or 65000 characters?

My setup:
VM host: Windows 2019
Backend: SQL Server 2008
Frontend: MS Access, with the tables relinked via the ODBC driver.
Since relinking, table columns created in SQL Server as NVARCHAR(max) produce the well-known message whenever I edit entries longer than 4000 characters:
String data, right truncation (#0)
Before relinking it was no problem to write up to 60000 characters into an NVARCHAR(max) field; now I can no longer edit these fields because of the 4000-character limit!
I have not yet understood why there is sometimes talk of a maximum of 4000 characters and sometimes of a 2 GB limit (approx. 65000 characters). What does that depend on? I need the maximum possible number of characters for the column.
Does anyone know?
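For reference, a minimal T-SQL sketch (hypothetical table and column names) showing that an NVARCHAR(max) column itself accepts far more than 4000 characters, so a 4000-character cut-off usually points at the client or ODBC driver rather than at the column definition:

    -- NVARCHAR(max) can hold up to 2 GB of data per value; only NVARCHAR(n) is
    -- capped at 4000 characters.
    CREATE TABLE dbo.NoteTest (Id INT IDENTITY PRIMARY KEY, Body NVARCHAR(MAX));

    -- The CAST is needed because REPLICATE otherwise truncates its result at
    -- 4000 characters for an NVARCHAR(n) input.
    INSERT INTO dbo.NoteTest (Body)
    VALUES (REPLICATE(CAST(N'x' AS NVARCHAR(MAX)), 60000));

    SELECT LEN(Body) AS CharCount FROM dbo.NoteTest;  -- 60000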

Related

Why is the ora-archive-state column a varchar2 of 4000 chars?

Can someone explain why Oracle made the ora-archive-state column a varchar2 of 4000 chars? With the in-database archiving feature of Oracle 12c, when the column is 0 the record is visible; when it is anything other than 0, the record is hidden.
What's the purpose of having the extra 3999 chars when simply setting the column to 1 accomplishes the goal? I'm doubting Oracle is just wasting the space.
Because it allows you to mark "archived" rows differently: you can update ORA_ARCHIVE_STATE to different values, for example to_char(systimestamp,'yyyy-mm-dd hh24:mi:ssxff') to set it to the date of archiving. Later you can analyze archived records by this column.
I'm doubting Oracle is just wasting the space.
Varchar2 doesn't waste space: it is a variable-length character string. That is, varchar2(4000 BYTE) doesn't mean the column will use 4000 bytes, nor does varchar2(4000 CHAR) mean it will use 4000 characters; that is just the maximum allowed column length.
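As a sketch of that idea, assuming a hypothetical orders table with in-database archiving enabled:

    -- Enable in-database archiving; Oracle adds the hidden ORA_ARCHIVE_STATE column.
    ALTER TABLE orders ROW ARCHIVAL;

    -- Archive a row and record *when* it was archived instead of just storing '1'.
    UPDATE orders
       SET ora_archive_state = TO_CHAR(SYSTIMESTAMP, 'yyyy-mm-dd hh24:mi:ssxff')
     WHERE order_id = 42;

    -- Archived rows are hidden by default; make them visible again to analyze them.
    ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
    SELECT order_id, ora_archive_state FROM orders WHERE ora_archive_state != '0';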

Length error while loading Cyrillic records into Oracle DB

I have a record containing Cyrillic characters along with English characters in MySQL, in a column of datatype varchar(30). I get a "value too large" error while loading it through Informatica 9.6.1 into an Oracle database where the column datatype is varchar2(30). Could anyone explain why this is happening? In both DBs the charset is UTF8.
For example, the data in MySQL is 'Александровском 2022'. Loading it into the Oracle DB produces the error below:
ORA-12899: value too large for column "DB"."USER_DETAILS"."AUTHORITY_NAME" (actual: 31, maximum: 30)
In Oracle, you can specify whether your column should have a maximum size of 30 BYTE or 30 CHAR.
You have defined (explicitly or implicitly) your column to have a maximum size of 30 BYTE.
So some of your strings with fewer than 30 characters will require more than 30 bytes, as Cyrillic characters need more than 1 byte in UTF8.
You can change the definition of your column to varchar2(30 CHAR).
If [BYTE|CHAR] is omitted, the DB falls back to the setting defined in NLS_LENGTH_SEMANTICS, which can be set at the DB or session level.
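A sketch using the table and column from the error message (assuming you can alter the Oracle table directly):

    -- Check which length semantics the column currently uses ('B' = BYTE, 'C' = CHAR).
    SELECT column_name, data_length, char_length, char_used
      FROM all_tab_columns
     WHERE table_name = 'USER_DETAILS'
       AND column_name = 'AUTHORITY_NAME';

    -- Switch to character semantics so 30 Cyrillic characters fit even though
    -- they occupy more than 30 bytes in UTF8.
    ALTER TABLE user_details MODIFY (authority_name VARCHAR2(30 CHAR));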

CLOB storage Oracle 11g

I have encountered very strange behavior of Oracle LOBs.
Situation: We have a partitioned IOT that contains a CLOB column. The CLOB has separate LOB storage set up with the LOGGING, RETENTION and DISABLE STORAGE IN ROW options. The CHUNK size is 8192 bytes. PCTFREE is set to the default (null in dba_tables).
Now we need to create a test case with a certain number of CLOBs loaded. We chose a 19.5 KB CLOB. After loading this CLOB 40 million times (it is used for performance testing, so the content does not matter), the size on the file system and in dba_data_files is 1230 GB.
Question:
We estimated the size of 40 million CLOBs of 19.5 KB at ~780 GB. How did we end up with 450 GB more? I would guess it has something to do with the CHUNK size: 19.5 KB would use 3 CHUNKs, i.e. 24 KB, which is still only 960 GB. The LOB index is around 2 GB.
Does anybody have an idea? (Sorry for the poor explanation; P.S. running Oracle 11g.)
Thank you in advance!
Your comment is correct: "Data in CLOB columns is stored in a format that is compatible with UCS-2 when the database character set is multibyte, such as UTF8 or AL32UTF8".
Although I would not say this is just an extrapolation of VARCHAR2. UTF8 is a varying width character set, and does not always require 2 bytes.
15760 characters is 31520 bytes, which can only fit in 4 blocks of 8192 bytes, i.e. 32768 bytes. 32768 * 40000000 / 1024 / 1024 / 1024 = 1220 GB, which doesn't perfectly match your result, but is very close. We'd need to see some more detailed numbers to look for a perfect match.
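To see where the extra space goes, a query along these lines against dba_segments can be compared with the estimate (the owner is a placeholder):

    -- Space actually allocated to the LOB data segments and the LOB index, in GB.
    SELECT segment_type, ROUND(SUM(bytes) / 1024 / 1024 / 1024) AS gb
      FROM dba_segments
     WHERE owner = 'MY_SCHEMA'
       AND segment_type IN ('LOBSEGMENT', 'LOB PARTITION', 'LOBINDEX')
     GROUP BY segment_type;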

SSIS Oracle Data Load is Incomplete

I have a data flow task where data from an Oracle source is fetched and stored in a SQL Server DB. After nearly 400k rows, the data flow fails with the following error:
ORA-01489 result of string concatenation is too long
In the Execution Results the error is reported against [Oracle Source [1543]]. What exactly does this mean?
I'm assuming you are using the varchar2 datatype, which is limited to 4000 chars.
This error occurs because the concatenated string returns more than 4000 chars of varchar2, which exceeds the limit; try using the CLOB datatype.
http://nimishgarg.blogspot.in/2012/06/ora-01489-result-of-string.html
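A sketch of that workaround (column names are made up): once one operand of the concatenation is a CLOB, the whole expression returns a CLOB and is no longer bound by the 4000-character VARCHAR2 limit.

    -- Fails with ORA-01489 once the combined length exceeds the VARCHAR2 limit:
    --   SELECT long_text_1 || long_text_2 FROM source_table;

    -- Works, because TO_CLOB() promotes the result of the concatenation to a CLOB:
    SELECT TO_CLOB(long_text_1) || long_text_2 AS combined_text
      FROM source_table;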
Use a derived column after your source to cut the strings to 4000 chars.
Your data source (Oracle) is sending out strings that are larger than 4000 characters while your SSIS source expects something less than that. Check your source for any data that has a length > 4000.
After a long battle I decided to modify the package, and it turns out that deleting and recreating all the tasks solved the problem.
The real cause is still unknown to me.

Why is it necessary to specify the length of a column in a table?

I always wonder why we should limit a column in a database table to some length other than the default one.
E.g. I have a column short_name in my table People; the default length for the column is 255 characters, but I restrict it to 100 characters. What difference will it make?
The string will be truncated to the maximum length (usually measured in characters).
The way it is actually implemented is up to the database engine you use.
For example:
CHAR(30) will always use up 30 characters in MySQL, and this allows MySQL to speed up access because it can predict the value length without parsing anything;
VARCHAR(30) in MySQL will reject strings longer than 30 characters when strict mode is on; with strict mode off, longer strings are silently truncated to 30 characters with a warning (see the sketch below);
In SQLite, you can store strings in any type of column, ignoring the declared type.
The reason many features of SQL are supported in those database engines even though they are not utilized, or are utilized in different ways, is to maintain compliance with the SQL standard.
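A small MySQL sketch of that VARCHAR behavior, using the People table from the question:

    CREATE TABLE people (short_name VARCHAR(100));

    -- Strict mode: an over-length value is rejected outright.
    SET SESSION sql_mode = 'STRICT_TRANS_TABLES';
    INSERT INTO people VALUES (REPEAT('a', 150));  -- ERROR 1406: Data too long for column 'short_name'

    -- Non-strict mode: the value is silently truncated with a warning.
    SET SESSION sql_mode = '';
    INSERT INTO people VALUES (REPEAT('a', 150));  -- warning: data truncated
    SELECT CHAR_LENGTH(short_name) FROM people;    -- 100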
