Find total number of characters in CLOB and XMLTYPE datatype columns - Oracle

I have an Oracle 12c database.
I created a table with CLOB and XMLTYPE columns and inserted some sample data.
I need to find the total number of characters in the CLOB and XMLTYPE columns.
Whatever the character set used in those columns, I just need a count.
Those two columns hold huge values, so fetching them into strings and using length() is not possible.
How can I find the total number of characters in those two columns?
Thanks in advance.

You can use GETLENGTH in the DBMS_LOB package; for a CLOB it returns the length in characters:
SELECT dbms_lob.getlength(t.clob_col),
       dbms_lob.getlength(t.xml_col.getClobVal())
FROM   my_table t;
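A runnable end-to-end sketch (the table and column names here are made up for illustration):
CREATE TABLE doc_tab (
  clob_col CLOB,
  xml_col  XMLTYPE
);

INSERT INTO doc_tab VALUES ('some text', XMLTYPE('<root><a>1</a></root>'));

SELECT dbms_lob.getlength(t.clob_col)             AS clob_chars,
       dbms_lob.getlength(t.xml_col.getClobVal()) AS xml_chars
FROM   doc_tab t;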

Related

Changing data type of a column containing data in Oracle db

I am using Oracle 19c.
I have a table and need to change the data type of one of its columns from NUMBER to NUMBER(24,8).
The column contains data, nearly 300,000 records, and I am required to keep the data.
If I do this without truncating/deleting data:
Does it harm data integrity?
Does it affect the data type of the existing data?
The reason for this operation is that the column should have had 7 or 8 decimals, but it has somehow been limited to 4 decimals even though the data type is NUMBER. Either my ETL tool (Informatica) or the Oracle DB has limited it, I do not know which.
Thanks in advance.
Your problem doesn't appear to be with Oracle; an unconstrained NUMBER column already stores the full precision you insert:
CREATE TABLE T1 (
  num NUMBER
);

INSERT INTO T1 (num) VALUES (123.12345678);

SELECT * FROM T1;

NUM
------------
123.12345678
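Note that narrowing the type in place with ALTER TABLE ... MODIFY (col NUMBER(24,8)) typically fails on a populated column with ORA-01440 (column to be modified must be empty to decrease precision or scale). A common workaround, sketched here with a hypothetical column named amount:
ALTER TABLE my_table ADD (amount_new NUMBER(24,8));
UPDATE my_table SET amount_new = amount;
ALTER TABLE my_table DROP COLUMN amount;
ALTER TABLE my_table RENAME COLUMN amount_new TO amount;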

Automatic adjustment of a field in Oracle

Hello, I'm trying to create a table under Oracle 18.1 (SQL Developer),
but I get the error "ORA-00906: missing right parenthesis":
CREATE TABLE DIM_TAB (
  ID Number PRIMARY KEY,
  TEST nvarchar2,
  TEST_2 nvarchar,
  DATE DATE not null
);
How can I create an NVARCHAR (or NVARCHAR2) field in Oracle without specifying its size? (I want the field size to adjust automatically.)
Thank you
You have three problems. First, you must specify a maximum number of characters for a VARCHAR2 or NVARCHAR2 column; if you have data that will exceed 4000 bytes (not characters), just use a CLOB. Second, there is no NVARCHAR data type in Oracle. Third, you cannot create a column named "DATE", since that's a reserved word. What you want is something like this:
CREATE TABLE DIM_TAB (
  id number PRIMARY KEY,
  test nvarchar2(30),
  test_2 nvarchar2(30),
  the_date date not null
);
Personally, I would use a NUMBER(10) for your id, but that's a minor quibble.
You might want to read up on the NCHAR and NVARCHAR2 data types.
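If the fields really can grow without a fixed bound, the CLOB route mentioned above looks like this (a sketch reusing the corrected names; for national character set data you would use NCLOB instead):
CREATE TABLE DIM_TAB (
  id NUMBER(10) PRIMARY KEY,
  test CLOB,
  test_2 CLOB,
  the_date DATE NOT NULL
);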

Hive unable to read decimal value from HDFS

My Hive version is 0.13.
I have a file that contains decimal values and a few other data types; the file was produced by some Pig transformations. I created a Hive table on top of this HDFS file. When I do a SELECT * FROM table_name, I find that the decimal values in the file are truncated to integer values. What could be the reason for this?
Below is my table:
CREATE TABLE FSTUDENT (
  load_dte string COMMENT 'DATE/TIME OF FILE CREATION',
  xyz DECIMAL,
  student_id int
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\u0001'
LINES TERMINATED BY '\n'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://clsuter1/tmp/neethu/part-m-00000';
The output of SELECT * FROM table_name shows the decimal value 1387.00000 as 1387.
Any help?
Thanks.
@Neethu: Altering the table would not make any difference unless it is an external table.
As @K S Nidhin mentioned, as of Hive 0.13 users can specify scale and precision when creating tables with the DECIMAL data type using a DECIMAL(precision, scale) syntax. If the scale is not specified, it defaults to 0 (no fractional digits). If no precision is specified, it defaults to 10. You can find the same in the Hive docs.
Try dropping the table FSTUDENT and recreating it with DECIMAL(precision, scale). Something like:
CREATE TABLE FSTUDENT (
  load_dte STRING,
  xyz DECIMAL(10,5), -- in your case
  student_id INT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\u0001'
LINES TERMINATED BY '\n'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://clsuter1/tmp/neethu/part-m-00000';
or
truncate the table / INSERT OVERWRITE the data into the table after altering the column data type. Hope this helps!
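For the alter-and-reload route, a hedged sketch (the staging table name is hypothetical):
-- update the column's type metadata
ALTER TABLE FSTUDENT CHANGE COLUMN xyz xyz DECIMAL(10,5);
-- then reload/overwrite the data as suggested above
INSERT OVERWRITE TABLE FSTUDENT
SELECT load_dte, xyz, student_id FROM FSTUDENT_STAGING;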
The issue is that you haven't specified the precision and scale.
A bare DECIMAL defaults to DECIMAL(10,0), i.e. no fractional digits.
So you have to declare a scale to get the required value.

Changing Storage Option for XMLType column in Oracle 11g

I am using XMLType columns in some of my Oracle database tables. Earlier (in 11.2.0.2) the default storage type used was CLOB, so when I queried the XMLType columns I could see the content of the column as an XML string. But after I dropped and re-created all the tables and inserted some data, I could no longer see the content of the XMLType columns; the column value simply displays as XMLType. I suspect the storage type has changed to binary XML, so I issued the following ALTER statement:
ALTER TABLE "MYSCHEMA"."SYSTEMPROP"
MODIFY ("XMLCOL")
XMLTYPE COLUMN "XMLCOL" STORE AS CLOB;
Please note that there is already some data present in the table. Even after I delete and re-insert a row, the content still shows as XMLType. I am using the SQL Developer UI tool. Can anybody suggest a way to fix this issue?
Edit:
OK, now we have decided to store the XMLType column content as SECUREFILE binary XML. So we have a table like this:
CREATE TABLE XMYTYPETEST
(
  ID NUMBER(8) NOT NULL,
  VID NUMBER(4) NOT NULL,
  UserName VARCHAR2(50),
  DateModified TIMESTAMP(6),
  Details XMLType
) XMLTYPE COLUMN Details STORE AS SECUREFILE BINARY XML;

Insert into XMYTYPETEST values (10001, 1, 'XXXX', sysdate, '<test><node1>BLOBTest</node1></test>');
Select * from XMYTYPETEST;
The XMLType column is displayed as "SYS.XMLType" in SQL Developer. So how do I get the content of the binary XML?
Edit:
SELECT x.ID, x.Vid, x.details.getCLOBVal() FROM XMYTYPETEST x WHERE x.ID=100000;
The above query finally worked for me.
The underlying storage for XML data inside the Oracle database is either CLOB or binary XML, and it defaults to binary storage in 11g.
But irrespective of the storage, your queries on the XMLType column should yield consistent results.
>>>> So how to get the content of the binary XML?
The way to get the content of an XMLType column using queries does not change:
select xmlquery(..)
select xmlcast(xmlquery(...))
select extract(), extractValue(), ...
These are some of the ways data within XML is extracted.
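For example, against the XMYTYPETEST table above (a sketch; this works regardless of whether the column is stored as CLOB or binary XML):
SELECT t.id,
       XMLSERIALIZE(DOCUMENT t.details AS CLOB) AS details_text,
       XMLCAST(XMLQUERY('/test/node1/text()' PASSING t.details RETURNING CONTENT) AS VARCHAR2(100)) AS node1_value
FROM   XMYTYPETEST t;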
Hope this helps.

How to store unlimited characters in Oracle 11g?

We have a table in Oracle 11g with a VARCHAR2 column. We use a proprietary programming language where this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column must store more than 2000 characters (in fact, unlimited characters). The DBAs don't like BLOB or LONG data types for maintenance reasons.
The solution I can think of is to remove this column from the original table, have a separate table for this column, and then store the characters across multiple rows in order to get unlimited characters. This table will be joined with the original table for queries.
Is there any better solution to this problem?
UPDATE: The proprietary programming language allows defining variables of type string and blob; there is no option for CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.
UPDATE 2: If the maximum I need is 8000 characters, can I just add 3 more columns so that I have 4 columns with 2000 characters each, for 8000 characters in total? When the first column is full, values would spill over to the next column, and so on. Will this design have any bad side effects? Please suggest.
If a BLOB is what you need, convince your DBA it's what you need. Those data types are there for a reason, and any roll-your-own implementation will be worse than the built-in type.
Also, you might want to look at the CLOB type, as it will meet your needs quite well.
You could follow the way Oracle stores stored procedures in the data dictionary: one row per line of text. Define a text table like this:
CREATE TABLE MY_TEXT (
  IDENTIFIER INT,
  LINE INT,
  TEXT VARCHAR2(4000),
  PRIMARY KEY (IDENTIFIER, LINE));
The IDENTIFIER column is the foreign key to the original table. LINE is a simple integer (not a sequence) that keeps the text chunks in order. This allows storing arbitrarily large pieces of text.
Yes, this is not as efficient as a BLOB, CLOB, or LONG (I would avoid LONG fields if at all possible). Yes, this requires more maintenance, but if your DBAs are dead set against managing CLOB fields in the database, this is option two.
EDIT:
MY_TABLE below is where you currently have the VARCHAR2 column you are looking to expand. I would keep that column in the table for the short text values.
CREATE TABLE MY_TABLE (
  IDENTIFIER INT,
  OTHER_FIELD VARCHAR2(10),
  REQUIRED_TEXT VARCHAR2(4000),
  PRIMARY KEY (IDENTIFIER));
Then write a query that joins the two tables, ordering by LINE in MY_TEXT. Your application will need to split the string into 2000-character chunks and insert them in line order, as sketched below.
I would do both the insert and the select in a PL/SQL procedure. PL/SQL VARCHAR2 variables can hold up to 32K characters, which may or may not be large enough for your needs.
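A minimal sketch of the insert side, assuming the MY_TEXT table above (all names come from the hypothetical DDL in this answer):
CREATE OR REPLACE PROCEDURE insert_text (
  p_identifier IN INT,
  p_text       IN VARCHAR2   -- up to 32767 characters in PL/SQL
) AS
  c_chunk CONSTANT PLS_INTEGER := 2000;  -- characters per MY_TEXT row
  l_line  PLS_INTEGER := 0;
BEGIN
  -- split the text into 2000-character chunks, one row per chunk
  WHILE l_line * c_chunk < LENGTH(p_text) LOOP
    INSERT INTO my_text (identifier, line, text)
    VALUES (p_identifier, l_line + 1, SUBSTR(p_text, l_line * c_chunk + 1, c_chunk));
    l_line := l_line + 1;
  END LOOP;
END insert_text;
/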
But like every other person answering this question, I would strongly suggest making a case to the DBA for making the column a CLOB. From the program's perspective this will be a BLOB and therefore simple to manage.
You said no BLOB or LONG... but what about CLOB? 4GB character data.
BLOB is the best solution. Anything else will be less convenient and a bigger maintenance annoyance.
Is BFILE a viable alternative datatype for your DBAs?
I don't get it. A CLOB is the appropriate database data type. If your weird programming language can deal with strings of 8000 (or whatever) characters, what stops it from writing those to a CLOB?
More specifically, what error do you get (from Oracle or from your programming language) when you try to insert an 8000-character string into a column defined as a CLOB?
