First ever question here, because searching for these terms is way too vague. I don't know how the data got into the table (I think just by "insert into tomtest(test) values (280.1);" but can't be sure), but this example seems odd. Why does the straight select show 1 decimal place, but the to_char select shows 14 decimal places?
SQL> set linesize 100
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE 12.2.0.1.0 Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production
SQL> describe tomtest
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
TEST NUMBER
SQL> select * from tomtest;
TEST
----------
280.1
SQL> select * from tomtest where test = 280.1;
no rows selected
SQL> select to_char(test) from tomtest;
TO_CHAR(TEST)
----------------------------------------
280.10000000000002
SQL> SELECT data_type, data_length, data_precision, data_scale FROM USER_TAB_COLS WHERE TABLE_NAME = 'TOMTEST' AND COLUMN_NAME = 'TEST';
DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
NUMBER 22
The reason is the default number format in SQL*Plus.
In the database you have the value 280.10000000000002, not 280.1.
Here is an example:
SQL> select 128.000000000000000000006 as test from dual;
test
-------------------------
128
SQL> select 128.200000000000000000006 as test from dual;
test
-------------------------
128,2
You can change the number format and check again:
SQL> SET NUMF 999999999.99999999999999999999999999999999
SQL> select 128.200000000000000000006 as test from dual;
test
-------------------------------------------
128.20000000000000000000600000000000
SQL>
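As a side note on the "no rows selected" result in the question: if the stored value really is 280.10000000000002, an exact comparison with 280.1 can never match. A quick sketch (reusing the table and column from the question) of how to see and match what is actually stored:
select test, to_char(test) exact_value, dump(test) internal_rep from tomtest;
-- compare against a rounded value instead of the exact literal
select * from tomtest where round(test, 1) = 280.1;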
Update:
This phenomenon can be explained by the fact that your data type is defined simply as NUMBER, without specifying a precision. This means that you can store a decimal number, but you never say exactly how many decimal places it should have.
Here is an example that shows the difference.
CREATE TABLE test_num(
n1 NUMBER,
n2 NUMBER(23,15));
INSERT INTO test_num VALUES(208.20000000000006,208.20000000000006);
SELECT t.* FROM test_num t;
Result in PL/SQL Developer:
N1 N2
--- ---
208,2 208,200000000000030
In PL/SQL Developer you can display numbers as characters. To do that, activate the checkbox in the preferences: Tools -> Preferences -> SQL Window -> Number fields to_char.
In your case, I would either round the value to a fixed number of decimal places on insert/update, or change the data type from NUMBER to NUMBER(p, s), where p is the precision and s is the scale.
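A minimal sketch of both options, reusing the table from the question; the precision/scale of 20,2 is only an example, and note that decreasing a column's precision or scale requires the column to be empty (ORA-01440 otherwise):
-- option 1: round on the way in (and/or round existing rows)
insert into tomtest(test) values (round(280.10000000000002, 2));
update tomtest set test = round(test, 2);
-- option 2: constrain the column itself (the column must be empty to decrease precision/scale)
alter table tomtest modify (test number(20, 2));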
Related
How is a value of 123.89 evaluated, step by step, for decimal storage in an Oracle Database NUMBER(3) column when no scale is defined?
I don't know what Oracle does step-by-step, but - in the end, it rounds the value:
SQL> create table test (col number(3));
Table created.
SQL> insert into test values (123.89);
1 row created.
SQL> insert into test values (200.13);
1 row created.
SQL> select * from test;
COL
----------
124
200
SQL>
Why? Because numbers can have precision and scale. number(3) means there is no scale (it is equal to number(3, 0)), so if the value you enter has more decimal places than the scale allows, Oracle rounds it.
Read more about the NUMBER datatype in the documentation.
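A short sketch of that behaviour (the table name test2 is only illustrative, so it doesn't clash with the test table above): the fractional part is rounded away because the scale is 0, but if the rounded value needs more digits than the declared precision, the insert is rejected.
create table test2 (col number(3));   -- same as number(3, 0)
insert into test2 values (123.49);    -- stored as 123: the scale is 0, so the fraction is rounded
insert into test2 values (999.5);     -- fails with ORA-01438: rounds to 1000, which exceeds precision 3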
We have Oracle as the source and HANA 1.0 SPS12 as the target. We are mirroring Oracle to HANA with Informatica CDC through real-time replication. In Oracle, many columns have the CHAR datatype, i.e. a fixed-length datatype. As HANA doesn't officially support a CHAR datatype, we are using NVARCHAR instead. The problem we are facing is that, because Oracle's CHAR is fixed length and pads with spaces whenever the actual string is shorter than the declared length, we end up with a lot of extra spaces for such columns in the target HANA database.
For example, if column col1 has data type
CHAR(5)
and the value 'A', it is replicated in HANA as 'A    ', i.e. 'A' followed by four extra spaces, causing a lot of problems in queries and data interpretation.
Is it possible to implement a CHAR-like datatype in HANA?
You can use the RPAD function in Informatica while transferring data to HANA. Just make sure HANA doesn't trim the values automatically.
So, for the CHAR(5) source column you should use:
out_Column = RPAD(input_Column, 5)
This is pretty much exactly what the documentation describes for RPAD.
I don't know HANA and this is more a comment than an answer, but I chose to put it here as there's some code I'd like you to see.
Here's a table whose column is of a CHAR datatype:
SQL> create table test (col char(10));
Table created.
SQL> insert into test values ('abc');
1 row created.
Column's length is 10 (which you already know):
SQL> select length(col) from test;
LENGTH(COL)
-----------
10
But, if you TRIM it, you get a better result, the one you're looking for:
SQL> select length( TRIM (col)) from test;
LENGTH(TRIM(COL))
-----------------
3
SQL>
So: if you can persuade the mirroring process to apply TRIM function to those columns, you might get what you want.
[EDIT, after seeing Lars' comment and re-reading the question]
Right; the problem seems to be just the opposite of what I initially understood. If that's the point, maybe RPAD would help. Here's an example:
SQL> create table test (col varchar2(10));
Table created.
SQL> insert into test values ('abc');
1 row created.
SQL> select length(col) from test;
LENGTH(COL)
-----------
3
SQL> insert into test values (rpad('def', 10, ' '));
1 row created.
SQL> select col, length(col) len from test;
COL LEN
---------- ----------
abc 3
def 10
SQL>
I have the following issue using SQL*Loader to load a small table. My process populates a field that can contain positive or negative decimal numbers, and the problem is that when the number is positive the process keeps 2 decimal positions, but when it's negative it only keeps 1. I need 2 positions. This is the simple code:
UNRECOVERABLE LOAD DATA
TRUNCATE
INTO TABLE [SCHEMA].TABLE1
FIELDS TERMINATED BY '|'
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
FIELD1 "trim(:FIELD1 )"
)
Example:
Source: 12345,78 Target: 1234,78
Source: -12345,78 Target: -1234,8
Edit1: It only happens with numbers that have 7 or more integer digits (X < -1000000), but I can insert these numbers with a regular INSERT instead of SQL*Loader.
Edit2: I have noticed that it is not a problem with SQL*Loader, because the .ctl file is already wrong. This file is generated with Python and SQL with these properties:
set colsep |
set headsep off
set pagesize 0
set trimspool on
set linesize 10000
set termout off
set feedback off
set arraysize 250
But I don't know where the length of each field is defined, and this looks like the cause of the error.
I'd say that you're not looking correctly. Data is, probably, stored OK, but you don't see it. Here's an example:
SQL> create table test (field1 number);
Table created.
SQL>
Control file:
load data
infile *
replace into table test
fields terminated by '|' trailing nullcols
(
field1
)
begindata
12345,78
-12345,78
987654321
-987654321
987654321,01234
-987654321,01234
111333444,887766
-112333444,887766
Loading session & the result:
SQL> $sqlldr scott/tiger@orcl control=test26.ctl log=test26.log
SQL*Loader: Release 11.2.0.2.0 - Production on Uto Kol 21 08:52:10 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Commit point reached - logical record count 7
Commit point reached - logical record count 8
SQL> select * from test;
FIELD1
----------
12345,78
-12345,78
987654321
-987654321
987654321
-987654321
111333445
-112333445
8 rows selected.
SQL>
Ooops! Seems to be wrong, eh? But, if you fix the column format ...
SQL> col field1 format 999g999g999g990d000000
SQL> select * from test;
FIELD1
-----------------------
12.345,780000
-12.345,780000
987.654.321,000000
-987.654.321,000000
987.654.321,012340
-987.654.321,012340
111.333.444,887766
-112.333.444,887766
8 rows selected.
SQL>
See? Everything is here.
I guess that, if you used a GUI, you wouldn't have such problems. The former sentence doesn't mean that you should NOT use SQL*Plus - on the contrary; just ... learn how to use it properly :)
[EDIT, after reading the comment]
It is SET NUMFORMAT you might be looking for:
SQL> select -12334.789123 from dual;
-12334.789123
-------------
-12334,789
SQL> set numformat 999g999g990d000000
SQL> select -12334.789123 from dual;
-12334.789123
-------------------
-12.334,789123
SQL>
I am using ALL_TABLES/ALL_TAB_COLUMNS to get the count of tables in my schema (EDW_SRC) and another schema (EDW_STG). I get correct counts when I run the query in SQL Developer, as shown below. But if I put the same query inside a trigger, I get the wrong count for the other schema (EDW_STG).
Please refer below code:
(This is just sample code to replicate the issue, not my business requirement. I am referring to ALL_TAB_COLUMNS in my actual code to get the number of columns in a particular table in a different schema, for which I have SELECT access.)
select user from dual;
USER
-----
EDW_SRC
DROP TABLE ABC;
Table ABC dropped.
CREATE TABLE ABC(ID NUMBER);
Table ABC created.
select count(1) EDW_STG_CNT
from all_tables
where owner='EDW_STG';--Different Schema
EDW_STG_CNT
----------
101
select count(1) EDW_SRC_CNT
from all_tables
where owner='EDW_SRC';--My Schema
EDW_SRC_CNT
------------
1554
create or replace trigger trig_test_dml_abc
before insert on abc
DECLARE
V_STG_CNT number :=NULL;
V_SRC_CNT number :=NULL;
begin
DBMS_OUTPUT.PUT_LINE('***** TRIGGER OUTPUT *****');
select count(1) into V_SRC_CNT from all_tables
where owner='EDW_SRC'; --My Schema
DBMS_OUTPUT.PUT_LINE('My Schema EDW_SRC_CNT :'||V_SRC_CNT);
select count(1) into V_STG_CNT from all_tables
where owner='EDW_STG'; --Different Schema
DBMS_OUTPUT.PUT_LINE('Different Schema EDW_STG_CNT :'||V_STG_CNT);
end;
Trigger TRIG_TEST_DML_ABC compiled
INSERT INTO ABC VALUES (2);
1 row inserted.
***** TRIGGER OUTPUT *****
My Schema EDW_SRC_CNT :1554
Different Schema EDW_STG_CNT :2
The different schema count should be 101. Why is it coming back as 2?
Oracle Version:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
I was trying to add a column to a table but got a surprising effect with the DEFAULT clause. In a table with existing rows, I added a new column like so:
alter table t add c char(1 char) default 'N' not null;
When I subsequently added a check constraint to the table, it failed:
alter table t add constraint chk check(c in ('N', 'Y'));
Which resulted in
ERROR at line 1:
ORA-02293: cannot validate (T.CHK) - check constraint violated.
Other info:
Because I'm setting the units explicitly (i.e., char(1 char) as opposed to just char(1)), I don't expect the value of nls_length_semantics to be relevant.
After adding the column as char(1 char), the newly added "N"s are actually "N " and I'm not sure what the extra whitespace is.
Adding the column as char(1 byte) works as expected.
Adding the column without the "default 'N' not null", followed by updating all existing rows to 'N', followed by altering the column to 'not null', also works as expected (see the sketch after this list).
NLS_CHARACTERSET is AL32UTF8, but I don't expect that to be relevant either.
Database is Oracle 11g; 11.2.0.1.0.
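A sketch of that last workaround, using the same table, column and constraint names as above (the default can be added in the same MODIFY):
alter table t add c char(1 char);                 -- add the column nullable, without a default
update t set c = 'N';                             -- backfill existing rows explicitly
alter table t modify (c default 'N' not null);    -- then add the default and NOT NULL
alter table t add constraint chk check (c in ('N', 'Y'));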
I believe that what you're seeing is a bug that relies on a few different things interacting.
First, the database character set has to be a variable width character set (i.e. AL32UTF8) so that a single character may require up to four bytes of storage.
Second, the column must be declared with character length semantics.
Third, starting in 11.1, Oracle added an optimization so that when you add a column that is declared NOT NULL and has a DEFAULT, Oracle can do so simply by updating the data dictionary rather than actually storing the default value in every row of the table.
When all three of those things are true, it appears that the value that is returned has a length of 4 and is padded with CHR(0) characters.
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> create table foo( col1 number );
Table created.
SQL> insert into foo values( 1 );
1 row created.
SQL> commit;
Commit complete.
SQL> alter table foo add c char(1 char) default 'N' not null;
Table altered.
SQL> alter table foo add constraint chk_foo check( c in ('Y', 'N') );
alter table foo add constraint chk_foo check( c in ('Y', 'N') )
*
ERROR at line 1:
ORA-02293: cannot validate (SCOTT.CHK_FOO) - check constraint violated
SQL> select c, dump(c) from foo;
C DUMP(C)
---- ------------------------------
N Typ=1 Len=4: 78,0,0,0
If you actually force the value to be stored in the table, you'll get the expected behavior where there is no CHR(0) padding. So if I insert a new row into the table, it passes.
SQL> insert into foo(col1) values (2);
1 row created.
SQL> select c, dump(c) from foo;
C DUMP(C)
---- ------------------------------
N Typ=1 Len=4: 78,0,0,0
N Typ=1 Len=1: 78
You could also issue an UPDATE to update the rows that aren't actually storing the value in the rows of the table
SQL> update foo
2 set c = 'N'
3 where c != 'N';
1 row updated.
SQL> select c, dump(c) from foo;
C DUMP(C)
---- ------------------------------
N Typ=1 Len=1: 78
N Typ=1 Len=1: 78
You tagged oracle11g, but you didn't specify a version.
This works for me on 11.2.0.2 on Linux x86-64.
SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 26 13:13:52 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and OLAP options
SQL> create table tt(a number);
Table created.
SQL> insert into tt values (1);
1 row created.
SQL> commit;
Commit complete.
SQL> alter table tt add c char(1 char) default 'N' not null;
Table altered.
SQL> alter table tt add constraint chk check(c in('N','Y'));
Table altered.
SQL> select * from tt;
A C
---------- -
1 N
SQL> column dump(c) format a30
SQL> select c, length(c),dump(c) from tt;
C LENGTH(C) DUMP(C)
- ---------- ------------------------------
N 1 Typ=96 Len=1: 78
So....perhaps you have a bug in your version?
Hope that helps.
The reason for the ORA-02293 error, as you've already mentioned, is that it is inserting 'N ' (with padded whitespace) rather than 'N', so your constraint is violated.
The more interesting question is: why is it adding that space? Well, by definition, a CHAR is fixed width, whereas a VARCHAR2 is not. A CHAR will always pad with whitespace to fill the entire space allocated for the column. Because you've chosen a width of 1 CHAR, and AL32UTF8 is a variable-width character set, that would seem to conflict with the fixed-width nature of CHAR. It looks like the value gets padded to fill the extra byte(s) not used by the 'N'. Or, at least, I assume that is what is happening.
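For comparison, here is a minimal sketch of the normal blank-padding behaviour (char_demo is just an illustrative name): on a regular insert the padding consists of real spaces (chr(32)), which is what makes the chr(0) padding in the buggy add-column case stand out.
create table char_demo (col char(5 char));
insert into char_demo values ('A');
select col, length(col), dump(col) from char_demo;
-- expect something like: Typ=96 Len=5: 65,32,32,32,32 (i.e. 'A' followed by four spaces)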