I need to insert values with a precision of 5 decimal places into an Oracle interface table via OPENQUERY, because the values are originally stored in a SQL Server database. The data type of the Oracle table column is NUMBER (with no precision or scale specified). Using OPENQUERY to insert a value of 1.4 results in 1.3999999999999999 being stored in the Oracle table. I cannot change the data type of the Oracle column to NUMBER(38,5) because it is a standard Oracle table (GL_DAILY_RATES_INTERFACE).
According to the Oracle documentation (https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1832):
"If a precision is not specified, the column stores values as given."
This means that if I insert 1.4, it should be stored in a NUMBER column as is. But it isn't. Does that mean that when inserting through OPENQUERY to a linked Oracle server, the Oracle Provider for OLE DB performs some additional conversion that results in a floating point error?
How do I insert values with a precision of 5 decimal places into an Oracle NUMBER column that has no precision or scale specified?
Update:
My insert statement already rounds the values when inserting, but that doesn't solve the issue.
For example:
INSERT INTO OPENQUERY(LINKEDSERVER, 'SELECT CONVERSION_RATE FROM GL_DAILY_RATES_INTERFACE') VALUES (ROUND(1.4, 5))
Since inserting values through OPENQUERY to a linked Oracle server causes a floating point error, I tried using EXEC('') AT LINKEDSERVER instead, and it worked. Because the statement is executed directly on the Oracle server, the Oracle Provider for OLE DB no longer performs any unexpected conversion.
My overall solution was to first insert the values from the SQL Server table into the Oracle table using OPENQUERY, then use EXEC() AT to update and re-round the values in the Oracle table, as sketched below.
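A minimal sketch of that two-step approach, assuming the linked server is named LINKEDSERVER and has RPC Out enabled (which EXEC ... AT requires):

-- Step 1: insert through OPENQUERY as before; the value may arrive
-- with a floating point error (e.g. 1.3999999999999999)
INSERT INTO OPENQUERY(LINKEDSERVER, 'SELECT CONVERSION_RATE FROM GL_DAILY_RATES_INTERFACE')
VALUES (ROUND(1.4, 5));

-- Step 2: re-round directly on the Oracle side; this runs as a
-- pass-through statement, so the OLE DB provider never touches the value
EXEC('UPDATE GL_DAILY_RATES_INTERFACE SET CONVERSION_RATE = ROUND(CONVERSION_RATE, 5)') AT LINKEDSERVER;

Alternatively, the whole insert can be done as a parameterized pass-through, e.g. EXEC('INSERT INTO GL_DAILY_RATES_INTERFACE (CONVERSION_RATE) VALUES (?)', 1.4) AT LINKEDSERVER, which avoids the OPENQUERY round trip entirely.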
I am using Oracle 19c.
I have a table, and I need to change the data type of one of its columns:
from NUMBER to NUMBER(24,8)
The column contains data, nearly 300,000 records, and I am required to keep the data.
If I do this without truncating/deleting data:
Does it harm data integrity?
Does it affect the data type of existing data?
The reason for this operation is that the column should have had 7 or 8 decimals, but it has somehow been limited to 4 decimals, even though the data type is NUMBER. Either my ETL tool (Informatica) or the Oracle DB has imposed the limit; I do not know which.
Thanks in advance.
Your problem doesn't appear to be with Oracle. A NUMBER column declared without precision or scale stores the value exactly as given:
CREATE TABLE T1 (
    num NUMBER
);

INSERT INTO T1 (num) VALUES (123.12345678);

SELECT * FROM T1;

NUM
123.12345678
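If you still want the explicit NUMBER(24,8) declaration, note that Oracle raises ORA-01440 ("column to be modified must be empty to decrease precision or scale") when the column already contains data. A hedged sketch of the usual workaround, using the hypothetical names my_table and my_col:

-- Direct modification fails on a populated column:
--   ALTER TABLE my_table MODIFY (my_col NUMBER(24,8));  -- ORA-01440

-- Workaround: copy the data through a new column, then swap the names
ALTER TABLE my_table ADD (my_col_new NUMBER(24,8));
UPDATE my_table SET my_col_new = my_col;
COMMIT;
ALTER TABLE my_table DROP COLUMN my_col;
ALTER TABLE my_table RENAME COLUMN my_col_new TO my_col;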
Using DBeaver, I'm trying to migrate a table from one Oracle instance to another. I just right-click on the desired table, select Export Data, and follow the wizard.
My problem is that the CLOB column is truncated. In the source database instance the max CLOB length is 6046297, but in the target it is 970823. The source has 340 records where the CLOB value is longer than 970823.
I've just now noticed that the source table has 24806 rows while the target has 12876. For the table's sequence id, the max value is 70191 in the source and 58185 in the target. The source has 22716 records with an id less than 58185 and the target has 12876, so it wasn't just truncation: DBeaver is not transferring half of the records.
I'm connecting to Oracle with the JDBC driver. Is there a configuration in DBeaver, in the connection, or in the driver that would let me transfer this table? Otherwise I may just try another tool.
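One server-side alternative is to copy the table over a database link, which keeps the LOB handling inside Oracle. A hedged sketch, assuming a link named source_link pointing at the source instance and a hypothetical table my_table with the same structure on both sides (CLOBs cannot be queried across a link in an ordinary SELECT, but INSERT ... SELECT into a local table is one of the permitted forms):

-- Run on the target instance
INSERT INTO my_table
SELECT * FROM my_table@source_link;
COMMIT;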
The tables used to build the table T_D_SVC_LOC all come from an Oracle database. In several of those tables, the date columns are stored as numbers, some holding zeros and some holding real numeric dates.
When I write to the T_D_PR_SVC_LOC table, those numbers are converted to dates by Oracle itself.
One of these default dates is 01010101, which comes back as 01-JAN-01. Notice also that when a date is not 0, Oracle converts the actual number to a date through the same OCIexecuter.
There is no abort and the job completes OK; however, rows of data are missing or dropped for some reason.
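If the zero "dates" are what is causing rows to be dropped, one option is to make the conversion explicit rather than relying on the implicit one. A hedged sketch, assuming the numbers encode dates as YYYYMMDD (so 01010101 becomes 01-JAN-01) and that zero should map to NULL; the table and column names are hypothetical:

SELECT CASE
         WHEN num_date = 0 THEN NULL                                -- placeholder zeros
         ELSE TO_DATE(TO_CHAR(num_date, 'FM00000000'), 'YYYYMMDD')  -- real numeric dates
       END AS real_date
FROM   src_table;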
We are trying to capture queries that hit the database with sub-optimal, unexpected column (bind) lengths, where the target columns are defined with a limited length.
How can we capture the full text of such queries, together with the column length, in Oracle?
Example:
A Java application with Hibernate has a string column defined in the hbm mapping without a length, hitting a DB table where that column is defined as VARCHAR2(50).
The complaint we received is that this application sends queries binding the value as VARCHAR2(2000) against the DB. How can we capture this full query, with the column length, as it comes from the application?
Env: Oracle 11g, 12c
Java 1.7 with Hibernate
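One way to catch this from the database side is V$SQL_BIND_CAPTURE, which records the declared datatype and maximum length of each bind variable per cursor. A hedged sketch (bind capture is sampled, so not every execution appears, and the 50-byte column length below is hard-coded as an assumption):

SELECT s.sql_id,
       s.sql_fulltext,           -- full query text as sent by the application
       b.name AS bind_name,
       b.datatype_string,        -- e.g. 'VARCHAR2(2000)'
       b.max_length
FROM   v$sql s
       JOIN v$sql_bind_capture b
         ON b.sql_id = s.sql_id
        AND b.child_number = s.child_number
WHERE  b.datatype_string LIKE 'VARCHAR2%'
AND    b.max_length > 50;        -- binds longer than the column's defined length

For a complete record rather than a sample, a bind-level SQL trace via DBMS_MONITOR.SESSION_TRACE_ENABLE with binds => TRUE on the offending sessions writes every bind's datatype and length to the trace file.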
I have to copy data from one table to another, where one table is in Oracle and the other is in MS SQL Server. I want to copy the data from the SQL Server table to the Oracle table. The problem is that the SQL Server table has one column of data type ntext, while the destination column in the Oracle table is CLOB.
When I use the query
insert into oracle.table select * from sqlserver.table@mssql;
I get the following error:
SQL Error: ORA-00997: illegal use of LONG datatype
Can anyone advise on this, please?
I tried it through a PL/SQL procedure and it worked. I created a cursor, fetched the values into variables declared as VARCHAR2, and then ran an EXECUTE IMMEDIATE for the INSERT INTO ... SELECT * FROM <TABLE_NAME>@MSSQL.
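A hedged sketch of that workaround, with hypothetical names (remote table src_table over a link called mssql, local table oracle_table): the ntext column, which the gateway presents as LONG, is fetched into a VARCHAR2 variable row by row, which avoids the ORA-00997 restriction as long as no value exceeds 32767 bytes.

DECLARE
  CURSOR c_src IS
    SELECT id_col, txt_col          -- the gateway may require quoted identifiers
    FROM   src_table@mssql;
  v_id  NUMBER;
  v_txt VARCHAR2(32767);            -- raises an error if a value exceeds 32k
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src INTO v_id, v_txt;   -- the LONG value is fetched into VARCHAR2 here
    EXIT WHEN c_src%NOTFOUND;
    INSERT INTO oracle_table (id_col, clob_col)
    VALUES (v_id, v_txt);           -- VARCHAR2 converts implicitly to CLOB
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/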