Cross-connection query using Toad Data Point - Oracle

I have my source table in Oracle and the target table in Hadoop, and I am trying to compare the two using Toad Data Point. When I run a complete table-to-table comparison, I get null pointer errors because some columns in the table contain null values.
The key column used for the comparison does not contain any nulls. Can anyone help with how Toad Data Point can be used for cross-connection comparisons? Google doesn't seem to offer much help either.

Related

Precision Issue when using OPENQUERY to insert values into Oracle table

I need to insert values with a precision of 5 decimal places into an Oracle interface table via OPENQUERY, because the values are originally stored in a SQL Server database. The data type of the Oracle table column is NUMBER (with no scale/precision specified). Using OPENQUERY to insert a value of 1.4 results in a value of 1.3999999999999999 being stored in the Oracle table. I cannot change the data type of the Oracle column to NUMBER(38,5) because it is a standard Oracle table (GL_DAILY_RATES_INTERFACE).
According to the Oracle documentation (https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1832):
"If a precision is not specified, the column stores values as given."
That means that if I insert 1.4, it should be stored in the NUMBER column as is, but it isn't. So does that mean that when inserting through OPENQUERY to a linked Oracle server, the Oracle Provider for OLE DB does some additional conversion that results in a floating point error?
How do I insert values to a precision of 5 decimal places into an Oracle table NUMBER column that does not have precision or scale specified?
Update:
My insert statement already rounds the values when inserting, but that doesn't solve the issue.
For example:
INSERT INTO OPENQUERY(LINKEDSERVER, 'SELECT CONVERSION_RATE FROM GL_DAILY_RATES_INTERFACE') VALUES (ROUND(1.4, 5))
Since inserting values through OPENQUERY to a linked Oracle server causes some floating point error, I tried using EXEC('') AT LINKEDSERVER instead, and it worked. Because the statement is executed directly on the Oracle server, there is no longer any issue of the Oracle Provider for OLE DB doing any unexpected conversion.
My overall solution was to first insert values from the SQL Server table into the Oracle table using OPENQUERY, then use EXEC() at the linked server to update and round the values in the Oracle table again.
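A minimal sketch of that two-step approach, assuming a linked server named LINKEDSERVER and a hypothetical SQL Server staging table dbo.StagingRates (only CONVERSION_RATE is shown; the real interface table needs its other mandatory columns as well):

-- Step 1: push the rounded values across via OPENQUERY
-- (they may still pick up a binary floating point representation on the way over)
INSERT INTO OPENQUERY(LINKEDSERVER, 'SELECT CONVERSION_RATE FROM GL_DAILY_RATES_INTERFACE')
SELECT ROUND(CONVERSION_RATE, 5)
FROM dbo.StagingRates;

-- Step 2: re-round on the Oracle side, where the statement runs directly
-- and the OLE DB provider is no longer involved in the conversion
EXEC ('UPDATE GL_DAILY_RATES_INTERFACE SET CONVERSION_RATE = ROUND(CONVERSION_RATE, 5)') AT LINKEDSERVER;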

OBIEE measures show null values only

I have the following problem: in OBIEE, when I create an analysis from table A, its measure columns show null, even though there are no null values in this table in the database.
I used only this table and did not select any columns from other tables. Non-measure columns such as client_id and client_name are fine, but any measure of that table calculated in the RPD shows null.
What can be the reason?
Edit: the log contains the following:
The expression "measure_column" is converted to null because
None of the fact tables are compatible with the query request
If you really have measure columns in your dimensions, it's time to completely go over what was done in the RPD. It seems to have severe issues built in.

Insert into a view in Hive

Can we insert into a view in Hive?
I have done this in the past with Oracle and Teradata.
But it doesn't seem to work in Hive.
create table t2 (id int, key string, value string, ds string, hr string);
create view v2 as select id, key, value, ds, hr from t2;
insert into v2 values (1,'key1','value1','ds1','hr1');
Error while compiling statement: FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: Unable to determine if null is encrypted: java.lang.NullPointerException
There seems to be some sort of update support for views, but I can't see anything on inserting into a view.
https://cwiki.apache.org/confluence/display/Hive/UpdatableViews
Thanks for the feedback. Makes sense. The reason behind needing this functionality is that we use an ETL tool that has problems handling high-precision decimals (>15 digits). If the object (a table column, in this case) is represented as a string within the tool, we don't have a problem. So I thought I'd define a bunch of views with string datatypes and use those in the tool instead. But I can't do inserts into a view in Hive, so maybe I need to think of something else. I have done it this way before with Oracle and Teradata.
Can we have two tables with different structures point to the same underlying HDFS content? That probably wouldn't work because of the Parquet storage, which stores the schema. Sorry, not a Hadoop expert.
Thanks a lot for your time.
It is not possible to insert data into a Hive view; a Hive view is just a projection over a Hive table (you can think of it as a saved query). From the Hive documentation:
Note that a view is a purely logical object with no associated
storage. (No support for materialized views is currently available in
Hive.) When a query references a view, the view's definition is
evaluated in order to produce a set of rows for further processing by
the query. (This is a conceptual description; in fact, as part of
query optimization, Hive may combine the view's definition with the
query's, e.g. pushing filters from the query down into the view.)
The link (https://cwiki.apache.org/confluence/display/Hive/UpdatableViews) seems to be for a proposed feature.
Per the official documentation:
Views are read-only and may not be used as the target of LOAD/INSERT/ALTER.
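Given the use case described above (exposing high-precision decimals to an ETL tool as strings), one option is to keep writing to the base table and let the tool read through a view that does the cast. A minimal sketch with hypothetical table and column names:

-- hypothetical base table holding the high-precision values
create table rates_base (id int, rate decimal(25,10)) stored as parquet;

-- inserts always target the base table
insert into rates_base values (1, 123456789012345.6789012345);

-- the ETL tool reads this view, which exposes the decimal as a string
create view rates_str as
select id, cast(rate as string) as rate
from rates_base;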

DB Index not being called

I know this question has been asked more than once here, but I am not able to resolve my issue, so I am posting it again for help.
I have a table called Transaction in an Oracle database (11g) with 2.7 million records. There is a not-null VARCHAR2(20) column (txn_id) which contains numeric values. It is not the primary key of the table, and most of its values are unique; by that I mean a given value can appear 3-4 times in the table.
If I perform a simple select based on TXN_ID, it takes about 5 seconds or more to return the result.
Select * from Transaction t where t.txn_id = 245643
I have an index created on this column, but when I check the explain plan for the above query, it is doing a full table scan. This query is used many times in the application, which is making the application slow.
Can you please help with what might be causing this issue?
You are comparing a VARCHAR2 column with a numeric literal (245643). This forces Oracle to convert one side of the equality, and it converts the column rather than the literal (effectively TO_NUMBER(txn_id) = 245643), which prevents the index on txn_id from being used. Instead of relying on how Oracle handles the implicit conversion, use a character literal:
SELECT * FROM Transaction t WHERE t.txn_id = '245643'
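To confirm the fix, you can compare the execution plans before and after the change (a sketch assuming SQL*Plus or a similar client; the plan output will name whatever index actually exists on txn_id):

EXPLAIN PLAN FOR
SELECT * FROM Transaction t WHERE t.txn_id = '245643';

-- expect an INDEX RANGE SCAN on the txn_id index instead of TABLE ACCESS FULL
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);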

What Oracle data type is easily converted to BIT in MSSQL via SSIS?

I have a Data Flow from an Oracle table to an MSSQL table with one field of data type BIT. The Oracle table is using the characters Y and N at the moment (I'm unsure of the data type and have no way of checking), but the MSSQL table needs to be data type BIT. What type of cast can I use in the Oracle query so that the data is pulled over smoothly?
Cast the column to CHAR(1) in the Oracle query, then use a Derived Column transformation like this:
(DT_BOOL)(OracleField == "Y"?1:0)
Give this column a name like OracleFieldAsBool
and then use it instead of the original column in the rest of your data flow.
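For the Oracle side of this, the cast can go directly into the source query of the OLE DB Source (a sketch with hypothetical table and column names):

-- hypothetical names; replace with the real Y/N column and table
SELECT CAST(active_flag AS CHAR(1)) AS OracleField
FROM my_schema.my_table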
