Hortonworks Hive ODBC Driver DB-240000 [hadoop]

DB-240000 ODBC error: [Hortonworks][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query.
Error message from server: Error while compiling statement: FAILED: ParseException line 1:12 cannot recognize input near 'ALL_ROWS' '*' '/' in hint name
SQLState: 37000
Sample query
WDB-200001 SQL statement 'SELECT /*+ ALL_ROWS */ A.test FROM table A' could not be executed.
The syntax looks right per the Oracle documentation (https://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements006.htm#SQLRF51108).
Or is there a missing parameter in the ODBC configuration?
https://hortonworks.com/wp-content/uploads/2015/10/Hortonworks-Hive-ODBC-Driver-User-Guide.pdf
Use Native Query
Key Name: UseNativeQuery
Default Value: Clear (0)
Required: No
Description: When this option is enabled (1), the driver does not transform the queries emitted by an application, so the native query is used. When this option is disabled (0), the driver transforms the queries emitted by an application and converts them into an equivalent form in HiveQL. Note: If the application is Hive-aware and already emits HiveQL, enable this option so the driver passes the query through unchanged.
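For reference, UseNativeQuery can be set per DSN or directly in the connection string. A minimal sketch (host, port, and credentials are placeholders, and the auth settings are assumptions):

Driver=Hortonworks Hive ODBC Driver;Host=myhiveserver;Port=10000;HiveServerType=2;AuthMech=3;UID=myuser;PWD=mypassword;UseNativeQuery=1;

Note, though, that with UseNativeQuery=1 the query is passed through untransformed, and HiveQL has no ALL_ROWS hint of its own, so the server-side ParseException would likely persist until the Oracle-style hint is removed (i.e. SELECT A.test FROM table A).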
Could this be an issue with HDP versioning?
Is there a missing parameter in the ODBC connection string?

Related

Error in SQL statement Arithmetic operation resulted in an overflow

I am trying to connect through a 64-bit ODBC driver to allow me to execute a query to extract data from JDE 8.12.
After opening the SQL connection and executing a simple query, an error appears: "Error in SQL statement Arithmetic operation resulted in an overflow."
Could you please advise what is missing in order to allow the query to be executed?
Steps which I did:
1 - Selected Microsoft OLE DB Provider for ODBC Drivers
2 - Selected the driver (Driver - Oracle in OraClient 11g64_home1)
3 - Tested the connection, which showed as successful
4 - Built a simple query to test the flow
5 - After running the flow I get the error
"Error in SQL statement Arithmetic operation resulted in an overflow."
After a successful connection, I was expecting a simple query to execute without issue.
It is possible that your Oracle client is 32-bit only and the application using it is not compiled as x86.
Also, it's possible that you have a numeric field with limited precision, and a value is being computed arithmetically and then stored into that field, overflowing its bounds.
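A minimal Oracle-side illustration of that second possibility (the table and column here are invented for the example; the client stack can report the resulting error as an arithmetic overflow):

CREATE TABLE t_demo (amount NUMBER(5,0));
INSERT INTO t_demo (amount) VALUES (99999);  -- fits: NUMBER(5,0) holds values up to 99999
UPDATE t_demo SET amount = amount + 1;       -- fails: ORA-01438, value larger than specified precision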

SSIS For Loop ODBC

I’m extracting data from a table in Oracle.
I have an ODBC connection manager to the Oracle database, and the extraction query should include a WHERE clause, because the table contains transactional data and there is no reason to extract all of it every time.
I want to initialize the table once, doing it with a For Loop which will iterate over the whole table.
Since it's an ODBC connection I can't just put in a WHERE clause directly, because I need to use a variable; hence I realized I need to parameterize the Data Flow task and write my query into the SqlCommand property of the ODBC source.
The property value is:
"SELECT * FROM DDC.DDC_SALES_TBL WHERE trunc(CALDAY) between to_date('" +
@[User::vstart] + "','MM/DD/YYYY') and to_date('" +
@[User::vstop] + "','MM/DD/YYYY')"
where @vstart and @vstop are variables containing the 'from'/'to' dates to be extracted, based on a DATEADD function and another variable (@vcount) which is supposed to be the iterator, as follows:
(DT_WSTR, 2) MONTH( DATEADD( "day", @[User::vcount], GETDATE() ) ) + "/" +
(DT_WSTR, 2) DAY( DATEADD( "day", @[User::vcount], GETDATE() ) ) + "/" +
(DT_WSTR, 4) YEAR( DATEADD( "day", @[User::vcount], GETDATE() ) )
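For context, the For Loop driving @vcount would typically be configured with expressions along these lines (a sketch; the bounds are assumptions, not taken from the original post):

InitExpression:   @[User::vcount] = -30
EvalExpression:   @[User::vcount] < 0
AssignExpression: @[User::vcount] = @[User::vcount] + 1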
What’s happening is that the first iteration works fine but the second one generates an error and the package fails.
I marked the variable as EvaluateAsExpression=True
I also marked the DelayValidation=True in both the For Loop and the DataFlow tasks.
The errors are:
(1)Data Flow Task:Error: SQLSTATE: HY010, Message: [Microsoft][ODBC Driver Manager] Function sequence error;
(2) Data Flow Task:Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "ODBC Source.Outputs[ODBC Source Output]" failed because error code 0xC020F450 occurred, and the error row disposition on "ODBC Source" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
(3) Data Flow Task:Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ODBC Source returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Please assist.
I don't know why I initially didn't use OLE DB; I thought it wouldn't work.
What I tried was to create an OLE DB connection via the Oracle driver, and the connection manager worked, so I used it.
This way you can parameterize the source directly, and the loop worked just fine.
I don't know what caused the conflict with the ODBC source, but that's my workaround.
I didn't find a way to set up the SqlCommand property on the ODBC source and use it in a loop which changes the command every iteration; it crashed after the first iteration no matter what I tried.
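For reference, the OLE DB source accepts a parameterized query like the following, with the ? markers mapped to @[User::vstart] and @[User::vstop] on the source's Parameters page (a sketch, not the poster's exact setup):

SELECT *
FROM DDC.DDC_SALES_TBL
WHERE trunc(CALDAY) BETWEEN to_date(?, 'MM/DD/YYYY')
AND to_date(?, 'MM/DD/YYYY')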
Thanks,
I was having the same issue when using the Oracle Source; updating the Attunity Connectors for Oracle, as well as the OLE DB driver for SQL Server, fixed the problem.

How to create or set a variable inside Direct Database Request (DDR) filter query?

I am unable to use a Direct Database Request (DDR) as a filter query.
The requirement is to filter postal codes in my main subject area-based query using a DDR-based query which performs a REGEXP function, explained below.
Is there any way to define or set the value of a presentation or request variable inside my filter query?
What other approaches could I try?
The REGEXP function is not function-shippable to the RPD via EVALUATE() in an OBIEE analysis, because it is not enabled in my NQSConfig.INI file.
And, this is an Oracle Sales Cloud instance, so I do not have access to modify my NQSConfig.INI file.
What I've tried so far:
I've created a DDR in OBIEE using the Connection Pool defined in my RPD.
This query retrieves all postal codes that are non-numeric:
SELECT
hz_parties.party_id,
hz_parties.postal_code
FROM
HZ_PARTIES
where
REGEXP_INSTR(Substr(HZ_PARTIES.POSTAL_CODE,0,2), '[0-9]{2}') = 0
This works.
But I have an OBIEE subject-area-based analysis which needs a filter using the "is based on results of another analysis" operator, pointing at the DDR and matching against its party_id column.
This is the generated error and SQL of my main OBIEE analysis:
Error Details
Error Codes: YQCO4T56:OPR4ONWY:U9IM8TAC:OI2DL65P
Location: saw.views.evc.activate, saw.httpserver.processrequest, saw.rpc.server.responder, saw.rpc.server, saw.rpc.server.handleConnection, saw.rpc.server.dispatch, saw.threadpool.socketrpcserver, saw.threads
Odbc driver returned an error (SQLExecDirectW).
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 27047] Nonexistent table: "EXECUTE". (HY000)
SQL Issued: {call NQSGetLevelDrillability('SELECT "Contact"."Contact Row ID" saw_0 FROM "Sales - CRM Customers and Contacts Real Time" WHERE "Contact"."Contact Row ID" IN (SELECT saw_0 FROM (EXECUTE PHYSICAL CONNECTION POOL "CRM_OLTP"."Connection Pool" SELECT hz_parties.party_id, hz_parties.postal_code FROM HZ_PARTIES where REGEXP_INSTR(Substr(HZ_PARTIES.POSTAL_CODE,0,2), ''[0-9]{2}'') = 0 ) nqw_1 )')}
Seeing as you're in a cloud product I'd say this sounds like you should open an SR :)
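As a side note on the variable part of the question: presentation variables can normally be referenced inside a DDR's SQL with @{...} syntax, because Presentation Services expands them before the query is issued. A sketch, assuming a dashboard prompt populates a hypothetical variable named pv_prefix_len (the name and usage are assumptions, not from the original post):

SELECT
hz_parties.party_id,
hz_parties.postal_code
FROM
HZ_PARTIES
where
REGEXP_INSTR(Substr(HZ_PARTIES.POSTAL_CODE, 0, @{pv_prefix_len}{2}), '[0-9]{2}') = 0

This does not address the nested-EXECUTE error above, which is a separate problem.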

Error when trying to use the lag function in explicit pass-through [Hive] [SAS over Hadoop]

The following query is giving me the error:
Execute error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Does anyone know why or how to resolve this issue?
proc sql;
connect to hadoop(server='xxx' port=10000 schema=xxx SUBPROTOCOL=hive2 sql_functions=all);
execute(
create table a as
select
*,
lag(claim_flg,1) over (order by ptnt_id,month) as lag1
from b
) by hadoop;
disconnect from hadoop;
quit;
It appears to be a limitation in the Hive database:
Hive Limit of 127 Expressions per Table
Due to a limitation in the Hive database, tables can contain a maximum of 127 expressions. When the 128th expression is read, the directive fails and the SAS log receives a message similar to the following:
ERROR: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
ERROR: Unable to execute Hadoop query.
ERROR: Execute error.
SQL_IP_TRACE: None of the SQL was directly passed to the DBMS.
The Hive limitation applies anytime a table is read as part of a directive. For SAS Data Loader, the error can occur in aggregations, profiles, when viewing results, and when viewing sample data.
Source: http://support.sas.com/documentation/cdl/en/dmddug/67908/HTML/default/viewer.htm#p1fl149uastoudn1v7r2u5ff8aft.htm
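If the 127-expression limit is indeed the cause here, a possible workaround is to stop using select * and enumerate only the columns the step actually needs, keeping the expression count down. A sketch of the pass-through statement, assuming ptnt_id, month, and claim_flg are the only columns required from b:

create table a as
select
ptnt_id,
month,
claim_flg,
lag(claim_flg, 1) over (order by ptnt_id, month) as lag1
from b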

Error running SQL queries with Liquibase

I'm using Liquibase to create tables in DB2. I have a simple example changelog that tries to drop and then create a table.
The SQL statements work fine via my DbVisualizer tool (which uses the same JDBC driver as Liquibase) and also works fine when submitted via the db2 command line tool.
Here's the Liquibase input file:
--changeset dank:1 runAlways:true failOnError:false
DROP TABLE AAA_SCHEMA.FOO
--changeset dank:2 runAlways:true
CREATE TABLE AAA_SCHEMA.FOO ( MYID INTEGER NOT NULL )
Here's the error message I get:
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error:
SQLCODE=-104, SQLSTATE=42601, SQLERRMC=DROP TABLE AAA_SCHEMA.FOO;
;, DRIVER=4.18.60
The IBM error code -104 indicates a syntax problem. Based on the error message, my guess is that it has something to do with the end-of-line character ";". But I've tried the query both with and without the semicolon. The semicolon is accepted by IBM's own db2 command line tool, so it seems like a valid choice.
Any help in figuring out the cause of this error is much appreciated.
The problem was me forgetting to start my native sql file with this required line:
--liquibase formatted sql
Doh!
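For completeness, here is the corrected changelog assembled from the statements above (note that formatted-SQL changeset attributes use colon syntax, e.g. runAlways:true):

--liquibase formatted sql

--changeset dank:1 runAlways:true failOnError:false
DROP TABLE AAA_SCHEMA.FOO;

--changeset dank:2 runAlways:true
CREATE TABLE AAA_SCHEMA.FOO ( MYID INTEGER NOT NULL );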
