Hive - error while using dynamic partition query - hadoop

I am trying to execute the query below:
INSERT OVERWRITE TABLE nasdaq_daily
PARTITION(stock_char_group)
SELECT exchage, stock_symbol, date, stock_price_open,
       stock_price_high, stock_price_low, stock_price_close,
       stock_volue, stock_price_adj_close,
       SUBSTRING(stock_symbol,1,1) AS stock_char_group
FROM nasdaq_daily_stg;
I have already set hive.exec.dynamic.partition=true and hive.exec.dynamic.partition.mode=nonstrict.
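For reference, the standard way to enable these in a Hive session:

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;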
The nasdaq_daily_stg table contains proper data in the form of a number of CSV files. When I execute this query, I get this error message:
Caused by: java.lang.SecurityException: sealing violation: package org.apache.derby.impl.jdbc.authentication is sealed.
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.MapRedTask
The MapReduce job did not start at all, so there are no logs for this error in the JobTracker web UI. I am using Derby to store the metastore information.
Can someone help me fix this?

This may be the issue: you may have the Derby classes on your classpath twice. See:
"SecurityException: sealing violation" when starting Derby connection

Related

Can we insert into an external table

I am debugging big data code in my company's production environment. Hive returns the following error:
Exception: org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be found, may have timed out
Killing DAG...
Execution has failed.
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:282)
at org.apache.hive.jdbc.HiveStatement.executeUpdate(HiveStatement.java:392)
at HiveExec.main(HiveExec.java:159)
After investigating, I found that this error could be caused by BoneCP being set in the connectionPoolingType property, but the cluster support team told me they fixed that bug by upgrading BoneCP.
My question is: can we INSERT INTO an external table in Hive? I have doubts about the insertion script.
Yes, you can insert into an external table.
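A minimal sketch (the table, columns, and HDFS location here are made up for illustration): an EXTERNAL table only changes where the data lives and what DROP TABLE does; INSERT behaves the same as for a managed table.

-- external table over an HDFS directory (placeholder names)
CREATE EXTERNAL TABLE sales_ext (id INT, amount DOUBLE)
STORED AS TEXTFILE
LOCATION '/data/sales_ext';

-- inserting works exactly as for a managed table
INSERT INTO TABLE sales_ext
SELECT id, amount FROM sales_stg;

One caveat relevant to this thread: if the table is transactional (ACID), inserts go through the lock manager, which is where a LockException like the one above can surface.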

Oracle destination in SSIS data flow is failing with error ORA-01405: fetched column value is NULL

I have an SSIS package with one data flow task (DFT). In the DFT, I have one Oracle source and one Oracle destination.
In the Oracle destination I am using the data access mode 'Table Name - Fast Load (Using Direct Path)'.
There is a strange issue with it: it fails with the following error
[Dest 1 [251]] Error: Fast Load error encountered during
PreLoad or Setup phase. Class: OCI_ERROR Status: -1 Code: 0 Note:
At: ORAOPRdrpthEngine.c:735 Text: ORA-00604: error occurred at
recursive SQL level 1 ORA-01405: fetched column value is NULL
I thought it was due to NULL values in the source, but there is no NOT NULL constraint on the destination table, so that should not be an issue. On top of this, the package works fine with 'Normal Load' but not with 'Fast Load'.
I have tried wrapping nullable source columns in NVL, but still no luck; a sketch of that is below.
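For illustration only (column and table names are placeholders), NVL substitutes a default when the source value is NULL:

-- replace NULLs before they reach the destination (placeholder names)
SELECT NVL(amount, 0) AS amount,
       NVL(customer_name, ' ') AS customer_name
FROM src_table;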
I have also recreated the DFT with these connections, but that too was in vain.
Can someone please help me with this?
It worked fine after recreating the Oracle table with the same script.

Error when trying to use the lag function in explicit pass-through [Hive] [SAS over Hadoop]

The following query is giving me the error:
Execute error: Error while processing statement: FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Does anyone know why or how to resolve this issue?
proc sql;
connect to hadoop(server='xxx' port=10000 schema=xxx SUBPROTOCOL=hive2 sql_functions=all);
execute(
create table a as
select
*,
lag(claim_flg,1) over (order by ptnt_id,month) as lag1
from b
) by hadoop;
disconnect from hadoop;
quit;
It appears to be a limitation in the Hive database:
Hive Limit of 127 Expressions per Table
Due to a limitation in the Hive database, tables can contain a maximum of 127 expressions. When the 128th expression is read, the directive fails and the SAS log receives a message similar to the following:
ERROR: java.sql.SQLException: Error while processing statement: FAILED:
Execution Error, return
code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
ERROR: Unable to execute Hadoop query.
ERROR: Execute error.
SQL_IP_TRACE: None of the SQL was directly passed to the DBMS.
The Hive limitation applies anytime a table is read as part of a directive. For SAS Data Loader, the error can occur in aggregations, profiles, when viewing results, and when viewing sample data.
Source: http://support.sas.com/documentation/cdl/en/dmddug/67908/HTML/default/viewer.htm#p1fl149uastoudn1v7r2u5ff8aft.htm
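If table b is wide enough to hit that 127-expression limit, one possible workaround (a sketch only, using the column names from the query above) is to list just the columns the lag actually needs instead of selecting *. This is the statement that would go inside execute( ... ) by hadoop:

create table a as
select
  ptnt_id,
  month,
  claim_flg,
  lag(claim_flg,1) over (order by ptnt_id, month) as lag1
from b;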

Hive SELECT COUNT(*) FileNotFoundException for job.splitmetainfo

I have HiveServer2 running and wrote a Java program to query Hive.
I tried this query
SELECT * FROM table1
where 'table1' is the table name in Hive, and it worked fine and gave me the results.
But when I tried to run
SELECT COUNT(*) FROM table1
it threw an exception
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
I checked the logs and this was recorded:
Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://vseccoetv04:9000/tmp/hadoop-yarn/staging/anonymous/.staging/job_1453359797695_0017/job.splitmetainfo
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1568)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1432)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1390)
....
I checked in a number of places; other people have also hit a 'FileNotFoundException', but not for this reason.
Is there any way to solve this problem?
Okay, I figured out the problem myself :)
I had added some properties to the hive-site.xml file earlier to test support for transactions, and I think I put some wrong values in there. I have now removed the properties I added and restarted the Hive service. Everything works fine :D
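The thread does not say which properties were involved, but Hive's standard transaction-related settings are the ones below (shown here as session-level SET statements rather than hive-site.xml entries); a wrong value for any of these in hive-site.xml could plausibly break job setup before the MapReduce job even starts:

SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;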

Error running SQL queries with Liquibase

I'm using Liquibase to create tables in DB2. I have a simple example changelog that tries to drop and then create a table.
The SQL statements work fine via my DbVisualizer tool (which uses the same JDBC driver as Liquibase) and also work fine when submitted via the db2 command-line tool.
Here's the Liquibase input file:
--changeset dank:1 runAlways=true failOnError:false
DROP TABLE AAA_SCHEMA.FOO
--changeset dank:2 runAlways=true
CREATE TABLE AAA_SCHEMA.FOO ( MYID INTEGER NOT NULL )
Here's the error message I get:
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error:
SQLCODE=-104, SQLSTATE=42601, SQLERRMC=DROP TABLE AAA_SCHEMA.FOO;
;, DRIVER=4.18.60
The IBM error code -104 indicates a syntax problem. From the error message, my guess is that it has something to do with the statement terminator ";". But I have tried the query both with and without the semicolon, and the semicolon is accepted by IBM's own db2 tool, so it seems like a valid choice.
Any help in figuring out the cause of this error is much appreciated.
The problem was that I forgot to start my native SQL file with this required line:
--liquibase formatted sql
Doh!
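For reference, the corrected changelog with the header on the first line (note that formatted SQL changesets take attribute:value pairs, e.g. runAlways:true rather than runAlways=true):

--liquibase formatted sql

--changeset dank:1 runAlways:true failOnError:false
DROP TABLE AAA_SCHEMA.FOO;

--changeset dank:2 runAlways:true
CREATE TABLE AAA_SCHEMA.FOO ( MYID INTEGER NOT NULL );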
