Error while using FROM_UNIXTIME(UNIX_TIMESTAMP()) in Hive - hadoop

I am trying to run this function to get the current date in Hive, but I am getting the following error:
Error while compiling statement: FAILED: SemanticException No valid privileges Required privileges for this query: Server=server1->Db=_dummy_database->Table=_dummy_table->action=select;
I searched online and was pointed to the following functions for getting the current date in Hive:
SELECT from_unixtime(unix_timestamp()); -- current timestamp
SELECT CURRENT_DATE; -- current date
SELECT CURRENT_TIMESTAMP; -- current timestamp
But all of them give the same error when run exactly as given.

Answers:
1. SELECT from_unixtime(unix_timestamp()); -- only works in Impala
SELECT from_unixtime(unix_timestamp()) from any_table_name; -- works in Hive
NOTE: In Hive you must use a FROM clause naming a table that actually exists in the database. Without one, the query is compiled against the internal _dummy_database._dummy_table, and the Sentry error above means your role has no SELECT privilege on it.

2. select unix_timestamp(current_timestamp) from table_name;
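If no queryable table is handy, one workaround (a sketch; the table name dual is illustrative and assumes you have CREATE and INSERT privileges somewhere) is to make a one-row helper table to select against:
-- Hypothetical one-row helper table; any name you have privileges on works.
CREATE TABLE IF NOT EXISTS dual (dummy INT);
INSERT INTO TABLE dual VALUES (1); -- exactly one row, so the SELECT returns one row
SELECT from_unixtime(unix_timestamp()) FROM dual;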

Related

How to delete an external table in Hive when the hdfs path has been deleted?

I removed my HDFS path /user/abc with an rm -R command; some Hive tables were stored in /user/abc/data/abc.db.
My regular (managed) tables were deleted correctly with Hive SQL, but my external tables would not drop, failing with the following error:
[Code: 1, SQL State: 08S01] Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Failed to load storage handler: Error in loading storage handler.org.apache.phoenix.hive.PhoenixStorageHandler)
How can I safely delete the tables?
I tried using:
delete from TBL_COL_PRIVS where TBL_ID=[myexternaltableID];
delete from TBL_PRIVS where TBL_ID=[myexternaltableID];
delete from TBLS where TBL_ID=[myexternaltableID];
But it failed with the following error message:
[Code: 10297, SQL State: 42000] Error while compiling statement: FAILED: SemanticException [Error 10297]: Attempt to do update or delete on table sys.TBLS that is not transactional
NB: I know a schema is supposed to be dropped more safely with HiveQL, but in this particular case that was not done.
The solution is to delete the tables directly from the Hive metastore database (PostgreSQL):
delete from "TABLE_PARAMS" where "TBL_ID"='[myexternaltableID]';
delete from "TBL_COL_PRIVS" where "TBL_ID"='[myexternaltableID]';
delete from "TBL_PRIVS" where "TBL_ID"='[myexternaltableID]';
delete from "TBLS" where "TBL_ID"='[myexternaltableID]';
NB: The order is important: the first three tables reference TBLS via TBL_ID, so TBLS must be deleted last.
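To find the TBL_ID in the first place, you can query the metastore directly (a sketch; table and column names assume the standard PostgreSQL metastore schema, and my_external_table is a placeholder):
-- Run against the metastore database (PostgreSQL), not through Hive.
SELECT "TBL_ID", "TBL_NAME" FROM "TBLS" WHERE "TBL_NAME" = 'my_external_table';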

Hive Index Creation failed

I am using Hive version 3.1.0 in my project. I created an external table using the command below.
CREATE EXTERNAL TABLE IF NOT EXISTS testing(ID int,DEPT int,NAME string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
I am trying to create an index on the same external table using the command below.
CREATE INDEX index_test ON TABLE testing(ID)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD ;
But I am getting the error below.
Error: Error while compiling statement: FAILED: ParseException line 1:7 cannot recognize input near 'create' 'index' 'user_id_user' in ddl statement (state=42000,code=40000)
According to the Hive documentation, indexing was removed in version 3.0:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Indexing#LanguageManualIndexing-IndexingIsRemovedsince3.0
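The same documentation page points to alternatives such as materialized views or columnar formats with built-in indexes. A minimal sketch of the latter (the table name testing_orc and the bloom-filter choice are illustrative):
-- Rewrite the data as ORC: ORC keeps min/max statistics per stripe and can
-- maintain optional bloom filters on selected columns (here, ID).
CREATE TABLE testing_orc
STORED AS ORC
TBLPROPERTIES ('orc.bloom.filter.columns'='ID')
AS SELECT * FROM testing;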

Cannot create Hive external table using jdbcStorageHandler

I am running a small cluster in Amazon EMR in order to play with Apache Hive 2.3.5. My understanding is that Apache Hive can import data from a remote database and have the cluster run queries on it. I followed the example provided in the Apache Hive documentation (https://cwiki.apache.org/confluence/display/Hive/JdbcStorageHandler) and wrote the following code:
CREATE EXTERNAL TABLE hive_table
(
col1 int,
col2 string,
col3 date
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
'hive.sql.database.type'='POSTGRES',
'hive.sql.jdbc.driver'='org.postgresql.Driver',
'hive.sql.jdbc.url'='jdbc:postgresql://<url>/<dbname>',
'hive.sql.dbcp.username'='<username>',
'hive.sql.dbcp.password'='<password>',
'hive.sql.table'='<dbtable>',
'hive.sql.dbcp.maxActive'='1'
);
But I get the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException java.lang.IllegalArgumentException: Property hive.sql.query is required.)
According to the documentation, I need to specify either "hive.sql.table" or "hive.sql.query" to tell Hive how to get data from the JDBC database. But if I replace hive.sql.table with hive.sql.query, I get the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException java.lang.IllegalArgumentException: No enum constant org.apache.hive.storage.jdbc.conf.DatabaseType.POSTGRES)
I tried looking on the web for a solution, and it doesn't look like anyone has experienced the same issue I am having. Do I need to modify a config file, or am I missing something critical in my code?
I think you are using a version of the jar which doesn't support POSTGRES.
Download the latest jar from this link:
http://repo1.maven.org/maven2/org/apache/hive/hive-jdbc-handler/3.1.2/hive-jdbc-handler-3.1.2.jar
Put the downloaded jar into an HDFS location.
Run hive normally.
Run command: add jar ${HDFS_PATH_TO_DOWNLOADED_JAR}
Run your create table command
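Put together, the session might look like this (the HDFS path is a placeholder for wherever you uploaded the jar):
-- Assumed HDFS path; substitute your own location.
add jar hdfs:///user/hive/jars/hive-jdbc-handler-3.1.2.jar;
-- Then re-run the CREATE EXTERNAL TABLE statement from the question unchanged.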

Error: while processing statement: FAILED: Hive Internal Error: hive.mapred.supports.subdirectories must be true

I stumbled on an error:
Error while processing statement: FAILED: Hive Internal Error: hive.mapred.supports.subdirectories must be true if any one of following is true: hive.optimize.listbucketing, mapred.input.dir.recursive and hive.optimize.union.remove.
This error occurred when I tried to load data recursively from an HDFS directory into a Hive table.
I tried to set the following parameters:
SET mapred.input.dir.recursive=true;
SET hive.mapred.supports.subdirectories=true;
But it keeps throwing the same error. What could be wrong?
Thanks for the advice.
This appears to be an issue with Hue in Cloudera. I am currently using CDH 5.11.2 and just experienced this issue while trying to run the same SET statements.
If you connect through beeline (command line) to access Hive and run your SET statements and queries there, it should work. I just verified this.
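For reference, a minimal beeline session (the first line runs in your shell; the JDBC URL is a placeholder for your HiveServer2 host):
beeline -u 'jdbc:hive2://hiveserver-host:10000'
SET hive.mapred.supports.subdirectories=true;
SET mapred.input.dir.recursive=true;
-- re-run the load or query that previously failed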

[Simba][ImpalaJDBCDriver](500051) ERROR processing query/statement

I'm getting the following error while executing queries against one particular database in Impala. With other databases it's working fine.
The error trace is as follows:
[Simba][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: select * from test_table limit 1, SQL state: {1}, Query: {2}.[]
java.sql.SQLException: [Simba][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: [Simba][JSQLEngine](12010) The table "test_table" could not be found., SQL state: HY000, Query: select count(*) from test_table.
at com.cloudera.impala.hivecommon.dataengine.HiveJDBCDataEngine.prepare(Unknown Source)
at com.cloudera.impala.jdbc.common.SStatement.executeNoParams(Unknown Source)
at com.cloudera.impala.jdbc.common.SStatement.executeQuery(Unknown Source)
Caused by: com.cloudera.impala.support.exceptions.GeneralException: [Simba][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: [Simba][JSQLEngine](12010) The table "test_table" could not be found., SQL state: HY000, Query: select count(*) from test_table.
... 3 more
If I execute show tables, the table name is listed.
If I execute the query from Hue, nothing is displayed in the result.
I tried invalidating the metadata.
I tried changing to the latest JDBC41 driver; same problem.
Where might the problem be?
In my case, this error was caused by not having a /user/scott directory on HDFS with write permissions for the HiveServer (running as the cloudera-scm user; my JDBC connection uses scott as the user ID). Once I created the directory and chmod'ed it, I could run all queries. Earlier, only select * worked but select count(*) did not.
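Something like the following should set that up (a sketch; the user name follows the answer above, and the owner/mode values are assumptions):
# run as an HDFS superuser to create the missing home directory for scott
hdfs dfs -mkdir -p /user/scott
hdfs dfs -chown scott:scott /user/scott
hdfs dfs -chmod 775 /user/scott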
The problem was in the .avro file format. My team lead fixed it; I'm not sure what he did, he just said it was a problem with the file format.
