I am running a Cloudera Quickstart VM on a Windows 7 computer with 8 GB of RAM, 4 GB of which are dedicated to the VM.
I loaded tables from a SQL database into Hive using Sqoop (Cloudera VM tutorial exercise 1). In the Hive Query Editor or the Impala shell everything works fine (i.e. "show tables" shows me the tables that were imported).
Using the Impala Query Editor, whatever I type, I get the same error message:
AnalysisException: Syntax error in line 1: USE `` ^ Encountered: EMPTY IDENTIFIER Expected: IDENTIFIER CAUSED BY...
I get the same error even if I just type "show tables;" ...
I checked that the Impala services were up and running, which they were, and everything works fine in the Impala shell.
I googled around but could not find any answer. Many thanks in advance for your help!
You need to use the Hive Query Editor. The error shows up if you use the Impala (or any other) query editor because the tables were created with a library written for Hive.
Query -> Editor -> Hive
Yes, try selecting a database. If none appears, try clearing your browser cache and reloading the page, and also verify that your user has permission to view the default database. Although, since you said the Hive Query Editor works fine, it sounds like permissions are not the issue.
I solved this issue by clearing the history in Firefox. After that I signed in to HUE again and the databases showed up in the Impala Query Editor.
Impala does not support the ORC file format. I changed the table to SequenceFile format and it works.
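One way to do that conversion, sketched below with hypothetical table names, is to create a SequenceFile copy of the ORC table from the Hive shell and then let Impala pick it up:

```shell
# Hedged sketch: copy an ORC table into a SequenceFile table so
# Impala can query it. Table names (my_table_orc, my_table_seq)
# are hypothetical; adjust to your schema.
hive -e "
  CREATE TABLE my_table_seq
  STORED AS SEQUENCEFILE
  AS SELECT * FROM my_table_orc;
"
# Make Impala's metadata cache aware of the new table:
impala-shell -q "INVALIDATE METADATA my_table_seq;"
```

Note that newer Impala versions can read (though not write) some formats Hive writes, so check the format support matrix for your CDH version first.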
I have created the necessary storage plugins and the relevant databases in hive show up when issuing the show database command.
When switching to one of the Hive databases with the use command, though, I found that I cannot select any tables within that database. Looking further, when issuing the show tables command, no tables within that database show up via Apache Drill, whereas they appear fine in Hive.
Is there anything I am missing by any chance in terms of granting permission via Hive to any user? How exactly does Apache Drill connect to Hive to run the relevant jobs?
Appreciate your responses.
show tables; will not list Hive tables as of now. It's better to create views on top of the Hive tables; these views will show up in the show tables; output.
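A minimal sketch of that workaround, assuming a Drill storage plugin named hive, a writable workspace dfs.tmp, and a hypothetical Hive table db1.events (all names are assumptions, adjust to your setup):

```shell
# Hedged sketch: expose a Hive table to Drill through a view,
# run from Drill's sqlline shell in embedded mode.
sqlline -u jdbc:drill:zk=local <<'SQL'
CREATE VIEW dfs.tmp.events_v AS
SELECT * FROM hive.db1.events;
SHOW TABLES;
SQL
```

Once created, the view shows up in SHOW TABLES for the dfs.tmp workspace even when the underlying Hive tables are not listed.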
After restarting the Impala server, we are not able to see the tables (i.e. the tables are not coming up). Can anyone tell me what steps to follow to avoid this issue?
Thanks,
Srinivas
You should try running "invalidate metadata;" from impala-shell. This usually clears up tables not being visible, as Impala caches table metadata.
From:
https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_invalidate_metadata.html
The following example shows how you might use the INVALIDATE METADATA
statement after creating new tables (such as SequenceFile or HBase tables) through the Hive shell. Before the INVALIDATE METADATA statement was issued, Impala would give a "table not found" error if you tried to refer to those table names.
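As a sketch, that sequence might look like this (the table name t1 and the default Quickstart host/port are assumptions):

```shell
# Create a table through the Hive shell...
hive -e "CREATE TABLE t1 (id INT) STORED AS SEQUENCEFILE;"
# ...then tell Impala to reload its metadata cache. A bare
# "INVALIDATE METADATA;" refreshes everything; naming the table
# is cheaper on clusters with many tables.
impala-shell -i localhost:21000 -q "INVALIDATE METADATA t1;"
impala-shell -i localhost:21000 -q "SHOW TABLES;"
```

For data appended to an existing table (rather than a new table), REFRESH table_name is the lighter-weight statement to use.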
Team,
I am using HUE-BEEWAX (the Hive UI) to execute Hive queries. So far I have always been able to access the results of queries executed on the same day, but today I see a lot of query results shown as expired despite having run them just an hour ago.
My questions are:
When does a query result set become expired?
What settings control this?
Is it possible to retain this result set somewhere in HDFS, and how?
Regards
My understanding is that it's controlled by Hive, not Hue (Beeswax). When HiveServer is restarted it cleans up the scratch directories.
This is controlled by this setting: hive.start.cleanup.scratchdir.
Are you restarting your HiveServers?
Looking through some code, I found that Beeswax sets the scratch directory to "/tmp/hive-beeswax-" + Hadoop Username.
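If you want the scratch directories preserved (or wiped) on startup, the flag is a boolean in hive-site.xml; a sketch of the relevant snippet:

```xml
<!-- hive-site.xml: when true, HiveServer cleans the scratch
     directory (and any cached result sets under it) at startup;
     set false to preserve them across restarts -->
<property>
  <name>hive.start.cleanup.scratchdir</name>
  <value>false</value>
</property>
```

To keep a result set durably in HDFS regardless of scratch-dir cleanup, the usual approach is to write it out explicitly, e.g. with INSERT OVERWRITE DIRECTORY or a CREATE TABLE ... AS SELECT, rather than relying on Beeswax's temporary results.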
I'm on CDH4, in HUE, I have a database in Metastore Manager named db1. I can run Hive queries that create objects in db1 with no problem. I put those same queries in scripts and run them through Oozie and they fail with this message:
FAILED: SemanticException 0:0 Error creating temporary folder on: hdfs://lad1dithd1002.thehartford.com:8020/appl/hive/warehouse/db1.db. Error encountered near token 'TOK_TMP_FILE'
I created db1 in the Metastore Manager as HUE user db1 and as HUE user admin, and nothing works. The db1 user also has a db1 ID on the underlying Linux cluster, if that helps.
I have chmod'd /appl/hive/warehouse/db1.db to give read, write, and execute to owner, group, and other, and none of that makes a difference.
I'm almost certain it's a rights issue, but what? Oddly, I have this working under another ID where I had hacked some combination of things that seemed to have worked, but I'm not sure how. It was all in HUE, so if possible, I'd like a solution doable in HUE so I can easily hand it off to folks who prefer to work at the GUI level.
Thanks!
Did you also add hive-site.xml to your Files and Job XML fields? Hue has a great tutorial about how to run a Hive job; adding hive-site.xml is described around the 4:20 mark.
I hit the exact same error on Hadoop MapR.
Root cause: the main database and the temporary (scratch) directory were created by different users.
Resolution: creating both folders with the same user ID should help.
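A sketch of how you might verify such an ownership mismatch from the command line (the warehouse path is taken from the error above; the user and group names are assumptions):

```shell
# Check who owns the warehouse directory the job writes to.
hdfs dfs -ls /appl/hive/warehouse | grep db1.db
# Also check the scratch directories, e.g. /tmp/hive-<user>.
hdfs dfs -ls /tmp
# If the owner differs from the user the Oozie job runs as,
# align them (user/group here are hypothetical):
hdfs dfs -chown -R db1:hive /appl/hive/warehouse/db1.db
```

Permission bits alone (chmod) may not be enough if Hive checks directory ownership, which would explain why opening up rwx for everyone made no difference.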
I have installed CDH 4.4. The Hive client is working properly, and I am able to create and display all the Hive tables.
But when I use a tool like Talend, I get error 10001, table not found.
Can anybody tell me where I am going wrong?
This problem is due to the fact that Talend searches the default database.
Specify database.tablename in the table field; this will solve the problem.
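In other words, qualify the table name with its database instead of relying on the default; a sketch with hypothetical names:

```shell
# In Talend's table field (or any client that defaults to the
# "default" database), use a fully qualified name:
#   db1.customers      instead of just: customers
# The equivalent check from the Hive shell:
hive -e "SELECT * FROM db1.customers LIMIT 10;"
```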
Regards,
Nagaraj