The table is in Parquet format and sits in MinIO storage. I create the Parquet file myself. After the Parquet file is uploaded to MinIO, I run the following command to add the new partition:
call system.sync_partition_metadata('myschema', 'mytable', 'ADD', true)
However, it errors out with:
[Code: 65551, SQL State: ] Query failed (#20221210_041833_00033_4iqfp): unexpected partition name: mypartitionfield=HIVE_DEFAULT_PARTITION != []
Earlier on I did insert an empty partition value. How can I remedy this issue? The mypartitionfield column is of type date.
I tried dropping and recreating the table, as well as deleting the MinIO folder structure and creating it again. Neither worked.
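One possible cleanup, sketched here as an assumption rather than a confirmed fix: if the earlier empty insert left a mypartitionfield=__HIVE_DEFAULT_PARTITION__ prefix behind in the bucket, removing that prefix from MinIO and then running the sync in FULL mode (which both drops stale partitions and adds new ones) may clear the error:
-- Hedged sketch: assumes the offending mypartitionfield=__HIVE_DEFAULT_PARTITION__
-- prefix has already been removed from the bucket; FULL performs both ADD and DROP.
call system.sync_partition_metadata('myschema', 'mytable', 'FULL', true)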
I removed my HDFS path /user/abc with an rm -R command; some Hive tables were stored in /user/abc/data/abc.db.
Although my regular tables were deleted correctly with Hive SQL, my external tables wouldn't drop, failing with the following error:
[Code: 1, SQL State: 08S01] Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Failed to load storage handler: Error in loading storage handler.org.apache.phoenix.hive.PhoenixStorageHandler)
How can I safely delete the tables?
I tried using:
delete from TBL_COL_PRIVS where TBL_ID=[myexternaltableID];
delete from TBL_PRIVS where TBL_ID=[myexternaltableID];
delete from TBLS where TBL_ID=[myexternaltableID];
But it didn't work, and I got the following error message:
[Code: 10297, SQL State: 42000] Error while compiling statement: FAILED: SemanticException [Error 10297]: Attempt to do update or delete on table sys.TBLS that is not transactional
Thank you,
NB: I know a schema is supposed to be dropped more safely with HiveQL, but in this particular case that was not done.
The solution is to delete the tables directly from the Hive Metastore backend (PostgreSQL) with:
delete from "TABLE_PARAMS" where "TBL_ID"='[myexternaltableID]';
delete from "TBL_COL_PRIVS" where "TBL_ID"='[myexternaltableID]';
delete from "TBL_PRIVS" where "TBL_ID"='[myexternaltableID]';
delete from "TBLS" where "TBL_ID"='[myexternaltableID]';
NB: Order is important.
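To find the TBL_ID to plug into the statements above, a lookup like the following against the standard metastore schema works (the table name here is a placeholder for your stuck external table):
-- Look up the stuck table's TBL_ID in the metastore (PostgreSQL).
SELECT t."TBL_ID", d."NAME" AS db_name, t."TBL_NAME"
FROM "TBLS" t
JOIN "DBS" d ON d."DB_ID" = t."DB_ID"
WHERE t."TBL_NAME" = 'myexternaltable';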
I am using Hive version 3.1.0 in my project. I have created an external table using the command below:
CREATE EXTERNAL TABLE IF NOT EXISTS testing(ID int,DEPT int,NAME string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
I am trying to create an index on the same external table using the command below:
CREATE INDEX index_test ON TABLE testing(ID)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD ;
But I am getting the error below:
Error: Error while compiling statement: FAILED: ParseException line 1:7 cannot recognize input near 'create' 'index' 'user_id_user' in ddl statement (state=42000,code=40000)
According to the Hive documentation, indexing was removed in version 3.0:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Indexing#LanguageManualIndexing-IndexingIsRemovedsince3.0
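Since there is no direct replacement for CREATE INDEX, one hedged alternative, following the documentation's pointer to columnar formats, is to keep the data in ORC, whose built-in min/max statistics and optional bloom filters cover much of what a compact index did. The table name and the bloom-filter property value below are illustrative, not part of the original setup:
-- Illustrative ORC copy of the table; orc.bloom.filter.columns adds a bloom
-- filter on ID so predicates like WHERE ID = ... can skip stripes.
CREATE TABLE testing_orc (ID int, DEPT int, NAME string)
STORED AS ORC
TBLPROPERTIES ('orc.bloom.filter.columns'='ID');

INSERT INTO TABLE testing_orc SELECT ID, DEPT, NAME FROM testing;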
I am running an insert script in Hive that loads a .csv file from the local file system into a Hive table, as below:
load data local inpath 'xxx.csv' into table xxx;
I got an error saying:
Failed with exception Unable to move source file:/home/hadoop/hbase-data/xxx.csv to destination hdfs://xxx.xxx.xxx:8020/test/xxx.csv
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
Can anyone help me out with this?
Thanks so much for your effort.
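A hedged workaround sketch, assuming the MoveTask cannot move the local file into the target HDFS directory (for example because of permissions or an existing file there): copy the file into HDFS yourself and load it without LOCAL, so Hive only moves it within HDFS. The /tmp path and the shell step are illustrative, not a confirmed fix:
-- Shell step, run outside Hive (illustrative):
--   hdfs dfs -put /home/hadoop/hbase-data/xxx.csv /tmp/xxx.csv
LOAD DATA INPATH '/tmp/xxx.csv' INTO TABLE xxx;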
Is it possible to point LOCATION at a file for an external table in Hive?
CREATE EXTERNAL TABLE table1
(
line string
)
LOCATION '/hdp_in/fd/file.txt.gz';
because I get an error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.fs.FileAlreadyExistsException Parent path is not a directory: /hdp_in/fd/file.txt.gz file.txt.gz
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.mkdirs(FSDirectory.java:1957)
(...)
Do I have to use only directories? I haven't found that information in the Language Manual reference...
Regards
Pawel
Yes, you will have to put this file in a directory and then create an external table on top of it. As per the documentation: "An EXTERNAL table points to any HDFS location for its storage, rather than being stored in a folder specified by the configuration property hive.metastore.warehouse.dir."
Even when you create an internal table, Hive by default creates a directory for it inside hive.metastore.warehouse.dir; the same behavior is expected when creating an external table, except that the default directory is not used.
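A minimal sketch of that layout, assuming a hypothetical directory /hdp_in/fd/table1 that the file is moved into first (the hdfs commands are the shell steps, shown as comments):
-- Shell steps, run outside Hive (paths are illustrative):
--   hdfs dfs -mkdir /hdp_in/fd/table1
--   hdfs dfs -mv /hdp_in/fd/file.txt.gz /hdp_in/fd/table1/
CREATE EXTERNAL TABLE table1
(
line string
)
LOCATION '/hdp_in/fd/table1';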
I was getting the error below when I ran the application:
Caused by: org.hibernate.exception.GenericJDBCException: could not execute native bulk manipulation query
.
.
Caused by: java.sql.SQLException: ORA-01157: cannot identify/lock data file - see DBWR trace file
ORA-01110: data file : '/fld1/fld2/mytemp_tablespace.dbf'
I tried to find this file and discovered that the folders did not exist. I then created the respective folders and a new, empty mytemp_tablespace.dbf file, but the same error still occurs.
Any idea why this error is happening? If it were a plain SQL exception, it would have shown up right at the beginning.
What I have done is create a new schema and export the database from the old one into it.
Also, how can I see or get the DBWR trace file?
This could be the result of a restored database: during the restore, RMAN was not able to create the tempfiles because of a missing directory.
The solution is quite simple: once the directories are created, just add one or more tempfiles:
alter tablespace mytemp_tablespace add tempfile '/fld1/fld2/mytemp_tablespace01.dbf';
Once the temp tablespace has its storage, your actions can succeed.
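As a sanity check after adding the tempfile, the standard data dictionary view can confirm the temp tablespace now has storage (the view and column names are standard Oracle; the tablespace name comes from the error above):
-- Confirm the temp tablespace now has at least one usable tempfile.
SELECT file_name, tablespace_name, bytes, status
FROM dba_temp_files
WHERE tablespace_name = UPPER('mytemp_tablespace');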