I am trying to load a flat file into a table in Hive and get the error below.
FAILED: IllegalArgumentException java.net.UnknownHostException: nameservice1
I am not sure what I need to do here.
The table is created as
CREATE TABLE IF NOT EXISTS poc_yi2 ( IndexValid_fg STRING ) ROW FORMAT delimited fields terminated by ',' STORED AS TEXTFILE
The data file contains one line which is
Yes,
The command to load the data is:
load data local inpath '/home/user1/testx/1' overwrite into table poc_yi2;
Is this a configuration parameter? I am relatively new to Hive. Can someone please assist?
This looks like a problem with your cluster configuration. Please make sure you have properly set properties such as:
dfs.nameservices=nameservice1
dfs.ha.namenodes.nameservice1=namenode1,namenode2
Stop the daemons, make all the necessary modifications, and restart your cluster. If the problem persists, please show me your log files along with the config files.
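For reference, a minimal HA sketch of these settings in hdfs-site.xml might look like the following; the namenode host names nn1host and nn2host are placeholders for your actual hosts:

```xml
<!-- hdfs-site.xml: minimal HA nameservice sketch; host names are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>namenode1,namenode2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode1</name>
  <value>nn1host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode2</name>
  <value>nn2host:8020</value>
</property>
```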
Related
I am trying to insert data into Hive Server with the command "load data local inpath 'C:\User\HiveData_Employ.csv' into table table1swa;". The CSV is on my local machine, and the data in it is {21,Name1}. But I am getting an error like the one below:
FAILED: IllegalArgumentException java.net.URISyntaxException: Relative path in absolute URI: C:%5CSwarup%5CHiveData_Employ.csv (state=42000,code=40000)
What am I doing wrong here? I think I should specify LOCAL since I am loading data from my local machine and not from an HDFS path. Also, please confirm that the input data is correct.
Try changing the backslashes to forward slashes, as below:
load data local inpath 'C:/User/HiveData_Employ.csv' into table table1swa;
Also, the input data looks fine.
I have a CSV on my local machine, and I access Hive through the Qubole web console. I am trying to upload the CSV as a new table, but couldn't figure out how. I have tried the following:
LOAD DATA LOCAL INPATH <path> INTO TABLE <table>;
I get the error saying No files matching path file
I am guessing that the CSV has to be on some remote server where Hive is actually running, and not on my local machine. The solutions I saw don't explain how to handle this issue. Can someone help me out regarding this?
Qubole lets you define Hive external/managed tables on data sitting in your cloud storage (S3 or Azure Storage), so LOAD from your local box won't work. You will have to upload the file to your cloud storage and then define an external table against it:
CREATE EXTERNAL TABLE orc1ext(
  itinid string, itinid1 string)
STORED AS ORC
LOCATION 's3n://mybucket/def.us.qubole.com/warehouse/testing.db/orc1';
INSERT INTO TABLE orc1ext SELECT itinid, itinid
FROM default.default_qubole_airline_origin_destination LIMIT 5;
First, create a table in Hive using the field names present in your CSV file. The syntax you are using seems correct. Use the syntax below for creating the table:
CREATE TABLE foobar(key string, stats map<string, bigint>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '|'
MAP KEYS TERMINATED BY ':' ;
and then load the data using the format below, making sure the path name is correct:
LOAD DATA LOCAL INPATH '/yourfilepath/foobar.csv' INTO TABLE foobar;
I have created an external table in Hive using the following:
create external table hpd_txt(
WbanNum INT,
YearMonthDay INT ,
Time INT,
HourlyPrecip INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
stored as textfile
location 'hdfs://localhost:9000/user/hive/external';
Now this table is created in location */hive/external.
Step-1: I loaded data in this table using:
load data inpath '/input/hpd.txt' into table hpd_txt;
The data is successfully loaded into the specified path (*/external/hpd_txt).
Step-2: I deleted the file from the */hive/external path using the following:
hadoop fs -rmr /user/hive/external/hpd_txt
Questions:
Why is the data deleted from the original path? (*/input/hpd.txt is deleted from HDFS, but the table was created in the */external path.)
After I delete the file from HDFS as in step 2 and then run show tables;, it still lists the table hpd_txt in the external path.
So where is this coming from?
Thanks in advance.
Hive doesn't know that you deleted the files. Hive still expects to find the files in the location you specified. You can do whatever you want in HDFS, but this doesn't get communicated to Hive. You have to tell Hive if things change.
hadoop fs -rmr /user/hive/external/hpd_txt
For instance, the above command doesn't delete the table; it just removes the files. The table still exists in the Hive metastore. If you want to delete the table, then use:
DROP TABLE IF EXISTS tablename;
Since you created the table as an external table this will drop the table from hive. The files will remain if you haven't removed them. If you want to delete an external table and the files the table is reading from you can do one of the following:
Drop the table and then remove the files
Change the table to managed and drop the table
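The second option can be sketched as follows, using the table from the question; flipping the EXTERNAL table property this way makes a subsequent DROP remove the data files as well:

```sql
-- Convert the external table to a managed one, then drop it;
-- dropping a managed table removes its data files along with the metadata.
ALTER TABLE hpd_txt SET TBLPROPERTIES('EXTERNAL'='FALSE');
DROP TABLE hpd_txt;
```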
Finally, the default warehouse location for Hive (hive.metastore.warehouse.dir) is /user/hive/warehouse.
The EXTERNAL keyword lets you create a table and provide a LOCATION so that Hive does not use a default location for it. This comes in handy if you already have data generated. Otherwise, you load the data conventionally or by creating a file in the directory the Hive table points at.
When dropping an EXTERNAL table, data in the table is NOT deleted from the file system.
An EXTERNAL table points to any HDFS location for its storage, rather than being stored in a folder specified by the configuration property hive.metastore.warehouse.dir.
Source: Hive docs
So, in your step 2, removing the file /user/hive/external/hpd_txt removes the data (the files the table points to), but the table still exists and continues to point to hdfs://localhost:9000/user/hive/external as it was created.
@Anoop: Not sure if this answers your question. Let me know if you have any further questions.
Do not use the LOAD DATA INPATH command. The LOAD operation MOVES (not copies) the data into the corresponding Hive table's location. Use put or copyFromLocal to copy a file from the local filesystem into HDFS, then just provide the HDFS file location in CREATE TABLE after executing the put command.
Dropping an external table does not remove the HDFS files from disk; that is the advantage of external tables. For external tables, Hive stores only the metadata needed to access the data files, so if you drop the table, the data files are untouched in their HDFS location. But in the case of internal (managed) tables, both the metadata and the data are removed when you drop the table.
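For example, the copy step might look like this; the local and HDFS paths are illustrative:

```shell
# Create an HDFS directory and copy the local file into it;
# the table's LOCATION can then point at /input/hpd.
hadoop fs -mkdir -p /input/hpd
hadoop fs -put /home/user1/hpd.txt /input/hpd/
```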
After going through your helpful comments and other posts, I have found the answer to my question.
If I use the LOAD DATA INPATH command, it moves the source file to the location where the external table was created. Although the data won't be affected when the table is dropped, moving it like this is not good. So use LOAD DATA LOCAL INPATH when loading data into internal tables.
To load data into an external table from a file located in HDFS, use the LOCATION clause in the CREATE TABLE query to point at the source file, for example:
create external table hpd(WbanNum string,
YearMonthDay string ,
Time string,
hourprecip string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
stored as textfile
location 'hdfs://localhost:9000/input/hpd/';
This sample location points to the data already present in HDFS at this path, so there is no need to use the LOAD DATA INPATH command here.
It's good practice to store source files in their own dedicated directories, so there is no ambiguity when external tables are created, as the data sits in a properly managed directory structure.
Thanks a lot for helping me understand this concept guys! Cheers!
CREATE TABLE test1 (Column1 string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA INPATH 'asv://hivetest#mystorageaccount.blob.core.windows.net/foldername' OVERWRITE INTO TABLE test1 ;
Loading the data generates the following error:
FAILED: Error in semantic analysis: Line 1:18 Path is not legal
''asv://hivetest#mystorageaccount.blob.core.windows.net/foldername'':
Move from:
asv://hivetest#mystorageaccount.blob.core.windows.net/foldername to:
asv://hdi1#hdinsightstorageaccount.blob.core.windows.net/hive/warehouse/test1
is not valid. Please check that values for params "default.fs.name"
and "hive.metastore.warehouse.dir" do not conflict.
The container hivetest is not my default HDInsight container. It is even located on a different storage account. However, the problem is probably not with the account credentials, as I have edited core-site.xml to include mystorageaccount.
How can I load data from a non-default container?
Apparently it's impossible by design to load data into a Hive table from a non-default container. The suggested workaround is to use an external table.
I was trying to use a non-external table so I could take advantage of partitioning, but apparently it's possible to partition even an external table.
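A sketch of what a partitioned external table against the non-default container could look like; the table name, partition column, and subfolder are illustrative:

```sql
CREATE EXTERNAL TABLE test1_part (Column1 string)
PARTITIONED BY (dt string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'asv://hivetest#mystorageaccount.blob.core.windows.net/foldername';

-- Register each partition against its subfolder explicitly.
ALTER TABLE test1_part ADD PARTITION (dt='2014-01-01')
LOCATION 'asv://hivetest#mystorageaccount.blob.core.windows.net/foldername/2014-01-01';
```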
I have to copy a certain chunk of data from one Hadoop cluster to another. I wrote a Hive query which dumps the data into HDFS. After copying the file to the destination cluster, I tried to load the data using the command "load data inpath '/a.txt' into table data". I got the following error message:
Failed with exception Wrong file format. Please check the file's format.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
I had dumped the data as a sequence file. Can anybody let me know what I am missing here?
You should use STORED AS SEQUENCEFILE when creating the table if you want to store sequence files in it. Also, you wrote that you dumped the data as a sequence file, but your file is named a.txt, which I didn't get.
If you want to load a text file into a table that expects a sequence file as its data source, you can do one thing: first create a normal text table and load the text file into it. Then do:
insert into table seq_table select * from text_table;
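Putting both steps together, the whole workaround might look like this; the column layout is illustrative, so match it to your actual dump:

```sql
-- Staging table backed by plain text, matching the file's layout.
CREATE TABLE text_table (col1 string, col2 string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA INPATH '/a.txt' INTO TABLE text_table;

-- Target table stored as a sequence file.
CREATE TABLE seq_table (col1 string, col2 string)
STORED AS SEQUENCEFILE;
INSERT INTO TABLE seq_table SELECT * FROM text_table;
```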