SAS to HIVE2 Cloudera - Error trying to write - hadoop

I get the following error while trying to write to the Hive2 database:
ERROR: java.io.IOException: Could not get block locations. Source file "/tmp/sasdata-e1-...dlv - Aborting...block==null
The error appears when trying to write a new table or append rows to an existing table. I can connect to the database correctly (through a libname) and read tables from the schema, but when I try to create a new table, the table gets created empty because the error above occurs.
Can someone help pls?
Thank you

Remember that Hive is mostly just a metadata store that helps you read files from HDFS. Yes, it presents a database paradigm, but it is really operating on HDFS: each table corresponds to an HDFS directory, and the table's data is written as files inside it.
This sounds like you don't have write permission on the HDFS folder you are writing to (but you do have read permission).
To solve this you need to work out which user you are connecting as and where the data is being written.
If you are creating a simple table, check whether you can write to the Hive warehouse directory. If you are deliberately creating files in a specific HDFS folder, check that folder instead.
Here's a command to help you determine where the data is being written:
show create table [mytable]
If it doesn't mention an HDFS location, you need to get permissions to the Hive warehouse (typically located at hdfs:/user/hive/warehouse, but the actual location is defined in $HIVE_HOME/conf/hive-default.xml if it has been moved elsewhere).
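As a rough illustration (the database name mydb, table name mytable and warehouse path below are placeholders; your cluster may use different locations), you can check the table's location and your write access like this:
-- in Hive: find where the table's data lives (look at the LOCATION line)
show create table mydb.mytable;
# at the shell: check ownership and permissions on that location (or on the warehouse default)
hadoop fs -ls /user/hive/warehouse
hadoop fs -ls /user/hive/warehouse/mydb.db
# quick write test as the user your SAS session connects with
hadoop fs -touchz /user/hive/warehouse/mydb.db/_write_test
hadoop fs -rm /user/hive/warehouse/mydb.db/_write_test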

Related

Duplicate directory in HDFS

I've created an external Hive table stored at "/cntt_sondn/hive/tables/test_orc" in HDFS using the command: create external table test_lab.test_orc (col1 string, col2 string) stored as orc location '/cntt_sondn/hive/tables/test_orc';. It seems OK and the directory was created successfully by user hive. Then I used NiFi to put an ORC file into the directory; no problem, no error or warning thrown. However, when I browse to the directory in the NameNode UI, there are two directories with the same name "test_orc": one created by user hive, the other created by my user.
In addition, it seems NiFi puts my ORC file in the directory owned by user ra_vtg, so the directory created by user hive stays empty. Therefore, no data is inserted into the Hive table.
Can someone explain why this happens?
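For anyone debugging a similar case, it helps to list the paths from the command line and to confirm which location the table actually points at, so you can see which directory each writer (Hive vs. NiFi) created and who owns it (the path and table name are taken from the question above):
hadoop fs -ls /cntt_sondn/hive/tables            # shows the owner and group of each entry
hadoop fs -ls -R /cntt_sondn/hive/tables/test_orc
-- in Hive: confirms the LOCATION the table really uses
show create table test_lab.test_orc;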

Hive insert and load data query is not working

My Hive query is not working. Hive allows me to create databases, show databases and create tables, but it doesn't allow me to load a local file into an HDFS table, and insert queries are not working either.
I tried reinitializing my metastore, formatting the namenode and recreating every directory, but still nothing works.
My datanode is not starting. Is this problem related to the datanode? What should I do?
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Error caching map.xml.
This error comes up when I try to run any query except create table and create database.
From the errors above, you are not able to write to HDFS:
"Hive allows me to create databases, show databases and create tables, but it doesn't allow me to load a local file into an HDFS table, and insert queries are not working either."
Creating databases and tables only touches the metastore and the NameNode (directory creation), whereas loading or inserting data needs DataNodes with free space to store the blocks. With your DataNode not starting, there is nowhere to write the data.
Freeing up HDFS space (and getting the DataNode running again) should fix it.
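A quick way to confirm this on the cluster (a sketch; the exact output varies by distribution) is to check whether the NameNode sees any live DataNodes and how much space is left:
hdfs dfsadmin -report        # live/dead DataNodes, configured and remaining capacity
hadoop fs -df -h /           # overall HDFS usage
jps                          # run on the DataNode host: a DataNode process should be listed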

Can Hive table automatically update when underlying directory is changed

If I build a Hive table on top of some S3 (or HDFS) directory like so:
create external table newtable (name string)
row format delimited
fields terminated by ','
stored as textfile location 's3a://location/subdir/';
When I add files to that S3 location, the Hive table doesn't automatically update. The new data is only included if I create a new Hive table on that location. Is there a way to build a Hive table (maybe using partitions) so that whenever new files are added to the underlying directory, the Hive table automatically shows that data (without having to recreate the Hive table)?
On HDFS, every file under the table location is scanned each time the table is queried, as #Dudu Markovitz pointed out, and files in HDFS are immediately consistent.
Update: S3 is also strongly consistent now, so removed part about eventual consistency.
Also, there may be a problem with statistics being used when querying the table after adding files; see here: https://stackoverflow.com/a/39914232/2700344
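To illustrate that statistics issue (a hedged sketch; the property name and syntax assume a reasonably recent Hive, so check your version), you can either refresh the statistics after new files arrive or stop the optimizer from answering simple aggregates from stale statistics:
-- option 1: recompute statistics so counts reflect the newly added files
analyze table newtable compute statistics;
-- option 2: never answer count(*)/min()/max() from stored statistics
set hive.compute.query.using.stats=false;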
Everything #leftjoin says is correct, with one extra detail: S3 did not (at the time) offer immediate consistency on listings. A new blob could be uploaded and HEAD/GET would return it, but a list operation on the parent path might not see it yet. This means that Hive code which lists the directory may not see the data. Using unique names doesn't fix this; only a consistent database like DynamoDB that is updated as files are added/removed does, and even then you have added a new thing to keep in sync...

How Hive stores the data (loaded from HDFS)?

I am fairly new to Hadoop (HDFS and HBase) and the Hadoop ecosystem (Hive, Pig, Impala etc.). I have a good understanding of Hadoop components such as the NameNode, DataNode, Job Tracker and Task Tracker, and how they work in tandem to store data efficiently.
While trying to understand the fundamentals of a data access layer such as Hive, I need to understand where exactly a table's data (created in Hive) gets stored. We can create external and internal tables in Hive. As external tables can be in HDFS or any other file system, Hive doesn't store data for such tables in the warehouse. What about internal tables? Such a table will be created as a directory on the Hadoop cluster (HDFS). Once we load data into these tables from the local or HDFS file system, are further files created to store that data?
Say for example:
A sample file named test_emp_feedback.csv was brought from the local file system to HDFS.
A table (emp_feedback) was created in Hive with a structure similar to the CSV file's structure. This led to the creation of a directory in the Hadoop cluster, say /users/big_data/hive/emp_feedback.
Now, once I create the table and load data into the emp_feedback table from test_emp_feedback.csv:
Is Hive going to create a copy of the file in the emp_feedback directory? Won't that cause data redundancy?
Creating a managed table will create a directory with the same name as the table under the Hive warehouse directory (usually at /user/hive/warehouse/dbname/tablename). The table structure (Hive metadata) is also created in the metastore (RDBMS/HCatalog).
Before you load data into the table, this directory (with the same name as the table, under the Hive warehouse) is empty.
There are two possible scenarios.
If the table is external, the data is not copied to the warehouse directory at all.
If the table is managed (not external), when you load your data into the table it is moved (not copied) from its current HDFS location to the Hive warehouse directory (/user/hive/warehouse/dbname/tablename/). So this will not duplicate the data.
Caution: it is generally advisable to create an external table unless the data is only used by Hive. Dropping a managed table deletes the data from HDFS (the Hive warehouse).
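A small sketch of that move behaviour, using the emp_feedback example from the question (the HDFS source path is made up for illustration and the default database is assumed):
-- managed table: Hive owns the data under the warehouse directory
create table emp_feedback (emp_id int, emp_name string)
row format delimited fields terminated by ',';
-- the file is moved, not copied, from its current HDFS location...
load data inpath '/users/big_data/test_emp_feedback.csv' into table emp_feedback;
-- ...so it disappears from the source directory and shows up under the warehouse instead
hadoop fs -ls /users/big_data
hadoop fs -ls /user/hive/warehouse/emp_feedback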
To answer your question:
For External Tables:
Hive does not move the data into its warehouse directory. If the external table is dropped, then the table metadata is deleted but not the data.
For Internal Tables:
Hive moves data into its warehouse directory. If the table is dropped, then the table metadata and the data will be deleted.
For your reference
Difference between Internal & External tables:
For External Tables
An external table stores its files on HDFS, but the table is not tightly bound to those source files.
If you delete an external table, the files still remain on HDFS.
As an example, if you create an external table called “table_test” in Hive using HiveQL and link the table to the file “file”, then deleting “table_test” from Hive will not delete “file” from HDFS.
External table files are accessible to anyone who has access to the HDFS file structure, so security needs to be managed at the HDFS file/folder level.
Metadata is maintained on the master node, and deleting an external table from Hive only deletes the metadata, not the data files.
For Internal Tables
Internal tables are stored in a directory controlled by hive.metastore.warehouse.dir; by default this is /user/hive/warehouse, and you can change it by updating that setting in the config file.
Deleting the table deletes the metadata (from the master node) and the data (from HDFS).
Internal table file security is controlled solely via Hive. Security needs to be managed within Hive, probably at the schema level (this depends on the organization).
Hive supports both internal and external tables; this is a choice that affects how data is loaded, controlled, and managed.
Use EXTERNAL tables when:
The data is also used outside of Hive. For example, the data files are read and processed by an existing program that doesn’t lock the files.
Data needs to remain in the underlying location even after a DROP TABLE. This can apply if you are pointing multiple schema (tables or views) at a single data set or if you are iterating through various possible schema.
Hive should not own the data or control settings, directories, etc.; another program or process will do those things.
You are not creating a table based on an existing table (AS SELECT).
Use INTERNAL tables when:
The data is temporary.
You want Hive to completely manage the life-cycle of the table and data.
Source:
HDInsight: Hive Internal and External Tables Intro
Internal & external tables in Hadoop- HIVE
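A compact sketch of the drop behaviour described above (the table names and the /data/landing/clicks path are invented for illustration):
-- managed (internal): the data lives under the Hive warehouse and is owned by Hive
create table clicks_internal (id int) stored as textfile;
drop table clicks_internal;      -- removes the metadata AND the data files
-- external: Hive only tracks metadata; the files at LOCATION are left alone
create external table clicks_external (id int)
stored as textfile
location '/data/landing/clicks';
drop table clicks_external;      -- removes the metadata only; /data/landing/clicks survives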
It would not cause data redundancy. For managed (not external) tables, Hive moves the data into its warehouse directory. In your example, the data will be moved from its original location on HDFS to '/users/big_data/hive/emp_feedback'.
Be careful when dropping a managed table: it will also remove the data from HDFS.
You can load the data in two ways:
A) Use LOAD DATA INPATH 'file_location_of_csv' INTO TABLE emp_feedback;
Note that this command moves the file out of the source directory; it is the approach to use with an internal (managed) table.
or
B) Use the copyFromLocal or put command to copy the local file into HDFS, then create an external table on top of that location. The data won't be moved from the source, and you can drop the external table while the source data remains available.
e.g.
create external table emp_feedback (
  emp_id int,
  emp_name string
)
row format delimited fields terminated by ','
stored as textfile
location '/location_in_hdfs_for_csv_file';
When you drop an external table, only the metadata of the Hive table is dropped. The data still exists at the HDFS location.
Got it. This is what I was able to understand so far.
It all depends on which type of table is being created and where the file is picked up from. The possible use cases are summarized below:
(image: matrix of use cases by table type and file source, not reproduced here)

how to load data in hive automatically

I recently needed to load log files into Hive tables, and I want a tool that can read data from a certain directory and load it into Hive automatically. This directory may include lots of subdirectories; for example, the top-level directory is '/log' and the subdirectories are '/log/20130115', '/log/20130116', '/log/20130117'. Is there an ETL tool that can do the following: once new data is stored in that directory, the tool detects it automatically and loads it into the Hive table? Does such a tool exist, or do I have to write a script myself?
You can easily do this using Hive external tables and partitioning your table by day. For example, create your table as such:
create external table mytable(...)
partitioned by (day string)
location '/user/hive/warehouse/mytable';
This will essentially create an empty table in the metastore and make it point to /user/hive/warehouse/mytable.
Then you can load your data in this directory with the format key=value where key is your partition name (here "day") and value is the value of your partition. For example:
hadoop fs -put /log/20130115 /user/hive/warehouse/mytable/day=20130115
Once your data is loaded there, it is in the HDFS directory, but the Hive metastore doesn't know yet that it belongs to the table, so you can add it this way:
alter table mytable add partition(day='20130115');
And you should be good to go, the metastore will be updated with your new partition, and you can now query your table on this partition.
This should be trivial to script: create a cron job that runs once a day, executes these commands in order, and uses the date command to find the partition to load, for example by running:
hadoop fs -test -e /log/`date +%Y%m%d`
and checking whether $? equals 0, which tells you whether today's directory is there; if it is, you can transfer it and add the partition as described above.
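Putting those steps together, a rough cron-able sketch might look like this (the table name mytable and the paths follow the example above, and the hive -e call assumes the Hive CLI is available on the node running the job):
#!/bin/bash
# daily job: pick up today's /log/<yyyymmdd> directory and expose it as a Hive partition
day=$(date +%Y%m%d)
src="/log/${day}"
dst="/user/hive/warehouse/mytable/day=${day}"
if hadoop fs -test -e "${src}"; then
  # copy the day's directory under the table location
  # (use "hadoop fs -put" instead if /log lives on the local disk rather than in HDFS)
  hadoop fs -cp "${src}" "${dst}"
  # tell the metastore about the new partition
  hive -e "alter table mytable add if not exists partition (day='${day}');"
fi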
You can make use of the LOAD DATA command provided by Hive; it matches your use case. Specify a directory in your local file system and load a Hive table from it.
Example usage -
LOAD DATA LOCAL INPATH '/home/user/some-directory'
OVERWRITE INTO TABLE tablename;
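One hedged caveat for the log-loading scenario above: OVERWRITE replaces whatever the table already contains, and LOCAL copies files from the local filesystem (without LOCAL, files already in HDFS are moved instead). For an incremental, append-style load you would drop the OVERWRITE keyword:
LOAD DATA LOCAL INPATH '/home/user/some-directory'
INTO TABLE tablename;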
