I am new to Hadoop and Hive. I am trying to use
hadoop distcp -overwrite hdfs://source_cluster/apps/hive/warehouse/test.db hdfs://destination_cluster/apps/hive/warehouse/test.db
The command runs without errors, but I still can't see test.db on the destination HDFS cluster.
You've copied files, but haven't modified the Hive metastore that actually registers table information.
If you want to copy Hive tables between clusters, I suggest looking into a tool called Circus Train; otherwise, use Spark SQL to interact with the HiveServer of both clusters rather than using HDFS-only tooling.
After copying the files and directories, you still need to recreate the tables (run their DDL) on the destination so that metadata about those tables appears in its metastore.
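A minimal sketch of that step, assuming you can run the Hive CLI against the source cluster and Beeline against the destination; the database name and hostname below are placeholders:

# Dump the DDL of every table in test.db from the source cluster
for t in $(hive -e "USE test; SHOW TABLES;"); do
  hive -e "SHOW CREATE TABLE test.${t};" >> test_db_ddl.sql
  echo ";" >> test_db_ddl.sql
done

# Replay the DDL against the destination cluster's HiveServer2 (hostname is hypothetical)
beeline -u jdbc:hive2://destination-hiveserver:10000/default -f test_db_ddl.sql

Note that SHOW CREATE TABLE emits the source cluster's LOCATION clauses, so you may need to edit them before replaying, create the database on the destination first, and run MSCK REPAIR TABLE on partitioned tables so the copied partitions get registered.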
Related
I have a scenario in which I have to pull data from a Hadoop cluster into AWS.
I understand that running distcp on the Hadoop cluster is a way to copy the data into S3, but I have a restriction here: I won't be able to run any commands on the cluster. I need to be able to pull the files from the Hadoop cluster into AWS. The data is available in Hive.
I thought of the options below:
1) Sqoop the data from Hive? Is that possible?
2) S3DistCp (running it on AWS)? If so, what configuration would be needed?
Any suggestions?
If the Hadoop cluster is visible from EC2-land, you could run a distcp command there, or, if it's a specific bit of data, a Hive query which uses hdfs:// as input and writes out to S3. You'll need to deal with Kerberos auth though: you cannot use distcp in an un-Kerberized cluster to read data from a Kerberized one, though you can go the other way.
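For the distcp-from-EC2 route, a rough sketch of the kind of command involved; the bucket name, NameNode address and inline credentials are placeholders, and in practice you would put the S3A credentials in core-site.xml or use an instance profile instead:

# Pull a Hive database's files out of HDFS and into S3 over the s3a:// connector
hadoop distcp \
  -Dfs.s3a.access.key=YOUR_ACCESS_KEY \
  -Dfs.s3a.secret.key=YOUR_SECRET_KEY \
  hdfs://source-namenode:8020/apps/hive/warehouse/mydb.db \
  s3a://my-backup-bucket/hive/mydb.db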
You can also run distcp locally on one or more machines, though you are limited by the bandwidth of those individual systems. distcp works best when it schedules the uploads on the hosts which actually have the data.
Finally, if it is incremental backup you are interested in, you can use the HDFS audit log as a source of changed files; this is what incremental backup tools tend to use.
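As a rough illustration of the audit-log idea: when HDFS audit logging is enabled, the NameNode writes one line per filesystem operation, so recently created or renamed paths can be scraped out of it. The log location and exact field layout vary by distribution and log4j settings, so treat this purely as a sketch:

# List paths touched by create/rename operations according to the NameNode audit log
# (log path and field names are assumptions; check your distribution's layout)
grep -E 'cmd=(create|rename)' /var/log/hadoop-hdfs/hdfs-audit.log \
  | sed -n 's/.*src=\([^[:space:]]*\).*/\1/p' | sort -u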
The Hive installation guide says that Hive can be used with an RDBMS. My question is: it sounds like Hive can exist without Hadoop, right? Is it an independent HQL engine that could work with any data source?
You can run Hive in local mode to use it without Hadoop, for debugging purposes. See the URL below:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-Hive,Map-ReduceandLocal-Mode
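As a sketch of what local mode looks like in practice (the property names depend on your Hive and Hadoop versions, and my_table is a placeholder):

# Force jobs to run with the local job runner instead of a cluster
hive -e "
SET mapreduce.framework.name=local;
SET hive.exec.mode.local.auto=true;
SELECT count(*) FROM my_table;
"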
Hive provides a JDBC driver so you can query Hive like any other JDBC source; however, if you are planning to run Hive queries on a production system, you need Hadoop infrastructure to be available. Hive queries are eventually converted into MapReduce jobs, and HDFS is used as the data storage for Hive tables.
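For the JDBC route, a quick sketch using Beeline, the JDBC-based CLI that ships with Hive; the host, port, user and table are placeholders:

# Connect to HiveServer2 over JDBC and run a query
beeline -u jdbc:hive2://hiveserver-host:10000/default -n myuser \
  -e "SELECT * FROM my_table LIMIT 10;"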
I'm just beginning with Hadoop-based systems and I am working with Cloudera 5.2 at the moment. I am trying to get metadata out of HDFS/Hive and into some other software. When I say metadata, I mean things like:
- for Hive: database schema and table schema
- for HDFS: the directory structure in HDFS, creation and modification times, owner and access controls
Does anyone know how to export the table schemas from Hive into a table or CSV file?
It seems that Hive's EXPORT command doesn't support exporting only the schema. I found Pig's DESCRIBE operator, but I'm not sure how to get its output into a table-like structure; it seems to be available only on screen.
Thanks
Cloudera Navigator can be used to manage/export metadata from HDFS and Hive. The Navigator Metadata Server periodically collects the cluster's metadata information and provides a REST API for retrieving metadata information. More details at http://www.cloudera.com/content/cloudera/en/documentation/cloudera-navigator/v2-latest/Cloudera-Navigator-Installation-and-User-Guide/cnui_metadata_arch.html.
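If Navigator is not an option for the Hive side, the table schemas can also be dumped with plain Hive commands and reshaped into CSV; a sketch, with the database name and output file as placeholders:

# Emit table,column,type rows for every table in a database
for t in $(hive -e "USE mydb; SHOW TABLES;"); do
  hive -e "DESCRIBE mydb.${t};" \
    | awk -v tbl="$t" '$1 !~ /^#/ && NF >= 2 {print tbl "," $1 "," $2}' >> mydb_schemas.csv
done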
I'm not familiar with Hive, but you can also extract HDFS metadata by:
Fetching the HDFS fsimage. "hdfs dfsadmin -fetchImage ./fsimage"
Processing the fsimage using the OfflineImageViewer. "hdfs oiv -p XML -i ./fsimage -o ./fsimage.out"
More information about the HDFS OIV at https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html.
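If you then want the HDFS metadata in a flatter, table-like form, the XML can be post-processed, keeping in mind that inode names in the XML are not full paths (parent/child links live in a separate INodeDirectorySection); on Hadoop versions that include it, the Delimited processor of hdfs oiv emits one row per path with modification time, owner, group and permissions. Both of these are sketches against the files produced above:

# Quick look at the inode names recorded in the fsimage XML
grep -o '<name>[^<]*</name>' ./fsimage.out | sed 's/<[^>]*>//g'

# Where the Delimited processor is available: one row per path with times, owner and permissions
hdfs oiv -p Delimited -i ./fsimage -o ./fsimage.tsv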
I'm having a problem with the Hive server that I don't understand. I've just set up a Hadoop cluster and want to access it from a Hive service. The first thing I tried was running the Hive server on one of the cluster machines.
Everything worked nicely, but I wanted to move the Hive service to another machine outside the Hadoop cluster.
So I started a new machine outside this Hadoop cluster, installed Hive (plus the Hadoop libraries) and copied the Hadoop config from the cluster. When I run the hiveserver almost everything goes fine: I can connect with the Hive CLI from a different machine to my hiveserver, create new tables in the Hive warehouse within the HDFS filesystem of the Hadoop cluster, query them and so on.
The thing I don't understand is that the hiveserver doesn't seem to recognize the old tables which were created in my first try.
Some notes about my config: all tables are managed by Hive and stored in HDFS, and the Hive configuration is the default one. I suppose it has to do with my Hive metastore, but I couldn't say what.
Thank you!!
I am looking into using Hive on our Hadoop cluster and then using Presto to do some analytics on the data stored in Hadoop, but I am still confused about some things:
Files are stored in Hadoop (some kind of file manager)
Hive needs tables to store data from Hadoop (data manager)
Do both Hadoop and Hive store their data separately, or does Hive just use the files from Hadoop (in terms of hard disk space and so on)?
-> So does Hive import data from Hadoop into its own tables and leave Hadoop alone, or how should I see this?
Can Presto be used without Hive and just on Hadoop directly?
Thanks in advance for answering my questions :)
First things first: files are stored in the Hadoop Distributed File System (HDFS). Is that what you call a "file manager"?
Actually, Hive can use both: "regular" files in HDFS, or tables, which are once again "regular" files in HDFS plus additional metadata kept in a separate datastore (the Hive metastore); the table files themselves live under Hive's warehouse directory in HDFS. Either way the bytes sit in HDFS, so Hive does not keep a second copy of the data outside it.
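As a small illustration of the "regular files" case, a sketch of layering a Hive table directly over files that already sit in HDFS; the directory, table name, columns and delimiter are made up for the example:

# Create an external table over an existing HDFS directory; Hive records only metadata,
# the files themselves stay exactly where they are (no second copy of the data)
hive -e "
CREATE EXTERNAL TABLE web_logs (
  ts     STRING,
  url    STRING,
  status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/raw/web_logs';
"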
Concerning Presto: it has built-in support for the Hive metastore, but you can also write your own connector plugin for any data source.
Please see the Presto documentation for more info about Hive connector configuration and about connector plugins.
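For reference, a minimal sketch of what the Hive connector configuration looks like on a Presto node; the connector name (hive-hadoop2) and the metastore host are assumptions that depend on your Presto version and setup:

# etc/catalog/hive.properties on each Presto node (metastore host is a placeholder)
cat > etc/catalog/hive.properties <<'EOF'
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083
EOF

# Then Hive tables can be queried straight through the Presto CLI
./presto --server localhost:8080 --catalog hive --schema default --execute "SHOW TABLES;"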