Recently I started working on HBase. Now I need to get the application data from one HBase system onto another. How can I export a dump of the whole HBase from one system to another?
You can use the HBase Export/Import commands to transfer data from one HBase cluster to another.
Please refer to the link below.
http://hbase.apache.org/0.94/book/ops_mgt.html
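As a rough sketch (the table name, HDFS paths, and NameNode addresses below are placeholders, not from your setup), the sequence looks like this:
# on the source cluster: export the table to an HDFS directory
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /backup/mytable
# copy the exported files over to the destination cluster, e.g. with distcp
hadoop distcp hdfs://source-nn:8020/backup/mytable hdfs://dest-nn:8020/backup/mytable
# on the destination cluster: create the table with the same column families, then import
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /backup/mytable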
I have some data in an HBase table and I have to take a backup of it. I am using version 0.94.18. I have used the following command for the export:
hbase org.apache.hadoop.hbase.mapreduce.Driver export hbasetable /home/user/backup/
What actually happened is that the data was copied to HDFS, under exactly the same path I gave. I expected it to be copied to my local file system, but it was not.
Where is the problem?
Second, how do I also back up the table schema in HBase?
For the first part of your question, take a look at How to copy HBase data to local file system (external drive).
Since the data is in HDFS, you just need to copy it from HDFS to the local file system.
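For example, something like this should bring the exported files down to local disk (the local target path is just an example):
hadoop fs -copyToLocal /home/user/backup /home/user/hbase-backup-local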
As for the second part, the good old docs do the trick: http://hbase.apache.org/0.94/book/ops.backup.html
Basically they describe two solutions: either take the backup with the system offline, or use another cluster to hold a backup of your live system.
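If all you need is the table definition itself, one lightweight option (using the table name from your command) is to save the output of describe from the hbase shell:
echo "describe 'hbasetable'" | hbase shell > hbasetable-schema.txt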
Currently I have two machines. One of them is the Hortonworks sandbox, which I have configured as the NameNode, and I have decommissioned the DataNode from it. The other machine I set up myself as a DataNode, and I have installed the Hive server on it.
I also assigned the slave role to it, and I used Ambari to finish the setup.
My question: as this is my first time ever using Hadoop, my plan is to transfer data from a SQL database into Hadoop. Does this mean I have to install MySQL on the DataNode, given that I will be using Sqoop? And another thing: what will the NameNode do? Shall I query it and it passes the queries on to the DataNode? I am really very confused and under huge pressure to finish, so forgive me as I am a newbie. The installations on the machines are all default; I chose DataNode for the first machine and NodeManager for the second one, with no special configurations. I would appreciate a simple example from which I can understand this.
Thanks a lot, fellows.
Sqoop is a tool designed to transfer data between Hadoop and relational database servers. It is used to import data from relational databases such as MySQL and Oracle into Hadoop HDFS, and to export data from the Hadoop file system back to relational databases.
For example: you have some data in MySQL on another machine and you have to transfer that data into your Hadoop HDFS. That is the situation where Sqoop is used.
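A minimal sketch of such an import (the host, database, table, and target directory below are placeholders):
sqoop import \
  --connect jdbc:mysql://mysql-host:3306/salesdb \
  --username dbuser -P \
  --table customers \
  --target-dir /user/hadoop/customers \
  --num-mappers 1
Note that MySQL does not have to be installed on the DataNode: Sqoop only needs network access to the MySQL host and the MySQL JDBC driver available on the machine where you run the command.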
The NameNode stores metadata (number of blocks, which rack and which DataNode each block is stored on, and other details) about the data kept in the DataNodes, whereas the DataNodes store the actual data.
I have set up a single-node Hadoop cluster on Ubuntu. I have installed Hadoop version 2.6 on my machine.
Problem:
Every time I create Hive tables and load data into them, I can see the data by querying them, but once I shut down my Hadoop, the tables get wiped out. Is there any way I can retain them, or is there a setting I am missing?
I tried some solutions provided online, but nothing worked. Kindly help me out with this.
Thanks
B
The Hive table data lives on Hadoop HDFS; Hive just adds metadata on top and gives users SQL-like commands so they don't have to write basic MapReduce jobs. So if you shut down the Hadoop cluster, Hive can't find the data in the table.
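You can see this for yourself: by default, managed Hive table data sits under the warehouse directory in HDFS (the table name below is a placeholder, and your warehouse path may differ):
hadoop fs -ls /user/hive/warehouse
hadoop fs -ls /user/hive/warehouse/your_table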
But if you are saying that the data is lost after you restart the Hadoop cluster, that's another problem.
It seems you are using the default Derby database as the metastore. Configure the Hive metastore properly; I am pointing you to a link below, please follow it:
Hive is not showing tables
I am looking into using Hive on our Hadoop cluster and then using Presto to do some analytics on the data stored in Hadoop, but I am still confused about some things:
Files are stored in Hadoop (some kind of file manager)
Hive needs tables to store data from Hadoop (data manager)
Do Hadoop and Hive store their data separately, or does Hive just use the files from Hadoop (in terms of hard disk space and so on)?
-> So does Hive import data from Hadoop into its tables and leave Hadoop alone, or how should I see this?
Can Presto be used without Hive and just on Hadoop directly?
Thanks in advance for answering my questions :)
First things first: files are stored in Hadoop Distributed File System (HDFS). Is that what you call Data manager?
Actually Hive can use both: "regular" files in HDFS, or tables, which are once again "regular" files (kept under Hive's warehouse directory in HDFS) with additional metadata stored in a separate datastore (the Hive metastore).
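For instance, an EXTERNAL table lets Hive query files that already sit in HDFS without copying them anywhere (the table name, columns, and path here are made up for illustration):
hive -e "CREATE EXTERNAL TABLE web_logs (ip STRING, ts STRING, url STRING)
         ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
         LOCATION '/data/web_logs';"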
Concerning Presto: it has built-in support for the Hive metastore, but you can also write your own connector plugin for any data source.
Please read more info about Hive connector configuration here and about connector plugins here.
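As a rough sketch, the Hive catalog for Presto is just a small properties file, something like etc/catalog/hive.properties (the metastore host/port are placeholders, and the exact connector name can vary between Presto versions):
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083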
I'm trying to create tables programmatically using JDBC. However, I can't see the table I created from the Hive shell. What's worse, when I access the Hive shell from different directories, I see different contents for the database.
Is there any setting I need to configure?
Thanks in advance.
Make sure you run Hive from the same directory every time, because when you launch the Hive CLI for the first time it creates a Derby metastore database (a metastore_db directory) in the current directory. This Derby DB contains the metadata of your Hive tables, so if you change directories you end up with disjoint metadata for your Hive tables. The Derby DB also cannot handle multiple sessions. To allow concurrent Hive access you need a real database to manage the metastore rather than the wimpy little Derby DB that comes with it. You can install MySQL for this and change the Hive properties to use a JDBC connection with the MySQL Type 4 pure Java driver.
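A hypothetical illustration of why you see different tables per directory (the paths are made up):
cd /home/user/projectA && hive -e "show tables;"   # uses /home/user/projectA/metastore_db
cd /home/user/projectB && hive -e "show tables;"   # uses /home/user/projectB/metastore_db
Each invocation with the embedded Derby metastore creates (or reuses) a metastore_db folder in the current working directory, so every directory effectively gets its own private catalog.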
Try emailing the Hive user list or asking on the IRC channel.
You probably need to set up the central Hive metastore (by default Derby, but it can be MySQL/Oracle/Postgres). The metastore is the "glue" between Hive and HDFS. It tells Hive where your data files live in HDFS, what type of data they contain, what tables they belong to, etc.
For more information, see http://wiki.apache.org/hadoop/HiveDerbyServerMode
Examine your Hadoop logs. For me this happened when my Hadoop system was not set up properly; the NameNode was not able to contact the DataNodes on the other machines, etc.
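A couple of quick checks, assuming a standard Hadoop install on the PATH:
jps                      # should list NameNode/DataNode on their respective hosts
hdfs dfsadmin -report    # shows how many DataNodes the NameNode can actually reach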
Yeah, it's due to the metastore not being set up properly. Metastore stores the metadata associated with your Hive table (e.g. the table name, table location, column names, column types, bucketing/sorting information, partitioning information, SerDe information, etc.).
The default metastore is an embedded Derby database which can only be used by one client at any given time. This is obviously not good enough for most practical purposes. You, like most users, should configure your Hive installation to use a different metastore. MySQL seems to be a popular choice. I have used this link from Cloudera's website to successfully configure my MySQL metastore.
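For reference, the core of that configuration is a handful of properties in hive-site.xml; a sketch (the host, database name, and credentials below are placeholders):
javax.jdo.option.ConnectionURL = jdbc:mysql://dbhost:3306/hive_metastore?createDatabaseIfNotExist=true
javax.jdo.option.ConnectionDriverName = com.mysql.jdbc.Driver
javax.jdo.option.ConnectionUserName = hiveuser
javax.jdo.option.ConnectionPassword = hivepassword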