HBase master fails to start - impl.MetricsSystemImpl: Source name ugi already exists - hadoop

I have configured HBase on top of HDFS for distributed mode. I formatted the Hadoop namenode, and HDFS is configured correctly; it is up and running.
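For context, my hbase-site.xml is set up roughly like this (the hostname and port below are simplified placeholders, not the exact values from my cluster):
<configuration>
<property>
<name>hbase.rootdir</name>
<!-- placeholder namenode address; points HBase at HDFS -->
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<!-- placeholder ZooKeeper host -->
<value>master</value>
</property>
</configuration>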
The HBase master is not starting. The error is "impl.MetricsSystemImpl: Source name ugi already exists". The following is the detailed error log for the HBase master:
apreduce/*:/contrib/capacity-scheduler/*.jar
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:CLASS_PATH=.
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=193.60.151.202 36343 192.168.0.84 22
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HADOOP_COMMON_LIB_NATIVE_DIR=/home/ubuntu/hadoop/lib/native
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/1000
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/home/ubuntu/hbase-0.98.20-hadoop1/bin/..
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HOME=/home/ubuntu
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2016-07-13 14:06:19,377 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.91-b14
2016-07-13 14:06:19,377 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx1000m, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/home/ubuntu/hbase-0.98.20-hadoop1/bin/../logs, -Dhbase.log.file=hbase-ubuntu-master-master.log, -Dhbase.home.dir=/home/ubuntu/hbase-0.98.20-hadoop1/bin/.., -Dhbase.id.str=ubuntu, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/home/ubuntu/hadoop/lib, -Dhbase.security.logger=INFO,RFAS]
2016-07-13 14:06:19,435 DEBUG [main] master.HMaster: master/master/192.168.0.84:60000 HConnection server-to-server retries=350
2016-07-13 14:06:19,649 INFO [main] ipc.RpcServer: master/master/192.168.0.84:60000: started 10 reader(s).
2016-07-13 14:06:19,722 INFO [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2016-07-13 14:06:19,801 INFO [main] impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2016-07-13 14:06:19,803 INFO [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-07-13 14:06:19,803 INFO [main] impl.MetricsSystemImpl: HBase metrics system started
2016-07-13 14:06:19,807 INFO [main] impl.MetricsSourceAdapter: MBean for source jvm registered.
2016-07-13 14:06:19,810 INFO [main] impl.MetricsSourceAdapter: MBean for source IPC,sub=IPC registered.
2016-07-13 14:06:19,988 INFO [main] impl.MetricsSourceAdapter: MBean for source ugi registered.
2016-07-13 14:06:19,988 WARN [main] impl.MetricsSystemImpl: Source name ugi already exists!
2016-07-13 14:06:20,188 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3119)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:193)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3133)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:852)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:790)
Wed Jul 13 15:57:58 UTC 2016 Stopping hbase (via master)
Wed Jul 13 16:06:47 UTC 2016 Stopping hbase (via master)
Wed Jul 13 16:11:11 UTC 2016 Starting master on master
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 125284
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
Any pointers to resolve the error would be much appreciated.

Related

Oozie workflow giving error on Hive job when underlying job completes successfully

As part of self-learning I am exploring Oozie, and I am practicing on the Hortonworks Sandbox VM. The problem is that the Oozie workflow errors out and gets killed, even though the underlying job linked from the Oozie UI shows success.
I have looked at this question and have included
<job-xml>hive-site.xml</job-xml>
in the job description, and have copied hive-site.xml to the correct folder on HDFS, but to no avail. Additionally, I have double-checked all URLs and everything is right.
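For reference, the Hive action in my workflow.xml is along these lines (the script name and EL variables are placeholders for my actual values):
<action name="define_congress_table">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<job-xml>hive-site.xml</job-xml>
<script>define_congress_table.hql</script>
</hive>
<ok to="end"/>
<error to="fail"/>
</action>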
I am running the Oozie job from the command line. I have no idea where to start debugging or how to get a more detailed error. The following are screenshots:
Oozie Error
Underlying Hive job indicates successful completion.
I do not see the final result as a Hive table, as I am supposed to.
Following is the log output of the Map task:
<<< Invocation of Hive command completed <<<
Hadoop Job IDs executed by Hive:
Intercepting System.exit(12)
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [12]
Oozie Launcher failed, finishing Hadoop job gracefully
Oozie Launcher, uploading action data to HDFS sequence file: hdfs://sandbox.hortonworks.com:8020/user/root/oozie-oozi/0000005-160711211729704-oozie-oozi-W/define_congress_table--hive/action-data.seq
2016-07-12 05:30:57,817 INFO [main] zlib.ZlibFactory (ZlibFactory.java:<clinit>(49)) - Successfully loaded & initialized native-zlib library
2016-07-12 05:30:57,818 INFO [main] compress.CodecPool (CodecPool.java:getCompressor(153)) - Got brand-new compressor [.deflate]
Oozie Launcher ends
2016-07-12 05:30:57,836 INFO [main] mapred.Task (Task.java:done(1038)) - Task:attempt_1468271868299_0037_m_000000_0 is done. And is in the process of committing
2016-07-12 05:30:57,878 INFO [main] mapred.Task (Task.java:commit(1199)) - Task attempt_1468271868299_0037_m_000000_0 is allowed to commit now
2016-07-12 05:30:57,887 INFO [main] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(582)) - Saved output of task 'attempt_1468271868299_0037_m_000000_0' to hdfs://sandbox.hortonworks.com:8020/user/root/oozie-oozi/0000005-160711211729704-oozie-oozi-W/define_congress_table--hive/output/_temporary/1/task_1468271868299_0037_m_000000
2016-07-12 05:30:57,936 INFO [main] mapred.Task (Task.java:sendDone(1158)) - Task 'attempt_1468271868299_0037_m_000000_0' done.
Log Type: syslog
Log Upload Time: Tue Jul 12 05:31:05 +0000 2016
Log Length: 2781
2016-07-12 05:30:48,083 WARN [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-maptask.properties,hadoop-metrics2.properties
2016-07-12 05:30:48,151 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-07-12 05:30:48,152 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2016-07-12 05:30:48,163 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2016-07-12 05:30:48,163 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1468271868299_0037, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@1fbe7534)
2016-07-12 05:30:48,212 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: RM_DELEGATION_TOKEN, Service: 10.0.2.15:8050, Ident: (owner=root, renewer=oozie mr token, realUser=oozie, issueDate=1468301434802, maxDate=1468906234802, sequenceNumber=22, masterKeyId=90)
2016-07-12 05:30:48,257 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2016-07-12 05:30:48,496 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /hadoop/yarn/local/usercache/root/appcache/application_1468271868299_0037
2016-07-12 05:30:48,955 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2016-07-12 05:30:49,414 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2016-07-12 05:30:49,414 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2016-07-12 05:30:49,423 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2016-07-12 05:30:49,475 WARN [main] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Unexpected: procfs stat file is not in the expected format for process with pid 4558
2016-07-12 05:30:49,647 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit@1f16b6e6
2016-07-12 05:30:49,654 INFO [main] org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2016-07-12 05:30:49,700 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
2016-07-12 05:30:50,069 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
2016-07-12 05:30:50,253 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
To find the error, check the "Child Job URLs" and inspect each child job.
Check its stdout and stderr.
Sometimes the error is in a child job and you will need to drill down into every one of them.
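For example, from the command line you can usually pull the details with something like this (the IDs below are the ones from the logs above; substitute your own):
# workflow status plus the external (Hadoop) job ID of each action
oozie job -info 0000005-160711211729704-oozie-oozi-W
# launcher/action logs collected by Oozie
oozie job -log 0000005-160711211729704-oozie-oozi-W
# full YARN container logs of the launcher job
yarn logs -applicationId application_1468271868299_0037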

MapReduce job is failing with an error failed to write data

I'm trying to export data from Teradata to Hadoop, but my export query is failing with the error "Failed to write data". Please look at the MapReduce and application master logs below:
Log Type: syslog
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 4931
2016-03-08 22:47:07,414 WARN [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-maptask.properties,hadoop-metrics2.properties
2016-03-08 22:47:07,499 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-03-08 22:47:07,499 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2016-03-08 22:47:07,509 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2016-03-08 22:47:07,510 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1457504560070_0004, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@175b9425)
2016-03-08 22:47:07,556 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: RM_DELEGATION_TOKEN, Service: 39.7.48.2:8032,39.7.48.3:8032, Ident: (owner=hive, renewer=oozie mr token, realUser=oozie, issueDate=1457506410968, maxDate=1458111210968, sequenceNumber=908, masterKeyId=280)
2016-03-08 22:47:07,599 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2016-03-08 22:47:07,848 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /data1/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data2/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data3/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data4/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data5/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data6/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data7/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data8/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data9/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data10/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data12/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004
2016-03-08 22:47:08,132 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2016-03-08 22:47:08,623 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2016-03-08 22:47:08,840 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: com.teradata.dynaload.hcatalog.mapper.TDInputFormat$TeradataInputSplit@2ece4966
2016-03-08 22:47:08,844 INFO [main] com.teradata.dynaload.hcatalog.mapper.TDInputFormat$TeradataRecordReader: recordreader class com.teradata.dynaload.hcatalog.mapper.TDInputFormat$TeradataRecordReader initialize time is: 1457506028844
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 300417020(1201668080)
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1146
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 841167680
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1201668096
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 300417020; length = 75104256
2016-03-08 22:47:09,515 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-03-08 22:47:09,518 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
2016-03-08 22:47:09,518 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2016-03-08 22:47:09,848 WARN [main] org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.metastore.local does not exist
2016-03-08 22:47:09,914 INFO [main] hive.metastore: Trying to connect to metastore with URI thrift://apus2.labs.teradata.com:9083
2016-03-08 22:47:09,951 INFO [main] hive.metastore: Connected to metastore.
2016-03-08 22:47:10,407 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
2016-03-08 22:47:10,452 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2016-03-08 22:47:10,453 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.work.output.dir is deprecated. Instead, use mapreduce.task.output.dir
2016-03-08 22:47:10,453 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
2016-03-08 22:47:10,457 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
Application Master logs:
Log Type: stderr
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 240
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Log Type: stdout
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 0
Log Type: syslog
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 66959
Showing 4096 bytes of 66959 total. Click here for the full log.
ILED
2016-03-08 22:59:19,325 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_FAILED
2016-03-08 22:59:19,456 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://C423A:8020/user/hive/.staging/job_1457504560070_0004/job_1457504560070_0004_1.jhist to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist_tmp
2016-03-08 22:59:19,550 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist_tmp
2016-03-08 22:59:19,562 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://C423A:8020/user/hive/.staging/job_1457504560070_0004/job_1457504560070_0004_1_conf.xml to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml_tmp
2016-03-08 22:59:19,614 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml_tmp
2016-03-08 22:59:19,645 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004.summary_tmp to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004.summary
2016-03-08 22:59:19,654 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml_tmp to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml
2016-03-08 22:59:19,666 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist_tmp to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist
2016-03-08 22:59:19,666 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2016-03-08 22:59:19,671 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Setting job diagnostics to Task failed task_1457504560070_0004_m_000004
Job failed as tasks failed. failedMaps:1 failedReduces:0
2016-03-08 22:59:19,672 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: History url is http://apus2.labs.teradata.com:19888/jobhistory/job/job_1457504560070_0004
2016-03-08 22:59:19,680 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Waiting for application to be successfully unregistered.
2016-03-08 22:59:20,682 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:7 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:7 ContRel:0 HostLocal:6 RackLocal:1
2016-03-08 22:59:20,684 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://C423A /user/hive/.staging/job_1457504560070_0004
2016-03-08 22:59:20,711 INFO [Thread-89] org.apache.hadoop.ipc.Server: Stopping server on 46067
2016-03-08 22:59:20,712 INFO [IPC Server listener on 46067] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 46067
2016-03-08 22:59:20,712 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2016-03-08 22:59:20,714 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted.
Please help me in resolving the issue.
You must be using Sqoop to bring data into Hadoop. Please paste the command you are running. For "Failed to write data" there can be multiple issues: the destination parent directory is not available, there is no space left on the cluster, etc. Only the command can tell.
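As a quick first check (the path below is just an example; use your actual target directory), you can verify the destination and the free space from the shell:
# does the destination parent directory exist and is it writable by the job user?
hdfs dfs -ls /path/to/export/parent
# how much HDFS space is left?
hdfs dfs -df -h /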

Error while doing bulkload in HBase

I'm trying to do a bulk load in HBase, but the exception below comes up while loading the data:
Application application_1439213972129_0080 initialization failed (exitCode=255) with output: Requested user root is not whitelisted and has id 0, which is below the minimum allowed 500
Failing this attempt. Failing the application.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary -Dimporttsv.separator=',' /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv
2015-08-13 18:24:33,076 INFO [main] mapreduce.TableMapReduceUtil: Setting speculative execution off for bulkload operation
2015-08-13 18:24:33,123 INFO [main] mapreduce.TableMapReduceUtil: Configured 'hbase.mapreduce.mapr.tablepath' to /tables/emp_salary_new1
2015-08-13 18:24:33,220 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2015-08-13 18:24:33,372 INFO [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2015-08-13 18:24:33,735 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2015-08-13 18:24:33,770 INFO [main] mapreduce.TableOutputFormat: Created table instance for /tables/emp_salary_new1
2015-08-13 18:24:34,252 INFO [main] input.FileInputFormat: Total input paths to process : 1
2015-08-13 18:24:34,294 INFO [main] mapreduce.JobSubmitter: number of splits:1
2015-08-13 18:24:34,535 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1439213972129_0055
2015-08-13 18:24:34,792 INFO [main] security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager
2015-08-13 18:24:35,031 INFO [main] impl.YarnClientImpl: Submitted application application_1439213972129_0055
2015-08-13 18:24:35,114 INFO [main] mapreduce.Job: The url to track the job: http://hadoop-c02n02.ss.sw.ericsson.se:8088/proxy/application_1439213972129_0055/
2015-08-13 18:24:35,115 INFO [main] mapreduce.Job: Running job: job_1439213972129_0055
2015-08-13 18:24:53,253 INFO [main] mapreduce.Job: Job job_1439213972129_0055 running in uber mode : false
2015-08-13 18:24:53,256 INFO [main] mapreduce.Job: map 0% reduce 0%
2015-08-13 18:24:53,281 INFO [main] mapreduce.Job: Job job_1439213972129_0055 failed with state FAILED due to: Application application_1439213972129_0055 failed 2 times due to AM Container for appattempt_1439213972129_0055_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://hadoop-c02n02.ss.sw.ericsson.se:8088/cluster/app/application_1439213972129_0055Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e02_1439213972129_0055_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:304)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:354)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:87)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Shell output: main : command provided 1
main : user is mapradm
main : requested yarn user is mapradm
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2015-08-13 18:24:53,320 INFO [main] mapreduce.Job: Counters: 0
Looks like you are loading data into MapR DB, not into HBase. But that's fine; HBase commands are compatible with MapR DB. I just made a small change to your command; see if that works for you.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary '-Dimporttsv.separator=,' /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv
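Separately, the "Requested user root is not whitelisted and has id 0" part of your error comes from the YARN LinuxContainerExecutor checks, not from HBase. If you really must submit as root, the knobs usually live in container-executor.cfg (exact path depends on the distribution); roughly:
# container-executor.cfg -- illustrative values only
min.user.id=0
allowed.system.users=root
The cleaner fix is simply to run the import as a regular (non-root) user whose uid is above the configured minimum.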

Running Spark on the slave node (YARN) doesn't work

I can run the SparkPi example on the master node, but when I try the same command
"spark-submit --class SparkPi --master yarn-client sparkpi.jar 10"
on the slave node, I get an error:
2015-05-19 14:05:44,881 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing view acls to: maintainer
2015-05-19 14:05:44,886 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing modify acls to: maintainer
2015-05-19 14:05:44,887 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(maintainer); users with modify permissions: Set(maintainer)
2015-05-19 14:05:45,389 INFO [sparkDriver-akka.actor.default-dispatcher-4] slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
2015-05-19 14:05:45,443 INFO [sparkDriver-akka.actor.default-dispatcher-4] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
2015-05-19 14:05:45,641 INFO [sparkDriver-akka.actor.default-dispatcher-3] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://sparkDriver@slave2.com:33055]
2015-05-19 14:05:45,644 INFO [sparkDriver-akka.actor.default-dispatcher-3] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting now listens on addresses: [akka.tcp://sparkDriver@slave2.com:33055]
2015-05-19 14:05:45,653 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'sparkDriver' on port 33055.
2015-05-19 14:05:45,674 INFO [main] spark.SparkEnv (Logging.scala:logInfo(59)) - Registering MapOutputTracker
2015-05-19 14:05:45,688 INFO [main] spark.SparkEnv (Logging.scala:logInfo(59)) - Registering BlockManagerMaster
2015-05-19 14:05:45,707 INFO [main] storage.DiskBlockManager (Logging.scala:logInfo(59)) - Created local directory at /tmp/spark-local-20150519140545-c81b
2015-05-19 14:05:45,712 INFO [main] storage.MemoryStore (Logging.scala:logInfo(59)) - MemoryStore started with capacity 265.4 MB
2015-05-19 14:05:46,205 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-19 14:05:46,408 INFO [main] spark.HttpFileServer (Logging.scala:logInfo(59)) - HTTP File server directory is /tmp/spark-e95a2b5b-efea-41eb-93b9-0a9f7d6f6701
2015-05-19 14:05:46,413 INFO [main] spark.HttpServer (Logging.scala:logInfo(59)) - Starting HTTP Server
2015-05-19 14:05:46,477 INFO [main] server.Server (Server.java:doStart(272)) - jetty-8.y.z-SNAPSHOT
2015-05-19 14:05:46,499 INFO [main] server.AbstractConnector (AbstractConnector.java:doStart(338)) - Started SocketConnector@0.0.0.0:52737
2015-05-19 14:05:46,500 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'HTTP file server' on port 52737.
2015-05-19 14:05:46,790 INFO [main] server.Server (Server.java:doStart(272)) - jetty-8.y.z-SNAPSHOT
2015-05-19 14:05:46,805 INFO [main] server.AbstractConnector (AbstractConnector.java:doStart(338)) - Started SelectChannelConnector@0.0.0.0:4040
2015-05-19 14:05:46,805 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'SparkUI' on port 4040.
2015-05-19 14:05:46,808 INFO [main] ui.SparkUI (Logging.scala:logInfo(59)) - Started SparkUI at http://slave2.com:4040
2015-05-19 14:05:47,058 INFO [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added JAR file:/home/maintainer/myjars/sparkpi.jar at http://[ip]:52737/jars/sparkpi.jar with timestamp 1432033547057
2015-05-19 14:05:47,190 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
2015-05-19 14:09:45,861 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
2015-05-19 14:09:47,067 INFO [main] ipc.Client (Client.java:handleConnectionFailure(842)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-05-19 14:09:48,068 INFO [main] ipc.Client (Client.java:handleConnectionFailure(842)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...
Aside from specifying the yarn.resourcemanager.hostname property in yarn-site.xml, it's also necessary to propagate the configuration files to the workers.
This can be done with this line (before running spark-submit):
export SPARK_YARN_DIST_FILES=$(ls $HADOOP_CONF_DIR* | sed 's#^#file://#g' | tr '\n' ',' | sed 's/,$//')
If everything is configured correctly, you'll see the RM hostname instead of 0.0.0.0 in this line:
2015-05-19 14:05:47,190 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
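For reference, the yarn-site.xml entry mentioned above would look something like this (the hostname is a placeholder for your actual ResourceManager host):
<property>
<name>yarn.resourcemanager.hostname</name>
<value>your-resourcemanager-host</value>
</property>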
Exporting the correct value for HADOOP_CONF_DIR fixed the issue:
export HADOOP_CONF_DIR=/your-path/hadoop/conf

HBase daemon crashes at start

I am trying to run HBase 0.96.1.1 for Hadoop 2 on a MacBook Air. When I run ./start-hbase.sh, it prints
starting master, logging to.....
but it crashes right after.
I checked the log file and this is the error message it spat out:
Fri Mar 28 12:49:20 PDT 2014 Starting master on ms12
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
2014-03-28 12:49:21,203 INFO [main] util.VersionInfo: HBase 0.96.1.1-hadoop2
2014-03-28 12:49:21,203 INFO [main] util.VersionInfo: Subversion file:///home/jon/proj/hbase-svn/hbase-0.96.1.1 -r Unknown
2014-03-28 12:49:21,204 INFO [main] util.VersionInfo: Compiled by jon on Tue Dec 17 12:22:12 PST 2013
2014-03-28 12:49:21,894 INFO [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-03-28 12:49:21,894 INFO [main] server.ZooKeeperServer: Server environment:host.name=guest-wireless-nup-nat-206-117-89-004.usc.edu
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.version=1.6.0_65
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.vendor=Apple Inc.
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.home=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.class.path=/Users/hbase/hbase-0.96.1.1-hadoop2/conf:/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib/tools.jar:/Users/hbase/hbase-0.96.1.1-hadoop2:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/activation-1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/aopalliance-1.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/asm-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/avro-1.7.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-beanutils-1.7.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-beanutils-core-1.8.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-cli-1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-codec-1.7.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-collections-3.2.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-compress-1.4.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-configuration-1.6.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-daemon-1.0.13.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-digester-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-el-1.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-httpclient-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-io-2.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-lang-2.6.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-logging-1.1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-math-2.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-net-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/core-3.1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/findbugs-annotations-1.3.9-1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/gmbal-api-only-3.0.0-b023.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-framework-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-http-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-http-server-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-http-servlet-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-rcm-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/guava-12.0.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/guice-3.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/guice-servlet-3.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-annotations-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-auth-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-client-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-hdfs-2.2.0-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-hdfs-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-app-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-core-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-jobclient-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-shuffle-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-api-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-client-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-server-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-server-nodemanager-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hamcrest-core-1.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop
2/lib/hbase-client-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-common-0.96.1.1-hadoop2-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-common-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-examples-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-hadoop-compat-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-hadoop2-compat-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-it-0.96.1.1-hadoop2-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-it-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-prefix-tree-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-protocol-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-server-0.96.1.1-hadoop2-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-server-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-shell-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-testing-util-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-thrift-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/high-scale-lib-1.1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/htrace-core-2.01.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/httpclient-4.1.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/httpcore-4.1.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-core-asl-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-jaxrs-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-mapper-asl-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-xc-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jamon-runtime-2.3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jasper-compiler-5.5.23.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jasper-runtime-5.5.23.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/javax.inject-1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/javax.servlet-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/javax.servlet-api-3.0.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jaxb-api-2.2.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jaxb-impl-2.2.3-1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-client-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-core-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-grizzly2-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-guice-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-json-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-server-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-test-framework-core-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-test-framework-grizzly2-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jets3t-0.6.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jettison-1.3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jetty-6.1.26.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jetty-sslengine-6.1.26.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jetty-util-6.1.26.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jruby-complete-1.6.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsch-0.1.42.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsp-2.1-6.1.14.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsp-api-2.1-6.1.14.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsp-api-2.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsr305-1.3.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/junit-4.11.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/libthrift-0.9.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/log4j-1.2.17.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/management-api-3.0.0-b012.jar:/Us
ers/hbase/hbase-0.96.1.1-hadoop2/lib/metrics-core-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/netty-3.6.6.Final.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/paranamer-2.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/protobuf-java-2.5.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/servlet-api-2.5-6.1.14.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/servlet-api-2.5.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/slf4j-api-1.6.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/snappy-java-1.0.4.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/stax-api-1.0.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/xmlenc-0.52.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/xz-1.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/zookeeper-3.4.5.jar:
2014-03-28 12:49:21,897 INFO [main] server.ZooKeeperServer: Server environment:java.library.path=.:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/var/folders/ww/vvdhqz_d2ggcht76g3fp2zh00000gn/T/
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:os.name=Mac OS X
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:os.arch=x86_64
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:os.version=10.9.2
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:user.name=ms12
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:user.home=/Users/ms12
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/Users/hbase/hbase-0.96.1.1-hadoop2/bin
2014-03-28 12:49:21,921 INFO [main] server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /Users/hbase/zookeeper-storage-2/zookeeper_0/version-2 snapdir /Users/hbase/zookeeper-storage-2/zookeeper_0/version-2
2014-03-28 12:49:21,962 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
2014-03-28 12:49:21,972 INFO [main] persistence.FileTxnSnapLog: Snapshotting: 0x0 to /Users/hbase/zookeeper-storage-2/zookeeper_0/version-2/snapshot.0
2014-03-28 12:49:22,269 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53624
2014-03-28 12:49:22,278 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Processing stat command from /127.0.0.1:53624
2014-03-28 12:49:22,283 INFO [Thread-3] server.NIOServerCnxn: Stat command output
2014-03-28 12:49:22,284 INFO [Thread-3] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53624 (no session established for client)
2014-03-28 12:49:22,287 INFO [main] zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster and connect 1 ZK server on client port: 2181
2014-03-28 12:49:22,328 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:140)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:200)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:150)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:177)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2779)
Caused by: java.net.UnknownHostException: No such interface $iface
at org.apache.hadoop.net.DNS.getIPs(DNS.java:183)
at org.apache.hadoop.net.DNS.getIPs(DNS.java:145)
at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:344)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:362)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:341)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:414)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:256)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:137)
... 7 more
It seems that iface is a network interface on Linux systems. Does that mean this version cannot be run on a Mac?
Edit:
I also tested HBase version 0.98. Same issue. The only version that works is HBase 0.94, but it is not compatible with Hadoop 2.
It sounds like you used the instructions here:
http://opentsdb.net/setup-hbase.html
but did not follow them correctly. The string $iface should not actually show up in your hbase-site.xml; it is expanded to the name of your loopback interface device when you write out your config using the exact commands given in those instructions. If you just copy-paste the config from there, it won't work. On a Mac it should result in lo0 for each of the properties below:
<property>
<name>hbase.zookeeper.dns.interface</name>
<value>lo0</value>
</property>
<property>
<name>hbase.regionserver.dns.interface</name>
<value>lo0</value>
</property>
<property>
<name>hbase.master.dns.interface</name>
<value>lo0</value>
</property>
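If you want to double-check the interface name on your machine before writing the config, something like this (macOS) will confirm it:
# the loopback device on OS X is normally lo0; this just verifies it exists
ifconfig lo0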
I had the same issue running HBase 0.98.6-hadoop2 on Ubuntu 12.04. It seems that something changed in the configuration needed for standalone mode. Try this in your hbase-site.xml configuration file:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///{your hbase data directory}</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>{your zookeeper data directory}</value>
</property>
<property>
<name>hbase.regionserver.dns.interface</name>
<value>default</value>
</property>
<property>
<name>hbase.master.dns.interface</name>
<value>default</value>
</property>
</configuration>
Maybe these links can be of some help
http://hbase.apache.org/book/config.files.html#hbase_default_configurations
http://www.sujee.net/tech/articles/hadoop/hadoop-dns/
