Ambari Metrics not showing metrics after cleaning up Ambari Metrics System data

We have Ambari with HDP version 2.6.5.
We wanted to clean all metrics data, following the instructions at https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data, so we did the following.
Note: the Metrics Service operation mode is distributed.
We stopped the Metrics service from Ambari.
We cleaned all data from HDFS:
hdfs dfs -rm -r -f /apps/ams/metrics/*
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/.tmp' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/.tmp
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/MasterProcWALs' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/MasterProcWALs
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/WALs' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/WALs
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/archive' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/archive
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/data' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/data
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/hbase.id' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/hbase.id
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/hbase.version' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/hbase.version
20/02/13 06:10:01 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsha/apps/ams/metrics/oldWALs' to trash at: hdfs://hdfsha/user/hdfs/.Trash/Current/apps/ams/metrics/oldWALs
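Note that, as the log above shows, the files were only moved to the HDFS trash, so the space is not actually freed until the trash is purged. A variant that bypasses the trash (use with care; -skipTrash is a standard hdfs dfs flag):
hdfs dfs -rm -r -f -skipTrash /apps/ams/metrics/*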
We also cleaned the following folders:
/var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/
/var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/
Then we started the Metrics service from Ambari.
But no metrics graphs appear in Ambari, and the Metrics Collector service has an alert.
It is not clear why metrics are not created after the full metrics cleanup.
From the log we can see the following:
2020-02-13 06:15:33,024 INFO [ProcedureExecutorThread-5] procedure2.ProcedureExecutor: Rolledback procedure CreateTableProcedure (table=SYSTEM.CATALOG) id=6 owner=ams state=ROLLEDBACK exec-time=239msec exception=org.apache.hadoop.hbase.TableExistsException: SYSTEM.CATALOG
2020-02-13 06:15:44,356 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2020-02-13 06:16:21,223 INFO [RpcServer.FifoWFPBQ.default.handler=28,queue=1,port=61300] master.HMaster: Client=ams/null List Table Descriptor for the SYSTEM.CATALOG table fails
2020-02-13 06:16:21,236 INFO [RpcServer.FifoWFPBQ.default.handler=28,queue=1,port=61300] master.HMaster: Client=ams/null create 'SYSTEM.CATALOG', {TABLE_ATTRIBUTES => {PRIORITY => '2000', coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|805306366|', coprocessor$6 => '|org.apache.phoenix.coprocessor.MetaDataRegionObserver|805306367|'}, {NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1000', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'true', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2020-02-13 06:16:21,349 INFO [ProcedureExecutorThread-6] procedure.CreateTableProcedure: CreateTableProcedure (table=SYSTEM.CATALOG) id=7 owner=ams state=RUNNABLE execute state=CREATE_TABLE_PRE_OPERATION
2020-02-13 06:16:21,360 WARN [ProcedureExecutorThread-6] procedure.CreateTableProcedure: The table SYSTEM.CATALOG does not exist in meta but has a znode. run hbck to fix inconsistencies.
2020-02-13 06:16:21,652 INFO [ProcedureExecutorThread-6] procedure2.ProcedureExecutor: Rolledback procedure CreateTableProcedure (table=SYSTEM.CATALOG) id=7 owner=ams state=ROLLEDBACK exec-time=305msec exception=org.apache.hadoop.hbase.TableExistsException: SYSTEM.CATALOG
2020-02-13 06:17:14,354 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2020-02-13 06:17:58,076 INFO [RpcServer.FifoWFPBQ.default.handler=28,queue=1,port=61300] master.HMaster: Client=ams/null List Table Descriptor for the SYSTEM.CATALOG table fails
2020-02-13 06:17:58,093 INFO [RpcServer.FifoWFPBQ.default.handler=28,queue=1,port=61300] master.HMaster: Client=ams/null create 'SYSTEM.CATALOG', {TABLE_ATTRIBUTES => {PRIORITY => '2000', coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|805306366|', coprocessor$6 => '|org.apache.phoenix.coprocessor.MetaDataRegionObserver|805306367|'}, {NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1000', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'true', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2020-02-13 06:17:58,206 INFO [ProcedureExecutorThread-7] procedure.CreateTableProcedure: CreateTableProcedure (table=SYSTEM.CATALOG) id=8 owner=ams state=RUNNABLE execute state=CREATE_TABLE_PRE_OPERATION
2020-02-13 06:17:58,218 WARN [ProcedureExecutorThread-7] procedure.CreateTableProcedure: The table SYSTEM.CATALOG does not exist in meta but has a znode. run hbck to fix inconsistencies.
2020-02-13 06:17:58,484 INFO [ProcedureExecutorThread-7] procedure2.ProcedureExecutor: Rolledback procedure CreateTableProcedure (table=SYSTEM.CATALOG) id=8 owner=ams state=ROLLEDBACK exec-time=279msec exception=org.apache.hadoop.hbase.TableExistsException: SYSTEM.CATALOG
2020-02-13 06:19:24,358 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2020-02-13 06:19:34,540 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=156.56 KB, freeSize=147.69 MB, max=147.84 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=30, evicted=0, evictedPerRun=0.0
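The TableExistsException together with the warning "does not exist in meta but has a znode" suggests stale table znodes left in ZooKeeper from before the cleanup. Since the Metrics Service runs in distributed mode, those znodes also need to be removed. A hedged sketch (the znode parent /ams-hbase-unsecure is the usual default, but verify zookeeper.znode.parent in ams-hbase-site first, keep the Metrics service stopped while deleting, and treat host/port as placeholders):
# connect to the ZooKeeper used by AMS
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181
# inside the zkCli shell, recursively delete the AMS HBase znode (ZooKeeper 3.4 syntax)
rmr /ams-hbase-unsecure
Restarting the Metrics service afterwards should let Phoenix recreate SYSTEM.CATALOG cleanly.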

Did you check ambari-server.log? You can definitely find something there.

Related

HBase: Create table command taking a long time

I am new to HBase and I'm following the book "Hadoop: The Definitive Guide".
I have started all applications on my local system, which means there should be no network overhead. But when I ran the simple command to create a table in HBase, it took around nine seconds.
Here is the procedure I used to start HBase and create the table:
./start-hbase.sh
./hbase shell
create 'test' , 'data'
And here are the console logs showing it took around 8.3 seconds:
hbase(main):002:0> create 'test' , 'data'
0 row(s) in 8.3310 seconds
=> Hbase::Table - test
There is no error or exception in the HBase logs. For reference, here is my hbase-KV-master-KV.local.log file:
2017-06-20 16:23:49,335 INFO [B.defaultRpcServer.handler=9,queue=0,port=64717] master.HMaster: Client=KV//127.0.0.1 create 'test', {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2017-06-20 16:23:49,463 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x15cc51a8c440000 type:create cxid:0x2ca zxid:0x47 txntype:-1 reqpath:n/a Error Path:/hbase/table-lock/test Error:KeeperErrorCode = NoNode for /hbase/table-lock/test
2017-06-20 16:23:54,615 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion: creating HRegion test HTD == 'test', {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} RootDir = hdfs://172.**.**.168/var/folders/sm/814w032j2q3d9npm7c4509xm0000gn/T/hbase-KV/hbase/.tmp Table name == test
2017-06-20 16:23:55,041 INFO [RegionOpenAndInitThread-test-1] regionserver.HRegion: Closed test,,1497956029330.a5c5e9076c1e38f4255f4dc8eea50f97.
2017-06-20 16:23:55,164 INFO [ProcedureExecutor-1] hbase.MetaTableAccessor: Added 1
2017-06-20 16:23:55,273 INFO [ProcedureExecutor-1] zookeeper.ZKTableStateManager: Moving table test state from null to ENABLING
2017-06-20 16:23:55,278 INFO [ProcedureExecutor-1] master.AssignmentManager: Assigning 1 region(s) to localhost,64720,1497955472691
2017-06-20 16:23:55,285 INFO [ProcedureExecutor-1] master.RegionStates: Transition {a5c5e9076c1e38f4255f4dc8eea50f97 state=OFFLINE, ts=1497956035278, server=null} to {a5c5e9076c1e38f4255f4dc8eea50f97 state=PENDING_OPEN, ts=1497956035285, server=localhost,64720,1497955472691}
2017-06-20 16:23:55,290 INFO [PriorityRpcServer.handler=14,queue=0,port=64720] regionserver.RSRpcServices: Open test,,1497956029330.a5c5e9076c1e38f4255f4dc8eea50f97.
2017-06-20 16:23:55,303 INFO [AM.ZK.Worker-pool2-t10] master.RegionStates: Transition {a5c5e9076c1e38f4255f4dc8eea50f97 state=PENDING_OPEN, ts=1497956035285, server=localhost,64720,1497955472691} to {a5c5e9076c1e38f4255f4dc8eea50f97 state=OPENING, ts=1497956035303, server=localhost,64720,1497955472691}
2017-06-20 16:23:55,307 INFO [StoreOpener-a5c5e9076c1e38f4255f4dc8eea50f97-1] hfile.CacheConfig: Created cacheConfig for data: blockCache=LruBlockCache{blockCount=0, currentSize=867896, freeSize=844179528, maxSize=845047424, heapSize=867896, minSize=802795072, minFactor=0.95, multiSize=401397536, multiFactor=0.5, singleSize=200698768, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-20 16:23:55,307 INFO [StoreOpener-a5c5e9076c1e38f4255f4dc8eea50f97-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-20 16:23:55,319 INFO [RS_OPEN_REGION-localhost:64720-1] regionserver.HRegion: Onlined a5c5e9076c1e38f4255f4dc8eea50f97; next sequenceid=2
2017-06-20 16:23:55,322 INFO [PostOpenDeployTasks:a5c5e9076c1e38f4255f4dc8eea50f97] regionserver.HRegionServer: Post open deploy tasks for test,,1497956029330.a5c5e9076c1e38f4255f4dc8eea50f97.
2017-06-20 16:23:55,325 INFO [PostOpenDeployTasks:a5c5e9076c1e38f4255f4dc8eea50f97] hbase.MetaTableAccessor: Updated row test,,1497956029330.a5c5e9076c1e38f4255f4dc8eea50f97. with server=localhost,64720,1497955472691
2017-06-20 16:23:55,327 INFO [AM.ZK.Worker-pool2-t11] master.RegionStates: Transition {a5c5e9076c1e38f4255f4dc8eea50f97 state=OPENING, ts=1497956035303, server=localhost,64720,1497955472691} to {a5c5e9076c1e38f4255f4dc8eea50f97 state=OPEN, ts=1497956035327, server=localhost,64720,1497955472691}
2017-06-20 16:23:55,329 INFO [ProcedureExecutor-1] zookeeper.ZKTableStateManager: Moving table test state from ENABLING to ENABLED
Any suggestions on what could be the issue? Why is it taking so long?

Does Sqoop spill temporary data to disk?

As I understand Sqoop, it launches a few mappers on different data nodes, each making a JDBC connection to the RDBMS. Once the connection is formed, data is transferred to HDFS.
Just trying to understand: does a Sqoop mapper spill data temporarily to disk (on the data node)? I know spilling happens in MapReduce, but I'm not sure about Sqoop jobs.
It seems sqoop-import runs map-only and doesn't spill, while sqoop-merge runs as full map-reduce and does spill. You can check this on the Job Tracker during a sqoop import run.
Have a look at this part of a sqoop import log; it does not spill, it just fetches and writes to HDFS:
INFO [main] ... mapreduce.db.DataDrivenDBRecordReader: Using query: SELECT...
[main] mapreduce.db.DBRecordReader: Executing query: SELECT...
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
INFO [Thread-16] ...mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1489705733959_2462784_m_000000_0 is done. And is in the process of committing
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1489705733959_2462784_m_000000_0' to hdfs://
Have a look at this sqoop-merge log (some rows skipped); it spills to disk (note "Spilling map output" in the log):
INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://bla-bla/part-m-00000:0+48322717
...
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
...
INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1024
INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 751619264
INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1073741824
INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452; length = 67108864
INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
INFO [main] com.pepperdata.supervisor.agent.resource.r: Datanode bla-bla is LOCAL.
INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
...
INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 184775274; bufvoid = 1073741824
INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452(1073741808); kvend = 267347800(1069391200); length = 1087653/67108864
INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
[main] org.apache.hadoop.mapred.MapTask: Finished spill 0
...Task:attempt_1489705733959_2479291_m_000000_0 is done. And is in the process of committing
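For reference, a sqoop-merge job like the one above is typically launched along these lines (a hedged sketch; the paths, jar, class name, and merge key are placeholders, not taken from the log):
sqoop merge \
  --new-data /data/staging/new \
  --onto /data/staging/old \
  --target-dir /data/merged \
  --jar-file datatypes.jar \
  --class-name MyRecord \
  --merge-key id
Because merge has to reconcile new and old records by key, it runs as a full MapReduce job with a sort phase, which is why the sort buffer (mapreduce.task.io.sort.mb) and the spill machinery show up in its log, while a plain sqoop-import map task writes straight to its output file.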

Unable to load data into an HBase table from Hive

I am using Hadoop version 2.7.0, Hive version 1.1.0, and HBase version hbase-0.98.14-hadoop2.
I have created an HBase table from Hive successfully.
hive (Koushik)> CREATE TABLE hive_hbase_emp_test(eid int, ename string, esal double)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES
> ("hbase.columns.mapping" = ":key,cfstr:enm,cfsal:esl")
> TBLPROPERTIES ("hbase.table.name" = "hive_hbase_emp_test");
OK
Time taken: 0.874 seconds
hbase(main):004:0> describe 'hive_hbase_emp_test'
Table hive_hbase_emp_test is ENABLED
hive_hbase_emp_test
COLUMN FAMILIES DESCRIPTION
{NAME => 'cfsal', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VER
SIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
{NAME => 'cfstr', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VER
SIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
2 row(s) in 3.0650 seconds
But when I try to load the table from Hive, it fails.
hive (Koushik)> INSERT OVERWRITE TABLE hive_hbase_emp_test SELECT empid,empname,empsal FROM hive_employee;
Query ID = hduser_20150921110000_249675d5-9da7-49fe-b03e-3a2d813ac898
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1442836788507_0011, Tracking URL = http://localhost:8088/proxy/application_1442836788507_0011/
Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1442836788507_0011
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2015-09-21 11:01:39,041 Stage-0 map = 0%, reduce = 0%
2015-09-21 11:02:39,429 Stage-0 map = 0%, reduce = 0%
2015-09-21 11:02:45,814 Stage-0 map = 100%, reduce = 0%
Ended Job = job_1442836788507_0011 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1442836788507_0011_m_000000 (and more) from job job_1442836788507_0011
Task with the most failures(4):
-----
Task ID:
task_1442836788507_0011_m_000000
URL:
http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1442836788507_0011&tipid=task_1442836788507_0011_m_000000
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
... 14 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 17 more
Caused by: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:147)
... 22 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.lazy.LazyUtils.getByte(Ljava/lang/String;B)B
at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.collectSeparators(LazySerDeParameters.java:223)
at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.<init>(LazySerDeParameters.java:90)
at org.apache.hadoop.hive.hbase.HBaseSerDeParameters.<init>(HBaseSerDeParameters.java:95)
at org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:117)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:344)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:65)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:193)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
at org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:427)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:126)
... 22 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
The content of the auxlib folder in Hive is as below:
hduser#ubuntu:/usr/lib/hive/auxlib$ ls
activation-1.1.jar
aopalliance-1.0.jar
apacheds-i18n-2.0.0-M15.jar
apacheds-kerberos-codec-2.0.0-M15.jar
api-asn1-api-1.0.0-M20.jar
api-util-1.0.0-M20.jar
asm-3.1.jar
avro-1.7.4.jar
aws-java-sdk-1.7.4.jar
azure-storage-2.0.0.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.8.0.jar
commons-cli-1.2.jar
commons-codec-1.7.jar
commons-collections-3.2.1.jar
commons-compress-1.4.1.jar
commons-configuration-1.6.jar
commons-daemon-1.0.13.jar
commons-digester-1.8.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-lang-2.6.jar
commons-lang3-3.3.2.jar
commons-logging-1.1.1.jar
commons-math-2.1.jar
commons-math3-3.1.1.jar
commons-net-3.1.jar
curator-client-2.7.1.jar
curator-framework-2.7.1.jar
curator-recipes-2.7.1.jar
findbugs-annotations-1.3.9-1.jar
gmbal-api-only-3.0.0-b023.jar
grizzly-framework-2.1.2.jar
grizzly-http-2.1.2.jar
grizzly-http-server-2.1.2.jar
grizzly-http-servlet-2.1.2.jar
grizzly-rcm-2.1.2.jar
gson-2.2.4.jar
guava-12.0.1.jar
guice-3.0.jar
guice-servlet-3.0.jar
hadoop-annotations-2.7.0.jar
hadoop-ant-2.7.0.jar
hadoop-archives-2.7.0.jar
hadoop-auth-2.7.0.jar
hadoop-aws-2.7.0.jar
hadoop-azure-2.7.0.jar
hadoop-client-2.2.0.jar
hadoop-common-2.2.0.jar
hadoop-datajoin-2.7.0.jar
hadoop-distcp-2.7.0.jar
hadoop-extras-2.7.0.jar
hadoop-gridmix-2.7.0.jar
hadoop-hdfs-2.7.0.jar
hadoop-hdfs-2.7.0-tests.jar
hadoop-hdfs-nfs-2.7.0.jar
hadoop-mapreduce-client-app-2.7.0.jar
hadoop-mapreduce-client-common-2.7.0.jar
hadoop-mapreduce-client-core-2.7.0.jar
hadoop-mapreduce-client-hs-2.7.0.jar
hadoop-mapreduce-client-hs-plugins-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0-tests.jar
hadoop-mapreduce-client-shuffle-2.7.0.jar
hadoop-mapreduce-examples-2.7.0.jar
hadoop-openstack-2.7.0.jar
hadoop-rumen-2.7.0.jar
hadoop-sls-2.7.0.jar
hadoop-streaming-2.7.0.jar
hadoop-yarn-api-2.7.0.jar
hadoop-yarn-applications-distributedshell-2.7.0.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.7.0.jar
hadoop-yarn-client-2.7.0.jar
hadoop-yarn-common-2.7.0.jar
hadoop-yarn-registry-2.7.0.jar
hadoop-yarn-server-applicationhistoryservice-2.7.0.jar
hadoop-yarn-server-common-2.7.0.jar
hadoop-yarn-server-nodemanager-2.7.0.jar
hadoop-yarn-server-resourcemanager-2.7.0.jar
hadoop-yarn-server-sharedcachemanager-2.7.0.jar
hadoop-yarn-server-tests-2.7.0.jar
hadoop-yarn-server-web-proxy-2.7.0.jar
hamcrest-core-1.3.jar
hbase-annotations-0.98.14-hadoop2.jar
hbase-checkstyle-0.98.14-hadoop2.jar
hbase-client-0.98.14-hadoop2.jar
hbase-common-0.98.14-hadoop2.jar
hbase-common-0.98.14-hadoop2-tests.jar
hbase-examples-0.98.14-hadoop2.jar
hbase-hadoop2-compat-0.98.14-hadoop2.jar
hbase-hadoop-compat-0.98.14-hadoop2.jar
hbase-it-0.98.14-hadoop2.jar
hbase-it-0.98.14-hadoop2-tests.jar
hbase-prefix-tree-0.98.14-hadoop2.jar
hbase-protocol-0.98.14-hadoop2.jar
hbase-resource-bundle-0.98.14-hadoop2.jar
hbase-rest-0.98.14-hadoop2.jar
hbase-server-0.98.14-hadoop2.jar
hbase-server-0.98.14-hadoop2-tests.jar
hbase-shell-0.98.14-hadoop2.jar
hbase-testing-util-0.98.14-hadoop2.jar
hbase-thrift-0.98.14-hadoop2.jar
high-scale-lib-1.1.1.jar
hive-hbase-handler-1.2.1.jar
hive-serde-1.2.1.jar
htrace-core-2.04.jar
htrace-core-3.1.0-incubating.jar
httpclient-4.1.3.jar
httpclient-4.2.5.jar
httpcore-4.1.3.jar
httpcore-4.2.5.jar
jackson-annotations-2.2.3.jar
jackson-core-2.2.3.jar
jackson-core-asl-1.8.8.jar
jackson-core-asl-1.9.13.jar
jackson-databind-2.2.3.jar
jackson-jaxrs-1.8.8.jar
jackson-jaxrs-1.9.13.jar
jackson-mapper-asl-1.8.8.jar
jackson-mapper-asl-1.9.13.jar
jackson-xc-1.9.13.jar
jamon-runtime-2.3.1.jar
jasper-compiler-5.5.23.jar
jasper-runtime-5.5.23.jar
javax.inject-1.jar
java-xmlbuilder-0.4.jar
javax.servlet-3.1.jar
javax.servlet-api-3.0.1.jar
jaxb-api-2.2.2.jar
jaxb-impl-2.2.3-1.jar
jcodings-1.0.8.jar
jersey-client-1.8.jar
jersey-core-1.8.jar
jersey-core-1.9.jar
jersey-grizzly2-1.9.jar
jersey-guice-1.9.jar
jersey-json-1.9.jar
jersey-server-1.9.jar
jersey-test-framework-core-1.9.jar
jersey-test-framework-grizzly2-1.9.jar
jets3t-0.9.0.jar
jettison-1.1.jar
jettison-1.3.1.jar
jetty-6.1.26.jar
jetty-sslengine-6.1.26.jar
jetty-util-6.1.26.jar
joda-time-2.7.jar
joni-2.1.2.jar
jruby-complete-1.6.8.jar
jsch-0.1.42.jar
jsp-2.1-6.1.14.jar
jsp-api-2.1-6.1.14.jar
jsp-api-2.1.jar
jsr305-3.0.0.jar
junit-4.11.jar
leveldbjni-all-1.8.jar
libthrift-0.9.0.jar
log4j-1.2.17.jar
management-api-3.0.0-b012.jar
metrics-core-3.0.1.jar
mockito-all-1.8.5.jar
netty-3.6.6.Final.jar
paranamer-2.3.jar
protobuf-java-2.5.0.jar
servlet-api-2.5-6.1.14.jar
servlet-api-2.5.jar
slf4j-api-1.6.4.jar
slf4j-log4j12-1.6.4.jar
snappy-java-1.0.4.1.jar
stax-api-1.0-2.jar
xmlenc-0.52.jar
xz-1.0.jar
zookeeper-3.4.6.jar
What am I missing here?
It looks like there is a version compatibility issue. The method org.apache.hadoop.hive.serde2.lazy.LazyUtils.getByte was added to this class in this commit, which was released in Hive 1.2. See here.
Actually, I made a mistake. I had kept hive-hbase-handler-1.2.1.jar and hive-serde-1.2.1.jar in the auxlib path, which was causing the problem. When I removed the 1.2.1 versions of the jars, it worked fine with hive-hbase-handler-1.1.0.jar and hive-serde-1.1.0.jar. So the problem was resolved with Hive version 1.1.0 only (with HBase version 0.98.14 and Hadoop version 2.7.0).
NoSuchMethodError means the JVM could find the class, but not the method. Maybe the class loaded at runtime is not the same version as your Hive release.
You can start the Hive CLI in debug mode (bin/hive -hiveconf hive.root.logger=DEBUG,console). It will show all the jars it loads, and you can find the jar versions in the logs.
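To see which jar actually provides the offending class, a quick hedged sketch (assuming the auxlib path from the question):
# list every jar in auxlib that bundles LazyUtils
for j in /usr/lib/hive/auxlib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'serde2/lazy/LazyUtils.class' && echo "$j"
done
Any 1.2.1 serde jar turning up next to a 1.1.0 Hive install is exactly the mismatch described above.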

Copy data from one HBase table to another

I have created one table, hivetest, which also creates a table in HBase with the name 'hbasetest'. Now I want to copy the 'hbasetest' data into another HBase table (say logdata) with the same schema. Can anyone help me copy the data from 'hbasetest' to 'logdata' without using Hive?
CREATE TABLE hivetest(cookie string, timespent string, pageviews string, visit string, logdate string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "m:timespent, m:pageviews, m:visit, m:logdate")
TBLPROPERTIES ("hbase.table.name" = "hbasetest");
Updated question:
I have created the table logdata like this, but I am getting the following error.
create 'logdata', {NAME => ' m', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS =>'0', TTL => '2147483647', BLOCKSIZE=> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
13/09/23 12:57:19 INFO mapred.JobClient: Task Id : attempt_201309231115_0025_m_000000_0, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 755 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family m does not exist in region logdata,,1379920697845.30fce8bcc99bf9ed321720496a3ec498. in table 'logdata', {NAME => 'm', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', ENCODE_ON_DISK => 'true', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3773)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
: 755 times, servers with issues: master:60020,
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1674)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:953)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:109)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
13/09/23 12:57:29 INFO mapred.JobClient: Task Id : attempt_201309231115_0025_m_000000_1, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 755 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family m does not exist in region logdata,,1379920697845.30fce8bcc99bf9ed321720496a3ec498. in table 'logdata', {NAME => 'm', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', ENCODE_ON_DISK => 'true', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3773)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
: 755 times, servers with issues: master:60020,
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1674)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:953)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:109)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
13/09/23 12:57:38 INFO mapred.JobClient: Task Id : attempt_201309231115_0025_m_000000_2, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 755 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family m does not exist in region logdata,,1379920697845.30fce8bcc99bf9ed321720496a3ec498. in table 'logdata', {NAME => 'm', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', ENCODE_ON_DISK => 'true', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3773)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
: 755 times, servers with issues: master:60020,
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1674)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:953)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:109)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
13/09/23 12:57:53 INFO mapred.JobClient: Job complete: job_201309231115_0025
13/09/23 12:57:53 INFO mapred.JobClient: Counters: 7
13/09/23 12:57:53 INFO mapred.JobClient: Job Counters
13/09/23 12:57:53 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=34605
13/09/23 12:57:53 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/09/23 12:57:53 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/09/23 12:57:53 INFO mapred.JobClient: Rack-local map tasks=4
13/09/23 12:57:53 INFO mapred.JobClient: Launched map tasks=4
13/09/23 12:57:53 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/09/23 12:57:53 INFO mapred.JobClient: Failed map tasks=1
Use the CopyTable command; note that the target table must already exist with the same column families. Example:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=logdata hbasetest
Actually, I am using hive-0.9.0, which has a bug:
https://issues.apache.org/jira/browse/HIVE-3243.
While creating the table, the SerDe of HBaseStorageHandler doesn't ignore the whitespace between the comma and the column family, so you need to remove the whitespace. Then it will work fine.
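In other words, the mapping from the DDL above should be written without spaces after the commas. A hedged sketch of the corrected statement (same columns and table names as in the question):
CREATE TABLE hivetest(cookie string, timespent string, pageviews string, visit string, logdate string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "m:timespent,m:pageviews,m:visit,m:logdate")
TBLPROPERTIES ("hbase.table.name" = "hbasetest");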

Datanode starts but not namenode

After a bit of struggling I eventually managed to run Hadoop in pseudo-distributed mode, with a namenode and a jobtracker working perfectly (at http://localhost:50070 and http://localhost:50030).
Yesterday I tried to restart my namenode, datanode, etc. with:
$ hadoop namenode -format
$ start-all.sh
And jps gives me the following output:
17148 DataNode
17295 SecondaryNameNode
17419 JobTracker
17669 Jps
The namenode doesn't seem to be willing to start anymore, and the jobtracker dies a few seconds later.
Note that I hadn't restarted my computer, and I tried the solution given in the following thread, Namenode not getting started, but it didn't help.
Here is the namenode log, with a bunch of errors. I don't know how to solve my issue at all.
2013-08-16 09:02:21,647 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.lan/192.168.1.94
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_25
************************************************************/
2013-08-16 09:02:21,839 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-16 09:02:21,868 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-16 09:02:21,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-16 09:02:21,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-08-16 09:02:22,098 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-16 09:02:22,103 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-08-16 09:02:22,110 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-08-16 09:02:22,111 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932118528
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=rlk
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-08-16 09:02:22,189 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-08-16 09:02:22,189 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-08-16 09:02:22,271 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-08-16 09:02:22,320 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-08-16 09:02:22,321 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-08-16 09:02:22,363 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/rlk/hduser/dfs/name/current/fsimage
2013-08-16 09:02:22,364 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-08-16 09:02:22,372 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-08-16 09:02:22,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/rlk/hduser/dfs/name/current/fsimage of size 109 bytes loaded in 0 seconds.
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /home/rlk/hduser/dfs/name/current/edits, reached end of edit log Number of transactions found: 0. Bytes read: 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/home/rlk/hduser/dfs/name/current/edits) ...
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/home/rlk/hduser/dfs/name/current/edits):
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Padding position = -1 (-1 means padding not found)
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit log length = 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Read length = 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Corruption length = 0
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-08-16 09:02:22,382 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-08-16 09:02:22,382 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /home/rlk/hduser/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-08-16 09:02:22,387 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/rlk/hduser/dfs/name/current/fsimage of size 109 bytes saved in 0 seconds.
2013-08-16 09:02:22,553 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,553 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,933 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-08-16 09:02:22,933 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 776 msecs
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.9990000128746033
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 30000
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 21 msec
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-08-16 09:02:22,962 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-08-16 09:02:22,972 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-08-16 09:02:22,983 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-08-16 09:02:23,026 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-08-16 09:02:23,029 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort8020 registered.
2013-08-16 09:02:23,030 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort8020 registered.
2013-08-16 09:02:23,037 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost.localdomain/127.0.0.1:8020
2013-08-16 09:02:23,195 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-08-16 09:02:23,306 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-08-16 09:02:23,318 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-08-16 09:02:23,329 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-08-16 09:02:23,331 INFO org.mortbay.log: jetty-6.1.26
2013-08-16 09:02:23,386 INFO org.mortbay.log: Extract jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/jre/lib/ext/hadoop-core-1.2.1.jar!/webapps/hdfs to /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08/webapp
2013-08-16 09:02:25,171 WARN org.mortbay.log: failed jsp: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,215 WARN org.mortbay.log: failed org.mortbay.jetty.webapp.WebAppContext#12305d34{/,jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/jre/lib/ext/hadoop-core-1.2.1.jar!/webapps/hdfs}: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,225 WARN org.mortbay.log: failed ContextHandlerCollection#25370a40: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,226 ERROR org.mortbay.log: Error starting handlers
java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
at org.apache.jasper.servlet.JspServlet.init(JspServlet.java:99)
at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:736)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:638)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Caused by: java.lang.ClassNotFoundException: javax.servlet.jsp.JspFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 27 more
2013-08-16 09:02:25,307 INFO org.mortbay.log: Started SelectChannelConnector#0.0.0.0:50070
2013-08-16 09:02:25,307 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:rlk cause:java.io.IOException: Problem in starting http server. Server handlers failed
2013-08-16 09:02:25,308 INFO org.mortbay.log: Stopped SelectChannelConnector#0.0.0.0:50070
2013-08-16 09:02:25,308 ERROR org.mortbay.log: EXCEPTION
java.lang.NullPointerException
at org.apache.jasper.servlet.JspServlet.destroy(JspServlet.java:282)
at org.mortbay.jetty.servlet.ServletHolder.destroyInstance(ServletHolder.java:318)
at org.mortbay.jetty.servlet.ServletHolder.doStop(ServletHolder.java:289)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.servlet.ServletHandler.doStop(ServletHandler.java:185)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.jetty.servlet.SessionHandler.doStop(SessionHandler.java:125)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.jetty.handler.ContextHandler.doStop(ContextHandler.java:592)
at org.mortbay.jetty.webapp.WebAppContext.doStop(WebAppContext.java:537)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerCollection.doStop(HandlerCollection.java:169)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.jetty.Server.doStop(Server.java:283)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.apache.hadoop.http.HttpServer.stop(HttpServer.java:688)
at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:604)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-16 09:02:25,336 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-08-16 09:02:25,337 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
at java.lang.Thread.run(Thread.java:724)
2013-08-16 09:02:25,339 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-08-16 09:02:25,375 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:25,375 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:25,403 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
2013-08-16 09:02:25,411 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-08-16 09:02:25,412 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Problem in starting http server. Server handlers failed
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:662)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-16 09:02:25,413 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.lan/192.168.1.94
************************************************************/
I also give you my hadoop configuration (I'm using hadoop-1.2.1) :
core-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- core-site.xml -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/rlk/hduser</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost/</value>
</property>
</configuration>
hdfs-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- hdfs-site.xml -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- mapred-site.xml -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
</configuration>
I found the solution: it was some jar collisions. I had duplicate jar files both in hadoop-x.y.z/ and hadoop-x.y.z/lib and in path-to-java/jre/lib/ext/.
I just removed the former set and everything works fine again.
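A hedged sketch for spotting such collisions (the install paths are assumptions; adjust them to your layout):
# report JRE ext jars that also exist somewhere in the Hadoop install
for j in /usr/lib/jvm/java-*/jre/lib/ext/*.jar; do
  base=$(basename "$j")
  if ls ~/hadoop-1.2.1/"$base" ~/hadoop-1.2.1/lib/"$base" >/dev/null 2>&1; then
    echo "duplicate: $base"
  fi
done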
You did not mention the port number for the master node in core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://master:port</value>
</property>
The problem is in core-site.xml; please set it properly:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/rlk/hduser</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
