HBase configuration with HDFS HA - Hadoop

I am trying to set up HBase HA with Hadoop HA. I have completed the Hadoop HA setup and tested it.
But while starting HBase, I am getting the following error:
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2426)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:231)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:137)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2436)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfs-nameservice
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:258)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1003)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:570)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:381)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2419)
... 5 more
Caused by: java.net.UnknownHostException: hdfs-nameservice
... 25 more
I think my HBase setup doesn't recognize my nameservice hdfs-nameservice.
I am using HBase 1.2.4 and Hadoop 2.7.3.
My hbase-site.xml has
<property>
<name>hbase.rootdir</name>
<value>hdfs://hdfs-nameservice/hbase</value>
</property>
core-site.xml has
<property>
<name>fs.defaultFS</name>
<value>hdfs://hdfs-nameservice:8020</value>
</property>
and hdfs-site.xml has
<property>
<name>dfs.nameservices</name>
<value>hdfs-nameservice</value>
</property>
<property>
<name>dfs.ha.namenodes.hdfs-nameservice</name>
<value>namenode1,namenode2</value>
</property>
(plus the RPC and HTTP address properties for both namenode1 and namenode2)
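For reference, a client-side HA configuration that a client like HBase can resolve typically needs the per-namenode RPC addresses plus a failover proxy provider; a sketch with placeholder hostnames (nn1host and nn2host stand in for the real hosts):
<property>
<name>dfs.namenode.rpc-address.hdfs-nameservice.namenode1</name>
<!-- placeholder hostname -->
<value>nn1host:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.hdfs-nameservice.namenode2</name>
<!-- placeholder hostname -->
<value>nn2host:8020</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.hdfs-nameservice</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>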
What I have tried:
1. Copied both core-site.xml and hdfs-site.xml into the HBase conf directory.
2. Added the Hadoop conf path to an environment variable and to hbase-env.sh.
I still couldn't figure out how to make HBase recognize hdfs-nameservice.
Any help would be much appreciated.

Related

Replace HDFS from local disk to S3 getting error (org.apache.hadoop.service.AbstractService)

We are trying to set up Cloudera 5.5 where HDFS will work on S3 only. For that, we have already configured the necessary properties in core-site.xml:
<property>
<name>fs.s3a.access.key</name>
<value>################</value>
</property>
<property>
<name>fs.s3a.secret.key</name>
<value>###############</value>
</property>
<property>
<name>fs.default.name</name>
<value>s3a://bucket_Name</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>s3a://bucket_Name</value>
</property>
After setting it up, we were able to browse the files of the S3 bucket with the command
hadoop fs -ls /
and it shows the files available on S3 only.
But when we start the YARN services, the JobHistory server fails to start with the error below, and on launching Pig jobs we get the same error:
PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils
Unable to create default file context [s3a://kyvosps]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
On searching the Internet, we found that we need to set the following properties as well in core-site.xml:
<property>
<name>fs.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
<description>The implementation class of the S3A Filesystem</description>
</property>
<property>
<name>fs.AbstractFileSystem.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
<description>The FileSystem for S3A Filesystem</description>
</property>
After setting the above properties, we are getting the following error:
org.apache.hadoop.service.AbstractService
Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:131)
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)
The jars needed for this are in place but we are still getting the error. Any help will be great. Thanks in advance.
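For reference, one way to double-check that the S3A classes are actually on the Hadoop classpath (a quick sketch; exact jar names vary by distribution):
# list classpath entries and look for the S3A and AWS SDK jars
hadoop classpath | tr ':' '\n' | grep -iE 'hadoop-aws|aws-java-sdk'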
Update
I tried to remove the property fs.AbstractFileSystem.s3a.impl but it gives me the same first exception, the one I was getting previously, which is:
org.apache.hadoop.security.UserGroupInformation
PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils
Unable to create default file context [s3a://bucket_name]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
The problem is not with the location of the jars.
The problem is with the setting:
<property>
<name>fs.AbstractFileSystem.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
<description>The FileSystem for S3A Filesystem</description>
</property>
This setting is not needed. Because of this setting, it searches for the following constructor in the S3AFileSystem class, and there is no such constructor:
S3AFileSystem(URI theUri, Configuration conf);
The following exception clearly tells us that it is unable to find a constructor for S3AFileSystem with URI and Configuration parameters:
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
To resolve this problem, remove the fs.AbstractFileSystem.s3a.impl setting from core-site.xml. Just having the fs.s3a.impl setting in core-site.xml should solve your problem.
EDIT:
org.apache.hadoop.fs.s3a.S3AFileSystem just extends FileSystem.
Hence, you cannot set the value of fs.AbstractFileSystem.s3a.impl to org.apache.hadoop.fs.s3a.S3AFileSystem, since org.apache.hadoop.fs.s3a.S3AFileSystem does not implement AbstractFileSystem.
I am using Hadoop 2.7.0, and in this version S3A is not exposed as an AbstractFileSystem.
There is a JIRA ticket, https://issues.apache.org/jira/browse/HADOOP-11262, to implement this, and the fix is available in Hadoop 2.8.0.
Assuming your jar has exposed S3A as an AbstractFileSystem, you need to set the following for fs.AbstractFileSystem.s3a.impl:
<property>
<name>fs.AbstractFileSystem.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3A</value>
</property>
That will solve your problem.
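One follow-up note: after changing core-site.xml, the JobHistory server has to be restarted to pick up the new setting. A sketch for a plain Apache install (a Cloudera-managed cluster would instead restart the service from the manager UI):
# restart the MapReduce JobHistory server so it re-reads core-site.xml
sbin/mr-jobhistory-daemon.sh stop historyserver
sbin/mr-jobhistory-daemon.sh start historyserver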

Hive Create Failed : Resource Not Available Exception

I am trying to create a table in Hive but it is giving me the exception below:
2015-08-20 22:47:12,145 INFO [main]: exec.DDLTask (DDLTask.java:createTable(4074)) - Default to LazySimpleSerDe for table test_s
2015-08-20 22:47:12,157 ERROR [main]: exec.DDLTask (DDLTask.java:execute(478)) - org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: java.io.IOException Error: Resource temporarily unavailable(11), file: test_s, user name: mapr, ID: 5000)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:620)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4189)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:281)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1508)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1275)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1093)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: MetaException(message:Got exception: java.io.IOException Error: Resource temporarily unavailable(11), file: test_s, user name: mapr, ID: 5000)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:24255)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:24223)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result.read(ThriftHiveMetastore.java:24149)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_table_with_environment_context(ThriftHiveMetastore.java:893)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_table_with_environment_context(ThriftHiveMetastore.java:879)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:566)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
at com.sun.proxy.$Proxy5.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:614)
... 20 more
2015-08-20 22:47:12,157 ERROR [main]: ql.Driver (SessionState.java:printError(545)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.io.IOException Error: Resource temporarily unavailable(11), file: test_s, user name: mapr, ID: 5000)
Has anyone faced such an issue? Just to give more information, I am using a MapR cluster, and looking at the MapR management UI, everything seems to be normal.
Any idea?
Have you checked the status of the Hive metastore service? Is the metastore configured and running? The Hive service/Thrift service should also be running on the node.
To configure the metastore service, the following update is required in hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://myhost/metastore</value>
<description>the URL of the MySQL database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mypassword</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://<n.n.n.n>:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
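To confirm the metastore service itself is up and listening, something like the following can be used (a sketch assuming the default Thrift port 9083 from the config above):
# start the metastore service in the background
hive --service metastore &
# check that something is listening on the Thrift port
netstat -tln | grep 9083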
If you are using MySQL as the backend DB for Hive, check the linked documentation for more configuration details.
Hope this helps.

The node /hbase is not in ZooKeeper

I am a newbie in Hadoop trying to install HBase in pseudo-distributed mode, version hbase-0.98.10.1-hadoop1-bin, with Hadoop 2.5.2. I am not able to add a table.
The following error repeats when I try to create a table:
client.HConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
After displaying the error many times (about 50 times), it finally gives the final error:
ERROR: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
The latest entry in the log file is:
2015-02-23 16:38:39,456 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3017)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:186)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3031)
Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1113)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:942)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:533)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3012)
... 5 more
hbase-site.xml configuration file:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/Hbase/zookeeper</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
Output of jps is:
7584 Main
8532 HQuorumPeer
4435 SecondaryNameNode
4880 NodeManager
4269 DataNode
7735 FsShell
4592 ResourceManager
4141 NameNode
9128 Jps
3147 ZKServerTool
3651 HRegionServer
2992 HMaster
What could be the possible error? Any help is appreciated.
It just worked after using a different HBase version. I was using hbase-0.98.10.1-hadoop1-bin, which was not compatible with my Hadoop 2.5.2, so I changed the HBase version to hbase-X.XX.XX.X-hadoop2-bin (which is compatible with Hadoop 2.x) and followed Apache's installation steps.
Thank you all.
It means ZooKeeper does not have the node '/hbase', so create a node named '/hbase' in ZooKeeper: go to zkCli and run the 'create /hbase ""' command.
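For example, a zkCli session might look like this (a sketch assuming ZooKeeper listens on the default localhost:2181):
# connect to ZooKeeper
bin/zkCli.sh -server localhost:2181
# inside the shell: list the root, create the missing znode, then list again
ls /
create /hbase ""
ls /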
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/Hbase/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase-unsecure</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.master</name>
<value>hadoop-master:60000</value>
</property>
</configuration>
Make sure ZooKeeper is started and the dataDir exists.
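A quick way to confirm both (a sketch; the four-letter-word check assumes it is enabled, and the dataDir path is the one from the config above):
# ZooKeeper should answer 'imok'
echo ruok | nc localhost 2181
# the dataDir must exist and be writable
ls -ld /usr/local/Hbase/zookeeper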
I use it in Talend. When I add an HBase connection in the Hadoop cluster, at step 2/2 I choose the repository in the Hadoop cluster. When I enter the server and port (default 2181) and press the 'Check' button, the system prompts:
Connection failure. You must change the Database Settings.
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master
Then you need to add a Hadoop property: click the button and add
zookeeper.znode.parent=/hbase-unsecure
Click OK, then click 'Check' again, and the system reports success.
The above answer is correct but a bit lengthy. I was able to solve this problem by just adding the following property in hbase-site.xml (I used hbase-1.2.1):
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/mnt/data/zookeeper</value>
</property>
I didn't have to create /mnt/data/zookeeper either. Since I was using HBase in standalone mode, I didn't have to run ZooKeeper; in fact, it gave an error when I did.
The complete hbase-site.xml configuration file looks like this:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///mnt/data/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/mnt/data/zookeeper</value>
</property>
</configuration>
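After updating hbase-site.xml, a restart plus a quick shell check can confirm HBase comes up cleanly (a sketch, run from the HBase install directory):
# restart standalone HBase and ask for cluster status
bin/stop-hbase.sh
bin/start-hbase.sh
echo "status" | bin/hbase shell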

YARN ResourceTrackerService failed in state STARTED

I am trying to set up a Hadoop cluster on a few machines with the Hadoop directory on a shared disk. HDFS worked well. But when I try to start YARN, the ResourceTracker throws a BindException. The node (ahti.d.umn.edu, 131.212.41.9) on which the ResourceTracker is configured to run is reachable (I can SSH into it) and the port (28025) is also open.
org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [ahti.d.umn.edu:28025] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [ahti.d.umn.edu:28025] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.serviceStart(ResourceTrackerService.java:159)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:503)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:898)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:938)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:935)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:935)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:979)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1104)
The following is my yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>131.212.41.9</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>131.212.41.9:28025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>131.212.41.9:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>131.212.41.9:8050</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>131.212.41.9:8041</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/scratch/dfs/yarn</value>
</property>
<property>
<name>yarn.log.dir</name>
<value>/scratch/hadoop/yarn/logs</value>
</property>
</configuration>
If it matters, I am running Java 8.
Any clues on how to fix it?
It looks like it could be because of one of two reasons:
1. Some other instance of the ResourceManager may already be running and using the port. Kill that ResourceManager instance and start it again: find the process ID with ps aux | grep -i resourcemanager, then kill it with kill -9 <RESOURCE_MANAGER_PID> (see the sketch below).
2. Hadoop doesn't fully support JDK 8. See the link for Hadoop's supported Java versions; if option 1 does not work, try downgrading your Java version to JDK 7.
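A minimal sketch of the first check (the PID is whatever the ps output shows; java -version simply confirms which JDK is active):
# look for a stray ResourceManager and stop it
ps aux | grep -i resourcemanager
kill -9 <RESOURCE_MANAGER_PID>
# confirm the JDK in use
java -version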

Error: E0902: Exception occured: [User: Root is not allowed to impersonate root]

I am trying to follow the steps given at http://www.rohitmenon.com/index.php/apache-oozie-installation/
Note: I am not using the Cloudera distribution of Hadoop.
The above link is similar to http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html but seems more descriptive to me.
However, while running the command below as the root user, I am getting an exception:
./bin/oozie-setup.sh sharelib create -fs
Note: I have two live nodes shown at dfshealth.jsp, and I have updated core-site.xml for all three nodes (including the namenode) with the properties below:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
I understand this is the point where I am making a mistake. Could someone please guide me?
Stacktrace
org.apache.oozie.service.HadoopAccessorException: E0902: Exception occured: [User: root is not allowed to impersonate root]
at org.apache.oozie.service.HadoopAccessorService.createFileSystem(HadoopAccessorService.java:430)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:144)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:52)
Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate root
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:135)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:276)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:241)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.oozie.service.HadoopAccessorService$2.run(HadoopAccessorService.java:422)
at org.apache.oozie.service.HadoopAccessorService$2.run(HadoopAccessorService.java:420)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.oozie.service.HadoopAccessorService.createFileSystem(HadoopAccessorService.java:420)
... 2 more
--------------------------------------
Note: For 'Getting E0902: Exception occured: [User: oozie is not allowed to impersonate oozie]' I have followed this link as well, but I was not able to solve my problem.
If I change core-site.xml as below, only on the NameNode:
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>[NAMENODE IP]</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>hadoop</value>
</property>
I get the following exception:
Unauthorized connection for super-user: hadoop
After adding the properties to core-site.xml, restart your Hadoop and try again. If it still does not work, format the NameNode and start Hadoop; it will work.
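As an alternative to a full restart, newer Hadoop releases can reload the proxyuser settings in place; a sketch (on Hadoop 1.x the command is hadoop dfsadmin rather than hdfs dfsadmin):
# re-read the hadoop.proxyuser.* settings on the NameNode
hdfs dfsadmin -refreshSuperUserGroupsConfiguration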
You need to add these properties to core-site.xml for impersonation in order to solve your whitelist error:
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
Hope this fixes your issue.
Follow the advice in the article below. Hadoop before 1.1.0 doesn't support wildcards, so you have to explicitly specify the hosts and the groups (see the sketch after the link).
http://mail-archives.apache.org/mod_mbox/oozie-user/201212.mbox/%3CCAOcnVr1TZZ5X0Mrb7fFA8JdW6rO6PgoJ9u0=2UYbfXf_o8r=DA#mail.gmail.com%3E
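For example, on such older versions the entries might look like this (the hostname and group shown are placeholders, not values from the question):
<property>
<name>hadoop.proxyuser.root.hosts</name>
<!-- host the Oozie server runs on; wildcards are not supported before 1.1.0 -->
<value>oozie-host.example.com</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<!-- group(s) whose members root may impersonate -->
<value>root</value>
</property>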
I solved the problem by adding these lines to the core-site.xml file:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
It works perfectly; all my databases and tables are shown.
./oozie-setup.sh sharelib create -fs hdfs://localhost:9000
Try to run this command using sudo.
Check whether the path /user/user_name/share/lib already exists in HDFS; if it exists, remove it using
hadoop fs -rmr /user/user_name
After that, run sudo ./oozied.sh; Oozie will be started. Then check your localhost:11000.
