I have some integration tests that use HDFS with Kerberos authentication. When I run them, I get this exception:
java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "Serbans-MacBook-Pro.local/1.2.3.4"; destination host is: "10.0.3.33":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
I believe that everything is configured correctly:
System.setProperty("java.security.krb5.realm", "...");
System.setProperty("java.security.krb5.kdc", "...");
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://10.0.3.33:8020");
conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
conf.set("hadoop.security.authentication", "Kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("user@...", "/Users/user/user.keytab");
What do you believe the problem is? On my host (10.0.3.33) I have core-site.xml and hdfs-site.xml configured correctly. But I am not running from that host, as the exception suggests.
Any ideas what to do, in order to be able to run the tests from any host?
Thanks,
Serban
If you are using a Hadoop version older than 2.6.2, the default principal pattern property is not available in hdfs-site.xml, so you need to set the pattern property manually:
config.set("dfs.namenode.kerberos.principal.pattern", "hdfs/*@BDBIZVIZ.COM");
Related
I have tried executing a Storm HDFS sample program. I created a jar file and copied it to the edge node of the cluster.
Then I executed the command below:
java -cp storm-.1.jar:lib/*:lib/log4j.properties test.storm.sample.StormSampleTopology
But I am getting the exception below:
17434 [Thread-11-hdfs_bolt] WARN org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
17448 [Thread-9-hdfs_bolt] WARN org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
17457 [Thread-11-hdfs_bolt] ERROR backtype.storm.util - Async loop died!
java.lang.RuntimeException: Error preparing HdfsBolt: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "xxxx2002.xx.xxxxx.com/11.11.111.221"; destination host is: "x-xxxxx.xxxx.com":8020;
at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:96) ~[storm-hdfs-0.1.2.jar:na]
at backtype.storm.daemon.executor$fn__3441$fn__3453.invoke(executor.clj:692) ~[storm-core-0.9.3.jar:0.9.3]
at backtype.storm.util$async_loop$fn__464.invoke(util.clj:461) ~[storm-core-0.9.3.jar:0.9.3]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "xxx2002.xx.xxx.com/11.11.111.221"; destination host is: "x-xxxx.xxx.com":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1472) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1399) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at com.sun.proxy.$Proxy14.create(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295) ~[hadoop-hdfs-2.6.0.2.2.0.0-2041.jar:na]
My keytab file is located on the edge node itself, and I have also specified its location in the property file.
Please help me to resolve this issue.
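This is the same "Failed to specify server's Kerberos principal name" problem as in the question above, hit from inside the bolt. Newer storm-hdfs releases let you hand the principal and keytab to the bolt through a config map registered under a config key; the sketch below assumes such a version, and the key name "hdfs.config", the principal, the keytab path, and the NameNode URL are all placeholders:
import java.util.HashMap;
import java.util.Map;
import backtype.storm.Config;
import org.apache.storm.hdfs.bolt.HdfsBolt;

// Sketch only: assumes a storm-hdfs version that supports withConfigKey.
Map<String, Object> hdfsCredentials = new HashMap<String, Object>();
hdfsCredentials.put("hdfs.keytab.file", "/etc/security/keytabs/user.keytab"); // placeholder path
hdfsCredentials.put("hdfs.kerberos.principal", "user@EXAMPLE.COM");           // placeholder principal

Config topologyConf = new Config();
topologyConf.put("hdfs.config", hdfsCredentials);

HdfsBolt bolt = new HdfsBolt()
        .withFsUrl("hdfs://namenode.example.com:8020") // placeholder NameNode URL
        .withConfigKey("hdfs.config"); // the bolt reads the keytab and principal from this map
// (file name format, record format, sync and rotation policies omitted for brevity)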
I am trying to implement Kerberos authentication. I am using Hadoop 2.3 on CDH 5.0.1. I have made the following changes:
I added the following properties to core-site.xml:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
After restarting the daemons, when I issue the hadoop fs -ls / command, I get the following error:
ls: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "cldx-xxxx-xxxx/xxx.xx.xx.xx"; destination host is: "cldx-xxxx-xxxx":8020;
Please help me out.
Thanks in advance,
Ankita Singla
There is a lot more to configuring a secure HDFS cluster than just setting hadoop.security.authentication to kerberos. See Configuring Hadoop Security in CDH 5 for the required config settings. You will also need to create the appropriate keytab files. Only after you have configured everything, and confirmed that none of the Hadoop services report any errors in their respective logs (namenode and datanode on all hosts, resourcemanager and nodemanager on all nodes, etc.), can you attempt to connect.
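As a rough sketch of the kind of service-side settings involved (the realm and keytab path below are placeholders; follow the CDH guide for the complete list), hdfs-site.xml ends up with entries along these lines:
<!-- Sketch only: realm and keytab path are placeholders -->
<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@EXAMPLE.COM</value>
</property>
<property>
<name>dfs.namenode.keytab.file</name>
<value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hdfs/_HOST@EXAMPLE.COM</value>
</property>
<property>
<name>dfs.datanode.keytab.file</name>
<value>/etc/hadoop/conf/hdfs.keytab</value>
</property>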
I am using Hive 0.11 and the metastore in local mode. When I try to start the metastore daemon, it exits after printing the following error message:
2013-11-21 08:47:19.541 GMT Thread[main,5,main] java.io.FileNotFoundException: derby.log (Permission denied)
2013-11-21 08:47:19.646 GMT Thread[main,5,main] Cleanup action starting
ERROR XBM0H: Directory /metastore_db cannot be created.
This is my hive-site.xml. I am using MySQL as the metastore storage. What I don't understand is why Hive is trying to create metastore_db locally.
Thanks.
Set the hive.metastore.local property to false. (This property was removed as of Hive 0.10: if hive.metastore.uris is empty, local mode is assumed; otherwise remote mode.)
Set the hive.metastore.uris property to a valid URI (the host and port of the Thrift metastore server).
For example:
<property>
<name>hive.metastore.uris</name>
<value>thrift://hap-db:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
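Also make sure a metastore server is actually listening at that URI; assuming the default setup, it can be started with:
hive --service metastore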
Hi, I faced a similar issue on Hive 0.14. I had installed Hive as the root user and was trying to run the Hive services as the sudo user I use for all Hadoop jobs.
Once I changed the installation owner to that user and restarted, it worked. So this error is mostly a file permissions issue.
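For example, assuming Hive is installed under /usr/local/hive and the jobs run as user hadoop (both placeholders for your own layout):
# placeholders: adjust the install path and user to your setup
sudo chown -R hadoop:hadoop /usr/local/hive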
I am not able to start HBase. Whenever I start it, I only see HMaster and HRegionServer in jps; HQuorumPeer keeps going missing. I checked the logs and am getting the error below:
java.lang.RuntimeException: Unable to run quorum server
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:454)
at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:151)
at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:80)
at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
Caused by: java.io.IOException: Failed to process transaction type: 1 error: KeeperErrorCode = NoNode for /hbase
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:153)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
... 4 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.processTransaction(FileTxnSnapLog.java:211)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:151)
The reason you are encountering this error could be that the data directory where ZooKeeper stores its snapshots and logs is corrupted.
To keep the HQuorumPeer daemon from dying, provide a path to a new directory where ZooKeeper can store its snapshots. To do this, add the following property to hbase-site.xml:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>location of the newly created folder</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
The default path for hbase.zookeeper.property.dataDir is /tmp/hbase-*/zookeeper (e.g. /tmp/hbase-hadoop/zookeeper); remove it and try to start ZooKeeper again.
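For example, assuming the default layout:
rm -rf /tmp/hbase-*/zookeeper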
Removing all files from the ZooKeeper data directory solved the issue. In my case:
rm /var/lib/zookeeper/version-2/*
EDIT: I was able to get this to work. I created a tutorial to show how:
http://www.dreamsyssoft.com/blog/blog.php?/archives/5-How-to-use-HBase-Hadoop-Clustered.html
I can get HBase working just fine when I set the hbase-site.xml property:
<name>hbase.rootdir</name>
<value>file:///app/hbase/hbase/</value>
This works just fine; it stores the data in the directory as expected. However, I now want it to connect to my running Hadoop instance instead of using a local file, so I set it to
<value>hdfs://localhost:9000/</value>
instead of the local file, but it does not work. Is there some extra configuration I need to do on the Hadoop side to support this? Hadoop is running and using port 9000.
Here's my core-site.xml from hadoop:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Also here's the error when trying to do a "list" in hbase shell:
hbase(main):001:0> list
TABLE
12/06/28 15:26:29 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
12/06/28 15:26:29 WARN zookeeper.ZKUtil: hconnection Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:154)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:226)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:76)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:580)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:569)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:186)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:98)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275)
at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91)
at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178)
I believe HBase cannot find the ZooKeeper quorum; you have to set the hbase.zookeeper.quorum property in hbase-site.xml. Also check whether the classpath is set properly; see this doc:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
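A sketch of the relevant hbase-site.xml entries for this setup, assuming ZooKeeper runs on the same machine (the quorum host and the /hbase path are placeholders):
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>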
I used the configuration suggested in http://hbase.apache.org/book.html#zookeeper
Stop iptables and start HBase as root.
There may be a compatibility issue; please check http://rajkrrsingh.blogspot.in/2013/11/hbase-compatibility-with-hadoop-2x.html