We have CDH 5.2 with Cloudera Manager 5.
We want to copy data from nameservice2 to nameservice1.
Both clusters are on the same CDH version.
When I tried
hadoop distcp hdfs://nameservice2/foo/bar hdfs://nameservice1/bar/foo
I got the error
java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice2
So I added the following nameservice2 configuration to the nameservice1 cluster, via the HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml in Cloudera Manager (Gateway Default Group):
<property>
<name>dfs.nameservices</name>
<value>nameservices2</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.nameservices2</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservices2</name>
<value>namenode36,namenode405</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.cc:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50470</value>
</property>
But I am still getting the same error.
Is there any workaround for this?
Thanks.
In HA-enabled HDFS, nameservice1 and nameservice2 are logical names; you cannot use a port together with a logical name.
You have two methods.
The easy method is to find the active NameNodes and use active-namenode:port in the distcp command, as follows. The NameNode web UI can be used to find the active NameNode of each cluster.
hadoop distcp hdfs://hnn001.prod.cc:8020/foo/bar hdfs://<dest-cluster-active-nn-hostname>:8020/bar/foo
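The active/standby state can also be checked from the command line instead of the web UI. A minimal sketch, assuming the NameNode IDs namenode36 and namenode405 from the configuration above (the IDs come from dfs.ha.namenodes.<nameservice> in each cluster's hdfs-site.xml):
# run from a node whose client configuration knows the nameservice; prints "active" or "standby"
hdfs haadmin -getServiceState namenode36
hdfs haadmin -getServiceState namenode405
# if the client config defines more than one nameservice, you may need to pick one explicitly:
# hdfs haadmin -ns <nameserviceId> -getServiceState namenode36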
The other method is to use the logical names of the two clusters, as follows. But before trying the command below, make sure you have properly configured nameservice1 and nameservice2 in your client hdfs-site.xml.
hadoop distcp hdfs://nameservice2/foo/bar hdfs://nameservice1/bar/foo
Configuring the remote cluster's nameservice in the local cluster
It looks like nameservice2 is your local cluster and nameservice1 is the remote one. You need to keep all the associated properties of both nameservice1 and nameservice2 in the local cluster, i.e. your local cluster's client hdfs-site.xml should look as follows.
<configuration>
<!-- Available nameservices -->
<property>
<name>dfs.nameservices</name>
<value>nameservices1,nameservices2</value>
</property>
<!-- Local nameservice2 properties -->
<property>
<name>dfs.client.failover.proxy.provider.nameservices2</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservices2</name>
<value>namenode36,namenode405</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.cc:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50470</value>
</property>
<!-- Remote nameservice1 properties -->
<!-- You can find these properties in the remote machine's hdfs-site.xml file -->
<property>
<name>dfs.client.failover.proxy.provider.nameservices1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservices1</name>
<value>namenodeXX,namenodeYY</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:50470</value>
</property>
<!-- Other properties -->
</configuration>
In the above configuration, replace all placeholders like XX, YY, <Remote-nn1>, and <Remote-nn2> with the corresponding values from the remote cluster's hdfs-site.xml.
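Once both nameservices are in the client hdfs-site.xml, it can help to verify that each logical name resolves before launching the copy. A small sketch; note that the authority in the URI must match the entries in dfs.nameservices exactly (nameservices1 and nameservices2 in the snippet above):
# both listings should succeed without an UnknownHostException
hadoop fs -ls hdfs://nameservices2/
hadoop fs -ls hdfs://nameservices1/
# then run the copy using the logical URIs
hadoop distcp hdfs://nameservices2/foo/bar hdfs://nameservices1/bar/foo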
I am trying to deploy Hadoop using Apache Bigtop 3.1.1 with Puppet. The Hadoop version is 3.2.4.
The OS I am using is CentOS 7.
Deployment works fine without Kerberos, but with Kerberos the Hadoop DataNode does not start up. I tried to start it manually with
sudo systemctl start hadoop-hdfs-datanode.service
and this is the error it gives:
Nov 21 12:15:59 master.local hadoop-hdfs-datanode[13646]: ERROR: You must be a privileged user in order to run a secure service.
Nov 21 12:16:04 master.local hadoop-hdfs-datanode[13646]: Failed to start Hadoop datanode. Return value: 3 [FAILED]
Nov 21 12:16:04 master.local systemd[1]: hadoop-hdfs-datanode.service: control process exited, code=exited status=3
Nov 21 12:16:04 master.local systemd[1]: Failed to start LSB: Hadoop datanode.
This is my site.yaml file
---
bigtop::hadoop_head_node: "master.local"
hadoop::hadoop_storage_dirs:
- /data/1
- /data/2
- /data/3
- /data/4
hadoop_cluster_node::cluster_components:
- hdfs
- spark
- hive
- tez
- sqoop
- zookeeper
- kafka
- livy
- oozie
- zeppelin
- solrcloud
- kerberos
- httpfs
bigtop::bigtop_repo_uri: "http://10.42.65.70:90/bigtop/3.1.1/rpm/"
# - "https://archive.apache.org/dist/bigtop/bigtop-3.1.1/repos/centos-7/"
hadoop::common_hdfs::hadoop_http_authentication_signature_secret: "FaztheBits123!"
# Kerberos
hadoop::hadoop_security_authentication: "kerberos"
kerberos::krb_site::domain: "bigtop.apache.org"
kerberos::krb_site::realm: "BIGTOP.APACHE.ORG"
kerberos::krb_site::kdc_server: "%{hiera('bigtop::hadoop_head_node')}"
kerberos::krb_site::kdc_port: "88"
kerberos::krb_site::admin_port: "749"
kerberos::krb_site::keytab_export_dir: "/var/lib/bigtop_keytabs"
hadoop::common_hdfs::hadoop_http_authentication_type: "%{hiera('hadoop::hadoop_security_authentication')}"
# to enable tez in hadoop, uncomment the lines below
hadoop::common::use_tez: true
hadoop::common_mapred_app::mapreduce_framework_name: "yarn-tez"
# to enable tez in hive, uncomment the lines below
hadoop_hive::common_config::hive_execution_engine: "tez"
I tried changing the workers in /etc/hadoop/conf/workers to the proper hostnames. My core-site.xml is this:
<configuration>
<property>
<!-- URI of NN. Fully qualified. No IP.-->
<name>fs.defaultFS</name>
<value>hdfs://master.local:8020</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hadoop.proxyuser.hive.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hive.groups</name>
<value>hudson,testuser,root,hadoop,jenkins,oozie,hive,httpfs,users</value>
</property>
<property>
<name>hadoop.proxyuser.httpfs.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.httpfs.groups</name>
<value>hudson,testuser,root,hadoop,jenkins,oozie,hive,httpfs,users</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>hudson,testuser,root,hadoop,jenkins,oozie,hive,httpfs,users</value>
</property>
<!-- enable proper authentication instead of static mock authentication as
Dr. Who -->
<property>
<name>hadoop.http.filter.initializers</name>
<value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<!-- disable anonymous access -->
<property>
<name>hadoop.http.authentication.simple.anonymous.allowed</name>
<value>false</value>
</property>
<!-- enable kerberos authentication -->
<property>
<name>hadoop.http.authentication.type</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.http.authentication.kerberos.principal</name>
<value>HTTP/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<property>
<name>hadoop.http.authentication.kerberos.keytab</name>
<value>/etc/HTTP.keytab</value>
</property>
<!-- provide secret for cross-service-cross-machine cookie -->
<property>
<name>hadoop.http.authentication.signature.secret.file</name>
<value>/etc/hadoop/conf/hadoop-http-authentication-signature-secret</value>
</property>
<!-- make all services on all hosts use the same cookie domain -->
<property>
<name>hadoop.http.authentication.cookie.domain</name>
<value>local</value>
</property>
<property>
<name>hadoop.security.key.provider.path</name>
<value>kms://http@master.local:9600/kms</value>
</property>
</configuration>
My hdfs-site.xml is:
<configuration>
<!-- non HA -->
<property>
<name>dfs.namenode.rpc-address</name>
<value>master.local:8020</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>master.local:50070</value>
</property>
<property>
<name>dfs.namenode.https-address</name>
<value>master.local:50470</value>
</property>
<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>
<!-- NameNode security config -->
<property>
<name>dfs.https.address</name>
<value>master.local:50475</value>
</property>
<property>
<name>dfs.https.port</name>
<value>50475</value>
</property>
<property>
<name>dfs.namenode.keytab.file</name>
<value>/etc/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<property>
<name>dfs.namenode.kerberos.https.principal</name>
<value>host/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/etc/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>HTTP/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<!-- Secondary NameNode security config -->
<property>
<name>dfs.secondary.http.address</name>
<value>master.local:0</value>
</property>
<property>
<name>dfs.secondary.https.address</name>
<value>master.local:50495</value>
</property>
<property>
<name>dfs.secondary.https.port</name>
<value>50495</value>
</property>
<property>
<name>dfs.secondary.namenode.keytab.file</name>
<value>/etc/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.secondary.namenode.kerberos.principal</name>
<value>hdfs/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.https.principal</name>
<value>host/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<!-- DataNode security config -->
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>700</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:1004</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:1006</value>
</property>
<property>
<name>dfs.datanode.keytab.file</name>
<value>/etc/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hdfs/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<property>
<name>dfs.datanode.kerberos.https.principal</name>
<value>host/_HOST@BIGTOP.APACHE.ORG</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/1/hdfs,file:///data/2/hdfs,file:///data/3/hdfs,file:///data/4/hdfs</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/1/namenode,file:///data/2/namenode,file:///data/3/namenode,file:///data/4/namenode</value>
</property>
<property>
<name>dfs.permissions.superusergroup</name>
<value>hadoop</value>
<description>The name of the group of super-users.</description>
</property>
<!-- increase the number of datanode transceivers way above the default of 256
- this is for hbase -->
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
<!-- Configurations for large cluster -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
I can't seem to find a log for the DataNode. I can find the NameNode logs in
/var/log/hadoop-hdfs, but not the DataNode logs.
What am I doing wrong?
I configured the short-circuit read settings in both hdfs-site.xml and hbase-site.xml, and I ran importtsv on HBase to import data from HDFS into HBase. Looking over the log on each DataNode, every DataNode has the ConnectException mentioned in the title.
2017-03-31 21:59:01,273 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: error creating DomainSocket
java.net.ConnectException: connect(2) error: No such file or directory when trying to connect to '50010'
at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:250)
at org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:164)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:753)
at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:469)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:783)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:717)
at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:421)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:332)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:617)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:841)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:889)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVIntInRange(WritableUtils.java:348)
at org.apache.hadoop.io.Text.readString(Text.java:471)
at org.apache.hadoop.io.Text.readString(Text.java:464)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:751)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-03-31 21:59:01,277 WARN [main] org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache: ShortCircuitCache(0x34f7234e): failed to load 1073750370_BP-642933002-"IP_ADDRESS"-1490774107737
EDIT
hadoop 2.6.4
hbase 1.2.3
hdfs-site.xml
<property>
<name>dfs.namenode.dir</name>
<value>/home/hadoop/hdfs/nn</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/home/hadoop/hdfs/snn</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoop/hdfs/dn</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop1:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:50090</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>hadoop1:8020</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>50</value>
</property>
<property>
<name>dfs.datanode.handler.count</name>
<value>50</value>
</property>
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>dfs.block.local-path-access.user</name>
<value>hbase</value>
</property>
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>775</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>_PORT</value>
</property>
<property>
<name>dfs.client.domain.socket.traffic</name>
<value>true</value>
</property>
hbase-site.xml
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop1/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop1,hadoop2,hadoop3,hadoop4,hadoop5,hadoop6,hadoop7,hadoop8</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>50</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.5</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.size</name>
<value>0.3</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.size.lower.limit</name>
<value>0.65</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>_PORT</value>
</property>
Short-circuit reads make use of a UNIX domain socket. This is a special path in the filesystem that allows the client and the DataNode to communicate. You need to set a path (not a port) for this socket. The DataNode should be able to create this path.
The parent directory of the path value (for example /var/lib/hadoop-hdfs/) must exist and should be owned by the HDFS superuser. Also make sure that no user except the HDFS user or root has access to this path.
mkdir /var/lib/hadoop-hdfs/
chown hdfs_user:hdfs_user /var/lib/hadoop-hdfs/
chmod 750 /var/lib/hadoop-hdfs/
Add this property to hdfs-site.xml on all datanodes and clients.
<property>
<name>dfs.domain.socket.path</name>
<value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
Restart the services after making the changes.
Note: Paths under /var/run or /var/lib are commonly used.
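After distributing the property and restarting, one quick sanity check is to confirm that the socket file actually appears. A minimal sketch, assuming a packaged install whose DataNode runs as a service (the service name varies by distribution) and the socket path used above:
# restart the DataNode so it picks up dfs.domain.socket.path
sudo service hadoop-hdfs-datanode restart
# the DataNode creates the socket on startup; expect an "s" in the mode bits, owned by the HDFS user
ls -l /var/lib/hadoop-hdfs/dn_socket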
I have configured high availability in my cluster,
which consists of three nodes:
hadoop-master (192.168.4.128) (name node)
hadoop-slave-1 (192.168.4.111) (another name node)
hadoop-slave-2 (192.168.4.106) (data node)
without formatting the name node (converting a non-HA-enabled cluster to an HA-enabled one), as described here:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
but I got two name nodes both working as standby,
so I tried to transition one of these two nodes to active by applying the following command:
hdfs haadmin -transitionToActive mycluster --forcemanual
with the following output:
17/04/03 08:07:35 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop-master/192.168.4.128:8020
17/04/03 08:07:36 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop-slave-1/192.168.4.111:8020
Illegal argument: Unable to determine service address for namenode 'mycluster'
My core-site.xml is:
<property>
<name>dfs.tmp.dir</name>
<value>/opt/hadoop/data15</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-master:8020</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/journal/node/local/data</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp</value>
</property>
My hdfs-site.xml is:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/opt/hadoop/data16</value>
<final>true</final>
</property>
<property>
<name>dfs.data.dir</name>
<value>/opt/hadoop/data17</value>
<final>true</final>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-slave-1:50090</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
<final>true</final>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>hadoop-master,hadoop-slave-1</value>
<final>true</final>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.hadoop-master</name>
<value>hadoop-master:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.hadoop-slave-1</name>
<value>hadoop-slave-1:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.hadoop-master</name>
<value>hadoop-master:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.hadoop-slave-1</name>
<value>hadoop-slave-1:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop-master:8485;hadoop-slave-2:8485;hadoop-slave-1:8485/mycluster</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop-master:2181,hadoop-slave-1:2181,hadoop-slave-2:2181</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>3000</value>
</property>
What should the service address value be? And what possible solutions can I apply in order
to bring one of the two name nodes into the active state?
Note that the ZooKeeper server on all three nodes is stopped.
I met the same issue, and it turned out that I hadn't formatted ZooKeeper and started the ZKFC.
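For reference, a minimal sketch of those two steps on a Hadoop 2.x cluster with automatic failover (the ZooKeeper quorum must be running first; the daemon script path assumes a tarball install):
# initialize the HA state znode in ZooKeeper (run once, from one of the NameNode hosts)
hdfs zkfc -formatZK
# start the ZKFC daemon on both NameNode hosts
$HADOOP_HOME/sbin/hadoop-daemon.sh start zkfc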
Hi everyone,
I want to use GridGain with Hadoop 2.4.0.
My Hadoop config is below.
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop-data</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>1440</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>ggfs://ggfs#R</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/hadoop-data/journal</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>r,host002,host004</value>
</property>
<property>
<name>fs.AbstractFileSystem.ggfs.impl</name>
<value>org.gridgain.grid.ggfs.hadoop.v2.GridGgfsHadoopFileSystem</value>
</property>
<property>
<name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
<value>NEVER</value>
</property>
</configuration>
After finishing the setup and starting HDFS, I run
hadoop fs -ls /
and get the error:
ls: No FileSystem for scheme: ggfs
How should I fix this?
Thanks.
Add the following to core-site.xml:
<property>
<name>fs.ggfs.impl</name>
<value>org.gridgain.grid.ggfs.hadoop.v1.GridGgfsHadoopFileSystem</value>
</property>
The second version of the Hadoop FileSystem API is rarely used; most of the Hadoop ecosystem works through the first version of the API.
Also, if you want to use only GGFS, you don't need to start the HDFS services.
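Putting the two together, core-site.xml ends up carrying both bindings for the ggfs:// scheme; a sketch using only the class names already shown in the question and this answer:
<!-- FileSystem (v1) implementation for the ggfs:// scheme -->
<property>
<name>fs.ggfs.impl</name>
<value>org.gridgain.grid.ggfs.hadoop.v1.GridGgfsHadoopFileSystem</value>
</property>
<!-- AbstractFileSystem (v2) implementation, already present in the question's core-site.xml -->
<property>
<name>fs.AbstractFileSystem.ggfs.impl</name>
<value>org.gridgain.grid.ggfs.hadoop.v2.GridGgfsHadoopFileSystem</value>
</property>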
I am facing a problem starting the Hive web UI. Although the hive-hwi-0.11.0.war file exists under /usr/local/hive-0.11.0/lib/, the same error message always appears when I try to start HWI:
...FATAL hwi.HWIServer: HWI WAR file not found at /usr/local/hive-0.11.0/usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war
It seemed that the $HIVE_HOME path was prepended twice when the .war file was being searched for, regardless of how I set the value of hive.hwi.war.file.
Values that I have tried:
setup 1: ${HIVE_HOME}/lib/hive-hwi-0.11.0.war
setup 2: /usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war
setup 3: lib/hive-hwi-0.11.0.war
BTW, I set up all the Hive configuration in $HIVE_HOME/conf/hive-site.xml. Does anyone have a solution for this issue? Thanks!
Below is my hive-site.xml:
<configuration>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://client2/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.server2.servermode</name>
<value>thrift</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master1</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://client2:9083</value>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>10.19.209.100</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
</configuration>
It appears that you're setting $HIVE_HOME and then passing the full path in hive-site.xml, resulting in the incorrect path that you see in your error output.
Try changing hive-site.xml to pass just the lib location, which gets appended to the already-set $HIVE_HOME path variable, as follows:
<property>
<name>hive.hwi.war.file</name>
<value>/lib/hive-hwi-0.11.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
Then restart Hive and try the WebUI again.
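If it helps, a quick sketch of bringing HWI back up and reaching it, assuming the listen host and port from the question's hive-site.xml (10.19.209.100:9999):
# start (or restart) the Hive Web Interface service
hive --service hwi
# then open the UI in a browser
# http://10.19.209.100:9999/hwi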
Just to add to @apesa's answer: you might need to add two more properties along with what @apesa mentioned.
<property>
<name>hive.hwi.listen.host</name>
<value>0.0.0.0</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
hive.hwi.listen.host and hive.hwi.listen.port are optional only if things are working with the default values.
Hope this helps!