Cannot start Hive Web UI - hadoop

I am facing a problem starting the Hive Web UI. Although the hive-hwi-0.11.0.war file exists under /usr/local/hive-0.11.0/lib/, the same error message always appeared when I tried to start HWI:
...FATAL hwi.HWIServer: HWI WAR file not found at /usr/local/hive-0.11.0/usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war
It seemed that the $HIVE_HOME path was duplicated when the .war file was being looked up, regardless of how I set the value of hive.hwi.war.file.
Values that I have tried:
setup 1: ${HIVE_HOME}/lib/hive-hwi-0.11.0.war
setup 2: /usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war
setup 3: lib/hive-hwi-0.11.0.war
BTW, I put all the Hive configuration in $HIVE_HOME/conf/hive-site.xml. Does anyone have a solution for this issue? Thanks!
Below is my hive-site.xml:
<configuration>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://client2/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.server2.servermode</name>
<value>thrift</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master1</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://client2:9083</value>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>10.19.209.100</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
</configuration>

It appears that you're setting $HIVE_HOME and then also passing the full path in hive-site.xml, which produces the duplicated path you see in your error output.
Try changing hive-site.xml to pass only the lib location, which gets appended to the already-set $HIVE_HOME path, as follows:
<property>
<name>hive.hwi.war.file</name>
<value>/lib/hive-hwi-0.11.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
Then restart Hive and try the WebUI again.
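A quick way to relaunch and check, assuming HWI is started with the standard hive --service hwi command and the listen settings from your hive-site.xml:
hive --service hwi
# then browse to http://10.19.209.100:9999/hwi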

Just to add to @apesa's answer, you might also need to set two more properties:
<property>
<name>hive.hwi.listen.host</name>
<value>0.0.0.0</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
hive.hwi.listen.host and hive.hwi.listen.port are optional only if things already work with the default values.
Hope this helps...!!!
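If HWI appears to start but the page never loads, it may also help to confirm that something is actually listening on the configured port (a generic check, not Hive-specific):
netstat -tlnp | grep 9999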

Related

Getting error while running hive on windows

I am getting the following error when I try to run Hive on Windows; my Hadoop is running fine on Windows:
Connecting to jdbc:hive2://
Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Beeline version 2.1.1 by Apache Hive
Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Connection is already closed.
Here is my hive-site.xml:
<property>
<name>hive.metastore.local</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>hive.metastore.local</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hivepwd</value>
<description>password for connecting to mysql server </description>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
<name>hive.server2.enable.impersonation</name>
<value>true</value>
</property>
<property>
<name>hive.exec.script.wrapper</name>
<value/>
<description/>
</property>
<property>
<name>hive.exec.plan</name>
<value/>
<description/>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>True</value>
</property>
Here is my hive-env.sh:
export HADOOP_HOME=C:\Users\namaagarwal\Desktop\hadoop-2.6.2
export JAVA_HOME=C:\Progra~1\Java\jdk1.8.0_151
HIVE_CONF_DIR=C:\Users\namaagarwal\Desktop\hadoop-2.6.2\hive\apache-hive-2.1.1-bin\conf
The MySQL user is created, and derbyclient.jar and mysql-connector-java-5.0.5.jar are placed in the hive/bin/lib directory. I have also created the schema for my database by running SOURCE C:/Users/namaagarwal/Desktop/hadoop-2.6.2/hive/apache-hive-2.1.1-bin/scripts/metastore/upgrade/mysql/hive-txn-schema-2.0.0.mysql.sql;
I have searched a lot but haven't found a solution for this.
What am I missing??

yarn in docker - __spark_libs__.zip does not exist

I have looked through this StackOverflow post, but it hasn't helped me much.
I am trying to get YARN working on an existing cluster. So far we have been using the Spark standalone manager as our resource allocator, and it has been working as expected.
This is a basic overview of our architecture. Everything in the white boxes runs in Docker containers.
From master-machine I can run the following command from within the YARN ResourceManager container and get a PySpark shell running on YARN: ./pyspark --master yarn --driver-memory 1G --executor-memory 1G --executor-cores 1 --conf "spark.yarn.am.memory=1G"
However, if I try to run the same command from client-machine within the jupyter container I get the following error in the YARN-UI.
Application application_1512999329660_0001 failed 2 times due to AM
Container for appattempt_1512999329660_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://master-machine:5000/proxy/application_1512999329660_0001/Then, click on links to logs of each attempt.
Diagnostics: File file:/sparktmp/spark-58732bb2-f513-4aff-b1f0-27f0a8d79947/__spark_libs__5915104925224729874.zip does not exist
java.io.FileNotFoundException: File file:/sparktmp/spark-58732bb2-f513-4aff-b1f0-27f0a8d79947/__spark_libs__5915104925224729874.zip does not exist
I can find file:/sparktmp/spark-58732bb2-f513-4aff-b1f0-27f0a8d79947/ on the client-machine, but I am unable to find spark-58732bb2-f513-4aff-b1f0-27f0a8d79947 on the master-machine.
As a note, spark-shell works from the client-machine when it points to the standalone spark manager on the master machine.
No logs are printed to the yarn log directories on the worker-machines either.
If I run a spark-submit on spark/examples/src/main/python/pi.py I get the same error as above.
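For reference, the spark-submit invocation meant here is presumably along these lines (a sketch only; the client deploy mode matches spark.conf further down, and the trailing 10 is just the number of partitions for the Pi example):
spark-submit --master yarn --deploy-mode client $SPARK_HOME/examples/src/main/python/pi.py 10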
Here is the yarn-site.xml
<configuration>
<property>
<description>YARN hostname</description>
<name>yarn.resourcemanager.hostname</name>
<value>master-machine</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
<!-- <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler</value> -->
<!-- <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value> -->
</property>
<property>
<description>The address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:5000</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<description>The address of the scheduler interface.</description>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<description>The address of the applications manager interface in the RM.</description>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<description>The address of the RM admin interface.</description>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<description>Set to false, to avoid ip check</description>
<name>hadoop.security.token.service.use_ip</name>
<value>false</value>
</property>
<property>
<name>yarn.scheduler.capacity.maximum-applications</name>
<value>1000</value>
<description>Maximum number of applications in the system which
can be concurrently active both running and pending</description>
</property>
<property>
<description>Whether to use preemption. Note that preemption is experimental
in the current version. Defaults to false.</description>
<name>yarn.scheduler.fair.preemption</name>
<value>true</value>
</property>
<property>
<description>Whether to allow multiple container assignments in one
heartbeat. Defaults to false.</description>
<name>yarn.scheduler.fair.assignmultiple</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
</configuration>
And here is the spark.conf:
# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.
# DRIVER PROPERTIES
spark.driver.port 7011
spark.fileserver.port 7021
spark.broadcast.port 7031
spark.replClassServer.port 7041
spark.akka.threads 6
spark.driver.cores 4
spark.driver.memory 32g
spark.master yarn
spark.deploy.mode client
# DRIVER AND EXECUTORS
spark.blockManager.port 7051
# EXECUTORS
spark.executor.port 7101
# GENERAL
spark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory
spark.port.maxRetries 10
spark.local.dir /sparktmp
spark.scheduler.mode FAIR
# SPARK UI
spark.ui.port 4140
# DYNAMIC ALLOCATION AND SHUFFLE SERVICE
# http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation
spark.dynamicAllocation.enabled false
spark.shuffle.service.enabled false
spark.shuffle.service.port 7061
spark.dynamicAllocation.initialExecutors 5
spark.dynamicAllocation.minExecutors 0
spark.dynamicAllocation.maxExecutors 8
spark.dynamicAllocation.executorIdleTimeout 60s
# LOGGING
spark.executor.logs.rolling.maxRetainedFiles 5
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 100000000
# JMX
# Testing
# spark.driver.extraJavaOptions -Dcom.sun.management.jmxremote.port=8897 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
# Spark Yarn Configs
spark.hadoop.yarn.resourcemanager.address <master-machine IP>:8032
spark.hadoop.yarn.resourcemanager.hostname master-machine
And this shell script is run on all the machines:
# The main ones
export CONDA_DIR=/cluster/conda
export HADOOP_HOME=/usr/hadoop
export SPARK_HOME=/usr/spark
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$SPARK_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$CONDA_DIR/bin:/cluster/libs-python:/cluster/batch
export PYTHONPATH=/cluster/libs-python:$SPARK_HOME/python:$PY4JPATH:$PYTHONPATH
export SPARK_CLASSPATH=/cluster/libs-java/*:/cluster/libs-python:$SPARK_CLASSPATH
# Core spark configuration
export PYSPARK_PYTHON="/cluster/conda/bin/python"
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_PORT=7078
export SPARK_MASTER_WEBUI_PORT=7080
export SPARK_WORKER_WEBUI_PORT=7081
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Duser.timezone=UTC+02:00"
export SPARK_WORKER_DIR="/sparktmp"
export SPARK_WORKER_CORES=22
export SPARK_WORKER_MEMORY=43G
export SPARK_DAEMON_MEMORY=1G
export SPARK_WORKER_INSTANCEs=1
export SPARK_EXECUTOR_INSTANCES=2
export SPARK_EXECUTOR_MEMORY=4G
export SPARK_EXECUTOR_CORES=2
export SPARK_LOCAL_IP=$(hostname -I | cut -f1 -d " ")
export SPARK_PUBLIC_DNS=$(hostname -I | cut -f1 -d " ")
export SPARK_MASTER_OPTS="-Duser.timezone=UTC+02:00"
This is the hdfs-site.xml on the master-machine (namenode):
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hdfs</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/hdfs/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.replication.max</name>
<value>3</value>
</property>
<property>
<name>dfs.replication.min</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions.superusergroup</name>
<value>supergroup</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>true</value>
</property>
<property>
<name>fs.permissions.umask-mode</name>
<value>002</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<!-- 1000Mbit/s -->
<name>dfs.balance.bandwidthPerSec</name>
<value>125000000</value>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/cluster/config/hadoopconf/namenode/dfs.hosts.exclude</value>
<final>true</final>
</property>
<property>
<name>dfs.namenode.replication.work.multiplier.per.iteration</name>
<value>10</value>
</property>
<property>
<name>dfs.namenode.replication.max-streams</name>
<value>50</value>
</property>
<property>
<name>dfs.namenode.replication.max-streams-hard-limit</name>
<value>100</value>
</property>
</configuration>
And this is the hdfs-site.xml on the worker-machines (data-node):
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hdfs,/hdfs2,/hdfs3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/hdfs/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.replication.max</name>
<value>3</value>
</property>
<property>
<name>dfs.replication.min</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions.superusergroup</name>
<value>supergroup</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>true</value>
</property>
<property>
<name>fs.permissions.umask-mode</name>
<value>002</value>
</property>
<property>
<!-- 1000Mbit/s -->
<name>dfs.balance.bandwidthPerSec</name>
<value>125000000</value>
</property>
</configuration>
This is the core-site.xml on the worker-machines (datanodes)
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master-machine:54310/</value>
</property>
</configuration>
This is the core-site.xml on the master-machine (name node):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master-machine:54310/</value>
</property>
</configuration>
After a lot of debugging, I was able to identify that for some reason the jupyter container was not looking in the correct Hadoop conf directory, even though the HADOOP_HOME environment variable was pointing to the correct location. All I had to do to resolve the problem was to point HADOOP_CONF_DIR at the correct directory, and everything started working again.
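A minimal sketch of that fix, assuming the cluster configuration files live under $HADOOP_HOME/etc/hadoop inside the jupyter container (the exact directory is an assumption; point it at wherever your core-site.xml and yarn-site.xml actually are):
# set in the jupyter container before launching pyspark / spark-submit
export HADOOP_CONF_DIR=/usr/hadoop/etc/hadoop
./pyspark --master yarn --driver-memory 1G --executor-memory 1G --executor-cores 1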

ConnectException: connect error: No such file or directory when trying to connect to '50010' using importtsv on hbase

I configured short-circuit settings in both hdfs-site.xml and hbase-site.xml, and I ran importtsv on HBase to import data from HDFS into HBase on the HBase cluster. Looking at the log on each datanode, every datanode shows the ConnectException mentioned in the title.
2017-03-31 21:59:01,273 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: error creating DomainSocket
java.net.ConnectException: connect(2) error: No such file or directory when trying to connect to '50010'
at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:250)
at org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:164)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:753)
at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:469)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:783)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:717)
at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:421)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:332)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:617)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:841)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:889)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVIntInRange(WritableUtils.java:348)
at org.apache.hadoop.io.Text.readString(Text.java:471)
at org.apache.hadoop.io.Text.readString(Text.java:464)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:751)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-03-31 21:59:01,277 WARN [main] org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache: ShortCircuitCache(0x34f7234e): failed to load 1073750370_BP-642933002-"IP_ADDRESS"-1490774107737
EDIT
hadoop 2.6.4
hbase 1.2.3
hdfs-site.xml
<property>
<name>dfs.namenode.dir</name>
<value>/home/hadoop/hdfs/nn</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/home/hadoop/hdfs/snn</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoop/hdfs/dn</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop1:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:50090</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>hadoop1:8020</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>50</value>
</property>
<property>
<name>dfs.datanode.handler.count</name>
<value>50</value>
</property>
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>dfs.block.local-path-access.user</name>
<value>hbase</value>
</property>
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>775</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>_PORT</value>
</property>
<property>
<name>dfs.client.domain.socket.traffic</name>
<value>true</value>
</property>
hbase-site.xml
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop1/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop1,hadoop2,hadoop3,hadoop4,hadoop5,hadoop6,hadoop7,hadoop8</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>50</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.5</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.size</name>
<value>0.3</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.size.lower.limit</name>
<value>0.65</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>_PORT</value>
</property>
Short-circuit reads make use of a UNIX domain socket. This is a special path in the filesystem that allows the client and the DataNode to communicate. You will need to set a path (not a port) for this socket. The DataNode must be able to create this path.
The parent directory of the path value (for example, /var/lib/hadoop-hdfs/) must exist and should be owned by the Hadoop superuser. Also make sure that no user other than the HDFS user or root has access to this path.
mkdir /var/lib/hadoop-hdfs/
chown hdfs_user:hdfs_user /var/lib/hadoop-hdfs/
chmod 750 /var/lib/hadoop-hdfs/
Add this property to hdfs-site.xml on all datanodes and clients.
<property>
<name>dfs.domain.socket.path</name>
<value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
Restart the services after making the changes.
Note: Paths under /var/run or /var/lib are commonly used.
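After the restart, a quick sanity check is to confirm that the DataNode actually created the socket at the configured path; it should show up as a socket file owned by the HDFS user:
ls -l /var/lib/hadoop-hdfs/dn_socket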

two name nodes are stand by after configuring HA

I have configured high availability in my cluster,
which consists of three nodes:
hadoop-master(192.168.4.128)(name node)
hadoop-slave-1(192.168.4.111) (another name node )
hadoop-slave-2 (192.168.4.106) (data node)
without formatting the name node (converting a non-HA-enabled cluster to an HA-enabled one), as described here:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
but I got two name nodes both working as standby,
so I tried to transition one of these two nodes to active by running the following command:
hdfs haadmin -transitionToActive mycluster --forcemanual
with the following output:
17/04/03 08:07:35 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop-master/192.168.4.128:8020
17/04/03 08:07:36 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop-slave-1/192.168.4.111:8020
Illegal argument: Unable to determine service address for namenode 'mycluster'
My core-site.xml is:
<property>
<name>dfs.tmp.dir</name>
<value>/opt/hadoop/data15</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-master:8020</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/journal/node/local/data</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp</value>
</property>
My hdfs-site.xml is:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/opt/hadoop/data16</value>
<final>true</final>
</property>
<property>
<name>dfs.data.dir</name>
<value>/opt/hadoop/data17</value>
<final>true</final>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-slave-1:50090</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
<final>true</final>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>hadoop-master,hadoop-slave-1</value>
<final>true</final>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.hadoop-master</name>
<value>hadoop-master:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.hadoop-slave-1</name>
<value>hadoop-slave-1:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.hadoop-master</name>
<value>hadoop-master:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.hadoop-slave-1</name>
<value>hadoop-slave-1:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop-master:8485;hadoop-slave-2:8485;hadoop-slave-1:8485/mycluster</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop-master:2181,hadoop-slave-1:2181,hadoop-slave-2:2181</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>3000</value>
</property>
What should the service address value be? And what solutions can I apply in order to turn one of the two name nodes to the active state?
Note: the ZooKeeper server on all three nodes is stopped.
I met the same issue, and it turned out that I hadn't formatted ZooKeeper or started ZKFC.
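For reference, setting up automatic failover normally involves initializing the HA state in ZooKeeper and starting the ZKFC daemon on each NameNode; a hedged sketch using the standard Hadoop 2.x commands, run with ZooKeeper up on all three nodes:
# on one NameNode host: create the HA znode in ZooKeeper
hdfs zkfc -formatZK
# on each NameNode host: start the ZKFailoverController
$HADOOP_HOME/sbin/hadoop-daemon.sh start zkfc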

distcp between nameservice1 and nameservice2

We have CDH 5.2 with Cloudera Manager 5.
We want to copy data from nameservice2 to nameservice1.
Both clusters are on the same CDH version.
When I tried hadoop distcp hdfs://nameservice2/foo/bar hdfs://nameservice1/bar/foo
I got this error:
java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice2
So I added the following nameservice2 config to nameservice1's client configuration, via the
HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml in Cloudera Manager (Gateway Default Group):
<property>
<name>dfs.nameservices</name>
<value>nameservices2</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.nameservices2</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservices2</name>
<value>namenode36,namenode405</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.cc:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50470</value>
</property>
But I am still getting the same error.
Any workaround for this?
Thanks
In HA-enabled HDFS, nameservice1 and nameservice2 are logical names; you cannot use a port together with a logical name.
You have two options.
The easy method is to find the active namenodes and use active-namenode:port in the distcp command, as follows. The NameNode web UI can be used to find the active namenode of each cluster.
hadoop distcp hdfs://hnn001.prod.cc:8020/foo/bar hdfs://<dest-cluster-active-nn-hostname>:8020/bar/foo
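If you prefer the command line to the web UI for finding the active NameNode, hdfs haadmin can report the state of each namenode ID (a sketch using the IDs from the question; run it with each cluster's client configuration in place):
hdfs haadmin -getServiceState namenode36
hdfs haadmin -getServiceState namenode405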
The other method is to use the logical names of the two clusters, as follows. But before trying the command below, make sure you have properly configured nameservice1 and nameservice2 in your client hdfs-site.xml.
hadoop distcp hdfs://nameservice2/foo/bar hdfs://nameservice1/bar/foo
Configuring the remote cluster's nameservice in the local cluster:
It looks like nameservice2 is your local cluster and nameservice1 is your remote one. You need to keep all the associated properties of both nameservice1 and nameservice2 in the local cluster, i.e. your local cluster's client hdfs-site.xml should look as follows.
<configuration>
<!-- Available nameservices -->
<property>
<name>dfs.nameservices</name>
<value>nameservices1,nameservices2</value>
</property>
<!-- Local nameservice2 properties -->
<property>
<name>dfs.client.failover.proxy.provider.nameservices2</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservices2</name>
<value>namenode36,namenode405</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.cc:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode36</name>
<value>hnn001.prod.com:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices2.namenode405</name>
<value>hnn002.prod.com:50470</value>
</property>
<!-- Remote nameservice1 properties -->
<!-- You can find these properties in the remote machine's hdfs-site.xml file -->
<property>
<name>dfs.client.failover.proxy.provider.nameservices1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservices1</name>
<value>namenodeXX,namenodeYY</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices1.namenodeXX</name>
<value><Remote-nn1>:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.nameservices1.namenodeYY</name>
<value><Remote-nn2>:50470</value>
</property>
<!-- Other properties -->
</configuration>
In the above configuration, replace all placeholders such as XX, YY, <Remote-nn1>, and <Remote-nn2> with the corresponding values from the remote cluster's hdfs-site.xml.
