HDFS NFS startup error: "ERROR mount.MountdBase: Failed to start the TCP server... ChannelException: Failed to bind..."

Attempting to start/use HDFS NFS following the docs (skipping the instructions to stop the rpcbind service and not starting the hadoop portmap service, since the OS is neither SLES 11 nor RHEL 6.2), but running into an error when starting the hdfs nfs3 service:
[root@HW02 ~]#
[root@HW02 ~]#
[root@HW02 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
[root@HW02 ~]#
[root@HW02 ~]#
[root@HW02 ~]# service nfs status
Redirecting to /bin/systemctl status nfs.service
Unit nfs.service could not be found.
[root@HW02 ~]#
[root@HW02 ~]#
[root@HW02 ~]# service nfs stop
Redirecting to /bin/systemctl stop nfs.service
Failed to stop nfs.service: Unit nfs.service not loaded.
[root@HW02 ~]#
[root@HW02 ~]#
[root@HW02 ~]# service rpcbind status
Redirecting to /bin/systemctl status rpcbind.service
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-07-23 13:48:54 HST; 28s ago
Process: 27337 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
Main PID: 27338 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─27338 /sbin/rpcbind -w
Jul 23 13:48:54 HW02.ucera.local systemd[1]: Starting RPC bind service...
Jul 23 13:48:54 HW02.ucera.local systemd[1]: Started RPC bind service.
[root@HW02 ~]#
[root@HW02 ~]#
[root@HW02 ~]# hdfs nfs3
19/07/23 13:49:33 INFO nfs3.Nfs3Base: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting Nfs3
STARTUP_MSG: host = HW02.ucera.local/172.18.4.47
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1.3.1.0.0-78
STARTUP_MSG: classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:
...
<a bunch of other jars>
...
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T12:26Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
19/07/23 13:49:33 INFO nfs3.Nfs3Base: registered UNIX signal handlers for [TERM, HUP, INT]
19/07/23 13:49:33 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
19/07/23 13:49:33 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/07/23 13:49:33 INFO impl.MetricsSystemImpl: Nfs3 metrics system started
19/07/23 13:49:33 INFO oncrpc.RpcProgram: Will accept client connections from unprivileged ports
19/07/23 13:49:33 INFO security.ShellBasedIdMapping: Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.
19/07/23 13:49:33 INFO nfs3.WriteManager: Stream timeout is 600000ms.
19/07/23 13:49:33 INFO nfs3.WriteManager: Maximum open streams is 256
19/07/23 13:49:33 INFO nfs3.OpenFileCtxCache: Maximum open streams is 256
19/07/23 13:49:34 INFO nfs3.DFSClientCache: Added export: / FileSystem URI: / with namenodeId: -1408097406
19/07/23 13:49:34 INFO nfs3.RpcProgramNfs3: Configured HDFS superuser is
19/07/23 13:49:34 INFO nfs3.RpcProgramNfs3: Delete current dump directory /tmp/.hdfs-nfs
19/07/23 13:49:34 INFO nfs3.RpcProgramNfs3: Create new dump directory /tmp/.hdfs-nfs
19/07/23 13:49:34 INFO nfs3.Nfs3Base: NFS server port set to: 2049
19/07/23 13:49:34 INFO oncrpc.RpcProgram: Will accept client connections from unprivileged ports
19/07/23 13:49:34 INFO mount.RpcProgramMountd: FS:hdfs adding export Path:/ with URI: hdfs://hw01.ucera.local:8020/
19/07/23 13:49:34 INFO oncrpc.SimpleUdpServer: Started listening to UDP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
19/07/23 13:49:34 ERROR mount.MountdBase: Failed to start the TCP server.
org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:4242
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.apache.hadoop.oncrpc.SimpleTcpServer.run(SimpleTcpServer.java:89)
at org.apache.hadoop.mount.MountdBase.startTCPServer(MountdBase.java:83)
at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:98)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:79)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
...
...
19/07/23 13:49:34 INFO util.ExitUtil: Exiting with status 1: org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:4242
19/07/23 13:49:34 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down Nfs3 at HW02.ucera.local/172.18.4.47
************************************************************/
Not sure how to interpret any of the errors seen here (and I have not installed any packages like nfs-utils, assuming that Ambari would have installed all the needed packages when the cluster was initially installed).
Any debugging suggestions or solutions for what to do about this?
** UPDATE:
After looking at the error, I can see
Caused by: java.net.BindException: Address already in use
and looking into what is already using it, we see...
[root@HW02 ~]# netstat -ltnp | grep 4242
tcp 0 0 0.0.0.0:4242 0.0.0.0:* LISTEN 98067/jsvc.exec
The process jsvc.exec appears to be related to running Java applications. Given that Hadoop runs on Java, I assume it would be bad to just kill the process. Is it not supposed to be on this port (since it interferes with the NFS Gateway)? Not sure what to do about this.

TLDR: the NFS gateway service was already running (by default, apparently), and the process that I thought was blocking the hadoop nfs3 service from starting (jsvc.exec) was (I'm assuming) part of that already-running service.
What made me suspect this was that the service also stopped when I shut down the cluster, plus the fact that it was using the port I needed for NFS. I confirmed it by following the verification steps in the docs and seeing that my output was similar to what should be expected.
[root@HW02 ~]# rpcinfo -p hw02
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 4242 mountd
100005 2 udp 4242 mountd
100005 3 udp 4242 mountd
100005 1 tcp 4242 mountd
100005 2 tcp 4242 mountd
100005 3 tcp 4242 mountd
100003 3 tcp 2049 nfs
[root@HW02 ~]# showmount -e hw02
Export list for hw02:
/ *
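One more sanity check that can be done at this point (not from the docs I followed, just a generic NFS check; it assumes a scratch mount point like /mnt/hdfs-nfs and the usual HDFS NFS gateway mount options) is to actually mount the export and list it:
mkdir -p /mnt/hdfs-nfs
mount -t nfs -o vers=3,proto=tcp,nolock hw02:/ /mnt/hdfs-nfs   # mount the gateway's / export
ls /mnt/hdfs-nfs                                               # should show the HDFS root directories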
Another thing that could have told me that the jsvc process was part of an already running HDFS NFS service would have been checking the process info...
[root@HW02 ~]# ps -feww | grep jsvc
root 61106 59083 0 14:27 pts/2 00:00:00 grep --color=auto jsvc
root 163179 1 0 12:14 ? 00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/hadoop-hdfs-root-nfs3-HW02.ucera.local.out -errfile /var/log/hadoop/root/privileged-root-nfs3-HW02.ucera.local.err -pidfile /var/run/hadoop/root/hadoop-hdfs-root-nfs3.pid -nodetach -user hdfs -cp /usr/hdp/3.1.0.0-78/hadoop/conf:...
...
hdfs 163193 163179 0 12:14 ? 00:00:17 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/hadoop-hdfs-root-nfs3-HW02.ucera.local.out -errfile /var/log/hadoop/root/privileged-root-nfs3-HW02.ucera.local.err -pidfile /var/run/hadoop/root/hadoop-hdfs-root-nfs3.pid -nodetach -user hdfs -cp /usr/hdp/3.1.0.0-78/hadoop/conf:...
Seeing jsvc.exec -Dproc_nfs3 ... was the hint that jsvc (which apparently is used for running Java apps as services on Linux) was already running the very nfs3 service I was trying to start.
And for anyone else with this problem, note that I did not stop all the services that the docs want you to stop (since this is CentOS 7):
[root@HW01 /]# service nfs status
Redirecting to /bin/systemctl status nfs.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
[root@HW01 /]# service rpcbind status
Redirecting to /bin/systemctl status rpcbind.service
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-07-19 15:17:02 HST; 6 days ago
Main PID: 2155 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─2155 /sbin/rpcbind -w
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
Also note that I did not apply any of the config file settings recommended in the docs; some of the properties mentioned in the docs could not even be found in the Ambari-managed HDFS configs (so if anyone can explain why this still works for me despite that, please do).
** Update:
After talking with some people more experienced with HDP (v3.1) than me, it seems the docs that I linked to for setting up NFS for HDFS may not be totally up to date (at least when setting up NFS via Ambari management)...
You can have a cluster node act as an NFS gateway by checking it off as an NFS Gateway component in the Ambari host management UI.
The needed configs can then be set in the HDFS management UI.
You can confirm that the HDFS NFS gateway is running by looking at the Host > Summary > Components section in Ambari.
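For reference, the gateway-related settings can also be read back from the client configuration once Ambari has pushed it out (a small sketch; nfs.dump.dir and nfs.exports.allowed.hosts are the standard HDFS NFS gateway property names, and I have not confirmed what values this particular cluster uses):
hdfs getconf -confKey nfs.dump.dir              # scratch directory the gateway uses to reorder writes
hdfs getconf -confKey nfs.exports.allowed.hosts # which hosts may mount the export, and with what access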

Related

YARN complains java.net.NoRouteToHostException: No route to host (Host unreachable)

Attempting to run h2o on an HDP 3.1 cluster and running into an error that appears to be about YARN resource capacity...
[ml1user@HW04 h2o-3.26.0.1-hdp3.1]$ hadoop jar h2odriver.jar -nodes 3 -mapperXmx 10g
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 192.168.122.1]
[Possible callback IP address: 172.18.4.49]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 172.18.4.49:46015
(You can override these with -driverif and -driverport/-driverportrange and/or specify external IP using -extdriverif.)
Memory Settings:
mapreduce.map.java.opts: -Xms10g -Xmx10g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent: 10
mapreduce.map.memory.mb: 11264
Hive driver not present, not generating token.
19/07/25 14:48:05 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/07/25 14:48:06 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
19/07/25 14:48:07 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/ml1user/.staging/job_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: number of splits:3
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.JobSubmitter: Executing with tokens: []
19/07/25 14:48:08 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
19/07/25 14:48:08 INFO impl.YarnClientImpl: Submitted application application_1564020515809_0006
19/07/25 14:48:08 INFO mapreduce.Job: The url to track the job: http://HW01.ucera.local:8088/proxy/application_1564020515809_0006/
Job name 'H2O_47159' submitted
JobTracker job ID is 'job_1564020515809_0006'
For YARN users, logs command is 'yarn logs -applicationId application_1564020515809_0006'
Waiting for H2O cluster to come up...
ERROR: Timed out waiting for H2O cluster to come up (120 seconds)
ERROR: (Try specifying the -timeout option to increase the waiting time limit)
Attempting to clean up hadoop job...
19/07/25 14:50:19 INFO impl.YarnClientImpl: Killed application application_1564020515809_0006
Killed.
19/07/25 14:50:23 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/07/25 14:50:23 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
----- YARN cluster metrics -----
Number of YARN worker nodes: 3
----- Nodes -----
Node: http://HW03.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW04.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW02.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
----- Queues -----
Queue name: default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 1.00
Maximum capacity: 1.00
Application count: 0
Queue 'default' approximate utilization: 0.0 / 45.0 GB used, 0 / 9 vcores used
----------------------------------------------------------------------
ERROR: Unable to start any H2O nodes; please contact your YARN administrator.
A common cause for this is the requested container size (11.0 GB)
exceeds the following YARN settings:
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1564020515809_0006'
Looking in the YARN configs in the Ambari UI, these properties are nowhere to be found. But checking the YARN logs in the YARN ResourceManager UI and looking at some of the logs for the killed application, I see what appear to be unreachable-host errors...
Container: container_e05_1564020515809_0006_02_000002 on HW03.ucera.local_45454_1564102219781
LogAggregationType: AGGREGATED
=============================================================================================
LogType:stderr
LogLastModifiedTime:Thu Jul 25 14:50:19 -1000 2019
LogLength:2203
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/11/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/usercache/ml1user/appcache/application_1564020515809_0006/filecache/10/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.YarnChild).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
java.net.NoRouteToHostException: No route to host (Host unreachable)
at java.net.PlainSocketImpl.socketConnect(Native Method)
....
at java.net.Socket.<init>(Socket.java:211)
at water.hadoop.EmbeddedH2OConfig$BackgroundWriterThread.run(EmbeddedH2OConfig.java:38)
End of LogType:stderr
***********************************************************************
Taking note of "java.net.NoRouteToHostException: No route to host (Host unreachable)". However, I can access all the other nodes from each other and they can all ping each other, so not sure what is going on here. Any suggestions for debugging or fixing?
Think I found the problem, TLDR: firewalld (nodes running on centos7) was still running, when should be disabled on HDP clusters.
From another community post:
For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable iptables, as follows:
systemctl disable firewalld
service firewalld stop
So apparently iptables and firewalld need to be disabled across the cluster (supporting docs can be found here; I had only disabled them on the Ambari installation node). After stopping these services across the cluster (I recommend using clush), I was able to run the YARN job without incident.
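For anyone doing the same, a minimal sketch of the clush commands (this assumes clustershell is installed and that the hosts are HW01 through HW04; adjust the node set to your own cluster):
clush -w HW0[1-4] systemctl stop firewalld      # stop firewalld on every node
clush -w HW0[1-4] systemctl disable firewalld   # keep it from coming back on reboot
clush -w HW0[1-4] systemctl is-active firewalld # verify: every node should report 'inactive'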
Normally, this problem is either due to bad DNS configuration, firewalls, or network unreachability. To quote this official doc:
The hostname of the remote machine is wrong in the configuration files
The client's host table /etc/hosts has an invalid IPAddress for the target host.
The DNS server's host table has an invalid IPAddress for the target host.
The client's routing tables (In Linux, iptables) are wrong.
The DHCP server is publishing bad routing information.
Client and server are on different subnets, and are not set up to talk to each other. This may be an accident, or it is to deliberately lock down the Hadoop cluster.
The machines are trying to communicate using IPv6. Hadoop does not currently support IPv6
The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions). The quick solution: restart the JVMs
For me, the problem was that the driver was inside a Docker container, which made it impossible for the workers to send data back to it. In other words, the workers and the driver were not in the same subnet. The solution, as given in this answer, was to set the following configurations:
spark.driver.host=<container's host IP accessible by the workers>
spark.driver.bindAddress=0.0.0.0
spark.driver.port=<forwarded port 1>
spark.driver.blockManager.port=<forwarded port 2>
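For completeness, a sketch of passing the same settings on the command line instead of hard-coding them (the host IP and port values are the same placeholders as above, and my_app.py is just a stand-in for whatever application is being submitted):
spark-submit \
  --conf spark.driver.host=<container's host IP accessible by the workers> \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --conf spark.driver.port=<forwarded port 1> \
  --conf spark.driver.blockManager.port=<forwarded port 2> \
  my_app.py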

Spark Cluster starting issue

I'm new to Spark and trying to set up a Spark cluster. I did the following things to set up and check the status of the cluster, but I am not sure about its status.
I tried to check master-ip:8081 (8080, 4040, 4041) in the browser, but didn't see any results. To start with, I set up and started the Hadoop cluster.
JPS gives:
2436 SecondaryNameNode
2708 NodeManager
2151 NameNode
5495 Master
2252 DataNode
2606 ResourceManager
5710 Jps
Question: was it necessary to start Hadoop?
On the master, /usr/local/spark/conf/slaves contains:
localhost
slave-node-1
slave-node-2
Now, to start Spark, the master was started with
$SPARK_HOME/sbin/start-master.sh
And tested with
ps -ef|grep spark
hduser 5495 1 0 18:12 pts/0 00:00:04 /usr/local/java/bin/java -cp /usr/local/spark/conf/:/usr/local/spark/jars/*:/usr/local/hadoop/etc/hadoop/ -Xmx1g org.apache.spark.deploy.master.Master --host master-hostname --port 7077 --webui-port 8080
On slave node 1
$SPARK_HOME/sbin/start-slave.sh spark://205.147.102.19:7077
Tested with
ps -ef|grep spark
hduser 1847 1 20 18:24 pts/0 00:00:04 /usr/local/java/bin/java -cp /usr/local/spark/conf/:/usr/local/spark/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master-ip:7077
Same on slave-node 2
$SPARK_HOME/sbin/start-slave.sh spark://master-ip:7077
ps -ef|grep spark
hduser 1948 1 3 18:18 pts/0 00:00:03 /usr/local/java/bin/java -cp /usr/local/spark/conf/:/usr/local/spark/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master-ip:7077
I was not able to see anything on the Spark web console, so I thought the problem may be with the firewall. Here is my iptables output:
iptables -L -nv
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
6136 587K fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22
151K 25M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
6 280 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
579 34740 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
34860 2856K ACCEPT all -- eth1 * 0.0.0.0/0 0.0.0.0/0
145 7608 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
56156 5994K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8081
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT 3531 packets, 464K bytes)
pkts bytes target prot opt in out source destination
Chain fail2ban-ssh (1 references)
pkts bytes target prot opt in out source destination
2 120 REJECT all -- * * 218.87.109.153 0.0.0.0/0 reject-with icmp-port-unreachable
5794 554K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
I'm trying all I can to see whether the Spark cluster is set up and how to check it properly. And if the cluster is set up, why am I not able to see that on the web console? What could be wrong? Any pointers would be helpful...
EDIT - ADDING LOGS after spark-shell --master local command (in the master)
17/01/11 18:12:46 INFO util.Utils: Successfully started service 'sparkMaster' on port 7077.
17/01/11 18:12:47 INFO master.Master: Starting Spark master at spark://master:7077
17/01/11 18:12:47 INFO master.Master: Running Spark version 2.1.0
17/01/11 18:12:47 INFO util.log: Logging initialized #3326ms
17/01/11 18:12:47 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#20f0b5ff{/app,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#734e74b2{/app/json,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#1bc45d76{/,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#6a274a23{/json,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#4f5d45d5{/static,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#4fb65368{/app/kill,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#76208805{/driver/kill,null,AVAILABLE}
17/01/11 18:12:47 INFO server.ServerConnector: Started ServerConnector#258dbadd{HTTP/1.1}{0.0.0.0:8080}
17/01/11 18:12:47 INFO server.Server: Started #3580ms
17/01/11 18:12:47 INFO util.Utils: Successfully started service 'MasterUI' on port 8080.
17/01/11 18:12:47 INFO ui.MasterWebUI: Bound MasterWebUI to 0.0.0.0, and started at http://master:8080
17/01/11 18:12:47 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#1cfbb7e9{/,null,AVAILABLE}
17/01/11 18:12:47 INFO server.ServerConnector: Started ServerConnector#2f7af4e{HTTP/1.1}{master:6066}
17/01/11 18:12:47 INFO server.Server: Started #3628ms
17/01/11 18:12:47 INFO util.Utils: Successfully started service on port 6066.
17/01/11 18:12:47 INFO rest.StandaloneRestServer: Started REST server for submitting applications on port 6066
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#799d5f4f{/metrics/master/json,null,AVAILABLE}
17/01/11 18:12:47 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#647c46e3{/metrics/applications/json,null,AVAILABLE}
17/01/11 18:12:47 INFO master.Master: I have been elected leader! New state: ALIVE
In slave nodes-
17/01/11 18:22:46 INFO Worker: Connecting to master master:7077...
17/01/11 18:22:46 WARN Worker: Failed to connect to master master:7077
Tonnes of java errors..
17/01/11 18:31:18 ERROR Worker: All masters are unresponsive! Giving up.
The Spark Web UI starts when you create a SparkContext.
Try running spark-shell --master spark://yourmaster:7077 and then open the Spark UI. You can also use spark-submit to submit an application; a SparkContext will then be created.
Example spark-submit, from Spark documentation:
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://207.184.161.138:7077 \
--deploy-mode cluster \
--supervise \
--executor-memory 20G \
--total-executor-cores 100 \
/path/to/examples.jar \
1000
Answer to the first question: you must start the Hadoop components if you want to use HDFS or YARN. If not, they don't need to be started.
You can also go to /etc/hosts and remove the line with 127.0.0.1, or set the SPARK_MASTER_IP variable in the Spark configuration to the proper host name.
The problem was iptables; most other things were fine. So I just followed the instructions here https://wiki.debian.org/iptables to fix iptables, and it worked for me. The only thing you need to know is which ports will be used by Spark/Hadoop etc. I opened 8080, 54310, 50070 and 7077 (some of the defaults used by many Hadoop and Spark installations)...
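For reference, a minimal sketch of the kind of rules I mean (the exact ports depend on your installation, and how you persist the rules differs between distros; -I puts the rules ahead of the existing REJECT rule):
iptables -I INPUT -p tcp --dport 7077 -j ACCEPT    # Spark master RPC
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT    # Spark master web UI
iptables -I INPUT -p tcp --dport 50070 -j ACCEPT   # HDFS NameNode web UI
iptables -I INPUT -p tcp --dport 54310 -j ACCEPT   # HDFS NameNode RPC (as configured here)
iptables -L INPUT -nv | grep -E '7077|8080|50070|54310'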

Unable to access Cloudera Manager 5 web console after installation

I am setting up a Hadoop cluster (2.6) on CentOS 7 machines with three nodes; the cluster is running fine now. However, I am not able to access the Cloudera Manager (5.6) web console after completing the CM installation, though its service seems to be running.
Below are my findings; please help me understand what the possible reasons could be:
All processes are up and running!
[root@vm-txxxxxx1 ~]# jps
27978 ResourceManager
15368 Main
27052 Jps
27400 DataNode
27639 SecondaryNameNode
28106 NodeManager
27258 NameNode
Firewall stopped
[root@vm-txxxxx1 ~]# service iptables stop
Redirecting to /bin/systemctl stop iptables.service
[root@vm-txxxxxx1 ~]# service iptabes status
Redirecting to /bin/systemctl status iptabes.service
iptabes.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
Mar 24 19:24:05 vm-txxxxx1 systemd[1]: Stopped IPv4 firewall with iptables.
Listening on port 7180 and tested the same locally
[root@vm-txxxxxx1 ~]# netstat -tulpn | grep 7180
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN 15368/java
[root@vm-txxxxx1 ~]# telnet localhost 7180
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SELINUX Disabled:
[root@vm-txxxxxx1 ~]# getenforce
Disabled
Hostfile entries
[root@vm-txxxxxx1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
172.16.xx.x1 vm-txxxxxx1
172.16.xx.x2 vm-xxxxxxx2
172.16.xx.x4 del1-vm-poc04
Verify if Cloudera Manager is running:
[root@vm-txxxxxx1 ~]# service cloudera-scm-server status
cloudera-scm-server.service - LSB: Cloudera SCM Server
Loaded: loaded (/etc/rc.d/init.d/cloudera-scm-server)
Active: active (exited) since Tue 2016-03-22 17:09:55 IST; 2 days ago
Process: 15344 ExecStart=/etc/rc.d/init.d/cloudera-scm-server start (code=exited, status=0/SUCCESS)
Mar 22 17:09:50 vm-txxxxxx1 systemd[1]: Starting LSB: Cloudera SCM Server...
Mar 22 17:09:50 vm-txxxxx1 su[15366]: (to cloudera-scm) root on none
Mar 22 17:09:55 vm-txxxxxx1 cloudera-scm-server[15344]: Starting cloudera-scm-server:...]
Mar 22 17:09:55 vm-txxxxxx1 systemd[1]: Started LSB: Cloudera SCM Server.
Hint: Some lines were ellipsized, use -l to show in full.
Below are some lines from the Cloudera Manager server logs:
[root@vm-txxxxx1 ~]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
2016-03-24 18:21:00,398 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Reaped total of 0 deleted commands
2016-03-24 18:21:00,400 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Found no commands older than 2014-03-25T12:51:00.399Z to reap.
2016-03-24 18:21:00,400 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Wizard is active, not reaping scanners or configurators
I am accessing the Cloudera Manager page http://172.16.xx.1x:7180
At the end it says "The connection has timed out"; it looks like my HTTP request is not able to reach the server, which is why nothing comes up in the logs. Please suggest if I am missing something.
Thanks in advance!
@Havnar: Thanks for the suggestion, I am confirming SSL is not enabled now
and sharing the curl result.
[root@vm-txxxx1 ~]# curl -i -u 'admin:admin' http://localhost:7180/api/v1/tools/echo
HTTP/1.1 200 OK
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: CLOUDERA_MANAGER_SESSIONID=1etaj5o42vprlndf43ua7rbaf;Path=/;HttpOnly
Content-Type: application/json
Date: Fri, 25 Mar 2016 05:50:36 GMT
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.4)
{
"message" : "Hello, World!"
I tried stopping and restarting the Cloudera service and found nothing suspicious; there was one warning that looks a little bit suspicious, but searching Google for it turned up nothing that looks relevant.
[root@vm-txxxxx1 ~]# vi /var/log/cloudera-scm-server/cloudera-scm-server.log
2016-03-24 20:22:29,002 WARN main:org.hibernate.cache.ehcache.AbstractEhcacheRegionFactory: HHH020003: Could not find a specific ehcache configuration for cache named [org.hibernate.cache.internal.StandardQueryCache]; using defaults.
2016-03-24 20:22:28,581 INFO main:org.hibernate.engine.jdbc.internal.LobCreatorBuilder: HHH000424: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException
@Havnar: I didn't get what you meant by "try a cat on the machine running the CM"; let me know if anything else needs to be checked.
Thanks

Pig keeps trying to connect to job history server (and fails)

I'm running a Pig job that fails to connect to the Hadoop job history server.
The task (usually any task with GROUP BY) runs for a while and then starts emitting messages like:
2015-04-21 19:05:22,825 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2015-04-21 19:05:26,721 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-04-21 19:05:29,721 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
It then continues retrying the connection for a while. Sometimes it proceeds further with the job. Other times it throws this exception:
2015-04-21 19:05:55,822 [main] WARN org.apache.pig.tools.pigstats.mapreduce.MRJobStats - Unable to get job counters
java.io.IOException: java.io.IOException: java.net.NoRouteToHostException: No Route to Host from cluster-01/10.10.10.11 to 0.0.0.0:10020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getCounters(HadoopShims.java:132)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addCounters(MRJobStats.java:284)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:235)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:165)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:360)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
I found this question here, but in my case the job history server is started. If I run netstat, I find:
tcp 0 0 0.0.0.0:10020 0.0.0.0:* LISTEN 12073/java off (0.00/0/0)
Where 12073 is ...
12073 pts/4 Sl 0:07 /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Dproc_historyserver -Xmx1000m -Djava.library.path=/data/hadoop/hadoop/lib -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/data/hadoop/hadoop-2.3.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/data/hadoop/hadoop-2.3.0 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/data/hadoop/hadoop/logs -Dhadoop.log.file=mapred-hadoop-historyserver-cluster-01.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
I tried opening port 10020 in case it was a firewall issue:
ACCEPT tcp -- anywhere anywhere tcp dpt:10020
... but no luck.
After a few minutes, some of the tasks just arbitrarily continue to the next part.
I'm using Hadoop 2.3 and Pig 0.14.
My question is:
1) What are the possible reasons why Pig cannot connect to the job history server (JHS), given that the JHS is running on the very port that Pig looks for it on?
... or failing that ...
2) Is there any way to just tell Pig to stop trying to connect to the JHS and continue with the task?
It seems that most Hadoop installation/configuration guides neglect to mention configuring the Job History Server, and Pig in particular relies on this server. It also seems like the default (local) settings for the JHS won't work in a multi-node cluster.
The solution was to add the hostname of the server to the configuration in mapred-site.xml to make sure it could be accessed from the other machines. (In my version of the file, the lines had to be added as "new"... there were no previous settings.)
<property>
<name>mapreduce.jobhistory.address</name>
<value>cm:10020</value>
<description>Host and port for Job History Server (default 0.0.0.0:10020)</description>
</property>
Then restart the job history server:
mr-jobhistory-daemon.sh stop historyserver
mr-jobhistory-daemon.sh start historyserver
If you get a bind exception (port in use), it means the stop didn't work. Either
Use ps ax | grep -e JobHistory to get the process and kill it manually with kill -9 [pid]. Then call the start command above again. Or
Use a different port in the configuration
Pig should pick up the new settings automatically. Run a Pig script and hope for the best.
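Before re-running the script, it can be worth confirming from one of the worker nodes that the history server is now reachable at its new address (a quick sketch, assuming the hostname cm from the config above resolves on the workers):
nc -zv cm 10020   # should report that the connection succeeded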
Start the history server from the Hadoop bin directory using the command below:
bin$ ./mr-jobhistory-daemon.sh start historyserver
Then run Pig using the command below:
$ pig
Configure mapreduce.jobhistory.address in hadoop/etc/hadoop/mapred-site.xml,
then:
mapred --daemon start historyserver
The solution for me was that the history server was not running:
[user@vm9 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/user/hadoop-2.7.7/logs/mapred-user-historyserver-vm9.out
[user@vm9 sbin]$ jps
5683 NameNode
6309 NodeManager
5974 SecondaryNameNode
8075 RunJar
6204 ResourceManager
8509 JobHistoryServer
5821 DataNode
8542 Jps
[user@vm9 sbin]$
Now Pig runs properly; it connects to the job history server and the dump command works fine.

How to get an HBase RegionServer to connect to the master

Please tell me how to get the HBase RegionServers to connect to the master.
I configured 5 region servers; however, only 2 servers are working properly.
hbase(main):001:0> status
2 servers, 0 dead, 1.5000 average load
The hostnames of these two servers are sm3-10 and sm3-12, according to http://hbase-master:60010.
But the other servers, like sm3-8, do not work.
I'd like to know the troubleshooting steps and possible resolutions.
sm3-10: slave, works well
[root@sm3-10 ~]# jps
2581 QuorumPeerMain
2761 SecondaryNameNode
2678 DataNode
19913 Jps
2551 HRegionServer
[root@sm3-10 ~]# lsof -i:54310
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
java 2678 hdfs 52r IPv6 27608 TCP sm3-10:33316->sm3-12:54310 (ESTABLISHED)
[root@sm3-10 ~]# lsof -i:3888
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
java 2581 zookeeper 19u IPv6 7239 TCP *:ciphire-serv (LISTEN)
java 2581 zookeeper 20u IPv6 7242 TCP sm3-10:ciphire-serv->sm3-11:53593 (ESTABLISHED)
java 2581 zookeeper 25u IPv6 27011 TCP sm3-10:ciphire-serv->sm3-12:40352 (ESTABLISHED)
java 2581 zookeeper 29u IPv6 25573 TCP sm3-10:ciphire-serv->sm3-8:44271 (ESTABLISHED)
sm3-8: slave, not working properly; however, the status looks good
[root@sm3-8 ~]# jps
3489 Jps
2249 HRegionServer
2463 DataNode
2297 QuorumPeerMain
2686 SecondaryNameNode
[root@sm3-8 ~]# lsof -i:54310
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
java 2463 hdfs 51u IPv6 9919 TCP sm3-8.nos-seamicro.local:40776->sm3-12:54310 (ESTABLISHED)
[root@sm3-8 ~]# lsof -i:3888
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
java 2297 zookeeper 18u IPv6 5951 TCP *:ciphire-serv (LISTEN)
java 2297 zookeeper 19u IPv6 9839 TCP sm3-8.nos-seamicro.local:52886->sm3-12:ciphire-serv (ESTABLISHED)
java 2297 zookeeper 20u IPv6 5956 TCP sm3-8.nos-seamicro.local:44271->sm3-10:ciphire-serv (ESTABLISHED)
java 2297 zookeeper 24u IPv6 5959 TCP sm3-8.nos-seamicro.local:47922->sm3-11:ciphire-serv (ESTABLISHED)
Master: sm3-12
[root@sm3-12 ~]# jps
2760 QuorumPeerMain
3035 NameNode
3096 SecondaryNameNode
2612 HRegionServer
4330 Jps
2872 DataNode
3723 HMaster
[root@sm3-12 ~]# lsof -i:54310
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
java 2872 hdfs 51u IPv6 7824 TCP sm3-12:45482->sm3-12:54310 (ESTABLISHED)
java 3035 hdfs 54u IPv6 7783 TCP sm3-12:54310 (LISTEN)
java 3035 hdfs 70u IPv6 7873 TCP sm3-12:54310->sm3-8:40776 (ESTABLISHED)
java 3035 hdfs 71u IPv6 7874 TCP sm3-12:54310->sm3-11:54990 (ESTABLISHED)
java 3035 hdfs 72u IPv6 7875 TCP sm3-12:54310->sm3-10:33316 (ESTABLISHED)
java 3035 hdfs 74u IPv6 7877 TCP sm3-12:54310->sm3-12:45482 (ESTABLISHED)
[root@sm3-12 ~]#
[root@sm3-12 ~]# cat /etc/hbase/conf/hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://sm3-12:54310/hbase</value>
  <final>true</final>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>sm3-8,sm3-10,sm3-11,sm3-12,sm3-13</value>
  <final>true</final>
</property>
--- snip ---
[root@sm3-12 ~]# cat /etc/zookeeper/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
clientPort=2181
server.1=sm3-10:2888:3888
server.2=sm3-11:2888:3888
server.3=sm3-12:2888:3888
server.4=sm3-8:2888:3888
[root@sm3-12 ~]#
Thanks in advance,
Hiromi
Check to make sure your DNS is configured properly on all of the hosts, and that each server can do a reverse lookup.
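A quick sketch of what that check might look like on each node (assuming the sm3-* hostnames from the question and that getent and dig are installed; substitute the real IPs):
hostname -f                       # the fully-qualified name this node reports for itself
getent hosts sm3-8 sm3-10 sm3-12  # forward lookups for the servers in question
dig +short -x <ip-of-sm3-8>       # reverse lookup; should return the expected hostname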

Resources