Namenode HA (UnknownHostException: nameservice1) - hadoop

We enabled Namenode High Availability through Cloudera Manager, using
Cloudera Manager >> HDFS >> Actions >> Enable High Availability >> selected the standby Namenode & Journal Nodes,
and named the nameservice nameservice1.
Once the whole process completed, we deployed the client configuration.
We tested from a client machine by listing HDFS directories (hadoop fs -ls /), then manually failed over to the standby namenode and listed the directories again (hadoop fs -ls /). This test worked perfectly.
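For reference, the same state check and manual failover can also be driven from the command line; a minimal sketch, assuming the NameNode IDs are namenode1 and namenode2 as in the HA configuration quoted further below:
$ sudo -u hdfs hdfs haadmin -getServiceState namenode1
$ sudo -u hdfs hdfs haadmin -failover namenode1 namenode2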
But when I ran a Hadoop sleep job with the following command, it failed:
$ hadoop jar /opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/lib/hadoop-0.20-mapreduce/hadoop-examples.jar sleep -m 1 -r 0
java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:980)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:974)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:974)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:948)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1410)
at org.apache.hadoop.examples.SleepJob.run(SleepJob.java:174)
at org.apache.hadoop.examples.SleepJob.run(SleepJob.java:237)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.SleepJob.main(SleepJob.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.net.UnknownHostException: nameservice1
... 37 more
I don't know why it's not able to resolve nameservice1, even after deploying the client configuration.
When I googled this issue, I found only one suggested solution:
add the entries below to the client configuration to fix the issue.
dfs.nameservices=nameservice1
dfs.ha.namenodes.nameservice1=namenode1,namenode2
dfs.namenode.rpc-address.nameservice1.namenode1=ip-10-118-137-215.ec2.internal:8020
dfs.namenode.rpc-address.nameservice1.namenode2=ip-10-12-122-210.ec2.internal:8020
dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
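For reference, in hdfs-site.xml those entries take the following form (same values as above):
<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>namenode1,namenode2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode1</name>
  <value>ip-10-118-137-215.ec2.internal:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode2</name>
  <value>ip-10-12-122-210.ec2.internal:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.nameservice1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>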
My impression was that Cloudera Manager takes care of this. I checked the client for this configuration, and it was there (/var/run/cloudera-scm-agent/process/1998-deploy-client-config/hadoop-conf/hdfs-site.xml).
Some more details of the config files:
[11:22:37 root#datasci01.dev:~]# ls -l /etc/hadoop/conf.cloudera.*
/etc/hadoop/conf.cloudera.hdfs:
total 16
-rw-r--r-- 1 root root 943 Jul 31 09:33 core-site.xml
-rw-r--r-- 1 root root 2546 Jul 31 09:33 hadoop-env.sh
-rw-r--r-- 1 root root 1577 Jul 31 09:33 hdfs-site.xml
-rw-r--r-- 1 root root 314 Jul 31 09:33 log4j.properties
/etc/hadoop/conf.cloudera.hdfs1:
total 20
-rwxr-xr-x 1 root root 233 Sep 5 2013 container-executor.cfg
-rw-r--r-- 1 root root 1890 May 21 15:48 core-site.xml
-rw-r--r-- 1 root root 2546 May 21 15:48 hadoop-env.sh
-rw-r--r-- 1 root root 1577 May 21 15:48 hdfs-site.xml
-rw-r--r-- 1 root root 314 May 21 15:48 log4j.properties
/etc/hadoop/conf.cloudera.mapreduce:
total 20
-rw-r--r-- 1 root root 1032 Jul 31 09:33 core-site.xml
-rw-r--r-- 1 root root 2775 Jul 31 09:33 hadoop-env.sh
-rw-r--r-- 1 root root 1450 Jul 31 09:33 hdfs-site.xml
-rw-r--r-- 1 root root 314 Jul 31 09:33 log4j.properties
-rw-r--r-- 1 root root 2446 Jul 31 09:33 mapred-site.xml
/etc/hadoop/conf.cloudera.mapreduce1:
total 24
-rwxr-xr-x 1 root root 233 Sep 5 2013 container-executor.cfg
-rw-r--r-- 1 root root 1979 May 16 12:20 core-site.xml
-rw-r--r-- 1 root root 2775 May 16 12:20 hadoop-env.sh
-rw-r--r-- 1 root root 1450 May 16 12:20 hdfs-site.xml
-rw-r--r-- 1 root root 314 May 16 12:20 log4j.properties
-rw-r--r-- 1 root root 2446 May 16 12:20 mapred-site.xml
[11:23:12 root#datasci01.dev:~]#
I suspect it's an issue with the old configuration in /etc/hadoop/conf.cloudera.hdfs1 & /etc/hadoop/conf.cloudera.mapreduce1, but I'm not sure.
It looks like /etc/hadoop/conf/* never got updated:
# ls -l /etc/hadoop/conf/
total 24
-rwxr-xr-x 1 root root 233 Sep 5 2013 container-executor.cfg
-rw-r--r-- 1 root root 1979 May 16 12:20 core-site.xml
-rw-r--r-- 1 root root 2775 May 16 12:20 hadoop-env.sh
-rw-r--r-- 1 root root 1450 May 16 12:20 hdfs-site.xml
-rw-r--r-- 1 root root 314 May 16 12:20 log4j.properties
-rw-r--r-- 1 root root 2446 May 16 12:20 mapred-site.xml
Does anyone have any idea about this issue?

Looks like you are using the wrong client configuration in the /etc/hadoop/conf directory. Sometimes Cloudera Manager's (CM) Deploy Client Configuration option may not work.
As you have enabled NN HA, you should have valid core-site.xml and hdfs-site.xml files in your Hadoop client configuration directory. To get the valid site files, go to the HDFS service in CM and choose the Download Client Configuration option from the Actions button. You will get the configuration files in zip format; extract the zip and replace /etc/hadoop/conf/core-site.xml and /etc/hadoop/conf/hdfs-site.xml with the extracted core-site.xml and hdfs-site.xml files.
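A sketch of the replacement steps, assuming the downloaded archive is named hdfs-clientconfig.zip and unpacks to a hadoop-conf directory:
unzip hdfs-clientconfig.zip
sudo cp hadoop-conf/core-site.xml /etc/hadoop/conf/core-site.xml
sudo cp hadoop-conf/hdfs-site.xml /etc/hadoop/conf/hdfs-site.xml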

Got it resolved. The wrong config was linked: "/etc/hadoop/conf/" --> "/etc/alternatives/hadoop-conf/" --> "/etc/hadoop/conf.cloudera.mapreduce1"
It has to be: "/etc/hadoop/conf/" --> "/etc/alternatives/hadoop-conf/" --> "/etc/hadoop/conf.cloudera.mapreduce"
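For anyone hitting the same thing, the alternatives chain can be inspected and repointed without editing the symlinks by hand; a sketch, assuming the alternative is registered under the name hadoop-conf:
update-alternatives --display hadoop-conf
sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.cloudera.mapreduce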

The statement below in my code resolved the problem by specifying the host and port:
val dfs = sqlContext.read.json("hdfs://localhost:9000//user/arvindd/input/employee.json")

I resolved this issue by putting the complete URI in the line that creates the RDD:
myfirstrdd = sc.textFile("hdfs://192.168.35.132:8020/BUPA.txt")
and then I was able to do other RDD transformations. Make sure you have read/write/execute permissions on the file, or you can do chmod 777.

Related

Apache will not deliver resources in sub directories of /var/www/html

I currently have a server running Ubuntu 18.04 with Apache2. I am not able to access PNG and SVG files in subdirectories. Example: /var/www/html/icons/new.svg, when the page is located at /var/www/html/index.php. However, Apache will deliver images from within the page directory, so all photos directly within /var/www/html are delivered.
The error code for the images is just a plain 404. I am able to access pages within /var/www/html/sub/index.php. All images use relative links, if that matters.
I have a non-verified SSL certificate on my server, but it doesn't deliver the files even on plain HTTP, if that matters.
It's probably a dumb question, but thanks for your time anyway.
All code worked on a local WAMP server before being put on a LAMP server.
Example Code:
<img src="icons/new.svg"> <!-- Won't work -->
<img src="logo.svg"> <!-- Will work -->
Inside /var/www/html
drwxr-xr-x 6 root root 4096 Jun 26 18:04 .
drwxr-xr-x 3 root root 4096 Jun 22 18:55 ..
drwxr-xr-x 4 root root 4096 Jun 26 17:50 icons
-rw-r--r-- 1 root root 4340 Jun 26 18:11 index.php
-rw-r--r-- 1 root root 4172 Jun 26 18:11 logo.svg
-rw-r--r-- 1 root root 1856 Jun 26 18:11 mainstyle.css
drwxr-xr-x 2 root root 4096 Jun 26 17:50 PHP
drwxr-xr-x 2 root root 4096 Jun 26 17:50 plandetails
drwxr-xr-x 2 root root 4096 Jun 26 17:50 planicons
-rw-r--r-- 1 root root 295915 Jun 26 18:11 searchbkg.jpg
-rw-r--r-- 1 root root 7366 Jun 26 18:11 searchbkg.svg
Inside the icons folder
drwxr-xr-x 4 root root 4096 Jun 26 17:50 .
drwxr-xr-x 6 root root 4096 Jun 26 18:04 ..
-rw-r--r-- 1 root root 446 Jun 26 18:37 arrowleft.svg
-rw-r--r-- 1 root root 446 Jun 26 18:37 arrowrt.svg
-rw-r--r-- 1 root root 7863 Jun 26 18:37 bestoffer.svg
-rw-r--r-- 1 root root 4024 Jun 26 18:37 free.svg
-rw-r--r-- 1 root root 477 Jun 26 18:37 informationbubble.svg
-rw-r--r-- 1 root root 3404 Jun 26 18:37 new.svg
drwxr-xr-x 2 root root 4096 Jun 26 17:50 plans
drwxr-xr-x 2 root root 4096 Jun 26 17:50 prices
-rw-r--r-- 1 root root 2272 Jun 26 18:37 save.svg
Updated /var/www/html perms
drwxr-xr-x 6 root root 4096 Jun 26 18:04 .
drwxr-xr-x 3 root root 4096 Jun 22 18:55 ..
drwxr-xr-x 4 root root 4096 Jun 26 17:50 icons
-rw-r--r-- 1 root root 4340 Jun 26 18:37 index.php
-rw-r--r-- 1 root root 4172 Jun 26 18:37 logo.svg
-rw-r--r-- 1 root root 1856 Jun 26 18:37 mainstyle.css
drwxr-xr-x 2 root root 4096 Jun 26 17:50 PHP
drwxr-xr-x 2 root root 4096 Jun 26 17:50 plandetails
drwxr-xr-x 2 root root 4096 Jun 26 17:50 planicons
-rw-r--r-- 1 root root 295915 Jun 26 18:37 searchbkg.jpg
-rw-r--r-- 1 root root 7366 Jun 26 18:37 searchbkg.svg
For anyone wondering, I used a combination of the help below and this.
This is not a PHP question; it's an Apache question and a matter of permissions. Try:
chmod a+rx /var/www/html/sub/
and
chmod a+r /var/www/html/*
EDIT:
Your Virtual Host should be:
DocumentRoot /var/www/html
<Directory /var/www/html>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Require all granted
</Directory>
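After fixing permissions or the virtual host, validate and reload rather than restarting blindly; a sketch for Ubuntu 18.04 (the capital X grants execute only to directories):
sudo chmod -R a+rX /var/www/html
sudo apache2ctl configtest && sudo systemctl reload apache2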

HDInsight Oozie 4.2.0.2.5 Spark2 Action Jackson collision

I was following this guide:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_spark-component-guide/content/ch_oozie-spark-action.html#spark-config-oozie-spark2
This enabled me to configure the following workflow.xml:
<workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:0.3">
    <start to="Raw-To-Parquet"/>
    <action name="Raw-To-Parquet">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!--
            <prepare>
                <delete path="[PATH]"/>
                <mkdir path="[PATH]"/>
            </prepare>
            <job-xml>[SPARK SETTINGS FILE]</job-xml>
            <configuration>
                <property>
                    <name>[PROPERTY-NAME]</name>
                    <value>[PROPERTY-VALUE]</value>
                </property>
            </configuration>
            -->
            <master>${master}</master>
            <!--
            <mode>[SPARK MODE]</mode>
            -->
            <name>Raw-To-Parquet</name>
            <class>org.apache.spark.examples.SparkPi</class>
            <jar>${nameNode}/spark-examples_2.11-2.0.2.2.5.6.3-5.jar</jar>
            <spark-opts>--conf spark.yarn.jars=spark2/* --conf spark.driver.extraJavaOptions=-Dhdp.version=2.5.6.3-5 --conf spark.executor.extraJavaOptions=-Dhdp.version=2.5.6.3-5</spark-opts>
            <!--SPARK_JAVA_OPTS="-Dhdp.version=xxx"
            <arg>[ARG-VALUE]</arg>
            <arg>[ARG-VALUE]</arg>
            -->
        </spark>
        <ok to="End"/>
        <error to="Fail"/>
    </action>
    <kill name="Fail">
        <message>Job failed</message>
    </kill>
    <end name="End"/>
</workflow-app>
Job.properties
master=yarn-cluster
nameNode=wasb://hdi-adam-ak#hdiadamakstore.blob.core.windows.net
jobTracker=hn1-hdi-ad.hcgue2snotaezkuexzoymd0nlh.ax.internal.cloudapp.net:8088
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/project-example/oozie
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.action.sharelib.for.spark=spark2
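For completeness, the workflow is then submitted in the usual way (the Oozie server URL and port here are assumptions based on the headnode name that appears later in this post):
oozie job -oozie http://hn0-hdi-ad:11000/oozie -config job.properties -run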
This begins a workflow, but it then dies due to a collision between Jackson jars:
18/05/02 12:39:12 ERROR ApplicationMaster: User class threw exception: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.JavaType.isReferenceType()Z
java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.JavaType.isReferenceType()Z
at com.fasterxml.jackson.databind.ser.BasicSerializerFactory.findSerializerByLookup(BasicSerializerFactory.java:302)
at com.fasterxml.jackson.databind.ser.BeanSerializerFactory._createSerializer2(BeanSerializerFactory.java:218)
at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:153)
at com.fasterxml.jackson.databind.SerializerProvider._createUntypedSerializer(SerializerProvider.java:1203)
at com.fasterxml.jackson.databind.SerializerProvider._createAndCacheUntypedSerializer(SerializerProvider.java:1157)
at com.fasterxml.jackson.databind.SerializerProvider.findValueSerializer(SerializerProvider.java:481)
at com.fasterxml.jackson.databind.SerializerProvider.findTypedValueSerializer(SerializerProvider.java:679)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:107)
at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3559)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:2927)
at org.apache.spark.rdd.RDDOperationScope.toJson(RDDOperationScope.scala:52)
See below for the contents of my Oozie sharelib:
oozie#hn0-hdi-ad:/quantexa/oozie$ hadoop fs -ls -R /user/oozie/share/lib/lib_20180502121937 | grep jackson
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/distcp/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/distcp/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/distcp/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/hcatalog/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/hcatalog/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 232248 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/hcatalog/jackson-core-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/hcatalog/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 780664 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/hcatalog/jackson-mapper-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 18336 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive/jackson-jaxrs-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 27084 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive/jackson-xc-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive2/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive2/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/hive2/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/mapreduce-streaming/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/mapreduce-streaming/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/mapreduce-streaming/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/oozie/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/oozie/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/oozie/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 232248 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-core-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 18336 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-jaxrs-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 780664 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-mapper-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 27084 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/pig/jackson-xc-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 46983 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-annotations-2.6.5.jar
-rw-r--r-- 1 oozie supergroup 258876 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-core-2.6.5.jar
-rw-r--r-- 1 oozie supergroup 232248 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-core-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 1171380 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-databind-2.6.5.jar
-rw-r--r-- 1 oozie supergroup 48418 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-dataformat-cbor-2.6.5.jar
-rw-r--r-- 1 oozie supergroup 18336 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-jaxrs-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 780664 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-mapper-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 41263 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-module-paranamer-2.6.5.jar
-rw-r--r-- 1 oozie supergroup 515604 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-module-scala_2.11-2.6.5.jar
-rw-r--r-- 1 oozie supergroup 27084 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/jackson-xc-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 40341 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/json4s-jackson_2.11-3.2.11.jar
-rw-r--r-- 1 oozie supergroup 1048110 2018-05-02 12:21 /user/oozie/share/lib/lib_20180502121937/spark2/parquet-jackson-1.7.0.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 232248 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/jackson-core-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/jackson-databind-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 780664 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/jackson-mapper-asl-1.9.13.jar
-rw-r--r-- 1 oozie supergroup 549415 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/jackson-module-scala_2.10-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 39953 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/json4s-jackson_2.10-3.2.10.jar
-rw-r--r-- 1 oozie supergroup 1048110 2018-05-02 12:20 /user/oozie/share/lib/lib_20180502121937/spark_orig/parquet-jackson-1.7.0.jar
-rw-r--r-- 1 oozie supergroup 38605 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/sqoop/jackson-annotations-2.4.0.jar
-rw-r--r-- 1 oozie supergroup 225302 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/sqoop/jackson-core-2.4.4.jar
-rw-r--r-- 1 oozie supergroup 1076926 2018-05-02 12:19 /user/oozie/share/lib/lib_20180502121937/sqoop/jackson-databind-2.4.4.jar
You can then see the contents of the container which YARN creates:
root#wn0-hdi-ad:/mnt/resource/hadoop/yarn/local/usercache/oozie/appcache# ll application_1525249303830_0045/container_1525249303830_0045_01_000001/
total 1020
drwx--x--- 3 yarn hadoop 20480 May 2 12:38 ./
drwx--x--- 16 yarn hadoop 4096 May 2 12:40 ../
//removed due to word count
lrwxrwxrwx 1 yarn hadoop 85 May 2 12:38 __app__.jar -> /mnt/resource/hadoop/yarn/local/filecache/261/spark-examples_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 75 May 2 12:38 jackson-annotations-2.4.0.jar -> /mnt/resource/hadoop/yarn/local/filecache/498/jackson-annotations-2.4.0.jar*
lrwxrwxrwx 1 yarn hadoop 75 May 2 12:38 jackson-annotations-2.6.5.jar -> /mnt/resource/hadoop/yarn/local/filecache/504/jackson-annotations-2.6.5.jar*
lrwxrwxrwx 1 yarn hadoop 68 May 2 12:38 jackson-core-2.4.4.jar -> /mnt/resource/hadoop/yarn/local/filecache/293/jackson-core-2.4.4.jar*
lrwxrwxrwx 1 yarn hadoop 68 May 2 12:38 jackson-core-2.6.5.jar -> /mnt/resource/hadoop/yarn/local/filecache/304/jackson-core-2.6.5.jar*
lrwxrwxrwx 1 yarn hadoop 73 May 2 12:38 jackson-core-asl-1.9.13.jar -> /mnt/resource/hadoop/yarn/local/filecache/467/jackson-core-asl-1.9.13.jar*
lrwxrwxrwx 1 yarn hadoop 72 May 2 12:38 jackson-databind-2.4.4.jar -> /mnt/resource/hadoop/yarn/local/filecache/507/jackson-databind-2.4.4.jar*
lrwxrwxrwx 1 yarn hadoop 72 May 2 12:38 jackson-databind-2.6.5.jar -> /mnt/resource/hadoop/yarn/local/filecache/514/jackson-databind-2.6.5.jar*
lrwxrwxrwx 1 yarn hadoop 79 May 2 12:38 jackson-dataformat-cbor-2.6.5.jar -> /mnt/resource/hadoop/yarn/local/filecache/357/jackson-dataformat-cbor-2.6.5.jar*
lrwxrwxrwx 1 yarn hadoop 70 May 2 12:38 jackson-jaxrs-1.9.13.jar -> /mnt/resource/hadoop/yarn/local/filecache/509/jackson-jaxrs-1.9.13.jar*
lrwxrwxrwx 1 yarn hadoop 75 May 2 12:38 jackson-mapper-asl-1.9.13.jar -> /mnt/resource/hadoop/yarn/local/filecache/446/jackson-mapper-asl-1.9.13.jar*
lrwxrwxrwx 1 yarn hadoop 80 May 2 12:38 jackson-module-paranamer-2.6.5.jar -> /mnt/resource/hadoop/yarn/local/filecache/373/jackson-module-paranamer-2.6.5.jar*
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 jackson-module-scala_2.11-2.6.5.jar -> /mnt/resource/hadoop/yarn/local/filecache/391/jackson-module-scala_2.11-2.6.5.jar*
lrwxrwxrwx 1 yarn hadoop 67 May 2 12:38 jackson-xc-1.9.13.jar -> /mnt/resource/hadoop/yarn/local/filecache/392/jackson-xc-1.9.13.jar*
//removed due to word count
lrwxrwxrwx 1 yarn hadoop 85 May 2 12:38 spark-catalyst_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/296/spark-catalyst_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 82 May 2 12:38 spark-cloud_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/351/spark-cloud_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 __spark_conf__ -> /mnt/resource/hadoop/yarn/local/usercache/oozie/filecache/1630/__spark_conf__.zip/
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 spark-core_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/482/spark-core_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 83 May 2 12:38 spark-graphx_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/295/spark-graphx_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 spark-hive_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/453/spark-hive_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 94 May 2 12:38 spark-hive-thriftserver_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/317/spark-hive-thriftserver_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 85 May 2 12:38 spark-launcher_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/374/spark-launcher_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 82 May 2 12:38 spark-mllib_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/359/spark-mllib_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 88 May 2 12:38 spark-mllib-local_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/291/spark-mllib-local_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 91 May 2 12:38 spark-network-common_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/433/spark-network-common_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 92 May 2 12:38 spark-network-shuffle_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/333/spark-network-shuffle_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 spark-repl_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/425/spark-repl_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 83 May 2 12:38 spark-sketch_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/407/spark-sketch_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 80 May 2 12:38 spark-sql_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/329/spark-sql_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 86 May 2 12:38 spark-streaming_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/452/spark-streaming_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 spark-tags_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/423/spark-tags_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 83 May 2 12:38 spark-unsafe_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/428/spark-unsafe_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 81 May 2 12:38 spark-yarn_2.11-2.0.2.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/478/spark-yarn_2.11-2.0.2.2.5.6.3-5.jar*
lrwxrwxrwx 1 yarn hadoop 66 May 2 12:38 spire_2.11-0.7.4.jar -> /mnt/resource/hadoop/yarn/local/filecache/300/spire_2.11-0.7.4.jar*
lrwxrwxrwx 1 yarn hadoop 73 May 2 12:38 spire-macros_2.11-0.7.4.jar -> /mnt/resource/hadoop/yarn/local/filecache/297/spire-macros_2.11-0.7.4.jar*
lrwxrwxrwx 1 yarn hadoop 59 May 2 12:38 ST4-4.0.4.jar -> /mnt/resource/hadoop/yarn/local/filecache/289/ST4-4.0.4.jar*
lrwxrwxrwx 1 yarn hadoop 64 May 2 12:38 stax-api-1.0.1.jar -> /mnt/resource/hadoop/yarn/local/filecache/458/stax-api-1.0.1.jar*
lrwxrwxrwx 1 yarn hadoop 64 May 2 12:38 stax-api-1.0-2.jar -> /mnt/resource/hadoop/yarn/local/filecache/404/stax-api-1.0-2.jar*
lrwxrwxrwx 1 yarn hadoop 62 May 2 12:38 stream-2.7.0.jar -> /mnt/resource/hadoop/yarn/local/filecache/345/stream-2.7.0.jar*
lrwxrwxrwx 1 yarn hadoop 70 May 2 12:38 stringtemplate-3.2.1.jar -> /mnt/resource/hadoop/yarn/local/filecache/315/stringtemplate-3.2.1.jar*
lrwxrwxrwx 1 yarn hadoop 65 May 2 12:38 super-csv-2.2.0.jar -> /mnt/resource/hadoop/yarn/local/filecache/469/super-csv-2.2.0.jar*
drwx--x--- 2 yarn hadoop 4096 May 2 12:38 tmp/
lrwxrwxrwx 1 yarn hadoop 73 May 2 12:38 univocity-parsers-2.1.1.jar -> /mnt/resource/hadoop/yarn/local/filecache/416/univocity-parsers-2.1.1.jar*
lrwxrwxrwx 1 yarn hadoop 76 May 2 12:38 validation-api-1.1.0.Final.jar -> /mnt/resource/hadoop/yarn/local/filecache/500/validation-api-1.1.0.Final.jar*
lrwxrwxrwx 1 yarn hadoop 71 May 2 12:38 xbean-asm5-shaded-4.4.jar -> /mnt/resource/hadoop/yarn/local/filecache/493/xbean-asm5-shaded-4.4.jar*
lrwxrwxrwx 1 yarn hadoop 66 May 2 12:38 xercesImpl-2.9.1.jar -> /mnt/resource/hadoop/yarn/local/filecache/476/xercesImpl-2.9.1.jar*
lrwxrwxrwx 1 yarn hadoop 61 May 2 12:38 xmlenc-0.52.jar -> /mnt/resource/hadoop/yarn/local/filecache/491/xmlenc-0.52.jar*
lrwxrwxrwx 1 yarn hadoop 56 May 2 12:38 xz-1.0.jar -> /mnt/resource/hadoop/yarn/local/filecache/456/xz-1.0.jar*
lrwxrwxrwx 1 yarn hadoop 75 May 2 12:38 zookeeper-3.4.6.2.5.6.3-5.jar -> /mnt/resource/hadoop/yarn/local/filecache/449/zookeeper-3.4.6.2.5.6.3-5.jar*
So from the above it seems that, regardless of
oozie.action.sharelib.for.spark=spark2
in the Job.properties, YARN/Oozie is loading all of the jars, including the old version of Jackson, into the container. I am setting --conf spark.yarn.jars=spark2/* on the Spark job itself too.
So I think that Oozie is spawning a map-reduce job with all of the Oozie sharelib jars. This job then spawns a new container for the Spark action which contains all the jars, causing the collision. I need the Spark container to only include the Spark jars.
I was using an Oozie version < 4.3.1. See the fixes below:
https://issues.apache.org/jira/browse/OOZIE-2606
https://issues.apache.org/jira/browse/OOZIE-2658
https://issues.apache.org/jira/browse/OOZIE-2787
https://issues.apache.org/jira/browse/OOZIE-2802
These fixes ensure that the Spark container only contains the correct jars on the path, avoiding the Jackson collision.
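To confirm the Oozie server version and what the spark2 sharelib actually resolves to, before and after patching, something like the following should work (the server URL is again an assumption):
oozie admin -oozie http://hn0-hdi-ad:11000/oozie -version
oozie admin -oozie http://hn0-hdi-ad:11000/oozie -shareliblist spark2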

Changed ***key-certbot.pem file caused server offline

Server: Nginx on Ubuntu 16.04 Xenial
Our sites "crashed" just now due to a certificate issue:
nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/letsencrypt/keys/0000_key-certbot.pem") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)
nginx: configuration file /etc/nginx/nginx.conf test failed
In the Virtual hosts, we have these lines:
ssl_certificate_key /etc/letsencrypt/keys/0003_key-certbot.pem;
ssl_certificate /etc/letsencrypt/live/[domain]/fullchain.pem;
On checking the /etc/letsencrypt/keys/ folder, I found these results:
/etc/letsencrypt/keys # ls -la
total 40
drwx------ 2 root root 4096 Jul 5 15:33 .
drwxr-xr-x 11 root root 4096 Apr 18 10:58 ..
-rw------- 1 root root 1704 Apr 18 11:01 0000_key-certbot.pem
-rw------- 1 root root 1708 Jan 31 14:37 0000_key-letsencrypt.pem
-rw------- 1 root root 1704 Apr 18 11:18 0001_key-certbot.pem
-rw------- 1 root root 1704 Jan 31 14:37 0001_key-letsencrypt.pem
-rw------- 1 root root 1704 Apr 18 11:19 0002_key-certbot.pem
-rw------- 1 root root 1708 Feb 2 11:47 0002_key-letsencrypt.pem
-rw------- 1 root root 1708 Jun 17 12:01 0003_key-certbot.pem
-rw------- 1 root root 1704 Jul 5 15:33 0004_key-certbot.pem
The three virtual host files were all referencing 0000_key-certbot.pem; after changing that to 0003_key-certbot.pem, the sites were working again.
How can we prevent the sites from crashing every 90 days?
While typing this, I think I found the solution. I shouldn't have been using
ssl_certificate_key /etc/letsencrypt/keys/0003_key-certbot.pem;
ssl_certificate /etc/letsencrypt/live/[domain]/fullchain.pem;
But instead
ssl_certificate_key /etc/letsencrypt/live/[domain]/privkey.pem;
ssl_certificate /etc/letsencrypt/live/[domain]/fullchain.pem;
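As a safeguard, you can confirm that a certificate and key actually match, and test the config, before reloading; a small sketch, assuming an RSA key (the two md5 hashes should be identical):
openssl x509 -noout -modulus -in /etc/letsencrypt/live/[domain]/fullchain.pem | openssl md5
openssl rsa -noout -modulus -in /etc/letsencrypt/live/[domain]/privkey.pem | openssl md5
nginx -t && systemctl reload nginx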
Hope this helps someone

Conda not using .condarc in root environment

We are using a centralized conda install. The Continuum docs say:
A .condarc file may also be located in the root environment, in which case it overrides any in the home directory.
Perhaps I'm not understanding what "root environment" means. I put a .condarc at the top level of the conda install directory. However, any time I run any conda operation (including just conda list), it overrides the one in the root environment and creates one in my home directory.
With the newest version of conda under Debian, I copied the config file as follows:
root#e42dc1ece1e3:/home/jonb4# ls -la /opt/conda/
total 24
drwxr-xr-x 11 root root 155 Sep 28 11:53 .
drwxr-xr-x 3 root root 19 Jun 4 08:23 ..
-rw-r--r-- 1 root root 1058 Sep 28 11:53 .condarc
-rw-rw-r-- 1 root root 3699 May 12 20:59 LICENSE.txt
drwxr-xr-x 2 root root 4096 Jun 4 08:24 bin
drwxr-xr-x 2 root root 4096 Jun 4 08:24 conda-meta
drwxr-xr-x 2 root root 6 Jun 4 08:24 envs
drwxr-xr-x 3 root root 18 Jun 4 08:24 etc
drwxr-xr-x 5 root root 314 Jun 4 08:24 include
drwxr-xr-x 8 root root 4096 Jun 4 08:24 lib
drwxr-xr-x 28 root root 4096 Jun 4 08:24 pkgs
drwxr-xr-x 4 root root 29 Jun 4 08:24 share
drwxr-xr-x 3 root root 71 Jun 4 08:24 ssl
Then, as a user, my correct config settings are read without a problem.
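To double-check which .condarc files conda actually reads, and in what priority order, run (no assumptions beyond conda being on the PATH):
conda config --show-sources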

Invalid URI for NameNode address

I'm trying to set up a Cloudera Hadoop cluster, with a master node containing the namenode, secondarynamenode and jobtracker, and two other nodes containing the datanode and tasktracker. The Cloudera version is 4.6 and the OS is Ubuntu Precise x64. The cluster is being created on AWS instances. Passwordless SSH has been set up as well, and the Java installation is Oracle 7.
Whenever I execute sudo service hadoop-hdfs-namenode start I get:
2014-05-14 05:08:38,023 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:329)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:317)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:370)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:422)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:442)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
My core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<master-ip>:8020</value>
  </property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://<master-ip>:8021</value>
  </property>
</configuration>
hdfs-site.xml:
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
I tried using the public IP, private IP, public DNS and FQDN, but the result is the same.
The directory /etc/hadoop/conf.empty looks like:
-rw-r--r-- 1 root root 2998 Feb 26 10:21 capacity-scheduler.xml
-rw-r--r-- 1 root hadoop 1335 Feb 26 10:21 configuration.xsl
-rw-r--r-- 1 root root 233 Feb 26 10:21 container-executor.cfg
-rwxr-xr-x 1 root root 287 May 14 05:09 core-site.xml
-rwxr-xr-x 1 root root 2445 May 14 05:09 hadoop-env.sh
-rw-r--r-- 1 root hadoop 1774 Feb 26 10:21 hadoop-metrics2.properties
-rw-r--r-- 1 root hadoop 2490 Feb 26 10:21 hadoop-metrics.properties
-rw-r--r-- 1 root hadoop 9196 Feb 26 10:21 hadoop-policy.xml
-rwxr-xr-x 1 root root 332 May 14 05:09 hdfs-site.xml
-rw-r--r-- 1 root hadoop 8735 Feb 26 10:21 log4j.properties
-rw-r--r-- 1 root root 4113 Feb 26 10:21 mapred-queues.xml.template
-rwxr-xr-x 1 root root 290 May 14 05:09 mapred-site.xml
-rw-r--r-- 1 root root 178 Feb 26 10:21 mapred-site.xml.template
-rwxr-xr-x 1 root root 12 May 14 05:09 masters
-rwxr-xr-x 1 root root 29 May 14 05:09 slaves
-rw-r--r-- 1 root hadoop 2316 Feb 26 10:21 ssl-client.xml.example
-rw-r--r-- 1 root hadoop 2251 Feb 26 10:21 ssl-server.xml.example
-rw-r--r-- 1 root root 2513 Feb 26 10:21 yarn-env.sh
-rw-r--r-- 1 root root 2262 Feb 26 10:21 yarn-site.xml
and slaves lists the IP addresses of the two slave machines:
<slave1-ip>
<slave2-ip>
Executing update-alternatives --get-selections | grep hadoop gives:
hadoop-conf auto /etc/hadoop/conf.empty
I've done a lot of searching, but didn't find anything that could help me fix the problem. Could someone offer a clue as to what's going on?
I was facing the same issue and fixed it by formatting the namenode (note that formatting erases existing HDFS metadata, so only do this on a fresh cluster). Below is the command:
hdfs namenode -format
My core-site.xml entry is:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
That will definitely solve the problem.
I ran into this same thing. I found I had to add a fs.defaultFS property to hdfs-site.xml to match the fs.defaultFS property in core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://<master-ip>:8020</value>
</property>
Once I added this, the secondary namenode started OK.
Make sure you have set the HADOOP_PREFIX variable correctly, as indicated in the link:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I faced the same issue as yours, and it was rectified by setting this variable.
It might be that you have given the wrong syntax for dfs.datanode.data.dir
or dfs.namenode.name.dir in hdfs-site.xml. If you miss a / in the value, you will get this error.
Check the syntax of:
file:///home/hadoop/hdfs/
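For example, a well-formed pair of entries in hdfs-site.xml would look like this (the paths are illustrative):
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hadoop/hdfs/datanode</value>
</property>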
