FATAL datanode.DataNode: Exception in secureMain

I'm very new to Hadoop. After following the manual http://toodey.com/2015/08/10/hadoop-installation-on-windows-without-cygwin-in-10-mints/ and running Hadoop, I got 3 errors:
1) FATAL datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741515:
2) FATAL nodemanager.NodeManager: Error starting NodeManager
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)
3) ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
I googled for many hours but unfortunately found no results.
What could be wrong? Thank you in advance.

Solved - Installed Ubuntu instead of Windows
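For anyone who would rather stay on Windows: all three errors involve Hadoop's native Windows code (note the NativeIO$Windows.access0 signature), which usually means winutils.exe and hadoop.dll are missing or were built for a different Hadoop version. A sketch of the commonly suggested fix, assuming HADOOP_HOME is C:\hadoop (both the path and the need for the System32 copy are assumptions that vary by setup):

:: Put native binaries built for your exact Hadoop version in HADOOP_HOME\bin
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
:: Some setups also need hadoop.dll visible to the JVM system-wide
copy %HADOOP_HOME%\bin\hadoop.dll %WINDIR%\System32\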

Related

java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO Fails to start DFS

I have installed and configured Hadoop 2.6.0 on Windows.
I couldn't successfully run the "sbin\start-dfs" command.
I am getting the error below:
16/12/20 13:03:56 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
        at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:308)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
There was a similar question about running YARN, and the advice was that including hadoop-2.6.0/sbin and hadoop-2.6.0/bin in the PATH would resolve the problem. But I am still facing the error.
Can anyone help me fix this?
Please make sure you have the proper access permissions on the namenode directory. Also, format the namenode and start the HDFS services.
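For reference, a rough sketch of those two steps on Windows (note that formatting wipes any existing HDFS metadata, so only do this on a fresh installation):

bin\hdfs namenode -format
sbin\start-dfs.cmd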

Pig permission denied while HDFS file is readable

Running the following commands is successful:
hadoop fs -ls /path/
hadoop fs -cat /path/.pig_schema
And all the files in that directory have -rwxr-xr-x permissions.
However, in the Pig console, running:
A = LOAD '/path/' USING PigStorage();
B = LIMIT A 5;
DUMP B;
encounters a permission error:
2015-08-27 08:47:59,734 [main] ERROR org.apache.pig.tools.grunt.Grunt - You don't have permission to perform the operation. Error from the server: Permission denied
2015-08-27 08:47:59,735 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2017: Internal error creating job configuration.
Any idea why?
EDIT 1: Added error log
================================================================================
Pig Stack Trace
---------------
ERROR 2017: Internal error creating job configuration.

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias B
        at org.apache.pig.PigServer.openIterator(PigServer.java:857)
        at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:746)
        at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:320)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:196)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:171)
        at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
        at org.apache.pig.Main.run(Main.java:543)
        at org.apache.pig.Main.main(Main.java:157)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias B
        at org.apache.pig.PigServer.storeEx(PigServer.java:956)
        at org.apache.pig.PigServer.store(PigServer.java:919)
        at org.apache.pig.PigServer.openIterator(PigServer.java:832)
        ... 7 more
Caused by: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:874)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:297)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:177)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1285)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1270)
        at org.apache.pig.PigServer.storeEx(PigServer.java:952)
        ... 9 more
Caused by: java.io.IOException: Permission denied
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:1879)
        at java.io.File.createTempFile(File.java:1923)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:538)
        ... 14 more
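Note that the final "Caused by" is java.io.File.createTempFile failing in java.io.UnixFileSystem, i.e. a local temp-file creation, not an HDFS operation, so the HDFS permissions shown above are likely a red herring: Pig cannot write to the machine's local temp directory while building the job configuration. A sketch of a check and workaround, assuming the default java.io.tmpdir of /tmp (the replacement directory is just an example):

# Can the user running pig write to the local temp directory?
ls -ld /tmp
# Workaround: point the JVM at a writable directory before launching pig
mkdir -p ~/pigtmp
export PIG_OPTS="-Djava.io.tmpdir=$HOME/pigtmp"
pig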

JobTracker not yet running error?

Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.JobTrackerNotYetInitializedException: JobTracker is not yet RUNNING
I am getting this error when trying to run an MR job.
Any clues?
The problem was that the namenode had not been formatted.
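For a Hadoop 1.x cluster (where the JobTracker exists), the fix looks roughly like this; formatting destroys any existing HDFS data, so only do it on a fresh setup:

hadoop namenode -format
start-dfs.sh
start-mapred.sh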

FileNotFoundException dfs.include - Hortonworks for windows

I tried to install Hortonworks 2.2.0.2.0.6.0-0009 for Windows on a Windows Server 2012.
Everything seemed clean during the installation, except when launching "start_local_hdp_services.cmd" to start the Hadoop services. There, the namenode and historyserver services fail to start and generate the following logs:
For "hadoop-namenode-M1BY1HADOOP.log" :
2014-03-06 09:39:06,755 ERROR org.apache.hadoop.hdfs.server.namenode.HostFileManager: failed to read include file 'c:\hdp\hadoop-2.2.0.2.0.6.0-0009/etc/hadoop/dfs.include'. Continuing to use previous include list.
java.io.FileNotFoundException: c:\hdp\hadoop-2.2.0.2.0.6.0-0009\etc\hadoop\dfs.include (The system cannot find the file specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.hadoop.util.HostsFileReader.readFileToSet(HostsFileReader.java:54)
at org.apache.hadoop.hdfs.server.namenode.HostFileManager$MutableEntrySet.readFile(HostFileManager.java:265)
at org.apache.hadoop.hdfs.server.namenode.HostFileManager.refresh(HostFileManager.java:284)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:176)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:237)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:609)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:567)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
For "hadoop-historyserver-M1BY1HADOOP.log":
2014-03-06 09:39:20,130 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://VMHADOOP:8020/mapred/history/done]
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://VMHADOOP:8020/mapred/history/done]
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:503)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:88)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:93)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:155)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:165)
Does anybody know the reason for this error and how to solve it?
Thank you
It is a bug in Hadoop (https://issues.apache.org/jira/browse/AMBARI-2355); create empty dfs.include and dfs.exclude files and the problem should vanish.
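On Windows, using the path from the namenode log above, the two empty files can be created from a command prompt:

type nul > c:\hdp\hadoop-2.2.0.2.0.6.0-0009\etc\hadoop\dfs.include
type nul > c:\hdp\hadoop-2.2.0.2.0.6.0-0009\etc\hadoop\dfs.exclude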

Fatal disk error on datanode DataNode failed volumes:

I am getting the following log on my namenode, and it is removing my datanode from execution:
2013-02-08 03:25:54,345 WARN namenode.NameNode (NameNodeRpcServer.java:errorReport(825)) - Fatal disk error on xxx.xxx.xxx.xxx:50010: DataNode failed volumes:/home/srikmvm/hadoop-0.23.0/tmp/current;
2013-02-08 03:25:54,349 INFO net.NetworkTopology (NetworkTopology.java:remove(367)) - Removing a node: /default-rack/xxx.xxx.xxx.xxx:50010
Can anyone suggest how to rectify this?
Data Node Logs:
2013-02-08 03:25:54,718 WARN datanode.DataNode (FSDataset.java:checkDirs(871)) - Removing failed volume /home/srikmvm/hadoop-0.23.0/tmp/current:
org.apache.hadoop.util.DiskChecker$DiskErrorException: can not create directory: /home/srikmvm/hadoop-0.23.0/tmp/current/BP-876979163-137.132.153.125-1360241194423/current/finalized
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:87)
Things that may be causing this error message:
Does the directory / path to the directory exist?
Does the user under which the datanode process is running have permission to create / write to this directory?
/home/srikmvm/hadoop-0.23.0/tmp/current/BP-876979163-137.132.153.125-1360241194423/current/finalized
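A quick way to check both points, assuming the datanode runs as the user srikmvm (inferred from the path):

# Does the volume directory exist, and who owns it?
ls -ld /home/srikmvm/hadoop-0.23.0/tmp/current
# If ownership or permissions are wrong, hand the tree to the datanode user
sudo chown -R srikmvm /home/srikmvm/hadoop-0.23.0/tmp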
