Datanode is Not Starting in Hadoop after Kerberos Auth

I have granted permissions on /app/hadoop/tmp/dfs/data, but the DataNode still fails to start:
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /app/hadoop/tmp/dfs/data :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:727)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2383)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2365)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2257)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2304)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2481)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2505)
2017-03-14 20:10:51,169 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/app/hadoop/tmp/dfs/data/"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2392)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2365)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2257)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2304)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2481)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2505)
2017-03-14 20:10:51,172 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-03-14 20:10:51,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************

Does the owner of that directory on the local file system match the service user of the Kerberos principal the DataNode is now using? If the principal is hdfs/<host>, then the directory (and everything under it) should be owned by hdfs.
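A minimal sketch, assuming the principal is hdfs/<host> and the storage should therefore belong to the hdfs user (adjust the user, group, and path to your setup):
# check who currently owns the storage directory
ls -ld /app/hadoop/tmp/dfs/data
# hand the whole tree to the hdfs service user
sudo chown -R hdfs:hadoop /app/hadoop/tmp/dfs/data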

Related

Why doesn't the DataNode process run in WSL?

I have set up and configured Hadoop in WSL, but when I start the DataNode, it doesn't work.
This is its log:
2020-03-24 23:47:08,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-03-24 23:47:09,809 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-03-24 23:47:10,199 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /mnt/d/hadoop/hadoop-2.7.1/tmp/dfs/data :
ExitCodeException exitCode=1: chmod: changing permissions of '/mnt/d/hadoop/hadoop-2.7.1/tmp/dfs/data': Operation not permitted
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:728)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2344)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2386)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
2020-03-24 23:47:10,207 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/mnt/d/hadoop/hadoop-2.7.1/tmp/dfs/data/"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2395)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2368)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2260)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2307)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2484)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2508)
2020-03-24 23:47:10,209 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2020-03-24 23:47:10,213 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at DESKTOP-U1EOV4J.localdomain/127.0.1.1
************************************************************/
I want to keep the DataNode and NameNode directories in my Windows file system (such as /mnt/d/hadoop/hadoop-2.7.1/tmp), but it failed.
I tried chmod -R 777 /mnt/d/hadoop, but it still can't start. I also tried deleting the tmp directory and reformatting the NameNode.
But when I change the directory to /home/hadoop/tmp (my WSL directory), the DataNode starts normally.
According to the log file, I think this is a permissions problem, but I don't know why. How can I fix this?
I have solved this problem; it is caused by Windows file permissions.
By default you can't change a file's permissions under /mnt in WSL: when you use chown or chmod, they simply don't take effect.
You need to remount the drive with metadata support first:
sudo umount /mnt/d
sudo mount -t drvfs D: /mnt/d -o metadata
Then you can change file permissions in WSL.
You can get more information here: https://devblogs.microsoft.com/commandline/chmod-chown-wsl-improvements/
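To make this persist across WSL restarts instead of remounting by hand each time, a sketch based on that article is to enable the metadata option for all drvfs mounts in /etc/wsl.conf (takes effect after the WSL instance restarts):
[automount]
options = "metadata"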

hadoop datanode failed to start with invalid location

All the services are running except the DataNode. When I start the DataNode, I get the errors below.
************************************************************/
2017-09-05 10:19:06,339 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-09-05 10:19:06,675 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data1/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration.
2017-09-05 10:19:06,675 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data2/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration.
2017-09-05 10:19:06,679 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data3/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration.
2017-09-05 10:19:07,441 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hadoop/hzes@HZ.DATA.COM using keytab file /opt/keytab/hadoop.keytab
2017-09-05 10:19:08,208 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /data1/hdfs/dn :
java.io.FileNotFoundException: File file:/data1/hdfs/dn does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:598)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:811)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:588)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2516)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2558)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2540)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2432)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2479)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2661)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
...............
2017-09-05 10:19:08,218 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/data1/hdfs/dn" "/data2/hdfs/dn" "/data3/hdfs/dn"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2567)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2540)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2432)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2479)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2661)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
2017-09-05 10:19:08,221 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-09-05 10:19:08,233 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
My relevant configuration (in hdfs-site.xml):
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/hdfs/dn,/data2/hdfs/dn,/data3/hdfs/dn</value>
</property>
Why? I am running all services as root.
When I change the configuration to another location (e.g. /home/hadoop), it works.
What are the permissions and owner of the /data1/hdfs/dn, /data2/hdfs/dn, and /data3/hdfs/dn directories? The FileNotFoundException says they don't exist yet.
You don't have to specify /hdfs/dn; it will automatically create /dfs/dn. See the sketch below for pre-creating the directories.
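A minimal sketch for pre-creating the storage directories, assuming the daemon runs as the hadoop user shown in the keytab login above (adjust the user, group, and paths to your setup):
# create the directories the DataNode expects
sudo mkdir -p /data1/hdfs/dn /data2/hdfs/dn /data3/hdfs/dn
# give them to the service user; the DataNode then applies dfs.datanode.data.dir.perm (700 by default)
sudo chown -R hadoop:hadoop /data1/hdfs /data2/hdfs /data3/hdfs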

Hadoop 2.6.1 Single Node Setup: DataNode is not up

I tried to set up Hadoop 2.6.1 based on the instructions from here,
but my DataNode is not up. When I run jps, I get only the processes below:
▶ jps
8406 ResourceManager
7744 NameNode
8527 NodeManager
8074 SecondaryNameNode
9121 Jps
DataNode Log:
2015-10-07 13:02:24,144 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /home/vinod/.hadoopdata/hdfs/datanode :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:652)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:490)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
2015-10-07 13:02:24,147 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/home/vinod/.hadoopdata/hdfs/datanode/"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2350)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
2015-10-07 13:02:24,148 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-10-07 13:02:24,150 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at BBDSK0201/127.0.1.1
************************************************************/
Please help; what could I be missing?
1) Make sure the directory has the right owner and permissions:
$ sudo chown -R hduser:hadoop /home/vinod/.hadoopdata/hdfs/datanode
2) Delete the contents of the tmp directory (the path configured as hadoop.tmp.dir).
3) Format the NameNode.
Then start all the processes again, as sketched below. Hope this helps...
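A sketch of steps 2 and 3 plus the restart, assuming the default hadoop.tmp.dir of /tmp/hadoop-${user.name} and a local user named vinod (check core-site.xml first, and note that reformatting wipes all HDFS data):
# step 2: clear the temp directory named by hadoop.tmp.dir
rm -rf /tmp/hadoop-vinod/*
# step 3: reformat the NameNode
hdfs namenode -format
# restart HDFS
start-dfs.sh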

invalid last txid in stream

I am trying to configure Hadoop NameNode HA and ResourceManager HA as well. However, when I bootstrap the standby NameNode, I get an IllegalArgumentException, as shown below:
=====================================================
About to bootstrap Standby ID nn2 from:
Nameservice ID: mycluster
Other Namenode ID: nn1
Other NN's HTTP address: http://my1.namenode.com:50070
Other NN's IPC address: my1.namenode.com/xxx.xxx.xxx.xxx:8020
Namespace ID: 1915209867
Block pool ID: BP-740716617-xxx.xxx.xxx.xxx-1409206617148
Cluster ID: CID-51cea219-ffe7-4a52-8a6c-fb83d501ccaa
Layout version: -56
=====================================================
Data exists in Storage Directory /hadoop1/hadoop/hdfs/nn. Formatting anyway.
14/11/05 16:41:20 INFO common.Storage: Storage directory /hadoop1/hadoop/hdfs/nn has been successfully formatted.
14/11/05 16:41:20 WARN common.Util: Path /hadoop1/hadoop/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/11/05 16:41:20 WARN common.Util: Path /hadoop1/hadoop/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/11/05 16:41:21 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: java.lang.IllegalArgumentException: invalid last txid in stream: http://my3.namenode.com:8480/getJournal?jid=mycluster&segmentTxId=74823&storageInfo=-56%3A1915209867%3A0%3ACID-51cea219-ffe7-4a52-8a6c-fb83d501ccaa
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:317)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1306)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395)
Caused by: java.lang.IllegalArgumentException: invalid last txid in stream: http://my3.namenode.com:8480/getJournal?jid=mycluster&segmentTxId=74823&storageInfo=-56%3A1915209867%3A0%3ACID-51cea219-ffe7-4a52-8a6c-fb83d501ccaa
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.<init>(RedundantEditLogInputStream.java:101)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.chainAndMakeRedundantStreams(JournalSet.java:300)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:494)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:260)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1399)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1418)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.checkLogsAvailableForRead(BootstrapStandby.java:236)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.doRun(BootstrapStandby.java:203)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.access$000(BootstrapStandby.java:69)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:106)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:102)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:102)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:312)
... 2 more
14/11/05 16:41:21 INFO util.ExitUtil: Exiting with status 1
14/11/05 16:41:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at
************************************************************/
Everything else is working well, and I've checked the hdfs-site.xml configuration for the problem, but I couldn't find anything.
Please help me...
Thank you
Restart the JournalNodes:
./sbin/hadoop-daemon.sh start journalnode
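A sketch of the full sequence, assuming a JournalNode runs on each journal host (restart every one, then retry the bootstrap on the standby):
# on each JournalNode host
./sbin/hadoop-daemon.sh stop journalnode
./sbin/hadoop-daemon.sh start journalnode
# then, on the standby NameNode (nn2)
hdfs namenode -bootstrapStandby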

ERROR namenode.FSNamesystem: FSNamesystem initialization failed

I'm running Hadoop in pseudo-distributed mode on an Ubuntu VM. I recently decided to increase the RAM and number of cores available to my VM, and that seems to have completely broken HDFS. First, it was in safe mode, which I manually released using:
hadoop dfsadmin -safemode leave
Then I ran:
hadoop fsck -blocks
and practically every block was corrupt or missing. Since this is just for my learning, I deleted everything in "/user/msknapp" and everything in "/var/lib/hadoop-0.20/cache/mapred/mapred/.settings", and the block errors were gone. Then I tried:
hadoop fs -put myfile myfile
and get (abridged):
12/01/07 15:05:29 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
12/01/07 15:05:29 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/01/07 15:05:29 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/msknapp/myfile" - Aborting...
put: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
12/01/07 15:05:29 ERROR hdfs.DFSClient: Exception closing file /user/msknapp/myfile : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
at ...
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
at ...
So I tried to stop and restart the namenode and datanode. No luck:
hadoop namenode
12/01/07 15:13:47 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:683)
at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:690)
at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:60)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:469)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:297)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1248)
12/01/07 15:13:47 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:683)
at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:690)
at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:60)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:469)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:297)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1248)
12/01/07 15:13:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
Would somebody please help me out here? I have been trying to fix this for hours.
Go into the directory where you have configured HDFS to store its data, delete everything there, format the NameNode, and you are good to go, as sketched below. This usually happens when you don't shut down your cluster properly!
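A sketch with the 0.20-era commands, assuming the storage lives under /var/lib/hadoop-0.20/cache as in the log above (this wipes all HDFS data):
# stop everything, then clear the HDFS storage directories
stop-all.sh
rm -rf /var/lib/hadoop-0.20/cache/hadoop/dfs/*
# reformat the NameNode and restart
hadoop namenode -format
start-all.sh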
The following error means the fsimage file does not have the right permissions:
namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
So give permission to the fsimage file:
$ chmod -R 777 fsimage
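A less permissive alternative is to fix ownership of the name directory instead; a sketch assuming the NameNode runs as the hdfs user (adjust the user and group to your installation; the path comes from the error above):
sudo chown -R hdfs:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs/name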
