Cannot create directory in hdfs NameNode is in safe mode - hadoop

I upgraded to the latest version of Cloudera. Now I am trying to create a directory in HDFS:
hadoop fs -mkdir data
I am getting the following error:
Cannot Create /user/cloudera/data Name Node is in SafeMode.
How can I fix this?

When you start Hadoop, the NameNode stays in safe mode for a while. You can either wait until it leaves safe mode on its own (you can watch the remaining time decrease on the NameNode web UI), or turn it off with
hadoop dfsadmin -safemode leave
The above command turns off safe mode.
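If you would rather not force it, you can also check the current state or simply block until the NameNode clears safe mode on its own. A minimal sketch, using dfsadmin subcommands available on current 2.x/3.x releases:
hdfs dfsadmin -safemode get    # prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode wait   # returns only once the NameNode has left safe mode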

In addition to Ramesh Maharjan's answer: by default, the Cloudera machine (Cloudera QuickStart 5.12) does not let you turn off safe mode as a regular user; you need to run the command as the hdfs user via sudo's -u option, as shown below:
sudo -u hdfs hdfs dfsadmin -safemode leave

In my case, I was running the hive command to enter the Hive shell immediately after starting Hadoop with start-all.sh. Retrying the hive command after waiting 10-20 seconds worked.
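Instead of guessing at a delay, a startup script can block until safe mode clears before launching Hive. A minimal sketch, assuming the hdfs client is on the PATH:
start-all.sh
hdfs dfsadmin -safemode wait   # blocks until the NameNode leaves safe mode
hive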

You might need the full path to the hdfs command:
/usr/local/hadoop/bin/hdfs dfsadmin -safemode leave

Related

Role of 'prepare' command & 'safemode' in HDFS Rolling Upgrade

On the HDFS rolling upgrade page, I see the steps described at a high level like this:
hdfs dfsadmin -rollingUpgrade prepare
Upgrade and start the standby NN2 with hdfs namenode -rollingUpgrade started
Shut down NN1, upgrade it, and start it with hdfs namenode -rollingUpgrade started
But the Cloudera documentation for preparing the cluster for an upgrade just says to put the NameNode into safe mode and save the namespace:
sudo -u hdfs hdfs dfsadmin -safemode enter
sudo -u hdfs hdfs dfsadmin -saveNamespace
Can someone explain why the steps above differ?
Is entering -safemode before the upgrade sufficient on its own?
If so, what does -rollingUpgrade started actually do?
I don't see anything about -safemode leave in the Cloudera documentation. When will the NameNode leave safe mode? Will it leave safe mode automatically?
The Cloudera instructions are not for upgrading the core Hadoop services with zero downtime; they explicitly say to shut everything down.
I assume the rollingUpgrade started option flags the NameNode process so that it does not attempt to become active if the standby fails over during the upgrade, and it also makes that NameNode the standby. That is different from safe mode, which prevents metadata updates in the middle of the upgrade process.
I do not think it leaves safe mode automatically. Please comment on the answer once you get there and figure that out.
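If you want to see where things stand without guessing, dfsadmin can report the rolling-upgrade and safe-mode state directly. A minimal sketch, independent of either vendor's procedure:
hdfs dfsadmin -rollingUpgrade query   # reports whether the rollback image needed by the upgrade is ready
hdfs dfsadmin -safemode get           # shows whether the NameNode is still in safe mode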

Can't get past the error: "mkdir: Cannot create directory /user/hadoop. Name node is in safe mode." [duplicate]

root# bin/hadoop fs -mkdir t
mkdir: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/root/t. Name node is in safe mode.
I am not able to create anything in HDFS.
I did
root# bin/hadoop fs -safemode leave
But it shows:
safemode: Unknown command
What is the problem?
Solution: http://unmeshasreeveni.blogspot.com/2014/04/name-node-is-in-safe-mode-how-to-leave.html?m=1
In order to force the NameNode to leave safe mode, the following command should be executed:
bin/hadoop dfsadmin -safemode leave
You are getting the Unknown command error because -safemode is not a sub-command of hadoop fs; it belongs to hadoop dfsadmin.
After the above command, I would also suggest running hadoop fsck so that any inconsistencies that have crept into HDFS can be sorted out (see the example at the end of this answer).
Update:
For newer distributions, use the hdfs command instead of the hadoop command, which is being deprecated for HDFS administration:
hdfs dfsadmin -safemode leave
hadoop dfsadmin has been deprecated, and so has hadoop dfs; all HDFS-specific tasks have been moved to the separate hdfs command.
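For reference, a basic fsck run with the newer entry point looks like this (a sketch; the extra flags are optional and only add per-file block detail):
hdfs fsck / -files -blocks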
Try this; it will work:
sudo -u hdfs hdfs dfsadmin -safemode leave
The command above did not work for me, but the following did:
hdfs dfsadmin -safemode leave
I used the hdfs command instead of the hadoop command.
Also check out this link: http://ask.gopivotal.com/hc/en-us/articles/200933026-HDFS-goes-into-readonly-mode-and-errors-out-with-Name-node-is-in-safe-mode-
Safe mode on means HDFS is read-only.
Safe mode off means HDFS is readable and writable.
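A quick way to convince yourself of this (a sketch; the test path is arbitrary): reads keep working while safe mode is on, but writes are rejected.
hdfs dfsadmin -safemode get           # Safe mode is ON
hdfs dfs -ls /                        # read: succeeds
hdfs dfs -mkdir /tmp/safemode_test    # write: fails with SafeModeException until safe mode is OFF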
In Hadoop 2.6.0, we can check the status of the NameNode with the following commands:
To check the NameNode status:
$ hdfs dfsadmin -safemode get
To enter safe mode:
$ hdfs dfsadmin -safemode enter
To leave safe mode:
$ hdfs dfsadmin -safemode leave
If you use Hadoop 2.6.1 or above, the command works but complains that it is deprecated. I actually could not use hadoop dfsadmin -safemode leave because I was running Hadoop in a Docker container and that command mysteriously failed when run in the container, so what I did was this: I checked the documentation and found dfs.safemode.threshold.pct, which says
Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent.
So I changed hdfs-site.xml to the following (in older Hadoop versions you apparently need to do it in hdfs-default.xml):
<configuration>
  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0</value>
  </property>
</configuration>
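Note that in Hadoop 2.x and later this property was renamed; the old key still works as a deprecated alias, but the same setting under the current name would look like this (again, a value of 0 means do not wait for any blocks before leaving safe mode):
<configuration>
  <property>
    <name>dfs.namenode.safemode.threshold-pct</name>
    <value>0</value>
  </property>
</configuration>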
Try this
sudo -u hdfs hdfs dfsadmin -safemode leave
Check the status of safe mode:
sudo -u hdfs hdfs dfsadmin -safemode get
If it is still in safe mode, one possible reason is that there is not enough space on your node. You can check your node's disk usage with:
df -h
If the root partition is full, delete files or add space to your root partition and retry the first step.
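df -h only shows the local filesystem; to see capacity as HDFS itself reports it, either of these works on recent releases (a sketch):
hdfs dfsadmin -report   # per-DataNode capacity, used, and remaining space
hdfs dfs -df -h /       # cluster-wide summary in human-readable units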
The NameNode enters safe mode when there is a shortage of memory. As a result, HDFS becomes read-only, which means you cannot create any additional directories or files in HDFS. To leave safe mode, the following command is used:
hadoop dfsadmin -safemode leave
If you are using Cloudera Manager:
go to Actions >> Leave Safemode
But that doesn't always solve the problem. The complete solution lies in freeing up some memory. Use the following command to check your memory usage:
free -m
If you are using Cloudera, you can also check whether HDFS is showing signs of bad health; it is probably reporting a memory issue related to the NameNode. Allot more memory through the options available. I am not sure what commands to use for this if you are not using Cloudera Manager, but there must be a way. Hope it helps! :)
Run the command below using the HDFS OS user to disable safe mode:
sudo -u hdfs hadoop dfsadmin -safemode leave
Use the command below to turn off safe mode:
$> hdfs dfsadmin -safemode leave

Hadoop: Cannot delete a directory. Name node is in safe mode

When I try to delete a directory in the HDFS file system, I get the following error:
Cannot delete directory. Name node is in safe mode.
How can I solve this issue? Please advise.
If you see that error, it means the NameNode is in safe mode, which is almost equivalent to read-only mode.
To take the NameNode out of safe mode, run the command below:
$ hadoop dfsadmin -safemode leave
If you are using Hadoop 2.9.0 or higher, use
hdfs dfsadmin -safemode leave
In my case, hadoop dfsadmin -safemode leave took the NameNode out of safe mode, but as soon as I tried to delete the old directory, the system went back into safe mode.
I deleted all the tmp folders I could find related to Hadoop installations, but the old directory did not disappear and could not be deleted.
Finally I used:
ps aux | grep -i namenode
and discovered that there was a running process using parameters from an older Hadoop installation (a different version). I killed that process with kill pid, and that finally allowed the old directory to be removed.

hadoop.hdfs_clusters.default.webhdfs_url Error in Hue

Can anyone help me? I'm getting this error in Hue:
Current value: http://localhost:50070/webhdfs/v1
Failed to create temporary file "/tmp/hue_config_validation.15785472045199379485"
FYI, I'm using Cloudera Manager 5.1.3 and Hue 3.6.
OK, I solved my own problem. The cause of the error was that the NameNode was in safe mode.
This command will make your NameNode leave safe mode:
sudo -u hdfs hdfs dfsadmin -safemode leave
For more information on why your NameNode went into safe mode, see:
https://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Safemode
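Independently of Hue, you can check that the WebHDFS URL from the error message is reachable and that the NameNode is out of safe mode. A sketch using the standard LISTSTATUS operation (adjust the host and port to your cluster):
curl -i "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"
hdfs dfsadmin -safemode get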

SafeModeException : Name node is in safe mode

I tried copying files from my local disk to HDFS. At first it gave a SafeModeException. While searching for a solution, I read that the problem does not appear if the same command is executed again, so I tried again and it didn't give the exception.
hduser@saket:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg/ /user/hduser/gutenberg
copyFromLocal: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/hduser/gutenberg. Name node is in safe mode.
hduser@saket:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg/ /user/hduser/gutenberg
Why is this happening? Should I keep safe mode off by using this command?
hadoop dfs -safemode leave
The NameNode stays in safe mode until the configured percentage of blocks has been reported as online by the DataNodes. This percentage can be configured with the dfs.namenode.safemode.threshold-pct parameter in hdfs-site.xml.
For small or development clusters, where you have very few blocks, it makes sense to set this parameter lower than its default 0.999f value. Otherwise a single missing block can cause the system to hang in safe mode.
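To see the value your cluster is actually running with, you can query the client-side configuration (a sketch; getconf only reads the configuration files visible to the client):
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct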
Go into the Hadoop bin directory (on my system /usr/local/hadoop/bin/):
cd /usr/local/hadoop/bin/
Check that the hadoop executable is there:
hadoopuser@arul-PC:/usr/local/hadoop/bin$ ls
The output will be:
hadoop hadoop-daemons.sh start-all.sh start-jobhistoryserver.sh stop-balancer.sh stop-mapred.sh
hadoop-config.sh rcc start-balancer.sh start-mapred.sh stop-dfs.sh task-controller
hadoop-daemon.sh slaves.sh start-dfs.sh stop-all.sh stop-jobhistoryserver.sh
Then turn off safe mode with the command ./hadoop dfsadmin -safemode leave:
hadoopuser@arul-PC:/usr/local/hadoop/bin$ ./hadoop dfsadmin -safemode leave
You will get the response:
Safe mode is OFF
Note: I created the Hadoop user with the name hadoopuser.
