I have installed Hadoop and set JAVA_HOME, but I am still getting this error. Why?
/opt/hadoop/2.5.1/sbin: $JAVA_HOME
-bash: /opt/java/6.0: Is a directory
/opt/hadoop/2.5.1/sbin: ./start-dfs.sh
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
/opt/hadoop/2.5.1/sbin:
If I try:
sh start-dfs.sh
start-dfs.sh: 82: /opt/hadoop/2.5.1/sbin/../libexec/hadoop-config.sh: Syntax error: word unexpected (expecting ")")
Use bash and not sh to invoke the scripts. That solved my problem.
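For illustration, with the paths from the question (the only point is to invoke the script with bash rather than sh; on Debian/Ubuntu sh is usually dash, which rejects the bash-only syntax in hadoop-config.sh):
cd /opt/hadoop/2.5.1/sbin
bash start-dfs.sh    # or simply ./start-dfs.sh, which runs it via the script's own bash shebang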
Open the file hadoop-env.sh within hadoop-xxx/etc/hadoop and add the following line:
export JAVA_HOME=/path/to/your/java
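For example, with the directory from the question above (assuming /opt/java/6.0 really is the root of the JDK install), the line would be:
export JAVA_HOME=/opt/java/6.0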
You need to set the Java environment in your .bashrc file. You may also need to update the JAVA_HOME value in hadoop-env.sh.
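A minimal sketch of the .bashrc lines, assuming an OpenJDK install on Linux (the JDK path is only an illustration; use the location reported by your own system):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # adjust to your JDK location
export PATH=$PATH:$JAVA_HOME/bin
After editing, run source ~/.bashrc (or open a new shell) so the change takes effect.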
Follow the steps from my answer and your Hadoop installation will go just fine:
Problems with installing Hadoop on Ubuntu 12.04
I followed this tutorial for the installation of Hadoop. Unfortunately, when I run the dfs namenode -format script, the following error is printed on the console;
at the end I see this message:
dfs namenode -format
WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
mkdir: cannot create directory ‘/home/hdoop/hadoop-3.2.1/logs’: Permission denied
ERROR: Unable to create /home/hdoop/hadoop-3.2.1/logs. Aborting.
Thank you.
Also, when I run:
./start-dfs.sh
Starting namenodes on [localhost]
localhost: WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
Starting datanodes
Starting secondary namenodes [blabla]
blabla: Warning: Permanently added 'blabla,192.168.100.10' (ECDSA) to the list of known hosts.
Change the permissions of /home/hdoop to the correct ones!
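A sketch of the kind of fix meant here, assuming the Hadoop user is hdoop and the tree from the question (/home/hdoop/hadoop-3.2.1):
sudo chown -R hdoop:hdoop /home/hdoop/hadoop-3.2.1   # give the hdoop user ownership of the Hadoop tree
# or pre-create just the logs directory with the right owner:
sudo mkdir -p /home/hdoop/hadoop-3.2.1/logs
sudo chown hdoop:hdoop /home/hdoop/hadoop-3.2.1/logs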
I solved this with the link here.
According to my configuration, I hadn't set JAVA_HOME within PATH.
$ which java
$ echo $JAVA_HOME
Also, I changed the value of HADOOP_OPTS in hadoop-env.sh as given below.
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/"
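To set JAVA_HOME from the output of which java, a minimal sketch (assuming an OpenJDK install on Linux, where /usr/bin/java is a symlink into the real JDK directory):
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))   # resolve the symlink back to the JDK root
export PATH=$PATH:$JAVA_HOME/bin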
Create the directory: create the "logs" directory yourself with root access. In this case the missing directory was logs, so create it at /home/hdoop/hadoop-3.2.1/ (e.g. "/home/{username}/{extracted hadoop directory}/").
Give access to the directory: make it accessible by using sudo chmod 777 {directory location}.
In this case: sudo chmod 777 /home/hdoop/hadoop-3.2.1/logs
I solved it by granting access to that directory.
I am very new to Hadoop and am trying to set up a pseudo-distributed mode execution with Hadoop-3.1.2.
When I try to start the YARN service I get the following error; please see the snippet below.
$ sbin/start-yarn.sh
Starting resourcemanagers on []
localhost: ERROR: Cannot set priority of resourcemanager process 13209
pdsh@manager-4: localhost: ssh exited with exit code 1
Starting nodemanagers
localhost: ERROR: Cannot set priority of nodemanager process 13366
pdsh@manager-4: localhost: ssh exited with exit code 1
I tried the solutions at this Stack Overflow question, which is very similar to my problem, but nothing worked out. A problem the same as mine is posted in another forum here; however, no solution is available there either.
Then I tried another option, which I describe below.
I set the following exports in the file sbin/start-yarn.sh.
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
Then I executed sbin/start-yarn.sh and got the following error. Please note that I have done all the settings for passwordless ssh for root@localhost.
$ sudo sbin/start-yarn.sh
Starting resourcemanagers on []
localhost: Permission denied (publickey).
pdsh@manager-4: localhost: ssh exited with exit code 255
Starting nodemanagers
localhost: Permission denied (publickey).
pdsh@manager-4: localhost: ssh exited with exit code 255
In addition to the steps suggested by zhao, ephraimbuddy, and qitian:
Please make sure that if you have a firewall running, the firewall is not blocking it in any way. Also make sure that the user with which you are executing the command has enough permissions to update the priorities.
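A couple of hedged checks on an Ubuntu-like system (which of these tools is installed depends on your distribution):
sudo ufw status       # is a firewall active, and which ports does it allow?
sudo iptables -L -n   # raw firewall rules, if ufw is not in use
id                    # which user and groups is the command actually running as?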
Before running the start-yarn script, try the command: ssh localhost
When you have set up passwordless ssh for localhost, change the PDSH_RCMD_TYPE value to ssh:
export PDSH_RCMD_TYPE=ssh
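If passwordless ssh to localhost is not in place yet, a minimal sketch using standard OpenSSH commands (key type and file names may differ on your system):
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa         # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for localhost logins
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                    # should now log in without a password prompt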
This error message actually confused me a lot. Later I found out it happened because I had not configured cgroups correctly. So first check your configuration and make sure it is all right; you can also check your resourcemanager logs.
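For example, something along these lines usually surfaces the real error (the log file name pattern varies between Hadoop versions, so the glob is an assumption):
tail -n 100 $HADOOP_HOME/logs/*resourcemanager*.log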
I had the same issue; what helped me was the guide I found at this link!
The message "Cannot set priority of resourcemanager process" is misleading. I checked the resource manager logs and found that there was an error as follows:
Unexpected close tag </property>; expected </configuration>
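That error means one of the Hadoop *-site.xml files is not well formed: every property element must be closed inside the surrounding configuration element. For reference, a well-formed block looks roughly like this (the property name and value are only placeholders):
<configuration>
  <property>
    <name>some.property.name</name>
    <value>some-value</value>
  </property>
</configuration>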
I had the same issue and was finally able to solve it. I got ResourceManager and NodeManager to run. If you're running Hadoop 3.3 and up, the issue might be with the Java version you're using (see hadoop_compatibility):
" Apache Hadoop 3.3 and upper supports Java 8 and Java 11 (runtime only)
Please compile Hadoop with Java 8. Compiling Hadoop with Java 11 is not supported"
Solution:
Try switching to java 8.
Then make sure your JAVA_HOME path variables are pointing to java 8 (including any JAVA_HOME path variables in hadoop-env.sh).
If the issue persists, check the error messages in resourcemanager log located in $HADOOP_HOME/logs/.
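On Ubuntu, for example, switching might look roughly like this (the exact JDK path is an assumption; check what update-alternatives reports on your machine):
sudo update-alternatives --config java               # interactively select the Java 8 entry
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # and mirror this in hadoop-env.sh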
I tried to run start-dfs.sh in the Hadoop environment that I am just trying to create, and I got this message. I don't know what it means. Can anyone help me, please?
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 300: /usr/local/hadoop/logs: Is a directory
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 301: Java: command not found
Starting namenodes on [master]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Starting datanodes
WARNING: HADOOP_SECURE_DN_LOG_DIR has been replaced by HADOOP_SECURE_LOG_DIR. Using value of HADOOP_SECURE_DN_LOG_DIR.
localhost: ubuntu@localhost: Permission denied (publickey).
Starting secondary namenodes [ip-172-31-93-240]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Comment out or remove this line in $HADOOP_HOME/etc/hadoop/hadoop-env.sh (or you may have defined this environment variable in other files, like .bashrc):
export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
Then unset the HADOOP_WORKERS environment variable:
$ unset HADOOP_WORKERS
I have installed Hadoop 2.6 on Ubuntu 14.04. I just followed this blog.
While I am trying to format the namenode, I am hitting the error below:
hduser#data1:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
/usr/local/hadoop/bin/hdfs: line 276: /home/hduser/usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
/home/hduser/usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
This error occurs because the JAVA_HOME you have provided does not have java.
Just add this line in hadoop-env.sh and /home/hduser/.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
I think you have already set the $JAVA_HOME but you did it wrong (just a guess):
/home/hduser/usr/lib/jvm/java-7-openjdk-amd64/bin/java
It should be:
/usr/lib/jvm/java-7-openjdk-amd64/bin/java
You probably added ~ before the path when you exported JAVA_HOME, and that prepended the home directory /home/hduser.
To check this, type java -version and see if Java is working. Also type echo $JAVA_HOME and check the path manually.
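Concretely, the two checks (the expected JAVA_HOME output is based on the paths discussed above):
java -version      # should print the installed JDK version if java is on the PATH
echo $JAVA_HOME    # should print /usr/lib/jvm/java-7-openjdk-amd64, with no /home/hduser prefix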
I figured it out. The entry we made was for amd64, but these are really i386 computers. Please verify the path, and that should fix the issue.
I am trying to install Hadoop on Windows 7. I have installed Cygwin, and when I run ./start-dfs.sh I get the following error:
Error: Could not find or load main class org.apache.hadoop.hdfs.tools.GetConf
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-kalai-namenode-kalai-PC.out
localhost: Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-kalai-datanode-kalai-PC.out
localhost: Error: Could not find or load main class org.apache.hadoop.hdfs.server.datanode.DataNode
Error: Could not find or load main class org.apache.hadoop.hdfs.tools.GetConf
Can anyone let me know what I'm doing wrong here?
The above issue was cleared when I used a Command Prompt with admin privileges for formatting the namenode and starting the services:
Remove the C:\tmp and C:\data directories manually
Open cmd.exe with 'Run as Administrator'
Format the namenode and start the services (see the sketch below)
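A rough sketch of those steps from an elevated Cygwin shell (the paths come from the question; the /cygdrive mapping and the Hadoop bin/sbin directories being on the PATH are assumptions about this setup):
rm -rf /cygdrive/c/tmp /cygdrive/c/data   # clear the old C:\tmp and C:\data
hdfs namenode -format                     # reformat the namenode
./start-dfs.sh                            # start the HDFS daemons again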