Cannot start start-dfs.sh in Hadoop environment - hadoop-env.sh problem? - hadoop

I am trying to run start-dfs.sh in a Hadoop environment I have just set up, and I get the messages below. I don't know what they mean; can anyone help me, please?
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 300: /usr/local/hadoop/logs: Is a directory
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 301: Java: command not found
Starting namenodes on [master]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Starting datanodes
WARNING: HADOOP_SECURE_DN_LOG_DIR has been replaced by HADOOP_SECURE_LOG_DIR. Using value of HADOOP_SECURE_DN_LOG_DIR.
localhost: ubuntu@localhost: Permission denied (publickey).
Starting secondary namenodes [ip-172-31-93-240]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.

Comment out or remove this line in $HADOOP_HOME/etc/hadoop/hadoop-env.sh (you may also have defined this environment variable in other files, such as ~/.bashrc):
export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
Then, unset the HADOOP_WORKERS environment variable in your current shell:
$ unset HADOOP_WORKERS
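If you are not sure where the variable is being set, a quick check like this can help (a minimal sketch; the files listed are just the usual suspects and may differ on your system):
$ grep -n HADOOP_WORKERS $HADOOP_HOME/etc/hadoop/hadoop-env.sh ~/.bashrc ~/.profile
$ unset HADOOP_WORKERS
$ env | grep HADOOP_WORKER    # should print nothing once the conflict is gone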

Related

Hadoop installation issue: Permission denied

I followed this tutorial to install Hadoop. Unfortunately, when I run the namenode format command, the following error is printed on the console at the end:
dfs namenode -format
WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
mkdir: cannot create directory ‘/home/hdoop/hadoop-3.2.1/logs’: Permission denied
ERROR: Unable to create /home/hdoop/hadoop-3.2.1/logs. Aborting.
Thank you.
Also, when I run
./start-dfs.sh
Starting namenodes on [localhost]
localhost: WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
Starting datanodes
Starting secondary namenodes [blabla]
blabla: Warning: Permanently added 'blabla,192.168.100.10' (ECDSA) to the list of known hosts.
Change the permissions of /home/hdoop to the correct ones!
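A minimal sketch of that, assuming the Hadoop user is hdoop (adjust the user name and path to your setup):
$ sudo chown -R hdoop:hdoop /home/hdoop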
I solved this using the link here.
According to my configuration, I hadn't set JAVA_HOME within PATH:
$ which java
$ echo $JAVA_HOME
Also, I changed the value of HADOOP_OPTS in hadoop-env.sh as given below.
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/"
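For reference, a minimal sketch of the JAVA_HOME part (the JDK path below is only an assumption; use whatever readlink -f $(which java) reports on your machine, without the trailing /bin/java):
# in ~/.bashrc (assumed JDK location)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$PATH:$JAVA_HOME/bin"
# and the same value in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64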
I solved it by creating the directory and giving access to it:
Create the directory: create the "logs" directory yourself using root access. In this case it goes under /home/hdoop/hadoop-3.2.1/ (i.e. "/home/{username}/{extracted hadoop directory}/").
Give access to the directory: make it accessible using sudo chmod 777 {directory location}. In this case: sudo chmod 777 /home/hdoop/hadoop-3.2.1/logs
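A minimal sketch of both steps, assuming the Hadoop user is hdoop; owning the directory as that user is a less permissive alternative to chmod 777:
$ sudo mkdir -p /home/hdoop/hadoop-3.2.1/logs
$ sudo chown -R hdoop:hdoop /home/hdoop/hadoop-3.2.1/logs
# or, as in the answer above:
$ sudo chmod 777 /home/hdoop/hadoop-3.2.1/logs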

start-all.sh: command not found. How do I fix this?

I tried installing Hadoop using this tutorial: link (the video is timestamped where the problem occurs).
However, after formatting the namenode (hdfs namenode -format) I don't get the "name" folder in /abc.
Also, start-all.sh and the other /sbin commands don't work.
P.S. I did try installing Hadoop as a single node, which didn't work, so I removed it and redid everything as a two-node setup, which meant reformatting the namenode. I don't know if that affected this somehow.
EDIT 1: I fixed start-all.sh not being found; there was a mistake in .bashrc that I corrected.
However, I still get these error messages when running start-all.sh or start-dfs.sh etc.:
hadoop#linux-virtual-machine:~$ start-dfs.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:a37ThJJRRW+AlDso9xrOCBHzsFCY0/OgYet7WczVbb0.
Are you sure you want to continue connecting (yes/no)? no
0.0.0.0: Host key verification failed.
EDIT 2: Fixed the above error by changing the permissions on the hadoop folder (in my case both hadoop-2.10.0 and hadoop).
start-all.sh works perfectly, but the namenode doesn't show up.
It's not clear how you set up your PATH variable, or in what way the scripts are not "working". Did you chmod +x them to make them executable? Is there any log output from them at all?
The start-all script is available in the sbin directory of wherever you downloaded Hadoop, so /path/to/sbin/start-all.sh is all you really need.
Yes, the namenode needs to be formatted on a fresh cluster. The official Apache guide is the most up-to-date source and works fine for most people.
Otherwise, I would suggest learning about Apache Ambari, which can automate the installation, or just using a sandbox provided by Cloudera, or one of the many Docker containers that already exist for Hadoop if you don't care about fully "installing" it.
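If you do want start-all.sh on your PATH, a minimal .bashrc sketch (the install location below is an assumption based on the paths in the question):
export HADOOP_HOME=/usr/local/hadoop-2.10.0
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
Then open a new shell or run source ~/.bashrc.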

How to remove ERROR start-dfs.sh in Hadoop-3.2.0

Getting the following errors when running start-dfs.sh to start the Hadoop services:
Starting namenodes on [localhost]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [ahsan-Lenovo-G570]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
In the Hadoop home directory, open the etc/hadoop/hadoop-env.sh file and add the lines below to remove the errors:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
You can use your own user name instead by replacing root in the lines above.
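For example, if your dedicated Hadoop user were called hdoop (a placeholder name; running the daemons as root is generally discouraged), the same block would read:
export HDFS_NAMENODE_USER=hdoop
export HDFS_DATANODE_USER=hdoop
export HDFS_SECONDARYNAMENODE_USER=hdoop
export YARN_RESOURCEMANAGER_USER=hdoop
export YARN_NODEMANAGER_USER=hdoop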

localhost: Permission denied (publickey,password)

I have Hadoop 3.0.0 standalone working on Ubuntu 16.04 and am converting it to pseudo-distributed mode. I am at the stage of running the master and slave nodes using the command
$ sudo /usr/local/hadoop/sbin/start-dfs.sh
The results come up as
[sudo] password for tc:
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [localhost]
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
localhost: Permission denied (publickey,password).
Starting datanodes
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
localhost: Permission denied (publickey,password).
Starting secondary namenodes [tc-ThinkCentre-M91p-Invalid-entry-length-16-Fixed-up-to-11]
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
tc-ThinkCentre-M91p-Invalid-entry-length-16-Fixed-up-to-11: Permission denied (publickey,password).
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
I can't understand why it's coming up saying:
localhost: Permission denied (publickey,password).
when I have entered my root user password.
I have set up hadoop-env.sh as:
export HDFS_DATANODE_USER=root
export HADOOP_SECURE_DN_USER=hdfs
export HDFS_NAMENODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
Can someone tell me why I'm getting this message?
localhost: Permission denied (publickey,password).
And should I be concerned about the
hadoop-env.sh: line 1: #: command not found

Hadoop: installation problems

I have installed Hadoop and I have set JAVA_HOME, but I'm still getting this error. Why?
/opt/hadoop/2.5.1/sbin: $JAVA_HOME
-bash: /opt/java/6.0: Is a directory
/opt/hadoop/2.5.1/sbin: ./start-dfs.sh
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
/opt/hadoop/2.5.1/sbin:
If I try:
sh start-dfs.sh
start-dfs.sh: 82: /opt/hadoop/2.5.1/sbin/../libexec/hadoop-config.sh: Syntax error: word unexpected (expecting ")")
Use bash and not sh to invoke the scripts. That solved my problem.
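For example, from the sbin directory (or with the full path):
$ bash ./start-dfs.sh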
Open the file hadoop-env.sh within hadoop-xxx/etc/hadoop and add the following line (note: no spaces around the =):
export JAVA_HOME="address of java"
You need to set the Java environment in your .bashrc file. You may also need to update the JAVA_HOME value in hadoop-env.sh.
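A minimal sketch of both settings, reusing the /opt/java/6.0 path from the question and assuming the standard etc/hadoop layout (adjust both paths to your system):
# in ~/.bashrc
export JAVA_HOME=/opt/java/6.0
export PATH="$PATH:$JAVA_HOME/bin"
# in /opt/hadoop/2.5.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/java/6.0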
Follow the steps from my answer and your Hadoop installation will go just fine:
Problems with installing Hadoop on Ubuntu 12.04
