localhost: Permission denied (publickey,password) - hadoop

I have Hadoop 3.0.0 standalone working on Ubuntu 16.04 and am converting it to pseudo-distributed mode. I am at the stage of starting the master and slave nodes using the command
$ sudo /usr/local/hadoop/sbin/start-dfs.sh
The output is:
[sudo] password for tc:
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [localhost]
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
localhost: Permission denied (publickey,password).
Starting datanodes
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
localhost: Permission denied (publickey,password).
Starting secondary namenodes [tc-ThinkCentre-M91p-Invalid-entry-length-16-Fixed-up-to-11]
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 1: #: command not found
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
tc-ThinkCentre-M91p-Invalid-entry-length-16-Fixed-up-to-11: Permission denied (publickey,password).
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
I can't understand why it keeps saying:
localhost: Permission denied (publickey,password).
when I have entered my root user password.
I have set up hadoop-env.sh as:
export HDFS_DATANODE_USER=root
export HADOOP_SECURE_DN_USER=hdfs
export HDFS_NAMENODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
Can someone tell me why I'm getting this message?
localhost: Permission denied (publickey,password).
And should I be concerned about the
hadoop-env.sh: line 1: #: command not found
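For context, start-dfs.sh starts each daemon over ssh, so the Permission denied (publickey,password) message comes from sshd on localhost rather than from sudo; the sudo password does not apply to that ssh login. A minimal sketch of the usual passwordless-SSH setup for the account that actually runs the start scripts (the rsa key type and default file locations are assumptions):
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost   # should now log in without a password prompt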

Related

start-all.sh: command not found. How do I fix this?

I tried installing Hadoop using this tutorial: link (the video is timestamped at the point where the problem occurs).
However, after formatting the namenode (hdfs namenode -format), I don't get the "name" folder in /abc.
Also, start-all.sh and the other /sbin commands don't work.
P.S. I did try installing Hadoop as a single node, which didn't work, so I removed it and redid everything as a two-node setup, which meant I had to reformat the namenode. I don't know if that affected this somehow.
EDIT 1: I fixed start-all.sh not being found; there was a mistake in .bashrc that I corrected.
However, I get these error messages when running start-all.sh, start-dfs.sh, etc.:
hadoop@linux-virtual-machine:~$ start-dfs.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:a37ThJJRRW+AlDso9xrOCBHzsFCY0/OgYet7WczVbb0.
Are you sure you want to continue connecting (yes/no)? no
0.0.0.0: Host key verification failed.
EDIT 2: Fixed the above error by changing the permissions on the hadoop folder (in my case both hadoop-2.10.0 and hadoop).
start-all.sh works perfectly, but the namenode doesn't show up.
It's not clear how you set up your PATH variable, or how the scripts are not "working". Did you chmod +x them to make them executable? Is there any log output from them at all?
The start-all script is available in the sbin directory of where you downloaded Hadoop, so just /path/to/sbin/start-all.sh is all you really need.
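For example, something along these lines should run the script directly (a sketch; the /usr/local/hadoop-2.10.0 path is taken from the error output in the question):
chmod +x /usr/local/hadoop-2.10.0/sbin/*.sh
/usr/local/hadoop-2.10.0/sbin/start-all.sh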
Yes, the namenode needs to be formatted on a fresh cluster. Using the official Apache guide is the most up-to-date source and works fine for most people.
Otherwise, I would suggest you learn about Apache Ambari, which can automate your installation. Or just use a sandbox provided by Cloudera, or one of the many Docker containers that already exist for Hadoop, if you don't care about fully "installing" it.

Cannot start start-dfs.sh in Hadoop environment: hadoop-env.sh problem?

I tried to start start-dfs.sh in the Hadoop environment I am setting up and got this message. I don't know what it means; can anyone help me, please?
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 300: /usr/local/hadoop/logs: Is a directory
/usr/local/hadoop/etc/hadoop/hadoop-env.sh: line 301: Java: command not found
Starting namenodes on [master]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Starting datanodes
WARNING: HADOOP_SECURE_DN_LOG_DIR has been replaced by HADOOP_SECURE_LOG_DIR. Using value of HADOOP_SECURE_DN_LOG_DIR.
localhost: ubuntu@localhost: Permission denied (publickey).
Starting secondary namenodes [ip-172-31-93-240]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Comment out or remove this line in $HADOOP_HOME/etc/hadoop/hadoop-env.sh (or maybe you defined this environment variable in another file, such as .bashrc):
export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
Then unset the HADOOP_WORKERS environment variable:
$ unset HADOOP_WORKERS
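If it is not obvious where HADOOP_WORKERS is being exported, a quick search of the usual places can help locate it (a sketch; adjust the paths to your layout):
grep -Rn "HADOOP_WORKERS" $HADOOP_HOME/etc/hadoop ~/.bashrc ~/.profile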

Problems using start-dfs.sh

I have used this link to create a 4-node cluster: https://blog.insightdatascience.com/spinning-up-a-free-hadoop-cluster-step-by-step-c406d56bae42, but once I reach the part where I start the Hadoop cluster, I get errors like so:
$HADOOP_HOME/sbin/start-dfs.sh
Starting namenodes on [namenode_dns]
namenode_dns: mkdir: cannot create directory ‘/usr/local/hadoop/logs’: Permission denied
namenode_dns: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
namenode_dns: starting namenode, logging to /usr/local/hadoop/logs/hadoop-ubuntu-namenode-ip-172-31-2-168.out
namenode_dns: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-ubuntu-namenode-ip-172-31-2-168.out: No such file or directory
namenode_dns: head: cannot open '/usr/local/hadoop/logs/hadoop-ubuntu-namenode-ip-172-31-2-168.out' for reading: No such file or directory
namenode_dns: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-ubuntu-namenode-ip-172-31-2-168.out: No such file or directory
namenode_dns: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-ubuntu-namenode-ip-172-31-2-168.out: No such file or directory
ip-172-31-1-82: starting datanode, logging to /usr/local/hadoop/logs/hadoop-ubuntu-datanode-ip-172-31-1-82.out
ip-172-31-7-221: starting datanode, logging to /usr/local/hadoop/logs/hadoop-ubuntu-datanode-ip-172-31-7-221.out
ip-172-31-14-230: starting datanode, logging to /usr/local/hadoop/logs/hadoop-ubuntu-datanode-ip-172-31-14-230.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/usr/local/hadoop/logs’: Permission denied
0.0.0.0: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-2-168.out
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-2-168.out: No such file or directory
0.0.0.0: head: cannot open '/usr/local/hadoop/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-2-168.out' for reading: No such file or directory
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-2-168.out: No such file or directory
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-2-168.out: No such file or directory
Here is what happens when I run jps:
20688 Jps
I'm not sure where I went wrong with the configuration. I am new to Hadoop and MapReduce, so please keep it simple.
It's a permission-related issue. It looks like the user (I think it's ubuntu) that you are using to start the Hadoop services doesn't have write permission under /usr/local/hadoop, so it cannot create the logs directory; you probably copied the Hadoop files as sudo/root. Try changing the ownership of the Hadoop home directory recursively, or give write access to the /usr/local/hadoop/logs directory:
chown -R ubuntu:ubuntu /usr/local/hadoop
or
chmod 777 /usr/local/hadoop/logs
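Note that the error output above shows /usr/local/hadoop/logs does not exist yet, so chmod on it will fail until it is created; a variant that creates the directory first might look like this (a sketch, assuming the daemons run as the ubuntu user, as the log paths above suggest):
sudo mkdir -p /usr/local/hadoop/logs
sudo chown -R ubuntu:ubuntu /usr/local/hadoop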

start-all.sh error while installing Hadoop on Ubuntu 12.04 LTS

I have been referring to this link for hadoop-1.1.1 installation.
All my files and permissions have been set according to this link.
But I am getting this error. Please help.
hduser@ubuntu:/usr/local/hadoop$ bin/start-all.sh
mkdir: cannot create directory `/usr/local/hadoop/libexec/../logs': Permission denied
chown: cannot access `/usr/local/hadoop/libexec/../logs': No such file or directory
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-ubuntu.out
/usr/local/hadoop/bin/hadoop-daemon.sh: line 136: /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-ubuntu.out: No such file or directory
head: cannot open `/usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-ubuntu.out' for reading: No such file or directory
localhost: mkdir: cannot create directory `/usr/local/hadoop/libexec/../logs': Permission denied
localhost: chown: cannot access `/usr/local/hadoop/libexec/../logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-ubuntu.out
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 136: /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-ubuntu.out: No such file or directory
localhost: head: cannot open `/usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-ubuntu.out' for reading: No such file or directory
localhost: mkdir: cannot create directory `/usr/local/hadoop/libexec/../logs': Permission denied
localhost: chown: cannot access `/usr/local/hadoop/libexec/../logs': No such file or directory
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-ubuntu.out
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 136: /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-ubuntu.out: No such file or directory
localhost: head: cannot open `/usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-ubuntu.out' for reading: No such file or directory
mkdir: cannot create directory `/usr/local/hadoop/libexec/../logs': Permission denied
chown: cannot access `/usr/local/hadoop/libexec/../logs': No such file or directory
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-ubuntu.out
/usr/local/hadoop/bin/hadoop-daemon.sh: line 136: /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-ubuntu.out: No such file or directory
head: cannot open `/usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-ubuntu.out' for reading: No such file or directory
localhost: mkdir: cannot create directory `/usr/local/hadoop/libexec/../logs': Permission denied
localhost: chown: cannot access `/usr/local/hadoop/libexec/../logs': No such file or directory
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-ubuntu.out
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 136: /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-ubuntu.out: No such file or directory
localhost: head: cannot open `/usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-ubuntu.out' for reading: No such file or directory
As the error suggests, you're having a permission problem.
You need to give hduser proper permissions. Try:
sudo chown -R hduser /usr/local/hadoop/
Run this command to change the permissions of the Hadoop directory:
sudo chmod 750 /app/hadoop
Below are two very helpful suggestions:
It is good to check whether HADOOP_HOME and JAVA_HOME are set in the .bashrc file. Sometimes, not setting these environment variables can also cause errors while starting the Hadoop cluster.
It is also useful to debug the error by going through the log files generated in the /usr/local/hadoop/logs directory.
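As a point of reference, typical .bashrc entries look something like this (a sketch only; the JAVA_HOME path is an assumption and depends on the installed JDK):
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # assumed JDK location; adjust to yours
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin                   # Hadoop 1.x keeps its scripts under bin; 2.x/3.x also need $HADOOP_HOME/sbin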

Unable to start daemons using start-dfs.sh

We are using the cdh4.0.0 distribution from Cloudera. We are unable to start the daemons using the command below.
>start-dfs.sh
Starting namenodes on [localhost]
hduser@localhost's password:
localhost: mkdir: cannot create directory `/hduser': Permission denied
localhost: chown: cannot access `/hduser/hduser': No such file or directory
localhost: starting namenode, logging to /hduser/hduser/hadoop-hduser-namenode-canberra.out
localhost: /home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0/sbin/hadoop-daemon.sh: line 150: /hduser/hduser/hadoop-hduser-namenode-canberra.out: No such file or directory
localhost: head: cannot open `/hduser/hduser/hadoop-hduser-namenode-canberra.out' for reading: No such file or directory
Looks like you're using tarballs?
Try overriding the default HADOOP_LOG_DIR location in your etc/hadoop/hadoop-env.sh config file, like so:
export HADOOP_LOG_DIR=/path/to/hadoop/extract/logs/
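For instance, with the tarball location shown in the error output above, it would presumably be something like this (the path is taken from the question and is otherwise just an assumption about where you want the logs to live):
export HADOOP_LOG_DIR=/home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0/logs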
And then retry sbin/start-dfs.sh, and it should work.
In packaged environments, the start-stop scripts are tuned to provide a unique location for each type of service, via the same HADOOP_LOG_DIR env-var, so they do not have the same issue you're seeing.
If you are using packages instead, don't use these scripts and instead just do:
service hadoop-hdfs-namenode start
service hadoop-hdfs-datanode start
service hadoop-hdfs-secondarynamenode start

Resources