Command not found but ./command works - hadoop

I am trying to execute the hadoop command from the bin folder of Hadoop. It doesn't work, but ./hadoop in the bin folder works. What could be the problem?
Thanks,
Madhu
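The usual cause: the shell resolves a bare command name only through the directories listed in $PATH, while ./hadoop is an explicit path that bypasses that lookup entirely. A quick way to confirm and fix, assuming a typical install (the /usr/local/hadoop location below is an example; adjust it to yours):
$ type hadoop                                # "not found" means no $PATH entry matches
$ echo $PATH                                 # the hadoop bin directory is likely missing here
$ export PATH=$PATH:/usr/local/hadoop/bin    # example path; point it at your bin folder
$ hadoop version                             # should now work without the ./ prefix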

Related

Format local file system HDFS and launch Hadoop

I am working through OpenClassroom and trying to understand Hadoop, but I am having some problems installing it (I am fairly new to Linux):
I have installed and configured Hadoop (I changed the files etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml, etc/hadoop/mapred-site.xml and etc/hadoop/yarn-site.xml as instructed on the website).
Then the tutorial tells me to run the following to launch Hadoop:
$ hdfs namenode -format
$ start-dfs.sh
$ start-yarn.sh
But when I do, it gives me:
hdfs: command not found
What am I doing wrong?
The issue can happen in the below scenarios:
1) The hdfs binaries are not installed properly.
2) The location of the hdfs executable script is not present in the $PATH of the user who is executing the command.
To verify, try the steps below:
A) Confirm that the hdfs binaries are installed by navigating to the "/usr/hdp/<version>/hadoop-hdfs/bin" directory.
B) Check whether the /usr/bin directory and HADOOP_HOME are present in the $PATH environment variable (echo $PATH).
C) Check the output of the command ls -ltr /usr/bin/hdfs. By default a softlink to the hdfs script is created in the /usr/bin directory.
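If those checks fail, a minimal sketch for locating the script and putting its directory on $PATH (this assumes the binaries are installed somewhere under /usr; adjust the search root to your layout):
$ HDFS_SCRIPT=$(find /usr -name hdfs -type f 2>/dev/null | head -1)   # locate the hdfs script
$ export PATH=$PATH:$(dirname "$HDFS_SCRIPT")                         # put its directory on PATH
$ hdfs version                                                        # should now resolve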

Why am I getting command not found in Hadoop?

I am working on a Hadoop project on Ubuntu 14.04. Whenever I run start-all.sh or start-dfs.sh, I get a "command not found" message. What should I do?
You are not running the command in the right environment.
The start-all.sh (deprecated) and start-dfs.sh scripts live under your Hadoop home directory, in bin for Hadoop 1.x and sbin for Hadoop 2.x. Find your Hadoop home directory, locate the folder that contains the scripts, and run the command from there:
./start-dfs.sh
Add the line below to your ~/.bashrc:
export PATH=$PATH:$HADOOP_HOME/bin
then run source ~/.bashrc. The command should work now.
This usually means that Hadoop's bin directory has not been added to the PATH environment variable.
Edit the /etc/profile file (vi /etc/profile):
export HADOOP_HOME=/usr/hadoop # the directory where your Hadoop is installed
export PATH=$HADOOP_HOME/bin:$PATH
then
source /etc/profile
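Putting the last two answers together, a minimal sketch of the ~/.bashrc (or /etc/profile) entries, assuming a Hadoop 2.x install at /usr/local/hadoop (an example location; adjust to yours). Note that start-dfs.sh and start-yarn.sh live in sbin, so both directories belong on PATH:
export HADOOP_HOME=/usr/local/hadoop                  # example; point at your install
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin  # bin for hadoop/hdfs, sbin for start-*.sh
Then reload and test:
$ source ~/.bashrc
$ hadoop version && start-dfs.sh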

Apache Spark with Hadoop distribution failing to run on Windows

I tried running the spark-1.5.1-bin-hadoop2.6 distribution (and newer versions of Spark, with the same results) on Windows using Cygwin.
When trying to execute the spark-shell script in the bin folder, I get the output below:
Error: Could not find or load main class org.apache.spark.launcher.Main
I tried to set CLASSPATH to the location of lib/spark-assembly-1.5.1-hadoop2.6.0.jar but to no avail.
(FYI: I am able to run the same distribution fine on my Mac with no extra setup steps required.)
Please assist in finding a resolution for Cygwin execution on Windows.
I ran into and solved a similar problem with Cygwin on Windows 10 and spark-1.6.0.
build with Maven (maybe you're past this step)
mvn -DskipTests package
make sure JAVA_HOME is set to a JDK
$ export JAVA_HOME="C:\Program Files\Java\jdk1.8.0_60"
$ ls "$JAVA_HOME"
bin include LICENSE THIRDPARTYLICENSEREADME.txt ....
use the Windows batch file. Launch from PowerShell or Command Prompt if you have terminal problems with Cygwin.
$ chmod a+x bin/spark-shell.cmd
$ ./bin/spark-shell.cmd
My solution to the problem was to move the Spark installation into a path that didn't have spaces in it. Under Program Files I got the above error, but moving it directly under C:\ and running the spark-shell.cmd file cleared it up.
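For reference, a sketch of that workaround from a Cygwin shell, assuming the distribution was unpacked under Program Files (folder names are examples):
$ mv "/cygdrive/c/Program Files/spark-1.5.1-bin-hadoop2.6" /cygdrive/c/spark
$ cd /cygdrive/c/spark
$ ./bin/spark-shell.cmd   # no spaces anywhere in the path now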

bin/hadoop no such file or directory

I'm trying to install Hadoop 2.6 on Ubuntu 14.04.
When I run this command line
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
this is the terminal session:
araziz@araziz-HP-EliteBook-8440p:~$ cd hadoop
araziz@araziz-HP-EliteBook-8440p:~/hadoop$ ls
hadoop-2.6.0-src hadoop-2.6.0-src.tar.gz
araziz@araziz-HP-EliteBook-8440p:~/hadoop$ cd ha*
araziz@araziz-HP-EliteBook-8440p:~/hadoop/hadoop-2.6.0-src$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
bash: bin/hadoop: No such file or directory
In all Hadoop tutorials, bin/hadoop is the location of the hadoop executable; you will also see it written as $HADOOP_HOME/bin/hadoop, where $HADOOP_HOME is the directory in which Hadoop is installed. In my case it's /usr/local/hadoop, but again, it depends on the instructions you are following, so check your tutorial more closely! Also note that your session shows you inside hadoop-2.6.0-src, the source release, which contains no prebuilt bin/hadoop until you build it (or download the binary distribution instead).
Before running Hadoop commands you need to set $HADOOP_HOME in your .bashrc file.
To help in these situations, I've created some scripts in this repository: https://github.com/lalosam/EasyHadoop.
The hadoop.sh script downloads, unpacks, and configures Hadoop, installs the required dependencies, and sets the environment variables according to the latest (Hadoop 2.7.1) official Getting Started tutorial. I developed it on Linux Mint, but it should work on Ubuntu since they use the same package manager (apt-get).
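Since the session above unpacked the source tarball (hadoop-2.6.0-src.tar.gz), here is a minimal sketch of fetching the binary release instead, which does ship bin/hadoop (the archive URL is an example; verify it against the Apache download page):
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
$ tar xzf hadoop-2.6.0.tar.gz
$ cd hadoop-2.6.0
$ bin/hadoop version   # should print the version, not "No such file or directory"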

hadoop single node setup bash: bin/hadoop permission denied

I get an error when trying to set up a single-node Hadoop cluster at the step of formatting the namenode.
The command:
bin/hadoop namenode -format
The error:
bash: bin/hadoop: Permission denied
I tried this on Ubuntu 12.10, 12.04, and 11.04 and got the same error on all of them.
What can I do?
Change the permissions of your HADOOP_HOME; for detailed help you can visit this link.
Check the permissions of the hadoop script:
ls -l ./bin/hadoop
Thanks a lot, all.
The files in the bin directory were missing execute permission; it's working now after granting execute permission with the chmod command.
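For completeness, a sketch of the fix described above, run from the Hadoop home directory (the sbin line is a guess for 2.x layouts and is harmless if the directory doesn't exist):
$ chmod +x bin/*                          # grant execute permission on the launcher scripts
$ chmod +x sbin/* 2>/dev/null || true     # Hadoop 2.x keeps start-*.sh here
$ bin/hadoop namenode -format             # should now run instead of "Permission denied"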
