How to start Datanode? (Cannot find start-dfs.sh script) - hadoop

We are setting up automated deployments on a headless system, so using the GUI is not an option here.
Where is the start-dfs.sh script for HDFS in the Hortonworks Data Platform? CDH / Cloudera packages those files under the hadoop/sbin directory. However, when we search for those scripts under HDP they are not found:
$ pwd
/usr/hdp/current
Which scripts exist in HDP?
[stack@s1-639016 current]$ find -L . -name \*.sh
./hadoop-hdfs-client/sbin/refresh-namenodes.sh
./hadoop-hdfs-client/sbin/distribute-exclude.sh
./hadoop-hdfs-datanode/sbin/refresh-namenodes.sh
./hadoop-hdfs-datanode/sbin/distribute-exclude.sh
./hadoop-hdfs-nfs3/sbin/refresh-namenodes.sh
./hadoop-hdfs-nfs3/sbin/distribute-exclude.sh
./hadoop-hdfs-secondarynamenode/sbin/refresh-namenodes.sh
./hadoop-hdfs-secondarynamenode/sbin/distribute-exclude.sh
./hadoop-hdfs-namenode/sbin/refresh-namenodes.sh
./hadoop-hdfs-namenode/sbin/distribute-exclude.sh
./hadoop-hdfs-journalnode/sbin/refresh-namenodes.sh
./hadoop-hdfs-journalnode/sbin/distribute-exclude.sh
./hadoop-hdfs-portmap/sbin/refresh-namenodes.sh
./hadoop-hdfs-portmap/sbin/distribute-exclude.sh
./hadoop-client/sbin/hadoop-daemon.sh
./hadoop-client/sbin/slaves.sh
./hadoop-client/sbin/hadoop-daemons.sh
./hadoop-client/etc/hadoop/hadoop-env.sh
./hadoop-client/etc/hadoop/kms-env.sh
./hadoop-client/etc/hadoop/mapred-env.sh
./hadoop-client/conf/hadoop-env.sh
./hadoop-client/conf/kms-env.sh
./hadoop-client/conf/mapred-env.sh
./hadoop-client/libexec/kms-config.sh
./hadoop-client/libexec/init-hdfs.sh
./hadoop-client/libexec/hadoop-layout.sh
./hadoop-client/libexec/hadoop-config.sh
./hadoop-client/libexec/hdfs-config.sh
./zookeeper-client/conf/zookeeper-env.sh
./zookeeper-client/bin/zkCli.sh
./zookeeper-client/bin/zkCleanup.sh
./zookeeper-client/bin/zkServer-initialize.sh
./zookeeper-client/bin/zkEnv.sh
./zookeeper-client/bin/zkServer.sh
Notice: there are ZERO start/stop .sh scripts.
In particular, I am interested in the start-dfs.sh script that starts the namenode(s), journalnode, and datanodes.

How to start DataNode
su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode";
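The same hadoop-daemon.sh can manage the other HDFS daemons as well; a sketch, assuming the same layout and config directory as above:
su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"
su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start journalnode"
# stopping works the same way, with "stop" in place of "start":
su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"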
GitHub - Hortonworks Start Scripts
Update
Decided to hunt for it myself.
Spun up a single node with Ambari and installed HDP 2.2 (a) and HDP 2.3 (b), then ran:
sudo find / -name \*.sh | grep start
Found:
(a) /usr/hdp/2.2.8.0-3150/hadoop/src/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
Weird that it doesn't exist in /usr/hdp/current, which should be symlinked.
(b) /hadoop/yarn/local/filecache/10/mapreduce.tar.gz/hadoop/sbin/start-dfs.sh
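Note that the filecache path in (b) works because YARN localizes mapreduce.tar.gz by unpacking it into a directory of the same name, so the stock scripts can be copied straight out of it. A sketch; the cache slot number (10 here) will differ per node:
# copy the stock start script out of the localized tarball directory
cp /hadoop/yarn/local/filecache/10/mapreduce.tar.gz/hadoop/sbin/start-dfs.sh ~/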

The recommended way to administer your Hadoop cluster is via the administration panel. Since you are working with the Hortonworks distribution, it makes more sense for you to use Ambari.

Related

start-all.sh command not found

I have just installed the Cloudera VM setup for Hadoop. But when I open the command prompt and want to start all the Hadoop daemons using the command 'start-all.sh', I get an error stating "bash: start-all.sh: command not found".
I have tried 'start-dfs.sh' too, yet it still gives the same error. When I use the 'jps' command, I can see that none of the daemons have been started.
You can find the start-all.sh and start-dfs.sh scripts in the bin or sbin folders. Go to the Hadoop installation folder and run the following command to locate them.
find . -name 'start-all.sh'  # finds files named start-all.sh
Then you can specify the path to start all the daemons using bash /path/to/start-all.sh
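For instance, on a hypothetical CDH-style install under /usr/lib/hadoop (paths are illustrative and will differ on your VM):
cd /usr/lib/hadoop              # hypothetical installation folder
find . -name 'start-all.sh'     # suppose it prints ./sbin/start-all.sh
bash ./sbin/start-all.sh        # run it from whatever path find reported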
If you're using the QuickStart VM, then the right way to start the cluster (as @cricket_007 hinted) is by restarting it in the Cloudera Manager UI. The start-all.sh scripts will not work, since those only apply to the Hadoop servers (NameNode, DataNode, ResourceManager, NodeManager, ...) but not to all the services in the ecosystem (like Hive, Impala, Spark, Oozie, Hue, ...).
You can refer to the YouTube video and the official documentation, "Starting, Stopping, Refreshing, and Restarting a Cluster".

What are the required steps to use Beeline to query a remote Hadoop instance?

I have a Hadoop cluster running on another server. I am able to ssh into that server and use Hive to run queries. I'm trying to determine if I can query that server remotely, using Hive or Beeline; I would prefer Beeline, since it's not the one being deprecated.
I used Homebrew to install Hadoop and Hive. However, it complains about missing environment variables and paths. But it seems like those things are set, so I must not have configured them correctly. So, what are the steps I need to go through to execute queries on a remote Hadoop instance from my Mac? Do I have to go through all the steps to set up a local Hadoop instance just so I can query a remote one?
~ (master) 10:24:30
# next line is from the docs
$ beeline -u jdbc:hive2://localhost:10000/default -n scott -w password_file
Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path
~ (master) 10:25:05
$ which hadoop
/usr/local/bin/hadoop
~ (master) 10:25:18
$ echo $HADOOP_HOME
/usr/local/Cellar/hadoop/2.7.3/bin
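Note that $HADOOP_HOME above points at the bin directory rather than the installation root, which is the likely cause of the "Cannot find hadoop installation" error. A minimal sketch of a fix, assuming Homebrew's usual layout where the actual distribution lives under libexec:
# assumption: Homebrew keeps the real Hadoop distribution under libexec
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.7.3/libexec
# then point Beeline at the remote HiveServer2 (remote-host is a placeholder):
beeline -u jdbc:hive2://remote-host:10000/default -n scott -w password_file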

Could not find and execute start-all.sh and Stop-all.sh on Cloudera VM for Hadoop

How to start / stop services from the command line in CDH4? I am new to Hadoop and installed the VM from Cloudera. I could not find start-all.sh and stop-all.sh. How do I stop or start the TaskTracker or DataNode if I want to? It is a single-node cluster which I am using on CentOS. I haven't done any modifications.
Moreover, I see there are differences in the directory structures across all flavours. I could not locate these .sh files on the VM for my installation.
[cloudera@localhost ~]$ stop-all.sh
bash: stop-all.sh: command not found
Highly appreciate your support.
Use sudo su hdfs to start the services; to stop them, just type exit and it will stop all the services.
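On packaged CDH installs the daemons are usually managed as system services rather than via start-all.sh. A sketch, assuming standard CDH4 service names (check /etc/init.d for what is actually installed on your node):
ls /etc/init.d/ | grep hadoop                             # list installed Hadoop services
sudo service hadoop-hdfs-datanode stop                    # stop the DataNode
sudo service hadoop-hdfs-datanode start                   # start it again
sudo service hadoop-0.20-mapreduce-tasktracker restart    # TaskTracker (MRv1)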

MapR client not executing hadoop - Windows

I have an Amazon Windows VM where I installed MapR-Client 2.1.2, and a MapR cluster waiting for the jobs to be executed.
I set up MAPR_HOME in C:\opt\mapr, and when I execute
hadoop fs -ls /
from C:\opt\mapr\hadoop\hadoop-0.20.2\bin I get:
The system cannot find the path specified
I also configured the MapR-Client with
server\configure.bat -c -C <ip_addr>:7222
and in config\mapr-clusters.conf I can see:
my.cluster.com <ip_addr>:7222
Also, I made sure I'm able to ping and ssh from the Windows MapR client to the cluster master ...
Could someone please help me with that?
Please use
hadoop.bat fs -ls /
and let me know if it's working.

HADOOP_HOME and hadoop streaming

Hi, I am trying to run Hadoop on a server that has Hadoop installed, but I have no idea which directory Hadoop resides in. The server was configured by the server admin.
In order to load Hadoop, I use the use command from the dotkit package.
There may be several solutions, but I wanted to know where the Hadoop package was installed, how to set up the $HADOOP_HOME variable, and how to properly run a Hadoop streaming job, such as $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/mapred/contrib/streaming/hadoop-streaming.jar (see http://wiki.apache.org/hadoop/HadoopStreaming).
Thanks! Any help would be greatly appreciated!
If you're using a Cloudera distribution, then it's most probably in /usr/lib/hadoop; otherwise it could be anywhere (at the discretion of your system admin).
There are some tricks you can use to try and locate it:
locate hadoop-env.sh (assuming that locate has been installed and updatedb has been run recently)
If the machine you're running this on is running a hadoop service (such as data node, job tracker, task tracker, name node), then you can perform a process list and grep for the hadoop command: ps axww | grep hadoop
Failing the above two, look for the hadoop root directory in some common locations such as: /usr/lib, /usr/local, /opt
Failing all this, and assuming your current user has the permissions: find / -name hadoop-env.sh
If you installed with rpm, then it's most probably in /etc/hadoop.
Why don't you try:
echo $HADOOP_HOME
Obviously, the above env variable has to be set before you can even run Hadoop executables from anywhere on the box.
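Once HADOOP_HOME has been located and exported, a streaming job can be launched along the lines of the wiki example; the paths below are illustrative, and the streaming jar location varies between versions:
export HADOOP_HOME=/usr/lib/hadoop    # hypothetical; use wherever hadoop-env.sh was found
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input /user/me/input \
    -output /user/me/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc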
