How do I start Ambari Hortonworks services? - hadoop

I just installed the Hortonworks Sandbox via VirtualBox, and when I started Ambari every service was red, as you can see in this screenshot. Have I missed something? I'm a beginner in Hadoop.

Actually, when the HDP Sandbox starts, all services go into the starting stage except Storm, Atlas and HBase (this can be checked via the gear icon on the top right, from where you can see the reason behind the failed services).
Try to manually start the services in the following order; a REST alternative is sketched after the list:
Zookeeper
HDFS
YARN
MapReduce
Hive
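If the Start buttons in the UI are unresponsive, the same thing can be done through the Ambari REST API. This is only a sketch: it assumes the Sandbox defaults (cluster name Sandbox, admin/admin credentials, Ambari on port 8080), which you should verify on your install.

# Ask Ambari to move one service to the STARTED state (Sandbox defaults assumed)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start service"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/ZOOKEEPER

Repeat with HDFS, YARN, MAPREDUCE2 and HIVE in place of ZOOKEEPER to keep the order above.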

Related

Cannot start Ambari services

I installed the Hortonworks Sandbox via VirtualBox, and when I started Ambari every service was stopped, as you can see in this screenshot.
I tried to start each of the services manually, but nothing happens when I click the start button. On top of that, I have many errors in my notifications section.
Also, this is what my ambari-agent logs look like: log1 log2
Any idea how I can resolve this?
You must first start HDFS. You may get warnings that require executing this command:
hdfs dfsadmin -safemode leave
Once HDFS is started, you can begin to start YARN and the other services. If you do not have enough resources in your environment, you will not be able to start everything, so only start what you need.
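To confirm HDFS is actually up before moving on, two standard HDFS admin commands help (these are stock Hadoop commands, not specific to HDP):

hdfs dfsadmin -safemode get
hdfs dfsadmin -report

The first should report Safe mode is OFF; the second should list live DataNodes.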

What's the order to start up HDP services manually?

I face some problems launching Hortonworks services through Ambari when starting them all at once, so I decided to start those services manually, but I'm not sure if there is an order I should respect when starting them. I've installed almost all of the services available on the Hortonworks Data Platform.
To start Hortonworks Data Platform services manually through Ambari, there is an order to respect; the following list shows the most frequently used services on HDP, in start order:
Ranger
Knox
ZooKeeper
HDFS
YARN
HBase
Hive Metastore
HiveServer2
WebHCat
Oozie
Hue
Storm
Kafka
Atlas
To be precise, Ambari starts services by following the role command order definition files. These files may be defined per service, or once for the entire stack.
So you may take a look at the role_command_order.json files in your stack. For example, here is the role_command_order.json file for the HDP-2.0.6 stack.
If the role_command_order.json file is missing, it is inherited from some other stack. For example, the <extends> tag here means that the HDP-2.6 stack extends the HDP-2.5 stack. Basically, all HDP-2.x stacks inherit
the role_command_order.json file from the HDP-2.0.6 stack.
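If you want to see the ordering Ambari actually enforces, you can inspect the file on the Ambari server host. The path below assumes a default Ambari server install; adjust the stack version to match yours:

# Print the role command order definition for the HDP-2.0.6 stack
cat /var/lib/ambari-server/resources/stacks/HDP/2.0.6/role_command_order.json

Inside, the general_deps section maps a component command to its prerequisites, e.g. an entry like "HIVE_SERVER-START": ["NODEMANAGER-START"] would mean HiveServer2 is only started after the NodeManagers are up.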

Spark History UI not working | Ambari | YARN

I have a Hadoop cluster set up using Ambari, with services like HDFS, YARN and Spark running on the hosts.
When I run the sample Spark Pi in cluster mode with master yarn, the application executes successfully and I can view it from the ResourceManager logs.
But when I click on the history link, it does not show the Spark history UI. How can I enable/view it?
First, check whether your Spark history server is already configured by looking for spark.yarn.historyServer.address in the spark-defaults.conf file.
If not configured, this link should help you configure the server: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.6/bk_installing_manually_book/content/ch19s04s01.html
If it is already configured, check that the history server host is reachable from all the nodes in the cluster and that the port is open.
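For the first check, a grep is usually enough; the config path below assumes a typical HDP layout, and history-server-host is a placeholder for your actual host:

# Is the history server address configured?
grep spark.yarn.historyServer.address /etc/spark/conf/spark-defaults.conf
# Is the default history server port (18080) reachable? Run from each node if in doubt.
nc -zv history-server-host 18080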

HDP 2.5: Spark History Server UI won't show incomplete applications

I set up a new Hadoop cluster with Hortonworks Data Platform 2.5. In the "old" cluster (HDP 2.4) I was able to see information about running Spark jobs via the History Server UI by clicking the link show incomplete applications:
Within the new installation this link opens the page, but it always says No incomplete applications found! (even when an application is still running).
I just saw that the YARN ResourceManager UI shows two different kinds of links in the "Tracking UI" column, depending on the status of the Spark application:
application running: Application Master
this link opens http://master_url:8088/proxy/application_1480327991583_0010/
application finished: History
this link opens http://master_url:18080/history/application_1480327991583_0009/jobs/
Via the YARN RM link I can see the running Spark app's info, but why can't I access it via the Spark History Server UI? Was something changed from HDP 2.4 to 2.5?
I solved it; it was a network problem: some of the cluster hosts (Spark slaves) couldn't reach each other due to an incorrect switch configuration. I found this out by trying to ping each host from every other one.
Now that all hosts can ping each other, the problem is gone and I can see active and finished jobs in my Spark History Server UI again!
I hadn't noticed the problem because the ambari-agents worked on each host, and the ambari-server was also reachable from each cluster host. However, once ALL hosts could reach each other, the problem was solved!
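For anyone hitting something similar: a minimal connectivity check, assuming a hosts.txt file listing one cluster hostname per line, run from every node:

# Ping each cluster host once, with a 2-second timeout per host
for h in $(cat hosts.txt); do
  ping -c 1 -W 2 "$h" > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"
done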

Hadoop Web Interface <ip_address>:50070 is not working

I have set up a single-node Hadoop cluster on CentOS 7. It installed successfully and all the daemons are up.
My challenge is that when I go to the web interface at IP_Address:50070, it does not show anything. Please suggest how I can resolve this.
Things I tried:
Reconfigured properties, formatted HDFS and restarted all the daemons.
Please suggest. Thanks!
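A few generic checks that often narrow this down on CentOS 7 (a diagnostic sketch, not a confirmed fix; firewalld in particular is enabled by default and commonly blocks port 50070):

jps                          # is the NameNode process actually running?
ss -tlnp | grep 50070        # is anything listening on the web UI port?
systemctl status firewalld   # firewalld may be filtering the port
# If firewalld is the culprit, opening the port is one option:
sudo firewall-cmd --add-port=50070/tcp --permanent && sudo firewall-cmd --reload

Also check dfs.namenode.http-address in hdfs-site.xml; if it is bound to 127.0.0.1, the UI answers only locally.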
