Hortonworks Ambari services not started - hortonworks-data-platform

I am using the Hortonworks sandbox in VirtualBox. I am not able to start Ambari services like Hive, Spark, etc. I am going through the steps below:
http://localhost:1080
Advanced HDP quicklink
http://localhost:4200/ -> log in as root / hadoop
Ambari (http://localhost:8080/) -> not able to start services here.

Please start the following services manually, in the mentioned order (a REST API sketch follows the list):
1. Zookeeper
2. HDFS
3. YARN
4. MapReduce
5. HIVE
6. Spark
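If the Ambari UI keeps failing, the same start sequence can be driven through the Ambari REST API. This is only a sketch: it assumes the sandbox defaults (admin/admin credentials, a cluster named "Sandbox") and the usual stack service names; on some stacks Spark is registered as SPARK2, so check the names under /api/v1/clusters/Sandbox/services first.

    #!/usr/bin/env bash
    # Sketch: start services in dependency order via the Ambari REST API.
    # Assumes sandbox defaults (admin/admin, cluster "Sandbox"); adjust as needed.
    AMBARI="http://localhost:8080"
    CLUSTER="Sandbox"
    for SERVICE in ZOOKEEPER HDFS YARN MAPREDUCE2 HIVE SPARK; do
      echo "Starting ${SERVICE}..."
      curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
        -d '{"RequestInfo":{"context":"Start '"${SERVICE}"'"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
        "${AMBARI}/api/v1/clusters/${CLUSTER}/services/${SERVICE}"
      # The PUT is asynchronous; give each service time to come up (or poll
      # the request resource Ambari returns) before starting the next one.
      sleep 60
    done

Each PUT returns a request resource you can poll for progress; the sleep is just a crude stand-in for that.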
Another thing: I've also used the HDP Sandbox, and the default user with administration-level access is "raj_ops". The default credentials for this user are:
Username: raj_ops
Password: raj_ops
Please tell me whether this is useful or not.

Related

How do I start Ambari Hortonworks services?

I just installed the Hortonworks Sandbox via VirtualBox. When I started Ambari, every service was red, as you can see in this screenshot. Have I missed something? I'm a beginner in Hadoop.
Actually, when we start the HDP Sandbox, all services go into the starting stage except Storm, Atlas, and HBase (this can be checked via the gear icon at the top right, where you can see the reason behind the failed services).
Try to manually start the services in the following order (a quick state check via the REST API is sketched after the list):
Zookeeper
HDFS
YARN
MapReduce
Hive
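Before retrying, it can help to see which services are actually down. A minimal sketch against the Ambari REST API, again assuming the default admin/admin login and the sandbox cluster name:

    # List every service with its current state
    # (STARTED = running, INSTALLED = stopped).
    curl -s -u admin:admin \
      "http://localhost:8080/api/v1/clusters/Sandbox/services?fields=ServiceInfo/state"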

Does Apache Kylin need Apache Derby or MySQL to run the sample cube?

I installed Java, Hadoop, HBase, Hive, Spark, and Kylin:
hadoop-3.0.3
hbase-1.2.6
apache-hive-2.3.3-bin
spark-2.2.2-bin-without-hadoop
apache-kylin-2.3.1-bin
I would be grateful if someone could help me with Kylin's installation and configuration.
http://kylin.apache.org/docs/ may help you. You can send an email to dev@kylin.apache.org, and the questions will be discussed and answered on the mailing list. Some tips for the email: 1. provide the Kylin version, 2. provide log information, 3. provide the usage scenario. If you want a quick start, you can run Kylin in a Hadoop sandbox VM or in the cloud; for example, start a small AWS EMR or Azure HDInsight cluster and install Kylin on one of the nodes. When you use Kylin 2.3.1, I suggest you use Spark 2.1.2.
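For the sample cube specifically: Kylin 2.x keeps its metadata in HBase by default, so neither Derby nor MySQL should be strictly required. The binary package ships a script that loads the sample data and cube metadata; a rough sketch, assuming KYLIN_HOME points at your unpacked apache-kylin-2.3.1-bin directory and Hadoop/Hive/HBase are already running:

    # Load the bundled sample (sample Hive tables + cube metadata), then start Kylin.
    export KYLIN_HOME=/usr/local/apache-kylin-2.3.1-bin   # adjust to your install path
    ${KYLIN_HOME}/bin/sample.sh
    ${KYLIN_HOME}/bin/kylin.sh start
    # Then open http://localhost:7070/kylin (default login ADMIN/KYLIN)
    # and build the sample cube from the Web UI.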

What's the order to start up HDP services manually?

I face some problems launching Hortonworks services through Ambari when starting all services, so I decided to start those services manually, and I'm not sure if there is an order I should respect when starting them. I've installed almost all the services that can be found on the Hortonworks Data Platform.
To start Hortonworks Data Platform services manually through Ambari, there is an order to respect. The following list covers the most frequent services we can use on HDP:
Ranger
Knox
ZooKeeper
HDFS
YARN
HBase
Hive Metastore
HiveServer2
WebHCat
Oozie
Hue
Storm
Kafka
Atlas
To be precise, Ambari starts services by following the role command order definition files. These files may be defined per service, or once for the entire stack.
So you may take a look at the role_command_order.json files in your stack. For example, here is the role_command_order.json file for the HDP-2.0.6 stack.
If the role_command_order.json file is missing, then it is inherited from some other stack. For example, the <extends> tag here means that the HDP-2.6 stack extends the HDP-2.5 stack. Basically, all HDP-2.x stacks inherit the role_command_order.json file from the HDP-2.0.6 stack.
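If you want to see the order Ambari will actually use on your own cluster, you can inspect those files on the Ambari server host. A sketch; the path below is typical for an Ambari server install, but may differ by version:

    # Role command order for the base HDP-2.0.6 stack definition:
    cat /var/lib/ambari-server/resources/stacks/HDP/2.0.6/role_command_order.json
    # Stacks without their own file inherit from the stack named in <extends>:
    grep "<extends>" /var/lib/ambari-server/resources/stacks/HDP/*/metainfo.xml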

Installing Hue for an HDInsight HDP cluster

I am aware of installing Hue for an HDInsight HDP cluster by deploying it on an edge node of the cluster (using a script action, link); it works fine, but it asks for the cluster credentials first and then directs me to the Hue login page. Is there a way to get rid of those credentials?
Otherwise, is it possible to deploy Hue on a remote system and then point it to my HDInsight HDP cluster? If so, how do I go about it?
And which of the above two approaches is better?
Based on my understanding and experience, here are answers to your questions.
There is no way to get rid of those credentials, because the credentials authenticate the Resource Management Template deployment, not only the cluster.
It's not possible to deploy Hue on a remote system, because "Hue consists of a web service that runs on a special node in your cluster," as the official Hue manual says here.
Hope it helps.

Spark History UI not working | Ambari | YARN

I have a Hadoop cluster set up using Ambari, with services like HDFS, YARN, and Spark running on the hosts.
When I run the sample Spark Pi in cluster mode with master yarn, the application executes successfully, and I can see it in the Resource Manager logs.
But when I click on the history link, it does not show the Spark History UI. How do I enable/view it?
First, check whether your Spark history server is already configured by looking for spark.yarn.historyServer.address in the spark-defaults.conf file.
If it is not configured, this link should help you configure the server: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.6/bk_installing_manually_book/content/ch19s04s01.html
If it is already configured, check that the history server host is accessible from all the nodes in the cluster, and that its port is open. A quick sketch of both checks follows.
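For reference, a minimal sketch of those checks (the config path assumes an HDP-style layout, and <history-host> is a placeholder for your history server host):

    # 1. Is the history server configured at all?
    grep spark.yarn.historyServer.address /etc/spark/conf/spark-defaults.conf

    # Typical spark-defaults.conf entries that make YARN's history link work:
    #   spark.eventLog.enabled           true
    #   spark.eventLog.dir               hdfs:///spark-history
    #   spark.history.fs.logDirectory    hdfs:///spark-history
    #   spark.yarn.historyServer.address <history-host>:18080

    # 2. Is the host/port reachable from the cluster nodes (default port 18080)?
    curl -sI http://<history-host>:18080 | head -n 1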
