How can I make multiple HiveServer2 in HDP 2.2 and Ambari 1.7 - hadoop

I currently have 1 HiveServer2 in a 30-node cluster, and now I want to run 4 HiveServer2 daemons managed through Ambari. How can I set up multiple HiveServer2 instances in HDP 2.2 with Ambari 1.7?
I know we can start HiveServer2 directly and set the properties for service discovery, but a daemon started that way can't be monitored through Ambari.

It's only supported in Ambari 2.0.0: https://issues.apache.org/jira/browse/AMBARI-8906
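For reference, the "auto search" mentioned in the question is HiveServer2 dynamic service discovery via ZooKeeper. Once multiple HiveServer2 instances are running (Ambari-managed or not), the relevant hive-site.xml settings typically look like this; the host names are placeholders:

```xml
<!-- Register each HiveServer2 instance in ZooKeeper so clients can discover one -->
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```

Clients would then connect with a discovery-mode JDBC URL such as `jdbc:hive2://zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2`, and ZooKeeper picks one of the registered instances.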

Related

How can I connect to Remote Linux Nodes on which Hadoop is installed using Apache Nifi instance installed on my local Windows Box?

I have installed Apache NiFi 1.1.1 on my local Windows system. How can I connect from this local NiFi instance to remote Linux nodes on which Hadoop is installed?
Also, how can I perform data migration against those remote Hadoop nodes using this local NiFi instance?
Kerberos is enabled on the remote Hadoop cluster.
The "Unsupported major.minor version" error occurs because Apache NiFi 1.x requires Java 8 and you tried to start it with a Java 7 JVM. You can install a Java 8 JDK just for NiFi to use, leave everything Hadoop-related on Java 7, and point NiFi at it by setting JAVA_HOME in bin/nifi-env.sh:
export JAVA_HOME=/path/to/jdk1.8.0/
If you are trying to connect NiFi on your local Windows system to remote Hadoop nodes, you will need the core-site.xml and hdfs-site.xml from your Hadoop cluster, and since Kerberos is enabled you will also need the krb5.conf file from one of your Hadoop servers (/etc/krb5.conf).
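As a sketch of where those files plug in, assuming the stock PutHDFS/ListHDFS processors and that the copied files live under a local directory such as C:\nifi\conf (the paths, principal, and keytab name below are placeholders):

```properties
# conf/nifi.properties – point NiFi at the cluster's Kerberos configuration
nifi.kerberos.krb5.file=C:\\nifi\\conf\\krb5.conf

# PutHDFS / ListHDFS processor properties (set in the NiFi UI):
#   Hadoop Configuration Resources : C:\nifi\conf\core-site.xml,C:\nifi\conf\hdfs-site.xml
#   Kerberos Principal             : nifi@EXAMPLE.COM
#   Kerberos Keytab                : C:\nifi\conf\nifi.keytab
```

With those set, the HDFS processors authenticate to the kerberized cluster themselves, so data migration flows (e.g. ListHDFS into PutHDFS) can run from the local Windows instance.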

HDP 2.5: Zeppelin won't run Notebook in Kerberos-enabled cluster

I set up a Hadoop cluster with Hortonworks Data Platform 2.5 and Ambari 2.4. I also added the Zeppelin service to the cluster installation via Ambari UI.
Since I enabled Kerberos, I can't run the Zeppelin Notebooks anymore. When I click "Run paragraph" or "Run all paragraphs" nothing seems to happen. I also don't get any new entries in my logs in /var/log/zeppelin/. Before enabling Kerberos I was able to run the paragraphs.
I tried some example notebooks and also some of my own, with the same problem: nothing happens. I tried with both admin and non-admin users.
Here are my "spark" and "sh" interpreter settings (paragraphs using other interpreters, e.g. %sql, don't work either):
The tutorial below captures the configuration of Ambari and Hadoop Kerberos:
Configuring Ambari and Hadoop for Kerberos
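For comparison, on a kerberized HDP cluster the Zeppelin server and its interpreters are usually given explicit principals and keytabs. Assuming the standard property names (the realm, host, and keytab paths below are placeholders), the settings look roughly like:

```properties
# zeppelin-site.xml (Advanced zeppelin-config in Ambari)
zeppelin.server.kerberos.principal=zeppelin/_HOST@EXAMPLE.COM
zeppelin.server.kerberos.keytab=/etc/security/keytabs/zeppelin.server.kerberos.keytab

# spark interpreter properties (Interpreter page in the Zeppelin UI)
spark.yarn.principal=zeppelin@EXAMPLE.COM
spark.yarn.keytab=/etc/security/keytabs/zeppelin.server.kerberos.keytab

# sh interpreter properties
shell.auth.type=KERBEROS
shell.principal=zeppelin@EXAMPLE.COM
shell.keytab.location=/etc/security/keytabs/zeppelin.server.kerberos.keytab
```

If these are missing after enabling Kerberos, the interpreters typically fail to obtain a ticket and paragraphs appear to do nothing, which matches the symptom described above.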

zookeeper and hadoop 2.6 + hbase 0.98

In Hadoop 2.6 with HBase 0.98, is it necessary to install ZooKeeper explicitly? When I run Hadoop and HBase I see a process named "HQuorumPeer", which I think is ZooKeeper. Does HBase include ZooKeeper, or should I install it separately?
https://www.quora.com/Do-I-have-to-install-Zookeeper-separately-even-for-HBase-Standalone-or-is-it-in-built-with-HBase-setup
If you are in standalone mode you don't need to; HBase starts its own bundled ZooKeeper (the HQuorumPeer process you see). Otherwise it can be better to install ZooKeeper separately on a different set of nodes.
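Whether HBase starts its bundled ZooKeeper (the HQuorumPeer process) is controlled by HBASE_MANAGES_ZK. Pointing HBase at an external ensemble would look roughly like this; the host names are placeholders:

```sh
# conf/hbase-env.sh – tell HBase not to start/stop ZooKeeper itself
export HBASE_MANAGES_ZK=false

# conf/hbase-site.xml – point HBase at the external ensemble:
#   hbase.zookeeper.quorum              = zk1.example.com,zk2.example.com,zk3.example.com
#   hbase.zookeeper.property.clientPort = 2181
```

With HBASE_MANAGES_ZK=true (the default in standalone setups), HBase launches HQuorumPeer for you and no separate ZooKeeper installation is needed.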

How to install impala on an already running hadoop cluster

I have an up-and-running 5-node Hadoop cluster. I want to install Impala on the HDFS cluster without Cloudera Manager. Can anyone guide me through the process, or point me to a link that does?
Thanks.

How to configure hue for apache hadoop, apache hbase, apache hive

I have a cluster of two machines running Apache Hadoop and Apache HBase. I have to use Hue so that I can interact with HBase and HDFS through a GUI. I have installed it successfully on my machine (Ubuntu 14.04), but it shows nothing about HDFS, tables, etc., and gives errors like:
1. Oozie server is not running
2. Could not connect to local:9090 (HBase Thrift server cannot be contacted)
How do I configure Hue so that it connects to my running cluster?
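Hue is wired to the cluster through hue.ini (commonly /etc/hue/conf/hue.ini or desktop/conf/hue.ini in a source install). Assuming default ports, the sections behind the errors above would look something like this; the host names are placeholders for your own machines:

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://namenode-host:8020
      # WebHDFS (or HttpFS) must be enabled for the HDFS file browser
      webhdfs_url=http://namenode-host:50070/webhdfs/v1

[hbase]
  # "Could not connect to local:9090" – Hue expects the HBase Thrift server here
  hbase_clusters=(Cluster|hbase-thrift-host:9090)

[liboozie]
  # "Oozie server is not running" – either start Oozie or point Hue at it
  oozie_url=http://oozie-host:11000/oozie
```

The HBase Thrift server is not started by HBase automatically; launching it (e.g. `hbase thrift start` on the HBase node) and restarting Hue should clear the 9090 error, and the Oozie warning disappears once an Oozie server is running at the configured URL (or can be ignored if you don't use Oozie).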
