Add Version Definition File URL in Ambari web UI

I'm trying to create a cluster in Ambari web UI.
Create cluster web UI (image)
I reach a point where I need to provide a Version Definition File URL.
Add version (image)
Where can I find this URL without being a Cloudera customer?

You would need to create a cluster services stack that defines what Ambari will install and manage.
https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services
One popular open-source collection of service definitions that can be used with Ambari is Apache Bigtop.
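If it helps to see what a server already knows, here is a minimal sketch (not from the original answer) that asks a running Ambari server for the stack definitions it ships with, via the v1 REST API; a Version Definition File ultimately points at one of these stack versions. The localhost:8080 address and admin/admin credentials are Ambari's defaults and are assumptions here.

# A minimal sketch: list the stacks and stack versions a running Ambari
# server already knows about. Host, port, and credentials are assumptions
# (Ambari defaults).
import requests

AMBARI = "http://localhost:8080/api/v1"  # assumption: default Ambari address
AUTH = ("admin", "admin")                # assumption: default credentials

stacks = requests.get(f"{AMBARI}/stacks", auth=AUTH)
stacks.raise_for_status()

for stack in stacks.json()["items"]:
    name = stack["Stacks"]["stack_name"]
    versions = requests.get(f"{AMBARI}/stacks/{name}/versions", auth=AUTH).json()
    for v in versions["items"]:
        print(name, v["Versions"]["stack_version"])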

Related

Can I integrate Ambari after installing Apache NiFi?

I have a 3-node Apache NiFi cluster and now I would like to put monitoring on top of it. Apache Ambari would be a good monitoring tool for it. Will I be able to integrate Ambari with NiFi, or do I need to install Ambari first and then use Ambari's features to install NiFi? Note: I'm using open-source software and not Hortonworks.
As far as I know, it is not possible to add an existing NiFi cluster as a new service to Ambari. You need to have Ambari first and then create and install the NiFi service through it.
Once it is installed, to monitor your NiFi service you can use the AmbariReportingTask available in NiFi to report statistics to the Ambari Metrics Collector (a sketch of the underlying call follows the links below).
Useful Links
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-ambari-nar/1.8.0/org.apache.nifi.reporting.ambari.AmbariReportingTask/index.html
https://pierrevillard.com/2017/05/16/monitoring-nifi-ambari-grafana/
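For illustration, here is a rough sketch of the kind of call the AmbariReportingTask makes under the hood: it POSTs JSON metrics to the Ambari Metrics Collector's timeline endpoint. Port 6188 is the collector's default; the host name, metric name, and values below are placeholders.

# A rough sketch of what the AmbariReportingTask does under the hood:
# POST JSON metrics to the Ambari Metrics Collector timeline endpoint.
# Collector host and the metric details are assumptions/placeholders.
import time
import requests

COLLECTOR = "http://ambari-metrics-host:6188/ws/v1/timeline/metrics"  # assumption

now_ms = int(time.time() * 1000)
payload = {
    "metrics": [
        {
            "metricname": "FlowFilesReceivedLast5Minutes",  # hypothetical metric name
            "appid": "nifi",
            "hostname": "nifi-node-1",                      # hypothetical host
            "timestamp": now_ms,
            "starttime": now_ms,
            "metrics": {str(now_ms): 42.0},                 # one timestamped data point
        }
    ]
}

resp = requests.post(COLLECTOR, json=payload)
print(resp.status_code)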

Apache Ambari installation

I'm trying to install the Ambari server and agents, and I have a question about Ambari.
Whenever I try to install it, everything links back to Hortonworks.
I have a Hadoop cluster of my own on Ubuntu 16.04. Will Ambari only work with HDP, or is it possible to make it work with custom-built clusters as well?
If possible, please share detailed, descriptive documentation.
It's not clear where you downloaded Ambari from, but it sounds like you used the Hortonworks version of it rather than the one directly from https://ambari.apache.org.
Ambari works with the concept of stacks. Each stack has a set of services and components. HDP is one such stack, but there are others, and you can even define your own (see the sketch below), so yes, you can manage your own Hadoop installation's components, though that would not be much different from what Hortonworks already provides.
Besides, the HDP services and components have been tested to work together more thoroughly than an off-the-shelf Hadoop installation.
If you don't want HDP components, there is also the Apache Bigtop project, which provides installation packages for many Hadoop-related services.
Ambari expects Java and Hadoop to be installed in a certain way, and I'm not sure how easy it is to set up for an existing Hadoop install.
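As a concrete illustration of defining your own stack, the wiki page linked in the previous answer describes service definitions that live under the Ambari server's resources directory. The sketch below scaffolds a minimal, hypothetical MYSERVICE definition; a real service also needs install/start/stop scripts under package/scripts, so treat this only as a starting point.

# A minimal sketch of scaffolding a custom Ambari service definition.
# The MYSTACK/MYSERVICE names are placeholders; the resources path is
# the default location on an Ambari server host.
from pathlib import Path

STACKS = Path("/var/lib/ambari-server/resources/stacks")  # default Ambari path
service_dir = STACKS / "MYSTACK" / "1.0" / "services" / "MYSERVICE"
service_dir.mkdir(parents=True, exist_ok=True)

metainfo = """<?xml version="1.0"?>
<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>MYSERVICE</name>
      <displayName>My Service</displayName>
      <comment>Example custom service</comment>
      <version>1.0.0</version>
    </service>
  </services>
</metainfo>
"""
(service_dir / "metainfo.xml").write_text(metainfo)
print("wrote", service_dir / "metainfo.xml")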

How do I install components such as Apache Drill and Apache Hue in IBM Bluemix BigInsights Apache Hadoop

I am new to the IBM Bluemix platform and am exploring its BigInsights service. I can see pre-configured components such as Pig, Hive, HBase, and others, but I want to know how I can install services like Drill or Hue, which are not configured by default. Also, SSH access to the cluster nodes is restricted, with no sudo rights in case one needs to run yum commands. Does Bluemix allow root access? I cannot see any. Thanks in advance.
As far as I know, it is not possible.
But you can use http://www.softlayer.com/ to build your own IOP (IBM Open Platform) cluster in the cloud.
If you are interested in IBM's value-adds and just want to try them out, this is a nice tutorial for setting up your own cluster via Docker: https://www.youtube.com/watch?v=4p7LDeu_qQQ
This tutorial should still be valid for Hue:
https://developer.ibm.com/hadoop/2015/06/02/deploying-hue-on-ibm-biginsights/
Installing Drill doesn't look complicated (a quick sanity check is sketched below):
https://drill.apache.org/docs/installing-drill-in-distributed-mode/
In conclusion: you need to move away from Bluemix if you want a more customised BigInsights. But there are options: Softlayer, AWS, .. or just your local computer (if you have sufficient resources, since some components like HBase need a minimum number of nodes).
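As a quick sanity check after a manual Drill install like the one linked above, each Drillbit exposes a web console (port 8047 by default) whose REST API includes a status endpoint. The host name below is a placeholder; this is only a sketch assuming default ports.

# Check that a Drillbit's web console is up and reporting a running state.
# Host and port are assumptions (8047 is Drill's default web console port).
import requests

resp = requests.get("http://drillbit-host:8047/status", timeout=5)
print(resp.status_code, resp.text)  # expect HTTP 200 with a "Running" message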

HBase region servers going down when trying to configure Apache Phoenix

I'm using CDH 5.3.1 and HBase 0.98.6-cdh5.3.1 and trying to configure Apache Phoenix 4.4.0.
As per the documentation provided in Apache Phoenix Installation:
Copied the phoenix-4.4.0-HBase-0.98-server.jar file into the lib directory (/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hbase/lib) on both the master and region servers.
Restarted the HBase service from Cloudera Manager.
When I check the HBase instances, I see the region servers are down, and I don't see any problems in the log files.
I even tried copying all the jars from the Phoenix folder and still face the same issue.
I have also tried to configure Phoenix 4.3.0 and 4.1.0, but still no luck.
Can someone point me to what else I need to configure, or anything else I need to do, to resolve this issue?
I was able to configure Apache Phoenix using parcels. The following are the steps to install Phoenix using Cloudera Manager:
1. In Cloudera Manager, go to Hosts, then Parcels.
2. Select Edit Settings.
3. Click the + sign next to an existing Remote Parcel Repository URL and add the following URL: http://archive.cloudera.com/cloudera-labs/phoenix/parcels/1.0/. Click Save Changes.
4. Select Hosts, then Parcels.
5. In the list of Parcel Names, CLABS_PHOENIX is now available. Select it and choose Download.
6. The first cluster is selected by default. To choose a different cluster for distribution, select it. Find CLABS_PHOENIX in the list and click Distribute.
7. If you plan to use secondary indexing, add the following to the hbase-site.xml advanced configuration snippet. Go to the HBase service, click Configuration, and choose HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml. Paste in the following XML, then save the changes.
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
8. Whether you edited the HBase configuration or not, restart the HBase service: click Actions > Restart (or do it via the Cloudera Manager API, as sketched below).
For detailed installation steps and other details, refer to this link.
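If you prefer to script step 8, here is a sketch using the CDH 5-era cm_api Python client (pip install cm-api). The host, credentials, and the "hbase" service name are assumptions; service names can differ per cluster.

# Restart HBase on every cluster via the Cloudera Manager API.
# Host, credentials, and the "hbase" service name are assumptions.
from cm_api.api_client import ApiResource

api = ApiResource("cm-host", username="admin", password="admin")  # assumption
for cluster in api.get_all_clusters():
    hbase = cluster.get_service("hbase")  # assumption: service named "hbase"
    cmd = hbase.restart()                 # asynchronous; returns an ApiCommand
    cmd = cmd.wait()                      # block until the restart finishes
    print(cluster.name, "HBase restart success:", cmd.success)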
I don't think Phoenix 4.4.0 is compatible with the CDH version you are running. This discussion on the mailing list will help you: http://search-hadoop.com/m/9UY0h2n4MOg1IX6OR1

Can Hortonworks Ambari manage multiple clusters?

I have been looking all over the web to see if Ambari can manage multiple clusters like Cloudera does. Is this possible in Ambari? If so, how? I have looked all over the Ambari web UI and only see options to add a new host or service, but nothing about adding a cluster.
It's on the roadmap. For now it's possible to do so at the API level; from version 2.0 it should be possible to manage multiple clusters from the web UI.
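For the API-level route, a minimal sketch against Ambari's v1 REST API follows. The host, credentials, and the cluster/stack names are placeholders, and write operations require the X-Requested-By header.

# List the clusters an Ambari server manages, then register a second one.
# Host, credentials, cluster name, and stack version are assumptions.
import requests

AMBARI = "http://ambari-host:8080/api/v1"  # assumption: default port
AUTH = ("admin", "admin")                  # assumption: default credentials
HEADERS = {"X-Requested-By": "ambari"}     # required on POST/PUT/DELETE

# List every cluster this Ambari server manages.
clusters = requests.get(f"{AMBARI}/clusters", auth=AUTH).json()
for c in clusters["items"]:
    print(c["Clusters"]["cluster_name"])

# Register a second cluster (the web UI of the time only showed one).
resp = requests.post(
    f"{AMBARI}/clusters/second_cluster",        # hypothetical cluster name
    auth=AUTH,
    headers=HEADERS,
    json={"Clusters": {"version": "HDP-2.0"}},  # hypothetical stack version
)
print(resp.status_code)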
