Cloudera Manager Installation failing - hadoop

I am trying to create a small cluster for testing purposes on EC2 using Cloudera Manager 5.
These are the directions I am following: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/4.7.1/Cloudera-Manager-Installation-Guide/cmig_install_on_EC2.html.
The installation gets to the point where it executes "Execute command SparkUploadJarServiceCommand on service spark", and then it fails.
The error is "Upload Spark Jar failed on spark_master".
What is going wrong and how can I fix this?
Thanks for your help.

Adding the findings as an answer.
You have to open all the required ports for Cloudera Manager to install its components correctly.
For a complete guide of ports you need to open refer to:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_ports_cdh4.html
If you are running Cloudera Manager on EC2, you can create a security group that allows all traffic on all ports between the Cloudera Manager host and its cluster nodes.
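
A minimal sketch of that security-group approach using boto3; the region, group name, and description are placeholders, and the self-referencing ingress rule is what lets every instance in the group reach every other instance on any port:

import boto3  # assumes AWS credentials are already configured (e.g. ~/.aws/credentials)

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Create one security group for the whole cluster (name is hypothetical).
sg = ec2.create_security_group(
    GroupName="cloudera-cluster",
    Description="All traffic between Cloudera Manager and its nodes",
)
sg_id = sg["GroupId"]

# Self-referencing rule: allow all protocols and ports from members of this same group.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",  # -1 means all protocols on all ports
        "UserIdGroupPairs": [{"GroupId": sg_id}],
    }],
)

Launch the Cloudera Manager host and all cluster nodes into this group and they can talk to each other freely, while external access stays limited to whatever other rules you add.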

Related

How do I start Ambari Hortonworks services?

I just installed the Hortonworks Sandbox via VirtualBox, and when I started Ambari every service was red, as you can see in this screenshot. Have I missed something? I'm a beginner in Hadoop.
Actually, when the HDP Sandbox starts, all services go into the starting stage except Storm, Atlas, and HBase (click the gear icon at the top right to check the reason behind any failed services).
Try to manually start the services in the following order (one way to script this is sketched after the list):
Zookeeper
HDFS
YARN
MapReduce
Hive
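
A minimal sketch of that start order against the Ambari REST API, assuming the sandbox defaults (admin/admin on port 8080, cluster name "Sandbox") and Ambari's internal service names; all of these are assumptions to adjust for your environment:

import requests

AMBARI = "http://localhost:8080"        # sandbox default; adjust host/port
AUTH = ("admin", "admin")               # assumed default sandbox credentials
CLUSTER = "Sandbox"                     # assumed sandbox cluster name
HEADERS = {"X-Requested-By": "ambari"}  # header required by the Ambari API

# Assumed Ambari service names matching the start order above.
ORDER = ["ZOOKEEPER", "HDFS", "YARN", "MAPREDUCE2", "HIVE"]

for service in ORDER:
    url = "{}/api/v1/clusters/{}/services/{}".format(AMBARI, CLUSTER, service)
    body = {
        "RequestInfo": {"context": "Start {} via REST".format(service)},
        "Body": {"ServiceInfo": {"state": "STARTED"}},
    }
    resp = requests.put(url, json=body, auth=AUTH, headers=HEADERS)
    resp.raise_for_status()
    print("Start requested for {}: HTTP {}".format(service, resp.status_code))

In practice you would also poll each service's state and wait for STARTED before requesting the next one, since the order above reflects the dependencies.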

HDP 2.5: Spark History Server UI won't show incomplete applications

I set up a new Hadoop cluster with Hortonworks Data Platform 2.5. In the "old" cluster (HDP 2.4) I was able to see information about running Spark jobs via the History Server UI by clicking the link "Show incomplete applications":
In the new installation this link opens the page, but it always says "No incomplete applications found!" (even when an application is still running).
I also noticed that the YARN ResourceManager UI shows two different kinds of links in the "Tracking UI" column, depending on the status of the Spark application:
application running: Application Master
this link opens http://master_url:8088/proxy/application_1480327991583_0010/
application finished: History
this link opens http://master_url:18080/history/application_1480327991583_0009/jobs/
Via the YARN RM link I can see the running Spark app's info, but why can't I access it via the Spark History Server UI? Did something change from HDP 2.4 to 2.5?
I solved it; it was a network problem: some of the cluster hosts (Spark slaves) couldn't reach each other due to an incorrect switch configuration. I found this out by trying to ping each host from every other host.
I hadn't noticed the problem because the ambari-agents worked on each host and the ambari-server was also reachable from every cluster host. Now that all hosts can reach each other, the problem is gone and I can see active and finished jobs in my Spark History Server UI again!
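
A minimal sketch of that mutual-reachability check; the host list is hypothetical, and since the script only tests outbound reachability from the machine it runs on, you would run it once on every node:

import subprocess

# Hypothetical cluster host list; replace with your own node names.
HOSTS = ["master", "slave1", "slave2", "slave3"]

for host in HOSTS:
    # -c 2: send two ICMP echo requests; -W 2: wait up to 2 s for a reply (Linux ping).
    result = subprocess.run(
        ["ping", "-c", "2", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    print("{}: {}".format(host, "reachable" if result.returncode == 0 else "UNREACHABLE"))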

Hadoop Web Interface <ip_address>:50070 is not working

I have set up a single-node Hadoop cluster on CentOS 7. It is successfully installed and all the daemons are up.
My problem is that when I go to the web interface at ***IP_Address:50070***, nothing shows up. Please suggest how I can resolve this.
Things I tried:
Reconfigured properties, formatted HDFS and restarted all the daemons.
Please suggest. Thanks!
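
One way to narrow this down is to test whether the NameNode web UI port is reachable at all. A minimal sketch, assuming you run it on the NameNode host itself and substitute your real IP for the placeholder; if localhost is open but the external IP is not, suspect the bind address in hdfs-site.xml or the CentOS 7 firewalld rules:

import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "192.0.2.10" is a placeholder; use the machine's actual IP address.
for host in ["127.0.0.1", "192.0.2.10"]:
    print("{}:50070 -> {}".format(host, "open" if port_open(host, 50070) else "closed/unreachable"))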

HBase region servers going down when trying to configure Apache Phoenix

I'm using CDH 5.3.1 with HBase 0.98.6-cdh5.3.1, and I'm trying to configure Apache Phoenix 4.4.0.
As per the documentation provided in Apache Phoenix Installation, I:
Copied the phoenix-4.4.0-HBase-0.98-server.jar file into the lib directory (/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hbase/lib) on both the master and region servers.
Restarted HBase service from Cloudera Manager.
When I check the HBase instances, I see the region servers are down, and I don't see any problems in the log files.
I even tried to copy all the jars from the Phoenix folder, and I'm still facing the same issue.
I have even tried to configure Phoenix 4.3.0 and 4.1.0 but still no luck.
Can someone point out what else I need to configure, or anything else I need to do, to resolve this issue?
I was able to configure Apache Phoenix using parcels. Following are the steps to install Phoenix using Cloudera Manager:
In Cloudera Manager, go to Hosts, then Parcels.
Select Edit Settings.
Click the + sign next to an existing Remote Parcel Repository URL, and add the following URL: http://archive.cloudera.com/cloudera-labs/phoenix/parcels/1.0/. Click Save Changes.
Select Hosts, then Parcels.
In the list of Parcel Names, CLABS_PHOENIX is now available. Select it and choose Download.
The first cluster is selected by default. To choose a different cluster for distribution, select it. Find CLABS_PHOENIX in the list, and click Distribute.
If you plan to use secondary indexing, add the following to the hbase-site.xml advanced configuration snippet. Go to the HBase service, click Configuration, and choose HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml. Paste in the following XML, then save the changes.
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
Whether or not you edited the HBase configuration, restart the HBase service: click Actions > Restart.
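
If you prefer to script that restart, here is a minimal sketch against the Cloudera Manager REST API; the host, credentials, cluster name, and "hbase" service name are all placeholders, and the API version prefix may differ on your CM release:

import requests
from urllib.parse import quote

CM_HOST = "http://cm-host:7180"  # placeholder Cloudera Manager host
AUTH = ("admin", "admin")        # placeholder credentials
CLUSTER = "Cluster 1"            # placeholder cluster name
SERVICE = "hbase"                # assumed HBase service name

# POST to the service's restart command endpoint (part of the CM REST API).
url = "{}/api/v6/clusters/{}/services/{}/commands/restart".format(
    CM_HOST, quote(CLUSTER), SERVICE)
resp = requests.post(url, auth=AUTH)
resp.raise_for_status()
print("Restart command submitted, command id:", resp.json().get("id"))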
For detailed installation steps and other details, refer to this link.
I don't think Phoenix 4.4.0 is compatible with the CDH version you are running. This discussion on the mailing list should help you: http://search-hadoop.com/m/9UY0h2n4MOg1IX6OR1

Query regarding Cloudera Manager on EC2

I have some questions regarding Cloudera Manager (Free Edition) on EC2. I am not sure this is the correct place to ask; if I am wrong, please let me know. Is there a place where I can post my questions regarding Cloudera Manager and Hadoop?
Currently I am creating a Hadoop cluster using Cloudera Manager. I have m3.xlarge EC2 instances, but the wizard does not have an option to select the m3.xlarge instance type. Secondly, I am running RHEL, whereas the wizard only offers Ubuntu 12.04 and CentOS 6.3. Does that mean it does not support RHEL?
Cloudera Manager only supports Ubuntu and CentOS as of now. Also, please note that any instances you have already created will not be used by Cloudera Manager: it automatically creates new instances, which you can verify in the AWS management console. When you choose the number and type of instances (only the types supported by Cloudera Manager are available), it uses your access key and secret key to create them automatically.
