Hortonworks HDP Ambari AWS EC2 heartbeat lost - hadoop

The HDP cluster deployed successfully on AWS EC2. After a restart of the HDP cluster nodes, the heartbeat to the Ambari server was lost because all of the public and private IPs and DNS names changed.
Where in the Ambari server can we configure the new IPs or DNS names?

First, Ambari requires a fully qualified hostname (FQHN) for all of your nodes. It is best practice to assign proper hostnames to all of your nodes.
A simple work-around for getting the heartbeat back on your Ambari server is to run the following on all of your client nodes:
sudo ambari-agent restart your_ambari.server.hostname.com
It worked for me on Ambari 2.0 and Ubuntu 12. Good luck!
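If that alone doesn't bring the heartbeat back, you can also point each agent at the server explicitly in its config file. A minimal sketch, assuming the default config location and using ambari.example.com as a placeholder for your real Ambari server FQHN:

# On each client node, point the agent at the (new) Ambari server hostname
# (the hostname key lives in the [server] section of ambari-agent.ini)
sudo sed -i 's/^hostname=.*/hostname=ambari.example.com/' /etc/ambari-agent/conf/ambari-agent.ini

# Restart the agent so it re-registers with the server
sudo ambari-agent restart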

Related

Does Ambari Agent generate host addresses?

I am learning about Cloudera and came across the Ambari agent, which resides on each host that is part of a Hadoop cluster. While configuring/creating the cluster, does the Ambari agent generate the IP addresses for the hosts and send them to DNS, or is my understanding completely wrong?
Thanks in advance :)
The agent reports the host information to the Ambari server; it doesn't manipulate anything outside of the Hadoop processes.
The IP and hostname of each node are already assigned prior to installation of the agent.

How to reinstall Ambari Server on a crashed node and migrate the Cluster settings?

Two of my drives crashed on the Ambari Server node, so I have to re-migrate my Ambari cluster. No real data was lost (thanks to a separate backup strategy), but the configuration files on the node, including the Ambari Server configuration, are gone.
Because two drives crashed, I cannot access any files from that node anymore (RAID 5).
I am now in the process of reinstalling the Ambari Server on the same node and would like to have my agents seamlessly reconnect to the "new" Ambari Server.
Is there a way to migrate the existing Cluster settings to the Ambari Server? I am thinking of Cluster settings that were distributed to the agents or similar.
If there is no such way to migrate the cluster, how would I go about installing the Ambari Server? Do a fresh install and set everything up again? Will the Ambari agents be able to connect to the "new" cluster without problems? Note that the Ambari Server will run on the same hostname/IP.
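For what it's worth, the cluster settings live in the Ambari server's relational database, so they can only be migrated if a dump of that database survives the crash. A minimal sketch of the dump/restore cycle, assuming the default embedded PostgreSQL setup (database and user both named ambari); treat it as a sketch under those assumptions, not a tested recovery procedure:

# On the old node (before a crash): dump the Ambari database
pg_dump -U ambari ambari > ambari-db-backup.sql

# On the rebuilt node: install and set up ambari-server,
# load the dump, then start the server
sudo ambari-server setup
psql -U ambari ambari < ambari-db-backup.sql
sudo ambari-server start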

Heartbeat lost Ambari HDP

I lost all the heartbeats with Ambari on one node of a 4-node cluster.
http://i.stack.imgur.com/51Gie.png
I already tried rebooting the cluster, restarting ambari-agent and ambari-server, and restarting some of the services manually, like YARN. Nothing worked, and I am stuck now.
Ambari is version 2.1.1.
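In cases like this, the agent and server logs usually reveal why registration fails (hostname mismatch, certificate problems, clock skew, ...). A minimal sketch of the first checks, assuming the default log locations; <failing-node-hostname> is a placeholder:

# On the affected node: check the agent is running and look for registration errors
sudo ambari-agent status
sudo tail -n 100 /var/log/ambari-agent/ambari-agent.log

# On the Ambari server: look for mentions of that node in the server log
sudo grep <failing-node-hostname> /var/log/ambari-server/ambari-server.log | tail -n 50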

What is the difference between apache Ambari Server and Agent

What is the difference between Apache Ambari Server and Agent?
What are the roles/tasks of the server vs. the agent?
The Ambari server collects information from all Ambari agents and sends operations to them (start/stop/restart a service, change the configuration of a service, ...).
An Ambari agent sends information about the machine it runs on and the services installed on that machine.
You have one Ambari server for your cluster and one Ambari agent per machine in your cluster.
If you need more details, Ambari's architecture is explained here
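Those operations are exposed through the server's REST API, which the web UI itself uses; the server then forwards the commands to the agents. A minimal sketch of stopping and starting a service through that API, assuming a cluster named mycluster, default credentials, and the default port 8080:

# Ask the server to stop HDFS (state INSTALLED means "stopped")
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"Body": {"ServiceInfo": {"state": "INSTALLED"}}}' \
  http://ambari.example.com:8080/api/v1/clusters/mycluster/services/HDFS

# Ask the server to start HDFS again
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"Body": {"ServiceInfo": {"state": "STARTED"}}}' \
  http://ambari.example.com:8080/api/v1/clusters/mycluster/services/HDFS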

Hadoop Dedoop application unable to contact Hadoop NameNode: getting "Unable to contact Namenode" error

I'm trying to use the Dedoop application, which runs on Hadoop and HDFS on Amazon EC2. The Hadoop cluster is set up, and the NameNode, JobTracker, and all other daemons are running without error.
But the Dedoop.war application is not able to connect to the Hadoop NameNode after being deployed on Tomcat.
I have also checked to see if the ports are open in EC2.
Any help is appreciated.
If you're using Amazon AWS, I highly recommend using Amazon Elastic MapReduce. Amazon takes care of setting up and provisioning the Hadoop cluster for you, including things like assigning IP addresses, configuring the NameNode, etc.
If you're setting up your own cluster on EC2, you have to be careful with public/private IP addresses. Most likely, you are pointing to the external IP addresses; can you replace them with the internal IP addresses and see if that works?
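Concretely, that means the HDFS address in the client's configuration should use the NameNode's EC2-internal name rather than the public one. A minimal sketch, with ip-10-0-0-12.ec2.internal standing in for your NameNode's internal DNS name (the property shown is the Hadoop 1.x name):

# In core-site.xml, point clients at the NameNode's EC2-internal address:
#   <property>
#     <name>fs.default.name</name>
#     <value>hdfs://ip-10-0-0-12.ec2.internal:8020</value>
#   </property>

# Quick connectivity test from the Tomcat host:
hadoop fs -fs hdfs://ip-10-0-0-12.ec2.internal:8020 -ls /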
Can you post some lines of the stack trace from Tomcat's log files?
Dedoop must establish a SOCKS proxy (similar to ssh -D port username@host) to pass connections to the Hadoop nodes on EC2. This is mainly because Hadoop resolves public IPs to EC2-internal IPs, which breaks MR job submission and HDFS management.
To this end, Tomcat must be configured to establish ssh connections. The setup procedure is described here.
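For reference, the same effect can be reproduced by hand: open a SOCKS tunnel with ssh and tell the Hadoop client to route its RPC connections through it. A minimal sketch, assuming local port 6666 and a placeholder master hostname (hadoop.socks.server and the SocksSocketFactory class are standard Hadoop client settings):

# Open a dynamic (SOCKS) tunnel through the EC2 master node
ssh -D 6666 -N ec2-user@ec2-master-public-hostname.amazonaws.com &

# Run a Hadoop client command with its RPC traffic routed through the tunnel
hadoop fs \
  -D hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.SocksSocketFactory \
  -D hadoop.socks.server=localhost:6666 \
  -ls /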
