Install Ambari on an existing single-node Hadoop setup

I have a single-node Hadoop setup on Ubuntu (personal dev environment).
Now I want to install Ambari.
Question: can we install Ambari on an existing Hadoop setup? If yes, please assist me.

No. Ambari requires that you first install the Ambari agents, then the Ambari server to monitor the agents and install/configure Hadoop.
In theory, if you installed Hadoop in exactly the way Ambari expects, it might work, but adding a node to Ambari will throw lots of warnings if the node is running any process other than the agent.
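As a rough sketch of that sequence on Ubuntu (assuming the Ambari apt repository is already configured on each machine; the server hostname ambari-server.example.com is a placeholder):

# On every node to be managed: install the agent and point it at the server
sudo apt-get install ambari-agent
sudo sed -i 's/^hostname=.*/hostname=ambari-server.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
sudo ambari-agent start

# On the management host: install, set up, and start the Ambari server
sudo apt-get install ambari-server
sudo ambari-server setup   # interactive: chooses JDK, database, run-as user
sudo ambari-server start

From there, the install/configure step for Hadoop itself happens in the Ambari web UI's cluster wizard (port 8080 by default).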

Related

Ambari agent does not exist

I'm trying to set up the Ambari agent on each node in my cluster, and it works fine for all of my nodes except one. When I retry the Ambari wizard to fix the problem on that node, I get this error:
==========================
Running setup agent script...
==========================
Command start time 2017-12-01 14:52:13
Server error attempting a GET to /rhsm/ returned status 503
Repository HDP-UTILS-1.1.0.21 is listed more than once in the configuration
sed: can't read /etc/ambari-agent/conf/ambari-agent.ini: No such file or
directory('', None)
So obviously this error means there is no ambari-agent installed on this machine.
I checked whether the agent is installed with "yum repolist ambari-agent", and it replies that the package is already installed.
But when I tried "yum install ambari-agent", I could not install it, and there is no /etc/ambari-agent/ directory either.
I thought about reinstalling the ambari-agent on this node, but I'm not sure that is a good decision.
What should I do?
Removing the agent and then reinstalling it should resolve the issue. The following commands (run on the node that is failing to register) should achieve that:
sudo yum remove ambari-agent
sudo yum install ambari-agent
You should then be able to retry the installation in the Ambari UI using the cluster setup wizard.
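If the plain remove/install above still leaves stale state behind, a slightly fuller sequence can help; the leftover directories below are the usual Ambari agent locations, but treat them as assumptions and verify them on your system first:

sudo yum remove -y ambari-agent
# Assumed leftover locations -- verify before deleting
sudo rm -rf /etc/ambari-agent /var/lib/ambari-agent
sudo yum install -y ambari-agent
# Re-point the agent at the Ambari server (placeholder hostname), then start it
sudo sed -i 's/^hostname=.*/hostname=<your-ambari-server>/' /etc/ambari-agent/conf/ambari-agent.ini
sudo ambari-agent start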

Spark clustering with YARN

I would like to set up Spark clustering with YARN.
Do I need to:
1. install a Hadoop master and slaves with the YARN config, or
2. install Hadoop master/slaves and YARN master/slaves separately?
If 1 is OK, I'm going to work with this docker image (link). Is it suitable for this?
Installing the Hadoop master and slaves with the YARN config is sufficient to run Spark over YARN, but you also need to make sure that the Spark version you download supports YARN. Once installed, Spark should be able to access the YARN configuration, and the jar files required for YARN must also be on Spark's path.
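For example, once a YARN-capable Spark build is unpacked on a node that can see the Hadoop configuration, submitting the bundled SparkPi job is enough to verify the setup (the config path and examples jar location below are typical defaults, not guaranteed):

# Tell Spark where the Hadoop/YARN configuration lives
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Submit the bundled SparkPi example to YARN in cluster mode
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_*.jar 100

If the application shows up in the YARN ResourceManager UI and finishes SUCCEEDED, Spark-on-YARN is working.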

Setting up a 3-node Hadoop cluster

I want to set up a cluster of 3 nodes in my office innovation lab. All 3 machines have Windows 7 installed, so I thought of creating the cluster using Ubuntu installed on all 3 machines. So far I have followed the steps below:
Installed VMware on all 3 machines
Installed Ubuntu on the 3 machines
Installed Java 1.8 on all the machines
Please guide me through the remaining steps to set up the cluster.
I have seen a few videos in which a local repository was created and some setup was done for httpd as well.
Thanks,
Brijesh
First, install the Hadoop package with this command:
rpm -ivh hadoop-<version>.rpm
Then go to the Hadoop configuration directory:
cd /etc/hadoop
and open the hdfs-site.xml and core-site.xml files there and edit the properties (the original answer linked two screenshots of the property values: http://i.stack.imgur.com/WkTIy.png and http://i.stack.imgur.com/uf89i.png).
A node configured this way acts as a DataNode; repeat on the other nodes.
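As a hedged sketch of what those two files typically contain for a small cluster (the hostname master and the storage directories are assumptions; adjust them to your machines):

# Minimal core-site.xml: where the NameNode listens ("master" is an assumed hostname)
sudo tee /etc/hadoop/core-site.xml > /dev/null <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
EOF

# Minimal hdfs-site.xml: replicate blocks to all 3 nodes and pick
# local storage directories for the NameNode and DataNodes (assumed paths)
sudo tee /etc/hadoop/hdfs-site.xml > /dev/null <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/hadoop/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/var/hadoop/datanode</value>
  </property>
</configuration>
EOF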

How to install Apache Phoenix on Ambari 1.7 with HBase?

I'm new to Hadoop. I want to install Phoenix with HBase, but I installed my Hadoop cluster using Ambari 1.7 on Ubuntu, and I'm not able to find any tutorial for doing so.
If you built up your own Hadoop stack:
https://phoenix.apache.org/download.html
https://phoenix.apache.org/installation.html
If you use e.g. IBM Open Platform (which is free, by the way):
https://developer.ibm.com/hadoop/blog/2015/10/21/installing-apache-phoenix-ibm-open-platform-apache-hadoop-4-1/
HBase should be available as a service under the Add Service button on the home page.
For installing Phoenix, I used this link:
http://dev.hortonworks.com.s3.amazonaws.com/HDPDocuments/HDP2/HDP-2-trunk/bk_installing_manually_book/content/upgrade-22-7-a.html
Basically, yum install phoenix on each node and then create soft links to the Phoenix server jar file.
HTH
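A hedged sketch of the steps that document describes (the exact package name and jar/lib paths vary by HDP version, so the HDP-2.2-style paths below are assumptions):

# On every node running HBase: install the Phoenix package
sudo yum install -y phoenix

# Link the Phoenix server jar into HBase's lib directory so the
# region servers load it on startup (assumed HDP 2.2 layout)
sudo ln -sf /usr/hdp/current/phoenix-client/phoenix-*-server.jar \
    /usr/hdp/current/hbase-regionserver/lib/phoenix-server.jar

Then restart HBase (e.g. from the Ambari UI) so the jar is picked up.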

Cloudera installation error: can we install Cloudera Manager for a Hadoop single-node cluster on Ubuntu?

I am using Ubuntu 12.04 64-bit, and I installed and ran sample Hadoop programs on a single node successfully.
I am getting the following error while installing Cloudera Manager on my Ubuntu machine:
Refreshing repository metadata failed. See
/var/log/cloudera-manager-installer/2.refresh-repo.log for details.
Click OK to revert this installation.
I want to know whether we can install Cloudera Manager for a single-node Hadoop cluster on Ubuntu. Please tell me whether this is possible, or whether I need to create multiple nodes to use Cloudera with my Hadoop.
Yes, CM can run on a single node.
This error occurs because CM cannot use apt-get install to fetch the packages. Which tutorial are you following?
However, you can manually add the Cloudera repo. See this thread.
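As a sketch of the manual route on Ubuntu 12.04 (precise); the archive URLs below follow Cloudera's historical CM5 layout and may have moved or now require authentication, so treat them as assumptions:

# Add Cloudera's CM5 apt repo for Ubuntu 12.04 (historical URL)
sudo wget -O /etc/apt/sources.list.d/cloudera-manager.list \
    https://archive.cloudera.com/cm5/ubuntu/precise/amd64/cm/cloudera.list

# Import the repository signing key
wget -qO - https://archive.cloudera.com/cm5/ubuntu/precise/amd64/cm/archive.key | sudo apt-key add -

# Refresh metadata and install Cloudera Manager
sudo apt-get update
sudo apt-get install cloudera-manager-server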
