I installed & built Apache Ambari 2.7.6 from the Installation Guide for Ambari 2.7.6. In the launch install wizard, step 1 (Select Version) is empty. How can I get past this step without using HDP/HDF, and use my own Apache builds instead?
Thanks
[Screenshot: Installer step 1]
To use open-source Apache builds with Ambari, you'd need to build Ambari with this patch:
https://issues.apache.org/jira/browse/AMBARI-25366
You can also use the Mpack provided by BigTop in this pull request: https://github.com/apache/bigtop/pull/669
It installs a management pack that can then be used to install BigTop, an open-source Hadoop distribution that packages versions of the Hadoop ecosystem.
1. Build Ambari with the patch in the ticket above, or download the management pack from BigTop.
2. Install the BigTop management pack (a sketch of this step follows below).
3. The BigTop stack will then become available in the install wizard.
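A minimal sketch of that mpack install on the Ambari server host, assuming the mpack tarball has already been downloaded locally (the file name here is just an example; use whatever the BigTop build produces):

    # Register the BigTop management pack with ambari-server
    sudo ambari-server install-mpack --mpack=/tmp/bigtop-ambari-mpack.tar.gz --verbose
    # Restart so the new stack definition is picked up by the wizard
    sudo ambari-server restart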
Related
I am new to HDP installation using Ambari. I want to install Hadoop 2.9.0 using the Ambari web installation. My Ambari version is 2.7.0.0 and I am using HDP 3.0, which ships Hadoop 3.1.0, but I need to install Hadoop 2.9.0. Can someone please let me know whether this can be done, and how?
I have finished the Ambari installation but have not started the cluster installation yet.
Ambari uses pre-defined software stacks.
HDP does not offer any stack with Hadoop 2.9.0
You would therefore need to manually install that version of Hadoop yourself, although you can still manage the servers (but not the Hadoop configuration) using Ambari
In any case, there's little benefit to installing a lower version of the software, and you won't get Hortonworks support if you do.
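If you do go the manual route, a minimal sketch of pulling Hadoop 2.9.0 from the Apache archive (install location and paths are example choices, not requirements):

    # Download and unpack the Hadoop 2.9.0 release tarball
    wget https://archive.apache.org/dist/hadoop/common/hadoop-2.9.0/hadoop-2.9.0.tar.gz
    sudo tar -xzf hadoop-2.9.0.tar.gz -C /opt
    # Point the environment at the unpacked release (example location)
    export HADOOP_HOME=/opt/hadoop-2.9.0
    export PATH="$HADOOP_HOME/bin:$PATH"
    hadoop version   # should report 2.9.0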
Is there any alternative procedure: either installing from source as described in the link below
https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.5.2
or
installing the HDP version of Ambari?
Do I have any other way to easily install Ambari, e.g., from binary files?
You can build and run Apache Ambari on OSX from source (via the doc link you provided).
There are no binary files that are pre-built and offered by Ambari for OSX.
Even if you get Ambari up and running on OSX, you will NOT be able to provision an HDP stack on OSX. This is because the services defined by the HDP stack only support Linux or Windows. yum, a package-manager utility, is required to install the services (i.e., YARN, Spark, etc.) on Linux, and yum is not available on OSX.
You would have to write a custom stack for it to work on OSX. That would be a good deal of work.
That being said, there is nothing stopping you from installing services from source or from pre-packaged binaries from the various Apache repos on OSX. This will let you run Hadoop processes locally on OSX, but without the niceties Ambari provides for provisioning, managing, and monitoring.
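For reference, a minimal sketch of the source build that guide describes, assuming Maven and a JDK are already installed (the release tag, version stamp, and build flags follow the Ambari build docs, but check them against the release you target):

    # Fetch the Ambari source and check out the release tag
    git clone https://github.com/apache/ambari.git
    cd ambari && git checkout release-2.5.2
    # Stamp the build with the release version, then build, skipping tests
    mvn versions:set -DnewVersion=2.5.2.0.0
    mvn -B clean install package -DskipTests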
I want to install the Orange3-spark add-on.
I checked the requirements: Apache Spark, Pandas, and Orange3.
Problem Description
I installed everything in the order below and added the Spark add-on (GUI approach). There were no error messages, but no widgets show up in the Spark ML section; it is empty.
Installation Steps
1. Installed Apache Spark 2.1.1 with Hadoop 2.7 on a Windows 10 machine
2. Scala 2.11.8 (comes with Spark)
3. Checked Spark using (spark-shell) in the command prompt
4. Installed Anaconda 4.4.0, Python 3.6 version
5. Verified that Pandas is installed within Anaconda
6. Installed Orange version 3.4.4
7. Installed the Spark add-on (GUI way from Orange)
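One quick sanity check from the Anaconda Prompt is to confirm the Python-side requirements actually import in the interpreter Orange uses (the assumption here is that pyspark is only importable if Spark's Python bindings are on PYTHONPATH):

    :: Confirm pandas and Orange resolve in the Anaconda environment
    python -c "import pandas, Orange; print('pandas and Orange OK')"
    :: pyspark only imports if Spark's Python bindings are on PYTHONPATH
    python -c "import pyspark; print('pyspark OK')"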
Can you please instruct me on what to do?
I have never used Python before. I know what most of the above-mentioned components do, but this is the first time I have installed any of them, so please bear with me and be clear in your comments ;))
I had a similar problem with the network add-on in Orange3 and overcame it by doing a terminal install with root access via sudo instead of the canvas GUI install. Try a command-line install running as administrator via runas.
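A minimal sketch of that on Windows, assuming the add-on is published on PyPI as Orange3-spark and that pip belongs to the same Anaconda environment Orange runs in:

    :: Run an elevated pip install of the add-on (package name assumed)
    runas /user:Administrator "cmd /c pip install Orange3-spark"
    :: Restart Orange afterwards so the canvas rescans installed add-ons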
I am using Cloudera Manager with CDH 4.2.2 for my 3+1 cluster. On starting the installation, Cloudera Manager automatically downloads and installs JDK 1.6. I want to use JDK 1.7 with CDH for my convenience. Is this possible, or is there any version of CDH that, while installing Hadoop on the cluster, automatically downloads, installs, and successfully runs Hadoop with JDK 1.7?
If yes, may I know which version of CDH it is and where I can download it from?
I want to work with JDK 1.7 instead of 1.6 because I want to install Apache Giraph on CDH, but it seems Giraph does not work well with JDK 1.6 and needs JDK 1.7.
With Regards,
JDK 1.7 is supported for all CDH applications as of CDH 4.4 and Cloudera Manager 4.7.
That being said, no version of Cloudera Manager 4.x installs JDK 1.7 during the installation (latest version is 4.8.2). The only version of Cloudera Manager that installs JDK 1.7 automatically is 5.0.0.
To summarize: If you want an automated installation of JDK 1.7 via Cloudera Manager, you need to upgrade to CDH 5, and CM 5.0.0. Alternatively, you could upgrade to CDH4.4, and then perform a manual installation of JDK 1.7.
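If you take the manual route on CDH 4.4, a sketch of pointing the Cloudera Manager server at a hand-installed JDK 1.7 (the tarball name and install path are examples, and the /etc/default override file is an assumption; check your CM version's docs):

    # Unpack a JDK 1.7 under /usr/java (example location and archive name)
    sudo tar -xzf jdk-7u55-linux-x64.tar.gz -C /usr/java
    # Point the CM server at the new JVM (assumed override file) and restart
    echo 'export JAVA_HOME=/usr/java/jdk1.7.0_55' | sudo tee -a /etc/default/cloudera-scm-server
    sudo service cloudera-scm-server restart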
I am using Hadoop version 1.0.3.
I tried configuring Hadoop with Eclipse Indigo.
But it failed to start DFS and showed the error "failed to login".
Please suggest what the problem is with Map/Reduce.
Edit: I am using Windows 7, so I first installed Cygwin, then Hadoop 1.0.4, and started the services at
http://127.0.0.1:50030/ and http://localhost:50070/ successfully.
Check that you installed Hadoop properly. Try installing the Hadoop Eclipse plugin; there are written and video tutorials covering this. Also, take a look at Eclipse setup for Hadoop development.
Eclipse Indigo is version 3.7. As you can read at http://www.orzota.com/eclipse-setup-for-hadoop-development/: "Any eclipse before 3.6 is compatible to the eclipse plugin. (Doesn't work with Eclipse 3.7)"
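If you need a plugin jar that matches your Hadoop release, a sketch of building it from the Hadoop 1.x source tree under Cygwin, assuming ant is installed and eclipse.home points at a compatible (3.6-or-earlier) Eclipse; extra properties such as -Dhadoop.home may be needed depending on the tree:

    # The Hadoop 1.x source ships the Eclipse plugin under src/contrib
    cd hadoop-1.0.4/src/contrib/eclipse-plugin
    # Build the plugin jar against a compatible Eclipse install (path is an example)
    ant jar -Dversion=1.0.4 -Declipse.home=/opt/eclipse-3.6
    # Copy the resulting jar into Eclipse's plugins/ directory and restart Eclipse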