I'm interested in playing around with Mahout a bit (and by proxy Hadoop), and I'm wondering if anyone has experience installing these projects locally. I know that Mahout is implemented in Java, but I'm not really sure about Hadoop. I've read a bit about using them on Amazon or Rackspace, but I have my heart set on testing locally.
You do not need to download or install Hadoop to use Mahout locally, even its Hadoop-based bits. You do need to use Maven, which will manage downloading the dependencies. The Mahout command line, and examples involving mvn, should all just work.
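To make that concrete, here is a sketch of trying Mahout locally with only Maven and a JDK installed. The repository URL and the `seqdirectory` invocation reflect common usage, but the input/output paths are illustrative placeholders; adjust them to your machine.

```shell
# Build Mahout from source with Maven (Maven resolves all Hadoop
# dependencies itself, so no separate Hadoop install is required).
git clone https://github.com/apache/mahout.git
cd mahout
mvn -DskipTests clean install

# Run a Mahout job locally; with no cluster configured, the
# Hadoop-based parts run in Hadoop's local (in-process) mode.
# The paths below are placeholders for your own data.
bin/mahout seqdirectory -i /path/to/text -o /tmp/seqfiles
```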
I have a cluster with Hadoop 2.0.0-cdh4.4.0, and I need to run Spark on it with YARN as the resource manager. I got the following information from http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version:
You can enable the yarn profile and optionally set the yarn.version property if it is different from hadoop.version. Spark only supports YARN versions 2.2.0 and later.
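Following the build documentation quoted above, the yarn profile and the version properties are passed on the Maven command line when building Spark from source. A sketch (the version numbers here are illustrative; match them to your cluster):

```shell
# Build Spark with YARN support against a specific Hadoop version.
# -Pyarn enables the YARN profile; hadoop.version and yarn.version
# are illustrative and should match your target cluster.
mvn -Pyarn -Dhadoop.version=2.2.0 -Dyarn.version=2.2.0 -DskipTests clean package
```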
I don't want to upgrade the whole Hadoop package to support YARN 2.2.0, as my HDFS holds massive amounts of data and upgrading it would cause too long a break in service and be too risky for me.

I think the best option for me may be to use a YARN version higher than 2.2.0 while keeping the other parts of my Hadoop installation unchanged. If that's the way to go, what steps should I follow to obtain such a YARN package and deploy it on my cluster?

Or is there another approach to running Spark on Hadoop 2.0.0-cdh4.4.0 with YARN as the resource manager?
While you could theoretically upgrade just your YARN component, my experience suggests that you run a large risk of library and component incompatibilities if you do. Hadoop consists of a lot of components, but they're generally not as decoupled as they should be, which is one of the main reasons CDH, HDP, and other Hadoop distributions bundle only specific versions known to work together. If you have commercial support but change the version of one component, the vendor generally won't support you, because things tend to break when you do this.

In addition, CDH4 reached End of Maintenance last year and is no longer being developed by Cloudera, so if you find anything wrong you're going to find it hard to get fixes (generally you'll be told to upgrade to a newer version). I can also say from experience that if you want to use newer versions of Spark (e.g. 1.5 or 1.6), you also need a newer version of Hadoop (be it CDH, HDP, or another one): Spark has evolved quickly, YARN support was bolted on later, and there are loads of bugs and issues in earlier versions of both Hadoop and Spark.

Sorry, I know it's not the answer you're looking for, but upgrading Hadoop to a newer version is probably the only way forward if you actually want things to work and don't want to spend a lot of time debugging version incompatibilities.
I want to use large-scale machine learning algorithms, and I want to use Mahout for this task, but it seems Mahout depends on Hadoop, and Hadoop is distributed only as Linux packages.

Is there any way to use Mahout/Hadoop on Windows?
Or maybe someone can suggest some alternatives?
There are multiple Hadoop vendors already. Hortonworks is one of them and has released a version of their platform on Windows: HDP on Windows.
Mahout should be able to run on top of this!
Alternatively, there is also Datameer, which you have to pay for (unless you come from academia), with their Smart Analytics feature!
I want to learn how to install Hadoop and Hive on my machine, which runs a 64-bit Windows OS. Please tell me the exact steps; it would help beginners like me a lot. I tried downloading a Hadoop (1.1.1) release but was unable to install it.
Thanks!!
You can download the developer preview of Microsoft HDInsight Server for Windows. Beware, it is a developer preview and a lot of features available on the native platform are not there yet. Most importantly HBase, in my opinion. But there is Hive, Pig and obviously the possibility to run standard MapReduce jobs. It's fun to play with.
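Once Hive is available, a quick way to confirm it works is a smoke test from the command line. A minimal sketch, assuming the `hive` CLI is on your PATH (the table name here is purely illustrative):

```shell
# Create a throwaway table and list tables to verify Hive responds.
# `hive -e` runs a HiveQL statement non-interactively.
hive -e "CREATE TABLE IF NOT EXISTS smoke_test (id INT, name STRING);"
hive -e "SHOW TABLES;"
```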
HDInsight Server Download Link
There is also a .NET SDK to play with. This one is in the pre-release phase and largely undocumented, but with some knowledge and some Google skills you should be able to manage it.
SDK
I am new to HBase and Hadoop and need to know which compatible versions of HBase and Hadoop are available so I can run my experiments.
The current stable version at "http://apache.techartifact.com/mirror/hbase/" is hbase-0.94.1. Can anybody kindly tell me which version of Hadoop I should use so that there are no compatibility issues and no future data loss?
Please suggest from the Hadoop and HBase releases that are currently available online. Below are the sites I am using to download these releases:
http://apache.techartifact.com/mirror/hadoop/common/ (hadoop)
http://apache.techartifact.com/mirror/hbase/ (hbase)
If you want to be sure about the compatibility of the Hadoop and HBase distribution you are using, you might consider using the Apache Bigtop project or the Cloudera CDH package.
Bigtop:

The primary goal of Bigtop is to build a community around the packaging and interoperability testing of Hadoop-related projects. This includes testing at various levels (packaging, platform, runtime, upgrade, etc.) developed by a community with a focus on the system as a whole, rather than individual projects.
Cloudera:

CDH consists of 100% open source Apache Hadoop plus nine other open source projects from the Hadoop ecosystem. CDH is thoroughly tested and certified to integrate with the widest range of operating systems and hardware, databases and data warehouses, and business intelligence and ETL systems.
Download both from the stable folder. I do not know whether some versions are incompatible with others, but I use hadoop-0.20.203.0 with hbase-0.94.1 without any problems.
We are trying to figure out which Linux distribution is best suited for Nutch-Hadoop integration.

We are planning to use clusters for crawling large amounts of content with Nutch.

Let me know if you need more clarification on this question.

Thank you.
There is not much difference between the major Linux distributions in this case, but I'd recommend one that has prebuilt Hadoop packages. I'm using Cloudera's Hadoop distribution on Debian and it works very well.
The hadoop and hbase packages will be in the next Debian stable release:
http://packages.debian.org/search?keywords=hadoop