How to get started with MapReduce using Hadoop? - hadoop

I've heard of Hadoop, but what else can I use to get started on this topic? What other APIs are there? In general, what is needed to start programming here? What do you recommend for learning about this interesting subject?

Go to the project's home page, read as much about it as you can, and set it up on your local machine, following the instructions on that site. If you want to go deeper, I recommend these two books:
Hadoop: The Definitive Guide
Pro Hadoop
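
If you want to see what the MapReduce programming model actually looks like before diving into the books, here is a minimal sketch of the classic word-count job written against the standard org.apache.hadoop.mapreduce Java API. Treat it as a starting point for experiments rather than production code; the input and output paths are simply whatever you pass on the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emits (word, 1) for every token in a line of input.
      public static class TokenizerMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // optional, but cheap for a sum
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Package it into a jar and run it with something like hadoop jar wordcount.jar WordCount <input dir> <output dir>; note that the output directory must not already exist.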

There is a great tutorial about Hadoop at http://developer.yahoo.com/hadoop/tutorial/

Check out Cloudera's web site for additional useful resources on Hadoop.

And once you have learned the basic approach, you might want to look at a very integrated way of using Hadoop: http://redmine.z2-environment.net/projects/z2-environment/wiki/How_to_Hadoop (Disclaimer: I am one of the authors.)

Second the Cloudera & Yahoo links above. Also, check out this paper I just wrote on the topic for some additional tips: http://images.globalknowledge.com/wwwimages/whitepaperpdf/WP_CL_Learning_Hadoop.pdf

Related

Hortonworks vs Cloudera Architecture Difference

What is the main difference between Hortonworks and Cloudera? I see that both of them follow the same architecture.
After some good research, I found that Cloudera and Hortonworks have no real difference. They both follow the same architecture: they take raw Hadoop, do regression testing on it, and deliver a hardened product for enterprises. No difference, just two brands for the same product.
Additionally, this link explains more.

Installing Hadoop tar file vs Cloudera VM on Ubuntu

I am a complete beginner to Hadoop, and I saw various posts on the internet about installing the Cloudera VM using VMware. Recently I saw a YouTube video which shows how to install Hadoop on Ubuntu by downloading the Hadoop tar file from Apache, without installing the Cloudera VM. My questions are:
What is the difference between the two approaches? Is there any benefit to using one over the other?
I want to learn Hadoop by myself and am looking for the best/most widely adopted way to learn it.
Cloudera is "yet another distribution of Hadoop". You can think of basic Hadoop as stock Android on Nexus phones and Cloudera's Hadoop as the Android builds on non-Nexus phones: it is basically a custom-built version.
Cloudera is more of a plug-and-play option, meaning you can download the VM and start playing with Hadoop right away.
Hadoop on Ubuntu, on the other hand, is the get-your-hands-dirty mode where you work on building your own Hadoop setup.
Personal opinion: I suggest setting up your own Hadoop, as it gives you a better understanding of the internals of Hadoop and helps with the Hadoop learning activities that follow.
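
If you do go the build-it-yourself route, a small client program is a handy sanity check once the daemons are running. The sketch below uses the standard HDFS FileSystem Java API to list the root of HDFS; the NameNode address hdfs://localhost:9000 is just a placeholder, so adjust fs.defaultFS to whatever you put in your own core-site.xml.

    // Quick smoke test for a hand-rolled installation: connect to HDFS and list /.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSmokeTest {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder address; match the NameNode you configured in core-site.xml.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        try (FileSystem fs = FileSystem.get(conf)) {
          for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
          }
        }
      }
    }

If the listing comes back, HDFS is up and your client configuration is being picked up correctly.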
Hope it helps. Happy Hadooping!
I spent a lot of time playing with the Cloudera software, and their Quickstart VM is good until you start trying to, for example, add nodes. It was not designed to do that, but once you have invested time in it, it would be nice to use it as the basis for a real system.
So the next step would be to use CDH (Cloudera's 'proper' Hadoop), Hortonworks' HDP, or maybe even MapR (which I have not used).
CDH and HDP add nice GUI features on top of basic Hadoop and are seemingly easier to set up. HOWEVER, I spent a lot of time trying to get both CDH and HDP to work, unsuccessfully.
They give red lights and cryptic messages when things go wrong and add a layer of obfuscation when you try to fix things. For example, in plain Hadoop you can easily change the configuration files, but in CDH you can't access them directly; you have to discover where Cloudera hides the various options (a quick way to check which values plain Hadoop actually loaded is sketched after this answer).
I would recommend using plain Hadoop unless you have a big organisation with lots of people and machines.
UPDATE: I have finally got HDP to work, and it is really nice. It has a good Ambari GUI, and you can use Zeppelin notebooks to do fancy graphics.
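
On the configuration point above: plain Hadoop reads its settings from the *-site.xml files on the classpath, so after editing them by hand you can verify which values were actually picked up. The snippet below is a small illustrative check using the Configuration class; the property names are standard ones, and the defaults shown are only fallbacks for the example.

    // Print the effective values of a couple of common settings.
    import org.apache.hadoop.conf.Configuration;

    public class ShowEffectiveConfig {
      public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads core-default.xml and core-site.xml
        conf.addResource("hdfs-site.xml");          // pull in the HDFS settings explicitly

        System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS", "file:///"));
        System.out.println("dfs.replication = " + conf.getInt("dfs.replication", 3));
      }
    }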

how to develop an application with hadoop support

I am very new to Hadoop and big data. Using Hortonworks I got an idea of Pig, Hive, and the different types of analysis available with Hadoop, but I am still unclear about the development stage.
Please give me some example of getting started with building an application with Hadoop analysis support.
You should go through this link.
It will tell you about Hadoop and its installation.
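
To make the development stage a bit more concrete, here is one common entry point: querying a Hive table from Java over JDBC once your analysis tables exist. This is only a hedged sketch; the host, port, database, and the weblogs table are placeholders, HiveServer2 must be running, and the hive-jdbc driver has to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
      public static void main(String[] args) throws Exception {
        // Older hive-jdbc versions need the driver registered explicitly.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder connection string; adjust host/port/database for your cluster.
        String url = "jdbc:hive2://localhost:10000/default";

        try (Connection conn = DriverManager.getConnection(url, "", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page")) {
          while (rs.next()) {
            System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
          }
        }
      }
    }

The same pattern works for any table you have defined in Hive, including ones populated by Pig or MapReduce jobs; the Java side only sees a JDBC connection.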

How to get contributors for opensource search engine

We are a company developing an open-source search engine.
It's hosted on GitHub (https://github.com/fastcatgroup/fastcatsearch).
I think we need contributors globally, but I don't know where to start.
Do you have any good ideas or strategies?
Would giving it to the Apache or JBoss community be a good idea?
Thanks.
It's important that the open-source solution be something many people need, and it must be stable. Here's a simple strategy:
Make a stable open-source solution.
Write detailed manuals online.
Promote the open-source project on social networks (SNS).
If the online group grows large enough, start an offline group.

Recommendation on multi-node hadoop cluster installation

What would be the best way to install Hadoop 1.0 (whether Apache Hadoop or CDH)? CDH seems to have some kind of installation manager, but somehow I can't find good information on the web after a couple of hours of searching. I have only found documentation about pseudo-distributed mode installation.
Just visit the Cloudera site. They have both Cloudera Manager Free, which is a very good place to start, and the standalone CDH packages. They also have a complete set of documentation, including an installation guide for every version of those products.
Of course, I'd also recommend the Cloudera blog and the official Apache Hadoop documentation for better understanding.
I am using Apache Hadoop without many issues, except that I have to resolve compatibility issues myself when using Hadoop ecosystem components such as Hive, Pig, Sqoop, etc.
Cloudera Manager, on the other hand, takes care of most of these compatibility issues and provides you with a complete package with support.
Hope this helps!
