Could anyone explain to me the difference between Hadoop and Cloudera Hadoop?
What Is Apache Hadoop?
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.
Cloudera is the leader in Apache Hadoop-based software and services and offers a powerful new data platform that enables enterprises and organizations to look at all their data — structured as well as unstructured — and ask bigger questions for unprecedented insight at the speed of thought.
Cloudera is a startup company; they provide commercial support for Hadoop.
Here are some of the advantages of Cloudera Hadoop:
1. Cloudera provides a tool, SCM (now Cloudera Manager), that can set up a Hadoop cluster for you more or less automatically.
2. Cloudera bundles the Hadoop-related projects into a distribution that is easy to install on standard Linux boxes.
3. Cloudera ensures that each CDH release and the Hadoop-related projects included in it are compatible (for example, you don't have to go through the hassle of finding an HBase release that is compatible with your Hadoop release, or of integrating the related projects yourself).
4. A good number of large enterprises use CDH with Cloudera support (Cloudera offers various support packages).
A more detailed explanation can be found here:
I have watched hundreds of videos and read hundreds of articles, but they are all so complicated. Why can't people give a clear introduction to what something is before breaking it down? What the hell is Hadoop? I get that it is some kind of file-distributing system, and that it has cool features like high performance, HDFS, YARN, MapReduce, Hadoop Common, bla bla. Please, someone, tell me what it is. Is it software like Visual Studio, Anaconda Navigator, or Android Studio? Or is it a huge company with thousands of data servers where you can upload your company's data and manage it perfectly? Why do these videos on YouTube say that Hadoop is storage efficient? Does that mean you use Hadoop's data servers and they save your data efficiently? I am absolutely sure that I am not the only one asking these questions while watching these videos on YouTube.
Thanks in advance!
Hadoop is an ecosystem, meaning it consists of several pieces of software that work together in distributed mode.
The three main pieces of software inside Hadoop that are provided as services are HDFS, YARN, and MapReduce.
HDFS is a distributed file system. That means HDFS is a file system much like the one on your PC (NTFS, FAT32, EXT4, etc.). The main job of a file system is managing files. Each file in a file system consists of blocks (think of the file as being divided into chunks). A local file system keeps all of these blocks on one machine, but HDFS splits them up and distributes them across several machines.
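To make the block idea concrete, here is a minimal sketch in Java using Hadoop's FileSystem API that asks the NameNode which blocks make up a file and which machines hold them; the file path and cluster address are placeholders for illustration, not anything from the text above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsBlockInfo {
        public static void main(String[] args) throws Exception {
            // fs.defaultFS in the configuration normally points at the NameNode,
            // e.g. hdfs://namenode:8020 (placeholder address)
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/data/example.txt"); // hypothetical file stored in HDFS
            FileStatus status = fs.getFileStatus(file);

            // Ask which blocks the file is made of and where each replica lives
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
            fs.close();
        }
    }

On a local file system the whole file sits on one machine; on HDFS you will typically see each block reported on several machines because of replication.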
YARN is a resource manager, like the resource manager inside your OS, but it manages resources across several machines. When you run an application (e.g. Notepad) on your PC, the OS gives your application CPU cores (when needed) and memory. In YARN, when you submit an application, it gives your application CPU cores (when needed) and memory on all the machines, and your application must know how to work in distributed mode.
MapReduce helps you write programs that run in distributed mode without you having to know how to write a distributed application yourself. MapReduce is something like a programming model plus a framework whose jobs YARN knows how to run.
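As a rough illustration of what such a program looks like, here is the classic word-count job in Java, a minimal sketch rather than anything tuned for production; the input and output paths are whichever HDFS directories you pass on the command line.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: runs near the blocks that hold the input and emits (word, 1) pairs
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce phase: gathers the counts emitted for each word and sums them up
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

You only write the map and reduce methods; YARN takes care of running the map tasks close to the input blocks and the reduce tasks over the gathered intermediate results.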
One day Google decided to introduce a distributed file system
and named it the Google File System, or GFS.
This idea was published in a 2003 paper: [https://static.googleusercontent.com/media/research.google.com/en/ir/archive/gfs-sosp2003.pdf]
At the same time, Doug Cutting, the creator of Apache Lucene, was working on Apache Nutch, and this paper helped them implement a nice distributed file system named NDFS, the Nutch Distributed File System.
Then Nutch was able to split files into multiple blocks, store them on multiple servers, and replicate them to preserve data reliability.
But this data also needs to be processed, and you can't simply fetch terabytes of data onto a single machine and process it there.
In 2004, Google again came up with a great idea, named MapReduce.
[http://research.google.com/archive/mapreduce.html]
With MapReduce, you can put your data-processing code near your data blocks, process them locally (the map phase), then gather the results and sum them up (the reduce phase).
Now Nutch had both NDFS and MapReduce.
Quoting from the book Hadoop: The Definitive Guide by Tom White:
NDFS and the MapReduce
implementation in Nutch were applicable beyond the realm of search,
and in February 2006 they moved out of Nutch to form an independent
subproject of Lucene called Hadoop.
That is the story of Hadoop.
But since then, other tools have come along to work with Hadoop,
like Apache Hive, which helps us run SQL-like queries on big, distributed data in the Hadoop file system (now named HDFS),
or Apache Pig, which uses a high-level language to run complex analyses on files stored in Hadoop using the MapReduce framework.
There are many other tools, like HBase, Spark, Flume, Sqoop, Crunch, etc.
We don't call them Hadoop, but they live in a Hadoop-centered ecosystem,
so we call this community the Hadoop ecosystem.
It's better to start by reading the book Hadoop: The Definitive Guide by Tom White
instead of reading multiple blog posts and watching YouTube videos.
I want to install Hadoop, Pig and Hive on my laptop. I don't know how to install and configure Hadoop, Pig and Hive, or what software is required to do it.
Please let me know the exact steps required to install/configure Hadoop, Pig and Hive on a laptop.
Also, can I use Windows, and can I install Hadoop on Windows?
For beginners, I would recommend sticking to a good prepackaged Hadoop distribution/sandbox. Even if you want to learn how to set up a Hadoop cluster before using the tools it provides (e.g. Hive etc.), starting from a common distribution is a lot easier, at least in the beginning.
Prepackaged sandboxes for Hadoop are going to be in Linux. But most likely, you will not need to do a lot in Linux to start using Hadoop if you start from these sandboxes. Personally, I think the time you will save by avoiding support and documentation issues on Windows ports will compensate greatly for any added effort required for jumping into Linux, and you will at least enter the domain of Linux which itself is a tremendously important tool.
For prepackaged solutions, you may aim at the Cloudera quickstart VM or the MapR quickstart VM, as these are the most widely used distributions. By using sandboxes, you will skip the installation process (which may be hectic if you don't know what you want, and especially if you aren't familiar with Linux) and jump right into using the tools. Thanks to the good documentation available from large vendors such as Cloudera and MapR, you will also face fewer issues in accessing the tools you want to learn.
Follow the vendor specific setup guidelines (also listed on the download pages as getting started guides) for further details on setting up the sandbox.
Once you have the sandbox set up, there are a lot of different ways to access Hive and Pig. You can use a command-line interface for Hive (called Beeline). If you are familiar with JDBC, you can access Hive through that. Apache Thrift enables much wider access options, but you can also save that for later.
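If you go the JDBC route, a minimal sketch looks like the following; it assumes the Hive JDBC driver is on the classpath, and the host name, credentials, and the products table are placeholders for whatever your sandbox exposes.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {
        public static void main(String[] args) throws Exception {
            // HiveServer2 commonly listens on port 10000; host and database here are placeholders
            String url = "jdbc:hive2://sandbox-host:10000/default";
            Class.forName("org.apache.hive.jdbc.HiveDriver"); // register the Hive JDBC driver

            try (Connection conn = DriverManager.getConnection(url, "hive", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT category, COUNT(*) AS cnt FROM products GROUP BY category")) {
                while (rs.next()) {
                    // Hive compiles the SQL-like query into distributed jobs behind the scenes
                    System.out.println(rs.getString("category") + "\t" + rs.getLong("cnt"));
                }
            }
        }
    }

Beeline speaks the same protocol, so beeline -u followed by that same JDBC URL gives you an interactive shell against the same HiveServer2 instance.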
I would not recommend learning Pig unless you have very specific uses for it. If you are familiar with Java (or Scala, or even Python, among other options), try writing some MapReduce-style jobs to learn more about how Hadoop works. Open the Ambari (or Cloudera Manager, etc.) interface that comes pre-configured with these sandboxes and look at the tools and services that come pre-packaged with the sandbox. These are the most common ones and make a useful list for starters. Start learning about them (but skip Pig if you can, even if it is pre-installed ;)
Once you are familiar with the sandbox you have, I would suggest going for Apache NiFi, which has an easier learning curve and gives a lot of flexibility. You will most likely have to set up a new sandbox for that, which may also serve as a good revision exercise. Integrate it with your Hadoop sandbox, implement some decent use cases, and you will have some good experience to show.
I tried using ParallelALSFactorizationJob, but it crashes here:
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
The command-line help mentions using the filesystem, but it seems to want Hadoop. How can I run it on Windows? The mahout.cmd file is broken:
"===============DEPRECATION WARNING==============="
"This script is no longer supported for new drivers as of Mahout 0.10.0"
"Mahout's bash script is supported and if someone wants to contribute a fix for this"
"it would be appreciated."
So is that possible (ALS + Windows, without Hadoop)?
Mahout is a community-driven project and its community is very strong.
"Apache Mahout is one of the first and most prominent Big Data machine
learning platforms. It implements machine learning algorithms on top
of distributed processing platforms such as Hadoop and Spark."
-Tiwary, C. (2015). Learning Apache Mahout.
Apache Spark is an open-source, in-memory, general-purpose computing system that runs on both Windows and Unix-like systems. Instead of Hadoop-style disk-based computation, Spark uses cluster memory to load all the data into memory, where it can be queried repeatedly.
"As Spark is gaining popularity among data scientists, the Mahout
community is also quickly working on making Mahout algorithms function
on Spark's execution engine to speed up its calculation 10 to 100
times faster. Mahout provides several important building blocks to
create recommendations using Spark."
-Gupta, A (2015). Learning Apache Mahout Classification.
(This last book also provides a step-by-step guide to using Mahout's Spark shell (they don't use Windows, and it isn't clear whether they use Hadoop). For more information on that topic, see the implementation section at https://mahout.apache.org/users/sparkbindings/play-with-shell.html.)
In addition to this, you can build recommendation engines using Spark features such as DataFrames, RDDs, Pipelines, and Transformers available in Spark MLlib, and
in Spark, (...) the Alternating Least Squares (ALS) method is used for
generating model-based collaborative filtering.
-Gorakala, S. (2016). Building Recommendation Engines.
At this point, there's one question still to answer before answering yours: can we run Spark without Hadoop? Yes: Spark can run in local or standalone mode without a Hadoop cluster.
So, yes, it's possible to use the ALS method on Windows using Spark (without Hadoop).
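As a sketch of what that looks like with Spark MLlib's DataFrame-based ALS in Java: the ratings file, its column names, and the local master setting are assumptions for illustration, not something taken from the books quoted above.

    import org.apache.spark.ml.recommendation.ALS;
    import org.apache.spark.ml.recommendation.ALSModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class AlsExample {
        public static void main(String[] args) {
            // local[*] runs Spark on the local machine, with no Hadoop cluster required;
            // note that some Spark builds on Windows still expect a winutils.exe binary on the PATH
            SparkSession spark = SparkSession.builder()
                    .appName("AlsExample")
                    .master("local[*]")
                    .getOrCreate();

            // ratings.csv is a hypothetical file with columns: userId,itemId,rating
            Dataset<Row> ratings = spark.read()
                    .option("header", "true")
                    .option("inferSchema", "true")
                    .csv("ratings.csv");

            ALS als = new ALS()
                    .setMaxIter(10)
                    .setRegParam(0.1)
                    .setUserCol("userId")
                    .setItemCol("itemId")
                    .setRatingCol("rating");

            // Train the model-based collaborative filtering model
            ALSModel model = als.fit(ratings);

            // Top 5 item recommendations for every user
            Dataset<Row> recommendations = model.recommendForAllUsers(5);
            recommendations.show(false);

            spark.stop();
        }
    }

This stays entirely inside Spark, so no Mahout or Hadoop installation is needed for the ALS part.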
I am new to Hadoop. I recently read about the basics of Apache Hadoop, Pig, Hive, and HBase.
Then I came across the term "Hadoop distribution", and the examples were Cloudera, MapR, and Hortonworks.
So what is the relation of Apache Hadoop (and its ecosystem) to a "Hadoop distribution"?
Is it like the Java Virtual Machine specification (a document) and the Oracle JVM, IBM JVM (working implementations of the document)?
But we get zips from Apache, which are an actual working implementation.
So I am a bit confused.
Since Hadoop is an open source project, a number of vendors have developed
their own distributions, adding new functionality or improving the code base.
Vendor distributions are, of course, designed to overcome issues with the open source edition and provide additional value to customers, with a focus on things such as:
Reliability: The vendors react faster when bugs are detected. They promptly deliver fixes and patches, which makes their solutions more stable.
Support: A variety of companies provide technical assistance, which makes it possible to adopt the platforms for mission-critical and enterprise-grade tasks.
Completeness: Very often Hadoop distributions are supplemented with other tools to address specific tasks.
Have a look at this top-hadoop-distributions article and this presentation for a benchmarking analysis of the top three Hadoop distributions.
Based on Distributions and Commercial Support, the following companies provide products that include Apache Hadoop, a derivative work thereof, commercial support, and/or tools and utilities related to Hadoop.
Some companies release or sell products that include the official Apache Hadoop release files and/or their own and other useful tools. Other companies or organizations release products that include artifacts built from modified or extended versions of the Apache Hadoop source tree. Such derivative works are not supported by the Apache team: all support issues must be directed to the suppliers themselves.
I am working with massive data; my input data is about 100 GB. I want to choose one of the Hadoop distributions, but I don't know whether to choose a MapR cluster or a Cloudera cluster. I want to use the free versions (MapR M3 and Cloudera CDH4, which uses Hadoop 0.20).
Which of them is better? Which configuration should I use so that they work best?
Thanks.
Honestly, the answer to this question is the most common answer in the world: it depends. It's totally up to you and your requirements. One person might find a particular flavor more suitable for their needs, while you might find the same flavor less useful. Moreover, it's largely about personal choice; I personally like Apache's Hadoop. All are good; it's just a matter of which one fits your needs.
"Which of them is better?" is a controversial topic. Questions like this often end up as heated arguments; see this question for example. So I'm not going to list the advantages of either one over the other. But there are certain differences among these flavors of Hadoop that could help you during your thought process.
The major difference between CDH (and Apache Hadoop as well) and MapR is that MapR uses its own proprietary file system, MapRFS, instead of HDFS. The M3 edition is free and available for unlimited production use; support is provided on a community basis and through MapR's forums. CDH is 100% open source, and you can use the "Standard" version of Cloudera Manager without any charges. And Apache, well, it's Apache :) do whatever you feel like.
MapR has even recently partnered with Canonical, the organization behind the Ubuntu operating system, in an effort to make Hadoop available as an integrated part of Ubuntu through its repositories. The partnership announced that MapR's M3 Edition for Apache Hadoop will be packaged and made available for download as an integrated part of the Ubuntu operating system (see this if you need more info). The source code is available on GitHub. The CDH codebase is the same as Apache's, with some patches of their own.
But the free edition lacks some good features like JobTracker HA, NameNode HA, mirroring, snapshots, etc. CDH4, being based on Hadoop 2.x, does provide the HA features. By virtue of its design, MapR doesn't have any single point of failure the way CDH3 (or Hadoop 1.x) does. MapRFS stores data in volumes, conceptually a set of containers distributed across a cluster; each container includes its own metadata, eliminating the central NameNode as a single point of failure. The API is still Apache Hadoop compatible. MapR's setup requirements differ from Apache/CDH; for example, MapR requires raw volumes to be available for installation. Once you have the correct hardware and OS prerequisites, setup and evaluation times should be on the same order of magnitude as Apache/CDH.
IMHO, M3 is not going to give you huge advantages over Apache/CDH, as some of the flagship MapR features, like NFS-HA, snapshots, etc., are not present in the free M3 edition.
Being the first, Cloudera definitely has an extra edge in terms of experience and a solid customer base. But MapR has been more innovative in terms of significant changes to the MapReduce and HDFS components to improve performance.
I'll write some more after some time, as I'm on a call and you are waiting for the answer ;)