Converting unstructured data to structured data using Hadoop

I want to convert unstructured data to structured data for easier analysis, so I want to know whether Pig or Hive is the best fit. If not, which other Hadoop tool can be used, and how?

In my experience the most concise, yet statically typed and very flexible, option is Scalding. It's robust and functional.
Scalding is an open source Twitter project that sits on top of Cascading, which in turn sits on top of Hadoop. What Cascading does is take user-defined stages and 'cascade' them down into as few MapReduce phases as possible.
This page makes a strong case for Scalding compared with other Hadoop APIs:
https://github.com/twitter/scalding/wiki/Rosetta-Code
Spark (not technically a Hadoop technology; it's actually much, much better) now has built-in JSON support - you give it one or more JSON files and it will work out the schema for you.
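In current Spark this lives behind the DataFrame reader rather than a separate JsonRDD. A minimal sketch (the path and session setup are placeholders, not anything from the question):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("json-to-structured").getOrCreate()

    // Spark infers the schema from the JSON documents themselves.
    val df = spark.read.json("hdfs:///raw/events/*.json")   // example path
    df.printSchema()                                         // inspect the inferred structure
    df.createOrReplaceTempView("events")
    spark.sql("SELECT count(*) FROM events").show()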

Related

Hive - Is it a good fit for building a data warehouse?

So, like most enterprise companies, we have built a data warehouse in Hadoop, with user queries supported in Hive, and now, after a few months and user acceptance testing, everyone is a little surprised at how unlike a standard (Oracle/Netezza) database it is when used by end users for ad-hoc data analysis.
While I understand that this is probably a very stupid way of doing projects (we should have researched the use cases and best-fit technologies before building the product), and I know the basic technical aspects of how Hadoop differs from single-node machines... I would still like to understand whether using Hadoop/Hive makes sense for data warehouses in any scenario?
For instance,
Are there always trade-offs in query performance or can they be optimized with configuration changes, horizontal scaling of hardware?
Can it ever be as fast as something like Netezza - which uses non-commodity hardware but functions on a similar architecture?
Where is Hadoop great and absolutely defeats everything else in comparison?
I would argue the Hive Metastore is more useful than HiveServer2 itself as the query interface.
The Metastore is what Presto and Spark use to get at data much quicker than MapReduce, though maybe not as fast as a well-optimized Tez query, and improvements are still being made in Hive 2.x+ with LLAP, for example.
In the end, Hive is really only useful if the ingestion pipelines are actually storing the data in columnar formats such as ORC or Parquet to begin with. From there, any reasonable query engine can scan through that data fairly quickly; Hive just happens to be considered the de facto implementation of that access pattern, whereas Impala or Presto are more often used for ad-hoc access.
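As a rough illustration, here is how an ingestion job might land data in a columnar, metastore-registered table using Spark; the database, table, and path names are made-up assumptions:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("columnar-ingest")
      .enableHiveSupport()          // register tables in the Hive Metastore
      .getOrCreate()

    // Hypothetical raw CSV landing area; column names come from the header row.
    val raw = spark.read.option("header", "true").csv("hdfs:///landing/sales/")

    // Store as ORC and register it, so Hive, Presto, or Spark can all scan it.
    raw.write.format("orc").saveAsTable("analytics.sales")

    // Any metastore-aware engine can now query the same table:
    spark.sql("SELECT count(*) FROM analytics.sales").show()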
That being said, Hive (and other SQL-on-Hadoop engines) is not used for "building"; it is used for "analyzing".
And I'm not sure what you mean by "standard" - Hive supports any ODBC/JDBC connection, so it's not as if you have to go to the CLI for all access, and HUE or Zeppelin make really nice notebooks for SQL analysis over Hive.
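To make the JDBC point concrete, a BI tool or a few lines of client code can talk to HiveServer2 over plain JDBC. A hedged sketch, where the hostname, credentials, and the weblogs table are assumptions:

    import java.sql.DriverManager

    // HiveServer2 speaks standard JDBC; host, port, and user here are placeholders.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection(
      "jdbc:hive2://hive-server:10000/default", "analyst", "")
    val stmt = conn.createStatement()
    val rs = stmt.executeQuery(
      "SELECT page, count(*) AS hits FROM weblogs GROUP BY page")
    while (rs.next()) println(s"${rs.getString("page")}\t${rs.getLong("hits")}")
    rs.close(); stmt.close(); conn.close()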
To answer your question,
Are there always trade-offs in query performance or can they be optimized with configuration changes, horizontal scaling of hardware?
If you are only using Hive from the Hadoop stack, then that is not the right choice for ad-hoc querying and data analysis. You should explore better options for your use case and choose from Hive LLAP, HBase, Spark, Spark SQL, Spark Streaming, Apache Storm, Impala, Apache Drill, Presto, etc.
Can it ever be as fast as something like Netezza - which uses non-commodity hardware but functions on a similar architecture?
Hadoop is a capable platform that most organizations use these days, but you have to be specific about which tools you select from the Hadoop stack for your use case, and only make the selection after studying it.
Where is Hadoop great and absolutely defeats everything else in comparison?
Hadoop is best for implementing a data lake platform in a large organization where data is scattered across multiple systems; with a Hadoop data lake you can bring that data into one central place. That lake can then be leveraged as an analytics platform for the data the organization has accumulated over time. It can also be used for stream processing to get results in real time.
Hope this will help.
Well, there are many benefits to storing big data in HDFS, or more broadly in the Hadoop ecosystem. To name the most important ones: it can store and process huge volumes of data, and the configuration is pretty straightforward.

Confusion between Operational and Analytical Big Data, and which category Hadoop operates in?

I can't wrap my head around the basic theoretical concept of 'Operational and Analytical Big Data'.
According to me:
Operational Big Data: the branch where we perform read/write operations on big data using specially designed databases (NoSQL). Somewhat similar to ETL in an RDBMS.
Analytical Big Data: the branch where we analyse data in retrospect and draw predictions using techniques like MPP and MapReduce. Somewhat similar to reporting in an RDBMS.
(Please feel free to correct wherever I'm wrong, it's just my understanding.)
So according to me, Hadoop is used for Analytical Big Data, where we just process data for analysis but don't tamper with the original data, and hence it is not an ideal choice for ETL.
But recently I have come across this article which advocates using Hadoop for ETL: https://www.datanami.com/2014/09/01/five-steps-to-running-etl-on-hadoop-for-web-companies/
Hadoop (MapReduce) is not an efficient processing layer, IMO, without adequate tweaking, so out of the box, the answer is neither. Sure, MapReduce could be used, and under the hood, that API is what most higher level tools depend on, but since those other tools exist, you wouldn't want to go write ETL jobs in plain MapReduce.
You can combine Hadoop with Spark, Presto, HBase, Hive, etc. to unlock these other Operational or Analytical layers; some are useful for reporting use cases, and others are useful for ETL. Again, there are plenty of knobs to turn to get useful results in a reasonable time compared to an RDBMS (or other NoSQL tools). Plus, it takes several attempts to learn how best to store data in Hadoop to begin with (hint: not plaintext, and not lots of small files).
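As a sketch of what that looks like in practice, an ETL step in Spark can read the raw text, parse it, and compact it into a columnar layout instead of leaving plaintext and small files behind. The paths, the tab-separated three-field layout, and the output file count are all assumptions:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("etl-compact").getOrCreate()
    import spark.implicits._

    // Hypothetical raw logs: many small tab-separated text files from upstream systems.
    val raw = spark.read.textFile("hdfs:///landing/clickstream/")

    // Light parsing into columns (the 3-field layout is an assumption).
    val parsed = raw.map(_.split("\t"))
      .filter(_.length == 3)
      .map(a => (a(0), a(1), a(2)))
      .toDF("ts", "user", "url")

    // Coalesce to a modest number of output files and write a columnar format,
    // avoiding the plaintext / lots-of-small-files pitfalls mentioned above.
    parsed.coalesce(32)
      .write.mode("overwrite")
      .parquet("hdfs:///warehouse/clickstream/")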
That link is over five years old now, and references Flume and Sqoop. Other "web scale" technologies have proven their worth in that time, while Flume and Sqoop have shown their age and can be difficult to configure and manage compared to tools like Apache NiFi.

Apache Hadoop vs Google Bigdata

Can anyone explain to me the key difference between Apache Hadoop and Google Bigdata?
Which one is better (Hadoop or Google Bigdata)?
The simple answer would be: it depends on what you want to do with your data.
Hadoop is used for massive storage of data and batch processing of that data. It is very mature and popular, and there are lots of libraries that support it. But if you want to do real-time analysis or queries on your data, Hadoop is not suitable.
Google's BigQuery was developed specifically to solve this problem. You can do real-time processing of your data using BigQuery.
You can use BigQuery in place of Hadoop, or you can use BigQuery alongside Hadoop to query datasets produced by MapReduce jobs.
So it entirely depends on how you want to process your data. If a batch processing model is required and sufficient, you can use Hadoop; if you want real-time processing, you have to choose Google's offering.
Edit: You can also explore other technologies that you can use with Hadoop like Spark, Storm, Hive etc.. (and choose depending on your use case)
Some useful links for more exploration:
1: gavinbadcock's blog
2: cloudacademy's blog

Pig vs Hive vs Native Map Reduce

I have a basic understanding of what the Pig and Hive abstractions are, but I don't have a clear idea of the scenarios that require Hive, Pig, or native MapReduce.
I went through a few articles which basically point out that Hive is for structured processing and Pig is for unstructured processing. When do we need native MapReduce? Can you point out a few scenarios that can't be solved using Pig or Hive, but only with native MapReduce?
Complex branching logic with a lot of nested if .. else .. structures is easier and quicker to implement in standard MapReduce. For processing structured data you could use Pangool, which also simplifies things like JOINs. Standard MapReduce also gives you full control to minimize the number of MapReduce jobs that your data processing flow requires, which translates into performance. But it requires more time to code and to introduce changes.
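As a small illustration of the kind of branching that is awkward to express in Pig/Hive but trivial in a hand-written mapper, here is a sketch of a Hadoop Mapper written in Scala; the CSV layout and the business rules are invented for the example:

    import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
    import org.apache.hadoop.mapreduce.Mapper
    import scala.util.Try

    // Classifies each input line with nested business rules.
    class BranchingMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
      private val one = new IntWritable(1)
      private val outKey = new Text()

      override def map(key: LongWritable, value: Text,
                       ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
        val f = value.toString.split(",", -1)
        if (f.length >= 3) {                       // skip malformed lines
          val label =
            if (f(0) == "EU") {
              if (Try(f(1).toDouble).toOption.exists(_ > 100.0)) "eu-high" else "eu-low"
            } else if (f(0) == "US") {
              if (f(2).nonEmpty) "us-tagged" else "us-untagged"
            } else "other"
          outKey.set(label)
          ctx.write(outKey, one)
        }
      }
    }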
Apache Pig is good for structured data too, but its advantage is the ability to work with bags of data (all rows that are grouped on a key), which makes it simpler to implement things like:
Get the top N elements for each group;
Calculate the total per group and then put that total against each row in the group;
Use Bloom filters for JOIN optimisation;
Multi-query support (where Pig tries to minimise the number of MapReduce jobs by doing more work in a single job).
Hive is better suited for ad-hoc queries, and its main advantage is that it has an engine that stores and partitions data. However, its tables can also be read from Pig or standard MapReduce.
One more thing, Hive and Pig are not well suited to work with hierarchical data.
Short answer - we need MapReduce when we need very deep, fine-grained control over the way we want to process our data. Sometimes it is not very convenient to express what we need exactly in terms of Pig and Hive queries.
It should not be totally impossible to do through Pig or Hive what you can do using MapReduce. With the level of flexibility provided by Pig and Hive you can usually manage to achieve your goal, but it might not be that smooth. You could write UDFs or something similar and achieve it that way.
There is no clear-cut distinction as such among the usage of these tools. It totally depends on your particular use case. Based on your data and the kind of processing, you need to decide which tool fits your requirements better.
Edit :
Some time ago I had a use case wherein I had to collect seismic data and run some analytics on it. The format of the files holding this data was somewhat weird: part of the data was EBCDIC encoded, while the rest was in binary format. It was basically a flat binary file with no delimiters like \n. I had a tough time finding a way to process these files using Pig or Hive, so I had to settle for MR. Initially it took time, but gradually it became smoother, as MR is really swift once you have the basic template ready.
So, as I said earlier, it basically depends on your use case. For example, iterating over each record of your dataset is really easy in Pig (just a foreach), but what if you need "foreach n"? So, when you need that level of control over the way you process your data, MR is more suitable.
Another situation might be when your data is hierarchical rather than row-based, or when your data is highly unstructured.
Meta-pattern problems involving job chaining and job merging are easier to solve using MR directly rather than using Pig/Hive.
And sometimes it is very convenient to accomplish a particular task using some xyz tool rather than doing it in Pig/Hive. IMHO, MR turns out to be better in such situations as well. For example, if you need to do some statistical analysis on your big data, R used with Hadoop Streaming is probably the best option to go with.
HTH
MapReduce:
Strengths:
works on both structured and unstructured data.
good for writing complex business logic.
Weaknesses:
long development time.
hard to achieve join functionality.
Hive:
Strengths:
less development time.
suitable for ad-hoc analysis.
easy for joins.
Weaknesses:
not easy for complex business logic.
deals only with structured data.
Pig:
Strengths:
structured and unstructured data.
joins are easily written.
Weaknesses:
new language to learn.
gets converted into MapReduce.
Hive
Pros:
SQL-like; database folks love that.
Good support for structured data.
Currently supports database schemas and view-like structures.
Supports concurrent multi-user, multi-session scenarios.
Bigger community support: Hive, HiveServer, HiveServer2, Impala, Sentry, etc.
Cons:
Performance degrades as data grows bigger; memory overflow issues; not much you can do about it.
Hierarchical data is a challenge.
Unstructured data requires a UDF-like component.
Combining multiple techniques can be a nightmare (dynamic portions with UDTFs in the case of big data, etc.).
Pig:
Pros:
Great script-based data flow language.
Cons:
Unstructured data requires a UDF-like component.
Not as big a community.
MapReduce:
Pros:
I don't agree with "hard to achieve join functionality": if you understand what kind of join you want to implement, you can implement it with a few lines of code.
Most of the time MR yields better performance.
MR support for hierarchical data is great, especially for implementing tree-like structures.
Better control over partitioning / indexing the data.
Job chaining.
Cons:
You need to know the API very well to get better performance, etc.
Harder to code / debug / maintain.
Scenarios where Hadoop MapReduce is preferred to Hive or Pig:
When you need definite driver-program control
Whenever the job requires implementing a custom Partitioner (see the sketch after this list)
If there already exists a pre-defined library of Java Mappers or Reducers for a job
If you require a good amount of testability when combining lots of large data sets
If the application demands legacy code requirements that dictate physical structure
If the job requires optimization at a particular stage of processing by making the best use of tricks like in-mapper combining
If the job has some tricky usage of distributed cache (replicated join), cross products, groupings, or joins
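For the custom Partitioner point above, a minimal sketch in Scala; the "country|userId" key format is an assumption made up for the example:

    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapreduce.Partitioner

    // Send every key with the same country prefix to the same reducer.
    class CountryPartitioner extends Partitioner[Text, IntWritable] {
      override def getPartition(key: Text, value: IntWritable, numPartitions: Int): Int = {
        val country = key.toString.split('|')(0)
        (country.hashCode & Integer.MAX_VALUE) % numPartitions
      }
    }

    // Wired into the job driver with:
    //   job.setPartitionerClass(classOf[CountryPartitioner])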
Pros of Pig/Hive:
Hadoop MapReduce requires more development effort than Pig and Hive.
Pig and Hive coding approaches are slower than a fully tuned Hadoop MapReduce program.
When using Pig and Hive for executing jobs, Hadoop developers need not worry about any version mismatch.
There is very little opportunity for the developer to introduce Java-level bugs when coding in Pig or Hive.
Have a look at this post for Pig Vs Hive comparison.
Everything we can do using Pig and Hive can be achieved using MR (though it will sometimes be more time consuming). Pig and Hive use MR/Spark/Tez underneath. Conversely, the things that MR can do may or may not be possible in Hive and Pig.
Here is a great comparison. It covers all the use-case scenarios.

SequenceFile alternative/extension that allows in-place updates

I like the convenience of a database where you can update a row in-place. But Hadoop relies on sequence files that are capable of being consumed in parallel.
I liked the idea of HBase where I can rewrite just one row; as well as being input to a map-reduce job. But HBase is not something a newb must mess with, right? What is a good tool/method for this?
I don't think it's very difficult to learn and use HBase.
Coming to your original question: the reason we use HBase is the same as the reason for using any other DB, i.e. random, real-time read/write access, which HDFS lacks - and this is true of any filesystem, not just HDFS. You could take the ext4 & MySQL pairing as an analogy.
And when you say re-write in HBase, it is actually not an update. You either put a new version of a cell, or delete the cell and put new data at the same location.
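A minimal sketch with the HBase Java client API (used from Scala here); the table name, column family, and row key are placeholders:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Delete, Put}
    import org.apache.hadoop.hbase.util.Bytes

    val conf  = HBaseConfiguration.create()                  // picks up hbase-site.xml
    val conn  = ConnectionFactory.createConnection(conf)
    val table = conn.getTable(TableName.valueOf("events"))   // table/column names are assumptions

    // "Updating" a row really means writing a new cell version at the same coordinates.
    val put = new Put(Bytes.toBytes("row-001"))
    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status"), Bytes.toBytes("processed"))
    table.put(put)

    // Or remove the cell entirely and put fresh data later.
    val del = new Delete(Bytes.toBytes("row-001"))
    del.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status"))
    table.delete(del)

    table.close()
    conn.close()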
And you can't say that Hadoop relies on sequence files to provide parallelism. Parallelism is something Hadoop provides by virtue of its nature, i.e. being a distributed platform. You can handle almost any kind of file using Hadoop with almost equal parallelism. The only advantage of sequence files is that they are more suitable for MapReduce processing, as they are already in key/value pairs.
You have to take it with a pinch of salt, but frankly speaking Hadoop doesn't understand updates. If you could elaborate on your use case a bit more, maybe I could suggest something better.
