I have a question: how do I write a MapReduce job that implements a HiveQL statement? For example, we have a table with columns color, width, and some others. If I want to select color in Hive I can run select color from tablename;. In the same way, what is the MapReduce code for getting color?
You can use the Thrift server and connect to Hive through JDBC. All you need is to include the hive-jdbc jar in your classpath.
However, is this advisable? Of that I am not really sure. Issuing Hive queries from inside a mapper is a very bad design pattern, since the number of mappers is determined by the data size.
The same can be achieved by feeding multiple inputs into the MR job.
But then, I do not know that much about your use case, so Thrift may be the way to go.
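For reference, a JDBC connection from Java looks roughly like the sketch below. It assumes a HiveServer2/Thrift endpoint on localhost:10000 and the default database; the host, port, credentials, and table name are placeholders to adapt.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {
        public static void main(String[] args) throws Exception {
            // Assumes HiveServer2; the older Hive server uses
            // org.apache.hadoop.hive.jdbc.HiveDriver and a jdbc:hive:// URL instead.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection con = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "", "");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT color FROM tablename")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }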
For converting Hive queries to MapReduce jobs, YSmart is the best option:
http://ysmart.cse.ohio-state.edu/
YSmart can either be downloaded or used online.
Check the companion code for Chapter 5 - Join Patterns from the MapReduce Design Patterns book. In the join pattern, the fields are extracted in the mapper and emitted.
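To make the projection concrete, here is a minimal sketch of a map-only job that emits just the color column, i.e. the MapReduce equivalent of select color from tablename. It assumes comma-delimited text input with color as the first field; adjust the delimiter and index to your actual layout.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SelectColor {

        public static class ColorMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Assumes comma-delimited rows with color in the first position.
                String[] fields = value.toString().split(",");
                if (fields.length > 0) {
                    context.write(new Text(fields[0]), NullWritable.get());
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "select color");
            job.setJarByClass(SelectColor.class);
            job.setMapperClass(ColorMapper.class);
            job.setNumReduceTasks(0);  // map-only: a simple projection needs no reducer
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(NullWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }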
Difference between MapReduce, Hive, and Pig
Pig: a data-flow language. It can work on any data and is basically used to convert semi-structured or unstructured data into structured data, so that it can then be used in Hive for advanced analytics with windowing functions, etc.
Hive: works on structured data and provides an SQL-type query language.
I know that at the back end both Pig and Hive use MapReduce.
I know MapReduce can be a good tool for a programmer, and Hive or Pig for an SQL person.
I just want to know whether there are specific use cases where we go for Hive, Pig, or MapReduce,
i.e., how we decide that we should use Pig here, Hive there, or plain MapReduce.
MapReduce: has better performance than Pig or Hive but requires more development time.
Pig: less development time but poorer performance compared to MapReduce.
Hive: an SQL-type language with some good features, like partitioning and bucketing, to improve read performance. Also, Hive enforces schema on read.
Pig is used to format your unstructured/semi-structured data. Let's say you have a timestamp in your data which is not in the Hive timestamp format; you can convert it using a Pig UDF and format your data. This is just one example to explain; you can do many more things using Pig.
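As a rough illustration only: a Pig UDF (written in Java) that rewrites a timestamp into Hive's yyyy-MM-dd HH:mm:ss format could look like the sketch below. The input pattern dd/MM/yyyy HH:mm:ss and the class name FormatTimestamp are assumed for the example, not taken from the question.

    import java.io.IOException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Usage sketch: REGISTER the jar, then DEFINE an alias for this class and
    // call it in a FOREACH ... GENERATE statement on the raw timestamp field.
    public class FormatTimestamp extends EvalFunc<String> {
        // Assumed source format; change to match your actual data.
        private final SimpleDateFormat in = new SimpleDateFormat("dd/MM/yyyy HH:mm:ss");
        // Hive's default timestamp format.
        private final SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            try {
                Date d = in.parse(input.get(0).toString());
                return out.format(d);
            } catch (java.text.ParseException e) {
                return null; // leave unparseable values null rather than failing the job
            }
        }
    }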
Hive is basically used for structured data; it may not work well with unstructured data. It takes more time to execute because queries are converted into MapReduce jobs. I suggest you look at Impala, which is much faster than Hive.
Pig is a data-flow language. This means that you cannot use if statements or loops.
If you need to do a lot of repetition, it would be preferable to learn MapReduce.
You can get around this by embedding Pig in a Python script, but this would take even longer, since all the jar files would have to be loaded with every iteration of the loop.
Basically it boils down to how much time you spend prototyping vs. how much production work you have.
If you are a data scientist or an analyst, most of your work is new projects that require a lot of prototyping. This means that you care about getting results fast. Then you would prefer Pig or Hive.
If you are in a development team, you want to build robust code based on an agreed-upon methodology that does not need to be constantly reworked, and then you would prefer MapReduce.
There are companies like Cloudera that provide a package of Pig, Hive, and other Hadoop tools so you wouldn't have to choose between the two.
MapReduce is an inner component of Hadoop, while Pig and Hive are Hadoop ecosystem tools, which means they run on top of Hadoop. The purpose of MapReduce, Pig, and Hive alike is to process vast amounts of data, each in a different manner.
MapReduce: implemented by Apache. Recommended for processing the entire data set, but it is time-consuming and requires programming skills in Java (most common), Python, Ruby, or other languages. The data is aggregated and sorted using mapper and reducer functions. Hadoop uses it by default.
Hive: implemented by Facebook. Most analysts, especially big-data analysts, use this tool to analyze data, particularly structured data. On the back end, Hive uses MapReduce for processing. Internally, Hive uses a special language called HQL, a subset of SQL; whoever is comfortable in SQL can go with Hive. It is highly recommended for data-warehouse-oriented projects. It is much more difficult to process unstructured, especially schema-less, data with it.
Pig: a scripting language, implemented by Yahoo. The main difference between Pig and Hive is that Pig can process any type of data, structured or unstructured. That makes it a good fit for streaming data such as satellite-generated data, live events, schema-less data, etc. Pig first loads the data, and afterwards the programmer writes a program that depends on the data to make it structured. Those who are experts in programming languages will choose this Hadoop ecosystem tool.
I have a set of Hadoop flows that were written before we started using Hive. When we added Hive, we configured the data files as external tables. Now we're thinking about rewriting the flows to output their results using HCatalog. Our main motivation to make the change is to take advantage of the dynamic partitioning.
One of the hurdles I'm running into is that some of our reducers generate multiple data sets. Today this is done with side-effect files, so we write out each record type to its own file in a single reduce step, and I'm wondering what my options are to do this with HCatalog.
One option obviously is to have each job generate just a single record type, reprocessing the data once for each type. I'd like to avoid this.
Another option for some jobs is to change our schema so that all records are stored in a single schema. Obviously this option works well if the data was just broken apart for poor-man's partitioning, since HCatalog will take care of partitioning the data based on the fields. For other jobs, however, the types of records are not consistent.
It seems that I might be able to use the Reader/Writer interfaces to pass a set of writer contexts around, one per schema, but I haven't really thought it through (and I've only been looking at HCatalog for a day, so I may be misunderstanding the Reader/Writer interface).
Does anybody have any experience writing to multiple schemas in a single reduce step? Any pointers would be much appreciated.
Thanks.
Andrew
As best I can tell, the proper way to do this is to use the MultiOutputFormat class. The biggest help for me was the TestHCatMultiOutputFormat test in Hive.
Andrew
Where does Apache HiveQL store the Map/Reduce code it generates?
I believe Hive doesn't really generate Map/Reduce code in the sense of what you would write in Java; the query is interpreted by the Hive query planner into a plan of MapReduce stages.
If you want to get an idea of what kind of operations your Hive queries generate, you can prefix your queries with EXPLAIN and you will see the abstract syntax tree, the dependency graph, and the plan of each stage. More info on EXPLAIN is available in the Hive documentation.
If you really want to see some Map/Reduce jobs, you could try YSmart which will translate your HiveQL statements into working Java Map/Reduce code. I haven't used it personally, but I know people who have and said good things about it.
It seems that Hive invokes this method on every query execution:
http://hive.apache.org/docs/r0.9.0/api/org/apache/hadoop/hive/ql/exec/Task.html#execute(org.apache.hadoop.hive.ql.DriverContext)
I am a newbie on the MR and Hadoop front.
I wrote an MR job for finding missing values in a CSV file and it is working fine.
Now I have a use case where I need to parse a CSV file and recode it according to the relevant category.
For example: "11,abc,xyz,51,61,78","11,adc,ryz,41,71,38",.............
Now this has to be replaced with "1,abc,xyz,5,6,7","1,adc,ryz,4,7,3",.............
Here I am doing a mod of 10, but there will be different mod cases.
The data size is in GBs.
I want to know how to replace the content in place in the input. Is this achievable with MR?
Basically, I have not seen any file-handling or file-writing Hadoop examples anywhere.
At this point I do not want to go to HBase or other DB tools.
You cannot replace data in place, since HDFS files are append-only and cannot be edited.
I think the simplest way to achieve your goal is to register your data in Hive as an external table and write your transformation in HQL.
Hive is a system that sits alongside Hadoop and translates your queries into MR jobs.
Using it is not as serious an infrastructure decision as adopting HBase.
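If you do want to stay in plain MapReduce, the usual pattern is a map-only job that reads the CSV and writes the recoded rows to a new output directory, swapping directories afterwards; you cannot edit the file where it sits. Below is a minimal sketch under assumptions drawn from the question: each line is treated as one comma-delimited record, and numeric fields are recoded by dividing by 10 as the example output suggests (11 -> 1, 78 -> 7). Replace that rule with your real per-field logic.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RecodeCsv {

        public static class RecodeMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",", -1);
                StringBuilder out = new StringBuilder();
                for (int i = 0; i < fields.length; i++) {
                    String f = fields[i];
                    // Assumed recoding rule from the question's example:
                    // numeric fields are divided by 10; other fields pass through.
                    if (f.matches("\\d+")) {
                        f = String.valueOf(Long.parseLong(f) / 10);
                    }
                    if (i > 0) out.append(',');
                    out.append(f);
                }
                context.write(new Text(out.toString()), NullWritable.get());
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "recode csv");
            job.setJarByClass(RecodeCsv.class);
            job.setMapperClass(RecodeMapper.class);
            job.setNumReduceTasks(0);  // map-only: one output file per input split
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(NullWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // original CSV directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // new directory with recoded rows
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }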
What is the exact difference between Pig and Hive? I found that both have the same functional meaning because they are used for doing the same work; the only thing that differs is the implementation. So when should each technology be used? Is there any specification for both which clearly shows the difference between them in terms of applicability and performance?
Apache Pig and Hive are two projects that layer on top of Hadoop, and provide a higher-level language for using Hadoop's MapReduce library. Apache Pig provides a scripting language for describing operations like reading, filtering, transforming, joining, and writing data -- exactly the operations that MapReduce was originally designed for. Rather than expressing these operations in thousands of lines of Java code that uses MapReduce directly, Pig lets users express them in a language not unlike a bash or perl script. Pig is excellent for prototyping and rapidly developing MapReduce-based jobs, as opposed to coding MapReduce jobs in Java itself.
If Pig is "scripting for Hadoop", then Hive is "SQL queries for Hadoop". Apache Hive offers an even more specific and higher-level language, for querying data by running Hadoop jobs, rather than directly scripting step-by-step the operation of several MapReduce jobs on Hadoop. The language is, by design, extremely SQL-like. Hive is still intended as a tool for long-running batch-oriented queries over massive data; it's not "real-time" in any sense. Hive is an excellent tool for analysts and business development types who are accustomed to SQL-like queries and Business Intelligence systems; it will let them easily leverage your shiny new Hadoop cluster to perform ad-hoc queries or generate report data across data stored in storage systems mentioned above.
From a purely engineering point of view, I find PIG both easier to write and maintain than SQL-like languages. It is procedural, so you apply a bunch of relations to your data one-by-one, and if something fails you can easily debug at intermediate steps, and even have a command called “illustrate” which uses an algorithm to sample some data matching your relation. I’d say for jobs with complex logic, this is definitely much more convenient than Hive, but for simple stuff the gain is probably minimal.
Regarding interfacing, I find that PIG offers a lot of flexibility compared to Hive. You don't have a notion of table in PIG, so you manipulate files directly, and you can define a loader to read pretty much any format very easily with loader UDFs, without having to go through the table-loading stage before you can do your transformations. There is a nice feature in recent versions of PIG where you can use dynamic invokers, i.e. use pretty much any Java method directly in your PIG script, without having to write a UDF.
For performance/optimization, from what I’ve seen you can directly control in PIG the type of join and grouping algorithm you want to use (I believe 3 or 4 different algorithms for each). I’ve personally never used it, but as you’re writing demanding algorithms it could probably be useful to be able to decide what to do instead of relying on the optimizer as it’s the case in Hive. So I wouldn’t say it necessarily performs better than Hive, but in cases where the optimizer makes the wrong decision, you have the option to choose what algorithm to use and have more control on what happens.
One of the cool things I did lately was splits: you can split your execution flow and apply different relations to each split. So you can have a non-linear dataset, split it based on a field, and apply a different processing to each part, and maybe join the results together in the end, all this in the same script. I don’t think you can do this in Hive, you’d have to write different queries for each case, but I may be wrong.
One thing to note also is that you can increment counters in PIG. Currently you can only do this in PIG UDFs though. I don’t think you can use counters in Hive.
And there are some nice projects that allow you to interface PIG with Hive as well (like HCatalog), so you can basically read data from a hive table, or write data to a hive table (or both) by simply changing your loader in the script. Supports dynamic partitions as well.
Apache Pig is a platform for analyzing large data sets. Pig's language, Pig Latin, is a simple query algebra that lets you express data transformations such as merging data sets, filtering them, and applying functions to records or groups of records. Users can create their own functions to do special-purpose processing.
Pig Latin queries execute in a distributed fashion on a cluster. The current implementation compiles Pig Latin programs into Map-Reduce jobs and executes them on a Hadoop cluster.
https://cwiki.apache.org/confluence/display/PIG/Index
Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop compatible file systems. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
https://cwiki.apache.org/Hive/
What is the exact difference between Pig and Hive? I found that both have the same functional meaning because they are used for doing the same work.
Have a look at the Pig vs. Hive comparison in a nutshell from the DeZyre article:
Hive scores over PIG in Partitions, Server, Web interface & JDBC/ODBC support.
Some differences:
Hive is best for structured Data & PIG is best for semi structured data
Hive used for reporting & PIG for programming
Hive used as a declarative SQL & PIG used as procedural language
Hive supports partitions & PIG does not
Hive can start an optional thrift based server & PIG can't
Hive defines tables beforehand (schema) and stores the schema information in a database; PIG doesn't have a dedicated metadata database
Hive does not support Avro but PIG does
Pig also supports the additional COGROUP feature for performing outer joins, but Hive does not. However, both Hive & PIG can join, order & sort dynamically
So when to use and which technology?
The differences above should clarify your query.
HIVE : Structured data, SQL like queries and used for reporting purpose
PIG: Semi-structured data, program a work-flow involving a sequence of activities for Map Reduce jobs.
Regarding job performance, both HIVE and PIG are slower compared to a traditional Map Reduce job. Reason: in the end, Hive or PIG scripts have to be converted into a series of Map Reduce jobs.
Have a look at related SE question:
Pig vs Hive vs Native Map Reduce
The main difference is that PIG is a data-flow language and Hive is a data warehouse.
PIG can be used like a step-by-step procedural language,
whereas HIVE is used as a declarative language.
PIG can be used for handling online streaming unstructured data, but HIVE can only access structured data; HIVE can also access data from RDBMS and NoSQL databases by using JDBC and ODBC drivers.
PIG can convert data into Avro format, but HIVE can't.
PIG can't create partitions, but HIVE can.
Since in such a workflow HIVE sits downstream of PIG, HIVE can access the data only once it has been processed by PIG.
Deciding between PIG and HIVE depends on the data: if you are working with structured, relational data then we can use HIVE; otherwise we can use PIG.
With PIG we can interact with ETL tools, but it takes more time compared with HIVE. However, it is easier in PIG than in HIVE, because in HIVE we have to create a table before processing the data.