MapReduce code generated by Hive - hadoop

Where does Apache HiveQL store the Map/Reduce code it generates?

I believe Hive doesn't really generate Map/Reduce source code in the sense of Java code you could read; the query is compiled by Hive's query planner into a plan of operators that is then executed as one or more MapReduce jobs.
If you want to get an idea of what kind of operations your Hive queries generate, you can prefix a query with EXPLAIN (for example, EXPLAIN SELECT color FROM mytable) and you will see the abstract syntax tree, the stage dependency graph, and the plan for each stage; EXPLAIN EXTENDED shows even more detail. More info on EXPLAIN is in the Hive language manual.
If you really want to see some Map/Reduce jobs, you could try YSmart, which translates HiveQL statements into working Java Map/Reduce code. I haven't used it personally, but people I know who have said good things about it.

It seems that Hive drives each query's execution through this method:
http://hive.apache.org/docs/r0.9.0/api/org/apache/hadoop/hive/ql/exec/Task.html#execute(org.apache.hadoop.hive.ql.DriverContext)

Related

What "Querying objectives" in CCDH certification specifically mean?

I am planning to take the CCDH certification. Can anyone help me with the requirement below? Does it mean we have to write MR code equivalent to HiveQL statements such as select, join, etc., or is it something else?
http://www.cloudera.com/content/cloudera/en/training/certification/ccdh/prep.html
Querying Objectives
Write a MapReduce job to implement a HiveQL statement.
Write a MapReduce job to query data stored in HDFS.
Per this Cloudera forum thread:
Please explain what is really expected in part four. Are we really expected to write an entire job? I can't imagine that; can you clarify this point? And how difficult are the queries in HiveQL? Do we have to know subtle select clauses?
In part 4 you are expected to read an entire job (driver, mapper, reducer), dissect it, and be able to understand what the code is doing or not doing. It's basically a dissection exercise: given the following code, what is the outcome? Queries in HiveQL are not difficult if you know HiveQL or SQL, which are not difficult.
I can't guarantee it's accurate, but that's a post by a Cloudera employee (a bit dated though; it's from 2014-02).

Difference between Hive, Pig, and MapReduce use cases

Difference between MapReduce, Hive, and Pig:
Pig: it's a data flow language that can work on any data; it is basically used to convert semi-structured or unstructured data into structured data, which can then be used in Hive for advanced analytics with windowing functions etc.
Hive: works on structured data and provides an SQL-type query language.
I know that at the back end both Pig and Hive use MapReduce.
I know MapReduce can be a good tool for a programmer, and Hive or Pig for an SQL person.
I just want to know whether there are specific use cases where we go for Hive, Pig, or MapReduce; basically, how do we decide that we have to use Pig here, Hive there, or must use MapReduce?
MapReduce: has better performance than Pig or Hive but requires more development time.
Pig: less development time, but poorer performance compared to hand-written MapReduce.
Hive: an SQL-type language with some good features like partitioning and bucketing to improve read performance. Also, Hive enforces schema on read.
Pig is used to format your unstructured/semi-structured data. Let's say you have a timestamp in your data which is not in the Hive timestamp format; you can convert it with a Pig UDF and reformat your data. This is just one example to explain the idea; you can do many more things with Pig.
Hive is basically used for structured data and may not work well with unstructured data. It also takes more time to execute because it converts queries into MapReduce jobs. I suggest you use Impala, which is much faster than Hive.
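As a rough illustration of the timestamp-reformatting UDF mentioned above, here is a minimal Pig EvalFunc sketch in Java. The class name, the assumed source layout (dd/MM/yyyy HH:mm:ss), and the target Hive layout are assumptions for illustration, not something from the original answer.

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Reformats a timestamp string into Hive's default TIMESTAMP layout
// (yyyy-MM-dd HH:mm:ss). Null or unparsable inputs come back as null.
public class ToHiveTimestamp extends EvalFunc<String> {
    // Assumed source layout; change it to whatever your raw data uses.
    private final SimpleDateFormat in = new SimpleDateFormat("dd/MM/yyyy HH:mm:ss");
    private final SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        try {
            return out.format(in.parse(input.get(0).toString()));
        } catch (ParseException e) {
            return null;
        }
    }
}

You would register the jar in your Pig script and call the function inside a FOREACH ... GENERATE, then point a Hive external table at the cleaned output.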
Pig is a data flow language. This means that you cannot use control-flow constructs such as if statements or loops.
If you need to do a lot of repetition, it would be preferable to learn MapReduce.
You can get around this by embedding Pig in a Python script, but that would take even longer, since it would have to load all the jar files on every iteration of the loop.
Basically it boils down to how much time you spend prototyping vs. how much production work you have.
If you are a data scientist or an analyst, most of your work is new projects that require a lot of prototyping. This means that you care about getting results fast. Then you would prefer Pig or Hive.
If you are on a development team, you want to build robust code based on an agreed-upon methodology that does not need to be tested over and over, and then you would prefer MapReduce.
There are companies like Cloudera that provide a package of Pig, Hive, and other Hadoop tools, so you don't have to choose among them.
MapReduce is a core component of Hadoop, while Pig and Hive are Hadoop ecosystem tools that run on top of Hadoop. The purpose of MapReduce, Pig, and Hive alike is to process vast amounts of data, each in a different manner.
MapReduce: implemented by Apache. Highly recommended when you need to process the entire data set, but it is time-consuming and requires programming skills in Java (most common), Python, Ruby, or other languages. The data is aggregated and sorted using mapper and reducer functions. Hadoop uses it by default.
Hive: implemented by Facebook. Most analysts, especially big data analysts, use this tool to analyze data, particularly structured data. Behind the scenes Hive uses MapReduce for processing. Hive uses its own language called HQL, a subset of SQL, so anyone who is comfortable with SQL can go with Hive. It is highly recommended for data-warehouse-oriented projects. It is much harder to process unstructured, especially schema-less, data with it.
Pig:
Pig is a scripting language implemented by Yahoo. The main difference between Pig and Hive is that Pig can process any type of data, structured or unstructured, which means it is highly recommended for streaming data such as satellite-generated data, live events, and schema-less data. Pig first loads the data, and the programmer then writes a program, depending on the data, to make it structured. People who are expert in programming languages will choose this Hadoop ecosystem tool.

MapReduce job to implement a HiveQL statement

I have a question: how do I write a MapReduce job to implement a HiveQL statement? For example, we have a table with columns color, width, and some other columns. If I want to select color in Hive, I can run select color from tablename;. In the same way, what is the code for getting color in MapReduce?
You can use the Thrift server. You can connect to Hive through JDBC; all you need is to include the hive-jdbc jar in your classpath.
However, is this advisable? That I am not really sure about. It is a very bad design pattern if you do it in the mapper, because the number of mappers is determined by the data size.
The same thing can be achieved with multiple inputs into the MR job.
But then, I do not know that much about your use case, so Thrift may be the way to go.
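For reference, a minimal sketch of the JDBC route described above, assuming a HiveServer2 instance at localhost:10000 and a table named tablename with a color column (the host, port, database, and table are assumptions for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSelect {
    public static void main(String[] args) throws Exception {
        // HiveServer2 driver from the hive-jdbc jar; the older HiveServer1
        // uses a different driver class and a jdbc:hive:// URL prefix.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        // The same statement you would type in the Hive CLI.
        ResultSet rs = stmt.executeQuery("SELECT color FROM tablename");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}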
For converting Hive queries to MapReduce jobs, YSmart is a good option:
http://ysmart.cse.ohio-state.edu/
YSmart can either be downloaded or used online.
Check the companion code for Chapter 5 (Join Patterns) of the MapReduce Design Patterns book. In the join pattern the fields are extracted in the mapper and emitted.
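To show what extracting a field in the mapper looks like for the simple select color case, here is a minimal map-only sketch. The comma delimiter, the position of the color field, and the input/output paths are assumptions for illustration; a real table layout may differ.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SelectColor {

    public static class ColorMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumes comma-delimited rows with color as the first field.
            String[] fields = value.toString().split(",");
            if (fields.length > 0) {
                context.write(new Text(fields[0]), NullWritable.get());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "select color");
        job.setJarByClass(SelectColor.class);
        job.setMapperClass(ColorMapper.class);
        job.setNumReduceTasks(0); // pure projection, no shuffle or reduce needed
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A GROUP BY or JOIN would additionally need a reducer; this is roughly the kind of job Hive's planner sets up for you behind the scenes.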

Can Maps and Reduces be identified dynamically?

I want to figure out whether there is any software or algorithm that can identify the maps and reduces in a given piece of code on its own.
This is what happens when you run Hive or Pig queries: you just submit your queries, they automatically get converted into the corresponding MR jobs, and you get the result without having to do anything more. Have a look at ANTLR (ANother Tool for Language Recognition), which Hive uses to parse a query into an abstract syntax tree; the Hive planner then turns that tree into the corresponding MR jobs. ANTLR is a parser generator for reading, processing, executing, or translating structured text or binary files.
Do you need something else? Apologies if I have got it wrong.

Can we run queries from the Custom UDF in Hive?

Guys, I am a newbie to Hive and have some doubts about it.
Normally we write a custom UDF in Hive for a particular set of columns (consider that the UDF is in Java), meaning it performs some operation on those particular columns.
What I am wondering is: can we write a UDF that takes a particular column as input, builds a query from it, and returns that query from the UDF so that it gets executed on the Hive CLI with the column as input?
Can we do this? If yes, please suggest how.
Thanks, and sorry for my bad English.
This is not possible out of the box, because by the time the Hive query is running, a plan has already been built and is executing. What you suggest would mean dynamically changing that plan while it is running, which is hard not only because the plan is already built, but also because the Hadoop MapReduce jobs are already running.
What you can do is have your initial Hive query output new Hive queries to a file, then have some sort of bash/perl/python script that goes through that file, formulates the new Hive queries, and passes them to the CLI.
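To make that idea concrete, here is a minimal sketch of a plain column-level UDF that only builds a query string from the column value instead of executing anything; the outer query can write these strings to a file for an external script to replay against the CLI, as suggested above. The class name and the generated statement (including the table name tablename) are assumptions for illustration.

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Classic one-argument Hive UDF: for each input value it returns a new
// HiveQL statement as a string. It does not run the statement itself.
public class BuildQuery extends UDF {
    public Text evaluate(Text color) {
        if (color == null) {
            return null;
        }
        return new Text("SELECT * FROM tablename WHERE color = '" + color.toString() + "'");
    }
}

After ADD JAR and CREATE TEMPORARY FUNCTION, something like INSERT OVERWRITE LOCAL DIRECTORY ... SELECT buildquery(color) FROM tablename would produce the file of generated statements for the driver script to execute.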
