Hive - How to know which execution engine I am currently using - hadoop

I want to automate my Hive ETL workflow in such a way that Hive jobs are executed differently depending on the execution engine (Tez or MR), because of memory constraints.
Could you please help? I want to cross-check, at intermediate points of my whole workflow, which execution engine I am currently dealing with.
Thanks in advance.

The Hive execution engine is controlled by the hive.execution.engine property. It can be any of the following:
mr (Map Reduce, default)
tez (Tez execution, for Hadoop 2 only)
spark (Spark execution, for Hive 1.1.0 onward).
The property can be read and updated from the Hive/Beeline CLI:
For reading - SET hive.execution.engine;
For updating - SET hive.execution.engine=tez;
If you want to get this value programmatically, you can go for a HiveClient, which supports multiple interfaces such as JDBC, Java, Python, PHP, Ruby, C++, etc.
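As a minimal sketch of the JDBC route (assuming a HiveServer2 instance at localhost:10000 and the Hive JDBC driver on the classpath; host, port, database and credentials are placeholders), reading the current engine from Scala could look like this:

import java.sql.DriverManager

object CheckExecutionEngine {
  def main(args: Array[String]): Unit = {
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    // Placeholder connection details; adjust to your cluster
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "")
    try {
      val stmt = conn.createStatement()
      // SET <property> without a value returns the current setting as a single row
      val rs = stmt.executeQuery("SET hive.execution.engine")
      if (rs.next()) println(rs.getString(1)) // e.g. "hive.execution.engine=tez"
    } finally {
      conn.close()
    }
  }
}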
References
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=82903061#ConfigurationProperties-hive.execution.engine
https://cwiki.apache.org/confluence/display/Hive/HiveClient

Related

Spark on Parquet vs Spark on Hive(Parquet format)

Our use case is a narrow table (15 fields) but heavy processing against the whole dataset (billions of rows). I am wondering which combination provides better performance:
env: CDH5.8 / spark 2.0
Spark on Hive tables(as format of parquet)
Spark on row files(parquet)
Without additional context about your specific product and use case, I'd vote for SparkSQL on Hive tables, for two reasons:
SparkSQL is usually better than core Spark, since Databricks wrote different optimizations into SparkSQL, which is a higher abstraction and gives the ability to optimize code (read about Project Tungsten). In some cases manually written Spark core code will be better, but that demands a deep understanding of the internals from the programmer. In addition, SparkSQL is sometimes limited and doesn't let you control low-level mechanisms, but you can always fall back to working with core RDDs.
Hive and not files - I'm assuming Hive with an external metastore. The metastore saves the definitions of the partitions of your "tables" (with plain files it could be some directory). This is one of the most important parts for good performance: when working with files, Spark needs to load this info itself (which can be time consuming - e.g. an S3 list operation is very slow), whereas the metastore lets Spark fetch this info in a simple and fast way.
There are only two options here: Spark on files, or Spark on Hive. SparkSQL works on both, and you should prefer to use the Dataset API, not RDDs.
If you can define the Dataset schema yourself, Spark reading the raw HDFS files will be faster because you're bypassing the extra hop to the Hive Metastore.
When I did a simple test myself years ago (with Spark 1.3), I noticed that extracting 100000 rows as a CSV file was orders of magnitude faster than a SparkSQL Hive query with the same LIMIT.
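To make the two options concrete, here is a small sketch (the table name, file path and schema are made up for illustration) comparing a read through the Hive metastore with a schema-supplied read of the raw Parquet files:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Option 1: let the Hive metastore provide the schema and partition locations
val viaHive = spark.table("mydb.events")

// Option 2: read the raw Parquet files and supply the schema yourself,
// skipping the metastore round trip
val schema = StructType(Seq(
  StructField("id", LongType),
  StructField("ts", TimestampType),
  StructField("value", DoubleType)))
val viaFiles = spark.read.schema(schema).parquet("hdfs:///data/events")

viaHive.count()
viaFiles.count()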

Why does one action produce two jobs?

I use Spark 2.1.0.
Why does the following single action produce 2 identical jobs (with the same DAG in each)? Shouldn't it produce just 1? Here is the code:
// Person is a simple case class:
case class Person(name: String, age: Int)
// (in a standalone app you would also need: import spark.implicits._ for toDF)

val path = "/usr/lib/spark/examples/src/main/resources/people.txt"
val peopleDF = spark.
  sparkContext.
  textFile(path, 4).
  map(_.split(",")).
  map(attr => Person(attr(0), attr(1).trim.toInt)).
  toDF
peopleDF.show()
This is what I see in the web UI when checking what is going on. I suppose it has something to do with the DataFrame transformations.
Although, in general, a single SQL query may lead to more than one Spark job, in this particular case Spark 2.3.0-SNAPSHOT gives only one (contrary to what you see).
The job (Job 12 in my run) is also pretty nice, i.e. just a single-stage, no-shuffle Spark job.
The reason you can see more than one Spark job per Spark SQL structured query (using SQL or the Dataset API) is that Spark SQL offers a high-level API atop RDDs and uses RDDs and actions freely to make your life as a Spark developer and a Spark performance tuning expert easier. In most cases (especially when you want to build abstractions), you'd have to fire up those Spark jobs yourself to achieve comparable performance.
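If you want to verify how many jobs an action actually triggers rather than eyeballing the web UI, a small listener sketch like the following can count them (it reuses the spark session and peopleDF from the question; the SparkListener API is standard, the rest is illustrative):

import java.util.concurrent.atomic.AtomicInteger
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}

val jobCounter = new AtomicInteger(0)
spark.sparkContext.addSparkListener(new SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
    jobCounter.incrementAndGet()   // one event per submitted job
  }
})

peopleDF.show()
// Listener events are delivered asynchronously, so give the bus a moment
Thread.sleep(1000)
println(s"Jobs triggered: ${jobCounter.get()}")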

Hadoop data visualization

I am a new Hadoop developer and I have been able to install and run Hadoop services in a single-node cluster. The problem comes during data visualization. What role does the MapReduce jar file play when I need to use a data visualization tool like Tableau? I have a structured data source to which I need to add a layer of logic so that the data makes sense during visualization. Do I need to write MapReduce programs if I am going to visualize with other tools? Please shed some light on how I could go about this.
This probably depends on what distribution of Hadoop you are using and which tools are present. It also depends on the actual data preparation task.
If you don't want to actually write MapReduce or Spark code yourself, you could try SQL-like queries using Hive (which translates to MapReduce) or the even faster Impala. Using SQL you can create tabular data (Hive tables) which can easily be consumed. Tableau has connectors for both of them that automatically translate your Tableau configurations/requests to Hive/Impala. I would recommend connecting with Impala because of its speed.
If you need to do work that requires more programming, or where SQL just isn't enough, you could try Pig. Pig is a high-level scripting language that compiles to MapReduce code. You can try all of the above in their respective editors in Hue or from the CLI.
If you feel like all of the above still don't fit your use case, I would suggest writing MapReduce or Spark code. Spark does not need to be written in Java only and has the advantage of being generally faster.
Most tools can integrate with Hive tables, meaning you don't need to rewrite code. If a tool does not provide this, you can make CSV extracts from the Hive tables, or you can keep the tables stored as CSV/TSV and then import these files into your visualization tool.
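For the CSV-extract fallback, a quick sketch using Spark (the table name and output path are placeholders):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Dump a Hive table as CSV files that a visualization tool can import
spark.table("mydb.report_table")
  .coalesce(1)                        // a single output file is easier to import
  .write.option("header", "true")
  .csv("/tmp/report_extract")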
The existing answer already touches on this but is a bit broad, so I decided to focus on the key part:
Typical steps for data visualisation
Do the complex calculations using any hadoop tool that you like
Offer the output in a (hive) table
Pull the data into the memory of the visualisation tool (e.g. Tableau), for instance using JDBC
If the data is too big to be pulled into memory, you could pull it into a normal SQL database instead and work on that directly from your visualisation tool. (If you work directly on hive, you will go crazy as the simplest queries take 30+ seconds.)
In case it is not possible/desirable to connect your visualisation tool for some reason, the workaround would be to dump output files, for instance as CSV, and then load these into the visualisation tool.
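As an illustration of steps 1 and 2 above (the table and column names are hypothetical), you could pre-aggregate with Spark and expose the result as a Hive table that the visualisation tool then connects to over JDBC/ODBC:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Step 1: the heavy lifting happens in the cluster
val summary = spark.table("raw.sales")
  .groupBy("region", "product")
  .agg(sum("amount").as("total_amount"))

// Step 2: offer the (much smaller) output as a Hive table
summary.write.mode("overwrite").saveAsTable("reporting.sales_summary")

// Step 3: point Tableau (or any JDBC/ODBC client) at reporting.sales_summary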
Check out some end-to-end solutions for data visualization.
Metatron Discovery, for example, uses Druid as its OLAP engine, so you just link your Hadoop cluster with Druid and can then manage and visualize your Hadoop data accordingly. It is open source, so you can also look at the code inside.

how to set spark RDD StorageLevel in hive on spark?

In my Hive on Spark job, I get this error:
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
Thanks to this answer (Why do Spark jobs fail with org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0 in speculation mode?), I know my Hive-on-Spark job may have the same problem.
Since Hive translates SQL into a Hive-on-Spark job, I don't know how to set this in Hive so that its Hive-on-Spark job changes from StorageLevel.MEMORY_ONLY to StorageLevel.MEMORY_AND_DISK.
Thanks for your help.
You can use CACHE [LAZY] TABLE <table_name> and UNCACHE TABLE <table_name> to manage caching (see the Spark SQL documentation for more details).
If you are using DataFrames, then you can use persist(...) to specify the StorageLevel (see the Dataset/DataFrame API documentation).
In addition to setting the storage level, you can optimize other things as well. SparkSQL uses a different caching mechanism called columnar storage, which is a more efficient way of caching data (as SparkSQL is schema aware). There is a set of config properties that can be tuned to manage caching, described in detail in the Spark SQL tuning guide (that guide covers the latest version; refer to the documentation of the version you are using):
spark.sql.inMemoryColumnarStorage.compressed
spark.sql.inMemoryColumnarStorage.batchSize
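For the DataFrame route, a minimal sketch (the table name is a placeholder) of choosing MEMORY_AND_DISK and tuning the columnar cache might look like this:

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Columnar cache settings mentioned above
spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "true")
spark.conf.set("spark.sql.inMemoryColumnarStorage.batchSize", "10000")

val df = spark.table("mydb.big_table")
// Spill to disk instead of failing when the cached data doesn't fit in memory
df.persist(StorageLevel.MEMORY_AND_DISK)
df.count()   // materialize the cache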

What is the difference between Apache Pig and Apache Hive?

What is the exact difference between Pig and Hive? I found that both have the same functional meaning because they are used for doing the same work. The only thing that differs is the implementation. So when should which technology be used? Is there any specification for both which clearly shows the difference between them in terms of applicability and performance?
Apache Pig and Hive are two projects that layer on top of Hadoop, and provide a higher-level language for using Hadoop's MapReduce library. Apache Pig provides a scripting language for describing operations like reading, filtering, transforming, joining, and writing data -- exactly the operations that MapReduce was originally designed for. Rather than expressing these operations in thousands of lines of Java code that uses MapReduce directly, Pig lets users express them in a language not unlike a bash or perl script. Pig is excellent for prototyping and rapidly developing MapReduce-based jobs, as opposed to coding MapReduce jobs in Java itself.
If Pig is "scripting for Hadoop", then Hive is "SQL queries for Hadoop". Apache Hive offers an even more specific and higher-level language, for querying data by running Hadoop jobs, rather than directly scripting step-by-step the operation of several MapReduce jobs on Hadoop. The language is, by design, extremely SQL-like. Hive is still intended as a tool for long-running batch-oriented queries over massive data; it's not "real-time" in any sense. Hive is an excellent tool for analysts and business development types who are accustomed to SQL-like queries and Business Intelligence systems; it will let them easily leverage your shiny new Hadoop cluster to perform ad-hoc queries or generate report data across data stored in storage systems mentioned above.
From a purely engineering point of view, I find PIG both easier to write and maintain than SQL-like languages. It is procedural, so you apply a bunch of relations to your data one-by-one, and if something fails you can easily debug at intermediate steps, and even have a command called “illustrate” which uses an algorithm to sample some data matching your relation. I’d say for jobs with complex logic, this is definitely much more convenient than Hive, but for simple stuff the gain is probably minimal.
Regarding interfacing, I find that PIG offers a lot of flexibility compared to Hive. You don’t have the notion of a table in PIG, so you manipulate files directly, and you can define loaders to read pretty much any format very easily with loader UDFs, without having to go through the table-loading stage before you can do your transformations. They have a nice feature in the recent versions of PIG where you can use dynamic invokers, i.e. use pretty much any Java method directly in your PIG script, without having to write a UDF.
For performance/optimization, from what I’ve seen you can directly control in PIG the type of join and grouping algorithm you want to use (I believe 3 or 4 different algorithms for each). I’ve personally never used it, but as you’re writing demanding algorithms it could probably be useful to be able to decide what to do instead of relying on the optimizer as it’s the case in Hive. So I wouldn’t say it necessarily performs better than Hive, but in cases where the optimizer makes the wrong decision, you have the option to choose what algorithm to use and have more control on what happens.
One of the cool things I did lately was splits: you can split your execution flow and apply different relations to each split. So you can have a non-linear dataset, split it based on a field, and apply a different processing to each part, and maybe join the results together in the end, all this in the same script. I don’t think you can do this in Hive, you’d have to write different queries for each case, but I may be wrong.
One thing to note also is that you can increment counters in PIG. Currently you can only do this in PIG UDFs though. I don’t think you can use counters in Hive.
And there are some nice projects that allow you to interface PIG with Hive as well (like HCatalog), so you can basically read data from a hive table, or write data to a hive table (or both) by simply changing your loader in the script. Supports dynamic partitions as well.
Apache Pig is a platform for analyzing large data sets. Pig's language, Pig Latin, is a simple query algebra that lets you express data transformations such as merging data sets, filtering them, and applying functions to records or groups of records. Users can create their own functions to do special-purpose processing.
Pig Latin queries execute in a distributed fashion on a cluster. The current implementation compiles Pig Latin programs into Map-Reduce jobs and executes them on a Hadoop cluster.
https://cwiki.apache.org/confluence/display/PIG/Index
Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop compatible file systems. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
https://cwiki.apache.org/Hive/
What is the exact difference between Pig and Hive? I found that both have same functional meaning because they are used for doing same work.
Have a look at the Pig vs Hive comparison in a nutshell from the DeZyre article:
Hive scores over PIG in Partitions, Server, Web interface & JDBC/ODBC support.
Some differences:
Hive is best for structured Data & PIG is best for semi structured data
Hive used for reporting & PIG for programming
Hive used as a declarative SQL & PIG used as procedural language
Hive supports partitions & PIG does not
Hive can start an optional thrift based server & PIG can't
Hive defines tables beforehand (schema) and stores the schema information in a database, while PIG doesn't have a dedicated metadata database
Hive does not support Avro but PIG does
PIG also supports the additional COGROUP feature for performing outer joins, but Hive does not. However, both Hive and PIG can join, order and sort dynamically
So when to use and which technology?
The differences above should clarify your query.
HIVE: structured data, SQL-like queries, used for reporting purposes.
PIG: semi-structured data; program a workflow as a sequence of activities expressed as Map Reduce jobs.
Regarding job performance, both HIVE and PIG are slower than a traditional Map Reduce job, because Hive and PIG scripts ultimately have to be converted into a series of Map Reduce jobs.
Have a look at related SE question:
Pig vs Hive vs Native Map Reduce
The main difference is that PIG is a data-flow language and Hive is a data warehouse.
PIG can be used like a step-by-step procedural language, whereas HIVE is used as a declarative language.
PIG can be used for ingesting online streaming unstructured data, but HIVE can only access structured data; HIVE can also access data from RDBMS and NoSQL databases by using JDBC and ODBC drivers.
PIG can convert data into Avro format but HIVE can't.
PIG can't create partitions but HIVE can.
Since HIVE is typically used downstream of PIG, HIVE can access the data once it has been processed by PIG.
When to use PIG or HIVE depends on the data: if you are working with structured, relational data then you can use HIVE, otherwise you can use PIG.
With PIG we can interface with ETL tools, but it takes more time compared with Hive. Still, it is easier in PIG than in HIVE, because in HIVE we have to create a table before processing the data.
