Hive query does not start the map phase - hadoop

I have a simple query that I'm trying to run in Hive 0.14, against a table that is partitioned by month:
select sum(tb.field1), sum(tb.field2), tb.month from dbwork.mytable tb
group by tb.month;
It gets stuck on the map phase:
INFO : Map 1: -/- Reducer 2: 0/486
INFO : Map 1: -/- Reducer 2: 0/486
INFO : Map 1: -/- Reducer 2: 0/486
INFO : Map 1: -/- Reducer 2: 0/486
The logs have not been generated yet, so I'm not sure how to debug. What's going on? Why does the task never start?

This behavior usually appears when the cluster does not have enough resources to allocate to the job. How much data are you trying to process? Check the Hadoop service statuses in Ambari if you are using Hortonworks, or in the equivalent admin dashboard if you are using any other distribution.
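Purely as an illustration (not from the original answer), here is a small sketch that uses the YARN client API, org.apache.hadoop.yarn.client.api.YarnClient, to print each node's used resources against its capacity, so you can see whether YARN actually has containers left to hand the stuck Hive job:
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Prints per-node usage vs. capacity so you can see whether YARN
// has any containers left to give the stuck Hive job.
public class ClusterResourceCheck {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();
        try {
            for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
                System.out.printf("%s  used: %s  capacity: %s%n",
                        node.getNodeId(), node.getUsed(), node.getCapability());
            }
        } finally {
            yarnClient.stop();
        }
    }
}
If every node shows usage close to its capacity, the application master for the query simply cannot get containers, which would match the symptom of the map phase never starting.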

ResourceManager webpage UI not showing Hive mapreduce job

Hive version: 3.1.3, Hadoop version: 3.3.4
I'm new to the Hive and Hadoop environment. I followed these links for the Hadoop and Hive installations, and was trying out a Hive insert using the example shown here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2))
CLUSTERED BY (age) INTO 2 BUCKETS STORED AS ORC;
INSERT INTO TABLE students
VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
show databases; and the CREATE query worked fine, but when INSERT INTO was called I got an execution error, with the following log:
hive> INSERT INTO students VALUES ('fred gg', 36, 1.48), ('barney tt', 46, 2.02);
Query ID = vincent.chandra_20220908020315_f032aacb-ac55-47cf-bcb2-c6ba0f89dd88
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2022-09-08 02:03:17,325 Stage-1 map = 0%, reduce = 0%
Ended Job = job_local1202096292_0007 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
From searching around, it seems the console doesn't show insightful information about the error, and I came across a post saying to read the details on the ResourceManager web UI. I accessed the page (localhost:8088), but it isn't displaying any job.
In case this helps in any way, here is the jps output:
5569 DataNode
6674 Jps
5842 ResourceManager
5461 NameNode
6567 RunJar
5944 NodeManager
I'm at a loss as to what to do; debugging doesn't seem to be an option with the ResourceManager UI not showing any kind of Hive MapReduce job. Any advice would be helpful and appreciated.
EDIT: I just realized the SecondaryNameNode also seems to be missing. I'll update if I find anything useful.

hive outputs map-reduce counter

Since HQL (Hive SQL) uses MapReduce underneath, I wonder how I can output the job's counter info (to the console or to a log file), like the counter info a MapReduce job outputs by default when it finishes.
I use counters in my custom Hive UDTF to get debug info logged, for example:
context.getReporter().getCounter("MyCounter","Forward_Count").increment(1);
An example screenshot of the "counters info" I mentioned above:
[map-reduce counter example pic]
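For reference, a minimal sketch of how such a counter increment sits inside a custom UDTF (a hypothetical class, not the asker's actual code; it assumes the GenericUDTF API where the MapredContext handed to configure() provides the Reporter, which is only non-null when the UDTF runs inside a real MapReduce task):
import java.util.Arrays;

import org.apache.hadoop.hive.ql.exec.MapredContext;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical UDTF that forwards its first argument as a string and
// counts every forwarded row in a custom counter group.
public class ForwardCountingUDTF extends GenericUDTF {

    private Reporter reporter;

    @Override
    public void configure(MapredContext mapredContext) {
        // Only available when the UDTF actually runs inside an MR task.
        reporter = mapredContext.getReporter();
    }

    @Override
    public StructObjectInspector initialize(ObjectInspector[] argOIs) throws UDFArgumentException {
        return ObjectInspectorFactory.getStandardStructObjectInspector(
                Arrays.asList("col1"),
                Arrays.asList((ObjectInspector) PrimitiveObjectInspectorFactory.javaStringObjectInspector));
    }

    @Override
    public void process(Object[] args) throws HiveException {
        if (reporter != null) {
            reporter.getCounter("MyCounter", "Forward_Count").increment(1);
        }
        forward(new Object[] { String.valueOf(args[0]) });
    }

    @Override
    public void close() throws HiveException {
        // nothing to clean up
    }
}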

How to use map reduce output as an input for another map reduce job?

In the first MapReduce job I am processing an HBase table and outputting a smaller list of the row keys. I need to use this list of strings in order to run another MapReduce job, which pulls from a different HBase table and outputs to another HBase table. What is the proper way to store and access the output of the first MapReduce job?
Hadoop doesn't support streaming the output of one MR job into another. So the output of the first MR job has to be stored in HDFS (or some other persistent storage) and then read in the second MR job. Create a DAG of jobs using Oozie or Azkaban; for a simple workflow, use Hadoop's JobControl API.
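A minimal JobControl sketch of that approach (the job names, the intermediate HDFS path, and the omitted mapper/reducer setup are made up for illustration; ControlledJob and JobControl are in org.apache.hadoop.mapreduce.lib.jobcontrol):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RowKeyPipeline {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path rowKeyPath = new Path("/tmp/rowkeys");   // intermediate HDFS output of job 1

        // Job 1: scans the first HBase table and writes the row-key list to HDFS
        // (mapper/reducer and HBase table input setup omitted here).
        Job extractJob = Job.getInstance(conf, "extract-rowkeys");
        FileOutputFormat.setOutputPath(extractJob, rowKeyPath);

        // Job 2: reads the row-key list and processes the second HBase table.
        Job processJob = Job.getInstance(conf, "process-rowkeys");
        FileInputFormat.addInputPath(processJob, rowKeyPath);

        ControlledJob controlledExtract = new ControlledJob(extractJob.getConfiguration());
        controlledExtract.setJob(extractJob);
        ControlledJob controlledProcess = new ControlledJob(processJob.getConfiguration());
        controlledProcess.setJob(processJob);
        controlledProcess.addDependingJob(controlledExtract);  // job 2 waits for job 1

        JobControl control = new JobControl("rowkey-pipeline");
        control.addJob(controlledExtract);
        control.addJob(controlledProcess);

        new Thread(control).start();     // JobControl implements Runnable
        while (!control.allFinished()) {
            Thread.sleep(5000);
        }
        control.stop();
    }
}
Oozie or Azkaban add retries, scheduling and monitoring on top of this, but for a simple two-job pipeline the in-process JobControl thread above is usually enough.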
Apache Tez allows streaming of data across MR tasks, but it is still in the Apache Incubator, so use it with a bit of caution.

Cassandra and MapReduce - minimal setup requirements

I need to execute MapReduce on my Cassandra cluster, including data locality, i.e. each job queries only rows that belong to the local Cassandra node where the job runs.
Tutorials exist on how to set up Hadoop for MR on an older Cassandra version (0.7), but I cannot find one for the current release.
What has changed since 0.7 in this regard?
What software modules are required for minimal setup (Hadoop+HDFS+...)?
Do I need Cassandra Enterprise ?
Cassandra contains a few classes which are sufficient to integrate with Hadoop:
ColumnFamilyInputFormat - This is the input for a Map function. It can read all rows from a single CF when using Cassandra's random partitioner, or it can read a row range when used with Cassandra's ordered partitioner. A Cassandra cluster has a ring form, where each part of the ring is responsible for a concrete key range. The main task of an input format is to divide the Map input into chunks that can be processed in parallel - these are called InputSplits. In the Cassandra case this is simple: each ring range has one master node, so the input format creates one InputSplit per ring element, which results in one Map task. Now we would like to execute each Map task on the same host where its data is stored. Each InputSplit remembers the IP address of its ring part - this is the IP address of the Cassandra node responsible for that particular key range. The JobTracker creates Map tasks from the InputSplits and assigns them to TaskTrackers for execution. The JobTracker will try to find a TaskTracker with the same IP address as the InputSplit - basically, we have to start a TaskTracker on each Cassandra host, and this will guarantee data locality.
ColumnFamilyOutputFormat - this configures the context for the Reduce function so that the results can be stored in Cassandra.
The results from all Map functions have to be combined before they can be passed to the Reduce function - this is called the shuffle. It uses the local file system; from the Cassandra perspective nothing has to be done here, we just need to configure the path to a local temp directory. There is also no need to replace this mechanism with something else (like persisting in Cassandra) - this data does not have to be replicated, since Map tasks are idempotent.
Basically, using the provided Hadoop integration gives us the possibility to execute the Map job on the hosts where the data resides, and the Reduce function can store results back into Cassandra - which is all that I need.
There are two possibilities to execute Map-Reduce:
org.apache.hadoop.mapreduce.Job - this class simulates Hadoop in a single process. It executes the Map-Reduce task and does not require any additional services/dependencies; it only needs access to a temp directory to store intermediate results from the Map job for the shuffle. Basically, we have to call a few setters on the Job class for things like the class names for the Map task, the Reduce task, the input format and the Cassandra connection; once setup is done, job.waitForCompletion(true) has to be called - it starts the Map-Reduce task and waits for the results (see the sketch after this list). This solution can be used to quickly get into the Hadoop world and for testing. It will not scale (single process) and it will fetch data over the network, but still - it is fine for a start.
Real Hadoop cluster - I have not set it up yet, but as I understand it, the Map-Reduce jobs from the previous example will work just fine. Additionally, we need HDFS, which is used to distribute the jars containing the Map-Reduce classes across the Hadoop cluster.
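A rough sketch of that single-process Job setup, loosely modelled on Cassandra's bundled word_count example (the keyspace/column family names and addresses are invented; ConfigHelper, ColumnFamilyInputFormat and ColumnFamilyOutputFormat live in org.apache.cassandra.hadoop and should be checked against your Cassandra version):
import java.nio.ByteBuffer;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CassandraMapReduceSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "cassandra-mr-sketch");

        // Map side: read rows from the "mykeyspace"/"mytable" column family.
        job.setInputFormatClass(ColumnFamilyInputFormat.class);
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "127.0.0.1");
        ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "mykeyspace", "mytable");

        // Ask for every column of each row.
        SlicePredicate predicate = new SlicePredicate().setSlice_range(
                new SliceRange(ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, Integer.MAX_VALUE));
        ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);

        // Reduce side: write results back into another column family.
        job.setOutputFormatClass(ColumnFamilyOutputFormat.class);
        ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "127.0.0.1");
        ConfigHelper.setOutputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "mykeyspace", "results");

        // job.setMapperClass(...), job.setReducerClass(...), key/value classes, etc.
        job.waitForCompletion(true);
    }
}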
Yes, I was looking for the same thing; it seems DataStax Enterprise has a simplified Hadoop integration.
Read this: http://wiki.apache.org/cassandra/HadoopSupport

What is Hive: Return Code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

I am getting:
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
While trying to make a copy of a partitioned table using the commands in the hive console:
CREATE TABLE copy_table_name LIKE table_name;
INSERT OVERWRITE TABLE copy_table_name PARTITION(day) SELECT * FROM table_name;
I initially got some semantic analysis errors and had to set:
set hive.exec.dynamic.partition=true
set hive.exec.dynamic.partition.mode=nonstrict
I'm not sure what the above properties do, though.
Full output from the hive console:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201206191101_4557, Tracking URL = http://jobtracker:50030/jobdetails.jsp?jobid=job_201206191101_4557
Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=master:8021 -kill job_201206191101_4557
2012-06-25 09:53:05,826 Stage-1 map = 0%, reduce = 0%
2012-06-25 09:53:53,044 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201206191101_4557 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
That's not the real error, here's how to find it:
Go to the hadoop jobtracker web-dashboard, find the hive mapreduce jobs that failed and look at the logs of the failed tasks. That will show you the real error.
The console output errors are largely useless, because the console doesn't have a view of the individual jobs/tasks from which to pull the real errors (there could be errors in multiple tasks).
I know I am 3 years late to this thread, but I'm still providing my 2 cents for similar cases in the future.
I recently faced the same issue/error in my cluster.
The job would always get to 80%+ of the reduce phase and then fail with the same error, with nothing to go on in the execution logs either.
After multiple iterations and some research, I found that among the plethora of files being loaded, some were non-compliant with the structure defined for the base table (the table being used to insert data into the partitioned table).
The point to note here is that whenever I executed a select query for a particular value in the partitioning column, or created a static partition, it worked fine, since in those cases the error records were skipped.
TL;DR: Check the incoming data/files for inconsistencies in their structure, as Hive follows a schema-on-read philosophy.
Adding some information here, as it took me a while to find the Hadoop jobtracker web dashboard in HDInsight (Azure's Hadoop), and a colleague finally showed me where it was. There is a shortcut on the head node called "Hadoop Yarn Status", which is just a link to a local HTTP page (http://headnodehost:9014/cluster in my case). When opened, the dashboard looked like this:
In that dashboard you can find your failed application, and then after clicking into it you can look at the logs of the individual map and reduce jobs.
In my case it seemed to still be running out of memory in the reducers, even though I had cranked the memory in the configuration already. For some reason it was not surfacing the "java outofmemory" errors I got earlier though.
The top answer is right that the error code doesn't give you much info. One of the common causes our team saw for this error code was a poorly optimized query. A known case was an inner join where the left-side table was orders of magnitude bigger than the table on the right side; swapping these tables would usually do the trick in such cases.
I removed the _SUCCESS file from the EMR output path in S3 and it worked fine.
I was also facing the same error when inserting data into a Hive external table that was pointing to an Elasticsearch cluster.
I replaced the older JAR elasticsearch-hadoop-2.0.0.RC1.jar with elasticsearch-hadoop-5.6.0.jar, and everything worked fine.
My suggestion is to use the JAR that matches your Elasticsearch version; don't use older JARs with a newer version of Elasticsearch.
Thanks to this post: Hive- Elasticsearch Write Operation #409
I received this error when joining two tables, where one table was large and the other was small enough to fit in memory. In such a case, use
set hive.auto.convert.join = false
This might help to get rid of the above error. For more detail on this issue, please refer to the threads below:
Hive Map-Join configuration mystery
Hive.auto.convert.join = true what is the significance of this?
I also faced the same issue - when I checked the dashboard I found the following error. The data was coming through Flume and had been interrupted in between, which may have caused inconsistencies in a few files.
Caused by: org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected end-of-input within/between OBJECT entries
Running on fewer files it worked; a lack of format consistency was the cause in my case.
I faced the same issue because I didn't have permission to query the database I was trying to query.
If you don't have permission to query the table/database, then besides the Return Code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask error, you will see that Cloudera Manager does not even register your query.
In my case, the solution was adding more RAM to the virtual machines. Sometimes return code 2 means that the map and reduce nodes do not have enough memory.
Another option could be changing the properties "mapreduce.map.memory.mb" and "mapreduce.reduce.memory.mb" in the mapred-site.xml file.
I got the same error while creating the Hive table in Beeline; I then tried to create it through spark-shell, which threw the actual error. In my case the error was the disk space quota for the HDFS directory:
org.apache.hadoop.ipc.RemoteException: The DiskSpace quota of /user/hive/warehouse/XXX_XX.db is exceeded: quota = 6597069766656 B = 6 TB but diskspace consumed = 6597493381629 B = 6.00 TB
