PIG: script never ending - hadoop

I have a coordinator that contains one workflow with several "PIG forks". Each "PIG fork" is an execution of the same PIG script with different parameters.
This coordinator consumes all the resources available on the cluster, because the PIG scripts have a lot of data to process. Now here is the problem.
Sometimes the coordinator terminates successfully in two hours. Sometimes it never ends.
In this second case, PIG logs are:
2016-05-19 04:14:08,884 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1460732649780_25701_m_000000_0 is : 1.0
...
2016-05-19 07:40:38,492 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1460732649780_25701_m_000000_0 is : 1.0
This message then repeats indefinitely in every forked PIG script... and YARN seems frozen (it can't allocate new containers).
Does anyone have a solution to this issue?

Related

Spark job does not exit/stop when running on Mac OS X

I am trying to run a Spark job on my local Mac OS machine. All the work completes fine, but the job never exits and just hangs there.
When a Spark job finishes gracefully, it shuts down all of its spawned processes, releases all of its associated ports, and prints something like the following in its console:
17/09/26 12:04:42 INFO SparkContext: Invoking stop() from shutdown hook
17/09/26 12:04:42 INFO SparkUI: Stopped Spark web UI at http://172.29.16.34:4040
17/09/26 12:04:42 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/09/26 12:04:42 INFO MemoryStore: MemoryStore cleared
17/09/26 12:04:42 INFO BlockManager: BlockManager stopped
17/09/26 12:04:42 INFO BlockManagerMaster: BlockManagerMaster stopped
17/09/26 12:04:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/09/26 12:04:42 INFO SparkContext: Successfully stopped SparkContext
17/09/26 12:04:42 INFO ShutdownHookManager: Shutdown hook called
17/09/26 12:04:42 INFO ShutdownHookManager: Deleting directory /private/var/folders/mc/fl1dm0jx48d086s5jsk18dtc0000gn/T/spark-088260a7-6c27-4dbf-878a-ec6a2efa14a3
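For context, that shutdown sequence is produced when the SparkContext is stopped, whether by the shutdown hook in the first log line or by an explicit call at the end of the driver. A minimal sketch (app name, master, and variable names are illustrative, not from my job):
// Illustrative only: stopping the context explicitly triggers the same cleanup
// (SparkUI, MapOutputTracker, MemoryStore, BlockManager) logged above.
SparkConf demoConf = new SparkConf().setAppName("example").setMaster("local[*]");
JavaSparkContext demoSc = new JavaSparkContext(demoConf);
try {
    // ... job logic ...
} finally {
    demoSc.stop();  // stops the web UI (port 4040), block manager, etc.
}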
I tried to debug what's going on with this Spark job. Initially I thought something was hanging mid-job, but then I put a print statement at the very end of the job and that line got printed, so I'm confused about what could be causing it to hang after that.
Any help please?
My spark: 1.0.1
My OS X: 10.11.6
Here's my spark job code:
// Set up the Spark context
SparkConf sparkConf = new SparkConf();
SparkContext sparkContext = new SparkContext(sparkConf);
JavaSparkContext sc = new JavaSparkContext(sparkContext);
// HBase configuration pointing at the ZooKeeper quorum
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", zkQuorum);
conf.set("zookeeper.znode.parent", zkZNodeParent);
// Read the HBase table as an RDD of (row key, Result) pairs
JavaPairRDD<ImmutableBytesWritable, Result> hBasePairRdd = sc.newAPIHadoopRDD(conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);
// Start an embedded Solr server used for indexing
EmbeddedSolrServer server = solrManager.setupSolrServer(shardIndex);
Thanks!
Actually, I just figured it out.
Thanks #shridharama for poking me to post some example source code of my spark job here.
Then I basically did a binary search of my code to see which part was preventing it from shutting down gracefully.
Then I eventually narrowed down to this single line:
EmbeddedSolrServer server = solrManager.setupSolrServer(shardIndex);
We use this Spark job to build our Solr index, and this server process was never being stopped.
So, then I happily added this line:
server.close();
And my spark job exits nicely on my Mac. Thanks!
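For reference, a slightly more defensive variant of the same fix (reusing solrManager and shardIndex from the code above; depending on the SolrJ version, close() may declare IOException and need handling) is to close the server in a finally block, so the driver can still exit even if the indexing step throws:
EmbeddedSolrServer server = solrManager.setupSolrServer(shardIndex);
try {
    // ... index the HBase rows into the embedded Solr core ...
} finally {
    server.close();  // without this, the embedded Solr threads keep the driver JVM alive
}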
But I still don't understand how this exact same Spark job (same code, just different ZooKeeper configurations) manages to exit gracefully when running on my cluster?
Cheers!

Spark NullPointerException on SQLListener.onTaskEnd while finishing task

I have a Spark application written in Scala which performs a series of transformations and then writes the result to a Parquet file.
The transformation part finishes without problems and the output is written to HDFS correctly. The application runs on top of a YARN cluster of 30 nodes.
However, the Spark application itself will not complete and exit YARN; it remains in the ResourceManager.
After hanging for about an hour (consuming resources and vcores), it either finishes or throws an error and kills itself.
Here is the error log of the application. I would appreciate it if anyone could shed some light on this matter.
16/08/24 14:51:12 INFO impl.ContainerManagementProtocolProxy: Opening proxy : phhdpdn013x.company.com:8041
16/08/24 14:51:22 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (phhdpdn013x.company.com:54175) with ID 1
16/08/24 14:51:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager phhdpdn013x.company.com:24700 with 2.1 GB RAM, BlockManagerId(1, phhdpdn013x.company.com, 24700)
16/08/24 14:51:29 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/08/24 14:51:29 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/08/24 15:11:00 ERROR scheduler.LiveListenerBus: Listener SQLListener threw an exception
java.lang.NullPointerException
at org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:167)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
16/08/24 15:11:46 ERROR scheduler.LiveListenerBus: Listener SQLListener threw an exception
java.lang.NullPointerException
What is your version of Spark?
Your ERROR looks a lot like this issue
https://issues.apache.org/jira/browse/SPARK-12339
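If you are not sure which version the cluster is actually running, it can be printed from the driver (or checked with spark-submit --version). A trivial sketch, assuming sc is your SparkContext/JavaSparkContext:
// Prints the Spark version the driver is linked against, e.g. to check
// whether the fix referenced in SPARK-12339 is in your build.
System.out.println("Spark version: " + sc.version());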

Failed to get broadcast_1_piece0 of broadcast_1 in Spark Streaming job

I am running Spark jobs on YARN in cluster mode. The job gets its messages from a Kafka direct stream. I am using broadcast variables and checkpointing every 30 seconds. When I start the job for the first time it runs fine without any issues. If I kill the job and restart it, it throws the exception below in the executor upon receiving a message from Kafka:
java.io.IOException: org.apache.spark.SparkException: Failed to get broadcast_1_piece0 of broadcast_1
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1178)
at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:165)
at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:88)
at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
at net.juniper.spark.stream.LogDataStreamProcessor$2.call(LogDataStreamProcessor.java:177)
at net.juniper.spark.stream.LogDataStreamProcessor$2.call(LogDataStreamProcessor.java:1)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$fn$1$1.apply(JavaDStreamLike.scala:172)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$fn$1$1.apply(JavaDStreamLike.scala:172)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1298)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1298)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Does anyone have an idea how to resolve this error?
Spark version: 1.5.0
CDH 5.5.1
In my experience, issues where only the first run works always come down to problems with the checkpoint data. Moreover, the checkpoint is only used when there is something to check, which is the first message from Kafka.
I suggest you check whether the job is indeed dead, that is, whether the process is still running on the machine that executed it.
Try running a simple ps -fe and see if something is still running. If there are two processes trying to use the same checkpoint folder, it will always fail.
Hope this helps.
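Separately, for reference, checkpoint-based restarts generally rely on the getOrCreate pattern, where all the stream setup happens inside a factory function so the context can be rebuilt from the checkpoint directory (Function0 here is Spark's org.apache.spark.api.java.function.Function0). A minimal sketch; the path, app name, and batch interval are illustrative, not from the original job:
// Illustrative sketch: all DStream setup happens inside the factory, so it runs only
// when no checkpoint exists; on restart the context is rebuilt from the checkpoint.
final String checkpointDir = "hdfs:///tmp/stream-checkpoint";  // hypothetical path

Function0<JavaStreamingContext> factory = new Function0<JavaStreamingContext>() {
    @Override
    public JavaStreamingContext call() {
        SparkConf conf = new SparkConf().setAppName("checkpointed-stream");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));
        jssc.checkpoint(checkpointDir);
        // Kafka direct stream, transformations, and output operations go here.
        // Note: broadcast variables captured in these closures are not stored in the
        // checkpoint, which is a common source of "Failed to get broadcast" errors
        // after a restart; they usually need to be re-created lazily on first use.
        return jssc;
    }
};

JavaStreamingContext streamingContext = JavaStreamingContext.getOrCreate(checkpointDir, factory);
streamingContext.start();
streamingContext.awaitTermination();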

HBase distributed log-splitting keeps failing because unable to get a lease

We used up all the free space on our test HDFS cluster, so HBase crashed. After cleaning up some space we were able to restart HBase, but after the startup a distributed log-splitting job keeps failing.
The job looks like this:
Splitting log file hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 into a temporary staging area.
The RegionServers try to get a lease on the file for some time:
2013-10-24 11:50:47,662 DEBUG org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or departed
2013-10-24 11:50:47,671 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker host-4,60020,1382614844870 acquired task /hbase/splitlog/hdfs%3A%2F%2F192.168.249.1%3A9000%2Fhdfs%2Fhbase%2F.logs%2Fhost-3%2C60020%2C1382113928374-splitting%2Fhost-3%252C60020%252C1382113928374.1382523937002
2013-10-24 11:50:47,672 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Splitting hlog: hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002, length=41274332
2013-10-24 11:50:47,672 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovering lease on dfs file hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002
2013-10-24 11:50:47,673 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: recoverLease=false, attempt=0 on file=hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 after 1ms
2013-10-24 11:50:50,674 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: recoverLease=false, attempt=1 on file=hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 after 3002ms
2013-10-24 11:50:51,674 DEBUG org.apache.hadoop.hbase.util.FSHDFSUtils: isFileClosed not available
2013-10-24 11:51:51,680 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: recoverLease=false, attempt=2 on file=hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 after 64008ms
Then the master aborts the job:
2013-10-24 11:55:48,685 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2013-10-24 11:55:48,687 WARN org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 interrupted, resigning
java.io.InterruptedIOException
at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:136)
at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverFileLease(FSHDFSUtils.java:54)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:780)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:414)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:381)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:112)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:280)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:211)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:179)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:118)
... 9 more
It seems to me that the problem is that the RegionServers are unable to get a lease on this file because it is already open, so I checked with sudo -u hdfs hadoop fsck /hdfs/hbase/.logs/ -openforwrite, and it confirms this:
OPENFORWRITE: /hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 41274332 bytes, 1 block(s), OPENFORWRITE:
/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002: Under replicated blk_1073337163743094520_3534698. Target Replicas is 3 but found 2 replica(s).
I tried to shut down HBase, but the file stays OPENFORWRITE. How can I remove this flag?
PS: Hadoop 1.0.1, HBase 0.94.12

Creation of symlink from job logs to ${hadoop.tmp.dir} failed in hadoop multinode cluster setup

When I run the simple wordcount example on a 3-node Hadoop cluster, I get the following error. I checked all the read/write permissions of the necessary folders. This error does not stop the MapReduce job, but all of the workload goes to one machine in the cluster; the other two machines give the same error shown below whenever a task arrives at them.
12/09/13 09:38:37 INFO mapred.JobClient: Task Id : attempt_201209121718_0006_m_000008_0,Status : FAILED
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Creation of symlink from /hadoop/libexec/../logs/userlogs/job_201209121718_0006/attempt_201209121718_0006_m_000008_0 to /hadoop/hadoop-datastore
/mapred/local/userlogs/job_201209121718_0006/attempt_201209121718_0006_m_000008_0 failed.
at org.apache.hadoop.mapred.TaskLog.createTaskAttemptLogDir(TaskLog.java:110)
at org.apache.hadoop.mapred.DefaultTaskController.createLogDir(DefaultTaskController.java:71)
at org.apache.hadoop.mapred.TaskRunner.prepareLogFiles(TaskRunner.java:316)
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
12/09/13 09:38:37 WARN mapred.JobClient: Error reading task outputhttp://peter:50060/tasklog?plaintext=true&attemptid=attempt_201209121718_0006_m_000008_0&filter=stdout
12/09/13 09:38:37 WARN mapred.JobClient: Error reading task outputhttp://peter:50060/tasklog?plaintext=true&attemptid=attempt_201209121718_0006_m_000008_0&filter=stderr
What is that error about?
java.lang.Throwable: Child Error
org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
It seems the memory allocated for the task trackers is more than the nodes' actual physical memory. Check this link: Explanation
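If that is the case, the knobs to check on Hadoop 1.x are the per-task child heap and the number of task slots per TaskTracker, so that (map slots + reduce slots) * child heap stays within each node's RAM. A minimal, illustrative sketch of capping the child heap for a single job (the value is an example only, not a recommendation):
// Hypothetical example: lower the heap handed to each spawned task JVM (old mapred API).
JobConf jobConf = new JobConf();
jobConf.set("mapred.child.java.opts", "-Xmx512m");
// The slot counts themselves are TaskTracker daemon settings in mapred-site.xml:
//   mapred.tasktracker.map.tasks.maximum
//   mapred.tasktracker.reduce.tasks.maximum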
