Why is SparkContext being shut down during Logistic Regression? - hadoop

I think it has something to do with memory, because it was working fine for smaller data sets. The program uses Logistic Regression from Spark MLlib and shuts down prematurely partway through. I am running the command below to start my Spark program; the input data is on HDFS.
export SPARK_CONF_DIR=/home/gs/conf/spark/latest
export SPARK_HOME=/home/gs/spark/latest
$SPARK_HOME/bin/spark-submit --class algoRunner --master yarn --deploy-mode cluster --conf spark.dynamicAllocation.enabled=true \
--executor-memory 8g --queue default --conf spark.hadoop.hive.querylog.location='${java.io.tmpdir}/hivelogs' \
~/spark/Product-Classifier-Pipeline-assembly-1.0.jar
I receive the following error:
17/08/02 21:53:40 ERROR ApplicationMaster: RECEIVED SIGNAL TERM
17/08/02 21:53:40 INFO SparkContext: Invoking stop() from shutdown hook
17/08/02 21:53:40 INFO SparkUI: Stopped Spark web UI at http://gsrd219n01.red.ygrid.yahoo.com:45546
17/08/02 21:53:40 INFO DAGScheduler: Job 10 failed: treeAggregate at LogisticRegression.scala:1670, took 2.351935 s
17/08/02 21:53:40 INFO DAGScheduler: ShuffleMapStage 19 (treeAggregate at LogisticRegression.scala:1670) failed in 1.947 s due to Stage cancelled because SparkContext was shut down
17/08/02 21:53:40 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@21bec75d)
17/08/02 21:53:40 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(10,1501710820713,JobFailed(org.apache.spark.SparkException: Job 10 cancelled because SparkContext was shut down))
17/08/02 21:53:40 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job 10 cancelled because SparkContext was shut down
org.apache.spark.SparkException: Job 10 cancelled because SparkContext was shut down

The driver memory wasn't large enough. Increasing it prevented these errors.
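For reference, the submit command from the question could pass a larger driver heap explicitly; --driver-memory (the spark.driver.memory setting) is the relevant option, and the 10g below is only an illustrative value to tune for your data set:
$SPARK_HOME/bin/spark-submit --class algoRunner --master yarn --deploy-mode cluster --conf spark.dynamicAllocation.enabled=true \
--driver-memory 10g --executor-memory 8g --queue default --conf spark.hadoop.hive.querylog.location='${java.io.tmpdir}/hivelogs' \
~/spark/Product-Classifier-Pipeline-assembly-1.0.jar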

Related

AWS Glue JOB: Command failed with error code 1

We have a Python script for our Glue job that is triggered to run every hour, converting JSON in S3 to Parquet files, and we are getting the following issue. The logs below are taken from CloudWatch for the job ID:
CoarseGrainedExecutorBackend: Driver commanded a shutdown
18/06/25 08:54:03 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from ip-172-31-34-26.ec2.internal/172.31.34.26:36135 is closed
18/06/25 08:54:03 ERROR OneForOneBlockFetcher: Failed while starting block fetches
java.io.IOException: Connection from ip-172-31-34-26.ec2.internal/172.31.34.26:36135 closed
at org.apache.spark.network.client.TransportResponseHandler.channelInactive(TransportResponseHandler.java:146)
at org.apache.spark.network.server.TransportChannelHandler.channelInactive(TransportChannelHandler.java:108)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at org.apache.spark.network.util.TransportFrameDecoder.channelInactive(TransportFrameDecoder.java:182)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:220)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1289)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:227)
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:893)
at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:691)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
18/06/25 08:54:03 INFO CoarseGrainedExecutorBackend: Driver from 172.31.47.44:45951 disconnected during shutdown
18/06/25 08:54:03 INFO CoarseGrainedExecutorBackend: Driver from 172.31.47.44:45951 disconnected during shutdown
18/06/25 08:54:03 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
18/06/25 08:54:03 INFO MemoryStore: MemoryStore cleared
18/06/25 08:54:03 INFO BlockManager: BlockManager stopped
18/06/25 08:54:03 INFO ShutdownHookManager: Shutdown hook called
Open Glue > Jobs > Edit your Job > Script libraries and job parameters (optional) > Job parameters, near the bottom.
Set the following:
key: --conf
value: spark.yarn.executor.memoryOverhead=1024 spark.driver.memory=10g
There is no direct way to fix this issue; AWS Glue still has many enhancements that need to be made.
For now we split our folder into multiple sub-folders and split our Glue job into two to handle this scenario. Also, the memory overhead was not being taken into account when we supplied our own script option.
You need to reduce the number of files that you are storing in the S3 bucket by accumulating the data into a single big file; Glue is more efficient on bigger files.

Spark job does not exit/stop when running on Mac OS X

I am trying to run a Spark job on my local Mac OS machine. The computation itself completes fine, but the job never exits and just hangs there.
When a Spark job finishes gracefully, it shuts down all of its spawned processes, releases all of its associated port numbers, and prints something like the following in its console:
17/09/26 12:04:42 INFO SparkContext: Invoking stop() from shutdown hook
17/09/26 12:04:42 INFO SparkUI: Stopped Spark web UI at http://172.29.16.34:4040
17/09/26 12:04:42 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/09/26 12:04:42 INFO MemoryStore: MemoryStore cleared
17/09/26 12:04:42 INFO BlockManager: BlockManager stopped
17/09/26 12:04:42 INFO BlockManagerMaster: BlockManagerMaster stopped
17/09/26 12:04:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/09/26 12:04:42 INFO SparkContext: Successfully stopped SparkContext
17/09/26 12:04:42 INFO ShutdownHookManager: Shutdown hook called
17/09/26 12:04:42 INFO ShutdownHookManager: Deleting directory /private/var/folders/mc/fl1dm0jx48d086s5jsk18dtc0000gn/T/spark-088260a7-6c27-4dbf-878a-ec6a2efa14a3
I tried to debug what's going on with this Spark job. Initially I thought something inside it was hanging, but then I put a print statement right at the end of my Spark job and that line got printed out, so I'm confused about what could be causing it to hang.
Any help please?
My Spark version: 1.0.1
My OS X version: 10.11.6
Here's my Spark job code:
SparkConf sparkConf = new SparkConf();
SparkContext sparkContext = new SparkContext(sparkConf);
JavaSparkContext sc = new JavaSparkContext(sparkContext);
// HBase connection settings
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", zkQuorum);
conf.set("zookeeper.znode.parent", zkZNodeParent);
// Read the HBase table as an RDD of (row key, Result) pairs
JavaPairRDD<ImmutableBytesWritable, Result> hBasePairRdd = sc.newAPIHadoopRDD(conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);
// Embedded Solr server used to build the index (never closed here; see the answer below)
EmbeddedSolrServer server = solrManager.setupSolrServer(shardIndex);
Thanks!
Actually, I just figured it out.
Thanks @shridharama for poking me to post some example source code of my Spark job here.
I then basically did a binary search of my code to see which part was preventing it from shutting down gracefully.
Then I eventually narrowed down to this single line:
EmbeddedSolrServer server = solrManager.setupSolrServer(shardIndex);
We use this Spark job to build our Solr index, and this server process was never being stopped.
So, then I happily added this line:
server.close();
And my spark job exits nicely on my Mac. Thanks!
But I still don't understand why this exact same Spark job (same code, just different ZooKeeper configurations) exits gracefully when running on my cluster.
Cheers!
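For reference, a minimal sketch of the cleanup described above, assuming the solrManager helper from the question; closing the embedded Solr server and then stopping the context lets the JVM's shutdown hooks complete:
EmbeddedSolrServer server = solrManager.setupSolrServer(shardIndex);
try {
    // ... feed hBasePairRdd into the Solr index here ...
} finally {
    server.close(); // without this the embedded Solr server keeps the job from exiting
    sc.stop();      // stop the JavaSparkContext so Spark's shutdown hook can run
}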

Spark NullPointerException on SQLListener.onTaskEnd while finishing task

I have a Spark application written in Scala which performs a series of transformations and then writes the result to a Parquet file.
The transformation part finishes without problems, and the output is written to HDFS correctly. The application runs on top of a YARN cluster of 30 nodes.
However, the Spark application itself will not complete and exit YARN; it remains in the ResourceManager.
After hanging for about an hour (consuming resources and vcores), it either finishes or throws an error and kills itself.
Here is the error log of the application. I'd appreciate it if anyone can shed some light on this matter.
16/08/24 14:51:12 INFO impl.ContainerManagementProtocolProxy: Opening proxy : phhdpdn013x.company.com:8041
16/08/24 14:51:22 INFO cluster.YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (phhdpdn013x.company.com:54175) with ID 1
16/08/24 14:51:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager phhdpdn013x.company.com:24700 with 2.1 GB RAM, BlockManagerId(1, phhdpdn013x.company.com, 24700)
16/08/24 14:51:29 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/08/24 14:51:29 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/08/24 15:11:00 ERROR scheduler.LiveListenerBus: Listener SQLListener threw an exception
java.lang.NullPointerException
at org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:167)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
16/08/24 15:11:46 ERROR scheduler.LiveListenerBus: Listener SQLListener threw an exception
java.lang.NullPointerException
What is your version of Spark?
Your ERROR looks a lot like this issue: https://issues.apache.org/jira/browse/SPARK-12339

Spark Pi Example in Cluster mode with Yarn: Association lost [duplicate]

This question already has answers here:
How to know what is the reason for ClosedChannelExceptions with spark-shell in YARN client mode?
(4 answers)
Closed 3 years ago.
I have three virtual machines running as a distributed Spark cluster. I am using Spark 1.3.0 with an underlying Hadoop 2.6.0.
If I run the Spark Pi example
/usr/local/spark130/bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client /usr/local/spark130/examples/target/spark-examples_2.10-1.3.0.jar 10000
I get these warnings/errors and eventually an exception:
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/08 12:37:06 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@virtm4:47128] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/04/08 12:37:12 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@virtm4:45975] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/04/08 12:37:13 ERROR YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
When I check the logs of the container, I see that it was SIGTERMed:
15/04/08 12:37:08 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/04/08 12:37:08 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/04/08 12:37:08 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/04/08 12:37:12 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
15/04/08 12:37:12 INFO yarn.ApplicationMaster: Final app status: UNDEFINED, exitCode: 0, (reason: Shutdown hook called before final status was reported.)
15/04/08 12:37:12 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with UNDEFINED (diag message: Shutdown hook called before final status was reported.)
SOLUTION:
I solved the problem: I now use Java 7 instead of Java 8. This situation was reported as a bug, but it was rejected as such: https://issues.apache.org/jira/browse/SPARK-6388
Yet changing the Java version did work.
The association may be lost due to the Java 8 excessive memory allocation issue: https://issues.apache.org/jira/browse/YARN-4714
You can force YARN to ignore this by setting the following properties in yarn-site.xml:
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
I encountered a similar problem before, until I found this issue.
Try stopping your SparkContext instance explicitly with sc.stop().
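In the Java API used elsewhere on this page, that looks roughly like the following sketch (the job body is a placeholder):
SparkConf sparkConf = new SparkConf();
JavaSparkContext sc = new JavaSparkContext(sparkConf);
try {
    // ... run the job ...
} finally {
    sc.stop(); // stop the context explicitly so the YARN application can unregister and exit cleanly
}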

HBase distributed log-splitting keeps failing because unable to get a lease

We used up all the free space on our test HDFS cluster so HBase crashed. After cleaning up some space, we were able to restart HBase, but after the startup a distributed log split job keeps failing.
The job looks like this:
Splitting log file hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 into a temporary staging area.
The RegionServers try to get a lease on the file for some time:
2013-10-24 11:50:47,662 DEBUG org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or departed
2013-10-24 11:50:47,671 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker host-4,60020,1382614844870 acquired task /hbase/splitlog/hdfs%3A%2F%2F192.168.249.1%3A9000%2Fhdfs%2Fhbase%2F.logs%2Fhost-3%2C60020%2C1382113928374-splitting%2Fhost-3%252C60020%252C1382113928374.1382523937002
2013-10-24 11:50:47,672 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Splitting hlog: hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002, length=41274332
2013-10-24 11:50:47,672 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovering lease on dfs file hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002
2013-10-24 11:50:47,673 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: recoverLease=false, attempt=0 on file=hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 after 1ms
2013-10-24 11:50:50,674 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: recoverLease=false, attempt=1 on file=hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 after 3002ms
2013-10-24 11:50:51,674 DEBUG org.apache.hadoop.hbase.util.FSHDFSUtils: isFileClosed not available
2013-10-24 11:51:51,680 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: recoverLease=false, attempt=2 on file=hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 after 64008ms
Then the master aborts the job:
2013-10-24 11:55:48,685 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2013-10-24 11:55:48,687 WARN org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of hdfs://192.168.249.1:9000/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 interrupted, resigning
java.io.InterruptedIOException
at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:136)
at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverFileLease(FSHDFSUtils.java:54)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:780)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:414)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:381)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:112)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:280)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:211)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:179)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:118)
... 9 more
It seems to me that the problem is that the RegionServers are unable to get a lease on this file because it is already open, so I checked with sudo -u hdfs hadoop fsck /hdfs/hbase/.logs/ -openforwrite, and it confirms this:
OPENFORWRITE: /hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002 41274332 bytes, 1 block(s), OPENFORWRITE:
/hdfs/hbase/.logs/host-3,60020,1382113928374-splitting/host-3%2C60020%2C1382113928374.1382523937002: Under replicated blk_1073337163743094520_3534698. Target Replicas is 3 but found 2 replica(s).
I tried to shut down HBase, but the file stays OPENFORWRITE. How can I remove this flag?
PS: Hadoop 1.0.1, HBase 0.94.12
