WhatsApp crashes while calling - Android 4.4 KitKat

java.lang.Error: signal 11 (Address not mapped to object) at address 0x8 [at libmedia.so:0x5507e (_ZN7android11ClientProxy9interruptEv+0x3)]
at com.whatsapp.Voip.nativeHandleCallOfferAck(Native Method)
at com.whatsapp.VoiceService.a(VoiceService.java:404)
at com.whatsapp.messaging.r.a(r.java:731)
at com.whatsapp.alb.handleMessage(alb.java:4)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:136)
at android.os.HandlerThread.run(HandlerThread.java:61)
Caused by: java.lang.Error: signal 11 (Address not mapped to object) at address 0x8 [at libmedia.so:0x5507e (_ZN7android11ClientProxy9interruptEv+0x3)]
at system.lib.libmedia_so.0x5507e(android::ClientProxy::interrupt():0x3:0)
at system.lib.libmedia_so.0x5d71b(android::AudioRecord::stop():0x1e:0)
at system.lib.libwilhelm_so.0xa1a3(Native Method)
at system.lib.libwilhelm_so.0x18e13(Native Method)
at data.app_lib.com_whatsapp_1.libwhatsapp_so.0x8a589(Native Method)
at data.app_lib.com_whatsapp_1.libwhatsapp_so.0x8a5a3(Native Method)

Related

WSVR0009E: Error occurred during startup, org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4

I am trying to ripplestart my cluster, but I am getting the error below:
[10/25/18 20:30:31:311 CEST] 00000001 WsServerImpl E WSVR0009E: Error occurred during startup
com.ibm.ws.exception.RuntimeError: org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4 vmcid: 0x49421000 minor code: 56 completed: No
at com.ibm.ws.runtime.component.ORBImpl.start(ORBImpl.java:490)
at com.ibm.ws.runtime.component.ContainerHelper.startComponents(ContainerHelper.java:540)
at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:627)
at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:618)
at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:555)
at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:311)
at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:224)
at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:697)
at com.ibm.ws.runtime.WsServer.main(WsServer.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:234)
at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:101)
at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at org.eclipse.equinox.internal.app.EclipseAppContainer.callMethodWithException(EclipseAppContainer.java:587)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:198)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at org.eclipse.core.launcher.Main.invokeFramework(Main.java:340)
at org.eclipse.core.launcher.Main.basicRun(Main.java:282)
at org.eclipse.core.launcher.Main.run(Main.java:981)
at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:413)
at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:174)
Caused by: org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4 vmcid: 0x49421000 minor code: 56 completed: No
at com.ibm.ws.orbimpl.transport.WSTransport.createListener(WSTransport.java:867)
at com.ibm.ws.orbimpl.transport.WSTransport.initTransports(WSTransport.java:605)
at com.ibm.rmi.iiop.TransportManager.initTransports(TransportManager.java:157)
at com.ibm.rmi.corba.ORB.set_parameters(ORB.java:1362)
at com.ibm.CORBA.iiop.ORB.set_parameters(ORB.java:1697)
at org.omg.CORBA.ORB.init(ORB.java:473)
at com.ibm.ws.orb.GlobalORBFactory.init(GlobalORBFactory.java:95)
at com.ibm.ejs.oa.EJSORBImpl.initializeORB(EJSORBImpl.java:169)
at com.ibm.ejs.oa.EJSServerORBImpl.<init>(EJSServerORBImpl.java:88)
at com.ibm.ejs.oa.EJSORB.init(EJSORB.java:50)
at com.ibm.ws.runtime.component.ORBImpl.start(ORBImpl.java:482)
... 34 more
As per other posts on Stack Overflow, this is a problem with the bootstrap port. Since I am using a cluster, I assumed the default ports would be 9809/9810, but there is no entry for these ports in /etc/services.
How can I proceed here? I also tried to start the individual servers, but got the same error. netstat does not show any entry for these ports either.
/etc/services does have an entry for 2809, but since I am using a cluster I was not concerned about 2809.
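For what it's worth, traditional WAS does not read its listener ports from /etc/services; each server's ports (BOOTSTRAP_ADDRESS, ORB_LISTENER_ADDRESS, and so on) are defined in the profile's serverindex.xml. Below is a minimal, WAS-agnostic sketch that checks whether candidate ports can still be bound on this host; a port that cannot be bound is already in use, which is one common cause of CREATE_LISTENER_FAILED_4. The port list is only an assumption taken from the question.

# Hedged sketch: probe candidate ORB/bootstrap ports. A bind failure means the
# port is already taken (or blocked) for this user on this host.
import socket

CANDIDATE_PORTS = [2809, 9809, 9810]  # assumption: adjust to the ports in your serverindex.xml

for port in CANDIDATE_PORTS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind(("0.0.0.0", port))
        print("port %d can be bound (free)" % port)
    except socket.error as e:
        print("port %d cannot be bound: %s" % (port, e))
    finally:
        s.close()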

Cleaning staging area error: Hadoop

I am running a MapReduce job on Hadoop and getting the following error:
INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/home/dataflair/hdata/mapred/staging/intern124288574/.staging/job_local124288574_0001
Exception in thread "main" ExitCodeException exitCode=1: chmod: cannot access '/home/dataflair/hdata/mapred/staging/intern124288574/.staging/job_local124288574_0001': No such file or directory
at org.apache.hadoop.util.Shell.runCommand(Shell.java:998)
at org.apache.hadoop.util.Shell.run(Shell.java:884)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1216)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1310)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1292)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:767)
at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:506)
at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:487)
at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:503)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:720)
at org.apache.hadoop.mapreduce.JobResourceUploader.mkdirs(JobResourceUploader.java:648)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:167)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:128)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:101)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at wordCount.WordCount.main(WordCount.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
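The chmod failure above happens while the local job client is creating its staging directory, and one common cause is that the staging root (or one of its parents) is missing or not writable by the submitting user. Here is a minimal sketch, assuming local mode, Python 3, and the exact path copied from the log (the root is normally controlled by the mapreduce.jobtracker.staging.root.dir property in mapred-site.xml), that pre-creates the staging root before resubmitting the job:

# Hedged sketch: make sure the local MapReduce staging root exists and is
# writable before submitting the job again. The path is copied from the error
# message above; adjust it to your own configuration.
import os

staging_root = "/home/dataflair/hdata/mapred/staging"

os.makedirs(staging_root, exist_ok=True)  # create any missing parent directories (Python 3)
os.chmod(staging_root, 0o755)             # readable/traversable; job-specific subdirs get stricter modes
print("%s exists: %s, writable: %s" % (staging_root, os.path.isdir(staging_root), os.access(staging_root, os.W_OK)))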

Some errors on our Spark: our Spark-on-YARN mode does not work at all

The system reports the error log below. It says that we have some problem in our Hive setup, but I have reviewed my Hive conf file many times and cannot find any problem:
==================================
Traceback (most recent call last):
File "/data/spark/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/data/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o22.sessionState.
: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978)
... 13 more
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169)
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100)
at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157)
at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32)
... 18 more
Caused by: java.lang.NoSuchMethodException: org.apache.spark.sql.hive.HiveExternalCatalog.<init>(org.apache.spark.SparkConf, org.apache.hadoop.conf.Configuration)
at java.lang.Class.getConstructor0(Class.java:2892)
at java.lang.Class.getDeclaredConstructor(Class.java:2058)
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:164)
... 26 more
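The last NoSuchMethodException says the JVM-side HiveExternalCatalog has no (SparkConf, Configuration) constructor, which in practice usually points to mixed Spark versions on the machine (the pyspark/py4j Python code from one release, the jars under SPARK_HOME or on the classpath from another) rather than anything in hive-site.xml. A minimal sketch, assuming pyspark 2.x is importable, that prints both versions and then exercises the same sessionState path that fails above:

# Hedged sketch: compare the Python-side and JVM-side Spark versions, then
# force HiveExternalCatalog/HiveSessionState initialisation the same way the
# failing call does. If the two versions differ, align SPARK_HOME/PYTHONPATH.
import pyspark
from pyspark.sql import SparkSession

print("python-side pyspark version: %s" % pyspark.__version__)

spark = (SparkSession.builder
         .appName("hive-catalog-smoke-test")
         .enableHiveSupport()          # requires the Hive classes that fail to instantiate above
         .getOrCreate())

print("jvm-side spark version: %s" % spark.version)
spark.sql("SHOW DATABASES").show()     # simple Hive metastore round trip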

WebSphere Application Server is not starting in Eclipse Mars

I have just installed WebSphere Application Server. The status on localhost shows it has started, but when I try to start it from Eclipse I get the following:
[00000000 WsServerImpl E WSVR0009E: Error occurred during startup
com.ibm.ws.exception.RuntimeError: org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4 vmcid: 0x49421000 minor code: 56 completed: No
at com.ibm.ws.runtime.component.ORBImpl.start(ORBImpl.java:427)
at com.ibm.ws.runtime.component.ContainerHelper.startComponents(ContainerHelper.java:515)
at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:631)
at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:621)
at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:520)
at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:298)
at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:214)
at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:666)
at com.ibm.ws.runtime.WsServer.main(WsServer.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:599)
at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:213)
at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:93)
at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:74)
at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:599)
at org.eclipse.core.launcher.Main.invokeFramework(Main.java:340)
at org.eclipse.core.launcher.Main.basicRun(Main.java:282)
at org.eclipse.core.launcher.Main.run(Main.java:981)
at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:330)
at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:108)
Caused by: org.omg.CORBA.INTERNAL: CREATE_LISTENER_FAILED_4 vmcid: 0x49421000 minor code: 56 completed: No
at com.ibm.ws.orbimpl.transport.WSTransport.createListener(WSTransport.java:866)
at com.ibm.ws.orbimpl.transport.WSTransport.initTransports(WSTransport.java:604)
at com.ibm.rmi.iiop.TransportManager.initTransports(TransportManager.java:151)
at com.ibm.rmi.corba.ORB.set_parameters(ORB.java:1231)
at com.ibm.CORBA.iiop.ORB.set_parameters(ORB.java:1681)
at org.omg.CORBA.ORB.init(ORB.java:364)
at com.ibm.ws.orb.GlobalORBFactory.init(GlobalORBFactory.java:92)
at com.ibm.ejs.oa.EJSORBImpl.initializeORB(EJSORBImpl.java:179)
at com.ibm.ejs.oa.EJSServerORBImpl.<init>(EJSServerORBImpl.java:102)
at com.ibm.ejs.oa.EJSORB.init(EJSORB.java:55)
at com.ibm.ws.runtime.component.ORBImpl.start(ORBImpl.java:420)
... 29 more
What may be the reason behind this?

Exception when I use Apache Spark on a large dataset

So I get the following exception when I run some mapping and filtering on a relatively large dataset (~1 MB) using PySpark 1.4 on my local Windows machine. I have tried Apache Spark 1.6 and 2.0 and get exactly the same exception.
However, if my dataset is small (~100 lines), the code works without any problem.
I have installed Apache Spark on my laptop and have a decent amount of storage space and RAM. I tried increasing the memory allocation for Spark, but it still throws the error. Please help!
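For reference, here is a simplified sketch of the kind of script the traceback below points at (the RDD name and input path are copied from the log; the real test.py is not shown). Driver memory is best raised on the spark-submit command line, e.g. spark-submit --driver-memory 2g test.py, since setting it from inside the script generally has no effect once the JVM is already running.

# Hedged sketch approximating test.py: read the input file, take the first
# line as a header, and filter it out of the data.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("large-dataset-test")
sc = SparkContext(conf=conf)

tv_platform_textslines = sc.textFile("c:/sparklocal/data/in/Emp/est12.txt")

header = tv_platform_textslines.first()              # the call that fails in the traceback
rows = tv_platform_textslines.filter(lambda line: line != header)

print(header)
print("data rows: %d" % rows.count())

sc.stop()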
Input File: c:/sparklocal/data/in/Emp/est12.txt
16/09/29 15:32:58 ERROR PythonRDD: Python worker exited unexpectedly (crashed)
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
at java.io.BufferedOutputStream.flush(Unknown Source)
at java.io.DataOutputStream.flush(Unknown Source)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:251)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
16/09/29 15:32:58 ERROR PythonRDD: This may have been caused by a prior exception:
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
at java.io.BufferedOutputStream.flush(Unknown Source)
at java.io.DataOutputStream.flush(Unknown Source)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:251)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
16/09/29 15:32:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
at java.io.BufferedOutputStream.flush(Unknown Source)
at java.io.DataOutputStream.flush(Unknown Source)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:251)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
16/09/29 15:32:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "c:/sparklocal/py/test.py", line 27, in <module>
header = tv_platform_textslines.first()
File "c:\sparklocal\spark\spark-1.4.1-bin-hadoop2.6\python\lib\pyspark.zip\pyspark\rdd.py", line 1295, in first
File "c:\sparklocal\spark\spark-1.4.1-bin-hadoop2.6\python\lib\pyspark.zip\pyspark\rdd.py", line 1277, in take
File "c:\sparklocal\spark\spark-1.4.1-bin-hadoop2.6\python\lib\pyspark.zip\pyspark\context.py", line 897, in runJob
File "c:\sparklocal\spark\spark-1.4.1-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
File "c:\sparklocal\spark\spark-1.4.1-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
at java.io.BufferedOutputStream.flush(Unknown Source)
at java.io.DataOutputStream.flush(Unknown Source)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:251)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Resources