Spark Streaming: Driver refused to start receiver

My streaming job has multiple custom receivers (10+) with checkpointing enabled.
When I start the job, I see several receivers keep restarting with the following error:
[Executor task launch worker-1] receiver.ReceiverSupervisorImpl: Stopping receiver with message: Registered unsuccessfully because Driver refused to start receiver 3:
I searched and found a tip at http://scala4fun.tumblr.com/post/113172936582/how-to-spread-receivers-over-worker-hosts-in-spark. The idea is to delay receiver scheduling until enough executors are up.
It works when I start the job from a clean state. However, when I restart the job from a previous checkpoint, I hit the same issue: many receivers keep restarting.
It looks like the trick doesn't work when a checkpoint is used.
I am using the following function to create the StreamingContext, and the delay is done inside functionToCreateContext:
val ssc = StreamingContext.getOrCreate(checkpointDir, functionToCreateContext)
How can I get all the receivers to start when the job restarts from a checkpoint?
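For reference, a minimal sketch of the pattern described above, with the executor-count wait inside the context factory (the app name, batch interval, expected executor count and checkpoint path are illustrative, and the linked post may use a slightly different check). Because getOrCreate only calls the factory when no checkpoint exists, this wait runs on a clean start but is skipped on a restart from checkpoint, which matches the behaviour described:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///checkpoints/my-streaming-job"  // illustrative path

def functionToCreateContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("multi-receiver-job")
  val ssc = new StreamingContext(conf, Seconds(10))

  // Block until the expected number of executors has registered, so the receivers
  // get spread over the cluster instead of piling onto the first executors.
  val expectedExecutors = 10
  while (ssc.sparkContext.getExecutorMemoryStatus.size < expectedExecutors + 1) {  // +1: the driver reports itself too
    Thread.sleep(1000)
  }

  // ... create the custom receiver streams and transformations here ...
  ssc.checkpoint(checkpointDir)
  ssc
}

val ssc = StreamingContext.getOrCreate(checkpointDir, functionToCreateContext)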

Related

Spring batch master is waiting but worker startup failed in remote partitioning

I am stuck in a scenario in a Spring Batch remote-partitioning job where the master started successfully but the worker failed to start. The job is deployed on AWS Batch, so the master waits indefinitely for the workers to finish since the worker cannot come up.
Can anyone suggest a way to handle this scenario? I don't want my master node to wait until the timeout has occurred.
The manager can be configured with a timeout to fail if the workers do not reply in time, so it won't wait indefinitely.
And if that happens, the job instance will fail and you can either:
restart it (only failed partitions will be restarted)
or abandon it and start a new instance.
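For illustration only, a rough Scala sketch of where that timeout lives when the manager side is wired with spring-batch-integration's MessageChannelPartitionHandler and polls the job repository for worker results. The step name, grid size and values are placeholders, and the exact setters available depend on the Spring Batch version:

import org.springframework.batch.core.explore.JobExplorer
import org.springframework.batch.integration.partition.MessageChannelPartitionHandler
import org.springframework.integration.core.MessagingTemplate

// Hypothetical manager-side partition handler; bean wiring is omitted.
def partitionHandler(template: MessagingTemplate, jobExplorer: JobExplorer): MessageChannelPartitionHandler = {
  val handler = new MessageChannelPartitionHandler()
  handler.setStepName("workerStep")        // name of the step the workers execute
  handler.setGridSize(4)                   // number of partitions
  handler.setMessagingOperations(template) // channel used to send partition requests
  handler.setJobExplorer(jobExplorer)      // used when polling the repository for results
  handler.setPollInterval(10000L)          // check for finished partitions every 10 s
  handler.setTimeout(600000L)              // fail the manager step if workers have not replied after 10 min
  handler
}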

Interrupting a job in quartz with multiple instances

I have 5 instances of an application using Quartz in cluster mode, each running the Quartz scheduler (with PostgreSQL):
org.quartz.jobStore.isClustered:true
org.quartz.scheduler.instanceName: myInstanceName
org.quartz.scheduler.instanceId: AUTO
So I have a job which starts, does some operations, and then either reschedules itself with a new scheduled time if necessary or deletes itself. (One job contains only one trigger.)
The application has a UI that allows the user to cancel the job.
When the interrupt command is sent from the UI:
If the job is not currently running, I can pause or cancel it.
If the job is currently running at that time, how can I stop it on the correct instance and get its current state? Basically I want to catch the job at that moment and save its state, together with which user interrupted it and when.
Does scheduler.interrupt(jobKey) correctly interrupt my job, which implements InterruptableJob?
Does scheduler.interrupt() know exactly which instance is currently running the job, find that instance, and get the right state of the job?
Can you correct me, or tell me which way I should go?
The interrupt method implementations and getCurrentlyExecutingJobs() in Quartz are not cluster-aware,
which means the method has to be called on the instance that is executing the job; in other words, only jobs with the specified job key running on the current instance will be interrupted.
An interrupt request can be broadcast to all running Quartz instances to cancel every running instance of the job.
from: https://www.quartz-scheduler.org/api/2.1.7/org/quartz/Scheduler.html#interrupt(org.quartz.JobKey)
This method is not cluster aware. That is, it will only interrupt
instances of the identified InterruptableJob currently executing in
this Scheduler instance, not across the entire cluster.
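To make the cooperative part concrete, here is a minimal Scala sketch of an InterruptableJob (the class name, step count and progress key are illustrative). The point from the quote above is that scheduler.interrupt(jobKey) only reaches instances of this job executing on the same scheduler node, which is why the request has to be broadcast to every node in the cluster:

import org.quartz.{InterruptableJob, JobExecutionContext}

class CancellableJob extends InterruptableJob {

  @volatile private var interrupted = false

  override def execute(context: JobExecutionContext): Unit = {
    var step = 0
    val totalSteps = 100
    // Do the work in small units and check the flag between units.
    while (!interrupted && step < totalSteps) {
      Thread.sleep(500)                                          // stands in for one unit of real work
      step += 1
      context.getJobDetail.getJobDataMap.put("progress", step)   // record how far we got
    }
    // If `interrupted` is true here, the job stopped early; the layer that called
    // scheduler.interrupt can record who requested the cancellation and when.
  }

  // Invoked by scheduler.interrupt(jobKey) on this node only.
  override def interrupt(): Unit = {
    interrupted = true
  }
}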

Storm: How to resubmit a topology automatically when an exception occurs?

I have a topology running on a Storm cluster with 3 supervisor nodes (32 GB RAM each). For the first several days the topology runs well and everything is OK, but after several days of running the following error always occurs and the topology goes down:
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /brokers/topics/TOPICNAME/partitions at storm.kafka.ZkCoordinator.refresh
The topology uses a spout to consume messages from a remote Kafka service, which sits on a remote server that also hosts the ZooKeeper service.
I guess the reason for this exception is that the ZooKeeper server, or the network connection to it, is unstable.
I have no permission to do anything with the remote Kafka/ZooKeeper server, so I need a solution on my side to keep the topology running stably. Is there any way to keep the topology running, or to skip the exception when it comes up?
Or is there any way to resubmit the topology automatically?
Thank you very much!
The first thing you should do is search for what causes the connection-loss error.
Then go to Storm's log files and see which line of code is triggering it.
The right way to handle this is to find out what is causing the error and fix it.
However, if you want a quicker temporary solution, use Storm's REST API to kill the topology. Then you can use a normal Java program, or a script in any language, to re-launch the topology from the command line.
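A rough sketch of that workaround in Scala, assuming the Storm UI exposes the REST endpoint POST /api/v1/topology/{id}/kill/{wait-time} and the storm CLI is on the path. The UI host/port, topology id, jar path, main class and topology name are all placeholders:

import java.net.{HttpURLConnection, URL}
import scala.sys.process._

object ResubmitTopology {

  // Ask the Storm UI to kill the topology, giving spouts `waitSecs` seconds to drain.
  def killTopology(uiHostPort: String, topologyId: String, waitSecs: Int): Int = {
    val url = new URL(s"http://$uiHostPort/api/v1/topology/$topologyId/kill/$waitSecs")
    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    val status = conn.getResponseCode  // 200 means the kill request was accepted
    conn.disconnect()
    status
  }

  // Re-launch the topology exactly as you would by hand, via the storm CLI.
  def resubmit(jarPath: String, mainClass: String, topologyName: String): Int =
    Seq("storm", "jar", jarPath, mainClass, topologyName).!

  def main(args: Array[String]): Unit = {
    killTopology("storm-ui-host:8080", "TOPOLOGYNAME-1-1440000000", waitSecs = 30)
    Thread.sleep(60000)  // give Storm time to tear the old topology down
    resubmit("/path/to/topology.jar", "com.example.MyTopology", "TOPOLOGYNAME")
  }
}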

SparkException: Master removed our application

I know there are other very similar questions on Stack Overflow, but those either didn't get answered or didn't help me out. In contrast to those questions, I have put much more stack trace and log file information into this one. I hope that helps, although it makes the question rather long and ugly. I'm sorry.
Setup
I'm running a 9 node cluster on Amazon EC2 using m3.xlarge instances with DSE (DataStax Enterprise) version 4.6 installed. For each workload (Cassandra, Search and Analytics) 3 nodes are used. DSE 4.6 bundles Spark 1.1 and Cassandra 2.0.
Issue
The application (Spark/Shark shell) gets removed after ~3 minutes even if I do not run any query. Queries on small datasets run successfully as long as they finish within ~3 minutes.
I would like to analyze much larger datasets. Therefore I need the application (shell) not to get removed after ~3 minutes.
Error description
On the Spark or Shark shell, after idling ~3 minutes or while executing (long-running) queries, Spark will eventually abort and give the following stack trace:
15/08/25 14:58:09 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
FAILED: Execution Error, return code -101 from shark.execution.SparkTask
This is not very helpful (to me), which is why I'm going to show you more log file information.
Error Details / Log Files
Master
From the master.log, I think the interesting parts are
INFO 2015-08-25 09:19:59 org.apache.spark.deploy.master.DseSparkMaster: akka.tcp://sparkWorker@172.31.46.48:46715 got disassociated, removing it.
INFO 2015-08-25 09:19:59 org.apache.spark.deploy.master.DseSparkMaster: akka.tcp://sparkWorker@172.31.33.35:42136 got disassociated, removing it.
and
ERROR 2015-08-25 09:21:01 org.apache.spark.deploy.master.DseSparkMaster: Application Shark::ip-172-31-46-49 with ID app-20150825091745-0007 failed 10 times, removing it
INFO 2015-08-25 09:21:01 org.apache.spark.deploy.master.DseSparkMaster: Removing app app-20150825091745-0007
Why do the worker nodes get disassociated?
In case you need to see it, I attached the master's executor (ID 1) stdout as well. The executor's stderr is empty. However, I think it shows nothing useful for tackling the issue.
On the Spark Master UI I verified that all worker nodes are ALIVE. The second screenshot shows the application details.
There is one executor spawned on the master instance, while the executors on the two worker nodes get respawned until the whole application is removed. Is that okay or does it indicate some issue? I think it might be related to the "(it) failed 10 times" error message from above.
Worker logs
Furthermore I can show you logs of the two Spark worker nodes. I removed most of the class path arguments to shorten the logs. Let me know if you need to see it. As each worker node spawns multiple executors I attached links to some (not all) executor stdout and stderr dumps. Dumps of the remaining executors look basically the same.
Worker I
worker.log
Executor (ID 10) stdout
Executor (ID 10) stderr
Worker II
worker.log
Executor (ID 3) stdout
Executor (ID 3) stderr
The executor dumps seem to indicate some issue with permission and/or timeout. But from the dumps I can't figure out any details.
Attempts
As mentioned above, there are some similar questions, but none of them were answered or helped me solve the issue. Anyway, things I tried and verified are:
Opened port 2552. Nothing changes.
Increased spark.akka.askTimeout, which results in the Spark/Shark app living longer, but eventually it still gets removed.
Ran the Spark shell locally with spark.master=local[4]. On the one hand this allowed me to run queries longer than ~3 minutes successfully; on the other hand it obviously doesn't take advantage of the distributed environment.
Summary
To sum up, one could say that the timeouts and the fact that long-running queries execute successfully in local mode both point to some misconfiguration, though I cannot be sure and I don't know how to fix it.
Any help would be very much appreciated.
Edit: Two of the Analytics and two of the Solr nodes were added after the initial setup of the cluster. Just in case that matters.
Edit (2): I was able to work around the issue described above by replacing the Analytics nodes with three freshly installed Analytics nodes. I can now run queries on much larger datasets without the shell being removed. I intend not to put this as an answer to the question as it is still unclear what is wrong with the three original Analytics nodes. However, as it is a cluster for testing purposes, it was okay to simply replace the nodes (after replacing the nodes I performed a nodetool rebuild -- Cassandra on each of the new nodes to recover their data from the Cassandra datacenter).
As mentioned in the attempts, the root cause is a timeout between the master node and one or more workers.
Another thing to try: verify that all workers are reachable by hostname from the master, either via DNS or an entry in the /etc/hosts file.
In my case, the problem was that the cluster was running in an AWS subnet without DNS. The cluster grew over time by spinning up a node, then adding the node to the cluster. When the master was built, only a subset of the addresses in the cluster was known, and only that subset was added to the /etc/hosts file.
When dse spark was run from a "new" node, communication from the master using the worker's hostname failed and the master killed the job.
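For reference, the kind of entries meant here on the master; the addresses are the worker addresses from the master.log above, while the hostnames are purely illustrative:

# /etc/hosts on the master
172.31.46.48   spark-worker-1
172.31.33.35   spark-worker-2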

Spark workers and master not communicating (both start without error) in standalone cluster

I have the same question as Spark Clusters: worker info doesn't show on web UI, but I can't seem to figure out what the problem is.
In addition to what's written there, there are two extra interesting/useful points:
The workers show up on their respective worker web UIs (http://xxx.xx.xx.xx:8081/), but not on the master's (http://yyy.yy.yy.yy:8080/).
The workers end in a minute or so (presumably because they can't connect to the master), without an error message.
If I run sbin/start-slaves.sh, I see:
root@198.23.89.40: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-alanbaresj5.hi.com.out
and if I run it again (without waiting a minute or so), I see:
root@198.23.89.40: org.apache.spark.deploy.worker.Worker running as process 30542. Stop it first.
Any thoughts?
Thanks!
