Azure Databricks stream fails with StorageException: Could not verify copy source

We have a Databricks job that has suddenly started to consistently fail. Sometimes it runs for an hour, other times it fails after a few minutes.
The inner exception is
ERROR MicroBatchExecution: Query [id = xyz, runId = abc] terminated with error
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: hadoop_azure_shaded.com.microsoft.azure.storage.StorageException: Could not verify copy source.
The job targets a notebook that consumes from an Event Hub with PySpark Structured Streaming, calculates some values based on the data, and streams the results back to another Event Hub.
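For reference, the pipeline in the notebook is shaped roughly like the sketch below. This is only a sketch: it assumes the azure-eventhubs-spark connector, and the connection strings, hub names, storage account, and transformation are placeholders rather than our actual code.

from pyspark.sql import functions as F

# `spark` is the session provided by the Databricks notebook.
# Placeholder connection strings; EntityPath selects the hub to read from / write to.
input_conn = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;EntityPath=input-hub"
output_conn = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;EntityPath=output-hub"

# Recent connector versions expect the connection string to be encrypted; older ones take it raw.
encrypt = spark._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt
in_conf = {"eventhubs.connectionString": encrypt(input_conn)}
out_conf = {"eventhubs.connectionString": encrypt(output_conn)}

# Read from the input hub, derive some values, and stream them to the output hub.
events = spark.readStream.format("eventhubs").options(**in_conf).load()
values = events.select(F.col("body").cast("string").alias("body"))  # the real job computes more here

query = (
    values.writeStream
          .format("eventhubs")
          .options(**out_conf)
          .option("checkpointLocation",
                  "wasbs://checkpoints@standardacct.blob.core.windows.net/job")  # standard storage account
          .start()
)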
The cluster is a pool with 2 workers and 1 driver running on standard Databricks 9.1 ML.
We've tried restarting the job many times, including with clean input data and a fresh checkpoint location.
We're struggling to determine what is causing this error. We cannot see any 403 Forbidden errors in the logs, which is sometimes mentioned on forums as a cause.
Any assistance is greatly appreciated.

The issue was resolved by moving the checkpoint location (used internally by Spark) from standard storage to premium storage. I don't know why it suddenly started failing after months of running with hardly a hiccup.
Premium storage might be a better place for checkpointing anyway since I/O is cheaper.
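For anyone hitting the same thing, the change amounted to nothing more than repointing the checkpoint path. A sketch of the changed option, with a placeholder premium account name (whether the path uses wasbs:// or abfss:// depends on how the premium account is set up):

# Same pipeline as before; only the checkpoint moved from the standard account to a premium one.
query = (
    values.writeStream
          .format("eventhubs")
          .options(**out_conf)
          .option("checkpointLocation",
                  "abfss://checkpoints@premiumacct.dfs.core.windows.net/job")  # placeholder premium account
          .start()
)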

Related

Spark Structured Streaming - WAL performance degradation

We have a Spark Structured Streaming query that reads data from Event Hub, does some processing, and writes data back to Event Hub. We have checkpointing enabled and store the checkpoint data in Azure Data Lake Gen2.
When we run the query, we see something weird: over time, our query's performance (latency) slowly degrades. When we run the query for the first time, the batch duration is ~3 seconds. After a day of running, the batch duration is 20 seconds, and after 2 days we are at 40+ seconds. Interestingly, when we delete the checkpoint folder (or otherwise reset the checkpoint), the latency goes back to normal (2 seconds).
Looking at the query performance after 2 days of running on the same checkpoint directory, it is quite clear that the write-ahead log ("walCommit") step is what grows, and after some time it accounts for the majority of the processing time.
My questions are: what drives this behaviour? Is it natural for walCommit to take longer and longer? Could it be specific to Azure Data Lake Gen2? Do we even need write-ahead logs for Event Hub? What are general ways to improve this (other than disabling the WAL)?
I wrote to you via Slack, but I will also share the answer here.
I've experienced the same behavior; the cause was a leak of hidden crc files in the checkpoint/offsets directory. It is a Hadoop rename bug that is worked around in Spark 2.4.4.
Link to Spark JIRA
If the following find command, executed in the checkpoint directory, returns a number greater than ~1000, you're affected by this bug:
find . -name "*.crc" | wc -l
The workaround for Spark < 2.4.4 is to disable the creation of crc files (as suggested in the JIRA comments):
--conf spark.hadoop.spark.sql.streaming.checkpointFileManagerClass=org.apache.spark.sql.execution.streaming.FileSystemBasedCheckpointFileManager --conf spark.hadoop.fs.file.impl=org.apache.hadoop.fs.RawLocalFileSystem
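If you build the session in code instead of passing flags on the command line, the equivalent would look something like the sketch below (it simply mirrors the two flags quoted above; I haven't verified it beyond that):

from pyspark.sql import SparkSession

# Mirrors the two --conf flags above when constructing the session programmatically.
spark = (
    SparkSession.builder
    .config("spark.hadoop.spark.sql.streaming.checkpointFileManagerClass",
            "org.apache.spark.sql.execution.streaming.FileSystemBasedCheckpointFileManager")
    .config("spark.hadoop.fs.file.impl",
            "org.apache.hadoop.fs.RawLocalFileSystem")
    .getOrCreate()
)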
Thanks #tomas-bartalos for the answer!
We found another issue that was the real cause of our problem: the properties of Azure Gen2 storage (with hierarchical namespace enabled). It seems Azure Gen2 is slow when listing a lot of files. We tried to open the streaming checkpoint directory using Azure Storage Explorer and it took about 20 seconds (similar to the walCommit time). We switched to Azure Blob Storage and the problem was gone. We hadn't done anything with the crc files (tomas's answer), so we concluded that the storage type was the main issue.
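If you want to reproduce the listing-time observation from inside Spark rather than through Storage Explorer, here is a rough diagnostic sketch. The path is a placeholder, the cluster must already have access to the storage account, and spark._jvm / spark._jsc are internal handles, so treat it as a quick check only:

import time

# Placeholder path to the checkpoint's offsets directory on the ADLS Gen2 account.
offsets_path = "abfss://checkpoints@gen2acct.dfs.core.windows.net/stream/offsets"

jvm = spark._jvm
path = jvm.org.apache.hadoop.fs.Path(offsets_path)
fs = path.getFileSystem(spark._jsc.hadoopConfiguration())

start = time.time()
entries = fs.listStatus(path)  # time a single directory listing
print(f"Listed {len(entries)} entries in {time.time() - start:.1f}s")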

SparkException: Master removed our application

I know there are other very similar questions on Stack Overflow, but those either weren't answered or didn't help me. In contrast to those questions, I've included much more stack trace and log file information. I hope that helps, although it makes the question rather long and ugly. I'm sorry.
Setup
I'm running a 9 node cluster on Amazon EC2 using m3.xlarge instances with DSE (DataStax Enterprise) version 4.6 installed. For each workload (Cassandra, Search and Analytics) 3 nodes are used. DSE 4.6 bundles Spark 1.1 and Cassandra 2.0.
Issue
The application (Spark/Shark shell) gets removed after ~3 minutes even if I do not run any query. Queries on small datasets run successfully as long as they finish within ~3 minutes.
I would like to analyze much larger datasets. Therefore I need the application (shell) not to get removed after ~3 minutes.
Error description
In the Spark or Shark shell, after idling for ~3 minutes or while executing (long-running) queries, Spark eventually aborts with the following stack trace:
15/08/25 14:58:09 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
FAILED: Execution Error, return code -101 from shark.execution.SparkTask
This is not very helpful (to me), which is why I'm including more log file information.
Error Details / Log Files
Master
From the master.log, I think the interesting parts are
INFO 2015-08-25 09:19:59 org.apache.spark.deploy.master.DseSparkMaster: akka.tcp://sparkWorker@172.31.46.48:46715 got disassociated, removing it.
INFO 2015-08-25 09:19:59 org.apache.spark.deploy.master.DseSparkMaster: akka.tcp://sparkWorker@172.31.33.35:42136 got disassociated, removing it.
and
ERROR 2015-08-25 09:21:01 org.apache.spark.deploy.master.DseSparkMaster: Application Shark::ip-172-31-46-49 with ID app-20150825091745-0007 failed 10 times, removing it
INFO 2015-08-25 09:21:01 org.apache.spark.deploy.master.DseSparkMaster: Removing app app-20150825091745-0007
Why do the worker nodes get disassociated?
In case you need to see it, I attached the master's executor (ID 1) stdout as well. The executor's stderr is empty. However, I don't think it shows anything useful for tackling the issue.
On the Spark Master UI I verified that all worker nodes are ALIVE. The second screenshot shows the application details.
There is one executor spawned on the master instance while executors on the two worker nodes get respawned until the whole application is removed. Is that okay or does it indicate some issue? I think it might be related to the "(it) failed 10 times" error message from above.
Worker logs
Furthermore, I can show you the logs of the two Spark worker nodes. I removed most of the class path arguments to shorten the logs; let me know if you need to see them. As each worker node spawns multiple executors, I attached links to some (not all) of the executor stdout and stderr dumps. Dumps of the remaining executors look basically the same.
Worker I
worker.log
Executor (ID 10) stdout
Executor (ID 10) stderr
Worker II
worker.log
Executor (ID 3) stdout
Executor (ID 3) stderr
The executor dumps seem to indicate some issue with permissions and/or timeouts, but I can't figure out any details from them.
Attempts
As mentioned above, there are some similar questions, but none of them were answered or helped me solve the issue. Anyway, things I tried and verified:
Opened port 2552. Nothing changes.
Increased spark.akka.askTimeout, which results in the Spark/Shark app living longer, but eventually it still gets removed.
Ran the Spark shell locally with spark.master=local[4]. On the one hand this allowed me to run queries longer than ~3 minutes successfully, on the other hand it obviously doesn't take advantage of the distributed environment.
Summary
To sum up, the timeouts and the fact that long-running queries execute successfully in local mode both point to some misconfiguration, though I cannot be sure and I don't know how to fix it.
Any help would be very much appreciated.
Edit: Two of the Analytics and two of the Solr nodes were added after the initial setup of the cluster. Just in case that matters.
Edit (2): I was able to work around the issue described above by replacing the Analytics nodes with three freshly installed Analytics nodes. I can now run queries on much larger datasets without the shell being removed. I don't intend to post this as an answer to the question, as it is still unclear what is wrong with the three original Analytics nodes. However, as it is a cluster for testing purposes, it was okay to simply replace the nodes (after replacing them, I performed a nodetool rebuild -- Cassandra on each of the new nodes to recover their data from the Cassandra datacenter).
As mentioned in the attempts, the root cause is a timeout between the master node and one or more workers.
Another thing to try: verify that all workers are reachable by hostname from the master, either via DNS or an entry in the /etc/hosts file.
In my case, the problem was that the cluster was running in an AWS subnet without DNS. The cluster grew over time by spinning up a node, then adding the node to the cluster. When the master was built, only a subset of the addresses in the cluster was known, and only that subset was added to the /etc/hosts file.
When dse spark was run from a "new" node, communication from the master using the worker's hostname failed and the master killed the job.
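A quick way to check that from the master (the worker hostnames below are placeholders for your own):

import socket

# Run on the master: does each worker hostname resolve locally, via DNS or /etc/hosts?
workers = ["analytics-node-1", "analytics-node-2", "analytics-node-3"]  # placeholders
for host in workers:
    try:
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror as exc:
        print(f"{host} -> NOT RESOLVABLE ({exc})")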

Oozie Hive action on AWS - unpredictable ip sources break the job

I've been having a few days of unalloyed torture getting Hive jobs to run via Oozie on a 5-machine AWS cluster. The simplest job that involves the live metastore succeeds or fails unpredictably. The error messages are pretty unhelpful:
Hive failed, error message[Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [1]]
Thanks Oozie!
After a lot of fun changing just about every imaginable setting, I studied hivemetastore.log carefully (we have MySQL as the metastore database) and realised that every successful request came from 172.31.40.3. Unsuccessful requests came from 172.31.40.2, 172.31.40.4 and 172.31.40.5. The Hive console app makes requests without problems from 172.31.40.1.
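For anyone doing the same triage, a rough sketch of tallying the client IPs that appear in hivemetastore.log; the regex just grabs anything that looks like an IPv4 address and the log path is a guess, so adjust both to your setup:

import re
from collections import Counter

# Count every IPv4-looking token per line of the metastore log.
ip_pattern = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
counts = Counter()

with open("/var/log/hive/hivemetastore.log") as log:  # path is a guess; adjust
    for line in log:
        counts.update(ip_pattern.findall(line))

for ip, n in counts.most_common():
    print(ip, n)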
This is getting somewhere after nearly a week of having no idea whatsoever what is going on. The question now is: what do I need to change to allow requests from all of 172.31.40.1-5? Or, alternatively, how do I funnel Oozie requests solely through 172.31.40.1 or 172.31.40.3?
Why would only 172.31.40.1 and 172.31.40.3 work?
all ideas and suggestions warmly received.
many thanks
Toby
This was so simple in the end: the Oozie client was only installed on 2 of the 5 machines in the cluster, corresponding, of course, to the 2 IP addresses that could make successful requests to the Hive metastore.
Once we installed the Oozie client onto all machines in the cluster, all the jobs were automatically accepted and ran OK.
obvious when you know the answer ...

"Unable to verify integrity of data" while running MR job

I'm running a relatively big MR job using Amazon Elastic Map Reduce.
I ran the job plenty of times on small data sets with no problem.
But when trying to run it on a large dataset I'm getting the following exception:
Error: com.amazonaws.AmazonClientException: Unable to verify integrity
of data download. Client calculated content length didn't match
content length received from Amazon S3. The data may be corrupt.
I googled it and the only recommendation I got was to set the following:
System.setProperty("com.amazonaws.services.s3.disableGetObjectMD5Validation","true");
That didn't help at all.
I'm using a replication factor of 3, 11 m1.large data nodes and 1 m1.medium master node.
Any workaround or known fix for this issue?
Apparently, this is a known bug. Or so I've been told by an Amazon employee here.
It occurs when running on large datasets where an S3 object is bigger than 2GB.
I managed to work around it by moving to Hadoop 2.4.0 and AMI 3.1.0.

'Stream Closed' error when using s3distcp to copy files from HDFS to Amazon S3

I am using s3distcp to copy files from HDFS to Amazon S3. Recently, I started getting a 'Stream Closed' error for reducer tasks. I noticed that the error only happens when there are multiple threads (each thread assigned a single file to upload to S3) under the same reducer task. The reducer task fails on this error, but eventually succeeds due to the retries.
My initial analysis was that the threads were using the same instance of the FileSystem class. To avoid that, I used configuration.setBoolean("fs.hdfs.impl.disable.cache", true) before calling FileSystem.get so as not to get a cached instance. But I am still getting the same error. Here is a part of the syslog:
Caused by: java.io.IOException: Stream closed
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2039)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1976)
at java.io.FilterInputStream.read(FilterInputStream.java:66)
at com.sun.org.apache.xerces.internal.impl.XMLEntityManager$RewindableInputStream.read(XMLEntityManager.java:2932)
at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:704)
at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:772)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:235)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1163)
Another point that I should mention is that I recently moved from Hadoop 0.20 to Hadoop 0.20.205 and a new AMI (2.0.4). This issue started happening after moving to the new version.
I have been blocked on this for a while now. Any help or guidance would be helpful.
