I'm trying to write the result of multiple operations into an AWS Aurora PostgreSQL cluster. All the calculations run correctly, but when I try to write the result into the database I get the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o12179.jdbc.
: java.lang.StackOverflowError
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:256)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:256)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:255)
I have already tried increasing the cluster size (15 r4.2xlarge machines), changing the number of data partitions to 120, and setting executor and driver memory to 4 GB each, but I keep getting the same result.
The current SparkSession configuration is the following:
spark = pyspark.sql.SparkSession \
    .builder \
    .appName("profile") \
    .config("spark.sql.shuffle.partitions", 120) \
    .config("spark.executor.memory", "4g") \
    .config("spark.driver.memory", "4g") \
    .getOrCreate()
I don't know if this is a Spark configuration problem or a programming problem.
I finally found the problem. It was an iterative read from S3 that created a really big DAG. I changed the way I read CSV files from S3 to the following single read:
df = spark.read\
.format('csv')\
.option('header', 'true')\
.option('delimiter', ';')\
.option('mode', 'DROPMALFORMED')\
.option('inferSchema', 'true')\
.load(list_paths)
Where list_paths is a precalculated list of paths to S3 objects.
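For context, the kind of iterative read that blows up the DAG looks roughly like this (a simplified sketch, not the exact original code): each loop iteration adds another read and union to the logical plan, so the plan grows with the number of files, while a single load() over the whole list keeps it flat.

# Simplified sketch of the anti-pattern: every iteration chains one more
# read plus union onto the same logical plan, producing a huge DAG.
df = None
for path in list_paths:
    part = spark.read \
        .format('csv') \
        .option('header', 'true') \
        .option('delimiter', ';') \
        .load(path)
    df = part if df is None else df.union(part)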
I have multiple queries running in the same Spark Structured Streaming session.
The queries write parquet records to a Google Cloud Storage bucket and checkpoint to a Google Cloud Storage bucket.
val query1 = df1
  .select(col("key").cast("string"),
          from_json(col("value").cast("string"), schema, Map.empty[String, String]).as("data"))
  .select("key", "data.*")
  .writeStream
  .format("parquet")
  .option("path", path)
  .outputMode("append")
  .option("checkpointLocation", checkpoint_dir1)
  .partitionBy("key") /*.trigger(Trigger.ProcessingTime("5 seconds"))*/
  .queryName("query1")
  .start()

val query2 = df2
  .select(col("key").cast("string"),
          from_json(col("value").cast("string"), schema, Map.empty[String, String]).as("data"))
  .select("key", "data.*")
  .writeStream
  .format("parquet")
  .option("path", path)
  .outputMode("append")
  .option("checkpointLocation", checkpoint_dir2)
  .partitionBy("key") /*.trigger(Trigger.ProcessingTime("5 seconds"))*/
  .queryName("query2")
  .start()
Problem: Sometimes the job fails with java.lang.IllegalStateException: Race while writing batch 4
Logs:
Caused by: java.lang.IllegalStateException: Race while writing batch 4
at org.apache.spark.sql.execution.streaming.ManifestFileCommitProtocol.commitJob(ManifestFileCommitProtocol.scala:67)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:187)
... 20 more
20/07/24 19:40:15 INFO SparkContext: Invoking stop() from shutdown hook
This error occurs because two writers are writing to the same output path. The file streaming sink doesn't support multiple writers; it assumes there is only one writer per path, so each query needs its own output directory.
Hence, to fix this, make each query use its own output directory. When reading the data back, you can load each output directory and union them.
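For illustration, a minimal PySpark sketch of that layout (the bucket paths are hypothetical and the from_json projection is omitted for brevity; the same structure applies to the Scala code above):

# Each query gets its own output path and its own checkpoint location.
query1 = df1.writeStream.format("parquet") \
    .option("path", "gs://my-bucket/output/query1") \
    .option("checkpointLocation", "gs://my-bucket/checkpoints/query1") \
    .outputMode("append").partitionBy("key").queryName("query1").start()

query2 = df2.writeStream.format("parquet") \
    .option("path", "gs://my-bucket/output/query2") \
    .option("checkpointLocation", "gs://my-bucket/checkpoints/query2") \
    .outputMode("append").partitionBy("key").queryName("query2").start()

# Reading back: load each output directory and union the results.
combined = spark.read.parquet("gs://my-bucket/output/query1") \
    .union(spark.read.parquet("gs://my-bucket/output/query2"))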
You can also use a streaming sink that supports multiple concurrent writers, such as Delta Lake. It is supported on Google Cloud as well: https://cloud.google.com/blog/products/data-analytics/getting-started-with-new-table-formats-on-dataproc has instructions on using Delta Lake on Google Cloud. It doesn't mention the streaming case, but all you need to do is change format("parquet") to format("delta") in your code.
I'm investigating the performance of a Flink job that transports data from Kafka to an S3 sink.
We are using a BucketingSink to write parquet files. The bucketing logic puts the messages into a folder per type of data, tenant (customer), date-time, extraction ID, and so on, so each file ends up in a folder structure 9-10 layers deep (s3_bucket:/1/2/3/4/5/6/7/8/9/myFile...).
When the data arrives as bursts of messages per tenant and type we see good write performance, but when it is closer to white noise spread over thousands of tenants, dozens of data types, and multiple extraction IDs, we see an enormous loss of performance (on the order of 300x).
Attaching a debugger, the issue seems to be connected to the number of handles open to S3 at the same time to write data.
Researching the Hadoop libraries used to write to S3, I found some settings that looked like possible improvements:
<name>fs.s3a.connection.maximum</name>
<name>fs.s3a.threads.max</name>
<name>fs.s3a.threads.core</name>
<name>fs.s3a.max.total.tasks</name>
But none of these made a big difference in throughput.
I also tried flattening the folder structure and writing to a single key like (1_2_3_...), but this didn't bring any improvement either.
Note: the tests were done on Flink 1.8 with the Hadoop FileSystem (BucketingSink), writing to S3 using the hadoop fs libraries 2.6.x (we use Cloudera CDH 5.x for savepoints), so we can't switch to StreamingFileSink.
Following the suggestion from Kostas in https://lists.apache.org/thread.html/50ef4d26a1af408df8d9abb70589699cb6b26b2600ab6f4464e86ea4%40%3Cdev.flink.apache.org%3E
I found that the culprit of the slowdown is this piece of code:
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L543-L551
This alone takes around 4-5 secs, with a total of 6 secs to open the file. Logs from an instrumented call:
2020-02-07 08:51:05,825 INFO BucketingSink - openNewPartFile FS verification
2020-02-07 08:51:09,906 INFO BucketingSink - openNewPartFile FS verification - done
2020-02-07 08:51:11,181 INFO BucketingSink - openNewPartFile FS - completed partPath = s3a://....
This, together with the bucketing sink's default 60-second inactivity rollover
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L195
means that with more than 10 parallel buckets on a slot, by the time we finish creating the last bucket the first one has already become stale, so it needs to be rotated, which creates a blocking situation.
We solved this by replacing BucketingSink.java with a patched copy that deletes the FS check mentioned above:
LOG.debug("Opening new part file FS verification");
if (!fs.exists(bucketPath)) {
try {
if (fs.mkdirs(bucketPath)) {
LOG.debug("Created new bucket directory: {}", bucketPath);
}
}
catch (IOException e) {
throw new RuntimeException("Could not create new bucket path.", e);
}
}
LOG.debug("Opening new part file FS verification - done");
We see that the sink works fine without it, and opening a file now takes ~1.2 sec.
Moreover, we set the default inactive threshold to 5 minutes. With these changes we can easily handle more than 200 buckets per slot (once the job picks up speed it ingests on all the slots, which postpones the inactive timeout).
I am trying to split a large file (15 GB) into multiple small files based on a key column inside the file. The same code works fine if I run it on a few thousand rows.
My code is as below.
REGISTER /home/auto/ssachi/piggybank-0.16.0.jar;
input_dt = LOAD '/user/ssachi/sywr_sls_ln_ofr_dtl/sywr_sls_ln_ofr_dtl.txt-10' USING PigStorage(',');
STORE input_dt into '/user/rahire/sywr_sls_ln_ofr_dtl_split' USING org.apache.pig.piggybank.storage.MultiStorage('/user/rahire/sywr_sls_ln_ofr_dtl_split','4','gz',',');
The error is as below:
ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 6015: During execution, encountered a Hadoop error.
HadoopVersion 2.6.0-cdh5.8.2
PigVersion 0.12.0-cdh5.8.2
I tried setting the below parameters assuming it is a memory issue, but it did not help.
SET mapreduce.map.memory.mb 16000;
SET mapreduce.map.java.opts 14400;
With the above parameters set, I got the below error.
Container exited with a non-zero exit code 1
org.apache.pig.backend.executionengine.ExecException: ERROR 2997: Unable to recreate exception from backed error: AttemptID:attempt_1486048646102_2613_m_000066_3 Info:Exception from container-launch.
What's the cardinality of your "key column"? Is it in the 1000s?
If it is in the 1000s, then you will get this error because your mappers are dying of an OOME (OutOfMemoryError).
Understand that each mapper now maintains 1000 file pointers plus an associated buffer per file pointer, which is enough to occupy the whole of your heap.
Can you please provide the logs of your mappers for further investigation?
MultipleOutputs in MapReduce is what is being called internally:
http://bytepadding.com/big-data/map-reduce/multipleoutputs-in-map-reduce/
I am trying to write a pair RDD to Elasticsearch on Elastic Cloud, on version 2.4.0.
I am using the elasticsearch-spark_2.10-2.4.0 plugin to write to ES.
Here is the code I am using to write to ES:
def predict_imgs(r):
    import json
    out_d = {}
    out_d["pid"] = r["pid"]
    out_d["other_stuff"] = r["other_stuff"]
    return (r["pid"], json.dumps(out_d))

res2 = res1.map(predict_imgs)
es_write_conf = {
"es.nodes" : image_es,
#"es.port" : "9243",
"es.resource" : "index/type",
"es.nodes.wan.only":"True",
"es.write.operation":"upsert",
"es.mapping.id":"product_id",
"es.nodes.discovery" : "false",
"es.net.http.auth.user": "username",
"es.net.http.auth.pass": "pass",
"es.input.json": "true",
"es.http.timeout":"1m",
"es.scroll.size":"10",
"es.batch.size.bytes":"1mb",
"es.http.retries":"1",
"es.batch.size.entries":"5",
"es.batch.write.refresh":"False",
"es.batch.write.retry.count":"1",
"es.batch.write.retry.wait":"10s"}
res2.saveAsNewAPIHadoopFile(
path='-',
outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
keyClass="org.apache.hadoop.io.NullWritable",
valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
conf=es_write_conf)
The error I get is as follows:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 744 in stage 26.0 failed 4 times, most recent failure: Lost task 744.3 in stage 26.0 (TID 2841, 10.181.252.29): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
The interesting part is that when I take the first few elements of res2, make a new RDD out of it, and write that to ES, it works flawlessly:
x = sc.parallelize([res2.take(1)])
x.saveAsNewAPIHadoopFile(
path='-',
outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
keyClass="org.apache.hadoop.io.NullWritable",
valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
conf=es_write_conf)
I am using Elastic Cloud (the cloud offering of Elasticsearch) and Databricks (the cloud offering of Apache Spark).
Could it be that ES is not able to keep up with the throughput of Spark writing to it?
I increased our Elastic Cloud size from 2GB RAM to 8GB RAM.
Are there any recommended configs for the es_write_conf I used above? Any other confs that you can think of?
Does updating to ES 5.0 help?
Any help is appreciated. Have been struggling with this for a few days now. Thank you.
It looks like a problem with the PySpark calculations, not necessarily the Elasticsearch saving process. Ensure your RDDs are OK by:
Performing count() on res1 (to "materialize" the results)
Performing count() on res2
If the counts are OK, try caching the results before saving them into ES:
res2.cache()
res2.count() # to fill the cache
res2.saveAsNewAPIHadoopFile(...
If the problem still appears, try looking at the dead executors' stderr and stdout (you can find them on the Executors tab in the Spark UI).
I also noticed the very small batch sizes in es_write_conf; try increasing them to 500 or 1000 entries to get better performance.
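For example, something along these lines (a sketch only; the batch values are illustrative guesses to be tuned for your cluster, not tested settings):

# Larger bulk batches per request (illustrative values, tune for your cluster).
es_write_conf["es.batch.size.entries"] = "1000"
es_write_conf["es.batch.size.bytes"] = "5mb"

res2.cache()
res2.count()  # materialize the RDD so upstream failures surface before the ES write
res2.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_write_conf)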
I would like to create a snapshot of the underlying HDFS while running a Spark job. The particular step involves deleting the contents of some parquet files. I want to create a snapshot, perform the delete operation, verify the operation's results, and proceed with the next steps.
However, I am unable to find a good way to access the HDFS API from my Spark job. The directory I want to snapshot is tagged/marked snapshottable in HDFS. Creating the snapshot from the command line works, but I need to do it programmatically.
I am running Spark 1.5 on CDH 5.5.
Any hints or clues as to how I can perform this operation?
Thanks
Ramdev
I have not verified this, but at least I do not get compile errors, and in theory this solution should work.
This is Scala code:
import org.apache.hadoop.fs.{FileSystem, Path}

val sc = new SparkContext()
// sc.hadoopConfiguration is the Hadoop Configuration the SparkContext was built with
val fs = FileSystem.get(sc.hadoopConfiguration)

// the directory must already be marked snapshottable in HDFS
val dirToSnapshot = new Path("path to create snapshot of")
val snapshotPath = fs.createSnapshot(dirToSnapshot, "snapshot name")
.....
.....
if (condition satisfied) {
  // deleteSnapshot takes the snapshottable directory and the snapshot name
  fs.deleteSnapshot(dirToSnapshot, "snapshot name")
}
I assume this will work in theory.