Spark performance tuning for a Cartesian product join

I want to join two datasets; the first is 4.5 GB and the second is 5 MB.
Below is my query:
val data= rdd1.join(rdd2,regexp_replace($"rdd2.SUBSCRIBER_ID","^0*","") === regexp_replace($"rdd1.subscriberid","^0*", "" ) or
((substring($"rdd2.FIRST_NAME",0,3) === $"rdd1.firstName") and (substring($"rdd2.LAST_NAME",0,4) === $"rdd1.lastName") and (regexp_replace(substring($"rdd2.BIRTH_DATE",0,10),"-","") === $"rdd1.DOB")) or
((substring($"rdd2.FIRST_NAME",0,3) === $"rdd1.firstName") and (substring($"rdd2.LAST_NAME",0,4) === $"rdd1.lastName") and ($"rdd2.GENDER" === $"rdd1.gender")) or
((substring($"rdd2.FIRST_NAME",0,3) === $"rdd1.firstName") and (regexp_replace(substring($"rdd2.BIRTH_DATE",0,10),"-","") === $"rdd1.DOB") and ($"rdd2.GENDER" === $"rdd1.gender")) or
((substring($"rdd2.LAST_NAME",0,4) === $"rdd1.lastName") and (regexp_replace(substring($"rdd2.BIRTH_DATE",0,10),"-","") === $"rdd1.DOB") and ($"rdd2.GENDER" === $"rdd1.gender")))
It runs as a Cartesian join. I used a broadcast for rdd2, but there was no performance improvement.
I am using these properties:
--num-executors 30 --driver-memory 12G --executor-memory 30G --executor-cores 6
--conf spark.sql.shuffle.partitions=2001 --conf spark.serializer=org.apache.spark.serializer.KryoSerializer
--conf spark.cleaner.ttl=800 --conf spark.debug.maxToStringFields=1000
We have 300 vcores available.
How can I change my query to get better performance?
Any help is appreciated.

Try removing the shuffle partition configuration and keeping the executor cores to 2-4.
Please try it and let me know the result.
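Concretely (a sketch of the suggestion above, leaving your other settings as they are), that means dropping --conf spark.sql.shuffle.partitions=2001, so Spark falls back to its default of 200 shuffle partitions, and lowering --executor-cores:
--num-executors 30 --driver-memory 12G --executor-memory 30G --executor-cores 4
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer
--conf spark.cleaner.ttl=800 --conf spark.debug.maxToStringFields=1000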

I removed the OR conditions, created multiple DataFrames (one per condition), and did a union all:
val data= rdd1.join(rdd2,regexp_replace($"rdd2.SUBSCRIBER_ID","^0*","") === regexp_replace($"rdd1.subscriberid","^0*", "" ))
val data1 = rdd1.join(rdd2,((substring($"rdd2.FIRST_NAME",0,3) === $"rdd1.firstName") and (substring($"rdd2.LAST_NAME",0,4) === $"rdd1.lastName") and (regexp_replace(substring($"rdd2.BIRTH_DATE",0,10),"-","") === $"rdd1.DOB")))
val data2 = rdd1.join(rdd2,((substring($"rdd2.FIRST_NAME",0,3) === $"rdd1.firstName") and (substring($"rdd2.LAST_NAME",0,4) === $"rdd1.lastName") and ($"rdd2.GENDER" === $"rdd1.gender")))
val data3 = rdd1.join(rdd2,((substring($"rdd2.FIRST_NAME",0,3) === $"rdd1.firstName") and (regexp_replace(substring($"rdd2.BIRTH_DATE",0,10),"-","") === $"rdd1.DOB") and ($"rdd2.GENDER" === $"rdd1.gender")))
val data4= rdd1.join(rdd2,((substring($"rdd2.LAST_NAME",0,4) === $"rdd1.lastName") and (regexp_replace(substring($"rdd2.BIRTH_DATE",0,10),"-","") === $"rdd1.DOB") and ($"rdd2.GENDER" === $"rdd1.gender")))
val finaldata = data union data1 union data2 union data3 union data4
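Two follow-ups worth checking on top of this (a minimal sketch, assuming rdd1 and rdd2 are the same aliased DataFrames as above): wrap the 5 MB side in an explicit broadcast() hint so each equi-join can be planned as a broadcast hash join, and deduplicate the result, because union keeps duplicates (UNION ALL semantics), so a row pair that satisfies several of the conditions will now appear several times, unlike with the original single OR join.
import org.apache.spark.sql.functions.broadcast
// Sketch only, reusing the column references from the code above.
val data = rdd1.join(broadcast(rdd2),
  regexp_replace($"rdd2.SUBSCRIBER_ID","^0*","") === regexp_replace($"rdd1.subscriberid","^0*",""))
// data1 ... data4 are built the same way, each joining rdd1 with broadcast(rdd2) on its own equi-join condition.
// Deduplicate so that a pair matching more than one condition appears only once.
val finaldata = (data union data1 union data2 union data3 union data4).dropDuplicates()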

Related

applyInPandas() aggregation runs slowly on a big Delta table

I'm trying to create a gold table notebook in Databricks; however, it would take 9 days to fully reprocess the historical data (43 GB, 35k Parquet files). I tried scaling up the cluster, but it doesn't go above 5000 records/second. The bottleneck seems to be the applyInPandas() function. I'm wondering if I could replace pandas with anything else to make the gold notebook execute faster.
The silver table has 60 columns (read_id, reader_id, tracker_timestamp, event_type, ebook_id, page_id, agent_ip, agent_device_type, ...). Each row of data represents a read event of an ebook, e.g. 'page turn', 'click on image', 'click on link', ... All of the events that occurred in a single session have the same read_id. In the gold table I'm trying to group those events into sessions and calculate the number of times each event occurred in a single session. So instead of 100+ rows of data for a read session in the silver table, I would end up with just a single aggregated row in the gold table.
Input is the silver delta table:
import pyspark.sql.functions as F
import pyspark.sql.types as T
import pandas as pd
from pyspark.sql.functions import pandas_udf
input = (spark
    .readStream
    .format("delta")
    .option("withEventTimeOrder", "true")
    .option("maxFilesPerTrigger", 100)
    .load(f"path_to_silver_bucket")
)
I use the withWatermark and session_window functions to ensure I end up grouping all of the events from a single read session (a read session automatically ends 30 minutes after the last reader activity):
group = input.withWatermark("tracker_timestamp", "10 minutes").groupBy("read_id", F.session_window(input.tracker_timestamp, "30 minutes"))
In the next step I use the applyInPandas function like so:
sessions = group.applyInPandas(processing_function, schema=processing_function_output_schema)
Definition of the processing_function used in applyInPandas:
def processing_function(df):
    surf_time_ms = df.query('event_type == "surf"')['duration'].sum()
    immerse_time_ms = df.query('event_type == "immersion"')['duration'].sum()
    min_timestamp = df['tracker_timestamp'].min()
    max_timestamp = df['tracker_timestamp'].max()
    shares = len(df.query('event_type == "share"'))
    leads = len(df.query('event_type == "lead_store"'))
    is_read = len(df.query('event_type == "surf"')) > 0
    distinct_pages = df['page_id'].nunique()
    data = {
        "read_id": df['read_id'].values[0],
        "surf_time_ms": surf_time_ms,
        "immerse_time_ms": immerse_time_ms,
        "min_timestamp": min_timestamp,
        "max_timestamp": max_timestamp,
        "shares": shares,
        "leads": leads,
        "is_read": is_read,
        "number_of_events": len(df),
        "distinct_pages": distinct_pages
    }
    for field in not_calculated_string_fields:
        data[field] = df[field].values[0]
    new_df = pd.DataFrame(data=data, index=['read_id'])
    for x in all_events:
        new_df[f"count_{x}"] = df.query(f"type == '{x}'").count()
    for x in duration_events:
        duration = df.query(f"event_type == '{x}'")['duration']
        duration_sum = duration.sum()
        new_df[f"duration_{x}_ms"] = duration_sum
        if duration_sum > 0:
            new_df[f"mean_duration_{x}_ms"] = duration.mean()
        else:
            new_df[f"mean_duration_{x}_ms"] = 0
    return new_df
And finally, I'm writing the calculated row to the gold table like so:
for_partitioning = (sessions
    .withColumn("tenant", F.col("story_tenant"))
    .withColumn("year", F.year(F.col("min_timestamp")))
    .withColumn("month", F.month(F.col("min_timestamp"))))
checkpoint_path = "checkpoint-path"
gold_path = f"gold-bucket"
(for_partitioning
    .writeStream
    .format('delta')
    .partitionBy('year', 'month', 'tenant')
    .option("mergeSchema", "true")
    .option("checkpointLocation", checkpoint_path)
    .outputMode("append")
    .start(gold_path))
Can anybody think of a more efficient way to do a UDF in PySpark than applyInPandas for the above example? I simply cannot afford to wait 9 days to reprocess 43GB of data...
I've tried playing around with different input and output options (e.g. .option("maxFilesPerTrigger", 100)) but the real problem seems to be applyInPandas.
You could rewrite your processing_function into native Spark aggregations if you really wanted to. Below, each pandas expression is followed by its native Spark equivalent.
"read_id": df['read_id'].values[0]
F.first('read_id').alias('read_id')
"surf_time_ms": df.query('event_type == "surf"')['duration'].sum()
F.sum(F.when(F.col('event_type') == 'surf', F.col('duration'))).alias('surf_time_ms')
"immerse_time_ms": df.query('event_type == "immersion"')['duration'].sum()
F.sum(F.when(F.col('event_type') == 'immersion', F.col('duration'))).alias('immerse_time_ms')
"min_timestamp": df['tracker_timestamp'].min()
F.min('tracker_timestamp').alias('min_timestamp')
"max_timestamp": df['tracker_timestamp'].max()
F.max('tracker_timestamp').alias('max_timestamp')
"shares": len(df.query('event_type == "share"'))
F.count(F.when(F.col('event_type') == 'share', F.lit(1))).alias('shares')
"leads": len(df.query('event_type == "lead_store"'))
F.count(F.when(F.col('event_type') == 'lead_store', F.lit(1))).alias('leads')
"is_read": len(df.query('event_type == "surf"')) > 0
(F.count(F.when(F.col('event_type') == 'surf', F.lit(1))) > 0).alias('is_read')
"number_of_events": len(df)
F.count(F.lit(1)).alias('number_of_events')
"distinct_pages": df['page_id'].nunique()
F.countDistinct('page_id').alias('distinct_pages')
for field in not_calculated_string_fields:
data[field] = df[field].values[0]
*[F.first(field).alias(field) for field in not_calculated_string_fields]
for x in all_events:
new_df[f"count_{x}"] = df.query(f"type == '{x}'").count()
The above can probably be skipped: as far as my tests go, the new count_{x} columns get NaN values, because .count() returns a Series object instead of a single value. (If you do need those counts, the same F.count(F.when(...)) pattern used for shares and leads above would give them natively, one alias per event type.)
for x in duration_events:
duration = df.query(f"event_type == '{x}'")['duration']
duration_sum = duration.sum()
new_df[f"duration_{x}_ms"] = duration_sum
if duration_sum > 0:
new_df[f"mean_duration_{x}_ms"] = duration.mean()
else:
new_df[f"mean_duration_{x}_ms"] = 0
*[F.sum(F.when(F.col('event_type') == x, F.col('duration'))).alias(f"duration_{x}_ms") for x in duration_events]
*[F.mean(F.when(F.col('event_type') == x, F.col('duration'))).alias(f"mean_duration_{x}_ms") for x in duration_events]
So, instead of
def processing_function(df):
...
...
sessions = group.applyInPandas(processing_function, schema=processing_function_output_schema)
you could use efficient native Spark:
sessions = group.agg(
    F.first('read_id').alias('read_id'),
    F.sum(F.when(F.col('event_type') == 'surf', F.col('duration'))).alias('surf_time_ms'),
    F.sum(F.when(F.col('event_type') == 'immersion', F.col('duration'))).alias('immerse_time_ms'),
    F.min('tracker_timestamp').alias('min_timestamp'),
    F.max('tracker_timestamp').alias('max_timestamp'),
    F.count(F.when(F.col('event_type') == 'share', F.lit(1))).alias('shares'),
    F.count(F.when(F.col('event_type') == 'lead_store', F.lit(1))).alias('leads'),
    (F.count(F.when(F.col('event_type') == 'surf', F.lit(1))) > 0).alias('is_read'),
    F.count(F.lit(1)).alias('number_of_events'),
    F.countDistinct('page_id').alias('distinct_pages'),
    *[F.first(field).alias(field) for field in not_calculated_string_fields],
    # skipped count_{x}
    *[F.sum(F.when(F.col('event_type') == x, F.col('duration'))).alias(f"duration_{x}_ms") for x in duration_events],
    *[F.mean(F.when(F.col('event_type') == x, F.col('duration'))).alias(f"mean_duration_{x}_ms") for x in duration_events],
)

How to use a group by function for DbFunctions.DiffMinutes

I have this LINQ query. I want to show ab.InTime and the DbFunctions value together:
var sdv = (from ab in db.Attendances
where ab.Employee == 63 && ab.InTime.Value.Year == 2015 && ab.InTime.Value.Month == 8
select ab.InTime,DbFunctions.DiffMinutes(ab.InTime, ab.OutTime) / 60);
Maybe you can use select new { ... }. Please try this:
var sdv = (from ab in db.Attendances
           where ab.Employee == 63 && ab.InTime.Value.Year == 2015 && ab.InTime.Value.Month == 8
           select new
           {
               InTime = ab.InTime,
               Minutes = DbFunctions.DiffMinutes(ab.InTime, ab.OutTime) / 60
           });

Optimizing a Pig script

I am trying to generate aggregated output. The issue is that all the data goes to a single reducer (the FILTER and COUNT inside the FOREACH are creating the problem). How can I optimize the following script?
Expected output:
group, 10,2,12,34...
data = LOAD '/input/useragents' USING PigStorage('\t') AS (Col1:chararray,Col2:chararray,Col3:chararray,col4:chararray,col5:chararray);
grp1 = GROUP data BY UA PARALLEL 50;
fr1 = FOREACH grp1 {
fltrCol1 = FILTER data BY Col1 == 'Other';
fltrCol2 = FILTER data BY Col2 == 'Other';
fltrCol3 = FILTER data BY Col3 == 'Other';
fltrCol4 = FILTER data BY col4 == 'Other';
fltrCol5 = FILTER data BY col5 == 'Other';
cnt_fltrCol1 = COUNT(fltrCol1);
cnt_fltrCol2 = COUNT(fltrCol2);
cnt_fltrCol3 = COUNT(fltrCol3);
cnt_fltrCol4 = COUNT(fltrCol4);
cnt_fltrCol5 = COUNT(fltrCol5);
GENERATE group,cnt_fltrCol1,cnt_fltrCol2,cnt_fltrCol3,cnt_fltrCol4,cnt_fltrCol5;
}
You could move the filter logic before the GROUP by adding the fltrCol{1,2,3,4,5} columns as integer flags, then sum them up. Off the top of my head, here is the script:
data = LOAD '/input/useragents' USING PigStorage('\t') AS (Col1:chararray,Col2:chararray,Col3:chararray,col4:chararray,col5:chararray);
-- note: 'filter' is a reserved word in Pig, so the new relation is named flags here
flags = FOREACH data GENERATE UA,
    ((Col1 == 'Other') ? 1 : 0) AS fltrCol1,
    ((Col2 == 'Other') ? 1 : 0) AS fltrCol2,
    ((Col3 == 'Other') ? 1 : 0) AS fltrCol3,
    ((col4 == 'Other') ? 1 : 0) AS fltrCol4,
    ((col5 == 'Other') ? 1 : 0) AS fltrCol5;
grp1 = GROUP flags BY UA PARALLEL 50;
fr1 = FOREACH grp1 {
    cnt_fltrCol1 = SUM(flags.fltrCol1);
    cnt_fltrCol2 = SUM(flags.fltrCol2);
    cnt_fltrCol3 = SUM(flags.fltrCol3);
    cnt_fltrCol4 = SUM(flags.fltrCol4);
    cnt_fltrCol5 = SUM(flags.fltrCol5);
    GENERATE group, cnt_fltrCol1, cnt_fltrCol2, cnt_fltrCol3, cnt_fltrCol4, cnt_fltrCol5;
}

LINQ join with exists

I'm busy rewriting a system and am using LINQ queries to extract data from the database. I am used to plain old T-SQL and stored procedures, so my LINQ skills are not the best.
I have a SQL query that I am trying to rewrite in LINQ; it contains a join, a where clause, and an IN statement. I got it written, but the SQL query returns a different value than the LINQ query. I'm missing something somewhere and can't find the reason.
Here is the SQL:
select
isnull(Sum(QtyCC) + Sum(QtyEmployee), 0) *
isnull(Sum(UnitPrice), 0)[TotalRValue]
from
tbl_app_KGCWIssueLines a
inner join tbl_app_KGCWIssue b on b.IssueNrLnk = a.IssueNrLnk
where
b.CreationDate >= '2011-02-01' and
a.IssueNrLnk IN (
select
IssueNrLnk
from
tbl_app_KGCWIssue
where
CustomerCode = 'PRO002' and
ISNULL(Tier1,'') = 'PRO002' and
ISNULL(Tier2,'') = 'HAMD01' and
ISNULL(Tier3,'') = '02' and
ISNULL(Tier4,'') = '02001' and
ISNULL(Tier5,'') = 'PTAHQ001' and
ISNULL(Tier6,'') = '035' and
ISNULL(Tier7,'') = '' and
ISNULL(Tier8,'') = '' and
ISNULL(Tier9,'') = '' and
ISNULL(Tier10,'') = ''
)
And here is the LINQ:
ctx.ObjectContext.tbl_app_KGCWIssue
.Join(ctx.ObjectContext.tbl_app_KGCWIssueLines,
i => i.IssueNrLnk, l => l.IssueNrLnk, (i, l) => new { i, l })
.Where(o => o.i.CreationDate >= IntervalStartDate)
.Where(p => ctx.ObjectContext.tbl_app_KGCWIssue
.Where(a =>
a.CustomerCode == CustomerCode &&
a.Tier1 == employee.Tier1 &&
a.Tier2 == employee.Tier2 &&
a.Tier3 == employee.Tier3 &&
a.Tier4 == employee.Tier4 &&
a.Tier5 == employee.Tier5 &&
a.Tier6 == employee.Tier6 &&
a.Tier7 == employee.Tier7 &&
a.Tier8 == employee.Tier8 &&
a.Tier9 == employee.Tier9 &&
a.Tier10 == employee.Tier10)
.Select(i => i.IssueNrLnk)
.Contains(p.l.IssueNrLnk))
.Sum(p => p.l.UnitPrice * (p.l.QtyEmployee + p.l.QtyCC));

How to configure a LINQ request

I have 2 tables:
'AllowedDates'
- DayID int PK
- Day datetime
'AllowedTimes'
- TimeID int PK
- DayID int FK
- Hour int
- Minute int
I also have a table 'Users':
- ID int PK
- FirstName nvarchar(max)
...
and a table 'UserDeniedTimes':
- DayID int FK
- UserID int FK
- Hour int
- Minute int
I need to select the users which don't have a denied time (a record in UserDeniedTimes) for a specific DayID/Hour/Minute.
I tried the following:
var result = from i in _dbContext.Users
where i.UserDeniedTimes.All(
p => (!p.AllowedDate.AllowedTimes.Any(
a1 => a1.DayID == aTime.DayID
&& a1.Hour == aTime.Hour
&& a1.Minute == aTime.Minute
))
)
select new ...
It works correctly, but with one exception: if a user has a record in UserDeniedTimes for the same day but a different time, this user is not selected either. For example, UserDeniedTimes has the record:
DayID = 10
UserID = 20
Hour = 14
Minute = 30
This user will not be selected if aTime has the following values:
DayID = 10
Hour = 9
Minute = 30
but will be selected if DayID = 11. Why?
[ADDED]
It works correctly when I limit only by day:
var result = from i in _dbContext.Users
where i.UserDeniedTimes.All(
p => (!p.AllowedDate.AllowedTimes.Any(
a1 => a1.DayID == aTime.DayID
))
)
select new ...
but it does not work when I write:
var result = from i in _dbContext.Users
where i.UserDeniedTimes.All(
p => (!p.AllowedDate.AllowedTimes.Any(
a1 => a1.Hour == 14
))
)
select new ...
Why? What is the difference between the magic DayID and Hour?
[ADDED #2]
((time == null) || i.UserDeniedTimes.All(p =>
//p.AllowedDate.AllowedTimes.Any(a1 => a1.DayID != 33) &&
(p.AllowedDate.AllowedTimes.Any(a2 => a2.Hour != 14)
))) &&
does not work
((time == null) || i.UserDeniedTimes.All(p =>
p.AllowedDate.AllowedTimes.Any(a1 => a1.DayID != 33) &&
//(p.AllowedDate.AllowedTimes.Any(a2 => a2.Hour != 14)
))) &&
works
Why?
Sometimes it helps to rephrase the problem: if I read you correctly, you don't want users that have a denied time matching the specified DayID/Hour/Minute:
where !i.UserDeniedTimes.Any(
p => (p.AllowedDate.AllowedTimes.Any(
a1 => a1.DayID == aTime.DayID
&& a1.Hour == aTime.Hour
&& a1.Minute == aTime.Minute
))
)
This should select the users you want. If not, please explain in a few more words what exactly you are trying to achieve.
