Spark Drop Duplicates on multiple columns - Performance Issue [duplicate]

I'm trying to use Spark DataFrames instead of RDDs, since they appear to be higher-level than RDDs and tend to produce more readable code.
On a 14-node Google Dataproc cluster, I have about 6 million names that are translated to IDs by two different systems: sa and sb. Each Row contains name, id_sa and id_sb. My goal is to produce a mapping from id_sa to id_sb such that for each id_sa, the corresponding id_sb is the most frequent id among all names attached to id_sa.
Let's try to clarify with an example. If I have the following rows:
[Row(name='n1', id_sa='a1', id_sb='b1'),
Row(name='n2', id_sa='a1', id_sb='b2'),
Row(name='n3', id_sa='a1', id_sb='b2'),
Row(name='n4', id_sa='a2', id_sb='b2')]
My goal is to produce a mapping from a1 to b2. Indeed, the names associated with a1 are n1, n2 and n3, which map respectively to b1, b2 and b2, so b2 is the most frequent mapping among the names associated with a1. In the same way, a2 will be mapped to b2. It's OK to assume that there will always be a winner: no need to break ties.
I was hoping that I could use groupBy(df.id_sa) on my dataframe, but I don't know what to do next. I was hoping for an aggregation that could produce, in the end, the following rows:
[Row(id_sa='a1', max_id_sb='b2'),
Row(id_sa='a2', max_id_sb='b2')]
But maybe I'm trying to use the wrong tool and I should just go back to using RDDs.

Using join (this will result in more than one row per group in case of ties):
import pyspark.sql.functions as F
from pyspark.sql.functions import count, col
cnts = df.groupBy("id_sa", "id_sb").agg(count("*").alias("cnt")).alias("cnts")
maxs = cnts.groupBy("id_sa").agg(F.max("cnt").alias("mx")).alias("maxs")
cnts.join(maxs,
    (col("cnt") == col("mx")) & (col("cnts.id_sa") == col("maxs.id_sa"))
).select(col("cnts.id_sa"), col("cnts.id_sb"))
Using window functions (will drop ties):
from pyspark.sql.functions import row_number
from pyspark.sql.window import Window
w = Window().partitionBy("id_sa").orderBy(col("cnt").desc())
(cnts
    .withColumn("rn", row_number().over(w))
    .where(col("rn") == 1)
    .select("id_sa", "id_sb"))
Using struct ordering:
from pyspark.sql.functions import struct
(cnts
    .groupBy("id_sa")
    .agg(F.max(struct(col("cnt"), col("id_sb"))).alias("max"))
    .select(col("id_sa"), col("max.id_sb")))
See also How to select the first row of each group?

I think what you might be looking for are window functions:
http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=window#pyspark.sql.Window
https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html
Here is an example in Scala (I don't have a Spark Shell with Hive available right now, so I was not able to test the code, but I think it should work):
case class MyRow(name: String, id_sa: String, id_sb: String)
val myDF = sc.parallelize(Array(
  MyRow("n1", "a1", "b1"),
  MyRow("n2", "a1", "b2"),
  MyRow("n3", "a1", "b2"),
  MyRow("n1", "a2", "b2")
)).toDF("name", "id_sa", "id_sb")
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.first

val windowSpec = Window.partitionBy(myDF("id_sa")).orderBy(myDF("id_sb").desc)

myDF
  .withColumn("max_id_sb", first(myDF("id_sb")).over(windowSpec))
  .filter("id_sb = max_id_sb")
There are probably more efficient ways to achieve the same results with Window functions, but I hope this points you in the right direction.

Related

How to create a Matrix with p values from anova

I performed an ANOVA and corrected it with Tukey's test, so I got several p-values.
Now I would like to build a heatmap with these values, and for that I need to create a matrix of the p-values.
The first question is how to fill a matrix with the ANOVA p-values.
Then I ran an ANCOVA and obtained other p-values.
Now I would like to make a heatmap to compare these p-values between the ANOVA and the ANCOVA.
Can someone help me?
I will give an example:
anova_model <- aov( X ~ groups , data = T1)
postHocs <- glht(anova_model, linfct = mcp(groups = "Tukey"))
summary(postHocs)
This ANOVA gave me several p-values.
ancova_model <- aov( X ~ groups + age , data = T1)
postHocs <- glht(ancova_model, linfct = mcp(groups = "Tukey"))
summary(postHocs)
This ANCOVA gave me several other p-values.
I would now like to create a heatmap to compare these p-values, to see, for example, whether age interferes a lot or not. I believe the ideal is to create a matrix first, but I'm actually kind of lost.
Could someone help me?
Thank you very much

Performance difference map() vs withColumn()

I have a table with over 100 columns. I need to remove double quotes from certain columns. I found 2 ways to do it, using withColumn() and map()
Using withColumn()
cols_to_fix = ["col1", ..., "col20"]
for col in cols_to_fix:
    df = df.withColumn(col, regexp_replace(df[col], "\"", ""))
Using map()
def remove_quotes(row: Row) -> Row:
    row_as_dict = row.asDict()
    cols_to_fix = ["col1", ..., "col20"]
    for column in cols_to_fix:
        if row_as_dict[column]:
            row_as_dict[column] = re.sub("\"", "", str(row_as_dict[column]))
    return Row(**row_as_dict)

df = df.rdd.map(remove_quotes).toDF(df.schema)
Here is my question. I found that using map() takes about 4 times longer than withColumn() on a table that has ~25M records. I would really appreciate it if a fellow Stack Overflow user could explain the reason for the performance difference, so that I can avoid similar pitfalls in the future.
Firstly, one piece of advice: do not convert the DataFrame to an RDD; just call map on the DataFrame/Dataset directly (as in the Scala Dataset example below). This may save a lot of time.
The following page:
https://dzone.com/articles/apache-spark-3-reasons-why-you-should-not-use-rdds
is worth reading; its main conclusion is that RDDs are remarkably slower than DataFrames/Datasets, not to mention the time spent converting a DataFrame to an RDD.
Now let's talk about map and withColumn without any conversion between DataFrame and RDD.
Conclusion first: map is usually around 5x slower than withColumn. The reason is that a map operation always involves deserialization and serialization, while withColumn can operate directly on the column of interest.
To be specific, a map operation has to deserialize each Row into the individual values that the operation will work on.
Here is an example.
Assume we have a DataFrame that looks like this:
+--------+-----------+
|language|users_count|
+--------+-----------+
| Java| 20000|
| Python| 100000|
| Scala| 3000|
+--------+-----------+
Now suppose we want to increment all the values in the users_count column by 1; we can do it like this (Scala Dataset API):
df.map(row => {
  val usersCount = row.getInt(1) + 1
  (row.getString(0), usersCount)
}).toDF("language", "users_count_incremented_by_1")
In the code above, we first need to deserialize every row to extract the value in the 2nd column; after that we output the modified values and save them as a DataFrame (this step requires serializing (a, b) back into Row(a, b), since a DataFrame is nothing but a Dataset of Rows).
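For contrast, the column-native version never touches individual rows. A minimal PySpark sketch of the same increment (assuming the language/users_count DataFrame above):
from pyspark.sql import functions as F

# operates directly on the column; no Row deserialization/serialization involved
df_incremented = df.withColumn("users_count", F.col("users_count") + 1)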
For a more detailed explanation, check the following excellent article:
https://medium.com/@fqaiser94/udfs-vs-map-vs-custom-spark-native-functions-91ab2c154b44
map cannot operate on the column itself; it has to operate on the values of the column. Getting the values requires deserialization, and saving the result as a DataFrame requires serialization.
But map is still of great use: with the map method you can implement very sophisticated operations, whereas with withColumn you are limited to the built-in operations.
To sum it up, map is slower but more flexible, while withColumn is more efficient but limited in functionality.

Is there a reasonable way to create TFF client data sets?

I am wondering whether there are any reasonable ways to generate client data sets for federated learning simulation using the TFF core code. The federated core tutorial uses the MNIST database, where each client has only one distinct label in its data set. In this case, there are only 10 different labels available. If I want to have more clients, how can I do that? Thanks in advance.
If you want to create a dataset from scratch you can use tff.simulation.FromTensorSlicesClientData to convert tensors to a TFF ClientData object. You just need to pass a dictionary with client IDs as keys and datasets as values.
import collections
import tensorflow_federated as tff

# split, image_per_set, x_train and y_train come from the author's MNIST setup
client_train_dataset = collections.OrderedDict()
for i in range(1, split + 1):
    client_name = "client_" + str(i)
    start = image_per_set * (i - 1)
    end = image_per_set * i
    print(f"Adding data from {start} to {end} for client : {client_name}")
    data = collections.OrderedDict((('label', y_train[start:end]), ('pixels', x_train[start:end])))
    client_train_dataset[client_name] = data

train_dataset = tff.simulation.FromTensorSlicesClientData(client_train_dataset)
You can check my complete implementation here, where I have split MNIST into 4 clients.
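As a quick usage sketch (assuming the train_dataset built above), each client's data can then be pulled out as a tf.data.Dataset:
# hypothetical usage of the ClientData object built above
client_ds = train_dataset.create_tf_dataset_for_client("client_1")
for example in client_ds.take(1):
    print(example['label'], example['pixels'].shape)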
There are preprocessed simulation datasets in TFF that should serve this purpose quite nicely. Take, for example, loading EMNIST where the images are partitioned by writer, corresponding to user, as opposed to label. This can be loaded into a Python runtime rather simply (here creating train data with 100 clients):
source, _ = tff.simulation.datasets.emnist.load_data()

def map_fn(example):
    return {'x': tf.reshape(example['pixels'], [-1]), 'y': example['label']}

def client_data(n):
    ds = source.create_tf_dataset_for_client(source.client_ids[n])
    return ds.repeat(10).map(map_fn).shuffle(500).batch(20)

train_data = [client_data(n) for n in range(100)]
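As a quick sanity check (a sketch, reusing train_data from the snippet above), one client's first batch can be inspected directly:
# assumes the tf/tff imports and the train_data list from the snippet above
first_batch = next(iter(train_data[0]))
print(first_batch['x'].shape)  # expected: (20, 784), flattened 28x28 pixels
print(first_batch['y'].shape)  # expected: (20,), the labels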
There are existing datasets partitioned in a similar way for extended MNIST (i.e., it includes handwritten characters in addition to digits), Shakespeare plays (partitioned by character), and Stack Overflow posts (partitioned by user). Documentation on these datasets can be found here.
If you wish to create your own custom dataset, please see the answer here.

Count word occurrences in each row using Pig

I have a set of tweets that have many different fields
raw_tweets = LOAD 'input.tsv' USING PigStorage('\t') AS (tweet_id, text,
in_reply_to_status_id, favorite_count, source, coordinates, entities,
in_reply_to_screen_name, in_reply_to_user_id, retweet_count, is_retweet,
retweet_of_id, user_id_id, lang, created_at, event_id_id, is_news);
I want to find the most common words for each date. I managed to group the texts by date:
r1 = FOREACH raw_tweets GENERATE SUBSTRING(created_at,0,10) AS a,
     REPLACE(LOWER(text),'([^a-z\\s]+)','') AS b;
r2 = group r1 by a;
r3 = foreach r2 generate group as a, r1 as b;
r4 = foreach r3 generate a, FLATTEN(BagToTuple(b.b));
Now it looks like:
(date text text3)
(date2 text2)
I removed the special characters, so only "real" words appear in the text.
Sample:
2017-06-18 the plants are green the dog is black there are words this is
2017-06-19 more words and even more words another phrase begins here
I want the output to look like
2017-06-18 the are is
2017-06-19 more words and
I don't really care about how many times a word appears; I just want to show the most common ones. If two words appear the same number of times, show any of them.
While I'm sure there is a way to do this entirely in Pig, it would probably be more difficult than necessary.
UDFs are the way to go, in my opinion; Python is just one option, which I will show because it's quick to register in Pig.
For example,
input.tsv
2017-06-18 the plants are green the dog is black there are words this is
2017-06-19 more words and even more words another phrase begins here
py_udfs.py
from collections import Counter
from operator import itemgetter

@outputSchema("y:bag{t:tuple(word:chararray,count:int)}")
def word_count(sentence):
    ''' Does a word count of a sentence and orders common words first '''
    words = Counter()
    for w in sentence.split():
        words[w] += 1
    values = ((word, count) for word, count in words.items())
    return sorted(values, key=itemgetter(1), reverse=True)
script.pig
REGISTER 'py_udfs.py' USING jython AS py_udfs;
A = LOAD 'input.tsv' USING PigStorage('\t') as (created_at:chararray,sentence:chararray);
B = FOREACH A GENERATE created_at, py_udfs.word_count(sentence);
\d B
Output
(2017-06-18,{(is,2),(the,2),(are,2),(green,1),(black,1),(words,1),(this,1),(plants,1),(there,1),(dog,1)})
(2017-06-19,{(more,2),(words,2),(here,1),(another,1),(begins,1),(phrase,1),(even,1),(and,1)})
If you are doing textual analysis, though, I would suggest:
removing stop words
lemmatization / stemming
using Apache Spark (a quick sketch follows below)
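For the Spark route, a minimal PySpark sketch of the same per-date top-words computation might look like this (illustrative only; it assumes the two-column created_at / sentence layout of input.tsv above and keeps the 3 most frequent words per date, with arbitrary tie order):
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.read.csv("input.tsv", sep="\t").toDF("created_at", "sentence")

# one row per (date, word)
words = df.select(
    "created_at",
    F.explode(F.split(F.lower(F.col("sentence")), r"\s+")).alias("word"))

counts = words.groupBy("created_at", "word").count()

# rank words within each date by frequency and keep the top 3
w = Window.partitionBy("created_at").orderBy(F.col("count").desc())
top_words = (counts
             .withColumn("rn", F.row_number().over(w))
             .where("rn <= 3")
             .select("created_at", "word"))

top_words.show()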

How can I use the map datatype in Apache Pig?

I'd like to use Apache Pig to build a large key -> value mapping, look things up in the map, and iterate over the keys. However, there does not even seem to be syntax for doing these things; I've checked the manual, wiki, sample code, Elephant book, Google, and even tried parsing the parser source. Every single example loads map literals from a file... and then never uses them. How can you use Pig's maps?
First, there doesn't seem to be a way to load a 2-column CSV file into a map directly. If I have a simple map.csv:
1,2
3,4
5,6
And I try to load it as a map:
m = load 'map.csv' using PigStorage(',') as (M: []);
dump m;
I get three empty tuples:
()
()
()
So I try to load tuples and then generate the map:
m = load 'map.csv' using PigStorage(',') as (key:chararray, val:chararray);
b = foreach m generate [key#val];
ERROR 1000: Error during parsing. Encountered " "[" "[ "" at line 1, column 24.
...
Many variations on the syntax also fail (e.g., generate [$0#$1]).
OK, so I munge my map into Pig's map literal format as map.pig:
[1#2]
[3#4]
[5#6]
And load it up:
m = load 'map.pig' as (M: []);
Now let's load up some keys and try lookups:
k = load 'keys.csv' as (key);
dump k;
3
5
1
c = foreach k generate m#key; /* Or m[key], or... what? */
ERROR 1000: Error during parsing. Invalid alias: m in {M: map[ ]}
Hrm, OK, maybe since there are two relations involved, we need a join:
c = join k by key, m by /* ...um, what? */ $0;
dump c;
ERROR 1068: Using Map as key not supported.
c = join k by key, m by m#key;
dump c;
Error 1000: Error during parsing. Invalid alias: m in {M: map[ ]}
Fail. How do I refer to the key (or value) of a map? The map schema syntax doesn't seem to let you even name the key and value (the mailing list says there's no way to assign types).
Finally, I'd just like to be able to find all the keys in my map:
d = foreach m generate ...oh, forget it.
Is Pig's map type half-baked? What am I missing?
Currently Pig maps need the key to be a chararray (string) literal that you supply, not a variable that contains a string. So in map#key, the key has to be a constant string that you supply (e.g. map#'keyvalue').
The typical use case is to load a complex data structure in which one of the elements is a key-value pair, and later, in a FOREACH statement, refer to a particular value based on the key you are interested in.
http://pig.apache.org/docs/r0.9.1/basic.html#map-schema
In Pig version 0.10.0 there is a new function available called "TOMAP" (http://pig.apache.org/docs/r0.10.0/func.html#tomap) that converts its odd-numbered (chararray) parameters to keys and its even-numbered parameters to values. Unfortunately I haven't found it to be that useful, since I typically deal with arbitrary dicts of varying lengths and keys.
I would find a TOMAP function that took a tuple as a single argument, instead of a variable number of parameters, to be much more useful.
This isn't a complete solution to your problem, but the availability of TOMAP gives you some more options for constructing a real solution.
Great question!
I personally do not like maps in Pig. They have a place in traditional programming languages like Java, C#, etc., where it's really handy and fast to look up a key in a map. On the other hand, maps in Pig have very limited features.
As you rightly pointed out, one cannot look up a variable key in a map in Pig. The key needs to be a constant, e.g. myMap#'keyFoo' is allowed but myMap#$SOME_VARIABLE is not.
If you think about it, you do not need a map in Pig. One usually loads the data from some source, transforms it, joins it with some other dataset, filters it, transforms it again, and so on. JOIN actually does a good job of looking up variable keys in the data.
e.g. data1 has 2 columns A and B, and data2 has 3 columns X, Y, Z. If you join data1 BY A with data2 BY Z, JOIN does the work of a map (from a traditional language) that maps a value of column Z to a value of column B (via column A). So data1 essentially represents a map A -> B.
So why do we need Map in Pig?
Usually Hadoop data are dumps of different data sources from traditional languages. If the original data sources contain maps, the HDFS data will contain a corresponding map.
How can one handle the Map data?
There are really 2 use cases:
1) Map keys are constants, e.g. HttpRequest header data contains time, server and clientIp as the keys in the map. To access the value of a particular key, you access it with a constant key, e.g. header#'clientIp'.
2) Map keys are variables. In these cases, you would most probably want to JOIN the map keys with some other data set. I usually convert the map to a bag using the UDF MapToBag, which converts map data into a bag of 2-field tuples (key, value). Once the map data is converted to a bag of tuples, it's easy to join it with other data sets.
I hope this helps.
1) If you want to load map data it should look like "[programming#SQL,rdbms#Oracle]"
2) If you want to load tuple data it should look like "(first_name_1234,middle_initial_1234,last_name_1234)"
3) If you want to load bag data it should look like "{(project_4567_1),(project_4567_2),(project_4567_3)}"
My file pigtest.csv looks like this:
1234|emp_1234#company.com|(first_name_1234,middle_initial_1234,last_name_1234)|{(project_1234_1),(project_1234_2),(project_1234_3)}|[programming#SQL,rdbms#Oracle]
4567|emp_4567#company.com|(first_name_4567,middle_initial_4567,last_name_4567)|{(project_4567_1),(project_4567_2),(project_4567_3)}|[programming#Java,OS#Linux]
My schema:
a = LOAD 'pigtest.csv' using PigStorage('|') AS (employee_id:int, email:chararray, name:tuple(first_name:chararray, middle_name:chararray, last_name:chararray), project_list:bag{project: tuple(project_name:chararray)}, skills:map[chararray]) ;
b = FOREACH a GENERATE employee_id, email, name.first_name, project_list, skills#'programming' ;
dump b;
I think you need to think in terms of relations, where the map is just one field of one record. Then you can apply operations on the relations, like joining the two data sets, data and mapping:
Input
$ cat data.txt
1
2
3
4
5
$ cat mapping.txt
1 2
2 4
3 6
4 8
5 10
Pig
mapping = LOAD 'mapping.txt' AS (key:CHARARRAY, value:CHARARRAY);
data = LOAD 'data.txt' AS (value:CHARARRAY);
-- list keys
mapping_keys =
FOREACH mapping
GENERATE key;
DUMP mapping_keys;
-- join mapping to data
mapped_data =
JOIN mapping BY key, data BY value;
DUMP mapped_data;
Output
> # keys
(1)
(2)
(3)
(4)
(5)
> # mapped data
(1,2,1)
(2,4,2)
(3,6,3)
(4,8,4)
(5,10,5)
This answer could also help you if you just want to do a simple look up:
pass-a-relation-to-a-pig-udf-when-using-foreach-on-another-relation
You can load up any data and then convert and store it in key-value format to read later:
data = LOAD 'somedata.csv' USING PigStorage(',');
STORE data INTO 'folder' USING PigStorage('#');
and then read it back as map data.
