Convert rank() partition by oracle query to pyspark sql [duplicate]

I'm trying to use some window functions (ntile and percentRank) with a data frame, but I don't know how to use them.
Can anyone help me with this, please? The Python API documentation has no examples for them.
Specifically, I'm trying to get quantiles of a numeric field in my data frame.
I'm using spark 1.4.0.

To be able to use a window function you have to create a window first. The definition is pretty much the same as in normal SQL: you can define either the ordering, the partitioning, or both. First let's create some dummy data:
import numpy as np
np.random.seed(1)
# 20 keys with 20 matching values drawn from two normal distributions
keys = ["foo"] * 10 + ["bar"] * 10
values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])
df = sqlContext.createDataFrame([
    {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])
Make sure you're using HiveContext (Spark < 2.0 only):
from pyspark.sql import HiveContext
assert isinstance(sqlContext, HiveContext)
Create a window:
from pyspark.sql.window import Window
w = Window.partitionBy(df.k).orderBy(df.v)
which is equivalent to
(PARTITION BY k ORDER BY v)
in SQL.
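Since the original question was about converting a rank()/PARTITION BY SQL query, the same window can also be expressed in raw SQL against a registered temp table. A minimal sketch, assuming the HiveContext checked above and "df" as a hypothetical table name:
df.registerTempTable("df")
sqlContext.sql("""
    SELECT k, v,
           PERCENT_RANK() OVER (PARTITION BY k ORDER BY v) AS percent_rank,
           NTILE(3) OVER (PARTITION BY k ORDER BY v) AS ntile3
    FROM df
""").show()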
As a rule of thumb, window definitions should always contain a PARTITION BY clause, otherwise Spark will move all data to a single partition. ORDER BY is required for some functions, while for others (typically aggregates) it is optional.
There are also two optional clauses which can be used to define the window span - ROWS BETWEEN and RANGE BETWEEN. These won't be useful for us in this particular scenario.
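For completeness, a minimal sketch of a row-based frame (not needed here; the Window.unboundedPreceding / Window.currentRow constants shown require Spark 2.1+):
from pyspark.sql import functions as F
# cumulative frame: everything from the start of the partition up to the current row
w_cum = (Window.partitionBy(df.k).orderBy(df.v)
    .rowsBetween(Window.unboundedPreceding, Window.currentRow))
df.select("k", "v", F.sum("v").over(w_cum).alias("running_sum")).show()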
Finally, we can use it in a query:
from pyspark.sql.functions import percentRank, ntile
# note: in Spark 2.0+ this function is named percent_rank
df.select(
    "k", "v",
    percentRank().over(w).alias("percent_rank"),
    ntile(3).over(w).alias("ntile3")
).show()
Note that ntile is not related in any way to quantiles; it simply splits the ordered rows of each partition into n roughly equal buckets.
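If the end goal is the quantile values themselves rather than per-row ranks, and Spark 2.0+ is available, DataFrame.approxQuantile is a more direct route. A minimal sketch:
# quartiles of v, with 1% relative error
quartiles = df.approxQuantile("v", [0.25, 0.5, 0.75], 0.01)
print(quartiles)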

Related

pyspark groupBy and orderBy use together

Hi there I want to achieve something like this
SAS SQL: select * from flightData2015 group by DEST_COUNTRY_NAME order by count
My data looks like this (columns DEST_COUNTRY_NAME, ORIGIN_COUNTRY_NAME and count).
This is my spark code:
flightData2015.selectExpr("*").groupBy("DEST_COUNTRY_NAME").orderBy("count").show()
I received this error:
AttributeError: 'GroupedData' object has no attribute 'orderBy'. I am new to pyspark. Are Pyspark's groupBy and orderBy not the same as in SAS SQL?
I also tried flightData2015.selectExpr("*").groupBy("DEST_COUNTRY_NAME").sort("count").show() and I received much the same error: "AttributeError: 'GroupedData' object has no attribute 'sort'"
Please help!
In Spark, groupBy returns a GroupedData, not a DataFrame, and you would normally follow groupBy with an aggregation. In this case, even though the SAS SQL doesn't have any aggregation, you still have to define one (and drop it later if you want).
(flightData2015
    .groupBy("DEST_COUNTRY_NAME")
    .count()  # this is the "dummy" aggregation
    .orderBy("count")
    .show()
)
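If you'd rather make the dummy aggregation explicit (and control the column name), an equivalent sketch using agg on the same flightData2015 DataFrame would be:
from pyspark.sql import functions as F
(flightData2015
    .groupBy("DEST_COUNTRY_NAME")
    .agg(F.count("*").alias("count"))  # explicit, named aggregation
    .orderBy("count")
    .show()
)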
There is no need for group by if you want every row.
You can order by multiple columns.
from pyspark.sql import functions as F
vals = [
    ("United States", "Angola", 13),
    ("United States", "Anguilla", 38),
    ("United States", "Antigua", 20),
    ("United Kingdom", "Antigua", 22),
    ("United Kingdom", "Peru", 50),
    ("United Kingdom", "Russia", 13),
    ("Argentina", "United Kingdom", 13),
]
cols = ["destination_country_name", "origin_country_name", "count"]
df = spark.createDataFrame(vals, cols)
# display() is Databricks-specific; use .show() elsewhere
# display(df.orderBy(['destination_country_name', F.col('count').desc()]))  # if you want count to be descending
display(df.orderBy(['destination_country_name', 'count']))

In pyspark, does df.select(column1, column2....) impact performance?

For example, I have a dataframe with 10 columns, and later I need to join this dataframe with other dataframes. But only column1 and column2 of the dataframe are used; the others are not useful.
If I do this:
df1 = df.select(['column1', 'column2'])
...
...
result = df1.join(other_df)....
Is this good for performance?
If so, why is it good, and is there any documentation about it?
Thanks.
Spark is a distributed, lazily evaluated framework, which means that whether you select all columns or only some of them, they are brought into memory only when an action is applied.
So if you run
df.explain()
at any stage, it will show you the column projection: a column is kept in the plan (and eventually in memory) only if it is required; otherwise it is not selected.
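A quick way to check the pruning yourself is to compare the physical plans; a sketch using the names from the question (assuming column1 is the join key):
df1 = df.select("column1", "column2")
result = df1.join(other_df, on="column1")
result.explain()  # the plan's scan/Project nodes should list only the columns actually needed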
It's better to specify only the required columns: it is a best practice and also makes the logic of your code easier to follow.
To understand more about actions and transformations, see the Spark programming guide.
Especially for a join, the fewer columns you have to use (and therefore select), the more efficient it will be.
Of course, Spark is lazy and optimized, which means that as long as you don't call a triggering action like show() or count(), it won't change anything.
So doing:
df = df.select(["a", "b"])
df = df.join(other_df)
df.show()
or join first and select after:
df = df.join(other_df)
df = df.select(["a", "b"])
df.show()
doesn't change anything, because Spark optimizes the query when it is finally triggered by a count() or show(), and will apply the select first in both cases.
On the other hand, and to answer your question:
doing a show() or count() in between will definitely impact performance, and the version with the fewest columns will definitely be faster.
Try comparing:
df = df.select(["a", "b"])
df.count()
df = df.join(other_df)
df.show()
and
df = df.join(other_df)
df.count()
df = df.select(["a", "b"])
df.show()
You will see the difference in time.
The difference might not be huge, but if you're using filters (e.g. df = df.filter(df["b"] == "blabla")), it can be really big, especially if you're working with joins.
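A sketch of that comparison with a filter added, again using the hypothetical columns a and b and assuming a is the join key:
# filter and prune before the join ...
lhs = df.filter(df["b"] == "blabla").select("a", "b")
lhs.join(other_df, on="a").count()
# ... versus joining everything first and filtering afterwards
joined = df.join(other_df, on="a")
joined.filter(joined["b"] == "blabla").count()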

Spark: lazy action?

I am working on a complex application. From source data, we compute many statistics, e.g.:
val df1 = sourceData.filter($"col1" === "val" and ...)
  .select(...)
  .groupBy(...)
  .min()
val df2 = sourceData.filter($"col2" === "val" and ...)
  .select(...)
  .groupBy(...)
  .count()
As the dataframes are grouped on the same columns, the result dataframes are then joined together:
df1.join(df2, Seq("groupCol"), "full_outer")
  .join(df3....)
  .write.save(...)
(in my code this is done in a loop)
This is not performant. The problem is that each dataframe (I have about 30) ends with an action, so in my understanding each dataframe is computed and returned to the driver, which then sends the data back to the executors to perform the joins.
This gives me memory errors. I can increase the driver memory, but I am looking for a better way of doing it. For example, if all dataframes were computed only at the end (when the joined dataframe is saved), I guess everything would be managed by the cluster.
Is there a way to do a kind of lazy action? Or should I join the dataframes in another way?
Thx
First of all, the code you've shown contains only one action-like operation - DataFrameWriter.save. All other components are lazy.
But laziness doesn't really help you here. The biggest problem (assuming no ugly data skew or misconfigured broadcasting) is that the individual aggregations require separate shuffles and an expensive subsequent merge.
A naive solution would be to leverage that:
the dataframes are grouped on the same columns
to shuffle first:
val groupColumns: Seq[Column] = ???
val sourceDataPartitioned = sourceData.groupBy(groupColumns: _*)
and use the result to compute individual aggregates
val df1 = sourceDataPartitioned
...
val df2 = sourceDataPartitioned
...
However, this approach is rather brittle and is unlikely to scale in the presence of large / skewed groups.
Therefore it would be much better to rewrite your code so that it performs only a single aggregation. Luckily for you, standard SQL behavior is all you need.
Let's start by structuring your code into three-element tuples with:
_1 being a predicate (the condition you use with filter).
_2 being a list of Columns for which you want to compute aggregates.
_3 being an aggregate function.
An example structure can look like this:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{count, min}
val ops: Seq[(Column, Seq[Column], Column => Column)] = Seq(
  ($"col1" === "a" and $"col2" === "b", Seq($"col3", $"col4"), count),
  ($"col2" === "b" and $"col3" === "c", Seq($"col4", $"col5"), min)
)
Now you compose aggregate expressions using the agg_function(when(predicate, column)) pattern:
import org.apache.spark.sql.functions.when
val exprs: Seq[Column] = ops.flatMap {
  case (p, cols, f) => cols.map {
    case c => f(when(p, c))
  }
}
and use them on sourceData:
sourceData.groupBy(groupColumns: _*).agg(exprs.head, exprs.tail: _*)
Add aliases when necessary.
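For reference, a rough PySpark sketch of the same agg_function(when(predicate, column)) pattern, with the same hypothetical column names and assuming a sourceData DataFrame and a group_columns list:
from pyspark.sql import functions as F
ops = [
    ((F.col("col1") == "a") & (F.col("col2") == "b"), ["col3", "col4"], F.count),
    ((F.col("col2") == "b") & (F.col("col3") == "c"), ["col4", "col5"], F.min),
]
# when() without otherwise() yields null for non-matching rows, so count/min only see matches
exprs = [f(F.when(p, F.col(c))) for p, cols, f in ops for c in cols]
sourceData.groupBy(*group_columns).agg(*exprs)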

Why is my nested Lua table printing out of order?

Beginner Lua question - I'm just learning Lua, and I wrote some code: a nested table to create something like a table with rows and columns.
However, when I iterate through the table using pairs(), it doesn't output in the same order I put it in. I put it in as Serial, Service Days, Connected, and it's coming out as Service Days, Serial, Connected. I am at a loss to figure out why. I intentionally created the three rows in different ways, since I'm just learning and trying to get comfortable with the different ways of dealing with Lua tables...
The code:
myTable = {}
myTable["headerRow"] = {
  Serial = "Serial",
  ServDays = "Service Days",
  Connected = "Connected" }
myTable[1] = {
  Serial = "B9FX",
  ServDays = 7,
  Connected = true }
myTable[2] = {}
myTable[2]["Serial"] = "2SHA"
myTable[2]["ServDays"] = 3
myTable[2]["Connected"] = true
for k, v in pairs(myTable) do
  for k2, v2 in pairs(v) do
    io.write(tostring(v2), ",")
  end
  io.write("\n") -- End the row
end
The result:
c:\lua>lua53 primer.lua
7,B9FX,true,
3,2SHA,true,
Service Days,Serial,Connected,
pairs uses the next function. Hence the order of traversal in a generic for loop using the pairs iterator is unspecified.
From the Lua reference manual:
https://www.lua.org/manual/5.3/manual.html#pdf-next
The order in which the indices are enumerated is not specified, even
for numeric indices. (To traverse a table in numerical order, use a
numerical for.)
The behavior of next is undefined if, during the traversal, you assign
any value to a non-existent field in the table. You may however modify
existing fields. In particular, you may clear existing fields.
If you do something like this:
myTable[2] = {}
myTable[2]["Serial"] = "2SHA"
myTable[2]["ServDays"] = 3
myTable[2]["Connected"] = true
Lua will not remember the order in which you assigned values to the table keys. It only maps keys to values.

streaming and bulk update to elasticsearch

As part of a data analysis, I collect records I need to store in Elasticsearch. As of now I gather the records in an intermediate list, which I then write via a bulk update.
While this works, it has its limits when the number of records is so large that they do not fit into memory. I am therefore wondering if it is possible to use a "streaming" mechanism, which would allow me to
persistently open a connection to elasticsearch
continuously update in a bulk-like way
I understand that I could simply open a connection to Elasticsearch and issue classic single-document updates as the data become available, but this is about 10 times slower, so I would like to keep the bulk mechanism:
import elasticsearch
import elasticsearch.helpers
import elasticsearch.client
import random
import string
import time
index = "testindexyop1"
es = elasticsearch.Elasticsearch(hosts='elk.example.com')
if elasticsearch.client.IndicesClient(es).exists(index=index):
    ret = elasticsearch.client.IndicesClient(es).delete(index=index)
data = list()
for i in range(1, 10000):
    data.append({'hello': ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))})
start = time.time()
# this version takes 25 seconds
# for _ in data:
#     res = es.bulk(index=index, doc_type="document", body=_)
# and this one - 2 seconds
elasticsearch.helpers.bulk(client=es, index=index, actions=data, doc_type="document", raise_on_error=True)
print(time.time()-start)
You can always simply split data into n approximately equally sized sets such that each of them fits in memory and then do n bulk updates. This seems to be the easiest solution to me.
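A sketch of that idea, reusing es, index and data from the question and a hypothetical chunk size of 1000:
def chunks(items, size):
    # yield successive slices of at most `size` records
    for start in range(0, len(items), size):
        yield items[start:start + size]

for chunk in chunks(data, 1000):
    elasticsearch.helpers.bulk(client=es, index=index, actions=chunk,
                               doc_type="document", raise_on_error=True)
If the records come from a generator rather than a list, elasticsearch.helpers.streaming_bulk accepts any iterable of actions and sends them in chunks as it consumes them, so the full set never has to be materialized in memory.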
