I want to benchmark a compiler that I developed against Apache Jena. The thing is that I have different queries to run, and for each one I need to know the memory usage and the execution time.
For the time I am doing the following:
val t1 = System.nanoTime //time one starts here
val i: interpreterVRC = new interpreterVRC(q, d)
val x = (System.nanoTime - t1) / 1e9d //time one finishes here
//////*************** JENA CONFIGURATION
val in: InputStream = FileManager.get.open("peel.rdf")
//val in: InputStream = FileManager.get.open("dbpedia-johnpeel-agents.rdf")
val Jena = new JenaRdf(in)
/** *************** Queries Execution Jena **************/
val t2 = System.nanoTime // time two starts here
Jena.query_exec(q)
println((System.nanoTime - t2) / 1e9d, x) //prints time two and time one
However, I need to perform this operation about 10 times and get the average of those execution times. So my question is: is there a way to implement these benchmarks, and how can I know the memory usage of each execution?
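For reference, here is a minimal sketch of one way to do this on the JVM. The helper names (`SimpleBench`, `benchmark`, `usedHeap`) are made up, and heap deltas read via `Runtime` are only rough estimates because of garbage collection; a dedicated harness such as JMH would give more reliable numbers.

```scala
// A minimal sketch (not the asker's actual harness): repeat a block N times,
// average the elapsed wall-clock time, and estimate heap usage via java.lang.Runtime.
object SimpleBench {
  def usedHeap(): Long = {
    val rt = Runtime.getRuntime
    rt.totalMemory - rt.freeMemory
  }

  def benchmark[A](label: String, runs: Int)(block: => A): Unit = {
    System.gc() // best-effort only; heap numbers are approximate
    val heapBefore = usedHeap()
    val times = (1 to runs).map { _ =>
      val t0 = System.nanoTime
      block
      (System.nanoTime - t0) / 1e9d
    }
    val heapAfter = usedHeap()
    println(f"$label: avg ${times.sum / runs}%.3f s over $runs runs, " +
      f"heap delta ~${(heapAfter - heapBefore) / 1e6}%.1f MB")
  }
}

// Hypothetical usage with the code from the question:
// SimpleBench.benchmark("interpreterVRC", 10) { new interpreterVRC(q, d) }
// SimpleBench.benchmark("Jena", 10) { Jena.query_exec(q) }
```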
var history: RDD[(String, List[String])] = sc.emptyRDD()
val dstream1 = ...
val dstream2 = ...
val historyDStream = dstream1.transform(rdd => rdd.union(history))
val joined = historyDStream.join(dstream2)
... do stuff with joined as above, obtain dstreamFiltered ...
dstreamFiltered.foreachRDD{rdd =>
val formatted = rdd.map{case (k,(v1,v2)) => (k,v1) }
history.unpersist(false) // unpersist the 'old' history RDD
history = formatted // assign the new history
history.persist(StorageLevel.MEMORY_AND_DISK) // cache the computation
history.count() //action to materialize this transformation
}
This logic works fine for preserving all the previous RDDs that didn't join successfully, saving them for future batches so that whenever a record arrives with the corresponding join key we can perform the join. But I didn't get how this history is built up.
We can understand how the history builds up in this case by observing how the RDD lineage evolves over time.
We need two pieces of previous knowledge:
RDDs are immutable structures
Operations on RDD can be expressed in functional terms by the function to be applied and references to the input RDDs.
Let's see a quick example, using the classical wordCount:
val txt = sparkContext.textFile(someFile)
val words = txt.flatMap(_.split(" "))
In simplified terms, txt is a HadoopRDD(someFile) and words is a MapPartitionsRDD(txt, flatMapFunction). We speak of the lineage of words as the DAG (Directed Acyclic Graph) formed by this chaining of operations: HadoopRDD <-- MapPartitionsRDD.
We can apply the same principles to our streaming operation:
At iteration 0, we have
var history: RDD[(String, List[String])] = sc.emptyRDD()
// -> history: EmptyRDD
...
val historyDStream = dstream1.transform(rdd => rdd.union(history))
// -> underlying RDD: rdd.union(EmptyRDD)
join, filter
// underlying RDD: rdd.union(EmptyRDD).join(otherRDD).filter(pred)
map
// -> underlying RDD: rdd.union(EmptyRDD).join(otherRDD).filter(pred).map(f)
history.unpersist(false)
// EmptyRDD.unpersist (does nothing, it was never persisted)
history = formatted
// history = rdd.union(EmptyRDD).join(otherRDD).filter(pred).map(f)
history.persist(...)
// history marked for persistence (at the next action)
history.count()
// rdd.union(EmptyRDD).join(otherRDD).filter(pred).map(f).count()
// cache result of: rdd.union(EmptyRDD).join(otherRDD).filter(pred).map(f)
At iteration 1, we have (using rdd0, rdd1 as iteration indices):
val historyDStream = dstream1.transform(rdd => rdd.union(history))
// -> underlying RDD: rdd1.union(rdd0.union(EmptyRDD).join(otherRDD0).filter(pred).map(f))
join, filter
// underlying RDD: rdd1.union(rdd0.union(EmptyRDD).join(otherRDD0).filter(pred).map(f)).join(otherRDD1).filter(pred)
map
// -> underlying RDD: rdd1.union(rdd0.union(EmptyRDD).join(otherRDD0).filter(pred).map(f)).join(otherRDD1).filter(pred).map(f)
history.unpersist(false)
// history0.unpersist (marks the previous result for removal, we used it already for our computation above)
history = formatted
// history1 = rdd1.union(rdd0.union(EmptyRDD).join(otherRDD0).filter(pred).map(f)).join(otherRDD1).filter(pred).map(f)
history.persist(...)
// new history marked for persistence (at the next action)
history.count()
// rdd1.union(rdd0.union(EmptyRDD).join(otherRDD0).filter(pred).map(f)).join(otherRDD1).filter(pred).map(f).count()
// cache result so that we don't need to compute it next time
This process repeats with each iteration.
As we can see, the graph representing the RDD computation keeps on growing. cache reduces the cost of redoing all the calculations each time. checkpoint is needed every so often to write a concrete computed value of this growing graph so that we can use it as a baseline instead of having to evaluate the whole chain.
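For completeness, here is a hedged sketch of what such periodic checkpointing could look like, reusing the question's foreachRDD loop; the checkpoint directory and the every-10-batches interval are made up for illustration.

```scala
// Illustrative only: truncate the growing lineage every N batches by
// checkpointing the new history RDD after it has been persisted.
import org.apache.spark.storage.StorageLevel

sparkContext.setCheckpointDir("hdfs:///tmp/history-checkpoints") // hypothetical path
var batch = 0L

dstreamFiltered.foreachRDD { rdd =>
  val formatted = rdd.map { case (k, (v1, v2)) => (k, v1) }
  history.unpersist(false)
  history = formatted
  history.persist(StorageLevel.MEMORY_AND_DISK)
  if (batch % 10 == 0) history.checkpoint() // write a concrete baseline periodically
  history.count() // action that materializes (and, when marked, checkpoints) the RDD
  batch += 1
}
```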
An interesting way to see this process in action is by adding a line within the foreachRDD to inspect the current lineage:
...
history.unpersist(false) // unpersist the 'old' history RDD
history = formatted // assign the new history
println(history.toDebugString)
...
I have a large dataframe that has been cached like this:
val largeDf = someLargeDataframe.cache
Now I need to union it with a tiny one and cache it again:
val tinyDf = someTinyDataframe.cache
val newDataframe = largeDf.union(tinyDf).cache
tinyDf.unpersist()
largeDf.unpersist()
It is very inefficient since it needs to re-cache all the data again. Is there any efficient way to add a small amount of data to a large cached dataframe?
After reading Teodors's explanation, I know that I can't unpersist the old dataframe before I do some action on my new dataframe. But what if I need to do something like this?
def myProcess(df1: DataFrame, df2: DataFrame): DataFrame = {
val df1_trans = df1.map(....).cache
val df2_trans = df2.map(....).cache
doSomeAction(df1_trans, df2_trans)
val finalDf = df1_trans.union(df2_trans).map(....).cache
// df1_trans.unpersist()
// df2_trans.unpersist()
finalDf
}
I want df1_trans and df2_trans to be cached to improve performance inside the function, since they will be used more than once. But the dataframe I need to return in the end is also constructed from df1_trans and df2_trans. If I can't unpersist them before leaving the function, I will never find another place to do it; however, if I do unpersist them, my finalDf will not benefit from the cache.
What can I do in this situation? Thanks!
val largeDf = someLargeDataframe.cache
val tinyDf = someTinyDataframe.cache
val newDataframe = largeDf.union(tinyDf).cache
If you call unpersist() now, before any action that goes through your entire largeDf dataframe, you won't benefit from caching the two dataframes.
tinyDf.unpersist()
largeDf.unpersist()
I wouldn't worry about caching the unioned dataframe: as long as the two other dataframes are already cached, you likely won't see a performance hit.
Benchmark the following:
========= now? ============
val largeDf = someLargeDataframe.cache
val tinyDf = someTinyDataframe.cache
val newDataframe = largeDf.union(tinyDf).cache
tinyDf.unpersist()
largeDf.unpersist()
// force evaluation
newDataframe.count()
========= alternative 1 ============
val largeDf = someLargeDataframe.cache
val tinyDf = someTinyDataframe.cache
val newDataframe = largeDf.union(tinyDf).cache
// force evaluation
newDataframe.count()
tinyDf.unpersist()
largeDf.unpersist()
======== alternative 2 ==============
val largeDf = someLargeDataframe.cache
val tinyDf = someTinyDataframe.cache
val newDataframe = largeDf.union(tinyDf)
newDataframe.count()
======== alternative 3 ==============
val largeDf = someLargeDataframe
val tinyDf = someTinyDataframe
val newDataframe = largeDf.union(tinyDf).cache
// force evaluation
newDataframe.count()
Is there any efficient way to add a little amount of data to a large cached dataframe?
I don't think any other operation could beat union. I did think that the broadcast function might help here, but after having a look at the execution plan, I don't think so anymore.
That led me to write the answer. If you want to know if your caching has any effect on a query, explain it:
explain(): Unit - Prints the physical plan to the console for debugging purposes.
With the following example, broadcast does not affect union (which is not surprising, given that it's a hint for joins and other physical operators simply ignore it).
scala> left.union(broadcast(right)).explain
== Physical Plan ==
Union
:- *Range (0, 4, step=1, splits=8)
+- *Range (0, 3, step=1, splits=8)
It's also worthwhile to use Details for Query under the SQL tab.
I'm just trying to answer the part of the question that is still unanswered here. There is a way to unpersist df1_trans and df2_trans from outside your myProcess() function. You can create a temp view for each DataFrame using df1_trans.createOrReplaceTempView(viewName) and df2_trans.createOrReplaceTempView(viewName); see the Dataset API for reference. Then, after you have run some action on those two DataFrames and you are ready to unpersist them, you can do so like this: sqlContext.table(viewName).unpersist, where viewName is the name you used to create the temp view.
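As a rough sketch of that suggestion (Spark 2.x style, with made-up view names and identity stand-ins for the question's elided transformations):

```scala
// Sketch only: unpersist the intermediate DataFrames from outside myProcess
// by looking them up through their temp-view names.
import org.apache.spark.sql.{DataFrame, SparkSession}

def myProcess(spark: SparkSession, df1: DataFrame, df2: DataFrame): DataFrame = {
  val df1_trans = df1.cache()                 // stand-in for df1.map(....).cache
  val df2_trans = df2.cache()                 // stand-in for df2.map(....).cache
  df1_trans.createOrReplaceTempView("df1_trans_view")
  df2_trans.createOrReplaceTempView("df2_trans_view")

  df1_trans.count()                           // stand-in for doSomeAction(df1_trans, df2_trans)
  df2_trans.count()

  df1_trans.union(df2_trans).cache()          // finalDf, still backed by the cached inputs
}

// After an action has materialized finalDf, the caller can release the
// intermediates by name, without holding references to them:
// spark.table("df1_trans_view").unpersist()
// spark.table("df2_trans_view").unpersist()
```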
Hope this helps!
I've been trying to execute 10,000 queries over a relatively large dataset of 11M records. More specifically, I am trying to transform an RDD using filter based on some predicate, and then compute how many records conform to that filter by applying the count action.
I am running Apache Spark on my local machine having 16GB of memory and an 8-core CPU. I have set the --driver-memory to 10G in order to cache the RDD in memory.
However, because I have to redo this operation 10,000 times, it takes unusually long to finish. I am also attaching my code, hoping it will make things clearer.
Loading the queries and the dataframe I am going to query against.
//load normalized dimensions
val df = spark.read.parquet("/normalized.parquet").cache()
//load query ranges
val rdd = spark.sparkContext.textFile("part-00000")
Parallelizing the execution of queries
Here, my queries are collected in a list and executed in parallel using par. I then extract the parameters that each query needs in order to filter the Dataset. The isWithin function tests whether the Vector contained in my dataset is within the bounds given by the query.
After filtering my dataset, I execute count to get the number of records in the filtered dataset, and then create a string reporting how many there were.
val results = queries.par.map(q => {
val volume = q(q.length-1)
val dimensions = q.slice(0, q.length-1)
val count = df.filter(row => {
val v = row.getAs[DenseVector]("scaledOpen")
isWithin(volume, v, dimensions)
}).count
q.mkString(",")+","+count
})
Now, I do realize that this task is generally really hard, given the large dataset I have and the fact that I'm trying to run such a thing on a single machine. I know this could be much faster on something running on top of Spark or by utilizing an index. However, I am wondering if there is a way to make it faster as it is.
Just because you parallelize access to a local collection doesn't mean that anything is executed in parallel. The number of jobs that can be executed concurrently is limited by the cluster resources, not by the driver code.
At the same time, Spark is designed for high-latency batch jobs. If the number of jobs goes into the tens of thousands, you just cannot expect things to be fast.
One thing you can try is to push filters down into a single job. Convert DataFrame to RDD:
import org.apache.spark.mllib.linalg.{Vector => MLlibVector}
import org.apache.spark.rdd.RDD
val vectors: RDD[org.apache.spark.mllib.linalg.DenseVector] = df.rdd.map(
_.getAs[MLlibVector]("scaledOpen").toDense
)
map vectors to {0, 1} indicators:
import breeze.linalg.DenseVector
// It is not clear what the type of queries is
type Q = ???
val queries: Seq[Q] = ???
val inds: RDD[breeze.linalg.DenseVector[Long]] = vectors.map(v => {
// Create {0, 1} indicator vector
DenseVector(queries.map(q => {
// Define as before
val volume = ???
val dimensions = ???
// Output 0 or 1 for each q
if (isWithin(volume, v, dimensions)) 1L else 0L
}): _*)
})
aggregate partial results:
val counts: breeze.linalg.DenseVector[Long] = inds
.aggregate(DenseVector.zeros[Long](queries.size))(_ += _, _ += _)
and prepare final output:
queries.zip(counts.toArray).map {
case (q, c) => s"""${q.mkString(",")},$c"""
}
INTRODUCTION
I have to write a distributed application which counts the maximum number of unique values for 3 records. I have no experience in this area and don't know the frameworks at all. My input could look as follows:
u1: u2,u3,u4,u5,u6
u2: u1,u4,u6,u7,u8
u3: u1,u4,u5,u9
u4: u1,u2,u3,u6
...
Then the beginning of the results should be:
(u1,u2,u3), u4,u5,u6,u7,u8,u9 => count=6
(u1,u2,u4), u3,u5,u6,u7,u8 => count=5
(u1,u3,u4), u2,u5,u6,u9 => count=4
(u2,u3,u4), u1,u5,u6,u7,u8,u9 => count=6
...
So my approach is to first merge each pair of records, and then merge each merged pair with each single record.
QUESTION
Can I do such an operation, i.e. work on (merge) more than one input row at the same time, in frameworks like Hadoop/Spark? Or maybe my approach is incorrect and I should do this a different way?
Any advice will be appreciated.
Can I do such an operation, i.e. work on (merge) more than one input row at the same time, in frameworks like Hadoop/Spark?
Yes, you can.
Or maybe my approach is incorrect and I should do this a different way?
It depends on the size of your data. If your data is small, it's faster and easier to do it locally. If your data is huge, at least hundreds of GBs, the common strategy is to save the data to HDFS (a distributed file system) and do the analysis using MapReduce/Spark.
An example Spark application written in Scala:
import java.util

import org.apache.spark.{SparkConf, SparkContext}

object MyCounter {
val sparkConf = new SparkConf().setAppName("My Counter")
val sc = new SparkContext(sparkConf)
def main(args: Array[String]) {
val inputFile = sc.textFile("hdfs:///inputfile.txt")
val keys = inputFile.map(line => line.substring(0, 2)) // get "u1" from "u1: u2,u3,u4,u5,u6"
val triplets = keys.cartesian(keys).cartesian(keys)
.map(z => (z._1._1, z._1._2, z._2))
.filter(z => !z._1.equals(z._2) && !z._1.equals(z._3) && !z._2.equals(z._3)) // get "(u1,u2,u3)" triplets
// If you have small numbers of (u1,u2,u3) triplets, it's better prepare them locally.
val res = triplets.cartesian(inputFile).filter(z => {
z._2.startsWith(z._1._1) || z._2.startsWith(z._1._2) || z._2.startsWith(z._1._3)
}) // (u1,u2,u3) only matches line starts with u1,u2,u3, for example "u1: u2,u3,u4,u5,u6"
.reduceByKey((a, b) => a + b) // merge three lines
.map(z => {
val line = z._2
val values = line.split(",")
//count unique values using set
val set = new util.HashSet[String]()
for (value <- values) {
set.add(value)
}
"key=" + z._1 + ", count=" + set.size() // the result from one mapper is a string
}).collect()
for (line <- res) {
println(line)
}
}
}
The code is not tested and is not efficient. It could be optimized (for example, by removing unnecessary map-reduce steps).
You can rewrite the same version using Python or Java.
You can also implement the same logic using Hadoop MapReduce.
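As a rough illustration of that kind of optimization (dropping the extra cartesian-with-input and reduceByKey passes), and assuming, as the answer already suggests for small key sets, that the parsed adjacency lists fit on the driver, here is a hedged sketch; all names in it are illustrative.

```scala
// Illustrative sketch only: parse the adjacency lists once, collect them to the
// driver, and compute the union size per triplet directly.
import org.apache.spark.{SparkConf, SparkContext}

object MyCounterOptimized {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("My Counter (optimized)"))

    // "u1: u2,u3,u4,u5,u6" -> ("u1", Set("u2","u3","u4","u5","u6"))
    val adjacency: Map[String, Set[String]] = sc.textFile("hdfs:///inputfile.txt")
      .map { line =>
        val Array(key, rest) = line.split(":\\s*", 2)
        key -> rest.split(",").map(_.trim).toSet
      }
      .collectAsMap().toMap

    val keys = adjacency.keys.toSeq.sorted

    // Enumerate unordered triplets and count the distinct neighbours,
    // excluding the triplet members themselves.
    val results = for {
      Seq(a, b, c) <- keys.combinations(3).toSeq
    } yield {
      val union = (adjacency(a) ++ adjacency(b) ++ adjacency(c)) -- Set(a, b, c)
      s"($a,$b,$c), ${union.toSeq.sorted.mkString(",")} => count=${union.size}"
    }

    results.foreach(println)
    sc.stop()
  }
}
```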
Let's say I have a script that iterates over a list of 400 objects.
Each object has anywhere from 1 to 10 properties.
Each property is a reasonable size string or a somewhat large integer.
Is there a significant difference in the performance of saving these objects into ScriptDB vs. saving them into a Spreadsheet (without doing it in one bulk operation)?
Executive Summary
Yes, there is a significant difference! Huge! And I have to admit that this experiment didn't turn out the way I expected.
With this amount of data, writing to a spreadsheet was always much faster than using ScriptDB.
These experiments support the assertions regarding bulk operations in the Google Apps Script Best Practices. Saving data in a spreadsheet using a single setValues() call was 75% faster than line-by-line, and two orders of magnitude faster than cell-by-cell.
On the other hand, recommendations to use Spreadsheet.flush() should be considered carefully, due to the performance impact. In these experiments, a single write of a 4000-cell spreadsheet took less than 50ms, and adding a call to flush() increased that to 610ms - still less than a second, but an order of magnitude tax seems ludicrous. Calling flush() for each of the 400 rows in the sample spreadsheet made the operation take almost 12 seconds, when it took just 164 ms without it. If you've been experiencing Exceeded maximum execution time errors, you may benefit from both optimizing your code AND removing calls to flush().
Experimental Results
All timings were derived following the technique described in How to measure time taken by a function to execute. Times are expressed in milliseconds.
Here are the results from a single pass of five different approaches, two using ScriptDB, three writing to Spreadsheets, all with the same source data. (400 objects with 5 String & 5 Number attributes)
Experiment 1
Elapsed time for ScriptDB/Object test: 53529
Elapsed time for ScriptDB/Batch test: 37700
Elapsed time for Spreadsheet/Object test: 145
Elapsed time for Spreadsheet/Attribute test: 4045
Elapsed time for Spreadsheet/Bulk test: 32
Effect of Spreadsheet.flush()
Experiment 2
In this experiment, the only difference from Experiment 1 was that we called Spreadsheet.flush() after every setValue/s call. The cost of doing so is dramatic, (around 700%) but does not change the recommendation to use a spreadsheet over ScriptDB for speed reasons, because writing to spreadsheets is still faster.
Elapsed time for ScriptDB/Object test: 55282
Elapsed time for ScriptDB/Batch test: 37370
Elapsed time for Spreadsheet/Object test: 11888
Elapsed time for Spreadsheet/Attribute test: 117388
Elapsed time for Spreadsheet/Bulk test: 610
Note: This experiment was often killed with Exceeded maximum execution time.
Caveat Emptor
You're reading this on the interwebs, so it must be true! But take it with a grain of salt.
These are results from very small sample sizes, and may not be completely reproducible.
These results are measuring something that changes constantly - while they were observed on Feb 28 2013, the system they measured could be completely different when you read this.
The efficiency of these operations is affected by many factors that are not controlled in these experiments; caching of instructions & intermediate results and server load, for example.
Maybe, just maybe, someone at Google will read this, and improve the efficiency of ScriptDB!
The Code
If you want to perform (or better yet, improve) these experiments, create a blank spreadsheet, and copy this into a new script within it. This is also available as a gist.
/**
* Run experiments to measure speed of various approaches to saving data in
* Google App Script (GAS).
*/
function testSpeed() {
var numObj = 400;
var numAttr = 10;
var doFlush = false; // Set true to activate calls to SpreadsheetApp.flush()
var arr = buildArray(numObj,numAttr);
var start, stop; // time catchers
var db = ScriptDb.getMyDb();
var sheet;
// Save into ScriptDB, Object at a time
deleteAll(); // Clear ScriptDB
start = new Date().getTime();
for (var i=1; i<=numObj; i++) {
db.save({type: "myObj", data:arr[i]});
}
stop = new Date().getTime();
Logger.log("Elapsed time for ScriptDB/Object test: " + (stop - start));
// Save into ScriptDB, Batch
var items = [];
// Restructure data - this is done outside the timed loop, assuming that
// the data would not be in an array if we were using this approach.
for (var obj=1; obj<=numObj; obj++) {
var thisObj = new Object();
for (var attr=0; attr < numAttr; attr++) {
thisObj[arr[0][attr]] = arr[obj][attr];
}
items.push(thisObj);
}
deleteAll(); // Clear ScriptDB
start = new Date().getTime();
db.saveBatch(items, false);
stop = new Date().getTime();
Logger.log("Elapsed time for ScriptDB/Batch test: " + (stop - start));
// Save into Spreadsheet, Object at a time
sheet = SpreadsheetApp.getActive().getActiveSheet().clear();
start = new Date().getTime();
for (var row=0; row<=numObj; row++) {
var values = [];
values.push(arr[row]);
sheet.getRange(row+1, 1, 1, numAttr).setValues(values);
if (doFlush) SpreadsheetApp.flush();
}
stop = new Date().getTime();
Logger.log("Elapsed time for Spreadsheet/Object test: " + (stop - start));
// Save into Spreadsheet, Attribute at a time
sheet = SpreadsheetApp.getActive().getActiveSheet().clear();
start = new Date().getTime();
for (var row=0; row<=numObj; row++) {
for (var cell=0; cell<numAttr; cell++) {
sheet.getRange(row+1, cell+1, 1, 1).setValue(arr[row][cell]);
if (doFlush) SpreadsheetApp.flush();
}
}
stop = new Date().getTime();
Logger.log("Elapsed time for Spreadsheet/Attribute test: " + (stop - start));
// Save into Spreadsheet, Bulk
sheet = SpreadsheetApp.getActive().getActiveSheet().clear();
start = new Date().getTime();
sheet.getRange(1, 1, numObj+1, numAttr).setValues(arr);
if (doFlush) SpreadsheetApp.flush();
stop = new Date().getTime();
Logger.log("Elapsed time for Spreadsheet/Bulk test: " + (stop - start));
}
/**
* Create a two-dimensional array populated with 'numObj' rows of 'numAttr' cells.
*/
function buildArray(numObj,numAttr) {
numObj = numObj || 400;
numAttr = numAttr || 10;
var array = [];
for (var obj = 0; obj <= numObj; obj++) {
array[obj] = [];
for (var attr = 0; attr < numAttr; attr++) {
var value;
if (obj == 0) {
// Define attribute names / column headers
value = "Attr"+attr;
}
else {
value = ((attr % 2) == 0) ? "This is a reasonable sized string for testing purposes, not too long, not too short." : Number.MAX_VALUE;
}
array[obj].push(value);
}
}
return array
}
function deleteAll() {
var db = ScriptDb.getMyDb();
while (true) {
var result = db.query({}); // get everything, up to limit
if (result.getSize() == 0) {
break;
}
while (result.hasNext()) {
var item = result.next()
db.remove(item);
}
}
}
ScriptDB has been deprecated. Do not use.